| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
42,536,774 | https://en.wikipedia.org/wiki/Multilevel%20fast%20multipole%20method | The multilevel fast multipole method (MLFMM) is used together with the method of moments (MoM), a numerical technique for solving linear partial differential equations that have been formulated as integral equations, to analyze electrically large objects much faster and with essentially no loss of accuracy. The method is an alternative formulation of the technology behind the MoM and is applicable to much larger structures, for example radar cross-section (RCS) analysis, antenna integration on large platforms, reflector antenna design, and finite antenna arrays, making full-wave, current-based solutions of such structures practical.
Method
The MLFMM is based on the Method of Moments (MoM), but reduces the memory complexity from roughly N² to N log N and the solving complexity from roughly N³ to N_iter · N log N, where N is the number of unknowns and N_iter the number of iterations in the solver. The method subdivides the boundary element mesh into clusters; if two clusters are in each other's far field, all calculations that would have to be made for every pair of nodes can be reduced to interactions between the cluster midpoints with almost no loss of accuracy. For clusters not in each other's far field, the traditional BEM has to be applied. The MLFMM then introduces several levels of clustering (clusters made out of smaller clusters) to further enhance computation speed.
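To make the scaling difference concrete, the short Python sketch below compares the nominal N² and N log N operation counts for a range of problem sizes; it is purely illustrative and assumes the complexity figures quoted above, not any particular MLFMM implementation.

```python
import math

def mom_memory(n: int) -> float:
    """Nominal memory cost of a dense MoM impedance matrix (~N^2 entries)."""
    return n ** 2

def mlfmm_memory(n: int) -> float:
    """Nominal MLFMM memory cost (~N log N), up to constant factors."""
    return n * math.log2(n)

for n in (10_000, 100_000, 1_000_000):
    ratio = mom_memory(n) / mlfmm_memory(n)
    print(f"N = {n:>9,d}: dense/MLFMM memory ratio ~ {ratio:,.0f}x")
```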
References
Numerical differential equations
Numerical analysis
Computational electromagnetics | Multilevel fast multipole method | [
"Physics",
"Mathematics"
] | 271 | [
"Computational electromagnetics",
"Computational mathematics",
"Computational physics",
"Mathematical relations",
"Numerical analysis",
"Approximations"
] |
55,585,008 | https://en.wikipedia.org/wiki/Non-relativistic%20quantum%20electrodynamics | Non-relativistic quantum electrodynamics (NRQED) is a low-energy approximation of quantum electrodynamics which describes the interaction of (non-relativistic, i.e. moving at speeds much smaller than the speed of light) spin one-half particles (e.g., electrons) with the quantized electromagnetic field.
NRQED is an effective field theory suitable for calculations in atomic and molecular physics, for example for computing QED corrections to bound energy levels of atoms and molecules.
References
Quantum electrodynamics | Non-relativistic quantum electrodynamics | [
"Physics"
] | 117 | [
"Quantum mechanics",
"Quantum physics stubs"
] |
52,697,333 | https://en.wikipedia.org/wiki/Magic%20triangle%20%28mathematics%29 | A magic triangle is a magic arrangement of consecutive integers, starting from 1, in a triangular figure.
Perimeter magic triangle
A magic triangle or perimeter magic triangle is an arrangement of the integers from 1 to 3(n − 1) on the sides of a triangle, with the same number n of integers on each side, called the order of the triangle, so that the sum of the integers on each side is a constant, the magic sum of the triangle. Unlike magic squares, there are several different magic sums for magic triangles of the same order. Any magic triangle has a complementary triangle obtained by replacing each integer x in the triangle with 3n − 2 − x.
Examples
Order-3 magic triangles are the simplest (except for trivial magic triangles of order 1).
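As an illustration, the following Python sketch brute-forces all order-3 perimeter magic triangles, i.e. placements of the integers 1–6 with three numbers per side and equal side sums; the corner and edge labelling used here is an assumption made for the sketch, not notation from the article.

```python
from itertools import permutations

# Corners a, b, c and edge midpoints ab, bc, ca of an order-3 triangle.
solutions = set()
for a, ab, b, bc, c, ca in permutations(range(1, 7)):
    s1 = a + ab + b
    s2 = b + bc + c
    s3 = c + ca + a
    if s1 == s2 == s3:
        # Store the three sides in canonical form to collapse rotations/reflections.
        sides = tuple(sorted((tuple(sorted((a, ab, b))),
                              tuple(sorted((b, bc, c))),
                              tuple(sorted((c, ca, a))))))
        solutions.add((s1, sides))

for magic_sum, sides in sorted(solutions):
    print(magic_sum, sides)
```

Running this lists the essentially distinct order-3 solutions together with their magic sums.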
Other magic triangles
Other magic triangles use a triangular number or a square number of vertices to form the magic figure. Matthew Wright and his students at St. Olaf College developed magic triangles with square numbers of entries; in their magic triangles, the sum of the k-th row and the (n−k+1)-th row is the same for all k. One modification uses triangular numbers instead of square numbers. Another form is the magic triangle with triangular numbers and a different summation rule, in which the sum of the k-th row and the (n−k)-th row is the same for all k.
Yet another form uses square numbers of entries with a different summation rule, in which the sum of each 2×2 subtriangle is the same for all subtriangles.
Magic triangles have also been discovered such that when their elements are squared, another magic triangle is obtained.
See also
Magic hexagon
Antimagic square
Magic polygon
References
Magic figures | Magic triangle (mathematics) | [
"Mathematics"
] | 334 | [
"Recreational mathematics",
"Magic figures",
"Combinatorics"
] |
52,701,130 | https://en.wikipedia.org/wiki/Methanogens%20in%20digestive%20tract%20of%20ruminants | Methanogens are a group of microorganisms that produce methane as a byproduct of their metabolism. They play an important role in the digestive system of ruminants. The digestive tract of ruminants contains four major parts: the rumen, reticulum, omasum and abomasum. Food mixed with saliva first passes to the rumen, where it is broken into smaller particles, and then moves to the reticulum, where it is broken down further. Any indigestible particles are sent back to the rumen for rechewing. The majority of the anaerobic microbes assisting cellulose breakdown occupy the rumen and initiate the fermentation process. The animal absorbs fatty acids, vitamins and other nutrients as the partially digested food passes from the rumen to the omasum. This decreases the pH level and initiates the release of enzymes for further breakdown of the food, which later passes to the abomasum, where the remaining nutrients are absorbed before excretion. This process takes about 9–12 hours.
Some of the microbes in the ruminant digestive system are:
Fibrobacter (Bacteroides) succinogenes is a gram-negative, cellulolytic and amylolytic bacterium that produces formates, acetates and succinates.
Ruminococcus albus is a cellulolytic, xylanolytic bacterium producing ethanol, hydrogen, carbon dioxide, formates and acetates.
Ruminococcus flavefaciens is a cellulolytic, xylanolytic bacterium producing formates, acetates, hydrogen and succinates.
Butyrivibrio fibrisolvens is a proteolytic, cellulolytic, xylanolytic microbe producing lactate, butyrate, ethanol, hydrogen, carbon dioxide, formates and acetates.
Streptococcus bovis is an amylolytic, proteolytic microbe and a major fermenter of soluble sugars, producing lactate, acetate and formate.
Ruminobacter (Bacteroides) amylophilus is an amylolytic, proteolytic, propionate-producing organism that forms formates, acetates and succinates.
Prevotella (Bacteroides) ruminicola is an amylolytic, xylanolytic, proteolytic microbe that produces formates, acetates, succinates and propionate.
Succinimonas amylolytica is an amylolytic, dextrinolytic bacterium forming acetates and succinates.
Selenomonas ruminantium is an amylolytic, glycerol-utilizing, lactate-utilizing, proteolytic microbe and a major fermenter of soluble sugars, producing acetates, lactates, hydrogen, carbon dioxide and propionates.
Lachnospira multiparus is a proteolytic, propionate-producing microbe that produces lactate, ethanol, hydrogen, carbon dioxide, formates and acetates.
Succinivibrio dextrinosolvens is a dextrinolytic, propionate-producing bacterium forming formates, acetates, lactates and succinates.
Methanobrevibacter ruminantium is a methanogenic, hydrogen-utilizing archaeon involved in the production of methane.
Methanosarcina barkeri is a methanogenic, hydrogen-utilizing archaeon involved in the production of methane and carbon dioxide.
References
Ruminants
Microorganisms
Digestive system
Methane | Methanogens in digestive tract of ruminants | [
"Chemistry",
"Biology"
] | 771 | [
"Digestive system",
"Methane",
"Organ systems",
"Greenhouse gases",
"Microorganisms"
] |
54,036,461 | https://en.wikipedia.org/wiki/Kouteck%C3%BD%E2%80%93Levich%20equation | The Koutecký–Levich equation models the measured electric current at an electrode from an electrochemical reaction in relation to the kinetic activity and the mass transport of reactants.
The Koutecký–Levich equation can be written as:

1/im = 1/iK + 1/iMT
where
im is the measured current (A).
iK is the kinetic current (A) from the electrochemical reactions.
iMT is the mass transport current (A).
Note the similarity of this equation to the expression for the total conductance of electrical circuits connected in parallel.
The Koutecký–Levich equation is also commonly expressed as:

im = (iK · iMT) / (iK + iMT)
The kinetic current (iK) can be modeled by the Butler-Volmer Equation and is characterized by being potential dependent. On the other hand, the mass transport current (iMT) depends on the particular electrochemical setup and amount of stirring.
Koutecký–Levich plot
When a rotating disk electrode setup is used and the electrode is flat and smooth, iMT can be modeled using the Levich equation. Inserted into the Koutecký–Levich equation, this gives:

1/im = 1/iK + 1/(BL √ω)
where:
BL is the Levich Constant.
ω is the angular rotation rate of the electrode (rad/s)
From an experimental data set where the current is measured at different rotation rates, it is possible to extract the kinetic current from a so-called Koutecký–Levich plot. In a Koutecký–Levich plot the inverse of the measured current is plotted versus the inverse square root of the rotation rate. This linearizes the data set, and the inverse of the kinetic current can be obtained by extrapolating the line to the ordinate. This y-intercept corresponds to taking the rotation rate to infinity, where the reaction is no longer mass-transport limited. Koutecký–Levich analysis is therefore used to determine kinetic parameters of the reaction, such as the rate constant and the symmetry factor.
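A minimal Python sketch of the Koutecký–Levich analysis described above is shown below; the current values and the assumption that iMT = BL √ω (Levich behaviour) are illustrative, and the intercept of the 1/i versus ω^(-1/2) fit gives 1/iK.

```python
import numpy as np

# Illustrative data: rotation rates (rad/s) and measured currents (A).
omega = np.array([100.0, 225.0, 400.0, 625.0, 900.0])
i_measured = np.array([1.00e-3, 1.20e-3, 1.33e-3, 1.43e-3, 1.50e-3])

# Koutecký–Levich plot: 1/i versus 1/sqrt(omega) should be linear.
x = 1.0 / np.sqrt(omega)
y = 1.0 / i_measured

slope, intercept = np.polyfit(x, y, 1)

i_kinetic = 1.0 / intercept      # intercept = 1/iK (infinite rotation rate)
levich_constant = 1.0 / slope    # slope = 1/BL

print(f"kinetic current iK ≈ {i_kinetic:.3e} A")
print(f"Levich constant BL ≈ {levich_constant:.3e} A per (rad/s)^0.5")
```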
References
Electrochemical equations | Koutecký–Levich equation | [
"Chemistry",
"Mathematics"
] | 387 | [
"Mathematical objects",
"Equations",
"Electrochemistry",
"Electrochemistry stubs",
"Physical chemistry stubs",
"Electrochemical equations"
] |
54,041,957 | https://en.wikipedia.org/wiki/Engineering%20biology | Engineering biology is the set of methods for designing, building, and testing engineered biological systems which have been used to manipulate information, construct materials, process chemicals, produce energy, provide food, and help maintain or enhance human health and environment.
History
Rapid advances in the ability to genetically modify biological organisms have advanced a new engineering discipline, commonly referred to as synthetic biology. This approach seeks to harness the power of living systems for a variety of manufacturing applications, such as advanced therapeutics, sustainable fuels, chemical feedstocks, and advanced materials. To date, research in synthetic biology has typically relied on trial-and-error approaches, which are costly, laborious, and inefficient. Engineering biology methods include a combination of traditional biological techniques such as bioinformatics, molecular biology, and wet cell biology, as well as conventional engineering practices such as design and computation.
References
Bibliography
H.R.4521 - America COMPETES Act of 2022. https://www.congress.gov/congressional-record/2022/03/17/senate-section/article/S1237-5
Schuergers, N., Werlang, C., Ajo-Franklin, C., & Boghossian, A. (2017). A Synthetic Biology Approach to Engineering Living Photovoltaics. Energy & Environmental Science. doi:10.1039/C7EE00282C
Teague, B. P., Guye, P., & Weiss, R. (2016). Synthetic Morphogenesis. Cold Spring Harbor Perspectives in Biology, 8(9), a023929. doi:10.1101/cshperspect.a023929
Kelley, N. J. (2015). Engineering Biology for Science & Industry : Accelerating Progress. http://nancyjkelley.com/wp-content/uploads/Meeting-Summary.Final_.6.9.15-Formatted.pdf
H.R.591. - Engineering Biology Research and Development Act of 2015. https://www.congress.gov/bill/114th-congress/house-bill/591
Kelley, N. J. (2014). The promise and challenge of engineering biology in the United States. Industrial Biotechnology, 10(3), 137–139. doi:10.1089/ind.2014.1516
Beal, J., Weiss, R., Densmore, D., Adler, A., Babb, J., Bhatia, S., ... & Loyall, J. (2011, June). TASBE: A tool-chain to accelerate synthetic biological engineering. In Proceedings of the 3rd International Workshop on Bio-Design Automation (pp. 19–21). http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.467.7189&rep=rep1&type=pdf
Schrödinger, E. (1946). What is life?: the physical aspect of the living cell. Cambridge.
Engineering Biology Problems Book. 2016. DOI:10.2139/ssrn.2898429
Biotechnology
Molecular genetics
Systems biology
Bioinformatics
Biocybernetics
Appropriate technology
Artificial objects | Engineering biology | [
"Physics",
"Chemistry",
"Engineering",
"Biology"
] | 695 | [
"Synthetic biology",
"Biological engineering",
"Artificial objects",
"Biotechnology",
"Bioinformatics",
"Molecular genetics",
"Physical objects",
"nan",
"Molecular biology",
"Matter",
"Systems biology"
] |
54,044,650 | https://en.wikipedia.org/wiki/Chandrasekhar%27s%20variational%20principle | In astrophysics, Chandrasekhar's variational principle provides the stability criterion for a static barotropic star, subjected to radial perturbation, named after the Indian American astrophysicist Subrahmanyan Chandrasekhar.
Statement
A barotropic star, whose pressure is a function of density alone, is stable if the quantity
is non-negative for all real functions that conserve the total mass of the star.
where
is the coordinate system fixed to the center of the star
is the radius of the star
is the volume of the star
is the unperturbed density
is the small perturbed density, such that in the perturbed state the total density is the sum of the unperturbed density and this perturbation
is the self-gravitating potential from Newton's law of gravity
is the Gravitational constant
References
Variational principles
Stellar dynamics
Astrophysics
Fluid dynamics
Equations of astronomy | Chandrasekhar's variational principle | [
"Physics",
"Chemistry",
"Astronomy",
"Mathematics",
"Engineering"
] | 166 | [
"Mathematical principles",
"Variational principles",
"Concepts in astronomy",
"Chemical engineering",
"Astrophysics",
"Equations of astronomy",
"Piping",
"Fluid dynamics",
"Astronomical sub-disciplines",
"Stellar dynamics"
] |
41,110,491 | https://en.wikipedia.org/wiki/Time%20domain%20electromagnetics | In engineering, time domain electromagnetics refers to one of two general groups of techniques (in mathematics, often called ansätze) that describe electromagnetic wave motion. In contrast with frequency domain electromagnetics, which is based on the Fourier or Laplace transform, time domain electromagnetics keeps time as an explicit independent variable in the descriptive equations of wave motion.
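As one concrete illustration of a time-domain technique (a standard example, not one named in this article), the finite-difference time-domain (FDTD) method advances the fields explicitly in time step by step; the minimal 1D vacuum sketch below uses normalized units and an illustrative Gaussian source.

```python
import numpy as np

# Minimal 1D FDTD in normalized units (magic time step c*dt = dx), vacuum, additive source.
nx, nt = 200, 500
ez = np.zeros(nx)   # electric field
hy = np.zeros(nx)   # magnetic field

for t in range(nt):
    # Update H from the spatial difference of E.
    hy[:-1] += ez[1:] - ez[:-1]
    # Update E from the spatial difference of H.
    ez[1:] += hy[1:] - hy[:-1]
    # Inject a Gaussian pulse at the center of the grid.
    ez[nx // 2] += np.exp(-0.5 * ((t - 30) / 8.0) ** 2)

print("peak |Ez| after propagation:", np.max(np.abs(ez)))
```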
References
S. M. Rao, E. K. Miller, Time Domain Electromagnetics, Academic Press: San Diego etc., 1999.
External links
The Virtual Institute for Nonlinear Optics (VINO), a research collaboration devoted to the investigation of X-waves and conical waves in general
Nolinear X-waves page at the nlo.phys.uniroma1.it website.
Electrodynamics | Time domain electromagnetics | [
"Physics",
"Mathematics"
] | 153 | [
"Electrodynamics",
"Classical mechanics stubs",
"Classical mechanics",
"Dynamical systems"
] |
41,114,338 | https://en.wikipedia.org/wiki/Radiant%20heating%20and%20cooling | Radiant heating and cooling is a category of HVAC technologies that exchange heat by both convection and radiation with the environments they are designed to heat or cool. There are many subcategories of radiant heating and cooling, including: "radiant ceiling panels", "embedded surface systems", "thermally active building systems", and infrared heaters. According to some definitions, a technology is only included in this category if radiation comprises more than 50% of its heat exchange with the environment; therefore technologies such as radiators and chilled beams (which may also involve radiation heat transfer) are usually not considered radiant heating or cooling. Within this category, it is practical to distinguish between high temperature radiant heating (devices with emitting source temperature >≈300 °F), and radiant heating or cooling with more moderate source temperatures. This article mainly addresses radiant heating and cooling with moderate source temperatures, used to heat or cool indoor environments. Moderate temperature radiant heating and cooling is usually composed of relatively large surfaces that are internally heated or cooled using hydronic or electrical sources. For high temperature indoor or outdoor radiant heating, see: Infrared heater. For snow melt applications see: Snowmelt system.
History
Radiant heating and cooling originated as separate systems but now share a similar form. Radiant heating has a long history in Asia and Europe. The earliest systems, from as early as 5000 BC, were found in northern China and Korea. Archaeological findings show kang and dikang, heated beds and floors in ancient Chinese homes. Kang originated in the 11th century BC as “to dry” later evolving into a heated bed, while dikang expanded this concept to a heated floor. In Korea, the ondol system, meaning "warm stone," used flues beneath the floor to channel smoke from a kitchen stove, heating flat stones that radiated heat into the room above. Over time, the ondol system adapted to use coal and later transitioned to water-based systems in the 20th century, remaining a common heating system in Korean buildings.
In Europe, the Roman hypocaust system, developed around the 3rd century BC, was an early radiant heating method using a furnace connected to underfloor and wall flues to circulate hot air in public baths and villas. This technology spread across the Roman Empire but declined after its fall, replaced by simpler fireplaces in the Middle Ages. In this period, systems like the Kachelofen from Austria and Germany used thermal masses for efficient heat storage and distribution. During the 18th century, radiant heating gained renewed use in Europe, driven by advancements in thermal storage techniques, such as heated flues for efficient heat distribution and a better understanding of how materials retain and transfer heat. In the early 19th century, developments in water-based systems with embedded hot water pipes paved the way for modern radiant heating, providing indoor comfort through heat transfer.
Radiant cooling also has ancient roots. In the 8th century, Mesopotamian builders used snow-packed walls to cool indoor space. The concept resurfaced in the 20th century with hydronic cooling systems in Europe, embedding cool water pipes in structures to absorb and dissipate heat, meeting cooling loads. Radiant cooling became more widely adopted in the 1990s, with the implementation of floor cooling. Today, modern radiant systems typically use water as a thermal medium for efficient heat transfer and are widely adopted in residential, commercial, and industrial buildings. While valued for its potential to enhance energy efficiency, quiet operation, and thermal comfort, their performance varies with design and application, leading to ongoing discussions.
Radiant Heating
Radiant heating is a technology for heating indoor and outdoor areas. Heating by radiant energy is observed every day, the warmth of the sunshine being the most commonly observed example. Radiant heating as a technology is more narrowly defined. It is the method of intentionally using the principles of radiant heat to transfer radiant energy from an emitting heat source to an object. Designs with radiant heating are seen as replacements for conventional convection heating as well as a way of supplying confined outdoor heating.
Indoor
The heat energy is emitted from a warm element, such as a floor, wall or overhead panel, and warms people and other objects in rooms rather than directly heating the air. The internal air temperature for radiant heated buildings may be lower than for a conventionally heated building while achieving the same level of body comfort, when adjusted so that the perceived temperature is actually the same. One of the key advantages of radiant heating systems is a much decreased circulation of air inside the room and the correspondingly reduced spreading of airborne particles.
Radiant heating systems can be divided into:
Underfloor heating systems—electric or hydronic
Wall heating systems
Radiant ceiling panels
Underfloor and wall heating systems often are called low-temperature systems. Since their heating surface is much larger than other systems, a much lower temperature is required to achieve the same level of heat transfer. This provides an improved room climate with healthier humidity levels. The lower temperatures and large surface area of underfloor heating systems make them ideal heat emitters for air source heat pumps, evenly and effectively radiating the heat energy from the system into rooms within a home.
The maximum temperature of the heating surface varies depending on the room type. Radiant overhead panels are mostly used in production and warehousing facilities or sports centers; they hang a few meters above the floor and their surface temperatures are much higher.
Outdoors
In the case of heating outdoor areas, the surrounding air is constantly moving. Relying on convection heating is in most cases impractical, because once the outside air is heated it is carried away by air movement; even in a no-wind condition, buoyancy effects will carry away the hot air. Outdoor radiant heaters allow specific spaces within an outdoor area to be targeted, warming only the people and objects in their path. Radiant heating systems may be gas-fired or use electric infrared heating elements. An example of overhead radiant heaters is the patio heater often used for outdoor serving areas; its top metal disc reflects the radiant heat down onto a small area.
Radiant cooling
Radiant cooling is the use of cooled surfaces to remove sensible heat primarily by thermal radiation and only secondarily by other methods like convection. Radiant systems that use water to cool the radiant surfaces are examples of hydronic systems. Unlike “all-air” air conditioning systems that circulate cooled air only, hydronic radiant systems circulate cooled water in pipes through specially mounted panels on a building's floor or ceiling to provide comfortable temperatures. A separate system provides air for ventilation, dehumidification, and potentially additional cooling. Radiant systems are less common than all-air systems for cooling, but can have advantages compared to all-air systems in some applications.
Since the majority of the cooling process results from removing sensible heat through radiant exchange with people and objects and not air, occupant thermal comfort can be achieved with warmer interior air temperatures than with air based cooling systems. Radiant cooling systems potentially offer reductions in cooling energy consumption. The latent loads (humidity) from occupants, infiltration and processes generally need to be managed by an independent system. Radiant cooling may also be integrated with other energy-efficient strategies such as night time flushing, indirect evaporative cooling, or ground source heat pumps as it requires a small difference in temperature between desired indoor air temperature and the cooled surface.
Passive daytime radiative cooling uses a material that radiates strongly in the infrared atmospheric window, a frequency range where the atmosphere is unusually transparent, so that the energy goes straight out to space. This can cool the emitting object to below ambient air temperature, even in full sun.
Advantages
According to research conducted by the Lawrence Berkeley National Laboratory, radiant cooling systems offer lower energy consumption than conventional cooling systems. Radiant cooling energy savings depend on the climate, but on average across the US savings are in the range of 30% compared to conventional systems. Cool, humid regions might see savings of 17%, while hot, arid regions see savings of 42%. Hot, dry climates offer the greatest advantage for radiant cooling as they have the largest proportion of cooling by way of removing sensible heat. While this research is informative, more research needs to be done to account for the limitations of simulation tools and integrated system approaches. Much of the energy savings is also attributed to the lower amount of energy required to pump water as opposed to distributing air with fans. By coupling the system with building mass, radiant cooling can shift some cooling to off-peak night time hours. Radiant cooling appears to have lower first costs and lifecycle costs compared to conventional systems. Lower first costs are largely attributed to integration with structure and design elements, while lower life cycle costs result from decreased maintenance. However, a recent study comparing VAV reheat with active chilled beams and DOAS challenged the claims of lower first cost because of the added cost of piping.
Limiting factors
Because of the potential for condensate formation on the cold radiant surface (resulting in water damage, mold and the like), radiant cooling systems have not been widely applied. Condensation caused by humidity is a limiting factor for the cooling capacity of a radiant cooling system: the surface temperature should not be at or below the dew point temperature in the space. Some standards suggest limiting the relative humidity in a space to 60% or 70%. At an air temperature of 26 °C, for example, these limits correspond to a dew point of roughly 17–20 °C. There is, however, evidence that suggests decreasing the surface temperature to below the dew point temperature for a short period of time may not cause condensation. Also, the use of an additional system, such as a dehumidifier or DOAS, can limit humidity and allow for increased cooling capacity.
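The condensation limit can be checked with a standard dew-point estimate; the sketch below uses the Magnus approximation (the coefficients are the commonly used ones, an assumption here rather than values from this article) to find the lowest safe surface temperature of a chilled panel.

```python
import math

def dew_point_c(temp_c: float, rel_humidity: float) -> float:
    """Dew point (°C) from air temperature and relative humidity (0-1), Magnus formula."""
    a, b = 17.62, 243.12  # Magnus coefficients for water over a liquid surface
    gamma = math.log(rel_humidity) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

air_temp = 26.0  # °C, illustrative room condition
for rh in (0.5, 0.6, 0.7):
    td = dew_point_c(air_temp, rh)
    print(f"RH {rh:.0%}: chilled surface should stay above ~{td:.1f} °C to avoid condensation")
```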
Classification of Radiant Systems
Radiant systems, encompassing both heating and cooling, transfer heat or coolness directly through surfaces, such as floors, ceilings, or walls, instead of relying on forced-air systems. These systems are broadly categorized into three types: thermally activated building systems (TABS), embedded surface systems, and radiant ceiling panels.
Chilled slabs
Radiant cooling from a slab can be delivered to a space from the floor or ceiling. Since radiant heating systems tend to be in the floor, the obvious choice would be to use the same circulation system for cooled water. While this makes sense in some cases, delivering cooling from the ceiling has several advantages.
First, it is easier to leave ceilings exposed to a room than floors, increasing the effectiveness of thermal mass. Floors offer the downside of coverings and furnishings that decrease the effectiveness of the system.
Second, greater convective heat exchange occurs through a chilled ceiling as warm air rises, leading to more air coming in contact with the cooled surface.
Cooling delivered through the floor makes the most sense when there is a high amount of solar gain from sun penetration, because the cool floor can more easily remove those loads than the ceiling.
Chilled slabs, compared to panels, offer more significant thermal mass and therefore can take better advantage of outside diurnal temperatures swings. Chilled slabs cost less per unit of surface area, and are more integrated with structure.
Partial radiant systems
Chilled beams are hybrid systems that combine radiant and convective heat transfer. While not purely radiant, they are suited for spaces with varying thermal loads and integrate well with ceilings for flexible placement and ventilation.
Thermal comfort
The operative temperature is an indicator of thermal comfort which takes into account the effects of both convection and radiation. Operative temperature is defined as a uniform temperature of a radiantly black enclosure in which an occupant would exchange the same amount of heat by radiation plus convection as in the actual nonuniform environment.
With radiant systems, thermal comfort is achieved at warmer interior temperatures than with all-air systems in the cooling scenario, and at lower interior temperatures than with all-air systems in the heating scenario.
Thus, radiant systems can help to achieve energy savings in building operation while maintaining the desired comfort level.
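A simple way to see why warmer air can still feel comfortable under a cooled radiant surface is to compute the operative temperature; the sketch below uses the common approximation of a heat-transfer-coefficient-weighted mean of air and mean radiant temperature, with typical coefficient values that are assumptions here rather than figures from this article.

```python
def operative_temperature(t_air: float, t_mrt: float,
                          h_conv: float = 3.1, h_rad: float = 4.7) -> float:
    """Operative temperature (°C) as the coefficient-weighted mean of air
    temperature and mean radiant temperature (coefficients in W/m²K)."""
    return (h_conv * t_air + h_rad * t_mrt) / (h_conv + h_rad)

# All-air cooling: near-neutral surfaces, cooler air.
print(operative_temperature(t_air=24.0, t_mrt=26.0))
# Radiant cooling: a chilled ceiling lowers the mean radiant temperature,
# so a warmer air temperature yields a similar operative temperature.
print(operative_temperature(t_air=26.0, t_mrt=23.5))
```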
Thermal comfort in radiant vs. all-air buildings
A large study that used the Center for the Built Environment's Indoor Environmental Quality (IEQ) occupant survey to compare occupant satisfaction in radiant and all-air conditioned buildings found that both systems create equivalent indoor environmental conditions, including acoustic satisfaction, with a tendency towards improved temperature satisfaction in radiant buildings.
Radiant temperature asymmetry
The radiant temperature asymmetry is defined as the difference between the plane radiant temperature of the two opposite sides of a small plane element. As regards occupants within a building, the thermal radiation field around the body may be non-uniform due to hot and cold surfaces and direct sunlight, causing local discomfort. The norm ISO 7730 and the ASHRAE 55 standard give the predicted percentage of dissatisfied occupants (PPD) as a function of the radiant temperature asymmetry and specify the acceptable limits. In general, people are more sensitive to asymmetric radiation caused by a warm ceiling than to that caused by hot and cold vertical surfaces. The detailed calculation method for the percentage dissatisfied due to radiant temperature asymmetry is described in ISO 7730.
Design considerations
While specific design requirements will depend on the type of radiant system, a few issues are common to most radiant systems.
For cooling applications, radiant systems can lead to condensation issues. The local climate needs to be evaluated and taken into account in the design. Air dehumidification may be necessary in humid climates.
Many types of radiant systems incorporate massive building elements. The thermal mass involved will have a consequence on the thermal response of the system. The operation schedule of a space and the control strategy of the radiant system play a key role in the proper functioning of the system.
Many types of radiant systems incorporate hard surfaces which influence indoor acoustics. Additional acoustic solutions may need to be considered.
A design strategy to reduce the acoustical impacts of radiant systems is the use of free-hanging acoustical clouds. Cooling experiments on free-hanging acoustical clouds for an office room showed that 47% cloud coverage of the ceiling area caused an 11% reduction in cooling capacity. Good acoustic quality can therefore be achieved with only a minor reduction of cooling capacity. Combining acoustical clouds with ceiling fans can offset the modest reduction in cooling capacity of a radiant cooled ceiling caused by the presence of the clouds, and results in an increase in cooling capacity.
Control Strategies and Considerations
Heating, Ventilation, and Air Conditioning (HVAC) systems require a control system to supply heating or cooling to a space. The control strategies applied depend on the type of HVAC system used, and these strategies ultimately determine the system’s energy consumption. Radiant systems differ from other HVAC systems in terms of heat transfer mechanisms and the potential risk of condensation, requiring tailored control strategies to address these unique characteristics.
High Thermal Mass Considerations
Radiant systems transfer heat by heating or cooling structural elements, such as concrete slabs or ceilings, rather than directly delivering hot or cold air. These elements primarily release heat through radiation. The response time—the time it takes for the system to reach the setpoint temperature—depends on the material's thermal mass: low thermal mass materials, such as metal panels, respond quickly, while high thermal mass materials, such as concrete slabs, adjust more slowly. When integrated with high thermal mass elements, radiant systems face challenges due to delayed temperature adjustments. This delay can lead to over-adjustments, resulting in increased energy consumption and reduced thermal comfort. To address this problem, model predictive control (MPC) is often employed to predict future thermal demands and adjust the heat supply proactively. For instance, MPC leverages the thermal mass of radiant systems by storing heat during off-peak times, before it is needed. This allows operation to start at night, when electricity costs and urban electricity grid loads are lower. Additionally, cooler nighttime air improves the efficiency of cooling equipment, such as air-source heat pumps, further optimizing energy use. By employing these strategies, radiant systems effectively overcome thermal mass challenges while reducing daytime electricity demand, enhancing grid stability, and lowering operational costs.
Condensation Risks and Mitigation Strategies
Radiant cooling systems can experience condensation when the surface temperature drops below the dew point of the surrounding air. This may cause occupant discomfort, promote mold growth, and damage radiant surfaces. The risk is particularly high in humid climates, where warm, moist air enters through open windows and contacts cold radiant cooling surfaces. To prevent this, radiant cooling systems must be paired with effective ventilation strategies to control indoor humidity levels.
Hydronic radiant systems
Radiant cooling systems are usually hydronic, cooling with circulating water running in pipes in thermal contact with the surface. Typically the circulating water only needs to be 2–4 °C below the desired indoor air temperature. Once heat has been absorbed by the actively cooled surface, it is removed by the water flowing through the hydronic circuit, which replaces the warmed water with cooler water.
Depending on the position of the pipes in the building construction, hydronic radiant systems can be sorted into 4 main categories:
Embedded Surface Systems: pipes embedded within the surface layer (not within the structure)
Thermally Active Building Systems (TABS): the pipes thermally coupled and embedded in the building structure (slabs, walls)
Capillary Surface Systems: pipes embedded in a layer at the inner ceiling/wall surface
Radiant Panels: metal pipes integrated into panels (not within the structure); heat carrier close to the surface
Types (ISO 11855)
The norm ISO 11855-2 focuses on embedded water-based surface heating and cooling systems and on TABS. Depending on construction details, this norm distinguishes 7 different types of these systems (Types A to G):
Type A with pipes embedded in the screed or concrete (“wet” system)
Type B with pipes embedded outside the screed (in the thermal insulation layer, “dry” system)
Type C with pipes embedded in the leveling layer, above which the second screed layer is placed
Type D includes plane section systems (extruded plastic / group of capillary grids)
Type E with pipes embedded in a massive concrete layer
Type F with capillary pipes embedded in a layer at the inner ceiling or as a separate layer in gypsum
Type G with pipes embedded in a wooden floor construction
Energy sources
Radiant systems are associated with low-exergy systems. Low-exergy refers to the possibility of utilizing ‘low quality energy’ (i.e. dispersed energy that has little ability to do useful work). Both heating and cooling can in principle be obtained at temperature levels close to the ambient environment. The low temperature difference requires that the heat transmission take place over relatively large surfaces, as for example in ceiling or underfloor heating systems.
Radiant systems using low temperature heating and high temperature cooling are typical examples of low-exergy systems.
Energy sources such as geothermal (direct cooling / geothermal heat pump heating) and solar hot water are compatible with radiant systems. These sources can lead to important savings in terms of primary energy use for buildings.
Commercial buildings using radiant cooling
Some well-known buildings using radiant cooling include Bangkok's Suvarnabhumi Airport, the Infosys Software Development Building 1 in Hyderabad, IIT Hyderabad, and the San Francisco Exploratorium. Radiant cooling is also used in many zero net energy buildings.
Physics
Heat radiation is the energy in the form of electromagnetic waves emitted by a solid, liquid, or gas as a result of its temperature.
In buildings, the radiant heat flow between two internal surfaces (or a surface and a person) is influenced by the emissivity of the heat emitting surface and by the view factor between this surface and the receptive surface (object or person) in the room. Thermal (longwave) radiation travels at the speed of light, in straight lines. It can be reflected. People, equipment, and surfaces in buildings will warm up if they absorb thermal radiation, but the radiation does not noticeably heat up the air it is traveling through. This means heat will flow from objects, occupants, equipment, and lights in a space to a cooled surface as long as their temperatures are warmer than that of the cooled surface and they are within the direct or indirect line of sight of the cooled surface. Some heat is also removed by convection because the air temperature will be lowered when air comes in contact with the cooled surface.
The heat transfer by radiation is proportional to the fourth power of the absolute surface temperature.
The emissivity of a material (usually written ε or e) is the relative ability of its surface to emit energy by radiation. A black body has an emissivity of 1 and a perfect reflector has an emissivity of 0.
In radiative heat transfer, a view factor quantifies the relative importance of the radiation that leaves an object (person or surface) and strikes another one, considering the other surrounding objects. In enclosures, radiation leaving a surface is conserved, therefore, the sum of all view factors associated with a given object is equal to 1.
In the case of a room, the view factor between a radiant surface and a person depends on their relative positions. As a person is often changing position, and as a room might be occupied by many people at the same time, diagrams for an omnidirectional person can be used.
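To make the fourth-power dependence concrete, the sketch below estimates the net longwave exchange between a warm surface (for example a person or a warm ceiling) and a chilled panel using the Stefan–Boltzmann law with an effective emissivity and a view factor; the particular numbers are illustrative assumptions, not values from this article.

```python
SIGMA = 5.670e-8  # Stefan–Boltzmann constant, W/(m²·K⁴)

def net_radiant_flux(t_warm_c: float, t_cool_c: float,
                     emissivity: float = 0.9, view_factor: float = 0.5) -> float:
    """Net radiant heat flux (W/m²) from the warm surface toward the cool surface."""
    t1 = t_warm_c + 273.15
    t2 = t_cool_c + 273.15
    return emissivity * view_factor * SIGMA * (t1 ** 4 - t2 ** 4)

# Illustrative case: a 31 °C skin/clothing surface facing an 18 °C chilled ceiling.
print(f"{net_radiant_flux(31.0, 18.0):.1f} W/m²")
```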
Thermal response time
Response time (τ95), also known as the time constant, is used to analyze the dynamic thermal performance of radiant systems. The response time for a radiant system is defined as the time it takes for the surface temperature of the radiant system to reach 95% of the difference between its final and initial values when a step change in control of the system is applied as input. It is mainly influenced by concrete thickness, pipe spacing and, to a lesser degree, concrete type. It is not affected by pipe diameter, room operative temperature, supply water temperature, or water flow regime. By using response time, radiant systems can be classified into fast response (τ95 < 10 min, like RCP), medium response (1 h < τ95 < 9 h, like Types A, B, D, G) and slow response (9 h < τ95 < 19 h, like Types E and F). Additionally, floor and ceiling radiant systems have different response times due to different heat transfer coefficients with the room thermal environment and the embedded position of the pipes.
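For a system whose surface temperature follows an approximately first-order (exponential) step response, τ95 can be computed directly; the following sketch is a simple illustration under that assumption and is not a model of any specific slab construction.

```python
import math

def tau95_from_time_constant(tau: float) -> float:
    """Time to reach 95% of a first-order step change: T(t) = T_f + (T_0 - T_f)·exp(-t/τ)."""
    return -tau * math.log(1.0 - 0.95)  # ≈ 3·τ

# Illustrative time constants (hours) for a light metal panel vs. a thick concrete slab.
for label, tau_hours in [("radiant ceiling panel", 0.05), ("concrete slab (TABS)", 4.0)]:
    print(f"{label}: τ95 ≈ {tau95_from_time_constant(tau_hours):.2f} h")
```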
Other HVAC systems that exchange heat by radiation
Fireplaces and woodstoves
See also
Glossary of HVAC
References
Further reading
ASHRAE Handbook. HVAC Systems and Equipment 2012. Chapter 13. Hydronic Heating and Cooling.
Kessling, W., Holst, S., Schuler, M. Innovative Design Concept for the New Bangkok International Airport, NBIA.
Olesen, B.W. Radiant Heating and Cooling by Water-based systems. Technical University of Denmark, International Centre for Indoor Environment and Energy.
External links
Radiant cooling research at the Center for the Built Environment
Center for the Built Environment's Occupant Indoor Environmental Quality (IEQ) Survey
US Dept of Energy Guide to Radiant Heating
Infrared Heater Safety Council
Radiant Panel Association
Map of buildings using hydronic radiant heating and cooling systems
Environmental design | Radiant heating and cooling | [
"Engineering"
] | 4,683 | [
"Environmental design",
"Design"
] |
41,115,741 | https://en.wikipedia.org/wiki/Simple%20space | In algebraic topology, a branch of mathematics, a simple space is a connected topological space that has the homotopy type of a CW complex and whose fundamental group is abelian and acts trivially on the homotopy and homology of the universal covering space, though not all authors include the assumption on the homotopy type.
Examples
Topological groups
For example, any topological group is a simple space (provided it satisfies the condition on the homotopy type).
Eilenberg–MacLane spaces
Most Eilenberg–MacLane spaces K(G, n) are simple, since the only nontrivial homotopy group is in degree n. This means the only non-simple Eilenberg–MacLane spaces are the spaces K(G, 1) for G nonabelian.
Universal covers
Every connected topological space has an associated (universal) simple space given by its universal cover; indeed, the universal cover is simply connected, hence simple, and it is its own universal cover.
References
Dennis Sullivan, Geometric Topology
Algebraic topology | Simple space | [
"Mathematics"
] | 182 | [
"Topology stubs",
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
41,116,218 | https://en.wikipedia.org/wiki/Microbial%20biogeography | Microbial biogeography is a subset of biogeography, a field that concerns the distribution of organisms across space and time. Although biogeography traditionally focused on plants and larger animals, recent studies have broadened this field to include distribution patterns of microorganisms. This extension of biogeography to smaller scales—known as "microbial biogeography"—is enabled by ongoing advances in genetic technologies.
The aim of microbial biogeography is to reveal where microorganisms live, at what abundance, and why. Microbial biogeography can therefore provide insight into the underlying mechanisms that generate and hinder biodiversity. Microbial biogeography also enables predictions of where certain organisms can survive and how they respond to changing environments, making it applicable to several other fields such as climate change research.
History
Schewiakoff (1893) theorized about the cosmopolitan habitat of free-living protozoans. In 1934, Lourens Baas Becking, based on his own research in California's salt lakes, as well as work by others on salt lakes worldwide, concluded that "everything is everywhere, but the environment selects". Baas Becking attributed the first half of this hypothesis to his colleague Martinus Beijerinck (1913).
Baas Becking's hypothesis of cosmopolitan microbial distribution would later be challenged by other works.
Microbial vs macro-organism biogeography
The biogeography of macro-organisms (i.e., plants and animals that can be seen with the naked eye) has been studied since the eighteenth century. For macro-organisms, biogeographical patterns (i.e., which organism assemblages appear in specific places and times) appear to arise from both past and current environments. For example, polar bears live in the Arctic but not the Antarctic, while the reverse is true for penguins; although both polar bears and penguins have adapted to cold climates over many generations (the result of past environments), the distance and warmer climates between the north and south poles prevent these species from spreading to the opposite hemisphere (the result of current environments). This demonstrates the biogeographical pattern known as "isolation by geographic distance", by which the limited ability of a species to physically disperse across space (rather than any selective genetic reasons) restricts the geographical range over which it can be found.
The biogeography of microorganisms (i.e., organisms that cannot be seen with the naked eye, such as fungi and bacteria) is an emerging field enabled by ongoing advancements in genetic technologies, in particular cheaper DNA sequencing with higher throughput that now allows analysis of global datasets on microbial biology at the molecular level. When scientists began studying microbial biogeography, they anticipated a lack of biogeographic patterns due to the high dispersibility and large population sizes of microbes, which were expected to ultimately render geographical distance irrelevant. Indeed, in microbial ecology the oft-repeated saying by Lourens Baas Becking that "everything is everywhere, but the environment selects" has come to mean that as long as the environment is ecologically appropriate, geological barriers are irrelevant. However, recent studies show clear evidence for biogeographical patterns in microbial life, which challenge this common interpretation: the existence of microbial biogeographic patterns disputes the idea that "everything is everywhere" while also supporting the idea that environmental selection includes geography as well as historical events that can leave lasting signatures on microbial communities.
Microbial biogeographic patterns are often similar to those of macro-organisms. Microbes generally follow well-known patterns such as the distance decay relationship, the abundance-range relationship, and Rapoport's rule. This is surprising given the many disparities between microorganisms and macro-organisms, in particular their size (micrometers vs. meters), time between generations (minutes vs. years), and dispersibility (global vs. local). However, important differences between the biogeographical patterns of microorganism and macro-organism do exist, and likely result from differences in their underlying biogeographic processes (e.g., drift, dispersal, selection, and mutation). For example, dispersal is an important biogeographical process for both microbes and larger organisms, but small microbes can disperse across much greater ranges and at much greater speeds by traveling through the atmosphere (for larger animals dispersal is much more constrained due to their size). As a result, many microbial species can be found in both northern and southern hemispheres, while larger animals are typically found only at one pole rather than both. Furthermore, microorganisms, such as bacteria, are affected by conditions at very small scales that may differ from the scales that are typically considered for macro-organisms. For example, soil bacterial diversity is shaped by the carbon input and connectivity in microscale aqueous habitats.
Distinct patterns
Reversed and non-monotonous latitudinal diversity gradients
Larger organisms tend to exhibit latitudinal gradients in species diversity, with greater biodiversity in the tropics and decreasing diversity toward temperate and polar regions. In contrast, studies on indoor fungal communities and global topsoil microbiomes found microbial biodiversity to be significantly higher in temperate zones than in the tropics. Interestingly, different buildings exhibited the same indoor fungal composition in any given location, where similarity increased with proximity. Thus, despite human efforts to control indoor climates, outside environments appear to be the strongest determinant of indoor fungal composition. On the other hand, the strong biogeographical pattern of soil bacteria is typically attributed to changes in environmental factors such as soil pH. However, soil pH may be a biogeographical proxy that is affected by a soil's climatic water balance, which mediates carbon inputs and the connectivity of bacterial aqueous habitats.
Bipolar latitude distributions
Certain microbial populations exist in opposite hemispheres and at complementary latitudes. These 'bipolar' (or 'antitropical') distributions are much rarer in macro-organisms; although macro-organisms exhibit latitude gradients, 'isolation by geographic distance' prevents bipolar distributions (e.g., polar bears are not found at both poles). In contrast, a study on marine surface bacteria showed not only a latitude gradient, but also complementarity distributions with similar populations at both poles, suggesting no "isolation by geographic distance". This is likely due to differences in the underlying biogeographic process, dispersal, as microbes tend to disperse at high rates and far distances by traveling through the atmosphere.
Seasonal variations
Microbial diversity can exhibit striking seasonal patterns at a single geographical location. This is largely due to dormancy, a microbial feature not seen in larger animals that allows microbial community composition to fluctuate in relative abundance of persistent species (rather than actual species present). This is known as the "seed-bank hypothesis" and has implications for our understanding of ecological resilience and thresholds to change.
Applications
Directed panspermia
Panspermia suggests that life can be distributed throughout outer space via comets, asteroids, and meteoroids. Panspermia assumes that life can survive the harsh space environment, which features vacuum conditions, intense radiation, extreme temperatures, and a dearth of available nutrients. Many microorganisms are able to evade such stressors by forming spores or entering a state of low-metabolic dormancy. Studies in microbial biogeography have even shown that the ability of microbes to enter and successfully emerge from dormancy when their respective environmental conditions are favorable contributes to the high levels of microbial biodiversity observed in almost all ecosystems. Thus microbial biogeography can be applied to panspermia, as it predicts that microbes are able to protect themselves from the harsh space environment, emerge when conditions are favorable, and take advantage of their dormancy capability to enhance biodiversity wherever they may land.
Directed panspermia is the deliberate transport of microorganisms to colonize another planet. When the aim is to colonize an Earth-like environment, microbial biogeography can inform decisions on the biological payload of such a mission. In particular, microbes exhibit latitudinal ranges according to Rapoport's rule, which states that organisms living at lower latitudes (near the equator) are found within smaller latitude ranges than those living at higher latitudes (near the poles). Thus the ideal biological payload would include widespread, higher-latitude microorganisms that can tolerate a wider range of climates. This is not necessarily the obvious choice, as these widespread organisms are also rare in microbial communities and tend to be weaker competitors when faced with endemic organisms. Still, they can survive in a range of climates and thus would be ideal for inhabiting otherwise lifeless Earth-like planets with uncertain environmental conditions. Extremophiles, although tough enough to withstand the space environment, may not be ideal for directed panspermia, as any given extremophile species requires a very specific climate to survive. However, if the target were closer to Earth, such as a planet or moon in our Solar System, it may be possible to select a specific extremophile species for the well-defined target environment.
See also
Microbiomes of the built environment
Microbial ecology
References
Biogeography
Microorganisms | Microbial biogeography | [
"Biology"
] | 1,909 | [
"Biogeography",
"Microorganisms"
] |
60,404,248 | https://en.wikipedia.org/wiki/%CE%93-Oryzanol | γ-Oryzanol is a mixture of lipids derived from rice (Oryza sativa). γ-Oryzanol occurs mainly in the fat fraction of rice bran and rice bran oil.
Originally thought to be a single chemical compound, it is now known to be a mixture of ferulic acid esters of phytosterols and triterpenoids, particularly cycloartenyl ferulate, 24-methylenecycloartanyl ferulate, and campesteryl ferulate, which together account for 80% of γ-oryzanol.
Composition
Minor constituents include several other ferulic acid esters of phytosterols and triterpene alcohols.
Uses
γ-Oryzanol has been used in Japan for menopausal symptoms, mild anxiety, stomach upset, and high cholesterol. It is still approved in China for this use. However, there is no meaningful evidence supporting its efficacy for these purposes.
In the United States, it is sold as a sports supplement, but existing research does not support the belief that it has any ergogenic or testosterone-raising effects.
References
Lipids
Carboxylate esters | Γ-Oryzanol | [
"Chemistry"
] | 238 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Lipids"
] |
60,405,154 | https://en.wikipedia.org/wiki/Cell%20culturing%20in%20open%20microfluidics | Open microfluidics can be employed in the multidimensional culturing of cell types for various applications including organ-on-a-chip studies, oxygen-driven reactions, neurodegeneration, cell migration, and other cellular pathways.
Usage and benefits
The use of conventional microfluidic devices for cell studies has already improved cost effectiveness and reduced sample volume requirements; using open microfluidic channels adds the benefit of removing the syringe pumps used to drive flow, since flow is instead governed by the surface tension that drives spontaneous capillary flow (SCF), and it exposes the cells to the surrounding environment. The miniaturization of this process allows for improved sensitivity, high throughput, and ease of manipulation and integration, as well as dimensions that can be more physiologically relevant. The benefits of both open and closed microfluidic platforms have also led to combinations of the two, where the device is open for the introduction and culturing of cells and can be sealed prior to analysis.
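Whether spontaneous capillary flow occurs in an open channel is often estimated from the channel geometry and the contact angle; the sketch below uses a commonly cited geometric criterion for open channels (cos θ greater than the ratio of free to wetted perimeter), which is an assumption here since this article does not give a formula.

```python
import math

def scf_occurs(width_um: float, depth_um: float, contact_angle_deg: float) -> bool:
    """Spontaneous capillary flow criterion for an open rectangular channel:
    flow proceeds when cos(theta) > p_free / p_wetted, where the open top of
    the channel is the free (air) interface and the other three walls are wetted."""
    p_free = width_um                      # open top of the channel
    p_wetted = width_um + 2 * depth_um     # bottom plus two side walls
    return math.cos(math.radians(contact_angle_deg)) > p_free / p_wetted

# Illustrative check for a 100 µm wide, 50 µm deep channel.
for theta in (30, 60, 80):
    print(theta, "deg:", scf_occurs(100, 50, theta))
```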
Design
Cells and proteins can be patterned in microfluidic devices with one of the channel walls exposed, in different geometries and designs depending on the behaviors and interactions to be studied, such as quorum sensing or co-culturing of several types of cells. A majority of cell culturing has been carried out by introducing the cells in a perfused conditioned medium to simulate the desired cell populations in traditional closed-channel microfluidic devices. The challenge in supporting cell growth while simultaneously studying multiple cell types in a single device with an exposed channel is that the interactions between cells in this medium need to be controlled, since the timing and location of the interactions are critical. This issue can be addressed in several ways, including modification of the device design, use of droplet microfluidics, and cell sorting. Not only does this allow for ease of manipulating the environment of the cells, but having an open channel wall allows for a better understanding of biological interactions at this interface. Creating designs of microfluidic platforms with different compartments that are isolated and have different dimensions allows for co-culturing of several types of cells. These devices often incorporate droplet formation to encapsulate cells and act as transport and reaction vehicles in two or more immiscible phases, making it possible to carry out numerous parallel analyses using different conditions. Open microfluidics has also been coupled with fluorescence-activated cell sorting (FACS) to allow cells to be contained in individually sorted compartments in an open microfluidic network for culturing in an exposed environment. The exposure of one of the channel walls introduces the issue of evaporation and therefore cell loss; however, this issue can be minimized by using droplet microfluidics, where the cell-containing droplets are submerged in a fluorinated oil. Although evaporation is a major disadvantage of using an open microfluidic system for cell culturing, the advantages over a closed system include ease of manipulation and access to the cells. For certain applications, such as the study of drug transport and lung function using alveolar epithelium cells, air exposure is essential for developing the lungs.
PDMS
Polydimethylsiloxane (PDMS) is a common material for open microfluidic devices that introduces additional advantages and disadvantages. The adsorption of small biological molecules from cell culturing samples, as well as the release of oligomers into the culture medium, have both been raised as issues of using PDMS for biological studies; however, these can be reduced by adopting pretreatment procedures to create optimal environments. Advantages of using PDMS include the ease of surface modification, low cost, biocompatibility, and optical transparency. In addition, PDMS is an attractive material for generating oxygen gradients for cell culturing in studies that involve monitoring ROS-governed cellular pathways, due to its oxygen permeability. Plastics such as polystyrene can be used to create microfluidic devices by embossing and bonding methods, CNC milling, injection molding, or stereolithography. Devices created with polystyrene by these methods include microfluidic platforms that integrate several microfluidic systems, creating arrays to study several cell cultures simultaneously. Another type of material used for open-microfluidic cell culturing is paper-based microfluidics. Cell culturing on paper-based microfluidic devices is accomplished either by encapsulating cells in a hydrogel or by directly seeding them in stacked cellulose filter papers, with the cell culture medium passively transported to the culture areas. Major advantages of this type of open microfluidics include the low cost, the variety of dimensions of porous papers that are commercially available, and improved cell viability, adhesion, and migration compared with tissue culture plates. In addition, paper is an attractive substrate for 3D cell culture devices due to its ability to incorporate essential characteristics such as oxygen and nutrient gradients, fluid flow that can control cell migration, and the stacking of filter papers with different cells suspended in hydrogel to monitor cellular interactions or complex populations.
References
Microfluidics
Cell culture | Cell culturing in open microfluidics | [
"Materials_science",
"Biology"
] | 1,059 | [
"Model organisms",
"Cell culture",
"Microfluidics",
"Microtechnology"
] |
60,408,484 | https://en.wikipedia.org/wiki/24-Norcholestane | 24-Norcholestane, a steroid derivative, is used as a biomarker to constrain the source age of sediments and petroleum through the ratio between 24-norcholestane and 27-norcholestane (24-norcholestane ratio, NCR), especially when used with other age-diagnostic biomarkers, like oleanane. While the origins of this compound are still unknown, it is thought to be derived from diatoms due to its identification in diatom-rich sediments and environments. In addition, it was found that 24-norcholestane levels increased in correlation with diatom evolution. Another possible source of 24-norcholestane is dinoflagellates, albeit to a much lower extent.
Structure
24-Norcholestane is a tetracyclic compound, with 20R,5α(H),14α(H),17α(H) stereochemistry, derived from steroids or sterols. It consists of three 6-membered rings and one 5-membered ring, with carbon 24 removed from the side chain off of C17.
Background
24-Norcholestane is a 26-carbon (C26) sterane created from the removal of carbon 24 from cholestane. It has been found that 24-norcholestane is relatively high in abundance, up to 10% of sterols, in Thalassiosira aff. antarctica, a diatom. It has also been found in the dinoflagellate Gymnodinium simplex, albeit at much lower levels (around 0.2% of sterols).
Origins
Since 24-norcholestane origins are still unknown, the synthesis of it is also unknown as well. However, some pathways have been proposed. Possible sources of 24-norcholestane include 24-norcholesterol, which is present in many marine invertebrates and some algae in addition to diatoms and dinoflagellates.
Measurement techniques
Samples are collected from rocks or crude oils. Asphaltenes are first extracted before the sample is fractionated by passing through a silica column and eluting with solvents of increasing polarity. Traditional gas chromatography-mass spectrometry (GC/MS) techniques are not used, as C26 steranes are present in samples in much lower quantities, generally a magnitude lower, as compared to more common C27, C28, and C29 steranes. Instead, GC/MS/MS (GC-tandem MS) techniques are used for better analysis of C26 steranes.
Use as a biomarker
A 1993 study found high levels of 24-norcholestane in Middle Miocene marine siliceous sediments from Japan.
In a 1998 study, it was found that 24-norcholestanes were present in higher levels than their 27-norcholestane analogs in Cretaceous or younger oils and sediments. Diatom fossils were first recognized in the Jurassic age, with corresponding samples having NCR>0.2. Cretaceous age samples had NCR>0.4, and Oligocene age and younger samples had NCR>0.6. Thus, having higher NCR ratios is indicative of a younger age. It appears that 24-norcholestane is not present until the emergence of diatoms.
A 2008 study found that 24-norcholestanes also correlated with dinoflagellates in lacustrine sediments in China.
In a 2012 study, 24-norcholestanes were found in oils and Cambrian–Ordovician source rocks from the Tarim Basin in China at much higher levels than in the 1998 study (NCRs were equivalent to Cretaceous source rocks). Since diatoms had not yet appeared at that time, the authors attribute these abnormally high levels to dinoflagellates.
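As an illustration of how such thresholds are applied in practice, the sketch below encodes the NCR cut-offs from the 1998 study, assuming the ratio is normalised as the 24-norcholestane abundance divided by the sum of the 24- and 27-norcholestane abundances; the function names and the input peak areas are hypothetical, not from any published workflow.

```python
def ncr(c26_24nor, c26_27nor):
    """24-norcholestane ratio, normalised here as 24-nor / (24-nor + 27-nor)."""
    return c26_24nor / (c26_24nor + c26_27nor)

def minimum_source_age(ratio):
    """Minimum source age implied by the NCR thresholds of the 1998 study."""
    if ratio > 0.6:
        return "Oligocene or younger"
    if ratio > 0.4:
        return "Cretaceous or younger"
    if ratio > 0.2:
        return "Jurassic or younger"
    return "no age constraint from the NCR alone"

# Hypothetical GC/MS/MS peak areas for the two norcholestane isomers
print(minimum_source_age(ncr(55.0, 45.0)))   # NCR = 0.55 -> "Cretaceous or younger"
```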
See also
Biomarker
Cholestane
Nor-
References
Steroids
Hydrocarbons
Biomarkers
Jurassic first appearances | 24-Norcholestane | [
"Chemistry",
"Biology"
] | 842 | [
"Organic compounds",
"Hydrocarbons",
"Biomarkers"
] |
60,412,593 | https://en.wikipedia.org/wiki/Brown%E2%80%93Rho%20scaling | In quantum chromodynamics (QCD), Brown–Rho (BR) scaling is an approximate scaling law for hadrons in an ultra-hot, ultra-dense medium, such as hadrons in the quark epoch during the first microsecond of the Big Bang or within neutron stars.
According to Gerald E. Brown and Mannque Rho in their 1991 publication in Physical Review Letters:
refers to the pole mass of the ρ meson, whereas refers to the in-medium mass (or running mass in the medium) of the ρ meson according to QCD sum rules. The omega meson, sigma meson, and neutron are denoted by
ω, σ, and N, respectively. The symbol denotes the free-space pion decay constant. (Decay constants have a "running time" and a "pole time" similar to the "running mass" and "pole mass" concepts, according to special relativity.) The symbol is also used to denote the pion decay constant.
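In the notation just described, with asterisks denoting in-medium quantities, the scaling law is commonly quoted in the approximate form:

```latex
\frac{m^{*}_{\sigma}}{m_{\sigma}} \approx \frac{m^{*}_{N}}{m_{N}} \approx
\frac{m^{*}_{\rho}}{m_{\rho}} \approx \frac{m^{*}_{\omega}}{m_{\omega}} \approx
\frac{f^{*}_{\pi}}{f_{\pi}}
```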
The hypothesis of Brown–Rho scaling is supported by experimental evidence on beta decay of 14C to the 14N ground state.
See also
Quantum chromodynamics
QCD matter
Pion decay constant
References
Quantum chromodynamics | Brown–Rho scaling | [
"Physics"
] | 257 | [
"Particle physics stubs",
"Particle physics"
] |
60,415,297 | https://en.wikipedia.org/wiki/Radial%20basis%20function%20interpolation | Radial basis function (RBF) interpolation is an advanced method in approximation theory for constructing high-order accurate interpolants of unstructured data, possibly in high-dimensional spaces. The interpolant takes the form of a weighted sum of radial basis functions. RBF interpolation is a mesh-free method, meaning the nodes (points in the domain) need not lie on a structured grid, and does not require the formation of a mesh. It is often spectrally accurate and stable for large numbers of nodes even in high dimensions.
Many interpolation methods can be used as the theoretical foundation of algorithms for approximating linear operators, and RBF interpolation is no exception. RBF interpolation has been used to approximate differential operators, integral operators, and surface differential operators.
Examples
Let and let be 15 equally spaced points on the interval . We will form where is a radial basis function, and choose such that ( interpolates at the chosen points). In matrix notation this can be written as
Choosing , the Gaussian, with a shape parameter of , we can then solve the matrix equation for the weights and plot the interpolant. Plotting the interpolating function below, we see that it is visually the same everywhere except near the left boundary (an example of Runge's phenomenon), where it is still a very close approximation. More precisely the maximum error is roughly at .
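A minimal, self-contained sketch of the construction described above, using a Gaussian kernel on 15 equally spaced nodes; the target function, the interval [0, 1] and the value of the shape parameter are illustrative placeholders rather than the ones used in the original example.

```python
import numpy as np

def gaussian_rbf(r, eps):
    """Gaussian radial basis function phi(r) = exp(-(eps*r)**2)."""
    return np.exp(-(eps * r) ** 2)

# Illustrative target function and nodes (placeholders, not the original example).
f = lambda x: np.exp(x * np.cos(3 * np.pi * x))
xk = np.linspace(0.0, 1.0, 15)          # 15 equally spaced interpolation nodes
eps = 3.0                                # shape parameter (illustrative value)

# Interpolation matrix A_ij = phi(|x_i - x_j|); solve A w = f(x_k) for the weights.
A = gaussian_rbf(np.abs(xk[:, None] - xk[None, :]), eps)
w = np.linalg.solve(A, f(xk))

def interpolant(x):
    """s(x) = sum_k w_k * phi(|x - x_k|)."""
    return gaussian_rbf(np.abs(np.asarray(x)[:, None] - xk[None, :]), eps) @ w

xs = np.linspace(0.0, 1.0, 500)
print("max error on a fine grid:", np.max(np.abs(interpolant(xs) - f(xs))))
print("condition number of A:", np.linalg.cond(A))
```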
Motivation
The Mairhuber–Curtis theorem says that for any open set in with , and linearly independent functions on , there exists a set of points in the domain such that the interpolation matrix
is singular.
This means that if one wishes to have a general interpolation algorithm, one must choose the basis functions to depend on the interpolation points. In 1971, Rolland Hardy developed a method of interpolating scattered data using interpolants of the form . This is interpolation using a basis of shifted multiquadric functions, now more commonly written as , and is the first instance of radial basis function interpolation. It has been shown that the resulting interpolation matrix will always be non-singular. This does not violate the Mairhuber–Curtis theorem since the basis functions depend on the points of interpolation. Choosing a radial kernel such that the interpolation matrix is non-singular is exactly the definition of a strictly positive definite function. Such functions, including the Gaussian, inverse quadratic, and inverse multiquadric are often used as radial basis functions for this reason.
Shape-parameter tuning
Many radial basis functions have a parameter that controls their relative flatness or peakedness. This parameter is usually represented by the symbol with the function becoming increasingly flat as . For example, Rolland Hardy used the formula for the multiquadric, however nowadays the formula is used instead. These formulas are equivalent up to a scale factor. This factor is inconsequential since the basis vectors have the same span and the interpolation weights will compensate. By convention, the basis function is scaled such that as seen in the plots of the Gaussian functions and the bump functions.
A consequence of this choice is that the interpolation matrix approaches the identity matrix as , leading to stability when solving the matrix system. The resulting interpolant will in general be a poor approximation to the function since it will be near zero everywhere, except near the interpolation points, where it will sharply peak: the so-called "bed-of-nails interpolant" (as seen in the plot to the right).
On the opposite side of the spectrum, the condition number of the interpolation matrix will diverge to infinity as , leading to ill-conditioning of the system. In practice, one chooses a shape parameter so that the interpolation matrix is "on the edge of ill-conditioning" (e.g. with a condition number of roughly for double-precision floating point).
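This trade-off is easy to observe numerically. The small self-contained sketch below monitors the condition number of the Gaussian interpolation matrix as the shape parameter is reduced; the node set and shape-parameter values are illustrative.

```python
import numpy as np

xk = np.linspace(0.0, 1.0, 15)                       # 15 equally spaced nodes
r = np.abs(xk[:, None] - xk[None, :])                # pairwise node distances
for eps in (10.0, 3.0, 1.0, 0.3):
    cond = np.linalg.cond(np.exp(-(eps * r) ** 2))   # Gaussian kernel matrix
    print(f"eps = {eps:4.1f}   cond(A) = {cond:.2e}")
# cond(A) grows rapidly as eps -> 0 (increasingly flat basis functions), while large
# eps keeps A well conditioned at the price of a "bed-of-nails" interpolant.
```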
There are sometimes other factors to consider when choosing a shape-parameter. For example the bump function
has a compact support (it is zero everywhere except when ) leading to a sparse interpolation matrix.
Some radial basis functions such as the polyharmonic splines have no shape-parameter.
See also
Kriging
References
Numerical analysis
Approximation theory
Interpolation | Radial basis function interpolation | [
"Mathematics"
] | 875 | [
"Approximation theory",
"Computational mathematics",
"Mathematical relations",
"Numerical analysis",
"Approximations"
] |
47,121,451 | https://en.wikipedia.org/wiki/Phase%20inversion%20%28chemistry%29 | Phase inversion or phase separation is a chemical phenomenon exploited in the fabrication of artificial membranes. It is performed by removing the solvent from a liquid-polymer solution, leaving a porous, solid membrane.
Process
Phase inversion is a common method to form filtration membranes, which are typically formed using artificial polymers. The method of phase inversion is highly dependent on the type of polymer used and the solvent used to dissolve the polymer.
Phase inversion can be carried out through one of four typical methods:
Reducing the temperature of the solution
Immersing the polymer solution into anti-solvent
Exposing the polymer solution to a vapor of anti-solvent
Evaporating the solvent in atmospheric air or at high temperature
The rate at which phase inversion occurs and the characteristics of the resulting membrane are dependent on several factors, including:
Solubility of solvent in the anti-solvent
Insolubility of the polymer in the anti-solvent
Temperature of the anti-solvent
Characterization
Phase inversion membranes are typically characterized by their mean pore diameter and pore diameter distribution. These can be measured using a number of established analytical techniques such as the analysis of gas adsorption-desorption isotherms, porosimetry, or more niche approaches such as evapoporometry. A scanning electron microscope (SEM) can be used to characterize membranes with larger pore sizes, such as microfiltration and ultrafiltration membranes, while transmission electron microscopy (TEM) can be used for all membrane types, including small-pore membranes such as nanofiltration and reverse osmosis, though such microscopy techniques tend to analyze only a small sample area that may not be representative of the sample as a whole.
In emulsions
In emulsions a phase inversion is when the dispersed phase becomes the dispersion medium and the dispersion medium becomes the dispersed phase, for example when cream becomes butter.
See also
Membrane
List of synthetic polymers
Microfiltration
Ultrafiltration
Nanofiltration
Reverse Osmosis
Hollow fiber membrane
References
Polymer chemistry | Phase inversion (chemistry) | [
"Chemistry",
"Materials_science",
"Engineering"
] | 397 | [
"Materials science",
"Polymer chemistry"
] |
47,124,512 | https://en.wikipedia.org/wiki/Triple%20deck%20theory | Triple deck theory is a theory that describes a three-layered boundary-layer structure when sufficiently large disturbances are present in the boundary layer. This theory is able to successfully explain the phenomenon of boundary layer separation, but it has found applications in many other flow setups as well, including the scaling of the lower-branch instability (T-S) of the Blasius flow, etc. James Lighthill, Lev Landau and others were the first to realize that to explain boundary layer separation, different scales other than the classical boundary-layer scales need to be introduced. These scales were first introduced independently by James Lighthill and E. A. Müller in 1953. The triple-layer structure itself was independently discovered by Keith Stewartson (1969) and V. Y. Neiland (1969) and by A. F. Messiter (1970). Stewartson and Messiter considered the separated flow near the trailing edge of a flat plate, whereas Neiland studied the case of a shock impinging on a boundary layer.
Suppose and are the streamwise and transverse coordinate with respect to the wall and be the Reynolds number, the boundary layer thickness is then . The boundary layer coordinate is . Then the thickness of each deck is
The lower deck is characterized by viscous, rotational disturbances, whereas the middle deck (same thickness as the boundary-layer thickness) is characterized by inviscid, rotational disturbances. The upper deck, which extends into the potential flow region, is characterized by inviscid, irrotational disturbances.
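In terms of the Reynolds number and the small parameter ε = Re^(−1/8) commonly used in this context, the scalings usually quoted for the three decks and for the streamwise interaction region are as follows (a conventional summary; the precise non-dimensionalisation varies between authors):

```latex
\text{lower deck: } y \sim \varepsilon^{5}, \qquad
\text{middle deck: } y \sim \varepsilon^{4}, \qquad
\text{upper deck: } y \sim \varepsilon^{3}, \qquad
\text{interaction zone: } x \sim \varepsilon^{3}, \qquad
\varepsilon = Re^{-1/8}.
```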
The interaction zone identified by Lighthill in the streamwise direction is
The most important aspect of the triple-deck formulation is that pressure is not prescribed, and so it has to be solved as part of the boundary-layer problem. This coupling between velocity and pressure reintroduces ellipticity to the problem, which is in contrast to the parabolic nature of the classical boundary layer of Prandtl.
Flow near the trailing edge of a flat plate
Let the length scales be normalized with the plate length and the velocity scale by the free-stream velocity ; then the only parameter in the problem is the Reynolds number . Let the origin of the coordinate system be located at the trailing edge of the plate. Further let be the non-dimensional velocity components, be the non-dimensional pressure field and be the non-dimensional stream function such that and . For shortness of notation, let us introduce the small parameter . The coordinate for horizontal interaction and for the three decks can then be defined by
As (or ), the solution should approach the asymptotic behaviour of the Blasius solution, which is given by
where is the Blasisus function which satisfies subjected to . As (or ), the solution should approach the asymptotic behaviour of the Goldstein's near wake, which is given by
where and . The Goldstein's inner wake solution is not needed here.
Middle deck
The solution in the middle deck is found to be
where is referred to as the displacement function and is referred to as the pressure function, to be determined from the upper and lower deck problems. Note that the correction to the Blasius stream function is of the order , although the pressure perturbation is only order
Upper deck
In the upper deck, the solution is found to given by
where . Furthermore, the upper deck problem also provides the relation between the displacement and the pressure function as
in which stands for Cauchy principal value. One may notice that the pressure function and the derivative of the displacement function (aka transverse velocity) forms a Hilbert transform pair.
Lower deck
In the lower deck, the solution is given by
where will satisfy boundary-layer-type equations driven by the pressure gradient and the slip velocity of order generated by the middle deck. It is convenient to introduce and , where and must satisfy
These equations are subjected to the conditions
where . The displacement function and therefore must be obtained as part of the solution. The above set of equations may resemble normal boundary-layer equations; however, they have an elliptic character, since the pressure gradient term is now non-local, i.e., the pressure gradient at one location depends on other locations as well. Because of this, these equations are sometimes referred to as the interactive boundary-layer equations. The numerical solution of these equations was obtained by Jobe and Burggraf in 1974.
See also
Flow separation
Boundary layer
References
Fluid dynamics | Triple deck theory | [
"Chemistry",
"Engineering"
] | 891 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
36,890,602 | https://en.wikipedia.org/wiki/Sustained%20load%20cracking | Sustained load cracking, or SLC, is a metallurgical phenomenon that occasionally develops in pressure vessels and structural components under stress for sustained periods of time.
It is particularly noted in aluminium pressure vessels such as diving cylinders.
Sustained load cracking is not a manufacturing defect; it is a phenomenon associated with certain alloys and service conditions:
6351 aluminum alloy
Overstressing due to excessive filling pressure
Abuse and mechanical damage
Occurrence
Crack growth is reported to be very slow by Luxfer, a major manufacturer of aluminium high-pressure cylinders. Cracks are reported to develop over periods in the order of 8 or more years before reaching a stage where the cylinder is likely to leak, which allows timely detection by properly trained inspectors using eddy-current crack-detection equipment.
SLC cracks have been detected in cylinders produced by several manufacturers, including Luxfer, Walter Kidde, and CIG gas cylinders.
Most of the cracking has been observed in the neck and shoulder areas of cylinders, though some cracks in the cylindrical part have also been reported.
History
The phenomenon was first noticed in 1983 in hoop-wound fibre-reinforced aluminium alloy cylinders, which burst in use in the USA. The alloy was 6351 with a relatively high lead content (400 ppm), but even after the lead content was lowered, the problem recurred, and subsequently the problem was detected in monolithic aluminium cylinders. The first incidence of an SLC crack in the cylindrical part of a cylinder was reported in 1999.
Detection
Neck cracks are readily observed during inspection, but body and shoulder cracks are more difficult to detect. Neck thread cracks can be non-destructively tested using eddy-current crack-detection equipment. This is reported to be reliable for alloy 6351, but false positives have been reported for tests on alloy 6061.
Contributing factors
All of these forms of crack development are the result of the cylinder being subject to high pressure for prolonged periods. The cracks are intergranular and occur at grain boundaries. There is no evidence of stress corrosion or fatigue.
The presence of a relatively high lead content has been identified as a contributory factor. Cracking at the grain boundaries is accelerated in the presence of lead. The presence of bismuth is also suspected to be contributory.
Alloy composition has also been found to be a factor. Alloy 6061 has shown good resistance to SLC, as have alloys 5283 and 7060.
Manufacturing defects such as folds on the inside surface have been shown to be harmful, particularly for parallel-threaded cylinders.
Grain size has been shown to be of relatively minor importance.
Alloy composition
See also
References
Diving equipment failure modes
Metallurgy | Sustained load cracking | [
"Chemistry",
"Materials_science",
"Engineering"
] | 533 | [
"Metallurgy",
"Materials science",
"nan"
] |
36,892,586 | https://en.wikipedia.org/wiki/Quasi-harmonic%20approximation | The quasi-harmonic approximation is a phonon-based model of solid-state physics used to describe volume-dependent thermal effects, such as the thermal expansion. It is based on the assumption that the harmonic approximation holds for every value of the lattice constant, which is to be viewed as an adjustable parameter.
Overview
The quasi-harmonic approximation expands upon the harmonic phonon model of lattice dynamics. The harmonic phonon model states that all interatomic forces are purely harmonic, but such a model is inadequate to explain thermal expansion, as the equilibrium distance between atoms in such a model is independent of temperature.
Thus, in the quasi-harmonic model, from a phonon point of view, phonon frequencies become volume-dependent, such that for each volume the harmonic approximation holds.
Thermodynamics
For a lattice, the Helmholtz free energy F in the quasi-harmonic approximation is
where Elat is the static internal lattice energy, Uvib is the internal vibrational energy of the lattice, or the energy of the phonon system, T is the absolute temperature, V is the volume and S is the entropy due to the vibrational degrees of freedom.
The vibrational energy equals
where N is the number of terms in the sum, is introduced as the characteristic temperature for a phonon with wave vector k in the i-th band at volume V and is shorthand for the number of (k,i)-phonons at temperature T and volume V. As is conventional, is the reduced Planck constant and kB is the Boltzmann constant. The first term in Uvib is the zero-point energy of the phonon system and contributes to the thermal expansion as a zero-point thermal pressure.
The Helmholtz free energy F is given by
and the entropy term equals
,
from which F = U - TS is easily verified.
The frequency ω as a function of k is the dispersion relation. Note that for a constant value of V, these equations correspond to those of the harmonic approximation.
By applying a Legendre transformation, it is possible to obtain the Gibbs free energy G of the system as a function of temperature and pressure.
Here P is the pressure. The minimal value for G is found at the equilibrium volume for a given T and P.
Derivable quantities
Once the Gibbs free energy is known, many thermodynamic quantities can be determined as first- or second-order derivatives. Below are a few which cannot be determined through the harmonic approximation alone.
Equilibrium volume
V(P,T) is determined as a function of pressure and temperature by minimizing the Gibbs free energy.
Thermal expansion
The volumetric thermal expansion αV can be derived from V(P,T) as
Grüneisen parameter
The Grüneisen parameter γ is defined for every phonon mode as
where i indicates a phonon mode. The total Grüneisen parameter is the sum of all the γi. It is a measure of the anharmonicity of the system and is closely related to the thermal expansion.
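For reference, the standard quasi-harmonic expressions for the free energy, the volumetric thermal expansion and the mode Grüneisen parameter are usually written as follows (the mode labels follow the notation above; conventions may differ slightly between texts):

```latex
F(V,T) = E_{\mathrm{lat}}(V)
  + \sum_{\mathbf{k},i} \frac{\hbar\,\omega_{\mathbf{k},i}(V)}{2}
  + k_{B} T \sum_{\mathbf{k},i} \ln\!\left[1 - e^{-\hbar\,\omega_{\mathbf{k},i}(V)/k_{B}T}\right],
\qquad
\alpha_{V} = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_{P},
\qquad
\gamma_{\mathbf{k},i} = -\,\frac{\partial \ln \omega_{\mathbf{k},i}}{\partial \ln V}.
```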
References
Dove, Martin T. (1993). Introduction to Lattice Dynamics. Cambridge University Press.
Condensed matter physics
Lattice models | Quasi-harmonic approximation | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 654 | [
"Phases of matter",
"Materials science",
"Lattice models",
"Computational physics",
"Condensed matter physics",
"Statistical mechanics",
"Matter"
] |
36,893,435 | https://en.wikipedia.org/wiki/Catskill-Delaware%20Water%20Ultraviolet%20Disinfection%20Facility | The Catskill-Delaware Water Ultraviolet Disinfection Facility is an ultraviolet (UV) water disinfection plant built in Westchester County, New York to disinfect water for the New York City water supply system. The compound is the largest ultraviolet germicidal irradiation plant in the world.
The UV facility treats water delivered by two of the city's aqueduct systems, the Catskill Aqueduct and the Delaware Aqueduct, via the Kensico Reservoir. (The city's third supply system, the New Croton Aqueduct, has a separate treatment plant.)
The plant has 56 energy-efficient UV reactors, and cost the city $1.6 billion. Mayor Michael Bloomberg created research groups between 2004 and 2006 to decide the best and most cost-effective ways to modernize the city's water filtration process, as a secondary stage following the existing chlorination and fluoridation facilities. The UV technology effectively controls microorganisms such as Giardia and Cryptosporidium, which are resistant to chlorine treatment. The city staff determined that the cheapest alternatives to a UV system would cost over $3 billion. In response to this finding, Bloomberg decided to set up a public competitive contract auction. Ontario-based Trojan Technologies won the contract.
The facility treats of water per day. The new facility was originally set to be in operation by the end of 2012. The facility opened on October 8, 2013.
References
Environment of New York (state)
Environment of New York City
Water treatment facilities
Ultraviolet radiation
2013 establishments in New York (state)
Water infrastructure of New York City | Catskill-Delaware Water Ultraviolet Disinfection Facility | [
"Physics",
"Chemistry"
] | 326 | [
"Spectrum (physical sciences)",
"Water treatment",
"Water treatment facilities",
"Electromagnetic spectrum",
"Ultraviolet radiation"
] |
48,827,098 | https://en.wikipedia.org/wiki/Iodopindolol | Iodopindolol is a beta-adrenergic selective antagonist tagged with radioactive iodine-125. It has been used to map beta receptors in cellular experiments.
See also
Pindolol
References
Beta blockers
Radiopharmaceuticals
Organoiodides
Isopropylamino compounds
N-isopropyl-phenoxypropanolamines
Indoles | Iodopindolol | [
"Chemistry"
] | 82 | [
"Chemicals in medicine",
"Radiopharmaceuticals",
"Medicinal radiochemistry"
] |
48,831,673 | https://en.wikipedia.org/wiki/Alectinib | Alectinib (INN), sold under the brand name Alecensa, is an anticancer medication that is used to treat non-small-cell lung cancer (NSCLC). It blocks the activity of anaplastic lymphoma kinase (ALK). It is taken by mouth. It was developed by Chugai Pharmaceutical Co. Japan, which is part of the Hoffmann-La Roche group.
The most common side effects include constipation, muscle pain and edema (swelling) including of the ankles and feet, the face, the eyelids and the area around the eyes.
Alectinib was approved for medical use in Japan in 2014, the United States in 2015, Canada in 2016, Australia in 2017, the European Union in 2017, and the United Kingdom in 2021.
Medical uses
In the European Union, alectinib is indicated for the first-line treatment of adults with anaplastic lymphoma kinase (ALK)-positive advanced non-small cell lung cancer (NSCLC); and for the treatment of adults with ALK‑positive advanced NSCLC previously treated with crizotinib.
In the United States, it is indicated for the treatment of people with anaplastic lymphoma kinase (ALK)-positive metastatic non-small cell lung cancer (NSCLC) as detected by an FDA-approved test. In April 2024, the US Food and Drug Administration (FDA) expanded the indication of alectinib to include adjuvant treatment following tumor resection in people with anaplastic lymphoma kinase (ALK)-positive non-small cell lung cancer (NSCLC), as detected by an FDA-approved test.
Contraindications
There are no reported contraindications.
Side effects
Apart from unspecific gastrointestinal effects such as constipation (in 34% of patients) and nausea (22%), common adverse effects in studies included oedema (swelling; 34%), myalgia (muscle pain; 31%), anaemia (low red blood cell count), sight disorders, light sensitivity and rashes (all below 20%). Serious side effects occurred in 19% of patients; fatal ones in 2.8%.
Interactions
Alectinib has a low potential for interactions. While it is metabolised by the liver enzyme CYP3A4, and blockers of this enzyme accordingly increase its concentrations in the body, they also decrease concentrations of the active metabolite M4, resulting in only a small overall effect. Conversely, CYP3A4 inducers decrease alectinib concentrations and increase M4 concentrations. Interactions via other CYP enzymes and transporter proteins cannot be excluded but are unlikely to be of clinical significance.
Pharmacology
Mechanism of action
The substance potently and selectively blocks two receptor tyrosine kinase enzymes: anaplastic lymphoma kinase (ALK) and the RET proto-oncogene. The active metabolite M4 has similar activity against ALK. Inhibition of ALK subsequently blocks cell signalling pathways, including STAT3 and the PI3K/AKT/mTOR pathway, and induces death (apoptosis) of tumour cells.
Pharmacokinetics
When taken with a meal, the absolute bioavailability of the drug is 37%, and highest blood plasma concentrations are reached after four to six hours. Steady state conditions are reached within seven days. Plasma protein binding of alectinib and M4 is over 99%. The enzyme mainly responsible for alectinib metabolism is CYP3A4; other CYP enzymes and aldehyde dehydrogenases only play a small role. Alectinib and M4 account for 76% of the circulating substance, while the rest are minor metabolites.
Plasma half-life of alectinib is 32.5 hours, and that of M4 is 30.7 hours. 98% are excreted via the faeces, of which 84% are unchanged alectinib and 6% are M4. Less than 1% are found in the urine.
Chemistry
Alectinib has a pKa of 7.05. It is used in the form of the hydrochloride, which is a white to yellow-white lumpy powder.
History
The approvals were based mainly on two trials: In a Japanese Phase I–II trial, after approximately 2 years, 19.6% of patients had achieved a complete response, and the 2-year progression-free survival rate was 76%. In February 2016 the J-ALEX phase III study comparing alectinib with crizotinib was terminated early because an interim analysis showed that progression-free survival was longer with alectinib.
In November 2017, the FDA approved alectinib for the first-line treatment of people with ALK-positive metastatic non-small cell lung cancer. This was based on the phase 3 ALEX trial comparing it with crizotinib.
Efficacy was demonstrated in a global, randomized, open-label trial (ALINA, NCT03456076) in participants with ALK-positive NSCLC who had complete tumor resection. Eligible participants were required to have resectable stage IB (tumors ≥ 4 cm) to IIIA NSCLC (by AJCC 7th edition) with ALK rearrangements identified by a locally performed FDA-approved ALK test or by a centrally performed VENTANA ALK (D5F3) CDx assay. A total of 257 participants were randomized (1:1) to receive alectinib 600 mg orally twice daily or platinum-based chemotherapy following tumor resection. The application was granted priority review and orphan drug designations.
In April 2024, the FDA approved alectinib as an adjuvant treatment for people with ALK-positive early-stage lung cancer. This was based on the Phase III ALINA study [NCT03456076].
In April 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion for the use of alectinib for adjuvant treatment of resected non‑small cell lung cancer (NSCLC). In June 2024, the EU approved alectinib as an adjuvant treatment for people in the EU with ALK-positive early-stage lung cancer. This was based on the Phase III ALINA study [NCT03456076].
In October 2024, the UK's NICE recommended alectinib as an adjuvant treatment for adults with stage 1B to 3A ALK-positive non-small-cell lung cancer.
Society and culture
Legal status
Alectinib was approved in Japan in July 2014, for the treatment of ALK fusion-gene positive, unresectable, advanced or recurrent non-small-cell lung cancer (NSCLC).
Alectinib was granted an accelerated approval by the US Food and Drug Administration (FDA) in December 2015, to treat people with advanced ALK-positive NSCLC whose disease worsened after, or who could not tolerate, treatment with crizotinib (Xalkori).
It received conditional approval by the European Medicines Agency in February 2017, for the same indication. The approval was upgraded from conditional to full approval in December 2017.
References
External links
Drugs developed by Hoffmann-La Roche
Drugs developed by Genentech
Carbazoles
Ketones
4-Morpholinyl compounds
Nitriles
Piperidines
Receptor tyrosine kinase inhibitors
Orphan drugs | Alectinib | [
"Chemistry"
] | 1,573 | [
"Ketones",
"Nitriles",
"Functional groups"
] |
32,720,512 | https://en.wikipedia.org/wiki/Cell%20bank | A cell bank is a facility that stores cells of a specific genome for the purpose of future use in a product or for medicinal needs, but the term can also describe the entity of stored cells itself. Cell banks often contain expansive amounts of base cell material that can be utilized for various projects. Cell banks can be used to generate detailed characterizations of cell lines and can also help mitigate cross-contamination of a cell line. Utilizing cell banks also reduces the cost of cell culture processes, providing a cost-efficient alternative to keeping cells in culture constantly. Cell banks are commonly used within fields including stem cell research and pharmaceuticals, with cryopreservation being the traditional method of keeping cellular material intact. Cell banks also effectively reduce the frequency of a cell sample diversifying from natural cell divisions over time.
Types of cell banks
When referring to cell banks as a resource in fields like biopharmaceutical manufacturing, four different types can be distinguished:
Research and development cell banks
Master cell bank
Working cell bank
End-of-production cell bank
While research and development cell banks (R&D CBs) are, as the name already suggests, used for research purposes, they also function as platforms for so-called master cell banks (MCBs). With a sufficient number of validated cells, they are the starting point in the biopharmaceutical production process of cell-based products. The production itself is performed in working cell banks (WCBs), and once the production process has been completed, an end-of-production cell bank (EoPCB) is established as a reference as well as for quality control.
Storage
Before putting the donated cell lines into storage, they are first proliferated and multiplied into a large number of identical cells before being stored in a number of cryovials. Along with the cells, cryoprotection agents are also added to the vials to protect the cells from rupturing from ice crystals during the freezing process. 10% DMSO solution is a common cryoprotection agent. These cryovials are then placed into a tray, labeled with the cell line's genetic data, and placed into cryogenic freezers. The freezers contain nitrogen in either liquid or vapor form, and the cells are frozen at a rate of -1 to -3 degrees Celsius per minute until a temperature of -196 degrees Celsius is reached. At a temperature of -196 degrees Celsius, metabolic processes within the cells are significantly slowed to stop all cell growth, thus preserving the cell line, which is especially useful when the cell line has a limited number of cell divisions. Cells can be stored for an extended amount of time in this state, reducing the rate of degradation of cellular material.
Freezing
The general freezing process for mammalian cells involves suspending a small density of the cells of interest in a solution of cryopreservation agents in a cryovial and freezing the cells to a temperature of -196 degrees Celsius. A slow freezing rate is important for maintaining the health of the cell culture. Freezing the cells at a rate of -1 to -3 degrees Celsius per minute is generally acceptable for maintaining cell culture health. Freezing too quickly risks damaging the cells. At a freezing rate of -5 degrees Celsius per minute, significant decreases in the health of the thawed cell culture are observed. Even more pronounced decreases in cell culture health are observed at faster freezing rates, to the point that the cell culture cannot maintain its cell density. The use of cryopreservation agents is also key to the freezing process. A common cryoprotection agent is a 10% solution of DMSO, which acts to protect the cells from the rupturing caused by ice crystals during freezing and during thawing. DMSO has been observed to be toxic to cells, and requires dilution after the cells are thawed.
Thawing
Rapid thaws are recommended for bringing the cells out of cryopreservation and starting up their normal metabolic processes. Minimizing the exposure of the cryovial and its contents to room or ambient temperatures is important. Rapid thaws are important to prevent the contents of the vial from melting and refreezing rapidly, which could cause ice crystals to form and rupture the cells in the vial. Thaws can be performed in a few minutes within a water bath at a temperature around 37 °C. Experimentation has shown that a slower thaw in a controlled environment such as an incubator can also be used to safely thaw cryofrozen cells. Thawing in an incubator avoids the risk of contamination involved in thawing in a water bath, but takes significantly more time and resources. Post-thaw, the cells need to be transferred from the cryovial into another vessel and resuspended in media. By diluting the concentration of the cryoprotection agent present, negative effects such as toxicity from the cryoprotection agents on metabolically active cells can be mitigated.
History
Originally, scientists kept collections of cellular material for their own use, but not for the scientific community at large. The first person credited with making a cell bank for widespread use was Kral, a Czechoslovakian scientist who created his cell bank collection in the late 1890s.
Currently, there are a large number of "culture collections and bioresource centers" that serve an individual part of the process of bioengineering. Some examples of these include the World Federation for Culture Collections and the International Society for Biological and Environmental Repositories. In January 2003, the UK Stem Cell Bank was established to serve as a central unit for specimen collection and human testing. The National Stem Cell Bank was established in October 2005 in Madison, Wisconsin in order to serve as a repository specifically for stem cell lines. It currently hosts 13 of the 21 stem cell lines that exist in the world and are listed on the Stem Cell Registry hosted by the National Institutes of Health.
In 1987, the World Health Organization established a reference cell bank to provide a resource for the development of vaccines and other biological medicines. Another reference cell bank was established by the World Health Organization in 2007 as a result of stability issues with MRC-5 cells.
See also
Tissue bank
References
Stem cells
Biological engineering | Cell bank | [
"Engineering",
"Biology"
] | 1,269 | [
"Biological engineering"
] |
32,722,000 | https://en.wikipedia.org/wiki/Test%20and%20evaluation%20master%20plan | Test and evaluation master plan (TEMP) is a critical aspect of project management involving complex systems that must satisfy specification requirements. The TEMP is used to support programmatic events called milestone decisions that separate the individual phases of a project. For military systems, the level of funding determines the Acquisition Category and the organization responsible for the milestone decision.
A traceability matrix is generally used to link items within the TEMP to items within specifications.
Definition
The Test and Evaluation Master Plan documents the overall structure and objectives of the Test & Evaluation for a program. It covers activities over a program’s life-cycle and identifies evaluation criteria for the testers.
The test and evaluation master plan consists of individual tests. Each test contains the following.
Test Scenario
Data Collection
Performance Evaluation
Test scenario
The test scenario establishes test conditions. This is typically associated with a specific mission profile. For military systems, this would be a combat scenario, and it may involve Live Fire Test and Evaluation (LFT&E). For commercial systems, this would involve a specific kind of situation involving the use of the item being developed.
For example, cold weather operation may require operation to be evaluated at temperatures below C using an environmental chamber. Evaluation of operation with a vehicle interface may require compatibility evaluation with a vibration test. Evaluation of an Internet store would require the system to take the user through a product purchase while the system is loaded with other traffic.
The test scenario identifies the following.
Items required for testing
Instructions to set up the items that will be used during the test
General description for how to operate the system under test
Specific actions and events that will take place during the test
Data collection
Data collection identifies information that must be collected during the test. This involves preliminary setup before the test begins. This may involve preparation for any of the following.
Settings for the system under test
Separate instrumentation
Written notes from direct observation
Sample collection
Systems that incorporate a computer typically require the ability to extract and record specific kinds of data from the system while it is operating normally.
Electronic data collection may be started and stopped as one of the actions described in the test scenario.
When data access is restricted, the transfer of data between organizations may require a Data Collection Plan. This can occur with classified military systems.
Data is analyzed after testing is complete. This analysis is called performance evaluation.
Performance evaluation
Measures of effectiveness are specific metrics that are used to measure results in the overall mission and execution of assigned tasks.
These may have flexible performance limits associated with the outcome of a specific event. For example, the first round fired from a gun aimed using a radar would not impact a specific location, but the position can be measured using the radar, so the system should be able to deliver a round within a specific radius after several rounds have been fired. The number of rounds required to land one inside the limit is the MOE. The radius would be a Measure Of Performance (MOP).
Measures of performance are specific metrics that have a pass or fail limit that must be satisfied. These are generally identified with the words shall or must in the specification.
One type of MOP is the distance that a vehicle with a specific load must travel at a specific speed before running out of fuel.
Another type of MOP is the distance that a radar can detect a 1 square meter reflector.
Measures of suitability evaluate the ability to be supported in its intended operational environment.
As an example, this may be an evaluation of the Mean Time Between Failure (MTBF) that is evaluated during other testing. A system with excessive failures may satisfy all other requirements and not be suitable for use. A gun that jams when dirty is not suitable for military use.
These requirements are associated with ilities.
Reliability
Availability
Maintainability
Supportability
Usability
Purpose
The results of the TEMP evaluation are used for multiple purposes.
Rejection: funding termination decision
Redesign: modification and re-test funding decision
Full rate production: acceptance funding decision
Mission planning
Mission planning involves translation of MOEs, MOPs, and MOSs into the following.
Required Operational Capability (ROC)
Projected Operational Environment (POE)
Diagnostic Testing via built-in test (BIT) or planned maintenance
ROC and POE are specific expectations evaluated by the TEMP that are used to determine how to deploy assets to satisfy a specific mission requirement. Diagnostic testing ensures these expectations are satisfied for the duration of the mission.
Other usages
The term 'test and evaluation master plan' as a distinct overall guide to the Test and Evaluation functions in a development has also been used by the Australian Department of Defence. Others do use the term, or similar terms such as 'Master Test and Evaluation Plan'.
See also
Defense Acquisition Guide (DAG), section 9 Test and Evaluation
Director, Operational Test and Evaluation
References
External links
PMBOK Guide and Standard
Defense Acquisition University
Department of the Navy, Instructions
Air Force Instruction
Army Instruction
USMC Instructions
DOT&E TEMP Guidebook, 2017
Project management
Schedule (project management)
Military acquisition
United States defense procurement
Systems engineering | Test and evaluation master plan | [
"Physics",
"Engineering"
] | 1,001 | [
"Systems engineering",
"Physical quantities",
"Time",
"Spacetime",
"Schedule (project management)"
] |
32,722,112 | https://en.wikipedia.org/wiki/Matsubara%20frequency | In thermal quantum field theory, the Matsubara frequency summation (named after Takeo Matsubara) is a technique used to simplify calculations involving Euclidean (imaginary time) path integrals.
In thermal quantum field theory, bosonic and fermionic quantum fields are respectively periodic or antiperiodic in imaginary time , with periodicity . Matsubara summation refers to the technique of expanding these fields in Fourier series
The frequencies are called the Matsubara frequencies, taking values from either of the following sets (with ):
bosonic frequencies:
fermionic frequencies:
which respectively enforce periodic and antiperiodic boundary conditions on the field .
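In the common convention with ħ = 1 and β = 1/(k_B T), the two sets are:

```latex
\omega_{n} = \frac{2 n \pi}{\beta} \ \ \text{(bosons)}, \qquad
\omega_{n} = \frac{(2 n + 1)\pi}{\beta} \ \ \text{(fermions)}, \qquad n \in \mathbb{Z}.
```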
Once such substitutions have been made, certain diagrams contributing to the action take the form of a so-called Matsubara summation
The summation will converge if tends to 0 in limit in a manner faster than . The summation over bosonic frequencies is denoted as (with ), while that over fermionic frequencies is denoted as (with ). is the statistical sign.
In addition to thermal quantum field theory, the Matsubara frequency summation method also plays an essential role in the diagrammatic approach to solid-state physics, namely, if one considers the diagrams at finite temperature.
Generally speaking, if at , a certain Feynman diagram is represented by an integral , at finite temperature it is given by the sum .
Summation formalism
General formalism
The trick to evaluate Matsubara frequency summation is to use a Matsubara weighting function hη(z) that has simple poles located exactly at . The weighting functions in the boson case η = +1 and fermion case η = −1 differ. The choice of weighting function will be discussed later. With the weighting function, the summation can be replaced by a contour integral surrounding the imaginary axis.
As in Fig. 1, the weighting function generates poles (red crosses) on the imaginary axis. The contour integral picks up the residue of these poles, which is equivalent to the summation. This procedure is sometimes called Sommerfeld-Watson transformation.
By deformation of the contour lines to enclose the poles of g(z) (the green cross in Fig. 2), the summation can be formally accomplished by summing the residue of g(z)hη(z) over all poles of g(z),
Note that a minus sign is produced, because the contour is deformed to enclose the poles in the clockwise direction, resulting in the negative residue.
Choice of Matsubara weighting function
To produce simple poles on boson frequencies , either of the following two types of Matsubara weighting functions can be chosen
depending on which half plane the convergence is to be controlled in. controls the convergence in the left half plane (Re z < 0), while controls the convergence in the right half plane (Re z > 0). Here is the Bose–Einstein distribution function.
The case is similar for fermion frequencies. There are also two types of Matsubara weighting functions that produce simple poles at
controls the convergence in the left half plane (Re z < 0), while controls the convergence in the right half plane (Re z > 0). Here is the Fermi–Dirac distribution function.
In the application to Green's function calculation, g(z) always has the structure
which diverges in the left half plane given 0 < τ < β. So as to control the convergence, the weighting function of the first type is always chosen . However, there is no need to control the convergence if the Matsubara summation does not diverge. In that case, any choice of the Matsubara weighting function will lead to identical results.
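As a concrete numerical check of this residue formalism, the absolutely convergent fermionic sum with g(z) = 1/(ξ² − z²) can be evaluated both by brute force and from the residues of g weighted with the Fermi function. A minimal Python sketch, with arbitrary illustrative values of β and ξ, is:

```python
import numpy as np

beta, xi = 2.0, 0.7                      # illustrative inverse temperature and energy
n = np.arange(-200_000, 200_000)
omega_n = (2 * n + 1) * np.pi / beta     # fermionic Matsubara frequencies

# Brute-force evaluation of (1/beta) * sum_n g(i*omega_n) for g(z) = 1/(xi**2 - z**2)
brute_force = np.sum(1.0 / (omega_n**2 + xi**2)) / beta

# Residue result: the poles of g at z = +/-xi, weighted with the Fermi function,
# give tanh(beta*xi/2) / (2*xi), i.e. (1 - 2*n_F(xi)) / (2*xi)
closed_form = np.tanh(beta * xi / 2) / (2 * xi)

print(brute_force, closed_form)          # the two numbers agree closely
```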
Table of Matsubara frequency summations
The following table contains
for some simple rational functions g(z). The symbol η = ±1 is the statistical sign, +1 for bosons and -1 for fermions.
[1] Since the summation does not converge, the result may differ upon different choice of the Matsubara weighting function.
[2] (1 ↔ 2) denotes the same expression as the before but with index 1 and 2 interchanged.
Applications in physics
Zero temperature limit
In this limit , the Matsubara frequency summation is equivalent to the integration of imaginary frequency over imaginary axis.
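In this limit the spacing 2π/β between successive frequencies goes to zero, so the standard replacement is:

```latex
\frac{1}{\beta}\sum_{n} f(i\omega_{n}) \;\longrightarrow\; \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\, f(i\omega) \qquad (\beta \to \infty).
```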
Some of the integrals do not converge. They should be regularized by introducing the frequency cutoff , and then subtracting the divergent part (-dependent) from the integral before taking the limit of . For example, the free energy is obtained by the integral of logarithm,
meaning that at zero temperature, the free energy simply relates to the internal energy below the chemical potential. Also the distribution function is obtained by the following integral
which shows step function behavior at zero temperature.
Green's function related
Time domain
Consider a function G(τ) defined on the imaginary time interval (0,β). It can be given in terms of Fourier series,
where the frequency only takes discrete values spaced by 2π/β.
The particular choice of frequency depends on the boundary condition of the function G(τ). In physics, G(τ) stands for the imaginary time representation of Green's function
It satisfies the periodic boundary condition G(τ+β)=G(τ) for a boson field. While for a fermion field the boundary condition is anti-periodic G(τ + β) = −G(τ).
Given the Green's function G(iω) in the frequency domain, its imaginary time representation G(τ) can be evaluated by Matsubara frequency summation. Depending on whether boson or fermion frequencies are summed over, the resulting G(τ) can be different. To distinguish, define
with
Note that τ is restricted in the principal interval (0,β). The boundary condition can be used to extend G(τ) out of the principal interval. Some frequently used results are concluded in the following table.
Operator switching effect
The small imaginary time plays a critical role here. The order of the operators will change if the small imaginary time changes sign.
Distribution function
The evaluation of distribution function becomes tricky because of the discontinuity of Green's function G(τ) at τ = 0. To evaluate the summation
both choices of the weighting function are acceptable, but the results are different. This can be understood if we push G(τ) away from τ = 0 a little bit, then to control the convergence, we must take as the weighting function for , and for .
Bosons
Fermions
Free energy
Bosons
Fermions
Diagram evaluations
Frequently encountered diagrams are evaluated here with the single mode setting. Multiple mode problems can be approached by a spectral function integral.
Here is a fermionic Matsubara frequency, while is a bosonic Matsubara frequency.
Fermion self energy
Particle-hole bubble
Particle-particle bubble
Appendix: Properties of distribution functions
Distribution functions
The general notation stands for either Bose (η = +1) or Fermi (η = −1) distribution function
If necessary, the specific notations nB and nF are used to indicate Bose and Fermi distribution functions respectively
Relation to hyperbolic functions
The Bose distribution function is related to hyperbolic cotangent function by
The Fermi distribution function is related to hyperbolic tangent function by
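With β = 1/(k_B T), the standard forms of these relations are:

```latex
n_{\eta}(z) = \frac{1}{e^{\beta z} - \eta}, \qquad
n_{B}(z) = \frac{1}{2}\left[\coth\frac{\beta z}{2} - 1\right], \qquad
n_{F}(z) = \frac{1}{2}\left[1 - \tanh\frac{\beta z}{2}\right].
```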
Parity
Both distribution functions do not have definite parity,
Another formula is in terms of the function
However their derivatives have definite parity.
Bose–Fermi transmutation
Bose and Fermi distribution functions transmute under a shift of the variable by the fermionic frequency,
However shifting by bosonic frequencies does not make any difference.
Derivatives
First order
In terms of product:
In the zero temperature limit:
Second order
Formula of difference
Case a = 0
Case a → 0
Case b → 0
The function cη
Definition:
For Bose and Fermi type:
Relation to hyperbolic functions
It is obvious that is positive definite.
To avoid overflow in the numerical calculation, the tanh and coth functions are used
Case a = 0
Case b = 0
Low temperature limit
For a = 0:
For b = 0:
In general,
See also
Imaginary time
Thermal quantum field theory
External links
Agustin Nieto: Evaluating Sums over the Matsubara Frequencies. arXiv:hep-ph/9311210
Github repository: MatsubaraSum A Mathematica package for Matsubara frequency summation.
A. Taheridehkordi, S. Curnoe, J.P.F. LeBlanc: Algorithmic Matsubara Integration for Hubbard-like models. arXiv:cond-mat/1808.05188
References
Quantum field theory | Matsubara frequency | [
"Physics"
] | 1,807 | [
"Quantum field theory",
"Quantum mechanics"
] |
32,723,441 | https://en.wikipedia.org/wiki/Dual-Stage%204-Grid | The Dual-Stage 4-Grid (DS4G) is an electrostatic ion thruster design developed by the European Space Agency, in collaboration with the Australian National University. The design was derived by D. Fearn from Controlled Thermonuclear Reactor experiments that use a 4-grid mechanism to accelerate ion beams.
A 4-grid ion thruster with only 0.2 m diameter is projected to absorb 250 kW power. With that energy input rate, the thruster could produce a thrust of 2.5 N. The specific impulse (a measure of fuel efficiency) could reach 19,300 s at an exhaust velocity of 210 km/s if xenon propellant were used. The potentially attainable power and thrust densities would substantially extend the power absorption of current ion thrusters to far more than 100 kW. These characteristics facilitate the development of ion thrusters that can result in extraordinarily high final velocities.
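A rough consistency check of these figures can be made with the ideal jet-power relation P = ½·T·v_e, which assumes that all absorbed power ends up in the beam; the sketch below uses only the numbers quoted above, so it indicates orders of magnitude rather than real performance.

```python
g0 = 9.80665          # standard gravity, m/s^2
P = 250e3             # absorbed power quoted in the text, W
v_e = 210e3           # exhaust velocity quoted in the text, m/s

thrust = 2 * P / v_e  # ideal thrust, assuming all absorbed power becomes jet power
isp = v_e / g0        # specific impulse implied by the quoted exhaust velocity

print(f"{thrust:.2f} N")   # ~2.38 N, of the same order as the quoted 2.5 N
print(f"{isp:.0f} s")      # ~21,400 s; the quoted 19,300 s corresponds to ~190 km/s
```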
Electrical power requirements
Like with thruster concepts such as VASIMR, the dual-stage-4-grid ion thrusters are mainly limited by the necessary power supply for their operation. For example, if solar panels were to supply more than 250 kW, the size of the solar array would surpass the size of the solar panels of the International Space Station. To provide 250 kW with Stirling radioisotope generators would require roughly 17 tonnes of plutonium-238 (for which the US stockpile as of 2013 was no more than 20 kg), and so a nuclear thermal reactor would be needed.
Experiments proposed and tests done
Comparison
See also
Specific impulse
References
Ion engines | Dual-Stage 4-Grid | [
"Physics",
"Chemistry",
"Astronomy"
] | 327 | [
"Matter",
"Outer space",
"Ion engines",
"Astronomy stubs",
"Outer space stubs",
"Ions"
] |
61,555,327 | https://en.wikipedia.org/wiki/Transient%20hot%20wire%20method | The transient hot wire method (THW) is a very popular, accurate and precise technique to measure the thermal conductivity of gases, liquids, solids, nanofluids and refrigerants in a wide temperature and pressure range. The technique is based on recording the transient temperature rise of a thin vertical metal wire of effectively infinite length when a step voltage is applied to it. The wire is immersed in a fluid and can act both as an electrical heating element and as a resistance thermometer. The transient hot wire method has an advantage over other thermal conductivity methods, since it rests on a fully developed theory and requires either no calibration or only a single-point calibration. Furthermore, because of the very small measuring time (1 s), there is no convection present in the measurements and only the thermal conductivity of the fluid is measured, with very high accuracy.
Most of the transient hot wire sensors used in academia consist of two identical very thin wires that differ only in length. Sensors using a single wire are used both in academia and in industry, with the advantage over two-wire sensors of easier handling of the sensor and easier replacement of the wire.
An ASTM standard is published for the measurements of engine coolants using a single-transient hot wire method.
History
200 years ago scientists were using a crude version of this method to make the first ever thermal conductivity measurements on gases.
1781 - Joseph Priestley attempts to measure the ability of different gases to conduct heat using the heated wire experiment.
1931 - Sven Pyk and Bertil Stalhane proposed the first “transient” hot wire method for the measurement of thermal conductivity of solids and powders. Unlike previous methods, the one devised by Pyk and Stalhane used shorter measurement times due to the transient nature of the measurement.
1971 - Davis et al. introduced a modified design with thinner and shorter wires, enabling measuring times between 1 and 10 seconds using a digital voltmeter. It was J. W. Haarman who introduced the electronic Wheatstone bridge that is a common feature of other modern transient methods.
1976 - Healy et al. published a journal article detailing the theory of the transient hot wire, described by an ideal solution with appropriate corrections to address effects like convection, among others.
References
Materials testing
Heat conduction
Heat transfer | Transient hot wire method | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 464 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Materials science",
"Materials testing",
"Thermodynamics",
"Heat conduction"
] |
61,557,800 | https://en.wikipedia.org/wiki/Transcriptional%20memory | Transcriptional memory is a biological phenomenon, initially discovered in yeast, during which cells primed with a particular cue show increased rates of gene expression after re-stimulation at a later time. This phenomenon has been shown to take place in yeast during growth in galactose and during inositol starvation, in plants during environmental stress, and in mammalian cells during LPS and interferon induction. Prior work has shown that certain characteristics of chromatin may contribute to the poised transcriptional state allowing faster re-induction. These include: activity of specific transcription factors, retention of RNA polymerase II at the promoters of poised genes, activity of chromatin remodeling complexes, propagation of H3K4me2 and H3K36me3 histone modifications, occupancy of the H3.3 histone variant, as well as binding of nuclear pore components. Moreover, locally bound cohesin was shown to inhibit establishment of transcriptional memory in human cells during interferon gamma stimulation.
References
Cells
Microbiology
Gene therapy | Transcriptional memory | [
"Chemistry",
"Engineering",
"Biology"
] | 212 | [
"Gene therapy",
"Microbiology",
"Genetic engineering",
"Microscopy"
] |
61,564,298 | https://en.wikipedia.org/wiki/Louvain-la-Neuve%20Cyclotron | The Louvain-la-Neuve Cyclotron is a brutalist architectural complex of the University of Louvain built from 1970 to 1972 in Louvain-la-Neuve, Walloon Brabant, Belgium, notably holding UCLouvain's CYCLONE particle accelerators. It was the first building completed by the university after its move following the Leuven crisis, and it was the largest cyclotron in Europe at the time of its construction. The Louvain Cyclotron can also refer to Belgium's first cyclotron, built in Louvain (Leuven) in 1947, which was replaced by the Louvain-la-Neuve center.
In addition to two particle accelerators of the Cyclotron Research Center, the complex holds the UCLouvain Schools of Mathematics and Physics and corresponding research institutes, the Centre for Applied Molecular Technologies, the UCLouvain radiation protection service, a business incubator and a shared workspace.
Location
The Cyclotron is located at number 1, Chemin du Cyclotron, East of the city of Louvain-la-Neuve, between Boulevard Baudouin Ier, Avenue Louis de Geer and the Chemin du Cyclotron, and North of the Louvain-la-Neuve Science Park.
History
First cyclotron in Heverlee
After the Second World War, under the impetus of Professor Marc de Hemptinne, the Catholic University of Louvain began the construction of a cyclotron for the acceleration of deuterons in Heverlee, a suburb of the city of Louvain. The Heverlee cyclotron was built in 1947 and inaugurated in 1952. From 1952 to 1959, it was used to produce radioactive isotopes and fast neutrons. Later, the cyclotron was used for the study of nuclear reactions and for spectroscopy of very short-lived states.
It was replaced by the research center in Louvain-la-Neuve, where the core of this first Belgian cyclotron is still visible, installed as a monument in front of the new cyclotron. The 6 m³ core has been painted red.
Construction of the city of Louvain-la-Neuve
Louvain-la-Neuve undoubtedly ranks as the most ambitious of the cities built from scratch in Belgium. Forced to leave the city of Leuven during the Leuven crisis, the French-speaking part of the Catholic University of Leuven decided in 1968 to move to a new location on an agricultural plateau located north-east of Ottignies in Brabant, on the other side of Belgium's official language border, and began the construction of a new university town.
Construction of the Louvain-la-Neuve Cyclotron
As part of this move, the University of Louvain decided to build a new cyclotron, named CYCLONE (CYClotron de LOuvain-la-Neuve). In 1968, Roger Bastin, an architect inspired by the refined line of great names in modernism such as Le Corbusier in France and Alvar Aalto in Finland, provided the Université catholique de Louvain with a master plan for the planned city. Bastin's master plan was not adopted, but in return, he was entrusted with the Cyclotron project, which he and his partners Guy Van Oost and Pierre Lamby would carry out. The Cyclotron was the very first construction site in Louvain-la-Neuve, and its development began in 1970, while the foundation stone for the University's new main buildings was not laid until February 2, 1971. The Cyclotron also became the first completed building of Louvain-la-Neuve: its construction ended in 1971 and it was inaugurated in 1972, before the inauguration of the city with its first inhabitants, which happened in October of the same year.
Cyclotron Research Center
The Cyclotron Research Centre was equipped in the early 1970s with a first particle accelerator called CYCLONE110, built by Thomson-CSF in collaboration with the Ateliers de constructions électriques de Charleroi (ACEC) and used for nuclear physics, isotope production, medical and technological applications. A second accelerator, called CYCLONE30, was designed and built by the Cyclotron Research Centre team between 1984 and 1987: mainly designed for industrial and medical applications, it is mainly used for isotope production. Further models of CYCLONE30 were built by Ion Beam Applications.
Yves Jongen and Ion Beam Applications
The first director of the Cyclotron Research Centre was Yves Jongen, from Nivelles, who studied electronic engineering at the Catholic University of Louvain in the 1960s, extending his studies with a specialization in nuclear physics. In August 1970, Jongen moved to a house located near the future centre of the new city of Louvain-la-Neuve, which was still entirely under construction, which earned him the status of the first inhabitant of Louvain-la-Neuve. As the director of the Cyclotron Research Centre, Yves Jongen had the idea of reducing the size and cost of the particle accelerator, which led him to develop, in the mid-1970s, a cyclotron specially adapted for clinical use leading to the creation of Ion Beam Applications (IBA) in 1986, which settled in front of the Cyclotron's Eastern Tower. IBA became a university spin-off of UCLouvain.
Heritage status
The Louvain-la-Neuve Cyclotron has received the status of Registered monument and is included in the Inventory of immovable cultural heritage of the Walloon region under reference 25121-INV-0070-01.
Architecture
Brutalism
While the city centre of Louvain-la-Neuve is built with only a nod to brutalist tendencies, the eastern part of the city, which was developed first, is of clearly brutalist architecture. The Cyclotron's off-centre position allowed architect Roger Bastin "to escape the Louvain-la-Neuve style where Wanlin's Walloon brick, slate roofs, snuff boxes and wooden frames dominate".
The Cyclotron's buildings are very representative of the brutalist style, characterized by facades of uncoated raw concrete, whose surfaces often have a texture inherited from formwork wood, leading to raw formwork concrete, retaining the mark of the wooden boards used for molding, their grain and their joint lines.
The pavement of the paths surrounding the Cyclotron is made of white concrete pavers known as "Blanc de Bierges", a type of paving stone found throughout the city of Louvain-la-Neuve and that has marked its urban landscape.
Buildings
Structure of the complex
The Cyclotron's architectural complex, which is arranged around a central garden, consists of three office and laboratory towers (one to the west, which was the first completed, one to the north and one to the east), each 24 meters wide, and several low-rise buildings housing auditoria of up to 110 seats, several classrooms and a medical unit.
The Cyclotron's bunker and laboratories
The Cyclotron itself is a simply designed space with 3 meter thick concrete walls and an accelerator tower which is 16 meters high. The laboratories (ateliers) and technical facilities are located a bit lower than the towers.
The North Tower (Marc de Hemptinne building)
The North Tower, named after Marc de Hemptinne, one of the university's most famous physicists, is the highest of the three and one of Louvain-la-Neuve's largest buildings. Its five floors and adjacent wings currently house:
the Cyclotron Resource Centre, which tests components with heavy ion, proton, neutron and Cobalt-60 beams and produces microporous membranes;
the Louvain School of Mathematics and the School of Physics of the Faculty of Science;
the Mathematics and Physics Research Institute;
the Centre for Applied Molecular Technologies at the Institute for Experimental and Clinical Research;
the Louvain Institute of Biomolecular Science and Technology;
the Georges Lemaître Centre for Earth and Climate Research;
ten auditoria, the largest of which (CYCL01) has 106 seats.
The East Tower
The eastern tower, located in front of the headquarters of Ion Beam Applications, is lower than the North Tower, with only three floors. It houses multiple services, in the domains of both physics and entrepreneurship & business management, the latter through projects associated with the Louvain School of Management:
the Radiation Protection Department of the University of Louvain;
the Business and Innovation Centre (CEI), a business incubator for startups and innovative SMEs;
Yncubator (Young Entrepreneurs Lab), Louvain-la-Neuve's incubator for student-led projects;
the Louvain Coworking Space.
Public art around the Cyclotron
Science Park sculptures
The Louvain-la-Neuve Science Park, which stretches just behind the Cyclotron, hosts a rich collection of public artworks.
1970s and 1980s
In 1976, artist R.M. Lovell-Cooper created a stainless steel sculpture for company Afine: entitled Affinités, the sculpture stands at n° 10 rue du Bosquet and "evokes two hands raised towards the sky, the ends of which are trying to get closer".
At n° 15 on the same street stands a bronze sculpture on a rough stone base entitled L'Endormie VI, created in 1980 by the Belgian sculptor Olivier Strebelle for company Cyanamid Benelux.
At n° 4, at the corner formed with the rue du Bosquet, stands Hubert Minnebo's 1988 sculpture in hammered and welded copper entitled Bien motivés, ils symbolisèrent leur rayonnement.
Sculptures from the 1990s
In front of the headquarters of Ion Beam Applications (n° 6 avenue Jean-Étienne Lenoir) inaugurated in 1991, Italian artist Mauro Staccioli erected a Corten steel sculpture of about 6 metres in diameter entitled Anneau [Ring]. For IBA, the Italian artist chose the symbolism of the circle: "With the specificity of IBA's activity, I think it is relevant to establish a link between form and content that finds, in the geometric form, the beauty of the rational and the essence of research".
Regard de Lumière, set up in 1993, is Charles Delporte's flagship artwork: "This piece is my torch, my jewel". According to critics Christophe Dosogne and Wivine de Traux, "this sculpture, created in 1948, can be found in Moscow, Paris, Namur, Brussels and Damme, in different materials and sizes". Here in Louvain-la-Neuve, where it stands at n° 4 Albert Einstein Avenue, it is made of bronze skated on a polyester base and "represents the trinity of the theological virtues: faith, hope and charity".
In 1998, Marie-Paule Haar installed an untitled sculpture at n° 1 Albert Einstein Avenue, consisting of a 5 meter high powder-coated aluminum structure. Standing on the parking lot of the Louvain-la-Neuve driving test centre, the sculpture refers to a motorway interchange loop.
In front of n° 15 Albert Einstein Avenue stands a 2 meter high bronze sculpture on blue stone entitled Le Porteur d'eau, created by Thérèse Chotteau in 1999 for Realco. "The man placed in balance on the inclined space of the support carries on his shoulder a bronze wavy line representing water. Thérèse Chotteau's water carrier echoes Realco's activity, which produces enzymes and bacteria for water purification."
Sculptures from the 2000s
At the corner of avenue Albert Einstein and avenue Jean-Étienne Lenoir, at the foot of the postmodern New Tech Center building (DSW Architects, 1999), Vinciane Renard created the artwork Valentin, a bronze sculpture, in 2000.
At n° 1 rue du Bosquet, also in the year 2000, artist Roxane Enescu erected Lapte, a titanium sculpture promoted by company Fasska. According to the artist, "This open, ascending and evolving circle is a metallic surface element made of titanium sheet that wraps itself around a space inhabited by human beings (...). This essential curve suggests the pregnant woman's stomach, the cradle, softness, well-being". Many stylized figures have been carved from metal.
Patience is a sculpture in black Denée marble and crinoid limestone created in 2005 by Romanian-born artist Marian Sava for company Immosc (located at n° 7 rue du Bosquet).
La Découverte [The Discovery], made in 2005 by Luc Vanhonnacker at n° 13, avenue Albert Einstein, reveals fragments of the human body gushing out of the blue stone.
In search of the lost star or Scrutateur d'étoiles, created in 2006 by Philip Aguirre y Otegui for Interscience at n° 2 avenue Jean-Étienne Lenoir, is a 3 meter high concrete sculpture.
At the corner of the avenue Albert Einstein and rue Louis De Geer, Geneviève Vastrade erected a kind of totem pole made of an old stone agricultural roller. This roller, engraved with letters, some of which are carved upside down, evokes both the agricultural land on which Louvain-la-Neuve is built and the rollers of the presses used in printing plants, such as those of the Denef printer and publisher in front of which the sculpture stands.
With his steel sheet sculpture L'Alu Blister, created in 2007 at n° 5 rue du Bosquet, Vincent Strebell chose to refer to the activity of sponsoring firm Constantia, specialized in printing aluminium foils for the pharmaceutical industry.
References
Cyclotron
Cyclotron
Cyclotron
Cyclotron
Accelerator physics
UCLouvain | Louvain-la-Neuve Cyclotron | [
"Physics"
] | 2,923 | [
"Accelerator physics",
"Applied and interdisciplinary physics",
"Experimental physics"
] |
61,566,443 | https://en.wikipedia.org/wiki/Monroe%20Avenue%20Water%20Filtration%20Plant | The Monroe Avenue Water Filtration Plant is a municipal water treatment plant located at 430 Monroe Avenue NW in Grand Rapids, Michigan. Built in 1910, it was likely the first water filtration plant in Michigan. In 1945, the plant was the site of the first public introduction of water fluoridation in the United States. It was listed on the National Register of Historic Places in 2002. The building now serves as an event center, known as Clearwater Place.
History
By the 1870s, the city of Grand Rapids realized the need for a city-wide water system. Bonds were issued in 1874, and a reservoir constructed. However, by the 1900s, there was increasing pressure to find a new source of clean water for the city. In 1910 bonds were issued to construct a filtration plant to clean water from the Grand River. The city hired nationally known New York City engineers Rudolph Hering and George Warren Fuller of Hering and Fuller Engineers to design the new plant, and construction began in 1910. Gentz Brothers of Grand Rapids was the general contractor. The plant was first put on line in 1912, and was an immediate success, substantially reducing water-borne diseases in the city. By the 1920s, however, the plant already needed to be expanded. A large addition, designed by R E. Harrison, was constructed in 1922-24. Additional expansion was done in 1935.
In 1944, the Grand Rapids City Commission authorized fluoridation of the city's water supply, the first city in the United States to do so. Actual application to the water began in early 1945. In 1961, Grand Rapids constructed a large regional filtration plant using water from Lake Michigan, relegating the Monroe Avenue plant to use as a backup facility. In 1988, the plant was designated as a Michigan Historic Civil Engineering Landmark by the American Society of Civil Engineers. The plant was closed in 1992. In 2005, DeVries Development began renovating the building into a mixed use space, including offices and apartments, named "Clearwater Place." Renovation was completed in 2008. In 2017, the building was renovated into an events center.
Description
The Monroe Avenue Water Filtration Plant consists of two buildings (only one of which, the main building, is historically significant) and two wash tanks. The main building is a simple two-story, red brick Romanesque Revival structure sitting on a concrete base. It has a hipped roof covered in green tile, some of which has been replaced with asphalt shingles. Square towers are sited on the corners of the front facade. These towers contain side entrances under triple sets of arches. Prominently visible is a large, hip-roofed central tower, located at the rear, known as the "head house." The two wash tanks are large brick structures located to either side of the main building. They have conical, low-pitched roofs clad in green tile, and a single row of small rectangular windows.
See also
Glendive City Water Filtration Plant, NRHP-listed in Glendive, Montana
References
External links
Clearwater Place
Monroe Avenue Water Filtration Plant at the Historical Marker Database
Further reading
"Monroe Water Filtration Plant Turns 100 (November/December 2024). Michigan History p. 56. Lansing, Michigan: Historical Society of Michigan. ISSN 0026-2196. Retrieved via Gale OneFile
National Register of Historic Places in Grand Rapids, Michigan
Romanesque Revival architecture in Michigan
Industrial buildings completed in 1912
Water supply infrastructure on the National Register of Historic Places
Water treatment facilities
Water in Michigan
1912 establishments in Michigan | Monroe Avenue Water Filtration Plant | [
"Chemistry"
] | 718 | [
"Water treatment",
"Water treatment facilities"
] |
39,756,603 | https://en.wikipedia.org/wiki/Generalized%20filtering | Generalized filtering is a generic Bayesian filtering scheme for nonlinear state-space models. It is based on a variational principle of least action, formulated in generalized coordinates of motion. Note that "generalized coordinates of motion" are related to—but distinct from—generalized coordinates as used in (multibody) dynamical systems analysis. Generalized filtering furnishes posterior densities over hidden states (and parameters) generating observed data using a generalized gradient descent on variational free energy, under the Laplace assumption. Unlike classical (e.g. Kalman-Bucy or particle) filtering, generalized filtering eschews Markovian assumptions about random fluctuations. Furthermore, it operates online, assimilating data to approximate the posterior density over unknown quantities, without the need for a backward pass. Special cases include variational filtering, dynamic expectation maximization and generalized predictive coding.
Definition
Definition: Generalized filtering rests on the following tuple of quantities:
A sample space from which random fluctuations are drawn
Control states – that act as external causes, input or forcing terms
Hidden states – that cause sensory states and depend on control states
Sensor states – a probabilistic mapping from hidden and control states
Generative density – over sensory, hidden and control states under a generative model
Variational density – over hidden and control states with mean
Here ~ denotes a variable in generalized coordinates of motion: x̃ = (x, x′, x″, …).
Generalized filtering
The objective is to approximate the posterior density over hidden and control states, given sensor states and a generative model – and estimate the (path integral of) model evidence to compare different models. This generally involves an intractable marginalization over hidden states, so model evidence (or marginal likelihood) is replaced with a variational free energy bound. Given the following definitions:
Denote the Shannon entropy of the density by . We can then write the variational free energy in two ways:
The second equality shows that minimizing variational free energy (i) minimizes the Kullback-Leibler divergence between the variational and true posterior density and (ii) renders the variational free energy (a bound approximation to) the negative log evidence (because the divergence can never be less than zero). Under the Laplace assumption the variational density is Gaussian and the precision that minimizes free energy is . This means that free-energy can be expressed in terms of the variational mean (omitting constants):
The variational means that minimize the (path integral) of free energy can now be recovered by solving the generalized filter:
where D is a block-matrix derivative operator composed of identity matrices, such that Dx̃ = (x′, x″, …).
Variational basis
Generalized filtering is based on the following lemma: The self-consistent solution to satisfies the variational principle of stationary action, where action is the path integral of variational free energy
Proof: self-consistency requires the motion of the mean to be the mean of the motion and (by the fundamental lemma of variational calculus)
Put simply, small perturbations to the path of the mean do not change variational free energy and it has the least action of all possible (local) paths.
Remarks: Heuristically, generalized filtering performs a gradient descent on variational free energy in a moving frame of reference: , where the frame itself minimizes variational free energy. For a related example in statistical physics, see Kerr and Graham who use ensemble dynamics in generalized coordinates to provide a generalized phase-space version of Langevin and associated Fokker-Planck equations.
In practice, generalized filtering uses local linearization over intervals to recover discrete updates
This updates the means of hidden variables at each interval (usually the interval between observations).
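As a concrete illustration of such an update, the following is a minimal one-dimensional sketch of the generalized gradient descent, assuming a toy linear model with unit precisions and plain Euler integration; it is not the SPM/DEM implementation, and the model functions, step size and number of integration steps are illustrative assumptions.

```python
import numpy as np

# Minimal one-dimensional sketch (toy model, illustrative values; not the
# SPM/DEM implementation). Generalized mean mu = (mu, mu'), observation model
# g(x) = x, dynamics f(x) = -k*x, precisions pi_z (sensory) and pi_w (state).
def generalized_filter_step(mu, y, dt, k=0.5, pi_z=1.0, pi_w=1.0):
    D = np.array([[0.0, 1.0],       # shift operator: D mu = (mu', 0)
                  [0.0, 0.0]])
    eps_z = y - mu[0]               # sensory prediction error, y - g(mu)
    eps_w = mu[1] + k * mu[0]       # state prediction error, mu' - f(mu)
    # gradient of the (Laplace) free energy with respect to mu
    dF_dmu = np.array([-pi_z * eps_z + pi_w * k * eps_w,
                       pi_w * eps_w])
    # generalized gradient descent in a moving frame: d(mu)/dt = D mu - dF/dmu
    return mu + dt * (D @ mu - dF_dmu)

mu = np.zeros(2)
for y in [0.0, 0.2, 0.5, 0.8, 1.0]:   # toy observations
    for _ in range(20):               # a few integration steps per sample
        mu = generalized_filter_step(mu, y, dt=0.1)
print(mu)   # posterior mean of the hidden state and of its motion
```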
Generative (state-space) models in generalized coordinates
Usually, the generative density or model is specified in terms of a nonlinear input-state-output model with continuous nonlinear functions:
The corresponding generalized model (under local linearity assumptions) obtains the from the chain rule
Gaussian assumptions about the random fluctuations then prescribe the likelihood and empirical priors on the motion of hidden states
The covariances factorize into a covariance among variables and correlations among generalized fluctuations that encodes their autocorrelation:
Here, is the second derivative of the autocorrelation function evaluated at zero. This is a ubiquitous measure of roughness in the theory of stochastic processes. Crucially, the precision (inverse variance) of high order derivatives falls to zero fairly quickly, which means it is only necessary to model relatively low order generalized motion (usually between two and eight) for any given or parameterized autocorrelation function.
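To illustrate this point, the sketch below constructs the covariance among generalized fluctuations for an assumed Gaussian autocorrelation function, using the standard identity relating the covariance of derivatives of a stationary process to derivatives of its autocorrelation at zero; the smoothness parameter and the truncation at second-order motion are illustrative choices.

```python
import sympy as sp

# Covariance among generalized fluctuations for an assumed Gaussian
# autocorrelation rho(t) = exp(-t**2 / (2*s**2)); illustrative only.
t, s = sp.symbols('t s', positive=True)
rho = sp.exp(-t**2 / (2 * s**2))

n = 3  # generalized motion up to second derivatives
# E[w^(i) w^(j)] = (-1)**i * rho^(i+j)(0) for a unit-variance stationary process
V = sp.Matrix(n, n, lambda i, j: (-1)**i * sp.diff(rho, t, i + j).subs(t, 0))
print(V)   # [[1, 0, -1/s**2], [0, 1/s**2, 0], [-1/s**2, 0, 3/s**4]]
```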
Special cases
Filtering discrete time series
When time series are observed as a discrete sequence of observations, the implicit sampling is treated as part of the generative process, where (using Taylor's theorem)
In principle, the entire sequence could be used to estimate hidden variables at each point in time. However, the precision of samples in the past and future falls quickly and can be ignored. This allows the scheme to assimilate data online, using local observations around each time point (typically between two and eight).
Generalized filtering and model parameters
For any slowly varying model parameters of the equations of motion or precision generalized filtering takes the following form (where corresponds to the variational mean of the parameters)
Here, the solution minimizes variational free energy, when the motion of the mean is small. This can be seen by noting . It is straightforward to show that this solution corresponds to a classical Newton update.
Relationship to Bayesian filtering and predictive coding
Generalized filtering and Kalman filtering
Classical filtering under Markovian or Wiener assumptions is equivalent to assuming the precision of the motion of random fluctuations is zero. In this limiting case, one only has to consider the states and their first derivative . This means generalized filtering takes the form of a Kalman-Bucy filter, with prediction and correction terms:
Substituting this first-order filtering into the discrete update scheme above gives the equivalent of (extended) Kalman filtering.
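For comparison, the familiar discrete-time extended Kalman filter predict/correct cycle can be sketched as follows for a scalar toy model; the dynamics and observation functions are illustrative assumptions, and this first-order scheme discards the higher-order motion that generalized filtering retains.

```python
# Minimal discrete-time extended Kalman filter step for a scalar toy model
# x_{k+1} = f(x_k) + w,  y_k = g(x_k) + v. Illustrative sketch, not the
# generalized filtering scheme itself.
def ekf_step(x, P, y, Q=0.01, R=0.1):
    f = lambda x: 0.9 * x            # assumed state dynamics
    g = lambda x: 0.5 * x ** 2       # assumed nonlinear observation map

    # prediction (the Jacobian of f is constant here)
    F = 0.9
    x_pred = f(x)
    P_pred = F * P * F + Q

    # correction, linearizing g around the prediction
    G = x_pred                       # dg/dx evaluated at x_pred
    K = P_pred * G / (G * P_pred * G + R)
    x_new = x_pred + K * (y - g(x_pred))
    P_new = (1.0 - K * G) * P_pred
    return x_new, P_new

x, P = 1.0, 1.0
for y in [0.1, 0.3, 0.45, 0.5]:      # toy observations
    x, P = ekf_step(x, P, y)
print(f"posterior mean {x:.3f}, variance {P:.3f}")
```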
Generalized filtering and particle filtering
Particle filtering is a sampling-based scheme that relaxes assumptions about the form of the variational or approximate posterior density. The corresponding generalized filtering scheme is called variational filtering. In variational filtering, an ensemble of particles diffuse over the free energy landscape in a frame of reference that moves with the expected (generalized) motion of the ensemble. This provides a relatively simple scheme that eschews Gaussian (unimodal) assumptions. Unlike particle filtering it does not require proposal densities—or the elimination or creation of particles.
Generalized filtering and variational Bayes
Variational Bayes rests on a mean field partition of the variational density:
This partition induces a variational update or step for each marginal density—that is usually solved analytically using conjugate priors. In generalized filtering, this leads to dynamic expectation maximisation, which comprises a D-step that optimizes the sufficient statistics of unknown states, an E-step for parameters, and an M-step for precisions.
Generalized filtering and predictive coding
Generalized filtering is usually used to invert hierarchical models of the following form
The ensuing generalized gradient descent on free energy can then be expressed compactly in terms of prediction errors, where (omitting high order terms):
Here, is the precision of random fluctuations at the i-th level. This is known as generalized predictive coding [11], with linear predictive coding as a special case.
Applications
Generalized filtering has been primarily applied to biological timeseries—in particular functional magnetic resonance imaging and electrophysiological data. This is usually in the context of dynamic causal modelling to make inferences about the underlying architectures of (neuronal) systems generating data. It is also used to simulate inference in terms of generalized (hierarchical) predictive coding in the brain.
See also
Dynamic Bayesian network
Kalman filter
Linear predictive coding
Optimal control
Particle filter
Recursive Bayesian estimation
System identification
Variational Bayesian methods
References
External links
software demonstrations and applications are available as academic freeware (as Matlab code) in the DEM toolbox of SPM
papers collection of technical and application papers
Bayesian estimation
Systems theory
Control theory
Nonlinear filters
Linear filters
Signal estimation
Stochastic differential equations
Markov models | Generalized filtering | [
"Mathematics"
] | 1,662 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
39,758,073 | https://en.wikipedia.org/wiki/International%20Conference%20on%20Computational%20Intelligence%20Methods%20for%20Bioinformatics%20and%20Biostatistics | The International Conference on Computational Intelligence Methods for Bioinformatics and Biostatistics (CIBB) is a yearly scientific conference focused on machine learning and computational intelligence applied to bioinformatics, biostatistics, and medical informatics.
Organization and history
The CIBB conferences are typically organized by members of the IEEE Computational Intelligence Society (IEEE CIS) and the International Neural Network Society (INNS), among others. Their main themes are machine learning, data mining, and computational intelligence algorithms applied to biological and biostatistical problems.
The CIBB conference was originally started by Francesco Masulli (Università di Genova), Antonina Starita (Università di Pisa), and Roberto Tagliaferri (Università di Salerno) as a special session within other international conferences held in Italy: the 14th Italian Workshop on Neural Networks (2004), the 6th International Workshop on Fuzzy Logic and Applications (2005), the 7th International Fuzzy Logic and Intelligent Technologies in Nuclear Science Conference on Applied Artificial Intelligence (2006), and the 7th International Workshop on Fuzzy Logic and Applications (2007). Because of the broad participation of researchers in the CIBB special session at the latter meeting, which included twenty-six submitted papers, the CIBB steering committee decided to turn CIBB into an autonomous conference starting with the 2008 edition in Vietri sul Mare, Italy.
During their first editions, the CIBB conferences were organized and attended mainly by Italian researchers at various academic locations throughout Italy. As international audience and importance of the conference grew, following editions moved outside Italy. The 2012 CIBB conference was held for the first time outside Europe, in Houston, Texas.
Format
The conference is a single track meeting that includes invited talks as well as oral and poster presentations of refereed papers. It usually lasts three days in September, and traditionally includes some special sessions about the application of computational intelligence to specific aspects of biology (for example, the "Special session on machine learning in health informatics and biological systems" at CIBB 2018), and occasionally some tutorials.
At the 2011 conference edition in Gargnano, the scientific committee gave a young researcher best paper award.
Publications
Proceedings of the conferences are published as a book series by Springer Science+Business Media, whereas selected papers are published in journals such as BMC Bioinformatics and BMC Medical Informatics and Decision Making.
Editions
Future:
CIBB 2025, September 10–12, Milan, Italy, EU – 20th edition.
Past:
CIBB 2024, September 4–6, Benevento, Italy, EU – 19th edition.
CIBB 2023, September 4–6, Padua, Italy, EU – 18th edition.
CIBB 2021, November 15–17, online, virtual edition– 17th edition.
CIBB 2019, September 4–6, Bergamo, Italy, EU – 16th edition.
CIBB 2018, September 6–8, Almada, Portugal, EU – 15th edition.
CIBB 2017, September 7–9 Cagliari, Italy, EU – 14th edition.
CIBB 2016, September 1–3, Stirling, Scotland, United Kingdom – 13th edition.
CIBB 2015, September 10–12, Naples, Italy, EU – 12th edition.
CIBB 2014, June 26–28, Cambridge, England, United Kingdom – 11th edition.
CIBB 2013, June 20–22, Nice, France, EU – 10th edition. Preceded by PRIB 2013.
CIBB 2012, July 12–14, Houston, Texas, USA – 9th edition.
CIBB 2011, June 30–July 2, Gargnano, Italy, EU – 8th edition.
CIBB 2010, September 16–18, Palermo, Italy, EU – 7th edition.
CIBB 2009, October 15–17, Genoa, Italy, EU – 6th edition.
CIBB 2008, October 3–4, Vietri sul Mare, Italy, EU – 5th edition.
CIBB 2007, July 7–10, Portofino, Italy, EU – 4th edition. Special session of WILF 2007.
CIBB 2006, August 30, Genoa, Italy, EU – 3rd edition. Special session of FLINS 2006.
CIBB 2005, September 15–17, Crema, Italy, EU – 2nd edition. Special session of WILF 2005.
CIBB 2004, September 14–15, Perugia, Italy, EU – 1st edition. Special session of WIRN 2004.
References
External links
CIBB conference series on WikiCfp.com
Artificial intelligence conferences
Computer science conferences
Bioinformatics | International Conference on Computational Intelligence Methods for Bioinformatics and Biostatistics | [
"Technology",
"Engineering",
"Biology"
] | 929 | [
"Bioinformatics",
"Computer science",
"Computer science conferences",
"Biological engineering"
] |
39,761,773 | https://en.wikipedia.org/wiki/Central%20nervous%20system%20effects%20from%20radiation%20exposure%20during%20spaceflight | Travel outside the Earth's protective atmosphere and magnetosphere, and prolonged time in free fall, can harm human health, and understanding such harm is essential for successful crewed spaceflight. Potential effects on the central nervous system (CNS) are particularly important. A vigorous ground-based cellular and animal model research program will help quantify the risk to the CNS from space radiation exposure on future long distance space missions and promote the development of optimized countermeasures.
Possible acute and late risks to the CNS from galactic cosmic rays (GCRs) and solar proton events (SPEs) are a documented concern for human exploration of the Solar System. In the past, the risks to the CNS of adults who were exposed to low to moderate doses of ionizing radiation (0 to 2 Gy (Gray) (Gy = 100 rad)) have not been a major consideration. However, the heavy ion component of space radiation presents distinct biophysical challenges to cells and tissues as compared to the physical challenges that are presented by terrestrial forms of radiation. Soon after the discovery of cosmic rays, the concern for CNS risks originated with the prediction of the light flash phenomenon from single HZE nuclei traversals of the retina; this phenomenon was confirmed by the Apollo astronauts in 1970 and 1973. HZE nuclei are capable of producing a column of heavily damaged cells, or a microlesion, along their path through tissues, thereby raising concern over serious impacts on the CNS. In recent years, other concerns have arisen with the discovery of neurogenesis and its impact by HZE nuclei, which have been observed in experimental models of the CNS.
Human epidemiology is used as a basis for risk estimation for cancer, acute radiation risks, and cataracts. This approach is not viable for estimating CNS risks from space radiation, however. At doses above a few Gy, detrimental CNS changes occur in humans who are treated with radiation (e.g., gamma rays and protons) for cancer. Treatment doses of 50 Gy are typical, which is well above the exposures in space even if a large SPE were to occur. Thus, of the four categories of space radiation risks (cancer, CNS, degenerative, and acute radiation syndromes), the CNS risk relies most extensively on experimental data with animals for its evidence base. Understanding and mitigating CNS risks requires a vigorous research program that will draw on the basic understanding that is gained from cellular and animal models, and on the development of approaches to extrapolate risks and the potential benefits of countermeasures for astronauts.
Several experimental studies, which use heavy ion beams simulating space radiation, provide constructive evidence of the CNS risks from space radiation. First, exposure to HZE nuclei at low doses (<50 cGy) significantly induces neurocognitive deficits, such as learning and behavioral changes as well as operant reactions in the mouse and rat. Exposures to equal or higher doses of low-LET radiation (e.g., gamma or X rays) do not show similar effects. The threshold of performance deficit following exposure to HZE nuclei depends on both the physical characteristics of the particles, such as linear energy transfer (LET), and the animal age at exposure. A performance deficit has been shown to occur at doses that are similar to the ones that will occur on a Mars mission (<0.5 Gy). The neurocognitive deficits with the dopaminergic nervous system are similar to aging and appear to be unique to space radiation. Second, exposure to HZE disrupts neurogenesis in mice at low doses (<1 Gy), showing a significant dose-related reduction of new neurons and oligodendrocytes in the subgranular zone (SGZ) of the hippocampal dentate gyrus. Third, reactive oxygen species (ROS) in neuronal precursor cells arise following exposure to HZE nuclei and protons at low dose, and can persist for several months. Antioxidants and anti-inflammatory agents can possibly reduce these changes. Fourth, neuroinflammation arises from the CNS following exposure to HZE nuclei and protons. In addition, age-related genetic changes increase the sensitivity of the CNS to radiation.
Research with animal models that are irradiated with HZE nuclei has shown that important changes to the CNS occur with the dose levels that are of concern to NASA. However, the significance of these results on the morbidity to astronauts has not been elucidated. One model of late tissue effects suggests that significant effects will occur at lower doses, but with increased latency. It is to be noted that the studies that have been conducted to date have been carried out with relatively small numbers of animals (<10 per dose group); therefore, testing of dose threshold effects at lower doses (< 0.5 Gy) has not been carried out sufficiently at this time. As the problem of extrapolating space radiation effects in animals to humans will be a challenge for space radiation research, such research could become limited by the population size that is used in animal studies. Furthermore, the role of dose protraction has not been studied to date. An approach to extrapolate existing observations to possible cognitive changes, performance degradation, or late CNS effects in astronauts has not been discovered. New approaches in systems biology offer an exciting tool to tackle this challenge. Recently, eight gaps were identified for projecting CNS risks. Research on new approaches to risk assessment may be needed to provide the necessary data and knowledge to develop risk projection models of the CNS from space radiation.
Introduction
Both GCRs and SPEs are of concern for CNS risks. The major GCRs are composed of protons, α-particles, and particles of HZE nuclei with a broad energy spectrum ranging from a few tens to above 10 000 MeV/u. In interplanetary space, GCR organ dose and dose-equivalent of more than 0.2 Gy or 0.6 Sv per year, respectively, are expected. The high energies of GCRs allow them to penetrate to hundreds of centimeters of any material, thus precluding radiation shielding as a plausible mitigation measure to GCR risks on the CNS. For SPEs, the possibility exists for an absorbed dose of over 1 Gy from an SPE if crew members are in a thinly shielded spacecraft or performing a spacewalk. The energies of SPEs, although substantial (tens to hundreds of MeV), do not preclude radiation shielding as a potential countermeasure. However, the costs of shielding may be high to protect against the largest events.
The fluence of charged particles hitting the brain of an astronaut has been estimated several times in the past. One estimate is that during a 3-year mission to Mars at solar minimum (assuming the 1972 spectrum of GCR), 20 million out of 43 million hippocampus cells and 230 thousand out of 1.3 million thalamus cell nuclei will be directly hit by one or more particles with charge Z > 15. These numbers do not include the additional cell hits by energetic electrons (delta rays) that are produced along the track of HZE nuclei or correlated cellular damage. The contributions of delta rays from GCR and correlated cellular damage increase the number of damaged cells two- to three-fold from estimates of the primary track alone and present the possibility of heterogeneously damaged regions, respectively.
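A rough sense of such estimates can be obtained from a simple Poisson traversal calculation: the expected number of hits per nucleus is the particle fluence multiplied by the nuclear cross-sectional area. The fluence and nuclear area in the sketch below are assumed round numbers chosen only to illustrate the calculation, although with these assumptions the fraction of hit nuclei comes out of the same order as the estimate quoted above.

```python
import math

# Illustrative Poisson estimate of the fraction of cell nuclei traversed by at
# least one heavy ion during a mission; the fluence and nuclear area are
# assumed round numbers, not values taken from the text.
fluence_per_cm2 = 0.6e6        # assumed mission fluence of Z > 15 ions, particles/cm^2
nucleus_area_cm2 = 100e-8      # assumed nuclear cross-section, ~100 square microns

mean_hits = fluence_per_cm2 * nucleus_area_cm2   # expected traversals per nucleus
frac_hit = 1.0 - math.exp(-mean_hits)            # P(at least one hit), Poisson

print(f"expected traversals per nucleus: {mean_hits:.2f}")
print(f"fraction of nuclei hit at least once: {frac_hit:.0%}")
```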
At this time, the possible detrimental effects to an astronaut's CNS from the HZE component of GCR have yet to be identified. This is largely due to the lack of a human epidemiological basis with which to estimate risks and the relatively small number of published experimental studies with animals. RBE factors are combined with human data to estimate cancer risks for low-LET radiation exposure. Since this approach is not possible for CNS risks, new approaches to risk estimation will be needed. Thus, biological research is required to establish risk levels and risk projection models and, if the risk levels are found to be significant, to design countermeasures.
Description of central nervous system risks of concern to NASA
Acute and late CNS risks from space radiation are of concern for Exploration missions to the moon or Mars. Acute CNS risks include: altered cognitive function, reduced motor function, and behavioral changes, all of which may affect performance and human health. Late CNS risks are possible neurological disorders such as Alzheimer's disease, dementia, or premature aging. The effect of the protracted exposure of the CNS to the low dose-rate (< 50 mGy h–1) of proton, HZE particles, and neutrons of the relevant energies for doses up to 2 Gy is of concern.
Current NASA permissible exposure limits
PELs for short-term and career astronaut exposure to space radiation have been approved by the NASA Chief Health and Medical Officer. The PELs set requirements and standards for mission design and crew selection as recommended in NASA-STD-3001, Volume 1. NASA has used dose limits for cancer risks and the non-cancer risks to the BFOs, skin, and lens since 1970. For Exploration mission planning, preliminary dose limits for the CNS risks are based largely on experimental results with animal models. Further research is needed to validate and quantify these risks, however, and to refine the values for dose limits. The CNS PELs, which correspond to the doses at the region of the brain called the hippocampus, are set for time periods of 30 days or 1 year, or for a career with values of 500, 1,000, and 1,500 mGy-Eq, respectively. Although the unit mGy-Eq is used, the RBE for CNS effects is largely unknown; therefore, the use of the quality factor function for cancer risk estimates is advocated. For particles with charge Z>10, an additional PEL requirement limits the physical dose (mGy) for 1 year and the career to 100 and 250 mGy, respectively. NASA uses computerized anatomical geometry models to estimate the body self-shielding at the hippocampus.
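For reference, the CNS PEL values quoted above can be organized as a small lookup table; the compliance helper below is only an illustration and is not NASA's dose-tracking tooling.

```python
# CNS permissible exposure limits quoted above, organized as a small lookup;
# the helper function is purely illustrative.
CNS_PEL_mGyEq = {"30 day": 500, "1 year": 1000, "career": 1500}   # dose at the hippocampus
CNS_PEL_Z_GT_10_mGy = {"1 year": 100, "career": 250}              # physical dose, Z > 10 ions

def within_cns_pel(dose_mGyEq, period="1 year"):
    """Return True if an accumulated hippocampal dose stays within the PEL."""
    return dose_mGyEq <= CNS_PEL_mGyEq[period]

print(within_cns_pel(750, "1 year"))   # True: 750 mGy-Eq is below the 1,000 mGy-Eq limit
```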
Evidence
Review of human data
Evidence of the effects of terrestrial forms of ionizing radiation on the CNS has been documented from radiotherapy patients, although the dose is higher for these patients than would be experienced by astronauts in the space environment. CNS behavioral changes such as chronic fatigue and depression occur in patients who are undergoing irradiation for cancer therapy. Neurocognitive effects, especially in children, are observed at lower radiation doses. A recent review on intelligence and the academic achievement of children after treatment for brain tumors indicates that radiation exposure is related to a decline in intelligence and academic achievement, including low intelligence quotient (IQ) scores, verbal abilities, and performance IQ; academic achievement in reading, spelling, and mathematics; and attention functioning. Mental retardation was observed in the children of the atomic-bomb survivors in Japan who were exposed to radiation prenatally at moderate doses (<2 Gy) at 8 to 15 weeks post-conception, but not at earlier or later prenatal times.
Radiotherapy for the treatment of several tumors with protons and other charged particle beams provides ancillary data for considering radiation effects for the CNS. NCRP Report No. 153 notes charge particle usage “for treatment of pituitary tumors, hormone-responsive metastatic mammary carcinoma, brain tumors, and intracranial arteriovenous malformations and other cerebrovascular diseases.” In these studies are found associations with neurological complications such as impairments in cognitive functioning, language acquisition, visual spatial ability, and memory and executive functioning, as well as changes in social behaviors. Similar effects did not appear in patients who were treated with chemotherapy. In all of these examples, the patients were treated with extremely high doses that were below the threshold for necrosis. Since cognitive functioning and memory are closely associated with the cerebral white volume of the prefrontal/frontal lobe and cingulate gyrus, defects in neurogenesis may play a critical role in neurocognitive problems in irradiated patients.
Review of space flight issues
The first proposal concerning the effect of space radiation on the CNS was made by Cornelius Tobias in his 1952 description of the light flash phenomenon caused by single HZE nuclei traversals of the retina. Light flashes, such as those described by Tobias, were observed by the astronauts during the early Apollo missions as well as in dedicated experiments that were subsequently performed on Apollo and Skylab missions. More recently, studies of light flashes were made on the Russian Mir space station and the ISS. A 1973 report by the NAS considered these effects in detail. This phenomenon, which is known as a phosphene, is the visual perception of flickering light. It is considered a subjective sensation of light since it can be caused by simply applying pressure on the eyeball. The traversal of a single, highly charged particle through the occipital cortex or the retina was estimated to be able to cause a light flash. Possible mechanisms for HZE-induced light flashes include direct ionization and Cherenkov radiation within the retina.
The observation of light flashes by the astronauts brought attention to the possible effects of HZE nuclei on brain function. The microlesion concept, which considered the effects of the column of damaged cells surrounding the path of an HZE nucleus traversing critical regions of the brain, originated at this time. An important task that still remains is to determine whether and to what extent such particle traversals contribute to functional degradation within the CNS.
The possible observation of CNS effects in astronauts who were participating in past NASA missions is highly unlikely for several reasons. First, the lengths of past missions are relatively short and the population sizes of astronauts are small. Second, when astronauts are traveling in LEO, they are partially protected by the magnetic field and the solid body of the Earth, which together reduce the GCR dose-rate by about two-thirds from its free space values. Furthermore, the GCR in LEO has lower LET components compared to the GCR that will be encountered in transit to Mars or on the lunar surface because the magnetic field of the Earth repels nuclei with energies that are below about 1,000 MeV/u, which are of higher LET. For these reasons, the CNS risks are a greater concern for long-duration lunar missions or for a Mars mission than for missions on the ISS.
Radiobiology studies of central nervous system risks for protons, neutrons, and high-Z high-energy nuclei
Both GCR and SPE could possibly contribute to acute and late CNS risks to astronaut health and performance. This section presents a description of the studies that have been performed on the effects of space radiation in cell, tissue, and animal models.
Effects in neuronal cells and the central nervous system
Neurogenesis
The CNS consists of neurons, astrocytes, and oligodendrocytes that are generated from multipotent stem cells. NCRP Report No. 153 provides the following excellent and short introduction to the composition and cell types of interest for radiation studies of the CNS: “The CNS consists of neurons differing markedly in size and number per unit area. There are several nuclei or centers that consist of closely packed neuron cell bodies (e.g., the respiratory and cardiac centers in the floor of the fourth ventricle). In the cerebral cortex the large neuron cell bodies, such as Betz cells, are separated by a considerable distance. Of additional importance are the neuroglia which are the supporting cells and consist of astrocytes, oligodendroglia, and microglia. These cells permeate and support the nervous tissue of the CNS, binding it together like a scaffold that also supports the vasculature. The most numerous of the neuroglia are Type I astrocytes, which make up about half the brain, greatly outnumbering the neurons. Neuroglia retain the capability of cell division in contrast to neurons and, therefore, the responses to radiation differ between the cell types. A third type of tissue in the brain is the vasculature which exhibits a comparable vulnerability for radiation damage to that found elsewhere in the body. Radiation-induced damage to oligodendrocytes and endothelial cells of the vasculature accounts for major aspects of the pathogenesis of brain damage that can occur after high doses of low-LET radiation.” Based on studies with low-LET radiation, the CNS is considered a radioresistant tissue. For example: in radiotherapy, early brain complications in adults usually do not develop if daily fractions of 2 Gy or less are administered with a total dose of up to 50 Gy. The tolerance dose in the CNS, as with other tissues, depends on the volume and the specific anatomical location in the human brain that is irradiated.
In recent years, studies with stem cells uncovered that neurogenesis still occurs in the adult hippocampus, where cognitive actions such as memory and learning are determined. This discovery provides an approach to understand mechanistically the CNS risk of space radiation. Accumulating data indicate that radiation not only affects differentiated neural cells, but also the proliferation and differentiation of neuronal precursor cells and even adult stem cells. Recent evidence points out that neuronal progenitor cells are sensitive to radiation. Studies on low-LET radiation show that radiation stops not only the generation of neuronal progenitor cells, but also their differentiation into neurons and other neural cells. NCRP Report No. 153 notes that cells in the SGZ of the dentate gyrus undergo dose-dependent apoptosis above 2 Gy of X-ray irradiation, and the production of new neurons in young adult male mice is significantly reduced by relatively low (>2 Gy) doses of X rays. NCRP Report No. 153 also notes that: “These changes are observed to be dose dependent. In contrast there were no apparent effects on the production of new astrocytes or oligodendrocytes. Measurements of activated microglia indicated that changes in neurogenesis were associated with a significant dose-dependent inflammatory response even 2 months after irradiation. This suggests that the pathogenesis of long-recognized radiation-induced cognitive injury may involve loss of neural precursor cells from the SGZ of the hippocampal dentate gyrus and alterations in neurogenesis.”
Recent studies provide evidence of the pathogenesis of HZE nuclei in the CNS. The authors of one of these studies were the first to suggest neurodegeneration with HZE nuclei, as shown in figure 6-1(a). These studies demonstrate that HZE radiation led to the progressive loss of neuronal progenitor cells in the SGZ at doses of 1 to 3 Gy in a dose-dependent manner. NCRP Report No. 153 notes that “Mice were irradiated with 1 to 3 Gy of 12C or 56Fe-ions and 9 months later proliferating cells and immature neurons in the dentate SGZ were quantified. The results showed that reductions in these cells were dependent on the dose and LET. Loss of precursor cells was also associated with altered neurogenesis and a robust inflammatory response, as shown in figures 6-1(a) and 6-1(b). These results indicate that high-LET radiation has a significant and long-lasting effect on the neurogenic population in the hippocampus that involves cell loss and changes in the microenvironment. The work has been confirmed by other studies. These investigators noted that these changes are consistent with those found in aged subjects, indicating that heavy-particle irradiation is a possible model for the study of aging.”
Oxidative damage
Recent studies indicate that adult rat neural precursor cells from the hippocampus show an acute, dose-dependent apoptotic response that was accompanied by an increase in ROS. Low-LET protons are also used in clinical proton beam radiation therapy, at an RBE of 1.1 relative to megavoltage X rays at a high dose. NCRP Report No. 153 notes that: “Relative ROS levels were increased at nearly all doses (1 to 10 Gy) of Bragg-peak 250 MeV protons at post-irradiation times (6 to 24 hours) compared to unirradiated controls. The increase in ROS after proton irradiation was more rapid than that observed with X rays and showed a well-defined dose response at 6 and 24 hours, increasing about 10-fold over controls at a rate of 3% per Gy. However, by 48 hours post-irradiation, ROS levels fell below controls and coincided with minor reductions in mitochondrial content. Use of the antioxidant alpha-lipoic acid (before or after irradiation) was shown to eliminate the radiation-induced rise in ROS levels. These results corroborate the earlier studies using X rays and provide further evidence that elevated ROS are integral to the radioresponse of neural precursor cells.” Furthermore, high-LET radiation led to significantly higher levels of oxidative stress in hippocampal precursor cells as compared to lower-LET radiations (X rays, protons) at lower doses (≤1 Gy) (figure 6-2). The use of the antioxidant lipoic acid was able to reduce ROS levels below background levels when added before or after 56Fe-ion irradiation. These results conclusively show that low doses of 56Fe-ions can elicit significant levels of oxidative stress in neural precursor cells at a low dose.
Neuroinflammation
Neuroinflammation, which is a fundamental reaction to brain injury, is characterized by the activation of resident microglia and astrocytes and local expression of a wide range of inflammatory mediators. Acute and chronic neuroinflammation has been studied in the mouse brain following exposure to HZE. The acute effect of HZE is detectable at 6 and 9 Gy; no studies are available at lower doses. Myeloid cell recruitment appears by 6 months following exposure. The estimated RBE value of HZE irradiation for induction of an acute neuroinflammatory response is three compared to that of gamma irradiation. COX-2 pathways are implicated in neuroinflammatory processes that are caused by low-LET radiation. COX-2 up-regulation in irradiated microglia cells leads to prostaglandin E2 production, which appears to be responsible for radiation-induced gliosis (overproliferation of astrocytes in damaged areas of the CNS).
Behavioral effects
As behavioral effects are difficult to quantitate, they consequently are one of the most uncertain of the space radiation risks. NCRP Report No. 153 notes that: “The behavioral neurosciences literature is replete with examples of major differences in behavioral outcome depending on the animal species, strain, or measurement method used. For example, compared to unirradiated controls, X-irradiated mice show hippocampal-dependent spatial learning and memory impairments in the Barnes maze, but not in the Morris water maze which, however, can be used to demonstrate deficits in rats. Particle radiation studies of behavior have been accomplished with rats and mice, but with some differences in the outcome depending on the endpoint measured.”
The following studies provide evidence that space radiation affects the CNS behavior of animals in a somewhat dose- and LET-dependent manner.
Sensorimotor effects
Sensorimotor deficits and neurochemical changes were observed in rats that were exposed to low doses of 56Fe-ions. Doses that are below 1 Gy reduce performance, as tested by the wire suspension test. Behavioral changes were observed as early as 3 days after radiation exposure and lasted up to 8 months. Biochemical studies showed that the K+-evoked release of dopamine was significantly reduced in the irradiated group, together with an alteration of the nerve signaling pathways. A negative result was reported by Pecaut et al., in which no behavioral effects were seen in female C57/BL6 mice in a 2- to 8-week period following their exposure to 0, 0.1, 0.5 or 2 Gy accelerated 56Fe-ions (1 GeV/u56Fe) as measured by open-field, rotorod, or acoustic startle habituation.
Radiation-induced changes in conditioned taste aversion
There is evidence that deficits in conditioned taste aversion (CTA) are induced by low doses of heavy ions. The CTA test is a classical conditioning paradigm that assesses the avoidance behavior that occurs when the ingestion of a normally acceptable food item is associated with illness. This is considered a standard behavioral test of drug toxicity. NCRP Report No. 153 notes that: “The role of the dopaminergic system in radiation-induced changes in CTA is suggested by the fact that amphetamine-induced CTA, which depends on the dopaminergic system, is affected by radiation, whereas lithium chloride-induced CTA, which does not involve the dopaminergic system, is not affected by radiation. It was established that the degree of CTA due to radiation is LET-dependent ([figure 6-3]) and that 56Fe-ions are the most effective of the various low and high LET radiation types that have been tested. Doses as low as ~0.2 Gy of 56Fe-ions appear to have an effect on CTA.”
The RBE of different types of heavy particles on CNS function and cognitive/behavioral performance was studied in Sprague-Dawley rats. The relationship between the thresholds for the HZE particle-induced disruption of amphetamine-induced CTA learning is shown in figure 6-4; and for the disruption of operant responding is shown in figure 6-5. These figures show a similar pattern of responsiveness to the disruptive effects of exposure to either 56Fe or 28Si particles on both CTA learning and operant responding. These results suggest that the RBE of different particles for neurobehavioral dysfunction cannot be predicted solely on the basis of the LET of the specific particle.
Radiation effect on operant conditioning
Operant conditioning uses several consequences to modify a voluntary behavior. Recent studies by Rabin et al. have examined the ability of rats to perform an operant response in order to obtain food reinforcement using an ascending fixed-ratio (FR) schedule. They found that 56Fe-ion doses that are above 2 Gy affect the appropriate responses of rats to increasing work requirements. NCRP Report No. 153 notes that "The disruption of operant response in rats was tested 5 and 8 months after exposure, but maintaining the rats on a diet containing strawberry, but not blueberry, extract was shown to prevent the disruption. When tested 13 and 18 months after irradiation, there were no differences in performance between the irradiated rats maintained on control, strawberry or blueberry diets. These observations suggest that the beneficial effects of antioxidant diets may be age dependent."
Spatial learning and memory
The effects of exposure to HZE nuclei on spatial learning, memory behavior, and neuronal signaling have been tested, and threshold doses have also been considered for such effects. It will be important to understand the mechanisms that are involved in these deficits to extrapolate the results to other dose regimes, particle types, and, eventually, astronauts. Studies on rats were performed using the Morris water maze test 1 month after whole-body irradiation with 1.5 Gy of 1 GeV/u 56Fe-ions. Irradiated rats demonstrated cognitive impairment that was similar to that seen in aged rats. This leads to the possibility that an increase in the amount of ROS may be responsible for the induction of both radiation- and age-related cognitive deficits.
NCRP Report No. 153 notes that: “Denisova et al. exposed rats to 1.5 Gy of 1 GeV/u 56Fe ions and tested their spatial memory in an eight-arm radial maze. Radiation exposure impaired the rats’ cognitive behavior, since they committed more errors than control rats in the radial maze and were unable to adopt a spatial strategy to solve the maze. To determine whether these findings related to brain-region specific alterations in sensitivity to oxidative stress, inflammation or neuronal plasticity, three regions of the brain, the striatum, hippocampus and frontal cortex that are linked to behavior, were isolated and compared to controls. Those that were irradiated were adversely affected as reflected through the levels of dichlorofluorescein, heat shock, and synaptic proteins (for example, synaptobrevin and synaptophysin). Changes in these factors consequently altered cellular signaling (for example, calcium-dependent protein kinase C and protein kinase A). These changes in brain responses significantly correlated with working memory errors in the radial maze. The results show differential brain-region-specific sensitivity induced by 56Fe irradiation ([figure 6-6]). These findings are similar to those seen in aged rats, suggesting that increased oxidative stress and inflammation may be responsible for the induction of both radiation and age-related cognitive deficits.”
Acute central nervous system risks
In addition to the possible in-flight performance and motor skill changes that were described above, the immediate CNS effects (i.e., within 24 hours following exposure to low-LET radiation) are anorexia and nausea. These prodromal risks are dose-dependent and, as such, can provide an indicator of the exposure dose. Estimates are ED50 = 1.08 Gy for anorexia, ED50 = 1.58 Gy for nausea, and ED50 = 2.40 Gy for emesis. The relative effectiveness of different radiation types in producing emesis was studied in ferrets and is illustrated in figure 6-7. High-LET radiation at doses that are below 0.5 Gy shows greater relative biological effectiveness compared to low-LET radiation. The acute effects on the CNS, which are associated with increases in cytokines and chemokines, may lead to disruption in the proliferation of stem cells or memory loss that may contribute to other degenerative diseases.
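The quoted ED50 values can be turned into a rough dose-incidence estimate with a sigmoidal dose-response model. The following is a minimal Python sketch, assuming a logistic curve; the slope parameter is an illustrative assumption, and only the ED50 values come from the estimates above.

```python
import math

# Logistic dose-response: P(dose) = 1 / (1 + exp(-k * (dose - ED50))).
# ED50 values are those quoted above; the slope k is an assumed value
# chosen only to illustrate the shape of the curve.
ED50 = {"anorexia": 1.08, "nausea": 1.58, "emesis": 2.40}  # Gy
K_SLOPE = 3.0  # assumed steepness, per Gy

def incidence(dose_gy, endpoint):
    """Estimated fraction of an exposed population showing the endpoint."""
    return 1.0 / (1.0 + math.exp(-K_SLOPE * (dose_gy - ED50[endpoint])))

for d in (0.5, 1.0, 1.5, 2.0):
    print(d, {e: round(incidence(d, e), 2) for e in ED50})
```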
Computer models and systems biology analysis of central nervous system risks
Since human epidemiology and experimental data for CNS risks from space radiation are limited, mammalian models are essential tools for understanding the uncertainties of human risks. Cellular, tissue, and genetic animal models have been used in biological studies on the CNS using simulated space radiation. New technologies, such as three-dimensional cell cultures, microarrays, proteomics, and brain imaging, are used in systematic studies on CNS risks from different radiation types. According to biological data, mathematical models can be used to estimate the risks from space radiation.
Systems biology approaches to Alzheimer's disease that consider the biochemical pathways that are important in CNS disease evolution have been developed by research that was funded outside NASA. Figure 6-8 shows a schematic of the biochemical pathways that are important in the development of Alzheimer's disease. The description of the interaction of space radiation within these pathways would be one approach to developing predictive models of space radiation risks. For example, if the pathways that were studied in animal models could be correlated with studies in humans who are suffering from Alzheimer's disease, an approach to describe risk that uses biochemical degrees-of-freedom could be pursued. Edelstein-Keshet and Spiros have developed an in silico model of senile plaques that are related to Alzheimer's disease. In this model, the biochemical interactions among TNF, IL-1B, and IL-6 are described within several important cell populations, including astrocytes, microglia, and neurons. Further, in this model soluble amyloid causes microglial chemotaxis and activates IL-1B secretion. Figure 6-9 shows the results of the Edelstein-Keshet and Spiros model simulating plaque formation and neuronal death. Establishing links between space radiation-induced changes to the changes that are described in this approach can be pursued to develop an in silico model of Alzheimer's disease that results from space radiation.
Figure 6-8.Molecular pathways important in Alzheimer's disease. From Kyoto Encyclopedia of Genes and Genomes. Copyrighted image located at http://www.genome.jp/kegg/pathway/hsa/hsa05010.html
Other interesting candidate pathways that may be important in the regulation of radiation-induced degenerative CNS changes are signal transduction pathways that are regulated by Cdk5. Cdk5 is a kinase that plays a key role in neural development; its aberrant expression and activation are associated with neurodegenerative processes, including Alzheimer's disease. This kinase is up-regulated in neural cells following ionizing radiation exposure.
Risks in context of exploration mission operational scenarios
Projections for space missions
Reliable projections of CNS risks for space missions cannot be made from the available data. Animal behavior studies indicate that HZE radiation has a high RBE, but the data are not consistent. Other uncertainties include age at exposure, radiation quality, and dose-rate effects, as well as issues regarding genetic susceptibility to CNS risk from space radiation exposure. More research is required before CNS risk can be estimated.
Potential for biological countermeasures
The goal of space radiation research is to estimate and reduce uncertainties in risk projection models and, if necessary, develop countermeasures and technologies to monitor and treat adverse outcomes to human health and performance that are relevant to space radiation for short-term and career exposures, including acute or late CNS effects from radiation exposure. The need for the development of countermeasures to CNS risks is dependent on further understanding of CNS risks, especially issues that are related to a possible dose threshold, and if so, which NASA missions would likely exceed threshold doses. As a result of animal experimental studies, antioxidants and anti-inflammatory agents are expected to be effective countermeasures for CNS risks from space radiation. Diets of blueberries and strawberries were shown to reduce CNS risks after heavy-ion exposure. Estimating the effects of diet and nutritional supplementation will be a primary goal of CNS research on countermeasures.
A diet that is rich in fruit and vegetables significantly reduces the risk of several diseases. Retinoids and vitamins A, C, and E are probably the most well-known and studied natural radioprotectors, but hormones (e.g., melatonin), glutathione, superoxide dismutase, and phytochemicals from plant extracts (including green tea and cruciferous vegetables), as well as metals (especially selenium, zinc, and copper salts) are also under study as dietary supplements for individuals, including astronauts, who have been overexposed to radiation. Antioxidants should provide reduced or no protection against the initial damage from densely ionizing radiation such as HZE nuclei, because the direct effect is more important than the free-radical-mediated indirect radiation damage at high LET. However, there is an expectation that some benefits should occur for persistent oxidative damage that is related to inflammation and immune responses. Some recent experiments suggest that, at least for acute high-dose irradiation, an efficient radioprotection by dietary supplements can be achieved, even in the case of exposure to high-LET radiation. Although there is evidence that dietary antioxidants (especially strawberries) can protect the CNS from the deleterious effects of high doses of HZE particles, because the mechanisms of biological effects are different at low dose-rates compared to those of acute irradiation, new studies for protracted exposures will be needed to understand the potential benefits of biological countermeasures.
Concern about the potential detrimental effects of antioxidants was raised by a recent meta-study of the effects of antioxidant supplements in the diet of normal subjects. The authors of this study did not find statistically significant evidence that antioxidant supplements have beneficial effects on mortality. On the contrary, they concluded that β-carotene, vitamin A, and vitamin E seem to increase the risk of death. Concerns are that the antioxidants may allow rescue of cells that still sustain DNA mutations or altered genomic methylation patterns following radiation damage to DNA, which can result in genomic instability. An approach to target damaged cells for apoptosis may be advantageous for chronic exposures to GCR.
Individual risk factors
Individual factors of potential importance are genetic factors, prior radiation exposure, and previous head injury, such as concussion. Apolipoprotein E (ApoE) has been shown to be an important and common factor in CNS responses. ApoE controls the redistribution of lipids among cells and is expressed at high levels in the brain. New studies are considering the effects of space radiation for the major isoforms of ApoE, which are encoded by distinct alleles (ε2, ε3, and ε4). The isoform ApoE ε4 has been shown to increase the risk of cognitive impairments and to lower the age for Alzheimer's disease. It is not known whether the interaction of radiation sensitivity or other individual risk factors is the same for high- and low-LET radiation. Other isoforms of ApoE confer a higher risk for other diseases. People who carry at least one copy of the ApoE ε4 allele are at increased risk for atherosclerosis, which is also suspected to be a risk increased by radiation. People who carry two copies of the ApoE ε2 allele are at risk for a condition that is known as hyperlipoproteinemia type III. It will therefore be extremely challenging to consider genetic factors in a multiple radiation-risk paradigm.
Conclusion
Reliable projections for CNS risks from space radiation exposure cannot be made at this time due to a paucity of data on the subject. Existing animal and cellular data do suggest that space radiation can produce neurological and behavioral effects; therefore, it is possible that mission operations will be impacted. The significance of these results on the morbidity to astronauts has not been elucidated, however. It is to be noted that studies, to date, have been carried out with relatively small numbers of animals (<10 per dose group); this means that testing of dose threshold effects at lower doses (<0.5 Gy) has not yet been carried out to a sufficient extent. As the problem of extrapolating space radiation effects in animals to humans will be a challenge for space radiation research, such research could become limited by the population size that is typically used in animal studies. Furthermore, the role of dose protraction has not been studied to date. An approach has not been discovered to extrapolate existing observations to possible cognitive changes, performance degradation, or late CNS effects in astronauts. Research on new approaches to risk assessment may be needed to provide the data and knowledge that will be necessary to develop risk projection models of the CNS from space radiation. A vigorous research program, which will be required to solve these problems, must rely on new approaches to risk assessment and countermeasure validation because of the absence of useful human radio-epidemiology data in this area.
See also
Health threat from cosmic rays
Spaceflight radiation carcinogenesis
References
External links
Radiation health effects
Space medicine
Spaceflight health effects
Central nervous system | Central nervous system effects from radiation exposure during spaceflight | [
"Chemistry",
"Materials_science"
] | 8,219 | [
"Radiation effects",
"Radiation health effects",
"Radioactivity"
] |
39,763,592 | https://en.wikipedia.org/wiki/Langbeinites | Langbeinites are a family of crystalline substances based on the structure of langbeinite with general formula , where M is a large univalent cation (such as potassium, rubidium, caesium, or ammonium), and M' is a small divalent cation (for example, magnesium, calcium, manganese, iron, cobalt, nickel, copper, zinc or cadmium). The sulfate group, , can be substituted by other tetrahedral anions with a double negative charge such as tetrafluoroberyllate (), selenate (), chromate (), molybdate (), or tungstates. Although monofluorophosphates are predicted, they have not been described. By redistributing charges other anions with the same shape such as phosphate also form langbeinite structures. In these the M' atom must have a greater charge to balance the extra three negative charges.
At higher temperatures the crystal structure is cubic P213. However, the crystal structure may change to lower symmetries at lower temperatures, for example, P21, P1, or P212121. Usually this temperature is well below room temperature, but in a few cases the substance must be heated to acquire the cubic structure.
Crystal structure
The crystal structures of langbeinites consist of a network of oxygen vertex-connected tetrahedral polyanions (such as sulfate) and distorted metal ion-oxygen octahedra. The unit cell contains four formula units. In the cubic form the tetrahedral anions are slightly rotated from the main crystal axes. When cooled, this rotation disappears and the tetrahedra align, resulting in lower energy as well as lower crystal symmetry.
Examples
Sulfates include dithallium dicadmium sulfate, dirubidium dicadmium sulfate, dipotassium dicadmium sulfate, dithallium manganese sulfate, and dirubidium dicalcium trisulfate.
Selenates include diammonium dimanganese selenate. A diammonium dicadmium selenate langbeinite could not be crystallised from water, but a trihydrate exists.
Chromate based langbeinites include dicaesium dimanganese chromate.
Molybdate langbeinites also exist. Potassium members are absent, as are zinc- and copper-containing solids, which all crystallize in different forms. Manganese, magnesium, cadmium and some nickel double molybdates exist as langbeinites.
Double tungstates are predicted to exist in the langbeinite form.
An example with tetrafluoroberyllate is dipotassium dimanganese tetrafluoroberyllate, K2Mn2(BeF4)3. Numerous other tetrafluoroberyllate langbeinites may also exist.
The phosphate-containing langbeinites were found in 1972, and since then a few more phosphates that also contain titanium have been found. By substituting metals, A from (K, Rb, Cs) and M from (Cr, Fe, V), other langbeinites can be made. The NASICON-type structure competes for these kinds of phosphates, so not all possibilities are langbeinites.
Several other phosphate-based substances are known. Sodium barium diiron tris-(phosphate) is yet another variation with the same structure but differently charged ions. Most phosphates of this kind of formula do not form langbeinites, but instead crystallise in the NASICON structure, with archetype NaZr2(PO4)3.
An arsenate-containing langbeinite is also known.
Properties
Physical properties
Langbeinite-family crystals can show ferroelectric or ferroelastic properties. Diammonium dicadmium sulfate, identified by Jona and Pepinsky and having a unit cell size of 10.35 Å, becomes ferroelectric when the temperature drops below 95 K. The phase transition temperature is not fixed, and can vary depending on the crystal or its history of temperature change; for example, the phase transition in diammonium dicadmium sulfate can occur between 89 and 95 K. Under pressure the highest phase transition temperature increases, with ∂T/∂P = 0.0035 degrees/bar. At 824 bars there is a triple point with yet another transition diverging at a slope of ∂T/∂P = 0.103 degrees/bar. For dipotassium dimanganese sulfate, pressure causes the transition to rise at a rate of 6.86 °C/kbar. The latent heat of the transition is 456 cal/mol.
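Between ambient pressure and the 824 bar triple point mentioned above, the highest transition temperature of diammonium dicadmium sulfate shifts approximately linearly with pressure. The following is a minimal sketch of this extrapolation, assuming the 95 K ambient-pressure transition temperature given above as the baseline:

```python
# Linear extrapolation of the phase-transition temperature with pressure,
# using the slope dT/dP = 0.0035 K/bar quoted above.  T0 is the ambient
# transition temperature (95 K, from the text); the linear form is only
# valid below the 824 bar triple point.
DT_DP = 0.0035   # K per bar
T0 = 95.0        # K

def transition_temperature(pressure_bar):
    return T0 + DT_DP * pressure_bar

print(transition_temperature(0))     # 95.0 K
print(transition_temperature(500))   # 96.75 K, still below the triple point
```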
Dithallium dicadmium sulfate was shown to be ferroelectric in 1972.
Dipotassium dicadmium sulfate is thermoluminescent with stronger outputs of light at 350 and 475 K. This light output can be boosted forty times with a trace amount of samarium. Dipotassium dimagnesium sulfate doped with dysprosium develops thermoluminescence and mechanoluminescence after being irradiated with gamma rays. Since gamma rays occur naturally, this radiation induced thermoluminescence can be used to date evaporites in which langbeinite can be a constituent.
At higher temperatures the crystals take on cubic form, whereas at the lowest temperatures they can transform to an orthorhombic crystal group. For some types there are two more phases, and as the crystal is cooled it goes from cubic, to monoclinic, to triclinic to orthorhombic. This change to higher symmetry on cooling is very unusual in solids. For some langbeinites only the cubic form is known, but that may be because it has not been studied at low enough temperatures yet. Those that have three phase transitions go through these crystallographic point groups: P213 – P21 – P1 – P212121, whereas the single phase change crystals only have P213 – P212121.
One member of the family has a transition temperature above room temperature, so that it is ferroelectric under standard conditions. Its orthorhombic cell size is a = 10.2082 Å, b = 10.2837 Å, c = 10.1661 Å.
Where the crystals change phase there is a discontinuity in the heat capacity. The transitions may show thermal hysteresis.
Different cations can be substituted; for example, the thallium and potassium analogues can form solid solutions in all ratios of thallium to potassium. Properties such as the phase transition temperature and unit cell sizes vary smoothly with the composition.
Langbeinites containing transition metals can be coloured. For example, cobalt langbeinite shows a broad absorption around 555 nm due to the cobalt 4T1g(F)→4T1g(P) electronic transition.
Enthalpies of formation (ΔfHm) of the solid compounds at 298.2 K have been measured for members of the family.
Sulfates
Fluoroberyllates
Phosphates
Phosphate silicates
Mixed anion phosphates
Vanadates
The orthovanadates have four formula units per cell, with a slightly distorted cell that has orthorhombic symmetry.
Arsenates
Selenates
Langbeinite-structured double selenates are difficult to make, perhaps because selenate ions arranged around the dication leave space for water, so hydrates crystallise from double selenate solutions. For example, when a solution of ammonium selenate and cadmium selenate is crystallized it forms diammonium dicadmium selenate trihydrate, and when heated this loses both water and ammonia to form a pyroselenate rather than a langbeinite.
Molybdates
Tungstates
Preparation
Diammonium dicadmium sulfate can be made by evaporating a solution of ammonium sulfate and cadmium sulfate. Dithallium dicadmium sulfate can be made by evaporating a water solution at 85 °C. Other substances, such as Tutton's salts or other competing compounds, may be formed during crystallisation from water.
Potassium and ammonium nickel langbeinite can be made from nickel sulfate and the other sulfates by evaporating a water solution at 85 °C.
Dipotassium dizinc sulfate can be formed into large crystals by melting zinc sulfate and potassium sulfate together at 753 K. A crystal can be slowly drawn out of the melt from a rotating crucible at about 1.2 mm every hour.
Some members can be made hydrothermally, by heating the component compounds with water and hydrochloric acid to 180 °C for eight days under pressure.
Another precursor converts to a langbeinite on heating to 200 °C.
The sol-gel method produces a gel from a solution mixture, which is then heated. A langbeinite can be made by mixing solutions of the component salts and dripping in a further reagent; the gel produced is dried out at 95 °C and then baked at various temperatures from 400 to 1100 °C.
Langbeinites crystals can be made by the Bridgman technique, Czochralski process or flux technique.
A Tutton's salt may be heat treated so that it dehydrates; for example, a langbeinite can be made from the corresponding Tutton's salt heated to 100 °C, with a side product also forming. Similarly, the ammonium vanadium Tutton's salt heated to 160 °C in a closed tube produces a langbeinite. At lower temperatures a hydroxy compound is formed.
Use
Few uses have been made of these substances. Langbeinite itself can be used as an "organic" fertiliser supplying potassium, magnesium and sulfur, all needed for plant growth. Electrooptic devices could be made from some of these crystals, particularly those with cubic-phase transition temperatures above room temperature. Research continues into this. Ferroelectric crystals could store information in the location of domain walls.
The phosphate langbeinites are insoluble, stable against heat, and can accommodate a large number of different ions, and have been considered for immobilizing unwanted radioactive waste.
Zirconium phosphate langbeinites containing rare earth metals have been investigated for use in white LEDs and plasma displays. Langbeinites that contain bismuth are photoluminescent.
In the case of iron-containing langbeinites, complex magnetic behavior may be found.
References
Crystals
Ferroelectric materials
Crystal structure types | Langbeinites | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,168 | [
"Physical phenomena",
"Ferroelectric materials",
"Double salts",
"Crystal structure types",
"Salts",
"Materials",
"Electrical phenomena",
"Crystallography",
"Crystals",
"Hysteresis",
"Matter"
] |
39,765,053 | https://en.wikipedia.org/wiki/Multi-stage%20programming | Multi-stage programming (MSP) is a variety of metaprogramming in which compilation is divided into a series of intermediate phases, allowing typesafe run-time code generation.
Statically defined types are used to verify that dynamically constructed types are valid and do not violate the type system.
In MSP languages, expressions are qualified by notation that specifies the phase at which they are to be evaluated. By allowing the specialization of a program at run-time, MSP can optimize the performance of programs: it can be considered as a form of partial evaluation that performs computations at compile-time as a trade-off to increase the speed of run-time processing.
Multi-stage programming languages support constructs similar to the Lisp construct of quotation and eval, except that scoping rules are taken into account.
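A standard staging example is specializing a power function for a known exponent. MetaOCaml expresses this with typed quotations and a run operator; the following Python sketch only imitates the idea by generating and then executing specialized source code at run time, so it illustrates run-time code generation rather than MetaOCaml's type-safe staging.

```python
# Stage 1 generates code specialized for a fixed exponent n; stage 2 runs it.
# This is an illustration only: Python's exec is untyped, unlike MetaOCaml's
# quotations, which are checked by the type system before being run.

def stage1_power(n):
    """Stage 1: generate source for x**n as an unrolled multiplication."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    return f"def power_{n}(x):\n    return {body}\n"

def stage2_run(source, name):
    """Stage 2: compile the generated code and return the resulting function."""
    namespace = {}
    exec(source, namespace)   # run-time code generation
    return namespace[name]

power5 = stage2_run(stage1_power(5), "power_5")
print(power5(2))   # 32 -- the loop over n was eliminated before run time
```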
References
External links
MetaOCaml
Programming paradigms
Type systems | Multi-stage programming | [
"Mathematics"
] | 184 | [
"Type theory",
"Mathematical structures",
"Type systems"
] |
50,302,833 | https://en.wikipedia.org/wiki/Tissue%20remodeling | Tissue remodeling is the reorganization or renovation of existing tissues. Tissue remodeling can be either physiological or pathological. The process can either change the characteristics of a tissue such as in blood vessel remodeling, or result in the dynamic equilibrium of a tissue such as in bone remodeling. Macrophages repair wounds and remodel tissue by producing extracellular matrix and proteases to modify that specific matrix.
A myocardial infarction induces tissue remodeling of the heart in a three-phase process: inflammation, proliferation, and maturation. Inflammation is characterized by massive necrosis in the infarcted area. Inflammatory cells clear the dead cells. In the proliferation phase, inflammatory cells die by apoptosis, being replaced by myofibroblasts which produce large amounts of collagen. In the maturation phase, myofibroblast numbers are reduced by apoptosis, allowing for infiltration by endothelial cells (for blood vessels) and cardiomyocytes (heart tissue cells). Usually, however, much of the tissue remodeling is pathological, resulting in a large amount of fibrous tissue. By contrast, aerobic exercise can produce beneficial cardiac tissue remodeling in those suffering from left ventricular hypertrophy.
Programmed cellular senescence contributes to beneficial tissue remodeling during embryonic development of the fetus.
In a brain stroke the penumbra area surrounding the ischemic event initially undergoes a damaging remodeling, but later transitions to a tissue remodeling characterized by repair.
Vascular remodeling refers to a compensatory change in blood vessel walls due to plaque growth. Vascular expansion is called positive remodeling, whereas vascular constriction is called negative remodeling.
Tissue remodeling occurs in adipose tissue with increased body fat. In obese subjects, this remodeling is often pathological, characterized by excessive inflammation and fibrosis.
See also
Collagen hybridizing peptide, a molecular marker to directly image tissue remodeling
References
Tissue engineering | Tissue remodeling | [
"Chemistry",
"Engineering",
"Biology"
] | 442 | [
"Biological engineering",
"Bioengineering stubs",
"Cloning",
"Chemical engineering",
"Biotechnology stubs",
"Tissue engineering",
"Medical technology stubs",
"Medical technology"
] |
50,311,265 | https://en.wikipedia.org/wiki/Ionic%20liquids%20in%20carbon%20capture | The use of ionic liquids in carbon capture is a potential application of ionic liquids as absorbents for use in carbon capture and sequestration. Ionic liquids, which are salts that exist as liquids near room temperature, are polar, nonvolatile materials that have been considered for many applications. The urgency of climate change has spurred research into their use in energy-related applications such as carbon capture and storage.
Carbon capture using absorption
Ionic liquids as solvents
Amines are the most prevalent absorbent in postcombustion carbon capture technology today. In particular, monoethanolamine (MEA) has been used at industrial scale in postcombustion carbon capture, as well as in other CO2 separations, such as "sweetening" of natural gas. However, amines are corrosive, degrade over time, and require large industrial facilities. Ionic liquids, on the other hand, have low vapor pressures. This property results from their strong Coulombic attractive forces. Vapor pressure remains low through the substance's thermal decomposition point (typically >300 °C). In principle, this low vapor pressure simplifies their use and makes them "green" alternatives. Additionally, it reduces the risk of contamination of the CO2 gas stream and of leakage into the environment.
The solubility of CO2 in ionic liquids is governed primarily by the anion, less so by the cation. The hexafluorophosphate (PF6–) and tetrafluoroborate (BF4–) anions have been shown to be especially amenable to CO2 capture.
Ionic liquids have been considered as solvents in a variety of liquid-liquid extraction processes, but never commercialized. Besides this, ionic liquids have replaced conventional volatile solvents in industrial processes such as gas absorption and extractive distillation. Additionally, ionic liquids are used as co-solutes for the generation of aqueous biphasic systems or for the purification of biomolecules.
Process
A typical CO2 absorption process consists of a feed gas, an absorption column, a stripper column, and output streams of CO2-rich gas to be sequestered, and CO2-poor gas to be released to the atmosphere. Ionic liquids could follow a similar process to amine gas treating, where the CO2 is regenerated in the stripper using higher temperature. However, ionic liquids can also be stripped using pressure swings or inert gases, reducing the process energy requirement. A current issue with ionic liquids for carbon capture is that they have a lower working capacity than amines. Task-specific ionic liquids (TSILs) that employ chemisorption and physisorption are being developed in an attempt to increase the working capacity. 1-butyl-3-propylamineimidazolium tetrafluoroborate is one example of a TSIL.
Research
In 2023, a research team composed of Chuo University, Nihon University, Kanazawa University, and the Research Institute of Innovative Technology for the Earth utilized electronic state informatics to design and synthesize ionic liquids. Subsequently, they conducted precise measurements of CO2 solubility and successfully developed ionic liquids with the highest physical absorption capacity for CO2 to date.
Drawbacks
Selectivity
In carbon capture an effective absorbent is one which demonstrates a high selectivity, meaning that CO2 will preferentially dissolve in the absorbent compared to other gaseous components. In post-combustion carbon capture the most salient separation is CO2 from N2, whereas in pre-combustion separation CO is primarily separated from H2. Other components and impurities may be present in the flue gas, such as hydrocarbons, SO2, or H2S. Before selecting the appropriate solvent to use for carbon capture it is critical to ensure that at the given process conditions and flue gas composition CO2 maintains a much higher solubility in the solvent than the other species in the flue gas and thus has a high selectivity.
The selectivity of CO2 in ionic liquids has been widely studied by researchers. Generally, polar molecules and molecules with an electric quadrupole moment are highly soluble in ionic liquids. It has been found that at high process temperatures the solubility of CO2 decreases, while the solubility of other species, such as CH4 and H2, may increase with increasing temperature, thereby reducing the effectiveness of the solvent. However, the solubility of N2 in ionic liquids is relatively low and does not increase with increasing temperature, so the use of ionic liquids in post-combustion carbon capture may be appropriate due to the consistently high CO2/N2 selectivity. The presence of common flue gas impurities such as H2S severely inhibits CO2 solubility in ionic liquids and should be carefully considered by engineers when choosing an appropriate solvent for a particular flue gas.
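At low pressures the ideal CO2/N2 selectivity of a candidate solvent can be estimated as the ratio of the two gases' Henry's law constants. A minimal sketch follows; the constants used are hypothetical placeholder values, not measured data for any particular ionic liquid.

```python
# Ideal selectivity S(CO2/N2) = H_N2 / H_CO2: a larger Henry's constant
# means lower solubility.  The values below are assumed placeholders.
henry_bar = {"CO2": 40.0, "N2": 1600.0}   # assumed Henry's constants, bar

def ideal_selectivity(gas_a, gas_b):
    """Low-pressure selectivity of gas_a over gas_b."""
    return henry_bar[gas_b] / henry_bar[gas_a]

print(ideal_selectivity("CO2", "N2"))   # 40.0 with these assumed values
```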
Viscosity
A primary concern with the use of ionic liquids for carbon capture is their high viscosity compared with that of commercial solvents. Ionic liquids which employ chemisorption depend on a chemical reaction between solute and solvent for CO2 separation. The rate of this reaction is dependent on the diffusivity of CO2 in the solvent and is thus inversely proportional to viscosity. The self-diffusivity of CO2 in ionic liquids is generally of the order of 10−10 m2/s, approximately an order of magnitude less than in similarly performing commercial solvents used for CO2 capture. The viscosity of an ionic liquid can vary significantly according to the type of anion and cation, the alkyl chain length, and the amount of water or other impurities in the solvent. Because these solvents can be “designed” and these properties chosen, developing ionic liquids with lowered viscosities is a current topic of research. Supported ionic liquid phases (SILPs) are one proposed solution to this problem.
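The inverse dependence of diffusivity on viscosity noted above can be illustrated with the Stokes-Einstein relation. In the sketch below, the effective CO2 radius and the viscosities are rough assumed values, not measured properties of any specific solvent.

```python
import math

# Stokes-Einstein estimate: D = k_B * T / (6 * pi * eta * r).
K_B = 1.380649e-23       # J/K
T = 313.0                # K
R_CO2 = 1.7e-10          # m, assumed effective hydrodynamic radius of CO2

def diffusivity(viscosity_pa_s):
    return K_B * T / (6 * math.pi * viscosity_pa_s * R_CO2)

print(diffusivity(0.001))   # ~1e-9 m^2/s for a water-like 1 mPa·s solvent
print(diffusivity(0.05))    # ~3e-11 m^2/s for a 50 mPa·s ionic liquid
```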
Tunability
As required for all separation techniques, ionic liquids exhibit selectivity towards one or more of the phases of a mixture. 1-Butyl-3-methylimidazolium hexafluorophosphate (BMIM-PF6) is a room-temperature ionic liquid that was identified early on as a viable substitute for volatile organic solvents in liquid-liquid separations. Other [PF6]- and [BF4]- containing ionic liquids have been studied for their CO2 absorption properties, as well as 1-ethyl-3-methylimidazolium (EMIM) and unconventional cations like trihexyl(tetradecyl) phosphonium ([P66614]). Selection of different anion and cation combinations in ionic liquids affects their selectivity and physical properties. Additionally, the organic cations in ionic liquids can be "tuned" by changing chain lengths or by substituting radicals. Finally, ionic liquids can be mixed with other ionic liquids, water, or amines to achieve different properties in terms of absorption capacity and heat of absorption. This tunability has led some to call ionic liquids "designer solvents." 1-butyl-3-propylamineimidazolium tetrafluoroborate was specifically developed for CO2 capture; it is designed to employ chemisorption to absorb CO2 and maintain efficiency under repeated absorption/regeneration cycles. Other ionic liquids have been simulated or experimentally tested for potential use as CO2 absorbents.
Proposed industrial applications
Currently, CO2 capture uses mostly amine-based absorption technologies, which are energy intensive and solvent intensive. Volatile organic compounds alone in chemical processes represent a multibillion-dollar industry. Therefore, ionic liquids offer an alternative that could prove attractive should their other deficiencies be addressed.
During the capture process, the anion and cation play a crucial role in the dissolution of CO2. Spectroscopic results suggest a favorable interaction between the anion and CO2, wherein CO2 molecules preferentially attach to the anion. Furthermore, intermolecular forces, such as hydrogen bonds, van der Waals bonds, and electrostatic attraction, contribute to the solubility of CO2 in ionic liquids. This makes ionic liquids promising candidates for CO2 capture because the solubility of CO2 can be modeled accurately by regular solution theory (RST), which reduces the operational cost of developing more sophisticated models to monitor the capture process.
References
Further reading
Carbon capture and storage
Ions
Ionic liquids | Ionic liquids in carbon capture | [
"Physics",
"Chemistry",
"Engineering"
] | 1,737 | [
"Geoengineering",
"Carbon capture and storage",
"Ions",
"Matter"
] |
50,313,995 | https://en.wikipedia.org/wiki/QFabric | QFabric is a proprietary technology proposed by Juniper Networks. In contrary to open standards such as OpenFlow, QFabric is regarded as a vendor proprietary approach.
Its goal is to simplify the traditional tree architecture of L2/L3 switches into a single tier with any-to-any connectivity.
Competing Technologies
Competing technologies to QFabric include IEEE 802.1aq, MC-LAG, VXLAN, FabricPath, Virtual Cluster Switching (VCS), and the IETF TRILL standard.
System Components
QFabric System Components consists of:
QFabric Nodes - fixed-configuration edge platforms that connect to networked data center devices
QFabric Interconnect - a high-speed transport device that connects all QFabric Nodes in a full-mesh topology
QFabric Director - which provides control and management services for the full QFabric System
Performance Improvement
For data center architecture, QFabric creates a single logical switch that connects the entire data center rather than tiers of multiple access, aggregation and core switches. This can improve performance because, instead of passing through multiple tiers of switches as in a traditional network, packets traverse the infrastructure in a single hop, which reduces delay significantly.
For example, a typical switch can handle 200 ports, while QFabric can scale up to 6000 ports with lossless 10Gbps speed.
In a QFX3000-M QFabric System, which supports up to 768 10GbE ports, the average end-to-end latency can be as short as 3 microseconds. In QFX3000-G QFabric System, although it supports up to 6,144 10GbE ports, by connecting all nodes in a full-mesh topology, it can achieve an average port-to-port latency of 5 microseconds.
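The benefit of collapsing the tiers can be illustrated with a rough latency budget. In the sketch below the per-hop latency is an assumed illustrative figure, not a measured QFabric or competitor value.

```python
# A traditional three-tier path traverses several switches
# (access -> aggregation -> core -> aggregation -> access), while a fabric
# behaves as a single logical switch.
PER_HOP_US = 2.0   # assumed switching latency per hop, in microseconds

def three_tier_latency(hops=5):
    return hops * PER_HOP_US

def single_fabric_latency():
    return PER_HOP_US   # one logical hop through the fabric

print(three_tier_latency())      # 10.0 us
print(single_fabric_latency())   #  2.0 us
```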
Network topology
References | QFabric | [
"Mathematics"
] | 401 | [
"Network topology",
"Topology"
] |
38,318,077 | https://en.wikipedia.org/wiki/EPS%20Europhysics%20Prize | The EPS CMD Europhysics Prize is awarded since 1975 by the Condensed Matter Division of the European Physical Society, in recognition of recent work (completed in the 5 years preceding the attribution of the award) by one or more individuals, for scientific excellence in the area of condensed matter physics. It is one of Europe's most prestigious prizes in the field of condensed matter physics. Several laureates of the EPS CMD Europhysics Prize also received a Nobel Prize in Physics or Chemistry.
Laureates
Source: European Physical Society
2024: Andrea Cavalleri for his pioneering studies of photo-induced emergent phases of quantum materials: from enhanced superconductivity to the control of materials topology.
2023: Claudia Felser and Andrei Bernevig for seminal contributions to the classification, prediction, and discovery of novel topological quantum materials.
2022: Agnès Barthélémy, Manuel Bibes, Ramamoorthy Ramesh and Nicola Spaldin for seminal contributions to the physics and applications of multiferroic and magnetoelectric materials.
2020: Jörg Wrachtrup - Pioneering studies on quantum coherence in solid-state systems and their applications for sensing, and, in particular, for major breakthroughs in the study of the optical and spin properties of nitrogen vacancy centers in diamond.
2018: Lucio Braicovich and Giacomo Claudio Ghiringhelli - The development and scientific exploration of high-resolution Resonant Inelastic X-ray Scattering (RIXS).
2016: , Alexei N. Bogdanov, Christian Pfleiderer, , Ashvin Vishwanath - Theoretical prediction, experimental discovery and theoretical analysis of a magnetic skyrmion phase in MnSi, a new state of matter.
2014: Harold Y. Hwang, Jochen Mannhart and - for the discovery and investigation of electron liquids at oxide interfaces
2012: Steven T. Bramwell, Claudio Castelnovo, Santiago Grigera, Roderich Moessner, Shivaji Sondhi and Alan Tennant - Prediction and experimental observation of magnetic monopoles in spin ice
2010: Hartmut Buhmann, Charles Kane, Eugene J. Mele, Laurens W. Molenkamp and Shoucheng Zhang - Theoretical prediction and the experimental observation of the quantum spin Hall effect and topological insulators
2008: Andre Geim and Kostya Novoselov - Discovering and isolating a single free-standing atomic layer of carbon (graphene) and elucidating its remarkable electronic properties
2006: Antoine Georges, Gabriel Kotliar, , Dieter Vollhardt - Development and application of the dynamical mean field theory
2005: David Awschalom, Tomasz Dietl, Hideo Ohno - For their work on ferromagnetic semiconductors and spintronics
2004: Michel Devoret, Daniel Estève, Johan Mooij, Yasunobu Nakamura - Realisation and demonstration of the quantum bit concept based on superconducting circuits
2003: Heino Finkelmann, Mark Warner - Discovery of a new class of materials called liquid crystal elastomers
2002: , Jonathan Friedman, Dante Gatteschi, Roberta Sessoli, - Development of the field of quantum dynamics of nanomagnets, including the discovery of quantum tunnelling and interference in dynamics of magnetization
2001: Sumio Iijima, Cees Dekker, Thomas W. Ebbesen, Paul L. McEuen - Discovery of multi- and single-walled carbon nanotubes and pioneering studies of their fundamental mechanical and electronic properties
2000: Paolo Carra, Gerrit van der Laan, Gisela Schütz - Pioneering work in establishing the field of magnetic x-ray dichroism
1999: , Michael Reznikov - For developing novel techniques for noise measurements in solids leading to experimental observation of carriers with a fractional charge
1998: Thomas Maurice Rice - Original contributions to the theory of strongly correlated electron systems
1997: Albert Fert, Peter Grünberg, Stuart Parkin - Discovery and contribution to the understanding of the giant magneto-resistance effect in transition-metal multilayers and demonstrations of its potential for technological applications
1996: Richard Friend - Pioneering work on semiconducting organic polymer materials and demonstration of an organic light emitting diode
1995: Yakir Aharonov, Michael V. Berry - Introduction of fundamental concepts in physics that have profound impact on condensed matter science
1994: Donald R. Huffman, Wolfgang Krätschmer, Harry Kroto, Richard Smalley - New molecular forms of carbon and their production in the solid state
1993: Boris L. Altshuler, Arkadii G. Aronov, David E. Khmelnitskii, Anatoly I. Larkin, Boris Spivak - Theoretical work on coherent phenomena in disordered conductors
1992: Gerhard Ertl, Harald Ibach, J. Peter Toennies - Pioneering studies of surface structures, dynamics and reactions through the development of novel experimental methods
1991: Klaus Bechgaard, Denis Jérome - Synthesis of a new class of organic metals and the discovery of their superconductivity and novel magnetic properties
1990: Roberto Car, Michele Parrinello - A novel and powerful method for the ab-initio calculation of molecular dynamics
1989: Frank Steglich, Hans-Rudolf Ott, Gilbert G. Lonzarich - Pioneering investigations of heavy-fermion metals
1988: J. Georg Bednorz, K. Alex Müller - Discovery of high-temperature superconductivity
1987: Igor K. Yanson - Point-contact spectroscopy in metals
1986: Ferenc Mezei - Neutron spin echo spectroscopy
1985: , Michael Pepper - The experimental study of low dimensional physics
1984: Gerd Binnig, Heinrich Rohrer - Scanning tunnelling microscope
1983: Isaac F. Silvera - Atomic and solid hydrogen
1982: Klaus von Klitzing - Experimental demonstration of the quantized Hall resistance
1981: No award
1980: O. Krogh Andersen, Andries Rinse Miedema - Original methods for the calculation of the electronic properties of materials
1979: Eric A. Ash, Jeffrey H. Collins, Yuri V. Gulaev, K.A. Ingebrigtsen, E.G.S. Paige - The physical principles of surface acoustic wave devices
1978: Zhores Alferov - Heterojunctions
1977: Walter Eric Spear - Amorphous silicon devices
1976: Wolfgang Helfrich - Contributions to the physics of liquid crystals
1975: Victor S. Bagaev, Leonid V. Keldysh, Jaroslav E. Pokrovsky, Michel Voos - The condensation of excitons
See also
List of physics awards
References
Awards of the European Physical Society
Condensed matter physics awards | EPS Europhysics Prize | [
"Physics",
"Materials_science"
] | 1,383 | [
"Condensed matter physics awards",
"Condensed matter physics"
] |
38,323,559 | https://en.wikipedia.org/wiki/Order-8%20triangular%20tiling | In geometry, the order-8 triangular tiling is a regular tiling of the hyperbolic plane. It is represented by Schläfli symbol of {3,8}, having eight regular triangles around each vertex.
Uniform colorings
The half symmetry [1+,8,3] = [(4,3,3)] can be shown with alternating two colors of triangles:
Symmetry
From [(4,4,4)] symmetry, there are 15 small index subgroups (7 unique) by mirror removal and alternation operators. A mirror can be removed if its branch orders are all even, and removing it cuts the neighboring branch orders in half. Removing two mirrors leaves a half-order gyration point where the removed mirrors met. In these images fundamental domains are alternately colored black and white, and mirrors lie on the boundaries between colors. Adding 3 bisecting mirrors across each fundamental domain creates 832 symmetry. The index-8 subgroup, [(1+,4,1+,4,1+,4)] (222222), is the commutator subgroup of [(4,4,4)].
A larger subgroup, [(4,4,4*)], index 8, is constructed as (2*2222); with its gyration points removed, it becomes (*22222222).
The symmetry can be doubled to 842 symmetry by adding a bisecting mirror across the fundamental domains. The symmetry can be extended by 6, as 832 symmetry, by 3 bisecting mirrors per domain.
Related polyhedra and tilings
From a Wythoff construction there are ten hyperbolic uniform tilings that can be based from the regular octagonal and order-8 triangular tilings.
Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 10 forms.
It can also be generated from the (4 3 3) hyperbolic tilings:
See also
Order-8 tetrahedral honeycomb
Tilings of regular polygons
List of uniform planar tilings
List of regular polytopes
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Isohedral tilings
Order-8 tilings
Regular tilings
Triangular tilings | Order-8 triangular tiling | [
"Physics"
] | 533 | [
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Isohedral tilings",
"Symmetry"
] |
38,323,622 | https://en.wikipedia.org/wiki/Snub%20trioctagonal%20tiling | In geometry, the order-3 snub octagonal tiling is a semiregular tiling of the hyperbolic plane. There are four triangles, one octagon on each vertex. It has Schläfli symbol of sr{8,3}.
Images
Drawn in chiral pairs, with edges missing between black triangles:
Related polyhedra and tilings
This semiregular tiling is a member of a sequence of snubbed polyhedra and tilings with vertex figure (3.3.3.3.n) and Coxeter–Dynkin diagram . These figures and their duals have (n32) rotational symmetry, being in the Euclidean plane for n=6, and hyperbolic plane for any higher n. The series can be considered to begin with n=2, with one set of faces degenerated into digons.
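The place of each member of this sequence can be checked from the angle sum at a vertex if all faces were regular Euclidean polygons: exactly 360° corresponds to the Euclidean case n = 6, more than 360° means the faces only fit in the hyperbolic plane, and less than 360° gives a spherical (polyhedral) member. A minimal sketch:

```python
# Vertex figure (3.3.3.3.n): four equilateral triangles plus one regular n-gon.
def euclidean_angle_sum(n):
    triangle = 60.0
    n_gon = 180.0 * (n - 2) / n
    return 4 * triangle + n_gon

for n in (4, 5, 6, 7, 8):
    total = euclidean_angle_sum(n)
    if abs(total - 360.0) < 1e-9:
        kind = "Euclidean"
    elif total > 360.0:
        kind = "hyperbolic"
    else:
        kind = "spherical"
    print(n, round(total, 2), kind)   # n = 8 gives 375.0 -> hyperbolic
```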
From a Wythoff construction there are ten hyperbolic uniform tilings that can be based from the regular octagonal tiling.
Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 10 forms.
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
See also
Snub hexagonal tiling
Floret pentagonal tiling
Order-3 heptagonal tiling
Tilings of regular polygons
List of uniform planar tilings
Kagome lattice
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Chiral figures
Hyperbolic tilings
Isogonal tilings
Semiregular tilings
Snub tilings | Snub trioctagonal tiling | [
"Physics",
"Chemistry"
] | 378 | [
"Snub tilings",
"Semiregular tilings",
"Isogonal tilings",
"Tessellation",
"Chirality",
"Hyperbolic tilings",
"Chiral figures",
"Symmetry"
] |
38,324,409 | https://en.wikipedia.org/wiki/DNA%20digital%20data%20storage | DNA digital data storage is the process of encoding and decoding binary data to and from synthesized strands of DNA.
While DNA as a storage medium has enormous potential because of its high storage density, its practical use is currently severely limited because of its high cost and very slow read and write times.
In June 2019, scientists reported that all 16 GB of text from the English Wikipedia had been encoded into synthetic DNA. In 2021, scientists reported that a custom DNA data writer had been developed that was capable of writing data into DNA at 1 Mbps.
Encoding methods
Many methods for encoding data in DNA are possible. The optimal methods are those that make economical use of DNA and protect against errors. If the message DNA is intended to be stored for a long period of time, for example, 1,000 years, it is also helpful if the sequence is obviously artificial and the reading frame is easy to identify.
Encoding text
Several simple methods for encoding text have been proposed. Most of these involve translating each letter into a corresponding "codon", consisting of a unique small sequence of nucleotides in a lookup table. Some examples of these encoding schemes include Huffman codes, comma codes, and alternating codes.
Encoding arbitrary data
To encode arbitrary data in DNA, the data is typically first converted into ternary (base 3) data rather than binary (base 2) data. Each digit (or "trit") is then converted to a nucleotide using a lookup table. To prevent homopolymers (repeating nucleotides), which can cause problems with accurate sequencing, the result of the lookup also depends on the preceding nucleotide. Using the example lookup table below, if the previous nucleotide in the sequence is T (thymine), and the trit is 2, the next nucleotide will be G (guanine).
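The trit-to-nucleotide step can be sketched in a few lines of Python. The rotating lookup table below is a plausible reconstruction, chosen only to be consistent with the worked example above (previous nucleotide T and trit 2 giving G) and with the no-homopolymer rule; the table actually published for this scheme may differ in its details.

```python
# Each row lists, for a given previous nucleotide, the bases used for
# trits 0, 1 and 2.  No row contains the previous nucleotide itself,
# so the same base is never written twice in a row.
NEXT = {
    "A": "CGT",
    "C": "GTA",
    "G": "TAC",
    "T": "ACG",
}

def to_trits(data):
    """Convert bytes to base-3 digits, most significant trit first."""
    n = int.from_bytes(data, "big")
    trits = []
    while n:
        n, r = divmod(n, 3)
        trits.append(r)
    return list(reversed(trits)) or [0]

def trits_to_dna(trits, prev="A"):
    out = []
    for t in trits:
        prev = NEXT[prev][t]
        out.append(prev)
    return "".join(out)

print(trits_to_dna(to_trits(b"Hi")))
```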
Various systems may be incorporated to partition and address the data, as well as to protect it from errors. One approach to error correction is to regularly intersperse synchronization nucleotides between the information-encoding nucleotides. These synchronization nucleotides can act as scaffolds when reconstructing the sequence from multiple overlapping strands.
In vivo
The genetic code within living organisms can potentially be co-opted to store information. Furthermore synthetic biology can be used to engineer cells with "molecular recorders" to allow the storage and retrieval of information stored in the cell's genetic material. CRISPR gene editing can also be used to insert artificial DNA sequences into the genome of the cell. For encoding developmental lineage data (molecular flight recorder), roughly 30 trillion cell nuclei per mouse * 60 recording sites per nucleus * 7-15 bits per site yields about 2 TeraBytes per mouse written (but only very selectively read).
In-vivo light-based direct image and data recording
A proof-of-concept in-vivo direct DNA data recording system was demonstrated through incorporation of optogenetically regulated recombinases as part of an engineered "molecular recorder", which allows for direct encoding of light-based stimuli into engineered E. coli cells. This approach can also be parallelized to store and write text or data in 8-bit form through the use of physically separated individual cell cultures in cell-culture plates.
This approach leverages the editing of a "recorder plasmid" by the light-regulated recombinases, allowing for identification of cell populations exposed to different stimuli. This approach allows for the physical stimulus to be directly encoded into the "recorder plasmid" through recombinase action. Unlike other approaches, this approach does not require manual design, insertion and cloning of artificial sequences to record the data into the genetic code. In this recording process, each individual cell population in each cell-culture plate culture well can be treated as a digital "bit", functioning as a biological transistor capable of recording a single bit of data.
History
The idea of DNA digital data storage dates back to 1959, when the physicist Richard P. Feynman, in "There's Plenty of Room at the Bottom: An Invitation to Enter a New Field of Physics", outlined the general prospects for the creation of artificial objects similar to objects of the microcosm (including biological ones) and having similar or even more extensive capabilities. In 1964–65, the Soviet physicist Mikhail Samoilovich Neiman published three articles about microminiaturization in electronics at the molecular-atomic level, which independently presented general considerations and some calculations regarding the possibility of recording, storing, and retrieving information on synthesized DNA and RNA molecules. After the publication of the first of M. S. Neiman's papers, and after the editor had received the manuscript of his second paper (on January 8, 1964, as indicated in that paper), an interview with the cybernetician Norbert Wiener was published. Wiener expressed ideas about the miniaturization of computer memory that were close to those proposed independently by M. S. Neiman, and Neiman mentioned Wiener's ideas in the third of his papers. This story is described in detail.
One of the earliest uses of DNA storage occurred in a 1988 collaboration between artist Joe Davis and researchers from Harvard University. The image, stored in a DNA sequence in E.coli, was organized in a 5 x 7 matrix that, once decoded, formed a picture of an ancient Germanic rune representing life and the female Earth. In the matrix, ones corresponded to dark pixels while zeros corresponded to light pixels.
In 2007 a device was created at the University of Arizona using addressing molecules to encode mismatch sites within a DNA strand. These mismatches were then able to be read out by performing a restriction digest, thereby recovering the data.
In 2011, George Church, Sri Kosuri, and Yuan Gao carried out an experiment that would encode a 659 kb book that was co-authored by Church. To do this, the research team did a two-to-one correspondence where a binary zero was represented by either an adenine or cytosine and a binary one was represented by a guanine or thymine. After examination, 22 errors were found in the DNA.
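The two-to-one correspondence described above can be sketched as follows. Choosing randomly between the two candidate bases for each bit is an assumption made only for illustration (the published scheme's choice rule is not reproduced here); the sketch also shows that decoding stays unambiguous because the two sets of bases do not overlap.

```python
import random

# 0 -> A or C, 1 -> G or T, as described above.
ZERO, ONE = "AC", "GT"
_rng = random.Random(0)

def encode_bits(bits):
    return "".join(_rng.choice(ZERO if b == "0" else ONE) for b in bits)

def decode_dna(seq):
    return "".join("0" if base in ZERO else "1" for base in seq)

bits = "0110100001101001"          # "hi" in ASCII
dna = encode_bits(bits)
assert decode_dna(dna) == bits
print(dna)
```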
In 2012, George Church and colleagues at Harvard University published an article in which DNA was encoded with digital information that included an HTML draft of a 53,400 word book written by the lead researcher, eleven JPEG images and one JavaScript program. Multiple copies for redundancy were added and 5.5 petabits can be stored in each cubic millimeter of DNA. The researchers used a simple code where bits were mapped one-to-one with bases, which had the shortcoming that it led to long runs of the same base, the sequencing of which is error-prone. This result showed that besides its other functions, DNA can also be another type of storage medium such as hard disk drives and magnetic tapes.
In 2013, an article led by researchers from the European Bioinformatics Institute (EBI) and submitted at around the same time as the paper of Church and colleagues detailed the storage, retrieval, and reproduction of over five million bits of data. All the DNA files reproduced the information with an accuracy between 99.99% and 100%. The main innovations in this research were the use of an error-correcting encoding scheme to ensure the extremely low data-loss rate, as well as the idea of encoding the data in a series of overlapping short oligonucleotides identifiable through a sequence-based indexing scheme. Also, the sequences of the individual strands of DNA overlapped in such a way that each region of data was repeated four times to avoid errors. Two of these four strands were constructed backwards, also with the goal of eliminating errors. The costs per megabyte were estimated at $12,400 to encode data and $220 for retrieval. However, it was noted that the exponential decrease in DNA synthesis and sequencing costs, if it continues into the future, should make the technology cost-effective for long-term data storage by 2023.
In 2013, a software called DNACloud was developed by Manish K. Gupta and co-workers to encode computer files to their DNA representation. It implements a memory efficiency version of the algorithm proposed by Goldman et al. to encode (and decode) data to DNA (.dnac files).
The long-term stability of data encoded in DNA was reported in February 2015, in an article by researchers from ETH Zurich. The team added redundancy via Reed–Solomon error correction coding and by encapsulating the DNA within silica glass spheres via Sol-gel chemistry.
In 2016, research by Church and Technicolor Research and Innovation was published in which 22 MB of an MPEG-compressed movie sequence were stored in and recovered from DNA. The recovery of the sequence was found to have zero errors.
In March 2017, Yaniv Erlich and Dina Zielinski of Columbia University and the New York Genome Center published a method known as DNA Fountain that stored data at a density of 215 petabytes per gram of DNA. The technique approaches the Shannon capacity of DNA storage, achieving 85% of the theoretical limit. The method was not ready for large-scale use, as it costs $7000 to synthesize 2 megabytes of data and another $2000 to read it.
In March 2018, the University of Washington and Microsoft published results demonstrating storage and retrieval of approximately 200 MB of data. The research also proposed and evaluated a method for random access of data items stored in DNA. In March 2019, the same team announced they had demonstrated a fully automated system to encode and decode data in DNA.
Research published by Eurecom and Imperial College in January 2019, demonstrated the ability to store structured data in synthetic DNA. The research showed how to encode structured or, more specifically, relational data in synthetic DNA and also demonstrated how to perform data processing operations (similar to SQL) directly on the DNA as chemical processes.
In April 2019, due to a collaboration with TurboBeads Labs in Switzerland, Mezzanine by Massive Attack was encoded into synthetic DNA, making it the first album to be stored in this way.
In June 2019, scientists reported that all 16 GB of Wikipedia have been encoded into synthetic DNA. In 2021, CATALOG reported that they had developed a custom DNA writer capable of writing data at 1 Mbps into DNA.
The first article describing data storage on native DNA sequences via enzymatic nicking was published in April 2020. In the paper, scientists demonstrated a new method of recording information in the DNA backbone which enables bit-wise random access and in-memory computing.
In 2021, a research team at Newcastle University led by N. Krasnogor implemented a stack data structure using DNA, allowing for last-in, first-out (LIFO) data recording and retrieval. Their approach used hybridization and strand displacement to record DNA signals in DNA polymers, which were then released in reverse order. The study demonstrated that data structure-like operations are possible in the molecular realm. The researchers also explored the limitations and future improvements for dynamic DNA data structures, highlighting the potential for DNA-based computational systems.
Davos Bitcoin Challenge
On January 21, 2015, Nick Goldman from the European Bioinformatics Institute (EBI), one of the original authors of the 2013 Nature paper, announced the Davos Bitcoin Challenge at the World Economic Forum annual meeting in Davos. During his presentation, DNA tubes were handed out to the audience, with the message that each tube contained the private key of exactly one bitcoin, all coded in DNA. The first one to sequence and decode the DNA could claim the bitcoin and win the challenge. The challenge was set for three years and would close if nobody claimed the prize before January 21, 2018.
Almost three years later, on January 19, 2018, the EBI announced that a Belgian PhD student, Sander Wuyts, of the University of Antwerp and Vrije Universiteit Brussel, was the first to complete the challenge. Along with the instructions on how to claim the bitcoin (stored as a plain text and a PDF file), the logo of the EBI, the logo of the company that printed the DNA (CustomArray), and a sketch of James Joyce were retrieved from the DNA.
The Lunar Library
The Lunar Library, launched on the Beresheet lander by the Arch Mission Foundation, carries information encoded in DNA, which includes 20 famous books and 10,000 images. DNA was chosen as one of the storage media because of its longevity; the Arch Mission Foundation suggests that it can still be read after billions of years.
The lander crashed on 11 April 2019 and was lost.
DNA of things
The concept of the DNA of Things (DoT) was introduced in 2019 by a team of researchers from Israel and Switzerland, including Yaniv Erlich and Robert Grass. DoT encodes digital data into DNA molecules, which are then embedded into objects. This makes it possible to create objects that carry their own blueprint, similar to biological organisms. In contrast to the Internet of Things, which is a system of interrelated computing devices, DoT creates objects that are independent storage objects, completely off-grid.
As a proof of concept for DoT, the researchers 3D-printed a Stanford bunny that contains its own blueprint in the plastic filament used for printing. By clipping off a tiny piece of the bunny's ear, they were able to read out the blueprint, multiply it, and produce a next generation of bunnies. In addition, the ability of DoT to serve steganographic purposes was shown by producing visually indistinguishable lenses that contain a YouTube video integrated into the material.
See also
DNA computing
DNA nanotechnology
Nanobiotechnology
Natural computing
Plant-based digital data storage
5D optical data storage
References
Further reading
DNA Sequencing Caught in Deluge of Data. The New York Times (NYTimes.com).
DNA
Molecular biology
Storage media
Computational biology | DNA digital data storage | [
"Chemistry",
"Biology"
] | 2,879 | [
"Biochemistry",
"Computational biology",
"Molecular biology"
] |
38,324,933 | https://en.wikipedia.org/wiki/Applications%20of%20nanotechnology | The applications of nanotechnology, commonly incorporate industrial, medicinal, and energy uses. These include more durable construction materials, therapeutic drug delivery, and higher density hydrogen fuel cells that are environmentally friendly. Being that nanoparticles and nanodevices are highly versatile through modification of their physiochemical properties, they have found uses in nanoscale electronics, cancer treatments, vaccines, hydrogen fuel cells, and nanographene batteries.
Nanotechnology's use of smaller-sized materials allows molecules and substances to be adjusted at the nanoscale, which can further enhance the mechanical properties of materials or grant access to areas of the body that are otherwise physically difficult to reach.
Industrial applications
Potential applications of carbon nanotubes
Nanotubes can help with cancer treatment. They have been shown to be effective tumor killers in those with kidney or breast cancer. Multi-walled nanotubes are injected into a tumor and treated with a special type of laser that generates near-infrared radiation for around half a minute. These nanotubes vibrate in response to the laser, and heat is generated. When the tumor has been heated enough, the tumor cells begin to die. Processes like this one have been able to shrink kidney tumors by up to four-fifths.
Ultrablack materials, made up of “forests” of carbon nanotubes, are important in space, where there is more light than is convenient to work with. Ultrablack material can be applied to camera and telescope systems to decrease the amount of light and allow for more detailed images to be captured.
Nanotubes show promise in treating cardiovascular disease. They could play an important role in blood vessel cleanup. Theoretically, nanotubes with SHP1i molecules attached to them would signal macrophages to clean up plaque in blood vessels without destroying any healthy tissue. Researchers have tested this type of modified nanotube in mice with high amounts of plaque buildup; the mice that received the nanotube treatment showed statistically significant reductions in plaque buildup compared to the mice in the placebo group. Further research is needed for this treatment to be given to humans.
Nanotubes may be used in body armor for future soldiers. This type of armor would be very strong and highly effective at shielding soldiers' bodies from projectiles and electromagnetic radiation. It is also possible that the nanotubes in the armor could play a role in monitoring soldiers' condition.
Construction
Nanotechnology's ability to observe and control the material world at a nanoscopic level can offer great potential for construction development. Nanotechnology can help improve the strength and durability of construction materials, including cement, steel, wood, and glass.
By applying nanotechnology, materials can gain a range of new properties. The discovery of a highly ordered crystal nanostructure of amorphous C-S-H gel and the application of photocatalyst and coating technology result in a new generation of materials with properties like water resistance, self-cleaning property, wear resistance, and corrosion protection. Among the new nanoengineered polymers, there are highly efficient superplasticizers for concrete and high-strength fibers with exceptional energy absorbing capacity.
Experts believe that nanotechnology remains in its exploration stage and has potential in improving conventional materials such as steel. Understanding the composite nanostructures of such materials and exploring nanomaterials' different applications may lead to the development of new materials with expanded properties, such as electrical conductivity as well as temperature-, moisture- and stress-sensing abilities.
Due to the complexity of the equipment required, nanomaterials have a high cost compared to conventional materials, meaning they are not likely to feature in high-volume building materials. In special cases, nanotechnology can help reduce costs for complicated problems, but in most cases the traditional method of construction remains more cost-efficient. With the improvement of manufacturing technologies, the cost of applying nanotechnology to construction has been decreasing over time and is expected to decrease further.
Nanoelectronics
Nanoelectronics refers to the application of nanotechnology to electronic components. Nanoelectronics aims to improve the performance of electronic devices, such as displays, and to reduce their power consumption while shrinking them. In this way, nanoelectronics can help meet the goal set out in Moore's law, which predicts the continued trend of scaling down the size of integrated circuits.
Nanoelectronics is a multidisciplinary area composed of quantum physics, device analysis, system integration, and circuit analysis. Since the de Broglie wavelength of charge carriers in semiconductors may be on the order of 100 nm, quantum effects at this length scale become essential. The different device physics and novel quantum effects of electrons can lead to exciting applications.
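To see why quantum effects matter at these dimensions, one can estimate the carrier de Broglie wavelength. The Python sketch below uses illustrative assumptions only (an effective mass of 0.07 electron masses and a kinetic energy of about 25 meV, roughly the thermal energy at room temperature in a GaAs-like semiconductor):

import math

h = 6.626e-34        # Planck constant, J*s
m_e = 9.109e-31      # electron rest mass, kg
eV = 1.602e-19       # joules per electronvolt

m_eff = 0.07 * m_e   # assumed conduction-electron effective mass
E = 0.025 * eV       # assumed kinetic energy (~ room-temperature thermal energy)

wavelength = h / math.sqrt(2 * m_eff * E)    # de Broglie relation: lambda = h / p
print(f"de Broglie wavelength ~ {wavelength * 1e9:.0f} nm")

With these numbers the wavelength comes out at a few tens of nanometres, comparable to modern device feature sizes, which is why transport in such devices must be treated quantum mechanically.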
Health applications
Nanobiotechnology
The terms nanobiotechnology and bionanotechnology refer to the combination of ideas, techniques, and sciences of biology and nanotechnology. More specifically, nanobiotechnology refers to the application of nanoscale objects for biotechnology while bionanotechnology refers to the use of biological components in nanotechnology.
The most prominent intersection of nanotechnology and biology is the field of nanomedicine, where nanoparticles and nanodevices have many clinical applications in delivering therapeutic drugs, monitoring health conditions, and diagnosing diseases. Because many biological processes in the human body occur at the cellular level, the small size of nanomaterials allows them to be used as tools that can easily circulate within the body and directly interact with intercellular and even intracellular environments. In addition, nanomaterials can have physicochemical properties that differ from their bulk form because of their size, allowing for varying chemical reactivities and diffusion effects that can be studied and tuned for diverse applications.
A common application of nanomedicine is therapeutic drug delivery, in which nanoparticles containing drugs for the treatment of disease are introduced into the body and act as vessels that carry the drugs to the targeted area. The nanoparticle vessels, which can be made of organic or synthetic components, can further be functionalized by adjusting their size, shape, surface charge, and surface attachments (proteins, coatings, polymers, etc.). The opportunity to functionalize nanoparticles in such ways is especially beneficial when targeting areas of the body whose physiochemical properties prevent the intended drug from reaching the targeted area alone; for example, some nanoparticles are able to bypass the blood–brain barrier to deliver therapeutic drugs to the brain. Nanoparticles have recently been used in cancer therapy treatments and vaccines. Magnetic nanorobots have demonstrated capabilities to prevent and treat antimicrobial-resistant bacteria, and nanomotor implants have been proposed to achieve thorough disinfection of dentine.
In vivo imaging is also a key part in nanomedicine, as nanoparticles can be used as contrast agents for common imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). The ability for nanoparticles to localize and circulate in specific cells, tissues, or organs through their design can provide high contrast that results in higher sensitivity imaging, and thus can be applicable in studying pharmacokinetics or visual disease diagnosis.
Energy applications
The energy applications of nanotechnology relate to using the small size of nanoparticles to store energy more efficiently. This promotes the use of renewable energy through green nanotechnology by generating, storing, and using energy without emitting harmful greenhouse gases such as carbon dioxide.
Solar Cells
Nanoparticles used in solar cells increase the amount of energy absorbed from sunlight.
Hydrogen Fuel Cells
Nanotechnology is enabling the use of hydrogen energy at a much higher capacity. Hydrogen fuel cells, while not an energy source themselves, allow energy from sunlight and other renewable sources to be stored in an environmentally friendly fashion without any emissions. Some of the main drawbacks of traditional hydrogen fuel cells are that they are expensive and not durable enough for commercial use. By using nanoparticles, however, both the durability and the cost over time improve significantly. Furthermore, conventional fuel cells are too large to be stored in volume, but researchers have found that nanoblades can store greater volumes of hydrogen, which can then be kept inside carbon nanotubes for long-term storage.
Nanographene Batteries
Nanotechnology is giving rise to nanographene batteries that can store energy more efficiently and weigh less. Lithium-ion batteries have been the primary battery technology in electronics for the last decade, but the current limits of the technology make it difficult to densify batteries because of the potential dangers of heat and explosion. Graphene batteries being tested in experimental electric cars have promised capacities four times greater than current batteries at a cost 77% lower. Additionally, graphene batteries provide stable life cycles of up to 250,000 cycles, which would give electric vehicles and other long-lived products a reliable energy source for decades.
References
Nanotechnology | Applications of nanotechnology | [
"Materials_science",
"Engineering"
] | 1,855 | [
"Nanotechnology",
"Materials science"
] |
38,327,135 | https://en.wikipedia.org/wiki/Pipeline%20Pilot | Pipeline Pilot is a desktop software application developed by Dassault Systèmes. Initially focused on extract, transform, and load (ETL) processes and data analytics, the software has evolved to offer broader capabilities in various scientific and industrial applications.
Pipeline Pilot uses a visual and dataflow programming interface, allowing users to design workflows for data processing. The software's functionality spans several domains, including cheminformatics, QSAR, next-generation sequencing, image analysis, and text analytics.
Pipeline Pilot is primarily used in industries that require extensive data processing and analysis, including life sciences, materials science, and engineering. The software allows users to create workflows by dragging and dropping functional components that automate data analysis tasks, integrate with databases, and perform various scientific computations. These workflows are referred to as "protocols" and can be shared and reused within teams or organizations.
The product supports multiple programming languages, including Python, .NET, Matlab, Perl, SQL, Java, VBScript, and R, giving users flexibility in integrating custom code into their workflows. Additionally, Pipeline Pilot offers support for PilotScript, its own scripting language based on PL/SQL, which allows users to perform custom data manipulations within their workflows.
Pipeline Pilot has continued to expand its capabilities with additional modules and toolsets for specific scientific tasks, such as next-generation sequencing analysis, cheminformatics, and polymer property prediction.
History
Pipeline Pilot was initially developed by SciTegic, a company that was acquired by BIOVIA in 2004. In 2014, BIOVIA became part of Dassault Systèmes.
Originally designed for applications in chemistry, Pipeline Pilot's capabilities have since been expanded to support a wider range of data processing tasks, including extract, transform, and load (ETL) processes, as well as general analytical and data processing tasks across various fields. The software is used in domains such as life sciences, materials science, and engineering, providing users with tools for creating automated workflows for data analysis and scientific computation.
Overview
Pipeline Pilot is a software tool designed for data manipulation and analysis. It provides a graphical user interface for users to construct workflows that integrate and process data from multiple sources, including CSV files, text files, and databases. The software is commonly used in extract, transform, and load (ETL) tasks.
The interface, known as the Pipeline Pilot Professional Client, allows users to create workflows by selecting and arranging individual data processing units called "components." These components perform a variety of functions such as loading, filtering, joining, or modifying data. Additional components can carry out more complex tasks, such as constructing regression models, training neural networks, or generating reports in formats like PDF.
Pipeline Pilot follows a component-based architecture where components serve as nodes in a workflow, connected by "pipes" that represent data flow in a directed graph. This framework enables the processing of data as it moves between the components.
Users have the flexibility to work with pre-installed components or develop custom ones within workflows, referred to as "protocols." Protocols, which consist of linked components, can be saved, reused, and shared, enabling streamlined data processing. The interface visualizes the connections between components, simplifying complex data workflows by presenting them as sequences of operations.
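As a toy illustration of the dataflow idea only (ordinary Python, not Pipeline Pilot's component API), a protocol can be pictured as components chained so that data records flow from one to the next; the record fields and filter threshold below are invented for the example:

def keep_if(predicate):
    def component(stream):
        return (r for r in stream if predicate(r))        # a filter component
    return component

def add_property(name, value):
    def component(stream):
        return ({**r, name: value} for r in stream)       # a manipulator component
    return component

records = [{"id": 1, "mw": 320}, {"id": 2, "mw": 812}]    # invented example data
protocol = [keep_if(lambda r: r["mw"] < 500), add_property("Hello", "Hello World!")]

stream = iter(records)
for component in protocol:                                 # the "pipes" between components
    stream = component(stream)
print(list(stream))                                        # [{'id': 1, 'mw': 320, 'Hello': 'Hello World!'}]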
Component collections
Pipeline Pilot offers several add-ons called "collections," which are groups of specialized functions aimed at specific domains, such as genetic information processing or polymer analysis. These collections are available to users for an additional licensing fee.
The collections are organized into two main groups: science-specific and generic. The science-specific collections focus on areas like chemistry, biology, and materials modeling, while the generic collections provide tools for reporting, data analysis, and document search. Below is an overview of the available collections:
Custom scripts
Pipeline Pilot is commonly used for processing large and complex datasets, often exceeding 1 TB in size. Early in its development, Pipeline Pilot introduced a scripting language called "PilotScript", which allows users to write basic scripts that can be integrated into a protocol. Over time, support for additional programming languages was added, including Python, .NET, Matlab, Perl, SQL, Java, VBScript, and R. These languages can be used through APIs that execute commands without requiring the graphical user interface.
PilotScript, a language modeled on PL/SQL, is used within specific components such as the "Custom Manipulator (PilotScript)" or "Custom Filter (PilotScript)". An example of a simple PilotScript command is shown below, where a property named "Hello" is added to each record passing through the component, with the value "Hello World!":
Hello := "Hello World!";
References
Science software
Enterprise application integration
Extract, transform, load tools
Bioinformatics software
Computational chemistry software
Computer vision software
Data analysis software
Data mining and machine learning software
Data and information visualization software
Laboratory software
Mass spectrometry software
Natural language processing software
Numerical software
Plotting software
Proprietary software
Visual programming languages | Pipeline Pilot | [
"Physics",
"Chemistry",
"Mathematics",
"Biology"
] | 1,041 | [
"Spectrum (physical sciences)",
"Computational chemistry software",
"Chemistry software",
"Bioinformatics software",
"Bioinformatics",
"Computational chemistry",
"Mass spectrometry software",
"Mass spectrometry",
"Numerical software",
"Mathematical software"
] |
43,975,548 | https://en.wikipedia.org/wiki/Hopf%20construction | In algebraic topology, the Hopf construction constructs a map from the join of two spaces and to the suspension of a space out of a map from to . It was introduced by in the case when and are spheres. used it to define the J-homomorphism.
Construction
The Hopf construction can be obtained as the composition of a map
X ∗ Y → Σ(X × Y)
and the suspension
Σ(X × Y) → ΣZ
of the map from X × Y to Z.
The map from X ∗ Y to Σ(X × Y) can be obtained by regarding both sides as a quotient of X × Y × I, where I is the unit interval. For X ∗ Y one identifies (x, y, 0) with (x, y′, 0) and (x, y, 1) with (x′, y, 1), while for Σ(X × Y) one contracts all points of the form (x, y, 0) to a point and also contracts all points of the form (x, y, 1) to a point. So the map from X × Y × I to Σ(X × Y) factors through X ∗ Y.
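Writing points of the join as [x, y, t] with x in X, y in Y and t in the unit interval, the composite map, commonly denoted J(f) for f : X × Y → Z, can be written explicitly as follows (a standard formulation added here for concreteness, not a quotation from the sources cited below):

J(f)\colon X * Y \to \Sigma Z, \qquad J(f)\,[x, y, t] = [\,f(x, y),\ t\,]

This is well defined because ΣZ collapses Z × {0} and Z × {1} to points, matching the identifications made in the join at t = 0 and t = 1.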
References
Algebraic topology | Hopf construction | [
"Mathematics"
] | 144 | [
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
43,979,118 | https://en.wikipedia.org/wiki/Lithium%20atom | A lithium atom is an atom of the chemical element lithium. Stable lithium is composed of three electrons bound by the electromagnetic force to a nucleus containing three protons along with either three or four neutrons, depending on the isotope, held together by the strong force. Similarly to the case of the helium atom, a closed-form solution to the Schrödinger equation for the lithium atom has not been found. However, various approximations, such as the Hartree–Fock method, can be used to estimate the ground state energy and wavefunction of the atom. The quantum defect is a value that describes the deviation from hydrogenic energy levels.
Further reading
W. Zheng et al. / Appl. Math. Comput. 153 (2004) 685–695 "Numerical solutions of the Schrödinger equation for the ground lithium by the finite element method"
Atoms
Lithium | Lithium atom | [
"Physics"
] | 181 | [
"Nuclear and atomic physics stubs",
"Atoms",
"Matter",
"Nuclear physics"
] |
43,980,327 | https://en.wikipedia.org/wiki/ZmEu%20%28vulnerability%20scanner%29 | ZmEu is a computer vulnerability scanner which searches for web servers that are open to attack through the phpMyAdmin program,
It also attempts to guess SSH passwords through brute-force methods, and leaves a persistent backdoor. It was developed in Romania and was especially common in 2012.
It is apparently named after Zmeu, a dragon-like being in Romanian folklore.
References
Computer security software | ZmEu (vulnerability scanner) | [
"Engineering"
] | 84 | [
"Cybersecurity engineering",
"Computer security software"
] |
41,119,032 | https://en.wikipedia.org/wiki/Aragonite%20Hazardous%20Waste%20Incinerator | The Aragonite Hazardous Waste Incinerator is a waste disposal facility currently operated by Clean Harbors. It is located in
Aragonite, Tooele County, Utah, United States, located in the western portion of the state.
Site geography and early history
The Utah Test and Training Range lies to the west and the Dugway Proving Grounds lie to the southwest. Interstate 80, exit 56 provides access to Aragonite. The site lies northwest of the Cedar Mountains. The low Grassy Mountains lie to the north.
Aragonite lies along the Hastings Cutoff, a historical transmontane route taken by nineteenth-century pioneers. Aragonite was established in the early twentieth century for the mining of aragonite, though all mining operations in the area have ceased. A 1950s-era mining guide described a small townsite, but the area is now uninhabited and almost totally demolished.
The historical Aragonite site has been described as "an old mining town from the early 20th century that mined aragonite. This mine was only in operation for a few years but today [in 2009] the mineshafts are still open and a few bunkhouses remain, as well as an old truck."
Waste disposal
Just east of the historical townsite is a large hazardous waste incineration facility. This facility was known as the Aptus Incinerator, and was built there in 1991 after Tooele County established the surrounding lands as the West Desert Hazardous Industries District.
According to the Provo Daily Herald, the Aptus incinerator at Aragonite was the first hazardous waste incinerator in Utah. In 1992, it had the capacity to burn 70,000 tons of waste per year, most of which came from out-of-state sources. The incinerator was, at times, operated by Westinghouse, Rollins, Laidlaw, and Safety-Kleen, and is now operated by Clean Harbors. In 2013, it was reported that Utah medical facilities were considering using the Aragonite disposal facility instead of the Stericycle facility, which is much closer to Salt Lake City.
The facility has been the subject of several penalties administered by the EPA. A 2009 Associated Press story reported on a settlement reached after 48 regulatory violations were uncovered, including some relating to fires at the facility. The Salt Lake City Tribune described the facility as an "alleged serial violator" in 2014, noting yearly fines for reporting errors, inventory discrepancies, improper storage, and inadvertent air pollutant releases.
In 2017, an armed man threatened to explode a bomb at the facility, and was shot dead by state highway patrol officers.
References
Ghost towns in Tooele County, Utah
Ghost towns in Utah
Incinerators
Hazardous waste | Aragonite Hazardous Waste Incinerator | [
"Chemistry",
"Technology"
] | 547 | [
"Incinerators",
"Incineration",
"Hazardous waste"
] |
41,119,232 | https://en.wikipedia.org/wiki/Mechanisms%20of%20mindfulness%20meditation | Mindfulness has been defined in modern psychological terms as "paying attention to relevant aspects of experience in a nonjudgmental manner", and maintaining attention on present moment experience with an attitude of openness and acceptance. Meditation is a platform used to achieve mindfulness. Both practices, mindfulness and meditation, have been "directly inspired from the Buddhist tradition" and have been widely promoted by Jon Kabat-Zinn. Mindfulness meditation has been shown to have a positive impact on several psychiatric problems such as depression and therefore has formed the basis of mindfulness programs such as mindfulness-based cognitive therapy, mindfulness-based stress reduction and mindfulness-based pain management. The applications of mindfulness meditation are well established, however the mechanisms that underlie this practice are yet to be fully understood. Many tests and studies on soldiers with PTSD have shown tremendous positive results in decreasing stress levels and being able to cope with problems of the past, paving the way for more tests and studies to normalize and accept mindful based meditation and research, not only for soldiers with PTSD, but numerous mental inabilities or disabilities.
Four components of mindfulness meditation have been proposed to describe much of the mechanism of action by which mindfulness meditation may work: attention regulation, body awareness, emotion regulation, and change in perspective on the self. All of the components described above are connected to each other. For example, when a person is triggered by an external stimulus, the executive attention system attempts to maintain a mindful state. There is also a heightened body awareness such as a rapid heartbeat which triggers an emotional response. The response is then regulated so that it does not become habitual, but constantly changes from moment to moment experience. This eventually leads to a change in the perspective of the self.
Attention regulation
Attention regulation is the task of focusing attention on an object, acknowledging any distractions, and then returning focus to the object. Some evidence for the mechanisms responsible for attention regulation during mindfulness meditation is shown below.
Mindfulness meditators showed greater activation of rostral anterior cingulate cortex (ACC) and dorsal medial prefrontal cortex (MPFC). This suggests that meditators have a stronger processing of conflict/distraction and are more engaged in emotional regulation. However, as the meditators become more efficient at focused attention, regulation becomes unnecessary and consequentially decreases activation of ACC in the long term.
The cortical thickness in the dorsal ACC was also found to be greater in the gray matter of experienced meditators.
There is an increased frontal midline theta rhythm, which is related to attention demanding tasks and is believed to be indicative of ACC activation. High midline theta rhythm has been associated with lowest anxiety score in the Manifest Anxiety Scale (MAS), the highest score in the extrovertive scale of the Maudsley Personality Inventory (MPI) and the lowest score in the neurotic scale of MPI.
The ACC detects conflicting information coming from distractions. When a person is presented with a conflicting stimulus, the brain initially processes the stimulus incorrectly. This is known as error-related negativity (ERN). Before the ERN reaches a threshold, the correct conflict is detected by the frontocentral N2. After the correction, the rostral ACC is activated and allows for executive attention to the correct stimulus. Therefore, mindfulness meditation could potentially be a method for treating attention related disorders such as ADHD and bipolar disorder.
Body awareness
Body awareness refers to focusing on an object or task within the body, such as breathing. From a qualitative interview with ten mindfulness meditators, some of the following responses were observed: "When I'm walking, I deliberately notice the sensations of my body moving" and "I notice how foods and drinks affect my thoughts, bodily sensations, and emotions". The two possible mechanisms by which a mindfulness meditator can experience body awareness are discussed below.
Meditators showed a greater cortical thickness and greater gray matter concentration in the right anterior insula.
On the contrary, subjects who had undergone 8 weeks of mindfulness training showed no significant change in gray matter concentration of the insula, but rather an increase gray matter concentration of the temporo-parietal junction.
The insula is responsible for awareness to stimuli and the thickness of its gray matter correlates to the accuracy and detection of the stimuli by the nervous system. Qualitative evidence suggests that mindfulness meditation impacts body awareness, however this component is not well characterized.
Emotion regulation
Emotions can be regulated cognitively or behaviorally. Cognitive regulation (in terms of mindfulness meditation) means having control over giving attention to a particular stimuli or by changing the response to that stimuli. The cognitive change is achieved through reappraisal (interpreting the stimulus in a more positive manner) and extinction (reversing the response to the stimulus). Behavioral regulation refers to inhibiting the expression of certain behaviors in response to a stimulus. Research suggests two main mechanisms for how mindfulness meditation influences the emotional response to a stimulus.
Mindfulness meditation regulates emotions via increased activation of the dorso-medial PFC and rostral ACC.
Increased activation of the ventrolateral PFC can regulate emotion by decreasing the activity of the amygdala. This was also predicted by a study that observed the effect of a person's mood/attitude during mindfulness on brain activation.
Lateral prefrontal cortex (lPFC) is important for selective attention while ventral prefrontal cortex (vPFC) is involved in inhibiting a response. As noted before, the anterior cingulate cortex (ACC) has been noted for maintaining attention to a stimulus. The amygdala is responsible for generating emotions. Mindfulness meditation is believed to be able to regulate negative thoughts and decrease emotional reactivity through these regions of the brain. Emotion regulation deficits have been noted in disorders such as borderline personality disorder and depression. These deficits have been associated with reduced prefrontal activation and increased amygdala activity, which mindfulness meditation might be able to attenuate.
Pain
Pain is known to activate the following regions of the brain: the anterior cingulate cortex, anterior/posterior insula, primary/secondary somatosensory cortices, and the thalamus. Mindfulness meditation may provide several methods by which a person can consciously regulate pain.
Brown and Jones found that mindfulness meditation decreased pain anticipation in the right parietal cortex and mid-cingulate cortex. Mindfulness meditation also increased the activity of the anterior cingulate cortex (ACC) and ventromedial-prefrontal cortex (vm-PFC). Since the vm-PFC is involved in inhibiting emotional responses to stimuli, anticipation to pain was concluded to be reduced by cognitive and emotional control.
Another study by Grant revealed that meditators showed greater activation of insula, thalamus, and mid-cingulate cortex while a lower activation of the regions responsible for emotion control (medial-PFC, OFC, and amygdala). Meditators were believed to be in a mental state that allowed them to pay close attention to the sensory input from the stimuli and simultaneously inhibit any appraisal or emotional reactivity.
Brown and Jones found that meditators showed no difference in pain sensitivity but rather the anticipation in pain. However, Grant's research showed that meditators experienced lower sensitivity to pain. These conflicting studies illustrate that the exact mechanism may vary with the expertise level or meditation technique.
References
External links
Mindfulness Meditation: Jon Kabat-Zinn
Mindfulness Meditation Pt. 1
Mindfulness Meditation Pt. 2
Neuroscience
Mindfulness (psychology) | Mechanisms of mindfulness meditation | [
"Biology"
] | 1,576 | [
"Neuroscience"
] |
41,120,778 | https://en.wikipedia.org/wiki/Epitope%20binning | Epitope binning is a competitive immunoassay used to characterize and then sort a library of monoclonal antibodies against a target protein. Antibodies against a similar target are tested against all other antibodies in the library in a pairwise fashion to see if antibodies block one another's binding to the epitope of an antigen. After each antibody has a profile created against all of the other antibodies in the library, a competitive blocking profile is created for each antibody relative to the others in the library. Closely related binning profiles indicate that the antibodies have the same or a closely related epitope and are "binned" together. Epitope binning is referenced in the literature under different names such as epitope mapping and epitope characterization. Regardless of the naming, epitope binning is prevalent in the pharmaceutical industry. Epitope Binning is used in the discovery and development of new therapeutics, vaccines, and diagnostics.
See also
Autoimmunity
Epitope mapping
References
Antigenic determinant
Immunologic tests | Epitope binning | [
"Chemistry",
"Biology"
] | 219 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry",
"Immunologic tests"
] |
41,121,217 | https://en.wikipedia.org/wiki/Cryoneurolysis | Cryoneurolysis, also referred to as cryoanalgesia, is a medical procedure that temporarily blocks nerve conduction along peripheral nerve pathways. The procedure, which inserts a small probe to freeze the target nerve, can facilitate complete regeneration of the structure and function of the affected nerve. Cryoneurolysis has been used to treat a variety of painful conditions.
Medical uses
Cryoneuralysis has been used to relieve pain after thoracotomy, mastectomy, and knee or shoulder arthroplasty. Combined with ultrasound imaging, the procedure can be administered using a hand-held device in an office, and appears to provide an expedient, safe, and nonpharmacological option for treating various chronic pain conditions.
Mechanisms of action
Nerve anatomy
Each nerve is composed of a bundle of axons. Each axon is surrounded by the endoneurium connective tissue layer. These axons are bundled into fascicles surrounded by the perineurium connective tissue layer. Multiple fascicles are then surrounded by the epineurium, which is the outermost connective tissue layer of the nerve. The axons of myelinated nerves have a myelin sheath made up of Schwann cells that coat the axon.
Classification and mechanism
Classification of nerve damage was well-defined by Sir Herbert Seddon and Sunderland in a system that remains in use. The adjacent table details the forms (neurapraxia, axonotmesis and neurotmesis) and degrees of nerve injury that occur as a result of exposure to various temperatures, with the intent to interrupt nerve traffic and relieve pain.
Cryoneurolysis treatments that use nitrous oxide (boiling point of −88.5 °C) as the coolant fall in the range of an axonotmesis injury, or 2nd degree injury, according to the Sunderland classification system. Treatments of the nerve in this temperature range are reversible, usually within a few months. Nerves treated in this temperature range experience a disruption of the axon, with Wallerian degeneration occurring distal to the site of injury. The axon and myelin sheath are affected, but all of the connective tissues (endoneurium, perineurium, and epineurium) remain intact. Following Wallerian degeneration, the axon regenerates along the original nerve path at a rate of approximately 1–2 mm per day.
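The regeneration rate above gives a rough sense of the recovery time scale. The small Python sketch below assumes an illustrative distance of 100 mm between the treatment site and the nerve's end target (the distance is invented; only the 1–2 mm per day rate comes from the text above):

regrowth_rates_mm_per_day = (1.0, 2.0)   # reported range for axon regeneration
distance_mm = 100                        # assumed lesion-to-target distance

slow, fast = (distance_mm / r for r in regrowth_rates_mm_per_day)
print(f"estimated recovery: {fast:.0f}-{slow:.0f} days")   # roughly 50-100 days, i.e. a few months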
Cryoneurolysis differs from cryoablation in that cryoablation treatments use liquid nitrogen (boiling point of −195.8 °C) as the coolant, and therefore, fall into the range of a neurotmesis injury, or 3rd degree injury according to the Sunderland classification. Treatments of the nerve in this temperature range are irreversible. Nerves treated in this temperature range experience a disruption of both the axon and the endoneurium connective tissue layer.
The efficacy of cryoneuralysis procedures for pain relief depend on the proximity of the probe to the targeted nerve, surface area of tissue covered by the probe, the rate and duration of cold treatment, and the temperature applied.
History
The use of cold for pain relief and as an anti-inflammatory has been known since the time of Hippocrates (460–377 BC). Since then there have been numerous accounts of ice used for pain relief including from the Ancient Egyptians and Avicenna of Persia (982–1070 AD). In 1812 Napoleon's Surgeon General noted that half-frozen soldiers from the Moscow battle were able to tolerate amputations with reduced pain and in 1851, ice and salt mixtures were promoted by Arnott for the treatment of nerve pain. Campbell White, in 1899, was the first to use refrigerants medically, and Allington, in 1950, was the first to use liquid nitrogen for medical treatments. In 1961, Cooper et al. created an early cryoprobe that reached −190 °C using liquid nitrogen. Shortly thereafter, in 1967, an ophthalmic surgeon named Amoils used carbon dioxide and nitrous oxide to create a cryoprobe that reached −70 °C.
Devices
Cryoprobe
Cryoneurolysis is performed with a cryoprobe, which is composed of a hollow cannula that contains a smaller inner lumen. The pressurized coolant (nitrous oxide, carbon dioxide or liquid nitrogen) travels down the lumen and expands at the end of the lumen into the tip of the hollow cannula. No coolant exits the cryoprobe. The expansion of the pressurized liquid causes the surrounding area to cool (known as the Joule–Thomson effect) and the phase change of the liquid to gas also causes the surrounding area to cool. This causes a visible iceball to form and the tissue surrounding the end of the cryoprobe to freeze. The gas form of the coolant then travels up the length of the cryoprobe and is safely expelled. The tissue surrounding the end of the cryoprobe can reach as low as −88.5 °C with nitrous oxide as the coolant, and as low as −195.8 °C with liquid nitrogen. Temperatures below −100 °C are damaging to nerves.
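The cooling on expansion mentioned above is conventionally characterized by the Joule–Thomson coefficient, written in standard thermodynamic notation (added here for context, not taken from the cited sources) as

\mu_{\mathrm{JT}} = \left( \frac{\partial T}{\partial P} \right)_{H}

the temperature change per unit pressure drop at constant enthalpy. A positive coefficient, as for nitrous oxide and carbon dioxide near room temperature, means the gas cools as it expands at the probe tip.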
The Cryo-S Painless cryoanalgesia device is a newer generation of apparatus, used by many practitioners in the field since 1992. Its working medium is carbon dioxide (−78 °C) or nitrous oxide (−89 °C), both efficient and easy-to-handle gases. The device is controlled by a microprocessor, and all parameters are displayed and monitored on an LCD screen. Probe selection, cleaning, and freezing can be performed automatically using a footswitch or the touch screen, which allows the procedure site to be kept under sterile conditions. Electronic (chip-based) communication between the connected probe and the device allows recognition of optimal operating parameters and automatic configuration to the cryoprobe's characteristics. Pressure and gas flow are set automatically, so manual adjustment is not necessary. The cryoprobe temperature, cylinder pressure, gas flow inside the cryoprobe, and procedure time are displayed during freezing. The device also includes built-in voice communication and built-in sensory and motor neurostimulation.
Other devices
The Endocare PerCryo Percutaneous Cryoablation device utilizes argon as a coolant and can be used with four different single-cryoprobe configurations, with a diameter of either 1.7 mm (~16 gauge) or 2.4 mm (~13 gauge).
The Myoscience Iovera is a handheld device that uses nitrous oxide as a coolant and can be used with a three-probe configuration with a probe diameter of 0.4 mm (~27 gauge).
References
External links
Neurology procedures
Cryobiology | Cryoneurolysis | [
"Physics",
"Chemistry",
"Biology"
] | 1,423 | [
"Biochemistry",
"Physical phenomena",
"Phase transitions",
"Cryobiology"
] |
41,121,740 | https://en.wikipedia.org/wiki/Peripheral%20nerve%20interface | A peripheral nerve interface is the bridge between the peripheral nervous system and a computer interface which serves as a bi‐directional information transducer recording and sending signals between the human body and a machine processor. Interfaces to the nervous system usually take the form of electrodes for stimulation and recording, though chemical stimulation and sensing are possible. Research in this area is focused on developing peripheral nerve interfaces for the restoration of function following disease or injury to minimize associated losses. Peripheral nerve interfaces also enable electrical stimulation and recording of the peripheral nervous system to study the form and function of the peripheral nervous system. For example, recent animal studies have demonstrated high accuracy in tracking physiological meaningful measures, like joint angle. Many researchers also focus in the area of neuroprosthesis, linking the human nervous system to bionics in order to mimic natural sensorimotor control and function. Successful implantation of peripheral nerve interfaces depend on a number of factors which include appropriate indication, perioperative testing, differentiated planning, and functional training. Typically microelectrode devices are implanted adjacent to, around or within the nerve trunk to establish contact with the peripheral nervous system. Different approaches may be used depending on the type of signal desired and attainable.
Function
The primary purpose of a neural interface is to enable two-way exchange of information with the nervous system for a sustained period of time, allowing effective, high-density stimulation and recording. The peripheral nervous system (PNS) is responsible for relaying information from the brain and spinal cord to the extremities of the body and back. The function of a peripheral nerve interface is to assist the nervous system when peripheral nerve function is compromised. To supplement the roles of the nervous system, interfaces need to augment motor function as well as discern sensory information. The feasibility of peripheral nerve stimulation to achieve a desired motor output has been demonstrated and is one of the major driving forces for this area of research. Information throughout the nervous system is exchanged primarily through action potentials. These signals occur at varying numbers and intervals depending on both the neuroanatomical and neurochemical makeup of the individual and of the localized region. Information may be either introduced or read out by inducing or recovering action potentials from the body. Successful development and implementation of a peripheral nerve interface would allow both the introduction of information to the nervous system and the extraction of information from it.
Problems and limitations
Problems and limitations in peripheral nerve interfacing are both biophysical and biological in nature. These challenges include:
Fidelity of the interface in terms of functional resolution
Relatively weak, noise-ridden electrical signals causing a challenging interface design constraint
Interface implantation-associated injury to nerve fibers of interest
Stability of the interface over time due to inflammation
Managing inadvertent consequences such as pain or false sensory/motor stimulation due to physical movement or inflammation-associated triggering of neural activity
Application
Peripheral nerve interfaces are used for pain modulation, restoration of motor function following spinal cord injury or stroke, treatment of epilepsy by electrical stimulation of the vagus nerve, nerve stimulation to control micturition, occipital nerve stimulation for chronic migraines and to interface with neuroprosthetics.
Types
A wide variety of electrode designs have been researched, tested, and manufactured. These electrodes lie on a spectrum of invasiveness. Research in this area seeks to address issues centered around peripheral nerve and tissue damage, access to efferent and afferent signals, and selective recording and stimulation of nerve tissue. Ideally, peripheral nerve interfaces are designed to accommodate the biological constraints of peripheral nerve fibers, match the mechanical and electrical properties of the surrounding tissue, be biocompatible with minimal immune response, provide high sensor resolution, be minimally invasive, and remain chronically stable with high signal-to-noise ratios. The strongest signals are recorded from nodes of Ranvier. Peripheral nerve interfaces may be divided into extraneural and intrafascular categories.
Epineurial electrode interface
Epineurial electrodes are fabricated as longitudinal strips holding two or more contact sites to interface with peripheral nerves. These electrodes are placed on the nerve and secured by suturing to the epineurium. The suturing process requires delicate surgery and can be torn from the nerve if excessive motion creates tension. Since the electrode is sutured to the epineurium it is unlikely to damage the nerve trunk.
Helicoidal electrode interface
Helicoidal electrodes are placed circumjacent to the nerve and are made of flexible platinum ribbon in a helical design. This design allows the electrode to conform to the size and shape of the nerve in attempts to minimize mechanical trauma. The structural design causes low selectivity. Helicoidal electrodes are currently used for FES stimulation of the vagus nerve to control intractable epilepsy, sleep apnea, and to treat depressive syndromes.
Book electrode interface
The book electrode consists of a silicone rubber block with slots. Each slot contains three platinum foils which function as electrodes: two anodes and one cathode. The spinal roots of the nerve are placed into these slots, and the slots are then covered with a flap made of silicone and fixed with silicone glue. This electrode is mostly used to interrupt reflex circuits of the dorsal sacral roots and to control bladder function. Book electrodes are still considered very bulky.
References
DARPA projects
Human–computer interaction
Implants (medicine)
Neural engineering
Neuroprosthetics
User interface techniques
Virtual reality | Peripheral nerve interface | [
"Engineering"
] | 1,117 | [
"Human–computer interaction",
"Human–machine interaction"
] |
42,541,439 | https://en.wikipedia.org/wiki/Closed-loop%20manufacturing | Closed-loop manufacturing (abbreviated CLM) is a closed-loop process of manufacturing and measuring (checking) in the manufacturing machine. The pre-stage to this is inspection in manufacturing. The idea is to reduce costs and improve the quality and accuracy of the produced parts.
General procedure
Closed-loop manufacturing can be done in different ways depending on the manufacturing technique and on the accuracy requirements. A minimal sketch of the resulting iteration loop is shown after the list below.
Planning the sequence (iterations)
Producing nearly the target value on the part
Measuring the real value
Calculating the residual (stop if residual is smaller than needed accuracy)
Manufacturing the residual
Repeat from Step 3
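The following Python sketch shows one way such an iterate-until-within-tolerance loop could be written. The produce, correct, and measure callbacks, the target value, and the tolerance are placeholders standing in for machine-specific operations, not part of any particular controller:

def closed_loop_manufacture(target, produce, correct, measure, tolerance, max_iterations=10):
    produce(target)                          # step 2: produce nearly the target value
    for _ in range(max_iterations):
        actual = measure()                   # step 3: measure the real value in the machine
        residual = target - actual           # step 4: remaining deviation
        if abs(residual) <= tolerance:       # stop once the needed accuracy is reached
            return actual
        correct(residual)                    # step 5: machine off only the residual
    return measure()

The key point is that the part is measured without being unloaded, so every pass removes only the measured residual rather than re-machining the full feature.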
Suitable manufacturing techniques
CLM is very suitable for electrical discharge machining. Milling or turning is also suitable for CLM.
Suitable measuring techniques
In machining, measurement techniques have to fulfill special needs. Optical techniques in particular have the advantage that they do not touch the part. The following techniques are used in practice:
Focus variation
Tactile probes
Advantages / Disadvantages
The advantages are:
Reduce tool costs
Improving accuracy
Quality control is done in the machine directly
Measuring can be more accurate because the part does not need to be unloaded
Better deployment of personnel
Integrated sensor eliminates clamping errors
The disadvantages are:
Planning is needed
If planning is not done, machine time is increased
External links
EU EMRP project on traceable in-process dimensional measurement
Manufacturing | Closed-loop manufacturing | [
"Engineering"
] | 267 | [
"Manufacturing",
"Mechanical engineering"
] |
42,547,159 | https://en.wikipedia.org/wiki/Isoxaprolol | Isoxaprolol is an adrenergic antagonist with antiarrhythmic and antihypertensive properties.
References
Alpha blockers
Beta blockers
Isoxazoles
Tert-butyl compounds | Isoxaprolol | [
"Chemistry"
] | 46 | [
"Pharmacology",
"Alpha blockers"
] |
56,967,052 | https://en.wikipedia.org/wiki/Mistral%20G-230-TS | The Mistral G-230-TS is a Swiss aircraft engine, designed and produced by Mistral Engines of Geneva for use in light aircraft.
By March 2018 the engine was no longer advertised on the company website and seems to be out of production.
Design and development
The engine is a twin-rotor, 2X3X displacement, liquid-cooled, gasoline Wankel engine design, with a mechanical gearbox reduction drive with a reduction ratio of 2.8:1. It employs dual electronic ignition systems and produces its rated power at 2515 rpm.
Specifications (G-230-TS)
See also
References
External links
Mistral aircraft engines
Pistonless rotary engine | Mistral G-230-TS | [
"Technology"
] | 130 | [
"Engines",
"Pistonless rotary engine"
] |
56,967,599 | https://en.wikipedia.org/wiki/Montreal%20Student%20Space%20Associations | The Montreal Student Space Associations (MSSA; French: Associations Etudiantes Spatiales de Montreal) are a group of student aerospace associations across Quebec universities.
History
In July 2017, a committee of students from Concordia and McGill Universities united under the common goal of promoting space related discussions and awareness to the public through a conference taking place during World Space Week: the Montreal Space Symposium (MSS). Following this collaborative effort, other schools and teams across Montreal joined the committee, and formed together the Montreal Student Space Associations.
The first conference being a success, the group formalized its existence by becoming the Montreal chapter of the Canadian Space Society, and pursuing its outreach effort throughout the rest of the year. The mandate of the MSSA is threefold:
To advocate for the role of students and student-led initiatives within the Canadian space sector;
To showcase the opportunities existing in the space sector, and empower the youth to engage in space related activities leading to careers in the field;
To enhance the space community in Montreal, in order to foster the city to become a widely recognized space hub.
Composition
Concordia University:
Space Concordia
Concordia Institute for Aerospace Design and Innovation (CIADI)
McGill University:
McGill Rocket Team
McGill Space Institute
McGill Space Group
McGill Institute for Air and Space Law
McGill Institute for Aerospace Engineering (MIAE)
École Polytechnique de Montréal:
PolyOrbite
AstroPoly
Institut d'Innovation et de Conception en Aerospatiale de Polytechique (IICAP)
École de Technologie Supérieure:
AÉROÉTS
RockÉTS
Université de Sherbrooke:
QMSat
References
Space organizations
Organizations based in Montreal
Scientific organizations based in Canada | Montreal Student Space Associations | [
"Astronomy"
] | 332 | [
"Astronomy organizations",
"Space organizations"
] |
56,967,954 | https://en.wikipedia.org/wiki/Electro-biochemical%20reactor | Electro-biochemical reactor (EBR) is a type of a bioreactor used in water treatment. EBR is a high-efficiency denitrification, metals, and inorganics removal technology that provides electrons directly to the EBR bioreactor as a substitute for using excess electron donors and nutrients. It was patented by INOTEC, a bioremediation company based in Salt Lake City, UT.
The EBR technology is based on the principle that microbes mediate the removal of metal and inorganic contaminants through electron transfer (redox processes). In conventional bioreactors, these electrons are provided by excess organic electron donors (e.g., organic carbon sources such as methanol or glucose). Such reactors require excess nutrients and chemicals to compensate for inefficient and variable electron availability, to adjust the reactor's ORP chemistry, to compensate for system sensitivity (fluctuation), and to achieve more consistent constituent removal. The electro-biochemical reactor instead supplies the needed electrons directly to the reactor and its microbes, using a low potential applied across the reactor cell (1–3 V) at low milliamp levels. As a comparison, one molecule of glucose, often used as a cost-effective electron donor, can provide up to 24 electrons under complete metabolism, while a current of 1 mA provides about 6.2 × 10^15 electrons every second. The small amount of power required can even come from a small solar/battery source.
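The electron-count comparison is easy to reproduce: dividing the current by the elementary charge gives the number of electrons delivered per second. The short Python sketch below uses only the 1 mA figure quoted above and the value of the elementary charge:

ELEMENTARY_CHARGE = 1.602e-19     # coulombs per electron
current_amperes = 1e-3            # the 1 mA applied across the reactor cell

electrons_per_second = current_amperes / ELEMENTARY_CHARGE
print(f"{electrons_per_second:.2e} electrons per second")   # ~6.24e15, versus 24 electrons per glucose molecule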
The EBR systems have been successfully demonstrated in the mining and power generation sectors to remove nitrate, nitrite, selenium, cadmium, molybdenum, nickel, tin, uranium, zinc, antimony, copper, lead, silver, vanadium, and mercury.
References
Bioreactors | Electro-biochemical reactor | [
"Chemistry",
"Engineering",
"Biology"
] | 369 | [
"Bioreactors",
"Biological engineering",
"Bioengineering stubs",
"Chemical reactors",
"Biotechnology stubs",
"Biochemical engineering",
"Microbiology equipment"
] |
56,969,516 | https://en.wikipedia.org/wiki/Combustion%20Theory%20and%20Modelling | Combustion Theory and Modelling is a bimonthly peer-reviewed scientific journal covering research on combustion. The editors-in-chief are Moshe Matalon (University of Illinois at Urbana–Champaign) and Mitchell D. Smooke (Yale University). It is published by Taylor & Francis and was established in 1997. The founding editors are John W. Dold and Mitchell D. Smooke.
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.777.
See also
References
External links
Taylor & Francis academic journals
Chemistry journals
Physics journals
English-language journals
Engineering journals
Combustion
Academic journals established in 1997
Bimonthly journals | Combustion Theory and Modelling | [
"Chemistry"
] | 146 | [
"Combustion"
] |
56,970,119 | https://en.wikipedia.org/wiki/7%CE%B1-Methylestradiol | 7α-Methylestradiol (7α-Me-E2), also known as 7α-methylestra-1,3,5(10)-triene-3,17β-diol, is a synthetic estrogen and an active metabolite of the androgen/anabolic steroids trestolone/Methandienone. It is considered to be responsible for the estrogenic activity of trestolone. The compound shows about higher affinity for the estrogen receptor than estradiol.
See also
List of estrogens
Methylestradiol
Ethylestradiol
Ethinylestradiol
Almestrone
References
Abandoned drugs
Secondary alcohols
Estranes
Human drug metabolites
Synthetic estrogens | 7α-Methylestradiol | [
"Chemistry"
] | 154 | [
"Chemicals in medicine",
"Drug safety",
"Human drug metabolites",
"Abandoned drugs"
] |
56,973,579 | https://en.wikipedia.org/wiki/Triisopropylsilane | Triisopropylsilane (TIPS) is an organosilicon compound with the formula (i-Pr)3SiH (i-Pr = isopropyl). This colorless liquid is used as a scavenger in peptide synthesis. It can also act as a mild reducing agent.
In peptide synthesis, TIPS is used as a scavenger for peptide groups being removed from the peptide sequence at the global deprotection. TIPS is able to scavenge carbocations formed in the deprotection of a peptide as it can act as a hydride donor in acidic conditions. Silanes may be preferred as scavengers in place of sulfur-based scavengers.
References
Reducing agents
Silanes
Isopropyl compounds | Triisopropylsilane | [
"Chemistry"
] | 163 | [
"Redox",
"Reducing agents"
] |
56,974,263 | https://en.wikipedia.org/wiki/Memorial%20hall | A memorial hall is a hall built to commemorate an individual or group; most commonly those who have died in war. Most are intended for public use and are sometimes described as utilitarian memorials.
History of the Memorial Hall
In the aftermath of the First World War, many towns and villages looked to commemorate casualties from their communities. Community leaders were expected to organise local committees to construct memorials and halls, for the benefit of the local community, were often seen as appropriate ways in which to honour those who had lost their lives. Most incorporate a plaque or stone, individually naming casualties, although, in some cases, they were built instead of war memorials. Most First World War memorial halls would later go on to be rededicated as memorials to those who also died in the Second World War. In post-war times, many Second World War Memorials would later be rededicated to those who lost their lives in numerous modern wars.
Village hall
Memorial halls often serve the functions of village halls.
Examples
Congregational Memorial Hall
See also
Memorial Hall (disambiguation)
References
Buildings and structures by type
Monuments and memorials | Memorial hall | [
"Engineering"
] | 223 | [
"Buildings and structures by type",
"Architecture"
] |
56,974,691 | https://en.wikipedia.org/wiki/Ultra-high%20temperature%20ceramic%20matrix%20composite | Ultra-high temperature ceramic matrix composites (UHTCMC) are a class of refractory ceramic matrix composites (CMCs) with melting points significantly higher than that of typical CMCs. Among other applications, they are the subject of extensive research in the aerospace engineering field for their ability to withstand extreme heat for extended periods of time, a crucial property in applications such as thermal protection systems (TPS) for high heat fluxes (> 10 MW/m2) and rocket nozzles. Carbon fiber-reinforced carbon (C/C) maintains its structural integrity up to 2000 °C; however, C/C is mainly used as an ablative material, designed to purposefully erode under extreme temperatures in order to dissipate energy. Carbon fiber reinforced silicon carbide matrix composites (C/SiC) and Silicon carbide fiber reinforced silicon carbide matrix composites (SiC/SiC) are considered reusable materials because silicon carbide is a hard material with a low erosion and it forms a silica glass layer during oxidation which prevents further oxidation of inner material. However, above a certain temperature (which depends on the environmental conditions, such as the partial pressure of oxygen), the active oxidation of the silicon carbide matrix begins, resulting in the formation of gaseous silicon monoxide (SiO(g)). This leads to a loss of protection against further oxidation, causing the material to undergo uncontrolled and rapid erosion. For this reason C/SiC and SiC/SiC are used in the range of temperature between 1200 °C - 1400 °C. The oxidation resistance and the thermo-mechanical properties of these materials can be improved by incorporating a fraction of about 20-30% of UHTC phases, e.g., ZrB2, into the matrix.
On the one hand, CMCs are lightweight materials with a high strength-to-weight ratio even at high temperature, high thermal shock resistance and toughness, but they suffer from erosion during service. On the other hand, bulk ceramics made of ultra-high temperature ceramics (e.g. ZrB2, HfB2, or their composites) are hard materials which show low erosion even above 2000 °C, but they are heavy and suffer from catastrophic fracture and low thermal shock resistance compared to CMCs. Failure occurs easily under mechanical or thermo-mechanical loads because of cracks initiated by small defects or scratches. Current research is focused on combining several reinforcing elements (e.g. short carbon fibers, PAN- or pitch-based continuous carbon fibers, ceramic fibers, graphite sheets) with UHTC phases to reduce the brittleness of these materials.
The European Commission funded a research project, C3HARME, under the NMP-19-2015 call of Framework Programmes for Research and Technological Development in 2016-2020 for the design, manufacturing and testing of a new class of ultra-refractory ceramic matrix composites reinforced with carbon fibers suitable for applications in severe aerospace environments as possible near-zero ablation thermal protection system (TPS) materials (e.g. heat shield) and for propulsion (e.g. rocket nozzle). The demand for reusable advanced materials with temperature capability over 2000 °C has been growing. Recently, carbon fiber reinforced zirconium boride-based composites obtained by powder slurry impregnation (SI) and sintering have been investigated. With these promising properties, these materials can also be considered for other applications, including as friction materials for braking systems.
Breakthroughs in research
The European Commission funded a research project, C3HARME, under the NMP-19-2015 call of Framework Programmes for Research and Technological Development in 2016-2020 for the design, manufacturing and testing of a new class of ultra-refractory ceramic matrix composites reinforced with silicon carbide fibers and Carbon fibers suitable for applications in severe aerospace environments.
Challenges in manufacturing and machining
The manufacturing and machining of UHTCMCs present new challenges due to the unique properties of these advanced materials. Traditional manufacturing techniques such as casting and molding may not be suitable for UHTCMCs, requiring the development of specific methods like chemical vapor infiltration (CVI), polymer infiltration and pyrolysis (PIP), reactive melt infiltration (RMI), slurry impregnation and sintering (SIS), or combinations of multiple processes in sequence. CVI involves the infiltration of a porous preform, typically made of fibers, with a gas-phase precursor that decomposes at high temperatures to form a ceramic matrix. The process begins by placing the fiber preform in a reaction chamber, where it is exposed to a gaseous precursor, such as hydrocarbons (e.g., CH4) or silicon-containing compounds (e.g., SiCl4 or SiH4), in the presence of heat. At elevated temperatures, the precursor gases react and deposit a solid ceramic material onto the fibers, forming a dense matrix.
The process also ensures an adequate bonding between the matrix and the reinforcing fibers, enhancing the mechanical properties and thermal stability of the composite. However, CVI is relatively slow due to the need for long infiltration times. The method is also sensitive to process conditions, requiring careful control of temperature, pressure, and precursor concentration to avoid defects like porosity or incomplete infiltration.
PIP involves multiple cycles of polymer infiltration followed by pyrolysis, leading to high material performance, but it is time-consuming and costly due to the need for several infiltration and pyrolysis steps. RMI is faster, as molten metal or ceramic infiltrates the preform, forming a strong composite; however, it requires precise control of the high-temperature process and can be expensive depending on the materials used. SIS is the fastest process and also ensures the largest fraction of UHTC phases in the matrix, but it may face issues with uniformity and with bonding between the fibers and the matrix. Moreover, sintering occurs in hot pressing (HP) or spark plasma sintering (SPS) furnaces, which require mechanical pressure to produce a low-porosity material, so only simple shapes can be produced and scalability can be an issue. In addition, the consolidation of these materials combines strong mechanical pressing with sintering at very high temperature. These furnaces allow simple shapes to be produced, and the largest furnaces currently on the market allow plate sizes of around half a meter per side. Scalability of the process is therefore limited by the ability of these special furnaces (usually with graphite pistons and molds) to exert and control high forces uniformly over large areas at very high temperature.
The choice of process depends on the desired material properties, cost constraints, and production scale. A comparison of mechanical properties and ablation resistance of similar UHTCMC materials obtained by different technologies is reported in ref
Machining these materials is particularly challenging due to their high hardness and low fracture toughness (compared to metals), which demand advanced tools and techniques to avoid cracking or delamination. Additionally, the anisotropic nature of fiber-reinforced materials, arising from the directional arrangement of fibers, adds complexity to achieving precise shapes and finishes. Furthermore, maintaining the integrity of the fiber-matrix interface during processing is critical to preserving the material's mechanical properties. As a result, ongoing research is focused on optimizing manufacturing processes, improving tool materials, and developing novel machining strategies to meet the increasing demand for CMCs and UHTCMCs in industries such as aerospace, automotive, radioisotope production and renewable energy. Their compatibility with cells has also been studied for possible applications in biomedical fields.
References
Ceramic materials
Composite materials | Ultra-high temperature ceramic matrix composite | [
"Physics",
"Engineering"
] | 1,565 | [
"Composite materials",
"Materials",
"Ceramic materials",
"Ceramic engineering",
"Matter"
] |
56,977,848 | https://en.wikipedia.org/wiki/MAP-Seq | MAPseq or Multiplexed Analysis of Projections by Sequencing is an RNA-Seq based method for high-throughput mapping of neuronal projections. It was developed by Anthony M. Zador and his team at Cold Spring Harbor Laboratory and published in Neuron, a Cell Press journal.
The method works by uniquely labeling neurons in a source region by injecting a viral library encoding a diverse collection of RNA sequences ("barcodes"). The barcode mRNA is expressed at high levels and transported into the axon terminals at distal target projection regions. Following this, the cells from source and putative target regions of interest are harvested, and their RNA is extracted and sequenced. By matching the presence of the unique "barcode" in the source and target tissue, one can map the projections of neuron in a one-to-many fashion.
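A minimal sketch of the barcode-matching idea is given below; this is illustrative only, not the published MAPseq analysis pipeline, and the data structures and example barcodes are hypothetical:

```python
from collections import defaultdict

def map_projections(source_barcodes, target_barcodes):
    """Map each barcoded source neuron to the target regions where its barcode is recovered.

    source_barcodes: dict {barcode: source_neuron_id}    (hypothetical input format)
    target_barcodes: dict {barcode: set(target_regions)}
    """
    projections = defaultdict(set)
    for barcode, neuron in source_barcodes.items():
        # A barcode found in both the source and a target implies a projection,
        # and one neuron may project to many targets (one-to-many mapping).
        projections[neuron] |= target_barcodes.get(barcode, set())
    return dict(projections)

# Hypothetical example data
source = {"ACGTTG": "neuron_1", "TTAGCC": "neuron_2", "GGCATA": "neuron_3"}
targets = {"ACGTTG": {"superior colliculus", "LP"}, "GGCATA": {"thalamus"}}
print(map_projections(source, targets))
# e.g. neuron_1 -> {superior colliculus, LP}, neuron_2 -> no detected targets, neuron_3 -> {thalamus}
```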
See also
RNA-Seq
Patch-sequencing
References
Molecular biology
Neuroscience
External links
RNA sequencing
Molecular biology techniques | MAP-Seq | [
"Chemistry",
"Biology"
] | 196 | [
"Genetics techniques",
"Neuroscience",
"Molecular biology stubs",
"RNA sequencing",
"Molecular biology techniques",
"Molecular biology",
"Biochemistry"
] |
56,979,757 | https://en.wikipedia.org/wiki/Food%20and%20biological%20process%20engineering | Food and biological process engineering is a discipline concerned with applying principles of engineering to the fields of food production and distribution and biology. It is a broad field, with workers fulfilling a variety of roles ranging from design of food processing equipment to genetic modification of organisms. In some respects it is a combined field, drawing from the disciplines of food science and biological engineering to improve the earth's food supply.
Creating, processing, and storing food to support the world's population requires extensive interdisciplinary knowledge. Notably, there are many biological engineering processes within food engineering to manipulate the multitude of organisms involved in our complex food chain. Food safety in particular requires biological study to understand the microorganisms involved and how they affect humans. However, other aspects of food engineering, such as food storage and processing, also require extensive biological knowledge of both the food and the microorganisms that inhabit it. This food microbiology and biology knowledge becomes biological engineering when systems and processes are created to maintain desirable food properties and microorganisms while providing mechanisms for eliminating the unfavorable or dangerous ones.
Concepts
Many different concepts are involved in the field of food and biological process engineering. Below are listed several major ones.
Food science
The science behind food and food production involves studying how food behaves and how it can be improved. Researchers analyze longevity and composition (i.e., ingredients, vitamins, minerals, etc.) of foods, as well as how to ensure food safety.
Genetic engineering
Modern food and biological process engineering relies heavily on applications of genetic manipulation. By understanding plants and animals on the molecular level, scientists are able to engineer them with specific goals in mind.
Among the most notable applications of such genetic engineering is the creation of disease or insect resistant plants, such as those modified to produce Bacillus thuringiensis, a bacterium that kills strain-specific varieties of insect upon consumption. However, insects are able to adapt to Bacillus thuringiensis strains, necessitating continued research to maintain disease-resistance.
Food safety
An important task within the realm of food safety is the elimination of microorganisms responsible for food-borne illness. Food and waterborne diseases still pose a serious health concern, with hundreds of outbreaks reported per year since 1971 in the United States alone. The risk of these diseases has risen throughout the years, mainly due to the mishandling of raw food, poor sanitation, and poor socioeconomic conditions. In addition to diseases caused by direct infection by pathogens, some food borne diseases are caused by the presence of toxins produced by microorganisms in food. There are five main types of microbial pathogens which contaminate food and water: viruses, bacteria, fungi, pathogenic protozoa and helminths.
Several bacteria, such as E. coli, Clostridium botulinum, and Salmonella enterica, are well-known and are targeted for elimination via various industrial processes. Though bacteria are often the focus of food safety processes, viruses, protozoa, and molds are also known to cause food-borne illness and are of concern when designing processes to ensure food safety. Although the goal of food safety is to eliminate harmful organisms from food and prevent food-borne illness, detecting said organisms is another important function of food safety mechanisms.
Monitoring and detection
The goal of most monitoring and detection processes is the rapid detection of harmful microorganisms with minimal interruption to the processing of food products. An example of a detection mechanism that relies heavily on biological processes is usage of chromogenic microbiological media.
Chromogenic Microbiological Media
Chromogenic microbiological media use chromogenic substrates to detect the presence of certain bacteria. In conventional bacteria culturing, bacteria are allowed to grow on a medium that supports many strains. Since it is hard to isolate bacteria, many cultures of different bacteria are able to form. To identify a particular bacteria culture, scientists must identify it using only its physical characteristics. Then further tests can be performed to confirm the presence of the bacteria, such as serology tests that find antibodies formed in organisms as a response to infection. In contrast, chromogenic microbiological media contain particular color-producing substrates that are targeted for metabolism by a certain strain of bacteria. Thus, if the given cultures are present, the media will become colored accordingly as the bacteria metabolize the color-producing substrate. This greatly facilitates the identification of certain bacteria cultures and can eliminate the need for further testing. To guard against misidentification of bacteria, the chromogenic plates typically incorporate additional substrates that will be processed by other bacteria. As the non-target bacteria interact with these additional substrates, they produce colors that distinguish them from the target bacteria.
Mechanisms
Food safety has been practiced for thousands of years, but with the rise of heavily industrial agriculture, the demand for food safety has steadily increased, prompting more research into the ways to achieve greater food safety. A primary mechanism that will be discussed in this article is heating of food products to kill microorganisms, as this has a millennia-long history and is still extensively used. However, more recent mechanisms have been created such as application of ultraviolet light, high pressure, electric field, cold plasma, usage of ozone, and irradiation of food.
Heating
A report given to the Food and Drug Administration by the Institute of Food Technologists thoroughly discusses the thermal processing of food. A notable step in development of heat application to food processing is pasteurization, developed by Louis Pasteur in the nineteenth century. Pasteurization is used to kill microorganisms that could pose risks to consumers or shorten the shelf life of food products. Primarily applied to liquid food products, pasteurization is regularly applied to fruit juice, beer, milk, and ice cream. Heat applied during pasteurization varies from around 60 °C to kill bacteria to around 80 °C to kill yeasts. Most pasteurization processes have been optimized recently to involve several steps of heating at various temperatures and minimize the time needed for the process. A more severe food heating mechanism is thermal sterilization. While pasteurization destroys most bacteria and yeast growing in food products, the goal of sterilization is to kill almost all viable organisms found in food products including yeast, mold, bacteria, and spore forming organisms. Done properly, this process will greatly extend the shelf life of food products and can allow them to be stored at room temperature. As detailed in The Handbook of Food Preservation, thermal sterilization typically involves four steps. First, food products are heated to between 110 and 125 °C, and the products are given time for the heat to travel through the material completely. After this, the temperature must be maintained long enough to kill microorganisms before the food product is cooled to prevent cooking. In practice, though complete sterility of food products could be achieved, the intense and extended heating needed to accomplish this could reduce the nutritive value of the food products, thus, only a partial sterilization is performed.
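The killing effect of such heat treatments is often quantified with a first-order (decimal-reduction) model; the article does not describe this model, and the D- and z-values in the sketch below are hypothetical illustration values, not data from the text:

```python
# Illustrative first-order (decimal-reduction) model of thermal inactivation.
def log_reductions(hold_time_min: float, temp_c: float, d_ref_min: float,
                   t_ref_c: float, z_c: float) -> float:
    """Decimal (log10) reductions achieved by holding at temp_c for hold_time_min.

    The D-value at temp_c is extrapolated from the reference D-value via the z-value.
    """
    d_at_temp = d_ref_min * 10 ** ((t_ref_c - temp_c) / z_c)
    return hold_time_min / d_at_temp

# Example: hypothetical organism with D(63 degC) = 1 min and z = 5 degC,
# held for 0.5 min at 72 degC (roughly HTST-style conditions).
print(f"{log_reductions(0.5, 72, d_ref_min=1.0, t_ref_c=63, z_c=5):.1f} log reductions")
```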
Low-Temperature Process
Low-temperature processing also plays an essential role in food processing and storage. During this process, microorganisms and enzymes are subjected to low temperatures. Unlike heating, chilling does not destroy the enzymes and microorganisms but simply reduces their activity, which is effective as long as the temperature is maintained. As the temperature is raised, activity will rise again accordingly. It follows that, unlike heating, the effect of preservation by cold is not permanent; hence the importance of maintaining the cold chain throughout the shelf life of the food product.
It is important to note that there are two distinct low-temperature processes: chilling and freezing. Chilling is the application of temperatures within the range of 0–8 °C, while freezing is usually carried out below −18 °C. Refrigeration slows spoilage and reduces the risk of bacterial growth; however, it does not improve the quality of the product.
Irradiation
Food irradiation is another notable biological engineering process to achieve food safety. Research into the potential utilization of ionizing irradiation for food preservation started in the 1940s as an extension of studies on the effect of radiation on living cells. The FDA approved usage of ionizing radiation on food products in 1990. This radiation removes electrons from atoms, and these electrons go on to damage the DNA of microorganisms living in the food, killing the microorganisms. Irradiation can be used to pasteurize food products, such as seafood, poultry, and red meat, thus making these food products safer for consumers. Some irradiation is also used to delay fruit ripening processes, which can kill microorganisms that accelerate the ripening and spoilage of produce. Low dosages of radiation can also be used to kill insects living in harvested crops, as the radiation will stunt the insects' development at various stages and damage their ability to reproduce.
Food storage and preservation
Food storage and preservation is a key component of food engineering processes and relies heavily on biological engineering to understand and manipulate the organisms involved. Note that the above food safety processes such as pasteurization and sterilization destroy the microorganisms that also contribute to deterioration of food products while not necessarily posing a risk to people. Understanding of these processes, their effects, and the microorganisms at play in various food processing techniques is a very important biological engineering task within food engineering. Factories and processes must be created to ensure that food products can be processed in an efficient and effective manner, which again relies heavily on biological engineering expertise.
Produce
Preservation and processing of fresh produce poses many biological engineering challenges. Understanding of biology is particularly important to processing produce because most fruits and vegetables are living organisms from the time of harvest to the time of consumption. Before harvesting, understanding of plant ontogeny, or origin and development, and the manipulation of these developmental processes are key components of the industrial agriculture process. Understanding of plant developmental cycles governs how and when plants are harvested, impacts storage environments, and contributes to creating intervention processes. Even after harvesting, fruits and vegetables undergo the biological processes of respiration, transpiration, and ripening. Control over these natural plant processes should be achieved to prevent food spoilage, sprouting or growth of produce during storage, and reduction in quality or desirability, such as through wilting or loss of desirable texture.
Technology
When considering food storage and preservation, the technologies of modified atmosphere and controlled atmosphere are widely used for the storage and packing of several types of foods. They offer several advantages such as delay of ripening and senescence of horticultural commodities, control of some biological processes such as rancidity, insects, bacteria and decay, among others. Controlled atmosphere (CA) storage refers to atmospheres that are different than normal air and strictly controlled at all times. This type of storage manipulates the CO2 and O2 levels within airtight stores of containers. Modified atmosphere (MA) storage refers to any atmosphere different from normal air, typically made by mixing CO2, O2, and N2.
Waste management
Another biological engineering process within food engineering involves the processing of agricultural waste. Though it may fall more within the realm of environmental engineering, understanding how organisms in the environment will respond to the waste products is important for assessing the impact of the processes and comparing waste processing strategies. It is also important to understand which organisms are involved in the decomposition of the waste products, and the byproducts that will be produced as a result of their activity.
To discuss direct application of biological engineering, biological waste processing techniques are used to process organic waste and sometimes create useful byproducts. There are two main processes by which organic matter is processed via microbes: aerobic processes and anaerobic processes. These processes convert organic matter to cell mass through synthesis processes of microorganisms. Aerobic processes occur in the presence of oxygen, take organic matter as input, and produce water, carbon dioxide, nitrate, and new cell mass. Anaerobic processes occur in the absence of oxygen and produce less cell mass than aerobic processes. An additional benefit of anaerobic processes is that they also generate methane, which can be burned as a fuel source. Design of both aerobic and anaerobic biological waste processing plants requires careful control of temperature, humidity, oxygen concentration, and the waste products involved. Understanding of all aspects of the system and how they interact with one another is important for developing efficient waste management plants and falls within the realm of biological engineering.
See also
biological engineering
food science
Genetically modified organism
Genetically modified food
Genetically modified crops
References
Further reading
Gustavo V. Barbosa-Canovas, Liliana Alamilla-Beltran, Efren Parada-Arias, Jorge Welti-Chanes (2015) Water Stress in Biological, Chemical, Pharmaceutical and Food Systems. New York, NY : Springer New York : Imprint: Springer.
Jamuna Aswathanarayn & Rai, V. Ravishankar (2015). Microbial Food Safety and Preservation Techniques. Boca Raton : CRC Press Taylor & Francis Group.
Food science
Biological engineering | Food and biological process engineering | [
"Engineering",
"Biology"
] | 2,655 | [
"Biological engineering"
] |
54,055,676 | https://en.wikipedia.org/wiki/Frank-Kamenetskii%20theory | In combustion, Frank-Kamenetskii theory explains the thermal explosion of a homogeneous mixture of reactants, kept inside a closed vessel with constant temperature walls. It is named after a Russian scientist David A. Frank-Kamenetskii, who along with Nikolay Semenov developed the theory in the 1930s.
Problem description
Sources:
Consider a vessel maintained at a constant temperature , containing a homogeneous reacting mixture. Let the characteristic size of the vessel be . Since the mixture is homogeneous, the density is constant. During the initial period of ignition, the consumption of reactant concentration is negligible (see and below), thus the explosion is governed only by the energy equation. Assuming a one-step global reaction , where is the amount of heat released per unit mass of fuel consumed, and a reaction rate governed by Arrhenius law, the energy equation becomes
where
Non-dimensionalization
An increment in temperature of order , where is the Frank-Kamenetskii temperature is enough to increase the chemical reaction by amount , as is evident from the ratio
Non-dimensional scales of time, temperature, length, and heat transfer may be defined as
where
Note
In a typical combustion process, so that .
Therefore, . That is, fuel consumption time is much longer than ignition time, so fuel consumption is essentially negligible in the study of ignition.
This is why the fuel concentration is assumed to remain the initial fuel concentration .
Substituting the non-dimensional variables in the energy equation from the introduction
Since , the exponential term can be linearized , hence
At , we have and for , needs to satisfy and
Semenov theory
Before Frank-Kamenetskii, his doctoral advisor Nikolay Semyonov (or Semenov) proposed a thermal explosion theory with a simpler model, in which he assumed a linear function for the heat conduction process instead of the Laplacian operator. Semenov's equation reads as

dθ/dτ = e^θ − θ/Da, with θ(0) = 0,

in which the exponential term tends to increase θ as time proceeds whereas the linear term tends to decrease it. The relative importance of the two terms is determined by the Damköhler number Da. The numerical solution of the above equation for different values of Da is shown in the figure.
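A short numerical sketch of this behavior, assuming the nondimensional form written above (dθ/dτ = e^θ − θ/Da with θ(0) = 0); the Da values are illustrative:

```python
import math

# Below the critical value Da = 1/e the temperature settles to a steady state;
# above it, theta blows up at a finite time (thermal explosion).
def integrate_semenov(da: float, dtau: float = 1e-4, tau_max: float = 20.0, blowup: float = 50.0):
    theta, tau = 0.0, 0.0
    while tau < tau_max:
        theta += dtau * (math.exp(theta) - theta / da)  # explicit Euler step
        tau += dtau
        if theta > blowup:
            return ("explodes at tau ~", round(tau, 3))  # finite-time blow-up (ignition)
    return ("steady state theta ~", round(theta, 3))

for da in (0.2, 0.3, 0.4, 1.0):   # critical value is 1/e ~ 0.368
    print(f"Da = {da:.2f}:", *integrate_semenov(da))
```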
Steady-state regime
When Da is small enough, the linear term eventually dominates and the system is able to reach a steady state as τ → ∞. At steady state (dθ/dτ = 0), the balance is given by the equation e^θ = θ/Da, whose solution is

θ = −W(−Da),

where W represents the Lambert W function. From the properties of the Lambert W function, it is easy to see that the steady-state temperature provided by the above equation exists only when Da ≤ Da_c = e^(−1) ≈ 0.368, where Da_c is called the Frank-Kamenetskii parameter, a critical point at which the system bifurcates from the existence of a steady state to an explosive state at large times.
Explosive regime
For Da > Da_c, the system explodes since the exponential term dominates as time proceeds. We do not need to wait for a long time for θ to blow up: because of the exponential forcing, θ → ∞ at a finite value of τ. This time is interpreted as the ignition time or induction time of the system. When Da ≫ 1, the heat conduction term can be neglected, in which case the problem admits an explicit solution,

θ = −ln(1 − τ).

At time τ = 1, the system explodes. This time is also referred to as the adiabatic induction period since the heat conduction term is neglected.
In the near-critical condition, i.e., when , the system takes very long time to explode. The analysis for this limit was first carried out by Frank-Kamenetskii., although proper asymptotics were carried out only later by D. R. Kassoy and Amable Liñán including reactant consumption because reactant consumption is not negligible when . A simplified analysis without reactant consumption is presented here. Let us define a small parameter such that . For this case, the time evolution of is as follows: first it increases to steady-state temperature value corresponding to , which is given by at times of order , then it stays very close to this steady-state value for a long time before eventually exploding at a long time. The quantity of interest is the long-time estimate for the explosion. To find out the estimate, introduce the transformations and that is appropriate for the region where stays close to into the governing equation and collect only the leading-order terms to find out
where the boundary condition is derived by matching with the initial region wherein . The solution to the above-mentioned problem is given by
which immediately reveals that when Writing this condition in terms of , the explosion time in the near-critical condition is found to be
which implies that the ignition time as with a square-root singularity.
Frank-Kamenetskii steady-state theory
Sources:
The only parameter which characterizes the explosion is the Damköhler number δ (in this context also called the Frank-Kamenetskii parameter). When δ is very high, the conduction time is longer than the chemical reaction time and the system explodes with high temperature, since there is not enough time for conduction to remove the heat. On the other hand, when δ is very low, the heat conduction time is much shorter than the chemical reaction time, such that all the heat produced by the chemical reaction is immediately conducted to the wall; thus there is no explosion and the system goes to an almost steady state (Amable Liñán coined this mode the slowly reacting mode). At a critical Damköhler number δ_c the system goes from the slowly reacting mode to the explosive mode. Therefore, for δ < δ_c, the system is in steady state. Instead of solving the full problem to find δ_c, Frank-Kamenetskii solved the steady-state problem for various Damköhler numbers up to the critical value, beyond which no steady solution exists. So the problem to be solved is

∇²θ + δ e^θ = 0

with boundary conditions

θ = 0 on the wall and dθ/dx = 0 at the centre of the vessel;

the second condition is due to the symmetry of the vessel. The above equation is a special case of the Liouville–Bratu–Gelfand equation in mathematics.
Planar vessel
For planar vessel, there is an exact solution. Here , then
If the transformations and , where is the maximum temperature which occurs at due to symmetry, are introduced
Integrating once and using the second boundary condition, the equation becomes
and integrating again
The above equation is the exact solution, but maximum temperature is unknown, but we have not used the boundary condition of the wall yet. Thus using the wall boundary condition at , the maximum temperature is obtained from an implicit expression,
The critical value of δ is obtained by finding the maximum point of this implicit relation between δ and the maximum temperature (see figure), i.e., where the maximum temperature is about 1.19.
So the critical Frank-Kamenetskii parameter is δ_c ≈ 0.88. The system has no steady state (or explodes) for δ > δ_c, and for δ < δ_c the system goes to a steady state with very slow reaction.
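A quick numerical check of the planar critical value, using a standard parametrization of the exact slab solution (the variable u below is introduced here for convenience and is not the article's notation):

```python
import math
from scipy.optimize import minimize_scalar

# Along the family of steady slab solutions one finds delta = 2*u**2 / cosh(u)**2,
# where u parametrizes the maximum temperature via theta_max = 2*ln(cosh(u));
# the critical Frank-Kamenetskii parameter is the maximum of this curve.
def neg_delta(u: float) -> float:
    return -2.0 * u * u / math.cosh(u) ** 2

res = minimize_scalar(neg_delta, bounds=(0.1, 5.0), method="bounded")
u_c = res.x
delta_c = -res.fun
theta_max_c = 2.0 * math.log(math.cosh(u_c))  # maximum (centre) temperature at criticality
print(f"critical delta ~ {delta_c:.3f}, critical theta_max ~ {theta_max_c:.3f}")
# -> critical delta ~ 0.878, critical theta_max ~ 1.19
```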
Cylindrical vessel
For cylindrical vessel, there is an exact solution. Though Frank-Kamentskii used numerical integration assuming there is no explicit solution, Paul L. Chambré provided an exact solution in 1952. H. Lemke also solved provided a solution in a somewhat different form in 1913. Here , then
If the transformations and are introduced
The general solution is . But from the symmetry condition at the centre. Writing back in original variable, the equation reads,
But the original equation multiplied by is
Now subtracting the last two equation from one another leads to
This equation is easy to solve because it involves only the derivatives, so letting transforms the equation
This is a Bernoulli differential equation of order , a type of Riccati equation. The solution is
Integrating once again, we have where . We have used already one boundary condition, there is one more boundary condition left, but with two constants . It turns out and are related to each other, which is obtained by substituting the above solution into the starting equation we arrive at . Therefore, the solution is
Now if we use the other boundary condition (θ = 0 at the wall), we get an implicit equation relating δ to the remaining constant. The maximum value of δ for which a solution is possible gives the critical Frank-Kamenetskii parameter, δ_c = 2. The system has no steady state (or explodes) for δ > δ_c, and for δ < δ_c the system goes to a steady state with very slow reaction. The maximum temperature occurs at the centre of the vessel.
For each value of δ below the critical value, we have two values of the maximum temperature since the relation is multi-valued. The maximum critical temperature is about 1.39.
Spherical vessel
For spherical vessel, there is no known explicit solution, so Frank-Kamenetskii used numerical methods to find the critical value. Here , then
If the transformations and , where is the maximum temperature which occurs at due to symmetry, are introduced
The above equation is nothing but Emden–Chandrasekhar equation, which appears in astrophysics describing isothermal gas sphere. Unlike planar and cylindrical case, the spherical vessel has infinitely many solutions for oscillating about the point , instead of just two solutions, which was shown by Israel Gelfand. The lowest branch will be chosen to explain explosive behavior.
From the numerical solution, it is found that the critical Frank-Kamenetskii parameter is δ_c ≈ 3.32. The system has no steady state (or explodes) for δ > δ_c, and for δ < δ_c the system goes to a steady state with very slow reaction. The maximum temperature occurs at the centre of the vessel and the maximum critical temperature is about 1.61.
Non-symmetric geometries
For vessels which are not symmetric about the centre (for example, a rectangular vessel), the problem involves solving a nonlinear partial differential equation instead of a nonlinear ordinary differential equation, which can be solved only through numerical methods in most cases. The equation is

∇²θ + δ e^θ = 0

with the boundary condition θ = 0 on the bounding surfaces.
Applications
Since the model assumes homogeneous mixture, the theory is well applicable to study the explosive behavior of solid fuels (spontaneous ignition of bio fuels, organic materials, garbage, etc.,). This is also used to design explosives and fire crackers. The theory predicted critical values accurately for low conductivity fluids/solids with high conductivity thin walled containers.
See also
Clarke's equation
References
External links
The Frank-Kamenetskii problem in Wolfram solver http://demonstrations.wolfram.com/TheFrankKamenetskiiProblem/
Tracking the Frank-Kamenetskii Problem in Wolfram solver http://demonstrations.wolfram.com/TrackingTheFrankKamenetskiiProblem/
Planar solution in Chebfun solver http://www.chebfun.org/examples/ode-nonlin/BlowupFK.html
Fluid dynamics
Combustion
Explosions | Frank-Kamenetskii theory | [
"Chemistry",
"Engineering"
] | 2,060 | [
"Chemical engineering",
"Combustion",
"Explosions",
"Piping",
"Fluid dynamics"
] |
54,056,040 | https://en.wikipedia.org/wiki/Emden%E2%80%93Chandrasekhar%20equation | In astrophysics, the Emden–Chandrasekhar equation is a dimensionless form of the Poisson equation for the density distribution of a spherically symmetric isothermal gas sphere subjected to its own gravitational force, named after Robert Emden and Subrahmanyan Chandrasekhar. The equation was first introduced by Robert Emden in 1907. The equation reads

(1/ξ²) d/dξ (ξ² dψ/dξ) = e^(−ψ),

where ξ is the dimensionless radius and ψ is related to the density of the gas sphere as ρ = ρ_c e^(−ψ), where ρ_c is the density of the gas at the centre. The equation has no known explicit solution. If a polytropic fluid is used instead of an isothermal fluid, one obtains the Lane–Emden equation. The isothermal assumption is usually modeled to describe the core of a star. The equation is solved with the initial conditions ψ = 0 and dψ/dξ = 0 at ξ = 0.
The equation appears in other branches of physics as well, for example the same equation appears in the Frank-Kamenetskii explosion theory for a spherical vessel. The relativistic version of this spherically symmetric isothermal model was studied by Subrahmanyan Chandrasekhar in 1972.
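A minimal numerical integration of the equation as written above; the small-ξ series start is a standard device to avoid the coordinate singularity at the origin, and the sampled radii are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Isothermal-sphere (Emden-Chandrasekhar) equation:
#   psi'' + (2/xi) psi' = exp(-psi),  psi(0) = psi'(0) = 0,
# started slightly off the origin with the series psi ~ xi^2/6.
def rhs(xi, y):
    psi, dpsi = y
    return [dpsi, np.exp(-psi) - 2.0 * dpsi / xi]

xi0 = 1e-6
y0 = [xi0**2 / 6.0, xi0 / 3.0]  # small-xi series expansion
sol = solve_ivp(rhs, (xi0, 50.0), y0, rtol=1e-8, atol=1e-10, dense_output=True)

for xi in (1.0, 5.0, 10.0, 50.0):
    psi = sol.sol(xi)[0]
    print(f"xi = {xi:5.1f}:  psi = {psi:7.4f},  density ratio exp(-psi) = {np.exp(-psi):.4e}")
```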
Derivation
For an isothermal gaseous star, the pressure is due to the kinetic pressure and radiation pressure
where
is the density
is the Boltzmann constant
is the mean molecular weight
is the mass of the proton
is the temperature of the star
is the Stefan–Boltzmann constant
is the speed of light
The equation for equilibrium of the star requires a balance between the pressure force and gravitational force
where is the radius measured from the center and is the gravitational constant. The equation is re-written as
Introducing the transformation
where is the central density of the star, leads to
The boundary conditions are
For , the solution goes like
Limitations of the model
Assuming isothermal sphere has some disadvantages. Though the density obtained as solution of this isothermal gas sphere decreases from the centre, it decreases too slowly to give a well-defined surface and finite mass for the sphere. It can be shown that, as ,
where and are constants which will be obtained with numerical solution. This behavior of density gives rise to increase in mass with increase in radius. Thus, the model is usually valid to describe the core of the star, where the temperature is approximately constant.
Singular solution
Introducing the transformation transforms the equation to
The equation has a singular solution given by
Therefore, a new variable can be introduced as , where the equation for can be derived,
This equation can be reduced to first order by introducing
then we have
Reduction
There is another reduction due to Edward Arthur Milne. Let us define
then
Properties
If ψ(ξ) is a solution to the Emden–Chandrasekhar equation, then ψ(Aξ) − 2 ln A is also a solution of the equation, where A is an arbitrary constant.
The solutions of the Emden–Chandrasekhar equation which are finite at the origin necessarily have dψ/dξ = 0 at ξ = 0.
See also
Lane–Emden equation
Frank-Kamenetskii theory
Chandrasekhar's white dwarf equation
References
Equations of physics
Fluid dynamics
Stellar dynamics
Ordinary differential equations | Emden–Chandrasekhar equation | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 594 | [
"Equations of physics",
"Chemical engineering",
"Mathematical objects",
"Astrophysics",
"Equations",
"Piping",
"Stellar dynamics",
"Fluid dynamics"
] |
55,596,625 | https://en.wikipedia.org/wiki/Hywind%20Scotland | Hywind Scotland is the world's first commercial wind farm using floating wind turbines, situated off Peterhead, Scotland.
The farm has five 6 MW Siemens direct-drive turbines on Hywind floating monopiles, with a total capacity of 30 MW. It is operated by Hywind (Scotland) Limited, a joint venture of Equinor (75%) and Masdar (25%).
Equinor (then: Statoil) launched the world's first operational deep-water floating large-capacity wind turbine in 2009, the 2.3 MW Hywind, which cost 400 million NOK (US$71 million, $31/W). The tall tower with a 2.3 MW Siemens turbine was towed from the Åmøy fjord and offshore into the North Sea in deep water, off of Stavanger, Norway on 9 June 2009 for a two-year test run, but remains working at the site while surviving wind speed and 19 m waves.
In 2015, the company received permission to install the wind farm in Scotland, in an attempt at reducing the cost relative to the original Hywind, in accordance with the Scottish Government's commitment for cost reduction. Manufacturing for the project, with a budgeted cost of NOK2 billion (£152m), started in 2016 in Spain, Norway and Scotland. The turbines were assembled at Stord in Norway in summer 2017 using the Saipem 7000 floating crane, and the finished turbines were moved to near Peterhead. Three suction anchors hold each turbine. Hywind Scotland was commissioned in October 2017.
While cost was reduced compared to the very expensive Hywind One at $31m/MW, it still came with a final capital cost of £264m, or £8.8m/MW, approximately three times the capital cost of fixed offshore windfarms. Measured by unit cost, Hywind's levelized cost of electricity (LCoE) is then £180/MWh ($248/MWh), about three times the typical LCoE of a fixed offshore wind farm at £55/MWh ($75.7/MWh). The high cost is partly compensated by £165.27/MWh from Renewable Obligation Certificates.
In its first 5 years of operation the facility has averaged a capacity factor of 54%, sometimes in 10 meter waves. By shutting down at the worst conditions, it survived Hurricane Ophelia, and then Storm Caroline with wind gusts at and waves of 8.2 metres.
The subsequent 88 MW Hywind Tampen (with concrete floating foundations) became operational at the Snorre and Gullfaks oil fields in Norway in 2023 at a cost of NOK 8 billion or £600m (£6.8m/MW).
In May 2024 all 5 turbines were to be towed back to Norway for several months of the heavy maintenance of replacing the main bearings. All turbines were operating again by October 2024.
See also
Offshore wind power
References
2017 establishments in Scotland
2017 in technology
Equinor
Wind farms in Scotland
Offshore wind farms in the North Sea
Floating wind turbines
Energy infrastructure completed in 2017 | Hywind Scotland | [
"Engineering"
] | 650 | [
"Floating wind turbines",
"Offshore engineering"
] |
55,608,661 | https://en.wikipedia.org/wiki/Nuclear%20reactor%20heat%20removal | The removal of heat from nuclear reactors is an essential step in the generation of energy from nuclear reactions. In nuclear engineering there are a number of empirical or semi-empirical relations used for quantifying the process of removing heat from a nuclear reactor core so that the reactor operates in the projected temperature interval that depends on the materials used in the construction of the reactor. The effectiveness of removal of heat from the reactor core depends on many factors, including the cooling agents used and the type of reactor. Common liquid coolants for nuclear reactors include: deionized water (with boric acid as a chemical shim during early burnup), heavy water, the lighter alkaline metals (such as sodium and lithium), lead or lead-based eutectic alloys like lead-bismuth, and NaK, a eutectic alloy of sodium and potassium. Gas cooled reactors operate with coolants like carbon dioxide, helium or nitrogen but some very low powered research reactors have even been air-cooled with Chicago Pile 1 relying on natural convection of the surrounding air to remove the negligible thermal power output. There is ongoing research into using supercritical fluids as reactor coolants but thus far neither the supercritical water reactor nor a reactor cooled with supercritical Carbon Dioxide nor any other kind of supercritical-fluid-cooled reactor has ever been built.
Theoretical framework
The thermal energy produced in nuclear fuel comes mainly from the kinetic energy of fission fragments. Therefore, the heat generated per volume unit is proportional to the fraction of nuclear fissionable fuel burned in the unit of time:
where represents the number of atoms in a cubic meter of fuel, a is the amount of energy released in the fuel in each fission reaction (~181 MeV), is the neutronic flux, and is the effective section of the fission.
The total heat produced in the nuclear reactor is:
where is the mean neutronic flux and V is the fuel volume (normally measured in ).
Recovery of this amount of heat is achieved by using cooling fluids whose temperature at the entrance to the reactor channel will increase with the distance traveled in the channel. The thermal balance of the channel is expressed by the relationship:
where is the flow rate of the cooling agent, is the specific heat at constant pressure, is the increase in the temperature of the fluid after passing a distance in the channel, is the heat generated per unit volume of the fuel, is the fuel cell radius and is the number of channel bars.
Under these conditions, the temperature of the cooling agent at distance z travelled into the cooling channel inside nuclear reactor is obtained by integrating the previous equation:
The difference between the temperature of the outer surface of the tube-channel and the temperature of the fluid is obtained from the relationship:
where is the local heat flow on the casing - cooler contact surface unit and is the heat transfer agent casing-cooling agent.
The heat discharge from PWR and PHWR reactors is made by pressurized water under forced convection. The general expression for determining the transfer coefficient is given by the Dittus–Boelter equation:

Nu = C Re^0.8 Pr^0.4

where Nu = h d_e/λ is the Nusselt number (h is the heat transfer coefficient, d_e is the equivalent diameter, λ is the thermal conductivity of the fluid); C is a constant (= 0.023); Re = ρ w d_e/μ is the Reynolds number (w is the average velocity of the fluid in the section considered, ρ is the density of the fluid and μ is its dynamic viscosity); Pr = c_p μ/λ is the Prandtl number.
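An illustrative evaluation of this correlation for a water-cooled channel; the property values and channel dimensions below are rough assumed numbers, not data for any particular reactor:

```python
# Dittus-Boelter estimate of the single-phase heat transfer coefficient.
def dittus_boelter_h(rho, w, d_e, mu, c_p, k, C=0.023):
    re = rho * w * d_e / mu        # Reynolds number
    pr = c_p * mu / k              # Prandtl number
    nu = C * re**0.8 * pr**0.4     # Nusselt number (coolant being heated)
    return nu * k / d_e            # heat transfer coefficient, W/(m^2 K)

# Pressurized water at roughly 300 degC (approximate properties), 4 m/s in a 12 mm channel
h = dittus_boelter_h(rho=720.0, w=4.0, d_e=0.012, mu=9e-5, c_p=5500.0, k=0.55)
print(f"h ~ {h / 1000:.0f} kW/(m^2 K)")   # on the order of tens of kW/(m^2 K)
```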
If the flow of the fluid is made under conditions of a great difference between its temperature and the contact surface, the transfer coefficient is determined from the relationship:
where is the dynamic viscosity of the coolant at the temperature of the adhering fluid film at the surface of the casing. The relation presented above is valid in the case of a long channel with , where is the length of the channel.
The transfer coefficient for cooling the pipes by natural convection is obtained from:
where is the Grashof number given by the expression:
We use the notation for the volume expansion coefficient of the fluid, is the gravitational acceleration and is the difference between the average wall temperatures of the casing and the cooling agent.
In boiling water cooled reactors (BWR) and partly in pressure water cooled reactors (PWR and PHWR) the heat transfer is made with a vapor phase in the cooling medium, which is why this type of heat transfer is called heat transfer in a biphasic system. This allows obtaining much higher transfer coefficients than the one-phase heat transfer described in the Dittus-Boelter equation.
Increasing the flow of heat, reducing the agent flow and lowering the pressure can lead to increased temperature of the cooled surface. If the temperature of the fluid in the channel section that we consider is lower than the boiling temperature under local pressure conditions, the vaporization is limited to the immediate vicinity of the surface and in this case the boiling is called submerged boiling. There is no proportionality between the heat flow and the difference between the surface temperature and the coolant temperature that allows the definition of a heat transfer coefficient similar to the one-phase case. In this situation we can use the equation of Jens and Lottes, which establishes a connection between the difference between the surface temperature and the boiling temperature of the cooling agent under local pressure conditions below the thermal flux :
where and
If the temperature of the fluid in the channel section considered is slightly higher than the boiling temperature under local pressure conditions, the heat transfer is by boiling with nucleation, forming vapor bubbles trained by the cooling agent (that becomes biphasic throughout its entire volume). However, the vapor content is relatively small and the continuous phase remains the liquid phase. The vapor content of the PHW-CANDU reactor is about 0.03-0.04 kg steam / kg of agent, thus increasing the amount of heat transported by the unit mass of agent by over 10%. If the cooled surface temperature far exceeds the boiling temperature of the cooling agent in the channel section, the vapor content of the agent increases considerably, the continuous phase becoming the vapor phase and the liquid phase becoming only a suspension between vapors. The cooled surface remains covered with a liquid film which still provides a very high heat transfer coefficient, at BWR compared to at PWR. The film of liquid is continuously fed with drops from the agent suspension.
A further increase in surface temperature leads to a temporary interruption of continuity of the liquid film adhering to the cooled surface. Watering of the surface continues, however, by the drops of liquid in the suspension that are present in the cooling agent as long as the heat flow remains below a value that depends on local conditions (value that is called critical flux). Over this flux there is a thermal transfer crisis characterized by a sudden decrease in the transfer coefficient due to the presence of only one-phase transfer. The heat transfer coefficient in the pre-crisis period can be determined from the relationship:
where
In these formulas the following notations were made: is pressure losses for the two phases (water and vapors), ( - the thermal flux, - the enthalpy of the biphasic liquid-gaseous mixture). The heat transfer coefficient during the crisis is related to the critical heat flow through a linear relationship, of the equation type that was presented before:
Where is the temperature of the surface in thermal transfer crisis, and is the temperature of the vapor at saturation.
The critical heat flux is obtained by using Kutateladze's formula:

q_cr = K h_lv ρ_v^(1/2) [σ g (ρ_l − ρ_v)]^(1/4)

where h_lv (J/kg) is the latent heat of vaporization, ρ_l and ρ_v are the densities of the liquid and of the saturated vapor, σ is the surface tension in N/m, g is the gravitational acceleration and K is a dimensionless constant. The heat transfer in gas-cooled reactors is carried out by forced convection. For a gaseous thermal agent, the heat transfer coefficient can be deduced from a relation of the Dittus–Boelter type, but taking, for the quantities involved, the values corresponding to the average temperature of the fluid film (denoted by the film index):
which differs in the use of water by a slightly lower value of the coefficient a.
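Returning to the boiling-crisis expression above, a rough evaluation for water at atmospheric pressure is sketched below; the constant K ≈ 0.13 and the property values are standard textbook figures assumed here, not values given in the article:

```python
# Kutateladze-type critical heat flux for pool boiling of water at 1 atm.
def critical_heat_flux(h_lv, rho_l, rho_v, sigma, g=9.81, K=0.131):
    return K * h_lv * rho_v**0.5 * (sigma * g * (rho_l - rho_v))**0.25

q_cr = critical_heat_flux(h_lv=2.257e6,   # latent heat of vaporization, J/kg
                          rho_l=958.0,    # liquid density, kg/m^3
                          rho_v=0.598,    # saturated vapor density, kg/m^3
                          sigma=0.0589)   # surface tension, N/m
print(f"q_cr ~ {q_cr / 1e6:.2f} MW/m^2")  # around 1.1 MW/m^2 for water at 1 atm
```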
Forced flow relationships established for fluids are also not valid for liquid metals. The coefficient of heat transfer for circular pipelines with constant heat flux, where the heat evacuation is achieved by the turbulent flow of the molten metals, can be estimated with a relation of the type:
where is the number of Peclet ().
Examples of heat evacuation hydrodynamic parameters
For exemplification of the above formulas the hydrodynamic parameters of some types of reactors can be found in the following table:
G1 and EL-4 are reactors that were built in France, while VVER-440 is a reactor that has been constructed in the Soviet Union.
References
Nuclear reactors
Nuclear power
Cooling technology | Nuclear reactor heat removal | [
"Physics"
] | 1,792 | [
"Power (physics)",
"Physical quantities",
"Nuclear power"
] |
48,839,520 | https://en.wikipedia.org/wiki/Karlovitz%20number | In combustion, the Karlovitz number is defined as the ratio of the chemical time scale t_c to the Kolmogorov time scale t_η, named after Béla Karlovitz. The number reads as

Ka = t_c / t_η.

In premixed turbulent combustion, the chemical time scale can be defined as t_c = α/S_L², where α is the thermal diffusivity and S_L is the laminar flame speed, and the flame thickness is given by δ = α/S_L, in which case

Ka = δ²/η²,

where η is the Kolmogorov length scale. The Karlovitz number is related to the Damköhler number as

Ka = 1/Da

if the Damköhler number is defined with the Kolmogorov scale. If Ka < 1, the premixed turbulent flame falls into the category of corrugated flamelets and wrinkled flamelets, otherwise into the thin reaction zone or broken reaction zone flames.
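An illustrative computation of the Karlovitz number and the corresponding regime from assumed flame and turbulence quantities (all numbers below are placeholders, not data from the article):

```python
# Karlovitz number from the relation Ka = (delta/eta)^2 given above (unity Schmidt number).
def karlovitz(alpha, s_l, eta):
    delta = alpha / s_l          # laminar flame thickness
    return (delta / eta) ** 2

# alpha: thermal diffusivity (m^2/s), s_l: laminar flame speed (m/s), eta: Kolmogorov scale (m)
for eta_mm in (1.0, 0.3, 0.04):
    ka = karlovitz(alpha=2e-5, s_l=0.4, eta=eta_mm * 1e-3)
    regime = "flamelet regime (Ka < 1)" if ka < 1 else "thin/broken reaction zones (Ka > 1)"
    print(f"eta = {eta_mm:.2f} mm -> Ka = {ka:6.2f}: {regime}")
```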
Klimov–Williams criterion
In premixed turbulent combustion, the Klimov–Williams criterion or Klimov–Williams limit, named after A.M. Klimov and Forman A. Williams, is the condition Ka = 1 (assuming a Schmidt number of unity). When Ka < 1, the flame thickness is smaller than the Kolmogorov scale, thus the flame burning velocity is not affected by the turbulence field. Here, the burning velocity is given by the laminar flame speed, and these laminar flamelets are called wrinkled flamelets or corrugated flamelets, depending on the turbulence intensity. When Ka > 1, the turbulent transport penetrates into the preheat zone of the flame (thin reaction zones) or even into the reactive-diffusive zone (distributed flames).
References
Chemical kinetics
Combustion
Dimensionless numbers of fluid mechanics
Fluid dynamics
Dimensionless numbers of chemistry | Karlovitz number | [
"Chemistry",
"Engineering"
] | 331 | [
"Chemical reaction engineering",
"Chemical engineering",
"Combustion",
"Piping",
"Chemical kinetics",
"Dimensionless numbers of chemistry",
"Fluid dynamics"
] |
48,840,195 | https://en.wikipedia.org/wiki/Bridge%20law | Bridge law is the body of laws which apply to bridges in a particular jurisdiction.
United States
In the United States, legislative authority to erect a bridge is necessary in three cases: first, when toll is demanded for its use—the right to take toll being a franchise which cannot be claimed without express grant from the state; second, when the state owns the bed of the stream over which the bridge extends, as is the case in all public or navigable streams; third, when the structure interferes or threatens to interfere with navigation. In the last case the authority of state governments is subject to the power given to Congress by the Federal Constitution to “regulate commerce with foreign nations, and among the several states.” (Art. I., §8.) The states may authorize bridges over navigable streams, and may regulate their size, form, and manner of construction. Until Congress intervenes in such cases the power of the states is unlimited. When it does intervene, however, its will is supreme, and its legislation, within the limits of the constitutional grant, overrides that of any state. A bridge constructed over a navigable river in accordance with an act of Congress is a lawful structure, however much it may interfere with the public right of navigation.
See also
Bridges Act
Federal Bridge Gross Weight Formula
Notes
References
Law
Bridges | Bridge law | [
"Physics",
"Engineering"
] | 268 | [
"Structural engineering",
"Transport law",
"Physical systems",
"Transport",
"Bridges"
] |
48,841,659 | https://en.wikipedia.org/wiki/Transferability%20%28economics%29 | Transferability refers to the costs involved in moving goods from one place to another. These include the costs of transportation, the costs of making the goods compliant with the regulations of the shipping destination, and the costs associated with tariffs or duties.
References
Traffic management | Transferability (economics) | [
"Engineering"
] | 52 | [
"Systems engineering",
"Traffic management"
] |
36,904,670 | https://en.wikipedia.org/wiki/Gauge%20theory%20gravity | Gauge theory gravity (GTG) is a theory of gravitation cast in the mathematical language of geometric algebra. To those familiar with general relativity, it is highly reminiscent of the tetrad formalism although there are significant conceptual differences. Most notably, the background in GTG is flat, Minkowski spacetime. The equivalence principle is not assumed, but instead follows from the fact that the gauge covariant derivative is minimally coupled. As in general relativity, equations structurally identical to the Einstein field equations are derivable from a variational principle. A spin tensor can also be supported in a manner similar to Einstein–Cartan–Sciama–Kibble theory. GTG was first proposed by Lasenby, Doran, and Gull in 1998 as a fulfillment of partial results presented in 1993. The theory has not been widely adopted by the rest of the physics community, who have mostly opted for differential geometry approaches like that of the related gauge gravitation theory.
Mathematical foundation
The foundation of GTG comes from two principles. First, position-gauge invariance demands that arbitrary local displacements of fields not affect the physical content of the field equations. Second, rotation-gauge invariance demands that arbitrary local rotations of fields not affect the physical content of the field equations. These principles lead to the introduction of a new pair of linear functions, the position-gauge field and the rotation-gauge field. A displacement by some arbitrary function f
gives rise to the position-gauge field defined by the mapping on its adjoint,
which is linear in its first argument and a is a constant vector. Similarly, a rotation by some arbitrary rotor R gives rise to the rotation-gauge field
We can define two different covariant directional derivatives
or with the specification of a coordinate system
where × denotes the commutator product.
The first of these derivatives is better suited for dealing directly with spinors whereas the second is better suited for observables. The GTG analog of the Riemann tensor is built from the commutation rules of these derivatives.
Field equations
The field equations are derived by postulating the Einstein–Hilbert action governs the evolution of the gauge fields, i.e.
Minimizing variation of the action with respect to the two gauge fields results in the field equations
where is the covariant energy–momentum tensor and is the covariant spin tensor. Importantly, these equations do not give an evolving curvature of spacetime but rather merely give the evolution of the gauge fields within the flat spacetime.
Relation to general relativity
For those more familiar with general relativity, it is possible to define a metric tensor from the position-gauge field in a manner similar to tetrads. In the tetrad formalism, a set of four vectors are introduced. The Greek index μ is raised or lowered by multiplying and contracting with the spacetime's metric tensor. The parenthetical Latin index (a) is a label for each of the four tetrads, which is raised and lowered as if it were multiplied and contracted with a separate Minkowski metric tensor. GTG, roughly, reverses the roles of these indices. The metric is implicitly assumed to be Minkowski in the selection of the spacetime algebra. The information contained in the other set of indices gets subsumed by the behavior of the gauge fields.
We can make the associations
for a covariant vector and contravariant vector in a curved spacetime, where now the unit vectors are the chosen coordinate basis. These can define the metric using the rule
Following this procedure, it is possible to show that for the most part the observable predictions of GTG agree with Einstein–Cartan–Sciama–Kibble theory for non-vanishing spin and reduce to general relativity for vanishing spin. GTG does, however, make different predictions about global solutions. For example, in the study of a point mass, the choice of a "Newtonian gauge" yields a solution similar to the Schwarzschild metric in Gullstrand–Painlevé coordinates. General relativity permits an extension known as the Kruskal–Szekeres coordinates. GTG, on the other hand, forbids any such extension.
References
External links
David Hestenes: Spacetime calculus for gravitation theory – an account of the mathematical formalism explicitly directed to GTG
Gauge theories
Geometric algebra
Theories of gravity
ru:Релятивистская теория гравитации | Gauge theory gravity | [
"Physics"
] | 930 | [
"Theoretical physics",
"Theories of gravity"
] |
39,773,873 | https://en.wikipedia.org/wiki/Fourth%20Industrial%20Revolution | "Fourth Industrial Revolution", "4IR", or "Industry 4.0", is a neologism describing rapid technological advancement in the 21st century. It follows the Third Industrial Revolution (the "Information Age"). The term was popularised in 2016 by Klaus Schwab, the World Economic Forum founder and executive chairman, who asserts that these developments represent a significant shift in industrial capitalism.
A part of this phase of industrial change is the joining of technologies such as artificial intelligence, gene editing, and advanced robotics that blur the lines between the physical, digital, and biological worlds.
Throughout this, fundamental shifts are taking place in how the global production and supply network operates through ongoing automation of traditional manufacturing and industrial practices, using modern smart technology, large-scale machine-to-machine communication (M2M), and the Internet of things (IoT). This integration results in increasing automation, improving communication and self-monitoring, and the use of smart machines that can analyse and diagnose issues without the need for human intervention.
It also represents a social, political, and economic shift from the digital age of the late 1990s and early 2000s to an era of embedded connectivity distinguished by the ubiquity of technology in society (i.e. a metaverse) that changes the ways humans experience and know the world around them. It posits that we have created and are entering an augmented social reality compared to just the natural senses and industrial ability of humans alone. The Fourth Industrial Revolution is sometimes expected to mark the beginning of an imagination age, where creativity and imagination become the primary drivers of economic value.
History
The phrase Fourth Industrial Revolution was first introduced by a team of scientists developing a high-tech strategy for the German government. Klaus Schwab, executive chairman of the World Economic Forum (WEF), introduced the phrase to a wider audience in a 2015 article published by Foreign Affairs. "Mastering the Fourth Industrial Revolution" was the 2016 theme of the World Economic Forum Annual Meeting, in Davos-Klosters, Switzerland.
On 10 October 2016, the Forum announced the opening of its Centre for the Fourth Industrial Revolution in San Francisco. This was also the subject and title of Schwab's 2016 book. Schwab includes in this fourth era technologies that combine hardware, software, and biology (cyber-physical systems), and emphasises advances in communication and connectivity. Schwab expects this era to be marked by breakthroughs in emerging technologies in fields such as robotics, artificial intelligence, nanotechnology, quantum computing, biotechnology, the internet of things, the industrial internet of things, decentralised consensus, fifth-generation wireless technologies, 3D printing, and fully autonomous vehicles.
In The Great Reset proposal by the WEF, The Fourth Industrial Revolution is included as a strategic intelligence in the solution to rebuild the economy sustainably following the COVID-19 pandemic.
First Industrial Revolution
The First Industrial Revolution was marked by a transition from hand production methods to machines through the use of steam power and water power. The implementation of new technologies took a long time, so the period which this refers to was between 1760 and 1820, or 1840 in Europe and the United States. Its effects had consequences on textile manufacturing, which was the first to adopt such changes, as well as the iron industry, agriculture, and mining; it also had societal effects, including the rise of an ever-stronger middle class.
Second Industrial Revolution
The Second Industrial Revolution, also known as the Technological Revolution, is the period between 1871 and 1914 that resulted from installations of extensive railroad and telegraph networks, which allowed for faster transfer of people and ideas, as well as electricity. Increasing electrification allowed for factories to develop the modern production line.
Third Industrial Revolution
The Third Industrial Revolution, also known as the Digital Revolution, began in the late 20th century. It is characterized by the shift to an economy centered on information technology, marked by the advent of personal computers, the Internet, and the widespread digitalization of communication and industrial processes.
A book titled The Third Industrial Revolution, by Jeremy Rifkin, was published in 2011, which focused on the intersection of digital communications technology and renewable energy. It was made into a 2017 documentary by Vice Media.
Characteristics
In essence, the Fourth Industrial Revolution is the trend towards automation and data exchange in manufacturing technologies and processes which include cyber-physical systems (CPS), Internet of Things (IoT), cloud computing, cognitive computing, and artificial intelligence.
Machines improve human efficiency in performing repetitive functions, and the combination of machine learning and computing power allows machines to carry out increasingly complex tasks.
The Fourth Industrial Revolution has been defined as technological developments in cyber-physical systems such as high capacity connectivity; new human-machine interaction modes such as touch interfaces and virtual reality systems; and improvements in transferring digital instructions to the physical world including robotics and 3D printing (additive manufacturing); "big data" and cloud computing; improvements to and uptake of Off-Grid / Stand-Alone Renewable Energy Systems: solar, wind, wave, hydroelectric and the electric batteries (lithium-ion renewable energy storage systems (ESS) and EV).
It also emphasizes decentralized decisions – the ability of cyber physical systems to make decisions on their own and to perform their tasks as autonomously as possible. Only in the case of exceptions, interference, or conflicting goals, are tasks delegated to a higher level.
Distinctiveness
Proponents of the Fourth Industrial Revolution suggest it is a distinct revolution rather than simply a prolongation of the Third Industrial Revolution. This is due to the following characteristics:
Velocity — exponential speed at which incumbent industries are affected and displaced
Scope and systems impact – the large amount of sectors and firms that are affected
Paradigm shift in technology policy – new policies designed for this new way of doing are present. An example is Singapore's formal recognition of Industry 4.0 in its innovation policies.
Critics of the concept dismiss Industry 4.0 as a marketing strategy. They suggest that although revolutionary changes are identifiable in distinct sectors, there is no systemic change so far. In addition, the pace of recognition of Industry 4.0 and policy transition varies across countries; the definition of Industry 4.0 is not harmonised. One of the best-known critics is Jeremy Rifkin, who "agree[s] that digitalization is the hallmark and defining technology in what has become known as the Third Industrial Revolution". However, he argues "that the evolution of digitalization has barely begun to run its course and that its new configuration in the form of the Internet of Things represents the next stage of its development".
Components
The application of the Fourth Industrial Revolution operates through:
Mobile devices
Location detection technologies (electronic identification)
Advanced human-machine interfaces
Authentication and fraud detection
Smart sensors
Big analytics and advanced processes
Multilevel customer interaction and customer profiling
Augmented reality/wearables
On-demand availability of computer system resources
Data visualisation
Industry 4.0 networks a wide range of new technologies to create value. Using cyber-physical systems that monitor physical processes, a virtual copy of the physical world can be designed. Characteristics of cyber-physical systems include the ability to make decentralised decisions independently, reaching a high degree of autonomy.
Value creation in Industry 4.0 relies on electronic identification: smart manufacturing requires a set of technologies to be incorporated into the manufacturing process for it to be classified as being on the Industry 4.0 development path rather than mere digitisation.
Trends
Smart factory
The Fourth Industrial Revolution fosters "smart factories", which are production environments in which facilities and logistics systems are organised with minimal human intervention.
The technical foundations on which smart factories are based are cyber-physical systems that communicate with each other using the Internet of Things and Services. An important part of this process is the exchange of data between the product and the production line. This enables a much more efficient connection of the Supply Chain and better organisation within any production environment.
Within modular structured smart factories, cyber-physical systems monitor physical processes, create a virtual copy of the physical world and make decentralised decisions. Over the internet of things, cyber-physical systems communicate and cooperate with each other and with humans in synchronic time both internally and across organizational services offered and used by participants of the value chain.
Artificial intelligence
Artificial intelligence (AI) has a wide range of applications across all sectors of the economy. It gained prominence following advancements in deep learning during the 2010s, and its impact intensified in the 2020s with the rise of generative AI, a period often referred to as the "AI boom". Models like GPT-4o can engage in verbal and textual discussions and analyze images.
AI is a key driver of Industry 4.0, orchestrating technologies like robotics, automated vehicles, and real-time data analytics. By enabling machines to perform complex tasks, AI is redefining production processes and reducing changeover times. AI could also significantly accelerate, or even automate software development.
Some experts believe that AI alone could be as transformative as an industrial revolution. Multiple companies such as OpenAI and Meta have expressed the goal of creating artificial general intelligence (AI that can do virtually any cognitive task a human can), making large investments in data centers and GPUs to train more capable AI models.
Robotics
Humanoid robots have traditionally lacked usefulness. They had difficulty picking up simple objects due to imprecise control and coordination, and they did not understand their environment or how physics works. They were often explicitly programmed for narrow tasks, failing when encountering new situations. Modern humanoid robots, however, are typically based on machine learning, in particular reinforcement learning. As of 2024, humanoid robots are rapidly becoming more flexible, easier to train and more versatile.
Predictive maintenance
Industry 4.0 facilitates predictive maintenance, due to the use of advanced technologies, including IoT sensors. Predictive maintenance, which can identify potential maintenance issues in real time, allows machine owners to perform cost-effective maintenance before the machinery fails or gets damaged. For example, a company in Los Angeles could determine whether a piece of equipment in Singapore is running at an abnormal speed or temperature, and then decide whether or not it needs to be repaired.
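As a toy illustration of how such condition monitoring can work, the sketch below flags a reading that drifts outside a recent statistical band; the thresholds, sensor values and function names are hypothetical, not taken from any real system.

```python
# Minimal sketch of rule-based predictive-maintenance monitoring: flag a
# machine when a streamed sensor reading drifts outside the band implied by
# recent history. Thresholds and readings are hypothetical illustrations.

from statistics import mean, stdev

def is_abnormal(history, latest, n_sigma=3.0):
    """Flag a reading that deviates more than n_sigma from the recent history."""
    mu, sigma = mean(history), stdev(history)
    return abs(latest - mu) > n_sigma * sigma

temperature_history = [71.2, 70.8, 71.5, 70.9, 71.1, 71.3, 70.7, 71.0]  # degrees C
latest_reading = 78.4

if is_abnormal(temperature_history, latest_reading):
    print("Abnormal temperature: schedule maintenance before failure occurs.")
else:
    print("Reading within normal operating band.")
```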
3D printing
The Fourth Industrial Revolution is said to have extensive dependency on 3D printing technology. Some advantages of 3D printing for industry are that 3D printing can print many geometric structures, as well as simplify the product design process. It is also relatively environmentally friendly. In low-volume production, it can also decrease lead times and total production costs. Moreover, it can increase flexibility, reduce warehousing costs and help the company towards the adoption of a mass customisation business strategy. In addition, 3D printing can be very useful for printing spare parts and installing them locally, therefore reducing supplier dependence and reducing the supply lead time.
Smart sensors
Sensors and instrumentation drive the central forces of innovation, not only for Industry 4.0 but also for other "smart" megatrends, such as smart production, smart mobility, smart homes, smart cities, and smart factories.
Smart sensors are devices, which generate the data and allow further functionality from self-monitoring and self-configuration to condition monitoring of complex processes.
With the capability of wireless communication, they reduce installation effort to a great extent and help realise a dense array of sensors.
The importance of sensors, measurement science, and smart evaluation for Industry 4.0 has been recognised and acknowledged by various experts and has already led to the statement "Industry 4.0: nothing goes without sensor systems."
However, there are a few issues, such as time synchronisation error, data loss, and dealing with large amounts of harvested data, which all limit the implementation of full-fledged systems. Moreover, battery power places additional limits on these functionalities. One example of the integration of smart sensors in electronic devices is the smart watch, where sensors capture the movement of the user, process the data, and as a result provide the user with information about how many steps they have walked in a day, also converting the data into calories burned.
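As a toy illustration of this kind of on-device processing, the sketch below counts steps from threshold crossings of an accelerometer magnitude signal and converts them to an energy estimate; the threshold and per-step calorie factor are assumptions made for the example.

```python
# Minimal sketch of smart-sensor data processing on a wearable: count steps
# from accelerometer-magnitude threshold crossings and convert them to an
# energy estimate. The threshold and calorie factor are illustrative.

def count_steps(accel_magnitude, threshold=1.2):
    """Count upward crossings of a magnitude threshold as steps."""
    steps, above = 0, False
    for a in accel_magnitude:
        if a > threshold and not above:
            steps += 1
            above = True
        elif a <= threshold:
            above = False
    return steps

samples = [1.0, 1.3, 0.9, 1.4, 1.0, 1.5, 0.8, 1.3, 1.0]  # accelerometer magnitude (g)
steps = count_steps(samples)
calories = steps * 0.04  # assumed ~0.04 kcal per step
print(f"{steps} steps detected, roughly {calories:.2f} kcal burned")
```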
Agriculture and food industries
Smart sensors in these two fields are still in the testing stage. These innovative connected sensors collect, interpret and communicate the information available in the plots (leaf area, vegetation index, chlorophyll, hygrometry, temperature, water potential, radiation). Based on this scientific data, the objective is to enable real-time monitoring via a smartphone with a range of advice that optimises plot management in terms of results, time and costs. On the farm, these sensors can be used to detect crop stages and recommend inputs and treatments at the right time, as well as controlling the level of irrigation.
The food industry requires ever more security and transparency, and full documentation is required. This new technology is used as a tracking system, as well as for the collection of human data and product data.
Accelerated transition to the knowledge economy
Knowledge economy is an economic system in which production and services are largely based on knowledge-intensive activities that contribute to an accelerated pace of technical and scientific advance, as well as rapid obsolescence. Industry 4.0 aids the transition to a knowledge economy by increasing reliance on intellectual capabilities rather than on physical inputs or natural resources.
Challenges
Challenges in implementation of Industry 4.0:
Economic
High economic cost
Business model adaptation
Unclear economic benefits/excessive investment
Driving significant economic changes through automation and technological advancements, leading to both job displacement and the creation of new roles, necessitating widespread workforce reskilling and systemic adaptation.
Social
Privacy concerns
Surveillance and distrust
General reluctance to change by stakeholders
Threat of redundancy of the corporate IT department
Loss of many jobs to automatic processes and IT-controlled processes, especially for blue-collar workers
Increased risk of gender inequalities in professions with job roles most susceptible to replacement with AI
Political
Lack of regulation, standards and forms of certifications
Unclear legal issues and data security
Organizational
IT security issues, which are greatly aggravated by the inherent need to open up previously closed production shops
Reliability and stability needed for critical machine-to-machine communication (M2M), including very short and stable latency times
Need to maintain the integrity of production processes
Need to avoid any IT snags, as those would cause expensive production outages
Need to protect industrial know-how (contained also in the control files for the industrial automation gear)
Lack of adequate skill-sets to expedite the transition towards Industry 4.0
Low top management commitment
Insufficient qualification of employees
Country applications
Many countries have set up institutional mechanisms to foster the adoption of Industry 4.0 technologies. For example,
Australia
Australia has a Digital Transformation Agency (est. 2015) and the Prime Minister's Industry 4.0 Taskforce (est. 2016), which promotes collaboration with industry groups in Germany and the USA.
Germany
The term "Industrie 4.0", shortened to I4.0 or simply I4, originated in 2011 from a project in the high-tech strategy of the German government and specifically relates to that project policy, rather than a wider notion of a Fourth Industrial Revolution of 4IR, which promotes the computerisation of manufacturing. The term "Industrie 4.0" was publicly introduced in the same year at the Hannover Fair. Renowned German professor Wolfgang Wahlster is sometimes called the inventor of the "Industry 4.0" term. In October 2012, the Working Group on Industry 4.0 presented a set of Industry 4.0 implementation recommendations to the German federal government. The workgroup members and partners are recognised as the founding fathers and driving force behind Industry 4.0. On 8 April 2013 at the Hannover Fair, the final report of the Working Group Industry 4.0 was presented. This working group was headed by Siegfried Dais, of Robert Bosch GmbH, and Henning Kagermann, of the German Academy of Science and Engineering.
As Industry 4.0 principles have been applied by companies, they have sometimes been rebranded. For example, the aerospace parts manufacturer Meggitt PLC has branded its own Industry 4.0 research project M4.
The discussion of how the shift to Industry 4.0, especially digitisation, will affect the labour market is being discussed in Germany under the topic of Work 4.0.
The federal government in Germany through its ministries of the BMBF and BMWi, is a leader in the development of the I4.0 policy. Through the publishing of set objectives and goals for enterprises to achieve, the German federal government attempts to set the direction of the digital transformation. However, there is a gap between German enterprise's collaboration and knowledge of these set policies. The biggest challenge which SMEs in Germany are currently facing regarding digital transformation of their manufacturing processes is ensuring that there is a concrete IT and application landscape to support further digital transformation efforts.
The characteristics of the German government's Industry 4.0 strategy involve the strong customisation of products under the conditions of highly flexible (mass-) production. The required automation technology is improved by the introduction of methods of self-optimization, self-configuration, self-diagnosis, cognition and intelligent support of workers in their increasingly complex work. The largest project in Industry 4.0 as of July 2013 is the German Federal Ministry of Education and Research (BMBF) leading-edge cluster "Intelligent Technical Systems Ostwestfalen-Lippe (its OWL)". Another major project is the BMBF project RES-COM, as well as the Cluster of Excellence "Integrative Production Technology for High-Wage Countries". In 2015, the European Commission started the international Horizon 2020 research project CREMA (cloud-based rapid elastic manufacturing) as a major initiative to foster the Industry 4.0 topic.
Estonia
In Estonia, the digital transformation dubbed the 4th Industrial Revolution by Klaus Schwab and the World Economic Forum in 2015 started with the restoration of independence in 1991. Although a latecomer to the information revolution due to 50 years of Soviet occupation, Estonia leapfrogged to the digital era, while skipping the analogue connections almost completely. The early decisions made by Prime Minister Mart Laar on the course of the country's economic development led to the establishment of what is today known as e-Estonia, one of the world's most digitally advanced nations.
According to the goals set in the Estonia's Digital Agenda 2030, next leaps in the country's digital transformation will be switching to event based and proactive services, both in private and business environment, as well as developing a green, AI-powered and human-centric digital government.
Indonesia
Another example is Making Indonesia 4.0, with a focus on improving industrial performance.
India
India, with its expanding economy and extensive manufacturing sector, has embraced the digital revolution, leading to significant advancements in manufacturing. The Indian program for Industry 4.0 centers around leveraging technology to produce globally competitive products at cost-effective rates while adopting the latest technological advancements of Industry 4.0.
Japan
Society 5.0 envisions a society that prioritizes the well-being of its citizens, striking a harmonious balance between economic progress and the effective addressing of societal challenges through a closely interconnected system of both the digital realm and the physical world. This concept was introduced in 2019 in the 5th Science and Technology Basic Plan of the Japanese Government as a blueprint for a forthcoming societal framework.
Malaysia
Malaysia's national policy on Industry 4.0 is known as Industry4WRD. Launched in 2018, key initiatives in this policy include enhancing digital infrastructure, equipping the workforce with 4IR skills, and fostering innovation and technology adoption across industries.
South Africa
South Africa appointed a Presidential Commission on the Fourth Industrial Revolution in 2019, consisting of about 30 stakeholders with a background in academia, industry and government. South Africa has also established an Inter-Ministerial Committee on Industry 4.0.
South Korea
The Republic of Korea has had a Presidential Committee on the Fourth Industrial Revolution since 2017. The Republic of Korea's I-Korea strategy (2017) is focusing on new growth engines that include AI, drones and autonomous cars, in line with the government's innovation-driven economic policy.
Uganda
Uganda adopted its own National 4IR Strategy in October 2020 with emphasis on e-governance, urban management (smart cities), health care, education, agriculture and the digital economy; to support local businesses, the government was contemplating introducing a local start-ups bill in 2020 which would require all accounting officers to exhaust the local market prior to procuring digital solutions from abroad.
United Kingdom
In a policy paper published in 2019, the UK's Department for Business, Energy & Industrial Strategy, titled "Regulation for the Fourth Industrial Revolution", outlined the need to evolve current regulatory models to remain competitive in evolving technological and social settings.
United States
The Department of Homeland Security in 2019 published a paper called 'The Industrial Internet of things (IIOT): Opportunities, Risks, Mitigation'. The base pieces of critical infrastructure are increasingly digitised for greater connectivity and optimisation. Hence, its implementation, growth and maintenance must be carefully planned and safeguarded. The paper discusses not only applications of IIOT but also the associated risks. It has suggested some key areas where risk mitigation is possible. To increase coordination between the public, private, law enforcement, academia and other stakeholders the DHS formed the National Cybersecurity and Communications Integration Center (NCCIC).
Industry applications
The aerospace industry has sometimes been characterised as "too low volume for extensive automation". However, Industry 4.0 principles have been investigated by several aerospace companies, and technologies have been developed to improve productivity where the upfront cost of automation cannot be justified. One example of this is the aerospace parts manufacturer Meggitt PLC's M4 project.
The increasing use of the industrial internet of things is referred to as Industry 4.0 at Bosch, and generally in Germany. Applications include machines that can predict failures and trigger maintenance processes autonomously, or self-organised coordination that reacts to unexpected changes in production. In 2017, Bosch launched the Connectory, a Chicago, Illinois-based innovation incubator that specializes in IoT, including Industry 4.0.
Industry 4.0 inspired Innovation 4.0, a move toward digitisation for academia and research and development. In 2017, the £81M Materials Innovation Factory (MIF) at the University of Liverpool opened as a center for computer aided materials science, where robotic formulation, data capture and modelling are being integrated into development practices.
Criticism
With the consistent development of automation of everyday tasks, some saw benefit in the exact opposite of automation, where self-made products are valued more highly than those produced with automation. This valuation is named the IKEA effect, a term coined by Michael I. Norton of Harvard Business School, Daniel Mochon of Yale, and Dan Ariely of Duke.
Another problem that is expected to accelerate with the growth of IR4 is the prevalence of mental disorders, a known issue within high-tech operators. Also, the IR4 has sparked significant criticism regarding AI bias and ethical issues, as algorithms used in decision-making processes often perpetuate existing social inequalities, disproportionately impacting marginalized groups while lacking transparency and accountability.
Future
Industry 5.0
Industry 5.0 has been proposed as a strategy to create a paradigm shift for an industrial landscape in which the primary focus should no longer be on increasing efficiency but on promoting the well-being of society and sustainability of the economy and industrial production.
See also
Computer-integrated manufacturing
Cyber manufacturing
Digital modelling and fabrication
Industrial control system
Intelligent maintenance systems
Lights-out manufacturing
List of emerging technologies
Machine to machine
Nondestructive Evaluation 4.0
Simulation software
Technological singularity
Technological unemployment
The War on Normal People
Work 4.0
World Economic Forum 2016
Digitization
Transhumanism
AI boom
References
Sources
2015 neologisms
21st century
Industrial automation
Industrial computing
Internet of things
Technology forecasting
Big data
Industrial Revolution
Fourth Industrial Revolution
Knowledge economy | Fourth Industrial Revolution | [
"Technology",
"Engineering"
] | 4,843 | [
"Industrial computing",
"Industrial engineering",
"Automation",
"Data",
"Big data",
"Industrial automation"
] |
46,278,620 | https://en.wikipedia.org/wiki/Quantum%20thermodynamics | Quantum thermodynamics is the study of the relations between two independent physical theories: thermodynamics and quantum mechanics. The two independent theories address the physical phenomena of light and matter.
In 1905, Albert Einstein argued that the requirement of consistency between thermodynamics and electromagnetism leads to the conclusion that light is quantized, obtaining the relation E = hν. This paper is the dawn of quantum theory. In a few decades quantum theory became established with an independent set of rules. Currently quantum thermodynamics addresses the emergence of thermodynamic laws from quantum mechanics. It differs from quantum statistical mechanics in the emphasis on dynamical processes out of equilibrium. In addition, there is a quest for the theory to be relevant for a single individual quantum system.
Dynamical view
There is an intimate connection of quantum thermodynamics with the theory of open quantum systems. Quantum mechanics inserts dynamics into thermodynamics, giving a sound foundation to finite-time thermodynamics. The main assumption is that the entire world is a large closed system, and therefore, time evolution is governed by a unitary transformation generated by a global Hamiltonian. For the combined system–bath scenario, the global Hamiltonian can be decomposed into:
H = H_S + H_B + H_SB,
where H_S is the system Hamiltonian, H_B is the bath Hamiltonian and H_SB is the system-bath interaction.
The state of the system is obtained from a partial trace over the combined system and bath:
ρ_S(t) = Tr_B(ρ_SB(t)).
Reduced dynamics is an equivalent description of the system dynamics utilizing only system operators.
Assuming the Markov property for the dynamics, the basic equation of motion for an open quantum system is the Lindblad equation (GKLS):
dρ_S/dt = −(i/ħ)[H, ρ_S] + L_D(ρ_S),
where H is a (Hermitian) Hamiltonian part and
L_D(ρ_S) = Σ_n ( V_n ρ_S V_n† − ½ {V_n† V_n, ρ_S} )
is the dissipative part, describing implicitly through the system operators V_n the influence of the bath on the system.
The Markov property imposes that the system and bath are uncorrelated at all times, ρ_SB(t) ≈ ρ_S(t) ⊗ ρ_B. The L-GKS equation is unidirectional and leads any initial state to a steady-state solution which is an invariant of the equation of motion, dρ_S(∞)/dt = 0.
The Heisenberg picture supplies a direct link to quantum thermodynamic observables. The dynamics of a system observable represented by the operator O has the form:
dO/dt = (i/ħ)[H, O] + ∂O/∂t + L_D*(O),
where the possibility that the operator O is explicitly time-dependent is included.
Emergence of time derivative of first law of thermodynamics
When O = H_S, the first law of thermodynamics emerges:
dE/dt = ⟨∂H_S/∂t⟩ + ⟨L_D*(H_S)⟩ = P + J,
where the power is interpreted as
P = ⟨∂H_S/∂t⟩
and the heat current as
J = ⟨L_D*(H_S)⟩.
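A minimal numerical sketch of this bookkeeping follows, for a driven two-level system with a simple amplitude-damping GKLS dissipator. All parameters, the state, and the drive are assumed for illustration only; the dissipator is applied to the state (Schrödinger picture), which gives the same traces as the Heisenberg-picture expression.

```python
# Minimal sketch (assumed parameters): for a driven two-level system with a
# GKLS dissipator, split dE/dt into power  P = Tr(rho dH/dt)  and heat current
# J = Tr(D(rho) H)  at a single instant. Units with hbar = 1 throughout.

import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # lowering operator
gamma = 0.1                                      # assumed decay rate

def hamiltonian(t, omega0=1.0, drive=0.2, nu=0.5):
    return 0.5 * (omega0 + drive * np.cos(nu * t)) * sz

def dH_dt(t, drive=0.2, nu=0.5):
    return -0.5 * drive * nu * np.sin(nu * t) * sz

def dissipator(rho):
    return gamma * (sm @ rho @ sm.conj().T
                    - 0.5 * (sm.conj().T @ sm @ rho + rho @ sm.conj().T @ sm))

t = 1.0
rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)   # assumed state
H = hamiltonian(t)

power = np.trace(rho @ dH_dt(t)).real        # work flux from the external drive
heat = np.trace(dissipator(rho) @ H).real    # heat flux from the bath
rho_dot = -1j * (H @ rho - rho @ H) + dissipator(rho)
dE_dt = np.trace(rho_dot @ H).real + np.trace(rho @ dH_dt(t)).real

print(f"P = {power:.4f}, J = {heat:.4f}, P + J = {power + heat:.4f}, dE/dt = {dE_dt:.4f}")
```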
Additional conditions have to be imposed on the dissipator to be consistent with thermodynamics.
First, the invariant ρ_S(∞) should become an equilibrium Gibbs state. This implies that the dissipator should commute with the unitary part generated by H_S. In addition, an equilibrium state is stationary and stable. This assumption is used to derive the Kubo–Martin–Schwinger stability criterion for thermal equilibrium, i.e. the KMS state.
A unique and consistent approach is obtained by deriving the generator, L_D, in the weak system–bath coupling limit. In this limit, the interaction energy can be neglected. This approach represents a thermodynamic idealization: it allows energy transfer, while keeping a tensor product separation between the system and bath, i.e., a quantum version of an isothermal partition.
Markovian behavior involves a rather complicated cooperation between system and bath dynamics. This means that in phenomenological treatments, one cannot combine arbitrary system Hamiltonians, H_S, with a given L-GKS generator. This observation is particularly important in the context of quantum thermodynamics, where it is tempting to study Markovian dynamics with an arbitrary control Hamiltonian. Erroneous derivations of the quantum master equation can easily lead to a violation of the laws of thermodynamics.
An external perturbation modifying the Hamiltonian of the system will also modify the heat flow. As a result, the L-GKS generator has to be renormalized. For a slow change, one can adopt the adiabatic approach and use the instantaneous system’s Hamiltonian to derive L_D. An important class of problems in quantum thermodynamics is periodically driven systems. Periodic quantum heat engines and power-driven refrigerators fall into this class.
A reexamination of the time-dependent heat current expression using quantum transport techniques has been proposed.
A derivation of consistent dynamics beyond the weak coupling limit has been suggested.
Phenomenological formulations of irreversible quantum dynamics consistent with the second law and implementing the geometric idea of "steepest entropy ascent" or "gradient flow" have been suggested to model relaxation and strong coupling.
Emergence of the second law
The second law of thermodynamics is a statement on the irreversibility of dynamics or, the breakup of time reversal symmetry (T-symmetry). This should be consistent with the empirical direct definition: heat will flow spontaneously from a hot source to a cold sink.
From a static viewpoint, for a closed quantum system, the 2nd law of thermodynamics is a consequence of the unitary evolution. In this approach, one accounts for the entropy change before and after a change in the entire system. A dynamical viewpoint is based on local accounting for the entropy changes in the subsystems and the entropy generated in the baths.
Entropy
In thermodynamics, entropy is related to the amount of energy of a system that can be converted into mechanical work in a concrete process. In quantum mechanics, this translates to the ability to measure and manipulate the system based on the information gathered by measurement. An example is the case of Maxwell’s demon, which has been resolved by Leó Szilárd.
The entropy of an observable is associated with the complete projective measurement of an observable, A, where the operator A has a spectral decomposition: A = Σ_j α_j P_j,
where P_j are the projection operators of the eigenvalue α_j.
The probability of outcome j is p_j = Tr(ρ P_j). The entropy associated with the observable A is the Shannon entropy with respect to the possible outcomes:
S_A = −Σ_j p_j ln p_j.
The most significant observable in thermodynamics is the energy, represented by the Hamiltonian operator H, and its associated energy entropy, S_E.
John von Neumann suggested singling out the most informative observable to characterize the entropy of the system. This invariant is obtained by minimizing the entropy with respect to all possible observables. The most informative observable operator commutes with the state of the system. The entropy of this observable is termed the von Neumann entropy and is equal to
S_vN = −Tr(ρ ln ρ).
As a consequence, S_vN ≤ S_A for all observables A. At thermal equilibrium the energy entropy is equal to the von Neumann entropy: S_E = S_vN.
S_vN is invariant under a unitary transformation changing the state. The von Neumann entropy is additive only for a system state that is composed of a tensor product of its subsystems: S_vN(⊗_j ρ_j) = Σ_j S_vN(ρ_j).
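A small numerical check of the inequality S_vN ≤ S_A follows, comparing the von Neumann entropy with the Shannon entropy of a projective energy measurement for an assumed two-level state that carries coherences in the energy basis.

```python
# Minimal sketch (assumed state and Hamiltonian): compare the von Neumann
# entropy  S_vN = -Tr(rho ln rho)  with the Shannon entropy of a projective
# energy measurement; for a state with coherences, S_vN <= S_E.

import numpy as np

def shannon(p):
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

def von_neumann(rho):
    evals = np.linalg.eigvalsh(rho)
    return shannon(evals)

H = np.diag([0.0, 1.0])                                   # assumed two-level Hamiltonian
rho = np.array([[0.7, 0.3], [0.3, 0.3]], dtype=complex)   # assumed state with coherences

p_energy = np.real(np.diag(rho))    # outcome probabilities of measuring H
S_E = shannon(p_energy)
S_vN = von_neumann(rho)
print(f"S_E = {S_E:.4f}, S_vN = {S_vN:.4f}, S_vN <= S_E: {S_vN <= S_E + 1e-12}")
```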
Clausius version of the II-law
No process is possible whose sole result is the transfer of heat from a body of lower temperature to a body of higher temperature.
This statement for N-coupled heat baths in steady state becomes
A dynamical version of the II-law can be proven, based on Spohn's inequality:
which is valid for any L-GKS generator, with a stationary state, .
Consistency with thermodynamics can be employed to verify quantum dynamical models of transport. For example, local models for networks where local L-GKS equations are connected through weak links have been thought to violate the second law of thermodynamics. In 2018 it was shown that, by correctly taking into account all work and energy contributions in the full system, local master equations are fully coherent with the second law of thermodynamics.
Quantum and thermodynamic adiabatic conditions and quantum friction
Thermodynamic adiabatic processes have no entropy change. Typically, an external control modifies the state. A quantum version of an adiabatic process can be modeled by an externally controlled time-dependent Hamiltonian H(t). If the system is isolated, the dynamics are unitary, and therefore, S_vN is a constant. A quantum adiabatic process is defined by the energy entropy S_E being constant. The quantum adiabatic condition is therefore equivalent to no net change in the population of the instantaneous energy levels. This implies that the Hamiltonian should commute with itself at different times: [H(t), H(t′)] = 0.
When the adiabatic conditions are not fulfilled, additional work is required to reach the final control value. For an isolated system, this work is recoverable, since the dynamics is unitary and can be reversed. In this case, quantum friction can be suppressed using shortcuts to adiabaticity, as demonstrated in the laboratory using a unitary Fermi gas in a time-dependent trap.
The coherence stored in the off-diagonal elements of the density operator carries the required information to recover the extra energy cost and reverse the dynamics. Typically, this energy is not recoverable, due to interaction with a bath that causes energy dephasing. The bath, in this case, acts like a measuring apparatus of energy. This lost energy is the quantum version of friction.
Emergence of the dynamical version of the third law of thermodynamics
There are seemingly two independent formulations of the third law of thermodynamics. Both were originally stated by Walther Nernst. The first formulation is known as the Nernst heat theorem, and can be phrased as:
The entropy of any pure substance in thermodynamic equilibrium approaches zero as the temperature approaches zero.
The second formulation is dynamical, known as the unattainability principle
It is impossible by any procedure, no matter how idealized, to reduce any assembly to absolute zero temperature in a finite number of operations.
At steady state the second law of thermodynamics implies that the total entropy production is non-negative.
When the cold bath approaches the absolute zero temperature, T_c → 0, it is necessary to eliminate the entropy production divergence at the cold side; therefore the entropy production at the cold bath is required to scale as Ṡ_c ∝ T_c^α with α ≥ 0.
For α = 0 the fulfillment of the second law depends on the entropy production of the other baths, which should compensate for the negative entropy production of the cold bath.
The first formulation of the third law modifies this restriction. Instead of α ≥ 0 the third law imposes α > 0, guaranteeing that at absolute zero the entropy production at the cold bath is zero: Ṡ_c(T_c = 0) = 0.
This requirement leads to the scaling condition of the heat current J_c ∝ T_c^(α+1).
The second formulation, known as the unattainability principle, can be rephrased as:
No refrigerator can cool a system to absolute zero temperature at finite time.
The dynamics of the cooling process is governed by the equation
c_V(T_c) dT_c/dt = −J_c(T_c),
where c_V(T_c) is the heat capacity of the bath. Taking J_c ∝ T_c^(α+1) and c_V ∝ T_c^η with T_c → 0, we can quantify this formulation by evaluating the characteristic exponent ζ of the cooling process, dT_c/dt ∝ −T_c^ζ as T_c → 0, with ζ = α + 1 − η.
This equation introduces the relation between the characteristic exponents ζ, α and η. When ζ < 1 the bath is cooled to zero temperature in a finite time, which implies a violation of the third law. It is apparent from the last equation that the unattainability principle is more restrictive than the Nernst heat theorem.
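The role of the exponent can be illustrated with a generic cooling law of this form; the sketch below integrates dT/dt = −k·T^ζ for assumed parameters and shows that zero temperature is reached in finite time only when ζ < 1, the scenario excluded by the unattainability principle.

```python
# Minimal numerical illustration (generic model, assumed parameters): integrate
# a cooling law  dT/dt = -k * T**zeta  and observe that the bath temperature
# reaches zero in finite time only when zeta < 1; for zeta >= 1 it only
# approaches zero asymptotically, as the unattainability principle requires.

def cool(zeta, k=1.0, T0=1.0, dt=1e-4, t_max=20.0):
    """Return the time at which T first (numerically) hits zero, or None."""
    T, t = T0, 0.0
    while t < t_max:
        T += -k * T**zeta * dt
        t += dt
        if T <= 0.0:
            return t
    return None

for zeta in (0.5, 1.0, 1.5):
    t_zero = cool(zeta)
    status = (f"reaches T=0 at t ~ {t_zero:.2f}" if t_zero is not None
              else "never reaches T=0 (within t_max)")
    print(f"zeta = {zeta}: {status}")
```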
Typicality as a source of emergence of thermodynamic phenomena
The basic idea of quantum typicality is that the vast majority of all pure states featuring a common expectation value of some generic observable at a given time will yield very similar expectation values of the same observable at any later time. This is meant to apply to Schrödinger type dynamics in high dimensional Hilbert spaces. As a consequence individual dynamics of expectation values are then typically well described by the ensemble average.
The quantum ergodic theorem (QET), originated by John von Neumann, is a strong result arising from the mere mathematical structure of quantum mechanics. The QET is a precise formulation of what is termed normal typicality, i.e. the statement that, for typical large systems, every initial wave function from an energy shell is ‘normal’: it evolves in such a way that, for most t, it is macroscopically equivalent to the micro-canonical density matrix.
Resource theory
The second law of thermodynamics can be interpreted as quantifying state transformations which are statistically unlikely so that they become effectively forbidden. The second law typically applies to systems composed of many particles interacting; Quantum thermodynamics resource theory is a formulation of thermodynamics in the regime where it can be applied to a small number of particles interacting with a heat bath. For processes which are cyclic or very close to cyclic, the second law for microscopic systems takes on a very different form than it does at the macroscopic scale, imposing not just one constraint on what state transformations are possible, but an entire family of constraints. These second laws are not only relevant for small systems, but also apply to individual macroscopic systems interacting via long-range interactions, which only satisfy the ordinary second law on average. By making precise the definition of thermal operations, the laws of thermodynamics take on a form with the first law defining the class of thermal operations, the zeroth law emerging as a unique condition ensuring the theory is nontrivial, and the remaining laws being a monotonicity property of generalised free energies.
Engineered reservoirs
Nanoscale allows for the preparation of quantum systems in physical states without classical analogs. There, complex out-of-equilibrium scenarios may be produced by the initial preparation of either the working substance or the reservoirs of quantum particles, the latter dubbed as "engineered reservoirs".
There are different forms of engineered reservoirs. Some of them involve subtle quantum coherence or correlation effects, while others rely solely on nonthermal classical probability distribution functions. Interesting phenomena may emerge from the use of engineered reservoirs such as efficiencies greater than the Otto limit, violations of Clausius inequalities, or simultaneous extraction of heat and work from the reservoirs.
See also
Quantum statistical mechanics
Thermal quantum field theory
References
Further reading
F. Binder, L. A. Correa, C. Gogolin, J. Anders, G. Adesso (eds.) (2018). Thermodynamics in the Quantum Regime: Fundamental Aspects and New Directions. Springer.
Jochen Gemmer, M. Michel, Günter Mahler (2009). Quantum Thermodynamics: Emergence of Thermodynamic Behavior Within Composite Quantum Systems. 2nd edition, Springer.
Heinz-Peter Breuer, Francesco Petruccione (2007). The Theory of Open Quantum Systems. Oxford University Press.
External links
Go to "Concerning an Heuristic Point of View Toward the Emission and Transformation of Light" to read an English translation of Einstein's 1905 paper. (Retrieved: 2014 Apr 11)
Quantum mechanics
Thermodynamics
Non-equilibrium thermodynamics
Philosophy of thermal and statistical physics | Quantum thermodynamics | [
"Physics",
"Chemistry",
"Mathematics"
] | 3,001 | [
"Philosophy of thermal and statistical physics",
"Non-equilibrium thermodynamics",
"Theoretical physics",
"Quantum mechanics",
"Thermodynamics",
"Statistical mechanics",
"Dynamical systems"
] |
46,280,955 | https://en.wikipedia.org/wiki/Penicillium%20islandicum | Penicillium islandicum is an anamorph species of the genus Penicillium which produces luteoskyrin, simatoxin, cyclochlorotine (islanditoxin), rugulosin and chitosanase.
Further reading
References
islandicum
Fungi described in 1912
Fungus species | Penicillium islandicum | [
"Biology"
] | 67 | [
"Fungi",
"Fungus species"
] |
46,286,163 | https://en.wikipedia.org/wiki/Photorelaxation | Photorelaxation, or photo-vasorelaxation, is the relaxation of blood vessels in response to light. Although the phenomenon had been reported for around sixty years, it was never characterized, pursued or explained. It was serendipitously rediscovered by Dr. Gautam Sikka and his mentor Dr. Dan Berkowitz at Johns Hopkins University in Baltimore, USA; along with their team, they not only elucidated the mechanism but are trying to harness light for the treatment of cardiovascular disease.
The research by Sikka et al. concluded that light-sensing receptors, melanopsin receptors, are present in blood vessels and mediate wavelength-specific, light-dependent vascular relaxation. This photorelaxation signal transduction involves cyclic guanosine monophosphate (cGMP) and phosphodiesterase type 6, but not cGMP-dependent protein kinase or Protein Kinase G (PKG). Furthermore, it is regulated by beta adrenergic receptor kinase type 1 (βARK or BARK), also called G protein coupled receptor kinase 2 (GRK2), and involves vascular hyperpolarization; this receptor pathway could be targeted for wavelength-specific light-based therapy in the treatment of diseases that involve altered vasoreactivity.
References
Angiology
G protein-coupled receptors | Photorelaxation | [
"Chemistry"
] | 277 | [
"G protein-coupled receptors",
"Signal transduction"
] |
46,286,281 | https://en.wikipedia.org/wiki/Bacteriophage%20Mu | Bacteriophage Mu, also known as mu phage or mu bacteriophage, is a muvirus (the first of its kind to be identified) of the family Myoviridae which has been shown to cause genetic transposition. It is of particular importance as its discovery in Escherichia coli by Larry Taylor was among the first observations of insertion elements in a genome. This discovery opened up the world to an investigation of transposable elements and their effects on a wide variety of organisms. While Mu was specifically involved in several distinct areas of research (including E. coli, maize, and HIV), the wider implications of transposition and insertion transformed the entire field of genetics.
Anatomy
Phage Mu is nonenveloped, with a head and a tail. The head has an icosahedral structure of about 54 nm in width. The neck is knob-like, and the tail is contractile with a base plate and six short terminal fibers. The genome has been fully sequenced and consists of 36,717 nucleotides, coding for 55 proteins.
History
Mu phage was first discovered by Larry Taylor at UC Berkeley in the late 1950s. His work continued at Brookhaven National Laboratory, where he first observed the mutagenic properties of Mu; several colonies of Hfr E. coli which had been lysogenized with Mu seemed to have a tendency to develop new nutritional markers. With further investigation, he was able to link the presence of these markers to the physical binding of Mu at certain loci. He likened the observed genetic alteration to the ‘controlling elements’ in maize, and named the phage ‘Mu’, for mutation. This, however, was only the beginning. Over the next sixty years, the complexities of the phage were fleshed out by numerous researchers and labs, resulting in a far deeper understanding of mobile DNA and the mechanisms underlying transposable elements.
Key Mu-related findings
1972–1975: Ahmad Bukhari shows that Mu can insert randomly and prolifically throughout an entire bacterial genome, creating stable insertions. He also demonstrates that the reversion of the gene to its original and undamaged form is possible with the excision of Mu.
1979: Jim Shapiro develops a Mu inspired model for transposition involving the ‘Shapiro Intermediate,’ in which both the donor and the target undergo two cleavages and then the donor is ligated into the target, creating two replication forks and allowing for both transposition and replication.
1983: Kiyoshi Mizuuchi develops a protocol for observing transposition in vitro using mini-Mu plasmids, allowing for a greatly increased understanding of the chemical components of transposition.
1994–2012: Because of shared mechanisms of insertion, Mu acts as a useful organism to elucidate the process of HIV integration, eventually leading to HIV integrase inhibitors such as raltegravir in 2008. Additionally, Montano et al. created a crystal structure of the Mu bacteriophage transpososome, allowing for a detailed understanding of the process of Mu amplification.
References
External links
Phage Mu at ViralZone
Myoviridae
Viruses
Bacteriophages | Bacteriophage Mu | [
"Biology"
] | 650 | [
"Viruses",
"Tree of life (biology)",
"Microorganisms"
] |
60,416,711 | https://en.wikipedia.org/wiki/Adina%20Paytan | Adina Paytan is a research professor at the Institute of Marine Sciences at the University of California, Santa Cruz, known for research into biogeochemical cycling in the present and the past. She has over 270 scientific publications in journals such as Science, Nature, Proceedings of the National Academy of Sciences, and Geophysical Research Letters.
Career
Paytan is both an interdisciplinary scientist and an advocate for STEM education and public outreach. As a scientist, Paytan uses isotopic and chemical signatures to examine global biogeochemical cycling. This includes studies of groundwater discharge into coastal systems, nutrient cycling, ocean acidification, and paleoceanography. This research includes high resolution measurements of carbon and sulfur isotopes to characterize changes in the marine and atmospheric carbon cycle, using strontium isotopes within barite to infer changes in the global carbon cycle over geologic time, and modern investigations of groundwater discharge as a source of nutrients to the coastal ocean and coral reefs.
Paytan also deliberately works on STEM education and public outreach, and obtained an M.S. in Science Education from the Weizmann Institute in 1987. Paytan served as a mentor for the Centers for Ocean Sciences Education Excellence (COSEE) where she advocated for the role of universities in conducting public outreach. Paytan started the GeoKids program at Stanford in order to educate elementary school children about science. Paytan also mentors masters and Ph.D. students in her lab.
Early life and education
Paytan was born and raised in Israel. As an undergraduate, Paytan encountered geochemistry which she likens to a big complex puzzle. Paytan obtained undergraduate degrees in geology and biology (1985) and an M.S. in Earth Sciences Oceanography (1989) from Hebrew University of Jerusalem. Paytan's Ph.D. is from Scripps Institution of Oceanography (1996) where she worked with Miriam Kastner on using barite as a recorder of ocean chemistry. After postdoctoral work at University of California, San Diego she moved to the Department of Geological and Environmental Sciences at Stanford, and then onto a position at University of California, Santa Cruz.
Awards
Fulbright Scholar in Marine Resources, Portugal (2020)
A.G. Huntsman Award for Excellence in Marine Science (2019)
Fellow, American Geophysical Union (2018)
Fellow, Association for the Sciences of Limnology and Oceanography (ASLO, 2016)
Dansgaard Award, AGU mid-career Paleoceanography Award (2015)
Fellow, Geochemical Society (2014)
American Geophysical Union's Rachel Carson Lecture (2013)
Excellence Chair of the Prof. Dr. Werner Petersen Foundation from GEOMAR
American Geophysical Union's Ocean Sciences Early Career Award (2004)
References
Women oceanographers
Biogeochemists
Fellows of the American Geophysical Union
Scripps Institution of Oceanography alumni
University of California, Santa Cruz faculty
Women chemists
Geochemists | Adina Paytan | [
"Chemistry"
] | 593 | [
"Geochemists",
"Biogeochemistry",
"Biogeochemists"
] |
60,419,343 | https://en.wikipedia.org/wiki/RAPTA | RAPTA (ruthenium arene PTA) is a class of experimental cancer drugs. They consist of a central ruthenium(II) atom complexed to an arene group, chlorides, and 1,3,5-triaza-7-phosphaadamantane (PTA) forming an organoruthenium half-sandwich compound. Other related ruthenium anti-cancer drugs include NAMI-A, KP1019 and BOLD-100.
Structure and properties
It is envisaged that RAPTA derivatives have the “piano stool” structure like other organometallic half-sandwich compounds. This is confirmed by the crystal structure of RAPTA-C, which exhibits the archetypal half-sandwich structure. The PTA ligand was designed to make the complexes more soluble in water, and the two labile chlorido ligands can exchange with aqua ligands in the presence of water.
Synthesis
In a typical synthesis, the dimer [Ru(η6-p-cymene)Cl2]2 is reacted with 2 equivalents of PTA for 24 hours under reflux in methanol to yield [Ru(η6-p-cymene)Cl2(pta)].
RAPTA derivatives
Several derivatives of RAPTA were synthesized, and two of the most notable are [Ru(η6-p-cymene)Cl2(pta)] (RAPTA-C) and [Ru(η6-toluene)Cl2(pta)] (RAPTA-T).
Mode of action
At first, RAPTA was anticipated to hydrolyze and interact with DNA to target primary tumors, similar to the platinum analogue cisplatin. Studies showed that adducts form between RAPTA compounds and proteins (especially cathepsin B and thioredoxin reductase (TrxR)). Moreover, the reactivity of RAPTA in the presence of protein was totally different from that of cisplatin. In vitro studies showed that the cytotoxicity of RAPTA derivatives was much lower than that of cisplatin, and some RAPTA compounds are not even cytotoxic to healthy cells. Surprisingly, both RAPTA-C and RAPTA-T showed the ability to inhibit lung metastasis in mice bearing the MCa mammary carcinoma (as measured by the number and weight of the metastases), whilst having only a small effect on the primary tumor. Previously, the only ruthenium complex shown to act against metastasis was NAMI-A. This work has high practical relevance for chemotherapy, since the primary tumor can often be removed by surgery while treatments for metastases remain limited.
References
Ruthenium complexes
Half sandwich compounds
Experimental cancer drugs
Phosphine complexes
Ruthenium(II) compounds
Chloro complexes | RAPTA | [
"Chemistry"
] | 592 | [
"Organometallic chemistry",
"Half sandwich compounds"
] |
60,421,497 | https://en.wikipedia.org/wiki/Decolonization%20%28medicine%29 | Decolonization, also bacterial decolonization, is a medical intervention that attempts to rid a patient of an antimicrobial resistant pathogen, such as methicillin-resistant Staphylococcus aureus (MRSA) or antifungal-resistant Candida.
By pre-emptively treating patients who have become colonized with an antimicrobial resistant organism, the likelihood of the patient going on to develop life-threatening healthcare-associated infections is reduced. Common sites of bacterial colonization include the nasal passage, groin, oral cavity and skin.
History
In cooperation with the Centers for Disease Control and Prevention (CDC), the Chicago Antimicrobial Resistance and Infection Prevention Epicenter (C-PIE), Harvard/Irvine Bi-Coastal Epicenter, and Washington University and Barnes Jewish County (BJC) Center for Prevention of Healthcare-Associated Infections conducted a study to test different strategies to prevent and decrease the rate of healthcare-associated infections (HAIs). REDUCE MRSA, which stands for Randomized Evaluation of Decolonization vs. Universal Clearance to Eliminate methicillin-resistant Staphylococcus aureus (MRSA), was completed in September 2011. This study determined that decolonization of all patients with chlorhexidine and mupirocin, without screening, was the most effective method of reducing the presence of MRSA and the overall number of bloodstream infections.
Medical uses
Decolonization is used to reduce rates of infections caused by MRSA. Staphylococcus aureus (S. aureus) is a common cause of hospital related infections, including bloodstream infections and infections of the heart and bone. Additionally, increasing cases of methicillin-susceptible S. aureus (MSSA) and MRSA pose a new challenge as these strains are difficult or impossible to treat with standard antibiotic regimens. Because of the prevalence of S. aureus within the general population and the significant number of severe infections caused by this bacterium, decolonization protocols have been implemented in many hospital networks to decrease MRSA infections. By using disinfectants over an extended period of time, decolonization decreases or minimizes the patient's bacterial load.
Technique
There are several decolonization regimens currently used for MRSA decolonization. Targeted decolonization involves screening patients for MRSA then isolating and implementing decolonization protocols only for patients who test positive for MRSA. On the other hand, universal decolonization involves no screening and decolonization for all patients in a given hospital setting or department.
Products used for decolonization typically involve chlorhexidine rinses for bathing or showering, a mouthwash to clean the oral cavity, and a nasal spray containing mupirocin. It is important to include a mouthwash and nasal spray as individuals commonly carry MRSA in the nose, mouth, and throat. Chlorhexidine is a disinfectant that is used to disinfect skin prior to surgery, for surgical instrument sterilization, and in hand disinfectants in healthcare settings. In the mouthwash form, it is commonly used for gingivitis. Mupirocin is a topical antibiotic commonly used for superficial skin infections and has been approved by the FDA for nasal decolonization. Though these are the most commonly used products, there are a number of alternative antibiotics and antiseptics, like povidone-iodine, that are used in decolonization.
Typically, patients use chlorhexidine shampoo or body wash daily and mupirocin nasal spray twice daily. The duration of product use for optimal effect is still being studied, but the most widely studied regimen recommends use of the products as mentioned previously for five days twice a month over a six-month period. There are limited data supporting decolonization, or recommending its duration, in outpatient settings.
Risks and complications
Decolonization is a relatively safe medical intervention. Local skin irritation is the most common side effect.
See also
Antibiotic
Antifungal
Antiviral drug
References
Bacteria and humans
Antimicrobial resistance | Decolonization (medicine) | [
"Biology"
] | 840 | [
"Bacteria and humans",
"Bacteria"
] |
60,422,252 | https://en.wikipedia.org/wiki/Ammonolysis | In chemistry, ammonolysis (/am·mo·nol·y·sis/) is the process of splitting ammonia into NH2- + H+. Ammonolysis reactions can be conducted with organic compounds to produce amines (molecules containing a nitrogen atom with a lone pair, :N), or with inorganic compounds to produce nitrides. This reaction is analogous to hydrolysis in which water molecules are split. Similar to water, liquid ammonia also undergoes auto-ionization, {2 NH3 ⇌ NH4+ + NH2- }, where the rate constant is k = 1.9 × 10−38.
Organic compounds such as alkyl halides, hydroxyls (hydroxyl nitriles and carbohydrates), carbonyl (aldehydes/ketones/esters/alcohols), and sulfur (sulfonyl derivatives) can all undergo ammonolysis in liquid ammonia.
Organic synthesis
Mechanism: Ammonolysis of Esters
This mechanism is similar to the hydrolysis of esters: the ammonia attacks the electrophilic carbonyl carbon, forming a tetrahedral intermediate. Re-formation of the C–O double bond ejects the alkoxide leaving group. The alkoxide then deprotonates the nitrogen, giving an alcohol and an amide as products.
Of haloalkanes
On heating a haloalkane and concentrated ammonia in a sealed tube with ethanol, a series of amines are formed along with their salts. The tertiary amine is usually the major product.
NH3 →[RX] RNH2 →[RX] R2NH →[RX] R3N →[RX] R4N+
This is known as Hofmann's ammonolysis.
Of alcohols
Alcohols can also undergo ammonolysis when in the presence of ammonia. An example is the conversion of phenol to aniline, catalyzed by stannic chloride.
ROH + NH3 →[SnCl4] RNH2 + H2O
Of carbonyl compounds
The reaction between a ketone and ammonia results in an imine and water as a byproduct. This reaction is water sensitive, and thus drying agents such as aluminum chloride, or a Dean–Stark apparatus, must be employed to remove water. The resulting imine will react and decompose back into the ketone and ammonia in the presence of water. This is because the reaction is reversible:
R2C=O + NH3 ⇌ R2C=NH + H2O
Inorganic synthesis
Ammonolysis can be used to synthesize nitrides (and oxynitrides) by reacting various metal precursors with ammonia; options include chemical vapor deposition, treating metals or metal oxides with ammonia gas, or using liquid supercritical ammonia (also known as "ammonothermal" synthesis, analogous to hydrothermal synthesis).
M + NH3 → MN + 3/2 H2
MO2 + 4/3 NH3 → MN + 2 H2O + 1/6 N2
The products of these reactions may be complex, with mixtures of oxygen, nitrogen, and hydrogen that can be difficult to characterize.
References
Ammonia
Biochemical reactions | Ammonolysis | [
"Chemistry",
"Biology"
] | 703 | [
"Biochemistry",
"Biochemical reactions"
] |
41,129,889 | https://en.wikipedia.org/wiki/Information%20distance | Information distance is the distance between two finite objects (represented as computer files) expressed as the number of bits in the shortest program which transforms one object into the other one or vice versa on a
universal computer. This is an extension of Kolmogorov complexity. The Kolmogorov complexity of a single finite object is the information in that object; the information distance between a pair of finite objects is the minimum information required to go from one object to the other or vice versa.
Information distance was first defined and investigated on the basis of thermodynamic principles; it subsequently achieved its final form. It is applied in the normalized compression distance and the normalized Google distance.
Properties
Formally, the information distance ID(x, y) between x and y is defined by
ID(x, y) = min{ |p| : p(x) = y and p(y) = x },
with p a finite binary program for the fixed universal computer with as inputs finite binary strings x, y. It is proven that
ID(x, y) = E(x, y) + O(log max{K(x | y), K(y | x)}),
with
E(x, y) = max{K(x | y), K(y | x)},
where K(· | ·) is the Kolmogorov complexity of the prefix type. This is the important quantity.
Universality
Let Δ be the class of upper semicomputable distances D(x, y) that satisfy the density condition
∑_{x : x ≠ y} 2^{−D(x,y)} ≤ 1,  ∑_{y : y ≠ x} 2^{−D(x,y)} ≤ 1.
This excludes irrelevant distances such as D(x, y) = 1/2 for all x ≠ y; it takes care that, as the distance grows, the number of objects within that distance of a given object grows.
If D ∈ Δ, then E(x, y) ≤ D(x, y) up to a constant additive term.
The probabilistic expression of the distance is the first cohomological class in information symmetric cohomology, which may be conceived of as a universality property.
Metricity
The distance E(x, y) is a metric up to an additive O(log max{K(x | y), K(y | x)}) term in the metric (in)equalities. The probabilistic version of the metric is indeed unique, as shown by Han in 1981.
Maximum overlap
Assume K(x | y) ≤ K(y | x), so that E(x, y) = K(y | x). Then there is a program p of length K(x | y) that converts y to x, and a program q of length K(y | x) − K(x | y) such that the concatenation qp converts x to y. (The programs are of the self-delimiting format, which means that one can decide where one program ends and the other begins in a concatenation of the programs.) That is, the shortest programs to convert between two objects can be made maximally overlapping: the longer conversion program can be divided into a piece that is itself a shortest program for the opposite conversion and a remaining piece which, concatenated with the first, performs the longer conversion, while the concatenation of these two pieces is a shortest program to convert between the objects.
Minimum overlap
The programs to convert between objects x and y can also be made minimally overlapping.
There exists a program p, of length K(x | y) up to an additive logarithmic term, that maps y to x and has small complexity when x is known (K(p | x) ≈ 0). Interchanging the two objects we have the other program. Having in mind the parallelism between Shannon information theory and Kolmogorov complexity theory, one can say that this result is parallel to the Slepian–Wolf and Körner–Csiszár–Marton theorems.
Applications
Theoretical
The result of An.A. Muchnik on minimum overlap above is an important theoretical application, showing that certain codes exist: to go to a finite target object from any object, there is a program which depends almost only on the target object. This result is fairly precise and the error term cannot be significantly improved. Information distance is treated in the textbook by Li and Vitányi and occurs in the Encyclopedia of Distances.
Practical
To determine the similarity of objects such as genomes, languages, music, internet attacks and worms, software programs, and so on, information distance is normalized and the Kolmogorov complexity terms approximated by real-world compressors (the Kolmogorov complexity is a lower bound to the length in bits of a compressed version of the object). The result is the normalized compression distance (NCD) between the objects. This pertains to objects given as computer files like the genome of a mouse or text of a book. If the objects are just given by name such as `Einstein' or `table' or the name of a book or the name `mouse', compression does not make sense. We need outside information about what the name means. Using a data base (such as the internet) and a means to search the database (such as a search engine like Google) provides this information. Every search engine on a data base that provides aggregate page counts can be used in the normalized Google distance (NGD).
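As an illustration, a minimal sketch of the normalized compression distance using a general-purpose compressor (here zlib as a stand-in for the compressors used in practice; the compressor choice and the example strings are assumptions for illustration, not part of the original method description):

```python
import zlib

def compressed_size(data: bytes) -> int:
    # Length in bytes of the zlib-compressed data, used as a rough
    # stand-in for the (uncomputable) Kolmogorov complexity.
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    cx, cy, cxy = compressed_size(x), compressed_size(y), compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

if __name__ == "__main__":
    a = b"the quick brown fox jumps over the lazy dog " * 20
    b_ = b"the quick brown fox jumps over the lazy cat " * 20
    c = b"completely unrelated sequence of characters 1234567890 " * 20
    print(ncd(a, b_))  # small value: similar objects
    print(ncd(a, c))   # larger value: dissimilar objects
```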
A python package for computing all information distances and volumes, multivariate mutual information, conditional mutual information, joint entropies, and total correlations in a dataset of n variables is available.
References
Related literature
Statistical distance | Information distance | [
"Physics"
] | 889 | [
"Physical quantities",
"Statistical distance",
"Distance"
] |
41,133,512 | https://en.wikipedia.org/wiki/Bidirectional%20current | A bidirectional current (BidC) is one which both charges and discharges at once. It is a current that flows primarily in one direction and then in the other.
Complicated systems with integrated recharging capability, such as laptops, sometimes use bidirectional currents. Monitoring of a bidirectional current is required for a laptop to report its battery level and charging status, and components are available for this purpose.
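As a rough illustration of such monitoring, the sketch below converts a differential shunt-voltage reading into a signed current; the shunt value, ADC scale, and sign convention are assumptions chosen for the example, not taken from any particular monitoring component:

```python
SHUNT_OHMS = 0.010        # assumed 10 mOhm sense resistor
ADC_FULL_SCALE_V = 0.080  # assumed +/-80 mV differential input range
ADC_COUNTS = 32768        # assumed signed 16-bit ADC

def shunt_counts_to_current(counts: int) -> float:
    """Convert a signed ADC reading across the shunt into amperes.

    Positive values are taken to mean current flowing into the battery
    (charging); negative values mean discharging.
    """
    v_shunt = counts / ADC_COUNTS * ADC_FULL_SCALE_V
    return v_shunt / SHUNT_OHMS

# Example: a reading of +4096 counts corresponds to +1 A of charge current.
print(shunt_counts_to_current(4096))
```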
See also
Difference amplifier
References
Electric current | Bidirectional current | [
"Physics"
] | 102 | [
"Electric current",
"Wikipedia categories named after physical quantities",
"Physical quantities"
] |
41,134,694 | https://en.wikipedia.org/wiki/O-Cresolphthalein | o-Cresolphthalein is a phthalein dye used as a pH indicator in titrations. It is insoluble in water but soluble in ethanol. Its solution is colourless below pH 8.2, and purple above 9.8. Its molecular formula is C22H18O4. It is used medically to determine calcium levels in the human body, or to synthesize polyamides or polyimides.
Production
o-Cresolphthalein is not produced on a large industrial scale; rather, it is commercially available as a specialty chemical. It can be made by the method generally used to synthesize phthalein dyes, the same route used for phenolphthalein and thymolphthalein: two molar equivalents of a phenol or a substituted phenol are combined with one molar equivalent of a phthalic anhydride.
Uses
The compound has uses ranging from medicine to laboratory syntheses of chemically similar compounds. o-Cresolphthalein has been used to derive polyamides and polyimides, to colorimetrically estimate calcium in serum, and to predict the amount of time to wait before blood collection after a patient receives gadodiamide.
Deriving Polyamides and Polyimides
Aromatic polyamides and polyimides are practical compounds due to their temperature resistance, electrical insulating characteristics, and mechanical strength. Other polymers that can be derived from o-cresolphthalein include polycarbonate, polyacrylate, and epoxy resin.
The diether-dinitro compound 3,3-bis[4-(4-nitrophenoxy)-3-methylphenyl]phthalide, or BNMP, is synthesized from 12 g o-cresolphthalein, 11.5 g p-chloronitrobenzene, 5.1 g anhydrous potassium carbonate, and 55 mL of DMF. The mixture is refluxed at 160 °C for eight hours. Once the reaction is complete and has cooled, it is mixed with 0.3 L methanol. The precipitate that forms is vacuum filtered to obtain a solid, which is washed with water and dried, yielding a yellow product. Recrystallization from glacial acetic acid gives yellow needles of BNMP. The reaction can be taken further by combining 15.5 g of BNMP with 0.18 g 10% Pd/C and 50 mL ethanol and stirring at 80 °C. 7 mL of hydrazine monohydrate is added dropwise over one hour, and the solution is then stirred for eight hours. It is then filtered to remove the Pd/C and concentrated. The concentrated solution is added to water, and the precipitate that forms is vacuum filtered to isolate the solid, yielding 3,3-bis[4-(4-aminophenoxy)-3-methylphenyl]phthalide, or BAMP, as a white solid, which is then purified with water and ethanol.
Colorimetric Estimation of Calcium in Serum
Calcium in a blood sample is estimated when medically required. Calcium is precipitated out of 0.1 mL of the serum sample as calcium oxalate, which is then decomposed by heat. The sample is then estimated colorimetrically with o-cresolphthalein complexone. The required complexone reagent is made by dissolving 10 mg o-cresolphthalein complexone in 50 mL alkaline borate, after which 50 mL of 0.05 N HCl is added to bring the solution's pH to 8.5. This method for calcium determination is efficient and effective, requiring a minimal amount of blood serum and a reasonable amount of time.
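A minimal sketch of the colorimetric calculation, assuming a single calcium standard is run alongside the sample (the absorbance values and the standard concentration below are invented for illustration and are not taken from the cited method):

```python
def calcium_concentration(a_sample: float, a_standard: float,
                          c_standard_mg_dl: float) -> float:
    # Simple single-standard colorimetry: the absorbance of the
    # calcium / o-cresolphthalein complexone color is assumed to be
    # proportional to the calcium concentration (Beer-Lambert law).
    return a_sample / a_standard * c_standard_mg_dl

# Hypothetical readings: sample absorbance 0.42, standard absorbance 0.45
# for a 10 mg/dL calcium standard.
print(calcium_concentration(0.42, 0.45, 10.0))  # about 9.3 mg/dL
```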
Determination of Impact of Gadodiamide on Calcium Measurements
Gadolinium is given to patients for magnetic resonance imaging (MRI). It is used as a contrast agent for the exam to improve the clarity of the images formed. However, it can react in the human body and have detrimental effects, so the agent should be removed. One of these gadolinium-based agents is gadodiamide. Calcium in the body should be determined accurately to ensure that gadodiamide does not have adverse effects on the patient. There are two o-cresolphthalein methods to determine the amount of calcium; these methods are effective because o-cresolphthalein is a calcium-binding dye, and the gadolinium ion, with its charge of +3, can be removed from gadodiamide using o-cresolphthalein. For these methods, the glomerular filtration rate (GFR) and the time since gadodiamide was given should be recorded. Ultimately, these two factors, together with the impact of gadodiamide on calcium levels calculated by the o-cresolphthalein method, help to establish the amount of time patients must wait after receiving gadodiamide before having blood drawn again, in order to avoid pseudohypocalcemia.
Safety
NFPA Diamond
The NFPA diamond, as determined by the Safety Data Sheet (SDS) from Fisher Scientific, indicates minimal risk in handling the chemical.
References
External links
https://web.archive.org/web/20150924095330/http://www.sciencelab.com/msds.php?msdsId=9923574
http://www.chemicaldictionary.org/dic/O/o-Cresolphthalein_1298.html
http://www.chemspider.com/Chemical-Structure.62217.html
PH indicators
Triarylmethane dyes | O-Cresolphthalein | [
"Chemistry",
"Materials_science"
] | 1,257 | [
"Titration",
"PH indicators",
"Chromism",
"Chemical tests",
"Equilibrium chemistry"
] |
47,127,924 | https://en.wikipedia.org/wiki/Subcutaneous%20implantable%20defibrillator | Subcutaneous implantable cardioverter defibrillator, or S-ICD, is an implantable medical device for detecting and terminating ventricular tachycardia and ventricular fibrillation in patients at risk of sudden cardiac arrest. It is a type of implantable cardioverter defibrillator but unlike the transvenous ICD, the S-ICD lead is placed just under the skin, leaving the heart and veins untouched.
The S-ICD was developed to reduce the risk of complications associated with transvenous leads. Potential complications, such as infections in the bloodstream and the need to remove or replace the leads in the heart, are minimised or entirely eliminated with the S-ICD system.
Transvenous ICD (leads in the heart)
Pros
The generator is smaller than the S-ICD generator, which may result in a less visible implanted device; this could shorten the time needed to get used to the implanted device, although this is subjective. The procedure can usually be done under local anesthesia and light sedation. The transvenous ICD is capable of pacing for bradycardia and delivering antitachycardia pacing (ATP). However, although device-related complications were numerically more frequent in patients with transvenous ICDs, inappropriate shocks were less frequent than in those with subcutaneous ICDs.
Cons
The leads go into the vein and heart and will grow into the heart wall over time. This may increase the chance of complications if the leads need to be removed or replaced, as the procedure to extract intracardiac leads can be challenging. Because the leads need to go into the heart they must be relatively thin and flexible, since they have to pass through (and remain in) the heart valve(s) and need to flex with every heartbeat. This makes the leads more vulnerable to lead fracture (and therefore complications). It has been demonstrated that device-related complications were numerically more frequent in patients with transvenous ICDs. Due to the position of the pulse generator under the collarbone, it can be more visible with clothing with a low neckline.
Patient selection
Patients who are relatively older, who need ICD for secondary prevention, or who have concomitant bradycardia requiring pacing, or heart failure requiring cardiac resynchronisation therapy are more suitable for transvenous ICD implantation. An older patient with ischemic cardiomyopathy and documented symptomatic ventricular tachycardia is a typical example.
Subcutaneous ICD (lead under the skin)
Pros
The lead does not go into the heart, which means it leaves the veins and the heart completely intact. This reduces chance of complications (e.g. systemic infections). Because the lead does not go into the heart it can be thicker and more robust. This minimizes / reduces the chance of lead fracture. In the event the system needs to be explanted, the procedure is a relatively simple surgical procedure.
Cons
The pulse generator is larger than most transvenous ICD pulse generators. This could result in a longer time needed to get used to it, although this is subjective. Depending on the physique of a person, the S-ICD may be more visible with a bare chest. The procedure usually requires deep sedation or general anaesthesia, as creating a larger pocket between the muscles and tunnelling the lead over the sternum, as well as performing defibrillation threshold testing, can be quite painful. The S-ICD can deliver only temporary post-shock pacing, but cannot otherwise address bradycardia and cannot deliver anti-tachycardia pacing. Inappropriate shocks were numerically more frequent in those with subcutaneous ICDs. Defibrillation testing has traditionally been considered mandatory in patients with a subcutaneous implantable cardioverter–defibrillator to confirm appropriate ventricular fibrillation detection. However, the PRAETORIAN-DFT randomised clinical trial aims to demonstrate the non-inferiority of omitting DFT in patients undergoing S-ICD implantation in whom the S-ICD system components are optimally positioned, as assessed by the calculated PRAETORIAN score.
Patient selection
Patients who are relatively younger, who need ICD for primary prevention, and who do not require pacing or cardiac resynchronisation therapy, are more suitable for S-ICD implantation. A young survivor of aborted sudden cardiac death is a typical example.
Transvenous vs subcutaneous ICD implantation procedure
References
External links
Subcutaneous Implantable Defibrillator (S-ICD) - Official Patient site
EMBLEM™ MRI S-ICD System - Subcutaneous Implantable Defibrillator
Subcutaneous ICD - EMBLEM S-ICD™ System
Cardiology
Heart
Cardiac electrophysiology
Implants (medicine)
Medical devices
Cardiac procedures | Subcutaneous implantable defibrillator | [
"Biology"
] | 1,011 | [
"Medical devices",
"Medical technology"
] |
47,128,053 | https://en.wikipedia.org/wiki/Oligopeptide%20P11-4 | Oligopeptide P11-4 is a synthetic, pH controlled self-assembling peptide used for biomimetic mineralization e.g. for enamel regeneration or as an oral care agent. P11-4 (INCI name Oligopeptide 104) consists of the natural occurring amino acids Glutamine, Glutamic acid, Phenylalanine, Tryptophan and Arginine. The resulting higher molecular structure has a high affinity to tooth mineral.
P11-4 has been developed and patented by the University of Leeds (UK). The Swiss company Credentis has licensed the peptide technology and markets it under trade names including CUROLOX, REGENAMEL, and EMOFLUOR, offering three products based on this technology. As of June 2016, products have been available in Switzerland under new brand names from Dr. Wild & Co AG.
Mechanism of action
P11-4 is an α-peptide that self-assembles into β-sheet amyloids with a hydrogel appearance at low pH. It builds a 3-D bio-matrix with binding sites for calcium ions that serve as nucleation points for hydroxyapatite (tooth mineral) formation. The high affinity to tooth mineral is based on the matching distances of Ca-ion binding sites on P11-4 and the Ca spacing in the crystal lattice of hydroxyapatite. Matrix formation is pH controlled, which allows the activity and the site of formation of the matrix to be controlled.
P11-4 in dental applications
The self-assembling properties of P11-4 are used to regenerate early caries lesions. When P11-4 is applied to the tooth surface, the peptide diffuses through the intact hypomineralized surface layer into the early caries lesion body and, owing to the low pH in such a lesion, starts to self-assemble, generating a peptide scaffold that mimics the enamel matrix.
Around the newly formed matrix, de-novo enamel crystals are formed from the calcium phosphate present in saliva. Through this remineralization, caries activity is significantly reduced in comparison with a fluoride treatment alone.
In aqueous oral care gels the peptide is present as a matrix. It binds directly to the tooth mineral and forms a stable layer on the teeth that protects them from acid attacks. It also occludes open dentin tubules and thus reduces dental sensitivity.
Uses
Treatment of initial caries lesions
Regenerating enamel
Dentin hypersensitivity
Acid protection
Availability
Availability of products containing P11-4 varies by country, with some products available only to dentists and others available to the retail public.
Medical device for caries treatment and enamel regeneration:
CURODONT REPAIR (EU)
REGENAMEL (CH)
Cosmetic products for acid protection and dentin desensitization:
CURODONT PROTECT (EU)
EMOFLUOR PROTECT GEL PROFESSIONAL (CH)
CURODONT D'SENZ (EU & CH)
EMOFLUOR DESENS GEL PROFESSIONAL (CH)
Candida Protect Professional (CH)
See also
Amorphous calcium phosphate (Recaldent)
Remineralisation of teeth
Oligopeptide
Biomimetic materials
Fluoride
References
External links
University of Leeds Centre for Molecular Nanoscience website
credentis ag website
vvardis ag website
Dental materials
Peptide therapeutics
Hendecapeptides
Acetamides | Oligopeptide P11-4 | [
"Physics"
] | 699 | [
"Materials",
"Dental materials",
"Matter"
] |
47,129,698 | https://en.wikipedia.org/wiki/Splitting%20band%20knife | The splitting band knife (or band knife or bandknife) is a kind of knife used in several fields including: tannery, EVA/rubber, foam, cork, shoe and leather goods, paper, carpet and other soft sheet materials. It is a power tool which is very similar in operation to a band saw, with an endless loop blade; the material to be cut is supported by a flat table.
Technical characteristics
A splitting band knife can be produced in different sizes (length × width × thickness) according to the splitting machine on which it is to be fitted. Several technical characteristics define the quality of the blade.
The blade can be supplied welded and bevelled, toothed and not rectified, or rectified on both the edge and the surfaces, with pre-sharpening done by tools or grinding stones.
A splitting band knife can be produced in a range of lengths, widths, and thicknesses.
Sectors and use
Tannery sector
In the tannery sector the splitting band knife is used to split leather and textiles through their thickness. The final products of this operation are the split and the grain (the internal and external parts) of the leather.
Blades can be used to split any material that has to be split through its thickness:
leather
fur
non-woven material
velvet
In the tannery sector, splitting band knives can be used in the following workings: wet blue, lime, dry, wet white and other tannings.
The blades most used in this sector are rectified on both the edge and the surfaces. This guarantees the best splitting, i.e. a constant thickness in the split leather (surface rectification) and maximum linearity during the splitting process (back-edge rectification); the blade must run as stably as possible, without oscillations that could create defects on the leather. Moreover, blades are usually supplied pre-bevelled to save run-in time once the blade is fitted on the splitting machine.
Rubber, cork and foam sectors
In the fields of rubber and cork, splitting band knives can be used on any material that needs to be split through its thickness, such as:
rubber (except vulcanized rubbers)
cork
foam
In this sector, the blade is chosen according to the application, the splitting machine, the material, and the cut/split precision required for the final product.
Shoes and leather goods sectors
In shoe and leather-goods production, splitting band knives are used to split and equalize or "reduce" the leather in thickness in order to improve the quality of the finished product.
The final product of this splitting, equalization or "reduction" is a leather ready to become a shoe or a leather good (for example bags, wallets, belts). The hides used in these sectors are always finished leathers, in the dry state.
In this field, splitting band knives can be used on any material that needs to be split through its thickness, such as:
leather
textiles – linings
rubber – insoles
cardboard components
As in the tannery sector, the blades most used here are rectified on both the edge and the surfaces, guaranteeing a constant thickness in the split leather and maximum linearity during the splitting process; the blade must run as stably as possible, without oscillations that could create defects on the leather. Blades are usually supplied pre-bevelled to save run-in time once fitted on the splitting machine.
Paper sector
The splitting band knife is also used in the paper sector, where it splits material through its thickness, for example paper reels (from toilet paper to reels for industrial use, paper towel rolls for domestic use, etc.).
In this production, the final product obtained by the splitting is:
paper for the industrial sector: big rolls, reels, etc.
paper for hygienic uses: handkerchiefs, toilet paper, kitchen rolls
In this sector, the blade is chosen according to the application, the splitting machine, the material, and the cut/split precision required for the final product.
Band knife machines
Band Knife blades are used on two types of machine (vertical and horizontal) depending on the material being cut/processed.
Vertical
On a vertical band knife machine a narrow band knife blade is usually used. The length of the band knife blade depends on the supplier of the band knife machine; the dimensions are indicated on a small metal tag pasted or riveted on the machine. The vertical machine band knife blade is most commonly a "double bevel, double edge" (DBDE) execution, which enables cutting both while advancing and while retracting the work table, whereas the "double bevel, single edge" (DBSE) execution cuts only in one direction. Productivity is enhanced when the operator cuts while advancing and also while retracting the work table, adjusting the foam block after each cutting pass. The DBDE blade can have a parallel welding or a welding twisted by 180 degrees; the twisted execution saves a grinding unit, as both edges pass the same grinding unit after two turns. It has been observed that a narrow blade on a vertical band knife machine gives better dimensional accuracy on the foam block: the wider the blade, the greater the deflection and the size variation from one end of the block to the other.
Horizontal
Horizontal band knife blades are wider; particular widths are popular for foam converting, for leather goods, and for the tannery splitting band knife, and other widths exist depending on the machine manufacturer. The horizontal machine band knife blade is supported by a guide to give dimensional accuracy while cutting/splitting. Therefore, only blades whose manufacture includes a surface grinding step reach the necessary thickness tolerances; a larger tolerance would lead to marks on the surface of the split material, such as leather or rubber. Blades are available in different grades of exactness depending on the accuracy required for the material to be cut/split. On modern machines, in combination with a high-grade blade, very thin and uniform splitting thicknesses are possible over the full material width.
Blade sharpening
For both the vertical and horizontal band knife machines there is a grinding attachment which continuously sharpens the band knife while it is cutting. It is possible to find a non powered grinding attachment for the vertical machines but for the horizontal band knife machine the grinding attachment for continuously sharpening the blade is powered by electric motors.
History
1808: W. Newberry patent No. 3105 (London), including "machinery for ... splitting skins",
1854: J.F. Flanders and J.A. Marden patent a bandknife machine,
1912: Foundation of the blade manufacturer Rudolf Alber,
Before WW II: several machinery brands on the market: Turner, Clasen, USM, BMD,
2011: The Polish pneumatic lifting table manufacturer REXEL started producing vertical band knife machines; current models are R1250, R1150, R1000, R750 and R500 (the number, e.g. 1000, is the arm length in cm).
Images
References
Knives
Power tools | Splitting band knife | [
"Physics"
] | 1,516 | [
"Power (physics)",
"Power tools",
"Physical quantities"
] |
47,135,293 | https://en.wikipedia.org/wiki/List%20of%20monuments%20damaged%20by%20conflict%20in%20the%20Middle%20East%20during%20the%2021st%20century | This is a list of monuments suffering damage from conflict in the Middle East during the 21st century. It is sorted by country.
Egypt
The Museum of Islamic Art in Cairo is home to one of the world's most impressive collections of Islamic art. It includes over 100,000 pieces that cover the entirety of Islamic history. The Cairo site was first built in 1881 and underwent a multi-million dollar renovation between 2003 and 2010.
On 24 January 2014, a car bomb attack targeting the Cairo police headquarters on the other side of the street caused considerable damage to the museum and destroyed many artifacts. It is estimated that 20–30% of the artifacts will need restoration. The blast also severely damaged the building's facade, wiping out intricate designs in the Islamic style. The Egyptian National Library and Archives in the same building was also affected.
Iraq
Dair Mar Elia, also known as Saint Elijah's monastery. The Christian monastery near Mosul was founded in the late 6th century, and its sanctuary was built in the 11th century. The monastery was damaged during the invasion of 2003, before being completely destroyed by ISIL in 2014.
Nimrud. The ancient Assyrian city around Nineveh Province, Iraq was home to countless treasures of the empire, including statues, monuments, and jewels. Following the 2003 invasion the site has been devastated by looting, with many of the stolen pieces finding homes in a museum abroad.
Great Mosque of Samarra. Once the largest mosque in the world, built in the 9th century on the Tigris River north of Baghdad. The mosque is famous for Malwiya Tower, a 52-meter minaret with spiraling ramps for worshipers to climb. The site was bombed in 2005, in an insurgent attack on a NATO position, destroying the top of the minaret and surrounding walls.
Al-Askari Shrine was severely damaged in a bombing in 2006 by unknown, masked assailants which resulted in the complete destruction of its golden dome.
Tomb of Jonah. The purported resting place of the biblical prophet Jonah, along with a tooth by some believed to be from the whale that consumed him in the myth. The site dated to the 8th century BC, and was of great importance to Christian and Muslim faiths. It was entirely blown up by ISIL militants in 2014 as part of their campaign against perceived apostasy.
Lebanon
Old Beirut suffered through a brutal 15-year civil war, successive battles with Israel, and sweeping urban development. It is referred to as the "Paris of the Middle East" and is known for its impressive landscape Ottoman, French and Art Deco architecture. Officials report that just 400 of 1200 protected historic buildings remain.
Tibnin Castle was damaged during the Israeli invasion of Lebanon in 2024 and one of its walls collapsed.
Libya
Cyrene (Libya). A key city for the Greeks and Romans, established in 630 BC. Famed as the basis for enduring myths and legends, such as that of the huntress heroine of the same name and bride of Apollo. The ruins were some of the best preserved from that period.
In May 2011, a number of objects excavated from Cyrene in 1917 and held in the vault of the National Commercial Bank in Benghazi were stolen. Looters tunnelled into the vault and broke into two safes that held the artefacts, which were part of the so-called 'Benghazi Treasure'. The whereabouts of these objects are currently unknown.
Parts of the UNESCO World Heritage Site of Cyrene were destroyed in August 2013 by locals to make way for homes and shops. Approximately 200 vaults and tombs were leveled, as well as a section of a viaduct dating to the third century BC. Artifacts were thrown into a nearby river.
Palestine
Al-Omari Mosque, Gaza. Ancient monument in the heart of Jabalya's old town that dates back to the Mamluk Era. The walls, dome and roof were destroyed by Israeli airstrikes during the 2015 fighting in Gaza, along with dozens more historic sites.According to tradition, the mosque stands on the site of the Philistine temple dedicated to Dagon—the god of fertility—which Samson toppled in the Book of Judges. Later, a temple dedicated to Marnas—god of rain and grain—was erected. Local legend today claims that Samson is buried under the present mosque. The mosque is well known for its minaret, which is square-shaped in its lower half and octagonal in its upper half, typical of Mamluk architectural style. The minaret is constructed of stone from the base to the upper, hanging balcony, including the four-tiered upper half. The pinnacle is mostly made of woodwork and tiles, and is frequently renewed. A simple cupola springs from the octagonal stone drum and is of light construction similar to most mosques in the Levant.
Syria
The ancient city of Bosra. Continually inhabited for 2,500 years, it became the capital of the Romans' Arabian province. The centerpiece is a magnificent Roman theatre dating back to the second century that survived intact until the current century. Archeologists have revealed that the site is now severely damaged from mortar shelling in 2011–2012 during the Arab Spring.
Citadel of Aleppo. The fortress spans at least four millennia, from the days of Alexander the Great, through Roman, Mongol, and Ottoman rule. The site has barely changed since the 16th century and is one of Syria's most popular World Heritage sites.
In August 2012, during the Battle of Aleppo of the Syrian civil war, the external gate of the citadel was damaged after being shelled during a clash between the Free Syrian Army and the Syrian Army to gain control over the citadel.
During the conflict, the Syrian Army used the Citadel as a military base, with the walls acting as cover while shelling surrounding areas and ancient arrow slits in walls being used by snipers to target rebels. As a result of this contemporary usage, the Citadel has received significant damage.
Armenian Genocide Memorial Church (Der Zor). Memorial site to the 1.5 million killed between 1915 and 1923, the Deir Ez-zor became a yearly destination for pilgrims from around the world. The site included a church, museum, and fire that burned continuously. On 21 September 2014, the memorial complex was blown up by militants of the Islamic State of Iraq and the Levant.
Al-Madina Souq. The covered markets in the Old City are a famous trade center for the region's finest produce, with dedicated sub-souks for fabrics, food, and accessories. The tunnels became the scene of fierce fighting and many of the oldest are now damaged beyond recognition. This was described by UNESCO as a tragedy.
Deir ez-Zor suspension bridge. This French-built suspension bridge was a popular pedestrian crossing and vantage point for its views of the Euphrates River. The bridge was destroyed by Free Syrian Army militiamen during the Syrian civil war in May 2013. Deir Ez-zor's Siyasiyeh Bridge was also destroyed.
Khalid ibn al-Walid Mosque. Among Syria's most famous Ottoman-style mosques, which also shows Mamluk influence through its light and dark contrasts. As of 2007, activities in the mosque were organized by shaykhs Haytham al-Sa'id and Ahmad Mithqan. Stamps depicting the mosque have been issued in several denominations.
The Khalid ibn al-Walid Mosque has been a symbol of anti-government rebels during the Syrian civil war. According to The New York Times, Syrian security forces killed 10 protesters participating in a funeral procession as they were leaving the mosque on 18 July 2011. The mosque, which the Syrian government stated had been turned by the rebels into an "arms and ammunition depot", was abandoned by the rebels on 27 July 2013. Shelling by government forces damaged Khalid's tomb inside the mosque. Following its capture by the Syrian Army, state media showed heavy damage inside the mosque, including some parts of it being burned, and the door to the tomb destroyed.
Krak des Chevaliers. The Crusader castle from the 11th century survived centuries of battles and natural disasters, becoming a World Heritage Site in 2006 along with the adjacent castle of Qal'at Salah El-Din.
During the Syrian Civil War, which began in 2011, UNESCO voiced concerns that the conflict might lead to damage of important cultural sites such as Krak des Chevaliers. It has been reported that the castle was shelled in August 2012 by the Syrian Arab Army, and the Crusader chapel has been damaged. The castle was reported to have been damaged in July 2013 by an airstrike during the Siege of Homs, and once more on 18 August 2013 it was clearly damaged, although the extent of the destruction is unknown. The Syrian Arab Army recaptured the castle and the village of al-Hosn from rebel forces during the Battle of Hosn on March 20, 2014, although the extent of damage from earlier mortar hits remained unclear.
Palmyra. An "oasis in the Syrian desert" according to UNESCO, this Aramaic city has stood since the second millennium BC and featured some of the most advanced architecture of the period. The site subsequently evolved through Greco-Roman and Persian periods, providing unique historic insight into those cultures.
As a result of the Syrian Civil War, Palmyra experienced widespread looting and damage by combatants. During the summer of 2012, concerns about looting in the museum and the site increased when an amateur video of Syrian soldiers carrying funerary stones was posted. However, according to France 24's report, "From the information gathered, it is impossible to determine whether pillaging was taking place." The following year the facade of the temple of Bel sustained a large hole from mortar fire, and colonnade columns have been damaged by shrapnel. According to Maamoun Abdulkarim, director of antiquities and museums at the Syrian Ministry of Culture, the Syrian Army positioned its troops in some archaeological-site areas, while Syrian opposition soldiers stationed themselves in gardens around the city.
On 13 May 2015, the ISIL launched an attack on the modern town, sparking fears that the iconoclastic group would destroy the site. On 21 May, ISIL forces entered the World Heritage Site. Local residents reported that the Syrian air force bombed the site on 13 June, damaging the northern wall next to the Temple of Baalshamin. The Temple of Baalshamin and the Temple of Bel were demolished by ISIL in August 2015.
The Great Mosque of Aleppo. A World Heritage Site originally built in 715 by the Umayyad dynasty, ranking it among the oldest mosques in the world. The epic structure evolved through successive eras, gaining its famous minaret in the late 11th century.
On 13 October 2012 the mosque was seriously damaged during clashes between the armed groups of the Free Syrian Army and the Syrian Army forces. President Bashar al-Assad issued a presidential decree to form a committee to repair the mosque by the end of 2013.
The mosque was seized by rebel forces in early 2013 and, as of April 2013, was within an area of heavy fighting, with government forces stationed a short distance away.
On 24 April 2013 the minaret of the mosque was reduced to rubble during an exchange of heavy weapons fire between government forces and rebels during the ongoing Syrian civil war. The Syrian Arab News Agency (SANA) reported that members of Jabhat al-Nusra detonated explosives inside the minaret, while opposition activists said that the minaret was destroyed by Syrian Army tank fire as part of an offensive. Countering assertions by the state media of Jabhat al-Nusra's involvement, opposition sources described them as rebels from the Tawhid Brigades who were fighting government forces around the mosque. The opposition's main political bloc, the Syrian National Coalition (SNC), condemned the minaret's destruction, calling it "an indelible disgrace" and "a crime against human civilization."
Yemen
Sana'a old city. Yemen's capital city of Sana'a has been struck by suicide bombings (for which ISIL has claimed responsibility) and air-strikes by the Saudi-led coalition. These have affected the old fortified city—inscribed on UNESCO's World Heritage List since 1986—and the archaeological site of the pre-Islamic walled city of Baraqish, causing, according to UNESCO, "severe damage".
See also
Destruction of art
List of heritage sites damaged during the Syrian Civil War
List of World Heritage in Danger
Lost artworks
List of destroyed heritage
Destruction of Art in Afghanistan
References
Architecture lists
Middle East
Cultural lists
Lists of demolished buildings and structures
21st century-related lists
Middle East-related lists
Lists of monuments and memorials
Monuments and memorials in Asia | List of monuments damaged by conflict in the Middle East during the 21st century | [
"Engineering"
] | 2,576 | [
"Architecture lists",
"Architecture"
] |
52,713,809 | https://en.wikipedia.org/wiki/Joos%E2%80%93Weinberg%20equation | In relativistic quantum mechanics and quantum field theory, the Joos–Weinberg equation is a relativistic wave equation applicable to free particles of arbitrary spin , an integer for bosons () or half-integer for fermions (). The solutions to the equations are wavefunctions, mathematically in the form of multi-component spinor fields. The spin quantum number is usually denoted by in quantum mechanics, however in this context is more typical in the literature (see references).
It is named after Hans H. Joos and Steven Weinberg, found in the early 1960s.
Statement
Introducing a matrix $\gamma^{\mu_1 \mu_2 \cdots \mu_{2j}}$,
symmetric in any two tensor indices, which generalizes the gamma matrices in the Dirac equation, the equation is
$$\left[(i\hbar)^{2j}\,\gamma^{\mu_1 \mu_2 \cdots \mu_{2j}}\,\partial_{\mu_1}\partial_{\mu_2}\cdots\partial_{\mu_{2j}} + (mc)^{2j}\right]\Psi = 0$$
or, in terms of the 4-momentum operator $p_\mu = i\hbar\,\partial_\mu$,
$$\left[\gamma^{\mu_1 \mu_2 \cdots \mu_{2j}}\,p_{\mu_1}p_{\mu_2}\cdots p_{\mu_{2j}} + (mc)^{2j}\right]\Psi = 0.$$
Lorentz group structure
For the JW equations the representation of the Lorentz group is
$$D^{\mathrm{JW}} = D^{(j,0)} \oplus D^{(0,j)}.$$
This representation has definite spin j. It turns out that a spin-j particle in this representation satisfies field equations too. These equations are very much like the Dirac equations. It is suitable when the symmetries of charge conjugation, time reversal symmetry, and parity are good.
The representations $D^{(j,0)}$ and $D^{(0,j)}$ can each separately represent particles of spin j. A state or quantum field in such a representation would satisfy no field equation except the Klein–Gordon equation.
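As a quick worked check of the sizes involved (an added illustration, not part of the original text), the dimension of this representation is
$$\dim\left[(j,0)\oplus(0,j)\right] = 2(2j+1),$$
so j = 1 gives the six components of an antisymmetric tensor field and j = 3/2 gives eight components, matching the cases discussed below.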
Lorentz covariant tensor description of Weinberg–Joos states
The six-component spin-1 representation space,
$$(1,0)\oplus(0,1),$$
can be labeled by a pair of anti-symmetric Lorentz indexes, $[\mu\nu]$, meaning that it transforms as an antisymmetric Lorentz tensor of second rank $B_{\mu\nu}$, i.e.
$$B_{\mu\nu} \longrightarrow \Lambda_{\mu}{}^{\alpha}\,\Lambda_{\nu}{}^{\beta}\,B_{\alpha\beta}.$$
The j-fold Kronecker product of $(1,0)\oplus(0,1)$ decomposes into a finite series of Lorentz-irreducible representation spaces and necessarily contains a $(j,0)\oplus(0,j)$ sector. This sector can instantly be identified by means of a momentum-independent projector operator, designed on the basis of one of the Casimir elements (invariants) of the Lie algebra of the Lorentz group; these are defined in terms of the generators, the constant matrices representing the elements of the Lorentz algebra within the representation under consideration. The capital Latin letter labels indicate the finite dimensionality of the representation spaces, which describe the internal angular momentum (spin) degrees of freedom.
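The decomposition step above follows the standard Clebsch–Gordan rule for products of Lorentz-group irreps labelled (A, B). A minimal sketch of that rule (the function names are invented for illustration; half-integer labels can be passed as 0.5, 1.5, and so on):

```python
def su2_range(a, c):
    # Angular-momentum addition: |a - c|, |a - c| + 1, ..., a + c
    vals, v = [], abs(a - c)
    while v <= a + c:
        vals.append(v)
        v += 1
    return vals

def lorentz_product(rep1, rep2):
    # (a, b) x (c, d) decomposes into all (e, f) with
    # e in |a - c| .. a + c and f in |b - d| .. b + d.
    (a, b), (c, d) = rep1, rep2
    return [(e, f) for e in su2_range(a, c) for f in su2_range(b, d)]

# Kronecker products of the two chiral blocks of the antisymmetric tensor:
print(lorentz_product((1, 0), (0, 1)))  # [(1, 1)]
print(lorentz_product((1, 0), (1, 0)))  # [(0, 0), (1, 0), (2, 0)]
```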
The representation spaces are eigenvectors of this Casimir operator, each sector with its own characteristic eigenvalue. Denoting the eigenvalue of the $(j,0)\oplus(0,j)$ sector in this way, the projector operator onto that sector can be defined in terms of the Casimir operator and the eigenvalues of the remaining sectors. Such projectors can be employed to search through the Kronecker product space for $(j,0)\oplus(0,j)$ and exclude all the rest. Relativistic second-order wave equations for any j are then straightforwardly obtained by first identifying the $(j,0)\oplus(0,j)$ sector by means of the Lorentz projector and then imposing the mass-shell condition on the result.
This algorithm is free from auxiliary conditions. The scheme also extends to half-integer spins, in which case the Kronecker product of $(1,0)\oplus(0,1)$ with the Dirac spinor,
$$\left(\tfrac{1}{2},0\right)\oplus\left(0,\tfrac{1}{2}\right),$$
has to be considered. The choice of the totally antisymmetric Lorentz tensor of second rank, $B_{\mu\nu}$, in the above construction is only optional. It is possible to start instead with multiple Kronecker products of totally symmetric second-rank Lorentz tensors. The latter option should be of interest in theories where high-spin Joos–Weinberg fields preferably couple to symmetric tensors, such as the metric tensor in gravity.
An Example
Consider a field transforming in the Lorentz tensor-spinor of second rank, the Kronecker product of the antisymmetric tensor $(1,0)\oplus(0,1)$ with the Dirac spinor. The Lorentz group generators within this representation space combine the generators acting on the antisymmetric tensor indices with the unit operator and the Lorentz algebra elements within the Dirac space, the latter built from the standard gamma matrices; they can be expressed in terms of the generators in the four-vector representation. From these, the explicit expression for the Casimir invariant introduced above follows, together with the Lorentz projector onto $(3/2,0)\oplus(0,3/2)$. In effect, the $(3/2,0)\oplus(0,3/2)$ degrees of freedom are found to solve a second-order wave equation obtained by combining this projector with the mass-shell condition. Explicit expressions for the solutions can be found in the references.
See also
Higher-dimensional gamma matrices
Bargmann–Wigner equations, alternative equations which describe free particles of any spin
Higher spin theory
References
Quantum mechanics
Quantum field theory
Mathematical physics | Joos–Weinberg equation | [
"Physics",
"Mathematics"
] | 877 | [
"Quantum field theory",
"Applied mathematics",
"Theoretical physics",
"Quantum mechanics",
"Mathematical physics"
] |
52,719,800 | https://en.wikipedia.org/wiki/Pentafluorosulfur%20hypofluorite | Pentafluorosulfur hypofluorite is an oxyfluoride of sulfur in the +6 oxidation state, with a fluorine atom attached to oxygen. The formula is SOF6. In standard conditions it is a gas.
Synthesis
SOF6 can be made by reacting thionyl fluoride with fluorine at 200 °C with a silver difluoride catalyst.
SOF2 + 2F2 → SOF6 (+ some SOF4)
Properties
The molecular shape has five fluorine and one oxygen atom arranged around a sulfur atom in an octahedral arrangement. Another fluorine atom is attached to the oxygen in almost a straight line with the S-O connection. So the molecular formula can also be written as SF5OF. The average S-F distance is 1.53 Å. The angles ∠FSF and ∠FSO are 90°.
The 19F nuclear magnetic resonance spectrum of SOF6 compared to SF6 has a -131.5 ppm shift for the hypofluorite fluorine, and 1.75 ppm for the opposite F. The other four fluorine atoms have a shift of 3.64 ppm. Spin coupling of o-F to SF4 is 17.4 Hz, between SF4 and opposite (apex) SF 155 Hz, and between apex and hypofluorite it is 0.0.
Reactions
Iodide is oxidised to iodine
SOF6 + 2I− + H2O → SO2F2 + I2 + 2HF + 2F−
Alkalis such as potassium hydroxide react
2SOF6 + 12OH− → O2 + 10F− + 6H2O + 2SO3F−
Alkenes react to add to a double bond, with -OSF5 on one carbon, and -F on the other.
C2H4 + SOF6 → FH2CCH2OSF5.
C2F4 + SOF6 → CF3CF2OSF5. C2SOF10 boils at 15°C
SOF6 + ClCH=CH2 → FClCH-CH2-O-SF5
SOF6 + FCH=CH2 → F2CH-CH2-O-SF5
SOF6 + F2C=CH2 → F3C-CH2-O-SF5
SOF6 + SOF4 → mixture of SF6, SOF4, bis-(pentafluorosulfur) peroxide F5SOOSF5 and bis-(pentafluorosulfur) oxide F5SOSF5.
Thermal decomposition produces sulfur hexafluoride and oxygen.
2SOF6 → 2SF6 + O2 (on heating above 210 °C)
Some reactions of SOF6 result in fluorination of other molecules
SOF6 + CO → F2CO + SOF4.
SOF6 + F2CO → SF5OOCF3
SOF6 + SO3 → F5SOOSO2F
SOF6 + N2F4 → F5SONF2
3SOF6 + Br2 → 2BrF3 + 3SOF4
5SOF6 + I2 → 2IF5 + 5SOF4
PF3 + SOF6 → PF5 + SOF4
2NO2 + SOF6 → 2NO2F + SOF4
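The simple neutral-species equations above can be checked for elemental balance mechanically. A minimal sketch (helper names invented for illustration; it handles only plain formulas with an optional leading coefficient, without parentheses, charges, or state symbols):

```python
import re
from collections import Counter

def parse_formula(formula: str) -> Counter:
    # Count atoms in a plain formula such as "SOF6" or "2HF".
    coeff_match = re.match(r"^(\d+)", formula)
    coeff = int(coeff_match.group(1)) if coeff_match else 1
    body = formula[coeff_match.end():] if coeff_match else formula
    counts = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", body):
        counts[elem] += coeff * (int(num) if num else 1)
    return counts

def is_balanced(reactants, products) -> bool:
    # Sum the atom counts on each side and compare.
    total = lambda side: sum((parse_formula(s) for s in side), Counter())
    return total(reactants) == total(products)

# The thermal decomposition written above: 2 SOF6 -> 2 SF6 + O2
print(is_balanced(["2SOF6"], ["2SF6", "O2"]))   # True
print(is_balanced(["SOF2", "2F2"], ["SOF6"]))   # True
```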
References
Fluorides
Sulfur(VI) compounds
Hypofluorites
Gases | Pentafluorosulfur hypofluorite | [
"Physics",
"Chemistry"
] | 729 | [
"Matter",
"Phases of matter",
"Salts",
"Statistical mechanics",
"Fluorides",
"Gases"
] |
43,994,184 | https://en.wikipedia.org/wiki/Homeostatic%20capacity | Homeostatic capacity refers to the capability of systems to self-stabilize in response to external forces or stressors, or more simply the capability of systems to maintain homeostasis. For living organisms, it is life's foundational trait, consisting of a hierarchy and network of traits endowed by nature and shaped by natural selection. Homeostatic capacity comprises a multidimensional network of traits and operates at all scales of biology systems levels including molecular, cellular, physiological, and organismal.
Human homeostatic capacity
In the context of human beings, homeostatic capacity refers to the inherent ability of the body to self-stabilize in response to external and internal stimuli. Homeostatic capacity of the human body erodes with age.
Homeostatic capacity and aging
A hypothesis proffered by the proponents of the Palo Alto Longevity Prize is that the array of ailments associated with aging may be epiphenomena of eroding homeostatic capacity and the process of aging may be halted or reversed by restoring homeostatic capacity to that of a healthy young adult.
See also
Senescence
References
Ageing
Human homeostasis | Homeostatic capacity | [
"Biology"
] | 231 | [
"Human homeostasis",
"Homeostasis"
] |
43,996,093 | https://en.wikipedia.org/wiki/International%20Berthing%20and%20Docking%20Mechanism | The International Berthing and Docking Mechanism (IBDM) is the European androgynous low impact docking mechanism that is capable of docking and berthing large and small spacecraft. The development of the IBDM is under ESA contract with QinetiQ Space as prime contractor.
History
The IBDM development was initiated as a joint development programme with NASA JSC. The first application of the IBDM was intended to be the ISS Crew Return Vehicle (CRV). In the original Agency to Agency agreement, it was decided to develop an Engineering Development Unit (EDU) to demonstrate the feasibility of the system and the associated technologies. NASA JSC were responsible for the system and avionics designs and ESA for the mechanical design. However, since the cancellation of the CRV program, the two Agencies have independently progressed with the docking system development.
The IBDM is designed to be compatible with the International Docking System Standard (IDSS) and is hence compatible with the ISS International Docking Adapters (IDA) on the US side of the ISS.
The European Space Agency started a cooperation with SNC to provide the IBDM for attaching this new vehicle to the ISS in the future. After SNC was selected as a commercial contractor to resupply the International Space Station in January 2016, ESA decided to spend 33 million euros ($36 million) to complete the design of the IBDM and build a flight model for Dream Chaser’s first mission.
Design
The IBDM provides both docking and berthing capability. The docking mechanism comprises a Soft Capture System (SCS) and a structural mating system called the Hard Capture System (HCS), both explained in more detail below. The IBDM avionics run in hot redundancy.
Soft Capture System
The SCS utilizes active control using 6 servo-actuated legs from RUAG Space (Switzerland) which are coordinated to control the SCS ring in its 6 degrees of freedom. The leg forces are measured to modify the compliance of the SCS ring to facilitate alignment of the active platform during capture. A large range of vehicle mass properties can be handled. Mechanical latches achieve soft capture.
Hard Capture System
The HCS uses structural hook mechanisms to close the sealed mated interface. QinetiQ Space has developed several generations of latches and hooks to come to the final hook design.
SENER (Spain) will be responsible for the further development and qualification of the HCS subsystem.
Features
The key features of the IBDM are that it is a fully computer-controlled mechanism capable of smooth, low-impact docking and berthing (which reduces contact forces and the resulting structural loads), that it supports autonomous operations in case of failures, and that its flexibility with respect to vehicle mass makes it suitable for applications ranging from exploration to resupply missions. A backup safe mode is also available in case of failure.
Application
The American company Sierra Nevada Corporation (SNC) is developing the Dream Chaser, which is a small reusable spacecraft that is selected to transport cargo and/or crew to the ISS. The European Space Agency has started a cooperation with SNC to potentially provide the IBDM for attaching this new vehicle to the ISS in the future. The IBDM will be mounted to the unpressurised cargo module, which will be ejected before reentry.
Status
The IBDM development has successfully passed the Critical Design Review (December 2015).
An engineering model of the mechanism and its hot-redundant avionics has been developed and successfully tested (March 2016). The performance of the system has been verified at the certified SDTS docking test facility at NASA JSC.
The consortium has started manufacturing the full IBDM qualification model (SCS + HCS).
References
Astrodynamics
Orbital maneuvers
Spacecraft docking systems | International Berthing and Docking Mechanism | [
"Engineering"
] | 765 | [
"Astrodynamics",
"Aerospace engineering"
] |
61,574,273 | https://en.wikipedia.org/wiki/Rem%20Khokhlov | Rem Viktorovich Khokhlov (July 15, 1926, Livny – August 8, 1977, Moscow) was a Soviet physicist and university teacher, rector of Lomonosov Moscow State University, one of the founders of nonlinear optics.
Biography
Khokhlov was born into the family of Viktor Khristoforovich Khokhlov, a political officer and graduate of the Moscow Energetic Institute, and the physicist Maria Yakovlevna. He graduated from a seven-year school in 1941 and worked in a car workshop during the Great Patriotic War. In 1944, he passed the high-school examinations as an external student and began to study at the Moscow Aviation Institute. In 1945, he moved to the Physics department at Moscow State University, where he remained for the rest of his life. After graduating from university in 1948, he entered graduate school at the Department of Oscillation Physics. In 1952 he defended his thesis for the degree of Candidate of Physical and Mathematical Sciences (equivalent to a PhD). With his investigations into vibrational physics he belonged to the third generation of the vibration physics school of Leonid I. Mandelstam and Nikolai D. Papaleksi. In 1959, he was sent on a one-year study visit to Stanford University in the United States. In 1962 he was awarded the degree of Doctor of Sciences (habilitation). Together with S. A. Akhmanov, Khokhlov organized the first nonlinear optics laboratory in the Soviet Union at Lomonosov Moscow State University.
Selected publications
Krasnushkin P. E., Khokhlov R. V. Spatial beats in coupled wave guides. National Research Council of Canada, 1952.
Kaner, V.V., Rudenko, O.V., Khokhlov, R.V. Theory Of Nonlinear Oscillations In Acoustic Resonators. Sov Phys Acoust. 1977
References
Honors
Order of Lenin
Order of the Red Banner of Labor
Jubilee Medal "In Commemoration of the 100th Anniversary of the Birth of Vladimir Ilyich Lenin"
Lenin Prize (1970)
Foreign member of the Bulgarian Academy of Sciences
State Prize of the USSR (1985)
Namesake for the asteroid (3739) Rem (posthumously 1993)
Footnotes
1926 births
People from Livny
1977 deaths
Members of the Central Auditing Commission of the 25th Congress of the Communist Party of the Soviet Union
Soviet physicists
Optical physicists
Theoretical physicists
Moscow State University alumni
Academic staff of Moscow State University
Recipients of the Lenin Prize
Full Members of the USSR Academy of Sciences
Recipients of the USSR State Prize
Burials at Novodevichy Cemetery
Rectors of Moscow State University | Rem Khokhlov | [
"Physics"
] | 530 | [
"Theoretical physics",
"Theoretical physicists"
] |
61,575,689 | https://en.wikipedia.org/wiki/National%20Atmospheric%20Deposition%20Program | The National Atmospheric Deposition Program (NADP) is a Cooperative Research Support Program of the State Agricultural Experiment Stations (NRSP-3). Housed at the Wisconsin State Laboratory of Hygiene at the University of Wisconsin–Madison, the NADP is a collaborative effort among many different groups, such as federal, state, tribal, and local government agencies, educational institutions, private companies, and non-governmental organizations. These organizations work together to operate monitoring sites and report deposition data. The NADP provides free access to all of its data, including seasonal and annual averages, trend plots, deposition maps, reports, manuals, and educational brochures.
Overview
Established: 1977
Number of sites: ~350 site locations
Number of users: >37,000
History
Evolution
The National Atmospheric Deposition Program, or NADP, was initiated by the State Agricultural Experiment Station in 1977 to monitor the effects of atmospheric deposition on crops, rangelands, forests, surface waters, and other natural and cultural resources. The initial goal was to provide regional data for the deposition of acids, nutrients, and base cations (including temporal trends/amounts and geographic distributions).
In 1978, the first NADP sites began collecting weekly precipitation samples. In the early 1980s, the National Acid Precipitation Assessment Program (NAPAP) was established, and began to work in collaboration with NADP in order to sustain a long term, quality-assured precipitation monitoring network. This unification brought on a major expansion as well as newfound federal agency support. Today, the NADP National Trends Network (NTN) has more than 250 sites.
In response to emerging issues, the NADP established an additional two networks in the 1990s: The Atmospheric Integrated Research Monitoring Network (AIRMoN), which collected daily samples at five sites, and the Mercury Deposition Network (MDN), which has more than 80 sites (six of which are located in Canada). The MDN collects wet deposition data for both total and methyl mercury in precipitation.
In 2009, the Atmospheric Mercury Network (AMNet) was formed as a fourth network, and as a subset of some MDN sites. The network uses continuous automatic measurement systems to monitor gaseous and particulate concentrations of atmospheric mercury. The Ammonia Monitoring Network (AMoN) was added as a fifth network in October 2010, and it currently has more than 100 sites. AMoN monitors ammonia gas concentrations across the United States to provide consistent and lasting data. The Mercury Litterfall Network (MLN) was approved as the sixth network in 2021 with 22 sites. MLN provides estimates of mercury dry deposition in forested landscapes using passive collectors.
History of the National Acid Precipitation Assessment Program (NAPAP)
The National Acid Precipitation Assessment Program (NAPAP) was a cooperative federal program that was first authorized in 1981 in order to coordinate acid rain research and report those findings to the U.S. Congress. The research, monitoring, and assessment efforts of NAPAP, and other groups in the 1980s, culminated in Title IV of the 1990 Clean Air Act Amendments (CAAA), also known as the Acid Deposition Control Program. Title IX of the CAAA reauthorized NAPAP to conduct acid rain research and monitoring, and to periodically assess the costs, benefits, and effectiveness of Title IV. The NAPAP member agencies were the U.S. Environmental Protection Agency, the U.S. Department of Energy, the U.S. Department of Agriculture, the U.S. Department of Interior, the National Aeronautics and Space Administration, and the National Oceanic and Atmospheric Administration.
The NAPAP published a total of four reports: 1991 (multiple volumes), 1998, 2005, and 2011. The Program was able to describe and document strong reductions in sulfur dioxide and nitrogen oxide emissions, as well as the resulting atmospheric deposition from 1980 to 2010 as various elements of the CAAA were implemented. The NAPAP officially ended with publication of the last report in 2011. To reflect the federal NAPAP role in the NADP, the network name was changed to the NADP National Trends Network (NTN).
Organization
Governance
The organizational structure of the NADP follows the State Agricultural Experiment Station Guidelines for Multi-State Research Activities (SAESD, 2006). This framework allows any individual or institution to participate in any segment of NADP, whether it be the monitoring or the research aspect of atmospheric deposition. NADP is managed by two groups. The first is Program Management, which is largely a volunteer group made up of site sponsors and supervisors, policy experts from several agencies (at the federal, state, and local levels), scientists and research specialists, and anyone with an interest in atmospheric deposition. Program management is organized through an Executive Committee, Technical Subcommittees, several advisory subcommittees, science subcommittees, and ad hoc groups. The second group is Program Operations, which is managed by a professional staff housed at the Wisconsin State Laboratory of Hygiene at the University of Wisconsin-Madison. The Program Office oversees day-to-day tasks, including coordinating with the Executive Committee, the individual monitoring networks, the analytical laboratories, the External Quality Assurance Program, and the Network Equipment Depot.
Committees
The NADP is governed by an elected and rotating Executive Committee (8 members). Currently, there are two standing Subcommittees, three standing Advisory Committees, and four Science Committees (highlighted below) that contribute continuous, scheduled suggestions to the Executive Committee. Ad hoc groups and the Program Office also supply crucial input to the Executive Committee.
The Executive Committee (EC) is responsible for considering and, if approved, executing decisions which are often based on the suggestions made by the subcommittees, advisory committees, science committees, and ad hoc groups. In addition, the EC is accountable for financial decisions and securing a balanced, stable, and ongoing program. There are eight voting members, as well as numerous non-voting members, that make decisions and appoint responsibilities to the subcommittees.
The two standing Technical Subcommittees, the Education and Outreach Subcommittee (EOS; formerly the Ecological Response and Outreach Subcommittee) and the Network Operations Subcommittee (NOS), provide the technical support necessary to promote the goals of NADP. EOS maintains a platform to coordinate outreach and education activities among the network and scientific subcommittees. With approval and recommendation from the Executive Committee, EOS will provide guidance for outreach efforts and educational materials to the Program Office. EOS will provide a forum to enable communication of outreach and education needs, goals and activities of the subcommittees and networks. The goal is to enhance efficiency in messaging and reaching new audiences. The NOS focuses on equipment, research, sampling methods, collection sites, and the evaluation of the issues that arise from these components.
The three advisory subcommittees include the Budget Advisory Committee (BAC), Quality Assurance Advisory Group (QAAG), and Data Management Advisory Group (DMAG). The role of the BAC is to advise the EC with suggestions pertaining to the budget, and to outline financial planning for current and future years. The QAAG is in charge of ensuring quality management in all aspects of NADP, including the Program Office, networks, and laboratories. To do so, they provide recommendations for manuals and procedures to the EC. The DMAG counsels the EC in data management by reviewing data reports and formats in order to ensure that they are in line with the correct protocols.
The science committees do not directly advise NADP networks, but they are closely affiliated. They assess major atmospheric deposition concerns and track scientific interest and participation. The first scientific committee was the Critical Loads of Atmospheric Deposition (CLAD), and the second was the Total Deposition Science Committee (TDep). CLAD and TDep were approved by the EC in 2010 and 2011, respectively. The goal of the CLAD is to provide a forum, across all levels of government and industry, that encourages the use and discussion of technical information and critical load science. TDep seeks to evaluate pressing issues of atmospheric deposition via a collaboration between a wide range of groups. TDep also aims to improve the ability to measure and model wet and dry deposition. To do so, they are working to advance the techniques and procedures which are used to estimate deposition of sulfur, nitrogen, and mercury. In October 2017, the Aeroallergen Monitoring Science Committee (AMSC) was added as the third science committee. AMSC seeks to utilize emerging technologies to advance the science of aeroallergen monitoring, enhance the understanding of quality data collection and evaluation methods, and provide lasting data for national networks. A fourth science committee, the Mercury in the Environment and Links to Deposition Science Committee (MELD), was formed in 2020 to improve our understanding of atmospherically-derived mercury sources, pathways, processes, and effects on the environment.
All NADP operations are administered at the NADP Program Office, which is currently located at the Wisconsin State Laboratory of Hygiene at the University of Wisconsin–Madison. The five main functions of the Program Office are network administration, management, meetings and trainings, data and publications, and quality assurance and management.
Network administration involves overseeing the endeavors of all five networks, managing sample analysis, and coordinating data storage and user availability. These functions are executed from the two analytical laboratories housed at WSLH: The Central Analytical Lab (CAL), which analyses samples from the NTN and AMoN networks, and the Mercury (Hg) Analytical Laboratory (HAL). The HAL was previously housed at Eurofins Frontier Global Sciences, Inc. in Bothell, Washington. In May 2023, the CAL and the HAL were renamed the NADP Analytical Laboratory (NAL). In addition, the Network Equipment Depot, located at the WSLH, provides spare parts for NADP field equipment and troubleshoots site operation problems.
Cooperating agencies
More than 80 sponsors support the NADP: Private companies and other non-governmental organizations, universities, local and state government agencies (i.e. state agricultural experiment stations), national laboratories, Native American environmental organizations, Canadian government agencies, the National Oceanic and Atmospheric Administration, the U.S. Environmental Protection Agency, the U.S. Geological Survey, the National Park Service, the U.S. Fish & Wildlife Service, the Bureau of Land Management, the U.S. Forest Service, the U.S. Department of Agriculture-Agricultural Research Service, the National Science Foundation, and the U.S. Department of Energy.
Networks
NTN
The NTN has over 250 sites that focus on wet deposition chemistry by collecting weekly precipitation samples nationwide. The samples are sent to the NADP Analytical Laboratory (NAL) at the Wisconsin State Lab of Hygiene for analysis and are then used to determine geographic distribution and annual trends. The sample collection and handling methods follow strict clean-handling procedures in order to ensure accurate results. The analytes monitored are: Free acidity (H+ as pH), conductance, calcium (Ca2+), magnesium (Mg2+), sodium (Na+), potassium (K+), sulfate (SO42-), nitrate (NO3−), chloride (Cl−), and ammonium (NH4+). The NAL also measures orthophosphate, but only for quality assurance as an indicator of sample contamination.
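To illustrate how weekly concentration and precipitation measurements of this kind are commonly turned into the quantities reported by deposition networks, the sketch below computes a precipitation-weighted mean concentration and a wet-deposition flux. It uses only standard unit conversions; it is not NADP's own processing code, and the sample values are invented.

```python
# Hypothetical weekly data for one site and one analyte (e.g. sulfate):
# concentration in mg/L and precipitation depth in mm.
weekly = [
    (1.8, 22.0),
    (2.4, 10.5),
    (0.9, 35.0),
    (1.2, 18.0),
]

# Precipitation-weighted mean concentration (mg/L): each week is weighted by how much rain fell.
total_precip = sum(p for _, p in weekly)
pw_mean = sum(c * p for c, p in weekly) / total_precip

# Wet deposition: (mg/L) * (mm of precipitation) = mg/m^2, and 1 mg/m^2 = 0.01 kg/ha.
# Extend the list to a full year of weekly samples for an annual total.
deposition_kg_per_ha = sum(c * p for c, p in weekly) * 0.01

print(f"precipitation-weighted mean = {pw_mean:.2f} mg/L")
print(f"wet deposition over the sampled weeks = {deposition_kg_per_ha:.2f} kg/ha")
```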
MDN
The MDN measures total mercury concentrations on a weekly basis (methyl mercury is measured monthly at some sites), which provides wet deposition data for surface waters and other waterways. The goal is to deliver accurate information that allows researchers to evaluate the linkage between mercury and health, which is strengthened by its large spatial and temporal footprint.
AMNet
The AMNet consists of approximately 15 sites across the U.S. and Canada. The function of these sites is to measure ambient air concentrations of gaseous oxidized mercury (GOM), particulate bound mercury (PBM2.5), and gaseous elemental mercury (GEM). This network works to monitor and report atmospheric mercury that causes dry and total deposition of mercury at select MDN sites. AMNet produces high-resolution data to determine atmospheric mercury trends and models, the ecological consequences of mercury discharging sources, and how to adequately control mercury levels.
AMoN
The AMoN measures ambient ammonia gas concentrations over a two-week period via a Radiello®-passive sampler, which is a simple diffusive sampler that offers higher capacity and faster sampling rates than other devices. Therefore, AMoN can provide reliable data to aid in meeting air quality policies and administration needs. AMoN collects data biweekly to determine the spatial variability and seasonality of ammonia concentrations.
MLN
The MLN provides estimates of an important component of mercury dry deposition to forested landscapes. The importance of litterfall mercury data for quantifying atmospheric mercury deposition to forests was demonstrated by studies at NADP sites in the eastern USA from 2007 to 2009 and from 2007 to 2014.
Closed Networks
AIRMoN
The AIRMoN sites were primarily used to assess the impacts of emission changes such as potential effects from new sources, federal Clean Air Act controls, and source-receptor relationships in atmospheric models. The network measured the same contaminants as the NTN, but sampling occurred daily during precipitation to provide greater temporal resolution. This consistent, high-resolution sampling improved the researchers’ ability to evaluate the data and, therefore, provide reliable results. The network was discontinued in September 2019.
Products
Tabular data products
Reports
Brochures
Annual Data Summaries
Quality Assurance Reports
CLAD Science Committee Reports
TDep Science Committee Reports
AMSC Study Plan
MELD Science Committee Reports
Other helpful sites
Rocky Mountain Research Station - Air, soil, and water resources and quality
NRSP3: The National Atmospheric Deposition Program (NADP)
Standard Operating Procedures (SOP)
Accurate and consistent measurement of gases and deposition at every monitoring site is of the utmost importance to the NADP. This is accomplished, in part, by ensuring that all sites adhere to specific standard operating procedures. This provides consistent methodology at all sites within the networks. The SOPs can be viewed here:
http://nadp.slh.wisc.edu/siteops/
Other Deposition Monitoring Groups
Acid Deposition Monitoring Program in East Asia (EANET)
Canadian Air and Precipitation Monitoring Network (CAPMoN)
Clean Air Status and Trends Network (CASTNET)
Great Lakes National Program Office (GLNPO)
Asia-Pacific Mercury Monitoring Network (APMMN)
References
a. 1SAESD (State Agricultural Experiment Station Directors). 2013. Guidelines for Multistate Research Activities. Developed by SAESD in cooperation with the Cooperative State Research, Education, and Extension Service, USDA (NIFA) and the Experiment Station Committee on Organization and Policy (ESCOP). Approved September 26, 2000, updated August 15, 2013. http://escop.ncsu.edu/docs/MRF Guidelines Revised 08 1 513.pdf
b. NADP Governance Handbook
c. https://nadp.slh.wisc.edu/
External links
Soil and crop science organizations
University of Wisconsin–Madison
1977 establishments in Wisconsin
Rain
Air pollution
Environmental chemistry | National Atmospheric Deposition Program | [
"Chemistry",
"Environmental_science"
] | 3,057 | [
"Environmental chemistry",
"nan"
] |
61,581,433 | https://en.wikipedia.org/wiki/MgCu2 | {{DISPLAYTITLE:MgCu2}}
MgCu2 is a binary intermetallic compound of magnesium (Mg) and copper (Cu) adopting a cubic crystal structure, more specifically the C15 Laves phase. The space group of MgCu2 is Fd-3m with lattice parameter a = 7.04 Å.
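As an illustration of what the reported lattice parameter implies, the short sketch below estimates the theoretical (X-ray) density of MgCu2. It assumes the standard C15 Laves cell content of eight formula units (8 Mg and 16 Cu atoms) per cubic cell; the cell content and atomic masses are general crystallographic values rather than figures taken from this article.

```python
# Theoretical density of MgCu2 from the cubic lattice parameter.
N_A = 6.02214076e23          # Avogadro constant, 1/mol
a_cm = 7.04e-8               # lattice parameter, 7.04 Angstrom expressed in cm
Z = 8                        # formula units per conventional C15 cell (8 Mg + 16 Cu atoms)
M = 24.305 + 2 * 63.546      # molar mass of MgCu2, g/mol

density = Z * M / (N_A * a_cm**3)   # g/cm^3
print(f"{density:.2f} g/cm^3")      # roughly 5.8 g/cm^3
```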
Preparation
MgCu2 can be prepared by hydrogenation of Mg2Cu or the reaction of magnesium hydride and metallic copper at elevated temperature and pressure:
2 Mg2Cu + 3 H2 → 3 MgH2 + MgCu2
MgH2 + 2 Cu → MgCu2 + H2
MgCu2 can also be prepared by reacting stoichiometric amounts of the metals at about 380 °C in the presence of excess copper.
Properties
MgCu2 can react with boron or its oxide to form magnesium borides. It can also react with magnesium hydride to produce orthorhombic Mg2Cu, liberating hydrogen.
References
See also
Laves phase
Magnesium compounds
Copper compounds
Intermetallics | MgCu2 | [
"Physics",
"Chemistry",
"Materials_science"
] | 214 | [
"Inorganic compounds",
"Metallurgy",
"Intermetallics",
"Condensed matter physics",
"Alloys"
] |
61,583,459 | https://en.wikipedia.org/wiki/UniverseMachine | The UniverseMachine (also known as the Universe Machine) is a project carrying out astrophysical supercomputer simulations of various models of possible universes, created by astronomer Peter Behroozi and his research team at the Steward Observatory and the University of Arizona. Numerous universes with different physical characteristics may be simulated in order to develop insights into the possible beginning and evolution of our universe. A major objective is to better understand the role of dark matter in the development of the universe. According to Behroozi, "On the computer, we can create many different universes and compare them to the actual one, and that lets us infer which rules lead to the one we see."
Besides lead investigator Behroozi, research team members include astronomer Charlie Conroy of Harvard University, physicist Andrew Hearin of the Argonne National Laboratory and physicist Risa Wechsler of Stanford University. Support funding for the project is provided by NASA, the National Science Foundation and the Munich Institute for Astro- and Particle Physics.
Description
Besides using computers and related resources at the NASA Ames Research Center and the Leibniz-Rechenzentrum in Garching, Germany, the research team used the High-Performance Computing cluster at the University of Arizona. Two thousand processors simultaneously processed the data over three weeks. In this way, the research team generated over 8 million universes containing, in total, nearly 100 trillion galaxies. The UniverseMachine program continuously produced millions of simulated universes, each containing 12 million galaxies, and each permitted to develop from 400 million years after the Big Bang to the present day.
According to team member Wechsler, "The really cool thing about this study is that we can use all the data we have about galaxy evolution — the numbers of galaxies, how many stars they have and how they form those stars — and put that together into a comprehensive picture of the last 13 billion years of the universe." Wechsler further commented, "For me, the most exciting thing is that we now have a model where we can start to ask all of these questions in a framework that works […] We have a model that is inexpensive enough computationally, that we can essentially calculate an entire universe in about a second. Then we can afford to do that millions of times and explore all of the parameter space."
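The strategy described in these quotes — generating many inexpensive model universes and keeping the parameter choices that best reproduce observations — can be illustrated with a toy sketch. The code below is not the UniverseMachine itself; it is a deliberately simplified, hypothetical forward-modelling loop in which a single invented "star-formation efficiency" parameter is varied and each mock universe is scored against mock "observed" data.

```python
import numpy as np

rng = np.random.default_rng(0)

def mock_universe(efficiency, halo_masses):
    """Toy forward model: stellar mass grows as a power of halo mass, scaled by efficiency."""
    return efficiency * halo_masses**0.8

# Mock "observed" galaxy stellar masses, generated with a hidden true efficiency of 0.03
# plus 10% scatter, standing in for the real observational data.
halo_masses = rng.lognormal(mean=27.0, sigma=1.0, size=10_000)
observed = mock_universe(0.03, halo_masses) * rng.normal(1.0, 0.1, size=halo_masses.size)

# Explore the parameter space: simulate many universes and score each against the observations.
best_eff, best_score = None, np.inf
for efficiency in np.linspace(0.001, 0.1, 200):
    model = mock_universe(efficiency, halo_masses)
    score = np.mean((np.log10(model) - np.log10(observed))**2)   # simple log-space misfit
    if score < best_score:
        best_eff, best_score = efficiency, score

print(f"best-fitting efficiency ~ {best_eff:.3f}")   # should recover roughly 0.03
```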
Results
One result of the study suggests that, contrary to what was initially thought, denser dark matter in the early universe does not appear to negatively impact star formation rates. According to the studies, galaxies of a given size were more likely to form stars for much longer, and at a high rate. The researchers expect to extend the project's objectives to include how often stars die in supernovae, how dark matter may affect the shape of galaxies, and eventually, by gaining better general cosmological insights, how life originated.
See also
References
External links
– NASA (14 July 2014)
Universe Model using Artificial Intelligence (IPMU; 28 August 2019)
Astrophysics
Cosmological simulation
Physical cosmology | UniverseMachine | [
"Physics",
"Astronomy"
] | 612 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Astrophysics",
"Computational physics",
"Cosmological simulation",
"Physical cosmology"
] |
42,548,723 | https://en.wikipedia.org/wiki/MCM-41 | MCM-41 (Mobil Composition of Matter No. 41) is a mesoporous material with a hierarchical structure from a family of silicate and alumosilicate solids that were first developed by researchers at Mobil Oil Corporation and that can be used as catalysts or catalyst supports.
Structure
MCM-41 consists of a regular arrangement of cylindrical mesopores that form a one-dimensional pore system. It is characterized by an independently adjustable pore diameter, a sharp pore distribution, a large surface area and a large pore volume. The pores are larger than those of zeolites and the pore distribution can easily be adjusted. The mesopores have a diameter of 2 nm to 6.5 nm.
Properties
Contrary to zeolites, the framework of MCM-41 has no Brønsted acid centers because there is no aluminium contained in the lattice. The acidity of alumina-doped MCM-41 is therefore comparable to that of the amorphous alumosilicates.
MCM-41 is not hydrothermally stable because of the slight wall thickness and the low degree of cross-linking of the silicate units.
Synthesis
To achieve a defined pore diameter, surfactants that form micelles in the synthesis solution are used. These micelles act as templates that help build up the mesoporous framework. For MCM-41, cetyltrimethylammonium bromide (CTAB) is mostly used.
The surfactant first forms rod-like micelles that subsequently align into hexagonal arrays. After silica species are added, they cover the rods. Later, calcination leads to condensation of the silanol groups so that the silicon atoms are bridged by oxygen atoms; the organic template is oxidized and removed.
Uses
MCM-41, like zeolites, is widely used in catalytic cracking. MCM-41-type materials have also been widely used as supports for heterogeneous catalysts and for separations.
References
Mesoporous material | MCM-41 | [
"Materials_science"
] | 423 | [
"Mesoporous material",
"Porous media"
] |
42,552,728 | https://en.wikipedia.org/wiki/RU-58841 |
RU-58841, also known as PSK-3841 or HMR-3841, is a nonsteroidal antiandrogen (NSAA) which was initially developed in the 1980s by Roussel Uclaf, the French pharmaceutical company from which it received its name. It was formerly under investigation by ProStrakan (previously ProSkelia and Strakan) for potential use as a topical treatment for androgen-dependent conditions including acne, pattern hair loss, and excessive hair growth. The compound is similar in structure to the NSAA RU-58642 but contains a different side-chain. These compounds are similar in chemical structure to nilutamide, which is related to flutamide, bicalutamide, and enzalutamide, all of which are likewise NSAAs. RU-58841 can be synthesized either by building the hydantoin moiety or by aryl coupling to 5,5-dimethylhydantoin.
RU-58841 produces cyanonilutamide (RU-56279) and RU-59416 as metabolites in animals. Cyanonilutamide has relatively low affinity for the androgen receptor but shows significant antiandrogenic activity in animals. RU-59416 has very low affinity for the androgen receptor.
See also
Cyanonilutamide
RU-56187
RU-57073
RU-58642
RU-59063
References
Further reading
Abandoned drugs
Primary alcohols
Anti-acne preparations
Hair loss medications
Hair removal
Hydantoins
Nitriles
Nonsteroidal antiandrogens
Trifluoromethyl compounds | RU-58841 | [
"Chemistry"
] | 345 | [
"Nitriles",
"Drug safety",
"Functional groups",
"Abandoned drugs"
] |
42,553,031 | https://en.wikipedia.org/wiki/Beihai%20Tunnel%20%28Beigan%29 | The Beihai Tunnel is a tunnel in Banli village, Beigan Township, Lienchiang County, Taiwan.
History
The tunnel was opened in 1968 for amphibious landings, 10 years after the end of the Second Taiwan Strait Crisis between the Republic of China Armed Forces and the People's Liberation Army. Construction lasted around three years and claimed the lives of over 100 soldiers. After the Matsu National Scenic Area Administration was established, it took over management of the tunnel, renovating the interior and neighboring tourist spots and building an access road and protective railings.
Features
The tunnel is 550 meters long and 9–15 meters wide. Visitors were once able to ride canoes through the tunnel, but the site has been closed to visitors for several years because falling rocks have made it dangerous.
See also
List of tourist attractions in Taiwan
Zhaishan Tunnel
Beihai Tunnel (Nangan)
Beihai Tunnel (Dongyin)
References
1968 establishments in Taiwan
Beigan Township
Military history of Taiwan
Tunnels completed in 1968
Tunnels in Lienchiang County
Tunnel warfare | Beihai Tunnel (Beigan) | [
"Engineering"
] | 216 | [
"Military engineering",
"Tunnel warfare"
] |