id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
51,003,552 | https://en.wikipedia.org/wiki/Elena%20Ram%C3%ADrez%20Parra | Elena Ramírez Parra (born 1972) is a Spanish botanist and researcher who studies the negative effects of environmental stress (drought, soil salinity, excess radiation, presence of heavy metals, and high and low temperatures) on plant growth. Applications of her research include improving harvest yield. In 2010, Ramírez won a L'Oréal-UNESCO Award for Women in Science.
Life
Ramírez studied biological sciences at the Autonomous University of Madrid and earned a doctorate in molecular biology from the same university in 2000. Ramírez is a published scholar of botany.
She became a senior researcher at the Centre for Plant Biotechnology and Genomics (Centro de Biotecnología y Genómica de Plantas, CBGP) of the Instituto Nacional de Investigación y Tecnología Agraria y Alimentaria (INIA) and the Technical University of Madrid (UPM), studying the negative effects that environmental stress produces in plants.
Her research characterises the mechanisms that plants use to respond to external stresses such as drought, soil salinity, heavy metals and extreme temperatures. She aims to create a new generation of plant varieties that will improve harvests.
In November 2010, she was awarded, together with four other scientists (Isabel Lastres Becker, Ana Briones Alonso, Mercedes Vila and María Antonia Herrero), a L'Oréal-UNESCO Award for Women in Science. The endowment of €15,000 rewards the work of women scientists younger than 40.
See also
List of Spanish inventors and discoverers
List of female scientists in the 21st century
References
1972 births
Autonomous University of Madrid alumni
Place of birth missing (living people)
21st-century Spanish women scientists
Living people
Molecular biologists
Soil scientists
21st-century Spanish botanists
Hybridisers
Date of birth missing (living people)
Academic staff of the Technical University of Madrid
Women molecular biologists
Environmental scientists
Spanish women botanists
Women soil scientists | Elena Ramírez Parra | [
"Chemistry",
"Environmental_science"
] | 385 | [
"Molecular biologists",
"Biochemists",
"Environmental scientists",
"Molecular biology"
] |
51,004,588 | https://en.wikipedia.org/wiki/DeepCwind%20Consortium | The DeepCwind Consortium was a national consortium of universities, nonprofits, utilities, and industry leaders. The mission of the consortium was to establish the State of Maine as a national leader in floating offshore wind technology. Much of the consortium's work and resulting research was funded by the U.S. Department of Energy, the National Science Foundation, and others.
The efforts of the DeepCwind Consortium culminated in the University of Maine's patent-pending VolturnUS, a floating concrete hull technology that can support wind turbines in water depths of 45 meters or more and has the potential to significantly reduce the cost of offshore wind.
Overview
The DeepCwind Consortium was initially funded in 2009 as part of the American Recovery and Reinvestment Act (ARRA) through the U.S. Department of Energy. The University of Maine received $7.1 million to found the consortium and design and deploy floating offshore turbine prototypes. As part of this funding, the research plan included: "optimization of designs for floating platforms by evaluating options for using more durable, lighter, hybrid composite materials, manufacturability, and deployment logistics."
Floating deepwater wind farms placed ten or more nautical miles (nmi) offshore can play a critical role in reaching the Department of Energy's 20% windpower goal by 2030. Deepwater offshore wind is the dominant U.S. ocean energy resource, representing a potential of nearly 3,100 TW-h/year. It also:
Overcomes viewshed issues that have delayed or prevented some nearshore projects.
Places energy generation closer to major U.S. population centers.
Allows access to a more powerful Class 6 and 7 wind resource.
Reduces wind energy costs over time by reducing transmission costs from remote land sites and by simplifying deployment and maintenance logistics.
With these qualities in mind, Maine planned to construct a 5 GW, $20 billion network of floating offshore wind farms to contribute to the northeast U.S. renewable energy needs. Maine has the deepest waters near its shores, approximately 200 ft deep at 3 nmi, and 89% of Maine's 156 GW offshore wind resource is in deep waters. The state also offers extensive maritime industry infrastructure and proximity to one of the largest energy markets in the country.
Research Outcomes
The DeepCwind Consortium published the Maine Offshore Wind Report in February 2011. The report "examines economics and policy, electrical grid integration, wind and wave, bathymetric, soil, and environmental research. It also includes summaries of assembly and construction sites, critical issues for project development and permitting, and an analysis of the implications of the Jones Act."
In June 2013, the consortium deployed the 20 kW VolturnUS 1:8, a 65-foot-tall floating turbine prototype built at 1:8 scale of a 6-megawatt (MW), 450-foot rotor diameter design. VolturnUS 1:8 was the first grid-connected offshore wind turbine in the Americas.
In June 2016, the UMaine-led New England Aqua Ventus I project won top-tier status from the US Department of Energy (DOE) Advanced Technology Demonstration Program for Offshore Wind. New England Aqua Ventus I is a floating offshore wind pilot project consisting of two 6 MW turbines (12 MW total) located 14 miles off Maine's coast, developed by Maine Aqua Ventus, GP, LLC. The objective of the pilot is to demonstrate the technology at full scale, allowing floating farms to be built out of sight across the US and the world in the 2020s, bringing lower-cost, clean renewable energy to coastal population centers.
Partnering Organizations and Original Research Initiatives
The University of Maine-led consortium includes universities, nonprofits, and utilities; a wide range of industry leaders in offshore design, offshore construction, and marine structures manufacturing; firms with expertise in wind project siting, environmental analysis, environmental law, composites materials to assist in corrosion-resistant material design and selection, and energy investment; and industry organizations to assist with education and tech transfer activities.
Task 1: Micrositing, Geophysical Investigations, and Geotechnical Engineering
The primary objective of micrositing, geophysical investigations, and geotechnical engineering was the characterization of the seafloor environment for turbine anchoring at the University of Maine Deepwater Offshore Wind Test Site in the Gulf of Maine. Activities coordinated geologic and geotechnical engineering information with the metocean forces assessed in the Offshore Turbine Testing, Monitoring, and Reliability task and assisted in the design of efficient moorings and anchors for the floating offshore wind turbines under the Floating Turbines Design and Lab Testing task. An additional objective of this task was to provide site location documentation and address safety and navigation at the site once the turbines and moorings were installed.
Partners: University of Maine Department of Civil and Environmental Engineering, James W. Sewall Company, Maine Maritime Academy, University of Western Australia Centre for Offshore Foundation Research
Task 2: Study of Environmental/Ecological Impacts
Maine Public Law 270, which allowed the establishment of the University of Maine Deepwater Offshore Wind Test Site, required that the following state and federal agencies be consulted concerning environmental monitoring and planning of the test site:
Department of Inland Fisheries and Wildlife
Department of Marine Resources
Department of Conservation
Coast Guard
Army Corps of Engineers
NOAA Fisheries
These agencies required plans for siting, navigation, project removal, remedial action, and environmental/ecological monitoring. They also required reports and updates on site activities.
Micrositing and geophysical investigations activities
Bathymetric surveys
Survey to check for presence of historically significant resources
Site plan creation
Navigation and safety plan creation
Environmental/ecological monitoring activities
There were to be wildlife-specific studies for:
Benthic Invertebrates
Fish
Marine mammals
Birds and bats
No live animals were to be collected for the environmental monitoring efforts.
As part of the environmental/ecological monitoring plan that was part of the test site permit application, a review of the potential threats to marine life was considered and mitigation measures were designed. Potential areas of concern addressed in this report included the following:
Entanglement of marine mammals in mooring lines and structures.
Disturbance of migration and feeding areas.
Bird and bat strikes.
Attraction of fish or marine mammals to the platforms.
Interruption of wave energy direction and focus.
Human impacts.
Visual impacts.
Potential disturbance of historical or culturally significant areas.
All micrositing, environmental/ecological monitoring, and permitting activities were conducted from the start of the project in 2010 until the end of the project in 2012.
Partners: University of Maine School of Marine Sciences, University of Maine School of Biology and Ecology, Island Institute, Gulf of Maine Research Institute, New Jersey Audubon Society, Pacific Northwest National Laboratory
Task 3: Permitting and Policy
Under the recently enacted Maine Public Law 2009, Chapter 270 (LD 1465), the University of Maine Deepwater Offshore Wind Test Site is located near Monhegan Island, an area selected by the state. This site was analyzed in detail through the state's site selection process.
With Maine's designation of the test site, the Permitting and Policy team secured specific permits for the proposed project from all applicable local, state, and federal permitting authorities. In conformance with the application requirements, to obtain a General Permit under LD 1465, the Permitting and Policy team, in conjunction with the Micrositing and Ecological Monitoring teams, submitted a report to the required state and federal agencies describing:
Existing commercial fishing and other uses in the project area, as well as the marine resources.
A detailed monitoring plan of benthic invertebrates, fish, marine mammal, and avian wildlife.
Monitoring plan of ambient noise levels possibly associated with project construction and, if needed, any subsequent operations and avoidance and mitigation measures.
Navigation public safety plans.
Project removal plan.
The team worked to develop these plans in consultation with federal, state, and local agencies, as well as stakeholders.
Partners: James W. Sewall Company, Kleinschmidt, HDR/DTA
Task 4: Floating Turbine Design, Material Selection, and Lab Testing
The primary objectives of the Floating Turbine Design task were to:
Partially validate the coupled aeroelastic/hydrodynamic models developed by NREL.
Optimize platform designs by integrating more durable, lighter, hybrid composite materials.
Develop a complete design of one or more scale floating turbine platforms, capable of supporting a wind turbine in the 10 kW to 250 kW range for deployment at the University of Maine Deepwater Offshore Wind Test Site.
Partners: Advanced Structures and Composites Center, Maine Maritime Academy, Technip USA, National Renewable Energy Laboratory, Sandia National Labs, Ashland, Inc., Kenway Corporation, Harbor Technologies, PPG Industries, Owens Corning, Zoltek, Polystrand, Inc.
Task 5: Offshore Turbine Testing, Monitoring, and Reliability
The Physical Oceanography Group (PhOG) at the University of Maine deployed and operated an oceanographic data buoy with real-time telemetry capabilities in the University of Maine Deepwater Offshore Wind Test Site offshore Monhegan Island. The ocean data buoy was deployed prior to the tank testing and remained in operation throughout the project period in order to monitor oceanographic, meteorological, and general environmental conditions. Monitoring emphasis was on wind speed and direction, visibility, directional waves, and water-column currents.
Partners: University of Maine Physical Oceanography Group, University of Maine School of Marine Sciences
Task 6: Education and Outreach
The University of Maine, Maine Maritime Academy, and Northern Maine Community College developed several degree programs to create a trained workforce for the State of Maine. To ensure that the community at large understood the research and development, the DeepCwind Consortium found numerous opportunities to share and discuss goals, activity plans, and the results of research. The consortium held public meetings to discuss the site selection for the University of Maine Deepwater Offshore Wind Test Site, presented at conferences within the state and across the country, and even taught an interactive wind-wave tank testing activity to over 500 K-12 students across Maine. These and other activities continued through the duration of the project.
Partners: Advanced Structures and Composites Center, University of Maine College of Engineering,
University of Maine Department of Industrial Cooperation, Maine Maritime Academy, Northern Maine Community College, American Composites Manufacturers Association, Maine Composites Alliance, Maine Wind Industry Initiative
Task 8: Fabrication and Deployment
UMaine and its subcontractors led the fabrication and deployment of the approximately 1/8 scale floating wind turbine platform. The deployment study sought to identify key deployment and installation factors. The performance data gathered from the 1/8 scale platform was used to further validate platform numerical models developed by NREL and others.
Partners: Advanced Structures and Composites Center, Cianbro Corporation, General Dynamics Bath Iron Works, Maine Maritime Academy, Emera Maine, Central Maine Power Company, Technip USA, Reed and Reed, SGC
See also
Wind power in Maine
UMaine Deepwater Offshore Wind Test Site
VolturnUS
References
External links
Advanced Structures and Composites Center - DeepCwind Consortium
Wind power in Maine
Offshore engineering
Renewable energy policy in the United States
2009 establishments in Maine | DeepCwind Consortium | [
"Engineering"
] | 2,237 | [
"Construction",
"Offshore engineering"
] |
51,008,352 | https://en.wikipedia.org/wiki/Hydrodynamic%20scour | Hydrodynamic scour is the removal of sediment such as silt, sand and gravel from around the base of obstructions to the flow in the sea, rivers and canals. Scour, caused by fast flowing water, can carve out scour holes, compromising the integrity of a structure. It is an interaction between the hydrodynamics and the geotechnical properties of the substrate. It is a notable cause of bridge failure and a problem with most marine structures supported by the seabed in areas of significant tidal and ocean current. It can also affect biological ecosystems and heritage assets.
Mechanism
Any obstruction within flowing water will produce changes in velocity within the water column. The flow changes that occur in the vicinity of the substrate may cause differential movement in the bed materials near the obstruction. The magnitude of these changes varies with stream velocity, feature shape and substrate character. Generally the stream bed is deepened at the upstream end of the obstruction, and the material removed from the substrate is usually deposited in the sheltered areas behind or adjacent to the obstruction where the local velocity is sufficiently reduced. In conditions of high stream velocities, mobile bed materials, and poorly streamlined structures, scour holes can be quite deep.
See also
Bridge scour
Tidal scour
Kolk (vortex)
References
Erosion
Fluvial geomorphology
Hydraulic engineering | Hydrodynamic scour | [
"Physics",
"Engineering",
"Environmental_science"
] | 272 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
39,563,860 | https://en.wikipedia.org/wiki/Transfer%20entropy | Transfer entropy is a non-parametric statistic measuring the amount of directed (time-asymmetric) transfer of information between two random processes. Transfer entropy from a process X to another process Y is the amount of uncertainty reduced in future values of Y by knowing the past values of X given past values of Y. More specifically, if X_t and Y_t for t ∈ N denote two random processes and the amount of information is measured using Shannon's entropy, the transfer entropy can be written as:
T_{X→Y} = H(Y_t | Y_{t-1:t-L}) - H(Y_t | Y_{t-1:t-L}, X_{t-1:t-L}),
where H(X) is Shannon's entropy of X. The above definition of transfer entropy has been extended by other types of entropy measures such as Rényi entropy.
Transfer entropy is conditional mutual information, with the history of the influenced variable in the condition:
Transfer entropy reduces to Granger causality for vector auto-regressive processes. Hence, it is advantageous when the model assumption of Granger causality does not hold, for example in the analysis of non-linear signals. However, it usually requires more samples for accurate estimation.
The probabilities in the entropy formula can be estimated using different approaches (binning, nearest neighbors) or, in order to reduce complexity, using a non-uniform embedding.
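As an illustration of the simplest of these approaches, the sketch below estimates transfer entropy from two time series by discretising them into equal-width bins and counting joint occurrences; the function name, the bin count and the history length k are illustrative assumptions rather than part of any standard toolbox.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, k=1, bins=8):
    """Histogram (binning) estimate of transfer entropy T_{X->Y} in bits.

    x, y : 1-D arrays of equal length (source and target time series)
    k    : history length used for the past of X and Y
    bins : number of equal-width bins used to discretise the data
    """
    x = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    y = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])

    def H(*cols):
        # Joint Shannon entropy of the discrete columns, in bits.
        counts = np.array(list(Counter(zip(*cols)).values()), dtype=float)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    y_t = y[k:]                                       # "future" values of Y
    y_past = [y[i:len(y) - k + i] for i in range(k)]  # Y_{t-k} ... Y_{t-1}
    x_past = [x[i:len(x) - k + i] for i in range(k)]  # X_{t-k} ... X_{t-1}

    # T = H(Y_t | Y_past) - H(Y_t | Y_past, X_past), expanded into joint entropies
    return (H(y_t, *y_past) - H(*y_past)
            - H(y_t, *y_past, *x_past) + H(*y_past, *x_past))
```

For example, if y is a noisy, delayed copy of x, transfer_entropy(x, y, k=1) should come out clearly larger than transfer_entropy(y, x, k=1), reflecting the directed nature of the measure.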
While it was originally defined for bivariate analysis, transfer entropy has been extended to multivariate forms, either conditioning on other potential source variables or considering transfer from a collection of sources, although these forms require more samples again.
Transfer entropy has been used for estimation of functional connectivity of neurons, social influence in social networks and statistical causality between armed conflict events.
Transfer entropy is a finite version of the directed information, which was defined in 1990 by James Massey as I(X^n → Y^n) = Σ_{i=1}^{n} I(X^i; Y_i | Y^{i-1}), where X^i denotes the vector (X_1, ..., X_i) and Y^{i-1} denotes (Y_1, ..., Y_{i-1}). The directed information plays an important role in characterizing the fundamental limits (channel capacity) of communication channels with or without feedback and in gambling with causal side information.
See also
Directed information
Mutual information
Conditional mutual information
Causality
Causality (physics)
Structural equation modeling
Rubin causal model
References
External links
, a toolbox, developed in C++ and MATLAB, for computation of transfer entropy between spike trains.
, a toolbox, developed in Java and usable in MATLAB, GNU Octave and Python, for computation of transfer entropy and related information-theoretic measures in both discrete and continuous-valued data.
, a toolbox, developed in MATLAB, for computation of transfer entropy with different estimators.
Causality
Nonlinear time series analysis
Nonparametric statistics
Entropy and information | Transfer entropy | [
"Physics",
"Mathematics"
] | 499 | [
"Dynamical systems",
"Entropy",
"Physical quantities",
"Entropy and information"
] |
39,570,044 | https://en.wikipedia.org/wiki/Socolar%E2%80%93Taylor%20tile | The Socolar–Taylor tile is a single non-connected tile which is aperiodic on the Euclidean plane, meaning that it admits only non-periodic tilings of the plane (due to the Sierpinski-triangle-like tiling that occurs), with rotations and reflections of the tile allowed. It is the first known example of a single aperiodic tile, or "einstein". The basic version of the tile is a simple hexagon, with printed designs to enforce a local matching rule regarding how the tiles may be placed. One of Socolar and Taylor's papers shows a realization of the tile as a connected set. It is currently unknown whether this rule may be geometrically implemented in two dimensions while keeping the tile a simply connected set.
This is, however, confirmed to be possible in three dimensions, and, in their original paper, Socolar and Taylor suggest a three-dimensional analogue to the monotile. Taylor and Socolar remark that the 3D monotile tiles three-dimensional space aperiodically. However, the tile does allow tilings with a period that shifts one (non-periodic) two-dimensional layer to the next, and so the tile is only "weakly aperiodic".
Physical copies of the three-dimensional tile could not be fitted together without allowing reflections, which would require access to four-dimensional space.
Gallery
References
External links
Previewable digital models of the three-dimensional tile, suitable for 3D printing, at Thingiverse
Original diagrams and further information on Joan Taylor's personal website
Aperiodic tilings | Socolar–Taylor tile | [
"Physics"
] | 318 | [
"Tessellation",
"Aperiodic tilings",
"Symmetry"
] |
39,574,992 | https://en.wikipedia.org/wiki/Oxygen%20compatibility | Oxygen compatibility is the issue of compatibility of materials for service in high concentrations of oxygen. It is a critical issue in space, aircraft, medical, underwater diving and industrial applications.
Aspects include effects of increased oxygen concentration on the ignition and burning of materials and components exposed to these concentrations in service.
Understanding of fire hazards is necessary when designing, operating, and maintaining oxygen systems so that fires can be prevented. Ignition risks can be minimized by controlling heat sources and using materials that will not ignite or will not support burning in the applicable environment. Some materials are more susceptible to ignition in oxygen-rich environments, and compatibility should be assessed before a component is introduced into an oxygen system. Both partial pressure and concentration of oxygen affect the fire hazard.
The issues of cleaning and design are closely related to the compatibility of materials for safety and durability in oxygen service.
Prevention of fire
Fires occur when oxygen, fuel, and heat energy combine in a self-sustaining chemical reaction. In an oxygen system the presence of oxygen is implied, and in a sufficiently high partial pressure of oxygen, most materials can be considered fuel. Potential ignition sources are present in almost all oxygen systems, but fire hazards can be mitigated by controlling the risk factors associated with the oxygen, fuel, or heat, which can limit the tendency for a chemical reaction to occur.
Materials are easier to ignite and burn more readily as oxygen pressure or concentration increase, so operating oxygen systems at the lowest practicable pressure and concentration may be enough to avoid ignition and burning.
Use of materials which are inherently more difficult to ignite or are resistant to sustained burning, or which release less energy when they burn, can, in some cases, eliminate the possibility of fire or minimize the damage caused by a fire.
Although heat sources may be inherent in the operation of an oxygen system, initiation of the chemical reaction between the system materials and oxygen can be limited by controlling the ability of those heat sources to cause ignition. Design features which can limit or dissipate the heat generated to keep temperatures below the ignition temperatures of the system materials will prevent ignition.
An oxygen system should also be protected from external heat sources.
Assessment of oxygen compatibility
The process of assessment of oxygen compatibility would generally include the following stages:
Identification of worst-case operating conditions.
Evaluation of the flammability of system materials. Geometry should be considered, as most materials are more flammable when they have small cross-sections or are finely divided.
Assessment of the presence and probability of ignition mechanisms. These may include:
Chemical reaction: An exothermic reaction between chemicals that could release sufficient heat to ignite the surrounding materials.
Electrical arc: Electric current arcing with enough energy to ignite the material receiving the arc.
Engine exhaust
Explosive charges
Flow friction: Heat generated by high velocity oxygen flow over a non-metal
Note: Flow friction is a hypothesis. Flow friction has not been experimentally verified and should be considered only in conjunction with validated ignition mechanisms.
Friction between relatively moving parts
Fragments from bursting vessels
Fresh metal exposure: Heat of oxidation released when unoxidized metal is exposed to an oxidizing atmosphere. Usually associated with fracture, impact or friction.
Galling and friction: Heat generated by rubbing components together.
Lightning and other electric arc discharge
Mechanical impact: Heat generated by impact on a material with sufficient energy to ignite it.
Open flames
Particle impact: Heat generated when small particles strike a material with sufficient velocity to ignite the particle or the material.
Personnel smoking
Rapid pressurization: Heat generated by single or multiple adiabatic compression events.
Resonance: Acoustic vibrations in resonant cavities that cause rapid temperature rise.
Static discharge: Discharge of accumulated static electrical charge with enough energy to ignite the material receiving the charge.
Thermal runaway: A process which produces heat faster than it can be dissipated.
Welding
Estimation of the ignition risk and the consequences of ignition, including the further development or dissipation of the fire.
Analysis of the consequences of a fire
Compatibility analysis would also consider the history of use of the component or material in similar conditions, or of a similar component.
Oxygen service
Oxygen service implies use in contact with high partial pressures of oxygen. Generally this is taken to mean a higher partial pressure than possible from compressed air, but also can occur at lower pressures when the concentration is high.
Oxygen cleaning
Oxygen cleaning is preparation for oxygen service by ensuring that the surfaces that may come into contact with high partial pressures of oxygen while in use are free of contaminants that increase the risk of ignition.
Oxygen cleaning is a necessary, but not always a sufficient condition for high partial pressure or high concentration oxygen service. The materials used must also be oxygen compatible at all expected service conditions. Aluminium and titanium components are specifically not suitable for oxygen service.
In the case of diving equipment, oxygen cleaning generally involves the stripping down of the equipment into individual components which are then thoroughly cleaned of hydrocarbon and other combustible contaminants using non-flammable, non-toxic cleaners. Once dry, the equipment is reassembled under clean conditions. Lubricants are replaced by specifically oxygen-compatible substitutes during reassembly.
The standard and requirements for oxygen cleaning of diving apparatus varies depending on the application and applicable legislation and codes of practice.
For scuba equipment, the industry standard is that breathing apparatus which will be exposed to concentrations in excess of 40% oxygen by volume should be oxygen cleaned before being put into such service. Surface supplied equipment may be subject to more stringent requirements, as the diver may not be able to remove the equipment in an accident. Oxygen cleaning may be required for concentrations as low as 23%. Other common specifications for oxygen cleaning include ASTM G93 and CGA G-4.1.
Cleaning agents used have included heavy-duty industrial solvents and detergents such as liquid freon, trichloroethylene and anhydrous trisodium phosphate, followed by rinsing in deionised water. These materials are now generally deprecated as being environmentally unsound and an unnecessary health hazard. Some strong all-purpose household detergents have been found to do the job adequately. They are diluted with water before use, and used hot for maximum efficacy. Ultrasonic agitation, shaking, pressure spraying and tumbling using glass or stainless steel beads or mild ceramic abrasives are effectively used to speed up the process where appropriate. Thorough rinsing and drying is necessary to ensure that the equipment is not contaminated by the cleaning agent. Rinsing should continue until the rinse water is clear and does not form a persistent foam when shaken. Drying using heated gas – usually hot air – is common and speeds up the process. Use of a low oxygen fraction drying gas can reduce flash-rusting of the interior of steel cylinders.
After cleaning and drying, and before reassembly, the cleaned surfaces are inspected and where appropriate, tested for the presence of contaminants. Inspection under ultraviolet illumination can show the presence of fluorescent contaminants, but is not guaranteed to show all contaminants.
Oxygen service design
Design for oxygen service includes several aspects:
Choice of oxygen compatible materials for exposed components.
Minimising exposed area of materials which are necessary for functional reasons but are less compatible, and avoiding high flow velocities in contact with these materials.
Providing effective heat transfer to avoid raised temperatures of components.
Minimising the possibility of adiabatic heating by sudden increases of pressure - for example by using valves which cannot be opened suddenly to full bore, or opening against a pressure regulator.
Providing smooth surfaces in contact with flow where practicable, and minimising sudden changes in flow direction.
Use of flame arrestors/flashback arrestors/oxygen firebreaks in flexible hoses
Oxygen compatible materials
As a general rule, oxygen compatibility is associated with a high ignition temperature, and a low rate of reaction once ignited.
Organic materials generally have lower ignition temperatures than metals considered suitable for oxygen service. Therefore the use of organic materials in contact with oxygen should be avoided or minimised, particularly when the material is directly exposed to gas flow. When an organic material must be used for parts such as diaphragms, seals, packing or valve seats, the material with the highest ignition temperature for the required mechanical properties is usually chosen. Fluoroelastomers are preferred where large areas are in direct contact with oxygen flow. Other materials may be acceptable for static seals where the flow does not come into direct contact with the component.
Only tested and certified oxygen compatible lubricants and sealants should be used, and in as small quantities as is reasonably practicable for effective function. Projection of excess sealant or contamination by lubricant into flow regions should be avoided.
Commonly used engineering metals with a high resistance to ignition in oxygen include copper, copper alloys, and nickel-copper alloys, and these metals also do not normally propagate combustion, making them generally suitable for oxygen service. They are also available in free-cutting, castable or highly ductile alloys, and are reasonably strong, so are useful for a wide range of components for oxygen service.
Aluminium alloys have a relatively low ignition temperature, and release a large amount of heat during combustion and are not considered suitable for oxygen service where they will be directly exposed to flow, but are acceptable for storage cylinders where the flow rate and temperatures are low.
Applications
Research
Hazards analyses are performed on materials, components, and systems; and failure analyses determine the cause of fires. Results are used in design and operation of safe oxygen systems.
References
Hazard analysis
Industrial gases | Oxygen compatibility | [
"Chemistry",
"Engineering"
] | 1,943 | [
"Chemical process engineering",
"Industrial gases",
"Safety engineering",
"Hazard analysis"
] |
39,575,312 | https://en.wikipedia.org/wiki/Affine%20gauge%20theory | Affine gauge theory is classical gauge theory where gauge fields are affine connections on the tangent bundle over a smooth manifold . For instance, these are gauge theory of dislocations in continuous media when , the generalization of metric-affine gravitation theory when is a world manifold and, in particular, gauge theory of the fifth force.
Affine tangent bundle
Being a vector bundle, the tangent bundle of an -dimensional manifold admits a natural structure of an affine bundle , called the affine tangent bundle, possessing bundle atlases with affine transition functions. It is associated to a principal bundle of affine frames in tangent space over , whose structure group is a general affine group .
The tangent bundle is associated to a principal linear frame bundle , whose structure group is a general linear group . This is a subgroup of so that the latter is a semidirect product of and a group of translations.
There is the canonical imbedding of to onto a reduced principal subbundle which corresponds to the canonical structure of a vector bundle as the affine one.
Given linear bundle coordinates
on the tangent bundle , the affine tangent bundle can be provided with affine bundle coordinates
and, in particular, with the linear coordinates (1).
Affine gauge fields
The affine tangent bundle admits an affine connection which is associated to a principal connection on an affine frame bundle . In affine gauge theory, it is treated as an affine gauge field.
Given the linear bundle coordinates (1) on , an affine connection is represented by a connection tangent-valued form
This affine connection defines a unique linear connection
on , which is associated to a principal connection on .
Conversely, every linear connection (4) on is extended to the affine one on which is given by the same expression (4) as with respect to the bundle coordinates (1) on , but it takes a form
relative to the affine coordinates (2).
Then any affine connection (3) on is represented by a sum
of the extended linear connection and a basic soldering form
on , where due to the canonical isomorphism of the vertical tangent bundle of .
Relative to the linear coordinates (1), the sum (5) is brought into a sum
of a linear connection and the soldering form (6). In this case, the soldering form (6) often is treated as a translation gauge field, though it is not a connection.
Let us note that a true translation gauge field (i.e., an affine connection which yields a flat linear connection on ) is well defined only on a parallelizable manifold .
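As a hedged illustration of the decomposition discussed above, a coordinate expression often used in the affine gauge theory literature (for instance by Sardanashvily) is sketched below in LaTeX; the index conventions and symbols are assumptions, since they are not fixed by the text here.

```latex
% Sketch (assumed notation): an affine connection A on the tangent bundle TX,
% written in linear bundle coordinates (x^\lambda, \dot x^\alpha):
A = dx^{\lambda} \otimes \left( \partial_{\lambda}
    + \left[ \Gamma_{\lambda}{}^{\alpha}{}_{\beta}(x)\,\dot x^{\beta}
    + \sigma_{\lambda}{}^{\alpha}(x) \right] \dot\partial_{\alpha} \right)
% i.e. A = Gamma + sigma: a linear connection Gamma plus a soldering form sigma,
% the soldering part being what is often (loosely) called a translation gauge field.
```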
Gauge theory of dislocations
In field theory, one meets a problem of physical interpretation of translation gauge fields because there are no fields subject to gauge translations . At the same time, one observes such a field in gauge theory of dislocations in continuous media because, in the presence of dislocations, displacement vectors , , of small deformations are determined only with accuracy to gauge translations .
In this case, let , and let an affine connection take a form
with respect to the affine bundle coordinates (2). This is a translation gauge field whose coefficients describe plastic distortion, covariant derivatives coincide with elastic distortion, and a strength is a dislocation density.
Equations of gauge theory of dislocations are derived from a gauge invariant Lagrangian density
where and are the Lamé parameters of isotropic media. These equations however are not independent since a displacement field can be removed by gauge translations and, thereby, it fails to be a dynamic variable.
Gauge theory of the fifth force
In gauge gravitation theory on a world manifold , one can consider an affine, but not linear connection on the tangent bundle of . Given bundle coordinates (1) on , it takes the form (3) where the linear connection (4) and the basic soldering form (6) are considered as independent variables.
As was mentioned above, the soldering form (6) often is treated as a translation gauge field, though it is not a connection. On another side, one mistakenly identifies with a tetrad field. However, these are different mathematical object because a soldering form is a section of the tensor bundle , whereas a tetrad field is a local section of a Lorentz reduced subbundle of a frame bundle .
In the spirit of the above-mentioned gauge theory of dislocations, it has been suggested that a soldering field can describe sui generis deformations of a world manifold which are given by a bundle morphism
where is a tautological one-form.
Then one considers metric-affine gravitation theory on a deformed world manifold as that with a deformed pseudo-Riemannian metric when a Lagrangian of a soldering field takes a form
,
where is the Levi-Civita symbol, and
is the torsion of a linear connection with respect to a soldering form .
In particular, let us consider this gauge model in the case of small gravitational and soldering fields whose matter source is a point mass. Then one comes to a modified Newtonian potential of the fifth force type.
See also
Connection (affine bundle)
Dislocations
Fifth force
Gauge gravitation theory
Metric-affine gravitation theory
Classical unified field theories
References
A. Kadic, D. Edelen, A Gauge Theory of Dislocations and Disclinations, Lecture Notes in Physics 174 (Springer, New York, 1983),
G. Sardanashvily, O. Zakharov, Gauge Gravitation Theory (World Scientific, Singapore, 1992),
C. Malyshev, The dislocation stress functions from the double curl T(3)-gauge equations: Linearity and look beyond, Annals of Physics 286 (2000) 249.
External links
G. Sardanashvily, Gravity as a Higgs field. III. Nongravitational deviations of gravitational field, .
Gauge theories
Theories of gravity | Affine gauge theory | [
"Physics"
] | 1,230 | [
"Theoretical physics",
"Theories of gravity"
] |
57,210,042 | https://en.wikipedia.org/wiki/Working%20fluid%20selection | Heat engines, refrigeration cycles and heat pumps usually involve a fluid to and from which heat is transferred while undergoing a thermodynamic cycle. This fluid is called the working fluid. Refrigeration and heat pump technologies often refer to working fluids as refrigerants. Most thermodynamic cycles make use of the latent heat (advantages of phase change) of the working fluid. In other cycles, the working fluid remains in the gaseous phase while undergoing all the processes of the cycle. In heat engines, the working fluid generally undergoes a combustion process as well, for example in internal combustion engines or gas turbines. There are also heat pump and refrigeration technologies in which the working fluid does not change phase, such as the reverse Brayton or Stirling cycle.
This article summarises the main criteria of selecting working fluids for a thermodynamic cycle, such as heat engines including low grade heat recovery using Organic Rankine Cycle (ORC) for geothermal energy, waste heat, thermal solar energy or biomass and heat pumps and refrigeration cycles. The article addresses how working fluids affect technological applications, where the working fluid undergoes a phase transition and does not remain in its original (mainly gaseous) phase during all the processes of the thermodynamic cycle.
Finding the optimal working fluid for a given purpose – which is essential to achieve higher energy efficiency in the energy conversion systems – has great impact on the technology, namely it does not just influence operational variables of the cycle but also alters the layout and modifies the design of the equipment. Selection criteria of working fluids generally include thermodynamic and physical properties besides economical and environmental factors, but most often all of these criteria are used together.
Selection criteria of working fluids
The choice of working fluids is known to have a significant impact on the thermodynamic as well as economic performance of the cycle. A suitable fluid must exhibit favorable physical, chemical, environmental, safety and economic properties such as low specific volume (high density), viscosity, toxicity, flammability, ozone depletion potential (ODP), global warming potential (GWP) and cost, as well as favorable process characteristics such as high thermal and exergetic efficiency. These requirements apply both to pure (single-component) and mixed (multicomponent) working fluids. Existing research is largely focused on the selection of pure working fluids, with a vast number of published reports currently available. An important restriction of pure working fluids is their constant temperature profile during phase change. Working fluid mixtures are more appealing than pure fluids because their evaporation temperature profile is variable and therefore follows the profile of the heat source better, as opposed to the flat (constant) evaporation profile of pure fluids. This enables an approximately stable temperature difference during evaporation in the heat exchanger, known as the temperature glide, which significantly reduces exergetic losses. Despite their usefulness, recent publications addressing the selection of mixed fluids are considerably fewer.
Many authors, for example O. Badr et al., have suggested the following thermodynamic and physical criteria which a working fluid should meet for heat engines such as Rankine cycles. There are some differences between the criteria for working fluids used in heat engines and those used in refrigeration cycles or heat pumps, which are listed below accordingly:
Common criteria for both heat engines and refrigeration cycles
The saturation pressure at the maximum temperature of the cycle should not be excessive. Very high pressures lead to mechanical stress problems, and therefore, unnecessarily expensive components may be required.
The saturation pressure at the minimum temperature of the cycle (i.e. the condensing pressure) should not be so low as to lead to problems of sealing against infiltration of the atmospheric air into the system.
The triple point should lie below the expected minimum ambient temperature. This ensures that the fluid does not solidify at any point during the cycle nor whilst being handled outside the system.
The working fluid should possess a low value of the liquid viscosity, a high latent heat of vaporisation, a high liquid thermal conductivity and a good wetting capability. These ensure that the working fluid pressure drops in passing through the heat exchangers and the auxiliary piping are low and that the heat transfer rates in the exchangers are high.
The working fluid should have low vapour and liquid specific volumes. These properties affect the rates of heat transfer in the heat exchangers. The vapour specific volume relates directly to the size and cost of the cycle components. Moreover, a high vapour specific volume leads to larger volumetric flows requiring a multiplicity of exhaust ends of the expander at heat engines or compressor in refrigeration cycles and resulting in significant pressure losses. The specific volume of the liquid at the condenser pressure should be as small as possible in order to minimise the required feedwater pump work.
Non-corrosivity and compatibility with common system materials are important selection criteria.
The fluid should be chemically stable over the whole temperature and pressure range employed. The thermal decomposition resistance of the working fluid in the presence of lubricants and container materials is a highly important criterion. In addition to making the replacement of the working fluid necessary, chemical decomposition of the fluid can produce non-condensable gases which lower the heat transfer rate in the heat exchangers, as well as compounds, which have corrosive effects on the materials of the system.
Non-toxicity, non-flammability, non-explosiveness, non-radioactiveness and current industrial acceptability are also desirable attributes.
The fluid should meet the criteria of environmental protection requirements such as a low grade ozone depletion potential (ODP) and global warming potential (GWP).
The fluid should possess good lubrication properties to reduce friction between surfaces in mutual contact, which reduces the heat generated when the surfaces move and ultimately increases cycle performance.
The substance should be of low cost and readily available in large quantities.
Long-term (operational) experience with the working fluid and possible fluid recycling is also beneficial.
Special criteria for heat engines (like Rankine cycle)
The critical temperature of the fluid should be well above the highest temperature existing in the proposed cycle. Evaporation of the working fluid — and thus the significant addition of heat — can then ensue at the maximum temperature of the cycle. This results in a relatively high cycle efficiency.
The slope ds/dT of the saturated vapour line in T–s diagram (see Chapter Classification of pure (single-component) working fluids) should be nearly zero in the applied pressure ratio of the expander. This prevents significant moisture (liquid droplet) formation or excessive superheat occurring during the expansion. It also ensures that all the heat rejection in the condenser occurs at the minimum cycle temperature, which increases the thermal efficiency.
A low value for the specific heat of the liquid or, alternatively, a low ratio of number of atoms per molecule divided by the molecular weight and a high ratio of the latent heat of vaporisation to the liquid's specific heat should appertain. This reduces the amount of the heat required to raise the temperature of the subcooled liquid of the working fluid to the saturation temperature corresponding to the pressure in the Rankine cycle's evaporator. So most of the heat is added at the maximum cycle temperature, and the Rankine cycle can approach more closely the Carnot cycle.
Special criteria for refrigeration cycles or heat pumps
The slope ds/dT of the saturated vapour line in T–s diagram (see Chapter Classification of pure (single-component) working fluids) should be nearly zero, but never positive in the applied pressure ratio of the compressor. This prevents significant moisture (liquid droplet) formation or excessive superheat occurring during the compression. Compressors are very sensitive to liquid droplets.
The saturation pressure at the temperature of evaporation should not be lower than atmospheric pressure. This mainly corresponds to open-type compressors.
The saturation pressure at the temperature of condensation should not be high.
The ratio of condensation and evaporation pressures should be low.
Classification of pure (single-component) working fluids
Traditional classification
The traditional and presently most widespread categorisation of pure working fluids was first used by H. Tabor et al. and O. Badr et al., dating back to the 1960s. This three-class classification system sorts pure working fluids into three categories. The basis of the classification is the shape of the saturation vapour curve of the fluid in the temperature-entropy plane. If the slope of the saturation vapour curve is negative in all states (ds/dT<0), which means that the entropy increases with decreasing saturation temperature, the fluid is called wet. If the slope of the saturation vapour curve of the fluid is mainly positive (regardless of a short negative slope somewhat below the critical point), which means that the entropy decreases with decreasing saturation temperature (dT/ds>0), the fluid is dry. The third category is called isentropic, which means constant entropy and refers to those fluids that have a vertical saturation vapour curve (regardless of a short negative slope somewhat below the critical point) in the temperature-entropy diagram. Mathematically, this corresponds to an infinite slope dT/ds, i.e. ds/dT=0. The terms wet, dry and isentropic refer to the quality of vapour after the working fluid undergoes an isentropic (reversible adiabatic) expansion process from the saturated vapour state. During an isentropic expansion process the working fluid always ends in the two-phase (also called wet) zone if it is a wet-type fluid. If the fluid is of dry type, the isentropic expansion necessarily ends in the superheated (also called dry) steam zone. If the working fluid is of isentropic type, after an isentropic expansion process the fluid stays in the saturated vapour state. The quality of vapour is a key factor in choosing a steam turbine or expander for heat engines. See figure for better understanding.
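As a minimal sketch of how this three-class scheme can be applied to tabulated saturation data, the following function labels a fluid wet, dry or isentropic from the sign of ds/dT along the saturated-vapour line; the tolerance and the cut-off excluding the region just below the critical point are assumed, illustrative choices.

```python
import numpy as np

def classify_traditional(T_sat, s_vap, tol=1e-4, cutoff=0.9):
    """Classify a pure working fluid as 'wet', 'dry' or 'isentropic'.

    T_sat : saturation temperatures [K], ascending towards the critical point
    s_vap : saturated-vapour specific entropies [kJ/(kg K)] at T_sat
    tol   : |ds/dT| below which the slope is treated as zero (assumed value)
    cutoff: fraction of the curve kept, to ignore the short negative-slope
            region just below the critical point (assumed value)
    """
    dsdT = np.gradient(s_vap, T_sat)            # slope of the vapour line
    body = dsdT[: int(cutoff * len(dsdT))]

    if np.all(body < -tol):
        return "wet"          # ds/dT < 0: entropy rises as saturation T falls
    if np.all(np.abs(body) < tol):
        return "isentropic"   # (nearly) vertical saturated-vapour line
    return "dry"              # slope mainly positive
```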
Novel classification
The traditional classification shows several theoretical and practical deficiencies. One of the most important is the fact that no perfectly isentropic fluid exists. Isentropic fluids have two extrema (ds/dT=0) on the saturation vapour curve. In practice, some fluids come very close to this behaviour, at least in a certain temperature range, for example trichlorofluoromethane (CCl3F). Another problem is the extent to which a fluid behaves as dry or isentropic, which has significant practical importance when designing, for example, an Organic Rankine Cycle layout and choosing a proper expander.
A new kind of classification was proposed by G. Györke et al. to resolve the problems and deficiencies of the traditional three-class classification system. The new classification is also based on the shape of the saturation vapour curve of the fluid in temperature-entropy diagram similarly to the traditional one. The classification uses a characteristic-point based method to differentiate the fluids. The method defines three primary and two secondary characteristic points. The relative location of these points on the temperature-entropy saturation curve defines the categories. Every pure fluid has primary characteristic points A, C and Z:
Primary point A and Z are the lowest temperature points on the saturation liquid and saturation vapour curve respectively. This temperature belongs to the melting point, which practically equals the triple point of the fluid. The choice of A and Z refers to the first and last point of the saturation curve visually.
Primary point C refers to the critical point, which is an already well-defined thermodynamic property of the fluids.
The two secondary characteristic points, namely M and N, are defined as local entropy extrema on the saturation vapour curve, more precisely, as the points where entropy is stationary with respect to the saturation temperature: ds/dT=0. Considering the traditional classification, wet-type fluids have only the primary points (A, C and Z), dry-type fluids have the primary points and exactly one secondary point (M), and the redefined isentropic-type fluids have both the primary and the secondary points (M and N). See figure for better understanding.
The ascending order of the entropy values of the characteristic points gives a useful tool to define categories. The mathematically possible number of orderings is 3! (if there are no secondary points), 4! (if only secondary point M exists) and 5! (if both secondary points exist), which makes 150 in total. Physical constraints, including the existence conditions of the secondary points, decrease the number of possible categories to 8. The categories are named after the ascending order of the entropy of their characteristic points; namely, the possible 8 categories are ACZ, ACZM, AZCM, ANZCM, ANCZM, ANCMZ, ACNZM and ACNMZ. The categories (also called sequences) can be fitted into the traditional three-class classification, which makes the two classification systems compatible. No working fluids have been found that fit into the ACZM or ACNZM categories. Theoretical studies confirmed that these two categories may not even exist. Based on the database of NIST, the proven 6 sequences of the novel classification and their relation to the traditional one can be seen in the figure.
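A short sketch of the characteristic-point method is given below: it locates the secondary points M and N as local entropy extrema (ds/dT = 0) on the saturated-vapour line and returns the category name as the point labels ordered by increasing entropy; assigning M to the larger-entropy extremum and N to the smaller one is an assumption consistent with the sequences listed above.

```python
import numpy as np

def novel_sequence(T_sat, s_liq, s_vap):
    """Return the sequence name (e.g. 'ACZ' or 'ANCMZ') of a pure fluid.

    T_sat : saturation temperatures [K], ascending from (near) the triple
            point to the critical point
    s_liq : saturated-liquid entropies at T_sat
    s_vap : saturated-vapour entropies at T_sat (both branches meet at C)
    """
    points = {
        "A": s_liq[0],    # lowest-temperature point of the liquid branch
        "Z": s_vap[0],    # lowest-temperature point of the vapour branch
        "C": s_vap[-1],   # critical point
    }

    # Secondary points: local entropy extrema on the vapour branch (ds/dT = 0)
    dsdT = np.gradient(s_vap, T_sat)
    extrema = sorted(s_vap[i] for i in np.where(np.diff(np.sign(dsdT)) != 0)[0])
    if extrema:
        points["M"] = extrema[-1]   # larger-entropy extremum (assumed labelling)
    if len(extrema) > 1:
        points["N"] = extrema[0]    # smaller-entropy extremum (assumed labelling)

    # Category name = characteristic-point labels in ascending order of entropy
    return "".join(sorted(points, key=points.get))
```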
Multicomponent working fluids
Although multicomponent working fluids have significant thermodynamic advantages over pure (single-component) ones, research and application keep focusing on pure working fluids. However, there are some typical examples for multicomponent based technologies such as Kalina cycle which uses water and ammonia mixture, or absorption refrigerators which also use water and ammonia mixture besides water, ammonia and hydrogen, lithium bromide or lithium chloride mixtures in a majority. Some scientific papers deal with the application of multicomponent working fluids in Organic Rankine cycles as well. These are mainly binary mixtures of hydrocarbons, fluorocarbons, hydrofluorocarbons, siloxanes and inorganic substances.
See also
Heat pump and refrigeration cycle
Organic Rankine cycle
Refrigerant
Rankine cycle
Thermodynamic cycle
Vapor-compression refrigeration
References
External links
Knowledge Center on Organic Rankine Cycle
National Refrigerants, Inc.
NIST Chemistry Webbook
Novel classification of pure working fluids for Organic Rankine Cycle
ORC World Map
Fluid mechanics
Thermodynamics | Working fluid selection | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 3,018 | [
"Civil engineering",
"Fluid mechanics",
"Thermodynamics",
"Dynamical systems"
] |
57,210,774 | https://en.wikipedia.org/wiki/Spinor%20condensate | Spinor condensates are degenerate Bose gases that have degrees of freedom arising from the internal spin of the constituent particles. They are described by a multi-component (spinor) order parameter. Since their initial experimental realisation, a wealth of studies have appeared, both experimental and theoretical, focusing on the physical properties of spinor condensates, including their ground states, non-equilibrium dynamics, and vortices.
Early work
The study of spinor condensates was initiated in 1998 by experimental groups at JILA and MIT. These experiments utilised 23Na and 87Rb atoms, respectively. In contrast to most prior experiments on ultracold gases, these experiments utilised a purely optical trap, which is spin-insensitive. Shortly thereafter, theoretical work appeared which described the possible mean-field phases of spin-one spinor condensates.
Underlying Hamiltonian
The Hamiltonian describing a spinor condensate is most frequently written using the language of second quantization. Here the field operator creates a boson in Zeeman level at position . These operators satisfy bosonic commutation relations:
The free (non-interacting) part of the Hamiltonian is
where denotes the mass of the constituent particles and is an external potential.
For a spin-one spinor condensate, the interaction Hamiltonian is
In this expression, is the operator corresponding to the density, is the local spin operator ( is a vector composed of the spin-one matrices), and :: denotes normal ordering. The parameters can be expressed in terms of the s-wave scattering lengths of the constituent particles. Higher spin versions of the interaction Hamiltonian are slightly more involved, but can generally be expressed by using Clebsch–Gordan coefficients.
The full Hamiltonian then is .
Mean-field phases
In Gross-Pitaevskii mean field theory, one replaces the field operators with c-number functions: . To find the mean-field ground states, one then minimises the resulting energy with respect to these c-number functions.
For a spatially uniform spin-one system, there are two possible mean-field ground states. When , the ground state is
while for the ground state is
The former expression is referred to as the polar state, while the latter is the ferromagnetic state. Both states are unique up to overall spin rotations. Importantly, cannot be rotated into .
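Since the inline formulas above were not preserved, the following sketch uses an assumed standard notation in which the spin-dependent coupling is written c2 (times the density n); it numerically checks that the spin-dependent mean-field energy (c2*n/2)|<F>|^2 is minimised by a zero-magnetisation (polar) spinor for c2 > 0 and by a fully magnetised (ferromagnetic) spinor for c2 < 0.

```python
import numpy as np

# Spin-1 matrices (hbar = 1)
Fx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Fy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Fz = np.diag([1.0, 0.0, -1.0])

def mean_spin(zeta):
    """Expectation value <F> for a (normalised) spin-1 spinor zeta."""
    zeta = zeta / np.linalg.norm(zeta)
    return np.array([np.vdot(zeta, M @ zeta).real for M in (Fx, Fy, Fz)])

def spin_energy(zeta, c2n):
    """Spin-dependent mean-field energy per particle, (c2*n/2) |<F>|^2."""
    return 0.5 * c2n * np.sum(mean_spin(zeta) ** 2)

rng = np.random.default_rng(0)
trials = rng.normal(size=(20000, 3)) + 1j * rng.normal(size=(20000, 3))

for c2n, label in [(+1.0, "c2 > 0 (polar regime)"),
                   (-1.0, "c2 < 0 (ferromagnetic regime)")]:
    best = min(trials, key=lambda z: spin_energy(z, c2n))  # brute-force search
    print(label, "-> |<F>| of the energy minimiser =",
          round(float(np.linalg.norm(mean_spin(best))), 2))
# Expected: |<F>| ~ 0 for c2 > 0 (polar, e.g. (0,1,0) up to rotation) and
#           |<F>| ~ 1 for c2 < 0 (ferromagnetic, e.g. (1,0,0) up to rotation).
```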
The Majorana stellar representation provides a particularly insightful description of the mean-field phases of spinor condensates with larger spin.
Vortices
Due to being described by a multi-component order parameter, numerous types of topological defects (vortices) can appear in spinor condensates. Homotopy theory provides a natural description of topological defects, and is regularly employed to understand vortices in spinor condensates.
References
Bose–Einstein condensates
Exotic matter
Phases of matter | Spinor condensate | [
"Physics",
"Chemistry",
"Materials_science"
] | 591 | [
"Bose–Einstein condensates",
"Phases of matter",
"Condensed matter physics",
"Exotic matter",
"Matter"
] |
57,216,564 | https://en.wikipedia.org/wiki/Welding%20of%20advanced%20thermoplastic%20composites | Advanced thermoplastic composites (ACM) consist of high-strength fibres held together by a thermoplastic matrix. Advanced thermoplastic composites are becoming more widely used in the aerospace, marine, automotive and energy industries. This is due to their decreasing cost and superior strength-to-weight ratios compared with metallic parts. Advanced thermoplastic composites have excellent damage tolerance, corrosion resistance, high fracture toughness, high impact resistance, good fatigue resistance, low storage cost, and an infinite shelf life. Thermoplastic composites also have the ability to be formed and reformed, repaired and fusion welded.
Fusion bonding fundamentals
Fusion bonding is a category of techniques for welding thermoplastic composites. It requires the melting of the joint interface, which decreases the viscosity of the polymer and allows for intermolecular diffusion. These polymer chains then diffuse across the joint interface and become entangled, giving the joint its strength.
Welding techniques
There are many welding techniques that can be used to fusion bond thermoplastic composites. These different techniques can be broken down into three classifications for their ways of generating heat; frictional heating, external heating and electromagnetic heating. Some of these techniques can be very limited and only used for specific joints and geometries.
Friction welding
Friction welding is best used for parts that are small and flat. The welding equipment is often expensive, but produces high-quality welds.
Linear vibration welding
Two flat parts are brought together under pressure with one fixed in place and the other vibrating back-and-forth parallel to the joint. Frictional heat is then generated until the polymers are softened or melted. Once the desired temperature is reached, the vibration stops, the polymer solidifies and a weld joint is made. The two most important welding parameters that affect the mechanical performance are welding pressure and time. Developing parameters for different advanced thermoplastic composites can be challenging because materials with a higher elastic modulus generate heat faster and therefore require less weld time. The pressure can affect the fiber orientation, which also greatly impacts the mechanical performance. Lap shear joints tend to have the best mechanical performance because of the higher volume fraction of fibers at the weld interface. Overall, linear vibration welding can achieve high production rates with excellent strength, but is limited to joint geometries that are flat.
Spin welding
Spin welding is not a very common welding technique for advanced thermoplastic composites because it can only be used for parts that have a circular geometry. One part remains stationary while the other is continuously rotated with pressure applied to the weld interface. The rotational velocity varies across different radii of the interface. This results in a temperature gradient as a function of the radius, leading to different shrinkage of the fibers and causing high residual stresses. The orientation of the fibers will also contribute to high residual stress and reduction in strength.
Ultrasonic welding
Ultrasonic welding is one of the most commonly used techniques for welding advanced thermoplastic composites. This is due to its ability to maintain high weld strength, hermetic sealing, and high production rates. This welding technique operates at high vibrational frequencies (10–70 kHz) and low amplitude. The direction of vibration is perpendicular to the joint surface, but can also be parallel to the joint for hermetic applications. Heat is generated by surface and intermolecular friction due to the vibration. On the surface of the joint there are small asperities called energy directors, where the vibrational energy concentrates and induces melting. Design of the energy director and optimized parameters can be critical to improving the quality of the weld and reducing any fiber disruption during welding. Energy directors that are triangular or semi-circular often achieve the highest strength. With optimized welding parameters and joint design, weld strengths of up to 80% of the base material strength can be retained for advanced thermoplastic composites. However, welding can cause damage to the fibers, which will result in premature failure. Ultrasonic welding of advanced thermoplastic composites is used for making automotive parts, medical devices and battery housings.
Thermal welding
Thermal welding can produce good weld quality, although extra precautions need to be taken to prevent high residual stress, warping, and decohesion. Other thermal welding techniques are not commonly used due to their high heat input, which can damage the composite.
Laser welding
Laser welding of advanced thermoplastic composites is a process by which the LASER (Light Amplification by Stimulated Emission of Radiation), a highly focused coherent beam of light, melts the composite in various ways. Taking advantage of joint design and material properties, lasers can be applied either directly or indirectly to create the welded joint. There are processing methods that take advantage of material structure/properties to create the weld joint. Welding variables affect weld quality in both positive and negative ways depending on how they are manipulated.
Laser heating mechanism in matter
When a laser beam impinges on a material, it excites electrons in the outermost shell of the atom. The return of those electrons to the relaxed state induces thermal heating through conversion to vibrational states which propagate to the surrounding material.
Joining methods for laser welding
Surface heating
This method involves using infrared radiation to heat the surfaces of the composites to be welded and then clamping and holding the parts together.
IR/Laser stacking
This method involves laser melting a polymer post and pressing a die into the molten post to create a rivet-like button to join materials like metals. This process can be used to join metallic parts to composite structures.
Through Transmission IR welding (TTIr)
This method utilizes one laser transparent (LT) and one laser absorbing (LA) material. Typically, the components are layered as a sandwich with the laser beam passing through the LT layer and irradiating the surface of the LA. This creates a melt layer at the interface of two components leading to a weld.
Effect of Constituent Properties on Weldability
To understand how the properties of a composite affect its weldability, the effects of the individual constituents (fiber, matrix, additives, etc.) need to be understood. The effect of each will be noted separately and then the combined effects will be discussed.
Matrix
Electromagnetic radiation interaction
A laser beam can interact in one of three ways when it contacts the polymer matrix. It can be absorbed, transmitted, or reflected. The amount of absorption determines the amount of energy available for welding. The reflectivity is affected by the index of refraction according to the relation R = ((n − m)/(n + m))², where n is the index of refraction of the polymer and m is the index of refraction of air.
Absorption can be affected by the following structural characteristics of the polymer to be discussed below: crystallinity, chemical bonding, and concentration of additives.
Crystallinity
Increased crystallinity tends to cause lower laser beam transmission because of scattering caused by changes in the index of refraction encountered when going from one phase to the next or because of changing crystallographic orientation. Increased crystallinity can cause the transmission to decrease monotonically as a function of polymer thickness. The relationship follows the Bouguer–Lambert law: I = I₀ e^(−kt), where I is the intensity of the laser beam at a given depth or thickness t, I₀ is the intensity of the laser beam at its source, and k is the absorption constant of the polymer. By the same token, amorphous polymers lack this trend with thickness.
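To make the two relations above concrete, the Python sketch below combines the reflectivity and Bouguer–Lambert expressions to estimate how much beam intensity reaches the weld interface; the refractive index, absorption constant and thickness are assumed illustrative values, not measured material data:

import math

n_polymer, n_air = 1.58, 1.00      # refractive indices (assumed values)
k_absorption = 0.8                 # absorption constant of the transparent part, 1/mm (assumed)
thickness_mm = 2.0                 # thickness of the laser-transparent part

# Fresnel reflection at normal incidence from the refractive-index mismatch.
reflectivity = ((n_polymer - n_air) / (n_polymer + n_air)) ** 2

# Bouguer-Lambert attenuation of the beam through the laser-transparent part.
transmitted_fraction = (1.0 - reflectivity) * math.exp(-k_absorption * thickness_mm)

print(f"surface reflection loss: {reflectivity:.1%}")
print(f"fraction of intensity reaching the weld interface: {transmitted_fraction:.1%}")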
Chemical bonding
Polymers absorb EMR (electromagnetic radiation) at specific wavelengths of light depending on which functional groups are present on the polymer. For instance, bending of the C–H bond absorbs near 6800 nm. Many polymers have vibrational modes at wavelengths greater than 1100 nm, so to apply methods such as TTIr, laser sources must produce photons at wavelengths shorter than that. Therefore, Nd:YAG lasers (1064 nm) and diode lasers (800–950 nm) can pass through the LT until they impinge on the intended modified polymer or additive that results in absorption, whereas CO2 lasers (10,640 nm) will be absorbed too easily as they pass through the LT.
Reinforcements
Reinforcements include fibers and short particles. Reinforcing fibers can be added to increase the strength of a composite.
Some reinforcements like carbon fibers have high thermal conductivity and can dissipate the heat of welding, thus requiring more energy input than with other reinforcement materials such as glass. Glass reinforcements can cause scattering of the beam.
The orientation of the continuous fibers can affect the width of welds being made. When the welding direction is parallel to the orientation of the fibers, the weld width is usually narrower due to heat being channeled through the fibers to the front and the rear of the weld.
An increased volume fraction of reinforcements such as glass can scatter the laser beam, thus allowing less to be transmitted to the weld joint. When this happens, the amount of energy necessary to fuse the joint may increase. If this increase is not managed carefully, it can cause damage to the transparent part of a TTIr weld joint.
Additives and Fillers
Some additives can be intentionally added to absorb laser energy. This technique is especially useful in concentrating the weld joint to the mated surfaces of two materials that are relatively transparent to the laser beam. For example, carbon black increases absorption of the laser beam. There can be some unintended consequences of using these absorbing additives. Increasing the concentration of carbon black in a polymer can decrease the depth of heating and increase the peak temperature at the weld joint. Surface damage can occur if the concentration of carbon black becomes excessive.
Some additives such as the highly selective materials used in the Clearweld process are applied only to the mating surfaces between the plastics to be joined. Some of the chemicals such as cyanines only absorb in a narrow wavelength band centered around 785 nm. This methodology initially was applied only to plastics, but has recently been applied to composites such as carbon fiber reinforced PEEK.
Other additives called clarifiers can do the opposite of carbon black by increasing laser beam transmission by reducing crystallinity in polymers.
Despite the fact that both pigments and dyes can both add color to a polymer, they behave differently. A dye is soluble in a polymer, whereas a pigment is not.
Welding technique comparisons
Contour Welding (CW) vs quasi-simultaneous (QS)
During TTIr, although it takes more energy per unit length to achieve fusion with QS than with CW, QS offers the advantage of achieving higher weld strength and weldability of low transmissive materials such as continuous glass fiber thermoplastics. Greater strength is imparted because full fusion is achieved without damaging the surface of the transparent material.
Electromagnetic welding
Electromagnetic welding is capable of welding complex parts and also offers the possibility of reopening welds for replacement or repair. To achieve good welds, the design of the coil and implant is important for uniform heating.
Implant resistance welding
Implant resistance welding can be a low cost solution for welding parts that are flat or with curved surfaces. The heating element used is often a metal mesh or carbon strips, which provides uniform heating. However, advanced thermoplastic composites that contain conductive fibers can’t be used due to unwanted power leakages.
Implant induction welding
Induction welding uses an implant or susceptor that is placed at the weld interface and embedded with conductive material such as metal or carbon fibers. An induction coil is then placed near the weld joint, which induces a current in the embedded conductive material and generates heat. When welding carbon fiber, carbon and graphite fiber mats with higher electrical resistance are used to concentrate the heat at the weld interface. This has the ability to weld structures with complex geometry with great weld strength.
Challenges of welding advanced thermoplastic composites
The heat generated during welding of thermoplastic composites induces residual stresses in the joint. These stresses can greatly reduce the strength and performance of the part. Upon cooling from welding, the matrix and fibers, which have different coefficients of thermal expansion, introduce residual stress. Factors such as heat input, cooling rates, volume fraction of the fibers, and matrix material will influence the residual stress. Another important factor to consider is the orientation of the fibers. During the molten state of welding, fibers can reorient themselves in a manner that reduces weld strength.
Advanced thermoplastic composites commonly used for welding
Carbon fiber polyetherimide (CF/PEI)
Carbon fiber polyphenylene sulfide (CF/PPS)
Carbon fiber polyetheretherketone (CF/PEEK)
See also
Testing of advanced thermoplastic composite welds
References
Welding
Materials science | Welding of advanced thermoplastic composites | [
"Physics",
"Materials_science",
"Engineering"
] | 2,597 | [
"Welding",
"Applied and interdisciplinary physics",
"Materials science",
"Mechanical engineering",
"nan"
] |
40,888,645 | https://en.wikipedia.org/wiki/Bayesian%20programming | Bayesian programming is a formalism and a methodology for specifying probabilistic models and solving problems when less than the necessary information is available.
Edwin T. Jaynes proposed that probability could be considered as an alternative and an extension of logic for rational reasoning with incomplete and uncertain information. In his founding book Probability Theory: The Logic of Science he developed this theory and proposed what he called “the robot,” which was not
a physical device, but an inference engine to automate probabilistic reasoning—a kind of Prolog for probability instead of logic. Bayesian programming is a formal and concrete implementation of this "robot".
Bayesian programming may also be seen as an algebraic formalism to specify graphical models such as, for instance, Bayesian networks, dynamic Bayesian networks, Kalman filters or hidden Markov models. Indeed, Bayesian Programming is more general than Bayesian networks and has a power of expression equivalent to probabilistic factor graphs.
Formalism
A Bayesian program is a means of specifying a family of probability distributions.
The constituent elements of a Bayesian program are presented below:
A program is constructed from a description and a question.
A description is constructed using some specification () as given by the programmer and an identification or learning process for the parameters not completely specified by the specification, using a data set ().
A specification is constructed from a set of pertinent variables, a decomposition and a set of forms.
Forms are either parametric forms or questions to other Bayesian programs.
A question specifies which probability distribution has to be computed.
Description
The purpose of a description is to specify an effective method of computing a joint probability distribution
on a set of variables given a set of experimental data and some
specification . This joint distribution is denoted as: .
To specify preliminary knowledge , the programmer must undertake the following:
Define the set of relevant variables on which the joint distribution is defined.
Decompose the joint distribution (break it into relevant independent or conditional probabilities).
Define the forms of each of the distributions (e.g., for each variable, one of the list of probability distributions).
Decomposition
Given a partition of containing subsets, variables are defined
, each corresponding to one of these subsets.
Each variable is obtained as the conjunction of the variables
belonging to the subset. Recursive application of Bayes' theorem leads to:
Conditional independence hypotheses then allow further simplifications. A conditional
independence hypothesis for variable is defined by choosing some variable
among the variables appearing in the conjunction , labelling as the
conjunction of these chosen variables and setting:
We then obtain:
Such a simplification of the joint distribution as a product of simpler distributions is
called a decomposition, derived using the chain rule.
This ensures that each variable appears at most once on the left of a conditioning
bar, which is the necessary and sufficient condition to write mathematically valid
decompositions.
Forms
Each distribution appearing in the product is then associated
with either a parametric form (i.e., a function ) or a question to another Bayesian program .
When it is a form , in general, is a vector of parameters that may depend on or or both. Learning
takes place when some of these parameters are computed using the data set .
An important feature of Bayesian Programming is this capacity to use questions to other Bayesian programs as components of the definition of a new Bayesian program. is obtained by some inferences done by another Bayesian program defined by the specifications and the data . This is similar to calling a subroutine in classical programming and provides an easy way to build hierarchical models.
Question
Given a description (i.e., ), a question is obtained by partitioning
into three sets: the searched variables, the known variables and
the free variables.
The 3 variables , and are defined as the
conjunction of the variables belonging to
these sets.
A question is defined as the set
of distributions:
made of as many "instantiated questions" as the cardinality of the set of possible values of the known variables,
each instantiated question being the distribution:
Inference
Given the joint distribution , it is always possible to compute any possible question using the following general inference:
where the first equality results from the marginalization rule, the second
results from Bayes' theorem and the third corresponds to a second application of marginalization. The denominator appears to be a normalization term and can be replaced by a constant .
Theoretically, this allows one to solve any Bayesian inference problem. In practice,
however, the cost of computing exhaustively and exactly is too great in almost all cases.
Replacing the joint distribution by its decomposition we get:
which is usually a much simpler expression to compute, as the dimensionality of the problem is considerably reduced by the decomposition into a product of lower dimension distributions.
Example
Bayesian spam detection
The purpose of Bayesian spam filtering is to eliminate junk e-mails.
The problem is very easy to formulate. E-mails should be classified
into one of two categories: non-spam or spam. The only available information to classify the e-mails is their content: a set of words. Using these words without taking the order into account is commonly called a bag of words model.
The classifier should furthermore be able to adapt to its user and to learn
from experience. Starting from an initial standard setting, the classifier should
modify its internal parameters when the user disagrees with its own decision.
It will hence adapt to the user's criteria to differentiate between non-spam and
spam. It will improve its results as it encounters more and more classified e-mails.
Variables
The variables necessary to write this program are as follows:
Spam: a binary variable, false if the e-mail is not spam and true otherwise.
W_1, ..., W_N: binary variables, where W_n is true if the nth word of the dictionary is present in the text.
These binary variables sum up all the information
about an e-mail.
Decomposition
Starting from the joint distribution and applying recursively Bayes' theorem we obtain:
This is an exact mathematical expression.
It can be drastically simplified by assuming that the probability of appearance of a word knowing the nature of the text (spam or not) is independent of the appearance of the other words. This is the naive Bayes assumption and this makes this spam filter a naive Bayes model.
For instance, the programmer can assume that:
to finally obtain:
This kind of assumption is known as the naive Bayes' assumption. It is "naive" in the sense that the independence between words is clearly not completely true. For instance, it completely neglects that the appearance of pairs of words may be more significant than isolated appearances. However, the programmer may assume this hypothesis and may develop the model and the associated inferences to test how reliable and efficient it is.
Parametric forms
To be able to compute the joint distribution, the programmer must now specify the
distributions appearing in the decomposition:
is a prior defined, for instance, by
Each of the forms may be specified using Laplace rule of succession (this is a pseudocounts-based smoothing technique to counter the zero-frequency problem of words never-seen-before):
where stands for the number of appearances of the word in non-spam e-mails and stands for the total number of non-spam e-mails. Similarly, stands for the number of appearances of the word in spam e-mails and stands for the total number of spam e-mails.
Identification
The forms are not yet completely specified because the parameters , , and have no values yet.
The identification of these parameters could be done either by batch processing a series of classified e-mails or by an incremental updating of the parameters using the user's classifications of the e-mails as they arrive.
Both methods could be combined: the system could start with initial standard values of these parameters issued from a generic database, then some incremental learning customizes the classifier to each individual user.
Question
The question asked to the program is: "what is the probability for a given text to be spam knowing which words appear and don't appear in this text?"
It can be formalized by:
which can be computed as follows:
The denominator appears to be a normalization constant. It is not necessary to compute it to decide if we are dealing with spam. For instance, an easy trick is to compute the ratio:
This computation is faster and easier because it requires only products.
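As an illustration (not part of the source text; the corpora, dictionary and word lists below are invented toy data), the following Python sketch estimates the per-word frequencies with the Laplace rule of succession and evaluates the spam-to-non-spam ratio for a new e-mail:

from collections import Counter

# Toy corpora of already-classified e-mails, each represented as a bag of words.
spam_mails = [{"win", "money", "now"}, {"free", "money"}, {"win", "free", "prize"}]
ham_mails = [{"meeting", "tomorrow"}, {"project", "report", "money"}, {"see", "you", "tomorrow"}]
dictionary = sorted(set().union(*spam_mails, *ham_mails))

n_spam, n_ham = len(spam_mails), len(ham_mails)
spam_counts = Counter(w for mail in spam_mails for w in mail)
ham_counts = Counter(w for mail in ham_mails for w in mail)

def p_word_given_class(word, present, counts, n_class):
    # Laplace rule of succession: (count + 1) / (n_class + 2) avoids zero frequencies.
    p_present = (counts[word] + 1) / (n_class + 2)
    return p_present if present else 1 - p_present

def spam_to_ham_ratio(mail, p_spam=0.5):
    # Ratio P(Spam | words) / P(not Spam | words); the normalisation constant cancels.
    ratio = p_spam / (1 - p_spam)
    for word in dictionary:
        present = word in mail
        ratio *= (p_word_given_class(word, present, spam_counts, n_spam)
                  / p_word_given_class(word, present, ham_counts, n_ham))
    return ratio

new_mail = {"free", "money", "now"}
ratio = spam_to_ham_ratio(new_mail)
print(f"spam/non-spam ratio for {sorted(new_mail)}: {ratio:.2f}")
print("classified as", "spam" if ratio > 1 else "non-spam")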
Bayesian program
The Bayesian spam filter program is completely defined by:
Bayesian filter, Kalman filter and hidden Markov model
Bayesian filters (often called Recursive Bayesian estimation) are generic probabilistic models for time evolving processes. Numerous models are particular instances of this generic approach, for instance: the Kalman filter or the Hidden Markov model (HMM).
Variables
Variables are a time series of state variables considered to be on a time horizon ranging from to .
Variables are a time series of observation variables on the same horizon.
Decomposition
The decomposition is based:
on , called the system model, transition model or dynamic model, which formalizes the transition from the state at time to the state at time ;
on , called the observation model, which expresses what can be observed at time when the system is in state ;
on an initial state at time : .
Parametrical forms
The parametrical forms are not constrained and different choices lead to different well-known models: see Kalman filters and Hidden Markov models just below.
Question
The typical question for such models is: what is the probability distribution for the state at time k knowing the observations from instant 0 to t?
The most common case is Bayesian filtering, where k = t: one searches for the present state, knowing past observations.
However, it is also possible to do prediction (k > t), extrapolating a future state from past observations, or to do smoothing (k < t), recovering a past state from observations made either before or after that instant.
More complicated questions may also be asked as shown below in the HMM section.
Bayesian filters have a very interesting recursive property, which contributes greatly to their attractiveness. may be computed simply from with the following formula:
Another interesting point of view for this equation is to consider that there are two phases: a
prediction phase and an estimation phase:
During the prediction phase, the state is predicted using the dynamic model and the estimation of the state at the previous moment:
During the estimation phase, the prediction is either confirmed or invalidated using the last observation:
Bayesian program
Kalman filter
The very well-known Kalman filters are a special case of Bayesian
filters.
They are defined by the following Bayesian program:
Variables are continuous.
The transition model and the observation model are both specified using Gaussian laws with means that are linear functions of the conditioning variables.
With these hypotheses and by using the recursive formula, it is possible to solve
the inference problem analytically to answer the usual question.
This leads to an extremely efficient algorithm, which explains the popularity of Kalman filters and the number of their everyday applications.
When there are no obvious linear transition and observation models, it is still often
possible, using a first-order Taylor's expansion, to treat these models as locally linear.
This generalization is commonly called the extended Kalman filter.
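A minimal Python sketch of the predict/estimate cycle for a one-dimensional linear-Gaussian model; the random-walk dynamics and the noise variances below are assumed purely for illustration and are not taken from the text:

import numpy as np

q, r = 0.05, 0.5          # process and observation noise variances (assumed values)
mean, var = 0.0, 1.0      # Gaussian prior on the initial state

rng = np.random.default_rng(0)
true_state = 0.0
for _ in range(50):
    true_state += rng.normal(0.0, np.sqrt(q))          # hidden random-walk dynamics
    obs = true_state + rng.normal(0.0, np.sqrt(r))     # noisy observation

    # Prediction phase: propagate the previous estimate through the dynamic model.
    mean_pred, var_pred = mean, var + q

    # Estimation phase: confirm or correct the prediction with the last observation.
    gain = var_pred / (var_pred + r)
    mean = mean_pred + gain * (obs - mean_pred)
    var = (1 - gain) * var_pred

print(f"final true state: {true_state:+.3f}, final estimate: {mean:+.3f} (variance {var:.3f})")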
Hidden Markov model
Hidden Markov models (HMMs) are another very popular specialization of Bayesian filters.
They are defined by the following Bayesian program:
Variables are treated as being discrete.
The transition model and the observation model are
both specified using probability matrices.
The question most frequently asked of HMMs is:
What is the most probable series of states that leads to the present state, knowing the past observations?
This particular question may be answered with a specific and very efficient algorithm
called the Viterbi algorithm.
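A short Python sketch of the Viterbi recursion on a toy two-state HMM; the transition, observation and prior matrices below are invented for the example and are not from the source:

import numpy as np

states = ["rain", "sun"]
T = np.array([[0.7, 0.3],      # T[i, j] = P(S_t = j | S_{t-1} = i)
              [0.4, 0.6]])
O = np.array([[0.9, 0.1],      # O[i, o] = P(observation o | state i)
              [0.2, 0.8]])
prior = np.array([0.5, 0.5])
observations = [0, 0, 1, 0]    # observed symbols over four time steps

n_steps, n_states = len(observations), len(states)
delta = np.zeros((n_steps, n_states))            # best path probability ending in each state
backpointer = np.zeros((n_steps, n_states), dtype=int)
delta[0] = prior * O[:, observations[0]]
for t in range(1, n_steps):
    for j in range(n_states):
        scores = delta[t - 1] * T[:, j]
        backpointer[t, j] = int(np.argmax(scores))
        delta[t, j] = scores.max() * O[j, observations[t]]

# Backtrack the most probable state sequence.
path = [int(np.argmax(delta[-1]))]
for t in range(n_steps - 1, 0, -1):
    path.append(int(backpointer[t, path[-1]]))
path.reverse()
print([states[s] for s in path])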
The Baum–Welch algorithm has been developed
for HMMs.
Applications
Academic applications
Since 2000, Bayesian programming has been used to develop both robotics applications and life sciences models.
Robotics
In robotics, Bayesian programming was applied to autonomous robotics, robotic CAD systems, advanced driver-assistance systems, robotic arm control, mobile robotics, human-robot interaction, human-vehicle interaction (Bayesian autonomous driver models), video game avatar programming and training, and real-time strategy games (AI).
Life sciences
In life sciences, bayesian programming was used in vision to reconstruct shape from motion, to model visuo-vestibular interaction and to study saccadic eye movements; in speech perception and control to study early speech acquisition and the emergence of articulatory-acoustic systems; and to model handwriting perception and control.
Pattern recognition
Bayesian program learning has potential applications in voice recognition and synthesis, image recognition and natural language processing. It employs the principles of compositionality (building abstract representations from parts), causality (building complexity from parts) and learning to learn (using previously recognized concepts to ease the creation of new concepts).
Possibility theories
The comparison between probabilistic approaches (not only bayesian programming) and possibility theories continues to be debated.
Possibility theories like, for instance, fuzzy sets, fuzzy logic and possibility theory are alternatives to probability to model uncertainty. They argue that probability is insufficient or inconvenient to model certain aspects of incomplete/uncertain knowledge.
The defense of probability is mainly based on Cox's theorem, which starts from four postulates concerning rational reasoning in the presence of uncertainty. It demonstrates that the only mathematical framework that satisfies these postulates is probability theory. The argument is that any approach other than probability necessarily infringes one of these postulates, and the question is then what is gained by that infringement.
Probabilistic programming
The purpose of probabilistic programming is to unify the scope of classical programming languages with probabilistic modeling (especially bayesian networks) to deal with uncertainty while profiting from the programming languages' expressiveness to encode complexity.
Extended classical programming languages include logical languages as proposed in Probabilistic Horn Abduction, Independent Choice Logic, PRISM, and ProbLog which proposes an extension of Prolog.
It can also be extensions of functional programming languages (essentially Lisp and Scheme) such as IBAL or CHURCH. The underlying programming languages can be object-oriented as in BLOG and FACTORIE or more standard ones as in CES and FIGARO.
The purpose of Bayesian programming is different. Jaynes' precept of "probability as logic" argues that probability is an extension of and an alternative to logic above which a complete theory of rationality, computation and programming can be rebuilt. Bayesian programming attempts to replace classical languages with a programming approach based on probability that considers incompleteness and uncertainty.
The precise comparison between the semantics and power of expression of Bayesian and probabilistic programming is an open question.
See also
References
Further reading
External links
A companion site to the Bayesian programming book where to download ProBT an inference engine dedicated to Bayesian programming.
The Bayesian-programming.org site for the promotion of Bayesian programming with detailed information and numerous publications.
Bayesian statistics
Artificial intelligence
Artificial intelligence engineering | Bayesian programming | [
"Engineering"
] | 3,067 | [
"Software engineering",
"Artificial intelligence engineering"
] |
40,889,432 | https://en.wikipedia.org/wiki/Leviton%20%28quasiparticle%29 | A leviton is a collective excitation of a single electron within a metal. It has been mostly studied in two-dimensional electron gases alongside quantum point contacts. The main feature is that the excitation produces an electron pulse without the creation of electron holes. The time-dependence of the pulse is described by a Lorentzian distribution created by a pulsed electric potential.
Levitons have also been described in graphene.
The leviton is named after Leonid Levitov, who first predicted its existence in 1996.
References
Quasiparticles | Leviton (quasiparticle) | [
"Physics",
"Materials_science"
] | 111 | [
"Matter",
"Quantum mechanics",
"Subatomic particles",
"Condensed matter physics",
"Quasiparticles",
"Quantum physics stubs"
] |
46,930,376 | https://en.wikipedia.org/wiki/Penicillium%20novae-zelandiae | Penicillium novae-zelandiae is an anamorph species of fungus in the genus Penicillium which was isolated from the plant Festuca novae-zelandiae. Penicillium novae-zelandiae produces patulin, 3-hydroxybenzyl alcohol and gentisyl alcohol.
Further reading
References
novae-zelandiae
Fungi described in 1940
Fungus species | Penicillium novae-zelandiae | [
"Biology"
] | 85 | [
"Fungi",
"Fungus species"
] |
46,934,541 | https://en.wikipedia.org/wiki/Inserter%20category | In category theory, a branch of mathematics, the inserter category is a variation of the comma category where the two functors are required to have the same domain category.
Definition
If C and D are two categories and F and G are two functors from C to D, the inserter category Ins(F, G) is the category whose objects are pairs (X, f) where X is an object of C and f is a morphism in D from F(X) to G(X) and whose morphisms from (X, f) to (Y, g) are morphisms h in C from X to Y such that G(h) ∘ f = g ∘ F(h).
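For readability, the morphism condition can be pictured as a commuting square; the LaTeX sketch below uses standard notation and is added for illustration rather than taken from the article:

% h : X -> Y is a morphism from (X, f) to (Y, g) in Ins(F, G)
% exactly when this square commutes, i.e. G(h) \circ f = g \circ F(h).
\begin{array}{ccc}
F(X) & \xrightarrow{F(h)} & F(Y) \\
\big\downarrow{\scriptstyle f} & & \big\downarrow{\scriptstyle g} \\
G(X) & \xrightarrow{G(h)} & G(Y)
\end{array}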
Properties
If C and D are locally presentable, F and G are functors from C to D, and either F is cocontinuous or G is continuous; then the inserter category Ins(F, G) is also locally presentable.
References
Category theory | Inserter category | [
"Mathematics"
] | 191 | [
"Functions and mappings",
"Mathematical structures",
"Category theory stubs",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory"
] |
46,941,716 | https://en.wikipedia.org/wiki/Vine%20copula | A vine is a graphical tool for labeling constraints in high-dimensional probability distributions. A regular vine is a special case for which all constraints are two-dimensional or conditional two-dimensional. Regular vines generalize trees, and are themselves specializations of Cantor tree.
Combined with bivariate copulas, regular vines have proven to be a flexible tool in high-dimensional dependence modeling. Copulas
are multivariate distributions with uniform univariate margins. Representing a joint distribution as univariate margins plus copulas allows the separation of the problems of estimating univariate distributions from the problems of estimating dependence. This is handy in as much as univariate distributions in many cases can be adequately estimated from data, whereas dependence information is often only roughly known, involving summary indicators and judgment.
Although the number of parametric multivariate copula families with flexible dependence is limited, there are many parametric families of bivariate copulas. Regular vines owe their increasing popularity to the fact that they leverage from bivariate copulas and enable extensions to arbitrary dimensions. Sampling theory and estimation theory for regular vines are well developed
and model inference has left the post. Regular vines have proven useful in other problems such as (constrained) sampling of correlation matrices, building non-parametric continuous Bayesian networks.
For example, in finance, vine copulas have been shown to effectively model tail risk in portfolio optimization applications.
Historical origins
The first regular vine, avant la lettre, was introduced by Harry Joe.
The motive was to extend parametric bivariate extreme value copula families to higher dimensions. To this end he introduced what would later be called the D-vine. Joe
was interested in a class of n-variate distributions with given one dimensional margins, and n(n − 1) dependence parameters, whereby n − 1 parameters correspond to bivariate margins, and the others correspond to conditional bivariate margins. In the case of multivariate normal distributions, the parameters would be n − 1 correlations and (n − 1)(n − 2)/2 partial correlations, which were noted to be algebraically independent in (−1, 1).
An entirely different motivation underlay the first formal definition of vines in Cooke.
Uncertainty analyses of large risk models, such as those undertaken for the European Union and the US Nuclear Regulatory Commission for accidents at nuclear power plants, involve quantifying and propagating uncertainty over hundreds of variables.
Dependence information for such studies had been captured with Markov trees,
which are trees constructed with nodes as univariate random variables and edges as bivariate copulas. For n variables, there are at most n − 1 edges for which dependence can be specified. New techniques at that time involved obtaining uncertainty distributions on modeling parameters by eliciting experts' uncertainties on other variables which are predicted by the models. These uncertainty distributions are pulled back onto the model's parameters by a process known as probabilistic inversion.
The resulting distributions often displayed a dependence structure that could not be captured as a Markov tree.
Graphical models called vines were introduced in 1997 and further refined by Roger M. Cooke, Tim Bedford, and Dorota Kurowicka. An important feature of vines is that they can add conditional dependencies among variables on top of a Markov tree which is generally too parsimonious to summarize the dependence among variables.
Regular vines (R-vines)
A vine on n variables is a nested set of connected trees where the edges in the first tree are the nodes of the second tree, the edges of the second tree are the nodes of the third tree, etc.
A regular vine or R-vine on n variables is a vine in which two edges in tree are joined by an edge in tree j + 1 only if these edges share a common node, j = 1, ..., n − 2. The nodes in the first tree are univariate random variables. The edges are constraints or conditional constraints explained as follows.
Recall that an edge in a tree is an unordered set of two nodes. Each edge in a vine is associated with a constraint set, being the set of variables (nodes in first tree) reachable by the set membership relation. For each edge, the constraint set is the union of the constraint sets of the edge's two members called its component constraint sets (for an edge in the first tree, the component constraint sets are empty). The constraint associated with each edge is now the symmetric difference of its component constraint sets conditional on the intersection of its constraint sets. One can show that for a regular vine, the symmetric difference of the component constraint sets is always a doubleton and that each pair of variables occurs exactly once as constrained variables. In other words, all constraints are bivariate or conditional bivariate.
The degree of a node is the number of edges attaching to it. The simplest regular vines have the simplest degree structure; the D-Vine assigns every node degree 1 or 2, the C-Vine assigns one node in each tree the maximal degree. For large vines, it is clearer to draw each tree separately.
The number of regular vines on n variables grows rapidly in n: there are 2^(n−3) ways of extending a regular vine with one additional variable, and there are n(n − 1)(n − 2)! · 2^((n − 2)(n − 3)/2) / 2 labeled regular vines on n variables.
The constraints on a regular vine may be associated with partial correlations or with conditional bivariate copula. In the former case, we speak of a partial correlation vine, and in the latter case of a vine copula.
Partial correlation vines
Bedford and Cooke show that any assignment of values in the open interval (−1, 1) to the edges in any partial correlation vine is consistent, the assignments are algebraically independent, and there is a one-to-one relation between all such assignments and the set of correlation matrices. In other words, partial correlation vines provide an algebraically independent parametrization of the set of correlation matrices, whose terms have an intuitive interpretation. Moreover, the determinant of the correlation matrix is the product over the edges of (1 − ρ_{ik;D(ik)}^2), where ρ_{ik;D(ik)} is the partial correlation assigned to the edge with conditioned variables i,k and conditioning variables D(ik). A similar decomposition characterizes the mutual information, which generalizes the determinant of the correlation matrix. These features have been used in constrained sampling of correlation matrices, building non-parametric continuous Bayesian networks and addressing the problem of extending partially specified matrices to positive definite matrices.
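A small numerical illustration of the determinant identity (not from the source; the partial-correlation values are arbitrary): the Python sketch below builds the correlation matrix of a three-variable D-vine from its edge values and checks the product formula against a direct determinant computation.

import numpy as np

# D-vine on three variables: tree-1 edges carry r12 and r23,
# the tree-2 edge carries the partial correlation p13_2.
r12, r23, p13_2 = 0.6, -0.4, 0.5

# Recover the ordinary correlation r13 from the partial correlation.
r13 = p13_2 * np.sqrt((1 - r12**2) * (1 - r23**2)) + r12 * r23

R = np.array([[1.0, r12, r13],
              [r12, 1.0, r23],
              [r13, r23, 1.0]])

det_direct = np.linalg.det(R)
det_vine = (1 - r12**2) * (1 - r23**2) * (1 - p13_2**2)

print(det_direct, det_vine)     # the two values agree
assert np.isclose(det_direct, det_vine)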
Vine copulas or pair-copula construction
Under suitable differentiability conditions, any multivariate density f1...n on n variables, with univariate densities f1,...,fn, may be represented in closed form as a product of univariate densities and (conditional) copula densities on any R-vine
where the edges, together with their conditioning sets, are taken from the edge set of any regular vine. The conditional copula densities in this representation depend on the cumulative conditional distribution functions of the conditioned variables and, potentially, on the values of the conditioning variables. When the conditional copulas do not depend on the values of the conditioning variables, one speaks of the simplifying assumption of constant conditional copulas. Though most applications invoke this assumption, exploring the modelling freedom gained by discharging this assumption has begun. When bivariate Gaussian copulas are assigned to edges of a vine, then the resulting multivariate density is the Gaussian density parametrized by a partial correlation vine rather than by a correlation matrix.
The vine pair-copula construction, based on the sequential mixing of conditional distributions, has been adapted to discrete variables and mixed discrete/continuous responses. Factor copulas, where latent variables have been added to the vine, have also been proposed.
Vine researchers have developed algorithms for maximum likelihood estimation and simulation of vine copulas, finding truncated vines that summarize the dependence in the data, enumerating through vines, etc. Chapter 6 of Dependence Modeling with Copulas summarizes these algorithms in pseudocode.
Truncated vine copulas (introduced by E.C Brechmann in his Ph.D. thesis) are vine copulas that have independence copulas in the last trees. This way truncated vine copulas encode in their structure conditional independences. Truncated vines are very useful because they contain much fewer parameters than regular vines. An important question is what should be the tree at the highest level.
An interesting relationship between truncated vines and cherry tree copulas has been presented in the literature. Cherry tree graph representations were introduced as an alternative to the usual graphical representations of vine copulas; moreover, the conditional independencies encoded by the last tree (the first tree after truncation) are also highlighted there.
The cherry tree sequence representation of the vine copulas gives a new way to look at truncated copulas, based on the conditional independence which is caused by truncation.
Parameter estimation
For parametric vine copulas, with a bivariate copula family on each edge of a vine, algorithms and software are available for maximum likelihood estimation of copula parameters, assuming data have been transformed to uniform scores after fitting univariate margins. There are also algorithms available for choosing good truncated regular vines where edges of high-level trees are taken as conditional independence. These algorithms assign variables with strong dependence or strong conditional dependence to low-order trees so that higher-order trees have weak conditional dependence or conditional independence. Hence parsimonious truncated vines are obtained for a large number of variables. Software with a user interface in R is available.
Sampling and conditionalizing
A sampling order for variables is a sequence of conditional densities in which the first density is unconditional, and the densities for other variables are conditioned on the preceding variables in the ordering. A sampling order is implied by a regular-vine representation of the density if each conditional density can be written as a product of copula densities in the vine and one dimensional margins.
An implied sampling order is generated by a nested sequence of subvines where each sub-vine in the sequence contains one new variable not present in the preceding sub-vine. For any regular vine on variables there are implied sampling orders. Implied sampling orders are a small
subset of all orders but they greatly facilitate sampling. Conditionalizing a regular vine on values of an arbitrary subset of variables is a complex operation. However, conditionalizing on an initial sequence of an implied sampling order is trivial, one simply plugs in the initial conditional values and proceeds with the sampling. A general theory of conditionalization does not exist at present.
Further reading
References
External links
Roger M. Cooke
Roger M. Cooke at TU Delft
- Software for estimating and sampling regular vines, literature and event notices
Actuarial science
Independence (probability theory)
Systems of probability distributions | Vine copula | [
"Mathematics"
] | 2,290 | [
"Applied mathematics",
"Actuarial science"
] |
50,001,505 | https://en.wikipedia.org/wiki/Photosynthetica | Photosynthetica is a quarterly peer-reviewed scientific journal covering research on photosynthesis. It was established in 1967 and is published by the Institute of Experimental Botany of the Academy of Sciences of the Czech Republic. The editor-in-chief is Helena Synkova (Academy of Sciences of the Czech Republic). Up till 2019, the journal was published by Springer Science+Business Media.
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2019 impact factor of 2.562.
References
External links
Biochemistry journals
Botany journals
English-language journals
Quarterly journals
Springer Science+Business Media academic journals
Academic journals established in 1967
Czech Academy of Sciences
1967 establishments in Czechoslovakia | Photosynthetica | [
"Chemistry"
] | 147 | [
"Biochemistry journals",
"Biochemistry literature"
] |
32,517,295 | https://en.wikipedia.org/wiki/Terumo%20Penpol | Terumo Penpol Private Limited is a subsidiary of Terumo Corporation, and is India's largest blood bag manufacturer. It is also the largest producers of blood bags in Asia, outside Japan.
History
Peninsula Polymers Limited (Penpol Ltd.) was incorporated in 1983 by C. Balagopal, a former IAS (1977 Batch) officer from the Manipur Cadre. It was established as a joint venture with Sree Chitra Thirunal Institute of Medical Sciences and Technology (known then as the Chitra Medical Centre) and soon became the first company in India to produce blood bags using indigenous technology. In 1989, Penpol started exporting and followed it up by setting up an R&D centre.
In 1999, Tokyo-based Terumo Corporation signed a contract to acquire a 74% share of Peninsula Polymers Limited, and the new joint venture was renamed as Terumo Penpol Limited (TPL). The stake of financial institutions and other investors were bought over by Terumo, leaving only the promoters and itself as shareholders.
Operations
Terumo Penpol has its headquarters in Thiruvananthapuram, Kerala and employs 1200 people. Terumo Penpol Blood Bags are sold in over 64 countries across the world and its medical equipment division has commissioned more than 25000 installations. TPL entered its 25th year of operations in 2010, with an enhanced production capacity of 22 million blood bags per annum.
TPL has been winning the top exporter award or the second best exporter award for medical disposables every year from 1994 onwards.
References
External links
Terumo Penpol Ltd.
Terumo Corporation Japan
https://cb.hbsp.harvard.edu/cbmp/product/NA0294-PDF-ENG Harvard case study on PENPOL
Companies established in 1987
Biomedical engineering | Terumo Penpol | [
"Engineering",
"Biology"
] | 382 | [
"Biological engineering",
"Medical technology",
"Biomedical engineering"
] |
32,518,704 | https://en.wikipedia.org/wiki/Braided%20vector%20space | In mathematics, a braided vector space is a vector space together with an additional structure map symbolizing interchanging of two vector tensor copies:
τ : V ⊗ V → V ⊗ V, such that the Yang–Baxter equation is fulfilled. Hence, drawing τ in tensor diagrams as an overcrossing, the corresponding composed morphism is unchanged when a Reidemeister move is applied to the tensor diagram, and thus braided vector spaces give representations of the braid group.
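Written out explicitly (a standard formulation added here for concreteness, not quoted from the article), the Yang–Baxter equation for the braiding on V ⊗ V ⊗ V reads, in LaTeX:

% Braid relation satisfied by the braiding \tau on the triple tensor product.
(\tau \otimes \mathrm{id}) \circ (\mathrm{id} \otimes \tau) \circ (\tau \otimes \mathrm{id})
  = (\mathrm{id} \otimes \tau) \circ (\tau \otimes \mathrm{id}) \circ (\mathrm{id} \otimes \tau)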
As a first example, every vector space is braided via the trivial braiding (simply flipping the two tensor factors). A superspace has a braiding with a negative sign when braiding two odd vectors. More generally, a diagonal braiding means that for a basis x_1, ..., x_n we have τ(x_i ⊗ x_j) = q_{ij} x_j ⊗ x_i for scalars q_{ij}.
A good source for braided vector spaces are entire braided monoidal categories with braidings between any objects; most importantly, the modules over quasitriangular Hopf algebras and Yetter–Drinfeld modules over finite groups (such as above).
If V additionally possesses an algebra structure inside the braided category (a "braided algebra") with multiplication μ, one has a braided commutator [x, y]_τ := μ(x ⊗ y) − μ(τ(x ⊗ y)) (e.g. for a superspace this is the anticommutator on odd elements).
Examples of such braided algebras (and even Hopf algebras) are the Nichols algebras, that are by definition generated by a given braided vectorspace. They appear as quantum Borel part of quantum groups and often (e.g. when finite or over an abelian group) possess an arithmetic root system, multiple Dynkin diagrams and a PBW-basis made up of braided commutators just like the ones in semisimple Lie algebras.
References
Hopf algebras
Quantum groups | Braided vector space | [
"Mathematics"
] | 330 | [
"Algebra stubs",
"Algebra"
] |
32,519,980 | https://en.wikipedia.org/wiki/Total%20synthesis%20of%20morphine%20and%20related%20alkaloids | Synthesis of morphine-like alkaloids in chemistry describes the total synthesis of the natural morphinan class of alkaloids that includes codeine, morphine, oripavine, and thebaine and the closely related semisynthetic analogs methorphan, buprenorphine, hydromorphone, hydrocodone, isocodeine, naltrexone, nalbuphine, oxymorphone, oxycodone, and naloxone.
The structure of morphine is not particularly complex; however, the electrostatic polarization of adjacent bonded atoms does not alternate uniformly throughout the structure. This "dissonant connectivity" makes bond formation more difficult and therefore significantly complicates any synthetic strategy that is applied to this family of molecules.
The first morphine total synthesis, devised by Marshall D. Gates, Jr. in 1952 remains a widely used example of total synthesis. This synthesis took a total of 31 steps and proceeded in 0.06% overall yield. The hydrocodone synthesis of Kenner C. Rice is one of the most efficient and proceeds in 30% overall yield in 14 steps. At 9 steps, the Barriault route is the shortest to date, but contains a number of low-yielding steps and is racemic.
Several other syntheses were reported, notably by the research groups of Evans, Fuchs, Parker, Overman, Mulzer-Trauner, White, Taber, Trost, Fukuyama, Guillou, Stork, Magnus, Smith, and Barriault.
Gates synthesis
Gates' total synthesis of morphine provided a proof of the structure of morphine proposed by Robinson in 1925. This synthesis of morphine features one of the first examples of the Diels-Alder reaction in the context of total synthesis.
Rice synthesis
The Rice synthesis follows a biomimetic route and is the most efficient reported to date. A key step is the Grewe cyclization that is analogous to the cyclization of reticuline that occurs in morphine biosynthesis.
References
External links
Morphine Total Syntheses @ SynArchive.com
Natural opium alkaloids
Total synthesis | Total synthesis of morphine and related alkaloids | [
"Chemistry"
] | 455 | [
"Total synthesis",
"Chemical synthesis"
] |
43,767,235 | https://en.wikipedia.org/wiki/Asymmetric%20addition%20of%20dialkylzinc%20compounds%20to%20aldehydes | In the asymmetric addition of dialkylzinc compounds to aldehydes, dialkylzinc reagents are used to perform asymmetric additions to aldehydes, generating substituted alcohols as products (see Barbier reaction). Chiral alcohols are prevalent in many natural products, drugs, and other important organic molecules. Dimethylzinc is often used with an asymmetric amino alcohol, amino thiol, or other ligand to effect enantioselective additions to aldehydes and ketones. One of the first examples of this process, reported by Noyori and colleagues, features the use of the amino alcohol ligand (−)-3-exo-(dimethylamino)isoborneol along with dimethylzinc to add a methyl group asymmetrically to benzaldehyde. Many ligands have been developed for binding zinc during addition reactions. TADDOLs (tetraaryl-1,3-dioxolane-4,5-dimethanols), which are derived from chiral tartaric acid, are a class of diol ligands often used to bind titanium, but have been adopted for zinc addition chemistry. These ligands require relatively low catalyst loadings, and can achieve up to 99% ee in dialkylzinc additions to aromatic and aliphatic aldehydes. Martens and colleagues have used azetidine alcohols as ligands for asymmetric zinc additions. The researchers found that when paired with catalytic n-butyllithium, diethylzinc can add to aromatic aldehydes with ee in the range of 94–100%.
Many studies have shown that in zinc addition reactions, the enantioselectivity is not linearly correlated with catalyst enantiomeric purity. Researchers propose that this is because the kinetics of the reaction are controlled by the relative concentrations of hetero and homodimeric catalytic complexes; that is, the system displays autocatalysis because the product alcohol itself acts as an asymmetric ligand on zinc.
References
Organic reactions | Asymmetric addition of dialkylzinc compounds to aldehydes | [
"Chemistry"
] | 434 | [
"Organic reactions"
] |
43,767,752 | https://en.wikipedia.org/wiki/Home%20medicines%20review%20%28Australia%29 | The Domiciliary Medication Management Review (DMMR), also known as a Home Medicines Review (HMR), is an Australian scheme for patients residing in a community setting. A Home Medicines Review service includes many steps.
Introduced in 2001 into the Medicare Benefits Schedule (MBS) as item 900, it is aimed at preventative care.
Following an assessment of the patient's needs, a medication management plan is made.
Consultant Pharmacist
References
Clinical pharmacology
Pharmacy in Australia
2001 introductions | Home medicines review (Australia) | [
"Chemistry"
] | 113 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs",
"Clinical pharmacology"
] |
36,704,617 | https://en.wikipedia.org/wiki/Hardy%E2%80%93Littlewood%20inequality | In mathematical analysis, the Hardy–Littlewood inequality, named after G. H. Hardy and John Edensor Littlewood, states that if f and g are nonnegative measurable real functions vanishing at infinity that are defined on n-dimensional Euclidean space R^n, then
∫_{R^n} f(x) g(x) dx ≤ ∫_{R^n} f*(x) g*(x) dx,
where f* and g* are the symmetric decreasing rearrangements of f and g, respectively.
The decreasing rearrangement f* of f is defined via the property that for all t > 0 the two super-level sets
E_f(t) = { x : f(x) > t }   and   E_{f*}(t) = { x : f*(x) > t }
have the same volume (n-dimensional Lebesgue measure), and E_{f*}(t) is a ball in R^n
centered at x = 0, i.e. it has maximal symmetry.
Proof
The layer cake representation allows us to write the general functions f and g in the form
f(x) = ∫_0^∞ χ_{f(x) > r} dr   and   g(x) = ∫_0^∞ χ_{g(x) > s} ds,
where χ_{f(x) > r} equals 1 for r < f(x) and 0 otherwise.
Analogously, χ_{g(x) > s} equals 1 for s < g(x) and 0 otherwise.
Now the proof can be obtained by first using Fubini's theorem to interchange the order of integration. When integrating with respect to x ∈ R^n, the conditions f(x) > r and g(x) > s give the indicator functions of the superlevel sets E_f(r) and E_g(s) as introduced above:
∫_{R^n} f(x) g(x) dx = ∫_0^∞ ∫_0^∞ vol( E_f(r) ∩ E_g(s) ) dr ds.
Denoting by vol the n-dimensional Lebesgue measure, we continue by estimating the volume of the intersection by the minimum of the volumes of the two sets. Then, we can use the equality of the volumes of the superlevel sets for the rearrangements:
∫_0^∞ ∫_0^∞ vol( E_f(r) ∩ E_g(s) ) dr ds ≤ ∫_0^∞ ∫_0^∞ min{ vol(E_f(r)), vol(E_g(s)) } dr ds = ∫_0^∞ ∫_0^∞ min{ vol(E_{f*}(r)), vol(E_{g*}(s)) } dr ds.
Now, we use that the superlevel sets E_{f*}(r) and E_{g*}(s) are balls in R^n
centered at x = 0, which implies that their intersection is exactly the smaller one of the two balls:
∫_0^∞ ∫_0^∞ min{ vol(E_{f*}(r)), vol(E_{g*}(s)) } dr ds = ∫_0^∞ ∫_0^∞ vol( E_{f*}(r) ∩ E_{g*}(s) ) dr ds = ∫_{R^n} f*(x) g*(x) dx.
The last identity follows by reversing the initial five steps that even work for general functions. This finishes the proof.
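As a sanity check (not part of the original article), the following Python sketch discretises two arbitrarily chosen sample functions on a grid, builds their symmetric decreasing rearrangements by sorting, and verifies the inequality numerically:

import numpy as np

x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]

f = np.exp(-(x - 1.3) ** 2)           # nonnegative, vanishing at infinity
g = 1.0 / (1.0 + (x + 0.7) ** 4)      # nonnegative, vanishing at infinity

def symmetric_decreasing_rearrangement(values, grid):
    # Place the largest function values at the grid points closest to the origin.
    order_by_centre = np.argsort(np.abs(grid), kind="stable")
    rearranged = np.empty_like(values)
    rearranged[order_by_centre] = np.sort(values)[::-1]
    return rearranged

f_star = symmetric_decreasing_rearrangement(f, x)
g_star = symmetric_decreasing_rearrangement(g, x)

lhs = np.sum(f * g) * dx
rhs = np.sum(f_star * g_star) * dx
print(f"integral of f*g           : {lhs:.6f}")
print(f"integral of rearrangements: {rhs:.6f}  (should be at least the value above)")
assert rhs >= lhs - 1e-12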
An application
Let the random variable X be normally distributed with mean μ and finite non-zero variance σ². Then, using the Hardy–Littlewood inequality, a bound can be proved for the reciprocal moment of the absolute value of X.
The technique that is used to obtain the above property of the Normal distribution can be utilized for other unimodal distributions.
See also
Rearrangement inequality
Chebyshev's sum inequality
Lorentz space
References
Inequalities
Articles containing proofs | Hardy–Littlewood inequality | [
"Mathematics"
] | 393 | [
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Articles containing proofs",
"Mathematical theorems"
] |
36,707,156 | https://en.wikipedia.org/wiki/Skew%20partition | In graph theory, a skew partition of a graph is a partition of its vertices into two subsets, such that the induced subgraph formed by one of the two subsets is disconnected and the induced subgraph formed by the other subset is the complement of a disconnected graph. Skew partitions play an important role in the theory of perfect graphs.
Definition
A skew partition of a graph G is a partition of its vertices into two subsets A and B for which the induced subgraph G[A] is disconnected and the induced subgraph G[B] is the complement of a disconnected graph (co-disconnected).
Equivalently, a skew partition of a graph G may be described by a partition of the vertices of G into four subsets X, Y, Z, and W, such that there are no edges from X to Y and such that all possible edges from Z to W exist; for such a partition, the induced subgraphs G[X ∪ Y] and G[Z ∪ W] are disconnected and co-disconnected respectively, so we may take A = X ∪ Y and B = Z ∪ W.
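As an illustration (not from the source), the Python sketch below defines a hypothetical helper is_skew_partition that uses the networkx library to test the two defining conditions on small examples:

import networkx as nx

def is_skew_partition(G, A, B):
    # (A, B) is a skew partition of G when the subgraph induced by A is disconnected
    # and the complement of the subgraph induced by B is disconnected.
    A, B = set(A), set(B)
    if not A or not B or (A & B) or (A | B) != set(G):
        return False
    induced_A = G.subgraph(A)
    co_induced_B = nx.complement(G.subgraph(B))
    return (not nx.is_connected(induced_A)) and (not nx.is_connected(co_induced_B))

# A path on five vertices 0-1-2-3-4: taking the interior edge {1, 2} as the
# co-disconnected side and the remaining vertices as the disconnected side works.
P5 = nx.path_graph(5)
print(is_skew_partition(P5, A={0, 3, 4}, B={1, 2}))   # True

# A cycle has no skew partition; for instance this split of C5 fails.
C5 = nx.cycle_graph(5)
print(is_skew_partition(C5, A={0, 2}, B={1, 3, 4}))   # False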
Examples
Every path graph with four or more vertices has a skew partition, in which the co-disconnected set B is one of the interior edges of the path and the disconnected set A consists of the vertices on either side of this edge. However, it is not possible for a cycle graph of any length to have a skew partition: no matter which subsets of the cycle are chosen as the set A, the complementary set B will have the same number of connected components as A, so it is not possible for G[A] to be disconnected and for G[B] to be co-disconnected.
If a graph has a skew partition, so does its complement. For instance, the complements of path graphs have skew partitions, and the complements of cycle graphs do not.
Special cases
If a graph is itself disconnected, then with only three simple exceptions (an empty graph, a graph with one edge and three vertices, or a four-vertex perfect matching) it has a skew partition, in which the co-disconnected side of the partition consists of the endpoints of a single edge and the disconnected side consists of all other vertices. For the same reason, if the complement of a graph is disconnected, then with a corresponding set of three exceptions it must have a skew partition.
If a graph has a clique separator (a clique whose removal would disconnect the remaining vertices) with more than one vertex, then the partition into the clique and the remaining vertices forms a skew partition. A clique cutset with one vertex is an articulation point; if such a vertex exists, then with a small number of simple exceptions, there is a skew partition in which the co-disconnected side consists of this vertex and one of its neighbors.
A star cutset in a graph is a vertex separator in which one of the separator vertices is adjacent to all the others. Every clique separator is a star cutset. Necessarily, a graph with a star cutset (with more than one vertex) has a skew partition in which the co-disconnected subgraph consists of the vertices in the star cutset and the disconnected subgraph consists of all the remaining vertices.
A module (or homogeneous set) is a nontrivial subset M of the vertices of a graph G such that, for every vertex v that is not in M, either v is adjacent to all vertices in M or to none of them. If a graph G has a module M and, outside it, there exist both vertices adjacent to all vertices in M and other vertices adjacent to none of them, then G has a star cutset consisting of one vertex in the module together with its neighbors outside the module. On the other hand, if there exists a module for which one of these two subsets is empty, then the graph is disconnected or co-disconnected and again (with the three simple exceptions) it has a skew partition.
History
Skew partitions were introduced by Chvátal, in connection with perfect graphs. Chvátal proved that a minimally imperfect graph could not have a star cutset. Trivially, disconnected graphs cannot be minimally imperfect, and it was also known that graphs with clique separators or modules could not be minimally imperfect. Claude Berge had conjectured in the early 1960s that perfect graphs were the same as the Berge graphs, graphs with no induced odd cycle (of length five or more) or its complement, and (because cycles and their complements do not have skew partitions) no minimal non-Berge graph can have a skew partition. Motivated by these results, Chvátal conjectured that no minimally imperfect graph could have a skew partition. Several authors proved special cases of this conjecture, but it remained unsolved for many years.
Skew partitions gained significance when they were used by Chudnovsky, Robertson, Seymour, and Thomas to prove the strong perfect graph theorem that the Berge graphs are indeed the same as the perfect graphs. Chudnovsky et al. were unable to prove Chvátal's conjecture directly, but instead proved a weaker result, that a minimal counterexample to the theorem (if it existed) could not have a balanced skew partition, a skew partition in which every induced path with endpoints on one side of the partition and interior vertices on the other side has even length. This result formed a key lemma in their proof, and the full version of Chvátal's conjecture follows from their theorem.
In structural graph theory
Skew partitions form one of the key components of a structural decomposition of perfect graphs used by Chudnovsky et al. as part of their proof of the strong perfect graph theorem. Chudnovsky et al. showed that every perfect graph either belongs to one of five basic classes of perfect graphs, or it has one of four types of decomposition into simpler graphs, one of which is a skew partition.
A simpler example of a structural decomposition using skew partitions concerns comparability graphs: every comparability graph is complete, is bipartite, or has a skew partition. For, if every element of a partially ordered set is either a minimal element or a maximal element, then the corresponding comparability graph is bipartite. If the ordering is a total order, then the corresponding comparability graph is complete. If neither of these two cases arise, but every element that is neither minimal nor maximal is comparable to all other elements, then either the partition into the minimal and non-minimal elements (if there is more than one minimal element) or the partition into the maximal and non-maximal elements (if there is more than one maximal element) forms a star cutset. And in the remaining case, there exists an element x of the partial order that is not minimal, not maximal, and not comparable with all other elements; in this case, there is a skew partition (the complement of a star cutset) in which the co-disconnected side consists of the elements comparable to x (not including x itself) and the disconnected side consists of the remaining elements.
The chordal graphs have an even simpler decomposition of a similar type: they are either complete or they have a clique separator.
It was shown, analogously, that every connected and co-connected weakly chordal graph (a graph in which neither an induced cycle of length greater than four nor the complement of such a cycle occurs) with four or more vertices has a star cutset or its complement, from which it follows by Chvátal's lemma that every such graph is perfect.
Algorithms and complexity
A skew partition of a given graph, if it exists, may be found in polynomial time. The first algorithm for this problem had a polynomial but impractically large running time as a function of the number of vertices of the input graph; subsequent work improved the running time substantially, to a bound expressed in terms of the numbers of vertices and edges of the input graph.
It is NP-complete to test whether a graph contains a skew partition in which one of the parts of the co-disconnected side is independent.
Testing whether a given graph contains a balanced skew partition is also NP-complete in arbitrary graphs, but may be solved in polynomial time in perfect graphs.
Notes
References
Graph theory objects
Perfect graphs | Skew partition | [
"Mathematics"
] | 1,641 | [
"Mathematical relations",
"Graph theory",
"Graph theory objects"
] |
36,709,000 | https://en.wikipedia.org/wiki/Playing%20God%20%282012%20film%29 | Playing God was a 2012 BBC documentary in the Horizon series, hosted by Adam Rutherford. The documentary discusses synthetic biology, the potential of science "breaking down nature into spare parts" and then rebuilding it back up as we wish.
Summary
Adam Rutherford has been studying the emerging field of synthetic biology for the past 10 years and believes that this sort of genetic tinkering is the most effective way to pass along traits between different organisms, given that this result cannot be easily obtained through typical mating of certain species. In Playing God, he discusses the creation of the spider-goat by American researchers. It is a part goat, part spider hybrid, whose genetic code has been altered to be able to produce the same proteins found in spider silk. The milk produced by these goats can be used to create an artificial spider's web. Typically this silk is produced in small quantities by spiders but with the genetic modification of the goats, it may be mass produced. The goats are easier to handle in large groups than spiders and thus this could potentially increase the efficiency of silk production and consumer cost.
This technology is also being used to make bio-diesel to power cars. The biotech company Amyris, in Emeryville, California, has applied synthetic biology to yeast. Instead of producing alcohol through the fermentation of sugar, this yeast produces diesel. The amount of diesel that is produced by this method is substantially greater than that of the conventional method and can greatly increase the rate of production. This diesel has already been used in cars in Brazil.
Other researchers are looking at how we might, one day, control human emotions by sending 'biological machines' into our brains. MIT professor Ed Boyden leads a group in synthetic neuroscience. This team utilizes the genetic code for molecules that convert light into electrical energy and inserts it in a virus or cassette that may be installed into various organisms. Because the brain works on electrical impulses, this technology may be applied to neural circuits, in order to treat various neurological disorders. With this technology, the direct control of various light sensitive and electrical functions in the brain would be enabled.
MIT professor, Ron Weiss, has worked extensively in this field and has been leading a team in incorporating the concepts of computer programming and engineering to biology. "I decided to take what we understand in computing and apply that to programming biology. To me, that's really the essence of synthetic biology." Weiss' team published a study that discusses how they combined computer circuit elements with BioBricks parts. The way this works is that the computer circuit is able to distinguish between a healthy and mutated cell by utilizing a set of five criteria. Any cells that satisfy these conditions are deleted, similar to the way code could eliminate certain functions in computer programming. The ability to target individual cells could make this an alternative to treatments that may have harmful effects on healthy cells, such as chemotherapy.
Scientific Concepts
The area of synthetic biology is extensively discussed in Playing God. This is considered to be a relatively new field that incorporates elements of engineering into biology. It involves the blending and manipulation of natural molecules and unnatural molecules to obtain a specific outcome. Through this mean, scientists may use biology as a base for the development of synthetic life and genetic coding. This area of study has been noted with the potential to increase the efficiency of the production of certain biomolecules and naturally occurring materials, such as enzymes.
Reception
Playing God and presenter, Adam Rutherford, have been subject to criticism following the release of this documentary. As plainly stated by Andrew Marszal of The Telegraph, "Rutherford overcooked it, particularly in his opening gambit – the revelation that scientists have developed a new hybrid breed of 'spider-goats'... we were left a little disappointed as Freckles, Pudding and Sweetie chomped and grazed around the farm looking for all the world like, well, goats." The portrayal of the applications of synthetic biology in this documentary appeared to be exaggerated and their importance not adequately stressed. The Alliance for Natural Health International (ANHI) wrote a reaction piece in which it expressed concern about the overwhelmingly positive tone the documentary as a whole had towards the development of this technology. "The programme was almost wholly positive about the new technology, even though it has many serious potential consequences for humans and the planet... Dr Rutherford reacts like a kid in a sweetshop at each new technological marvel. Not once does he question whether creating spider-goats is necessary, or whether the 'trickle-down' of genetic engineering into the community is a step too far."
References
BBC television documentaries about science
Horizon (British TV series)
Synthetic biology
Works about genetics | Playing God (2012 film) | [
"Engineering",
"Biology"
] | 939 | [
"Synthetic biology",
"Molecular genetics",
"Biological engineering",
"Bioinformatics"
] |
36,711,208 | https://en.wikipedia.org/wiki/Vehicle%20fire%20suppression%20system | A vehicle fire suppression system is a pre-engineered fire suppression system safety accessory permanently mounted on any type of vehicle. These systems are especially prevalent in the mobile heavy equipment segment and are designed to protect equipment assets from fire damage and related losses. Vehicle fire suppression systems have become a vital safety feature to several industries and are most commonly used in the mining, forestry, landfill, and mass transit industries.
Parts of a typical system
A typical vehicle fire suppression system has five key components:
Fire-detecting linear wire or spot sensors,
A control panel to detect a fire and alert the operator,
Actuators that discharge automatically or manually to activate the system,
Tanks filled with fire-fighting agent, and
A distribution network of tubes, hoses, and nozzles.
To mitigate a fire as soon as it happens, fire-detecting linear wire or sensors are strategically placed around the machine. When the high heat of a fire penetrates the linear wire or is detected by the sensors, a signal is sent to the control panel in the vehicle cab.
The control panel alarms and alerts the driver to quickly evacuate the machine. At the same time, the panel automatically initiates the actuator, which discharges the fire-fighting agent inside the onboard tanks and sends it through a distribution network composed of stainless steel tubing and/or hydraulic hosing. An actuator can also activate the system when pressed manually by the operator.
At the end of the distribution network, the agent is dispersed into the equipment’s protected areas via nozzles aimed at the machine's high-hazard components, like turbochargers, starters, fuel filters, batteries, alternators, and transmissions to extinguish the fire quickly and efficiently.
References
Fire suppression | Vehicle fire suppression system | [
"Physics"
] | 358 | [
"Physical systems",
"Transport",
"Transport stubs"
] |
36,712,903 | https://en.wikipedia.org/wiki/Lactic%20acid%20O-carboxyanhydride | Lactic acid O-carboxyanhydride (lac-OCA) is an organic compound. It is used as a monomer equivalent to lactic acid or lactide in the preparation of poly(lactic acid). When this monomer undergoes ring-opening polymerization, one equivalent of carbon dioxide gas is released for every lactic acid unit incorporated into the polymer.
This compound is prepared by treatment of lactic acid or its salts with phosgene or one of its equivalents, e.g. diphosgene.
References
Carbonate esters
Monomers | Lactic acid O-carboxyanhydride | [
"Chemistry",
"Materials_science"
] | 121 | [
"Monomers",
"Polymer chemistry"
] |
34,137,696 | https://en.wikipedia.org/wiki/Molecular%20breeding | Molecular breeding is the application of molecular biology tools, often in plant breeding and animal breeding. In the broad sense, molecular breeding can be defined as the use of genetic manipulation performed at the level of DNA to improve traits of interest in plants and animals, and it may also include genetic engineering or gene manipulation, molecular marker-assisted selection, and genomic selection. More often, however, molecular breeding implies molecular marker-assisted breeding (MAB) and is defined as the application of molecular biotechnologies, specifically molecular markers, in combination with linkage maps and genomics, to alter and improve plant or animal traits on the basis of genotypic assays.
The areas of molecular breeding include:
QTL mapping or gene discovery
Marker assisted selection and genomic selection
Genetic engineering
Genetic transformation
Constituent methods
Marker assisted breeding
Methods in marker assisted breeding include:
Genotyping and creating molecular maps - genomics
The commonly used markers include simple sequence repeats (microsatellites) and single nucleotide polymorphisms (SNPs). The process of identifying plant genotypes is known as genotyping.
Another area that is developing is genotyping by sequencing.
Phenotyping - phenomics
To identify genes associated with traits, it is important to measure the trait value - known as phenotype. The "omics" for measurement of phenotypes is called phenomics. The phenotype can be indicative of the measurement of the trait itself or an indirectly related or correlated trait.
QTL mapping or association mapping
Genes (Quantitative trait loci (abbreviated as QTL) or quantitative trait genes or minor genes or major genes) involved in controlling the trait of interest are identified. The process is known as mapping. Mapping of such genes can be done using molecular markers. QTL mapping can involve a single large family, unrelated individuals or multiple families (see: Family based QTL mapping). The basic idea is to identify genes or markers associated with genes that correlate to a phenotypic measurement and that can be used in marker assisted breeding / selection.
Marker assisted selection or genetic selection
Once genes or markers are identified, they can be used for genotyping and selection decisions can be made.
Marker-assisted backcrossing (MABC)
Backcrossing is crossing an F1 with one of its parents in order to transfer a limited number of loci (e.g. transgene, disease resistance loci, etc.) from one genetic background to another. Usually the recipient of such genes is a cultivar that is already well performing - except for the gene that is to be transferred. The aim is to retain the genetic background of the recipient genotype, which is achieved by 4-6 rounds of repeated backcrosses while selecting for the gene of interest.
Genomic selection
Genomic selection is a newer alternative to traditional marker-assisted selection, in which selection is based on only a few markers. Rather than seeking to identify individual loci significantly associated with a trait, genomic selection uses all marker data as predictors of performance and consequently delivers more accurate predictions. Selection can be based on genomic selection predictions, potentially leading to more rapid and lower cost gains from breeding. Genomic prediction combines marker data with phenotypic and pedigree data (when available) in an attempt to increase the accuracy of the prediction of breeding and genotypic values.
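A minimal sketch of this idea is shown below: a ridge-regression model (in the spirit of rrBLUP) is trained on all markers at once and then used to rank unphenotyped selection candidates. The simulated marker matrix, sample sizes, and ridge penalty are illustrative assumptions, not values from the article.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_candidates, n_markers = 200, 50, 1000

# SNP genotypes coded 0/1/2; phenotype = many small marker effects + noise.
X = rng.integers(0, 3, size=(n_train + n_candidates, n_markers)).astype(float)
true_effects = rng.normal(0.0, 0.05, n_markers)
y = X @ true_effects + rng.normal(0.0, 1.0, n_train + n_candidates)

model = Ridge(alpha=100.0)             # shrinkage applied jointly to all markers
model.fit(X[:n_train], y[:n_train])    # train on the phenotyped individuals

gebv = model.predict(X[n_train:])      # genomic estimated breeding values for the candidates
selected = np.argsort(gebv)[::-1][:5]  # keep the top-ranked candidates
print("Top candidates by predicted breeding value:", selected)
```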
Genetic transformation or Genetic engineering
Genetic transformation makes possible the horizontal transfer of genes from one organism to another. Thus plants can receive genes from humans, algae, or any other organism. This opens up very broad opportunities in breeding crop plants.
By organism
Molecular breeding resources (including multiomics data) are available for:
Some of the millets
Wheat
References
Further reading
Molecular biology
Breeding | Molecular breeding | [
"Chemistry",
"Biology"
] | 759 | [
"Behavior",
"Reproduction",
"Breeding",
"Molecular biology",
"Biochemistry"
] |
34,138,058 | https://en.wikipedia.org/wiki/RefDB%20%28chemistry%29 | The Re-referenced Protein Chemical shift Database (RefDB) is an NMR spectroscopy database of carefully corrected or re-referenced chemical shifts, derived from the BioMagResBank (BMRB). The database was assembled by using a structure-based chemical shift calculation program (called SHIFTX) to calculate expected protein 1H, 13C and 15N chemical shifts from X-ray or NMR coordinate data of previously assigned proteins reported in the BMRB. The comparison is automatically performed by a program called SHIFTCOR. The RefDB database currently provides reference-corrected chemical shift data on more than 2000 assigned peptides and proteins. Data from the database indicates that nearly 25% of BMRB entries with 13C protein assignments and 27% of BMRB entries with 15N protein assignments require significant chemical shift reference readjustments. Additionally, nearly 40% of protein entries deposited in the BioMagResBank appear to have at least one assignment error. Users may download, search or browse the database through a number of methods available through the RefDB website. RefDB provides a standard chemical shift resource for biomolecular NMR spectroscopists wishing to derive or compute chemical shift trends in peptides and proteins.
Scope and Access
All data in RefDB is non-proprietary or is derived from a non-proprietary source. It is freely accessible and available to anyone. In addition, nearly every data item is fully traceable and explicitly referenced to the original source. RefDB data is available through a public web interface and downloads.
Features
All chemical shifts in RefDB have been computationally re-referenced to DSS (a common NMR chemical shift standard). RefDB is a continuously updated resource that uses web-bots to query public databases (BMRB, GenBank, Protein Data Bank) and fetch assignment, sequence and structure data on a weekly basis. It then applies a series of data checking routines (using keywords to remove paramagnetic or denatured proteins) followed by a series of calculations to identify and correct chemical shift referencing errors. RefDB is a fully web-enabled database: it stores data in two standard formats (NMR-STAR and Shifty), it performs automated data updating, checking and validation, and it provides open access to output data in a fully downloadable flat file format as well as in a hyperlinked browsable table. RefDB also supports keyword queries and sequence searches (using local BLAST). RefDB is usually updated on a weekly basis. The RefDB database, along with its associated software, is freely available at http://refdb.wishartlab.com and at the BMRB website.
Protocols
RefDB has been prepared using a combination of three different computer programs. The first program (SHIFTX) calculates backbone 1H, 13C and 15N chemical shifts from protein 3D coordinate data. The second program (SHIFTCOR) compares the calculated shifts with the observed shifts, evaluates any statistically significant differences and performs the necessary chemical shift corrections. The third program (UPDATE) automatically retrieves newly deposited BMRB data along with any corresponding PDB data. UPDATE also directs the data to SHIFTCOR and appends the ‘corrected’ chemical shift file to the RefDB database.
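A minimal sketch of the core correction step in this pipeline is shown below (a simplified illustration, not the actual SHIFTCOR code): if the observed shifts for one nucleus type are systematically offset from the structure-based predictions, the mean difference is treated as the referencing error and subtracted. The numerical values are made up for illustration.

```python
import numpy as np

# Structure-based predictions (e.g. from a SHIFTX-like predictor) and deposited values, in ppm.
predicted = np.array([58.1, 56.4, 62.0, 54.8, 60.3])
observed = np.array([59.9, 58.2, 63.7, 56.5, 62.1])

offset = np.mean(observed - predicted)   # estimate of the systematic referencing error
corrected = observed - offset            # re-referenced chemical shifts

print(f"Estimated referencing offset: {offset:.2f} ppm")
print("Corrected shifts:", np.round(corrected, 2))
```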
What RefDB provides
A downloadable database of 2162 re-referenced protein chemical shift files
A comprehensive list of BMRB and re-referenced BMRB entries with corresponding PDB entries
A tab-delimited summary of all RefDB entries
Secondary structure information for each protein sequence in shifty format
Statistical information
Referencing errors (in ppm) for backbone atoms
Averaged backbone chemical shift values for secondary structure assignments (Alpha-helix, beta and coil)
See also
Chemical Shift
SHIFTCOR
Protein structure database
Protein Chemical Shift Re-Referencing
Protein secondary structure
Protein Chemical Shift Prediction
Chemical shift index
Protein NMR
References
General References
Chemical databases
Nuclear magnetic resonance software
Protein methods
Chemistry software
Biological databases | RefDB (chemistry) | [
"Chemistry",
"Biology"
] | 833 | [
"Biochemistry methods",
"Nuclear magnetic resonance",
"Nuclear magnetic resonance software",
"Chemistry software",
"Protein methods",
"Chemical databases",
"Protein biochemistry",
"Bioinformatics",
"nan",
"Biological databases"
] |
34,138,240 | https://en.wikipedia.org/wiki/SHIFTCOR | SHIFTCOR (Shift Correction) is a freely available web server as well as a stand-alone computer program for protein chemical shift re-referencing. Chemical shift referencing is a particularly widespread problem in biomolecular NMR with up to 25% of existing NMR chemical shift assignments being improperly referenced. Some of these referencing problems can lead to systematic errors of between 1.0 to 2.5 ppm (especially in 13C and 15N chemical shifts). Errors of this magnitude can play havoc with any attempt to compare assignments between proteins or to structurally interpret chemical shifts. Identifying which proteins are mis-assigned or improperly referenced can be challenging, as can correcting the errors once they are found. The SHIFTCOR program was designed to assist with identifying and fixing these chemical shift referencing problems. Specifically it compares, identifies, corrects and re-references 1H, 13C and 15N backbone chemical shifts of peptides and proteins by comparing the observed chemical shifts with the predicted chemical shifts derived from the 3D structure (using PDB coordinates) of the protein(s) of interest [1]. The predicted chemical shifts are calculated using the ShiftX program. The SHIFTCOR program was originally used to construct a database of properly re-referenced protein chemical shift assignments called RefDB. RefDB is a web-accessible database of more than 2000 correctly referenced protein chemical shift assignments. While originally available as a stand-alone program only, SHIFTCOR has since been released for general use as a web server.
See also
Chemical Shift
NMR
Protein
Protein structure database
Protein Chemical Shift Re-Referencing
Protein secondary structure
Protein Chemical Shift Prediction
Chemical shift index
Protein NMR
References
External links
ShiftX
RefDB
Nuclear magnetic resonance software
Protein methods
Protein structure
Biophysics
Chemistry software
Biological databases | SHIFTCOR | [
"Physics",
"Chemistry",
"Biology"
] | 354 | [
"Biochemistry methods",
"Applied and interdisciplinary physics",
"Nuclear magnetic resonance",
"Nuclear magnetic resonance software",
"Chemistry software",
"Protein methods",
"Protein biochemistry",
"Bioinformatics",
"Biophysics",
"Structural biology",
"nan",
"Protein structure",
"Biological... |
34,143,120 | https://en.wikipedia.org/wiki/Metal%20halides | Metal halides are compounds between metals and halogens. Some, such as sodium chloride are ionic, while others are covalently bonded. A few metal halides are discrete molecules, such as uranium hexafluoride, but most adopt polymeric structures, such as palladium chloride.
Preparation
The halogens can all react with metals to form metal halides according to the following equation:
2M + nX2 → 2MXn
where M is the metal, X is the halogen, and MXn is the metal halide.
In practice, this type of reaction may be very exothermic, hence impractical as a preparative technique. Additionally, many transition metals can adopt multiple oxidation states, which complicates matters. As the halogens are strong oxidizers, direct combination of the elements usually leads to a highly oxidized metal halide. For example, ferric chloride can be prepared thus, but ferrous chloride cannot. Heating the higher halides may produce the lower halides; this occurs by thermal decomposition or by disproportionation. For example, gold(III) chloride decomposes to gold(I) chloride:
AuCl3 → AuCl + Cl2 at 160°C
Metal halides are also prepared by the neutralization of a metal oxide, hydroxide, or carbonate with the appropriate halogen acid. For example, with sodium hydroxide:
NaOH + HCl → NaCl + H2O
Water can sometimes be removed by heat, vacuum, or the presence of anhydrous hydrohalic acid. Anhydrous metal chlorides suitable for preparing other coordination compounds may be dehydrated by treatment with thionyl chloride:
MCln·xH2O + x SOCl2 → MCln + x SO2 + 2x HCl
The silver and thallium(I) cations have a great affinity for halide anions in solution, and the metal halide quantitatively precipitates from aqueous solution. This reaction is so reliable that silver nitrate is used to test for the presence and quantity of halide anions. The reaction of silver cations with bromide anions:
Ag+ (aq) + Br− (aq) → AgBr (s)
Some metal halides may be prepared by reacting oxides with halogens in the presence of carbon (carbothermal reduction), as in the production of titanium tetrachloride from titanium dioxide:
TiO2 + 2 Cl2 + 2 C → TiCl4 + 2 CO
Structure and reactivity
"Ionic" metal halides (predominantly of the alkali and alkali earth metals) tend to have very high melting and boiling points. They freely dissolve in water, and some are deliquescent. They are generally poorly soluble in organic solvents.
Some low-oxidation state transition metals have halides which dissolve well in water, such as ferrous chloride, nickelous chloride, and cupric chloride. Metal cations with a high oxidation state tend to undergo hydrolysis instead, e.g. ferric chloride, aluminium chloride, and titanium tetrachloride.
Discrete metal halides have lower melting and boiling points. For example, titanium tetrachloride melts at −25 °C and boils at 135 °C, making it a liquid at room temperature. They are usually insoluble in water, but soluble in organic solvent.
Polymeric metal halides generally have melting and boiling points that are higher than monomeric metal halides, but lower than ionic metal halides. They are soluble only in the presence of a ligand which liberates discrete units. For example, palladium chloride is quite insoluble in water, but it dissolves well in concentrated sodium chloride solution:
PdCl2 (s) + 2 Cl− (aq) → PdCl42− (aq)
Palladium chloride is insoluble in most organic solvents, but it forms soluble monomeric units with acetonitrile and benzonitrile:
[PdCl2]n + 2n CH3CN → n PdCl2(CH3CN)2
The tetrahedral tetrahalides of the first-row transition metals are prepared by addition of a quaternary ammonium chloride to the metal halide in a similar manner:
MCl2 + 2 Et4NCl → (Et4N)2MCl4 (M = Mn, Fe, Co, Ni, Cu)
Antimony pentafluoride is a strong Lewis acid. It gives fluoroantimonic acid, the strongest known acid, with hydrogen fluoride. Antimony pentafluoride serves as the prototypical Lewis acid used to compare different compounds' Lewis basicities. This measure of basicity is known as the Gutmann donor number.
Halide ligands
Halides are X-type ligands in coordination chemistry. The halides are usually good σ- and good π-donors. These ligands are usually terminal, but they might act as bridging ligands as well. For example, the chloride ligands of aluminium chloride bridge two aluminium centers, thus the compound with the empirical formula AlCl3 actually has the molecular formula of Al2Cl6 under ordinary conditions. Due to their π-basicity, the halide ligands are weak field ligands. Due to a smaller crystal field splitting energy, the halide complexes of the first transition series are all high spin when possible. These complexes are low spin for the second and third row transition series. Only [CrCl6]3− is exchange inert.
Homoleptic metal halide complexes are known with several stoichiometries, but the main ones are the hexahalometallates and the tetrahalometallates. The hexahalides adopt octahedral coordination geometry, whereas the tetrahalides are usually tetrahedral. Square planar tetrahalides are known as are examples with 2- and 3-coordination.
Alfred Werner studied hexamminecobalt(III) chloride, and was the first to propose the correct structures of coordination complexes. Cisplatin, cis-Pt(NH3)2Cl2, is a platinum drug bearing two chloride ligands. The two chloride ligands are easily displaced, allowing the platinum center to bind to two guanine units, thus damaging DNA.
Due to the presence of filled pπ orbitals, halide ligands on transition metals are able to reinforce π-backbonding onto a π-acid. They are also known to labilize cis-ligands.
Applications
The volatility of the tetrachloride and tetraiodide complexes of Ti(IV) is exploited in the purification of titanium by the Kroll and van Arkel–de Boer processes, respectively.
Metal halides act as Lewis acids. Ferric and aluminium chlorides are catalysts for the Friedel-Crafts reaction, but due to their low cost, they are often added in stoichiometric quantities.
Chloroplatinic acid (H2PtCl6) is an important catalyst for hydrosilylation.
Precursor to inorganic compounds
Metal halides are often readily available precursors for other inorganic compounds. Mentioned above, the halide compounds can be made anhydrous by heat, vacuum, or treatment with thionyl chloride.
Halide ligands may be abstracted by silver(I), often as the tetrafluoroborate or the hexafluorophosphate. In many transition metal compounds, the empty coordination site is stabilized by a coordinating solvent like tetrahydrofuran. Halide ligands may also be displaced by the alkali salt of an X-type ligand, such as a salen-type ligand. This reaction is formally a transmetallation, and the abstraction of the halide is driven by the precipitation of the resultant alkali halide in an organic solvent. The alkali halides generally have very high lattice energies.
For example, sodium cyclopentadienide reacts with ferrous chloride to yield ferrocene:
2 NaC5H5 + FeCl2 → Fe(C5H5)2 + 2 NaCl
While inorganic compounds used for catalysis may be prepared and isolated, they may at times be generated in situ by addition of the metal halide and the desired ligand. For example, palladium chloride and triphenylphosphine may often be used in lieu of bis(triphenylphosphine)palladium(II) chloride for palladium-catalyzed coupling reactions.
Lamps
Some halides are used in metal-halide lamps.
See also
Hard and soft acids and bases
Alkali halides
Silver halides
References
Inorganic compounds | Metal halides | [
"Chemistry"
] | 1,791 | [
"Inorganic compounds",
"Metal halides",
"Salts"
] |
34,143,799 | https://en.wikipedia.org/wiki/Germanium%20tetrafluoride | Germanium tetrafluoride (GeF4) is a chemical compound of germanium and fluorine. It is a colorless gas.
Synthesis
Germanium tetrafluoride is formed by treating germanium with fluorine:
Ge + 2 F2 → GeF4
Alternatively germanium dioxide combines with hydrofluoric acid (HF):
GeO2 + 4 HF → GeF4 + 2 H2O
It is also formed during the thermal decomposition of a complex salt, Ba[GeF6]:
Ba[GeF6] → GeF4 + BaF2
Properties
Germanium tetrafluoride is a noncombustible, strongly fuming gas with a garlic-like odor. It reacts with water to form hydrofluoric acid and germanium dioxide. Decomposition occurs above 1000 °C.
Reaction of GeF4 with fluoride sources produces GeF5− anions with octahedral coordination around the Ge atom due to polymerization. The structural characterization of a discrete trigonal bipyramidal GeF5− anion was achieved by using a "naked" fluoride reagent, 1,3-bis(2,6-diisopropylphenyl)imidazolium fluoride.
Uses
In combination with disilane, germanium tetrafluoride is used in the synthesis of SiGe.
References
External links
"Reactivity of a Naked Fluoride Reagent and Controlled Design of Germanium Fluorido-Anions."
Germanium(IV) compounds
Gases
Fluorides
Metal halides | Germanium tetrafluoride | [
"Physics",
"Chemistry"
] | 329 | [
"Matter",
"Inorganic compounds",
"Phases of matter",
"Salts",
"Metal halides",
"Statistical mechanics",
"Fluorides",
"Gases"
] |
34,147,023 | https://en.wikipedia.org/wiki/Slack%20bus | In electrical power systems a slack bus (or swing bus), defined as a Vδ bus, is used to balance the active power and reactive power in a system while performing load flow studies. The slack bus is used to provide for system losses by emitting or absorbing active and/or reactive power to and from the system.
Load flow studies
For power systems engineers, a load flow study explains the power system conditions at various intervals during operation. It aims to minimize the difference between the calculated and actual quantities. Here, the slack bus can contribute to the minimization by having an unconstrained real and reactive power input.
The use of a slack bus has an inherent disadvantage when dealing with uncertain input variables: the slack bus must absorb all uncertainties arising from the system and thus must have the widest possible nodal power distributions. Even moderate amounts of uncertainty in a large system may allow the resulting distributions to contain values beyond the slack bus's margins.
A load flow approach able to directly incorporate uncertainties into the solution processes can be very useful. The results from such analyses give solutions over the range of the uncertainties, i.e., solutions that are sets of values or regions instead of single values.
Types of buses
Buses are of 3 types and are classified as:
PQ bus – the real power and reactive power are specified. It is also known as Load Bus. Generally, in a PQ bus, the generated real and reactive power will be assumed to be zero. However, power will be flowing out, thus, the real power and reactive power will be both negative. The Load Bus will be used to find the bus voltage and angle.
PV bus – the real power and the voltage magnitude are specified. It is also known as Generator Bus. The real power and voltage are specified for buses that are generators. These buses have a constant power generation, controlled through a prime mover, and a constant bus voltage.
Slack bus – to balance the active and reactive power in the system. It is also known as the Reference Bus or the Swing Bus. The slack bus will serve as an angular reference for all other buses in the system, which is set to 0°. The voltage magnitude is also assumed to be 1 p.u. at the slack bus.
The slack bus provides or absorbs active and reactive power to and from the transmission line to provide for losses, since these variables are unknown until the final solution is established. The slack bus is the only bus for which the system reference phase angle is defined. From this, the various angular differences can be calculated in the power flow equations. If a slack bus is not specified, then a generator bus with maximum real power acts as the slack bus. A given scheme can involve more than one slack bus.
Formulation of load flow problem
The most common formulation of the load flow problem specifies all input variables (PQ at loads, PV at generators) as deterministic values. Each set of specified values corresponds to one system state, which depends on a set of system conditions. When those conditions are uncertain, numerous scenarios must be analyzed.
A classic load flow analysis consists of calculating voltage magnitude and phase angle at the buses, as well as the active and reactive line flows for the specified terminal (or bus conditions). Four variables are associated with each bus:
voltage
phase angle
active or real power
reactive power
Based on these values, a bus may be classified into the three categories mentioned above.
Real and reactive powers (i.e. complex power) cannot be fixed in advance at every bus. The net complex power flow into the network is not known in advance, and the system power losses are unknown until the study is complete. It is necessary to have one bus (i.e. the slack bus) at which complex power is unspecified so that it supplies the difference between the total system load plus losses and the sum of the complex powers specified at the remaining buses. The complex power allocated to this bus is computed as part of the solution. In order for the variations in real and reactive powers of the slack bus to be a small percentage of its generating capacity during the solution process, the bus connected to the largest generating station is normally selected as the slack bus. The slack bus is crucial to a load flow problem since it will account for transmission line losses. In a load flow problem, conservation of energy requires the total generation to equal the sum of the loads. However, there would still be a discrepancy in these quantities due to line losses, which are dependent on line current. Yet to determine line current, the angles and voltages of the buses connected to the line would be needed. Here, the slack bus will be required to account for line losses and serve as a generator, injecting real power into the system.
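In standard textbook notation, writing the bus admittance matrix entries as $Y_{ik} = G_{ik} + jB_{ik}$ and the bus voltages as $|V_i|\angle\delta_i$, the power flow equations referred to above take the polar form
$$P_i = \sum_{k=1}^{N} |V_i||V_k|\bigl(G_{ik}\cos(\delta_i-\delta_k) + B_{ik}\sin(\delta_i-\delta_k)\bigr),$$
$$Q_i = \sum_{k=1}^{N} |V_i||V_k|\bigl(G_{ik}\sin(\delta_i-\delta_k) - B_{ik}\cos(\delta_i-\delta_k)\bigr).$$
At the slack bus the angle (the system reference) and the voltage magnitude are fixed, so its $P$ and $Q$ are left free to absorb the mismatch, including the losses.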
Solutions
The solution requires mathematical formulation and numerical solution. Since load flow problems generate non-linear equations that computers cannot solve quickly, numerical methods are required. The following methods are commonly used algorithms:
Gauss Iterative Method
Fast Decoupled Load Flow Method
Gauss-Seidel Method (a minimal sketch of this method is given after this list)
Newton-Raphson Method
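A minimal Gauss-Seidel sketch for a three-bus system is given below, with bus 0 acting as the slack bus; the line impedances and load values are illustrative assumptions, not data from the article. After the PQ-bus voltages converge, the complex power injected at the slack bus is computed last, so that it automatically covers the loads plus the line losses, which is the balancing role described above.

```python
import numpy as np

# Three-bus example: bus 0 is the slack bus (V = 1.0 p.u., angle 0); buses 1 and 2 are PQ buses.
lines = [(0, 1, 0.02 + 0.06j), (0, 2, 0.06 + 0.18j), (1, 2, 0.04 + 0.12j)]  # (from, to, impedance)

n = 3
Ybus = np.zeros((n, n), dtype=complex)
for i, k, z in lines:
    y = 1 / z
    Ybus[i, i] += y
    Ybus[k, k] += y
    Ybus[i, k] -= y
    Ybus[k, i] -= y

# Specified net injections at the PQ buses (negative = load), in per unit.
S_spec = np.array([0.0 + 0.0j, -0.5 - 0.2j, -0.3 - 0.1j])

V = np.ones(n, dtype=complex)         # flat start; V[0] stays fixed as the slack reference
for _ in range(200):                  # Gauss-Seidel sweeps
    for i in (1, 2):                  # update only the PQ buses
        sum_yv = Ybus[i] @ V - Ybus[i, i] * V[i]
        V[i] = (np.conj(S_spec[i]) / np.conj(V[i]) - sum_yv) / Ybus[i, i]

# The slack bus supplies whatever is needed to balance the system, including line losses.
S_slack = V[0] * np.conj(Ybus[0] @ V)
print("PQ bus voltages:", np.round(V[1:], 4))
print("Slack bus injection (P + jQ):", np.round(S_slack, 4))
```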
See also
Power Flow Study
Power Engineering
References
L.P. Singh, "Advanced Power System Analysis & Dynamics" by New Age International, .
I.J. Nagrath & D.P Kothari, "Modern Power System Analysis" by Tata-McGraw Hill, ,
External links
Load Flow Studies
Electrical engineering
Electric power
Power engineering | Slack bus | [
"Physics",
"Engineering"
] | 1,098 | [
"Physical quantities",
"Energy engineering",
"Power (physics)",
"Electric power",
"Power engineering",
"Electrical engineering"
] |
52,467,837 | https://en.wikipedia.org/wiki/Aspen%20HYSYS | Aspen HYSYS (or simply HYSYS) is a chemical process simulator currently developed by AspenTech used to mathematically model chemical processes, from unit operations to full chemical plants and refineries. HYSYS is able to perform many of the core calculations of chemical engineering, including those concerned with mass balance, energy balance, vapor-liquid equilibrium, heat transfer, mass transfer, chemical kinetics, fractionation, and pressure drop. HYSYS is used extensively in industry and academia for steady-state and dynamic simulation, process design, performance modeling, and optimization.
Etymology
HYSYS is a portmanteau formed from Hyprotech, the name of the company which created the software, and Systems.
History
HYSYS was first conceived and created by the Canadian company Hyprotech, founded by researchers from the University of Calgary. The HYSYS Version 1.1 Reference Volume was published in 1996.
In May 2002, AspenTech acquired Hyprotech, including HYSYS. Following a 2004 ruling by the United States Federal Trade Commission, AspenTech was forced to divest its Hyprotech assets, including HYSYS source code, ultimately selling these to Honeywell. Honeywell was also able to hire a number of HYSYS developers, ultimately mobilizing these resources to produce UniSim. The divestment agreement specified that Aspentech would retain rights to market and develop most Hyprotech products (including HYSYS) royalty-free. As of 2024, AspenTech continues to produce HYSYS.
References
Chemical synthesis
Chemical engineering software | Aspen HYSYS | [
"Chemistry",
"Engineering"
] | 326 | [
"Chemical engineering software",
"Chemical engineering",
"nan",
"Chemical synthesis"
] |
48,622,094 | https://en.wikipedia.org/wiki/Deterministic%20global%20optimization | Deterministic global optimization is a branch of mathematical optimization which focuses on finding the global solutions of an optimization problem whilst providing theoretical guarantees that the reported solution is indeed the global one, within some predefined tolerance. The term "deterministic global optimization" typically refers to complete or rigorous (see below) optimization methods. Rigorous methods converge to the global optimum in finite time. Deterministic global optimization methods are typically used when locating the global solution is a necessity (i.e. when the only naturally occurring state described by a mathematical model is the global minimum of an optimization problem), when it is extremely difficult to find a feasible solution, or simply when the user desires to locate the best possible solution to a problem.
Overview
Neumaier classified global optimization methods in four categories, depending on their degree of rigour with which they approach the optimum, as follows:
An incomplete method uses clever intuitive heuristics for searching but has no safeguards if the search gets stuck in a local minimum
An asymptotically complete method reaches a global minimum with certainty or at least with probability one if allowed to run indefinitely long, but has no means to know when a global minimizer has been found.
A complete method reaches a global minimum with certainty, assuming exact computations and indefinitely long run time, and knows after a finite time that an approximate global minimizer has been found (to within prescribed tolerances).
A rigorous method reaches a global minimum with certainty and within given tolerances even in the presence of rounding errors, except in near-degenerate cases, where the tolerances may be exceeded.
Deterministic global optimization methods typically belong to the last two categories. Note that building a rigorous piece of software is extremely difficult as the process requires that all the dependencies are also coded rigorously.
Deterministic global optimization methods require ways to rigorously bound function values over regions of space. One could say that a main difference between deterministic and non-deterministic methods in this context is that the former perform calculations over regions of the solution space, whereas the latter perform calculations on single points. This is either done by exploiting particular functional forms (e.g. McCormick relaxations), or using interval analysis in order to work with more general functional forms. In any case bounding is required, which is why deterministic global optimization methods cannot give a rigorous result when working with black-box code, unless that code is explicitly written to also return function bounds. For this reason, it is common for problems in deterministic global optimization to be represented using a computational graph, as it is straightforward to overload all operators such that the resulting function values or derivatives yield interval (rather than scalar) results.
Classes of deterministic global optimization problems
Linear programming problems (LP)
Linear programming problems are a highly desirable formulation for any practical problem. The reason is that, with the rise of interior-point algorithms, it is possible to efficiently solve very large problems (involving hundreds of thousands or even millions of variables) to global optimality. Linear programming optimization problems strictly fall under the category of deterministic global optimization.
Mixed-integer linear programming problems (MILP)
Much like linear programming problems, MILPs are very important when solving decision-making models. Efficient algorithms for solving complex problems of this type are known and are available in the form of solvers such as CPLEX.
Non-linear programming problems (NLP)
Non-linear programming problems are extremely challenging in deterministic global optimization. The order of magnitude that a modern solver can be expected to handle in reasonable time is roughly 100 to a few hundred non-linear variables. At the time of this writing, there exist no parallel solvers for the deterministic solution of NLPs, which accounts for the complexity gap between deterministic LP and NLP programming.
Mixed-integer non-linear programming problems (MINLP)
Even more challenging than their NLP counterparts, deterministically solving an MINLP problem can be very difficult. Techniques such as integer cuts, or branching a problem on its integer variables (hence creating NLP sub-problems which can in turn be solved deterministically), are commonly used.
Zero-order methods
Zero-order methods consist of methods which make use of zero-order interval arithmetic. A representative example is interval bisection.
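A minimal sketch of interval bisection is given below: boxes are recursively split, a rigorous lower bound on each box is obtained from a natural interval extension, and boxes whose lower bound cannot beat the best point value found so far are discarded. The test polynomial, bounds, and tolerance are illustrative assumptions; production solvers use far tighter relaxations.

```python
import heapq

class Interval:
    """Very small interval-arithmetic helper (addition, subtraction, multiplication only)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def _wrap(self, o):
        return o if isinstance(o, Interval) else Interval(o, o)
    def __add__(self, o):
        o = self._wrap(o)
        return Interval(self.lo + o.lo, self.hi + o.hi)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._wrap(o)
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        o = self._wrap(o)
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    __rmul__ = __mul__

def f(x):
    # Works on floats and on Interval objects (the natural interval extension gives a valid enclosure).
    return x * x * x * x - 3 * (x * x * x) + 2

def interval_bisection_minimise(lo, hi, tol=1e-4):
    best = min(f(lo), f(hi), f((lo + hi) / 2))      # incumbent upper bound on the global minimum
    heap = [(f(Interval(lo, hi)).lo, lo, hi)]       # boxes ordered by their rigorous lower bound
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb >= best - tol:                        # box cannot contain anything noticeably better
            continue
        m = (a + b) / 2
        best = min(best, f(m))                      # tighten the incumbent at the midpoint
        for p, q in ((a, m), (m, b)):               # bisect and keep only promising halves
            child_lb = f(Interval(p, q)).lo
            if child_lb < best - tol:
                heapq.heappush(heap, (child_lb, p, q))
    return best

print(interval_bisection_minimise(-2.0, 4.0))       # global minimum of f on [-2, 4], about -6.543
```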
First-order methods
First-order methods consist of methods which make use of first-order information, e.g., interval gradients or interval slopes.
Second-order methods
Second-order methods make use of second-order information, usually eigenvalue bounds derived from interval Hessian matrices. One of the most general second-order methodologies for handling problems of general type is the αBB algorithm.
Deterministic global optimization solvers
ANTIGONE (Algorithms for coNTinuous / Integer Global Optimization of Nonlinear Equations) is proprietary software, available through the GAMS modelling platform.
BARON: BARON is available under the AIMMS, AMPL, and GAMS modeling languages and on the NEOS Server. It is proprietary software.
Couenne: Convex Over and Under ENvelopes for Nonlinear Estimation (Couenne) is an open-source library
EAGO: Easy-Advanced Global Optimization (EAGO) is an open-source solver in Julia (programming language). It is developed by the University of Connecticut.
LINDO (Linear, Interactive, and Discrete Optimizer) includes global optimization capabilities.
MAiNGO: McCormick-based Algorithm for mixed-integer Nonlinear Global Optimization (MAiNGO) is a C++ package with MPI and openMP parallelization and provided open-source under Eclipse Public License - v 2.0.
Octeract Engine is a proprietary solver with parallelization capabilities. It is developed and licensed by Octeract
SCIP: SCIP is an open-source suite of optimization solvers which among others solves mixed integer nonlinear programming (MINLP)
References
Mathematical optimization | Deterministic global optimization | [
"Mathematics"
] | 1,221 | [
"Mathematical optimization",
"Mathematical analysis"
] |
48,624,128 | https://en.wikipedia.org/wiki/Wildlife%20of%20Saudi%20Arabia | The wildlife of Saudi Arabia is substantial and varied. Saudi Arabia is a very large country forming the biggest part of the Arabian Peninsula. It has several geographic regions, each with a diversity of plants and animals adapted to their own particular habitats. The country has several extensive mountain ranges, deserts, highlands, steppes, hills, wadis, volcanic areas, lakes and over 1300 islands. The Saudi Arabian coastline consists of the Gulf of Aqaba and the Red Sea to the west, while a shorter eastern coastline lies along the Persian Gulf.
Geography
Saudi Arabia has a range of mountains, the Sarawat or Sarat Mountains, which run parallel with the Red Sea coast. These are low at the northern end, have a gap in the middle between Medina and Ta'if, and are higher at the southern end, where Mount Soudah in the Asir Mountains is the highest point in Saudi Arabia. Between these mountains and the Red Sea is a coastal plain known as Tihamah. The west side of this range is a steep escarpment, but to the east is a wide plateau called the Najd which is bounded on the east by a series of mountain ridges, including the Ṭuwayq Mountains, east of which the land descends gradually to the Persian Gulf.
In the south of the country is the Rub' al Khali, or "Empty Quarter", the largest contiguous sand desert in the world. It slopes downwards from near the Yemeni border northwestwards nearly to the Persian Gulf. Another sandy desert, the Nefud, lies in the north central part of Saudi Arabia, and it is connected to the Rub' al Khali by a broad swathe of sand dunes and gravel plains known as Dahna. Most of the country has very little precipitation, and in the Rub' al Khali there may be no rain for a decade. The mountainous region of Asir in the southwest is wetter; it receives substantial monsoon rains between May and October.
The northern Ha'il Region has the Shammar Mountains, further divisible into the Aja and Salma subranges.
The Red Sea was formed when in the Eocene period, the Arabian Peninsula began to move away from the continent of Africa. This prevented further exchange of genes between African and Arabian species. Furthermore, the late Tertiary and the early Quaternary eras saw a period of climatic cooling that drove vegetation bands southwards, and the Arabian Peninsula received an influx of species from Eurasia. With increasing aridity, conditions became inimical for many of these and they retreated to the damper, southwestern mountainous regions, becoming relict populations.
Flora
Studying the flora of Saudi Arabia is a daunting task because of the vast size of the kingdom; the general pattern of vegetation is now known but the exact distribution of the many species of flowering plant is poorly understood. Almost 3,500 species of plant have been recorded in the country, with nearly 1,000 species known from the southwestern region of Asir with its higher rainfall. Plants in general are xerophytic and mostly dwarf shrubs or small herbs. There are few species of tree but date palms are abundant in places.
The east of Saudi Arabia often receives "Mediterranean depressions" from November onwards. The arrival of sufficient quantities of rain causes perennial plants to produce new shoots and the seeds of annual plants to germinate. These annuals grow with great rapidity and complete their life cycle within a few weeks. By April or May, the annuals will have flowered, set seeds and died, and the perennials returned to a state of dormancy.
In desert areas, plant growth is mostly confined to depressions or wadis, though some plants with deep rooting-systems grow elsewhere. The Rub' al Khali desert has very little plant diversity, with about 37 species of flowering plant having been recorded here, 17 of which are only found around the periphery of the desert. There are virtually no trees, and the plants are adapted for desert life and include dwarf shrubs such as Calligonum crinitum and saltbush, and several species of sedge. Around the margins of this desert are open woodlands with Acacia and Prosopis cineraria.
The Asir Mountains in the southwest of the country, and most of the western highlands of Yemen, support a distinct flora which has affinities with parts of East Africa. The highest parts are clothed with cloud forests, the southwestern Arabian montane woodlands, which include, on north-facing slopes, Juniperus procera and Euryops arabicus, draped with the lichen Usnea articulata, and on south-facing slopes, dwarf shrubs such as Rubus petitianus, Rosa abyssinica, Alchemilla crytantha, Senecio and Helichrysum abyssinicum, with Aloe sabae and Euphorbia in the driest locations. Lower down there is evergreen woodland and scrub dominated by Olea europaea subsp. cuspidata and Tarchonanthus camphoratus. At lower elevations still, the vegetation is deciduous scrubland with Acacia, Commiphora, Grewia and succulent plants.
In Ha'il Region is located Jabal Aja Protected Area, which is noted for its flora, is located in the area of the Aja Mountains.
Fauna
The fauna of Saudi Arabia has been better studied than the flora, not least because of interest in the larger mammals for the purpose of hunting and shooting. Birds and butterflies have also been studied, but less is known about other parts of the animal kingdom. Some of the larger mammals found here include the dromedary camel, the Arabian tahr, the Arabian wolf, the Arabian red fox and fennec, the caracal, the striped hyena, the sand cat, the rock hyrax, and the Cape hare. However habitat destruction, hunting, off-road driving and other human activities have led to the local extinction of the striped hyena, the golden jackal and the honey badger in some localities. The Asir Mountains in the southwest of the country is where the critically endangered Arabian leopard is still to be found, and the broader region is also home to the hamadryas baboon with colonies reaching as far north as Baha, Taif, and the suburbs south of Mecca.
The Arabian oryx used to roam over Saudi Arabia's deserts and much of the Middle East but by 1970, it had been hunted to extinction in the wild. However, a captive breeding programme had been initiated at the Phoenix Zoo in the United States in the 1960s and the oryx has since been successfully reintroduced into the wild in the Mahazat as-Sayd Protected Area in Saudi Arabia, a large fenced reserve. It is also now present in the 'Uruq Bani Ma'arid protected area, where the goitered gazelle and mountain gazelle are also to be found.
The sand cat, which is the only member of the cat family to live exclusively in deserts, can be found in the western region of Saudi Arabia. Its paws are covered with thick hair to protect it from the hot ground, but it is chiefly nocturnal. In Najd and Tabuk, the Arabian wolf can be found. It is a solitary hunter and is persecuted by livestock owners. Only 2000 to 3000 wolves are left in the wild, and accordingly they are considered endangered.
Birds native to Saudi Arabia include sandgrouse, quails, eagles, buzzards and larks and on the coast, seabirds include pelicans and gulls. The country is also visited by migratory birds including flamingoes, storks and swallows in spring and autumn.
MacQueen's bustard is a resident species that is dependent on good vegetation cover, often being found in areas with dense scrubby growth with shrubs such as Capparis spinosa. The cliff faces of the Asir Mountains provide habitat for the griffon vulture, the Verreaux's eagle and the small Barbary falcon, and the juniper woodlands are home to the Yemen linnet, the Yemen thrush, the Yemen warbler and the African paradise flycatcher. The hamerkop nests in the Wadi Turabah Nature Reserve, the only place on the Arabian Peninsula at which it is found.
Extinct
The lion, cheetah, and Syrian wild ass used to occur here, as evidenced by Islamic texts. For example, there is a hadith in Muwatta’ Imam Malik about Muslim pilgrims having to beware of the asad (lion) and fahd (cheetah) in the land, besides other animals. The country's last known cheetahs were killed near Ha'il in 1973. The lion reportedly became extinct in the middle of the 19th century. Later on, a 325,000-year-old tusk of an extinct type of elephant known as Palaeoloxodon was found in the An Nafud desert in northwestern Saudi Arabia, in addition to remains of an extinct jaguar, oryx and a member of the horse family. In 2020, footprints of humans, camels, buffalo, elephants and other species, dated to 120,000 years ago, were found in Tabuk Province near what was then a shallow lake.
See also
List of birds of Saudi Arabia
List of cockroaches of Saudi Arabia
List of mammals of Saudi Arabia
References
Saudi Arabia
Fauna of Saudi Arabia | Wildlife of Saudi Arabia | [
"Biology"
] | 1,940 | [
"Biota by country",
"Wildlife by country"
] |
53,837,100 | https://en.wikipedia.org/wiki/Fujiwara%E2%80%93Moritani%20reaction | In organic chemistry, the Fujiwara–Moritani reaction is a type of cross coupling reaction where an aromatic C-H bond is directly coupled to an olefinic C-H bond, generating a new C-C bond. This reaction is performed in the presence of a transition metal, typically palladium. The reaction was discovered by Yuzo Fujiwara and Ichiro Moritani in 1967. An external oxidant is required to this reaction to be run catalytically. Thus, this reaction can be classified as a C-H activation reaction, an oxidative Heck reaction, and a C-H olefination. Surprisingly, the Fujiwara–Moritani reaction was discovered before the Heck reaction.
The need for prefunctionalization of either component is obviated in this reaction, which is desirable because it can shorten syntheses, provide atom-economical routes, and enable late-stage functionalization of complex molecules. Despite the potential of the Fujiwara–Moritani transformation, it is not often utilized by organic chemists because of the typically harsh reaction conditions, such as acidic, oxidative, and high-temperature conditions, that most functional groups cannot survive.
Mechanism
The mechanism of the Fujiwara–Moritani reaction is not fully understood. The most widely accepted mechanism is as shown in Figure 1. The sequence begins with formation of the cationic palladium–aryl complex via a Friedel–Crafts-type or concerted metalation–deprotonation process, which eliminates acetic acid to generate a palladium–aryl species. An olefin then coordinates to the palladium and undergoes a 1,2-migratory insertion, forming a C-C bond. The subsequent β-hydride elimination yields the styrene-type product and a palladium hydride species. Deprotonation of this palladium(II) species by acetate (i.e., reductive elimination of the H-OAc pair) yields palladium(0), which in the presence of an oxidant, e.g. Cu(II), can be re-oxidized to palladium(II) and undergo the catalytic cycle once more.
Industrial example
Ube Industries succeeded in an industrial application of the Fujiwara–Moritani reaction for the first time in 1982. In the presence of a catalytic amount of palladium acetate, dimethyl phthalate was directly converted to a biaryl species that was then dehydrated to give biphthalic anhydride, a precursor of polyimide polymers. There are two different potential biaryl products, a symmetric or an asymmetric form. They achieved selective synthesis of either form via ligand control.
Latest examples
Even though the original conditions of the Fujiwara–Moritani reaction are not practical, this reaction has significant importance in the sense that it showed the possibility for other transformations that were later developed. Recent advancements have unveiled the mechanism of the Fujiwara–Moritani reaction to some extent which has allowed the development of new systems that enable similar transformations on complex substrates.
One of the earliest examples of the Fujiwara–Moritani reaction in total synthesis is found in the enantioselective total synthesis of clavicipitic acid by the Murakami group. They used stoichiometric palladium acetate to couple 4-bromo indole and protected dehydroalanine. Notably, the aryl bromide survived the reaction conditions which permitted orthogonal C-H olefination techniques to be utilized. Differentiation of the C3 and C4 positions was now possible, whereas conventional cross coupling methods with the dihalogenated indole had regioselectivity issues.
Fagnou's group showed that direct C-H arylation of an indole is possible using palladium catalysis with a copper oxidant. Although the reaction requires high temperature, acidic solvent, and solvent quantities of the coupling partner, this demonstration of a selective and direct hetero aryl-aryl coupling is notable.
The Yu group developed an aryl C-H olefination reaction in which aryl carboxylic acids were directly coupled to olefins through the aryl C-H bond. Note: this is not the Fujiwara–Moritani reaction because it is directed by the free acid; the Fujiwara–Moritani reaction is typified by non-directed C–H palladation consistent with the regioselectivity of Friedel-Crafts reactions. They also applied their methodology to the total synthesis of (+)-lithospermic acid. The product yield is as high as 93% despite the complexity of both coupling partners. This is one of the best examples in which C-H olefination simplifies the retrosynthesis and demonstrates a convergent synthesis of this complex natural product.
The Lipshutz group dramatically improved the conditions of the Fujiwara–Moritani reaction by developing reaction conditions that utilize water as the solvent and obviate the need for an exogenous acid. Although the substrate scope is limited to p-methoxy aryl species, Lipshutz's report suggested that the Fujiwara–Moritani reaction can be run under milder conditions.
References
Coupling reactions
Carbon-carbon bond forming reactions
Name reactions | Fujiwara–Moritani reaction | [
"Chemistry"
] | 1,086 | [
"Coupling reactions",
"Name reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
53,839,611 | https://en.wikipedia.org/wiki/Aspergillus%20conicus | Aspergillus conicus is a xerophilic species of fungus in the genus Aspergillus which can cause endophthalmitis in rare cases. It was first described in 1914. It is from the section Restricti. Aspergillus conicus has been reported as a human pathogen.
Growth and morphology
A. conicus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
Further reading
conicus
Xerophiles
Fungi described in 1914
Fungus species | Aspergillus conicus | [
"Biology"
] | 136 | [
"Fungi",
"Fungus species"
] |
53,843,914 | https://en.wikipedia.org/wiki/Ruqian%20Wu | Ruqian Wu is a professor of physics and astronomy at the University of California, Irvine (UCI). His primary research area is condensed matter physics.
He gained a Ph.D. at the Institute of Physics, Academia Sinica.
He was awarded the status of Fellow in the American Physical Society, after he was nominated by their Division of Computational Physics in 2001, for contributions to the understanding of magnetic, electronic, mechanical, chemical and optical properties of compounds, alloys, interfaces, thin films and surfaces using first-principles calculations and for development of the methods and codes for such components.
References
Year of birth missing (living people)
Living people
University of California, Irvine faculty
Condensed matter physicists
Fellows of the American Physical Society | Ruqian Wu | [
"Physics",
"Materials_science"
] | 146 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
53,844,057 | https://en.wikipedia.org/wiki/Alamgir%20Karim | Alamgir Karim is a professor of chemical and biomolecular engineering at the University of Houston noted for his work on polymer and polymer nanocomposite materials.
Education
Karim received a B.Sc. degree in physics from St. Stephen's College, Delhi in 1985. He received a PhD in physics from Northwestern University in 1991 and then worked as a postdoctoral research associate under Matthew Tirrell at the University of Minnesota in 1991–1992.
Career
Karim joined the Polymers Division at the National Institute of Standards and Technology (NIST) in 1993, advancing from Physicist to Group Leader. In 2004 he was named a Fellow of the American Physical Society by the Division of Polymer Physics for pioneering research on polymer thin films and interfaces, polymer brushes, blend film phase separation, thin film dewetting, pattern formation in block copolymer films, and the application of combinatoric measurement methods to complex polymer physics. In 2008 he moved to the Department of Polymer Engineering at the University of Akron, where he was the Goodyear Chair Professor. In 2017 he joined the Department of Chemical & Biomolecular Engineering at the University of Houston (UH), where he is the Dow Chair and Welch Foundation Professor.
Karim is an expert in the processing of polymer thin films, polymer brushes block copolymers, polymer blends, and polymer-nanoparticle mixtures. He has applied these materials to produce membranes, phase gratings, and sensors, among other applications.
At UH, he has developed methods to process chitin, increase the energy density of capacitors and manipulate polyelectrolyte coacervate droplets. In 2024 he proposed that coacervate droplets suspended in deionized water could potentially act as protocells for the origins of life.
His most-cited research article demonstrated a method to calculate the elastic moduli of polymer thin films from the spacing of wrinkles on polymer films.
Awards and recognition
2002 — Bronze Medal Award, United States Department of Commerce
2004 — Fellow, American Physical Society
2012 — Fellow, American Association for the Advancement of Science
2021 — National Science Foundation Special Creativity Award
References
Living people
Chemical engineers
Fellows of the American Physical Society
Fellows of the American Association for the Advancement of Science
Year of birth missing (living people)
University of Houston faculty
Place of birth missing (living people) | Alamgir Karim | [
"Chemistry",
"Engineering"
] | 475 | [
"Chemical engineering",
"Chemical engineers"
] |
53,844,230 | https://en.wikipedia.org/wiki/Massimo%20Boninsegni | Massimo Boninsegni (born 1963 in Genova, Italy) is an Italian-Canadian theoretical condensed matter physicist. He graduated with a Bachelor's degree in physics at the Universita' degli Studi di Genova in 1986.
He moved to the United States in 1987, where he received a doctoral degree in physics from Florida State University in 1992. His Ph.D. thesis was on numerical studies of a strongly correlated electronic model of high-temperature superconductivity. He took on postdoctoral positions at the University of Illinois at Urbana-Champaign and University of Delaware, before becoming in 1997 an assistant professor of physics at San Diego State University. He moved to the University of Alberta in 2002, where he has been a professor of physics since 2005.
His research interests are in the areas of superfluidity, superconductivity, Bose-Einstein condensation and Quantum Monte Carlo simulations. His main contribution is in computational quantum many-body physics, specifically to the development of the continuous-space Worm Algorithm for the simulation of strongly correlated Bose systems at finite temperature. He also contributed to the study of the condensed phase of molecular hydrogen, chiefly the superfluid properties of small clusters, as well as of the supersolid phase of matter.
He was awarded the status of Fellow of the American Physical Society in 2007 for "the development of a novel methodology enabling accurate, large-scale Quantum Monte Carlo simulations of interacting many-body systems, and for its application to the investigation of the supersolid phase of helium and of superfluidity of molecular hydrogen".
References
Fellows of the American Physical Society
21st-century Italian physicists
Living people
Date of birth missing (living people)
1963 births
Scientists from Genoa
Condensed matter physicists
Computational physicists
Florida State University alumni
Italian emigrants to the United States
Italian emigrants to Canada
San Diego State University faculty
Academic staff of the University of Alberta
University of Genoa alumni
Canadian physicists | Massimo Boninsegni | [
"Physics",
"Materials_science"
] | 395 | [
"Condensed matter physicists",
"Condensed matter physics",
"Computational physicists",
"Computational physics"
] |
53,844,678 | https://en.wikipedia.org/wiki/Xin%20Zhang%20%28engineer%29 | Xin Zhang is a Distinguished Professor of Engineering at Boston University (BU).
Education
Dr. Zhang obtained her Ph.D. from the Hong Kong University of Science and Technology (HKUST). She subsequently served as a postdoctoral researcher and research scientist at the Massachusetts Institute of Technology (MIT).
Career
Dr. Zhang joined the faculty of Boston University in 2002 and currently holds the title of Distinguished Professor of Engineering. Her appointment spans the Departments of Mechanical Engineering, Electrical & Computer Engineering, Biomedical Engineering, and Materials Science & Engineering.
Current research
Dr. Zhang directs the Laboratory for Microsystems Technology (LMST) at Boston University, which specializes in metamaterials and microelectromechanical systems (MEMS or microsystems). Her recent research in metamaterials includes metamaterials designed for air-permeable sound silencing and noise reduction, as well as structures that substantially improve the MRI signal-to-noise ratio and thereby MRI performance. This work has been covered by many media outlets and has attracted interest from both the scientific community and industry.
Professional memberships
Fellow of the American Society of Mechanical Engineers (2015)
Fellow of Optica (2016)
Fellow of the American Institute for Medical and Biological Engineering (2016)
Associate Fellow of the American Institute of Aeronautics and Astronautics (2017)
Fellow of the American Association for the Advancement of Science (2016)
Fellow of the Institute of Electrical and Electronics Engineers (2017)
Fellow of the American Physical Society (2019)
Fellow of the National Academy of Inventors (2019)
Member of the European Academy of Sciences and Arts (2023)
Honors, awards and special recognitions
2024: ASME Robert Henry Thurston Lecture Award
2024: Honoree of Fast Company Innovation By Design Awards
2024: Falling Walls Science Breakthroughs of the Year (Finalist)
2023: Member of the European Academy of Sciences and Arts
2023: IET Excellence and Innovation Awards - International Award (Finalist)
2023: Sigma Xi Walston Chubb Award for Innovation
2023: Falling Walls Science Breakthroughs of the Year (Finalist)
2023: IEEE EMBS Technical Achievement Award
2023: STAT Madness All-Star Award
2023: ASME Per Bruel Gold Medal
2022: IET Innovation Award on Chief Engineer of the Year (Finalist)
2022: Guggenheim Fellowship
2022: Distinguished Professor of Engineering, Boston University
2021: Rajen Kilachand Fund for Integrated Life Science and Engineering
2021: IET Innovation Award on Digital Health and Social Care (Finalist)
2021: IET Innovation Award on Tech for Good (Finalist)
2021: Invented Here! Honoree, Boston Patent Law Association
2020: IET Achievement Medal (Finalist)
2020: Invented Here! Honoree, Boston Patent Law Association
2020: IET Innovation Award on Excellence in R&D (Finalist)
2019: Fellow of National Academy of Inventors
2019: IET Innovation Award on Emerging Technology Design
2018: Boston University Innovator of the Year Award
2018: Charles DeLisi Award and Distinguished Lecture
2016: IEEE Sensors Council Technical Achievement Award
Education and outreach
Dr. Zhang serves as the Director for both the National Science Foundation (NSF) Research Experiences for Undergraduates (REU) Site and Research Experiences for Teachers (RET) Site in Integrated Nanomanufacturing at Boston University. Additionally, she holds the role of Associate Director at the Boston University Nanotechnology Innovation Center.
References
Year of birth missing (living people)
Living people
21st-century American engineers
Boston University faculty
Alumni of the Hong Kong University of Science and Technology
Massachusetts Institute of Technology people
Fellows of the American Association for the Advancement of Science
Fellows of the American Institute for Medical and Biological Engineering
Fellows of the American Society of Mechanical Engineers
Fellows of the American Physical Society
Fellows of the IEEE
Fellows of Optica (society)
Massachusetts Institute of Technology alumni | Xin Zhang (engineer) | [
"Materials_science"
] | 801 | [
"Metamaterials scientists",
"Metamaterials"
] |
53,844,820 | https://en.wikipedia.org/wiki/CpG%20island%20hypermethylation | CpG island hypermethylation is a phenomenon that is important for the regulation of gene expression in cancer cells, as an epigenetic control aberration responsible for gene inactivation. Hypermethylation of CpG islands has been described in almost every type of tumor.
Many important cellular pathways, such as DNA repair (hMLH1, for example), cell cycle (p14ARF), apoptosis (DAPK), and cell adherence (CDH1, CDH13), are inactivated by it. Hypermethylation is linked to methyl-binding proteins, DNA methyltransferases and histone deacetylase, but the degree to which this process selectively silences tumor suppressor genes remains a research area. The list for hypermethylated genes is growing.
History
The first discovery of methylation in a CpG island of a tumor suppressor gene in humans was that of the retinoblastoma (Rb) gene in 1989, just a few years after the first oncogene mutation was discovered in a human primary tumor. The discovery of the methylation-associated inactivation of the Von Hippel–Lindau (VHL) gene revived the idea that hypermethylation of the CpG island promoter is a mechanism for inactivating genes in cancer. The modern study of epigenetic silencing in cancer began in the laboratories of Baylin and Jones, where it was shown that CpG island hypermethylation is a common inactivation mechanism of the tumor suppressor gene p16INK4a. The introduction of methylation-specific PCR and sodium bisulfite modification provided additional tools for cancer epigenetics research, and the list of candidate genes with aberrant methylation of their CpG islands has been growing since. Initially, the alterations in the DNA methylation profile of cancer were seen as a global hypomethylation of the genome that would lead to massive overexpression of oncogenes whose CpG islands are normally hypermethylated. This is now considered an incomplete picture, although it is correct that the genome of a cancer cell has a reduced 5-methylcytosine content compared with its normal parent cell. In normal tissues, the vast majority of CpG islands are completely unmethylated, with some exceptions. The association of transcriptional silencing of tumor suppressor genes with hypermethylation is the foundation upon which this subset of cancer epigenetics stands.
Structure
In a normal cell, the CpG island is hypomethylated, while much of the rest of the genome is methylated. The hypomethylation of the CpG island in normal cells provides no additional steric hindrance to the binding of transcription machinery. The majority of CpG pairs in mammals are chemically modified by the covalent attachment of a methyl group to the C5 position of the cytosine ring; this modification is distributed throughout the genome and represses transcription. A CpG island is a repeated sequence of cytosines and guanines linked by phosphates. These are genetic hotspots, as they are sites of active methylation. The expression of a gene is tissue specific, which leads to variation in tissue function, and methylation of a gene typically prevents its expression.
The reason methylation is almost exclusive to CpG dinucleotides is the symmetry of the dinucleotide, which allows the methylation pattern to be preserved during cell division and is a hallmark of epigenetic modifications.
Role in cancer
CpG islands that are hypermethylated can play three roles in cancer: in diagnosis, prognosis and in monitoring. It is useful to consider a particular tumor type, called CpG island methylator phenotype, or CIMP: higher levels of CpG island hypermethylation are found in CIMP. The frequent occurrence of hypermethylation was first described in colorectal cancer and later for glioma. More recently, it has been studied for neuroblastomas. Colorectal cancer will not necessarily have the same set of hypermethylated CpG islands as in a glioma, and this clinical distinctness of tumors can be interpreted by doctors. Hypermethylated CpG islands also act as biomarkers, as they can help distinguish cancer from normal cells in the same sample.
Colorectal CIMP was one of the first to be described. Patients in this category of cancer tend to be older, female and have a defective MLH1 function. The tumors are usually in the ascending colon. They also have a good prognostic outcome. Clinically distinct phenotypes of CIMP also suggest that there is potential for epigenetic therapy.
In diagnosis, one can identify the tumor type and tumor subtype, as well as its primary tumor when that is unknown. Hypermethylation increases with tumorigenicity, which is an indication of the prognosis of cancer. For example, high methylation is a marker for poor prognosis in lung cancer. CpG island hypermethylation shows promise for molecular monitoring of patients with cancer, and is also a potential target for therapeutic use.
Aberrations in epigenetic control that are seen in cancer pertain to DNA methylation, which can be either locus-specific DNA hypermethylation or genome-wide DNA hypomethylation. Under locus-specific DNA hypermethylation comes CpG island hypermethylation. DNA methylation acts as an alternative to genetic mutation. According to the Knudson hypothesis, cancer is a result of multiple hits to DNA, and DNA methylation can be one such hit. Epigenetic mutations such as DNA methylation are mitotically heritable, but also reversible, unlike gene mutations. The identity of hypermethylated CpG islands varies by the type of tumor. Some single gene examples include MLH1 in colorectal cancer and BRCA1 in breast cancer.
References
Methylation | CpG island hypermethylation | [
"Chemistry"
] | 1,255 | [
"Methylation"
] |
53,847,239 | https://en.wikipedia.org/wiki/Pyridinium%20perbromide | Pyridinium perbromide (also called pyridinium bromide perbromide, pyridine hydrobromide perbromide, or pyridinium tribromide) is an organic chemical composed of a pyridinium cation and a tribromide anion. It can also be considered as a complex containing pyridinium bromide—the salt of pyridine and hydrogen bromide—with an added bromine (Br2). The chemical is a solid whose reactivity is similar to that of bromine. It is thus a strong oxidizing agent used as a source of electrophilic bromine in halogenation reactions. The analogous quinoline compound behaves similarly.
Preparation
Pyridinium tribromide can be obtained by reacting pyridinium bromide with bromine or thionyl bromide.
Properties
Pyridinium tribromide is a crystalline red solid which is virtually insoluble in water.
Use
Pyridinium tribromide is used as a brominating agent of ketones, phenols, and ethers. As a stable solid, it can be more easily handled and weighed precisely, especially important properties for use in small scale reactions. One example from the original publications on this chemical is the bromination of the 3-ketosteroid 1 to 2,4-dibromocholestanone (2):
References
Oxidizing agents
Pyridinium compounds
Bromine compounds
Polyhalides | Pyridinium perbromide | [
"Chemistry"
] | 311 | [
"Redox",
"Oxidizing agents"
] |
38,128,880 | https://en.wikipedia.org/wiki/Born%20equation | The Born equation can be used for estimating the electrostatic component of Gibbs free energy of solvation of an ion. It is an electrostatic model that treats the solvent as a continuous dielectric medium (it is thus one member of a class of methods known as continuum solvation methods).
It was derived by Max Born.
ΔGsolv = −(NA z² e²)/(8π ε0 r0) × (1 − 1/εr)

where:
NA = Avogadro constant
z = charge of ion
e = elementary charge, 1.6022 × 10⁻¹⁹ C
ε0 = permittivity of free space
r0 = effective radius of ion
εr = dielectric constant of the solvent
Derivation
The energy U stored in an electrostatic field distribution is:

U = (1/2) ∫ ε0 εr E² dV

Knowing that the magnitude of the electric field of an ion in a medium of dielectric constant εr is

E = z e / (4π ε0 εr r²)

and that the volume element can be expressed as dV = 4π r² dr, the energy can be written as:

U = ∫ (from r0 to ∞) z² e² / (8π ε0 εr r²) dr = z² e² / (8π ε0 εr r0)

Thus, the energy of solvation of the ion from the gas phase (εr = 1) to a medium of dielectric constant εr is:

ΔGsolv = NA [U(εr) − U(εr = 1)] = −(NA z² e²)/(8π ε0 r0) × (1 − 1/εr)
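As an illustration, the Born estimate can be evaluated numerically; in the minimal sketch below, the ion charge, effective radius, and dielectric constant are arbitrary example values.

```python
# Minimal sketch: evaluating the Born equation for a hypothetical ion.
# The radius and dielectric constant below are illustrative values only.
import math

N_A = 6.02214076e23        # Avogadro constant, 1/mol
e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

def born_solvation_energy(z, r0, eps_r):
    """Electrostatic Gibbs energy of solvation (J/mol) from the Born equation."""
    return -(N_A * z**2 * e**2) / (8 * math.pi * eps0 * r0) * (1 - 1 / eps_r)

# Example: a monovalent cation with an effective radius of 0.2 nm in water (eps_r ~ 78)
dG = born_solvation_energy(z=1, r0=0.2e-9, eps_r=78.4)
print(f"{dG / 1000:.0f} kJ/mol")   # roughly -340 kJ/mol
```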
References
External links
aspects about this equation
Enthalpy
Max Born | Born equation | [
"Physics",
"Chemistry",
"Mathematics"
] | 211 | [
"Thermodynamic properties",
"Physical quantities",
"Quantity",
"Enthalpy",
"Physical chemistry stubs"
] |
38,131,962 | https://en.wikipedia.org/wiki/Ammonium%20diethyl%20dithiophosphate | Ammonium diethyl dithiophosphate or more systematically ammonium O,O′-diethyl dithiophosphate, is the ammonium salt of diethyl dithiophosphoric acid. It is used as a source of the (C2H5O)2PS2− ligand in coordination chemistry and in analytical chemistry for determination of various ions. It can be obtained by the reaction of phosphorus pentasulfide with ethanol and ammonia. In crystal structure of this compound the ammonium cation is connected by four charge-assisted N—H···S hydrogen bonds to four tetrahedral diethyl dithiophosphate anions.
See also
Dimethyl dithiophosphoric acid
Zinc dithiophosphate
References
Phosphorothioates
Ethyl esters
Ammonium compounds | Ammonium diethyl dithiophosphate | [
"Chemistry"
] | 170 | [
"Phosphorothioates",
"Ammonium compounds",
"Functional groups",
"Salts"
] |
55,354,878 | https://en.wikipedia.org/wiki/Zero%20degrees%20of%20freedom | In statistics, the non-central chi-squared distribution with zero degrees of freedom can be used in testing the null hypothesis that a sample is from a uniform distribution on the interval (0, 1). This distribution was introduced by Andrew F. Siegel in 1979.
The chi-squared distribution with n degrees of freedom is the probability distribution of the sum

X1² + X2² + ⋯ + Xn²

where

X1, …, Xn ~ N(0, 1) are independent standard normal random variables.

However, if

Xi ~ N(μi, 1) for i = 1, …, n,

and the Xi are independent, then the sum of squares above has a non-central chi-squared distribution with n degrees of freedom and "noncentrality parameter"

λ = μ1² + μ2² + ⋯ + μn².
It is trivial that a "central" chi-square distribution with zero degrees of freedom concentrates all probability at zero.
All of this leaves open the question of what happens with zero degrees of freedom when the noncentrality parameter is not zero.
The noncentral chi-squared distribution with zero degrees of freedom and with noncentrality parameter μ is the distribution of

Z1² + Z2² + ⋯ + Z(2K)², where K ~ Poisson(μ/2) and the Zi are independent standard normal random variables

(the sum is taken to be zero when K = 0).

This concentrates probability e^(−μ/2) at zero; thus it is a mixture of discrete and continuous distributions.
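As an illustrative check, the Poisson-mixture construction above can be simulated; the noncentrality value in the sketch below is an arbitrary example.

```python
# Sketch: sample the noncentral chi-squared distribution with zero degrees of
# freedom as a Poisson mixture of central chi-squared variables, and check
# the point mass at zero against exp(-mu/2).
import numpy as np

rng = np.random.default_rng(0)
mu = 2.0            # noncentrality parameter (example value)
n = 100_000

k = rng.poisson(mu / 2, size=n)                  # number of squared-normal terms
draws = rng.chisquare(np.maximum(2 * k, 1))      # chi-squared with 2k df (dummy df=1 where k=0)
samples = np.where(k > 0, draws, 0.0)            # the sum is zero when k == 0

print(np.mean(samples == 0))   # empirical mass at zero
print(np.exp(-mu / 2))         # theoretical mass at zero, about 0.368
```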
References
Continuous distributions
Normal distribution
Exponential family distributions
Probability distributions
Statistical hypothesis testing | Zero degrees of freedom | [
"Mathematics"
] | 218 | [
"Functions and mappings",
"Mathematical relations",
"Mathematical objects",
"Probability distributions"
] |
55,361,903 | https://en.wikipedia.org/wiki/SAP%20Converged%20Cloud | SAP Converged Cloud is a private managed cloud developed and marketed by SAP.
It is a set of cloud computing services that offer a managed private cloud based on an OpenStack-technology public cloud. SAP uses it internally for its own IT resources, creating a mix of different cloud computing environments made up of OpenStack services.
It offers compute, storage, and platform services that are accessible to SAP software.
History
In 2012, SAP promoted aspects of cloud computing. In October 2012, SAP announced a platform as a service called the SAP Cloud Platform.
In May 2013, a managed private cloud called the S/4HANA Enterprise Cloud service was announced.
SAP Converged Cloud was announced in January 2015. It is managed by the Converged Cloud unit, an SAP business unit established in 2015 and headed by Markus Riedinger.
The Converged Cloud beta went live in May 2017. It included OpenStack technology-based storage, compute and network components, as well as new shared services built by SAP; later, Designate (DNS as a Service) was added as well.
Converged Cloud characteristics and components
Converged infrastructure
SAP Converged Cloud follows the premise of Converged infrastructure: the integration of compute, storage, and networking components and technologies into self-provisioning pools of shared resources and supported by IT services. One of the benefits of a data center based on converged infrastructure is that manual tasks can be automated thus reducing the time and cost to carry them out.
Open standards
SAP Converged Cloud supports VMware hypervisors and multiple operating systems (Microsoft Windows, Red Hat, Ubuntu, etc.).
SAP Converged Cloud is compatible with public cloud platforms based on OpenStack technology, such as RackSpace and Nebula.
SAP Converged Cloud Professional Services
SAP Converged Cloud consultants work with SAP's users to help them configure SAP's products. These users receive advice on how to implement cloud in a consistent manner and how to get value from their cloud investment.
Services
SAP Cloud Marketing – Delivers information on the possible uses for cloud services and identifies opportunities to begin implementing cloud.
Cloud products
Compute is the service that can deliver a virtual server on demand. New virtual servers, or compute instances, can be brought online and customized to meet a variety of computing needs. It is built on OpenStack's open-source operating environment; a generic provisioning sketch is shown after this product list.
Object Storage
Content Delivery Network (CDN) is a webservice that delivers data from the Object Storage to users all around the world. Using a network of servers from Akamai Technologies, SAP's CDN routes content to local servers nearer to the customers.
Block Storage enables customers to store data from Compute instances. Block Storage is used in applications requiring frequent read/write access such as web applications.
Load Balancers are a managed load balancing service that allow for the automatic distribution of incoming traffic across compute resources.
DNS
Monitoring is a complementary dashboard that delivers fundamental compute and block storage metrics, providing visibility into resource utilization, application performance, and operational health.
Platform as a Service (PaaS) and Software as a Service (SaaS) Special cloud-based web services (like Applications, Databases, Multimedia, etc.).
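Because the platform is built on standard OpenStack services, compute instances of the kind described above can, in principle, be provisioned with the generic openstacksdk Python client. The sketch below is a generic OpenStack example rather than SAP-specific code; the cloud profile, image, flavor, and network names are placeholder assumptions.

```python
# Generic OpenStack provisioning sketch (names are placeholders, not SAP endpoints).
import openstack

# Connect using a cloud entry defined in clouds.yaml.
conn = openstack.connect(cloud="converged-cloud-demo")

# Look up a boot image, a flavor, and a network by name.
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Create the compute instance and wait for it to become active.
server = conn.compute.create_server(
    name="demo-server",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)
```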
See also
List of SAP products
OpenStack
References
External links
Cloud platforms
Cloud computing
Cloud infrastructure
Converged Cloud
2017 in computing | SAP Converged Cloud | [
"Technology"
] | 693 | [
"Cloud infrastructure",
"Cloud platforms",
"Computing platforms",
"IT infrastructure"
] |
55,362,020 | https://en.wikipedia.org/wiki/Afroinsectivora | Afroinsectivora is a clade of mammals that includes the macroscelideans and afrosoricidans. This clade includes the elephant shrews, golden moles, tenrecs, and otter shrews.
Phylogeny
Classification
Clade Afroinsectivora
Order Macroscelidea
Family Macroscelididae
Order Afrosoricida
Family Chrysochloridae
Family Tenrecidae
Family Potamogalidae
Genus †Widanelfarasia?
Notes
References
Afroinsectiphilia
Mammal unranked clades | Afroinsectivora | [
"Biology"
] | 116 | [
"Phylogenetics",
"Afroinsectiphilia"
] |
55,362,449 | https://en.wikipedia.org/wiki/1%2C5-Hexadiene | 1,5-Hexadiene is the organic compound with the formula (CH)(CH=CH). It is a colorless, volatile liquid. It is used as a crosslinking agent and precursor to a variety of other compounds.
Synthesis
1,5-Hexadiene is produced commercially by the ethenolysis of 1,5-cyclooctadiene:
(CH2CH=CHCH2)2 + 2 CH2=CH2 → 2 (CH2CH=CH2)2
The catalyst is derived from Re2O7 on alumina.
A laboratory-scale preparation involves reductive coupling of allyl chloride using magnesium:
2 ClCH2CH=CH2 + Mg → (CH2CH=CH2)2 + MgCl2
References
Alkadienes
Monomers | 1,5-Hexadiene | [
"Chemistry",
"Materials_science"
] | 159 | [
"Monomers",
"Polymer chemistry"
] |
55,364,722 | https://en.wikipedia.org/wiki/GW170814 | GW170814 was a gravitational wave signal from two merging black holes, detected by the LIGO and Virgo observatories on 14 August 2017. On 27 September 2017, the LIGO and Virgo collaborations announced the observation of the signal, the fourth confirmed event after GW150914, GW151226 and GW170104. It was the first binary black hole merger detected by LIGO and Virgo together.
Event detection
The signal was detected at 10:30:43 UTC. The Livingston detector was the first to receive the signal, followed by the Hanford detector 8 milliseconds later; Virgo received the signal 14 milliseconds after Livingston. The detection in all three detectors led to a very accurate estimate of the position of the source, with a 90% credible region of just 60 deg², about 20 times more accurate than before.
Astrophysical origin
Analysis indicated the signal resulted from the inspiral and merger of a pair of black holes (BBH) with about 30.5 and 25.3 times the mass of the Sun, at a distance of about 540 Mpc (1.8 billion light years) from Earth. The resulting black hole had a mass of about 53.2 solar masses, with roughly 2.7 solar masses having been radiated away as gravitational energy. The peak luminosity of GW170814 was about 3.7 × 10⁵⁶ erg/s.
Implications for general relativity
General relativity predicts that gravitational waves have a tensor-like (spin-2) polarization. The detection in all three detectors led to strong experimental evidence for pure tensor polarization over pure scalar or pure vector polarizations.
See also
Gravitational-wave astronomy
List of gravitational wave observations
References
External links
GW170814 – FactSheet – LIGO
Binary stars
Gravitational waves
August 2017
Stellar black holes
2017 in science
2017 in outer space | GW170814 | [
"Physics"
] | 363 | [
"Physical phenomena",
"Black holes",
"Stellar black holes",
"Unsolved problems in physics",
"Waves",
"Gravitational waves"
] |
55,366,444 | https://en.wikipedia.org/wiki/Zisman%20Plot | The Zisman plot the graphical method of the Zisman theory or the Zisman method for characterizing the wettability of a solid surface , named for the American chemist and geophysicist, William Albert Zisman (1905–1986). It is a prominent Sessile drop technique used for characterizing liquid-surface interactions based on the contact angle of a single drop of liquid sitting on the solid surface.
Zisman Plot
In 1964, William Zisman published an article in the ACS Advances in Chemistry series on the "Relation of the Equilibrium Contact Angle to Liquid and Solid Constitution". It was in this article that he used what is today called the Zisman plot. The Zisman plot is used to quickly give a quantitative measurement of wettability, also known as the critical surface tension, γC, of a solid surface by measuring the liquid contact angle as shown in Figure 1. Taking the cosine of this angle and graphing it against the surface tension of the liquid wetting the solid substrate yields the critical surface tension. Wettability is a measure of how well a liquid spreads and how complete the contact of the liquid is across the surface of a solid interface. A small contact angle indicates good wettability, while a large contact angle indicates poor wettability. The critical surface tension is the highest liquid surface tension that can completely wet a specific solid surface. For adhesive bonding, complete wetting is used to maximize the adhesive joint strength.
Even though this relationship is empirical and less precise than measurements with a homologous series of liquids, it is very useful because it yields a parameter of the solid surface. This method is especially used to compare and measure the critical surface tension of low-energy solids (mainly plastics) quickly and easily. Figure 4 in Zisman's 1964 article shows the critical surface tension as a measure of the wettability of polyethylene: using a variety of nonhomologous liquids, Zisman measured the critical surface tension of polyethylene to be around 35 dynes per centimeter, given by the intercept at cos θ = 1. Figure 12 in the same article shows that different solids can also be plotted on the same graph to easily compare the critical surface tensions of a variety of substrates, including very different materials such as Teflon, acid monolayers, and esters. The Zisman plot proved to be a breakthrough that provided a very efficient way to measure the wettability of a solid, and it helped to spawn the work of Dann in the late 1960s, who characterized the critical surface tensions of a variety of polymeric materials using the Zisman plot. More recently, David and Neumann investigated contact angle patterns on low-energy surfaces. Today, several variations of the Zisman plot exist, because the dependent variable, the cosine of the liquid contact angle, is unitless.
Modern Day Zisman Plot Variation
For adhesive bonding of materials, wetting of the surface, which can be measured by the contact angle, is critical to successful adhesive application. How well a liquid wets a solid surface is reflected in the contact angle the liquid makes on the solid, which is determined by the respective surface tensions of the solid and liquid. William Zisman's contribution to adhesives, the Zisman plot, has a modern variation that graphs 1 − cos(θSL) versus γL. In this variation the x-intercept gives the critical surface tension of the liquid needed to effectively wet the solid surface. Graphing the data involves two steps: first, neglect all the points near zero on the y-axis and plot an initial line of best fit to find γC; then, if a point near zero lands to the right of the initial intercept, redo the regression including that point to obtain a more accurate measurement of the critical surface tension γC. A table of variables and an example can be seen below.
Table of Variables
Example
In this example, the five liquids in Table 2 (Liquid Data) are used to find the critical wetting surface tension needed to effectively wet PC (polycarbonate) using the Zisman plot.
The liquid data from the table above are then graphed on the Zisman plot (Figure 2), with the surface tension of the liquid in dynes/cm as the independent variable and 1 − cos(θSL) as the dependent variable. There are also other variations of the Zisman plot, since the y-axis is unitless, as seen in Table 1 and as mentioned above.
Liquids 1 and 2 fully wet the surface, as shown by their low contact angles, so they should be neglected when first drawing the line of best fit to find the critical liquid surface tension needed to effectively wet the PC surface, γC, which is simply the x-intercept of the best-fit line for the Zisman plot. To find the best-fit line, a least-squares regression is recommended, using a computer program such as Microsoft Excel, Minitab, or MATLAB, or a modern graphing calculator such as a TI-84. This was done with the data from Table 1, and the fit for liquids 3, 4, and 5 can be seen in Figure 3.
The x-intercept lands at 39.5 dynes per centimeter (this can be calculated by setting y equal to zero and solving for x), which is less than the surface tension of liquid 2, 42.9 dynes per centimeter; therefore, a more accurate measurement of the critical liquid surface tension needed to effectively wet the surface of PC can be obtained by including liquid 2 when making the line of best fit, as seen in Figure 4.
The x-intercept here lands at 42.1 dynes per centimeter (this can be calculated by setting y equal to zero and solving for x), indicating the critical liquid surface tension for PC.
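The regression procedure in this example can be reproduced with a few lines of code; the contact-angle and surface-tension values in the sketch below are hypothetical placeholders standing in for Table 2, which is not reproduced here.

```python
# Sketch of the modern Zisman-plot procedure: fit 1 - cos(theta) against the
# liquid surface tension and take the x-intercept as the critical surface
# tension.  The liquid data below are hypothetical placeholders.
import numpy as np

gamma_l = np.array([25.0, 35.0, 45.0, 55.0, 65.0])    # liquid surface tensions, dynes/cm
theta_deg = np.array([0.0, 10.0, 25.0, 45.0, 60.0])   # contact angles on the solid, degrees
y = 1.0 - np.cos(np.radians(theta_deg))

# Step 1: neglect liquids that fully wet the surface (y near zero) and fit a line.
mask = y > 0.01
slope, intercept = np.polyfit(gamma_l[mask], y[mask], 1)
gamma_c = -intercept / slope                           # x-intercept = critical surface tension

# Step 2: if an excluded point lies to the right of gamma_c, refit including it.
if np.any(~mask & (gamma_l > gamma_c)):
    mask |= gamma_l > gamma_c
    slope, intercept = np.polyfit(gamma_l[mask], y[mask], 1)
    gamma_c = -intercept / slope

print(round(gamma_c, 1))
```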
Summary
William Zisman's contribution, what is today called the Zisman plot, had a major impact on adhesive bonding and surface chemistry by giving a fast, effective, and quantitative way to measure the wettability, or critical surface tension, of a solid. It spawned the work of many others over the following decades, from Dann's work in the late 1960s to David and Neumann's work in 2014. The Zisman plot is still used today; it has several variations, since the y-axis is unitless, and the intercept can be found easily and accurately using modern regression software.
See also
William Zisman
Contact angle
Adhesion
Wettability
References
Zisman, Relation of the Equilibrium Contact Angle to Liquid and Solid Constitution, January 1, 1964, Advances in Chemistry; American Chemical Society: Washington, DC, 1964. doi: 10.1021/ba-1964-0043.ch001
J.R. Dann, Critical Surface Tensions of Polymeric Solids as Determined with Polar Liquids, September 16, 1969, Journal of Colloid and Interface Science, Volume 32, No. 2, February 1970. doi: 10.1016/0021-9797(70)90054-8
David and Neumann, Contact Angle Patterns on Low-Energy Surfaces, March 26, 2013, Advances in Colloid and Interface Science, Volume 206, April 2014, Pages 46–56. doi: 10.1016/j.cis.2013.03.005
Materials testing
Surface science | Zisman Plot | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,595 | [
"Materials testing",
"Condensed matter physics",
"Surface science",
"Materials science"
] |
60,193,989 | https://en.wikipedia.org/wiki/Herbert%20Morawetz | Herbert Morawetz (October 16, 1915 – October 29, 2017) was a Czechoslovakian-American chemical engineer. He was a professor of chemistry at Polytechnic Institute of Brooklyn; now New York University. His work focused on polymer chemistry and macromolecules. He published two books: Macromolecules in Solution and Polymers and The Origins and Growth of a Science both Wiley).
Personal life
Herbert's wife, Cathleen Synge Morawetz, was a prolific mathematician at NYU. His sister Sonja Morawetz Sinclair revealed in 2017, after seven decades of secrecy, that she had been a WW2 codebreaker with Bletchley Park Signals Intelligence. He helped organize the defection of Mikhail Baryshnikov from the USSR in 1974. His brother Oskar Morawetz was a Canadian composer, and his brother John Morawetz was a Canadian businessman.
References
1915 births
2017 deaths
American chemical engineers
Engineers from Prague
Polymer chemistry
Czechoslovak emigrants to the United States
American men centenarians | Herbert Morawetz | [
"Chemistry",
"Materials_science",
"Engineering"
] | 200 | [
"Materials science",
"Polymer chemistry"
] |
45,636,825 | https://en.wikipedia.org/wiki/Erodibility | Erodability (or erodibility) is the inherent yielding or nonresistance of soils and rocks to erosion. A high erodibility implies that the same amount of work exerted by the erosion processes leads to a larger removal of material. Because the mechanics behind erosion depend upon the competence and coherence of the material, erodibility is treated in different ways depending on the type of surface that eroded.
Soils
Soil erodibility is a lumped parameter that represents an integrated annual value of the soil profile reaction to the process of soil detachment and transport by raindrops and surface flow. The most commonly used model for predicting soil loss from water erosion is the Universal Soil Loss Equation (USLE) (also known as the K-factor technique), which estimates the average annual soil loss A as:

A = R · K · L · S · C · P

where R is the rainfall erosivity factor, K is the soil erodibility, L and S are topographic factors representing length and slope, and C and P are cropping management factors.
Other factors such as the stone content (referred as stoniness), which acts as protection against soil erosion, are very significant in Mediterranean countries. The K-factor is estimated as following
K = [(2.1 × 10^−4 · M^1.14 · (12 − OM) + 3.25 (s − 2) + 2.5 (p − 3))/100] × 0.1317
M: the textural factor with M = (msilt + mvfs) * (100 - mc)
mc: clay fraction content (< 0.002 mm);
msilt : silt fraction content (0.002–0.05 mm);
mvfs : very fine sand fraction content (0.05–0.1 mm);
OM: Organic Matter content (%)
s: soil structure
p: permeability
The K-factor is expressed in the International System of Units as t ha h ha^−1 MJ^−1 mm^−1.
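For illustration, the K-factor and USLE expressions above can be evaluated directly; the soil-property values in the sketch below are hypothetical examples.

```python
# Sketch of the USLE soil-erodibility (K-factor) calculation described above.
# All input values are hypothetical examples.

def k_factor(m_silt, m_vfs, m_clay, om, s, p):
    """Soil erodibility K in SI units (t ha h ha^-1 MJ^-1 mm^-1).

    m_silt, m_vfs, m_clay: silt, very-fine-sand and clay contents in percent;
    om: organic matter content (%); s: soil structure class; p: permeability class.
    """
    m = (m_silt + m_vfs) * (100.0 - m_clay)   # textural factor
    k_us = (2.1e-4 * m**1.14 * (12.0 - om) + 3.25 * (s - 2) + 2.5 * (p - 3)) / 100.0
    return k_us * 0.1317                      # conversion to SI units

def usle_soil_loss(r, k, l, s, c, p):
    """Average annual soil loss A = R * K * L * S * C * P."""
    return r * k * l * s * c * p

k = k_factor(m_silt=35.0, m_vfs=10.0, m_clay=20.0, om=2.5, s=3, p=3)
print(round(k, 3))
```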
Rocks
Shear stress model
Geological and experimental studies have shown that the erosion of bedrock by rivers follows, to a first approximation, the following expression, known as the shear stress model of stream power erosion:

dz/dt = −Ke · τb^a

where z is the riverbed elevation, t is time, Ke is the erodibility, τb is the basal shear stress of the water flow, and a is an exponent. For a river channel with a slope S and a water depth D, τb can be expressed as:

τb = ρ g D S

where ρ is the density of water and g is the gravitational acceleration.

Note that Ke embeds not only mechanical properties inherent to the rock but also other factors unaccounted for in the previous two equations, such as the availability of river tools (pebbles being dragged by the current) that actually produce the abrasion of the riverbed.
Ke can be measured in the lab for weak rocks, but river erosion rates in natural geological scenarios are often slower than 0.1 mm/yr, and therefore the river incision must be dated over periods longer than a few thousand years to make accurate measurements. Ke values range between 10^−6 and 10^2 m yr^−1 Pa^−1.5 for a = 1.5, and between 10^−4 and 10^4 m yr^−1 Pa^−1 for a = 1. However, the hydrological conditions over these time scales are usually poorly constrained, impeding a good quantification of D.
This model can also be applied to soils. In this case, the erodibility, K, can be estimated using a hole erosion test or a jet erosion test.
Unit stream power model
An alternative model for bedrock erosion is the unit stream power model, which assumes that erosion rates are proportional to the potential energy loss of the water per unit area:

dz/dt = −kω · ω

where kω is the erodibility, and ω is the unit stream power, which is easily calculated as:

ω = ρ g Q S / W

where Q is the water discharge of the river [m³/s], and W is the width of the river channel [m].
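A small numerical sketch of the two bedrock-erosion laws above follows; the channel geometry, discharge, and erodibility values are hypothetical placeholders, not measurements from the literature.

```python
# Sketch of the shear-stress and unit-stream-power erosion laws described above.
# All parameter values are hypothetical examples.
RHO_W = 1000.0   # density of water, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def basal_shear_stress(depth, slope):
    """tau_b = rho * g * D * S, in Pa."""
    return RHO_W * G * depth * slope

def shear_stress_erosion_rate(k_e, depth, slope, a=1.5):
    """dz/dt = -Ke * tau_b^a (m/yr when Ke is given in m yr^-1 Pa^-a)."""
    return -k_e * basal_shear_stress(depth, slope) ** a

def unit_stream_power(discharge, slope, width):
    """omega = rho * g * Q * S / W, in W/m^2."""
    return RHO_W * G * discharge * slope / width

# Hypothetical channel: 2 m deep, 20 m wide, slope 0.01, discharge 50 m^3/s
tau = basal_shear_stress(2.0, 0.01)            # ~196 Pa
omega = unit_stream_power(50.0, 0.01, 20.0)    # ~245 W/m^2
incision = shear_stress_erosion_rate(1e-7, 2.0, 0.01)
print(tau, omega, incision)
```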
Relative differences in long-term erodibility can be estimated by quantifying the erosion response under similar climatic and topographic conditions with different rock lithology.
See also
Erosion
Stream power
Stream power law
References
Geomorphology
Soil mechanics | Erodibility | [
"Physics"
] | 836 | [
"Soil mechanics",
"Applied and interdisciplinary physics"
] |
45,637,989 | https://en.wikipedia.org/wiki/Biological%20functions%20of%20hydrogen%20sulfide | Hydrogen sulfide is produced in small amounts by some cells of the mammalian body and has a number of biological signaling functions. Only two other such gases are currently known: nitric oxide (NO) and carbon monoxide (CO).
The gas is produced from cysteine by the enzymes cystathionine beta-synthase and cystathionine gamma-lyase. It acts as a relaxant of smooth muscle and as a vasodilator and is also active in the brain, where it increases the response of the NMDA receptor and facilitates long term potentiation, which is involved in the formation of memory.
Eventually the gas is converted to sulfite in the mitochondria by thiosulfate reductase, and the sulfite is further oxidized to thiosulfate and sulfate by sulfite oxidase. The sulfates are excreted in the urine.
Due to its effects similar to those of nitric oxide (without its potential to form peroxides by interacting with superoxide), hydrogen sulfide is now recognized as potentially protecting against cardiovascular disease. The cardioprotective effect of garlic is caused by catabolism of the polysulfide group in allicin to H2S, a reaction that could depend on reduction mediated by glutathione.
Though both nitric oxide (NO) and hydrogen sulfide have been shown to relax blood vessels, their mechanisms of action are different: while NO activates the enzyme guanylyl cyclase, H2S activates ATP-sensitive potassium channels in smooth muscle cells. Researchers are not clear how the vessel-relaxing responsibilities are shared between nitric oxide and hydrogen sulfide. However, there exists some evidence to suggest that nitric oxide does most of the vessel-relaxing work in large vessels and hydrogen sulfide is responsible for similar action in smaller blood vessels.
Recent findings suggest strong cellular crosstalk of NO and H2S, demonstrating that the vasodilatory effects of these two gases are mutually dependent. Additionally, H2S reacts with intracellular S-nitrosothiols to form the smallest S-nitrosothiol (HSNO), and a role of hydrogen sulfide in controlling the intracellular S-nitrosothiol pool has been suggested.
Like nitric oxide, hydrogen sulfide is involved in the relaxation of smooth muscle that causes erection of the penis, presenting possible new therapy opportunities for erectile dysfunction.
Hydrogen sulfide (H2S) deficiency can be detrimental to the vascular function after an acute myocardial infarction (AMI). AMIs can lead to cardiac dysfunction through two distinct changes: increased oxidative stress via free radical accumulation and decreased NO bioavailability. Free radical accumulation occurs due to increased electron transport uncoupling at the active site of endothelial nitric oxide synthase (eNOS), an enzyme involved in converting L-arginine to NO. During an AMI, oxidative degradation of tetrahydrobiopterin (BH4), a cofactor in NO production, limits BH4 availability and limits NO production by eNOS. Instead, eNOS reacts with oxygen, another cosubstrate involved in NO production. The products of eNOS are reduced to superoxides, increasing free radical production and oxidative stress within the cells. An H2S deficiency impairs eNOS activity by limiting Akt activation and inhibiting Akt phosphorylation of the eNOSS1177 activation site. Instead, Akt activity is increased to phosphorylate the eNOST495 inhibition site, downregulating eNOS production of NO.
H2S therapy uses an H2S donor, such as diallyl trisulfide (DATS), to increase the supply of H2S to an AMI patient. H2S donors reduce myocardial injury and reperfusion complications. Increased H2S levels within the body will react with oxygen to produce sulfane sulfur, a storage intermediate for H2S. H2S pools in the body attract oxygen to react with excess H2S and eNOS to increase NO production. With increased use of oxygen to produce more NO, less oxygen is available to react with eNOS to produce superoxides during an AMI, ultimately lowering the accumulation of reactive oxygen species (ROS). Furthermore, decreased accumulation of ROS lowers oxidative stress in vascular smooth muscle cells, decreasing oxidative degeneration of BH4. The increased BH4 cofactor contributes to increased production of NO within the body. Higher concentrations of H2S directly increase eNOS activity through Akt activation to increase phosphorylation of the eNOSS1177 activation site, and decrease phosphorylation of the eNOST495 inhibition site. This phosphorylation process upregulates eNOS activity, catalyzing more conversion of L-arginine to NO. Increased NO production enables soluble guanylyl cyclase (sGC) activity, leading to an increased conversion of guanosine triphosphate (GTP) to 3',5'-cyclic guanosine monophosphate (cGMP). In H2S therapy immediately following an AMI, increased cGMP triggers an increase in protein kinase G (PKG) activity. PKG reduces intracellular Ca2+ in vascular smooth muscle to increase smooth muscle relaxation and promote blood flow. PKG also limits smooth muscle cell proliferation, reducing intima thickening following AMI injury, ultimately decreasing myocardial infarct size.
In a certain rat model of Parkinson's disease, the brain's hydrogen sulfide concentration was found to be reduced, and administering hydrogen sulfide alleviated the condition. In trisomy 21 (Down syndrome) the body produces an excess of hydrogen sulfide. Hydrogen sulfide is also involved in the disease process of type 1 diabetes. The beta cells of the pancreas in type 1 diabetes produce an excess of the gas, leading to the death of these cells and to a reduced production of insulin by those that remain.
In 2005, it was shown that mice can be put into a state of suspended animation-like hypothermia by applying a low dosage of hydrogen sulfide (81 ppm H2S) in the air. The breathing rate of the animals sank from 120 to 10 breaths per minute and their temperature fell from 37 °C to just 2 °C above ambient temperature (in effect, they had become cold-blooded). The mice survived this procedure for 6 hours and afterwards showed no negative health consequences. In 2006 it was shown that the blood pressure of mice treated in this fashion with hydrogen sulfide did not significantly decrease.
A similar process known as hibernation occurs naturally in many mammals and also in toads, but not in mice. (Mice can fall into a state called clinical torpor when food shortage occurs.) If H2S-induced hibernation can be made to work in humans, it could be useful in the emergency management of severely injured patients, and in the conservation of donated organs. In 2008, hypothermia induced by hydrogen sulfide for 48 hours was shown to reduce the extent of brain damage caused by experimental stroke in rats.
As mentioned above, hydrogen sulfide binds to cytochrome oxidase and thereby prevents oxygen from binding, which leads to the dramatic slowdown of metabolism. Animals and humans naturally produce some hydrogen sulfide in their body; researchers have proposed that the gas is used to regulate metabolic activity and body temperature, which would explain the above findings.
Two recent studies cast doubt on whether the effect can be achieved in larger mammals. A 2008 study failed to reproduce the effect in pigs, concluding that the effects seen in mice were not present in larger mammals. Likewise, a paper by Haouzi et al. noted that there is no induction of hypometabolism in sheep, either.
At the February 2010 TED conference, Mark Roth announced that hydrogen sulfide induced hypothermia in humans had completed Phase I clinical trials. The clinical trials commissioned by the company he helped found, Ikaria, were however withdrawn or terminated by August 2011.
References
Gaseous signaling molecules | Biological functions of hydrogen sulfide | [
"Chemistry"
] | 1,656 | [
"Gaseous signaling molecules",
"Signal transduction"
] |
45,638,041 | https://en.wikipedia.org/wiki/Senolytic | A senolytic (from the words senescence and -lytic, "destroying") is among a class of small molecules under basic research to determine if they can selectively induce death of senescent cells and improve health in humans. A goal of this research is to discover or develop agents to delay, prevent, alleviate, or reverse age-related diseases. Removal of senescent cells with senolytics has been proposed as a method of enhancing immunity during aging.
A related concept is "senostatic", which means to suppress senescence.
Research
Possible senolytic agents are under preliminary research, including some which are in early-stage human trials. The majority of candidate senolytic compounds are repurposed anti-cancer molecules, such as the chemotherapeutic drug dasatinib and the experimental small molecule navitoclax.
Soluble urokinase plasminogen activator surface receptor has been found to be highly expressed on senescent cells, leading researchers to use chimeric antigen receptor T cells to eliminate senescent cells in mice.
According to reviews, it is thought that senolytics can be administered intermittently while being as effective as continuous administration. This could be an advantage of senolytic drugs and decrease adverse effects, for instance circumventing potential off-target effects.
Recently, artificial intelligence has been used to discover new senolytics, resulting in the identification of structurally distinct senolytic compounds with more favorable medicinal chemistry properties than previous senolytic candidates.
Senolytic candidates
Senomorphics
Senolytics eliminate senescent cells whereas senomorphics – with candidates such as Apigenin, Rapamycin and rapalog Everolimus – modulate properties of senescent cells without eliminating them, suppressing phenotypes of senescence, including the SASP.
See also
Autophagy
Biogerontology
Geroprotector
Hsp90
Immunosenescence
Invariant NKT (iNKT) cells
Life extension
Senescence-associated beta-galactosidase, used as a biomarker
Senotherapy
Sirtuin-activating compound
Unity Biotechnology
Venetoclax
References
Further reading
, a review that is open access and features a list of senolytics candidates
Senescence in non-human organisms
Senescence
Anti-aging substances | Senolytic | [
"Chemistry",
"Biology"
] | 484 | [
"Anti-aging substances",
"Senescence",
"Senescence in non-human organisms",
"Cellular processes",
"Metabolism"
] |
45,639,003 | https://en.wikipedia.org/wiki/Illusion%20optics | Illusion optics is an electromagnetic theory that can change the optical appearance of an object to be exactly like that of another virtual object, i.e. an illusion, such as turning the look of an apple into that of a banana. Invisibility is a special case of illusion optics, which turns objects into illusions of free space. The concept and numerical proof of illusion optics was proposed in 2009 based on transformation optics in the field of metamaterials. It is a scientific disproof of the idiom 'Seeing is Believing'.
Illusion optics proves that the optical responses or properties of a space containing any objects can be changed to be exactly those of a virtual space containing arbitrary virtual objects (illusions) by using a passive illusion optics device composed of materials or metamaterials with specific parameters and shape. For example, in the seminal paper a dielectric spoon was numerically shown to exhibit the scattering properties of a metallic cup by using an illusion optics device. Such illusion effects do not rely on the direction and form of the incident waves. However, due to the dispersion limitations of the required material parameters, an illusion optics device only works in a narrow band of frequencies.
Difference with optical illusions
Unlike optical illusions, which exploit the misinterpretations of the human brain to create an illusory perception different from the physical measurement, illusion optics changes the optical response or properties of objects. Illusion optics devices make these changes happen. Although both terms deal with illusions, illusion optics deals with the refraction and reflection of light, whereas optical illusions are essentially mind tricks.
History
Illusion optics was first recorded in 1968, when Soviet physicist Victor Veselago discovered that he could make objects appear in different locations by using a negatively refracting flat slab. When light is negatively refracted, it is bent back towards the side from which it entered, emerging on the same side of the surface normal as the incident ray, whereas in normal refraction the light crosses to the opposite side of the normal. Veselago used this theory to work the slab into a lens, which he described in his work. He found that, unlike with a normal lens, an object's resolution does not depend on the wavelength limits of the light passing through the lens.
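The direction reversal described above follows from Snell's law when the refractive index of the slab is taken as negative. The short worked example below is a standard textbook illustration rather than a result from Veselago's paper, and the particular index and angle values are chosen only for illustration. Snell's law states that

n_1 \sin\theta_1 = n_2 \sin\theta_2.

For light entering from air (n_1 = 1) at an angle of incidence \theta_1 = 30^\circ into a slab with n_2 = -1, this gives \sin\theta_2 = \sin 30^\circ / (-1) = -0.5, so \theta_2 = -30^\circ. The negative angle means the refracted ray lies on the same side of the surface normal as the incident ray, which is the negative refraction that allows a flat Veselago slab to refocus light like a lens.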
Veselago's work has become more prominent in recent years due to advances in metamaterials, which are engineered materials with special internal physical properties and the ability to refract light negatively.
Devices
An illusion device is central to how illusion optics works; without a device there is no way to define how light is refracted and deflected. Based on a study of circular objects with illusion-optic properties (i.e., negative refractive indices), an illusion device has three basic elements: the invisibility cloak, the real object, and the illusion object.
The invisibility cloak is essentially the medium in which the light waves are refracted. Invisibility cloaks allow an object to go undetected while it is confined within the area of the cloak; in other words, the viewer does not see the real object. In illusion optics, devices are not limited to invisibility cloaks. For example, in Veselago's experiments, a lens was used to steer the eye away from the real object and direct it towards the illusion object.
The real object is the object whose light is being refracted. In this case, while the real object is under the invisibility cloak, light waves are directed around it so that the viewer only sees past the cloak. In Veselago's experiments, light from the real object is refracted so that the viewer sees a mirrored view of it.
The illusion object is what the light waves come together to produce, and is what the viewer perceives as "normal." The invisibility cloak refracts the reflected background light around the object and directs it towards the viewer, so the viewer perceives only the background. In Veselago's experiments, the illusion object is displayed, but it is only an image and not the real object.
Metamaterials
Artificial metamaterials are important to how illusion optic devices are created. The properties of these materials, which have negative permittivity and negative permeability, allow them to bend light waves negatively. An illusion device uses two pieces of metamaterial with different properties: the complementary medium and the restoring medium. The complementary medium is the illusion medium used to scatter wavelengths away from the object being refracted. The restoring medium focuses waves and directs the scattered waves back together.
Transformation optics is important to the creation of metamaterials. The internal geometry used in this field is crucial to achieving the required material properties.
References
Optical illusions
Physical optics
Metamaterials | Illusion optics | [
"Physics",
"Materials_science",
"Engineering"
] | 952 | [
"Physical phenomena",
"Optical illusions",
"Metamaterials",
"Materials science",
"Optical phenomena"
] |
45,640,647 | https://en.wikipedia.org/wiki/History%20of%20modern%20period%20domes | Domes built in the 19th, 20th, and 21st centuries benefited from more efficient techniques for producing iron and steel as well as advances in structural analysis.
Metal-framed domes of the 19th century often imitated earlier masonry dome designs in a variety of styles, especially in church architecture, but were also used to create glass domes over shopping arcades and hothouses, domes over locomotive sheds and exhibition halls, and domes larger than any others in the world. The variety of domed buildings, such as parliaments and capitol buildings, gasometers, observatories, libraries, and churches, was enabled by the use of reinforced concrete ribs, lightweight papier-mâché, and triangulated framing.
In the 20th century, planetarium domes spurred the invention by Walther Bauersfeld of both thin shells of reinforced concrete and geodesic domes. The use of steel, computers, and finite element analysis enabled yet larger spans. Tension membrane structure became popular for domed sports stadiums, which also innovated with rigid retractable domed roofs.
Nineteenth century
Developments
Materials
New production techniques allowed for cast iron and wrought iron to be produced both in larger quantities and at relatively low prices during the Industrial Revolution. Most iron domes were built with curved iron ribs arranged radially from the top of the dome to a ring at the base. The material of choice for domes changed over the course of the 19th century from cast iron to wrought iron to steel. Excluding domes that simply imitated multi-shell masonry, the century's chief development of the simple domed form may be metal framed domes such as the circular dome of Halle au Blé in Paris and the elliptical dome of Royal Albert Hall in London.
The practice of building rotating domes for housing large telescopes became popular in the 19th century, with early examples using papier-mâché to minimize weight.
Beginning in the late 19th century, the Guastavino family, a father and son team who worked on the eastern seaboard of the United States, further developed the masonry dome. They perfected a traditional Spanish and Italian technique for light, center-less vaulting using layers of tiles in fast-setting cement set flat against the surface of the curve, rather than perpendicular to it. The father, Rafael Guastavino, innovated with the use of Portland cement as the mortar, rather than the traditional lime and gypsum mortars, which allowed mild steel bar to be used to counteract tension forces.
Although domes made entirely from reinforced concrete were not built before 1900, the church of Saint-Jean-de-Montmartre was designed by Anatole de Baudot with a small brick shell dome with reinforced concrete ribs. A building in Munich, Germany, was built between 1894 and 1897 with a dome of two lightweight concrete shells, using reinforcing rings only in the underlying octagonal tambour. The 11.2 meter wide hemispherical inner dome is 15 cm thick with eight ribs on its outer surface increasing that thickness to 29 cm. The outer octagonal cloister vault is 11.8 meters wide and 16 cm thick. An artificial porous stone was used as aggregate. Neither dome included reinforcement, although iron bars have been detected at irregular distances in the outer dome that may have been part of the formwork needed for the building process, and which may act as partial reinforcement. Other concrete domes at this time included a music pavilion in Hoppegarten (1887–1888), the Monier-dome of the mausoleum of Emperor Frederick III in Potsdam (1889), and a dome above the foyer of the Brunner banking house in Brussels (1892–1895).
Structure
Proportional rules for an arch's thickness to span ratio were developed during the 19th century, based on catenary shape changes in response to weight loads, and these were applied to the vertical forces in domes. Edmund Beckett Denison, who had published a proof on the subject in 1871, wrote in a Domes article in the Ninth Edition of the Encyclopædia Britannica that the thickness to span ratio was lower for a dome than it was for an arch due to the more distributed loads of a dome. Ideas on linear elasticity were formalized in the 19th century.
The span of the ancient Pantheon dome, although matched during the Renaissance, remained the largest in the world until the middle of the 19th century. The large domes of the 19th century included exhibition buildings and functional structures such as gasometers and locomotive sheds.
Domes made of radial trusses were analyzed with a "plane frame" approach, rather than considering three dimensions, until an 1863 Berlin gasometer dome design by engineer Johann Wilhelm Schwedler that became known as the "Schwedler dome". He published the theory behind five such domes and a structural calculation technique in 1866. Schwedler's work on these axially symmetric shells was expanded by August Föppl in 1892 to apply to "other shell-type truss frameworks". By the 1860s and 1870s, German and other European engineers began to treat iron domes as collections of short straight beams with hinged ends, resulting in light openwork structures. Other than in glasshouses, these structures were usually hidden behind ceilings. Dome types that used lengths of rolled steel with riveted joints included "Schwedler domes", "Zimmermann domes", "lattice domes", and "Schlink domes".
According to Irene Giustina, dome construction was one of the most challenging architectural problems until at least the end of the 19th century, due to a lack of knowledge about statics. Rafael Guastavino's use of the recent development of graphic statics enabled him to design and build inexpensive funicular domes with minimal thickness and no scaffolding. The vaults were typically 3 inches thick and workers, standing on the completed portions, used simple templates, wires, and strings to align their work.
Style
The historicism of the 19th century led to many domes being re-translations of the great domes of the past, rather than further stylistic developments, especially in sacred architecture. The Neoclassical style popular at this time was challenged in the middle of the 19th century by a Gothic Revival in architecture, in what has been termed the "Battle of the Styles". This lasted from about 1840 to the beginning of the 20th century, with various styles within Classicism, such as Renaissance, Baroque, and Rococo revivals, also vying for popularity. The last three decades of this period included unusual combinations of these styles.
Religious and royal buildings
The rotunda dome of the Church of the Holy Sepulchre in Jerusalem was replaced from 1808 to 1810 after a fire and replaced again from 1868 to 1870. The dome completed in 1870 was a Russian design made with wrought iron arches.
Iron domes offered the lightness of timber construction along with incombustibility and higher strength, allowing for larger spans. Because domes themselves were relatively rare, the first examples made from iron date well after iron began to be used as a structural material. Iron was used in place of wood where fire resistance was a priority. In Russia, which had large supplies of iron, some of the earliest examples of the material's architectural use can be found. Andrey Voronikhin built a large wrought iron dome over Kazan Cathedral in Saint Petersburg. Built between 1806 and 1811, the 17.7 meter wide outer dome of the cathedral was one of the earliest iron domes. The iron outer dome covers two masonry inner domes and is made of 15 mm thick sheets set end to end.
St. George's Church in Edinburgh was built from 1811 to 1814 by Robert Reid with a dome modeled after that of St. Paul's Cathedral. An early example of an iron dome in Britain is the fanciful iron-framed dome over the central building of the Royal Pavilion in Brighton, begun in 1815 by John Nash, the personal architect of King George IV. The dome was not one of the prominent onion domes but instead the dome-like structure of twelve cast iron ribs resting on cast iron columns over Henry Holland's earlier saloon. It was completed in 1818–1819.
The neoclassical Baltimore Basilica, designed by Benjamin Henry Latrobe like the Roman Pantheon for Bishop John Carroll, was begun in 1806 and dedicated in 1821, although the porch and towers would not be completed until the 1870s. An influence on the interior design may have been the Church of St. Mary in East Lulworth, England, where Bishop Carroll had been consecrated. The central dome is 72 feet in diameter and 52 feet above the nave floor. The onion domes over the two towers were built according to Latrobe's designs. The church was extended to the east by 33 feet in 1890. Before initial construction of the church was completed, two other neoclassical domed churches would be built in Baltimore. The First Independent (Unitarian) Church by Maximilian Godefroy was begun in 1817 and covered the interior space with a 55 foot wide shallow coffered dome on pendentives with an oculus at the center. To improve acoustics, the interior was modified. The First Baptist Church by Robert Mills, also known as "Old Red Top Church", was a domed cylindrical rotunda with a porch block and portico. The dome had a shallow exterior profile and its oculus was covered by a low lantern, called a monitor. It was completed in 1818 but demolished in 1878.
In 1828, the eastern crossing tower of Mainz Cathedral was rebuilt by Georg Moller with a wrought iron dome. The dome was made of flat iron sections and reinforced with ties that passed through the interior of the dome. Such dome reinforcement was one of the two established techniques, the other being the use of a combination of horizontal rings and vertical ribs. The span may have been about 27 meters. It was later removed in favor of the current structure.
The Altes Museum in Berlin, built in 1828 by Karl Schinkel, included a dome in its entrance hall inspired by the Roman Pantheon.
Large neoclassical domes include the Rotunda of Mosta in Malta, completed in 1840 with a dome 38 meters wide, and San Carlo al Corso in Milan, completed in 1847 with a dome 32 meters wide.
In Galicia, a Basilian monastic church was built from 1834 to 1842 as a large domed rotunda with four rectangular annexes in a cruciform plan, combining the central plan popular with classicist trends in Central Europe with the cross-domed plans held to be characteristic of Eastern Orthodox architecture. Ruthenian Greek Catholic churches were built either as tripartite churches with a dome over each of the three parts, such as one from 1863–1864, or on cross-domed plans, such as one from 1855. Beginning around 1883, Vasyl Nahirnyi began to combine these traditional forms with the Neo-Byzantine style of Theophil Hansen, "borrowing the motifs of umbrella dome, portico, multi-mullioned windows, and arcaded friezes". He built more than 200 churches in Galicia, establishing a uniformity among Greek Catholic churches there to the extent of influencing the work of other architects. Examples include his Greek Catholic parish churches of Kuryłówka (1895) and Nowy Lubliniec (1898). Seeking a more original design, at least one church building committee commissioned another architect to build its church.
Saint Isaac's Cathedral, in Saint Petersburg, was built by 1842 with one of the largest domes in Europe. A cast iron dome nearly 26 meters wide, it had a technically advanced triple-shell design with iron trusses reminiscent of St. Paul's Cathedral in London. The design for the cathedral was begun after the defeat of Napoleon in 1815 and given to a French architect, but construction was delayed. Although the dome was originally designed to be masonry, cast iron was used instead.
Also reminiscent of St. Paul's dome and that of the Panthéon in Paris, both of which the original designer had visited, the dome of St. Nicholas' Church in Potsdam was added to the building from 1843 to 1849. A dome was included as a possibility in the original late Neoclassical design of 1830, but as a wooden construction. Iron was used instead by the later architects.
Other examples of framed iron domes include those of a synagogue in Berlin, by Schwedler in 1863, and the Bode Museum by Muller-Breslau in 1907.
The wrought-iron dome of Royal Albert Hall in London was built from 1867 to 1871 over an elliptical plan by architect Henry Young Darracott Scott and structural design by Rowland Mason Ordish. It uses a set of curved trusses, like those of the earlier New Street Station in Birmingham, interrupted in the middle by a drum. The elliptical dome's span is 66.9 meters by 56.5 meters.
The wrought-iron dome of St. Augustin's church in Paris dates from 1870 and spans 25.2 meters. A wrought-iron dome was also built over Jerusalem's Holy Sepulchre in 1870, spanning 23 meters.
The dome over the Basilica of San Gaudenzio (begun in 1577) in Novara, Italy, was built between 1844 and 1880. Revisions by the architect during construction transformed what was initially going to be a drum, hemispherical dome, and lantern 42.22 meters tall into a structure with two superimposed drums, an ogival dome, and a thirty meter tall spire reaching 117.5 meters. The architect, Alessandro Antonelli, who also built the Mole Antonelliana in Turin, Italy, combined Neoclassical forms with the vertical emphasis of the Gothic style.
A large dome was built in 1881–1882 over the circular courtyard of the Devonshire Royal Hospital in England with a diameter of 156 feet. It used radial trussed ribs with no diagonal ties.
Pavia Cathedral, a building started in 1488, was completed with a large octagonal dome joined to the basilica plan of the church.
Commercial buildings
Although iron production in France lagged behind Britain, the government was eager to foster the development of its domestic iron industry. In 1808, the government of Napoleon approved a plan to replace the burnt down wooden dome of the Halle au Blé granary in Paris with a dome of iron and glass, the "earliest example of metal with glass in a dome". The dome was 37 meters in diameter and used 51 cast iron ribs to converge on a wrought iron compression ring 11 meters wide containing a glass and wrought iron skylight. The outer surface of the dome was covered with copper, with additional windows cut near the dome's base to admit more light during an 1838 modification. Cast-iron domes were particularly popular in France.
In the United States, an 1815 commission to build the Baltimore Exchange and Custom House was awarded to Benjamin Henry Latrobe and Maximilian Godefroy for their design featuring a prominent central dome. The dome design was altered during construction to raise its height to 115 feet by adding a tall drum and work was completed in 1822. Signals from an observatory on Federal Hill were received at an observation post in the dome, providing early notice of arriving merchant vessels. The building was demolished in 1901–2.
The Coal Exchange in London, by James Bunning from 1847 to 1849, included a dome 18 meters wide made from 32 iron ribs cast as single pieces. It was demolished in the early 1960s.
Large temporary domes were built in 1862 for London's International Exhibition Building, spanning 48.8 meters. The Leeds Corn Exchange, built in 1862 by Cuthbert Brodrick, features an elliptical plan dome 38.9 meters by 26.7 meters with wrought iron ribs along the long axis that radiate from the ends and others spanning the short axis that run parallel to each other, forming a grid pattern.
Elaborate covered shopping arcades, such as the Galleria Vittorio Emanuele II in Milan and the Galleria Umberto I in Naples, included large glazed domes at their cross intersections. The dome of the Galleria Vittorio Emanuele II (1863–1867) rises to 145 feet above the ground and has the same span as the dome of St. Peter's Basilica, with sixteen iron ribs over an octagonal space at the intersection of two covered streets. It is named after the first king of a united Italy.
The central market hall in Leipzig was built by 1891 with the first application of the "lattice dome" roof system developed by August Föppl from 1883. The dome covered an irregular pentagonal plan and was about 20 meters wide and 6.8 meters high.
Vladimir Shukhov was an early pioneer of what would later be called gridshell structures and in 1897 he employed them in domed exhibit pavilions at the All-Russia Industrial and Art Exhibition.
The dome of Sydney's Queen Victoria Building uses radial ribs of steel along with redundant diagonal bracing to span 20 meters. It was claimed to be the largest dome in the Southern Hemisphere when completed in 1898.
Greenhouses and conservatories
Iron and glass glasshouses with curved roofs were popular for a few decades beginning shortly before 1820 to maximize orthogonality to the sun's rays, although only a few have domes. The conservatory at Syon Park was one of the earliest and included a 10.8 meter span iron and glass dome by Charles Fowler built between 1820 and 1827. The glass panes are set in panels joined by copper or brass ribs between the 23 main cast iron ribs. Another example was the conservatory at Bretton Hall in Yorkshire, completed in 1827 but demolished in 1832 upon the death of the owner. It had a 16 meter wide central dome of thin wrought iron ribs and narrow glass panes on a cast iron ring and iron columns. The glass acted as lateral support for the iron ribs.
The Antheum at Brighton would have had the largest span dome in the world in 1833 at 50 meters but the circular cast-iron dome collapsed when the scaffolding was removed. It had been built for horticulturalist Henry Phillips.
Unique glass domes springing straight from ground level were used for hothouses and winter gardens, such as the Palm house at Kew (1844–48) and the Laeken winter garden near Brussels (1875–1876). The Laeken dome spans the central 40 meters of the circular building, resting on a ring of columns. The Kibble Palace of 1865 was re-erected in 1873 in an enlarged form with a 16 meter wide central dome on columns. The Palm House at Sefton Park in Liverpool has an octagonal central dome, also 16 meters wide and on columns, completed in 1896.
Libraries
The domed rotunda building of the University of Virginia was designed by Thomas Jefferson and completed in 1836.
The British Museum Library constructed a new reading room in the courtyard of its museum building between 1854 and 1857. The round room, about 42.6 meters in diameter and inspired by the Pantheon, was surmounted by a dome with a ring of windows at the base and an oculus at the top. Hidden iron framing supported a suspended ceiling made of papier-mâché.
For the reading room of Paris' Bibliothèque Impériale, Henri Labrouste proposed in 1858 an iron-supported domed ceiling with a single central source of light similar to the British reading room, but changed the design due to concerns about insufficient light for readers. His completed 1869 design was a grid of nine domes, each with an oculus, supported by 16 thin cast iron columns, four of which were free-standing under the central dome. The domes themselves, supported on iron arches, were covered in white ceramic panels nine millimeters thick.
Inspired by the prestigious British Museum reading room, the first iron dome in Canada was built in the early 1870s over the reading room of the Library of Parliament building in Ottawa. Unlike the British Museum room, the library, which opened in 1876, uses the Gothic style.
The dome of the Thomas Jefferson Building of the Library of Congress, also inspired by the reading room dome at the British Museum, was built between 1889 and 1897 in a classical style. It is 100 feet wide and rises 195 feet above the floor on eight piers. The dome has a relatively low external profile to avoid overshadowing the nearby United States Capitol dome.
The Boston Public Library (1887–1898) includes dome vaulting by Rafael Guastavino.
Governmental buildings
The New Hampshire State House, built from 1816 to 1819, featured a dome crowned by a gilded eagle. When the dome was replaced after an 1864 project to double the size of the building, the eagle was transferred to the new dome.
The design for the United States' national capitol building approved by George Washington included a dome modeled on the Pantheon, with a low exterior elevation. Subsequent design revisions resulted in a double dome, with a raised external profile on an octagonal drum, and construction did not begin until 1822. The interior dome was built of stone and brick except for the upper third, which was made of wood. The exterior dome was wooden and covered with copper sheeting. The dome and building were completed by Charles Bulfinch in 1829.
Most of the 50 state capitol buildings or statehouses with domes in the United States cover a central rotunda, or hall of the people, due to the use of a bicameral legislature. The Pennsylvania capitol building designed by Stephen Hills in Harrisburg was the earliest to combine all the elements that would subsequently become characteristic of state capitol buildings: dome, rotunda, portico, and two legislative chambers. Like the design of the national capitol, the design was chosen through a formal competition. Early domed state capitol buildings include those of North Carolina (as remodeled by William Nichols), Alabama (in Tuscaloosa), Mississippi, Maine (1832), Kentucky, Connecticut (in New Haven), Indiana, North Carolina (as rebuilt), Missouri (very similar to Hills' Harrisburg design), Minnesota (later rebuilt), Texas, and Vermont (1832).
The current dome over the United States Capitol building, although painted white and crowning a masonry building, is made of cast iron. The dome was built between 1855 and 1866, replacing a lower wooden dome with copper roofing from 1824. It has a 30-meter diameter. It was completed just two years after the Old St. Louis County Courthouse, which has the first cast iron dome built in the United States. The initial design of the capitol dome was influenced by a number of European church domes, particularly St. Paul's in London, St. Peter's in Rome, the Panthéon in Paris, Les Invalides in Paris, and St. Isaac's Cathedral in St. Petersburg. The architect, Thomas U. Walter, designed a double dome interior based on that of the Panthéon in Paris.
Dome construction for state capitol buildings and county courthouses in the United States flourished in the period between the American Civil War and World War I. Most capitols built between 1864 and 1893 were landmarks for their cities and had gilded domes. Examples from the Gilded Age include those of California, Kansas, Connecticut, Colorado, Idaho, Indiana, Iowa, Wyoming, Michigan, Texas, and Georgia. Many American state capitol building domes were built in the late 19th or early 20th century in the American Renaissance style and cover rotundas open to the public as commemorative spaces. Examples include the Indiana State House, Texas State Capitol, and the Wisconsin State Capitol. American Renaissance capitols also include those of Rhode Island and Minnesota.
The Reichstag Palace, built between 1883 and 1893 to house the Parliament of the new German Empire, included a dome made of iron and glass as part of its unusual mixture of Renaissance and Baroque components. Controversially, the 74 meter tall dome stood seven meters taller than the dome of the Imperial Palace in the city, drawing criticism from Kaiser Wilhelm II. Hermann Zimmermann assisted the architect Paul Wallot in 1889, inventing the spatial framework for the dome over the plenary chamber. It is known as the "Zimmermann dome".
The Hungarian Parliament Building was built in the Gothic style, although most of the 1882 design competition entries used Neo-Renaissance, and it includes a domed central hall. The large, ribbed, egg-shaped dome topped with a spire was influenced by the dome of the Maria vom Siege church in Vienna. It has a sixteen sided outer shell with an iron skeleton that rises 96 meters high, and an inner shell star vault supported on sixteen stone pillars. The Dome Hall is used to display the coronation crown of Hungary and statuary of monarchs and statesmen. The dome was structurally complete by the end of 1895.
Industrial buildings
The "first fully triangulated framed dome" was built in Berlin in 1863 by Johann Wilhelm Schwedler in a gasometer for the Imperial Continental Gas Association and, by the start of the 20th century, similarly triangulated frame domes had become fairly common. Schwedler built three wrought-iron domes over gasholders in Berlin between 1876 and 1882 with spans of 54.9 meters, one of which survives. Six similar Schwedler-type domes were used over gasholders in Leipzig beginning in 1885 and in Vienna using steel, in the 1890s. Rather than using traditional iron ribs, the domes consist of a thinner arrangement of short straight iron bars connected with pin joints in a lattice shell, with cross-bracing provided by light iron rods.
Tombs
The dome of Grant's Tomb in New York City was built by Rafael Guastavino in 1890.
Twentieth century
Developments
American state capitol domes built in the twentieth century include those of Arizona, Mississippi, Pennsylvania, Wisconsin, Idaho, Kentucky, Utah, Washington, Missouri, and West Virginia. The West Virginia capitol building has been called the last American Renaissance capitol.
Early Modernist architecture, characterized by "geometrization of architectural detail", includes the domed Greek Catholic parish churches of Čemerné (1905–1907) and Jakubany (1911) in Slovakia. The Greek Catholic church in Jarosław, Poland (1902–1907), and the one in Surochów, Poland (1912–1914), have simplified geometry that attempts to blend traditional and modern styles, an effort interrupted by World War I and the breakup of the Habsburg monarchy.
Wooden domes in thin-wall shells on ribs were made until the 1930s. After World War II, steel and wooden laminate structural members made with waterproof resorcinol glues were used to create domes with grid-patterned wooden support structures, such as the 100 meter diameter Skydome in Flagstaff, Arizona. Glued laminated wooden structures were also used in 1983 to create the 160 meter Tacoma Dome, in 1990 to create the 160 meter Superior Dome, and in 1997 to create the 178 meter Nipro Hachiko Dome.
Stand-alone dome structures were used to house public utility facilities in the 20th century. The "Fitzpatrick dome", designed by John Fitzpatrick as an inexpensive structure to store winter road service sand and salt, has been used in countries around the world. The first was built in 1968. The domes have twenty sides and are normally 100 feet in diameter and a little more than 50 feet tall. The conical shape is meant to conform to the 45 degree slope of a pile of wet sand. They are built on concrete footings and covered with asphalt shingles.
Guastavino tile
The Guastavino family, a father and son team who worked on the eastern seaboard of the United States, built vaults using layers of tiles in hundreds of buildings in the late 19th and early 20th centuries, including the domes of the Basilica of St. Lawrence in Asheville, North Carolina, and St. Francis de Sales Roman Catholic Church in Philadelphia, Pennsylvania. The dome over the crossing of the Cathedral of St. John the Divine in New York City was built by the son in 1909. A part-spherical dome, it measures 30 meters in diameter from the top of its merging pendentives, where steel rods embedded in concrete act as a restraining ring. With an average thickness 1/250th of its span, and steel rods also embedded within the pendentives, the dome "looked forward to modern shell construction in reinforced concrete."
Reinforced concrete
Domes built with steel and concrete were able to achieve very large spans. The 1911 dome of the Melbourne Public Library reading room, presumably inspired by the British Museum, had a diameter of 31.5 meters and was briefly the widest reinforced concrete dome in the world until the completion of the Centennial Hall. The Centennial Hall was built with reinforced concrete in Breslau, Germany (today Poland), from 1911–13 to commemorate the 100-year anniversary of the uprising against Napoleon. With a 213 foot wide central dome surrounded by stepped rings of vertical windows, it was the largest building of its kind in the world. Other examples of ribbed domes made entirely of reinforced concrete include the Methodist Hall in Westminster, London, the Augsburg Synagogue, and the Orpheum Theater in Bochum. The 1928 Leipzig Market Hall by Deschinger and Ritter featured two 82 meter wide domes.
The thin domical shell was further developed with the construction of two domes in Jena, Germany in the early 1920s. To build a rigid planetarium dome, Walther Bauersfeld constructed a triangulated frame of light steel bars and mesh with a domed formwork suspended below it. By spraying a thin layer of concrete onto both the formwork and the frame, he created a 16 meter wide dome that was just 30 millimeters thick. The second dome was still thinner at 40 meters wide and 60 millimeters thick. These are generally taken to be the first modern architectural thin shells. These are also considered the first geodesic domes. Beginning with one for the Deutsches Museum in Munich, 15 domed projection planetariums using concrete shells up to 30 meters wide had been built in Europe by 1930, and that year the Adler Planetarium in Chicago became the first planetarium to open in the Western Hemisphere. Planetarium domes required a hemispherical surface for their projections, but most 20th century shell domes were shallow to reduce the material costs, simplify construction, and reduce the volume of air needing to be heated.
In India, the Viceroy's House in New Delhi was designed in 1912–1913 by Edwin Lutyens with a dome.
Although an equation for the bending theory of a thick spherical shell had been published in 1912, based on general equations from 1888, it was too complex for practical design work. A simplified and more approximate theory for domes was published in 1926 in Berlin. The theory was tested using sheet metal models with the conclusion that the membrane stresses in domes are small with little reinforcement required, especially at the top, where openings could be cut for light. Only the concentrated stresses at point supports required heavy reinforcement. Early examples used a relatively thick bordering girder to stabilize exposed edges. Alternative stabilization techniques include adding a bend at these edges to stiffen them or increasing the thickness of the shell itself at the edges and near the supports. In 1933–34, Spanish engineer-architect Eduardo Torroja, with Manuel Sanchez, designed the Market Hall in Algeciras, Spain, with a thin shell concrete dome. The shallow dome is 48 meters wide, 9 centimeters thick, and supported at points around its perimeter. The indoor stadium for the 1936 Olympic Games in Berlin used an oval dome of concrete shell 35 meters wide and 45 meters long.
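The membrane theory referred to above has a simple closed-form solution for an idealized spherical dome carrying only its own weight. The formulas below are the standard textbook result rather than anything taken from the 1926 publication, and the symbols are defined here only for illustration: q is the self-weight per unit of shell surface, R the radius of the sphere, and \varphi the angle measured down from the crown. The meridional and hoop membrane forces per unit length are

N_\varphi = -\frac{qR}{1+\cos\varphi}, \qquad N_\theta = qR\left(\frac{1}{1+\cos\varphi} - \cos\varphi\right).

Both forces are compressive near the crown, and the hoop force N_\theta only turns tensile below roughly \varphi \approx 52^\circ, so a shallow cap that stops short of this angle carries its own weight almost entirely in compression and needs significant reinforcement mainly where concentrated forces enter at the edges and supports, in line with the conclusions described above.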
The use of metal structures in Italy was reduced in the first half of the 20th century by autarchy and the demands of the world wars. Steel became broadly used in building construction in the 1930s. A shortage of steel following World War II and the demonstrated vulnerability of exposed steel to damage from intense fires during the war may have contributed to the popularity of concrete architectural shells beginning in the late 1940s. In the 1960s, improvements in welding and bolting techniques and higher labor costs made steel frames more economical.
In 1940, California architect Wallace Neff built a 30-meter (100 foot) dome using an inflated balloon of sailcloth as formwork. The balloon was made airtight by wetting, inflated, then supported steel reinforcement as concrete was sprayed onto it in layers. This technique was used to build "bubble houses" in Florida.
Popularized by a 1955 article on the work of Félix Candela in Mexico, architectural shells had their heyday in the 1950s and 1960s, peaking in popularity shortly before the widespread adoption of computers and the finite element method of structural analysis. Notable examples of domes include the Kresge Auditorium at MIT, which has a spherical shell 49 meters wide and 89 millimeters thick, and the Palazzetto dello Sport, with a 59 meter wide dome designed by Pier Luigi Nervi.
Built from 1955 to 1957, the prestressed concrete dome of the main exhibition hall of the Belgrade Fair has a span of 106 meters. It was designed by Branko Žeželj, using a pre-stressing system developed by him, and was the largest dome in the world until 1965. It remains the largest dome in Europe.
In the 1960s, Italian architect Dante Bini developed an inflatable formwork system using a nylon-reinforced neoprene spherical balloon. First, a concrete floor slab and ring beam was poured. The ring beam included voids for air inlets and outlets and an inflatable tube that held the balloon membrane in place. The balloon was laid out uninflated over the floor slab and secured at the ring beam, reinforcement bars spaced with springs were laid on top, the concrete was applied, an outer membrane of PVC was laid over the concrete, then the balloon was inflated and lifted the material into the dome shape. After inflation, the concrete was vibrated using rolling carts attached to cables. After drying, the balloon could be removed and openings for door or windows could be cut out of the dome. This "Binishell" system was used to build over 1,500 elliptical-section domes in countries around the world between 1970 and 1990, with diameters between 12 and 36 meters. Examples include the Edinburgh Sports Dome in Malvern (1977) and a project at Sydney's Ashbury School.
The dome of the Church of the Holy Sepulchre in Jerusalem experienced an earthquake in 1927, a fire in 1934, and a fire in 1949, which partially destroyed its lead roof. In 1977 it was decided to renovate the dome to better resist earthquakes and fire. A British team of contractors used steel connectors to attach a 115 millimeter thick reinforced concrete dome shell to the outside of the 1870 wrought iron arches. They reduced the dome's total weight by 100 tons, so that either the shell or the arches could each support the total weight of the dome independently of the other. No flammable materials were used. The exterior was covered in traditional hand-finished lead sheeting. The interior was covered with a 25 millimeter thick layer of plaster attached to the wrought iron arches with a metal mesh.
Reticular and geodesic domes
The West Baden Springs Hotel in Indiana was built in 1903 with the largest span dome in the world at 200 feet. Its metal and glass skin was supported by steel trusses resting on metal rollers to allow for expansion and contraction from temperature changes. It was surpassed in span by the Centennial Hall of Max Berg.
Structurally, geodesic domes are considered shells when the loads are borne by the surface polygons, as in the Kaiser Dome, but are considered space grid structures when the loads are borne by point-to-point members. A geodesic dome made of welded steel tubes was made in 1935 for the aviary of the Rome Zoo. Aluminum reticular domes allow for large dimensions and short building times, suitable for sports arenas, exhibition centers, auditoriums, or storage facilities. The Dome of Discovery exhibition hall was built in London in 1951. It was the largest domed building in the world at the time, 365 feet wide. Other aluminum domes include the 61 meter wide "Palasport" in Paris (1959) and the 125 meter wide "Spruce Goose Dome" in Long Beach, California.
Although the first examples were built 25 years earlier by Walther Bauersfeld, the term "geodesic domes" was coined by Buckminster Fuller, who received a patent for them in 1954. Geodesic domes have been used for radar enclosures, greenhouses, housing, and weather stations. Early examples in the United States include a 53-foot-wide dome for the Ford Rotunda in 1953 and a 384-foot-diameter dome for the Baton Rouge facility of the Union Tank Car Company in 1958, the largest clear-span structure in the world at that time. The U.S. Pavilion at Expo 67 in Montreal, Quebec, Canada, was enclosed by a 76.5-meter-wide and 60-meter-tall dome made of steel pipes and acrylic panels. It is used today as a water monitoring center. Other examples include the Amundsen-Scott South Pole Station, which was used from 1975 to 2003, and the Eden Project in the UK, built in 2000.
"Grid-domes", using a structural grid of roughly orthogonal members adjusted to create a double-curved surface, were employed in 1989 to create a double-glazed glass dome over an indoor swimming pool in Neckarsulm, Germany, and a single-glazed glass dome over the courtyard of the Museum for Hamburg History in Hamburg, Germany.
The 167 meter Osaka Dome and the 187 meter Nagoya Dome were completed in 1997.
Tension and membranes
Tensegrity domes, patented by Buckminster Fuller in 1962 from a concept by Kenneth Snelson, are membrane structures consisting of radial trusses made from steel cables under tension with vertical steel pipes spreading the cables into the truss form. They have been made circular, elliptical, and other shapes to cover stadiums from Korea to Florida. While the first permanent air supported membrane domes were the radar domes designed and built by Walter Bird after World War II, the temporary membrane structure designed by David Geiger to cover the United States pavilion at Expo '70 was a landmark construction. Geiger's solution to a 90% reduction in the budget for the pavilion project was a "low profile cable-restrained, air-supported roof employing a superelliptical perimeter compression ring". Its very low cost led to the development of permanent versions using teflon-coated fiberglass and within 15 years the majority of the domed stadiums around the world used this system, including the 1975 Silverdome (168 meters) in Pontiac, Michigan. Other examples include the 1982 Hubert H. Humphrey Metrodome (180 meters), the 1983 BC Place (190 meters), and the 1988 Tokyo Dome (201 meters). The restraining cables of such domes are laid diagonally to avoid the sagging perimeter found to occur with a standard grid.
Tension membrane design has depended upon computers, and the increasing availability of powerful computers resulted in many developments being made in the last three decades of the 20th century. Weather-related deflations of some air-supported roofs led David Geiger to develop a modified type, the more rigid "Cabledome", that incorporated Fuller's ideas of tensegrity and aspension rather than being air-supported. The example he built in St. Petersburg, Florida spans 230 meters. The pleated effect seen in some of these domes is the result of lower radial cables stretching between those forming trusses in order to keep the membrane in tension. The lightweight membrane system used consists of four layers: waterproof fiberglass on the outside, insulation, a vapor barrier, then an acoustic insulation layer. This is semitransparent enough to fulfill most daytime lighting needs beneath the dome. The first large span examples were two Seoul, South Korea, sports arenas built in 1986 for the Olympics, one 93 meters wide and the other 120 meters wide. The Georgia Dome, built in 1992 on an oval plan, uses instead a triangulated pattern in a system patented as the "Tenstar Dome". In Japan, the Izumo Dome was built in 1992 with a height of 49 meters and a diameter of 143 meters. It uses a PTFE-coated glass fiber fabric. The first cable dome to use rigid steel frame panels as roofing instead of a translucent membrane was begun for an athletic center in North Carolina in 1994. The Millennium Dome was completed as the largest cable dome in the world with a diameter of 320 meters and uses a different system of membrane support, with cables extending down from the 12 masts that penetrate the membrane.
Retractable domes and stadiums
The higher expense of rigid large span domes made them relatively rare, although rigidly moving panels is the most popular system for sports stadiums with retractable roofing. With a span of 126 meters, Pittsburgh's Civic Arena featured the largest retractable dome in the world when completed for the city's Civic Light Opera in 1961. Six of its eight sections could rotate behind the other two within three minutes, and in 1967 it became the home of the Pittsburgh Penguins hockey team.
The Assembly Hall arena at the University of Illinois Urbana-Champaign was completed in 1963 with a concrete saucer dome spanning 400 feet. The edge of the dome was post-tensioned with more than 600 miles of steel cable. The first domed baseball stadium, the Astrodome in Houston, Texas, was completed in 1965 with a rigid 641 foot wide steel dome filled with 4,596 skylights. Other early examples of rigid stadium domes include the steel frame Superdome of New Orleans and the cement Kingdome of Seattle. The Louisiana Superdome has a span of 207 meters. Stockholm's 1989 Ericsson Globe, an arena for ice hockey, earned the title of largest hemispherical building in the world with a diameter of 110 meters and height of 85 meters.
Montreal's Olympic Stadium featured a retractable membrane roof in 1988, although repeated tearing led to its replacement with a non-retractable roof. The SkyDome of Toronto opened in 1989 with a rigid system in four parts: one that is fixed, two that slide horizontally, and one that rotates along the edge of the 213 meter wide span. In Japan, the 1993 Fukuoka Dome featured a 222-meter dome in three parts, two of which rotated under the third.
Twenty-first century
The variety of modern domes over sports stadiums, exhibition halls, and auditoriums have been enabled by developments in materials such as steel, reinforced concrete and plastics. Their uses over department stores and "futuristic video-hologram entertainment centres" exploit a variety of non-traditional materials. The use of design processes that integrate numerical control machines, computer design, virtual reconstructions, and industrial prefabrication allow for the creation of dome forms with complex geometry, such as the 2004 ellipsoid bubbles of Nardini Company's production district designed by Massimiliano Fuksas.
Ōita Stadium was built in 2001 as a mostly fixed semi-spherical roof 274 meters wide with two large membrane-covered panels that can slide down from the center to opposite sides. The Sapporo Dome was completed in 2001 with a span of 218 meters. Singapore's National Stadium was completed in 2014 with the largest dome in the world at 310 meters in span. It uses a post-tensioned concrete ring beam to support steel trusses that enable two halves of a section of the dome to retract.
References
Bibliography
Domes
Arches and vaults
Church architecture
Mosque architecture
Baroque architectural features
Roofs | History of modern period domes | [
"Technology",
"Engineering"
] | 8,783 | [
"Structural system",
"Structural engineering",
"Roofs"
] |
45,640,813 | https://en.wikipedia.org/wiki/Schumann%E2%80%93Runge%20bands | The Schumann–Runge bands are a set of absorption bands of molecular oxygen that occur at wavelengths between 176 and 192.6 nanometres. The bands are named for Victor Schumann and Carl Runge.
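For orientation, the band limits quoted above can be converted to photon energies with the standard relation E = hc/\lambda, or approximately 1240 eV·nm divided by the wavelength in nanometres; the rounded figures below are a back-of-the-envelope conversion rather than values taken from a reference. A wavelength of 192.6 nm corresponds to E \approx 1240/192.6 \approx 6.4 eV, and 176 nm corresponds to E \approx 1240/176 \approx 7.0 eV, placing the Schumann–Runge bands in the far-ultraviolet region of the spectrum.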
See also
Triplet oxygen
Atmospheric chemistry
References
External links
Spectroscopy
Atmospheric chemistry | Schumann–Runge bands | [
"Physics",
"Chemistry"
] | 56 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"nan",
"Analytical chemistry stubs",
"Spectroscopy"
] |
45,642,500 | https://en.wikipedia.org/wiki/Primetals%20Technologies | Primetals Technologies Limited, is an engineering and plant construction company headquartered in London, United Kingdom, with numerous locations worldwide. It serves clients in the metals industry, both the ferrous and the nonferrous metals sector. It was established as a joint venture between Siemens VAI Metals Technologies and Mitsubishi-Hitachi Metals Machinery in 2015. As of 2020, Primetals Technologies is a joint venture of Mitsubishi Heavy Industries and partners.
Overview
Operations
Primetals Technologies operates as a full liner, i.e., a comprehensive supplier for the metals industry. Their portfolio covers all aspects of the iron and steel production process and includes several nonferrous metals technologies, as well as comprehensive metallurgical services. These processes include but are not limited to beneficiation, direct reduction, oxygen steelmaking, electric steelmaking, continuous casting, hot and cold rolling, and processing. They are also active in the digitalization of various aspects of the metals industry, including improvements in automation technologies, services, increasing the use of artificial intelligence, and increasing the use of robotics to improve the safety of iron and steel works.
Green Steel at Primetals Technologies
In May 2022, the formation of a new global task force titled "Green Steel" was announced to combine the company's competencies in the metals industry with the cross-industrial competencies of the larger Mitsubishi Heavy Industries Group. Amid the global climate crisis, the steel industry at large has been recognized as a major contributor of carbon emissions into the atmosphere, producing eight percent of global CO2 emissions in 2020. With several environmental technologies in its portfolio, Primetals Technologies works to help reduce the carbon footprint of the metals industry.
History
Mitsubishi-Hitachi Metals Machinery
In May 2000, Mitsubishi Heavy Industries and Hitachi announced the establishment of a broad-based partnership in metals rolling mills. In October 2000, a joint venture company, MHI-Hitachi Metals Machinery, Inc. was established to handle sales and related engineering activities of metal rolling mills and downstream facilities. In 2002, the name was changed to Mitsubishi-Hitachi Metals Machinery, Inc.
In 2004, Mitsubishi-Hitachi Metals Machinery, Inc. U.S.A. was established.
2006 – Acquisition of New Gencoat, Inc., U.S.A
2006 – MHMM receives contract for supply of pickling line-tandem cold mill from Shougang Jingtang Inc., China
2007 – Mitsubishi-Hitachi Metals Machinery, Inc., China established
2010 – Start-up of endless billet-welding and rolling mill at POSCO, South Korea
2010 – Founding of Mitsubishi-Hitachi Metals Machinery South Asia Private Ltd.
2012 – Start-up of No. 2 Hot-Rolling Mill at Usiminas Cubatão, Brazil
2013 – Integration of IHI Metaltech rolling mill business
2013 – Mitsubishi-Hitachi Metals Machinery acquires shares in Concast Ltd., India
2013 – Acquisition of majority shares of Hasegawa Gear Works, Ltd.
Siemens VAI Metals Technologies
1938–1955
VAI, whose parent company was VA Technologie AG, began as the plant building operation of Vereinigte Österreichische Eisen und Stahlwerke (VÖEST) but became a separate operation in 1946. In 1938, the American company H.A. Brassert & Co. began the designs for the Reichswerke in Linz before withdrawing in 1939.
At the end of WWII, Austria was occupied by the Allied forces, and due to the air raids during the war, all industrial assets were severely damaged. In July 1945, the "Alpine Montan AG Hermann Göring" plant was renamed "Vereinigte Österreichische Eisen- und Stahlwerke" (VÖEST) (United Austrian Iron and Steel Plants) and separated from Alpine Montan AG. The plant was returned to the newly founded Republic of Austria in July 1946 as part of what would become the ÖIAG (Austrian Industries AG). For the plant's reconstruction, Voestalpine used funding from the Marshall Plan. In 1947, the first blast furnace, a Siemens-Martin open hearth furnace, and the first coke ovens started production. In 1948, the "Iron and Steel Plan" was launched, allocating the production of flat steel to Linz and the production of long products to Donawitz. This included the production of finished products in a heavy-plate mill and the construction of a new slab and hot-strip mill in Linz.
However, with production based on the open-hearth furnace, flat steel production in Linz would have been too expensive. Therefore, a new manufacturing process, what would become the Linz-Donawitz process, began development in 1949, and the patent was applied for in 1950. After construction, two LD converters were commissioned, one in Linz in 1952 and the other in Donawitz in 1953.
1956–1973
Voest-Alpine Industrieanlagenbau began as a division of VÖEST in 1956.
With the invention of the LD process and the experience gained from the complete reconstruction after the war, VÖEST, in partnership with the Fried. Krupp company in Essen, Germany, began the construction of the first LD steelmaking plant outside Austria in Rourkela, India, in 1958. In 1960, the management of Wiener Brückenbau und Eisenkonstruktions AG (WBB), later renamed Voest-Alpine Hebetechnik- und Brückenbau AG, was transferred to VÖEST. VÖEST continued to expand worldwide, growing its network of customers to 274 business relationships in 87 countries by 1968. In 1973, the two nationalized iron and steel companies VÖEST and Österreichisch-Alpine Montangesellschaft (Alpine) merged and became Vöest-Alpine AG.
1974–1994
When the oil crisis started in 1973, the metallurgical industry was severely affected in all parts of the world (see also "steel crisis"). In Austria, this coincided with the creation of Vöest-Alpine AG, and between 1974 and 1976 several other companies joined the group, including Gebrüder Böhler & Co AG, Schoeller-Bleckmann Stahlwerke AG, and Steirische Gußstahlwerke AG, making up Vereinigte Edelstahlwerke AG (VEW). While the merger of these companies strengthened the position of Vöest-Alpine AG, the impact of the steel crisis led the company to refocus its efforts. In 1977, the group was divided into four divisions: steelworks, processing, finished goods, and industrial plant engineering. In 1978, industrial plant engineering was also placed under the direct control of the management board.
In the late 1970s, VOEST-ALPINE AG, as VAI, began developing the "COREX process", originally known as the "KR method", with Korf Engineering GmbH. In 1985, the crisis came to a head, and VOEST-ALPINE declared bankruptcy. In the fall of 1986, the "Concept for a New VOEST-ALPINE" was introduced along with the establishment of the Österreichische Industrieholding Aktiengesellschaft (ÖIAG).
Finally, in 1988, the Austrian government decided to partially privatize the ÖIAG, forming Voest-Alpine Stahl AG (VA Stahl), while placing Voest-Alpine Industrieanlagenbau (VAI, also VAI-BAU) under Maschinen- und Anlagenbauholding AG.
1995–2014
In 1993, after a restructuring of the ÖIAG group, VA Technologie AG (VA Tech) emerged and became the parent company to VAI. By 1995, VAI operated in 45 countries and had 2000 engineers, with revenue of $841 million.
In 1995, VAI bought its first shares of Fuchs Systems Inc. (Fuchs Systemtechnik GmbH), a German-based manufacturer of electric arc furnaces and other equipment for manufacturing steel, with plants in Mexico and Salisbury, North Carolina. The Salisbury plant had 230 employees in 1997.
As of 1997, ÖIAG owned 24% and Voest-Alpine Stahl owned 19.05% of the stock in VAI's parent VATech. In September 1999, VAI completed its acquisition of the Norwegian-owned Kvaerner A.S.A. metals equipment group, including operations in France, Spain, Italy, Germany, China, India and Great Britain. VAI subsidiary Voest-Alpine Industries Inc. had its American headquarters in Pittsburgh, Pennsylvania. In 1999, Voest-Alpine Industries, part of VA Tech North America, moved all its Pittsburgh operations to Southpointe in Washington County. At the time, the company had just taken over Kværner A.S.A.'s metals equipment group. Voest-Alpine Industries also operated in Eastlake, Ohio, and Benton Harbor, Michigan. The metals automation division of Voest-Alpine Industries relocated from Eastlake to Southpointe in 2002.
As of 1999, Voest-Alpine Industries owned 49 percent of Fuchs. Although the company laid off 59 employees in Salisbury, Fuchs was "the market leader", and the parent companies intended to keep Fuchs in business. The layoffs resulted from an economic crisis in Asia, as well as lower demand for American steel resulting from the low import prices. However, the Asian market was returning by 1999, and Europe and South America were also possible new markets. In 2001, Voest-Alpine Industrieanlagenbau bought the rest of Fuchs Systems, which became VAI Fuchs and added VAI Technometal. In May 2001, however, Fuchs closed the Salisbury plant, its only American facility, because half the customers were bankrupt or close to it. AlloyWorks bought three of the buildings, and the fourth became a medical office. Also, in 2001, the steel industry worldwide experienced a downturn due to lower prices, though continuous casting (for which VAI was the world's top company) continued its positive results, especially in China. VAI reduced its six business areas to four: Iron & Steelmaking (the largest); Rolling & Processing; Automation, and Metallurgical Services.
Also in 2001, VAI's continuous casting operation added a casting and rolling mill for ultra-wide medium thickness slabs for IPSCO Steel in Mobile, Alabama, with what were believed to be the world's largest one-piece cast mill housings at 350 tons. The automation business completed a quality control project along with Voest-Alpine Stahl.
In 2003, VAI subsidiary Voest-Alpine Services & Technologies Corp. became majority owner of Steel Related Technology of Blytheville, Arkansas.
After the Siemens purchase of VA Technologie AG completed in July 2005, VAI became Siemens VAI, a part of the Siemens Industrial Solutions and Services Group. Siemens VAI was later named Siemens VAI Metals Technologies GmbH & Co. and also referred to as VAI Group, which was created from VAI and Siemens electrical engineering and automation businesses. Siemens Group Industrial Solutions and Services also included Voest-Alpine Services and Technologies (VAST). Both Siemens units operated from the Pittsburgh area. VAST provided mill maintenance services to steel and aluminum manufacturers from eleven locations: Baltimore, Maryland; North East, Maryland; New London, Ohio; Milan, Ohio; Benton Harbor, Michigan; Bethel Park, Pennsylvania; Blytheville, Arkansas; Charleston, South Carolina; Decatur, Alabama; and Erie, Pennsylvania, in the United States and Sault Ste. Marie, Ontario, in Canada.
Primetals Technologies
2014 - Present
In 2014, Mitsubishi-Hitachi Metals Machinery and Siemens AG announced the establishment of a joint venture in which equity ownership shares were 51% for Mitsubishi-Hitachi Metals Machinery and 49% for Siemens AG. The joint venture, Primetals Technologies Limited, started operations in January 2015. In 2019, Primetals Technologies together with Mitsubishi Heavy Industries acquired ABP Induction Systems.
On September 30, 2019, Mitsubishi Heavy Industries and Siemens AG reached the agreement that MHI will acquire Siemens’ 49 percent stake in Primetals Technologies. The transaction was completed at the end of January 2020.
In Spring 2021, the international private equity investor Mutares completed the acquisition of Primetals Technologies France from Primetals Technologies, Ltd., subsequently changing the name to Clecim France.
In Fall 2021, Primetals Technologies, Ltd. transferred its shares of Primetals Technologies Italy to Callista Private Equity, a financial investment company located in Munich, Germany.
References
External links
2015 establishments in England
Companies based in the London Borough of Hounslow
Industrial machine manufacturers
Mitsubishi Heavy Industries
Siemens
Manufacturing companies established in 2015
British companies established in 2015
Steel companies based in London
Engineering companies of the United Kingdom | Primetals Technologies | [
"Engineering"
] | 2,639 | [
"Industrial machine manufacturers",
"Industrial machinery"
] |
45,643,760 | https://en.wikipedia.org/wiki/Cppdepend | CppDepend is a static analysis tool for C/C++ code. This tool supports a large number of code metrics, allows for visualization of dependencies using directed graphs and dependency matrix. The tools also performs code base snapshots comparison, and validation of architectural and quality rules. User-defined rules can be written using LINQ queries. This possibility is named CQLinq. The tool also comes with a large number of predefined CQLinq code rules.
Features
The main features of CppDepend are:
Support for Coding Standards: Misra C++, Misra C, Cert C, Cert C++, CWE, Autosar
Support for C++23, C++20, C++17
Declarative code rule over LINQ query (CQLinq)
Dependency Visualization (using dependency graphs, and dependency matrix)
Software metrics (CppDepend currently supports 82 code metrics: Cyclomatic complexity; Afferent and Efferent Coupling; Relational Cohesion; Percentage of code covered by tests, etc.)
Comparison of what has changed between two builds
New features in v2024.1
Advanced Source Explorer.
Support for C++23/C++20/C++17.
C++ Modules Support.
Improved Incremental analysis.
Improved Visual Studio support.
New useful rules added.
Improved Linux Support.
External Symbols Refined.
Code Rule through LINQ Query (CQLinq)
The tool provides live code queries and code rules through LINQ queries.
This is one of the innovations of CppDepend. For example:
- Classes inherit from a particular class:
// classes inherit from a particular class
from t in Types
where t.IsClass && t.DeriveFrom ("CBase")
select t
- The 10 most complex methods (Source Code Cyclomatic complexity)
// The 10 most complex methods
(from m in Methods
orderby m.CyclomaticComplexity descending
select new { m, m.CyclomaticComplexity }).Take(10)
In addition, the tool provides a live CQLinq query editor with code completion and embedded documentation.
See also
Sourcetrail, a free, open-source source code explorer that provides interactive dependency graphs.
Design Structure Matrix
Software visualization
References
External links
The CppDepend web-site
Dr.Dobb's Review
InfoQ Review
isocpp News
heise.de Review
CppDepend Blog
PCWorld Reviews
LLVM review
CodeGuru Review
Static program analysis tools
Software metrics | Cppdepend | [
"Mathematics",
"Engineering"
] | 544 | [
"Software engineering",
"Quantity",
"Metrics",
"Software metrics"
] |
51,010,296 | https://en.wikipedia.org/wiki/Electronics%20and%20semiconductor%20manufacturing%20industry%20in%20India | In the early twenty-first century; foreign investment, government regulations and incentives promoted growth in the Indian electronics industry. The semiconductor industry, which is its most important and resource-intensive sector, profited from the rapid growth in domestic demand. Many industries, including telecommunications, information technology, automotive, engineering, medical electronics, electricity and solar photovoltaic, defense and aerospace, consumer electronics, and appliances, required semiconductors. However, as of 2015, progress was threatened by the talent gap in the Indian sector, since 65 to 70 percent of the market was dependent on imports.
Electronics industry
Statistics and trends
Market-size
India's electronics sector, which ranks among the world's largest in terms of consumption, was expected to grow from $69.6 billion in 2012 to $400 billion by 2020, driven primarily by rising demand, which was projected to expand at a compound annual growth rate of over 25% during that period.
Imports accounted for 65% of the demand for electronic products in 2013–14. A Frost & Sullivan-IESA data analysis indicates that about 60% of total electronics consumption can be attributed to a handful of high-priority product categories, chiefly mobile phones (38.85%), flat panel display televisions (7.91%), laptops (5.54%), and desktop computers (4.39%).
The consumer electronics and appliance industry in India, which was valued at $9.7 billion in 2014, is expected to increase at a compound annual growth rate of 13.4% to reach $20.6 billion by 2020. Set-top boxes are expected to increase at the quickest rate among consumer electronics, with Y-o-Y growth of 28.8% forecast between 2014 and 2020. Televisions will grow at the rate of 20%, refrigerators at 10%, washing machines at 8–9%, and air conditioners at roughly 6–7%. India's demand for IT devices was projected to be worth $13 billion in 2013. By 2029–2030, it is estimated that India's aerospace and defense (A&D) electronics sector might be valued at up to $70 billion, of which $55 billion could come from electronics used in platforms that need to be purchased and the remainder from system-of-system initiatives.
Domestic production
In 2012–13, 2013–14, and 2014–15, the total amount of electronic goods produced domestically was ₹164,172 crores, ₹180,454 crores, and ₹190,366 crores, respectively. India's electronics hardware manufacturing sector is expected to generate $104 billion in electronic goods by 2020, up from $32.46 billion in 2013–14. India produced 1.6% of the world's electronics gear in FY13. With 31% of the entire production of electronic goods in India in FY13, the communication and broadcasting equipment industry held a leading position, followed by consumer electronics at 23%. In the April–June quarter of 2015, 24.8% of the cellphones transported into the nation were either assembled or made in India, an increase from 19.9% the quarter before.
Of the 220 million mobile handsets shipped in India in 2015–16, 110 million were manufactured or assembled domestically, up from 60 million the year before. The value of mobile handset production increased by 185% in 2015–16, from ₹19,000 crore to ₹54,000 crore. An ASSOCHAM-EY research report, "Turning the Make in India initiative into a reality for the electronics and hardware sector", predicted that the Indian electronics and hardware industry would grow at a CAGR of 13%–16% over 2013–18, from $75 billion in 2016 to $112–130 billion by 2018.
In May 2016, a NITI Aayog study stated that India's electronics sector contributes just 1.7% of GDP, whereas in Taiwan, South Korea, and China it accounts for 15.5%, 15.1%, and 12.7% of GDP, respectively. India makes up less than 5% of the global electronics manufacturing sector, with the majority of its electronics production serving the domestic market.
In 2014, the percentage of localized input/value addition in televisions was approximately 25–30% due to the importation of panels, semiconductors, and glass required for the production of LCD/LED TVs. Due to the importation of the compressor, refrigerant, motor, and coil, the localization of air conditioners was only approximately 30% to 40%. About 35–40% of the parts used in set-top boxes came from within the country. For refrigerators and washing machines, the localized content was approximately 70%. It is stated that in 2016, the percentage of localized value addition created in mobile phone assembly in India was barely 2-8%.
The Ordnance Factory Board's Opto Electronics Factory (OLF), located in Dehradun, is a unique facility in India that produces opto-electronic goods for defense use. Polymatech Electronics, located in Chennai, is another facility that produces opto-semiconductor chips; the company signed an MoU with the Government of Tamil Nadu on 27 May 2020, its project was acknowledged under the SPECS program, and the Prime Minister of India visited Polymatech Electronics at Semicon India 2023. Samsung, a leading Korean electronics company, intends to begin producing laptops at its Noida plant in India in 2024. Airport navigation aids are presently produced for worldwide supply by Thales Reliance Defence Systems, a joint venture electronics company of the Thales Group, while Bharat Electronics-Thales Systems, based in Bangalore, produces high-tech items such as low-band receivers for the Dassault Rafale's electronic warfare suite. On 18 January 2024, Grupo Antolin inaugurated a new production plant in Chakan for electronics, human-machine interface systems, and advanced lighting. In India, Grupo Antolin supplies parts to Volkswagen, Škoda Auto, Suzuki, Toyota, Mahindra & Mahindra, and Tata Motors.
Lenovo intends to increase its manufacturing and is considering producing servers in India in order to benefit from the production-linked incentive (PLI) program for IT hardware. In 2024, Samsung Electronics intends to begin producing laptops at its Noida plant. Already, preparations are in motion. Intel showcased a comprehensive range of Made in India laptops and IT goods, including locally made servers, during the India Tech Ecosystem Summit 2024. By Q2 2024, Google has instructed vendors to begin producing Pixel smartphones in India. The production line for Google's high-end Pixel 8 Pro will be ready first. In India, Dixon Technologies is going to manufacture Google Pixel 8 smartphones. September 2024 will see the release of the first batch into the market. Monthly production of Pixel smartphones is expected to reach 100,000 devices, with 25–30% of those units going toward export.
Apple Inc. plans to begin producing AirPods in India from 2025. With Jabil in Pune, the business has begun producing wireless charging case parts on a trial basis. Parts of AirPods wireless charging cases are already being produced by Jabil and delivered to China and Vietnam.
Exports and imports
India is a net importer of electronics, with China accounting for the majority of India's imports. Electronics surpassed gold in 2015 and now rank second in terms of value among all imports into the nation, right behind crude oil. India is spending more money on semiconductor imports than on oil, according to research published in 2019 by Professor Vikram Kumar, an emeritus professor of physics at IIT Delhi. According to a Ministry of Commerce and Industry report, one of the six industries that could assist India in reaching over 70% of its target of $1 trillion in goods exports by FY30 is electronics.
Exports
The estimated value of India's electronics exports in FY13 was $7.66 billion, down slightly from $8.15 billion in FY12; however, due to the depreciation of the rupee, they increased in INR terms during the same time, rising from ₹44,000 crore to ₹46,300 crore. India's electronics exports were dominated in 2013–14 by the telecom sector, which was followed by computing, consumer electronics, instruments, and electronic components. The increasing demand for Indian electronics items overseas is believed to be mostly driven by advancements in technology and competitive cost-effectiveness. Indian exports of electronic gear nearly doubled in value from ₹109,940 crores in 2009–10 to ₹196,103 crores in 2013–14, measured in rupees. India's electronics exports fell to $6 billion in FY14, accounting for 0.28% of the world's electronics trade.
India recorded 22.24% growth in electronics exports, exceeding the $20 billion milestone within the first nine months of FY 2023-24. Between April and December 2023, mobile phone exports, which made up 52% of all electronics exports, reached $10.5 billion. Notably, the iPhone became the main export engine during this time, accounting for 35% of all electronics exports and 70% of all mobile exports from the nation. The value of iPhone exports had topped $7 billion by December 2023. Mobile exports increased nearly seven-fold, from $1.6 billion in FY19 to $11.1 billion in FY23. In the same time frame, total electronics exports almost tripled, from $8.4 billion in FY19 to $23.6 billion in FY23.
During their results calls for the December quarter of 2023, Indian companies such as Havells, Dixon Technologies, Voltas, and Blue Star stated that they were building a foundation for exports to industrialized nations such as the US and those in Europe. Havells intends to supply air conditioners in the US market after establishing a company there. About 28–30% of Motorola smartphone output is now sent to the US as Dixon Technologies increases its exports of the devices. Appliance maker Arçelik, which markets appliances in Europe under the Beko name, has placed export orders with Voltas for frost-free refrigerators and dishwashers. India exported $29.12 billion worth of electronics in 2023–24, a 23.6 percent increase over the previous year. The United States, United Arab Emirates, Netherlands, United Kingdom, and Italy are the top five export markets for electronic goods. Exports also expanded into new markets in FY24, including Turkmenistan, Honduras, El Salvador, Mongolia, Montenegro, and the Cayman Islands. Limited amounts of semiconductor chips packaged at Tata Electronics' Bengaluru R&D Center are now being exported to partners in the US, Europe, and Japan; these packaged chips are currently part of a trial program.
For the US and European markets, Foxconn and Padget Electronics will produce the pro and base versions of the Pixel smartphone, respectively, along with other Google devices. Foxconn's Sriperumbudur factory has already begun trial production. Full-scale manufacturing is scheduled to begin in September 2024.
Imports
In 2012–13, 2013–14, and 2014–15, the estimated total value of electronic products imports was ₹1,79,000 crore (US$28 billion), ₹1,95,900 crore (US$31 billion), and ₹2,25,600 crore (US$37 billion), respectively. Based on data from the Ministry of Commerce and Industry, the importation of phones surged dramatically from $665.47 million in 2003–04 to $10.9 billion in 2013–14. Over the same time period, phone imports from China increased from $64.61 million to $7 billion. China was responsible for 67% of India's $23.5 billion electronics trade imbalance in 2013–14. Electronics imports could reach $40 billion in FY16, up from about $28 billion in FY11. In 2016, local electronics production started to grow, signaling the start of a recovery during a period of low Indian exports. Electronic exports increased by 7.8% to $0.5 billion in January 2016, while electronic imports, which made up 27% of India's annual trade imbalance, decreased by 2.2% to $3.2 billion.
Government initiatives
To promote overall growth and create job opportunities (projected at more than 28 million) by attracting investments worth $100 billion, the Indian central government has sought to reduce the country's electronics import bill from 65% in 2014–15 to 50% in 2016 and gradually to achieve net-zero electronics trade by 2020. India has pursued a two-pronged strategy of import substitution and export encouragement through the Make in India campaign, coupled with the Digital India, Startup India and Skill India campaigns. The government has fostered an environment conducive to foreign direct investment (FDI) inflow in several ways, as outlined in the National Electronics Policy and the National Telecom Policy.
Increased liberalisation of Foreign Direct Investment (FDI): 100% FDI through an automatic route.
Relaxation of tariffs.
Establishment of Electronic Hardware Technology Parks (EHTPs) and Special Economic Zones (SEZs).
Implementation of Preferential Market Access (PMA).
Imposing basic customs duties on certain items falling outside the framework of the IT free trade agreement.
Exempting import-dependent inputs/components for PC manufacturing from a Special Additional Duty (SAD).
Incentivising the export of certain electronics goods in the Focus Products scheme under the Foreign Trade Policy.
Funding 3000 PhD students in electronics and IT across the Indian universities.
Imposing an education cess on imported electronic products for parity.
To offer incentives of up to $1.7 billion by 2020 to electronics hardware manufacturing entities setting up shop in India, to help offset the disadvantages of developing the new industry in the country, a Modified Special Incentive Package Scheme (MSIPS) has been initiated. The government approved 40 proposals worth over ₹9,538 crore between January 2014 and June 2015 under the scheme.
The establishment of greenfield and brownfield Electronic Manufacturing Clusters (EMCs) is encouraged under the EMC scheme. Some 200 EMCs are projected by 2020, of which 30 are already in the process of establishment.
The National Institution for Transforming India (NITI Aayog), a policy think-tank under the Indian central government, has suggested in a draft report that a policy be adopted to provide a tax holiday for ten years to firms investing US$1 billion or more that also create 20,000 jobs. The report, hinting at a policy tilt toward the Information Technology Agreement-2 (ITA -2), also suggests that India should re-strategize its defensive policies regarding Free Trade agreements (FTAs) and aggressively pursue export-oriented policies to utilize these FTAs as opportunities to obtain duty-free access to the electronics markets of its FTA partners.
The Indian government launched Digital India futureLABS on 3 February 2024, with the aim of doing research and development in the areas of automotive, computer, communication, industrial electronics, strategic electronics, and internet of things. Funding will originate from the Ministry of Electronics and Information Technology's R&D budget. The primary organization for creating the general strategy, SOPs, and guidelines for new businesses and other private sector enterprises engaged in those fields would be the Centre for Development of Advanced Computing (C-DAC).
Investments in the electronics sector
Between April 2000 and March 2016, the electronics industry in India attracted $1.636 billion in foreign direct investment (FDI) (equity capital component only; this amount excludes funds remitted through the Reserve Bank of India's NRI schemes). This represents 0.57% of the total FDI equity inflow that the nation received during the same period, totaling $288.51 billion.
As of February 2016, the India Electronics and Semiconductor Association (IESA), a group that supports domestic production of computer hardware and electronic goods in India, reported that the government had received 156 proposals with investment commitments totalling ₹1.14 lakh crore ($16.8 billion) over the preceding 20 months. As of May 2016, the government had approved 74 applications worth ₹17,300 crore out of 195 investment proposals costing ₹1.21 lakh crore, while 27 projects had been rejected. As of June 2016, the Indian electronics industry anticipated US$56 billion in investments over the following four years to reach its 2020 export target of over US$80 billion. As of August 2016, 37 mobile manufacturing companies had invested in India during the previous year, resulting in the creation of 40,000 direct jobs and around 125,000 indirect jobs.
Foxconn has pledged investment worth $5 billion to set up R&D and electronic manufacturing facilities in India within the next five years. In January 2015, Spice Global signed an MoU to set up a mobile phone manufacturing unit in Uttar Pradesh. In January 2015, Samsung contemplated a joint public-private initiative under which 10 "MSME-Samsung Technical Schools" would be established in India. In February, Samsung announced that it would manufacture the Samsung Z1 in its plant in Noida. In addition to mobile phones, Samsung's factories in Noida and Sriperumbudur produce appliances and consumer electronics such as refrigerators, LED televisions, washing machines, and split air conditioners. In February 2015, Huawei opened an R&D center in Bengaluru. It is also setting up a telecom hardware manufacturing plant in Chennai, which has been approved by the central government. In February 2015, Xiaomi began initial talks with the Andhra Pradesh government to begin manufacturing smartphones at a Foxconn-run facility in Sri City. In early August 2015, the company announced that the first manufacturing unit was operational, within seven months of being conceived. In August 2015, Lenovo commenced operations at a smartphone manufacturing plant in Sriperumbudur, run by the Singapore-based contract manufacturer Flextronics International Limited. The plant has separate manufacturing lines for Lenovo and Motorola, as well as separate quality assurance and product testing functions. Taiwan's major contract manufacturer, Wistron, which makes devices for companies such as BlackBerry, HTC and Motorola, announced plans in November 2015 to manufacture the devices at a new factory in Noida, Uttar Pradesh. In December 2015, Micromax announced that it would set up three new manufacturing units in the Indian states of Rajasthan, Telangana and Andhra Pradesh. The plants may become operational in 2016, each employing 3,000-3,500 people. Phone manufacturer Vivo began manufacturing smartphones in December 2015 at a plant in Greater Noida, employing a workforce of 2,200 people.
The US-based personal computing hardware multinational Dell Technologies is looking to expand its capacity to export from India at its laptop and computer manufacturing factory in Sriperumbudur, where it previously invested US$30 million. Dell plans to invest to the tune of US$300 million, through its venture fund arm Dell Ventures, in Indian start-ups working in cloud computing, security and analytics as well as in the manufacturing of microprocessors and photovoltaic cells. Chennai-based Munoth Industries has partnered with China's Better Power for technological support as it aims to set up India's first lithium-ion cell manufacturing plant in Tirupati in three phases by 2022, with an investment of ₹799 crore. The first phase of the project will be complete by 2019 and the latter phases by 2022. The plant is expected to generate 1,700 job opportunities. The company has invested ₹165 crore in the first phase, for which it would draw a capital investment of ₹25 crore from the Central Government under the Make in India scheme. The state government of Andhra Pradesh will also provide fiscal and operational incentives, including subsidies on taxes and power costs. The company intends to sell finished lithium-ion cells to mobile phone manufacturers and battery pack manufacturers in India. Together, MEL Systems and Services, Syrma SGS, O/E/N India, Sahasra Group, and Deki Electronics launched a new company called Awesense Five in February 2024. Its goal is to develop and produce industrial sensors in India, reducing reliance on imports and capturing the ₹7,000 crore domestic market, which includes the defence sector.
The first commercial supercapacitor production facility in India has been established by KELTRON in Kannur. Chief Minister Pinarayi Vijayan officially opened it on October 1, 2024. The company has been collaborating on technology development with ISRO, the Naval Materials Research Laboratory, and the Centre for Materials for Electronics Technology. With an expenditure of ₹18 crore, the production facility's first phase has been established. In due time, the total investment will be ₹42 crore. The production capacity is approximately 2000 pieces per day.
Semiconductor industry
With the newly heralded era of the Internet of Things (IoT) dictating that the new generation of interconnected devices be capable of smart computing, the Indian semiconductor industry is set for a steady upsurge with bright prospects, provided India's generic obstacles such as red tape, funding shortages and infrastructure deficits are adequately addressed.
Statistics and trends
The fast-growing electronics system design and manufacturing (ESDM) industry in India has design capabilities, with the number of design units exceeding 120. According to the Department of Electronics and Information Technology (DeitY), approximately 2,000 chips are designed in India every year, with more than 20,000 engineers currently employed on various aspects of IC design and verification.
According to a NOVONOUS report, the consumption of semiconductors in India, mostly import-based, is estimated to rise from $10.02 billion in 2013 to $52.58 billion by 2020 at a CAGR of 26.72%. The report estimates that the consumption of mobile devices will grow at a CAGR of 33.4% between 2013 and 2020, driving the share of mobile devices in semiconductor revenue up from 35.4% in 2013 to 50.7% in 2020. Moreover, the telecom segment is also expected to rise at a CAGR of 26.8% during 2013-20. The information technology and office automation segment is estimated to grow at a CAGR of 18.2% in the same period. The consumer electronics segment is also expected to grow, at a CAGR of 18.8% over the seven years. The automotive electronics segment is expected to grow at a 30.5% CAGR from 2013 to 2020. The ESDM industry will also grow on the back of these high-consumption industries. Currently, almost all semiconductor demand is met by imports from countries like the USA, Japan, and Taiwan. In the semiconductor sector, India has a significant human-capital pool which is currently concentrated in design, in the absence of an end-to-end manufacturing base. However, the nascent ESDM segment in India is premised on competent domestic research by Indian universities and institutes across the entire semiconductor manufacturing value chain - chip design and testing, embedded systems, process technologies, EDA, MEMS and sensors - which has contributed to a voluminous number of research publications.
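The CAGR figures quoted above follow the standard compound-growth formula, CAGR = (end value / start value)^(1 / number of years) - 1. As a rough illustration, a minimal Python sketch (assuming the NOVONOUS estimate above of $10.02 billion in 2013 growing to $52.58 billion by 2020 is compounded over seven annual periods, which is an assumption about how the report counted years) reproduces the quoted rate:

def cagr(start_value, end_value, years):
    # Compound annual growth rate between two values over `years` periods
    return (end_value / start_value) ** (1.0 / years) - 1.0

# NOVONOUS estimate quoted above: $10.02 bn (2013) -> $52.58 bn (2020),
# treated here as seven compounding periods (an assumption)
print(f"Implied CAGR: {cagr(10.02, 52.58, 7):.2%}")  # prints roughly 26.72%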
Initiatives in the semiconductor industry
As of 2016, the government allows 100% FDI in the Electronics system manufacturing and design (ESDM) sector through an automatic route to attract investments including from Original Equipment Manufacturers (OEMs) and Integrated Device Manufacturers (IDMs), and those relocating to India from other countries, in addition to EMC, MIPS and other incentives and schemes provided to the electronics sector.
The Department of Electronics and Information Technology (DeitY), in line with the Skill India campaign, has launched a ₹49 crore scheme for capacity building in ESDM. In October 2015, Infineon Technologies, a German semiconductor firm, partnered with the National Skill Development Corporation (NSDC) to enhance skills and manpower in semiconductor technology, aimed at boosting the ESDM ecosystem in India.
The India Electronics & Semiconductor Association (IESA) has announced a SPEED UP and SCALE-UP of its talent development initiative to be implemented through the Centre of Excellence with the Electronics Sector Skills Council of India (ESSCI) and an MoU with the Visvesvaraya Technological University (VTU) and the RV-VLSI Design Center to build human capital in the ESDM field. ESSCI, which has developed over 140 Qualification Packs (QP) / National Occupation Standards (NOS) across 14 sub-sectors of which Embedded System Design and VLSI are key domains absorbing engineers, established their first-ever Centre of Excellence (CoE) at BMS college of Engineering for VLSI and embedded system design. IESA signed an MoU with Taiwan Electrical and Electronic Manufacturers' Association (TEEMA) to encourage co-operation in technology and knowledge transfer and investment commitment to the domestic ESDM sector that can benefit both Indian and Taiwanese companies. IESA also entered into a MoU with Singapore Semiconductor Industry Association (SSIA) in February 2015, with an objective to forge trade and technical cooperation tie-ups between the electronics and semiconductor industries of both the countries.
The Department of Electronics and Information Technology (DeitY) has established an Electronics Development Fund (EDF), managed by Canara Bank (CANBANK Venture Capital Funds, or CVCFL), to provide risk capital and to attract venture funds, angel funds and seed funds for incubating R&D and fostering an innovative environment in the sector. According to IESA, the "Fund of Funds for Start-ups" (FFS), approved by the union cabinet as part of the EDF to contribute to various alternative investment funds (daughter funds) registered with the Securities and Exchange Board of India that would extend funding support to start-ups, in line with the Start-up India Action Plan unveiled by the Government in January 2016, will be beneficial to start-ups in the ESDM space.
The National Centre for Flexible Electronics (NCFlexE) at IIT Kanpur, the National Centre for Excellence in Technology for Internal Security at IIT Bombay and the Centre for Excellence for Internet of Things at NASSCOM, Bengaluru have been set up to promote the development of national capability in ESDM.
Recent notable achievements in ESDM (Electronic System Design & Manufacturing) Research and Development
In 2011, Hyderabad based semiconductor chip design services entity SoCtronics completed the first 28 nm design chip to be developed in India. Bangalore-based Indian company Navika Electronics has designed GNSS/GPS SoC (System on Chip) chipsets based on ARM core processors under its own brand name for portable applications like receiving/down conversion and amplification of GPS and Galileo signals.
The Centre for Nano Science and Engineering (CeNSE), IISc, Bengaluru, in collaboration with KAS Tech, a Bengaluru-based electronics manufacturing company, has developed 'Ocean', a highly integrated and portable chemical vapour depositor that can commercially produce various two dimensional materials including graphene, in an easy 'plug and grow' approach which can have various novel applications in the ESDM sector, for both academia and industry alike.
In what could be viewed as a breakthrough for the country's electric automobile programme as well as indigenous electronics manufacturing, the Indian Space Research Organization (ISRO) and the Automotive Research Association of India (ARAI) together have developed and validated through tests, using ISRO's state of the art cell technology, a lithium-ion battery prototype for application in electric vehicles and looks forward to commercialising the technology through mass production by partnering with automotive companies. Currently India's lithium-ion battery requirements are completely met by import as there is no domestic manufacturing of these batteries. While the raw material for the batteries still has to be imported, the rest of the value chain can be synthesized domestically at a competitive cost, if the project clears all the barriers.
Researchers at the Indian Institute of Technology Bombay (IIT-B), in collaboration with ISRO's Semi-Conductor Laboratory (SCL), Chandigarh, have developed an indigenous bipolar junction transistor (BJT) which can function within Bi-CMOS (bipolar complementary metal oxide semiconductor) processes. Analogue or mixed-signal chips based on digital Bi-CMOS technology, with integrated analogue high-frequency BJT-based amplifiers, are essential for IoT and space applications such as high-frequency communications, as they reduce form factor, power consumption, weight, size and cost.
During the Digital India FutureLABS Summit 2024, technologies for a CMOS-based vision processing system, a Thermal Smart Camera, and a Fleet Management System were distributed to 12 industries as part of Ministry of Electronics and Information Technology's InTranSE Program. A digital signal processor included inside the Thermal Smart Camera allows it to do a variety of AI-based analytics. Applications in smart cities, industry, defense, and healthcare are its main focus. With a potent on-board computing engine, the Industrial Vision Sensor iVIS 10GigE is a CMOS-based vision processing system designed to handle the demands of the upcoming industrial machine vision applications. The Fleet Management System tracks the position of vehicles and sends out notifications for a variety of situations, including reckless driving, overspeeding, ignition, idling, and stopping. It will also help improve the dependability of public transportation services by reducing the occurrence of bus bunching, as transit operators can employ operational techniques for headway reliability as a dynamic scheduling decision support tool.
Investments in Semiconductor Industry in India
In 2014, the ESDM industry was projected to see investment proposals worth ₹10,000 crore (US$1.5 billion) over the next two years, along with five partially state-funded start-up incubation centres out of the 250 planned by the industry body, according to IESA.
In February 2014, the union cabinet approved the setting up of the proposed wafer fabs (described below), with the decision to extend incentives as follows:
25% subsidy on capital expenditure and tax reimbursement under M-SIPS Policy.
Exemption of Basic Customs Duty (BCD) for non-covered capital items.
200% deduction on expenditure on R&D under Section 35(2AB) of the Income Tax Act.
Investment-linked deductions under Section 35AD of the IT Act.
Interest-free loan of around ₹5,124 crore to each.
Starting 1 January 2022, the government began accepting applications under its incentive scheme for developing a full ecosystem for the chip manufacturing industry, and it expects at least a dozen semiconductor manufacturers to start setting up local factories over the next several years.
The Ministry of Finance has proposed a 71% increase in funding to ₹13,104.50 crore for the manufacturing of chips and electronics in the 2024 Union budget of India.
In 2024, the Government of India has approved the establishment of four semiconductor manufacturing units in the country as part of the Semicon India Programme. Additionally, nine projects have received approval under the Scheme for Promotion of Manufacturing of Electronic Components and Semiconductors (SPECS). These initiatives are expected to generate approximately 15,710 jobs, significantly contributing to the growth of the semiconductor and electronics sectors in India.
The Uttar Pradesh government is actively working to establish the state as a prominent hub for semiconductor manufacturing. Currently, it has received four to five investment proposals amounting to ₹40,000 crore. This initiative coincides with the upcoming three-day Semicon India 2024 event, highlighting the state's commitment to advancing its capabilities in the semiconductor sector.
Investments in fabrication plants in India
As of mid-2016, there were no operational commercial semiconductor fabrication plants in India.
The Centre of Excellence in Nanoelectronics (CEN), at the Indian Institute of Technology Bombay, has a lab-scale fab facility run in collaboration between IIT Bombay and IISc Bangalore that offers research in the design, fabrication and characterization of traditional CMOS nanoelectronic devices, novel-material-based devices (III-V compound semiconductor devices, spintronics, opto-electronics), micro-electromechanical systems (MEMS), NEMS, bio-MEMS, polymer-based devices and solar photovoltaics to researchers across academia, industry and government laboratories all over India. The centre also offers support in device fabrication technologies using sophisticated equipment under the Indian Nano Users Program (INUP) and acts as a linchpin for developing innovative technologies that can be adapted and commercialized to spur nano-industrial growth in India.
A foundry for producing GaN nanomaterial, proposed to be built around the existing facility for producing gallium nitride transistors at IISc's Centre for Nano Science and Engineering (CeNSE), Bangalore, at a cost of ₹3,000 crore, has received preliminary approval from the central government.
Gujarat is expected to be home to one of the semiconductor wafer fabrication facilities by late 2017, in Prantij of Sabarkantha district. To be set up by anchor partner Hindustan Semiconductor Manufacturing Corporation (HSMC) and co-partners STMicroelectronics N.V. (France/Italy) and Silterra (Malaysia), it will employ a workforce of over 25,000, including 4,000 direct employees. The group will establish two manufacturing units at an expense of over ₹29,000 crore (about US$4.5 billion), each capable of producing 20,000 wafers per month. Technology nodes currently proposed by this consortium are 90, 65 and 45 nm in Phase I and 45, 28 and 22 nm in Phase II. In March 2016, HSMC received ₹700 crore worth of seed investment for the project from Mumbai-based private equity fund Next Orbit Ventures (NOV).
Another consortium, led by Jaiprakash Associates in collaboration with IBM and Tower Semiconductor, proposed to build a wafer fab in Greater Noida at an expense of over ₹34,000 crore (about US$5 billion), capable of producing 40,000 300 mm-diameter wafers per month in an advanced CMOS process, with 90, 65 and 45 nm CMOS nodes initially before gradually switching over to 28 nm and 22 nm CMOS nodes in later phases. As of April 2016, the fate of the project remained uncertain after the debt-ridden lead partner, JPA, exited the project, citing its commercial infeasibility. In 2022, the International Semiconductor Consortium (ISMC), a joint venture between Abu Dhabi-based Next Orbit Ventures and Tower Semiconductor, announced that it had signed a memorandum of understanding (MoU) with the Government of Karnataka to set up a 65-nanometer analog semiconductor fabrication unit. ISMC will invest $3 billion to set up the plant. Tower Semiconductor put in a fresh proposal for an $8 billion chip production facility in 2024. On 5 September 2024, the Government of Maharashtra approved an investment proposal of ₹83,947 crore ($10 billion) for a joint venture between Tower Semiconductor and the Adani Group that will establish a chip manufacturing plant in Panvel. In the first phase, the plant's capacity will be 40,000 wafers per month (WPM), and it will eventually be expanded to 80,000 WPM.
SunEdison and Adani Group have signed an MoU to build the largest vertically integrated solar photovoltaic fab facility in India with an investment of up to US$4 billion in Gujarat's Mundra, creating 4,500 direct jobs and more than 15,000 indirect jobs by integrating all aspects of solar panel production on site, including polysilicon refining and ingot, cell, and module production.
The U.S.-based company called Cricket Semiconductor has evinced interest in investing US$1 billion in building an analog integrated-circuit and power supply integrated-circuit specific semiconductor fab in Madhya Pradesh.
With a $2.75 billion investment, Micron Technology began construction of its semiconductor manufacturing plant in Gujarat, in September 2023.
The partnership between HCLTech and Foxconn for an OSAT (outsourced semiconductor assembly and testing) plant was announced in January 2024. Foxconn, which invested $37.2 million, will own 40% of the company. Together with HCL Group, Foxconn set aside ₹1,200 crore as a down payment for the building of a chip facility in India. It released a call for bids in February 2024 for the construction of a chip assembly and testing plant.
Murugappa Group would invest $791 million over a five-year period to enter the semiconductor assembly and testing business.
To establish a semiconductor assembly and testing facility in India, CG Power and Industrial Solutions has partnered with Renesas Electronics America and Stars Microelectronics, based in Thailand. As equity capital of the joint venture, CG Power, Renesas, and Stars will invest up to $205 million, $15 million, and $2 million in one or more tranches. This amounts to about 92.34 percent, 6.76 percent, and 0.9 percent, respectively. Chips for consumer, industrial, automotive, and power applications will be produced with a daily capacity of 15 million units.
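The stake percentages quoted above follow directly from each partner's share of the combined equity contribution. A minimal Python sketch (assuming the three amounts of $205 million, $15 million and $2 million given above make up the entire equity base) reproduces the quoted splits:

# Equity contributions in US$ million, as quoted above (assumed to be the full equity base)
contributions = {"CG Power": 205.0, "Renesas": 15.0, "Stars Microelectronics": 2.0}
total = sum(contributions.values())
for partner, amount in contributions.items():
    print(f"{partner}: {amount / total:.2%}")
# CG Power: 92.34%, Renesas: 6.76%, Stars Microelectronics: 0.90%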
At a cost of ₹91,000 crore, Tata Electronics and Powerchip plan to establish a semiconductor fabrication facility in Dholera. It is expected to have a capacity of 50,000 wafer starts per month (WSPM). Using 28 nm technology, this facility will manufacture power management ICs, display drivers, micro-controllers, and high-performance computing chips for AI, automotive, computing and data storage, and wireless communication technologies. Tata Electronics and Synopsys will work together at the Dholera semiconductor fabrication facility: Tata Electronics plans to leverage Synopsys' foundry design platform to expedite the process of customizing semiconductor products for its clientele. The two firms will also work together in the fields of data analytics, computer-aided design, factory automation, product design kits that Synopsys will offer to Tata Electronics, and the creation of intellectual property for the chip fab. Tokyo Electron will equip Tata Electronics workers and provide training so that the company can meet its 2026 chip manufacturing target on schedule. The partnership covers both back-end packaging technologies and front-end manufacturing. Additionally, Tokyo Electron will fund continuous R&D and enhancement projects.
In Morigaon, Tata Semiconductor Assembly and Test intends to establish a semiconductor plant. ₹27,000 crore will be needed to establish this facility. Chips for use in automotive, consumer electronics, telecom, mobile phones, and electric vehicles will be assembled with a capacity of 48 million units per day. The Jagiroad plant will become operational from 2025.
A group of European businesses and RRP Electronics have partnered to establish an Outsourced Semiconductor Assembly and Testing (OSAT) facility in Maharashtra. The partnership planned to lay the cornerstone of its new 25,000-square-foot facility on 23 March 2024.
The Indian government approved the construction of the ₹3,307 crore Kaynes Technology ATMP unit in Gujarat on September 2, 2024. The facility would have a daily production capacity of 6.3 million chips. In addition, Kaynes SemiCon is investing ₹5,000 crore to develop an OSAT plant in Sanand.
Miscellaneous Investments in Semiconductor Industry
Cyient Ltd. signed an agreement to acquire a 74 per cent equity stake in Rangsons Electronics Pvt Ltd, a Mysuru-based ESDM servicing firm.
A US-based product engineering firm, Aricent, acquired the Bengaluru-based chip design services company SmartPlay for US$180 million.
Altran Technologies SA, a French technology consulting multinational, agreed to acquire SiConTech, a Bengaluru-based start-up that designs semiconductor chips.
In August 2013, AMD opened a new ESDM design centre in HITEC City, Hyderabad, in addition to its existing design centre in Bengaluru.
The world's largest processor intellectual property technology vendor ARM expanded its VLSI operations out of Bengaluru as it set up a new Design Centre at Noida, Uttar Pradesh for working on planar and FinFET CMOS technologies under its physical IP division.
Taiwan-based mobile phone chipmaker MediaTek opened a VLSI and embedded software design centre at Techpark in Bangalore, with a plan to invest $200 million and employ up to 500 engineers over the following few years to work on the mobile communications, wireless connectivity and home entertainment segments.
Critics and detractors of the fab projects currently underway in India, in different conceptual phases, doubt the prospects of success of these capital-intensive projects. They point to various reasons: marginal profitability due to overcapacity in a saturated and fiercely contested fab market; the inability of these particular fabs, given the cost and performance of their proposed CMOS nodes, to attract even domestic end-use industries, which have access to more sophisticated fabs outside the country; cost-prohibitive maintenance and upgrades needed every few years to stave off obsolescence; the non-availability of domestically procurable semiconductor-grade materials in the absence of complementary ancillary manufacturing industries; and other resource-intensive demands attached to such projects, including land acquisition requirements, the need for uninterrupted deionised water and power supplies, the supply of critical gases such as nitrogen and argon, and the absence of a skilled labour force. They also point to the drain of an already inadequate pool of experienced domestic talent in electronic engineering and R&D, with the expertise needed to overcome the barriers of the related sensitive technologies for mass production, towards other more attractive sectors in the absence of a major Indian player in the electronics sector, especially in a developing country like India, which is still grappling with infrastructural bottlenecks.
However, the endorsers of the fab projects, such as AMD, which partnered with HSMC for the fab project in Gujarat, stress the strategic need of developing the fabs as part of an end-to-end electronics manufacturing base in India which imports billions of dollars worth of even lower-end semiconductor nodes of 90 nm and above each year.
Relevant circles within India have been advocating long-term, strategic investment by the central government in non-silicon semiconductor foundries and fabs based on gallium nitride (GaN) and mercury cadmium telluride (HgCdTe), because of their wide-ranging uses: GaN-based high-electron-mobility transistors (HEMTs) in power electronics, for both civilian and military applications, can switch at high speed and handle high power and high temperature without needing any cooling, while HgCdTe-based high-quality sensors serve military space requirements.
See also
Economy of India
Foreign Direct Investment in India
Standup India
Make in India
Digital India
Skill India
Automotive industry in India
Semiconductor industry in Taiwan
Semiconductor industry in China
References
Electronics industry in India
Semiconductor industry by country
Semiconductors
Industries in India | Electronics and semiconductor manufacturing industry in India | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 8,862 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
51,010,708 | https://en.wikipedia.org/wiki/List%20of%20genetic%20engineering%20software | This article provides a list of genetic engineering software.
Cloud-based freemium software
Varstation NGS variants processing and analysis tool
BaseSpace Variant Interpreter by Illumina
Closed-source software
VectorBee
PeptiCloud
BlueTractorSoftware DNADynamo
Agilent Technologies RFLP Decoder Software, Fish Species
Applied Biosystems GeneMapper
Joint BioEnergy Institute j5
CLC bio CLC DNA Workbench Software
CLC bio CLC Free Workbench Software
CLC bio CLC Sequence Viewer
CLC bio Protein Workbench Software
DNASTAR Lasergene
Geneious
LabVantage Solutions Inc. LabVantage Sapphire
LabVantage Solutions Inc. LV LIMS
Mega2
SnapGene
The GeneRecommender
Open-source software
Autodesk Genetic Constructor (suspended)
BIOFAB Clotho BIOFAB Edition
BIOFAB BIOFAB Studio
EGF Codons and EGF CUBA (Collection of Useful Biological Apps) by the Edinburgh Genome Foundry
Integrative Genomics Viewer (part of Google Genomics)
Mengqvist's DNApy
See also
Geppetto (3D engine), an open-source 3D engine for genetic engineering-related functions, also used in the OpenWorm project
Software
E
Genetic | List of genetic engineering software | [
"Chemistry",
"Engineering",
"Biology"
] | 252 | [
"Biological engineering",
"Genetic engineering",
"Molecular biology"
] |
51,012,305 | https://en.wikipedia.org/wiki/Acid%20egg | The terms acid egg and montejus (or monte-jus) are sometimes used interchangeably to refer to a device with no moving parts formerly used instead of a pump in order to transfer difficult liquids. The principle is that a strong vessel containing the liquid is pressurized with gas or steam, forcing the liquid into a pipe (usually vertical upwards) thereby causing flow. When the liquid has been transferred, the pressure is released and more liquid is put in via gravity. It is thus cyclic in operation. The same principle has been used to lift water and called an air displacement pump or intermittent gas-lift pump, and has been applied to pumping oil up from the formation.
Its use has largely been superseded by modern pumps, but it is still used sometimes for special tasks.
Acid Egg
This was specifically devised to deal with highly corrosive sulfuric acid, but its use was extended to other corrosive substances. It was traditionally made of ceramic (to be corrosion resistant) and spherical in shape (to withstand the pressure), hence its name.
A cylindrical version (with hemispherical ends) was described by Swindin, being 3 feet in diameter and 6 feet long, holding 40 cubic feet of acid.
In principle, the vessel is part filled with liquid, which is then expelled by pumping in compressed air. The liquid outlet is via a pipe from the top going down almost to the bottom of the vessel. When the acid egg is emptied, connections to the compressor and the delivery pipe are closed by valves, the air pressure is vented and the vessel refilled with acid. The cycle can then start again.
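The air pressure required is set mainly by the hydrostatic head of the column of liquid being lifted, i.e. P = ρgh plus an allowance for pipe friction. A minimal Python sketch follows (the 10-metre lift height and the density of concentrated sulfuric acid, roughly 1,830 kg/m³, are illustrative assumptions, and friction losses are ignored):

# Hydrostatic pressure needed to push a liquid up a vertical delivery pipe: P = rho * g * h
rho = 1830.0  # approximate density of concentrated sulfuric acid, kg/m^3 (assumed)
g = 9.81      # gravitational acceleration, m/s^2
h = 10.0      # assumed lift height, m
pressure_pa = rho * g * h
print(f"Minimum gauge pressure: {pressure_pa / 1e5:.2f} bar")  # about 1.8 bar for this example

The compressed air supplied to the vessel must exceed this figure for the acid to flow, which is why the egg had to be built as a strong pressure vessel.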
Montejus
A French invention used in sugar production to move the partially processed sugar liquid up a pipe to the next stage of purification. Hence the name “monte-jus” or “raise juice”. Unlike the acid egg, it traditionally consists of a vertical cylindrical vessel made of steel, with a pipe from the bottom turned upwards, and it is pressurized by steam.
References
Pumps
Chemical equipment
Chemical industry
History of sugar | Acid egg | [
"Physics",
"Chemistry",
"Engineering"
] | 416 | [
"Pumps",
"Turbomachinery",
"Chemical equipment",
"Physical systems",
"Hydraulics",
"nan"
] |
51,016,079 | https://en.wikipedia.org/wiki/Mount%20Elliott%20Mining%20Complex | Mount Elliott Mining Complex is a heritage-listed copper mine and smelter at Selwyn, Shire of Cloncurry, Queensland, Australia. It was designed by William Henry Corbould and built in 1908. It is also known as Mount Elliott Smelter and Selwyn. It was added to the Queensland Heritage Register on 16 September 2011.
History
The Mount Elliott Mining Complex is an aggregation of the remnants of copper mining and smelting operations from the early 20th century and the associated former mining township of Selwyn. The earliest copper mining at Mount Elliott was in 1906 with smelting operations commencing shortly after. Significant upgrades to the mining and smelting operations occurred under the management of W.H. Corbould during 1909–1910. Following these upgrades and increases in production, the Selwyn Township grew quickly and had 1500 residents by 1918. The Mount Elliott Company took over other companies on the Cloncurry field in the 1920s, including the Mount Cuthbert and Kuridala/Hampden smelters. Mount Elliott operations were taken over by Mount Isa Mines in 1943 to ensure the supply of copper during World War Two. The Mount Elliott Company was eventually liquidated in 1953.
Mount Elliott Smelter
The existence of copper in the Leichhardt River area of north western Queensland had been known since Ernest Henry discovered the Great Australia Mine in 1867 at Cloncurry. In 1899 James Elliott discovered copper on the conical hill that became Mount Elliott, but having no capital to develop the mine, he sold an interest to James Morphett, a pastoralist of Fort Constantine station near Cloncurry. Morphett, being drought stricken, in turn sold out to John Moffat of Irvinebank, the most successful mining promoter in Queensland at the time.
Plentiful capital and cheap transport were prerequisites for developing the Cloncurry field, which had stagnated for forty years. Without capital it was impossible to explore and prove ore-bodies; without proof of large reserves of wealth it was futile to build a railway; and without a railway it was hazardous to invest capital in finding large reserves of ore. The mining investor or the railway builder had to break the impasse.
In 1906-1907 the average price of copper on the London market was the highest for thirty years, and the Cloncurry field grew. The Great Northern railway was extended west of Richmond in 1905-1906 by the Queensland Government and mines were floated on the Melbourne Stock Exchange. At Mount Elliott a prospecting shaft had been sunk, and on 1 August 1906 a Cornish boiler and winding plant were installed on the site.
Mount Elliott Limited was floated in Melbourne on 13 July 1906. In 1907 it was taken over by British and French interests and restructured. Combining with its competitor, Hampden Cloncurry Copper Mines Limited, Mount Elliott formed a special company to finance and construct the railway from Cloncurry to Malbon, Kuridala (then Friezeland) and Mount Elliott (later Selwyn). This new company then entered into an agreement with the Queensland Railways Department in July 1908.
The Selwyn railway, which was known as the "Syndicate Railway", aroused opposition in 1908 from the trade unions and Labor movement generally, who contended that railways should be State-owned. However, the Hampden-Mount Elliott Railway Bill was passed by the Queensland Parliament and assented to on 21 April 1908; construction finished in December 1910. The railway terminated at the Mount Elliott smelter.
By 1907 the main underlie shaft had been sunk and construction of the smelters was underway using a second-hand water-jacket blast furnace and converters. At this time, W.H. Corbould was appointed general manager of Mount Elliott Limited.
The second-hand blast furnace and converters were commissioned or "blown in" in May 1909, but were problematic causing hold-ups. Corbould referred to the equipment in use as being the "worst collection of worn-out junk he had ever come across". Corbould soon convinced his directors to scrap the plant and let him design new works.
Corbould was a metallurgist and geologist as well as mine/smelter manager. He foresaw a need to obtain control and thereby ensure a reliable supply of ore from a cross-section of mines in the region. He also saw a need to implement an effective strategy to manage the economies of smelting low-grade ore. Smelting operations in the region were made difficult by the technical and economic problems posed by the deterioration in the grade of ore. Corbould resolved the issue by a process of blending ores with different chemical properties, increasing the throughput capacity of the smelter and by championing the unification of smelting operations in the region. In 1912, Corbould acquired Hampden Consols Mine at Kuridala for Mount Elliott Limited, followed with the purchases of other small mines in the district.
Walkers Limited of Maryborough was commissioned to manufacture a new water jacket furnace for the smelters. An air compressor and blower for the smelters were constructed in the powerhouse and an electric motor and dynamo provided power for the crane and lighting for the smelter and mine.
The new smelter was blown in September 1910, a month after the first train arrived, and it ran well, producing of blister copper by the end of the year. The new smelting plant made it possible to cope with low-grade sulphide ores at Mount Elliott. The use of low-grade sulphide ores bought from the Hampden Consols Mine in 1911 made it clear that if a supply of higher sulphur ore could be obtained and blended, performance and economy would improve. Accordingly, the company bought a number of smaller mines in the district in 1912.
Corbould mined with cut and fill stoping but a young Mines Inspector condemned the system, ordered it dismantled and replaced with square set timbering. In 1911, after gradual movement in stopes on the No.3 level, the smelter was closed for two months. Nevertheless, of blister copper was produced in 1911, rising to in 1912 - the company's best year. Many of the surviving structures at the site were built at this time.
Troubles for Mount Elliott started in 1913. In February, a fire at the Consols Mine closed it for months. In June, a thirteen-week strike closed the whole operation, severely depleting the workforce. The year 1913 was also bad for industrial accidents in the area, possibly due to inexperienced people replacing the strikers. Nevertheless, the company paid generous dividends that year.
At the end of 1914 smelting ceased for more than a year due to shortage of ore. Although of blister copper was produced in 1913, production fell to in 1914 and the workforce dwindled to only 40 men. For the second half of 1915 and early 1916 the smelter treated ore railed south from Mount Cuthbert. At the end of July 1916 the smelting plant at Selwyn was dismantled except for the flue chambers and stacks. A new furnace with a capacity of per day was built, a large amount of second-hand equipment was obtained and the converters were increased in size.
After the enlarged furnace was commissioned in June 1917, continuing industrial unrest retarded production which amounted to only of copper that year. The point of contention was the efficiency of the new smelter which processed twice as much ore while employing fewer men. The company decided to close down the smelter in October and reduce the size of the furnace, the largest in Australia, from . In the meantime the price of copper had almost doubled from 1916 due to wartime consumption of munitions.
The new furnace commenced on 16 January 1918 and of ore were smelted yielding of blister copper which were sent to the Bowen refinery before export to Britain. Local coal and coke supply was a problem and materials were being sourced from the distant Bowen Colliery. The smelter had a good run for almost a year except for a strike in July and another in December, which caused Corbould to close down the plant until New Year. In 1919, following relaxation of wartime controls by the British Metal Corporation, the copper price plunged from about per ton at the start of the year to per ton in April, dashing the company's optimism regarding treatment of low grade ores. The smelter finally closed after two months operation and most employees were laid off.
For much of the period 1919 to 1922, Corbould was in England trying to raise capital to reorganise the company's operations but he failed and resigned from the company in 1922. The Mount Elliott Company took over the assets of the other companies on the Cloncurry field in the 1920s - Mount Cuthbert in 1925 and Kuridala in 1926. Mount Isa Mines bought the Mount Elliott plant and machinery, including the three smelters, in 1943 for , enabling them to start copper production in the middle of the Second World War. The Mount Elliott Company was finally liquidated in 1953.
In 1950 A.E. Powell took up the Mount Elliott Reward Claim at Selwyn and worked close to the old smelter buildings. An open cut mine commenced at Starra, south of Mount Elliott and Selwyn, in 1988 and is Australia's third largest copper producer, producing copper-gold concentrates from flotation and gold bullion from carbon-in-leach processing.
Profitable copper-gold ore bodies were recently proved at depth beneath the Mount Elliott smelter and old underground workings by Cyprus Gold Australia Pty Ltd. These deposits were subsequently acquired by Arimco Mining Pty Ltd for underground development which commenced in July 1993. A decline tunnel portal, ore and overburden dumps now occupy a large area of the Maggie Creek valley south-west of the smelter which was formerly the site of early miner's camps.
Selwyn Township
In 1907, the first hotel, run by H. Williams, was opened at the site. The township was surveyed later, around 1910, by the Queensland Mines Department. The town was to be situated north of the mine and smelter operations adjacent the railway, about distant. It took its name from the nearby Selwyn Ranges which were named, during Burke's expedition, after the Victorian Government Geologist, Alfred Richard Cecil Selwyn. The town has also been known by the name of Mount Elliott, after the nearby mines and smelter.
Many of the residents either worked at the Mount Elliott Mine and Smelter or worked in the service industries which grew around the mining and smelting operations. Little documentation exists about the everyday life of the town's residents. Surrounding sheep and cattle stations, however, meant that meat was available cheaply and vegetables grown in the area were delivered to the township by horse and cart. Imported commodities were, however, expensive.
By 1910 the town had four hotels. There was also an aerated water manufacturer, three stores, four fruiterers, a butcher, baker, saddler, garage, police, hospital, banks, post office (officially from 1906 to 1928, then unofficially until 1975) and a railway station. There was even an orchestra of ten players in 1912. The population of Selwyn rose from 1000 in 1911 to 1500 in 1918, before gradually declining.
Description
Mount Elliott Smelter
Mount Elliott Smelter is located in Cloncurry Shire, approximately one kilometre south of the former township of Selwyn, and south of Cloncurry. The main processing infrastructure and remains are centralised on the northern side of the low hill known as Mount Elliott.
Immediately north of the main processing complex is a shallow basin formed by natural rises in ground. Within this basin lay the powerhouse and boiler house machinery beds, ore tunnel, beehive kiln, the lower condenser area and railway embankments.
To the east of the central complex are the remains of an assay office, furnaces, one possible and one identifiable explosives magazine, the upper condenser bank, tank stands, and the remains of a substantial residence.
To the south-west, on the central low hill, are the remains of a winder engine machine-bed, a square brick stack that served the boilers of the winder complex, and the single remaining bed and footings of the primary ore crusher.
Further west on low ridges are the remains of a strong room, office, stone tank stand and smithy.
Selwyn Township
The Selwyn Township is located at the northern end of a valley running south to Mount Elliott Smelter which is about distant. All buildings have been removed and the evidence of the township now comprises garden plots, cement surfaces and corrugated iron water tanks.
The site of the Union Hotel remains identifiable and the timber stumps of the stationmaster's house, the railway formation and surviving timber sleepers are among the most visible remains. The railway embankment follows the eastern side of the valley between the town and smelter passing several miners' hut sites comprising rough stone walls, and benched surfaces with stone retaining walls.
Identifiable sites in the valley include a cement surface formerly under the high-set school building, the police station site, and the original smelter site at the base of a hill on the western side of the valley.
The town cemetery, about south-east of the town, contains about fifteen headstones in separate sections for Catholics and Protestants. All but one headstone are from Melrose & Fenwick of Townsville. The grave sites include three women, a returned Anzac accidentally killed in the mine, a man who died of injuries in the local hospital, and a miner from Mount Cobalt who was buried in 1925 after Selwyn was almost deserted.
Heritage listing
Mount Elliott Mining Complex was listed on the Queensland Heritage Register on 16 September 2011 having satisfied the following criteria.
The Mount Elliott Mining Complex, incorporating the remnants of the Mount Elliott Mine, Smelter, a range of associated infrastructure, scattered archaeological artefacts, the abandoned town of Selwyn and its associated cemetery, has the potential to provide important information on aspects of Queensland's history particularly early copper smelter practices and technologies, the full range of activities peripheral to those base operations and, importantly, the people who lived and worked in this complex historic mining landscape.
The Mount Elliott Mining Complex has sufficient archaeological integrity and diversity in its assemblage to facilitate detailed studies which would reveal the largely undocumented social and cultural aspects of the occupation and use of the mine, smelter and Selwyn Township areas. Important research questions could focus on, but are not limited to, cultural identity and ethnicity, socioeconomic status, individual and collective living conditions, and individual adaptations to the remoteness and harshness of the local environment.
Archaeological investigations within the Mount Elliott Mining Complex have potential to reveal specific details about the function and use of the area that complement and augment archival records. Investigations of the remnant mining and smelting infrastructure may help answer important research questions relating to mining and smelting operations including, but not limited to:
the design and operation of an early primary ore-processing plant, base metals mine and smelting operation in Queensland
adaptation of work practices due to remoteness, harshness and the local environment
fundamental and influential changes to copper mining and smelting practice in Queensland initiated by Mount Elliott's manager W.H. Corbould (1907-1922), especially the use of more efficient and economical production techniques compared to other similar operations
The Mount Elliott Mining Complex is an important component of a broader historic mining landscape as operations at Mount Elliott helped initiate extractive and primary processing industries in the Cloncurry region and north-west Queensland generally. The remnant infrastructure, mining artefacts, and the remains of the mining township of Selwyn have potential for important comparative material to other mining sites in the region, particularly the nearby Hampden Company Smelter at Kuridala and Mount Cuthbert Township and Smelter. Archaeological investigations at the Mount Elliott Mining Complex may reveal new information that expands our understanding of such enterprises and on the everyday lives of the people living and working at such sites across Queensland.
See also
Mount Elliott Company Metallurgical Plant and Mill
Mount Elliott mine
References
Burke, Heather and Gordon Grimwade (2001) Cultural Heritage Recommendations for the Mount Elliott Mine and Smelter, Northwest QLD, unpublished report to AustralAsian Resource Consultants Pty Ltd
Hooper, Colin (1993) Angor to Zillmanton: Stories of North Queensland's deserted towns. Mundingburra: Colin Hooper
Hore-Lacy, I. (ed) (1981) Broken Hill to Mount Isa: The Mining Odyssey of W.H. Corbould, Melbourne: Hyland House
Kerr, Ruth (1992) Queensland Historical Mining Sites Study, Volume 4, Unpublished report to the Department of Environment and Heritage, Brisbane
Knight, James (1992) Mount Elliott Mine and Smelter Site, North West Queensland: a preliminary survey and conservation recommendations, unpublished report to Cyprus Gold Australia Corporation
Lennon, Jane and Howard Pearce (1996) Mining Heritage Places Study: Northern and Western Queensland, Volume 4: Mount Isa Mining District, unpublished report to the Queensland Department of Environment and Heritage
Nexus Archaeology & Heritage (2009) Assessment of Historical and Industrial Archaeology Values: Mt Elliott Smelter Precinct, North-west Queensland, unpublished report to Ivanhoe Cloncurry Limited
Attribution
External links
Queensland Heritage Register
Shire of Cloncurry
Industrial buildings in Queensland
Articles incorporating text from the Queensland Heritage Register
Smelting
Copper mines in Queensland
Metal companies of Australia
Copper mining companies of Australia
Archaeological sites in Queensland | Mount Elliott Mining Complex | [
"Chemistry"
] | 3,617 | [
"Metallurgical processes",
"Smelting"
] |
51,016,663 | https://en.wikipedia.org/wiki/Differential%20forms%20on%20a%20Riemann%20surface | In mathematics, differential forms on a Riemann surface are an important special case of the general theory of differential forms on smooth manifolds, distinguished by the fact that the conformal structure on the Riemann surface intrinsically defines a Hodge star operator on 1-forms (or differentials) without specifying a Riemannian metric. This allows the use of Hilbert space techniques for studying function theory on the Riemann surface and in particular for the construction of harmonic and holomorphic differentials with prescribed singularities. These methods were first used by Hilbert in his variational approach to the Dirichlet principle, making rigorous the arguments proposed by Riemann. Later Weyl found a direct approach using his method of orthogonal projection, a precursor of the modern theory of elliptic differential operators and Sobolev spaces. These techniques were originally applied to prove the uniformization theorem and its generalization to planar Riemann surfaces. Later they supplied the analytic foundations for the harmonic integrals of Hodge. This article covers general results on differential forms on a Riemann surface that do not rely on any choice of Riemannian structure.
Hodge star on 1-forms
On a Riemann surface the Hodge star is defined on 1-forms by the local formula ∗(p dx + q dy) = −q dx + p dy.
It is well-defined because it is invariant under holomorphic changes of coordinate.
Indeed, if is holomorphic as a function of ,
then by the Cauchy–Riemann equations and . In the new coordinates
so that
proving the claimed invariance.
Note that for 1-forms and
In particular if then
Note that in standard coordinates
Recall also that
so that
The decomposition is independent of the choice of local coordinate. The 1-forms with only a dz component are called (1,0) forms; those with only a dz̄ component are called (0,1) forms. The operators ∂ and ∂̄ are called the Dolbeault operators.
It follows that
The Dolbeault operators can similarly be defined on 1-forms and as zero on 2-forms. They have the properties
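Several of the displayed formulas in this section were lost in extraction. A plausible reconstruction, assuming the standard conventions for a local holomorphic coordinate z = x + iy (and consistent with the formula ∗(p dx + q dy) = −q dx + p dy used later for the torus), is:

\[
\ast dx = dy, \qquad \ast dy = -dx, \qquad dz = dx + i\,dy, \qquad d\bar z = dx - i\,dy,
\]
\[
\ast dz = -i\,dz, \qquad \ast d\bar z = i\,d\bar z, \qquad \ast\ast = -1 \ \text{on 1-forms},
\]
\[
df = \partial f + \bar\partial f, \qquad \partial f = \frac{\partial f}{\partial z}\,dz, \qquad \bar\partial f = \frac{\partial f}{\partial \bar z}\,d\bar z, \qquad
d = \partial + \bar\partial, \qquad \partial^2 = \bar\partial^2 = 0, \qquad \partial\bar\partial = -\bar\partial\partial.
\]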
Poincaré lemma
On a Riemann surface the Poincaré lemma states that every closed 1-form or 2-form is locally exact. Thus if ω is a smooth 1-form with then in some open neighbourhood of a given point there is a smooth function f such that in that neighbourhood; and for any smooth 2-form Ω there is a smooth 1-form ω defined in some open neighbourhood of a given point such that in that neighbourhood.
If is a closed 1-form on , then . If then and . Set
so that . Then must satisfy and . The right hand side here is independent of x since its partial derivative with respect to x is 0. So
and hence
Similarly, if then with . Thus a solution is given by and
Comment on differential forms with compact support. Note that if ω has compact support, so vanishes outside some smaller rectangle with and , then the same is true for the solution f(x,y). So the Poincaré lemma for 1-forms holds with this additional condition of compact support.
A similar statement is true for 2-forms; but, since there are some choices for the solution, a little more care has to be taken in making those choices.
In fact if Ω has compact support on and if furthermore , then with ω a 1-form of compact support on . Indeed, Ω must have support in some smaller rectangle with and . So vanishes for or and for or . Let h(y) be a smooth function supported in (c1,d1) with . Set : it is a smooth function supported in (a1,b1). Hence is smooth and supported in . It now satisfies . Finally set
Both P and Q are smooth and supported in with and . Hence is a smooth 1-form supported in with
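As a concrete illustration of the computation sketched above (a standard reconstruction; the notation ω = p dx + q dy and the base point (x0, y0) are chosen here and need not match the original), a closed 1-form on a rectangle has the explicit primitive

\[
f(x,y) = \int_{x_0}^{x} p(t,y)\,dt + \int_{y_0}^{y} q(x_0,s)\,ds ,
\]

since \(f_x = p\) and, using closedness \(p_y = q_x\), \(f_y = \int_{x_0}^{x} p_y(t,y)\,dt + q(x_0,y) = \int_{x_0}^{x} q_x(t,y)\,dt + q(x_0,y) = q(x,y)\), so that df = ω.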
Integration of 2-forms
If Ω is a continuous 2-form of compact support on a Riemann surface X, its support K can be covered by finitely many coordinate charts Ui and there is a partition of unity χi of smooth non-negative functions with compact support such that Σ χi = 1 on a neighbourhood of K. Then the integral of Ω is defined by ∫X Ω = Σi ∫Ui χi Ω,
where the integral over Ui has its usual definition in local coordinates. The integral is independent of the choices here.
If Ω has the local representation f(x,y) dx ∧ dy, then |Ω| is the density |f(x,y)| dx ∧ dy, which is well defined and satisfies |∫X Ω| ≤ ∫X |Ω|. If Ω is a non-negative continuous density, not necessarily of compact support, its integral is defined by
If Ω is any continuous 2-form it is integrable if ∫X |Ω| < ∞. In this case, if ∫X |Ω| = lim ∫X ψn |Ω|, then ∫X Ω can be defined as lim ∫X ψn Ω. The integrable continuous 2-forms form a complex normed space with norm ||Ω||1 = ∫X |Ω|.
Integration of 1-forms along paths
If ω is a 1-form on a Riemann surface X and γ(t) for is a smooth path in X, then the mapping γ induces a 1-form γ∗ω on [a,b]. The integral of ω along γ is defined by
This definition extends to piecewise smooth paths γ by dividing the path up into the finitely many segments on which it is smooth. In local coordinates if and then
so that
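The stripped local-coordinate formula can plausibly be restored as follows (a reconstruction; ω = p dx + q dy and γ(t) = (x(t), y(t)) are notations assumed here):

\[
\gamma^{*}\omega = \bigl(p(\gamma(t))\,x'(t) + q(\gamma(t))\,y'(t)\bigr)\,dt,
\qquad
\int_\gamma \omega = \int_a^b \bigl(p(\gamma(t))\,x'(t) + q(\gamma(t))\,y'(t)\bigr)\,dt .
\]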
Note that if the 1-form ω is exact on some connected open set U, so that for some smooth function f on U (unique up to a constant), and γ(t), , is a smooth path in U, then
This depends only on the difference of the values of f at the endpoints of the curve, so is independent of the choice of f. By the Poincaré lemma, every closed 1-form is locally exact, so this allows ∫γ ω to be computed as a sum of differences of this kind and for the integral of closed 1-forms to be extended to continuous paths:
Monodromy theorem. If ω is a closed 1-form, the integral can be extended to any continuous path γ(t), so that it is invariant under any homotopy of paths keeping the end points fixed.
In fact, the image of γ is compact, so can be covered by finitely many connected open sets Ui on each of which ω can be written dfi for some smooth function fi on Ui, unique up to a constant. It may be assumed that [a,b] is broken up into finitely many closed intervals with and so that . From the above if γ is piecewise smooth,
Now γ(ti) lies in the open set , hence in a connected open component Vi. The difference satisfies , so is a constant ci independent of γ. Hence
The formula on the right hand side also makes sense if γ is just continuous on [a,b] and can be used to define . The definition is independent of choices: for the curve γ can be uniformly approximated by piecewise smooth curves δ so close that for all i; the formula above then equals and shows the integral is independent of the choice of δ. The same argument shows that the definition is also invariant under small homotopies fixing endpoints; by compactness, it is therefore invariant under any homotopy fixing endpoints.
The same argument shows that a homotopy between closed continuous loops does not change their integrals over closed 1-forms. Since , the integral of an exact form over a closed loop vanishes.
Conversely if the integral of a closed 1-form ω over any closed loop vanishes, then the 1-form must be exact.
Indeed a function f(z) can be defined on X by fixing a point w, taking any path δ from w to z and setting . The assumption implies that f is independent of the path. To check that , it suffices to check this locally. Fix z0 and take a path δ1 from w to z0. Near z0 the Poincaré lemma implies that for some smooth function g defined in a neighbourhood of z0. If δ2 is a path from z0 to z, then , so f differs from g by a constant near z0. Hence near z0.
A closed 1-form is exact if and only if its integral around any piecewise smooth or continuous Jordan curve vanishes.
In fact the integral is already known to vanish for an exact form, so it suffices to show that if for all piecewise smooth closed Jordan curves γ then for all closed continuous curves γ. Let γ be a closed continuous curve. The image of γ can be covered by finitely many opens on which ω is exact and this data can be used to define the integral on γ. Now recursively replace γ by smooth segments between successive division points on the curve so that the resulting curve δ has only finitely many intersection points and passes through each of these only twice. This curve can be broken up as a superposition of finitely many piecewise smooth Jordan curves. The integral over each of these is zero, so their sum, the integral over δ, is also zero. By construction the integral over δ equals the integral over γ, which therefore vanishes.
The above argument also shows that given a continuous Jordan curve γ(t), there is a finite set of simple smooth Jordan curves γi(t) with nowhere zero derivatives such that
for any closed 1-form ω. Thus to check exactness of a closed form it suffices to show that the vanishing of the integral around any regular closed curve, i.e. a simple smooth Jordan curve with nowhere vanishing derivative.
The same methods show that any continuous loop on a Riemann surface is homotopic to a smooth loop with nowhere zero derivative.
Green–Stokes formula
If U is a bounded region in the complex plane with boundary consisting of piecewise smooth curves and ω is a 1-form defined on a neighbourhood of the closure of U, then the Green–Stokes formula states that
In particular if ω is a 1-form of compact support on C then
since the formula may be applied to a large disk containing the support of ω.
Similar formulas hold on a Riemann surface X and can be deduced from the classical formulas using partitions of unity. Thus if is a connected region with compact closure and piecewise smooth boundary ∂U and ω is a 1-form defined on a neighbourhood of the closure of U, then the Green–Stokes formula states that
Moreover, if ω is a 1-form of compact support on X then
To prove the second formula take a partition of unity ψi supported in coordinate charts covering the support of ω. Then , by the planar result. Similarly to prove the first formula it suffices to show that
when ψ is a smooth function compactly supported in some coordinate patch. If the coordinate patch avoids the boundary curves, both sides vanish by the second formula above. Otherwise it can be assumed that the coordinate patch is a disk, the boundary of which cuts the curve transversely at two points. The same will be true for a slightly smaller disk containing the support of ψ. Completing the curve to a Jordan curve by adding part of the boundary of the smaller disk, the formula reduces to the planar Green-Stokes formula.
The Green–Stokes formula implies an adjoint relation for the Laplacian on functions defined as Δf = −d∗df. This gives a 2-form, given in local coordinates by the formula Δf = −(fxx + fyy) dx∧dy.
Then if f and g are smooth and the closure of U is compact
Moreover, if f or g has compact support then
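A plausible restoration of the stripped Green identities in this paragraph, using the conventions Δf = −d∗df and ∗(p dx + q dy) = −q dx + p dy adopted above (the original displays may have been arranged differently):

\[
\int_U \bigl(f\,\Delta g - g\,\Delta f\bigr) = \int_{\partial U} \bigl(g\,{\ast}df - f\,{\ast}dg\bigr),
\qquad
\int_X f\,\Delta g = \int_X df \wedge {\ast}dg = \int_X g\,\Delta f ,
\]

the second chain holding when f or g has compact support, so that the boundary terms vanish.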
Duality between 1-forms and closed curves
Theorem. If γ is a continuous Jordan curve on a Riemann surface X, there is a smooth closed 1-form α of compact support such that for any closed smooth 1-form ω on X.
It suffices to prove this when γ is a regular closed curve. By the inverse function theorem, there is a tubular neighbourhood of the image of γ, i.e. a smooth diffeomorphism of the annulus into X such that . Using a bump function on the second factor, a non-negative function g with compact support can be constructed such that g is smooth off γ, has support in a small neighbourhood of γ, and in a sufficiently small neighbourhood of γ is equal to 0 for and 1 for . Thus g has a jump discontinuity across γ, although its differential dg is smooth with compact support. But then, setting , it follows from Green's formula applied to the annulus that
Corollary 1. A closed smooth 1-form ω is exact if and only if for all smooth 1-forms α of compact support.
In fact if ω is exact, it has the form df for f smooth, so that by Green's theorem. Conversely, if for all smooth 1-forms α of compact support, the duality between Jordan curves and 1-forms implies that the integral of ω around any closed Jordan curve is zero and hence that ω is exact.
Corollary 2. If γ is a continuous closed curve on a Riemann surface X, there is a smooth closed 1-form α of compact support such that for any closed smooth 1-form ω on X. The form α is unique up to adding an exact form and can be taken to have support in any open neighbourhood of the image of γ.
In fact γ is homotopic to a piecewise smooth closed curve δ, so that . On the other hand there are finitely many piecewise smooth Jordan curves δi such that . The result for δi thus implies the result for γ. If β is another form with the same property, the difference satisfies for all closed smooth 1-forms ω. So the difference is exact by Corollary 1. Finally, if U is any neighbourhood of the image of γ, then the last result follows by applying first assertion to γ and U in place of γ and X.
Intersection number of closed curves
The intersection number of two closed curves γ1, γ2 in a Riemann surface X can be defined analytically by the formula ∫X α1 ∧ α2
where α1 and α2 are smooth 1-forms of compact support corresponding to γ1 and γ2. From the definition it follows that . Since αi can be taken to have its support in a neighbourhood of the image of γi, it follows that if γ1 and γ2 are disjoint. By definition it depends only on the homotopy classes of γ1 and γ2.
More generally the intersection number is always an integer and counts the number of times with signs that the two curves intersect. A crossing at a point is a positive or negative crossing according to whether dγ1 ∧ dγ2 has the same or opposite sign to , for a local holomorphic parameter z = x + iy.
Indeed, by homotopy invariance, it suffices to check this for smooth Jordan curves with nowhere vanishing derivatives. The 1-form α1 can be defined by taking α1 = df with f of compact support in a neighbourhood of the image of γ1 equal to 0 near the left hand side of γ1, 1 near the right hand side of γ1 and smooth off the image of γ1. Then if the points of intersection of γ2(t) with γ1 occur at t = t1, ..., tm, then
This gives the required result since the jump is + 1 for a positive crossing and −1 for a negative crossing.
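Writing I(γ1, γ2) for the intersection number (a symbol chosen here for readability), the stripped displays in this argument can plausibly be restored as

\[
I(\gamma_1,\gamma_2) \;=\; \int_X \alpha_1 \wedge \alpha_2 \;=\; \int_{\gamma_2} \alpha_1 \;=\; \sum_{i=1}^{m} \pm 1,
\]

each crossing at t = ti contributing the jump ±1 of f across γ1.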
Holomorphic and harmonic 1-forms
A holomorphic 1-form ω is one that in local coordinates is given by an expression f(z) dz with f holomorphic. Since it follows that dω = 0, so any holomorphic 1-form is closed. Moreover, since ∗dz = −i dz, ω must satisfy ∗ω = −iω. These two conditions characterize holomorphic 1-forms. For if ω is closed, locally it can be written as dg for some g. The condition ∗dg = −i dg forces ∂g/∂z̄ = 0, so that g is holomorphic and dg = g′(z) dz, so that ω is holomorphic.
Let ω = f dz be a holomorphic 1-form. Write ω = ω1 + iω2 with ω1 and ω2 real. Then dω1 = 0 and dω2 = 0; and since ∗ω = −iω, ∗ω1 = ω2. Hence d∗ω1 = 0. This process can clearly be reversed, so that there is a one-one correspondence between holomorphic 1-forms and real 1-forms ω1 satisfying dω1 = 0 and d∗ω1 = 0. Under this correspondence, ω1 is the real part of ω while ω is given by ω = ω1 + i∗ω1. Such forms ω1 are called harmonic 1-forms. By definition ω1 is harmonic if and only if ∗ω1 is harmonic.
Since holomorphic 1-forms locally have the form df with f a holomorphic function and since the real part of a holomorphic function is harmonic, harmonic 1-forms locally have the form dh with h a harmonic function. Conversely if ω1 can be written in this way locally, d∗ω1 = d∗dh = (hxx + hyy) dx∧dy so that h is harmonic.
Remark. The definition of harmonic functions and 1-forms is intrinsic and only relies on the underlying Riemann surface structure. If, however, a conformal metric is chosen on the Riemann surface, the adjoint d* of d can be defined and the Hodge star operation extended to functions and 2-forms. The Hodge Laplacian can be defined on k-forms as ∆k = dd* +d*d and then a function f or a 1-form ω is harmonic if and only if it is annihilated by the Hodge Laplacian, i.e. ∆0f = 0 or ∆1ω = 0. The metric structure, however, is not required for the application to the uniformization of simply connected or planar Riemann surfaces.
Sobolev spaces on T2
The theory of Sobolev spaces on can be found in , an account which is followed in several later textbooks such as and . It provides an analytic framework for studying function theory on the torus C/Z+i Z = R2 / Z2 using Fourier series, which are just eigenfunction expansions for the Laplacian . The theory developed here essentially covers tori C / Λ where Λ is a lattice in C. Although there is a corresponding theory of Sobolev spaces on any compact Riemann surface, it is elementary in this case, because it reduces to harmonic analysis on the compact Abelian group . Classical approaches to Weyl's lemma use harmonic analysis on the non-compact Abelian group C = R2, i.e. the methods of Fourier analysis, in particular convolution operators and the fundamental solution of the Laplacian.
Let T2 = {(eix,eiy: x, y ∊ [0,2π)} = R2/Z2 = C/Λ where Λ = Z + i Z.
For λ = m + i n ≅ (m,n) in Λ, set . Furthermore, set Dx = −i∂/∂x and Dy = −i∂/∂y. For α = (p,q) set Dα =(Dx)p (Dy)q, a differential operator of total degree |α| = p + q. Thus , where . The (eλ) form an orthonormal basis in C(T2) for the inner product , so that .
For f in C∞(T2) and k an integer, define the kth Sobolev norm by ||f||(k)2 = Σ |aλ|2(1 + |λ|2)k, where aλ = (f,eλ).
The associated inner product
makes C∞(T2) into an inner product space. Let Hk(T2) be its Hilbert space completion. It can be described equivalently as the Hilbert space completion of the space of trigonometric polynomials—that is finite sums —with respect to the kth Sobolev norm, so that Hk(T2) = {Σ aλ eλ : Σ |aλ|2(1 + |λ|2)k < ∞} with inner product
(Σ aλ eλ, Σ bμ eμ)(k) = Σ aλ b̄λ (1 + |λ|2)k.
As explained below, the elements in the intersection H∞(T2) = Hk(T2) are exactly the smooth functions on T2; elements in the union H−∞(T2) = Hk(T2) are just distributions on T2 (sometimes referred to as "periodic distributions" on R2).
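As a numerical illustration of these definitions (an illustrative sketch only — the helper names and the restriction to finitely many frequencies are choices made here, not part of the original text), the Sobolev norms and the action of I + Δ can be computed directly from Fourier coefficients:

```python
import numpy as np

def sobolev_norm(coeffs, k):
    """kth Sobolev norm of a trigonometric polynomial.

    coeffs: dict mapping a frequency lambda = (m, n) to its Fourier
    coefficient a_lambda.  The norm is (sum |a|^2 (1 + |lambda|^2)^k)^(1/2).
    """
    return np.sqrt(sum(abs(a) ** 2 * (1 + m * m + n * n) ** k
                       for (m, n), a in coeffs.items()))

def apply_I_plus_laplacian(coeffs):
    """(I + Delta) acts on e_lambda as multiplication by 1 + |lambda|^2."""
    return {(m, n): (1 + m * m + n * n) * a for (m, n), a in coeffs.items()}

# Example: f = e_(1,0) + 0.5 e_(2,1)
f = {(1, 0): 1.0, (2, 1): 0.5}
print(sobolev_norm(f, 0))   # L^2 norm
print(sobolev_norm(f, 2))   # H^2 norm
g = apply_I_plus_laplacian(f)
# (I + Delta) is unitary from H^{k+2} onto H^k: these two numbers agree.
print(sobolev_norm(f, 2), sobolev_norm(g, 0))
```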
The following is a (non-exhaustive) list of properties of the Sobolev spaces.Differentiability and Sobolev spaces. for k ≥ 0 since, using the binomial theorem to expand (1 + |λ|2)k,
Differential operators. Dα Hk(T2) ⊂ Hk−|α|(T2) and Dα defines a bounded linear map from Hk(T2) to Hk−|α|(T2). The operator I + Δ defines a unitary map of Hk+2(T2) onto Hk(T2); in particular (I + Δ)k defines a unitary map of Hk(T2) onto H−k(T2) for k ≥ 0.
The first assertions follow because Dα eλ = λα eλ and |λα| ≤ |λ||α| ≤ (1 + |λ|2)|α|/2. The second assertions follow because I + Δ acts as multiplication by 1 + |λ|2 on eλ.Duality. For k ≥ 0, the pairing sending f, g to (f,g) establishes a duality between Hk(T2) and H−k(T2).
This is a restatement of the fact that (I + Δ)k establishes a unitary map between these two spaces, because .Multiplication operators. If h is a smooth function then multiplication by h defines a continuous operator on Hk(T2).
For k ≥ 0, this follows from the formula for ||f|| above and the Leibniz rule. Continuity for H−k(T2) follows by duality, since .Sobolev spaces and differentiability (Sobolev's embedding theorem). For k ≥ 0, and sup|α|≤k |Dαf| ≤ Ck ⋅ ||f||(k+2).
The inequalities for trigonometric polynomials imply the containments. The inequality for k = 0 follows from
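(The displayed estimate stripped at this point can plausibly be restored as follows; the reconstruction assumes the standard argument and may differ in layout from the original.)

\[
\sup |f| \;\le\; \sum_{\lambda} |a_\lambda|
\;\le\; \Bigl(\sum_{\lambda} (1 + |\lambda|^2)^{-2}\Bigr)^{1/2}
\Bigl(\sum_{\lambda} |a_\lambda|^2 (1 + |\lambda|^2)^{2}\Bigr)^{1/2}
\;=\; C_0\, \|f\|_{(2)}
\]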
by the Cauchy-Schwarz inequality. The first term is finite by the integral test, since ∬C (1 + |z|2)−2 dx dy = < ∞ using polar coordinates. In general if |α| ≤ k, then |sup Dαf| ≤ C0 ||Dαf||2 ≤ C0 ⋅ Cα ⋅ ||f||k+2 by the continuity properties of Dα.Smooth functions. C∞(T2) = Hk(T2) consists of Fourier series Σ aλ eλ such that for all k > 0, (1 + |λ|2)k |aλ| tends to 0 as |λ| tends to ∞, i.e. the Fourier coefficients aλ are of "rapid decay".
This is an immediate consequence of the Sobolev embedding theorem.Inclusion maps (Rellich's compactness theorem). If k > j, the space Hk(T2) is a subspace of Hj(T2) and the inclusion Hk(T2) Hj(T2) is compact.
With respect to the natural orthonormal bases, the inclusion map becomes multiplication by (1 + |λ|2)−(k−j)/2. It is therefore compact because it is given by a diagonal matrix with diagonal entries tending to zero.Elliptic regularity (Weyl's lemma). Suppose that f and u in H−∞(T2) = Hk(T2) satisfy ∆u = f. Suppose also that ψ f is a smooth function for every smooth function ψ vanishing off a fixed open set U in T2; then the same is true for u. (Thus if f is smooth off U, so is u.)
By the Leibniz rule , so . If it is known that φu lies in Hk(T2) for some k and all φ vanishing off U, then differentiating shows that φux and φuy lie in Hk−1(T2). The square-bracketed expression therefore also lies in Hk−1(T2). The operator (I + Δ)−1 carries this space onto Hk+1(T2), so that ψu must lie in Hk+1(T2). Continuing in this way, it follows that ψu lies in Hk(T2) = C∞(T2).Hodge decomposition on functions. H0(T2) = ∆ H2(T2) ker ∆ and C∞(T2) = ∆ C∞(T2) ker ∆.
Identifying H2(T2) with L2(T2) = H0(T2) using the unitary operator I + Δ, the first statement reduces to proving that the operator T = ∆(I + Δ)−1 satisfies L2(T2) = im T ker T. This operator is bounded, self-adjoint and diagonalized by the orthonormal basis eλ with eigenvalue |λ|2(1 + |λ|2)−1. The operator T has kernel C e0 (the constant functions) and on (ker T)⊥ = it has a bounded inverse given by S eλ = |λ|−2(1 + |λ|2) eλ for λ ≠ 0. So im T must be closed and hence L2(T2) = (ker T)⊥ ker T = im T ker T. Finally if f = ∆g + h with f in C∞(T2), g in H2(T2) and h constant, g must be smooth by Weyl's lemma.Hodge theory on T2. Let Ωk(T2) be the space of smooth k-forms for 0 ≤ k ≤ 2. Thus Ω0(T2) = C∞(T2), Ω1(T2) = C∞(T2) dx C∞(T2) dy and Ω2(T2) = C∞(T2) dx ∧ dy. The Hodge star operation is defined on 1-forms by ∗(p dx + q dy) = −q dx + p dy. This definition is extended to 0-forms and 2-forms by *f = f dx ∧ dy and *(g dx ∧ dy) = g. Thus ** = (−1)k on k-forms. There is a natural complex inner product on Ωk(T2) defined by
Define . Thus δ takes Ωk(T2) to Ωk−1(T2), annihilating functions; it is the adjoint of d for the above inner products, so that . Indeed by the Green-Stokes formula
The operators d and δ = d* satisfy d2 = 0 and δ2 = 0. The Hodge Laplacian on k-forms is defined by . From the definition . Moreover and . This allows the Hodge decomposition to be generalised to include 1-forms and 2-forms:Hodge theorem. Ωk(T2) = ker d ker d∗ im d im ∗d = ker d ker d* im d im d*. In the Hilbert space completion of Ωk(T2) the orthogonal complement of is , the finite-dimensional space of harmonic k-forms, i.e. the constant k-forms. In particular in , , the space of harmonic k-forms. Thus the de Rham cohomology of T2 is given by harmonic (i.e. constant) k-forms.
From the Hodge decomposition on functions, Ωk(T2) = ker ∆k im ∆k. Since ∆k = dd* + d*d, ker ∆k = ker d ker d*. Moreover im (dd* + d*d) ⊊ im d im d*. Since ker d ker d* is orthogonal to this direct sum, it follows that Ωk(T2) = ker d ker d* im d im d*. The last assertion follows because ker d contains and is orthogonal to im d* = im ∗d.
Hilbert space of 1-forms
In the case of the compact Riemann surface C / Λ, the theory of Sobolev spaces shows that the Hilbert space completion of smooth 1-forms can be decomposed as the sum of three pairwise orthogonal spaces, the closure of exact 1-forms df, the closure of coexact 1-forms ∗df and the harmonic 1-forms (the 2-dimensional space of constant 1-forms). The method of orthogonal projection of Weyl puts Riemann's approach to the Dirichlet principle on sound footing by generalizing this decomposition to arbitrary Riemann surfaces.
If X is a Riemann surface Ω(X) denote the space of continuous 1-forms with compact support. It admits the complex inner product
for α and β in Ω(X). Let H denote the Hilbert space completion of Ω(X). Although H can be interpreted in terms of measurable functions, like Sobolev spaces on tori it can be studied directly using only elementary functional analytic techniques involving Hilbert spaces and bounded linear operators.
Let H1 denote the closure of d C(X) and H2 denote the closure of ∗d C(X). Since , these are orthogonal subspaces. Let H0 denote the orthogonal complement (H1 ⊕ H2)⊥ = H ⊖ (H1 ⊕ H2).
Theorem (Hodge−Weyl decomposition). H = H0 ⊕ H1 ⊕ H2. The subspace H0 consists of square integrable harmonic 1-forms on X, i.e. 1-forms ω such that dω = 0, d∗ω = 0 and ||ω||2 = ∫X ω ∧ ∗ω̄ < ∞.
Every square integrable continuous 1-form lies in H.
The space of continuous 1-forms of compact support is contained in the space of square integrable continuous 1-forms. They are both inner product spaces for the above inner product. So it suffices to show that any square integrable continuous 1-form can be approximated by continuous 1-forms of compact support. Let ω be a continuous square integrable 1-form, Thus the positive density Ω = ω ∧ ∗ is integrable and there are continuous functions of compact support ψn with 0 ≤ ψn ≤ 1 such that ∫X ψn Ω tends to ∫X Ω = ||ω||2. Let , a continuous function of compact support with . Then ωn = φn ⋅ ω tends to ω in H, since ||ω − ωn||2 = ∫X (1 − ψn) Ω tends to 0.
If ω in H is such that ψ ⋅ ω is continuous for every ψ in Cc(X), then ω is a square integrable continuous 1-form.
Note that the multiplication operator m(φ) given by m(φ)α = φ ⋅ α for φ in Cc(X) and α in Ω(X) satisfies ||m(φ)α|| ≤ ||φ||∞ ||α||, where ||φ||∞ = sup |φ|. Thus m(φ) defines a bounded linear operator with operator norm ||m(φ)|| ≤ ||φ||∞. It extends continuously to a bounded linear operator on H with the same operator norm. For every open set U with compact closure, there is a continuous function φ of compact support with 0 ≤ φ ≤ 1 with φ ≅ 1 on U. Then φ ⋅ ω is continuous on U so defines a unique continuous form ωU on U. If V is another open set intersecting U, then ωU = ωV on U V: in fact if z lies in U V and ψ in Cc(U V) ⊂ Cc(X) with ψ = 1 near z, then ψ ⋅ ωU = ψ ⋅ ω = ψ ⋅ ωV, so that ωU = ωV near z. Thus the ωU's patch together to give a continuous 1-form ω0 on X. By construction, ψ ⋅ ω = ψ ⋅ ω0 for every ψ in Cc(X). In particular for φ in Cc(X) with , ∫ φ ⋅ ω0 ∧ ∗ = ||φ1/2 ⋅ ω0||2 = ||φ1/2 ⋅ ω||2 ≤ ||ω||2. So ω0 ∧ ∗ is integrable and hence ω0 is square integrable, so an element of H. On the other hand ω can be approximated by ωn in Ω(X). Take ψn in Cc(X) with 0 ≤ ψn ≤ 1 with . Since real-valued continuous functions are closed under lattice operations. it can further be assumed that ∫ ψ ω0 ∧ ∗, and hence ∫ ψn ω0 ∧ ∗, increase to ||ω0||2. But then ||ψn ⋅ ω − ω|| and ||ψn ⋅ ω0 − ω0|| tend to 0. Since , this shows that .
Every square integrable harmonic 1-form ω lies in H0.
This is immediate because ω lies in H and, for f a smooth function of compact support, and .
Every element of H0 is given by a square integrable harmonic 1-form.
Let ω be an element of H0 and for fixed p in X fix a chart U in X containing p which is conformally equivalent by a map f to a disc D ⊂ T2 with f(0) = p. The identification map from Ω(U) onto Ω(D) and hence into Ω1(T2) preserves norms (up to a constant factor). Let K be the closure of Ω(U) in H. Then the above map extends uniquely to an isometry T of K into H0(T2)dx ⊕ H0(T2)dy. Moreover if ψ is in C(U) then . The identification map T is also compatible with d and the Hodge star operator. Let D1 be a smaller concentric disk in T2 and set V = f(D1). Take φ in C(U) with φ ≡ 1 on V. Then (m(φ) ω,dh) = 0 = (m(φ) ω,∗dh) for h in C(V). Hence, if ω1 = m(φ)ω and ω2 = T(ω1), then (ω2, dg) = 0 = (ω2, ∗dg) for g in .
Write ω2 = a dx + b dy with a and b in H0(T2). The conditions above imply (dω1, ∗g) = 0 = (d∗ ω1, ∗g). Replacing ∗g by dω3 with ω3 a smooth 1-form supported in D1, it follows that ∆1 ω2 = 0 on D1. Thus ∆a = 0 = ∆b on D1. Hence by Weyl's lemma, a and b are harmonic on D1. In particular both of them, and hence ω2, are smooth on D1; and dω2 = 0 = d∗ω2 on D1. Transporting these equations back to X, it follows that ω1 is smooth on V and dω1 = 0 = d∗ω1 on V. Since ω1 = m(φ)ω and p was an arbitrary point, this implies in particular that m(ψ)ω is continuous for every ψ in Cc(X). So ω is continuous and square integrable.
But then ω is smooth on V and dω = 0 = d∗ω on V. Again since p was arbitrary, this implies ω is smooth on X and dω = 0 = d∗ω on X, so that ω is a harmonic 1-form on X.
From the formulas for the Dolbeault operators and , it follows that
where both sums are orthogonal. The two subspaces in the second sum correspond to the ±i eigenspaces of the Hodge ∗ operator. Denoting their closures by H3 and H4, it follows that H = H3 ⊕ H4 and that these subspaces are interchanged by complex conjugation. The smooth 1-forms in H1, H2, H3 or H4 have a simple description.
A smooth 1-form in H1 has the form df for f smooth.
A smooth 1-form in H2 has the form ∗df for f smooth.
A smooth 1-form in H3 has the form ∂f for f smooth.
A smooth 1-form in H4 has the form ∂̄f for f smooth.
In fact, in view of the decompositions of H and its invariance under the Hodge star operation, it suffices to prove the first of these assertions. Since H1 is invariant under complex conjugation, it may be assumed that α is a smooth real 1-form in H1. It is therefore a limit in H1 of forms dfn with fn smooth of compact support. The 1-form α must be closed since, for any real-valued f in C(X),
so that dα = 0. To prove that α is exact it suffices to prove that ∫X α ∧ ∗β = 0 for any smooth closed real 1-form β of compact support. But by Green's formula
The above characterisations have an immediate corollary:
A smooth 1-form α in H can be decomposed uniquely as α = da + ∗db = ∂f + ∂g, with a, b, f and g smooth and all the summands square integrable.
Combined with the previous Hodge–Weyl decomposition and the fact that an element of H0 is automatically smooth, this immediately implies:
Theorem (smooth Hodge–Weyl decomposition). If α is a smooth square integrable 1-form then α can be written uniquely as α = ω + da + ∗db with ω harmonic, square integrable and a and b smooth with square integrable differentials.
Holomorphic 1-forms with a double pole
The following result—reinterpreted in the next section in terms of harmonic functions and the Dirichlet principle—is the key tool for proving the uniformization theorem for simply connected, or more generally planar, Riemann surfaces.Theorem. If X is a Riemann surface and P is a point on X with local coordinate z, there is a unique holomorphic differential 1-form ω with a double pole at P, so that the singular part of ω is z−2dz near P, and regular everywhere else, such that ω is square integrable on the complement of a neighbourhood of P and the real part of ω is exact on X \ {P}.
The double pole condition is invariant under holomorphic coordinate change z z + az2 + ⋯. There is an analogous result for poles of order greater than 2 where the singular part of ω has the form z–kdz with k > 2, although this condition is not invariant under holomorphic coordinate change.
To prove uniqueness, note that if ω1 and ω2 are two solutions then their difference ω = ω1 − ω2 is a square integrable holomorphic 1-form which is exact on X \ {P}. Thus near P, with f holomorphic near z = 0. There is a holomorphic function g on X \ {P} such that ω = dg there. But then g must coincide with a primitive of f near z = 0, so that ω = dg everywhere. But then ω lies in H0 ∩ H1 = (0), i.e. ω = 0.
To prove existence, take a bump function 0 ≤ ψ ≤ 1 in C(X) with support in a neighbourhood of P of the form |z| < ε and such that ψ ≡ 1 near P. Set
so that α equals z–2dz near P, vanishes off a neighbourhood of P and is exact on X \ {P}. Let β = α − i∗α, a smooth (0,1) form on X, vanishing near z = 0, since it is a (1,0) form there, and vanishing off a larger neighbourhood of P. By the smooth Hodge−Weyl decomposition, β can be decomposed as β = ω0 + da – i∗da with ω0 a harmonic and square integrable (0,1) form and a smooth with square integrable differential. Now set γ = α – da = ω0 + i∗α − i∗da and ω = Re γ + i∗ Re γ. Then α is exact on X \ {P}; hence so is γ, as well as its real part, which is also the real part of ω. Near P, the 1-form ω differs from z–2dz by a smooth (1,0) form. It remains to prove that ω = 0 on X \ {P}; or equivalently that Re γ is harmonic on X \ {P}. In fact γ is harmonic on X \ {P}; for dγ = dα − d(da) = 0 on X \ {P} because α is exact there; and similarly d∗γ = 0 using the formula γ = ω0 + i∗α − i∗da and the fact that ω0 is harmonic.Corollary of proof. If X is a Riemann surface and P is a point on X with local coordinate z, there is a unique real-valued 1-form δ which is harmonic on X \ {P} such that δ – Re z−2dz is harmonic near z = 0 (the point P) such that δ is square integrable on the complement of a neighbourhood of P. Moreover, if h is any real-valued smooth function on X with dh square integrable and h vanishing near P, then (δ,dh) = 0.
Existence follows by taking δ = Re γ = Re ω above. Since ω = δ + i∗δ, the uniqueness of ω implies the uniqueness of δ. Alternatively if δ1 and δ2 are two solutions, their difference η = δ1 – δ2 has no singularity at P and is harmonic on X \ {P}. It is therefore harmonic in a neighbourhood of P and therefore everywhere. So η lies in H0. But also η is exact on X \ P and hence on the whole of X, so it also lies in H1. But then it must lie in H0 ∩ H1 = (0), so that η = 0. Finally, if N is the closure of a neighbourhood of P disjoint from the support of h and Y = X \ N, then δ|Y lies in H0(Y) and dh lies in the space H1(Y) so that
Dirichlet's principle on a Riemann surface
Theorem. If X is a Riemann surface and P is a point on X with local coordinate z, there is a unique real-valued harmonic function u on X \ {P} such that u(z) – Re z−1 is harmonic near z = 0 (the point P) and such that du is square integrable on the complement of a neighbourhood of P. Moreover, if h is any real-valued smooth function on X with dh square integrable and h vanishing near P, then (du,dh)=0.
In fact this result is immediate from the theorem and corollary in the previous section. The harmonic form δ constructed there is the real part of a holomorphic form ω = dg where g is holomorphic function on X with a simple pole at P with residue -1, i.e. g(z) = –z−1 + a0 + a1z + a2 z2 + ⋯ near z = 0. So u = - Re g gives a solution with the claimed properties since δ = −du and hence (du,dh) = −(δ,dh) = 0.
This result can be interpreted in terms of Dirichlet's principle. Let DR be a parametric disk |z| < R about P (the point z = 0) with R > 1. Let α = −d(ψz−1), where 0 ≤ ψ ≤ 1 is a bump function supported in D = D1, identically 1 near z = 0. Let α1 = −χD(z) Re d(z−1) where χD is the characteristic function of D. Let γ= Re α and γ1 = Re α1. Since χD can be approximated by bump functions in L2, γ1 − γ lies in the real Hilbert space of 1-forms Re H; similarly α1 − α lies in H. Dirichlet's principle states that the distance function
F(ξ) = ||γ1 − γ – ξ||
on Re H1 is minimised by a smooth 1-form ξ0 in Re H1. In fact −du coincides with the minimising 1-form: γ + ξ0 = −du.
This version of Dirichlet's principle is easy to deduce from the previous construction of du. By definition ξ0 is the orthogonal projection of γ1 – γ onto Re H1 for the real inner product Re (η1,η2) on H, regarded as a real inner product space. It coincides with the real part of the orthogonal projection ω1 of α1 – α onto H1 for the complex inner product on H. Since the Hodge star operator is a unitary map on H swapping H1 and H2, ω2 = ∗ω1 is the orthogonal projection of ∗(α1 – α) onto H2. On the other hand, ∗α1 = −i α1, since α is a (1,0) form. Hence
(α1 – α) − i∗(α1 – α) = ω0 + ω1 + ω2,
with ωk in Hk. But the left hand side equals –α + i∗α = −β, with β defined exactly as in the preceding section, so this coincides with the previous construction.
Further discussion of Dirichlet's principle on a Riemann surface can be found in , , , , and .
Historical note. Weyl proved the existence of the harmonic function u by giving a direct proof of Dirichlet's principle. In , he presented his method of orthogonal projection which has been adopted in the presentation above, following , but with the theory of Sobolev spaces on T2 used to prove elliptic regularity without using measure theory. In the expository texts and , both authors avoid invoking results on measure theory: they follow Weyl's original approach for constructing harmonic functions with singularities via Dirichlet's principle. In Weyl's method of orthogonal projection, Lebesgue's theory of integration had been used to realise Hilbert spaces of 1-forms in terms of measurable 1-forms, although the 1-forms to be constructed were smooth or even analytic away from their singularity. In the preface to , referring to the extension of his method of orthogonal projection to higher dimensions by , Weyl writes:
"Influenced by Kodaira's work, I have hesitated a moment as to whether I should not replace the Dirichlet principle by the essentially equivalent "method of orthogonal projection" which is treated in a paper of mine. But for reasons the explication of which would lead too far afield here, I have stuck to the old approach."
In , after giving a brief exposition of the method of orthogonal projection and making reference to Weyl's writings, Kodaira explains:
"I first planned to prove Dirichlet's Principle using the method of orthogonal projection in this book. However, I did not like to have to use the concept of Lebesgue measurability only for the proof of Dirichlet's Principle and therefore I rewrote it in such a way that I did not have to."
The methods of Hilbert spaces, Lp spaces and measure theory appear in the non-classical theory of Riemann surfaces (the study of moduli spaces of Riemann surfaces) through the Beltrami equation and Teichmüller theory.
Holomorphic 1-forms with two single poles
Theorem. Given a Riemann surface X and two distinct points A and B on X, there is a holomorphic 1-form on X with simple poles at the two points with non-zero residues having sum zero such that the 1-form is square integrable on the complement of any open neighbourhoods of the two points.
The proof is similar to the proof of the result on holomorphic 1-forms with a single double pole. The result is first proved when A and B are close and lie in a parametric disk. Indeed, once this is proved, a sum of 1-forms for a chain of sufficiently close points between A and B will provide the required 1-form, since the intermediate singular terms will cancel. To construct the 1-form for points corresponding to a and b in a parametric disk, the previous construction can be used starting with the 1-form
which locally has the form
Poisson equation
Theorem (Poisson equation). If Ω is a smooth 2-form of compact support on a Riemann surface X, then Ω can be written as Ω = ∆f where f is a smooth function with df square integrable if and only if ∫X Ω = 0.
In fact, Ω can be written as Ω = dα with α a smooth 1-form of compact support: indeed, using partitions of unity, this reduces to the case of a smooth 2-form of compact support on a rectangle. Indeed Ω can be written as a finite sum of 2-forms each supported in a parametric rectangle and having integral zero. For each of these 2-forms the result follows from Poincaré's lemma with compact support. Writing α = ω + da + *db, it follows that Ω = d*db = ∆b.
In the case of the simply connected Riemann surfaces C, D and S= C ∪ ∞, the Riemann surfaces are symmetric spaces G / K for the groups G = R2, SL(2,R''') and SU(2). The methods of group representation theory imply the operator ∆ is G-invariant, so that its fundamental solution is given by right convolution by a function on K \ G / K. Thus in these cases Poisson's equation can be solved by an explicit integral formula. It is easy to verify that this explicit solution tends to 0 at ∞, so that in the case of these surfaces there is a solution f'' tending to 0 at ∞. proves this directly for simply connected surfaces and uses it to deduce the uniformization theorem.
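For the plane, the explicit formula alluded to above can be sketched as follows (a standard statement given here for orientation, phrased with the analyst's Laplacian ∇²f = fxx + fyy; under the convention Δf = −d∗df used in this article one has Δf = −(∇²f) dx ∧ dy, so a sign adjustment is needed):

\[
f(z) \;=\; \frac{1}{2\pi} \int_{\mathbf{C}} \log|z - w|\, g(w)\, du\, dv, \qquad w = u + iv,
\]

solves ∇²f = g for g smooth with compact support; when ∫ g du dv = 0 this solution also tends to 0 at infinity.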
See also
Differentials of the first kind
Abelian differential
Dolbeault complex
Notes
References
2016 reprint
, 1989 reprint of 1941 edition with foreword by Michael Atiyah
, reprint of 1941 edition incorporating corrections supplied by Hermann Weyl
, Part III, Chapter 8: "Die Verallgemeinerung des Riemannschen Abbildungssatzes. Das Dirichletsche Prinzlp," by Richard Courant
Harmonic functions
Riemann surfaces
Differential forms | Differential forms on a Riemann surface | [
"Engineering"
] | 10,898 | [
"Tensors",
"Differential forms"
] |
51,017,707 | https://en.wikipedia.org/wiki/Ocean%20acidification%20in%20the%20Great%20Barrier%20Reef | Ocean acidification threatens the Great Barrier Reef by reducing the viability and strength of coral reefs. The Great Barrier Reef, considered one of the seven natural wonders of the world and a biodiversity hotspot, is located in Australia. Similar to other coral reefs, it is experiencing degradation due to ocean acidification. Ocean acidification results from a rise in atmospheric carbon dioxide, which is taken up by the ocean. This process can increase sea surface temperature, decrease aragonite, and lower the pH of the ocean. The more humanity consumes fossil fuels, the more the ocean absorbs released CO₂, furthering ocean acidification.
This decreased health of coral reefs, particularly the Great Barrier Reef, can result in reduced biodiversity. Organisms can become stressed by ocean acidification, and the disappearance of healthy coral reefs such as the Great Barrier Reef means a loss of habitat for several taxa. Ocean acidification also makes it harder for organisms to reproduce, affecting the ecosystem of the Great Barrier Reef.
Species of fish can be affected immensely by ocean acidification, which disrupts the overall ecosystem. One possible countermeasure, alkalization injection, adds an alkaline solution to seawater to raise its pH. Coral reefs are also very important to society and the economy.
Background
Atmospheric carbon dioxide has risen from 280 to 409 ppm since the industrial revolution. Around 30% of the carbon dioxide released by humans has been absorbed by the ocean during that era. This increase in carbon dioxide has led to a 0.1 unit decrease in pH, and pH could fall by 0.5 units by 2100. When carbon dioxide dissolves in seawater, it forms carbonic acid; the molecules dissociate into hydrogen ions, bicarbonate, and carbonate, and the added hydrogen ions lower the pH of the ocean. Sea surface temperature, ocean acidity, and dissolved inorganic carbon are also positively correlated with atmospheric carbon dioxide. Ocean acidification can cause hypercapnia and increase stress in marine organisms, thereby leading to decreased biodiversity. Coral reefs themselves can also be negatively affected by ocean acidification, as calcification rates decrease and acidity increases.
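Because pH is a logarithmic scale, even small pH drops correspond to large relative increases in hydrogen-ion concentration. A minimal sketch of that arithmetic (illustrative only; the 0.1 and 0.5 unit drops are the figures quoted above):

```python
# pH = -log10([H+]), so a pH drop of d multiplies [H+] by 10**d.
for drop in (0.1, 0.5):
    factor = 10 ** drop
    print(f"pH drop of {drop}: [H+] increases by about {100 * (factor - 1):.0f}%")
# pH drop of 0.1: [H+] increases by about 26%
# pH drop of 0.5: [H+] increases by about 216%
```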
Aragonite is impacted by the process of ocean acidification because it is a form of calcium carbonate. It is essential to coral viability and health because it is found in coral skeletons and is more readily soluble than calcite. Increasing carbon dioxide levels can reduce coral growth rates by 9 to 56% due to the lack of available carbonate ions needed for the calcification process. Other calcifying organisms, such as bivalves and gastropods, experience negative effects due to ocean acidification as well. The excess hydrogen ions in the acidic water dissolve their shells, limiting their shelter and reproduction rates.
As a biodiversity hotspot, the many taxa of the Great Barrier Reef are threatened by ocean acidification. Rare and endemic species are in greater danger due to ocean acidification, because they rely upon the Great Barrier Reef more extensively. Additionally, the risk of coral reefs collapsing due to acidification poses a threat to biodiversity. The stress of ocean acidification could also negatively affect other biological processes, such as reducing photosynthesis or reproduction and allowing organisms to become vulnerable to disease.
The Great Barrier Reef is susceptible to poor water quality and the impacts of ocean acidification. Thirty-five major rivers discharge nutrient and sediment loads into it, at about five to eight times the levels prior to European settlement. These discharges lead to elevated seawater nutrients and turbidity, which further promote the impacts of ocean acidification.
Coral health
Calcification and aragonite
Coral is a calcifying organism, putting it at high risk for decay and slow growth rates as ocean acidification increases. Aragonite assists the coral as they build their skeletons because it is another form of calcium carbonate (CaCO3) that is more soluble. When the pH of the water decreases, aragonite decreases as well, leading to the loss of calcium carbonate uptake in corals. Levels of aragonite have decreased by 16% since industrialization and could be lower in some portions of the Great Barrier Reef due to the current, which allows northern corals to take up more aragonite than southern corals. Aragonite is predicted to reduce by 0.1 by 2100 which could greatly hinder coral growth. Since 1990, calcification rates of Porites, a common large reef-building coral in the Great Barrier Reef, have decreased by 14.2% annually. Aragonite levels across the Great Barrier Reef itself are not equal; due to currents and circulation, some portions of the Great Barrier Reef can have half as much aragonite as others. Levels of aragonite are also affected by calcification and production, which can vary from reef to reef. If atmospheric carbon dioxide reaches 560 ppm, most ocean surface waters will be adversely undersaturated with respect to aragonite, and the pH will have reduced by about 0.24 units, from almost 8.2 today to just over 7.9. At this point (sometime in the third quarter of this century, at current rates of carbon dioxide increase), only a few parts of the Pacific will have levels of aragonite saturation adequate for coral growth. Additionally, if atmospheric carbon dioxide reaches 800 ppm, the ocean surface water pH decrease will be 0.4 units, and the total dissolved carbonate ion concentration will have decreased by at least 60%. Recent estimates state that with business-as-usual emission levels, the atmospheric carbon dioxide could reach 800 ppm by the year 2100. At this point, it is almost certain that all the reefs in the world will be in erosional states. Increasing the pH and replicating pre-industrialization ocean chemistry conditions in the Great Barrier Reef, however, led to an increase in coral growth rates of 7%.
Temperature
Ocean acidification can also lead to increased sea surface temperature. An increase of about 1 or 2 °C can cause the collapse of the relationship between coral and zooxanthellae, possibly leading to bleaching. The average sea surface temperature in the Great Barrier Reef is predicted to increase between 1 and 3 °C by 2100. Bleaching occurs when the zooxanthellae and coralline algae leave the coral skeleton behind due to stresses in the water. This causes the coral to lose its colour because the previous organisms sustained on the coral skeleton vacate, leaving a white skeleton. The bleached coral can no longer complete photosynthesis, and so it slowly dies. The acidity of the water will slowly dissolve the leftover coral skeletons, essentially damaging the structural integrity of the coral reef. There are many organisms that also rely on the algae and zooxanthellae for their main source of food. Therefore, organisms in the bleached coral reef are forced to leave in search of new food sources. Since zooxanthellae and algae grow very slowly, restoring the coral reef to its original form will take a very long time. This breakdown of the relationship between the coral and the zooxanthellae occurs when Photosystem II is damaged, either due to a reaction with the D1 protein or a lack of carbon dioxide fixation; these result in a lack of photosynthesis and can lead to bleaching.
Reproduction
Ocean acidification threatens coral reproduction throughout almost all aspects of the process. Gametogenesis may be indirectly affected by coral bleaching. Additionally, the stress that acidification puts on coral can potentially harm the viability of the sperm released. Larvae can also be affected by this process; metabolism and settlement cues could be altered, changing the size of the population or viability of reproduction. Other species of calcifying larvae have shown reduced growth rates under ocean acidification scenarios. Biofilm, a bioindicator for oceanic conditions, underwent a reduced growth rate and altered composition in acidification, possibly affecting larval settlement on the biofilm itself.
Health Reports of The Great Barrier Reef
Over the years there have been several mass bleaching events that have affected the Great Barrier Reef. In particular, 2016 and 2017 saw the reef sustain two back-to-back bleaching periods. This prolonged episode accounted for an estimated loss of half of the coral life in the Great Barrier Reef. The parts of the reef that did survive were damaged, leading to an overall period of low coral reproduction. This was followed by another bleaching event in 2020, the third in five years. Studies found, however, that the effects of the 2020 bleaching were less severe, as it affected a comparatively small number of reefs, most of which experienced low to moderate levels of bleaching.
In early 2022, a study showed that 91% of the coral in the Great Barrier Reef had experienced some degree of bleaching. Reefs with higher levels of bleaching were often accompanied by higher overall air temperatures. These elevated temperatures lasted throughout the Australian summer, contributing to prolonged coral bleaching periods. Prolonged periods raise concern because corals that cannot reproduce may die out, leading to further loss of the reefs. However, reports from June 2022 stated that the Great Barrier Reef was recovering, with the proportion of reefs affected by bleaching falling to 16% along different areas of the Australian coast. As ocean temperatures continue to drop, bleaching levels can be expected to fall and coral cover to increase. Although coral bleaching has declined, predators of the reef such as the crown-of-thorns starfish are still impacting coral growth and development.
Biodiversity
Biodiversity refers to the variety of life forms, including species diversity, genetic diversity, and ecosystem diversity. The Great Barrier Reef is a biodiversity hotspot, home to over 9,000 known species. However, since the 1950s half of the living corals on the Great Barrier Reef have died, and coral reef-associated biodiversity has declined by sixty-three percent. Only an estimated twenty-five percent of these species have been formally described, leaving a substantial proportion yet to be scientifically classified. Species that have yet to be identified are almost certainly being lost in the wake of a shifting climate.
Reduced levels of aragonite, as a result of ocean acidification, continue to be one of the Great Barrier Reef's biggest threats. Healthy reefs support thousands of different corals, fish, and marine mammals, but bleached reefs lose their ability to support and sustain life. Coral structural formations create complex habitats critical for providing shelter, breeding grounds, and food sources for numerous marine organisms, including fish, invertebrates, and microorganisms. In turn, corals depend on reef fish and other organisms to clean and regulate algae levels, provide nutrients for coral growth, and keep pests in check. Coral reefs and the species they host have dynamic symbiotic relationships.
Ocean acidification can also indirectly affect organisms, causing reduced growth rates, decreased reproductive capacity, increased susceptibility to disease, and elevated mortality rates. Bleaching events trigger homogenization of coral composition and losses of structural complexity, which can be detrimental to reef fish and other organisms that depend on branching coral for breeding and shelter. This decrease in ecosystem diversity has direct effects on species diversity.
Vulnerable Species
As coral reefs decay, their residents will have to adapt or find new habitats on which to rely. Ocean acidification threatens the fundamental chemical balance of our oceans, creating conditions that eat away at essential minerals like calcium carbonate. A lack of aragonite and decreasing pH levels in ocean water makes it harder for calcifying organisms such as oysters, clams, lobsters, shrimp and coral reefs to build their shells and exoskeletons. Organisms have been found to be more sensitive to the effects of ocean acidification in early, larval or planktonic stages. Larval health and settlement of both calcifying and non-calcifying organisms can be harmed by ocean acidification.
A study published in the journal Global Change Biology developed a model for predicting the vulnerability of sharks and stingrays to climate change in the Great Barrier Reef. It found that 30 of the 133 species examined were moderately or highly vulnerable to climate change, with the most vulnerable species being the freshwater whipray, porcupine ray, speartooth shark, and sawfish. Increasing temperature is also affecting the behavior and fitness of many reef species, such as the common coral trout, a fish that is very important in sustaining the health of coral reefs. Not only can ocean acidification affect habitat and development, but it can also affect how organisms perceive predators and conspecifics. Studies on the effects of ocean acidification have not been performed on long enough time scales to determine whether organisms can adapt to these conditions; however, ocean acidification is predicted to occur at a rate that evolution cannot match.
Some fish can compensate for disturbances under high CO2 conditions, but they still show unexpected sensitivity to current and projected future CO2 levels. This sensitivity affects many physiological and behavioral processes, including the growth of otoliths, the calcium carbonate structures in fish ears that aid in balance. It also affects brain function, the amount of energy a fish uses, and the amount of nutrients a fish can absorb. The consequences of disrupted neurotransmitter systems such as GABA are still being studied, but they could affect fish in the near future. Sensitivity to ocean acidification varies between fish species, with sensory perception being the most commonly affected function across species.
Crown of Thorns Sea Star
A naturally occurring predator of corals in the Great Barrier Reef is the crown-of-thorns sea star (Acanthaster planci). Population outbreaks of the crown-of-thorns sea star are one of the major causes of coral decline across the Great Barrier Reef, as an adult crown-of-thorns starfish is capable of consuming up to 10 m² of reef-building coral a year. However, not all species of coral are equally impacted, as the sea star has been observed to favor branching corals of the genus Acropora, followed by sub-branching species. This results in a sequential and ordered eradication of coral reef species.
Crown of Thorns Sea Star outbreaks on the Great Barrier Reef have become more frequent in recent years, which scientists predict could be linked to human activities. Any increase in nutrients, possibly from river run-off, can positively affect starfish populations, leading to detrimental outbreaks. As pressures from climate change increase, the time between reef disturbances is becoming shorter, leaving less time for reef recovery.
Possible Solution
A simulation from 2015 demonstrated a potential mitigation approach that involves artificial ocean alkalization. In this method, ships inject an alkaline solution, which increases the alkalinity of the water by about 4 moles, along the coast; this raises the pH of the water and temporarily offsets ocean acidification. In the simulation, the results showed a significant increase in aragonite saturation state across the Great Barrier Reef, and the use of alkalization would offset around 4 years of ocean acidification. The results also showed an increase in aragonite saturation state in about 25% of the reefs, indicating that alkalization can help reduce ocean acidification.
Importance of Coral Reefs
As major hotspots of biodiversity, coral reefs are very important to marine ecosystems and to human livelihoods. Countries around the world depend on reefs as a source of food and income, especially communities that inhabit small islands. With over a 60% decrease in available fishing around coral reefs, many countries will be forced to adapt. Coral reefs are also important for a country's economy, as reefs support various forms of tourism that can generate substantial revenue. Tourism can also contribute to individual well-being, as the owners of these businesses profit from increased visitation and usage. Coral reefs additionally provide a form of coastal infrastructure, acting as a barrier that protects coastal communities from major ocean catastrophes such as tsunamis and coastal storms.
See also
Ocean acidification in the Arctic Ocean
References
Aquatic ecology
Biological oceanography
Carbon
Chemical oceanography
Effects of climate change
Environmental impact by effect
Geochemistry
Great Barrier Reef
Oceanography | Ocean acidification in the Great Barrier Reef | [
"Physics",
"Chemistry",
"Biology",
"Environmental_science"
] | 3,300 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Chemical oceanography",
"Ecosystems",
"nan",
"Aquatic ecology"
] |
58,685,207 | https://en.wikipedia.org/wiki/Kron%20reduction | In power engineering, Kron reduction is a method used to reduce a network by eliminating chosen nodes from its admittance matrix without the need to repeat elimination steps as in Gaussian elimination.
It is named after American electrical engineer Gabriel Kron.
Description
Kron reduction is a useful tool to eliminate unused nodes in a Y-parameter matrix. For example, three linear elements linked in series with a port at each end may be easily modeled as a 4X4 nodal admittance matrix of Y-parameters, but only the two port nodes normally need to be considered for modeling and simulation. Kron reduction may be used to eliminate the internal nodes, and thereby reducing the 4th order Y-parameter matrix to a 2nd order Y-parameter matrix. The 2nd order Y-parameter matrix is then more easily converted to a Z-parameter matrix or S-parameter matrix when needed.
Matrix operations
Consider a general Y-parameter matrix that may be created from a combination of linear elements constructed such that two internal nodes exist.
While it is possible to use the 4×4 matrix in simulations or to construct a 4×4 S-parameter matrix, it may be simpler to reduce the Y-parameter matrix to a 2×2 by eliminating the two internal nodes through Kron reduction, and then simulating with the 2×2 matrix and/or converting it to a 2×2 S-parameter or Z-parameter matrix.
The process for executing a Kron reduction is as follows:
Select the Kth row/column used to model the undesired internal nodes to be eliminated. Apply the below formula to all other matrix entries that do not reside on the Kth row and column. Then simply remove the Kth row and column of the matrix, which reduces the size of the matrix by one.
Kron reduction for the Kth row/column of an N×N matrix: Y′_ij = Y_ij − (Y_iK·Y_Kj)/Y_KK, for all i, j ≠ K.
Linear elements that are also passive always form a symmetric Y-parameter matrix, that is, Y_ij = Y_ji in all cases. The number of computations of a Kron reduction may be reduced by taking advantage of this symmetry, as shown in the equation below.
Kron reduction for symmetric N×N matrices (computing only one triangle and mirroring the result): Y′_ij = Y_ij − (Y_iK·Y_jK)/Y_KK, for all i ≤ j with i, j ≠ K.
Once all the matrix entries have been modified by the Kron reduction equation, the Kth row/column may be eliminated, and the matrix order is reduced by one. Repeat for all internal nodes that are to be eliminated.
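As an illustration, the following is a minimal NumPy sketch of this procedure; the function name, the vectorized outer-product formulation, and the example admittance values are illustrative choices rather than part of any standard implementation. The example reproduces the two-equal-admittances-in-series case discussed in the next section.

```python
import numpy as np

def kron_reduce(Y, k):
    """Eliminate node k from an n x n admittance matrix Y (real or complex)
    by Kron reduction: every remaining entry is updated as
    Y_ij_new = Y_ij - Y_ik * Y_kj / Y_kk, then row/column k is dropped."""
    Y = np.asarray(Y)
    # Outer product applies Y_ik * Y_kj / Y_kk to all entries at once.
    reduced = Y - np.outer(Y[:, k], Y[k, :]) / Y[k, k]
    keep = [i for i in range(Y.shape[0]) if i != k]
    return reduced[np.ix_(keep, keep)]

# Two equal admittances Y = 1 S in series: nodes 0 and 2 are ports,
# node 1 is internal and is eliminated.
Y = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
print(kron_reduce(Y, 1))   # [[ 0.5 -0.5]
                           #  [-0.5  0.5]] -> net series admittance Y/2
```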
Simplified theory and derivation
The concept behind Kron reduction is quite simple. Y-parameters are measured using nodes shorted to ground, but unused nodes, that is nodes without ports, are not necessarily grounded, and their state is not directly known to the outside. Therefore, the Y-parameter matrix of the full network does not adequately describe the Y-parameter of the network being modeled, and contains extraneous entries if some nodes do not have ports.
Consider the case of two lumped elements of equal value in series, two resistors of equal resistance for example. If both resistors have an admittance of Y, then the series network has an admittance of Y/2. The full admittance matrix that accounts for all three nodes in the network would look like the matrix below, using standard Y-parameter matrix construction techniques (node 2 is the internal node between the two resistors):

[  Y   −Y    0 ]
[ −Y   2Y   −Y ]
[  0   −Y    Y ]
However, it is easily observed that the two resistors in series, each with an assigned admittance of Y, have a net admittance of Y/2, and since resistors do not leak current to ground, the network's YR12 is equal and opposite to YR11, that is, YR12 = −YR11. The 2-port network without the middle node can therefore be created by inspection and is shown below:

[  Y/2  −Y/2 ]
[ −Y/2   Y/2 ]
Since row and column 2 of the matrix is to be eliminated, we can rewrite without row 2 and column 2. We will call this rewritten matrix .
Now we have a basis to create the translation equation by finding an equation that translates each entry in to the corresponding entry in :
For each of the four entries, it can be observed that subtracting from the left-of-arrow value successfully makes the translation. Since is identical to , each case of meets the condition shown in the general translation equations.
The same process may be used for elements of arbitrary admittance ( etc.) and networks of arbitrary size, but the algebra becomes more complex. The trick is to deduce and/or calculate an expression that translates the original matrix entries to the reduced matrix entries.
See also
Schur complement
Power-flow study
References
Power engineering
Electric power | Kron reduction | [
"Physics",
"Engineering"
] | 889 | [
"Physical quantities",
"Energy engineering",
"Power (physics)",
"Electric power",
"Power engineering",
"Electrical engineering"
] |
58,686,423 | https://en.wikipedia.org/wiki/Introduction%20to%20electromagnetism | Electromagnetism is one of the fundamental forces of nature. Early on, electricity and magnetism were studied separately and regarded as separate phenomena. Hans Christian Ørsted discovered that the two were related – electric currents give rise to magnetism. Michael Faraday discovered the converse, that magnetism could induce electric currents, and James Clerk Maxwell put the whole thing together in a unified theory of electromagnetism. Maxwell's equations further indicated that electromagnetic waves existed, and the experiments of Heinrich Hertz confirmed this, making radio possible. Maxwell also postulated, correctly, that light was a form of electromagnetic wave, thus making all of optics a branch of electromagnetism. Radio waves differ from light only in that the wavelength of the former is much longer than the latter. Albert Einstein showed that the magnetic field arises through the relativistic motion of the electric field and thus magnetism is merely a side effect of electricity. The modern theoretical treatment of electromagnetism is as a quantum field in quantum electrodynamics.
In many situations of interest to electrical engineering, it is not necessary to apply quantum theory to get correct results. Classical physics is still an accurate approximation in most situations involving macroscopic objects. With few exceptions, quantum theory is only necessary at the atomic scale and a simpler classical treatment can be applied. Further simplifications of treatment are possible in limited situations. Electrostatics deals only with stationary electric charges so magnetic fields do not arise and are not considered. Permanent magnets can be described without reference to electricity or electromagnetism. Circuit theory deals with electrical networks where the fields are largely confined around current carrying conductors. In such circuits, even Maxwell's equations can be dispensed with and simpler formulations used. On the other hand, a quantum treatment of electromagnetism is important in chemistry. Chemical reactions and chemical bonding are the result of quantum mechanical interactions of electrons around atoms. Quantum considerations are also necessary to explain the behaviour of many electronic devices, for instance the tunnel diode.
Electric charge
Electromagnetism is one of the fundamental forces of nature alongside gravity, the strong force and the weak force. Whereas gravity acts on all things that have mass, electromagnetism acts on all things that have electric charge. Furthermore, as there is the conservation of mass according to which mass cannot be created or destroyed, there is also the conservation of charge which means that the charge in a closed system (where no charges are leaving or entering) must remain constant. The fundamental law that describes the gravitational force on a massive object in classical physics is Newton's law of gravity. Analogously, Coulomb's law is the fundamental law that describes the force that charged objects exert on one another. It is given by the formula
F = k_e·q₁q₂/r², where F is the force, k_e is the Coulomb constant, q₁ and q₂ are the magnitudes of the two charges, and r² is the square of the distance between them. It describes the fact that like charges repel one another whereas opposite charges attract one another, and that the stronger the charges of the particles, the stronger the force they exert on one another. The law is also an inverse-square law, which means that as the distance between two particles is doubled, the force on them is reduced by a factor of four.
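As a simple illustration of the formula, here is a short Python sketch; the constant value, the function name, and the example charges and distances are chosen for demonstration only.

```python
# Coulomb force between two point charges (SI units).
K_E = 8.9875517923e9  # Coulomb constant, N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Magnitude and sign of the electrostatic force between charges q1 and q2
    separated by distance r; positive for repulsion, negative for attraction."""
    return K_E * q1 * q2 / r**2

e = -1.602176634e-19  # electron charge in coulombs
# Two electrons 1 nm apart repel each other:
print(coulomb_force(e, e, 1e-9))  # ~2.3e-10 N, positive -> repulsive
# Doubling the distance reduces the force by a factor of four:
print(coulomb_force(e, e, 2e-9) / coulomb_force(e, e, 1e-9))  # 0.25
```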
Electric and magnetic fields
In physics, fields are entities that interact with matter and can be described mathematically by assigning a value to each point in space and time. Vector fields are fields which are assigned both a numerical value and a direction at each point in space and time. Electric charges produce a vector field called the electric field. The numerical value of the electric field, also called the electric field strength, determines the strength of the electric force that a charged particle will feel in the field and the direction of the field determines which direction the force will be in. By convention, the direction of the electric field is the same as the direction of the force on positive charges and opposite to the direction of the force on negative charges. Because positive charges are repelled by other positive charges and are attracted to negative charges, this means the electric fields point away from positive charges and towards negative charges. These properties of the electric field are encapsulated in the equation for the electric force on a charge written in terms of the electric field:
F = qE, where F is the force on a charge q in an electric field E.
As well as producing an electric field, charged particles will produce a magnetic field when they are in a state of motion that will be felt by other charges that are in motion (as well as permanent magnets). The direction of the force on a moving charge from a magnetic field is perpendicular to both the direction of motion and the direction of the magnetic field lines and can be found using the right-hand rule. The strength of the force is given by the equation
F = qvB·sin θ, where F is the force on a charge q with speed v in a magnetic field B which is pointing in a direction at angle θ from the direction of motion of the charge.
The combination of the electric and magnetic forces on a charged particle is called the Lorentz force. Classical electromagnetism is fully described by the Lorentz force alongside a set of equations called Maxwell's equations. The first of these equations is known as Gauss's law. It describes the electric field produced by charged particles and by charge distributions. According to Gauss's law, the flux (or flow) of electric field through any closed surface is proportional to the amount of charge that is enclosed by that surface. This means that the greater the charge, the greater the electric field that is produced. It also has other important implications. For example, this law means that if there is no charge enclosed by the surface, then either there is no electric field at all or, if there is a charge near to but outside of the closed surface, the flow of electric field into the surface must exactly cancel with the flow out of the surface. The second of Maxwell's equations is known as Gauss's law for magnetism and, similarly to the first Gauss's law, it describes flux, but instead of electric flux, it describes magnetic flux. According to Gauss's law for magnetism, the flow of magnetic field through a closed surface is always zero. This means that if there is a magnetic field, the flow into the closed surface will always cancel out with the flow out of the closed surface. This law has also been called "no magnetic monopoles" because it means that any magnetic flux flowing out of a closed surface must flow back into it, meaning that positive and negative magnetic poles must come together as a magnetic dipole and can never be separated into magnetic monopoles. This is in contrast to electric charges which can exist as separate positive and negative charges.
The third of Maxwell's equations is called the Ampère–Maxwell law. It states that a magnetic field can be generated by an electric current. The direction of the magnetic field is given by Ampère's right-hand grip rule. If the wire is straight, then the magnetic field is curled around it like the gripped fingers in the right-hand rule. If the wire is wrapped into coils, then the magnetic field inside the coils points in a straight line like the outstretched thumb in the right-hand grip rule. When electric currents are used to produce a magnet in this way, it is called an electromagnet. Electromagnets often use a wire curled up into solenoid around an iron core which strengthens the magnetic field produced because the iron core becomes magnetised. Maxwell's extension to the law states that a time-varying electric field can also generate a magnetic field. Similarly, Faraday's law of induction states that a magnetic field can produce an electric current. For example, a magnet pushed in and out of a coil of wires can produce an electric current in the coils which is proportional to the strength of the magnet as well as the number of coils and the speed at which the magnet is inserted and extracted from the coils. This principle is essential for transformers which are used to transform currents from high voltage to low voltage, and vice versa. They are needed to convert high voltage mains electricity into low voltage electricity which can be safely used in homes. Maxwell's formulation of the law is given in the Maxwell–Faraday equation—the fourth and final of Maxwell's equations—which states that a time-varying magnetic field produces an electric field.
Together, Maxwell's equations provide a single uniform theory of the electric and magnetic fields and Maxwell's work in creating this theory has been called "the second great unification in physics" after the first great unification of Newton's law of universal gravitation. The solution to Maxwell's equations in free space (where there are no charges or currents) produces wave equations corresponding to electromagnetic waves (with both electric and magnetic components) travelling at the speed of light. The observation that these wave solutions had a wave speed exactly equal to the speed of light led Maxwell to hypothesise that light is a form of electromagnetic radiation and to posit that other electromagnetic radiation could exist with different wavelengths. The existence of electromagnetic radiation was proved by Heinrich Hertz in a series of experiments ranging from 1886 to 1889 in which he discovered the existence of radio waves. The full electromagnetic spectrum (in order of increasing frequency) consists of radio waves, microwaves, infrared radiation, visible light, ultraviolet light, X-rays and gamma rays.
A further unification of electromagnetism came with Einstein's special theory of relativity. According to special relativity, observers moving at different speeds relative to one another occupy different observational frames of reference. If one observer is in motion relative to another observer then they experience length contraction where unmoving objects appear closer together to the observer in motion than to the observer at rest. Therefore, if an electron is moving at the same speed as the current in a neutral wire, then they experience the flowing electrons in the wire as standing still relative to it and the positive charges as contracted together. In the lab frame, the electron is moving and so feels a magnetic force from the current in the wire but because the wire is neutral it feels no electric force. But in the electron's rest frame, the positive charges seem closer together compared to the flowing electrons and so the wire seems positively charged. Therefore, in the electron's rest frame it feels no magnetic force (because it is not moving in its own frame) but it does feel an electric force due to the positively charged wire. This result from relativity proves that magnetic fields are just electric fields in a different reference frame (and vice versa) and so the two are different manifestations of the same underlying electromagnetic field.
Conductors, insulators and circuits
Conductors
A conductor is a material that allows electrons to flow easily. The most effective conductors are usually metals because they can be described fairly accurately by the free electron model in which electrons delocalize from the atomic nuclei, leaving positive ions surrounded by a cloud of free electrons. Examples of good conductors include copper, aluminum, and silver. Wires in electronics are often made of copper.
The main properties of conductors are:
The electric field is zero inside a perfect conductor. Because charges are free to move in a conductor, when they are disturbed by an external electric field they rearrange themselves such that the field that their configuration produces exactly cancels the external electric field inside the conductor.
The electric potential is the same everywhere inside the conductor and is constant across the surface of the conductor. This follows from the first statement because the field is zero everywhere inside the conductor and therefore the potential is constant within the conductor too.
The electric field is perpendicular to the surface of a conductor. If this were not the case, the field would have a nonzero component on the surface of the conductor, which would cause the charges in the conductor to move around until that component of the field is zero.
The net electric flux through a surface is proportional to the charge enclosed by the surface. This is a restatement of Gauss' law.
In some materials, the electrons are bound to the atomic nuclei and so are not free to move around, but the energy required to set them free is low. In these materials, called semiconductors, the conductivity is low at low temperatures, but as the temperature is increased the electrons gain more thermal energy and the conductivity increases. Silicon is an example of a semiconductor that can be used to create solar cells, which become more conductive the more energy they receive from photons from the sun.
Superconductors are materials that exhibit little to no resistance to the flow of electrons when cooled below a certain critical temperature. Superconductivity can only be explained by the quantum mechanical Pauli exclusion principle which states that no two fermions (an electron is a type of fermion) can occupy exactly the same quantum state. In superconductors, below a certain temperature the electrons form boson bound pairs which do not follow this principle and this means that all the electrons can fall to the same energy level and move together uniformly in a current.
Insulators
Insulators are materials that are highly resistive to the flow of electrons and so are often used to cover conducting wires for safety. In insulators, electrons are tightly bound to atomic nuclei and the energy needed to free them is very high, so they are not free to move and resist induced movement by an external electric field. However, some insulators, called dielectrics, can be polarised under the influence of an external electric field so that the charges are minutely displaced, forming dipoles that create a positive and negative side. Dielectrics are used in capacitors to allow them to store more electric potential energy in the electric field between the capacitor plates.
Capacitors
A capacitor is an electronic component that stores electrical potential energy in an electric field between two oppositely charged conducting plates. If one of the conducting plates has a charge density of +Q/A and the other has a charge of -Q/A where A is the area of the plates, then there will be an electric field between them. The potential difference between two parallel plates V can be derived mathematically as
V = Qd/(ε₀A), where d is the plate separation and ε₀ is the permittivity of free space. The ability of the capacitor to store electrical potential energy is measured by the capacitance, which is defined as C = Q/V, and for a parallel plate capacitor this is C = ε₀A/d.
If a dielectric is placed between the plates then the permittivity of free space is multiplied by the relative permittivity of the dielectric and the capacitance increases. The maximum energy that can be stored by a capacitor is proportional to the capacitance and the square of the potential difference between the plates, U = (1/2)CV².
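The following Python sketch illustrates these parallel-plate relations; the function names and the example plate dimensions, dielectric constant, and voltage are illustrative assumptions.

```python
EPS_0 = 8.8541878128e-12  # permittivity of free space, F/m

def parallel_plate_capacitance(area, separation, relative_permittivity=1.0):
    """C = eps_r * eps_0 * A / d for an ideal parallel-plate capacitor."""
    return relative_permittivity * EPS_0 * area / separation

def stored_energy(capacitance, voltage):
    """Energy stored in the electric field, U = 1/2 * C * V^2."""
    return 0.5 * capacitance * voltage**2

# 1 cm x 1 cm plates, 0.1 mm apart, with a dielectric of relative permittivity 4:
C = parallel_plate_capacitance(area=1e-4, separation=1e-4, relative_permittivity=4)
print(C)                     # ~3.5e-11 F (about 35 pF)
print(stored_energy(C, 12))  # energy stored at 12 V, ~2.6e-9 J
```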
Inductors
An inductor is an electronic component that stores energy in a magnetic field inside a coil of wire. A current-carrying coil of wire induces a magnetic field according to Ampère's circuital law. The greater the current I, the greater the energy stored in the magnetic field and the lower the inductance, which is defined as L = Φ_B/I, where Φ_B is the magnetic flux produced by the coil of wire. The inductance is a measure of the circuit's resistance to a change in current, and so inductors with high inductances can also be used to oppose alternating current.
Other circuit components
Circuit laws
Circuit theory deals with electrical networks where the fields are largely confined around current carrying conductors. In such circuits, simple circuit laws can be used instead of deriving all the behaviour of the circuits directly from electromagnetic laws. Ohm's law states the relationship between the current I and the voltage V of a circuit by introducing the quantity known as resistance R
Ohm's law: V = IR
Power is defined as P = IV, so Ohm's law can be used to express the power of the circuit in terms of other quantities: P = I²R = V²/R.
Kirchhoff's junction rule states that the current going into a junction (or node) must equal the current that leaves the node. This comes from charge conservation, as current is defined as the flow of charge over time. If a current splits as it exits a junction, the sum of the resultant split currents is equal to the incoming current.
Kirchhoff's loop rule states that the sum of the voltage in a closed loop around a circuit equals zero. This comes from the fact that the electric field is conservative which means that no matter the path taken, the potential at a point does not change when you get back there.
Rules can also tell us how to add up quantities such as the current and voltage in series and parallel circuits.
For series circuits, the current remains the same for each component and the voltages and resistances add up: I = I₁ = I₂ = …, V_total = V₁ + V₂ + …, R_total = R₁ + R₂ + …
For parallel circuits, the voltage remains the same for each component and the currents and resistances are related as shown: V = V₁ = V₂ = …, I_total = I₁ + I₂ + …, 1/R_total = 1/R₁ + 1/R₂ + …
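A small Python sketch of these combination rules together with Ohm's law follows; the function names and the example component values are arbitrary illustrations.

```python
def series_resistance(resistances):
    """Equivalent resistance of resistors in series: R = R1 + R2 + ..."""
    return sum(resistances)

def parallel_resistance(resistances):
    """Equivalent resistance of resistors in parallel: 1/R = 1/R1 + 1/R2 + ..."""
    return 1.0 / sum(1.0 / r for r in resistances)

def ohms_law_current(voltage, resistance):
    """Ohm's law rearranged for the current: I = V / R."""
    return voltage / resistance

# A 9 V battery driving a 100 ohm resistor in series with two 200 ohm
# resistors wired in parallel:
r_total = series_resistance([100.0, parallel_resistance([200.0, 200.0])])
print(r_total)                         # 200.0 ohms
print(ohms_law_current(9.0, r_total))  # 0.045 A (45 mA)
```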
See also
List of textbooks on electromagnetism
References
Electromagnetism
electromagnetism | Introduction to electromagnetism | [
"Physics"
] | 3,470 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |
57,218,361 | https://en.wikipedia.org/wiki/Divisor%20sum%20identities | The purpose of this page is to catalog new, interesting, and useful identities related to number-theoretic divisor sums, i.e., sums of an arithmetic function f over the divisors of a natural number n, or equivalently the Dirichlet convolution of an arithmetic function f with one: g(n) := Σ_{d|n} f(d) = (f ∗ 1)(n).
These identities include applications to sums of an arithmetic function over just the proper prime divisors of .
We also define periodic variants of these divisor sums with respect to the greatest common divisor function in the form of
Well-known inversion relations that allow the function f to be expressed in terms of g are provided by the Möbius inversion formula, f(n) = Σ_{d|n} μ(d)·g(n/d).
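To make the relationship concrete, the following is a brute-force Python sketch of a divisor sum and its Möbius inversion; the helper names and the choice of example function are illustrative and the code is not optimized.

```python
def divisors(n):
    """All positive divisors of n (simple O(sqrt(n)) enumeration)."""
    divs, d = set(), 1
    while d * d <= n:
        if n % d == 0:
            divs.update((d, n // d))
        d += 1
    return sorted(divs)

def mobius(n):
    """Mobius function: 0 if n has a squared prime factor, else (-1)^(number of prime factors)."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            count += 1
        p += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

def divisor_sum(f, n):
    """g(n) = sum_{d | n} f(d), i.e. the Dirichlet convolution (f * 1)(n)."""
    return sum(f(d) for d in divisors(n))

def mobius_inverse(g, n):
    """Mobius inversion: recover f(n) = sum_{d | n} mu(d) * g(n / d)."""
    return sum(mobius(d) * g(n // d) for d in divisors(n))

# Example: f = identity, so g(n) = sigma(n); inversion recovers n itself.
f = lambda d: d
g = lambda m: divisor_sum(f, m)
print(g(12))                  # 28 = 1 + 2 + 3 + 4 + 6 + 12
print(mobius_inverse(g, 12))  # 12
```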
Naturally, some of the most interesting examples of such identities result when considering the average order summatory functions over an arithmetic function defined as a divisor sum of another arithmetic function . Particular examples of divisor sums involving special arithmetic functions and special Dirichlet convolutions of arithmetic functions can be found on the following pages:
here, here, here, here, and here.
Average order sum identities
Interchange of summation identities
The following identities are the primary motivation for creating this topics page. These identities do not appear to be well-known, or at least well-documented, and are extremely useful tools to have at hand in some applications. In what follows, we consider that are any prescribed arithmetic functions and that denotes the summatory function of . A more common special case of the first summation below is referenced here.
In general, these identities are collected from the so-called "rarities and b-sides" of both well established and semi-obscure analytic number theory notes and techniques and the papers and work of the contributors. The identities themselves are not difficult to prove and are an exercise in standard manipulations of series inversion and divisor sums. Therefore, we omit their proofs here.
The convolution method
The convolution method is a general technique for estimating average order sums of the form
where the multiplicative function f can be written as a convolution of the form for suitable, application-defined arithmetic functions g and h. A short survey of this method can be found here.
A related technique is the use of the formula
Σ_{n≤x} (g∗h)(n) = Σ_{a≤√x} g(a)·H(x/a) + Σ_{b≤√x} h(b)·G(x/b) − G(√x)·H(√x),
where G and H denote the summatory functions of g and h, respectively; this is known as the Dirichlet hyperbola method.
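As a concrete instance, the sketch below applies the hyperbola method to d = 1 ∗ 1, i.e. to the divisor-counting function, and checks the result against direct summation; the function names are illustrative.

```python
import math

def divisor_summatory_naive(x):
    """D(x) = sum_{n <= x} d(n), computed as sum_{a <= x} floor(x / a)."""
    return sum(x // a for a in range(1, x + 1))

def divisor_summatory_hyperbola(x):
    """Dirichlet hyperbola method for d = 1 * 1:
    D(x) = 2 * sum_{a <= sqrt(x)} floor(x / a) - floor(sqrt(x))^2,
    using only O(sqrt(x)) terms instead of O(x)."""
    root = math.isqrt(x)
    return 2 * sum(x // a for a in range(1, root + 1)) - root * root

x = 10_000
assert divisor_summatory_naive(x) == divisor_summatory_hyperbola(x)
print(divisor_summatory_hyperbola(x))
```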
Periodic divisor sums
An arithmetic function is periodic (mod k), or k-periodic, if for all . Particular examples of k-periodic number theoretic functions are the Dirichlet characters modulo k and the greatest common divisor function . It is known that every k-periodic arithmetic function has a representation as a finite discrete Fourier series of the form
where the Fourier coefficients defined by the following equation are also k-periodic:
We are interested in the following k-periodic divisor sums:
It is a fact that the Fourier coefficients of these divisor sum variants are given by the formula
Fourier transforms of the GCD
We can also express the Fourier coefficients in the equation immediately above in terms of the Fourier transform of any function h at the input of using the following result where is a Ramanujan sum (cf. Fourier transform of the totient function):
Thus by combining the results above we obtain that
Sums over prime divisors
Let the function denote the characteristic function of the primes, i.e., if and only if is prime and is zero-valued otherwise. Then as a special case of the first identity in equation (1) in section interchange of summation identities above, we can express the average order sums
We also have an integral formula based on Abel summation for sums of the form
where denotes the prime-counting function. Here we typically make the assumption that the function f is continuous and differentiable.
Some lesser appreciated divisor sum identities
We have the following divisor sum formulas for f any arithmetic function and g completely multiplicative where is Euler's totient function and is the Möbius function:
If f is completely multiplicative then the pointwise multiplication with a Dirichlet convolution yields .
If and n has more than m distinct prime factors, then
The Dirichlet inverse of an arithmetic function
We adopt the notation that ε denotes the multiplicative identity of Dirichlet convolution, so that (ε ∗ f)(n) = (f ∗ ε)(n) = f(n) for any arithmetic function f and all n ≥ 1. The Dirichlet inverse of a function f satisfies (f ∗ f⁻¹)(n) = ε(n) for all n ≥ 1. There is a well-known recursive convolution formula for computing the Dirichlet inverse of a function f (with f(1) ≠ 0) by induction, given in the form of f⁻¹(1) = 1/f(1) and, for n > 1, f⁻¹(n) = (−1/f(1))·Σ_{d|n, d<n} f(n/d)·f⁻¹(d).
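The recursion translates directly into code; the following Python sketch is illustrative (memoized, brute-force divisor enumeration) and checks that the inverse of the constant-one function reproduces the Möbius function values.

```python
from functools import lru_cache

def divisors(n):
    """Positive divisors of n (naive enumeration)."""
    return [d for d in range(1, n + 1) if n % d == 0]

def dirichlet_inverse(f):
    """Dirichlet inverse of an arithmetic function f with f(1) != 0, via
        f_inv(1) = 1 / f(1),
        f_inv(n) = -1/f(1) * sum_{d | n, d < n} f(n/d) * f_inv(d)."""
    @lru_cache(maxsize=None)
    def f_inv(n):
        if n == 1:
            return 1 / f(1)
        return -sum(f(n // d) * f_inv(d) for d in divisors(n) if d < n) / f(1)
    return f_inv

# Sanity check: the inverse of the constant-one function is the Mobius function,
# so (1 * 1^{-1})(n) should equal 1 for n = 1 and 0 otherwise.
one = lambda n: 1
one_inv = dirichlet_inverse(one)
print([one_inv(n) for n in range(1, 11)])  # Mobius values 1, -1, -1, 0, -1, 1, -1, 0, 0, 1 (as floats)
conv = lambda n: sum(one(d) * one_inv(n // d) for d in divisors(n))
print([conv(n) for n in range(1, 11)])     # 1 followed by zeros
```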
For a fixed function f, let the function
Next, define the following two multiple, or nested, convolution variants for any fixed arithmetic function f:
The function by the equivalent pair of summation formulas in the next equation is closely related to the Dirichlet inverse for an arbitrary function f.
In particular, we can prove that
A table of the values of for appears below. This table makes precise the intended meaning and interpretation of this function as the signed sum of all possible multiple k-convolutions of the function f with itself.
Let where p is the Partition function (number theory). Then there is another expression for the Dirichlet inverse given in terms of the functions above and the coefficients of the q-Pochhammer symbol for given by
Variants of sums over arithmetic functions
See also
Summation
Bell series
List of mathematical series
Notes
References
Number theory
Integer sequences
Summability methods
Arithmetic | Divisor sum identities | [
"Mathematics"
] | 1,114 | [
"Sequences and series",
"Discrete mathematics",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Summability methods",
"Mathematical objects",
"Combinatorics",
"Arithmetic",
"Numbers",
"Number theory"
] |
57,222,123 | https://en.wikipedia.org/wiki/Batch%20normalization | Batch normalization (also known as batch norm) is a method used to make training of artificial neural networks faster and more stable through normalization of the layers' inputs by re-centering and re-scaling. It was proposed by Sergey Ioffe and Christian Szegedy in 2015.
The reasons behind the effectiveness of batch normalization remain under discussion. It was believed that it can mitigate the problem of internal covariate shift, where parameter initialization and changes in the distribution of the inputs of each layer affect the learning rate of the network. Recently, some scholars have argued that batch normalization does not reduce internal covariate shift, but rather smooths the objective function, which in turn improves the performance. However, at initialization, batch normalization in fact induces severe gradient explosion in deep networks, which is only alleviated by skip connections in residual networks. Others maintain that batch normalization achieves length-direction decoupling, and thereby accelerates neural networks.
Internal covariate shift
Each layer of a neural network has inputs with a corresponding distribution, which is affected during the training process by the randomness in the parameter initialization and the randomness in the input data. The effect of these sources of randomness on the distribution of the inputs to internal layers during training is described as internal covariate shift. Although a clear-cut precise definition seems to be missing, the phenomenon observed in experiments is the change on means and variances of the inputs to internal layers during training.
Batch normalization was initially proposed to mitigate internal covariate shift. During the training stage of networks, as the parameters of the preceding layers change, the distribution of inputs to the current layer changes accordingly, such that the current layer needs to constantly readjust to new distributions. This problem is especially severe for deep networks, because small changes in shallower hidden layers will be amplified as they propagate within the network, resulting in significant shift in deeper hidden layers. Therefore, the method of batch normalization is proposed to reduce these unwanted shifts to speed up training and to produce more reliable models.
Besides reducing internal covariate shift, batch normalization is believed to introduce many other benefits. With this additional operation, the network can use higher learning rate without vanishing or exploding gradients. Furthermore, batch normalization seems to have a regularizing effect such that the network improves its generalization properties, and it is thus unnecessary to use dropout to mitigate overfitting. It has also been observed that the network becomes more robust to different initialization schemes and learning rates while using batch normalization.
Procedures
Transformation
In a neural network, batch normalization is achieved through a normalization step that fixes the means and variances of each layer's inputs. Ideally, the normalization would be conducted over the entire training set, but to use this step jointly with stochastic optimization methods, it is impractical to use the global information. Thus, normalization is restrained to each mini-batch in the training process.
Let us use B to denote a mini-batch of size m of the entire training set. The empirical mean and variance of B could thus be denoted as
μ_B = (1/m)·Σ_{i=1}^{m} x_i and σ_B² = (1/m)·Σ_{i=1}^{m} (x_i − μ_B)².
For a layer of the network with d-dimensional input, x = (x^(1), …, x^(d)), each dimension of its input is then normalized (i.e. re-centered and re-scaled) separately,
x̂_i^(k) = (x_i^(k) − μ_B^(k)) / √((σ_B^(k))² + ε), where k ∈ [1, d] and i ∈ [1, m]; μ_B^(k) and σ_B^(k) are the per-dimension mean and standard deviation, respectively.
ε is added in the denominator for numerical stability and is an arbitrarily small constant. The resulting normalized activations x̂^(k) have zero mean and unit variance, if ε is not taken into account. To restore the representation power of the network, a transformation step then follows as
y_i^(k) = γ^(k)·x̂_i^(k) + β^(k),
where the parameters γ^(k) and β^(k) are subsequently learned in the optimization process.
Formally, the operation that implements batch normalization is a transform called the Batch Normalizing transform. The output of the BN transform is then passed to other network layers, while the normalized output remains internal to the current layer.
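A minimal NumPy sketch of the Batch Normalizing transform for a fully connected layer is given below; the function signature, the returned intermediate values, and the example batch are illustrative choices rather than the interface of any particular framework.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Batch Normalizing transform for a mini-batch x of shape (m, d):
    re-center and re-scale each of the d features with the mini-batch
    statistics, then apply the learned scale gamma and shift beta."""
    mu = x.mean(axis=0)                    # per-dimension mean over the batch
    var = x.var(axis=0)                    # per-dimension (biased) variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalized activations
    y = gamma * x_hat + beta               # scale and shift
    return y, x_hat, mu, var

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(32, 4))  # batch of 32 samples, 4 features
gamma, beta = np.ones(4), np.zeros(4)
y, x_hat, mu, var = batch_norm_forward(x, gamma, beta)
print(y.mean(axis=0).round(6))  # approximately 0 per feature
print(y.std(axis=0).round(3))   # approximately 1 per feature
```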
Backpropagation
The described BN transform is a differentiable operation, and the gradient of the loss l with respect to the different parameters can be computed directly with the chain rule.
Specifically, ∂l/∂y_i^(k) depends on the choice of activation function, and the gradients with respect to the other parameters can be expressed as functions of ∂l/∂y_i^(k):
∂l/∂x̂_i^(k) = (∂l/∂y_i^(k))·γ^(k),
∂l/∂(σ_B^(k))² = Σ_{i=1}^{m} (∂l/∂x̂_i^(k))·(x_i^(k) − μ_B^(k))·(−1/2)·((σ_B^(k))² + ε)^(−3/2),
∂l/∂μ_B^(k) = Σ_{i=1}^{m} (∂l/∂x̂_i^(k))·(−1/√((σ_B^(k))² + ε)) + (∂l/∂(σ_B^(k))²)·(1/m)·Σ_{i=1}^{m} (−2)·(x_i^(k) − μ_B^(k)),
∂l/∂x_i^(k) = (∂l/∂x̂_i^(k))·(1/√((σ_B^(k))² + ε)) + (∂l/∂(σ_B^(k))²)·(2/m)·(x_i^(k) − μ_B^(k)) + (∂l/∂μ_B^(k))·(1/m),
∂l/∂γ^(k) = Σ_{i=1}^{m} (∂l/∂y_i^(k))·x̂_i^(k),
and ∂l/∂β^(k) = Σ_{i=1}^{m} ∂l/∂y_i^(k).
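These chain-rule expressions translate into a short NumPy sketch of the backward pass; the function name and the set of returned gradients are illustrative, and in practice frameworks obtain the same quantities through automatic differentiation rather than hand-written code.

```python
import numpy as np

def batch_norm_backward(dy, x, gamma, x_hat, mu, var, eps=1e-5):
    """Gradients of the loss through the BN transform; dy is dl/dy of shape (m, d)."""
    m = x.shape[0]
    std_inv = 1.0 / np.sqrt(var + eps)

    dx_hat = dy * gamma                                           # dl/dx_hat
    dvar = np.sum(dx_hat * (x - mu) * -0.5 * std_inv**3, axis=0)  # dl/dvar
    dmu = np.sum(dx_hat * -std_inv, axis=0) + dvar * np.mean(-2.0 * (x - mu), axis=0)
    dx = dx_hat * std_inv + dvar * 2.0 * (x - mu) / m + dmu / m   # dl/dx
    dgamma = np.sum(dy * x_hat, axis=0)                           # dl/dgamma
    dbeta = np.sum(dy, axis=0)                                    # dl/dbeta
    return dx, dgamma, dbeta
```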
Inference
During the training stage, the normalization steps depend on the mini-batches to ensure efficient and reliable training. However, in the inference stage, this dependence is not useful any more. Instead, the normalization step in this stage is computed with the population statistics such that the output can depend on the input in a deterministic manner. The population mean, E[x^(k)], and variance, Var[x^(k)], are computed as:
E[x^(k)] = E_B[μ_B^(k)], and Var[x^(k)] = (m/(m−1))·E_B[(σ_B^(k))²].
The population statistics thus is a complete representation of the mini-batches.
The BN transform in the inference step thus becomes
y^(k) = γ^(k)·(x^(k) − E[x^(k)])/√(Var[x^(k)] + ε) + β^(k),
where y^(k) is passed on to future layers instead of x^(k). Since the parameters are fixed in this transformation, the batch normalization procedure is essentially applying a linear transform to the activation.
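A NumPy sketch of the inference-time transform is shown below; the running-statistics arguments and the momentum value mentioned in the comment are illustrative conventions rather than something prescribed by the method itself.

```python
import numpy as np

def batch_norm_inference(x, gamma, beta, running_mean, running_var, eps=1e-5):
    """Inference-time BN: a fixed linear transform built from population
    (or running) statistics instead of mini-batch statistics."""
    scale = gamma / np.sqrt(running_var + eps)
    shift = beta - scale * running_mean
    return scale * x + shift

# In practice, frameworks often track the statistics during training with an
# exponential moving average, e.g. with momentum 0.1:
#   running_mean = 0.9 * running_mean + 0.1 * batch_mean
#   running_var  = 0.9 * running_var  + 0.1 * batch_var
```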
Theory
Although batch normalization has become popular due to its strong empirical performance, the working mechanism of the method is not yet well-understood. The explanation made in the original paper was that batch norm works by reducing internal covariate shift, but this has been challenged by more recent work. One experiment trained a VGG-16 network under 3 different training regimes: standard (no batch norm), batch norm, and batch norm with noise added to each layer during training. In the third model, the noise has non-zero mean and non-unit variance, i.e. it explicitly introduces covariate shift. Despite this, it showed similar accuracy to the second model, and both performed better than the first, suggesting that covariate shift is not the reason that batch norm improves performance.
Using batch normalization causes the items in a batch to no longer be iid, which can lead to difficulties in training due to lower quality gradient estimation.
Smoothness
One alternative explanation is that the improvement with batch normalization is instead due to it producing a smoother parameter space and smoother gradients, as formalized by a smaller Lipschitz constant.
Consider two identical networks, one contains batch normalization layers and the other does not, the behaviors of these two networks are then compared. Denote the loss functions as and , respectively. Let the input to both networks be , and the output be , for which , where is the layer weights. For the second network, additionally goes through a batch normalization layer. Denote the normalized activation as , which has zero mean and unit variance. Let the transformed activation be , and suppose and are constants. Finally, denote the standard deviation over a mini-batch as .
First, it can be shown that the gradient magnitude of a batch normalized network, , is bounded, with the bound expressed as
.
Since the gradient magnitude represents the Lipschitzness of the loss, this relationship indicates that a batch normalized network could achieve greater Lipschitzness comparatively. Notice that the bound gets tighter when the gradient correlates with the activation , which is a common phenomena. The scaling of is also significant, since the variance is often large.
Secondly, the quadratic form of the loss Hessian with respect to activation in the gradient direction can be bounded as
.
The scaling of indicates that the loss Hessian is resilient to the mini-batch variance, whereas the second term on the right hand side suggests that it becomes smoother when the Hessian and the inner product are non-negative. If the loss is locally convex, then the Hessian is positive semi-definite, while the inner product is positive if is in the direction towards the minimum of the loss. It could thus be concluded from this inequality that the gradient generally becomes more predictive with the batch normalization layer.
It then follows to translate the bounds related to the loss with respect to the normalized activation to a bound on the loss with respect to the network weights:
, where and .
In addition to the smoother landscape, it is further shown that batch normalization could result in a better initialization with the following inequality:
, where and are the local optimal weights for the two networks, respectively.
Some scholars argue that the above analysis cannot fully capture the performance of batch normalization, because the proof only concerns the largest eigenvalue, or equivalently, one direction in the landscape at all points. It is suggested that the complete eigenspectrum needs to be taken into account to make a conclusive analysis.
Measure
Since it is hypothesized that batch normalization layers could reduce internal covariate shift, an experiment is set up to measure quantitatively how much covariate shift is reduced. First, the notion of internal covariate shift needs to be defined mathematically. Specifically, to quantify the adjustment that a layer's parameters make in response to updates in previous layers, the correlation between the gradients of the loss before and after all previous layers are updated is measured, since gradients could capture the shifts from the first-order training method. If the shift introduced by the changes in previous layers is small, then the correlation between the gradients would be close to 1.
The correlation between the gradients are computed for four models: a standard VGG network, a VGG network with batch normalization layers, a 25-layer deep linear network (DLN) trained with full-batch gradient descent, and a DLN network with batch normalization layers. Interestingly, it is shown that the standard VGG and DLN models both have higher correlations of gradients compared with their counterparts, indicating that the additional batch normalization layers are not reducing internal covariate shift.
Vanishing/exploding gradients
Even though batchnorm was originally introduced to alleviate gradient vanishing or explosion problems, a deep batchnorm network in fact suffers from gradient explosion at initialization time, no matter what it uses for nonlinearity. Thus the optimization landscape is very far from smooth for a randomly initialized, deep batchnorm network.
More precisely, if the network has layers, then the gradient of the first layer weights has norm for some depending only on the nonlinearity.
For any fixed nonlinearity, decreases as the batch size increases. For example, for ReLU, decreases to as the batch size tends to infinity.
Practically, this means deep batchnorm networks are untrainable.
This is only relieved by skip connections in the fashion of residual networks.
This gradient explosion on the surface contradicts the smoothness property explained in the previous section, but in fact they are consistent. The previous section studies the effect of inserting a single batchnorm in a network, while the gradient explosion depends on stacking batchnorms typical of modern deep neural networks.
Decoupling
Another possible reason for the success of batch normalization is that it decouples the length and direction of the weight vectors and thus facilitates better training.
By interpreting batch norm as a reparametrization of weight space, it can be shown that the length and the direction of the weights are separated and can thus be trained separately. For a particular neural network unit with input and weight vector , denote its output as , where is the activation function, and denote . Assume that , and that the spectrum of the matrix is bounded as , , such that is symmetric positive definite. Adding batch normalization to this unit thus results in
, by definition.
The variance term can be simplified such that . Assume that has zero mean and can be omitted, then it follows that
, where is the induced norm of , .
Hence, it could be concluded that , where , and and accounts for its length and direction separately. This property could then be used to prove the faster convergence of problems with batch normalization.
Linear convergence
Least-square problem
With the reparametrization interpretation, it could then be proved that applying batch normalization to the ordinary least squares problem achieves a linear convergence rate in gradient descent, which is faster than the regular gradient descent with only sub-linear convergence.
Denote the objective of minimizing an ordinary least squares problem as
, where and .
Since , the objective thus becomes
, where 0 is excluded to avoid 0 in the denominator.
Since the objective is convex with respect to , its optimal value could be calculated by setting the partial derivative of the objective against to 0. The objective could be further simplified to be
.
Note that this objective is a form of the generalized Rayleigh quotient
, where is a symmetric matrix and is a symmetric positive definite matrix.
It is proven that the gradient descent convergence rate of the generalized Rayleigh quotient is
, where is the largest eigenvalue of , is the second largest eigenvalue of , and is the smallest eigenvalue of .
In our case, is a rank one matrix, and the convergence result can be simplified accordingly. Specifically, consider gradient descent steps of the form with step size , and starting from , then
.
Learning halfspace problem
The problem of learning halfspaces refers to the training of the Perceptron, which is the simplest form of neural network. The optimization problem in this case is
, where and is an arbitrary loss function.
Suppose that is infinitely differentiable and has a bounded derivative. Assume that the objective function is -smooth, and that a solution exists and is bounded such that . Also assume is a multivariate normal random variable. With the Gaussian assumption, it can be shown that all critical points lie on the same line, for any choice of loss function . Specifically, the gradient of could be represented as
, where , , and is the -th derivative of .
By setting the gradient to 0, it thus follows that the bounded critical points can be expressed as , where depends on and . Combining this global property with length-direction decoupling, it could thus be proved that this optimization problem converges linearly.
First, a variation of gradient descent with batch normalization, Gradient Descent in Normalized Parameterization (GDNP), is designed for the objective function , such that the direction and length of the weights are updated separately. Denote the stopping criterion of GDNP as
.
Let the step size be
.
For each step, if , then update the direction as
.
Then update the length according to
, where is the classical bisection algorithm, and is the total iterations ran in the bisection step.
Denote the total number of iterations as , then the final output of GDNP is
.
The GDNP algorithm thus slightly modifies the batch normalization step for the ease of mathematical analysis.
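The toy script below mimics that two-step structure — a gradient step on the direction of the weights followed by a bisection search on their length — but it is only a sketch: it uses a noiseless least-squares loss for concreteness, numerical derivatives, and a fixed step size instead of the adaptive quantities of the analysis, and all names are made up.

```python
# Schematic GDNP-style loop: alternate a direction step (gradient descent)
# with a length step (bisection on the derivative with respect to the length).
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

S = (X.T @ X) / n                        # empirical second-moment matrix of the inputs

def f(g, v):
    w = g * v / np.sqrt(v @ S @ v)       # normalized (length-direction) parameterization
    return 0.5 * np.mean((X @ w - y) ** 2)

def dfdg(g, v, h=1e-6):
    return (f(g + h, v) - f(g - h, v)) / (2 * h)

def bisect_length(v, lo=-100.0, hi=100.0, iters=60):
    for _ in range(iters):               # classical bisection on df/dg = 0
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if dfdg(mid, v) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

v, g = rng.normal(size=d), 1.0
for _ in range(300):
    grad_v = np.array([(f(g, v + 1e-6 * e) - f(g, v - 1e-6 * e)) / 2e-6
                       for e in np.eye(d)])
    v = v - 0.2 * grad_v                 # direction step (fixed step size here)
    g = bisect_length(v)                 # length step via bisection

print(f(g, v))                           # the loss drops towards 0 on this toy problem
```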
It can be shown that in GDNP, the partial derivative of against the length component converges to zero at a linear rate, such that
, where and are the two starting points of the bisection algorithm on the left and on the right, correspondingly.
Further, for each iteration, the norm of the gradient of with respect to converges linearly, such that
.
Combining these two inequalities, a bound could thus be obtained for the gradient with respect to :
, such that the algorithm is guaranteed to converge linearly.
Although the proof stands on the assumption of Gaussian input, it is also shown in experiments that GDNP could accelerate optimization without this constraint.
Neural networks
Consider a multilayer perceptron (MLP) with one hidden layer and hidden units with mapping from input to a scalar output described as
, where and are the input and output weights of unit correspondingly, and is the activation function and is assumed to be a tanh function.
The input and output weights could then be optimized with
, where is a loss function, , and .
Consider fixed and optimizing only , it can be shown that the critical points of of a particular hidden unit , , all align along one line depending on incoming information into the hidden layer, such that
, where is a scalar, .
This result could be proved by setting the gradient of to zero and solving the system of equations.
Apply the GDNP algorithm to this optimization problem by alternating optimization over the different hidden units. Specifically, for each hidden unit, run GDNP to find the optimal and . With the same choice of stopping criterion and stepsize, it follows that
.
Since the parameters of each hidden unit converge linearly, the whole optimization problem has a linear rate of convergence.
References
Further reading
Ioffe, Sergey; Szegedy, Christian (2015). "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", ICML'15: Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, July 2015 Pages 448–456
Artificial intelligence engineering | Batch normalization | [
"Engineering"
] | 3,406 | [
"Software engineering",
"Artificial intelligence engineering"
] |
57,222,575 | https://en.wikipedia.org/wiki/AEConnect | AEConnect (AEC-1) is a submarine communications cable privately owned by Aqua Comms linking the United States and Ireland. The cable has extended connectivity via the CeltixConnect cable to London. Originally the cable project was called Emerald Express managed by Emerald Networks, and was intended to include a cable landing in Iceland, however after being unable to secure funding the project ownership was transferred to the current owner.
The cable began construction in April 2015. The cable spans 5536 km between landing stations in Shirley, USA and Killala, Ireland. The cable's final splice was made in November 2015 and it was declared ready for service in January 2016.
References
Transatlantic communications cables
Submarine communications cables in the North Atlantic Ocean
Infrastructure completed in 2016
Ireland–United States relations
Coastal construction
Telecommunications equipment
History of telecommunications
2016 establishments in Ireland
2016 establishments in New York (state) | AEConnect | [
"Engineering"
] | 178 | [
"Construction",
"Coastal construction"
] |
57,222,748 | https://en.wikipedia.org/wiki/Biomolecules%20%28journal%29 | Biomolecules is a peer-reviewed open-access scientific journal covering various aspects of biochemistry, molecular biology, and cell biology research. It is published by MDPI and was established in 2011.
The journal publishes research articles, reviews, and commentaries related to the structure, function, and interactions of biological molecules.
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2022 impact factor of 4.8.
References
External links
English-language journals
Academic journals established in 2011
Biochemistry journals
MDPI academic journals
Continuous journals
Creative Commons Attribution-licensed journals | Biomolecules (journal) | [
"Chemistry"
] | 128 | [
"Biochemistry journals",
"Biochemistry literature"
] |
57,223,306 | https://en.wikipedia.org/wiki/Journal%20of%20Functional%20Biomaterials | Journal of Functional Biomaterials is a peer-reviewed open-access scientific journal covering various aspects of biomaterials research. It is published by MDPI and was established in 2012. The editor-in-chief is Pankaj Vadgama (Queen Mary University of London).
The journal publishes research articles, reviews, and commentaries related to research, including nanomedicine, nanotechnology, and sensors for health.
Abstracting and indexing
The journal is abstracted and indexed, for example, in:
According to the Journal Citation Reports, the journal has a 2022 impact factor of 4.8.
References
External links
English-language journals
Academic journals established in 2010
Materials science journals
MDPI academic journals
Continuous journals
Creative Commons Attribution-licensed journals | Journal of Functional Biomaterials | [
"Materials_science",
"Engineering"
] | 159 | [
"Materials science journals",
"Materials science"
] |
57,223,553 | https://en.wikipedia.org/wiki/Nanomaterials%20%28journal%29 | Nanomaterials is an interdisciplinary scientific journal that covers all aspects of nanomaterials. The journal publishes theoretical and experimental research articles and studies about synthesis and use of nanomaterials.
It was founded in 2010. The journal is published by MDPI; as of 2022, the editor-in-chief is Shirley Chiang, an American microscopist in the Department of Physics and Astronomy at the University of California, Davis.
Abstracting and indexing
The journal is abstracted and indexed in:
DOAJ
EBSCO
Scopus
Science Citation Index Expanded
According to the Journal Citation Reports, the journal has a 2022 impact factor of 5.3.
References
External links
English-language journals
MDPI academic journals
Nanotechnology journals | Nanomaterials (journal) | [
"Materials_science"
] | 146 | [
"Materials science stubs",
"Nanotechnology journals",
"Materials science journals",
"Materials science journal stubs",
"Nanotechnology stubs",
"Nanotechnology"
] |
46,944,668 | https://en.wikipedia.org/wiki/Friedman%27s%20SSCG%20function | In mathematics, a simple subcubic graph (SSCG) is a finite simple graph in which each vertex has a degree of at most three. Suppose we have a sequence of simple subcubic graphs G1, G2, ... such that each graph Gi has at most i + k vertices (for some integer k) and for no i < j is Gi homeomorphically embeddable into (i.e. is a graph minor of) Gj.
The Robertson–Seymour theorem proves that subcubic graphs (simple or not) are well-founded by homeomorphic embeddability, implying such a sequence cannot be infinite. Then, by applying Kőnig's lemma on the tree of such sequences under extension, for each value of k there is a sequence with maximal length. The function SSCG(k) denotes that length for simple subcubic graphs. The function SCG(k) denotes that length for (general) subcubic graphs.
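For illustration, the "simple subcubic" condition itself is easy to check mechanically; the following sketch (illustrative names, no attempt at testing minor embeddability) verifies that a graph given as an edge list has no loops, no multiple edges and maximum degree three.

```python
# Check whether a finite graph, given as undirected edges, is simple subcubic.
from collections import Counter

def is_simple_subcubic(edges):
    seen = set()
    degree = Counter()
    for u, v in edges:
        if u == v:                      # a loop means the graph is not simple
            return False
        key = frozenset((u, v))
        if key in seen:                 # a repeated edge means it is not simple
            return False
        seen.add(key)
        degree[u] += 1
        degree[v] += 1
    return all(d <= 3 for d in degree.values())

# K4 is simple and 3-regular, hence subcubic; K5 has vertices of degree 4.
k4 = [(a, b) for a in range(4) for b in range(a + 1, 4)]
k5 = [(a, b) for a in range(5) for b in range(a + 1, 5)]
print(is_simple_subcubic(k4), is_simple_subcubic(k5))   # True False
```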
The SCG sequence begins SCG(0) = 6, but then explodes to a value equivalent to f_(ε_2·2) in the fast-growing hierarchy.
The SSCG sequence begins more slowly than SCG: SSCG(0) = 2, SSCG(1) = 5, but then it grows rapidly. SSCG(2) = 3 × 2^(3 × 2^95) − 8 ≈ 3.241704 × 10^(3.58 × 10^28), a number with roughly 3.58 × 10^28 decimal digits; its first and last 20 digits are 32417042291246009846...34057047399148290040. SSCG(3) is much larger than both TREE(3) and TREE^(TREE(3))(3), that is, the TREE function nested TREE(3) times with 3 at the bottom.
Adam P. Goucher claims there is no qualitative difference between the asymptotic growth rates of SSCG and SCG. He writes "It's clear that SCG(n) ≥ SSCG(n), but I can also prove SSCG(4n + 3) ≥ SCG(n)."
The function was proposed and studied by Harvey Friedman.
See also
Goodstein's theorem
Paris–Harrington theorem
Kanamori–McAloon theorem
References
Mathematical logic
Theorems in discrete mathematics
Order theory
Wellfoundedness
Graph theory | Friedman's SSCG function | [
"Mathematics"
] | 502 | [
"Discrete mathematics",
"Mathematical theorems",
"Order theory",
"Mathematical logic",
"Wellfoundedness",
"Graph theory",
"Theorems in discrete mathematics",
"Combinatorics",
"Mathematical relations",
"Mathematical problems",
"Mathematical induction"
] |
46,946,727 | https://en.wikipedia.org/wiki/Pralmorelin | Pralmorelin (INN) (brand name GHRP Kaken 100; former developmental code names KP-102, GPA-748, WAY-GPA-748), also known as pralmorelin hydrochloride (JAN) and pralmorelin dihydrochloride (USAN), as well as, notably, growth hormone-releasing peptide 2 (GHRP-2), is a growth hormone secretagogue (GHS) used as a diagnostic agent that is marketed by Kaken Pharmaceutical in Japan in a single-dose formulation for the assessment of growth hormone deficiency (GHD).
Pralmorelin is an orally-active, synthetic peptide drug, specifically, an analogue of met-enkephalin, with the amino acid sequence D-Ala-D-(β-naphthyl)-Ala-Trp-D-Phe-Lys-NH2. It acts as a ghrelin/growth hormone secretagogue receptor (GHSR) agonist, and was the first of this class of drugs to be introduced clinically. Acute administration of the drug markedly increases the levels of plasma growth hormone (GH) and reliably induces sensations of hunger and increases food intake in humans.
Pralmorelin was also under investigation for the treatment of GHD and short stature (pituitary dwarfism), and made it to phase II clinical trials for these indications, but was ultimately never marketed for them. This may be because the ability of pralmorelin to increase plasma GH levels is significantly lower in people with GHD relative to healthy individuals.
See also
List of growth hormone secretagogues
References
Abandoned drugs
Ghrelin receptor agonists
Growth hormone secretagogues
Peptides
World Anti-Doping Agency prohibited substances | Pralmorelin | [
"Chemistry"
] | 376 | [
"Biomolecules by chemical classification",
"Drug safety",
"Abandoned drugs",
"Peptides",
"Molecular biology"
] |
46,946,757 | https://en.wikipedia.org/wiki/Curacin%20A | Curacin A is a hybrid polyketide synthase (PKS)/nonribosomal peptide synthetase (NRPS) derived natural product isolated from the cyanobacterium Lyngbya majuscula. Curacin A belongs to a family of natural products including jamaicamide, mupirocin, and pederin that have an unusual terminal alkene. Additionally, Curacin A contains a notable thiazoline ring and a unique cyclopropyl moiety, which is essential to the compound's biological activity. Curacin A has been characterized as a potent antiproliferative cytotoxic compound with notable anticancer activity against several cancer cell lines, including renal, colon, and breast cancer. Curacin A has been shown to interact with colchicine binding sites on tubulin, which inhibits microtubule polymerization, an essential process for cell division and proliferation.
Biosynthesis
The synthetic enzymes for Curacin A are found in a gene cluster with 14 open reading frames (ORFs) with the nomenclature CurA through CurN. Analysis of the pathway demonstrated the presence of one NRPS/PKS hybrid module located on CurF, one HMG-CoA synthase cassette located on CurD, and seven monomodular PKS modules. CurA contains a unique GCN5-related N-acetyltransferase (GNAT) loading domain and an associated acyl carrier protein (ACP). The loading module tethers an acetyl group to the ACP that then condenses with one of three tandem ACPs present in the adjacent module of CurA. A hydroxymethylglutaryl-CoA synthase cassette (mevalonate pathway) catalyzes the formation of hydroxymethylglutaryl acid by the addition of a malonyl-CoA unit to the terminal ketide of the aceto-acetyl-ACP moiety of ACP1, ACP2, or ACP3. Subsequent enzymes, including a unique heme-independent halogenase (HaI), catalyze the formation of a cyclopropyl ring. A cysteine-specific NRPS module located on CurF follows after cyclopropyl ring formation and, due to the activity of a cyclizing condensation domain, forms a thiazole ring attached to the cyclopropyl moiety from previous reactions in the pathway. Seven standalone PKS modules follow to extend the growing polyketide chain, with S-adenosyl methionine (SAM) dependent methylations occurring at positions 10 and 13. A rare offloading strategy involving a sulfotransferase is employed by the final curacin synthase module. The sulfotransferase sulfates the hydroxyl group of carbon 15, which activates the molecule for decarboxylation and terminal alkene formation.
Cyclopropyl ring formation
The CurB (ACP), CurC (ketosynthase), and CurD (HMG-CoA reductase) are responsible for the formation of (S)HMG-ACP3. HaI, from the CurA gene, is a unique non-heme halogenase that goes through a purported Fe(IV)=O intermediate to add a chlorine atom onto an unactivated carbon atom. After chlorination, ECH1, acting as a dehydratase, dehydrates HMG-ACP3 to 3-methylglutaconyl-ACP3, and ECH2 performs the required decarboxylation. Finally, an unusual ER-catalyzed cyclization reaction, purported to go through a substitution-like mechanism, forms the cyclopropane ring. The added chlorine atom assists in the decarboxylation step and likely serves as the leaving group during cyclopropane ring formation.
References
Polyketides
Cyclopropanes
Alkene derivatives | Curacin A | [
"Chemistry"
] | 835 | [
"Biomolecules by chemical classification",
"Natural products",
"Polyketides"
] |
46,947,088 | https://en.wikipedia.org/wiki/CG%20artist | CG artists (also known as computer graphics artists) create 2D and 3D art, usually for cinema, advertising or animation movies. A CG artist's work usually revolves around finding balance between artistic sensibilities and technical limitations while working within a development team.
In a game development context, CG artists work closely with game directors, art directors, animators, game designers and level designers. CG artists (typically, technical artists) will also work with game programmers to ensure that the 3D models and assets created by the art team function as desired inside a game engine.
CG artists are typically skilled at creating both 2D and 3D digital art, and often specialize in one or more subsets of content creation such as: hard surface modeling, organic modelling, concept art painting, architectural rendering, animation, and/or visual effects. If the CG artist is a technical artist, they will usually also have programming skills such as shader and script writing, character rigging, and/or skill in languages such as Python, MEL, C++, or C#.
CG artists often begin their career with a degree from an animation school, an arts discipline, or in computer science.
References
Computer occupations
Product development
Video game design
Video game development | CG artist | [
"Technology"
] | 255 | [
"Computer occupations"
] |
46,950,491 | https://en.wikipedia.org/wiki/Ky%20Fan%20lemma | In mathematics, Ky Fan's lemma (KFL) is a combinatorial lemma about labellings of triangulations. It is a generalization of Tucker's lemma. It was proved by Ky Fan in 1952.
Definitions
KFL uses the following concepts.
B^n: the closed n-dimensional ball.
S^(n−1): its boundary sphere.
T: a triangulation of B^n.
T is called boundary antipodally symmetric if the subset of simplices of T which are in S^(n−1) provides a triangulation of S^(n−1) in which, if σ is a simplex, then so is −σ.
L: a labeling of the vertices of T, which assigns to each vertex a non-zero integer.
L is called boundary odd if L(−v) = −L(v) for every vertex v in S^(n−1).
An edge of T is called a complementary edge of L if the labels of its two endpoints have the same size and opposite signs, e.g. {−2, +2}.
An n-dimensional simplex of T is called an alternating simplex of T if its labels have different sizes with alternating signs, e.g.{−1, +2, −3} or {+3, −5, +7}.
Statement
Let T be a boundary-antipodally-symmetric triangulation of B^n and L a boundary-odd labeling of T.
If L has no complementary edge, then L has an odd number of n-dimensional alternating simplices.
Corollary: Tucker's lemma
By definition, an n-dimensional alternating simplex must have labels with n + 1 different sizes.
This means that, if the labeling L uses only n different sizes (i.e. the labels come from {±1, ..., ±n}), it cannot have an n-dimensional alternating simplex.
Hence, by KFL, L must have a complementary edge.
Proof
KFL can be proved constructively based on a path-based algorithm. The algorithm starts at a certain point or edge of the triangulation, then goes from simplex to simplex according to prescribed rules, until it is not possible to proceed any more. It can be proved that the path must end in an alternating simplex.
The proof is by induction on n.
The basis is n = 1. In this case, B^1 is the interval [−1, +1] and its boundary is the set {−1, +1}. The labeling L is boundary-odd, so L(−1) = −L(+1). Without loss of generality, assume that L(−1) < 0 and L(+1) > 0. Start at −1 and go right. At some edge e, the labeling must change from negative to positive. Since L has no complementary edges, e must have a negative label and a positive label with a different size (e.g. −1 and +2); this means that e is a 1-dimensional alternating simplex. Moreover, if at any point the labeling changes again from positive to negative, then this change makes a second alternating simplex, and by the same reasoning as before there must be a third alternating simplex later. Hence, the number of alternating simplices is odd.
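The one-dimensional counting argument can be checked with a short script; the label sequence below is an arbitrary made-up example of a boundary-odd labeling of a triangulated interval with no complementary edge.

```python
# Count complementary and alternating edges along a labelled interval.
def count_edges(labels):
    complementary = alternating = 0
    for a, b in zip(labels, labels[1:]):
        if a == -b:
            complementary += 1          # same size, opposite signs
        elif a * b < 0:
            alternating += 1            # opposite signs, different sizes
    return complementary, alternating

labels = [-1, -3, 2, -4, 1, -2, 3, 1]   # boundary odd: endpoint labels are -1 and +1
comp, alt = count_edges(labels)
print(comp, alt, alt % 2 == 1)          # 0 5 True -> an odd number of alternating edges
```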
The following description illustrates the induction step for n = 2. In this case, B^2 is a disc and its boundary is a circle. The labeling L is boundary-odd, so in particular for some point v on the boundary. Split the boundary circle into two semi-circles and treat each semi-circle as an interval. By the induction basis, this interval must have an alternating simplex, e.g. an edge with labels (+1, −2). Moreover, the number of such edges on both intervals is odd. Using the boundary criterion, on the boundary we have an odd number of edges where the smaller number is positive and the larger negative, and an odd number of edges where the smaller number is negative and the larger positive. We call the former decreasing, the latter increasing.
There are two kinds of triangles.
If a triangle is not alternating, it must have an even number of increasing edges and an even number of decreasing edges.
If a triangle is alternating, it must have one increasing edge and one decreasing edge, thus we have an odd number of alternating triangles.
By induction, this proof can be extended to any dimension.
References
Combinatorics
Fixed-point theorems | Ky Fan lemma | [
"Mathematics"
] | 838 | [
"Theorems in mathematical analysis",
"Discrete mathematics",
"Fixed-point theorems",
"Theorems in topology",
"Combinatorics"
] |
46,955,123 | https://en.wikipedia.org/wiki/Angles%20between%20flats | The concept of angles between lines (in the plane or in space), between two planes (dihedral angle) or between a line and a plane can be generalized to arbitrary dimensions. This generalization was first discussed by Camille Jordan. For any pair of flats in a Euclidean space of arbitrary dimension one can define a set of mutual angles which are invariant under isometric transformation of the Euclidean space. If the flats do not intersect, their shortest distance is one more invariant. These angles are called canonical or principal. The concept of angles can be generalized to pairs of flats in a finite-dimensional inner product space over the complex numbers.
Jordan's definition
Let and be flats of dimensions and in the -dimensional Euclidean space . By definition, a translation of or does not alter their mutual angles. If and do not intersect, they will do so upon any translation of which maps some point in to some point in . It can therefore be assumed without loss of generality that and intersect.
Jordan shows that Cartesian coordinates in can then be defined such that and are described, respectively, by the sets of equations
and
with . Jordan calls these coordinates canonical. By definition, the angles are the angles between and .
The non-negative integers are constrained by
For these equations to determine the five non-negative integers completely, besides the dimensions and and the number of angles , the non-negative integer must be given. This is the number of coordinates , whose corresponding axes are those lying entirely within both and . The integer is thus the dimension of . The set of angles may be supplemented with angles to indicate that has that dimension.
Jordan's proof applies essentially unaltered when is replaced with the -dimensional inner product space over the complex numbers. (For angles between subspaces, the generalization to is discussed by Galántai and Hegedũs in terms of the below variational characterization.)
Angles between subspaces
Now let and be subspaces of the -dimensional inner product space over the real or complex numbers. Geometrically, and are flats, so Jordan's definition of mutual angles applies. When for any canonical coordinate the symbol denotes the unit vector of the axis, the vectors form an orthonormal basis for and the vectors form an orthonormal basis for , where
Being related to canonical coordinates, these basic vectors may be called canonical.
When denote the canonical basic vectors for and the canonical basic vectors for then the inner product vanishes for any pair of and except the following ones.
With the above ordering of the basic vectors, the matrix of the inner products is thus diagonal. In other words, if and are arbitrary orthonormal bases in and then the real, orthogonal or unitary transformations from the basis to the basis and from the basis to the basis realize a singular value decomposition of the matrix of inner products . The diagonal matrix elements are the singular values of the latter matrix. By the uniqueness of the singular value decomposition, the vectors are then unique up to a real, orthogonal or unitary transformation among them, and the vectors and (and hence ) are unique up to equal real, orthogonal or unitary transformations applied simultaneously to the sets of the vectors associated with a common value of and to the corresponding sets of vectors (and hence to the corresponding sets of ).
A singular value can be interpreted as corresponding to the angles introduced above and associated with and a singular value can be interpreted as corresponding to right angles between the orthogonal spaces and , where superscript denotes the orthogonal complement.
Variational characterization
The variational characterization of singular values and vectors implies as a special case a variational characterization of the angles between subspaces and their associated canonical vectors. This characterization includes the angles and introduced above and orders the angles by increasing value. It can be given the form of the below alternative definition. In this context, it is customary to talk of principal angles and vectors.
Definition
Let be an inner product space. Given two subspaces with , there exists then a sequence of angles called the principal angles, the first one defined as
where is the inner product and the induced norm. The vectors and are the corresponding principal vectors.
The other principal angles and vectors are then defined recursively via
This means that the principal angles form a set of minimized angles between the two subspaces, and the principal vectors in each subspace are orthogonal to each other.
Examples
Geometric example
Geometrically, subspaces are flats (points, lines, planes etc.) that include the origin, thus any two subspaces intersect at least in the origin. Two two-dimensional subspaces and generate a set of two angles. In a three-dimensional Euclidean space, the subspaces and are either identical, or their intersection forms a line. In the former case, both . In the latter case, only , where vectors and are on the line of the intersection and have the same direction. The angle will be the angle between the subspaces and in the orthogonal complement to . Imagining the angle between two planes in 3D, one intuitively thinks of the largest angle, .
Algebraic example
In 4-dimensional real coordinate space R^4, let the two-dimensional subspace be
spanned by and , and let the two-dimensional subspace be
spanned by and with some real and such that . Then and are, in fact, the pair of principal vectors corresponding to the angle with , and and are the principal vectors corresponding to the angle with
To construct a pair of subspaces with any given set of angles in a (or larger) dimensional Euclidean space, take a subspace with an orthonormal basis and complete it to an orthonormal basis of the Euclidean space, where . Then, an orthonormal basis of the other subspace is, e.g.,
Basic properties
If the largest angle is zero, one subspace is a subset of the other.
If the largest angle is , there is at least one vector in one subspace perpendicular to the other subspace.
If the smallest angle is zero, the subspaces intersect at least in a line.
If the smallest angle is , the subspaces are orthogonal.
The number of angles equal to zero is the dimension of the space where the two subspaces intersect.
Advanced properties
Non-trivial (different from and ) angles between two subspaces are the same as the non-trivial angles between their orthogonal complements.
Non-trivial angles between the subspaces and and the corresponding non-trivial angles between the subspaces and sum up to .
The angles between subspaces satisfy the triangle inequality in terms of majorization and thus can be used to define a distance on the set of all subspaces turning the set into a metric space.
The sine of the angles between subspaces satisfy the triangle inequality in terms of majorization and thus can be used to define a distance on the set of all subspaces turning the set into a metric space. For example, the sine of the largest angle is known as a gap between subspaces.
Extensions
The notion of the angles and some of the variational properties can be naturally extended to arbitrary inner products and subspaces with infinite dimensions.
Computation
Historically, the principal angles and vectors first appear in the context of canonical correlation and were originally computed using the SVD of corresponding covariance matrices. However, as first noticed in, the canonical correlation is related to the cosine of the principal angles, which is ill-conditioned for small angles, leading to very inaccurate computation of highly correlated principal vectors in finite-precision computer arithmetic. The sine-based algorithm fixes this issue, but creates a new problem of very inaccurate computation of highly uncorrelated principal vectors, since the sine function is ill-conditioned for angles close to π/2. To produce accurate principal vectors in computer arithmetic for the full range of the principal angles, the combined technique first computes all principal angles and vectors using the classical cosine-based approach, and then recomputes the principal angles smaller than π/4 and the corresponding principal vectors using the sine-based approach. The combined technique is implemented in the open-source libraries Octave and SciPy and has been contributed to MATLAB.
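A small sketch of the cosine-based computation is shown below; the spanning vectors are arbitrary illustrations, and scipy.linalg.subspace_angles implements the combined sine/cosine technique mentioned above.

```python
# Principal angles between two 2-dimensional subspaces of R^4 via the SVD.
import numpy as np
from scipy.linalg import subspace_angles

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0]])               # columns span the first subspace
B = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])               # columns span the second subspace

# Orthonormalize, then the singular values of Qa^T Qb are the cosines of
# the principal angles (listed here from the smallest angle to the largest).
Qa, _ = np.linalg.qr(A)
Qb, _ = np.linalg.qr(B)
cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
print(np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0))))   # [45. 90.]
print(np.degrees(subspace_angles(A, B)))  # same angles, returned in descending order
```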
See also
Singular value decomposition
Canonical correlation
References
Analytic geometry
Linear algebra
Angle | Angles between flats | [
"Physics",
"Mathematics"
] | 1,676 | [
"Geometric measurement",
"Scalar physical quantities",
"Physical quantities",
"Linear algebra",
"Wikipedia categories named after physical quantities",
"Angle",
"Algebra"
] |
56,776,202 | https://en.wikipedia.org/wiki/Tribosystem | A tribosystem is a tribological system that consists of at least two contacting bodies and any environmental factor that affects their interaction. Tribologists study tribological systems in detail, and devise tribological test procedures.
Definition
According to ASTM G40-17, a tribosystem is "any system that contains one or more triboelements, including all mechanical, chemical, and environmental factors relevant to tribological behavior." Here, triboelement refers to "one of two or more solid bodies that comprise a sliding, rolling, or abrasive contact, or a body subjected to impingement or cavitation."
More simply speaking, a tribosystem is a tribological system that consists of at least two contacting bodies, including the environment in which the interaction takes place. The complete description of a tribosystem includes all relevant factors that govern the tribological behavior of the system. Thus, depending on the aim of the tribological analysis, the tribosystem boundary is flexible and can be drawn more or less widely.
Describing Tribosystems
The description of tribosystems is based on a detailed assessment of relevant system inputs, outputs and losses, as well as an overall description of the system structure. The following table gives an overview.
Relevance
The complete description of a tribosystem is the first step when devising a tribological test procedure. Since tribological tests are often carried out on simplified model systems using standardized tribometers, a complete description of the tribosystem allows for tribological testing across different scales.
For example, if the tribological analysis aims to investigate a specific gear contact in a complex gearbox, exact knowledge of the tribological inputs allows tribologists to devise a simplified test setup involving only two gears. Conversely, if the analysis aims to develop a new lubricant formulation for gearbox applications, a rough description of the gearbox tribosystem allows testing to be restricted to the relevant system inputs. Thus, detailed knowledge of the tribosystem can significantly reduce the development effort for machines and lubricants.
References
Tribology | Tribosystem | [
"Chemistry",
"Materials_science",
"Engineering"
] | 452 | [
"Tribology",
"Mechanical engineering",
"Materials science",
"Surface science"
] |
56,777,468 | https://en.wikipedia.org/wiki/Kirchhoff%E2%80%93Helmholtz%20integral | The Kirchhoff–Helmholtz integral combines the Helmholtz equation with the Kirchhoff integral theorem to produce a method applicable to acoustics, seismology and other disciplines involving wave propagation.
It states that the sound pressure is completely determined within a volume free of sources, if sound pressure and velocity are determined in all points on its surface.
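One common way of writing the integral is sketched below; it is an illustration rather than a quotation of a particular source, since sign conventions for the surface normal and the assumed time dependence differ between authors. Here G is the free-space Green's function of the Helmholtz equation with wavenumber k, the integration runs over the closed boundary ∂V of the source-free volume, and the normal derivative of the pressure encodes the normal particle velocity through the linearized Euler equation.

```latex
p(\mathbf{r}_A) \;=\; \oint_{\partial V}
\left[\, G(\mathbf{r}\mid\mathbf{r}_A)\,\frac{\partial p(\mathbf{r})}{\partial n}
\;-\; p(\mathbf{r})\,\frac{\partial G(\mathbf{r}\mid\mathbf{r}_A)}{\partial n} \,\right]\mathrm{d}S,
\qquad
G(\mathbf{r}\mid\mathbf{r}_A) \;=\; \frac{e^{-\mathrm{i}k\,\lvert\mathbf{r}-\mathbf{r}_A\rvert}}{4\pi\,\lvert\mathbf{r}-\mathbf{r}_A\rvert}.
```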
See also
Kirchhoff integral
References
Acoustic equations | Kirchhoff–Helmholtz integral | [
"Physics"
] | 82 | [
"Equations of physics",
"Acoustic equations"
] |
56,784,168 | https://en.wikipedia.org/wiki/Open%20system%20tribology | Open System Tribology is a field of tribology that studies tribological systems that are exposed to and affected by the natural environment.
Overview
Factors influencing the tribological process will vary with the operating environment. This environment may be closed or open. Closed systems (e.g., gears in a gearbox) are theoretically not affected by weather conditions. On the other hand, open systems are affected by weather conditions (i.e., precipitation, temperature, and humidity). For example, weather conditions will strongly influence the tribosystem formed in a ski-trail contact, and ski preparation specialists need to do a thorough work before a ski race.
Another example is that of tire–road and wheel-rail contacts that are exposed to the external environment. Here, artificial and natural contaminants will exert an influence on friction and wear. Sound and airborne particles from the contacting surfaces are not contained and are emitted into the surrounding air. Tribology at the wheel-rail contact plays a key role in railway performance. Friction controls the tracking and braking, while wear affects reliability and endurance.
Temperature influences the tribological process by affecting the properties of the contacting surfaces. Polymers, for example, are harder at low temperatures than at room temperature.
Tribology
References | Open system tribology | [
"Chemistry",
"Materials_science",
"Engineering"
] | 264 | [
"Tribology",
"Mechanical engineering",
"Materials science",
"Surface science"
] |
39,580,830 | https://en.wikipedia.org/wiki/Symmetry%20in%20quantum%20mechanics | Symmetries in quantum mechanics describe features of spacetime and particles which are unchanged under some transformation, in the context of quantum mechanics, relativistic quantum mechanics and quantum field theory, and with applications in the mathematical formulation of the standard model and condensed matter physics. In general, symmetry in physics, invariance, and conservation laws, are fundamentally important constraints for formulating physical theories and models. In practice, they are powerful methods for solving problems and predicting what can happen. While conservation laws do not always give the answer to the problem directly, they form the correct constraints and the first steps to solving a multitude of problems. In application, understanding symmetries can also provide insights on the eigenstates that can be expected. For example, the existence of degenerate states can be inferred by the presence of non commuting symmetry operators or that the non degenerate states are also eigenvectors of symmetry operators.
This article outlines the connection between the classical form of continuous symmetries as well as their quantum operators, and relates them to the Lie groups, and relativistic transformations in the Lorentz group and Poincaré group.
Notation
The notational conventions used in this article are as follows. Boldface indicates vectors, four vectors, matrices, and vectorial operators, while quantum states use bra–ket notation. Wide hats are for operators, narrow hats are for unit vectors (including their components in tensor index notation). The summation convention on the repeated tensor indices is used, unless stated otherwise. The Minkowski metric signature is (+−−−).
Symmetry transformations on the wavefunction in non-relativistic quantum mechanics
Continuous symmetries
Generally, the correspondence between continuous symmetries and conservation laws is given by Noether's theorem.
The form of the fundamental quantum operators, for example the energy operator as a partial time derivative and momentum operator as a spatial gradient, becomes clear when one considers the initial state, then changes one parameter of it slightly. This can be done for displacements (lengths), durations (time), and angles (rotations). Additionally, the invariance of certain quantities can be seen by making such changes in lengths and angles, illustrating conservation of these quantities.
In what follows, transformations on only one-particle wavefunctions in the form:
are considered, where denotes a unitary operator. Unitarity is generally required for operators representing transformations of space, time, and spin, since the norm of a state (representing the total probability of finding the particle somewhere with some spin) must be invariant under these transformations. The inverse is the Hermitian conjugate . The results can be extended to many-particle wavefunctions. Written in Dirac notation as standard, the transformations on quantum state vectors are:
Now, the action of changes to , so the inverse changes back to . Thus, an operator invariant under satisfies:
Concomitantly,
for any state ψ. Quantum operators representing observables are also required to be Hermitian so that their eigenvalues are real numbers, i.e. the operator equals its Hermitian conjugate, .
Overview of Lie group theory
Following are the key points of group theory relevant to quantum theory, examples are given throughout the article. For an alternative approach using matrix groups, see the books of Hall
Let be a Lie group, which is a group that locally is parameterized by a finite number of real continuously varying parameters . In more mathematical language, this means that is a smooth manifold that is also a group, for which the group operations are smooth.
the dimension of the group, , is the number of parameters it has.
the group elements, , in are functions of the parameters: and all parameters set to zero returns the identity element of the group: Group elements are often matrices which act on vectors, or transformations acting on functions.
The generators of the group are the partial derivatives of the group elements with respect to the group parameters with the result evaluated when the parameter is set to zero: In the language of manifolds, the generators are the elements of the tangent space to G at the identity. The generators are also known as infinitesimal group elements or as the elements of the Lie algebra of G. (See the discussion below of the commutator.) One aspect of generators in theoretical physics is they can be constructed themselves as operators corresponding to symmetries, which may be written as matrices, or as differential operators. In quantum theory, for unitary representations of the group, the generators require a factor of : The generators of the group form a vector space, which means linear combinations of generators also form a generator.
The generators (whether matrices or differential operators) satisfy the commutation relations: where are the (basis dependent) structure constants of the group. This makes, together with the vector space property, the set of all generators of a group a Lie algebra. Due to the antisymmetry of the bracket, the structure constants of the group are antisymmetric in the first two indices.
The representations of the group then describe the ways that the group (or its Lie algebra) can act on a vector space. (The vector space might be, for example, the space of eigenvectors for a Hamiltonian having as its symmetry group.) We denote the representations using a capital . One can then differentiate to obtain a representation of the Lie algebra, often also denoted by . These two representations are related as follows: without summation on the repeated index . Representations are linear operators that take in group elements and preserve the composition rule:
A representation which cannot be decomposed into a direct sum of other representations, is called irreducible. It is conventional to label irreducible representations by a superscripted number in brackets, as in , or if there is more than one number, we write .
There is an additional subtlety that arises in quantum theory, where two vectors that differ by multiplication by a scalar represent the same physical state. Here, the pertinent notion of representation is a projective representation, one that only satisfies the composition law up to a scalar. In the context of quantum mechanical spin, such representations are called spinorial.
Momentum and energy as generators of translation and time evolution, and rotation
The space translation operator acts on a wavefunction to shift the space coordinates by an infinitesimal displacement . The explicit expression can be quickly determined by a Taylor expansion of about , then (keeping the first order term and neglecting second and higher order terms), replace the space derivatives by the momentum operator . Similarly for the time translation operator acting on the time parameter, the Taylor expansion of is about , and the time derivative replaced by the energy operator .
The exponential functions arise by definition as those limits, due to Euler, and can be understood physically and mathematically as follows. A net translation can be composed of many small translations, so to obtain the translation operator for a finite increment, replace by and by , where is a positive non-zero integer. Then as increases, the magnitude of and become even smaller, while leaving the directions unchanged. Acting the infinitesimal operators on the wavefunction times and taking the limit as tends to infinity gives the finite operators.
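A numerical sketch of the space-translation case (illustrative, with ħ = 1 and a Gaussian test wavefunction): applying exp(−i p̂ a) in the momentum (Fourier) representation, where the momentum operator acts as multiplication by the wavenumber k, shifts the wavefunction by a.

```python
# The translation operator as a phase factor in momentum space.
import numpy as np

N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])

psi = np.exp(-(x ** 2))                 # a Gaussian wave packet centred at 0
a = 3.0                                 # displacement

psi_translated = np.fft.ifft(np.exp(-1j * k * a) * np.fft.fft(psi))
print(np.allclose(psi_translated.real, np.exp(-((x - a) ** 2)), atol=1e-10))   # True
```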
Space and time translations commute, which means the operators and generators commute.
For a time-independent Hamiltonian, energy is conserved in time and quantum states are stationary states: the eigenstates of the Hamiltonian are the energy eigenvalues :
and all stationary states have the form
where is the initial time, usually set to zero since there is no loss of continuity when the initial time is set.
An alternative notation is .
Angular momentum as the generator of rotations
Orbital angular momentum
The rotation operator, , acts on a wavefunction to rotate the spatial coordinates of a particle by a constant angle :
where are the rotated coordinates about an axis defined by a unit vector through an angular increment , given by:
where is a rotation matrix dependent on the axis and angle. In group theoretic language, the rotation matrices are group elements, and the angles and axis are the parameters, of the three-dimensional special orthogonal group, SO(3). The rotation matrices about the standard Cartesian basis vector through angle , and the corresponding generators of rotations , are:
More generally for rotations about an axis defined by , the rotation matrix elements are:
where is the Kronecker delta, and is the Levi-Civita symbol.
It is not as obvious how to determine the rotational operator compared to space and time translations. We may consider a special case (rotations about the , , or -axis) then infer the general result, or use the general rotation matrix directly and tensor index notation with and . To derive the infinitesimal rotation operator, which corresponds to small , we use the small angle approximations and , then Taylor expand about or , keep the first order term, and substitute the angular momentum operator components.
The -component of angular momentum can be replaced by the component along the axis defined by , using the dot product .
Again, a finite rotation can be made from many small rotations, replacing by and taking the limit as tends to infinity gives the rotation operator for a finite rotation.
Rotations about the same axis do commute, for example a rotation through angles and about axis can be written
However, rotations about different axes do not commute. The general commutation rules are summarized by
In this sense, orbital angular momentum has the common sense properties of rotations. Each of the above commutators can be easily demonstrated by holding an everyday object and rotating it through the same angle about any two different axes in both possible orderings; the final configurations are different.
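These commutation relations can be checked directly with small matrices (illustrative, with ħ = 1): the 3 × 3 generators with elements (J_a)_{bc} = −i ε_{abc} satisfy [J_a, J_b] = i ε_{abc} J_c, and exponentiating J_z reproduces the ordinary rotation matrix about the z-axis.

```python
# Verify the angular momentum commutation relations for the 3x3 generators.
import numpy as np
from scipy.linalg import expm

def levi_civita():
    eps = np.zeros((3, 3, 3))
    for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[a, b, c], eps[a, c, b] = 1.0, -1.0
    return eps

eps = levi_civita()
J = [-1j * eps[a] for a in range(3)]     # generators J_x, J_y, J_z

ok = all(np.allclose(J[a] @ J[b] - J[b] @ J[a],
                     1j * sum(eps[a, b, c] * J[c] for c in range(3)))
         for a in range(3) for b in range(3))
print(ok)                                # True

theta = 0.3                              # exp(-i theta J_z) rotates about the z axis
Rz = expm(-1j * theta * J[2]).real
print(np.allclose(Rz, [[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]]))      # True
```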
In quantum mechanics, there is another form of rotation which mathematically appears similar to the orbital case, but has different properties, described next.
Spin angular momentum
All previous quantities have classical definitions. Spin is a quantity possessed by particles in quantum mechanics without any classical analogue, having the units of angular momentum. The spin vector operator is denoted . The eigenvalues of its components are the possible outcomes (in units of ) of a measurement of the spin projected onto one of the basis directions.
Rotations (of ordinary space) about an axis through angle about the unit vector in space acting on a multicomponent wave function (spinor) at a point in space is represented by:
However, unlike orbital angular momentum in which the z-projection quantum number can only take positive or negative integer values (including zero), the z-projection spin quantum number s can take all positive and negative half-integer values. There are rotational matrices for each spin quantum number.
Evaluating the exponential for a given z-projection spin quantum number s gives a (2s + 1)-dimensional spin matrix. This can be used to define a spinor as a column vector of 2s + 1 components which transforms to a rotated coordinate system according to the spin matrix at a fixed point in space.
For the simplest non-trivial case of s = 1/2, the spin operator is given by
where the Pauli matrices in the standard representation are:
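A brief numerical illustration for s = 1/2 (with ħ = 1, and the usual Pauli matrices written out explicitly): exponentiating −iθ(n·σ)/2 reproduces the closed form cos(θ/2) I − i sin(θ/2) n·σ, and a rotation by 2π gives −I, the characteristic sign change of a spinor.

```python
# Spin-1/2 rotation operators built from the Pauli matrices.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)   # arbitrary rotation axis
theta = 0.8
n_sigma = n[0] * sx + n[1] * sy + n[2] * sz

U = expm(-1j * theta / 2 * n_sigma)
closed_form = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * n_sigma
print(np.allclose(U, closed_form))                             # True
print(np.allclose(expm(-1j * np.pi * n_sigma), -np.eye(2)))    # 2*pi rotation gives -I
```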
Total angular momentum
The total angular momentum operator is the sum of the orbital and spin
and is an important quantity for multi-particle systems, especially in nuclear physics and the quantum chemistry of multi-electron atoms and molecules.
We have a similar rotation matrix:
Conserved quantities in the quantum harmonic oscillator
The dynamical symmetry group of the n dimensional quantum harmonic oscillator is the special unitary group SU(n). As an example, the number of infinitesimal generators of the corresponding Lie algebras of SU(2) and SU(3) are three and eight respectively. This leads to exactly three and eight independent conserved quantities (other than the Hamiltonian) in these systems.
The two dimensional quantum harmonic oscillator has the expected conserved quantities of the Hamiltonian and the angular momentum, but has additional hidden conserved quantities of energy level difference and another form of angular momentum.
Lorentz group in relativistic quantum mechanics
Following is an overview of the Lorentz group; a treatment of boosts and rotations in spacetime. Throughout this section, see (for example) T. Ohlsson (2011) and E. Abers (2004).
Lorentz transformations can be parametrized by rapidity for a boost in the direction of a three-dimensional unit vector , and a rotation angle about a three-dimensional unit vector defining an axis, so and are together six parameters of the Lorentz group (three for rotations and three for boosts). The Lorentz group is 6-dimensional.
Pure rotations in spacetime
The rotation matrices and rotation generators considered above form the spacelike part of a four-dimensional matrix, representing pure-rotation Lorentz transformations. Three of the Lorentz group elements and generators for pure rotations are:
The rotation matrices act on any four vector and rotate the space-like components according to
leaving the time-like coordinate unchanged. In matrix expressions, is treated as a column vector.
Pure boosts in spacetime
A boost with velocity in the x, y, or z directions given by the standard Cartesian basis vector , are the boost transformation matrices. These matrices and the corresponding generators are the remaining three group elements and generators of the Lorentz group:
The boost matrices act on any four vector A = (A0, A1, A2, A3) and mix the time-like and the space-like components, according to:
The term "boost" refers to the relative velocity between two frames, and is not to be conflated with momentum as the generator of translations, as explained below.
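A short sketch of how a finite boost arises from its generator (sign and metric conventions vary between authors; the matrices below are illustrative): exponentiating the generator of boosts along x with rapidity ζ produces the familiar matrix of hyperbolic cosines and sines mixing the t and x components.

```python
# Exponentiate the boost generator along x to obtain a finite Lorentz boost.
import numpy as np
from scipy.linalg import expm

Kx = np.zeros((4, 4))
Kx[0, 1] = Kx[1, 0] = 1.0               # generator of boosts along x

zeta = 0.5                               # rapidity, with tanh(zeta) = v/c
B = expm(zeta * Kx)

expected = np.eye(4)
expected[0, 0] = expected[1, 1] = np.cosh(zeta)
expected[0, 1] = expected[1, 0] = np.sinh(zeta)
print(np.allclose(B, expected))          # True
```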
Combining boosts and rotations
Products of rotations give another rotation (a frequent exemplification of a subgroup), while products of boosts and boosts or of rotations and boosts cannot be expressed as pure boosts or pure rotations. In general, any Lorentz transformation can be expressed as a product of a pure rotation and a pure boost. For more background see (for example) B.R. Durney (2011) and H.L. Berk et al. and references therein.
The boost and rotation generators have representations denoted and respectively, the capital in this context indicates a group representation.
For the Lorentz group, the representations and of the generators and fulfill the following commutation rules.
In all commutators, the boost generators mix with those for rotations, although rotations alone simply give another rotation. Exponentiating the generators gives the boost and rotation operators which combine into the general Lorentz transformation, under which the spacetime coordinates transform from one rest frame to another boosted and/or rotating frame. Likewise, exponentiating the representations of the generators gives the representations of the boost and rotation operators, under which a particle's spinor field transforms.
In the literature, the boost generators and rotation generators are sometimes combined into one generator for Lorentz transformations , an antisymmetric four-dimensional matrix with entries:
and correspondingly, the boost and rotation parameters are collected into another antisymmetric four-dimensional matrix , with entries:
The general Lorentz transformation is then:
with summation over repeated matrix indices α and β. The Λ matrices act on any four vector A = (A0, A1, A2, A3) and mix the time-like and the space-like components, according to:
Transformations of spinor wavefunctions in relativistic quantum mechanics
In relativistic quantum mechanics, wavefunctions are no longer single-component scalar fields, but now 2(2s + 1) component spinor fields, where s is the spin of the particle. The transformations of these functions in spacetime are given below.
Under a proper orthochronous Lorentz transformation in Minkowski space, all one-particle quantum states locally transform under some representation of the Lorentz group:
where is a finite-dimensional representation, in other words a dimensional square matrix, and is thought of as a column vector containing components with the allowed values of :
Real irreducible representations and spin
The irreducible representations of and , in short "irreps", can be used to build to spin representations of the Lorentz group. Defining new operators:
so and are simply complex conjugates of each other, it follows they satisfy the symmetrically formed commutators:
and these are essentially the commutators the orbital and spin angular momentum operators satisfy. Therefore, and form operator algebras analogous to angular momentum; same ladder operators, z-projections, etc., independently of each other as each of their components mutually commute. By the analogy to the spin quantum number, we can introduce positive integers or half integers, , with corresponding sets of values and . The matrices satisfying the above commutation relations are the same as for spins a and b; they have components given by multiplying Kronecker delta values with angular momentum matrix elements:
where in each case the row number m′n′ and column number mn are separated by a comma, and in turn:
and similarly for J(n). The three J(m) matrices are each square matrices, and the three J(n) are each square matrices. The integers or half-integers m and n numerate all the irreducible representations by, in equivalent notations used by authors: , which are each square matrices.
Applying this to particles with spin ;
left-handed -component spinors transform under the real irreps ,
right-handed -component spinors transform under the real irreps ,
taking direct sums symbolized by (see direct sum of matrices for the simpler matrix concept), one obtains the representations under which -component spinors transform: where . These are also real irreps, but as shown above, they split into complex conjugates.
In these cases the refers to any of , , or a full Lorentz transformation .
Relativistic wave equations
In the context of the Dirac equation and Weyl equation, the Weyl spinors satisfying the Weyl equation transform under the simplest irreducible spin representations of the Lorentz group, since the spin quantum number in this case is the smallest non-zero number allowed: 1/2. The 2-component left-handed Weyl spinor transforms under and the 2-component right-handed Weyl spinor transforms under . Dirac spinors satisfying the Dirac equation transform under the representation , the direct sum of the irreps for the Weyl spinors.
The Poincaré group in relativistic quantum mechanics and field theory
Space translations, time translations, rotations, and boosts, all taken together, constitute the Poincaré group. The group elements are the three rotation matrices and three boost matrices (as in the Lorentz group), and one for time translations and three for space translations in spacetime. There is a generator for each. Therefore, the Poincaré group is 10-dimensional.
In special relativity, space and time can be collected into a four-position vector , and in parallel so can energy and momentum which combine into a four-momentum vector . With relativistic quantum mechanics in mind, the time duration and spatial displacement parameters (four in total, one for time and three for space) combine into a spacetime displacement , and the energy and momentum operators are inserted in the four-momentum to obtain a four-momentum operator,
which are the generators of spacetime translations (four in total, one time and three space):
There are commutation relations between the components four-momentum P (generators of spacetime translations), and angular momentum M (generators of Lorentz transformations), that define the Poincaré algebra:
where η is the Minkowski metric tensor. (It is common to drop any hats for the four-momentum operators in the commutation relations). These equations are an expression of the fundamental properties of space and time as far as they are known today. They have a classical counterpart where the commutators are replaced by Poisson brackets.
To describe spin in relativistic quantum mechanics, the Pauli–Lubanski pseudovector
a Casimir operator, is the constant spin contribution to the total angular momentum, and there are commutation relations between P and W and between M and W:
Invariants constructed from W, instances of Casimir invariants can be used to classify irreducible representations of the Lorentz group.
Symmetries in quantum field theory and particle physics
Unitary groups in quantum field theory
Group theory is an abstract way of mathematically analyzing symmetries. Unitary operators are paramount to quantum theory, so unitary groups are important in particle physics. The group of N dimensional unitary square matrices is denoted U(N). Unitary operators preserve inner products which means probabilities are also preserved, so the quantum mechanics of the system is invariant under unitary transformations. Let be a unitary operator, so the inverse is the Hermitian adjoint , which commutes with the Hamiltonian:
then the observable corresponding to the operator is conserved, and the Hamiltonian is invariant under the transformation .
Since the predictions of quantum mechanics should be invariant under the action of a group, physicists look for unitary transformations to represent the group.
Important subgroups of each U(N) are those unitary matrices which have unit determinant (or are "unimodular"): these are called the special unitary groups and are denoted SU(N).
U(1)
The simplest unitary group is U(1), which is just the complex numbers of modulus 1. This one-dimensional matrix entry is of the form:
in which θ is the parameter of the group, and the group is Abelian since one-dimensional matrices always commute under matrix multiplication. Lagrangians in quantum field theory for complex scalar fields are often invariant under U(1) transformations. If there is a quantum number a associated with the U(1) symmetry, for example baryon and the three lepton numbers in electromagnetic interactions, we have:
U(2) and SU(2)
The general form of an element of a U(2) element is parametrized by two complex numbers a and b:
and for SU(2), the determinant is restricted to 1:
In group theoretic language, the Pauli matrices are the generators of the special unitary group in two dimensions, denoted SU(2). Their commutation relation is the same as for orbital angular momentum, aside from a factor of 2:
A group element of SU(2) can be written:
where σj is a Pauli matrix, and the group parameters are the angles turned through about an axis.
The two-dimensional isotropic quantum harmonic oscillator has symmetry group SU(2), while the symmetry algebra of the rational anisotropic oscillator is a nonlinear extension of u(2).
U(3) and SU(3)
The eight Gell-Mann matrices (see article for them and the structure constants) are important for quantum chromodynamics. They originally arose in the theory SU(3) of flavor which is still of practical importance in nuclear physics. They are the generators for the SU(3) group, so an element of SU(3) can be written analogously to an element of SU(2):
where are eight independent parameters. The matrices satisfy the commutator:
where the indices , , take the values 1, 2, 3, ..., 8. The structure constants fabc are totally antisymmetric in all indices analogous to those of SU(2). In the standard colour charge basis (r for red, g for green, b for blue):
the colour states are eigenstates of the and matrices, while the other matrices mix colour states together.
The eight gluons states (8-dimensional column vectors) are simultaneous eigenstates of the adjoint representation of , the 8-dimensional representation acting on its own Lie algebra , for the and matrices. By forming tensor products of representations (the standard representation and its dual) and taking appropriate quotients, protons and neutrons, and other hadrons are eigenstates of various representations of of color. The representations of SU(3) can be described by a "theorem of the highest weight".
Matter and antimatter
In relativistic quantum mechanics, relativistic wave equations predict a remarkable symmetry of nature: that every particle has a corresponding antiparticle. This is mathematically contained in the spinor fields which are the solutions of the relativistic wave equations.
Charge conjugation switches particles and antiparticles. Physical laws and interactions unchanged by this operation have C symmetry.
Discrete spacetime symmetries
Parity mirrors the orientation of the spatial coordinates from left-handed to right-handed. Informally, space is "reflected" into its mirror image. Physical laws and interactions unchanged by this operation have P symmetry.
Time reversal flips the time coordinate, which amounts to time running from future to past. A curious property of time, which space does not have, is that it is unidirectional: particles traveling forwards in time are equivalent to antiparticles traveling back in time. Physical laws and interactions unchanged by this operation have T symmetry.
C, P, T symmetries
CPT theorem
CP violation
PT symmetry
Lorentz violation
Gauge theory
In quantum electrodynamics, the local symmetry group is U(1) and is abelian. In quantum chromodynamics, the local symmetry group is SU(3) and is non-abelian.
The electromagnetic interaction is mediated by photons, which have no electric charge. The electromagnetic tensor has an electromagnetic four-potential field possessing gauge symmetry.
The strong (color) interaction is mediated by gluons, which can have eight color charges. There are eight gluon field strength tensors with corresponding gluon four potentials field, each possessing gauge symmetry.
The strong (color) interaction
Color charge
Analogous to the spin operator, there are color charge operators in terms of the Gell-Mann matrices :
and since color charge is a conserved charge, all color charge operators must commute with the Hamiltonian:
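In symbols (a hedged sketch; the conventional factor of one half and the operator symbols are assumptions here):

```latex
% Colour-charge operators, analogous to spin operators built from Pauli matrices:
\hat{F}_a = \tfrac{1}{2}\, \lambda_a , \qquad a = 1, \dots, 8 .
% Conservation of colour charge means they commute with the Hamiltonian:
[\hat{F}_a, \hat{H}] = 0 .
```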
Isospin
Isospin is conserved in strong interactions.
The weak and electromagnetic interactions
Duality transformation
Magnetic monopoles can be theoretically realized, although current observations and theory are consistent with them existing or not existing. Electric and magnetic charges can effectively be "rotated into one another" by a duality transformation.
Electroweak symmetry
Electroweak symmetry
Electroweak symmetry breaking
Supersymmetry
A Lie superalgebra is an algebra in which (suitable) basis elements either have a commutation relation or have an anticommutation relation. Symmetries have been proposed to the effect that all fermionic particles have bosonic analogues, and vice versa. These symmetries have theoretical appeal in that no extra assumptions (such as the existence of strings) barring symmetries are made. In addition, by assuming supersymmetry, a number of puzzling issues can be resolved. These symmetries, which are represented by Lie superalgebras, have not been confirmed experimentally. It is now believed that they are broken symmetries, if they exist. But it has been speculated that dark matter consists of gravitinos, massive spin-3/2 particles whose supersymmetric partner is the graviton.
Exchange symmetry
The concept of exchange symmetry is derived from a fundamental postulate of quantum statistics, which states that no observable physical quantity should change after exchanging two identical particles. It states that because all observables are proportional to the squared modulus of the wave function for a system of identical particles, the wave function must either remain the same or change sign upon such an exchange. More generally, for a system of n identical particles the wave function must transform as an irreducible representation of the finite symmetric group Sn. It turns out that, according to the spin-statistics theorem, fermion states transform as the antisymmetric irreducible representation of Sn and boson states as the symmetric irreducible representation.
Because the exchange of two identical particles is mathematically equivalent to the rotation of each particle by 180 degrees (and so to the rotation of one particle's frame by 360 degrees), the symmetric nature of the wave function depends on the particle's spin after the rotation operator is applied to it. Integer spin particles do not change the sign of their wave function upon a 360 degree rotation—therefore the sign of the wave function of the entire system does not change. Semi-integer spin particles change the sign of their wave function upon a 360 degree rotation (see more in spin–statistics theorem).
Particles for which the wave function does not change sign upon exchange are called bosons, or particles with a symmetric wave function. The particles for which the wave function of the system changes sign are called fermions, or particles with an antisymmetric wave function.
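In symbols, a hedged statement of the postulate described above (the permutation-operator notation is assumed here):

```latex
% Exchange (permutation) operator acting on a two-particle state:
\hat{P}_{12}\, \psi(\mathbf{r}_1, \mathbf{r}_2) = \psi(\mathbf{r}_2, \mathbf{r}_1) = \pm\, \psi(\mathbf{r}_1, \mathbf{r}_2) ,
% with the + sign for bosons (symmetric wave function)
% and the - sign for fermions (antisymmetric wave function).
```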
Fermions therefore obey different statistics (called Fermi–Dirac statistics) than bosons (which obey Bose–Einstein statistics). One of the consequences of Fermi–Dirac statistics is the exclusion principle for fermions—no two identical fermions can share the same quantum state (in other words, the wave function of two identical fermions in the same state is zero). This in turn results in degeneracy pressure for fermions—the strong resistance of fermions to compression into smaller volume. This resistance gives rise to the “stiffness” or “rigidity” of ordinary atomic matter (as atoms contain electrons which are fermions).
See also
Symmetric group
Spin-statistics theorem
Projective representation
Casimir operator
Pauli–Lubanski pseudovector
Symmetries in general relativity
Renormalization group
Representation of a Lie group
Representation theory of the Poincaré group
Representation theory of the Lorentz group
Footnotes
References
Further reading
External links
The molecular symmetry group @ The University of Western Ontario
(2010) Irreducible Tensor Operators and the Wigner-Eckart Theorem
Lie groups
Continuous Groups, Lie Groups, and Lie Algebras
Pauli exclusion principle
Special relativity
Quantum field theory
Group theory
Theoretical physics | Symmetry in quantum mechanics | [
"Physics",
"Mathematics"
] | 6,244 | [
"Quantum field theory",
"Theoretical physics",
"Quantum mechanics",
"Special relativity",
"Group theory",
"Fields of abstract algebra",
"Theory of relativity",
"Pauli exclusion principle"
] |
39,585,748 | https://en.wikipedia.org/wiki/Synthesis%20of%20bioglass | Bioactive glasses have been synthesized through methods such as conventional melting, quenching, the sol–gel process, flame synthesis, and microwave irradiation. The synthesis of bioglass has been reviewed by various groups, with sol-gel synthesis being one of the most frequently used methods for producing bioglass composites, particularly for tissue engineering applications. Other methods of bioglass synthesis have been developed, such as flame and microwave synthesis, though they are less prevalent in research.
History and methodology
Melt quench synthesis
The first bioactive glass, developed by Larry Hench in 1969, was produced by melting a mixture of related oxide precursors at relatively high temperatures. This original bioactive glass, named Bioglass, was melt-derived with a composition of 46.1 mol% SiO2, 24.4 mol% Na2O, 26.9 mol% CaO, and 2.6 mol% P2O5. The selection of glass composition for specific applications is often based on a comprehensive understanding of how each major component influences the properties of the glass, considering both its final use and its manufacturing process. Despite extensive research over the past 40 years, only a limited number of glass compositions have been approved for clinical use. Among these, the two melt-derived compositions approved by the U.S. Food and Drug Administration (FDA)—45S5 and S53P4—consist of four oxides: SiO2, Na2O, CaO, and P2O5. In general, a large number of elements can be dissolved in glasses. The effect of Al2O3, B2O3, Fe2O3, MgO, SrO, BaO, ZnO, Li2O, K2O, CaF2 and TiO2 on the in vitro or in vivo properties of certain compositions of bioactive glasses has been reported. However, the influence of composition on the properties and compatibility of bioactive and biodegradable glasses is not fully understood.
Scaffolds fabricated by the melt quench technique have much less porosity, which causes issues with healing and tissue integration during in-vivo testing.
Sol–gel process
The sol–gel process has a long history in the synthesis of silicate systems and other oxides and has become a widely researched field with significant technological relevance. This process is used for the fabrication of thin films, coatings, nanoparticles, and fibers. Sol-gel processing technology at low temperatures, an alternative to traditional melt processing of glasses, involves the synthesis of a solution (sol), typically composed of metal-organic and metal salt precursors. This is followed by the formation of a gel through chemical reaction or aggregation, and finally, thermal treatment for drying, organic removal, and sometimes crystallization and cooling treatment. The synthesis of specific silicate bioactive glasses using the sol–gel technique at low temperatures, employing metal alkoxides as precursors, was demonstrated in 1991 by Li et al. Typical precursors for bioactive glass synthesis include tetraethyl orthosilicate, calcium nitrate and triethyl phosphate. Following hydrolysis and poly-condensation reactions, a gel is formed, which is then calcined at 600–700°C to form the glass. Sol–gel derived products, such as thin films or particles, are highly porous and exhibit a high specific surface area. Recent research by Hong et al. has focused on fabricating bioactive silicate glass nanoparticles through a combination of the sol–gel route and the co-precipitation method. In this process, the mixture of precursors is hydrolyzed in an acidic environment, condensed in an alkaline condition, and then subjected to freeze-drying. The morphology and size of bioactive glass nanoparticles can be tailored by varying the production conditions and the feeding ratio of reagents.
Different ions can be incorporated into bioactive glasses, including zinc, magnesium, zirconium, titanium, boron and silver, to enhance functionality and bioactivity. However, synthesizing bioactive glasses at the nanoscale with these ions can be challenging. Recently, Delben et al. developed sol–gel-derived bioactive glass doped with silver, reporting that the Si–O–Si bond number increased with higher silver concentrations, resulting in structural densification. It was also observed that quartz and metallic silver crystallization increased with higher silver content, while hydroxyapatite crystallization decreased.
The sol–gel technique is widely regarded for its versatility in synthesizing inorganic materials and has proven suitable for producing various bioactive glasses. However, it is limited in the range of compositions that can be produced. Residual water or solvent content may complicate its application in biomedical fields, necessitating high-temperature calcination to eliminate organic remnants. Additionally, sol–gel processing is time-consuming and, being a batch process, can result in batch-to-batch variations.
Other methods
Beginning in 2006, researchers have produced alternate methods of synthesizing bioglass; these methods include flame synthesis and microwave synthesis. Flame synthesis works by baking the powders directly in a flame reactor. Microwave synthesis is a rapid and low-cost method where precursors are dissolved in water, transferred to an ultrasonic bath, and then irradiated.
References
Biomaterials
Glass chemistry
Chemical synthesis | Synthesis of bioglass | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 1,103 | [
"Biomaterials",
"Glass engineering and science",
"Glass chemistry",
"Materials",
"Chemical synthesis",
"nan",
"Matter",
"Medical technology"
] |
52,478,131 | https://en.wikipedia.org/wiki/Rayleigh%27s%20equation%20%28fluid%20dynamics%29 | In fluid dynamics, Rayleigh's equation or Rayleigh stability equation is a linear ordinary differential equation to study the hydrodynamic stability of a parallel, incompressible and inviscid shear flow. The equation is:
with the flow velocity of the steady base flow whose stability is to be studied and is the cross-stream direction (i.e. perpendicular to the flow direction). Further is the complex valued amplitude of the infinitesimal streamfunction perturbations applied to the base flow, is the wavenumber of the perturbations and is the phase speed with which the perturbations propagate in the flow direction. The prime denotes differentiation with respect to
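In the notation just described (with U(y) the base-flow velocity, φ(y) the streamfunction amplitude, k the wavenumber and c the phase speed; these symbol names are assumed here), the equation reads, as a hedged restatement of the standard form:

```latex
\left( U - c \right) \left( \frac{d^{2}\varphi}{d y^{2}} - k^{2} \varphi \right) - \frac{d^{2} U}{d y^{2}}\, \varphi = 0 .
```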
Background
The equation is named after Lord Rayleigh, who introduced it in 1880. The Orr–Sommerfeld equation – introduced later, for the study of stability of parallel viscous flow – reduces to Rayleigh's equation when the viscosity is zero.
Rayleigh's equation, together with appropriate boundary conditions, most often poses an eigenvalue problem. For a given (real-valued) wavenumber and mean flow velocity, the eigenvalues are the phase speeds and the eigenfunctions are the associated streamfunction amplitudes. In general, the eigenvalues form a continuous spectrum. In certain cases there may further be a discrete spectrum of complex conjugate pairs of eigenvalues. Since the wavenumber occurs only as a square in Rayleigh's equation, a solution (i.e. an eigenvalue and eigenfunction pair) for a given wavenumber is also a solution for the wavenumber of opposite sign.
Rayleigh's equation only concerns two-dimensional perturbations to the flow. From Squire's theorem it follows that the two-dimensional perturbations are less stable than three-dimensional perturbations.
If a real-valued phase speed lies between the minimum and the maximum of the base-flow velocity, the problem has so-called critical layers near the positions where the flow velocity equals the phase speed. At the critical layers Rayleigh's equation becomes singular. These were first studied by Lord Kelvin, also in 1880. His solution gives rise to a so-called cat's eye pattern of streamlines near the critical layer, when observed in a frame of reference moving with the phase speed.
Derivation
Consider a parallel shear flow in the direction, which varies only in the cross-flow direction The stability of the flow is studied by adding small perturbations to the flow velocity and in the and directions, respectively. The flow is described using the incompressible Euler equations, which become after linearization – using velocity components and
with the partial derivative operator with respect to time, and similarly and with respect to and The pressure fluctuations ensure that the continuity equation is fulfilled. The fluid density is denoted as and is a constant in the present analysis. The prime denotes differentiation of with respect to its argument
The flow oscillations and are described using a streamfunction ensuring that the continuity equation is satisfied:
Taking the - and -derivatives of the - and -momentum equation, and thereafter subtracting the two equations, the pressure can be eliminated:
which is essentially the vorticity transport equation, being (minus) the vorticity.
Next, sinusoidal fluctuations are considered:
with the complex-valued amplitude of the streamfunction oscillations, while is the imaginary unit () and denotes the real part of the expression between the brackets. Using this in the vorticity transport equation, Rayleigh's equation is obtained.
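The normal-mode ansatz described in this paragraph can be written out as follows (standard form; symbol names assumed here):

```latex
% Streamfunction perturbation as a sinusoidal wave travelling in the x-direction:
\psi(x, y, t) = \operatorname{Re}\!\left\{ \varphi(y)\, e^{\, i k ( x - c\, t )} \right\} ,
% with \varphi(y) the complex amplitude, k the wavenumber and c the (possibly complex)
% phase speed. Substituting this into the linearized vorticity transport equation
% recovers Rayleigh's equation given earlier.
```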
The boundary conditions for flat impermeable walls follow from the fact that the streamfunction is a constant at them. So at impermeable walls the streamfunction oscillations are zero, i.e. the amplitude vanishes there. For unbounded flows the common boundary conditions are that the amplitude vanishes far away from the shear layer.
Notes
References
Fluid dynamics
Equations of fluid dynamics | Rayleigh's equation (fluid dynamics) | [
"Physics",
"Chemistry",
"Engineering"
] | 778 | [
"Equations of fluid dynamics",
"Equations of physics",
"Chemical engineering",
"Piping",
"Fluid dynamics"
] |
52,480,271 | https://en.wikipedia.org/wiki/OpenLB | OpenLB is an object-oriented implementation of the lattice Boltzmann methods (LBM). It is the first implementation of a generic platform for LBM programming, which is shared with the open source community (GPLv2).
The code is written in C++ and is used by application programmers as well as developers, with the ability to implement custom models.
OpenLB supports complex data structures that allow simulations in complex geometries and parallel execution using MPI, OpenMP and CUDA on high-performance computers.
The source code uses the concepts of interfaces and templates, so that efficient, direct and intuitive implementations of the LBM become possible.
The efficiency and scalability have been checked and proven by code reviews.
A user manual and source code documentation generated by Doxygen are available on the project page.
Functions
OpenLB is constantly being developed. To date, the following features are implemented:
Computational fluid dynamics in complex geometry
Automatic generation of a grid
Turbulent flow
Multi-component flow
Thermal flow
Light radiation
Topology optimizing
Particle flow (Euler–Euler and Euler–Lagrange method)
Automated grid generation
Automated grid generation is one of the great advantages of OpenLB over other CFD software packages. The main advantages are listed below:
Use of geometries in the STL file format or geometrically primitive forms (e.g. ball, cylinder, cone) and their union, intersection and difference
Very fast voxelization: 600³ ~ 1 minute
Handling non-watertight surfaces
Memory-friendly using octrees
Load distribution for parallel execution with MPI, OpenMP and CUDA.
The automatic grid generation can start from either an STL file or primitive geometries. For the geometry, a uniform and rectangular grid is created which encloses the entire space of the geometry. The superfluous grid cells are then removed and the remaining cuboids are shrunk to fit the given geometry. Finally, the grid is distributed to different threads or processors for the parallel execution of the simulation. The boundary conditions and start values can be set using material numbers.
Literature
Krause, Mathias J. and Latt, Jonas and Heuveline, Vincent. "Towards a hybrid parallelization of lattice Boltzmann methods." Computers & Mathematics with Applications 58.5 (2009): 1071–1080.
Heuveline, Vincent, and Mathias J. Krause. "OpenLB: towards an efficient parallel open source library for lattice Boltzmann fluid flow simulations." International Workshop on State-of-the-Art in Scientific and Parallel Computing. PARA. Vol. 9. 2010.
Krause, Mathias J., Thomas Gengenbach, and Vincent Heuveline. "Hybrid parallel simulations of fluid flows in complex geometries: Application to the human lungs." European Conference on Parallel Processing. Springer Berlin Heidelberg, 2010.
Krause, Mathias J. "Fluid flow simulation and optimisation with lattice Boltzmann methods on high performance computers: application to the human respiratory system." Karlsruhe Institute of Technology, KIT (2010).
Trunk, Robin, et al. "Inertial dilute particulate fluid flow simulations with an Euler–Euler lattice Boltzmann method." Journal of Computational Science (2016).
Mink, Albert, et al. "A 3D Lattice Boltzmann method for light simulation in participating media." Journal of Computational Science (2016).
Awards
Winner Mimics Innovation Award (2011)
Honorary certificate in the Group Humanitarian Impact, "Itanium® Solutions Alliance Innovation Awards" (2009)
Finalist in the Group Humanitarian Impact Innovation, "Itanium® Solutions Alliance Innovation Awards" (2007)
References
External links
Official website
Dynamic Cross Flow Filtration with OpenLB (YouTube Video)
OpenLB Trailer (YouTube Video)
C++ software
Computational fluid dynamics
Computer-aided engineering software for Linux
Continuum mechanics
Free science software
Free software programmed in C++
Open Source computer aided engineering applications
Scientific simulation software | OpenLB | [
"Physics",
"Chemistry"
] | 815 | [
"Continuum mechanics",
"Computational fluid dynamics",
"Classical mechanics",
"Computational physics",
"Fluid dynamics"
] |
36,713,242 | https://en.wikipedia.org/wiki/Partial%20differential%20algebraic%20equation | In mathematics a partial differential algebraic equation (PDAE) set is an incomplete system of partial differential equations that is closed with a set of algebraic equations.
Definition
A general PDAE is defined as:
where:
F is a set of arbitrary functions;
x is a set of independent variables;
y is a set of dependent variables for which partial derivatives are defined; and
z is a set of dependent variables for which no partial derivatives are defined.
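A hedged way of writing the definition formalized by the list above (one common formulation; the exact form used by the source is not shown):

```latex
\mathbf{F}\!\left( \mathbf{x},\ \mathbf{y},\ \frac{\partial \mathbf{y}}{\partial \mathbf{x}},\ \frac{\partial^{2} \mathbf{y}}{\partial \mathbf{x}^{2}},\ \dots,\ \mathbf{z} \right) = \mathbf{0} .
```

Partial derivatives of the variables collected in y appear in the relation, while the variables in z enter only algebraically.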
The relationship between a PDAE and a partial differential equation (PDE) is analogous to the relationship between an ordinary differential equation (ODE) and a differential algebraic equation (DAE).
PDAEs of this general form are challenging to solve. Simplified forms are studied in more detail in the literature. Even as recently as 2000, the term "PDAE" was still treated as unfamiliar by those in related fields.
Solution methods
Semi-discretization is a common method for solving PDAEs whose independent variables are those of time and space, and has been used for decades. This method involves removing the spatial variables using a discretization method, such as the finite volume method, and incorporating the resulting linear equations as part of the algebraic relations. This reduces the system to a DAE, for which conventional solution methods can be employed.
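As an illustration of the semi-discretization idea (method of lines), the following minimal Python sketch turns a toy PDAE, a 1-D heat equation coupled to an algebraic constraint, into a DAE by replacing the spatial derivative with finite differences. The specific toy problem and function names are illustrative assumptions, not taken from the source.

```python
import numpy as np

# Toy PDAE on x in [0, 1]:
#   du/dt = d^2u/dx^2        (differential part, a PDE)
#   z - u**2 = 0             (algebraic part, no derivatives of z)
# Semi-discretization replaces d^2u/dx^2 with a finite-difference stencil,
# leaving a DAE in time:  du_i/dt = (Lu)_i ,  z_i - u_i**2 = 0.

N = 50                     # number of interior grid points
h = 1.0 / (N + 1)          # grid spacing

def dae_residual(t, u, dudt, z):
    """Residual of the semi-discretized system (zero for an exact solution)."""
    lap = np.zeros_like(u)
    lap[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2   # interior Laplacian
    lap[0] = (-2.0 * u[0] + u[1]) / h**2                  # u = 0 at the left wall
    lap[-1] = (u[-2] - 2.0 * u[-1]) / h**2                # u = 0 at the right wall
    res_diff = dudt - lap       # differential equations
    res_alg = z - u**2          # algebraic equations
    return np.concatenate([res_diff, res_alg])

# Example: evaluate the residual for a trial state.
x = np.linspace(h, 1.0 - h, N)
u0 = np.sin(np.pi * x)
print(dae_residual(0.0, u0, np.zeros(N), u0**2)[:3])
```

A conventional DAE integrator (for example an implicit BDF-type solver) would then advance u in time while enforcing the algebraic part simultaneously.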
References
Partial differential equations
Multivariable calculus
Numerical analysis | Partial differential algebraic equation | [
"Mathematics"
] | 268 | [
"Calculus",
"Computational mathematics",
"Mathematical relations",
"Numerical analysis",
"Multivariable calculus",
"Approximations"
] |
36,713,796 | https://en.wikipedia.org/wiki/Mean%20squared%20displacement | In statistical mechanics, the mean squared displacement (MSD, also mean square displacement, average squared displacement, or mean square fluctuation) is a measure of the deviation of the position of a particle with respect to a reference position over time. It is the most common measure of the spatial extent of random motion, and can be thought of as measuring the portion of the system "explored" by the random walker. In the realm of biophysics and environmental engineering, the Mean Squared Displacement is measured over time to determine if a particle is spreading slowly due to diffusion, or if an advective force is also contributing. Another relevant concept, the variance-related diameter (VRD, which is twice the square root of MSD), is also used in studying the transportation and mixing phenomena in the realm of environmental engineering. It prominently appears in the Debye–Waller factor (describing vibrations within the solid state) and in the Langevin equation (describing diffusion of a Brownian particle).
The MSD at time is defined as an ensemble average:
where N is the number of particles to be averaged, vector is the reference position of the -th particle, and vector is the position of the -th particle at time t.
Derivation of the MSD for a Brownian particle in 1D
The probability density function (PDF) for a particle in one dimension is found by solving the one-dimensional diffusion equation. (This equation states that the position probability density diffuses out over time - this is the method used by Einstein to describe a Brownian particle. Another method to describe the motion of a Brownian particle was described by Langevin, now known for its namesake as the Langevin equation.)
given the initial condition ; where is the position of the particle at some given time, is the tagged particle's initial position, and is the diffusion constant with SI units of m²/s (an indirect measure of the particle's speed). The bar in the argument of the instantaneous probability refers to the conditional probability. The diffusion equation states that the speed at which the probability for finding the particle at is position dependent.
The differential equation above takes the form of 1D heat equation. The one-dimensional PDF below is the Green's function of heat equation (also known as Heat kernel in mathematics):
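The equations referred to in this passage are the standard ones; a hedged restatement (the symbols D for the diffusion constant and x0 for the initial position are assumed here):

```latex
% One-dimensional diffusion (heat) equation for the probability density:
\frac{\partial P(x, t \mid x_0)}{\partial t} = D\, \frac{\partial^{2} P(x, t \mid x_0)}{\partial x^{2}} ,
\qquad P(x, 0 \mid x_0) = \delta(x - x_0) .
% Its Green's-function solution is a Gaussian whose width grows in time:
P(x, t \mid x_0) = \frac{1}{\sqrt{4 \pi D t}} \exp\!\left( - \frac{(x - x_0)^{2}}{4 D t} \right) ,
% so any measure of its width, such as the FWHM, grows proportionally to \sqrt{t} .
```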
This states that the probability of finding the particle at is Gaussian, and the width of the Gaussian is time dependent. More specifically the full width at half maximum (FWHM)(technically/pedantically, this is actually the Full duration at half maximum as the independent variable is time) scales like
Using the PDF one is able to derive the average of a given function, , at time :
where the average is taken over all space (or any applicable variable).
The Mean squared displacement is defined as
expanding out the ensemble average
dropping the explicit time dependence notation for clarity. To find the MSD, one can take one of two paths: one can explicitly calculate and , then plug the result back into the definition of the MSD; or one could find the moment-generating function, an extremely useful, and general function when dealing with probability densities. The moment-generating function describes the moment of the PDF. The first moment of the displacement PDF shown above is simply the mean: . The second moment is given as .
So then, to find the moment-generating function it is convenient to introduce the characteristic function:
one can expand out the exponential in the above equation to give
By taking the natural log of the characteristic function, a new function is produced, the cumulant generating function,
where is the cumulant of . The first two cumulants are related to the first two moments, , via and where the second cumulant is the so-called variance, . With these definitions accounted for one can investigate the moments of the Brownian particle PDF,
by completing the square and knowing the total area under a Gaussian one arrives at
Taking the natural log, and comparing powers of to the cumulant generating function, the first cumulant is
which is as expected, namely that the mean position is the Gaussian centre. The second cumulant is
the factor 2 comes from the factorial factor in the denominator of the cumulant generating function. From this, the second moment is calculated,
Plugging the results for the first and second moments back, one finds the MSD,
Derivation for n dimensions
For a Brownian particle in higher-dimension Euclidean space, its position is represented by a vector , where the Cartesian coordinates are statistically independent.
The n-variable probability distribution function is the product of the fundamental solutions in each variable; i.e.,
The Mean squared displacement is defined as
Since all the coordinates are independent, their deviation from the reference position is also independent. Therefore,
For each coordinate, following the same derivation as in 1D scenario above, one obtains the MSD in that dimension as . Hence, the final result of mean squared displacement in n-dimensional Brownian motion is:
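The end results of the two derivations above are standard and can be summarized as follows (a hedged restatement; notation assumed here):

```latex
% First and second moments of the 1-D Gaussian propagator:
\langle x(t) \rangle = x_0 , \qquad \langle x^{2}(t) \rangle = x_0^{2} + 2 D t ,
% hence the one-dimensional mean squared displacement
\mathrm{MSD}(t) \equiv \big\langle (x(t) - x_0)^{2} \big\rangle = 2 D t ,
% and, with n statistically independent Cartesian coordinates,
\mathrm{MSD}_{n\mathrm{D}}(t) = 2 n D t .
```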
Definition of MSD for time lags
In the measurements of single particle tracking (SPT), displacements can be defined for different time intervals between positions (also called time lags or lag times). SPT yields the trajectory , representing a particle undergoing two-dimensional diffusion.
Assuming that the trajectory of a single particle is measured at a number of time points, there are many non-trivial forward displacements (trivial zero-lag displacements are not considered) which correspond to a set of time intervals (or time lags). Hence, there are many distinct displacements for small time lags, and very few for large time lags, and the MSD can be defined as an average quantity over time lags:
Similarly, for continuous time series :
It is clear that choosing large values for these quantities can improve statistical performance. This technique allows us to estimate the behavior of whole ensembles by measuring just a single trajectory, but note that it is only valid for systems with ergodicity, like classical Brownian motion (BM), fractional Brownian motion (fBM), and the continuous-time random walk (CTRW) with a limited distribution of waiting times; in these cases the time-averaged MSD (defined above) equals the ensemble-averaged MSD. However, for non-ergodic systems, like the CTRW with unlimited waiting times, the waiting time can go to infinity at some point; in this case the time-averaged MSD strongly depends on the measurement time and no longer equals the ensemble average, so, in order to get better asymptotics, one introduces the ensemble-averaged time MSD:
Here denotes averaging over N ensembles.
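A minimal Python sketch of the time-lag averaging described above (function and variable names are illustrative; the trajectory is assumed to be sampled at equal time steps):

```python
import numpy as np

def time_averaged_msd(traj, max_lag):
    """Time-averaged MSD of one trajectory for lags 1 .. max_lag.

    traj : array of shape (T, d), positions sampled at equal time steps.
    Returns msd[lag-1] = <|r(t+lag) - r(t)|^2>, averaged over all start times t.
    """
    traj = np.asarray(traj, dtype=float)
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = traj[lag:] - traj[:-lag]              # all forward displacements for this lag
        msd[lag - 1] = np.mean(np.sum(disp**2, axis=1))
    return msd

# Example: a 2-D random walk; for Brownian motion the MSD grows linearly with the lag.
rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=(10_000, 2)), axis=0)
print(time_averaged_msd(walk, max_lag=5))
```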
Also, one can easily derive the autocorrelation function from the MSD:
where is the so-called autocorrelation function for the position of the particles.
MSD in experiments
Experimental methods to determine MSDs include neutron scattering and photon correlation spectroscopy.
The linear relationship between the MSD and time t allows for graphical methods to determine the diffusivity constant D. This is especially useful for rough calculations of the diffusivity in environmental systems. In some atmospheric dispersion models, the relationship between MSD and time t is not linear. Instead, a series of power laws empirically representing the variation of the square root of MSD versus downwind distance are commonly used in studying the dispersion phenomenon.
See also
Root-mean-square deviation of atomic positions: the average is taken over a group of particles at a single time, where the MSD is taken for a single particle over an interval of time
Mean squared error
References
Statistical mechanics
Statistical deviation and dispersion
Motion (physics) | Mean squared displacement | [
"Physics"
] | 1,552 | [
"Physical phenomena",
"Motion (physics)",
"Mechanics",
"Space",
"Spacetime",
"Statistical mechanics"
] |
36,714,938 | https://en.wikipedia.org/wiki/Lagrangian%20particle%20tracking | In experimental fluid mechanics, Lagrangian Particle Tracking refers to the process of determining trajectories of small neutrally buoyant particles (flow tracers) that are freely suspended within a turbulent flow field. These are usually obtained by 3-D Particle Tracking Velocimetry. A collection of such particle trajectories can be used for analyzing the Lagrangian dynamics of the fluid motion, for performing Lagrangian statistics of various flow quantities etc.
In computational fluid dynamics, Lagrangian particle tracking (or, in short, the LPT method) is a numerical technique for the simulated tracking of particle paths in a Lagrangian frame within an Eulerian carrier phase. It is also commonly referred to as Discrete Particle Simulation (DPS). Some simulation cases for which this method is applicable are sprays, small bubbles and dust particles; the method is especially suitable for dilute multiphase flows with large Stokes number.
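As a minimal illustration of the Lagrangian point of view described above, the following Python sketch advects tracer particles through a prescribed (Eulerian) velocity field with explicit Euler steps. The velocity field and function names are illustrative assumptions; real LPT codes add interpolation from the CFD grid, drag laws and other forces.

```python
import numpy as np

def velocity(x, t):
    """Prescribed 2-D velocity field u(x, t); here a simple steady vortex."""
    u = -x[..., 1]
    v = x[..., 0]
    return np.stack([u, v], axis=-1)

def track(positions, t_end, dt):
    """Integrate dx/dt = u(x, t) for each particle with forward Euler steps."""
    x = np.array(positions, dtype=float)
    path = [x.copy()]
    t = 0.0
    while t < t_end:
        x += dt * velocity(x, t)
        t += dt
        path.append(x.copy())
    return np.array(path)   # shape: (n_steps + 1, n_particles, 2)

# Example: two tracers released in the vortex.
trajectories = track([[1.0, 0.0], [0.5, 0.5]], t_end=1.0, dt=0.01)
print(trajectories[-1])
```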
See also
Lagrangian and Eulerian specification of the flow field
References
fluid dynamics | Lagrangian particle tracking | [
"Chemistry",
"Engineering"
] | 209 | [
"Piping",
"Chemical engineering",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
36,718,435 | https://en.wikipedia.org/wiki/Netgear%20Switch%20Discovery%20Protocol | Netgear Switch Discovery Protocol (NSDP) is a management protocol for several network device families, designed by Netgear.
Message structure
Common message header
Message body record structure
Message body records are type–length–value (TLV) structures. The type field may take one of several defined values (the list of known types is incomplete).
Protocol flow examples
Network device discovery (MAC address and device model discovery):
Host with MAC=XX:XX:XX:XX:XX:XX sends a packet from UDP port 63321 or 63323 to the broadcast IP address 255.255.255.255, UDP port 63322 or 63324
Header @0x00000000 0x01 0x01 0x000000000000 0xXXXXXXXXXXXX 0x000000000000 0x0000 0x0001 0x4E534450 0x00000000
Body @0x00000020 0x0001 0x0000 0x0004 0x0000
Marker @0x00000028 0xFFFF0000
Each device responds with a message like
Header @0x00000000 0x01 0x02 0x000000000000 0xXXXXXXXXXXXX 0xYYYYYYYYYYYY 0x0000 0x0001 0x4E534450 0x00000000
Body @0x00000020 0x0001 0x0028 0x47 0x53 0x31 0x30 0x35 0x45 0x20*0x22 0x0004 0x0006 0xYYYYYYYYYYYY
Marker @0x00000058 0xFFFF0000
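A minimal Python sketch that reproduces the discovery request frame shown above and listens for replies. The byte layout simply mirrors the hex example in this section; field meanings beyond what the example shows, and behaviour across device models and firmware versions, are assumptions rather than a definitive protocol implementation.

```python
import socket

def build_discovery_packet(host_mac: bytes) -> bytes:
    """Build an NSDP discovery request mirroring the example frame above."""
    header = bytes([0x01, 0x01])             # first two header bytes of the request in the example
    header += bytes(6)                       # six zero bytes
    header += host_mac                       # MAC address of the querying host
    header += bytes(6)                       # device MAC not yet known, so zeros
    header += bytes([0x00, 0x00])            # 0x0000 field from the example
    header += bytes([0x00, 0x01])            # 0x0001 field (possibly a sequence number)
    header += b"NSDP"                        # protocol signature 0x4E534450
    header += bytes(4)                       # trailing zero padding
    body = bytes([0x00, 0x01, 0x00, 0x00])   # TLV 0x0001 (device model), zero length
    body += bytes([0x00, 0x04, 0x00, 0x00])  # TLV 0x0004 (MAC address), zero length
    marker = bytes([0xFF, 0xFF, 0x00, 0x00]) # end-of-message marker
    return header + body + marker

# Send to the broadcast address on the port pair used by most devices
# (63321 -> 63322; some devices such as the FS726TP use 63323/63324 instead).
host_mac = bytes.fromhex("001122334455")     # placeholder for the local interface MAC
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", 63321))
sock.sendto(build_discovery_packet(host_mac), ("255.255.255.255", 63322))
sock.settimeout(2.0)
try:
    data, addr = sock.recvfrom(4096)
    print("reply from", addr, data.hex())
except socket.timeout:
    print("no NSDP devices answered")
```

Replies can then be parsed by walking the TLV records in the body until the 0xFFFF0000 end marker is reached.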
Device support for protocol
GS105E ProSAFE Plus
GS108E ProSAFE Plus
GS724T
GS748T
FS116E (IP-network description and Firmware version TLVs are not supported)
FS726TP (uses 63323 and 63324 UDP-ports for interconnection)
Device firmware update
Device firmware updates are performed with the TFTP protocol, but an NSDP request must be sent first to turn on the device's TFTP server.
See also
IP
UDP
MAC
Netgear
References
External links
NETGEAR official site
Open-source Perl-written cross-platform toolkit for NSDP managed devices, project site (in Russian)
LinNetx openSource C-written utility for ProsafePlus switches management via NSDP, not operational
ngadmin C-written admin utility; GPLv2 license
ProSafeLinux Remark: sparse information; FreeBSD license
NSDP Protocol Wireshark dissector Remark: GPL license
Nsdtool – a toolset of scripts to detect NETGEAR switches in local networks
NETGEAR firmware update
NSDP
Network management
Device discovery protocols | Netgear Switch Discovery Protocol | [
"Technology",
"Engineering"
] | 631 | [
"Netgear",
"Wireless networking",
"Computer networks engineering",
"Network management"
] |
36,719,170 | https://en.wikipedia.org/wiki/Symmetrizable%20compact%20operator | In mathematics, a symmetrizable compact operator is a compact operator on a Hilbert space that can be composed with a positive operator with trivial kernel to produce a self-adjoint operator. Such operators arose naturally in the work on integral operators of Hilbert, Korn, Lichtenstein and Marty required to solve elliptic boundary value problems on bounded domains in Euclidean space. Between the late 1940s and early 1960s the techniques, previously developed as part of classical potential theory, were abstracted within operator theory by various mathematicians, including M. G. Krein, William T. Reid, Peter Lax and Jean Dieudonné. Fredholm theory already implies that any element of the spectrum is an eigenvalue. The main results assert that the spectral theory of these operators is similar to that of compact self-adjoint operators: any spectral value is real; they form a sequence tending to zero; any generalized eigenvector is an eigenvector; and the eigenvectors span a dense subspace of the Hilbert space.
Discussion
Let H be a Hilbert space. A compact operator K on H is symmetrizable if there is a bounded self-adjoint operator S on H such that S is positive with trivial kernel, i.e. (Sx,x) > 0 for all non-zero x, and SK is self-adjoint:
In many applications S is also compact. The operator S defines a new inner product on H
Let HS be the Hilbert space completion of H with respect to this inner product.
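Written out, a hedged restatement of the relations described above (with the usual notation assumed):

```latex
% New inner product induced by the positive operator S:
(x, y)_S := (S x,\, y), \qquad x, y \in H .
% Self-adjointness of SK makes K symmetric with respect to this inner product:
(K x, y)_S = (S K x, y) = (x, S K y) = (S x, K y) = (x, K y)_S .
```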
The operator K defines a formally self-adjoint operator on the dense subspace H of HS. As Krein (1947) and others noted, the operator has the same operator norm as K. In fact the self-adjointness condition implies
It follows by induction that, if (x,x)S = 1, then
Hence
If K is only compact, Krein gave an argument, invoking Fredholm theory, to show that K defines a compact operator on HS. A shorter argument is available if K belongs to a Schatten class.
When K is a Hilbert–Schmidt operator, the argument proceeds as follows. Let R be the unique positive square root of S and for ε > 0 define
These are self-adjoint Hilbert–Schmidt operators on H which are uniformly bounded in the Hilbert–Schmidt norm:
Since the Hilbert–Schmidt operators form a Hilbert space, there is a subsequence converging weakly to a self-adjoint Hilbert–Schmidt operator A. Since Aε R tends to RK in Hilbert–Schmidt norm, it follows that
Thus if U is the unitary induced by R between HS and H, then the operator KS induced by the restriction of K corresponds to A on H:
The operators K − λI and K* − λI are Fredholm operators of index 0 for λ ≠ 0, so any spectral value of K or K* is an eigenvalue and the corresponding eigenspaces are finite-dimensional. On the other hand, by the spectral theorem for compact operators, H is the orthogonal direct sum of the eigenspaces of A, all finite-dimensional except possibly for the 0 eigenspace. Since RA = K* R, the image under R of the λ eigenspace of A lies in the λ eigenspace of K*.
Similarly R carries the λ eigenspace of K into the λ eigenspace of A. It follows that the eigenvalues of K and K* are all real. Since R is injective and has dense range it induces isomorphisms between the λ eigenspaces of A, K and K*. The same is true for generalized eigenvalues since powers of K − λI and K* − λI are also Fredholm of index 0. Since any generalized λ eigenvector of A is already an eigenvector, the same is true for K and K*. For λ = 0, this argument shows that K^m x = 0 implies Kx = 0.
Finally the eigenspaces of K* span a dense subspace of H, since it contains the image under R of the corresponding space for A. The above arguments also imply that the eigenvectors for non-zero eigenvalues of KS in HS all lie in the subspace H.
Hilbert–Schmidt operators K with non-zero real eigenvalues λn satisfy the following identities proved by :
Here tr is the trace on trace-class operators and det is the Fredholm determinant. For symmetrizable Hilbert–Schmidt operators the result states that the trace or determinant for K or K* is equal to the trace or determinant for A.
For symmetrizable operators, the identities for K* can be proved by taking H0 to be the kernel of K* and Hm the finite dimensional eigenspaces for the non-zero eigenvalues λm. Let PN be the orthogonal projection onto the direct sum of Hm with 0 ≤ m ≤ N. This subspace is left invariant by K*.
Although the sum is not orthogonal the restriction PNK*PN of K* is similar by a bounded operator with bounded inverse to the diagonal operator on the orthogonal direct sum with the same eigenvalues. Thus
Since PNK*PN tends to K* in Hilbert–Schmidt norm, the identities for K* follow by passing to the limit as N tends to infinity.
Notes
References
, Problem 82
Potential theory
Operator theory | Symmetrizable compact operator | [
"Mathematics"
] | 1,142 | [
"Functions and mappings",
"Mathematical relations",
"Mathematical objects",
"Potential theory"
] |
36,722,187 | https://en.wikipedia.org/wiki/Ajka%20Crystal | Ajka Crystal is a Hungarian manufacturer of crystal created in 1878 by Bernard Neumann. The company, one of the biggest in Central Europe, produces unique, handmade pieces of glass art. Ajka Crystal also goes under the name of "The Romanov Collection" in the United States. Ajka Crystal exports 90% of the factory's total production – both in tableware (stemware, tumblers etc...) and in giftware (vases, bowls) – for brands such as Wedgwood, Tiffany's, Rosenthal, Waterford Crystal, Polo Ralph Lauren, Christian Dior, Moser and other high-end French crystal manufacturers.
Ajka Crystal is located in Ajka, Hungary.
References
Glassmaking companies
Manufacturing companies of Hungary
Hungarian brands
Glass trademarks and brands
Veszprém County
1895 establishments in Austria-Hungary | Ajka Crystal | [
"Materials_science",
"Engineering"
] | 172 | [
"Glass engineering and science",
"Glassmaking companies",
"Engineering companies"
] |
36,722,204 | https://en.wikipedia.org/wiki/Quantum%20scar | In quantum mechanics, quantum scarring is a phenomenon where the eigenstates of a classically chaotic quantum system have enhanced probability density around the paths of unstable classical periodic orbits. The instability of the periodic orbit is a decisive point that differentiates quantum scars from the more trivial observation that the probability density is enhanced in the neighborhood of stable periodic orbits. The latter can be understood as a purely classical phenomenon, a manifestation of the Bohr correspondence principle, whereas in the former, quantum interference is essential. As such, scarring is both a visual example of quantum-classical correspondence, and simultaneously an example of a (local) quantum suppression of chaos.
A classically chaotic system is also ergodic, and therefore (almost) all of its trajectories eventually explore evenly the entire accessible phase space. Thus, it would be natural to expect that the eigenstates of the quantum counterpart would fill the quantum phase space in a uniform manner up to random fluctuations in the semiclassical limit. However, scars are a significant correction to this assumption. Scars can therefore be considered as an eigenstate counterpart of how short periodic orbits provide corrections to the universal spectral statistics of the random matrix theory. There are rigorous mathematical theorems on the quantum nature of ergodicity, proving that the expectation value of an operator converges in the semiclassical limit to the corresponding microcanonical classical average. Nonetheless, the quantum ergodicity theorems do not exclude scarring if the quantum phase space volume of the scars gradually vanishes in the semiclassical limit.
On the classical side, there is no direct analogue of scars. On the quantum side, they can be interpreted as an eigenstate analogy to how short periodic orbits correct the universal random matrix theory eigenvalue statistics. Scars correspond to nonergodic states which are permitted by the quantum ergodicity theorems. In particular, scarred states provide a striking visual counterexample to the assumption that the eigenstates of a classically chaotic system would be without structure. In addition to conventional quantum scars, the field of quantum scarring has undergone its renaissance period, sparked by the discoveries of perturbation-induced scars and many-body scars that have subsequently paved the way towards emerging concepts within the field, such as antiscarring and quantum birthmarks.
Scar theory
The existence of scarred states is rather unexpected based on the Gutzwiller trace formula, which connects the quantum mechanical density of states to the periodic orbits in the corresponding classical system. According to the trace formula, a quantum spectrum is not a result of a trace over all the positions, but it is determined by a trace over all the periodic orbits only. Furthermore, every periodic orbit contributes to an eigenvalue, although not exactly equally. It is even more unlikely that a particular periodic orbit would stand out in contributing to a particular eigenstate in a fully chaotic system, since altogether periodic orbits occupy a zero-volume portion of the total phase space volume. Hence, nothing seems to imply that any particular periodic orbit for a given eigenvalue could have a significant role compared to other periodic orbits. Nonetheless, quantum scarring proves this assumption to be wrong. The scarring was first seen in 1983 by S. W. McDonald in his thesis on the stadium billiard as an interesting numerical observation. The scars did not show up well in his figures, which were fairly crude "waterfall" plots. This finding was not thoroughly reported in the article discussion about the wave functions and nearest-neighbor level spacing spectra for the stadium billiard. A year later, Eric J. Heller published the first examples of scarred eigenfunctions together with a theoretical explanation for their existence. The results revealed large footprints of individual periodic orbits influencing some eigenstates of the classically chaotic Bunimovich stadium, which Heller named scars.
A wave packet analysis was key in proving the existence of the scars, and it is still a valuable tool to understand them. In the original work of Heller, the quantum spectrum is extracted by propagating a Gaussian wave packet along a periodic orbit. Nowadays, this seminal idea is known as the linear theory of scarring. Scars stand out to the eye in some eigenstates of classically chaotic systems, but are quantified by projection of the eigenstates onto certain test states, often Gaussians, having both average position and average momentum along the periodic orbit. These test states give a provably structured spectrum that reveals the necessity of scars. However, there is no universal measure on scarring; the exact relationship of the stability exponent to the scarring strength is a matter of definition. Nonetheless, there is a rule of thumb: quantum scarring is significant when , and the strength scales as . Thus, strong quantum scars are, in general, associated with periodic orbits that are moderately unstable and relatively short. The theory predicts the scar enhancement along a classical periodic orbit, but it cannot precisely pinpoint which particular states are scarred and how much. Rather, it can only be stated that some states are scarred within certain energy zones, and at least by a certain degree.
The linear scarring theory outlined above has been later extended to include nonlinear effects taking place after the wave packet departs the linear dynamics domain around the periodic orbit. At long times, the nonlinear effect can assist the scarring. This stems from nonlinear recurrences associated with homoclinic orbits. A further insight on scarring was acquired with a real-space approach by E. B. Bogomolny and a phase-space alternative by Michael V. Berry complementing the wave-packet and Hussimi space methods utilized by Heller and L. Kaplan.
As well as there being no universal measure for the level of scarring, there is also no generally accepted definition of it. Originally, it was stated that certain unstable periodic orbits permanently scar some quantum eigenfunctions in the semiclassical limit, in the sense that extra density surrounds the region of the periodic orbit. However, a more formal definition of scarring would be the following: a quantum eigenstate of a classically chaotic system is scarred by a periodic orbit if its density on the classical invariant manifolds near, and all along, that periodic orbit is systematically enhanced above the classical, statistically expected density along that orbit.
Most of the research on quantum scars has been restricted to non-relativistic quantum systems described by the Schrödinger equation, where the dependence of the particle energy on momentum is quadratic. However, scarring can occur in relativistic quantum systems described by the Dirac equation, where the energy-momentum relation is linear instead. Heuristically, these relativistic scars are a consequence of the fact that both spinor components satisfy the Helmholtz equation, in analogy to the time-independent Schrödinger equation. Therefore, relativistic scars have the same origin as the conventional scarring introduced by E. J. Heller. Nevertheless, there is a difference in terms of the recurrence with respect to energy variation. Furthermore, it was shown that the scarred states can lead to strong conductance fluctuations in the corresponding open quantum dots via the mechanism of resonant transmission.
The first experimental confirmations of scars were obtained in microwave billiards in the early 1990s. Further experimental evidence for scarring was later delivered by observations in, e.g., quantum wells, optical cavities and the hydrogen atom. In the early 2000s, the first observations were achieved in an elliptical billiard. Many classical trajectories converge in this system and lead to pronounced scarring at the foci, commonly called quantum mirages. In addition, recent numerical results indicated the existence of quantum scars in ultracold atomic gases. Aside from these analog scars in classical wave experiments, the verification of this phenomenon had remained elusive in true quantum systems — until the recent achievement of using scanning tunneling microscopy to directly visualize (relativistic) quantum scars in a stadium-shaped, graphene-based quantum dot.
In addition to the scarring described above, there are several similar phenomena, connected either by theory or by appearance. First of all, when scars are visually identified, some of the states may be reminiscent of classical "bouncing-ball" motion, which is excluded from quantum scars and placed in its own category. For example, a stadium billiard supports these highly nonergodic eigenstates, which reflect trapped bouncing motion between the straight walls. It has been shown that the bouncing states persist in the semiclassical limit, but at the same time this result suggests a diminishing percentage of all the states, in agreement with the quantum ergodicity theorems of Alexander Schnirelman, Yves Colin de Verdière, and Steven Zelditch. Secondly, scarring should not be confused with statistical fluctuations. Similar structures of enhanced probability density occur even in random superpositions of plane waves, in the sense of the Berry conjecture. Furthermore, there is a genre of scars caused not by actual periodic orbits but by their remnants, known as ghosts. These refer to periodic orbits that are found in a nearby system, in the sense of some tunable, external system parameter. Scarring of this kind has been associated with almost-periodic orbits. Another subclass of ghosts stems from complex periodic orbits which exist in the vicinity of bifurcation points.
Perturbation-induced (variational) quantum scars
A new class of quantum scars was discovered in disordered two-dimensional nanostructures. Even though similar in appearance to the ordinary quantum scars described earlier, these scars have a fundamentally different origin. In this case, the disorder arising from small perturbations (see red dots in the figure) is sufficient to destroy classical long-time stability. Hence, there is no moderately unstable periodic orbit in the classical counterpart to which a scar would correspond in ordinary scar theory. Instead, scars are formed around periodic orbits of the corresponding unperturbed system. Ordinary scar theory is further excluded by the behavior of the scars as a function of the disorder strength. When the potential bumps are made stronger while keeping them otherwise unchanged, the scars grow stronger and then fade away without changing their orientation. By contrast, a scar explained by conventional theory should become rapidly weaker due to the increase of the stability exponent of a periodic orbit with increasing disorder. Furthermore, comparing scars at different energies reveals that they occur in only a few distinct orientations. This too contradicts the predictions of ordinary scar theory.
Many-body quantum scarring
The area of quantum many-body scars is a subject of active research.
Scars have occurred in investigations of potential applications of Rydberg states to quantum computing, specifically acting as qubits for quantum simulation. The particles of the system in an alternating ground state-Rydberg state configuration continually became entangled and disentangled rather than remaining entangled and undergoing thermalization. Systems of the same atoms prepared with other initial states did thermalize as expected. The researchers dubbed the phenomenon "quantum many-body scarring".
The causes of quantum scarring are not well understood. One proposed explanation is that quantum scars represent integrable systems, or nearly do so, and this could prevent thermalization from ever occurring. This has drawn criticisms arguing that a non-integrable Hamiltonian underlies the theory. Recently, a series of works has related the existence of quantum scarring to an algebraic structure known as dynamical symmetries.
Fault-tolerant quantum computers are desired, as any perturbations to qubit states can cause the states to thermalize, leading to loss of quantum information. Scarring of qubit states is seen as a potential way to protect qubit states from outside disturbances leading to decoherence and information loss.
Antiscarring
A fascinating consequence of quantum scarring is its dual partner, antiscarring, which refers to a systematic depression of the probability density in quantum states along the path of the scar-generating periodic orbit. The existence of antiscarring is confirmed by a general stacking theorem: the cumulative probability density of the eigenstates becomes uniform when the energy window of the summed eigenstates is larger than the energy scale associated with the shortest periodic orbit in the system. Since there may be strongly scarred states among the eigenstates, the necessity for a uniform average over a large number of states requires the existence of antiscarred states with low probability in the region of "regular" scars. This effect has been demonstrated in the context of variational scarring, where it is promoted by the strength and similar orientation of the scars within a moderate energy window. Furthermore, it has been realized that some decay processes have antiscarred states with anomalously long escape times.
Quantum Birthmarks
A hallmark of classical ergodicity is the complete loss of memory of initial conditions, resulting from the eventual uniform exploration of phase space. In a quantum system, however, classical ergodic behavior can break down, as exemplified by the presence of quantum scars. The concept of a quantum birthmark bridges the short-term effects, such as those due to scarring, and the long-term predictions of random matrix theory. By extending beyond quantum scarring, quantum birthmarks offer a new paradigm for understanding the elusive quantum nature of ergodicity.
The figure depicts two birthmarks unveiled within the time-averaged probability density of a wavepacket launched under different initial conditions (indicated by the black arrows) in the stadium. The upper plot shows the result for a wavepacket released vertically from the center along a bouncing-ball orbit, whereas the lower plot depicts a wavepacket prepared off-center at an arbitrary angle, corresponding to a generic initial state. Notably, the quantum birthmarks in each case respect the two reflection symmetries of the stadium. These two cases clearly demonstrate that a quantum system can violate the classical ergodicity assumption, in the sense that the probability density does not become uniform even at infinite time. Therefore, quantum birthmarks present a new form of weak ergodicity breaking, beyond the quantum scarring taking place at the eigenstate level. While any initial state and its short-term behavior will be memorized by a quantum system, the strength of the corresponding birthmark depends on the dynamical details of the birthplace; in particular, this quantum memory effect is boosted in the presence of scarring.
See also
Quantum chaos
References
Quantum mechanics
Quantum chaos theory | Quantum scar | [
"Physics"
] | 2,943 | [
"Theoretical physics",
"Quantum mechanics"
] |
36,722,513 | https://en.wikipedia.org/wiki/Signatures%20with%20efficient%20protocols | Signatures with efficient protocols are a form of digital signature invented by Jan Camenisch and Anna Lysyanskaya in 2001. In addition to being secure digital signatures, they need to allow for the efficient implementation of two protocols:
A protocol for computing a digital signature in a secure two-party computation protocol.
A protocol for proving knowledge of a digital signature in a zero-knowledge protocol.
In applications, the first protocol allows a signer who possesses the signing key to issue a signature to a user (the signature owner) without learning all the messages being signed or the complete signature.
The second protocol allows the signature owner to prove that he has a signature on many messages without revealing the signature, and revealing only a (possibly empty) subset of the messages.
The combination of these two protocols allows for the implementation of digital credential and ecash protocols.
See also
Topics in cryptography
References
Further reading
Jan Camenisch, Anna Lysyanskaya: A Signature Scheme with Efficient Protocols. SCN 2002: 268-289
Cryptography | Signatures with efficient protocols | [
"Mathematics",
"Engineering"
] | 207 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
36,723,243 | https://en.wikipedia.org/wiki/Nico%20van%20Kampen | Nicolaas 'Nico' Godfried van Kampen (June 22, 1921 – October 6, 2013) was a Dutch theoretical physicist, who worked mainly on statistical mechanics and non-equilibrium thermodynamics.
Van Kampen was born in Leiden, and was a nephew of Frits Zernike. He studied physics at Leiden University, where he earned his PhD in 1952 under the direction of Hendrik Anthony Kramers with the thesis Contributions to the quantum theory of light scattering. He showed in his thesis how to deal with singularities in quantum mechanical scattering processes, an important step in the development of renormalization, according to Kramers. Van Kampen made fundamental contributions to non-equilibrium processes (in particular on the master equation) and to many-body theory (especially in plasma physics). His work on non-equilibrium processes began in 1953 in the research group of Sybren Ruurds de Groot (the successor to Kramers) in Leiden. In 1955 Van Kampen joined the Institute of Theoretical Physics at Utrecht University, where he later became full professor and professor emeritus after his retirement.
His monograph Stochastic processes in physics and chemistry (1981) is considered a classic. In his 2002 book Waanwetenschap (roughly, "delusional science"), Van Kampen condemned what he saw as pseudoscience, even within the scientific community; the book met with a mixed reaction, and five scientists, including Floris Takens and Dennis Dieks, wrote a commentary on it. Van Kampen supervised 15 PhD students, including Barend Felderhof (1963), John Tjon (1964) and Johannes Roerdink (1983).
Van Kampen was an uncle of the Dutch theoretical physicist and Nobel prize winner Gerard 't Hooft, and encouraged 't Hooft to study physics in Utrecht. Van Kampen was a member of the Royal Netherlands Academy of Arts and Sciences since 1973. He died, aged 92, in Nieuwegein.
Quantum mechanics
Van Kampen was a severe critic of non-orthodox interpretations of quantum mechanics. Some of his views on this subject were published in his article "The scandal of quantum mechanics". He disclosed his own approach to quantum mechanics in his paper "Ten theorems about quantum mechanical measurements".
Additional works
with B. U. Felderhof: Theoretical methods in plasma physics, North Holland 1967,
Views of a physicist. Selected papers of N. G. van Kampen, World Scientific 2000 (ed. Paul H. E. Meijer)
Stochastic processes in physics and chemistry, North Holland 1981, 3rd edn., 2007,
Elimination of fast variables, Amsterdam: North-Holland, 1985. Series: Physics Reports, v. 124, no. 2
References
External links
20th-century Dutch physicists
1921 births
Scientists from Leiden
Leiden University alumni
Members of the Royal Netherlands Academy of Arts and Sciences
2013 deaths
Academic staff of Utrecht University
Dutch theoretical physicists
Quantum physicists | Nico van Kampen | [
"Physics"
] | 601 | [
"Quantum physicists",
"Quantum mechanics"
] |
48,634,391 | https://en.wikipedia.org/wiki/Non%20ideal%20compressible%20fluid%20dynamics | Non ideal compressible fluid dynamics (NICFD), or non ideal gas dynamics, is a branch of fluid mechanics studying the dynamic behavior of fluids not obeying ideal-gas thermodynamics. Examples include dense vapors, supercritical flows and compressible two-phase flows. The term dense vapor denotes a fluid in the gaseous state at thermodynamic conditions close to saturation and the critical point. Supercritical fluids, instead, are at pressures and temperatures above their critical values, whereas two-phase flows are characterized by the simultaneous presence of both liquid and gas phases.
In all these cases, the fluid must be modelled as a real gas, since its thermodynamic behavior differs considerably from that of an ideal gas, which is recovered only in dilute thermodynamic conditions. The ideal-gas law can in general be employed as a reasonable approximation of the fluid thermodynamics for low pressures and high temperatures. Otherwise, intermolecular forces and the finite size of the fluid molecules, which are neglected in the ideal-gas approximation, become relevant and can significantly affect the fluid behavior. This is especially true for gases made of complex and heavy molecules, which tend to deviate more from the ideal model.
While the fluid dynamics of compressible flows in ideal conditions is well established and characterized by several analytical results, peculiar phenomena can occur when non-ideal thermodynamic conditions are considered. This is particularly true in supersonic conditions, namely for flow velocities larger than the speed of sound in the fluid considered. All typical features of supersonic flows are affected by non-ideal thermodynamics, resulting in both quantitative and qualitative differences with respect to ideal gas dynamics.
Non-ideal thermodynamics
For dilute thermodynamic conditions, the ideal-gas equation of state (EoS) provides sufficiently accurate results in modelling the fluid thermodynamics. This occurs in general for low values of reduced pressure and high values of reduced temperature, where the term reduced refers to the ratio of a thermodynamic quantity and its critical value. For some fluids, such as air, the ideal-gas assumption is perfectly reasonable and widely used.
On the other hand, when thermodynamic conditions approach condensation and the critical point or when high pressures are involved, real-gas models are needed in order to capture the real fluid behavior. In these conditions, in fact, intermolecular forces and compressibility effects come into play.
A measure of the fluid non-ideality is given by the compressibility factor Z, defined as

\[ Z = \frac{p v}{R T} \]

where
p is the pressure [Pa];
v is the specific volume [m³/kg];
R is the specific gas constant [J/(kg K)], namely the universal gas constant divided by the fluid's molecular mass;
T is the absolute temperature [K].
The compressibility factor Z is a dimensionless quantity which is equal to 1 for ideal gases and deviates from unity for increasing levels of non-ideality.
Several non-ideal models exist, from the simplest cubic equations of state (such as the Van der Waals and the Peng-Robinson models) up to complex multi-parameter ones, including the Span-Wagner equation of state.
State-of-the-art equations of state are easily accessible through thermodynamic libraries, such as FluidProp or the open-source software CoolProp.
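As an illustration, the compressibility factor can be evaluated with one of these libraries. The following Python sketch assumes the CoolProp package is installed; the fluid names and state points are illustrative.

```python
from CoolProp.CoolProp import PropsSI

R_UNIVERSAL = 8.314462618  # universal gas constant [J/(mol K)]

def compressibility_factor(fluid: str, T: float, p: float) -> float:
    """Z = p v / (R T), with the specific volume taken from the library's equation of state."""
    rho = PropsSI("D", "T", T, "P", p, fluid)          # density [kg/m^3]
    molar_mass = PropsSI("M", "T", T, "P", p, fluid)   # molar mass [kg/mol]
    R_specific = R_UNIVERSAL / molar_mass              # specific gas constant [J/(kg K)]
    return p / (rho * R_specific * T)

# Dilute CO2 is nearly ideal (Z close to 1); CO2 close to its critical point is not.
print(compressibility_factor("CO2", 400.0, 1e5))       # roughly 1
print(compressibility_factor("CO2", 310.0, 70e5))      # markedly below 1
```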
Non-ideal gasdynamic regimes
The dynamic behavior of compressible flows is governed by the dimensionless thermodynamic quantity Γ, which is known as the Landau derivative or fundamental derivative of gas dynamics and is defined as

\[ \Gamma = \frac{v^3}{2 c^2} \left( \frac{\partial^2 p}{\partial v^2} \right)_s \]

where
c is the speed of sound [m/s];
s is the specific entropy per unit mass [J/(kg K)].
From a mathematical point of view, the Landau derivative is a non-dimensional measure of the curvature of isentropes in the pressure-volume thermodynamic plane. From a physical point of view, the definition of Γ tells that the speed of sound increases with pressure in isentropic transformations for values of Γ > 1, while, by contrast, it decreases with pressure for Γ < 1.
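A hands-on way to evaluate Γ is to use the equivalent form Γ = 1 + (ρ/c)(∂c/∂ρ) at constant entropy and approximate the derivative by finite differences along an isentrope. The Python sketch below assumes the CoolProp package is available; the chosen fluids and state points are purely illustrative and may need adjusting to remain in the single-phase region.

```python
from CoolProp.CoolProp import PropsSI

def fundamental_derivative(fluid: str, T: float, p: float, drho_rel: float = 1e-4) -> float:
    """Gamma = 1 + (rho/c) * (dc/drho) at constant entropy, by central differences."""
    rho = PropsSI("D", "T", T, "P", p, fluid)   # density [kg/m^3]
    s = PropsSI("S", "T", T, "P", p, fluid)     # specific entropy [J/(kg K)]
    c = PropsSI("A", "T", T, "P", p, fluid)     # speed of sound [m/s]
    drho = drho_rel * rho
    c_plus = PropsSI("A", "D", rho + drho, "S", s, fluid)
    c_minus = PropsSI("A", "D", rho - drho, "S", s, fluid)
    return 1.0 + (rho / c) * (c_plus - c_minus) / (2.0 * drho)

# Dilute nitrogen: Gamma is close to (gamma + 1) / 2 = 1.2, i.e. the ideal regime.
print(fundamental_derivative("Nitrogen", 300.0, 1e5))
# A heavy siloxane vapor not far from saturation (illustrative state): Gamma approaches
# or drops below 1, entering the non-ideal classical regime.
print(fundamental_derivative("MDM", 550.0, 5e5))
```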
Based on the value of Γ, three gas dynamic regimes can be defined:
ideal gasdynamic regime for Γ > 1;
non-ideal classical gasdynamic regime for 0 < Γ < 1;
non-classical gasdynamic regime for Γ < 0.
Ideal gasdynamic regime
In the ideal regime, the usual ideal-gas behavior is qualitatively recovered. For an ideal gas, in fact, the value of the Landau derivative reduces to the constant value Γ = (γ + 1)/2, where γ is the heat capacity ratio. By definition, γ is the ratio between the constant-pressure and the constant-volume specific heats, so it is larger than 1, leading to a value of Γ larger than 1 too.
In this regime, only quantitative differences with respect to the ideal model are encountered. The flow evolution in fact depends on total, or stagnation, thermodynamic conditions. For example, the Mach number evolution of an ideal gas in a supersonic nozzle depends only on the heat capacity ratio (namely on the fluid) and on the exhaust-to-stagnation pressure ratio. Considering real-gas effects, instead, even fixing the fluid and the pressure ratio, different total states yield different Mach profiles.
Typically, for single-phase fluids made of simple molecules, only the ideal gasdynamic regime can be reached, even for thermodynamic conditions very close to saturation. This is, for example, the case for diatomic or triatomic molecules, such as nitrogen or carbon dioxide, which can experience only small departures from ideal behavior.
Non-ideal classical gasdynamic regime
For fluids with high molecular complexity, state-of-the-art thermodynamic models predict values of Γ < 1 in the single-phase region close to the saturation curve, where the speed of sound is largely sensitive to density variations along isentropes. Such fluids belong to different classes of chemical compounds, including hydrocarbons, siloxanes and refrigerants.
In the non-ideal regime, even qualitative differences with respect to ideal gasdynamics can be found, meaning that the flow evolution can be strongly different for varying total conditions. The most peculiar phenomenon of the non-ideal regime is the decrease of the Mach number in isentropic expansions occurring in the supersonic regime, namely processes in which the fluid density decreases. Indeed, for an ideal gas expanding isentropically in a converging-diverging nozzle, the Mach number increases monotonically as the density decreases. By contrast, for flows evolving in the non-ideal regime, a non-monotone Mach number evolution is possible in the divergent section, whereas the density reduction remains monotonic (see figure in the lead section). This particular phenomenon is governed by the quantity J, which is a non-dimensional measure of the Mach number derivative with respect to density in isentropic processes:

\[ J = \frac{\rho}{M} \left( \frac{\partial M}{\partial \rho} \right)_s = 1 - \Gamma - \frac{1}{M^2} \]

where
M is the Mach number;
ρ is the density [kg/m³].
From the definition of J, the Mach number increases with density for flow conditions featuring values of J > 0. Indeed, this is possible only for values of Γ < 1, that is, in the non-ideal regime. However, this is not a sufficient condition for the non-monotone Mach number to appear, since a sufficiently large value of M is also required. In particular, supersonic conditions (M > 1) are necessary.
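The non-monotone Mach number trend can be reproduced with a very simple model: assume, only for illustration, a constant Γ along the isentrope, so that J = 1 − Γ − 1/M² and the relation dM/dρ = J M/ρ can be integrated directly as the density drops through a supersonic expansion. The Python sketch below uses this assumption; the values of Γ, the initial Mach number and the density range are illustrative.

```python
import numpy as np

def mach_along_expansion(gamma_fund: float = 0.3, M0: float = 1.5, steps: int = 20000):
    """Integrate dM/drho = J * M / rho with J = 1 - Gamma - 1/M**2 for a constant Gamma."""
    rho = np.linspace(1.0, 0.2, steps)        # density normalized by its initial value
    M = np.empty_like(rho)
    M[0] = M0
    for i in range(1, steps):
        J = 1.0 - gamma_fund - 1.0 / M[i - 1] ** 2
        drho = rho[i] - rho[i - 1]            # negative: the flow is expanding
        M[i] = M[i - 1] + J * M[i - 1] / rho[i - 1] * drho
    return rho, M

rho, M = mach_along_expansion()
# With Gamma < 1 the Mach number first decreases along the supersonic expansion
# (as long as J > 0, i.e. M > 1/sqrt(1 - Gamma)) and only afterwards grows again.
print(M[0], M.min(), M[-1])
```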
An analogous effect is encountered in the expansion around rarefactive ramps: for suitable thermodynamic conditions, the Mach number downstream of the ramp can be lower than the one upstream. By contrast, in oblique shock waves, the post-shock Mach number can be larger than the pre-shock one.
Non-classical gas-dynamic regime
Finally, fluids with an even higher molecular complexity can exhibit non-classical behavior in the single-phase vapor region near saturation. They are called Bethe-Zel'dovich-Thompson (BZT) fluids, after the physicists Hans Bethe, Yakov Zel'dovich, and Philip Thompson, who first worked on these kinds of fluids.
For thermodynamic conditions lying in the non-classical regime, the non-monotone evolution of the Mach number in isentropic expansions can be found even in subsonic conditions. In fact, for values of Γ < 0, positive values of J can be reached also in subsonic flows (M < 1). In other words, the non-monotone Mach number evolution is also possible in the convergent section of an isentropic nozzle.
Moreover, a peculiar phenomenon of the non-classical regime is the so-called inverted gas-dynamics. In the classical regime, expansions are smooth isentropic processes, while compressions occur through shock waves, which are discontinuities in the flow. If gas-dynamics is inverted, the opposite occurs, namely rarefaction shock waves are physically admissible and compressions occur through smooth isentropic processes.
As a consequence of the negative value of , two other peculiar phenomena can occur for BZT fluids: shock splitting and composite waves. Shock splitting occurs when an inadmissible pressure discontinuity evolves in time by generating two weaker shock waves. Composite waves, instead, are referred to as phenomena in which two elementary waves propagate as a single entity.
Experimental evidence of a non-classical gas-dynamic regime is not available yet. The main reasons are the complexity of performing experiments in such challenging thermodynamic conditions and the limited thermal stability of these very complex molecules.
Applications
Compressible flows in non-ideal conditions are encountered in several industrial and aerospace applications. They are employed for example in Organic Rankine Cycles (ORC) and supercritical carbon dioxide (sCO2) systems for power production. In the aerospace field, fluids in conditions close to saturation can be used as oxidizers in hybrid rocket motors or for surface cooling of rocket nozzles. Gases made of molecules of high molecular mass can be used in supersonic wind tunnels instead of air to obtain higher Reynolds numbers. Finally, non-ideal flows find application in the high-speed transportation of fuels and in the Rapid Expansion of Supercritical Solutions (RESS) of CO2 for particle generation or the extraction of chemicals.
Organic Rankine cycles
Usual Rankine cycles are thermodynamic cycles that employ water as a working fluid to produce electric power from thermal sources. In Organic Rankine cycles, by contrast, water is substituted by molecularly complex organic compounds. Since the vaporization temperature of these kinds of fluids is lower than that of water at atmospheric pressure, low-to-medium temperature sources can be exploited allowing for heat recovery, for example, from biomass combustion, industrial waste heat, or geothermal heat. For these reasons, ORC technology belongs to the class of renewable energies.
For the design of mechanical components, such as turbines, working in ORC plants, it is fundamental to take into account typical non-ideal gas-dynamic phenomena. In fact, the single-phase vapor at the inlet of an ORC turbine stator usually evolves in the non-ideal thermodynamic region close to the liquid-vapor saturation curve and critical point. Moreover, due to the high molecular mass of the complex organic compounds employed, the speed of sound in these fluids is low compared to that of air and other simple gases. Therefore, turbine stators are very likely to involve supersonic flows even if rather low flow velocities are reached. High supersonic flows can produce large losses and mechanical stresses in the turbine blades due to the occurrence of shock waves, which cause a strong pressure raise. However, when working fluids of the BZT class are employed, expander performances could be improved by exploiting some non-classical phenomena.
Supercritical carbon dioxide cycles
When carbon dioxide is held above its critical pressure (73.773 bar) and temperature (30.978 °C), it can behave both as a gas and as a liquid; that is, it expands to fill its container entirely like a gas but has a density similar to that of a liquid.
Supercritical CO2 is chemically stable, very cheap, and non-flammable, making it suitable as a working fluid for transcritical cycles. For example, it is employed in domestic water heat pumps, which can reach high efficiencies.
Moreover, when used in power generation plants that employ Brayton and Rankine cycles, it can improve efficiency and power output. Its high density enables a strong reduction in turbomachines dimensions, still ensuring the high efficiency of these components. Simpler designs can therefore be adopted, while steam turbines require multiple turbine stages, which necessarily yield larger dimensions and costs.
By contrast, mechanical components within sCO2 Brayton cycles, especially turbomachinery and heat exchangers, suffer from corrosion.
See also
Compressible flow
Equation of state
Mach number
Organic Rankine cycle
Prandtl–Meyer expansion fan
Real gas
Shock wave
Supercritical carbon dioxide
Supersonic nozzle flow
References
Further reading
External links
Open-source thermodynamic library CoolProp
Thermodynamic library FluidProp
Rapid Expansion of Supercritical Solutions (RESS)
Fluid mechanics
Thermodynamics
Fluid dynamics | Non ideal compressible fluid dynamics | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 2,753 | [
"Chemical engineering",
"Civil engineering",
"Thermodynamics",
"Piping",
"Fluid dynamics",
"Fluid mechanics",
"Dynamical systems"
] |
48,640,009 | https://en.wikipedia.org/wiki/Complete%20active%20space%20perturbation%20theory | Complete active space perturbation theory (CASPTn) is a multireference electron correlation method for computational investigation of molecular systems, especially for those with heavy atoms such as transition metals, lanthanides, and actinides. It can be used, for instance, to describe electronic states of a system, when single reference methods and density functional theory cannot be used, and for heavy atom systems for which quasi-relativistic approaches are not appropriate.
Although perturbation methods such as CASPTn are successful in describing molecular systems, they still need a Hartree-Fock wavefunction to provide a valid starting point. The perturbation theories cannot reach convergence if the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) are degenerate. Therefore, the CASPTn method is usually used in conjunction with the multi-configurational self-consistent field method (MCSCF) to avoid near-degeneracy correlation effects.
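The typical workflow, a CASSCF reference followed by a second-order perturbative correction, can be sketched with the open-source PySCF package. Note that PySCF ships NEVPT2 rather than CASPT2 itself (CASPT2 is available in codes such as OpenMolcas), but the structure of the calculation is the same; the molecule, basis set and active space below are illustrative and assume PySCF is installed.

```python
from pyscf import gto, scf, mcscf, mrpt

# Stretched N2: a textbook case where a single determinant is a poor starting point.
mol = gto.M(atom="N 0 0 0; N 0 0 1.4", basis="cc-pvdz")

mf = scf.RHF(mol).run()              # Hartree-Fock starting point
mc = mcscf.CASSCF(mf, 6, 6).run()    # CAS(6,6): six electrons in the six 2p-derived orbitals
e_pt2 = mrpt.NEVPT(mc).kernel()      # second-order multireference perturbative correction

print("CASSCF total energy [Hartree]:", mc.e_tot)
print("Perturbative correction [Hartree]:", e_pt2)
```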
History
Perturbation theory was introduced into quantum chemical applications in the early 1960s. Since then, it has been widely used through software such as Gaussian, and perturbation-theory correlation methods are routinely employed even by non-specialists, because they achieve size extensivity more easily than many other correlation methods.
During the early uses of perturbation theory, applications were based on nondegenerate many-body perturbation theory (MBPT). MBPT is a reasonable method for atomic and molecular systems whose zeroth-order electronic description can be represented by a single non-degenerate Slater determinant. Therefore, the MBPT method excludes atomic and molecular states, especially excited states, which cannot be represented in zeroth order as single Slater determinants. Moreover, the perturbation expansion converges very slowly or not at all if the state is degenerate or nearly degenerate, which is often the case for atomic and molecular valence states. To overcome these restrictions, there was an attempt to implement second-order perturbation theory in conjunction with complete active space self-consistent field (CASSCF) wave functions. At the time, it was rather difficult to compute the three- and four-particle density matrices which are needed for matrix elements involving internal and semi-internal excitations. The results were rather disappointing, with little or no improvement over the usual CASSCF results. Another attempt was made in 1990, in which the full interacting space was included in the first-order wave function while the zeroth-order Hamiltonian was constructed from a Fock-type one-electron operator. For cases with no active orbitals, this Fock-type one-electron operator reduces to the Møller–Plesset Hartree-Fock (HF) operator. A diagonal Fock operator was also used to make the computer implementation simple and effective.
References
Electronic structure methods | Complete active space perturbation theory | [
"Physics",
"Chemistry"
] | 620 | [
"Quantum chemistry",
"Quantum mechanics",
"Computational physics",
"Electronic structure methods",
"Computational chemistry"
] |
48,643,701 | https://en.wikipedia.org/wiki/Out-of-bag%20error | Out-of-bag (OOB) error, also called out-of-bag estimate, is a method of measuring the prediction error of random forests, boosted decision trees, and other machine learning models utilizing bootstrap aggregating (bagging). Bagging uses subsampling with replacement to create training samples for the model to learn from. OOB error is the mean prediction error on each training sample xᵢ, using only the trees that did not have xᵢ in their bootstrap sample.
Bootstrap aggregating allows one to define an out-of-bag estimate of the prediction performance improvement by evaluating predictions on those observations that were not used in the building of the next base learner.
Out-of-bag dataset
When bootstrap aggregating is performed, two independent sets are created. One set, the bootstrap sample, is the data chosen to be "in-the-bag" by sampling with replacement. The out-of-bag set is all data not chosen in the sampling process.
When this process is repeated, such as when building a random forest, many bootstrap samples and OOB sets are created. The OOB sets can be aggregated into one dataset, but each sample is only considered out-of-bag for the trees that do not include it in their bootstrap sample. The picture below shows that for each bag sampled, the data is separated into two groups.
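A minimal illustration of the split in Python with NumPy (sizes are illustrative): sampling indices with replacement gives the in-bag set, and whatever was never drawn forms the out-of-bag set.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_samples = 10

in_bag = rng.integers(0, n_samples, size=n_samples)       # indices drawn with replacement
out_of_bag = np.setdiff1d(np.arange(n_samples), in_bag)   # indices that were never drawn

print("in-bag indices:    ", np.sort(in_bag))
print("out-of-bag indices:", out_of_bag)   # on average about 37% of the samples
```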
This example shows how bagging could be used in the context of diagnosing disease. A set of patients forms the original dataset, but each model is trained only on the patients in its bag. The patients in each out-of-bag set can be used to test their respective models. The test would consider whether the model can accurately determine if the patient has the disease.
Calculating out-of-bag error
Since each out-of-bag set is not used to train the model, it is a good test for the performance of the model. The specific calculation of OOB error depends on the implementation of the model, but a general calculation is as follows.
Find all models (or trees, in the case of a random forest) that are not trained by the OOB instance.
Take the majority vote of these models' results for the OOB instance and compare it to the true value of the OOB instance.
Compile the OOB error for all instances in the OOB dataset.
The bagging process can be customized to fit the needs of a model. To ensure an accurate model, the bootstrap training sample size should be close to that of the original set. Also, the number of iterations (trees) of the model (forest) should be considered to find the true OOB error. The OOB error will stabilize over many iterations so starting with a high number of iterations is a good idea.
Shown in the example to the right, the OOB error can be found using the method above once the forest is set up.
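In practice, libraries expose this computation directly. The following Python sketch uses scikit-learn (assumed installed); the dataset and hyperparameters are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data; all sizes and hyperparameters are illustrative.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

forest = RandomForestClassifier(
    n_estimators=500,      # enough trees for the OOB estimate to stabilize
    oob_score=True,        # score each sample only with trees that did not see it
    bootstrap=True,
    random_state=0,
).fit(X, y)

print("OOB accuracy:", forest.oob_score_)      # OOB error is 1 - forest.oob_score_
```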
Comparison to cross-validation
Out-of-bag error and cross-validation (CV) are different methods of measuring the error estimate of a machine learning model. Over many iterations, the two methods should produce a very similar error estimate. That is, once the OOB error stabilizes, it will converge to the cross-validation (specifically leave-one-out cross-validation) error. The advantage of the OOB method is that it requires less computation and allows one to test the model as it is being trained.
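Continuing the previous snippet, the OOB estimate can be compared against a cross-validated score; with enough trees the two figures are usually close.

```python
from sklearn.model_selection import cross_val_score

cv_accuracy = cross_val_score(forest, X, y, cv=10).mean()
print("10-fold CV accuracy:", cv_accuracy)
print("OOB accuracy:       ", forest.oob_score_)
```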
Accuracy and consistency
Out-of-bag error is used frequently for error estimation within random forests, but a study by Silke Janitza and Roman Hornung concluded that out-of-bag error overestimates the error in settings that include an equal number of observations from all response classes (balanced samples), small sample sizes, a large number of predictor variables, small correlation between predictors, and weak effects.
See also
Boosting (meta-algorithm)
Bootstrap aggregating
Bootstrapping (statistics)
Cross-validation (statistics)
Random forest
Random subspace method (attribute bagging)
References
Ensemble learning
Machine learning algorithms
Computational statistics | Out-of-bag error | [
"Mathematics"
] | 852 | [
"Computational statistics",
"Computational mathematics"
] |
34,148,348 | https://en.wikipedia.org/wiki/Quasicircle | In mathematics, a quasicircle is a Jordan curve in the complex plane that is the image of a circle under a quasiconformal mapping of the plane onto itself. Originally introduced independently by and , in the older literature (in German) they were referred to as quasiconformal curves, a terminology which also applied to arcs. In complex analysis and geometric function theory, quasicircles play a fundamental role in the description of the universal Teichmüller space, through quasisymmetric homeomorphisms of the circle. Quasicircles also play an important role in complex dynamical systems.
Definitions
A quasicircle is defined as the image of a circle under a quasiconformal mapping of the extended complex plane. It is called a K-quasicircle if the quasiconformal mapping has dilatation K. The definition of quasicircle generalizes the characterization of a Jordan curve as the image of a circle under a homeomorphism of the plane. In particular a quasicircle is a Jordan curve. The interior of a quasicircle is called a quasidisk.
As shown in , where the older term "quasiconformal curve" is used, if a Jordan curve is the image of a circle under a quasiconformal map in a neighbourhood of the curve, then it is also the image of a circle under a quasiconformal mapping of the extended plane and thus a quasicircle. The same is true for "quasiconformal arcs" which can be defined as quasiconformal images of a circular arc either in an open set or equivalently in the extended plane.
Geometric characterizations
gave a geometric characterization of quasicircles as those Jordan curves for which the absolute value of the cross-ratio of any four points, taken in cyclic order, is bounded below by a positive constant.
Ahlfors also proved that quasicircles can be characterized in terms of a reverse triangle inequality for three points: there should be a constant C such that if two points z1 and z2 are chosen on the curve and z3 lies on the shorter of the resulting arcs, then

|z1 − z3| + |z3 − z2| ≤ C |z1 − z2|.

This property is also called bounded turning or the arc condition.
For Jordan curves in the extended plane passing through ∞, gave a simpler necessary and sufficient condition to be a quasicircle. There is a constant C > 0 such that if
z1, z2 are any points on the curve and z3 lies on the segment between them, then
These metric characterizations imply that an arc or closed curve is quasiconformal whenever it arises as the image of an interval or the circle under a bi-Lipschitz map f, i.e. satisfying

C1 |z − w| ≤ |f(z) − f(w)| ≤ C2 |z − w|

for positive constants Ci.
Quasicircles and quasisymmetric homeomorphisms
If φ is a quasisymmetric homeomorphism of the circle, then there are conformal maps f of |z| < 1 and g of |z| > 1 into disjoint regions such that the complement of the images of f and g is a Jordan curve. The maps f and g extend continuously to the circle |z| = 1 and the sewing equation

f ∘ φ = g on |z| = 1

holds. The image of the circle is a quasicircle.
Conversely, using the Riemann mapping theorem, the conformal maps f and g uniformizing the inside and outside of a quasicircle give rise to a quasisymmetric homeomorphism through the above equation.
The quotient space of the group of quasisymmetric homeomorphisms by the subgroup of Möbius transformations provides a model of universal Teichmüller space. The above correspondence shows that the space of quasicircles can also be taken as a model.
Quasiconformal reflection
A quasiconformal reflection in a Jordan curve is an orientation-reversing quasiconformal map of period 2 which switches the inside and the outside of the curve, fixing points on the curve. Since the map

z ↦ 1/z̄

provides such a reflection for the unit circle, any quasicircle admits a quasiconformal reflection. It was proved that this property characterizes quasicircles.
Ahlfors noted that this result can be applied to uniformly bounded holomorphic univalent functions f(z) on the unit disk D. Let Ω = f(D). As Carathéodory had proved using his theory of prime ends, f extends continuously to the unit circle if and only if ∂Ω is locally connected, i.e. admits a covering by finitely many compact connected sets of arbitrarily small diameter. The extension to the circle is 1-1 if and only if ∂Ω has no cut points, i.e. points which when removed from ∂Ω yield a disconnected set. Carathéodory's theorem shows that a locally connected set without cut points is just a Jordan curve and that precisely in this case the extension of f to the closed unit disk is a homeomorphism. If f extends to a quasiconformal mapping of the extended complex plane then ∂Ω is by definition a quasicircle. Conversely, it was observed that if ∂Ω is a quasicircle and R1 denotes the quasiconformal reflection in ∂Ω, then the assignment

f(z) = R1(f(1/z̄))

for |z| > 1 defines a quasiconformal extension of f to the extended complex plane.
Complex dynamical systems
Quasicircles were known to arise as the Julia sets of rational maps R(z). It was proved that if the Fatou set of R has two components and the action of R on the Julia set is "hyperbolic", i.e. there are constants c > 0 and A > 1 such that

|(Rⁿ)′(z)| ≥ c Aⁿ

on the Julia set, then the Julia set is a quasicircle.
There are many examples:
quadratic polynomials R(z) = z² + c with an attracting fixed point (a numerical sketch is given after this list)
the Douady rabbit (c = –0.122561 + 0.744862i, where c³ + 2c² + c + 1 = 0)
quadratic polynomials z² + λz with |λ| < 1
the Koch snowflake
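As referenced in the first item of the list above, the quasicircle arising from a quadratic polynomial with an attracting fixed point can be visualized numerically. The following Python escape-time sketch (NumPy assumed; the value of c and the grid bounds are illustrative) marks the points whose orbits stay bounded; the boundary of that region approximates the quasicircle Julia set.

```python
import numpy as np

def escape_time_grid(c: complex, n: int = 800, max_iter: int = 200, radius: float = 2.0):
    """Iterate z -> z**2 + c on a grid, counting iterations before |z| exceeds radius."""
    axis = np.linspace(-1.6, 1.6, n)
    z = axis[np.newaxis, :] + 1j * axis[:, np.newaxis]
    counts = np.zeros(z.shape, dtype=int)
    alive = np.ones(z.shape, dtype=bool)       # points whose orbit has not escaped yet
    for _ in range(max_iter):
        z[alive] = z[alive] ** 2 + c
        alive &= np.abs(z) <= radius
        counts[alive] += 1
    return counts

# c lies inside the main cardioid, so z -> z**2 + c has an attracting fixed point and
# the Julia set (the boundary between escaping and trapped points) is a quasicircle.
grid = escape_time_grid(0.2 + 0.1j)
# e.g. matplotlib.pyplot.imshow(grid) shows a slightly distorted circle.
```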
Quasi-Fuchsian groups
Quasi-Fuchsian groups are obtained as quasiconformal deformations of Fuchsian groups. By definition their limit sets are quasicircles.
Let Γ be a Fuchsian group of the first kind: a discrete subgroup of the Möbius group preserving the unit circle, acting properly discontinuously on the unit disk D and with limit set the unit circle.
Let μ(z) be a measurable function on D with

\[ \| \mu \|_\infty < 1 \]

such that μ is Γ-invariant, i.e.

\[ \mu(g(z)) \, \overline{g'(z)} / g'(z) = \mu(z) \]

for every g in Γ. (μ is thus a "Beltrami differential" on the Riemann surface D / Γ.)
Extend μ to a function on C by setting μ(z) = 0 off D.
The Beltrami equation

\[ \partial_{\bar z} f(z) = \mu(z) \, \partial_z f(z) \]

admits a solution unique up to composition with a Möbius transformation.
It is a quasiconformal homeomorphism of the extended complex plane.
If g is an element of Γ, then f(g(z)) gives another solution of the Beltrami equation, so that

\[ \alpha(g) = f \circ g \circ f^{-1} \]

is a Möbius transformation.
The group α(Γ) is a quasi-Fuchsian group with limit set the quasicircle given by the image of the unit circle under f.
Hausdorff dimension
It is known that there are quasicircles for which no segment has finite length. The Hausdorff dimension of quasicircles was first investigated by , who proved that it can take all values in the interval [1,2). , using the new technique of "holomorphic motions", was able to estimate the change in the Hausdorff dimension of any planar set under a quasiconformal map with dilatation K. For quasicircles C, there was a crude estimate for the Hausdorff dimension

d(C) ≤ 1 + k

where

k = (K − 1)/(K + 1).
On the other hand, the Hausdorff dimension for the Julia sets Jc of the iterates of the rational maps

R(z) = z² + c

had been estimated as a result of the work of Rufus Bowen and David Ruelle, who showed that

d(Jc) = 1 + |c|²/(4 log 2) + higher-order terms, for small c.
Since these are quasicircles corresponding to a dilatation
where
this led to show that for k small
Following calculations for the Koch snowflake carried out with Steffen Rohde and Oded Schramm, which improved the lower bound, it was conjectured that

d(C) ≤ 1 + k²

for every K-quasicircle C.
This conjecture was proved by ; a complete account of his proof, prior to publication, was already given in .
For a quasi-Fuchsian group, it was shown that the Hausdorff dimension d of the limit set is always greater than 1. When d < 2, the quantity

d(2 − d)

is the lowest eigenvalue of the Laplacian of the corresponding hyperbolic 3-manifold.
Notes
References
, Section 13.2, Dimension of quasicircles.
Complex analysis
Dynamical systems
Fractals | Quasicircle | [
"Physics",
"Mathematics"
] | 1,813 | [
"Functions and mappings",
"Mathematical analysis",
"Mathematical objects",
"Fractals",
"Mechanics",
"Mathematical relations",
"Dynamical systems"
] |