id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
44,308,703 | https://en.wikipedia.org/wiki/Flajolet%E2%80%93Martin%20algorithm | The Flajolet–Martin algorithm is an algorithm for approximating the number of distinct elements in a stream with a single pass and space-consumption logarithmic in the maximal number of possible distinct elements in the stream (the count-distinct problem). The algorithm was introduced by Philippe Flajolet and G. Nigel Martin in their 1984 article "Probabilistic Counting Algorithms for Data Base Applications". Later it has been refined in "LogLog counting of large cardinalities" by Marianne Durand and Philippe Flajolet, and "HyperLogLog: The analysis of a near-optimal cardinality estimation algorithm" by Philippe Flajolet et al.
In their 2010 article "An optimal algorithm for the distinct elements problem", Daniel M. Kane, Jelani Nelson and David P. Woodruff give an improved algorithm, which uses nearly optimal space and has optimal O(1) update and reporting times.
The algorithm
Assume that we are given a hash function hash(x) that maps input x to integers in the range [0, 2^L − 1], and where the outputs are sufficiently uniformly distributed. Note that the set of integers from 0 to 2^L − 1 corresponds to the set of binary strings of length L. For any non-negative integer y, define bit(y, k) to be the k-th bit in the binary representation of y, such that y = Σ_{k≥0} bit(y, k)·2^k.
We then define a function ρ(y) that outputs the position of the least-significant set bit in the binary representation of y, and ρ(y) = L if no such set bit can be found, as all bits are zero: ρ(y) = min{k ≥ 0 : bit(y, k) ≠ 0}, with ρ(0) = L.
Note that with the above definition we are using 0-indexing for the positions, starting from the least significant bit. For example, ρ(13) = 0, since the least significant bit of 13 (binary 1101) is a 1 (0th position), and ρ(8) = 3, since the least significant set bit of 8 (binary 1000) is at the 3rd position. At this point, note that under the assumption that the output of our hash function is uniformly distributed, the probability of observing a hash output ending with 2^k (a one, followed by k zeroes) is 2^−(k+1), since this corresponds to flipping k heads and then a tail with a fair coin.
Now the Flajolet–Martin algorithm for estimating the cardinality of a multiset M is as follows:
Initialize a bit-vector BITMAP to be of length L and contain all 0s.
For each element x in M:
Calculate the index i = ρ(hash(x)).
Set BITMAP[i] = 1.
Let R denote the smallest index i such that BITMAP[i] = 0.
Estimate the cardinality of M as 2^R/φ, where φ ≈ 0.77351.
The idea is that if n is the number of distinct elements in the multiset M, then BITMAP[0] is accessed approximately n/2 times, BITMAP[1] is accessed approximately n/4 times and so on. Consequently, if i ≫ log2(n), then BITMAP[i] is almost certainly 0, and if i ≪ log2(n), then BITMAP[i] is almost certainly 1. If i ≈ log2(n), then BITMAP[i] can be expected to be either 1 or 0.
The correction factor φ ≈ 0.77351 is found by calculations, which can be found in the original article.
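A minimal Python sketch of the single-hash estimator described above. The 64-bit mixing hash, the word length L = 64 and the sample stream are illustrative assumptions rather than part of the original specification.

```python
PHI = 0.77351  # correction factor from Flajolet and Martin's analysis
L = 64         # number of tracked bit positions (assumed word length)

def _hash64(x) -> int:
    """Illustrative 64-bit mixing hash; any sufficiently uniform hash will do."""
    h = hash(x) & 0xFFFFFFFFFFFFFFFF
    h ^= h >> 33
    h = (h * 0xFF51AFD7ED558CCD) & 0xFFFFFFFFFFFFFFFF
    h ^= h >> 33
    return h

def rho(y: int) -> int:
    """Position of the least-significant set bit of y, or L if y == 0."""
    if y == 0:
        return L
    pos = 0
    while y & 1 == 0:
        y >>= 1
        pos += 1
    return pos

def fm_estimate(stream) -> float:
    """Single-pass Flajolet-Martin estimate of the number of distinct elements."""
    bitmap = [0] * (L + 1)          # extra slot guards the all-zero hash edge case
    for x in stream:
        bitmap[rho(_hash64(x))] = 1
    R = next(i for i, b in enumerate(bitmap) if b == 0)  # smallest unset index
    return 2 ** R / PHI

# True cardinality is 1000; a single run is typically the right order of
# magnitude but can easily be off by a factor of two in either direction.
print(fm_estimate(i % 1000 for i in range(10_000)))
```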
Improving accuracy
A problem with the Flajolet–Martin algorithm in the above form is that the results vary significantly. A common solution has been to run the algorithm multiple times with different hash functions and combine the results from the different runs. One idea is to take the mean of the results from each hash function, obtaining a single estimate of the cardinality. The problem with this is that averaging is very susceptible to outliers (which are likely here). A different idea is to use the median, which is less prone to be influenced by outliers. The problem with this is that the results can only take the form 2^R/φ, where R is an integer. A common solution is to combine both the mean and the median: create k·l hash functions and split them into k distinct groups (each of size l). Within each group use the mean for aggregating the results, and finally take the median of the k group estimates as the final estimate.
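A sketch of the mean-then-median combination just described, reusing the L, PHI, rho and _hash64 helpers from the previous example; the hash family is simulated by salting the hash with random seeds, and the group sizes are illustrative.

```python
import random
import statistics

def fm_median_of_means(stream, k: int = 5, l: int = 8) -> float:
    """Single pass with k*l salted hashes: mean within each group, median across groups."""
    seeds = [random.randrange(2 ** 32) for _ in range(k * l)]
    bitmaps = [[0] * (L + 1) for _ in seeds]
    for x in stream:
        for bitmap, seed in zip(bitmaps, seeds):
            bitmap[rho(_hash64((seed, x)))] = 1
    estimates = [2 ** next(i for i, b in enumerate(bm) if b == 0) / PHI
                 for bm in bitmaps]
    group_means = [statistics.mean(estimates[g * l:(g + 1) * l]) for g in range(k)]
    return statistics.median(group_means)

# The combined estimate is markedly more stable than a single FM run.
print(fm_median_of_means(i % 1000 for i in range(10_000)))
```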
The 2007 HyperLogLog algorithm splits the multiset into subsets and estimates their cardinalities, then it uses the harmonic mean to combine them into an estimate for the original cardinality.
See also
Streaming algorithm
HyperLogLog
References
Additional sources
Algorithms | Flajolet–Martin algorithm | [
"Mathematics"
] | 795 | [
"Applied mathematics",
"Algorithms",
"Mathematical logic"
] |
44,309,864 | https://en.wikipedia.org/wiki/Fluid%20flow%20through%20porous%20media | In fluid mechanics, fluid flow through porous media is the manner in which fluids behave when flowing through a porous medium, for example sponge or wood, or when filtering water using sand or another porous material. As commonly observed, some fluid flows through the media while some mass of the fluid is stored in the pores present in the media.
Classical flow mechanics in porous media assumes that the medium is homogeneous, isotropic, and has an intergranular pore structure. It also assumes that the fluid is a Newtonian fluid, that the reservoir is isothermal, that the well is vertical, etc. Traditional flow problems in porous media often involve single-phase steady-state flow, multi-well interference, oil-water two-phase flow, natural gas flow, elastic energy driven flow, oil-gas two-phase flow, and gas-water two-phase flow.
The physicochemical flow process involves various physical property changes and chemical reactions, in contrast to the simple Newtonian fluid assumed in the classical flow theory of porous systems. Viscosity, surface tension, phase state, concentration, temperature, and other physical characteristics are examples of these properties. Non-Newtonian fluid flow, mass transfer through diffusion, and multiphase and multicomponent fluid flow are the primary flow problems.
Governing laws
The movement of a fluid through porous media is described by the combination of Darcy's law with the principle of conservation of mass in order to express the capillary force or fluid velocity as a function of various other parameters, including the effective pore radius, liquid viscosity or permeability. However, the use of Darcy's law alone does not produce accurate results for heterogeneous media like shale and tight sandstones, where there is a large proportion of nanopores. This necessitates the use of a flow model that considers the weighted proportion of various flow regimes like Darcy flow, transition flow, slip flow, and free molecular flow.
Darcy's law
The basic law governing the flow of fluids through porous media is Darcy's Law, which was formulated by the French civil engineer Henry Darcy in 1856 on the basis of his experiments on vertical water filtration through sand beds.
According to Darcy's law, the fluid's viscosity, effective fluid permeability, and fluid pressure gradient determine the flow rate at any given location in the reservoir.
For transient processes in which the flux varies from point to point, the following differential form of Darcy's law is used.
Darcy's law is valid for situations where the porous material is already saturated with the fluid. For the calculation of the capillary imbibition speed of a liquid into an initially dry medium, Washburn's or Bosanquet's equations are used.
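A minimal sketch of the one-dimensional form of Darcy's law described above, assuming the standard expression q = -(k/μ)·dp/dx for the volumetric flux per unit area; the sample permeability, viscosity and pressure-gradient values are illustrative only.

```python
def darcy_flux(permeability_m2: float, viscosity_pa_s: float,
               dp_dx_pa_per_m: float) -> float:
    """Darcy (superficial) velocity in m/s: q = -(k / mu) * dp/dx.

    The minus sign makes the fluid flow from high pressure toward low pressure.
    """
    return -(permeability_m2 / viscosity_pa_s) * dp_dx_pa_per_m

# Example: a roughly 100 mD sandstone (~1e-13 m^2), water (1e-3 Pa.s),
# and a pressure drop of 1e5 Pa over a 10 m long core.
q = darcy_flux(1e-13, 1e-3, -1e5 / 10.0)
volumetric_rate = q * 0.01      # multiply by cross-sectional area (m^2) for m^3/s
print(q, volumetric_rate)
```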
Mass conservation
Mass conservation of fluid across the porous medium involves the basic principle that mass flux in minus mass flux out equals the increase in the amount stored by the medium. This means that the total mass of the fluid is always conserved. In mathematical form, considering a time period from t to t + Δt, a length of porous medium from x to x + Δx and m being the mass stored by the medium, we have
Furthermore, we have that m = ρV, where V is the pore volume of the medium between x and x + Δx and ρ is the density, so the stored mass can be expressed in terms of the porosity φ. Dividing both sides by Δx, and letting Δx → 0, we have for one-dimensional linear flow in a porous medium the relation
In three dimensions, the equation can be written as
The mathematical operation on the left-hand side of this equation is known as the divergence of the mass flux, and represents the rate at which fluid diverges from a given region, per unit volume.
Diffusion equation
Using the product rule (and chain rule) on the right-hand side of the above mass conservation equation (i),
Here, the two compressibility coefficients are the compressibility of the fluid and the compressibility of the porous medium, respectively. Now considering the left-hand side of the mass conservation equation, which is given by Darcy's law as
Equating the two results, we get
The second term on the left side is usually negligible, and we obtain the diffusion equation in one dimension as
where the diffusivity constant combines the permeability, porosity, viscosity and total compressibility of the system.
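A short numerical sketch of the one-dimensional diffusion equation referred to above, assuming the standard form ∂p/∂t = η ∂²p/∂x² with constant diffusivity η; the grid, time step and boundary values are illustrative choices.

```python
import numpy as np

def diffuse_pressure(p0: np.ndarray, eta: float, dx: float, dt: float,
                     steps: int) -> np.ndarray:
    """Explicit finite-difference march of dp/dt = eta * d2p/dx2.

    The two boundary values are held fixed (Dirichlet conditions); the explicit
    scheme is stable only if eta * dt / dx**2 <= 0.5.
    """
    assert eta * dt / dx ** 2 <= 0.5, "time step too large for stability"
    p = p0.astype(float).copy()
    for _ in range(steps):
        p[1:-1] += eta * dt / dx ** 2 * (p[2:] - 2 * p[1:-1] + p[:-2])
    return p

# Example: a 1 m column, pressure of 1e5 Pa applied at the left face.
x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
p0 = np.zeros_like(x)
p0[0] = 1e5
eta = 1e-4                                  # assumed hydraulic diffusivity, m^2/s
print(diffuse_pressure(p0, eta, dx, dt=0.4 * dx ** 2 / eta, steps=500)[:5])
```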
References
Further reading
External links
Fundamentals of Fluid Flow in Porous Media
Geology Buzz: Porosity
Defining Permeability
Tailoring porous media to control permeability
Permeability of Porous Media
Graphical depiction of different flow rates through materials of differing permeability
Soil mechanics
Hydrology
Fluid mechanics | Fluid flow through porous media | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 927 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Soil mechanics",
"Civil engineering",
"Environmental engineering",
"Fluid mechanics"
] |
44,309,873 | https://en.wikipedia.org/wiki/Computational%20Fluid%20Dynamics%20for%20Phase%20Change%20Materials | Computational Fluid Dynamics (CFD) modeling and simulation for phase change materials (PCMs) is a technique used to analyze the performance and behavior of PCMs. CFD models have been successful in studying and analyzing air quality, natural and stratified ventilation, air flow initiated by buoyancy forces and the temperature distribution in spaces for systems integrated with PCMs. Simple shapes like flat plates, cylinders or annular tubes, fins, and macro- and micro-encapsulations with containers of different shapes are often modeled in CFD software for study.
Typically the CFD models include Reynolds-averaged Navier–Stokes (RANS) modeling and Large Eddy Simulation (LES). The conservation equations of mass, momentum and energy (Navier–Stokes) are linearised, discretised, and applied to finite volumes to obtain a detailed solution for the field distributions of air pressure, velocity and temperature in indoor spaces integrated with PCMs.
Governing Equations
Mass Equation
∂ρ/∂t + ∇·(ρu) = S_m
where
ρ is fluid density,
t is time,
u is the flow velocity vector field
S_m is a constant.
Energy Equation
where
ρ is the fluid mass density,
S_E is the source term.
Navier–Stokes equation
ρ (∂u/∂t + u·∇u) = −∇p + μ∇²u + f
Here f represents "other" body forces (per unit volume), such as gravity or centrifugal force. The shear stress term becomes μ∇²u, where ∇² is the vector Laplacian.
Boussinesq eddy-viscosity approximation
−ρ⟨u_i′ u_j′⟩ = 2 μ_t S_ij − (2/3) ρ k δ_ij
where
S_ij is the mean rate of strain tensor
μ_t is the turbulence eddy viscosity
k is the turbulence kinetic energy
and δ_ij is the Kronecker delta.
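A small NumPy sketch that evaluates the Boussinesq closure above for a given mean velocity-gradient tensor; the sample eddy viscosity, density and turbulence kinetic energy are illustrative values.

```python
import numpy as np

def reynolds_stress(grad_u: np.ndarray, mu_t: float, rho: float, k: float) -> np.ndarray:
    """Modelled Reynolds stress tensor -rho*<u_i'u_j'> from the Boussinesq hypothesis.

    grad_u[i, j] = d(mean u_i)/dx_j; S_ij is its symmetric part.
    """
    S = 0.5 * (grad_u + grad_u.T)                 # mean rate-of-strain tensor
    return 2.0 * mu_t * S - (2.0 / 3.0) * rho * k * np.eye(3)

# Example: simple shear du/dy = 10 1/s in air, with assumed mu_t and k.
grad_u = np.zeros((3, 3))
grad_u[0, 1] = 10.0
print(reynolds_stress(grad_u, mu_t=1e-3, rho=1.2, k=0.05))
```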
Assumptions
Commonly used assumptions are:
Incompressible fluid;
Boussinesq approximation (density is considered constant, except in the gravity forces term);
Constant thermo-physical properties (properties of the solid and liquid states are assumed to be equal).
Phase Change Model
Two main thermal characteristics of phase change are the enthalpy-temperature relationship and temperature hysteresis. PCMs tend to have varying enthalpy-temperature relationships because they are blends of different materials, but pure PCMs have a more localized relationship, which can be approximated by single values for the enthalpy and phase change temperature.
Hysteresis is the phenomenon which causes the PCM to melt and freeze in different temperature ranges and with different enthalpies, resulting in different temperature-enthalpy curves for melting and freezing. Hysteresis is related to the chemical and kinetic properties of the material.
The enthalpy-porosity model commonly used in commercial CFD codes assumes a linear enthalpy-temperature relationship and ignores hysteresis.[8]
The alternative is to use the enthalpy-porosity method. When used to simulate PCM sails and a PCM plate-fin unit, it produces reasonable temperature predictions in terms of global space temperature. However, there are inaccuracies in transient simulations where the time-dependent PCM and local wall and air temperatures are of interest. This is overcome by the use of source terms that account for hysteresis and the varying enthalpy-temperature relationship. [9][10]
CFD-DEM models are also sometimes used. The phase motion of discrete solids or particles is obtained by the Discrete Element Method (DEM), which applies Newton's laws of motion to every particle, while the flow of the continuum fluid is described by the locally averaged Navier–Stokes equations that can be solved by traditional Computational Fluid Dynamics (CFD). CFDEMcoupling (DCS Computing GmbH) is one such open-source toolbox for CFD-DEM coupling.
Process
The governing equations are discretized using an explicit Finite Volume Method. The velocity-pressure coupling is resolved by adopting a Fractional Step Method. The adoption of the enthalpy method allows working with a fixed grid instead of an interface-tracking method.
The momentum source term intended to model the presence of solid is only needed in the control volumes that contain solid and liquid, not in the pure solid containing volumes.
The final form of the source term coefficient (S) depends on the approximation adopted for the behavior of the flow in the “mushy zone” (where mixed solid and liquid states are present). However, in the case of a constant phase change temperature, the solid-liquid interface should be of infinitesimal width (although it cannot be thinner than one control volume width in our simulations); therefore, the formulation used for the source term is not very important in a physical sense, as long as it manages to bring the velocity to zero in mostly solid control volumes and to vanish if the volume contains pure liquid.[11]
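A minimal sketch of a Carman-Kozeny-type momentum source term of the kind described above, assuming the common enthalpy-porosity formulation in which the liquid fraction varies linearly across a mushy temperature interval; the mushy-zone constant C, the small number eps and the sample temperatures are illustrative values.

```python
def liquid_fraction(T: float, T_solidus: float, T_liquidus: float) -> float:
    """Linear liquid fraction across the mushy interval [T_solidus, T_liquidus]."""
    if T <= T_solidus:
        return 0.0
    if T >= T_liquidus:
        return 1.0
    return (T - T_solidus) / (T_liquidus - T_solidus)

def momentum_sink(T: float, u: float, T_solidus: float, T_liquidus: float,
                  C: float = 1e5, eps: float = 1e-3) -> float:
    """Source term S*u that drives the velocity toward zero in solid cells.

    S vanishes where the cell is fully liquid and grows very large where the
    cell is mostly solid, so pure-liquid control volumes are left untouched.
    """
    lam = liquid_fraction(T, T_solidus, T_liquidus)
    S = -C * (1.0 - lam) ** 2 / (lam ** 3 + eps)
    return S * u

# Example: a paraffin-like PCM melting between 300 K and 302 K.
for T in (298.0, 301.0, 305.0):
    print(T, momentum_sink(T, u=0.01, T_solidus=300.0, T_liquidus=302.0))
```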
Applications
CFD applications for latent thermal energy storage in PCM
Various CFD codes [1-3] have been employed for the modeling and simulation of PCM systems to understand the heat transfer mechanism, the solidification and melting process, the distribution of the temperature profile and the prediction of the air flow. Various commercial packages have been coupled with CFD analysis to assess the feasibility of evaluating the behavior of PCM-integrated systems.
CFD modeling in PCM in mobilized thermal energy storage
The heat transfer behavior of the PCM in mobilized thermal energy storage during the charging process can be successfully simulated by CFD modeling.[4] The Volume-of-Fluid method is employed to solve for the temperature distribution in the multiphase, 2-dimensional, pressure-based model. It accounts for the heat transfer mechanism, the melting time, and the influence of the structure in the charging process, using Fluent 12.1. The governing equations employed are the mass conservation and continuity equations.
CFD analysis on selection of geometry and type of PCM to be used
Integral, quasi-1D calculations have been reported [5], mainly for conduction-dominated problems, using CFD simulation. It was reported that out of three geometries (cubic, cylindrical and spherical), the spherical capsule provides the maximum heat transfer to the heat transfer fluid. It was also concluded that salt hydrate based PCMs are a better choice than organic PCMs.
CFD analysis on PCM in shell and tube latent thermal heat storage system
The systems are developed in such a manner that the phase change material is in the shell portion of the module, with passage for the flow of air through the tubes. Conjugate steady-state CFD heat transfer analysis has been carried out [6] to analyze the flow and temperature variation of the heat transfer fluid in the system. It paves the way for the selection and assessment of the geometrical and flow parameters and the PCM solidification characteristics for the given boundary conditions.
A comparative analysis to further enhance the effectiveness of shell and tube PCM systems has also been accomplished via CFD analysis [7]. Various CFD models with different configurations, such as pins embedded on a tube with heat transfer fluid (HTF) flowing in it and PCM surrounding the tube, fins embedded instead of pins, and different configurations of fins on the tube, are analyzed by employing the ANSYS code.
References
[1] N. Tay, F. Bruno, M. Belusko. Experimental validation of a CFD model for tubes in a phase change thermal energy storage system. International Journal of Heat and Mass Transfer. 55 (2012) 574–85.
[2] G. Zhou, Y. Zhang, Q. Zhang, K. Lin, H. Di. Performance of a hybrid heating system with thermal storage using shape-stabilized phase-change material plates. Applied Energy. 84 (2007) 1068–77.
[3] C. Arkar, S. Medved. Influence of accuracy of thermal property data of a phase change material on the result of a numerical model of a packed bed latent heat storage with spheres. Thermochimica Acta. 438 (2005) 192–201.
[4] A. Hesaraki, J. Yan, H. Li. CFD modeling of heat charging process in a direct-contact container: for mobilized thermal energy storage. LAP LAMBERT Academic Publishing; 2012.
[5] E.B. Retterstøl. Thermal energy storage for environmental energy supply. (2012).
[6] V. Antony Aroul Raj, R. Velraj. Heat transfer and pressure drop studies on a PCM-heat exchanger module for free cooling applications. International Journal of Thermal Sciences. 50 (2011) 1573–82.
[7] N. Tay, F. Bruno, M. Belusko. Comparison of pinned and finned tubes in a phase change thermal energy storage system using CFD. Applied Energy. 104 (2013) 79–86.
[8] Mehling H, Cabeza LF, Heat and cold storage with PCM. 1st Ed. Springer-Verlag Heidelberg; 2008
[9] Ye WB, Zhu DS, Wang N. Numerical simulation on phase-change thermal storage/ release in a plate-fin unit, Applied Thermal Engineering 31 (2011), pp. 3871–3884
[10] Gowreesunker BL, Tassou SA, Kolokotroni M. Improved simulation of phase change processes in applications where conduction is the dominant heat transfer mode, Energy and Buildings 47 (2012), pp. 353–359
[11] P. A. Galione et al., Numerical Simulations of Thermal Energy Storage Systems With Phase Change Materials.
Computational fluid dynamics
Phase transitions | Computational Fluid Dynamics for Phase Change Materials | [
"Physics",
"Chemistry"
] | 1,915 | [
"Physical phenomena",
"Phase transitions",
"Computational fluid dynamics",
"Phases of matter",
"Critical phenomena",
"Computational physics",
"Statistical mechanics",
"Matter",
"Fluid dynamics"
] |
44,315,733 | https://en.wikipedia.org/wiki/Dry%20mortar%20production%20line | Dry mortar production line (or dry mortar machine) is a set of machinery that produces dry mortar (also known as dry premixed mortar or hydraulicity cement mortar) for the construction industry and other uses. It is mainly composed of an elevator, premix bin, stock bin, mixing engine, finished product warehouse, automatic packing machine, dust collector and electric control cabinet. The dry mortar mixer can be vertical or horizontal, and there are many models to choose from according to the customer's actual conditions.
Mechanical principle
The paddles carried by a pair of counter-rotating spindles throw the aggregates up and cause a zero-gravity phenomenon. The aggregates get mixed into each other and form a fluidized zero-gravity zone. Swirling air is generated around the spindles and moves the aggregates for uniform mixing. The final mixture is transferred to the storage bin through a pneumatic gate.
Classifications
According to the structure, the dry mortar production line can be classified into four different types: Tower type, Stair type, Block type and Flat type.
According to the operating mode, the dry mortar production line can be classified as manual type, auto-manual type and auto type.
According to the output size, it can be divided into the simple dry mortar production line and the automatic dry mortar plant. The output of a simple line is generally 1-8 t per hour, and the output of an automatic line is generally more than 15 t per hour. The solution can be customized according to the customer's site and budget.
Application
The dry mortar production line can be adopted in the production of regular dry masonry mortar, plastering mortar, thermal mortar, anti-crack mortar, self-leveling mortar, decorative mortar, etc.
Advantages of dry mix mortar production line
High efficiency, low cost, easy operation and high quality;
Easy to install and maintain;
Low noise and energy consumption;
The dry mix mortar production line has wide application and can be used to produce a variety of mortars.
See also
Mortar (masonry)
References
Cement
Masonry
Building materials
Manufacturing | Dry mortar production line | [
"Physics",
"Engineering"
] | 398 | [
"Masonry",
"Building engineering",
"Manufacturing",
"Construction",
"Materials",
"Building materials",
"Mechanical engineering",
"Matter",
"Architecture"
] |
44,317,435 | https://en.wikipedia.org/wiki/Industrial%20and%20Mining%20Water%20Research%20Unit | The Industrial and Mining Water Research Unit (abbreviated IMWaRU) is one of several research entities based in the School of Chemical and Metallurgical Engineering at the University of the Witwatersrand, Johannesburg. It provides research as well as supervision to masters and doctorate students within the University, as well as consulting to industry.
Unit Structure
The unit deals with cross disciplinary water issues relating to industry and mining. As such the group includes experts in chemical engineering, microbiology and other sciences.
The unit includes five NRF rated researchers and over 20 masters and doctoral level postgraduate students in the faculties of engineering and science.
Members
The group currently comprises 7 academics (alphabetically - Mogopoleng (Paul) Chego, Kevin Harding, Michelle Low, Craig Sheridan, Geoffrey Simate, Karl Rumbold and Lizelle van Dyk), as well as several postgraduate students.
Logo
The logo of the Unit is in the shape of a drop of water, with the left half representing the blue of water.
The right half of the drop is modified to show grass and how water is linked to all life. Underneath the icon are the letters IMWaRU, while to the right, the name "Industrial and Mining Water Research Unit" appears.
Location
The unit is housed in several buildings across the University, most notably in the Richard Ward Building on East campus. Additionally, some members are located in the Biology Building on East Campus and have access to laboratories in that building.
They also have access to an outdoor facility on West Campus where constructed wetland and other outdoor experiments take place.
Research
The group has a broad range of research publications in the areas as listed below:
Acid mine drainage (AMD) - methods of reducing, treating and managing AMD.
Algal Studies - including for water cleaning, and as a source of biomass for biodiesel
Biorefineries - the use of biomass for value-added products, including obtaining these with dual-purpose water treatment.
Constructed wetlands (CW) - wastewater remediation through natural biological processes.
Ecological Engineering - study of creating and sustaining cohabitation conditions for both humans and their environment.
Grade Engineering
Industrial biotechnology - the use of biotechnology in water related applications e.g. for water purification and water reduction.
Industrial Ecology - the use of sustainability principles in reducing environmental impacts; particularly relating to water.
Life-cycle assessment (LCA) - quantification and minimisation of liquid/solid/gaseous waste at sites which include food processing, industrial bioprocessing and others.
Material flow analysis
Membrane technology
Nanotechnology
Ozone - determination of optimal treatment techniques for cooling water purification systems, chemical vs ozone.
Water footprinting (WF) - quantification and minimisation of water use on, amongst others, mine and paper/pulp sites.
Wastewater treatment
and more.
Collaboration
The unit works closely with the Centre in Water Research and Development (CiWaRD), a cross-disciplinary water research think tank.
Active collaborations include the Schools of Law, Chemistry, Civil and Mining Engineering and the Global Change Institute at the university, in addition to the Helmholtz Centre for Environmental Research in Leipzig, Germany. They have also collaborated with the Universities of Cape Town, Geneva, Queensland and the Pontifical Catholic University of Chile.
IMWaRU has had several Technology Innovation Agency (TIA) projects run through Wits Enterprise.
The unit exhibited with several other groups at Mine Closure 2014.
Presentations
Members of the group have had presentations given at:
South African Institution of Chemical Engineering 2012 (Champagne Sports Resorts, South Africa);
Water Institute of Southern Africa 2012 (Cape Town, South Africa);
International Conference on Energy, Nanotechnology and Environmental Sciences 2013 (Johannesburg, South Africa);
International Conference on Power Science and Engineering 2013 (Paris, France);
Water in Mining 2013 (Brisbane, Australia);
Water in Mining 2014 (Viña del Mar, Chile);
Water Institute of Southern Africa 2014 (Mbombela, South Africa);
International Conference on Acid Rock Drainage 2015 (Santiago, Chile);
25th Annual SETAC European Meeting 2015 (Barcelona, Spain);
African Utility Week 2015 (Cape Town, South Africa);
Sustainability Week 2015 (Water Resource Seminar) (Pretoria, South Africa);
Life Cycle Management 2015 (Bordeaux, France);
School of Chemical and Metallurgical Engineering 21st Birthday conference;
Water Institute of Southern Africa 2016 (Durban, South Africa)
Hydrometallurgy 2016 (Cape Town, South Africa),
International Conference on Environment, Materials and Green Technology (Sebokeng, South Africa),
International Conference on Sustainable Materials Processing and Manufacturing (SMPM 2017) (Skukuza, Kruger National Park, South Africa)
International Conference on Energy, Environment and Climate Change (Pointe aux Piments, Mauritius);
2nd International Conference on Sustainable Materials Processing and Manufacturing (SMPM 2019) (Sun City (South Africa)) and more.
Awards
The IMWaRU group was awarded a special presentation award at the GAP Bioscience gala dinner in December 2014 for work on remediating AMD using biological substrates.
Charne Germuizhuizen received the best mine water presentation award, while Mogopoleng (Paul) Chego received the 3rd place best technical talk, at the Water Institute of Southern Africa 2016 (WISA2016) conference in May 2016.
Tamlyn Naidu won the IOM3 2019 "Young Persons' World Lecture Competition"
References
External links
University of the Witwatersrand
Research institutes in South Africa
Water | Industrial and Mining Water Research Unit | [
"Chemistry",
"Engineering"
] | 1,114 | [
"Chemical engineering",
"Chemical engineering organizations"
] |
44,317,529 | https://en.wikipedia.org/wiki/Laminar%20flamelet%20model | The laminar flamelet model is a mathematical method for modelling turbulent combustion. The laminar flamelet model is formulated specifically as a model for non-premixed combustion.
The concept of an ensemble of laminar flamelets was first introduced by Forman A. Williams in 1975, while the theoretical foundation was developed by Norbert Peters in the early 1980s.
Theory
The flamelet concept considers the turbulent flame as an aggregate of thin, laminar (Re < 2000), locally one-dimensional flamelet structures present within the turbulent flow field. The counterflow diffusion flame is a common laminar flame used to represent a flamelet in a turbulent flow. Its geometry consists of opposed and axi-symmetric fuel and oxidizer jets. As the distance between the jets is decreased and/or the velocity of the jets is increased, the flame is strained and departs from its chemical equilibrium until it eventually extinguishes. The mass fraction of species and temperature fields can be measured or calculated in laminar counterflow diffusion flame experiments. When calculated, a self-similar solution exists, and the governing equations can be simplified to only one dimension, i.e. along the axis of the fuel and oxidizer jets. It is in this direction that complex chemistry calculations can be performed affordably.
Logic and formulae
To model non-premixed combustion, governing equations for fluid elements are required. The conservation equation for the species mass fraction is as follows:
Here Le_k is the Lewis number of the k-th species, and the above formula was derived assuming constant heat capacity. The energy equation with variable heat capacity is:
As can be seen from the above formulas, the mass fraction and temperature depend on:
1. Mixture fraction Z
2. Scalar dissipation χ
3. Time
Often the unsteady terms in the above equations are neglected and the local flame structure is assumed to reflect a balance between steady chemistry and steady diffusion, which results in the Steady Laminar Flamelet Model (SLFM). For this, an average value of χ, known as the Favre-averaged value, is computed.
The basic assumption of the SLFM model is that the turbulent flame front behaves locally as a one-dimensional, steady, laminar flame. This proves very useful in reducing the situation to much simpler terms, but it does create problems, since a few effects are not accounted for.
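A short sketch computing the scalar dissipation rate on a one-dimensional mixture-fraction profile, assuming the common definition χ = 2D|∇Z|²; the sample profile and diffusivity are illustrative values, not taken from the text.

```python
import numpy as np

def scalar_dissipation(Z: np.ndarray, x: np.ndarray, D: float) -> np.ndarray:
    """chi = 2 * D * (dZ/dx)**2 along a 1-D line through the flamelet."""
    dZdx = np.gradient(Z, x)
    return 2.0 * D * dZdx ** 2

# Example: mixture fraction falling from 1 (fuel side) to 0 (oxidizer side)
# across a 1 cm gap, with an assumed diffusivity of 1e-5 m^2/s.
x = np.linspace(0.0, 0.01, 201)
Z = 0.5 * (1.0 - np.tanh((x - 0.005) / 0.001))
chi = scalar_dissipation(Z, x, D=1e-5)
print(chi.max())   # the peak sits where the mixture-fraction gradient is steepest
```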
Advantages
The advantages of using this combustion model are as follows:
1. They have the advantage of showing strong coupling between chemical reactions and molecular transport.
2. The steady laminar flamelet model is also used to predict chemical non-equilibrium due to aerodynamic straining of the flame by the turbulence.
Disadvantages
The disadvantages of the Steady Laminar Flamelet model, due to the above-mentioned reason, are:
1. It does not account for curvature effects, which can change the flame structure and which are more detrimental while the structure has not yet reached a quasi-steady state.
2. Transient effects also arise in turbulent flow when the scalar dissipation experiences a sudden change, as the flame structure takes time to stabilize.
To improve on the SLFM approach, a few more models have been proposed, such as the Transient Laminar Flamelet Model (TLFM) by Ferreira.
References
Further reading
1. Versteeg H.K. and Malalasekera W., An introduction to computational fluid dynamics.
2. Stefano Giuseppe Piffaretti, Flame Age Model: a transient laminar flamelet approach for turbulent diffusion flames, A dissertation submitted to the Swiss Federal Institute of Technology in Zurich.
3. N. Peters, Institut für Technische Mechanik RWTH Aachen, Four Lectures on turbulent Combustion.
Combustion
Combustion engineering | Laminar flamelet model | [
"Chemistry",
"Engineering"
] | 753 | [
"Combustion",
"Combustion engineering",
"Industrial engineering"
] |
47,401,261 | https://en.wikipedia.org/wiki/Nitrogen%20dioxide%20poisoning | Nitrogen dioxide poisoning is the illness resulting from the toxic effect of nitrogen dioxide (). It usually occurs after the inhalation of the gas beyond the threshold limit value.
Nitrogen dioxide is reddish-brown with a very harsh smell at high concentrations; at lower concentrations it is colorless but may still have a harsh odour. Nitrogen dioxide poisoning depends on the duration, frequency, and intensity of exposure.
Nitrogen dioxide is an irritant of the mucous membrane that, along with other air pollutants, is linked to pulmonary diseases such as obstructive lung disease, asthma, chronic obstructive pulmonary disease, sometimes acute exacerbation of COPD and, in fatal cases, death.
Its poor solubility in water enhances its passage through the moist oral mucosa of the respiratory tract.
Like most toxic gases, the dose inhaled determines the toxicity on the respiratory tract. Occupational exposures constitute the highest risk of toxicity and domestic exposure is uncommon. Prolonged exposure to low concentration of the gas may have lethal effects, as can short-term exposure to high concentrations like chlorine gas poisoning. It is one of the major air pollutants capable of causing severe health hazards such as coronary artery disease as well as stroke.
Nitrogen dioxide is often released into the environment as a byproduct of fuel combustion but rarely released by spontaneous combustion. Known sources of nitrogen dioxide gas poisoning include automobile exhaust and power stations.
The toxicity may also result from non-combustible sources such as the one released from anaerobic fermentation of food grains and anaerobic digestion of biodegradable waste.
The World Health Organization (WHO) developed a global recommendation limiting exposure to less than 20 parts per billion for chronic exposure and to less than 100 ppb over one hour for acute exposure, using nitrogen dioxide as a marker for other pollutants from fuel combustion.
There is a significant association between indoor levels and increased respiratory symptoms such as wheeze, chest tightness and severity of infections among children with asthma.
Historically, some cities in the United States, including Chicago and Los Angeles, have had levels of nitrogen dioxide higher than the EPA maximum exposure limits of 100 ppb for a one-hour exposure and 53 ppb for chronic exposure.
Signs and symptoms
Nitrogen dioxide poisoning is harmful to all forms of life just like chlorine gas poisoning and carbon monoxide poisoning. It is easily absorbed through the lungs and its inhalation can result in heart failure and sometimes death in severe cases.
Individuals and races may differ in nitrogen dioxide tolerance level and individual tolerance level for the gas may be altered by several factors, such as metabolic rate, barometric pressure, and hematological disorders but significant exposure may result in fatal conditions that could lead to shorter lifespan due to heart failure.
Acute poisoning
Exposure to high level of nitrogen dioxide may lead to inflammation of the mucous membrane and the lower and upper respiratory tracts.
The symptoms of acute nitrogen dioxide poisoning are non-specific and resemble those of ammonia gas poisoning, chlorine gas poisoning, and carbon monoxide poisoning. The symptoms also resemble those of pneumonia or viral infection and other inhalational injuries, but common symptoms include rhinitis, wheezing or coughing, conjunctivitis, headache, throat irritation and dyspnea, which may progress to nasal fissures, ulcerations, or perforation.
The patient is usually ill-appearing and presents with hypoxemia coupled with shallow rapid breathing.
Therapy is supportive and includes removal from further nitrogen dioxide exposure.
Systemic symptoms include fever and anorexia. Electrocardiography and chest radiography can help in revealing diffuse, bilateral alveolar infiltrates.
Chest radiography may be used in diagnosis and the baseline could be established with pulmonary function testing.
There is no specific laboratory diagnostic test for acute nitrogen dioxide poisoning, but analysis of arterial blood gas levels,
methemoglobin level, complete blood count, glucose test, lactate threshold measurement and peripheral blood smear may be helpful in the diagnosis of nitrogen dioxide poisoning.
The determination of nitrogen dioxide in urine or tissue does not establish the diagnosis, and there are technical and interpretive problems with these tests.
Chronic poisoning
Prolonged exposure to high levels of nitrogen dioxide can have an inflammatory effect that principally targets the respiratory tracts leading to chronic nitrogen dioxide poisoning which can occur within days or weeks after the threshold limit value is excessively exceeded.
This condition causes fever, rapid breathing coupled with rapid heart rate, labored breathing and severe shortness of breath. Other effects include diaphoresis, chest pain, and persistent dry cough, all of which may result in weight loss, anorexia and may also lead to right-side heart enlargement and heart disease in advanced cases.
Prolonged exposure to relatively low levels of nitrogen (II) oxide may cause persistent headaches and nausea.
Like chlorine gas poisoning, symptoms usually resolve themselves upon removal from further nitrogen dioxide exposure, unless there had been an episode of severe acute poisoning.
Treatment and management vary with symptoms. Patients are often observed for hypoxemia for a minimum of 12 hours if there are no initial symptoms; if the patient is hypoxemic, oxygen may be administered, while high-dose steroids are recommended for patients with pulmonary manifestations. Patients may also be hospitalized for 12 to 24 hours or longer for observation if gaseous exchange is impaired.
In cases where gaseous exchange is impaired, mechanical ventilation and intubation may be necessary, and if bronchiolitis obliterans develops within 2 to 6 weeks of nitrogen dioxide exposure, corticosteroid therapy or anticholinergic medications may be required for 6 to 12 months to lower the body's overreaction to nitrogen dioxide gas.
Cause
Occupational exposures constitute the highest risk of toxicity, and the risk is often high for farmers, especially those that deal with food grains. It is equally high for firefighters and military personnel, especially officers that deal in explosives. The risk is also high for arc welders, traffic officers, aerospace staff and miners, as well as people whose occupations are connected with nitric acid. Silo-filler's disease is a consequence of nitrogen dioxide exposure among farmers dealing with silos. Food grains such as corn and millet, as well as grasses such as alfalfa and some other plant material, produce nitrogen dioxide within hours due to anaerobic fermentation. The threshold concentrations of nitrogen dioxide are often attained within 1 to 2 days and begin to decline gradually after 10 to 14 days, but if the silo is well sealed, the gas may remain there for weeks. Heavily fertilized silage, particularly that produced from immature plants, generates a higher concentration of the gas within the silo.
Nitrogen dioxide is about 1.5 times heavier than air and during silage storage, nitrogen dioxide remains in the silage material.
Improper ventilation may result in exposure during the leveling of the silage.
Pathophysiology
Nitrogen dioxide is sparingly soluble in water and on inhalation, it diffuses into the lung and slowly hydrolyzes to nitrous and nitric acid which causes pulmonary edema and pneumonitis leading to the inflammation of the bronchioles and pulmonary alveolus resulting from lipid peroxidation and oxidative stress.
The mucous membrane is primarily affected, along with type I pneumocytes and the respiratory epithelium. The generation of free radicals from lipid peroxidation results in irritation of the bronchioles and alveoli that causes rapid destruction of the respiratory epithelial cells.
The overall reaction results in the release of fluid that causes pulmonary edema.
Nitrogen dioxide poisoning may alter macrophage activity and immune function leading to susceptibility of the body to a wide range of infections, and overexposure to the gas may also lead to methemoglobinemia, a disorder characterized by a higher than normal level of methemoglobin (metHb, i.e., ferric [Fe3+] rather than ferrous [Fe2+] haemoglobin) in the blood.
Methemoglobinemia prevents the binding of oxygen to haemoglobin causing oxygen depletion that could lead to severe hypoxia.
If nitrogen dioxide poisoning is untreated, fibrous granulation tissue is likely to develop within the alveolar ducts, tiny ducts that connect the respiratory bronchioles to alveolar sacs, each of which contains a collection of alveoli (small mucus-lined pouches made of flattened epithelial cells). The overall reaction may cause an obstructive lung disease. Meanwhile, proliferative bronchiolitis is a secondary effect of nitrogen dioxide poisoning.
Epidemiology
The EPA has regulations and guidelines for monitoring nitrogen dioxide levels. Historically, some areas in the US, including Chicago, the Northeast corridor and Los Angeles, have had high levels of nitrogen dioxide.
In 2006, the WHO estimated that over 2 million deaths result annually from air pollution, of which nitrogen dioxide constitutes one of the pollutants. Over 50% of the disease burden that results from these pollutants falls on developing countries, and the effects in developed countries are also significant. An EPA survey suggests that 16 percent of housing units in the United States are sited close to an airport, highway or railroad, placing approximately 48 million people at increased risk of exposure.
A feasibility study of the ozone formed from the oxidation of nitrogen dioxide in ambient air, reported by the WHO, suggested that a 1 to 2% increase in daily deaths is attributable to exposure to ozone concentrations above 47.3 ppb, exposure above 75.7 ppb is associated with a 3 to 5% increase in daily mortality, and a level of 114 ppb was associated with a 5 to 9% increase in daily mortality.
Silo filler's disease is pervasive during the harvest seasons of food grains.
In May 2015, the National Green Tribunal directed Delhi and other states in India to ban diesel vehicles over 10 years old as a measure to reduce nitrogen dioxide emissions that may result in nitrogen dioxide poisoning. In 2008, a report of the United Kingdom Committee on the Medical Effects of Air Pollutants (COMEAP) suggested that air pollution is the cause of about 29,000 deaths in the UK. The WHO urban air quality database estimated Delhi's mean annual PM 10 level in 2010 as 286 μg/m3 and London's as 23 μg/m3. In 2014, the database estimated Delhi's annual mean PM 2.5 particulate matter level in 2013 as 156 μg/m3, whereas London had only 8 μg/m3 in 2010, but the nitrogen dioxide in London breached the European Union's standard. In 2013, the annual mean nitrogen dioxide level in London was estimated as 58 μg/m3, while the safe "threshold limit value" is 40 μg/m3. In March 2015, Brussels took the United Kingdom to court for breaching emission limits of nitrogen dioxide at its coal-fired Aberthaw power station in Wales. The plant operated under a permit allowing emissions of 1200 mg/Nm3, which is more than twice the 500 mg/Nm3 limit specified in the EU's large combustion plant directive.
Prognosis
Generally, the long-term prognosis is good for those who survive the initial exposure to nitrogen dioxide. Some cases of nitrogen dioxide poisoning resolve with no observable symptoms, and residual impairment may be determined by pulmonary function testing. If chronic exposure causes lung damage, it could take several days or months for pulmonary function to improve. Meanwhile, permanent mild dysfunction may result from bronchiolitis obliterans and could manifest as abnormal flow at 50 to 70 percent of vital capacity. It may also manifest as mild hyperinflammation or airway obstruction, in which case the patient may be given steroid treatment to treat deconditioning.
Complications from prolonged exposure include bronchiolitis obliterans and secondary infections such as pneumonia, due to injuries to the mucous membrane from pulmonary edema and inhibition of the immune system by nitrogen dioxide. Nitrogen dioxide inhalation can result in short- and long-term morbidity or death, depending on the extent of exposure, the inhaled concentration and the exposure time.
Illness resulting from acute exposure is usually not fatal, although some exposures may cause bronchiolitis obliterans, pulmonary edema or rapid asphyxiation.
If the concentration of exposure is excessively high, the gas may displace oxygen resulting in fatal asphyxiation.
Generally, patients and workers should be educated by medical personnel on how to identify the signs and symptoms of Nitrogen dioxide poisoning.
Farmers and other farm workers should be educated on the proper way of food grain storage to prevent silo filler's disease.
Biochemical effects
Chronic exposure to high levels of nitrogen dioxide results in the allosteric inhibition of glutathione peroxidase and glutathione S-transferase, both important enzymes of the mucous membrane antioxidant defense system that catalyse nucleophilic attack by reduced glutathione (GSH) on non-polar compounds containing an electrophilic carbon or nitrogen. These inhibition mechanisms generate free radicals that cause peroxidation of the lipids in the mucous membrane, leading to increased peroxidized erythrocyte lipids, a reaction that proceeds by a free radical chain mechanism and results in oxidative stress. The oxidative stress on the mucous membrane causes the dissociation of the GSTp-JNK complex, oligomerization of GSTP and induction of the JNK pathway, resulting in apoptosis or inflammation of the bronchioles and pulmonary alveoli in mild cases.
On migrating to the bloodstream, nitrogen dioxide poisoning results in an irreversible inhibition of the erythrocyte membrane acetylcholinesterase which may lead to muscular paralysis, convulsions, bronchoconstriction, the narrowing of the airways in the lungs (bronchi and bronchioles) and death by asphyxiation.
It also causes a decrease in glucose-6-phosphate dehydrogenase, which may result in glucose-6-phosphate dehydrogenase deficiency, known as favism, a condition that predisposes to hemolysis (spontaneous destruction of red blood cells).
Acute and chronic exposure also reduces glutathione reductase, an enzyme that catalyzes the reduction of glutathione disulfide (GSSG) to the sulfhydryl form glutathione (GSH), which is a critical molecule in resisting oxidative stress and maintaining the reducing environment of the cell.
Reproductive effects
Exposure to nitrogen dioxide has a significant effect on the male reproductive system by inhibiting the production of Sertoli cells, the "nurse" cells of the testicles that are part of a seminiferous tubule and help in the process of spermatogenesis.
These effects consequently retard the production of sperm cells.
The effects of nitrogen dioxide poisoning on female reproduction may be linked with the effects of oxidative stress on female reproduction.
Nitrogen dioxide poisoning disrupts the balance of reactive oxygen species (ROS), which results in oxidative stress, leading to significant effects on the female reproductive lifespan. ROS play a significant role in body physiology, from oocyte production, development and maturation to fertilization, development of the embryo and gestation.
Exposure to nitrogen dioxide causes ovulation-induced oxidative damage to the DNA of ovarian epithelium.
There is a growing body of literature on the pathological effects of ROS on female reproduction as evidenced by free-radical-induced birth defects, abortions, hydatidiform moles and pre-eclampsia. ROS also play a significant role in the etiopathogenesis of endometriosis, a disease in which tissue that normally grows inside the uterus grows outside of it.
Oxidative stress causes defective placentation, which is likely to lead to placental hypoxia (shortage of oxygen in the placenta) as well as reperfusion injury resulting from ischemia, which may lead to endothelial cell dysfunction.
Increased oxidative stress caused by nitrogen dioxide poisoning may result in ovarian epithelium inflammation and potentially to cancer in the most severe cases.
References
External links
Inorganic nitrogen compounds
Nitrogen oxides
Hazardous air pollutants
Smog
Free radicals
Food additives
Toxic effects of substances chiefly nonmedicinal as to source
Gases
Medical emergencies
Suicide by poison
Industrial hygiene
Indoor air pollution | Nitrogen dioxide poisoning | [
"Physics",
"Chemistry",
"Biology",
"Environmental_science"
] | 3,440 | [
"Visibility",
"Gases",
"Toxicology",
"Physical quantities",
"Inorganic compounds",
"Smog",
"Phases of matter",
"Free radicals",
"Inorganic nitrogen compounds",
"Senescence",
"Biomolecules",
"Toxic effects of substances chiefly nonmedicinal as to source",
"Statistical mechanics",
"Matter"
] |
61,929,977 | https://en.wikipedia.org/wiki/B%C3%BClent%20%C5%9E%C4%B1k | Bülent Şık is a Turkish food engineer, environmental and human rights activist and a whistleblower. He was convicted after disclosing the results from a government study on environmental pollution and carcinogens.
Early life and education
Career
Şık has worked at Akdeniz University in Antalya, where he was a deputy director of the Food Safety and Agricultural Research Center.
In the early 2010s, Şık worked on a 5-year research project for the Turkey's Ministry of Health investigating a possible relation between the high incidence of cancer in western Turkey (Kocaeli, Tekirdağ, Kırklareli, Edirne and Antalya) and toxicity in local soil, water, and food. Şık found dangerous levels of toxicity in a number of food and water samples, concluding that water in several residential areas is unsafe for drinking. In 2015, he reported his findings to the government.
In 2016, he was fired from his university position as assistant professor by a presidential decree-law after signing a petition "calling for peace between Turkish forces and Kurdish militants in southeast Turkey".
In April 2018, as no action was taken on the water pollution for three years, Şık published his findings in the opposition newspaper Cumhuriyet. After the publication, the Turkish government claimed the newspaper publication violated the confidentiality clauses prohibiting to reveal the findings unless approved by the authorities, but it did not deny the accuracy of information. Subsequently the Ministry of Health sued Şık for "revealing confidential information as well as provoking outrage among the public".
On 26 September 2019, Şık was sentenced to 15 months in jail for "disclosing information about duty" while he has been acquitted of "providing prohibited information". Amnesty International has criticized the trial, describing Şık as a whistleblower.
Private life
Bülent Şık is the brother of Ahmet Şık, a journalist and an opposition party member of Parliament.
References
Living people
20th-century births
Food engineers
Turkish engineering academics
Academic staff of Akdeniz University
Cancer researchers
Turkish environmentalists
Turkish human rights activists
Turkish whistleblowers
Turkish prisoners and detainees
Year of birth missing (living people) | Bülent Şık | [
"Engineering"
] | 449 | [
"Food engineers",
"Food engineering"
] |
61,934,142 | https://en.wikipedia.org/wiki/PCB%20reverse%20engineering | Reverse engineering of printed circuit boards (sometimes called “cloning”, or PCB RE) is the process of generating fabrication and design data for an existing circuit board, either closely or exactly replicating its functionality.
Obtaining circuit board design data is not by necessity malicious or aimed at intellectual property theft. The data generated in the reverse engineering process can be used for troubleshooting, repair, redesign and re-manufacturing, or even testing the security of a device to be used in a restricted environment.
Uses
Legacy product support
Legacy systems need maintenance and replacement parts to operate past their intended life cycle. Demand for parts that are no longer being manufactured can lead to material shortages of parts, called DMS/DMSMS.
There is so much demand that entire government divisions have been created to regulate and plan for the obsolescence of those systems and parts. Areas commonly affected by technical obsolescence include power station controls, ATC and aviation controls, medical imaging systems, and many aspects of military technology.
There are many legacy systems developed in the 1970s, 80s or 90s whose original manufacturer is no longer in business or no longer has the original design data, but whose original equipment is still in use. In many cases exact form, fit and function is required, either so that parts can "handshake" properly with the existing framework, or to avoid the requirements of time-consuming and costly testing.
For industries with highly regulated electronics, (like military or aerospace) this approach can vastly reduce the time required to fabricate replacement parts for system repairs, since the new part's specifications match the original design exactly and therefore do not need to undergo the same level of rigorous re-certification and testing that would be required of a newly designed or revised circuit board.
For example, a power company in Florida was forced to shut down due to the failure of a single, inexpensive PCB, which had no replacement parts and no data available to print them. The failure occurred during peak usage hours, and a power outage at that time can cost a power company thousands of dollars per hour.
An engineering firm successfully reverse engineered the PCB to generate an exact copy of the PCB using the destructive imaging and milling process, and the power station was subsequently able to resume normal operation.
Benchmarking
The process can be used to provide important benchmarking information about newly acquired products, prototype PCBs or any circuit board the company does not own. For example, reverse engineering a circuit assembly reveals whether or not the fabricator has exactly matched the design specifications of the board.
The process can be used to inspect for counterfeit or malicious circuits embedded in a PCB, or, if a new product has been purchased by a company, to create schematics or other documentation that may not have been included with the product.
Use with additive manufacturing
Data from the reverse engineering process can be used to immediately repair or reprint a circuit board using additive manufacturing techniques on multi-headed 3-D printers.
In situations where resources are limited like on a ship, submarine, space, or forward deployment, the reverse engineering process can enable a crew to maintain electronics equipment without being required to bring along spare parts. In an ideal scenario, the crew would have access to the design data to use with the 3D printer, but in the event that crew did not have the proper data for the PCBs, they would need to reverse engineer the artifact on hand to create more.
Malicious Intent
Data from reverse engineering can be taken with good intentions, but mitigating intellectual property theft and maintaining privacy are increasingly important. Obfuscating PCBs, or hiding the intent of the processing, is one way to help deter theft. Another is using physical unclonable functions (PUFs) as a digital fingerprint on a PCB that is impossible to recreate.
Methods
Types
Destructive RE
Destructive reverse engineering (DRE) is a process where all layers of the board are imaged and subsequently removed by various milling techniques or tools. While it is possible to use nearly any camera or image source for this method, purpose-built RE systems utilize calibrated image sources that allow for extremely accurate reproduction of the design data for the board. This allows an engineer to match the exact form, fit and function of the original PCB. The drawback to this method is that it destroys the PCB. If the data comes from the last remaining circuit card in existence, it cannot be compared to a sample since little or no circuit board remains at the end of the destructive process. Also, care must be taken during the milling process to avoid damaging the copper. If areas of copper are removed before they are imaged, this represents a permanent loss of data which can only be rectified by existing documentation of the PCB, or by reverse engineering a second, identical board.
Non-Destructive RE
There is a growing desire and need for non-destructive reverse engineering technology (NDRE), especially in scenarios like the one mentioned above where there is only a single PCB that can be used. Non-destructive PCB RE (NDRE) means that the circuit board itself is not destroyed in the process; however, most non-destructive techniques require removing components from the surface of the board.
The primary difference between DRE and NDRE methods is in the way that images of the board are captured before new data is generated - in some cases optical images of the top and bottom of the board are captured, then merged with X-ray images of the board's internal layers. Once all images of all of the layers of the board have been captured, the process of generating digital manufacturing data is similar to the destructive process.
X-ray Computed Tomography
In recent years, X-ray computed tomography-based imaging processes have advanced to the point that they are able to capture images of the circuit board well enough to isolate individual layers and the features on each of these layers. For simpler boards, X-ray or CT Scans can provide high enough resolution images to reverse engineer a board without requiring the use of destructive milling.
Generally, a high resolution CT scanning machine will capture images of the board in 2-D slices, varying the angle and intensity. The resulting image captures of the board are computationally assembled into a 3-D volumetric model, and images of each layer can then be extracted. Additional research is presently underway to improve the procedures of CT scanning, volumetric data reconstruction, and circuit layer extraction.
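A toy sketch of pulling a single copper layer out of a reconstructed CT volume, as described above, assuming the reconstruction is available as a NumPy array indexed (z, y, x) and that copper appears as high-attenuation voxels; the slice range and threshold are illustrative.

```python
import numpy as np

def extract_layer(volume: np.ndarray, z_start: int, z_stop: int,
                  copper_threshold: float) -> np.ndarray:
    """Collapse the slices spanning one circuit layer into a 2-D copper mask.

    volume: reconstructed CT data indexed (z, y, x), values proportional to attenuation.
    """
    layer = volume[z_start:z_stop].mean(axis=0)   # average the layer's slices
    return layer > copper_threshold               # boolean copper / no-copper image

# Example with synthetic data: 64 slices of a 256 x 256 reconstruction.
volume = np.random.rand(64, 256, 256)
mask = extract_layer(volume, z_start=10, z_stop=14, copper_threshold=0.8)
print(mask.shape, mask.mean())    # fraction of pixels flagged as copper
```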
In principle this process seems fairly simple; however, issues such as the non-planarity of circuit layers, resolution and size limitations, and X-ray artifacts greatly complicate the extraction of usable circuit images.
X-ray/CT imaging processes suffer from several drawbacks, including limited resolution, high equipment costs, and beam hardening and other X-ray artifacts, which can distort images or make them harder to use for the reverse engineering process. Additionally, some IC chips can be damaged by exposure to powerful X-rays, so the board must be depopulated before being imaged if components are to be salvaged for reuse.
Another drawback is the time involved in creating the images used to generate circuit board design data. In one study, a Versa 510 X-ray machine was used to image a 6 layer board, measuring about - the imaging and processing of the cloud data took over 18 hours to complete. By comparison, destructive reverse engineering can produce high resolution, calibrated optical images of the same 6 layer board in under 2 hours at very low cost by a skilled operator.
Flying Probe Test
Often, a flying probe test machine (FPT machine) can also be used to generate data from a circuit board. Unlike destructive methods, with this process the PCB can generally be reused. But the only output from this process is a list of connections between surface pads on the board, also known as a netlist.
The netlist is entirely dependent on the electrical connectivity of the PCB. If a PCB has become damaged or delaminated over the course of its life-cycle, it is possible that either the via barrels or the copper traces have broken, and if the damage occurs on the inner layers of the PCB, the FPT operator has no way of knowing about it. The resulting netlist will reflect the breaks in the tracks and should not be used to produce a schematic or additional boards. Additionally, a netlist is a fairly narrow data format that only indicates whether different component pins are connected. It contains no information about the internal geometries of the copper circuits, which are crucial to the proper functioning of radio-frequency circuits or circuits with differential signalling. It is therefore impossible to create an identical PCB from a netlist alone. These drawbacks mean that this method is generally reserved for the creation of schematics or for troubleshooting and repair purposes.
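As a minimal illustration of how little information a bare netlist carries, the hypothetical Python sketch below represents a flying-probe result as nothing more than named nets mapping to component pads. Every net, reference designator, and pin number is invented for illustration; the point is simply that connectivity is the only question such a structure can answer.

```python
# Hypothetical flying-probe output: each net is just a set of component pads
# that were found to be electrically continuous. All names are invented.
netlist = {
    "NET_VCC":  [("U1", 8), ("C3", 1), ("R2", 2)],
    "NET_GND":  [("U1", 4), ("C3", 2), ("J1", 5)],
    "NET_DATA": [("U1", 2), ("R2", 1), ("J1", 3)],
}

def pads_connected(netlist, pad_a, pad_b):
    """Return True if two pads share a net -- the only question a netlist answers."""
    return any(pad_a in pads and pad_b in pads for pads in netlist.values())

print(pads_connected(netlist, ("U1", 8), ("C3", 1)))  # True
# Nothing here records trace width, routing, layer assignment, or length,
# which is why a netlist alone cannot reproduce an identical PCB.
```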
Films
Before the digital age of data processing and storage, PCB designers created and stored the designs on Mylar/BoPET drafting films, which were used in the photo-resistive fabrication process for circuit boards. These films were oftentimes the only copy of the design data for the board. While their primary use was in the manufacturing of PCBs they also doubled as their own storage media. Ultimately these films can disintegrate with time and use, so the design must be imaged and converted to vector formatting in order to be used for future fabrication. The reverse engineering of film sets is roughly the same process as reverse engineering a PCB - each layer is imaged, and Gerber/vector data is created for the different circuit layers.
Final outputs and reproduction
Whether the board is reverse engineered using a destructive or a non-destructive method, the result includes a netlist. While the netlist itself cannot be used to create an identical replacement, it can be used to generate supporting data for the board, such as a schematic. Whereas a netlist is a plain ASCII text file that lists all of the connections of the board, a PCB schematic conveys the same information in a more visual manner.
In addition, a schematic can be merged with the bill of materials (BOM) and component pick and place data to further enhance its usability in troubleshooting scenarios, or can be used as a base for the design of a brand new PCB. If a destructive RE process has been used or images for all PCB layers have been captured using X-ray imaging, the resulting data should include not only a netlist, BOM, and/or Schematic, but also a complete graphical layout of the copper layers of the board. This data can be represented in a vast number of different formats, but the most common data formats created in the reverse engineering process include the following:
Circuit layers (Gerber RS274x, IPC-2581 or ODB++)
Soldermask and solderpaste/stencil cut files (Gerber RS274x)
Drill files (Excellon II/ASCII and/or Gerber RS274x)
Plated and NonPlated Through-holes (Excellon II/ASCII)
Per-layer Blind/Buried Drills (Excellon II/ASCII)
Component Centroid/Pick-and-place data (ASCII) and component pinouts
Component Netlist (IPC-D-356/ASCII)
BOM (Spreadsheet)
Schematics (PDF, Cadence Allegro, OrCAD, Altium, PADS, and other proprietary formats commonly available)
The data produced in the reverse engineering process can be immediately sent to a PCB manufacturer for fabrication of replica/"clone" PCBs, or be used for creation of supporting documents.
References
Reverse engineering
Printed circuit board manufacturing | PCB reverse engineering | [
"Engineering"
] | 2,391 | [
"Electrical engineering",
"Electronic engineering",
"Reverse engineering",
"Printed circuit board manufacturing"
] |
61,934,716 | https://en.wikipedia.org/wiki/AC%2020-152 | The Advisory Circular AC 20-152A, Development Assurance for Airborne Electronic Hardware, identifies the RTCA-published standard DO-254 as defining "an acceptable means, but not the only means" to secure FAA approval of electronic hardware for use within the airspace subject to FAA authority. With the 2022 release of Revision A, this Advisory Circular became an important instrument for supplementing the guidance of DO-254 and providing applicants with clarifications and additional information on that standard.
Initially, the DO-254 was commonly interpreted as applying only to complex custom micro-coded components within aircraft systems with Item Design Assurance Levels (IDAL) of A, B, or C. DO-254 guidance on simple electronic hardware and other topics needed some clarification. However, Revision A of this AC clarifies that AC 20-152() and DO-254 apply to the type certification of all electronic hardware aspects of airborne systems, including all electronic hardware that is not complex, that is, "simple electronic devices". Revision A also defines 29 objectives in addition to those identified in DO-254; applicants choosing to follow DO-254 under the authority of AC 20-152A must also accomplish these additional objectives if they apply to their particular hardware.
Specifically excluding COTS microcontrollers (see AC 20-115()/DO-178C), complex custom micro-coded components include field programmable gate arrays (FPGA), programmable logic devices (PLD), and application-specific integrated circuits (ASIC), particularly in cases where correctness and safety can not be verified through testing alone, necessitating methodical design assurance. Simple devices are those that are verifiable with testing alone, such that the FAA may agree that methodical design assurance is unnecessary.
For DAL D hardware, as long as the applicant follows DO-254, the applicant does not need to apply this advisory circular since the FAA does not expect to examine the life cycle data. However, if the applicant chooses to follow other design practices for DAL D hardware (as permitted by this AC) the FAA will review the data.
Certain of the new objectives in AC 20-152A explicitly address DO-254's application to circuit board assemblies (CBAs).
Relationship to FAA Order 8110.105
With the release of the expanded AC 20-152A and its companion AC 00-72, Best Practices for Airborne Electronic Hardware Design Assurance Using EUROCAE ED-80() and RTCA DO-254(), chapters 3 through 6 of FAA Order 8110.105A were removed in a Revision B released in 2024 to eliminate any duplication or conflict with the new ACs. The removed sections had been published as an expedient solution to the concerns of authorities and applicants, but the FAA ultimately did not wish to provide applicants with guidance and clarifications in its orders to certification staff. Where this pair of new ACs replaces material in Order 8110.105, AC 20-152A provides new guidance to close gaps in DO-254, while AC 00-72 provides "additional information" on some of the new objectives in AC 20-152A.
Revision History
References
External links
AC 20-152A, Design Assurance Guidance for Airborne Electronic Hardware, FAA
Avionics
Safety
Software requirements
RTCA standards
Computer standards | AC 20-152 | [
"Technology",
"Engineering"
] | 678 | [
"Software requirements",
"Computer standards",
"Avionics",
"Software engineering",
"Aircraft instruments"
] |
55,940,145 | https://en.wikipedia.org/wiki/Climate%20restoration | Climate restoration is the climate change goal, and the associated actions, of restoring atmospheric CO2 to levels humans have actually survived long-term, below 300 ppm. This would restore the Earth system generally to a safe state, for the well-being of future generations of humanity and nature. Actions include carbon dioxide removal from Earth's atmosphere, which, in combination with emissions reductions, would reduce the level of CO2 in the atmosphere and thereby reduce the global warming produced by the greenhouse effect of an excess of CO2 over its pre-industrial level. Actions also include restoring pre-industrial atmospheric methane levels by accelerating natural methane oxidation.
Climate restoration enhances legacy climate goals (stabilizing Earth's climate) to include ensuring the survival of humanity by restoring CO2 to levels of the last 6000 years that allowed agriculture and civilization to develop.
Restoration and mitigation
Climate restoration is the goal underlying climate change mitigation, whose actions are intended to "limit the magnitude or rate of long-term climate change". Advocates of climate restoration accept that climate change has already had major negative impacts which threaten the long-term survival of humanity. The current mitigation pathway leaves the risk that conditions will go beyond adaptation and abrupt climate change will be upon us. There is a human moral imperative to maximize the chances of future generations' survival. By promoting the vision of the "survival and flourishing of humanity", with the Earth System restored to a state close to that in which our species and civilization evolved, advocates claim that there is a huge incentive for innovation and investment to ensure that this restoration takes place safely and in a timely fashion. As stated in "The Economist" in November 2017, "in any realistic scenario, emissions cannot be cut fast enough to keep the total stock of greenhouse gases sufficiently small to limit the rise in temperature successfully. But there is barely any public discussion of how to bring about the extra "negative emissions" needed to reduce the stock of CO2 ... Unless that changes, the promise of limiting the harm of climate change is almost certain to be broken."
Climate restoration as a policy goal
A first peer-reviewed article about climate restoration was published in April 2018 by the Rand Corporation.
The analysis "examines climate restoration through the lens of risk management under conditions of deep uncertainty, exploring the technology, economic, and policy conditions under which it might be possible to achieve various climate restoration goals and the conditions under which society might be better off with (rather than without) a climate restoration goal." One key finding of the study is that it would be possible to restore the atmospheric concentrations to preindustrial levels at an acceptable cost under two scenarios, where greenhouse gas reductions and direct air capture (DAC) technologies prove to be economically efficient. One example is Carbon Engineering, a Canadian-based clean energy company focussing on the commercialization of Direct Air Capture (DAC) technology that captures carbon dioxide () directly from the atmosphere.
One key recommendation of the Rand Corporation study is that an ambitious climate restoration goal may seek to achieve preindustrial CO2 concentration by 2075, or by the end of the century. It concludes that "The best we can do is pursue climate restoration with a passion while embedding it in a process of testing, experimentation, correction, and discovery."
On September 25, 2018, Rep. Jamie Raskin introduced a resolution on Climate Restoration to the U.S House Committee of Energy and Commerce, concluding with "Whereas scientists have researched methods for keeping warming below 2°C, but have not yet researched the best methods to remove all excess CO2, stop sea-level rise, and restore a safe and healthy climate for future generations; and whereas declaring a goal of restoring a safe and healthy climate will encourage scientists to research the most effective ways to restore safe CO2 levels, stop sea-level rise, and restore a safe and healthy climate for future generations." This was followed by the Congressional Climate Emergency Resolutions (S.Con.Res.22, H.Con.Res.52) which "demands a national, social, industrial, and economic mobilization of the resources and labor of the United States at a massive-scale to halt, reverse, mitigate, and prepare for the consequences of the climate emergency and to restore the climate for future generations...."
On August 23, 2023, the California Senate passed SR-34, the nation's first resolution to explicitly recognize climate restoration as a policy priority It concludes: "WHEREAS, Climate restoration will benefit the people of the State of California by reducing losses and damage from wildfires, while producing positive effects on human and ecosystem health, industry, and jobs in agriculture and other sectors; now, therefore, be it resolved by the Senate of the State of California, That the Senate formally recognizes the obligation to future generations to restore a safe climate, and declares climate restoration, along with achieving net-zero and net-negative CO2 emissions, a climate policy priority; and be it further resolved, That the Senate calls on the State Air Resources Board to engage necessary federal entities as appropriate to urge the United States Ambassador to the United Nations to propose a climate treaty that would restore and stabilize GHG levels as our common climate goal."
Critical parameters
The endpoint goal of climate restoration is to maximize the probability of survival of our species and civilization by restoring atmospheric CO2 levels. The approximate target levels are those of the Holocene norm in which our species and civilization most recently evolved. This is stated technically as "pre-industrial", or poetically as "like our grandparents had a hundred years ago". Numerically, the goal is stated as getting atmospheric CO2 back below the highest levels humans have actually survived long-term, 300 ppm, by 2050. Achieving this will require permanently removing approximately a trillion tonnes of atmospheric CO2.
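The "approximately a trillion tonnes" figure can be sanity-checked with the commonly cited rule of thumb of roughly 7.8 Gt of CO2 per 1 ppm of atmospheric concentration; the short sketch below applies it to the 420 ppm to 300 ppm reduction. This is a rough atmospheric-stock calculation only, and it ignores re-equilibration with the ocean and biosphere, so the conversion factor and result are illustrative rather than exact.

```python
# Rough sanity check of the removal target (rule-of-thumb conversion factor).
GT_CO2_PER_PPM = 7.8          # ~2.13 Gt of carbon, i.e. ~7.8 Gt of CO2, per 1 ppm

current_ppm, target_ppm = 420, 300
removal_gt = (current_ppm - target_ppm) * GT_CO2_PER_PPM
print(f"~{removal_gt:.0f} Gt CO2, about {removal_gt / 1000:.1f} trillion tonnes")
# -> ~936 Gt CO2, consistent with "approximately a trillion tonnes"
```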
Critical parameters of the Earth System include:
levels of climate forcing agents in the atmosphere, especially CO2 and methane for positive forcing and aerosols for negative forcing;
global mean surface temperature (compared to some baseline) and its rate of increase;
sea level and the rate that sea level is rising;
ocean pH and the rate of ocean acidification;
Ice levels of the polar ice caps.
One of the principal goals for climate restoration is to bring the CO2 level down from its current level of ~420 ppm (2022) towards its pre-industrial level of ~280 ppm. Not only will this reduce CO2's global warming effect but also its effect on ocean acidification. The removed carbon would be sequestered or used as a construction material.
Climate restoration open letter
On November 13, 2020, an open letter, put together by the youth organisation Worldward, calling for climate restoration was published in the Guardian newspaper. The letter was signed by prominent scientists and activists, including: Michael E Mann, Dr James Hansen, George Monbiot, Hindou Oumarou Ibrahim, Dr Rowan Williams, Bella Lack, Will Attenborough, Mark Lynas, Chloe Ardijis, Dr Shahrar Ali, and many more. After its publication, the letter was opened up to general signatories, and the signatories published on Worldward's website.
Climate Restoration publications
White Paper
On September 17, 2019, the Foundation for Climate Restoration published a White Paper on existing Climate Restoration solutions and developing technologies. These solutions and technologies include proven, commercially viable projects, such as creating synthetic rock from carbon captured in the air for use in construction and paving, as well as emerging methods for removing and storing carbon and for restoring oceans and fisheries. The White Paper also discusses Climate Restoration strategy and costs. A main goal of the Foundation for Climate Restoration is the reduction of atmospheric CO2 to below 300 ppm (i.e. near its pre-industrial level) by 2050.
Climate Restoration: The Only Future That Will Sustain the Human Race
Authored by Peter Fiekowsky and Carole Douglis, this book was published on April 21, 2022. It describes, among other things, the criteria for climate restoration: permanence (the CO2 stays out of the atmosphere for at least 100 years); scalability (the method must be able to remove at least 25 billion tons of CO2 a year); and financial viability (funding for at-scale carbon removal must be in place). It then describes four solutions that appear to fit the criteria: a) ocean fertilization; b) synthetic limestone; c) seaweed; d) enhanced atmospheric methane oxidation using iron chloride. It claims that the required technologies and finance are now in place to restore the climate. Scale-up now requires that the restoration goal be endorsed by the UN and large NGOs so that investors and governments can justify funding the projects. Because the projects are commercially self-funding, initial investments of only $2 billion per year through 2030 are estimated to be required globally.
Limitations
Not every aspect of the Earth System can be returned to a previous state: notably the warming of the deep sea or deep ocean and the associated sea level rise which has already taken place may be essentially irreversible this century. Conversely, there are certain aspects of the Earth System that need to be improved with respect to the recent past: notably food productivity, considering an increased global population by 2050 or 2100.
Key organisations
Worldward
Foundation for Climate Restoration
Global Coalition for Climate Restoration
The Climate Foundation
References
Climate engineering
Climate change policy
Ecological restoration
Environmental terminology
Planetary engineering | Climate restoration | [
"Chemistry",
"Engineering"
] | 1,881 | [
"Planetary engineering",
"Geoengineering",
"Ecological restoration",
"Environmental engineering"
] |
55,940,397 | https://en.wikipedia.org/wiki/International%20Energy%20Agency%20Energy%20in%20Buildings%20and%20Communities%20Programme | The International Energy Agency Energy in Buildings and Communities (IEA EBC) Programme, formerly known as the Energy in Buildings and Community Systems Programme (ECBCS), is one of the International Energy Agency's Technology Collaboration Programmes (TCPs). The Programme "carries out research and development activities toward near-zero energy and carbon emissions in the built environment".
History
The programme was formally launched in 1977, following the oil crisis which drove research into alternative sources of energy and technologies to improve energy efficiency. Since then, IEA EBC's main aim has been to provide an international focus for energy efficiency research in the building sector, with its current mission being to “develop and facilitate the integration of technologies and processes for energy efficiency and conservation into healthy, low emission and sustainable buildings and communities, through innovation and research”.
Former EBC Executive Committee Chairs
Dr. Takao Sawachi, Building Research Institute, Japan
Andreas Eckmanns, Bundesamt für Energie, Switzerland
Morad R. Atif, National Research Council, Canada
Richard Karney, Department of Energy, USA
Sherif Barakat, National Research Council, Canada
Gerald S. Leighton, Department of Energy, USA
EBC Strategic Plan
Every five years, the IEA Committee on Energy Research and Technology (CERT) renews the Programme's Strategic Plan. The latest EBC Strategic Plan was developed in 2023 and is effective until 2029.
The strategic objectives of the EBC TCP are:
the refurbishment of existing buildings;
reducing the performance gap between design and operation;
creating robust and affordable technologies;
the development of energy efficient cooling;
the creation of district level solution sets.
EBC Participating Countries
Countries currently participating in the EBC are Australia, Austria, Belgium, Brazil, Canada, P.R. China, Denmark, Finland, France, Germany, Ireland, Italy, Japan, Republic of Korea, Netherlands, New Zealand, Norway, Portugal, Singapore, Spain, Sweden, Switzerland, Türkiye, United Kingdom and the United States of America.
EBC Annexes
The EBC carries out research and development (R&D) projects known as Annexes, which form the basis of the Programme and typically last 3 to 4 years. “The outcomes of the Annexes address the determining factors for energy use in three domains: technological aspects, policy measures, and occupant behaviour”. Below is a list of completed and current Annexes.
Completed
Annex 1: Load Energy Determination of Buildings (1977-1980)
Annex 2: Ekistics and Advanced Community Energy Systems (1976-1978)
Annex 3: Energy Conservation in Residential Buildings (1979-1982)
Annex 4: Glasgow Commercial Building Monitoring (1979-1982)
Annex 6: Energy Systems and Design of Communities (1979-1981)
Annex 7: Local Government Energy Planning (1981-1983)
Annex 8: Inhabitants Behaviour with Regard to Ventilation (1984-1987)
Annex 9: Minimum Ventilation Rates (1982-1986)
Annex 10: Building HVAC System Simulation (1982-1987)
Annex 11: Energy Auditing (1982-1987)
Annex 12: Windows and Fenestration (1982-1986)
Annex 13: Energy Management in Hospitals (1985-1989)
Annex 14: Condensation and Energy (1987-1990)
Annex 15: Energy Efficiency in Schools (1988 - 1990)
Annex 16: Building Energy Management Systems (BEMS) 1- User Interfaces and System Integration (1987-1991)
Annex 17: Building Energy Management Systems (BEMS) 2- Evaluation and Emulation Techniques (1988-1992)
Annex 18: Demand Controlled Ventilation Systems (1987-1992)
Annex 19: Low Slope Roof Systems (1987-1993)
Annex 20: Air Flow Patterns within Buildings (1988-1991)
Annex 21: Environmental Performance (1988-1993)
Annex 22: Energy Efficient Communities (1991-1993)
Annex 23: Multi Zone Air Flow Modelling (COMIS) (1990-1996)
Annex 24: Heat, Air and Moisture Transport (1991-1995)
Annex 25: Real time HVAC Simulation (1991-1995)
Annex 26: Energy Efficient Ventilation of Large Enclosures (1993-1996)
Annex 27: Evaluation and Demonstration of Domestic Ventilation Systems (1993-1997 + extension to 2003)
Annex 28: Low Energy Cooling Systems (1993-1997)
Annex 29: Daylight in Buildings (1995-1999)
Annex 30: Bringing Simulation to Application (1995-1998)
Annex 31: Energy-Related Environmental Impact of Buildings (1996-1999)
Annex 32: Integral Building Envelope Performance Assessment (1996-1999)
Annex 33: Advanced Local Energy Planning (1996-1998)
Annex 34: Computer-Aided Evaluation of HVAC System Performance (1997-2001)
Annex 35: Design of Energy Efficient Hybrid Ventilation (HYBVENT) (1998-2002)
Annex 36: Retrofitting of Educational Buildings (1999-2003)
Annex 37: Low Exergy Systems for Heating and Cooling of Buildings (LowEx) (1999-2003)
Annex 38: Solar Sustainable Housing (1999-2003)
Annex 39: High Performance Insulation Systems (2001-2005)
Annex 40: Building Commissioning to Improve Energy Performance (2001-2004)
Annex 41:Whole Building Heat, Air and Moisture Response (MOIST-ENG) (2003-2007)
Annex 42: The Simulation of Building-Integrated Fuel Cell and Other Cogeneration Systems (FC+COGEN-SIM) (2003-2007)
Annex 43: Testing and Validation of Building Energy Simulation Tools (2003-2007)
Annex 44: Integrating Environmentally Responsive Elements in Buildings (2004-2011)
Annex 45: Energy Efficient Electric Lighting for Buildings (2004-2010)
Annex 46: Holistic Assessment Tool-kit on Energy Efficient Retrofit Measures for Government Buildings (EnERGo) (2005-2010)
Annex 47: Cost-Effective Commissioning for Existing and Low Energy Buildings (2005-2010)
Annex 48: Heat Pumping and Reversible Air Conditioning (2005-2011)
Annex 49: Low Exergy Systems for High Performance Buildings and Communities (2005-2010)
Annex 50: Prefabricated Systems for Low Energy Renovation of Residential Buildings (2006-2011)
Annex 51: Energy Efficient Communities (2007-2013)
Annex 52: Towards Net Zero Energy Solar Buildings (2008-2014)
Annex 53: Total Energy Use in Buildings: Analysis & Evaluation Methods (2008-2013)
Annex 54: Integration of Micro-Generation & Related Energy Technologies in Buildings (2009-2014)
Annex 55: Reliability of Energy Efficient Building Retrofitting - Probability Assessment of Performance & Cost (RAP-RETRO) (2010-2015)
Annex 56: Cost Effective Energy & Emissions Optimization in Building Renovation
Annex 57: Evaluation of Embodied Energy & Equivalent Emissions for Building Construction (2011-2016)
Annex 58: Reliable Building Energy Performance Characterisation Based on Full Scale Dynamic Measurements (2011-2016)
Annex 59: High Temperature Cooling & Low Temperature Heating in Buildings (2012-2016)
Annex 60: New Generation Computational Tools for Building & Community Energy Systems
Annex 61: Business and Technical Concepts for Deep Energy Retrofit of Public Buildings
Annex 62: Ventilative Cooling
Annex 63: Implementation of Energy Strategies in Communities
Annex 64: LowEx Communities - Optimised Performance of Energy Supply Systems with Exergy Principles
Annex 65: Long-Term Performance of Super-Insulating Materials in Building Components and Systems
Annex 66: Definition and Simulation of Occupant Behavior in Buildings
Annex 67: Energy Flexible Buildings
Annex 68: Indoor Air Quality Design and Control in Low Energy Residential Buildings
Annex 69: Strategy and Practice of Adaptive Thermal Comfort in Low Energy Buildings
Annex 70: Energy Epidemiology: Analysis of Real Building Energy Use at Scale
Annex 71: Building Energy Performance Assessment Based on In-situ Measurements
Annex 72: Assessing Life Cycle Related Environmental Impacts Caused by Buildings
Annex 73: Towards Net Zero Energy Public Communities
Annex 74: Competition and Living Lab Platform
Annex 75: Cost-effective Building Renovation at District Level Combining Energy Efficiency & Renewables
Annex 76 / SHC Task 59: Deep Renovation of Historic Buildings Towards Lowest Possible Energy Demand and Emissions
Annex 77 / SHC Task 61: Integrated Solutions for Daylight and Electric Lighting
Working Group - Energy Efficiency in Educational Buildings (1988 - 1990)
Working Group - Indicators of Energy Efficiency in Cold Climate Buildings (1995-1999)
Working Group - Annex 36 Extension: The Energy Concept Adviser for Technical Retrofit Measures (2003-2005)
Working Group - Communities and Cities
Working Group - HVAC Energy Calculation Methodologies for Non-residential Buildings
Current
Annex 5: Air Infiltration and Ventilation Centre
Annex 78: Supplementing Ventilation with Gas-phase Air Cleaning, Implementation and Energy Implications
Annex 79: Occupant Behaviour-Centric Building Design and Operation
Annex 80: Resilient Cooling
Annex 81: Data-Driven Smart Buildings
Annex 82: Energy Flexible Buildings Towards Resilient Low Carbon Energy Systems
Annex 83: Positive Energy Districts
Annex 84: Demand Management of Buildings in Thermal Networks
Annex 85: Indirect Evaporative Cooling
Annex 86: Energy Efficient Indoor Air Quality Management in Residential Buildings
Annex 87: Energy and Indoor Environmental Quality Performance of Personalised Environmental Control Systems
Annex 88: Evaluation and Demonstration of Actual Energy Efficiency of Heat Pump Systems in Buildings
Annex 89: Ways to Implement Net-zero Whole Life Carbon Buildings
Annex 90: EBC Annex 90 / SHC Task 70 Low Carbon, High Comfort Integrated Lighting
Annex 91: Open BIM for Energy Efficient Buildings
Annex 92: Smart Materials for Energy-efficient Heating, Cooling and IAQ Control in Residential Buildings
Working Group - Building Energy Codes
EBC publications
The EBC Programme produces a series of scientific publications.
Outcomes and summary reports (for policy and decision makers) of the various running and completed projects are published when available.
The EBC newsletter “EBC News” is published twice per year, including feedback from running and forthcoming Annexes as well as other articles in the field of energy use for buildings and communities.
The EBC Annual Report outlines the Programme's yearly progress, including among others separate sections summarizing the status and available deliverables for each Annex.
References
External links
International Energy Agency - Energy in Buildings and Communities Programme
International Energy Agency - Energy in Buildings and Communities Programme – On-going projects
International Energy Agency - Energy in Buildings and Communities Programme – Completed projects
International Energy Agency - Energy in Buildings and Communities Programme – EBC News
International Energy Agency - Energy in Buildings and Communities Programme – Annual Reports
International Energy Agency - Energy in Buildings and Communities Programme – Summary Reports
International Energy Agency - Energy in Buildings and Communities Programme – Project Reports
International Energy Agency Technology Collaboration Programmes (TCPs)
International Energy Agency
International energy organizations
Energy conservation
Low-energy building
Energy policy | International Energy Agency Energy in Buildings and Communities Programme | [
"Engineering",
"Environmental_science"
] | 2,141 | [
"International energy organizations",
"Environmental social science",
"Energy organizations",
"Energy policy"
] |
65,745,718 | https://en.wikipedia.org/wiki/Peptidoglycan%20recognition%20protein%203 | Peptidoglycan recognition protein 3 (PGLYRP3, formerly PGRP-Iα) is an antibacterial and anti-inflammatory innate immunity protein that in humans is encoded by the PGLYRP3 gene.
Discovery
PGLYRP3 (formerly PGRP-Iα), a member of a family of human Peptidoglycan Recognition Proteins (PGRPs), was discovered in 2001 by Roman Dziarski and coworkers, who cloned and identified the genes for three human PGRPs, PGRP-L, PGRP-Iα, and PGRP-Iβ (named for their long and intermediate-size transcripts), and established that the human genome codes for a family of four PGRPs: PGRP-S (short PGRP), PGRP-L, PGRP-Iα, and PGRP-Iβ. Subsequently, the Human Genome Organization Gene Nomenclature Committee changed the gene symbols of PGRP-S, PGRP-L, PGRP-Iα, and PGRP-Iβ to PGLYRP1 (peptidoglycan recognition protein 1), PGLYRP2 (peptidoglycan recognition protein 2), PGLYRP3 (peptidoglycan recognition protein 3), and PGLYRP4 (peptidoglycan recognition protein 4), respectively, and this nomenclature is currently also used for other mammalian PGRPs.
Tissue distribution and secretion
PGLYRP3 has an expression pattern similar, but not identical, to that of PGLYRP4 (peptidoglycan recognition protein 4). PGLYRP3 is constitutively expressed in the skin, in the eye, and in the mucous membranes in the tongue, throat, and esophagus, and at a much lower level in the remaining parts of the intestinal tract. Bacteria and their products increase the expression of PGLYRP3 in keratinocytes and oral epithelial cells. Mouse PGLYRP3 is also differentially expressed in the developing brain, and this expression is influenced by the intestinal microbiome. PGLYRP3 is secreted and forms disulfide-linked dimers.
Structure
PGLYRP3, similar to PGLYRP4, has two peptidoglycan-binding type 2 amidase domains (also known as PGRP domains), which are not identical (have 38% amino acid identity in humans) and do not have amidase enzymatic activity. PGLYRP3 is secreted, it is glycosylated, and its glycosylation is required for its bactericidal activity. PGLYRP3 forms disulfide-linked homodimers, but when expressed in the same cells with PGLYRP4, it forms PGLYRP3:PGLYRP4 disulfide-linked heterodimers.
The C-terminal peptidoglycan-binding domain of human PGLYRP3 has been crystallized and its structure solved and is similar to human PGLYRP1. PGLYRP3 C-terminal PGRP domain contains a central β-sheet composed of five β-strands and three α-helices and N-terminal segment unique to PGRPs and not found in bacteriophage and prokaryotic amidases.
Human PGLYRP3 C-terminal PGRP domain, similar to PGLYRP1, has three pairs of cysteines, which form three disulfide bonds at positions 178–300, 194–238, and 214–220. The Cys214–Cys220 disulfide is broadly conserved in invertebrate and vertebrate PRGPs, the Cys178–Cys300 disulfide is conserved in all mammalian PGRPs, and the Cys194–238 disulfide is unique to mammalian PGLYRP1, PGLYRP3, and PGLYRP4, but not found in the amidase-active PGLYRP2. The structures of the entire PGLYRP3 molecule (with two PGRP domains) and of the disulfide-linked dimer are unknown.
PGLYRP3 C-terminal PGRP domain contains peptidoglycan-binding site, which is a long cleft whose walls are formed by α-helix and five β-loops and the floor by a β-sheet. This site binds muramyl-tripeptide (MurNAc-L-Ala-D-isoGln-L-Lys), but can also accommodate larger peptidoglycan fragments, such as disaccharide-pentapeptide. Located opposite the peptidoglycan-binding cleft is a large hydrophobic groove, formed by residues 177–198 (the PGRP-specific segment).
Functions
The PGLYRP3 protein plays an important role in the innate immune responses.
Peptidoglycan binding
PGLYRP3 binds peptidoglycan, a polymer of β(1-4)-linked N-acetylglucosamine (GlcNAc) and N-acetylmuramic acid (MurNAc) cross-linked by short peptides, the main component of the bacterial cell wall. The smallest peptidoglycan fragment that binds to human PGLYRP3 is MurNAc-tripeptide (MurNAc-L-Ala-D-isoGln-L-Lys), which binds with low affinity (Kd = 4.5 × 10⁻⁴ M), whereas a larger fragment, MurNAc-pentapeptide (MurNAc-L-Ala-γ-D-Gln-L-Lys-D-Ala-D-Ala), binds with higher affinity (Kd = 6 × 10⁻⁶ M). Human PGLYRP3, in contrast to PGLYRP1, does not bind the meso-diaminopimelic acid (m-DAP)-containing fragment (MurNAc-L-Ala-γ-D-Gln-DAP-D-Ala-D-Ala). m-DAP is present in the third position of the peptidoglycan peptide in Gram-negative bacteria and Gram-positive bacilli, whereas L-lysine is in this position in the peptidoglycan peptide in Gram-positive cocci. Thus, the PGLYRP3 C-terminal PGRP domain has a preference for binding peptidoglycan fragments from Gram-positive cocci. Binding of MurNAc-pentapeptide induces structural rearrangements in the binding site that are essential for entry of the ligand and locks the ligand in the binding cleft. The fine specificity of the PGLYRP3 N-terminal PGRP domain is not known.
Bactericidal activity
Human PGLYRP3 is directly bactericidal for both Gram-positive (Bacillus subtilis, Bacillus licheniformis, Bacillus cereus, Lactobacillus acidophilus, Listeria monocytogenes, Staphylococcus aureus, Streptococcus pyogenes) and Gram-negative (Escherichia coli, Proteus vulgaris, Salmonella enterica, Shigella sonnei, Pseudomonas aeruginosa) bacteria.
The mechanism of bacterial killing by PGLYRP3 is based on induction of lethal envelope stress, which eventually leads to the shutdown of transcription and translation. PGLYRP3-induced killing involves simultaneous induction of three stress responses in both Gram-positive and Gram-negative bacteria: oxidative stress due to production of reactive oxygen species (hydrogen peroxide and hydroxyl radicals), thiol stress due to depletion (oxidation) of cellular thiols, and metal stress due to an increase in intracellular free (labile) metal ions. PGLYRP3-induced bacterial killing does not involve cell membrane permeabilization, which is typical for defensins and other antimicrobial peptides, cell wall hydrolysis, or osmotic shock. Human PGLYRP3 has synergistic bactericidal activity with antibacterial peptides.
Defense against infections
PGLYRP3 plays a limited role in host defense against infections. Intranasal administration of PGLYRP3 protects mice from lung infection with S. aureus and E. coli, but PGLYRP3-deficient mice do not have altered sensitivity to Streptococcus pneumoniae-induced pneumonia.
Maintaining microbiome
Mouse PGLYRP3 plays a role in maintaining healthy microbiome, as PGLYRP3-deficient mice have significant changes in the composition of their intestinal microbiome, which affect their sensitivity to colitis.
Effects on inflammation
Mouse PGLYRP3 plays a role in maintaining anti- and pro-inflammatory homeostasis in the intestine and skin. PGLYRP3-deficient mice are more sensitive than wild type mice to dextran sodium sulfate (DSS)-induced colitis, which indicates that PGLYRP3 protects mice from DSS-induced colitis. The anti-inflammatory effect of PGLYRP3 on DSS-induced colitis depends on the PGLYRP3-regulated intestinal microbiome, because this greater sensitivity of PGLYRP3-deficient mice to DSS-induced colitis could be transferred to wild type germ-free mice or to antibiotic-treated mice by microbiome transplant from PGLYRP3-deficient mice or by PGLYRP3-regulated bacteria. PGLYRP3 is also directly anti-inflammatory in intestinal epithelial cells.
PGLYRP3-deficient mice are more sensitive than wild type mice to experimentally induced atopic dermatitis. These results indicate that mouse PGLYRP3 is anti-inflammatory and protects skin from inflammation. This anti-inflammatory effect is due to decreased numbers and activity of T helper 17 (Th17) cells and increased numbers of T regulatory (Treg) cells.
Medical relevance
Genetic PGLYRP3 variants are associated with some diseases. Patients with inflammatory bowel disease (IBD), which includes Crohn's disease and ulcerative colitis, have significantly more frequent missense variants in PGLYRP3 gene (and also in the other three PGLYRP genes) than healthy controls. PGLYRP3 variants are also associated with Parkinson's disease and psoriasis. These results suggest that PGLYRP3 protects humans from these diseases, and that mutations in PGLYRP3 gene are among the genetic factors predisposing to these diseases. PGLYRP3 variants are also associated with the composition of airway microbiome.
See also
Peptidoglycan recognition protein
Peptidoglycan recognition protein 1
Peptidoglycan recognition protein 2
Peptidoglycan recognition protein 4
Peptidoglycan
Innate immune system
Bacterial cell walls
References
Further reading
Proteins
Genetics | Peptidoglycan recognition protein 3 | [
"Chemistry"
] | 2,292 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
65,756,949 | https://en.wikipedia.org/wiki/GSK1702934A | GSK1702934A is a chemical compound which acts as an activator of the TRPC family of calcium channels, with selectivity for the TRPC3 and TRPC6 subtypes. It has been used to investigate the role of TRPC channels in heart function and regulation of blood pressure, as well as roles in the brain.
References
Ion channel openers
Thiophenes
Piperidines
Benzimidazoles
Ketones | GSK1702934A | [
"Chemistry"
] | 93 | [
"Ketones",
"Functional groups"
] |
42,868,663 | https://en.wikipedia.org/wiki/Piston-cylinder%20apparatus | The piston-cylinder apparatus is a solid media device, used in Geosciences and Material Sciences, for generating simultaneously high pressure (up to 6 GPa) and temperature (up to 1700 °C). Modifications of the normal set-up can push these limits to even higher pressures and temperatures. A particular type of piston-cylinder, called Griggs apparatus, is also able to add a deviatoric stress on the sample.
The principle of the instrument is to generate pressure by compressing a sample assembly, which includes a resistance furnace, inside a pressure vessel. Controlled high temperature is generated by applying a regulated voltage to the furnace and monitoring the temperature with a thermocouple. The pressure vessel is a cylinder that is closed at one end by a rigid plate with a small hole for the thermocouple to pass through. A piston is advanced into the cylinder from the other end.
History
Sir Charles Parsons was the first to attack the problem of generating high pressure simultaneously with high temperature. His pressure apparatus consisted of piston-cylinder devices that used internal electrical resistance heating. He used a solid pressure transmitting material, which also served as thermal and electrical insulation. His cylindrical chambers ranged in diameter from 1 to 15 cm. The maximum pressure and temperature he reported were of the order of 15000 atm (corresponding to ~1.5 GPa) at 3000 °C.
Loring L. Coes, Jr., of the Norton Co., was the first person to develop a piston-cylinder device with capabilities substantially beyond those of the Parsons device. He did not personally publish a description of this equipment until 1962. The key feature of this device is the use of a hot, molded alumina liner or cylinder. The apparatus is double ended, pressure being generated by pushing a tungsten carbide piston into each end of the alumina cylinder. Because the alumina cylinder is electrically insulating, heating is accomplished, very simply, by passing an electric current from one piston through a sample heating tube and out through the opposite piston. The apparatus was used at pressures as high as 45000 atm (corresponding to ~4.5 GPa) simultaneously with a temperature of 800 °C. Temperature was measured by means of a thermocouple located in a well. At these temperature and pressure conditions, only one run is obtained in this device, the pistons and the alumina cylinder both being expendable. Even at 30000 atm (corresponding to ~3.0 GPa) the alumina cylinder is only useful for a few runs, as is also the case for the tungsten carbide pistons. The expense of using such a device is great.
Nowadays both the piston and the cylinder are constructed of cemented tungsten carbide and electrical insulation is provided in a different manner than in the device of Coes. In particular, the basis for the modern piston-cylinder apparatus is given by the design described by Boyd and England in 1960, which has been the first machine that allowed experiments under upper mantle conditions to be routinely carried out in a laboratory.
Geologist Bernard Wood has made multiple important contributions to science using piston-cylinder experiments and has consequently become a prominent figure in experimental petrology. Along with Fred Wheeler, a workshop worker at the University of Bristol, he has designed a model of piston-cylinder that is known for its simplicity and blue features. Several units of this model have been made at the University of Oxford.
Theory
The piston-cylinder apparatus is based on the same simple relationship as other high-pressure devices (e.g. the multi-anvil press and the diamond anvil cell):

P = F / A

where P is the pressure, F the applied force and A the area.
It achieves high pressures using the principle of pressure amplification: converting a small load on a large piston to a relatively large load on a small piston. The uniaxial pressure is then distributed (quasi-hydrostatically) over the sample through deformation of the assembly materials.
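As an illustration of this amplification principle, the sketch below converts a ram load into nominal sample pressure via P = F/A for a 12.7 mm (half-inch) piston. The load and bore used here are representative but made up for the example; the actual values depend on the particular press.

```python
import math

def sample_pressure_gpa(ram_force_newton, piston_diameter_m):
    """Nominal sample pressure P = F / A for a piston-cylinder press, in GPa."""
    area = math.pi * (piston_diameter_m / 2) ** 2   # piston cross-section in m^2
    return ram_force_newton / area / 1e9            # Pa -> GPa

# Illustrative numbers only: a 38 tonne load on a 12.7 mm (1/2 inch) piston.
force = 38_000 * 9.81                               # ~0.37 MN
print(f"{sample_pressure_gpa(force, 0.0127):.1f} GPa")   # ~2.9 GPa
```

The same load applied to a larger piston yields a proportionally lower pressure, which is why the piston diameter is chosen according to the target pressure of the experiment.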
Components
The main components of the piston-cylinder apparatus are the pressure generating system, the pressure vessel, and the assembly parts within the vessel. There are two types of piston-cylinder apparatus: non end-loaded and end-loaded, which involve, respectively, one or two hydraulic rams. In the end-loaded type the second hydraulic ram is used to vertically load and strengthen the pressure vessel. The non end-loaded type is smaller, more compact and cheaper, and is operable only to approximately 4 GPa.
Pressure is applied to the sample by pressing a piston into the sample volume of the pressure vessel. The sample assembly consists of a solid pressure medium, a resistance heater and a small central volume for the sample. Three common configurations are used: ½", ¾" and 1", which are the diameters of the piston and thus of the sample assembly. According to the pressure amplification concept, the choice of the piston depends on the pressure that needs to be achieved.
During the experiment, water circulates around the pressure vessel, the bridge and the upper plates to cool the system.
Sample assemblies
The purposes of the sample assembly are to transmit hydrostatic pressure to the sample from the compressing piston, to provide controlled heating of the sample and to provide, via the capsule, a suitable volatile and oxygen fugacity environment for the experiment. Therefore, it includes a component for each of these purposes.
The outer cylinder is a pressure transmitting, electrically insulating cylinder made from NaCl, talc, BaCO3, KBr, CaF2, or even borosilicate glass. The next components are, in order, an electrically insulating borosilicate glass cylinder and a graphite cylinder, which acts as the “furnace”. To locate the sample exactly in the centre of the furnace and to grip the thermocouple, a support rod usually made of crushable ceramics is used. The final component is a conductive steel base plug, located at the top of the sample assembly.
The final part of the assembly is the thermocouple itself, whose wires are insulated from one another and from the material of the assembly by a tube made of mullite.
Capsules
The sample capsule must contain the sample and prevent reaction between the sample and the other materials of the sample assembly and not, itself, react with the sample. It must also be weak so as not to interfere with pressure transmission during the run. For this purpose, the materials most used are: Au, Pt, AgPd alloys, Ni and graphite.
Sample volumes are typically 200 mm3, which translates to ~500 mg of starting material, but with larger assemblies the volume can be up to 750 mm3.
Pressure control
The nominal pressure in an experiment can be calculated from the amplification of the oil pressure through the reduction in area over which it is applied, but every component has a characteristic yield stress; consequently, the nominal pressure differs from the effective one. Thus, it must be adjusted to take friction into account:
Peffective = Pnominal + Pcorrection
In order to determine the effective pressure, calibration experiments can be done using either static or dynamic methods, and usually make use of known phase transitions or reactions, melting curves or measured water solubility in melts.
Since frictional effects also depend on whether the press is in compression or in decompression, it is good practice to perform the experiments in the same way as the calibration runs.
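A hedged sketch of this bookkeeping is given below: a correction term is estimated from a calibration run that brackets a reaction of independently known pressure, and is then applied to later experiments. All pressure values are placeholders, not measured calibration data, and a real correction is generally pressure- and assembly-dependent rather than a single constant.

```python
# Placeholder numbers only -- real corrections come from a lab's own calibrations.
known_transition_pressure = 3.00   # GPa, accepted pressure of some calibrant reaction
nominal_at_transition     = 3.20   # GPa, nominal (oil-derived) pressure where it was observed

friction_correction = known_transition_pressure - nominal_at_transition   # -0.20 GPa

def effective_pressure(nominal_gpa):
    """P_effective = P_nominal + P_correction (correction assumed constant here)."""
    return nominal_gpa + friction_correction

print(f"{effective_pressure(3.50):.2f} GPa")   # 3.30 GPa
```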
Temperature control
Temperature can be measured using a thermocouple within an accuracy of ± 1 °C. The accuracy of the temperature is influenced by both random and systematic errors, and is lower at higher temperature and pressure conditions. Such errors can arise from temperature gradients, differential pressures in the assembly, contamination during the experiment and the effect of pressure on thermocouple electromotive force. These errors can be mitigated by choosing the appropriate thermocouple type for the experimental conditions. Temperature gradients, on the other hand, can be minimised using a tapered furnace.
Applications
The main advantages of the piston-cylinder press are the relatively large volume of the assembly, fast heating and quenching rates, and the stability of the equipment over long run durations.
These aspects, together with the ease and safety of procedure make this device suitable for geochemical studies and in-situ measurements of the physical properties of materials.
Some applications, especially in Geosciences, are: synthesis of high-pressure and temperature materials, hot pressing and investigation of partial melting of rocks.
References
Scientific equipment
Engineering thermodynamics | Piston-cylinder apparatus | [
"Physics",
"Chemistry",
"Engineering"
] | 1,750 | [
"Engineering thermodynamics",
"Thermodynamics",
"Mechanical engineering"
] |
42,869,086 | https://en.wikipedia.org/wiki/Macy%20catheter | The Macy Catheter is a specialized catheter designed to provide comfortable and discreet administration of ongoing medications via the rectal route. The catheter was developed to make rectal access more practical and provide a way to deliver and retain liquid formulations in the distal rectum so that health practitioners can leverage the established benefits of rectal administration. Patients often need medication when the oral route is compromised, and the Macy Catheter provides an alternative for those medications that can be prescribed per rectum. The Macy Catheter is of particular relevance during the end of life, when it can help patients to remain comfortable in their home.
Key features and functions
The Macy Catheter is a disposable device approved by the U.S. Food and Drug Administration (FDA), consisting of a dual-lumen ballooned tube that is inserted by a clinician into the rectum just past the rectal sphincter. Once the catheter is inserted into the rectum, a soft balloon is inflated with water via a balloon inflation valve to hold the device in place. This small, flexible "semi-retention" balloon exerts very little pressure on the rectal wall, and is designed for safety and comfort, while also allowing the catheter to be easily expelled when the patient needs to defecate. The catheter utilizes a small flexible silicone shaft, allowing the device to be placed safely and remain comfortably in the rectum for repeated administration of medications or liquids.
Once in place, the medication delivery port of the Macy Catheter rests on the patient's leg or abdomen, where it is easily accessible for repeated administration of liquid medications in solution or suspension form. The device stays in place until the patient has a bowel movement and expels the retention balloon, or until manually removed after first deflating the balloon.
The Macy Catheter medication port has a specialized valve to prevent leakage and is designed to be non-clogging and, for safety, compatible only with the connectors on oral/enteral syringes (not intravenous syringes). The device is FDA-approved to remain in the rectum for up to 28 days. The catheter has a small lumen, allowing for small flush volumes to get medication to the rectum. Small volumes of medications (under 15 ml) improve comfort by not stimulating the defecation response of the rectum, and can increase the overall absorption of a given dose by decreasing pooling of medication and migration of medication into more proximal areas of the rectum where absorption can be less effective.
Indications for use
The Macy Catheter is intended to provide rectal access to administer liquids and medications. The Macy Catheter can be used in the following clinical situations:
Medication administration when the oral route fails
Administration of fluids and electrolytes
Administration of retention enemas
Common clinical use scenarios
The Macy Catheter provides an immediate way to administer medication or liquids for patients in the home setting when the oral route of medication administration is compromised. Unlike intravenous lines, which usually need to be placed in an inpatient environment and require special formulation of sterile medications, the Macy Catheter can be placed by a clinician, such as a hospice nurse or home health nurse in the home. Many oral forms of medications can be crushed and suspended in water to be given via the Macy Catheter. The Macy Catheter is useful for patients who cannot swallow, including those near the end of life (an estimated 1.65 million people are in hospice care in the US each year). Because the Macy Catheter enables a rapid, safe, and lower cost alternative to administration of medications, it may also be applicable to care of patients in long-term care or palliative care, or as an alternative to intravenous or subcutaneous medication delivery in some instances.
The Macy Catheter is clinically indicated for the following scenarios:
1. Symptom management at the end of life, including but not limited to:
Pain
Agitation
Dyspnea, or shortness of breath
Nausea and vomiting
Seizures
Fever
2. Bowel obstruction
For medication management, hydration, and symptom control when the oral route is not viable due to total obstruction
3. Discharge from acute care to the home setting
Allows for easy, discreet, and safe medication administration and short-term hydration in the home setting
For transitioning from intravenous or subcutaneous route to the rectal route when discharged from acute settings to the home setting
History
The Macy Catheter was invented by Brad Macy, RN, BSN, a 22-year veteran hospice nurse. Inspired by a patient who was terminally agitated and not responding to a solid form of a rectally delivered medication, Macy administered the same medication in a liquid suspension with a small tube inserted into the patient's rectum. The patient's agitation rapidly diminished, and the patient was sleeping within 30 minutes. After practicing repeated successful interventions involving the application of medication in highly concentrated form to the distal one-third of the rectum, Macy realized the potential implications for hospice and palliative patients worldwide. With this motivation, he proceeded to develop the Macy Catheter, a device designed and developed for commercial use. The commercial product is protected by two issued U.S. patents and received 510(k) clearance from the Food and Drug Administration in early 2014.
Rectal drug delivery
Rectal drug delivery is an effective route of medication delivery for many medications used at the end of life. The walls of the rectum absorb many medications quickly and effectively. Medications delivered to the distal one-third of the rectum at least partially avoid the "first pass effect" through the liver, which allows for greater bio-availability of many medications than that of the oral route.
The rectal route of administration is highly effective because the rectal mucosa is a highly vascularized tissue that allows for rapid and effective absorption of medications. Although intravenous administration is the most commonly used alternate route in acute care settings, it is rarely used in hospice care, given the associated cost and the need for a high level of care and training for providers. It can also lead to complications such as infection and pain. Although subcutaneous medication delivery is more common in hospice, it is also expensive and can cause infection, pain and swelling. The Macy Catheter provides a solution to overcome these challenges and leverage the benefits of rectal administration.
References
Routes of administration | Macy catheter | [
"Chemistry"
] | 1,348 | [
"Pharmacology",
"Routes of administration"
] |
42,870,384 | https://en.wikipedia.org/wiki/CGView | CGView (Circular Genome Viewer) is a freely available downloadable Java software program, applet and API (application programming interface) for generating colorful, zoomable, hyperlinked, richly annotated images of circular genomes such as bacterial chromosomes, mitochondrial DNA and plasmids. It is commonly used in bacterial sequence annotation pipelines to generate visual output suitable for the web. It has also been used in a variety of popular web servers (the CGView webserver, PlasMapper, BASys) and databases (BacMap).
Overview
More than 4000 bacterial genomes and thousands of plasmid genomes have been sequenced thanks to advances in DNA sequencing technology. CGView was developed to address the specialized needs of visualizing and annotating circular genomes, such as bacterial, plasmid, chloroplast, and mitochondrial DNA sequences. Once installed, the CGView program accepts feature data and rendering information in a number of different file formats: an XML file, a tab-delimited file, or an NCBI ptt file. CGView then converts the input into a graphical map in various image formats (PNG, JPG, or SVG) that can include labels, titles, legends and footnotes. The images can be static, interactive, or poster-sized for printing or for embedding into web pages.
Technology and Accessibility
CGView is written in the Java programming language. It is available as a downloadable Java application package as well as an applet and an API. The applet package can be used to embed interactive maps into web pages. The API can be used to incorporate CGView into other Java applications. A CGView server has also been developed.
See also
Genomics
Genome Browser
BASys
PlasMapper
References
External links
CGView web server
Biological databases | CGView | [
"Biology"
] | 383 | [
"Bioinformatics",
"Biological databases"
] |
41,441,485 | https://en.wikipedia.org/wiki/Operational%20modal%20analysis | Ambient modal identification, also known as operational modal analysis (OMA), aims at identifying the modal properties of a structure based on vibration data collected when the structure is under its operating conditions, i.e., no initial excitation or known artificial excitation. The modal properties of a structure include primarily the natural frequencies, damping ratios and mode shapes. In an ambient vibration test the subject structure can be under a variety of excitation sources which are not measured but are assumed to be 'broadband random'. The latter is a notion that one needs to apply when developing an ambient identification method. The specific assumptions vary from one method to another. Regardless of the method used, however, proper modal identification requires that the spectral characteristics of the measured response reflect the properties of the modes rather than those of the excitation.
Pros and cons
Implementation economy is one primary advantage of ambient vibration tests as only the (output) vibration of the structure needs to be measured. This is particularly attractive for civil engineering structures (e.g., buildings, bridges) where it can be expensive or disruptive to carry out free vibration or forced vibration tests (with known input).
Identifying modal properties using ambient data does have disadvantages:
The identification methods are more sophisticated. As the loading is not measured, in the development of the identification method, it needs to be modeled (by some stochastic process), or its dynamic effects on the measured response have to be removed. Otherwise, it is not possible to explain the characteristics in the data based solely on the modal properties.
Without loading information, the identified modal properties can have significant identification uncertainties. In particular, the results are as good as the broadband assumption applied.
The identified modal properties only reflect the properties at the ambient vibration level, which is usually lower than the serviceability level or other design cases of interest. This is especially relevant for the damping ratio, which is commonly perceived to be amplitude-dependent.
The measurement system needs to be low-noise and sensitive, since structures mainly vibrate at low levels in their operational conditions.
Methods
Methods of OMA can be broadly classified by two aspects, 1) frequency domain or time domain, and 2) Bayesian or non-Bayesian. Non-Bayesian methods were developed earlier than Bayesian ones. They make use of some statistical estimators with known theoretical properties for identification, e.g., the correlation function or spectral density of measured vibrations. Common non-Bayesian methods include stochastic subspace identification (time domain) and frequency domain decomposition (frequency domain). Bayesian methods have been developed in the time-domain and frequency-domain.
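To make the frequency-domain route concrete, the sketch below is a minimal Python implementation of frequency domain decomposition: the cross-power spectral density matrix of the measured channels is estimated with Welch-style CSD averaging and a singular value decomposition is taken at each frequency line, so that peaks of the leading singular value indicate candidate natural frequencies and the corresponding singular vectors approximate the mode shapes. The channel count, sampling rate and synthetic data are illustrative assumptions, not taken from the article or from any specific OMA package.

```python
import numpy as np
from scipy.signal import csd

def fdd(acc, fs, nperseg=1024):
    """Frequency domain decomposition of ambient vibration data.

    acc : (n_samples, n_channels) array of measured responses
    fs  : sampling rate in Hz
    Returns the frequency axis, the first singular value per frequency line,
    and the first singular vectors (approximate mode shapes).
    """
    n_ch = acc.shape[1]
    # Build the full cross-power spectral density matrix G(f), shape (n_f, n_ch, n_ch)
    f, _ = csd(acc[:, 0], acc[:, 0], fs=fs, nperseg=nperseg)
    G = np.zeros((f.size, n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(acc[:, i], acc[:, j], fs=fs, nperseg=nperseg)
    # SVD at every frequency line; peaks of the first singular value mark candidate modes
    s1 = np.zeros(f.size)
    shapes = np.zeros((f.size, n_ch), dtype=complex)
    for k in range(f.size):
        U, S, _ = np.linalg.svd(G[k])
        s1[k] = S[0]
        shapes[k] = U[:, 0]
    return f, s1, shapes

# Example with synthetic two-channel data (placeholder, not from the article)
rng = np.random.default_rng(0)
acc = rng.standard_normal((2**14, 2))
f, s1, shapes = fdd(acc, fs=100.0)
peak = f[np.argmax(s1)]   # candidate natural frequency in Hz
```

In practice, peaks of the first singular value would be picked around resonances and damping estimated separately, for instance with the enhanced FDD variant.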
Frequency domain and time domain operational modal analysis of structures
The objective of operational modal analysis is to extract resonant frequencies, damping, and/or operating shapes (unscaled mode shapes) of a structure. This method is sometimes called output-only modal analysis because only the response of the structure is measured. The structure might be excited by natural operating conditions, or other excitations might be applied to the structure; however, as long as the operating shapes are not scaled based on the applied force, it is called operational modal analysis (e.g. the operating shapes of a wind turbine blade excited by a shaker are measured using operational modal analysis). This method has been used to extract the operating modes of a hovering helicopter.
Operational modal analysis versus operational deflection shape
The two terms, Operational Modal Analysis and Operational Deflection Shape, are very similar but refer to two different analysis approaches. Both use ambient vibration data as inputs, but in the case of Operational Deflection Shapes, a shape corresponding to the overall vibration response is created. It is based on the vibration amplitude only; there is no attempt to extract a mode shape, and no quantification of the modal damping can be obtained. While Operational Modal Analysis, when the main assumptions are met, yields a representation of a system characteristic in its operating environment, an Operational Deflection Shape simply extracts the system response under the currently applied loads.
Notes
See monographs on non-Bayesian OMA and Bayesian OMA.
See OMA datasets.
See also
Frequency domain decomposition
Bayesian operational modal analysis
Ambient vibrations
Microtremor
Modal analysis
Modal testing
References
Wave mechanics | Operational modal analysis | [
"Physics"
] | 897 | [
"Wave mechanics",
"Waves",
"Physical phenomena",
"Classical mechanics"
] |
41,442,019 | https://en.wikipedia.org/wiki/Bayesian%20operational%20modal%20analysis | Bayesian operational modal analysis (BAYOMA) adopts a Bayesian system identification approach for operational modal analysis (OMA). Operational modal analysis aims at identifying the modal properties (natural frequencies, damping ratios, mode shapes, etc.) of a constructed structure using only its (output) vibration response (e.g., velocity, acceleration) measured under operating conditions. The (input) excitations to the structure are not measured but are assumed to be 'ambient' ('broadband random'). In a Bayesian context, the set of modal parameters are viewed as uncertain parameters or random variables whose probability distribution is updated from the prior distribution (before data) to the posterior distribution (after data). The peak(s) of the posterior distribution represents the most probable value(s) (MPV) suggested by the data, while the spread of the distribution around the MPV reflects the remaining uncertainty of the parameters.
Pros and cons
In the absence of (input) loading information, the identified modal properties from OMA often have significantly larger uncertainty (or variability) than their counterparts identified using free vibration or forced vibration (known input) tests. Quantifying and calculating the identification uncertainty of the modal parameters become relevant.
The advantage of a Bayesian approach for OMA is that it provides a fundamental means via the Bayes' Theorem to process the information in the data for making statistical inference on the modal properties in a manner consistent with modeling assumptions and probability logic.
The potential disadvantage of the Bayesian approach is that the theoretical formulation can be more involved and less intuitive than its non-Bayesian counterparts. Algorithms are needed for efficient computation of the statistics (e.g., mean and variance) of the modal parameters from the posterior distribution. Unlike non-Bayesian methods, the algorithms are often implicit and iterative. For example, optimization algorithms may be involved in the determination of the most probable value, and these may not converge for poor-quality data.
Methods
Bayesian formulations have been developed for OMA in the time domain and in the frequency domain using the spectral density matrix and fast Fourier transform (FFT) of ambient vibration data. Based on the formulation for FFT data, fast algorithms have been developed for computing the posterior statistics of modal parameters. Recent developments based on EM algorithm show promise for simpler algorithms and reduced coding effort. The fundamental precision limit of OMA has been investigated and presented as a set of uncertainty laws which can be used for planning ambient vibration tests.
Connection with maximum likelihood method
Bayesian method and maximum likelihood method (non-Bayesian) are based on different philosophical perspectives but they are mathematically connected; see, e.g., and Section 9.6 of. For example,
Assuming a uniform prior, the most probable value (MPV) of the parameters in a Bayesian method is equal to the location where the likelihood function is maximized, which is the estimate of the maximum likelihood method
Under a Gaussian approximation of the posterior distribution of parameters, their covariance matrix is equal to the inverse of Hessian of the negative log of likelihood function at the MPV. Generally, this covariance depends on data. However, if one assumes (hypothetically; non-Bayesian) that the data is indeed distributed as the likelihood function, then for large data size it can be shown that the covariance matrix is asymptotically equal to the inverse of the Fisher information matrix (FIM) of parameters (which has a non-Bayesian origin). This coincides with the Cramer–Rao bound in classical statistics, which gives the lower bound (in the sense of matrix inequality) of the ensemble variance of any unbiased estimator. Such lower bound can be reached by maximum-likelihood estimator for large data size.
In the above context, for large data size the asymptotic covariance matrix of modal parameters depends on the 'true' parameter values (a non-Bayesian concept), often in an implicit manner. It turns out that by applying further assumptions such as small damping and high signal-to-noise ratio, the covariance matrix has mathematically manageable asymptotic form, which provides insights on the achievable precision limit of OMA and can be used to guide ambient vibration test planning. This is collectively referred as 'uncertainty law'.
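The connection can be illustrated with a small numerical sketch using a deliberately simple stand-in likelihood (a Gaussian model, not the actual OMA likelihood of FFT data): under a flat prior the most probable value is obtained by minimising the negative log-likelihood, and a Gaussian (Laplace) approximation of the posterior takes the covariance as the inverse Hessian of the negative log-likelihood at that minimum, here estimated by finite differences. The model, function names and data are assumptions for the example only.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, data):
    # Toy Gaussian model: theta = (mean, log-std); stands in for the
    # much more involved OMA likelihood of ambient FFT data.
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum((data - mu) ** 2) / sigma**2 + data.size * log_sigma

def laplace_posterior(data, theta0):
    res = minimize(neg_log_likelihood, theta0, args=(data,))
    mpv = res.x                      # most probable value (= MLE for a flat prior)
    # Finite-difference Hessian of the negative log-likelihood at the MPV
    n, h = mpv.size, 1e-4
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (neg_log_likelihood(mpv + e_i + e_j, data)
                       - neg_log_likelihood(mpv + e_i - e_j, data)
                       - neg_log_likelihood(mpv - e_i + e_j, data)
                       + neg_log_likelihood(mpv - e_i - e_j, data)) / (4 * h * h)
    cov = np.linalg.inv(H)           # posterior covariance (Laplace approximation)
    return mpv, cov

data = np.random.default_rng(1).normal(2.0, 0.5, size=1000)
mpv, cov = laplace_posterior(data, theta0=np.array([0.0, 0.0]))
```

For well-identified parameters and long data, this covariance approaches the inverse of the Fisher information matrix, mirroring the asymptotic statement above.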
See also
Operational modal analysis
Bayesian inference
Ambient vibrations
Microtremor
Modal analysis
Modal testing
Notes
See monographs on non-Bayesian OMA and Bayesian OMA
See OMA datasets
See Jaynes and Cox for Bayesian inference in general.
See Beck for Bayesian inference in structural dynamics (relevant for OMA)
The uncertainty of the modal parameters in OMA can also be quantified and calculated in a non-Bayesian manner. See Pintelon et al.
References
Wave mechanics | Bayesian operational modal analysis | [
"Physics"
] | 1,005 | [
"Waves",
"Wave mechanics",
"Physical phenomena",
"Classical mechanics"
] |
41,442,860 | https://en.wikipedia.org/wiki/Benzoxazinone%20biosynthesis | The biosynthesis of benzoxazinone, a cyclic hydroxamate and a natural insecticide, has been well-characterized in maize and related grass species. In maize, genes in the pathway are named using the symbol bx. Maize Bx-genes are tightly linked, a feature that has been considered uncommon for plant genes of a biosynthetic pathways. Especially notable are genes encoding the different enzymatic functions BX1, BX2 and BX8 and which are found within about 50 kilobases. Results from wheat and rye indicate that the cluster is an ancient feature. In wheat the cluster is split into two parts. The wheat genes Bx1 and Bx2 are located in close proximity on chromosome 4 and wheat Bx3, Bx4 and Bx5 map to the short arm of chromosome 5; an additional Bx3 copy was detected on the long arm of chromosome 5B. Recently, additional biosynthetic clusters have been detected in other plants for other biosynthetic pathways and this organization might be common in plants.
Maize genes
The bx1 gene encodes a protein, BX1, that forms indole from indole-3-glycerol phosphate in the plastid. It is the first step in the pathway and determines much of the natural variation in levels of DIMBOA in maize. The next steps in the pathway occur in the endoplasmic reticulum, also referred to as the microsomes in cell fractionation experiments, and are carried out by proteins encoded by the genes bx2, bx3, bx4, and bx5.
References
Biochemistry
Genetics
Biosynthesis | Benzoxazinone biosynthesis | [
"Chemistry",
"Biology"
] | 348 | [
"Genetics",
"Biosynthesis",
"nan",
"Chemical synthesis",
"Biochemistry",
"Metabolism"
] |
41,443,677 | https://en.wikipedia.org/wiki/Poly%28ethylene%20adipate%29 | Poly(ethylene adipate) or PEA is an aliphatic polyester. It is most commonly synthesized from a polycondensation reaction between ethylene glycol and adipic acid. PEA has been studied as it is biodegradable through a variety of mechanisms and also fairly inexpensive compared to other polymers. Its lower molecular weight compared to many polymers aids in its biodegradability.
Synthesis
Polycondensation
Poly(ethylene adipate) can be synthesized through a variety of methods. First, it could be formed from the polycondensation of dimethyl adipate and ethylene glycol mixed in equal amounts and subjected to increasing temperatures (100 °C, then 150 °C, and finally 180 °C) under nitrogen atmosphere. Methanol is released as a byproduct of this polycondensation reaction and must be distilled off. Second, a melt condensation of ethylene glycol and adipic acid could be carried out at 190-200 °C under nitrogen atmosphere. Lastly, a two-step reaction between adipic acid and ethylene glycol can be carried out. A polyesterification reaction is carried out first followed by polycondensation in the presence of a catalyst. Both of these steps are carried out at 190 °C or above. Many different catalysts can be used such as stannous chloride and tetraisopropyl orthotitanate. Generally, the PEA is then dissolved in a small amount of chloroform followed by precipitation out in methanol.
Ring-opening polymerization
An alternate and less frequently used method of synthesizing PEA is ring-opening polymerization. Cyclic can be mixed with di-n-butyltin in chloroform. This requires temperatures similar to melt condensation.
Properties
PEA has a density of 1.183 g/mL at 25 °C and it is soluble in benzene and tetrahydrofuran. PEA has a glass transition temperature of −50 °C. PEA can come in a high molecular weight or low molecular weight variety, i.e. 10,000 or 1,000 Da. Further properties can be broken down into the following categories.
Mechanical properties
In general, most aliphatic polyesters have poor mechanical properties and PEA is no exception. Little research has been done on the mechanical properties of pure PEA but one study found PEA to have a tensile modulus of 312.8 MPa, a tensile strength of 13.2 MPa, and an elongation at break of 362.1%. Alternate values that have been found are a tensile strength of ~10 MPa and a tensile modulus of ~240 MPa.
Chemical properties
IR spectra for PEA show two peaks at 1715–1750 cm−1, another at 1175–1250 cm−1, and a last notable peak at 2950 cm−1. These peaks can be easily determined to be from ester groups, COOC bonds, and CH bonds respectively.
Crystallization properties
PEA has been shown to be able to form both ring-banded and Maltese-cross (or ring-less) type spherulites. Ring-banded spherulites most notably form when crystallization is carried out between 27 °C and 34 °C whereas Maltese-cross spherulites form outside of those temperatures. Regardless of the manner of banding, PEA polymer chains pack into a monoclinic crystal structure (some polymers may pack into multiple crystal structures but PEA does not). The length of the crystal edges are given as follows: a = 0.547 nm, b = 0.724 nm, and c = 1.55 nm. The monoclinic angle, α, is equal to 113.5°. The bands formed by PEA have been said to resemble corrugation, much like a butterfly wing or Pollia fruit skin.
Electrical properties
Conductivity of films made of PEA mixed with salts was found to exceed that of PEO4.5LiCF3SO3 and of poly(ethylene succinate)/LiBF4 suggesting it could be a practical candidate for use in lithium-ion batteries. Notably, PEA is used as a plasticizer and therefore amorphous flows occur at fairly low temperatures rendering it less plausible for use in electrical applications. Blends of PEA with polymers such as poly(vinyl acetate) showed improved mechanical properties at elevated temperatures.
Miscibility
PEA is miscible with a number of polymers including poly(L-lactic acid) (PLLA), poly(butylene adipate) (PBA), poly(ethylene oxide), tannic acid (TA), and poly(butylene succinate) (PBS). PEA is not miscible with low-density polyethylene (LDPE). Miscibility is indicated by the presence of only a single glass transition temperature in a polymer mixture.
Degradability
Biodegradability
Aliphatic copolyesters are well known for their biodegradability by lipases and esterases as well as some strains of bacteria. PEA in particular is well degraded by hog liver esterase, Rh. delemar, Rh. arrhizus, P. cepacia, R. oryzae, and Aspergillus sp. An important property in the speed of degradation is the crystallinity of the polymer. Neat PEA has been shown to have a slightly lower degradation rate than copolymers due to a loss in crystallinity. PEA/poly(ethylene furanoate) (PEF) copolymers at high PEA concentrations were shown to degrade within 30 days while neat PEA had not fully degraded, however, mixtures approaching 50/50 mol% hardly degrade at all in the presence of lipases. Copolymerizing styrene glycol with adipic acid and ethylene glycol can result in phenyl side chains being added to PEA. Adding phenyl side chains increases steric hindrance causing a decrease in the crystallinity in the PEA resulting in an increase in biodegradability but also a notable loss in mechanical properties.
Further work has shown that decreasing crystallinity is more important to degradation carried out in water than whether or not a polymer is hydrophobic or hydrophilic. PEA polymerized with 1,2-butanediol or 1,2-decanediol had an increased biodegradability rate over PBS copolymerized with the same side branches. Again, this was attributed to a greater loss in crystallinity as PEA was more affected by steric hindrance, even though it is more hydrophobic than PBS.
Poly(ethylene adipate) urethane combined with small amounts of lignin can aid in preventing degradation, with the lignin acting as an antioxidant. Additionally, the mechanical properties of the PEA urethane increased with lignin addition. This is thought to be due to the rigid nature of lignin, which aids in reinforcing soft polymers such as PEA urethane.
When PEA degrades, it has been shown that cyclic oligomers are the highest fraction of formed byproducts.
Ultrasonic degradation
Using toluene as a solvent, the efficacy of degrading PEA through ultrasonic sound waves was examined. Degradation of a polymer chain occurs due to cavitation of the liquid leading to scission of chemical chains. In the case of PEA, degradation was not observed due to ultrasonic sound waves. This was determined to be likely due to PEA not having a high enough molar mass to warrant degradation via these means. A low molecular weight has been indicated as being necessary for the biodegradation of polymers.
Applications
Plasticizer
Poly(ethylene adipate) can effectively be used as a plasticizer reducing the brittleness of other polymers. Adding PEA to PLLA was shown to reduce the brittleness of PLLA significantly more than (PBA), (PHA), and (PDEA) but reduced the mechanical strength. The elongation at break was increased approximately 65x over neat PLLA. The thermal stability of PLLA also showed a significant increase with an increasing concentration of PEA.
PEA has also been shown to increase the plasticity and flexibility of the terpolymer maleic anhydride-styrene-methyl methacrylate (MAStMMA). Observing the changes in the thermal expansion coefficient allowed the increase in plasticity of this copolymer blend to be determined.
Mending capabilities
Self-healing polymers are an effective means of healing microcracks caused by an accumulation of stress. Diels-Alder (DA) bonds can be incorporated into a polymer, allowing microcracks to occur preferentially along these weaker bonds. Furyl-telechelic poly(ethylene adipate) (PEAF2) and tris-maleimide (M3) can be combined through a DA reaction in order to bring about self-healing capabilities in PEAF2. PEAF2M3 was found to have some healing capability after 5 days at 60 °C, although evidence of the original cut remained and the original mechanical properties were not fully restored.
Microcapsules for drug delivery
PEA microbeads intended for drug delivery can be made through water/oil/water double emulsion methods. By blending PEA with Poly-ε-caprolactone, beads can be given membrane porosity. Microbeads were placed into a variety of solutions including a synthetic stomach acid, pancreatin, Hank's buffer, and newborn calf serum. The degradation of the microcapsules and therefore the release of the drug was the greatest in newborn calf serum, followed by pancreatin, then synthetic stomach acid, and lastly Hank's buffer. The enhanced degradation in newborn calf serum and pancreatin was attributed to the presence of enzyme activity and that simple ester hydrolysis was able to be carried out. Additionally, an increase in pH is correlated with higher degradation rates.
References
Polymers
Adipate esters
Glycol esters | Poly(ethylene adipate) | [
"Chemistry",
"Materials_science"
] | 2,063 | [
"Polymers",
"Polymer chemistry"
] |
41,449,061 | https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein%20condensation%20of%20quasiparticles | Bose–Einstein condensation can occur in quasiparticles, particles that are effective descriptions of collective excitations in materials. Some have integer spins and can be expected to obey Bose–Einstein statistics like traditional particles. Conditions for condensation of various quasiparticles have been predicted and observed. The topic continues to be an active field of study.
Properties
BECs form when low temperatures cause nearly all particles to occupy the lowest quantum state. Condensation of quasiparticles occurs in ultracold gases and materials. The lower masses of material quasiparticles relative to atoms lead to higher BEC temperatures. An ideal Bose gas has a phase transition when the inter-particle spacing approaches the thermal de Broglie wavelength, $\lambda_{dB} = h/\sqrt{2\pi m k_B T}$. The critical concentration is then $n_c \approx \zeta(3/2)/\lambda_{dB}^3 \approx 2.612/\lambda_{dB}^3$, leading to a critical temperature $T_c = \frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3}$. The particles obey the Bose–Einstein distribution, and below $T_c$ a macroscopic fraction occupies the ground state: $\frac{N_0}{N} = 1 - \left(\frac{T}{T_c}\right)^{3/2}$.
The Bose gas can also be considered in a harmonic trap, $V(r) = \tfrac{1}{2} m \omega^2 r^2$, with the ground state occupancy fraction as a function of temperature: $\frac{N_0}{N} = 1 - \left(\frac{T}{T_c}\right)^{3}$.
This can be achieved by cooling and magnetic or optical control of the system. Spectroscopy can detect shifts in peaks indicating thermodynamic phases with condensation. Quasiparticle BECs can be superfluids. Signs of such states include spatial and temporal coherence and polarization changes. Condensation of excitons in solids was observed in 2005, and of magnons in materials and polaritons in microcavities in 2006. Graphene is another important solid-state system for studies of condensed matter including quasiparticles; it is a two-dimensional electron gas, similar to other thin films.
Excitons
Excitons are electron-hole pairs. Similar to helium-4 superfluidity at the λ-point (2.17 K), a condensate was proposed by Böer et al. in 1961. Experimental phenomena were predicted, leading to various pulsed laser searches that failed to produce evidence. Signs were first seen by Fuzukawa et al. in 1990, but definite detection was published later in the 2000s. Condensed excitons form a superfluid and will not interact with phonons. While normal exciton absorption is broadened by phonons, in the superfluid the absorption degenerates to a line.
Theory
Excitons result from photons exciting electrons and creating holes; the electrons and holes attract and can form bound states. The 1s paraexciton and orthoexciton are possible. The 1s triplet spin state, 12.1 meV below the degenerate orthoexciton states (lifetime ~ns), is decoupled and has a long lifetime to an optical decay. Dilute gas densities (n ~ 10^14 cm^−3) are possible, but paraexciton generation scales poorly, so significant heating occurs in creating high densities (10^17 cm^−3), preventing BECs. Assuming a thermodynamic phase occurs when the separation reaches the de Broglie wavelength ($n\lambda_{dB}^3 \approx 2.612$) gives the critical temperature
$T_c = \frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{2.612}\right)^{2/3}$, where $n$ is the exciton density, $m$ the effective mass (of the order of the electron mass), and $\hbar$ and $k_B$ are the reduced Planck and Boltzmann constants. The density depends on the optical generation rate $G$ and the lifetime $\tau$ as $n = G\tau$. Tuned lasers create excitons which efficiently self-annihilate at a rate proportional to $n^2$, preventing a high-density paraexciton BEC. A potential well limits diffusion, damps exciton decay, and lowers the critical number, yielding an improved critical temperature: in a harmonic trap of frequency $\omega_t$ the critical number scales as $N_c = \zeta(3)\left(\frac{k_B T}{\hbar\omega_t}\right)^{3}$, compared with the $T^{3/2}$ scaling of free particles.
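For orientation, the critical temperature above can be evaluated numerically. The sketch below assumes, purely for illustration, an exciton effective mass equal to the free electron mass (the actual Cu2O paraexciton mass is a few electron masses, which lowers $T_c$) and uses the densities quoted in the text.

```python
import numpy as np
from scipy.constants import hbar, k as k_B, m_e
from scipy.special import zeta

def bec_critical_temperature(n, m):
    """Ideal Bose gas critical temperature for number density n (m^-3) and mass m (kg)."""
    return (2 * np.pi * hbar**2 / (m * k_B)) * (n / zeta(1.5)) ** (2.0 / 3.0)

# Densities quoted in the text, converted from cm^-3 to m^-3; electron mass assumed
for n_cm3 in (1e14, 1e17):
    Tc = bec_critical_temperature(n_cm3 * 1e6, m_e)
    print(f"n = {n_cm3:.0e} cm^-3  ->  T_c = {Tc * 1e3:.1f} mK")
```

With these assumptions the dilute density gives a critical temperature of order tens of millikelvin, consistent with the ultralow temperatures discussed below.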
Experiments
In an ultrapure Cu2O crystal: = 10 s. For an achievable T = 0.01 K, a manageable optical pumping rate of 10^5/s should produce a condensate. More detailed calculations by Keldysh and later by D. Snoke et al. started a large number of experimental searches into the 1990s that failed to detect signs. Pulsed methods led to overheating, preventing condensate states. Helium cooling allows millikelvin setups, and continuous-wave optics improves on pulsed searches. Relaxation explosion of a condensate at a lattice temperature of 354 mK was seen by Yoshioka et al. in 2011. Recent experiments by Stolz et al. using a potential trap have given more evidence at the ultralow temperature of 37 mK. In a parabolic trap with exciton temperature 200 mK and lifetime broadened to 650 ns, the dependence of luminescence on laser intensity has a kink which indicates condensation. The theory of a Bose gas is extended to a mean-field interacting gas by a Bogoliubov approach to predict the exciton spectrum; the kink is considered a sign of transition to BEC. Signs were also seen for a dense gas BEC in a GaAs quantum well.
Magnons
Magnons, electron spin waves, can be controlled by a magnetic field. Densities from the limit of a dilute gas to a strongly interacting Bose liquid are possible. Magnetic ordering is the analog of superfluidity. The condensate appears as the emission of monochromatic microwaves, which are tunable with the applied magnetic field.
In 1999 condensation was demonstrated in antiferromagnetic TlCuCl3, at temperatures as large as 14 K. The high transition temperature (relative to atomic gases) is due to the small mass (close to that of an electron) and the greater achievable density. In 2006, condensation in a ferromagnetic yttrium iron garnet thin film was seen even at room temperature with optical pumping. Condensation was reported in gadolinium in 2011. Magnon BECs have been considered as qubits for quantum computing.
Polaritons
Polaritons, caused by light coupling to excitons, occur in optical cavities, and condensation of exciton-polaritons in an optical microcavity was first published in Nature in 2006. Semiconductor cavity polariton gases transition to ground-state occupation at 19 K. Bogoliubov excitations were seen in polariton BECs in 2008.
The signatures of BEC were observed at room temperature for the first time in 2013, in a large exciton energy semiconductor device and in a polymer microcavity.
Other quasiparticles
Rotons, an elementary excitation in superfluid 4He introduced by Landau, were discussed by Feynman and others. Rotons are predicted to condense at low temperature. Experiments have been proposed and the expected spectrum has been studied, but roton condensates have not been detected. Phonon condensation was first observed in 2004, induced by ultrashort pulses in a bismuth crystal at 7 K.
See also
Bose–Einstein condensate
Bose-Einstein condensation of polaritons
Important publications
References
Bose–Einstein condensates
Quasiparticles | Bose–Einstein condensation of quasiparticles | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,407 | [
"Bose–Einstein condensates",
"Phases of matter",
"Subatomic particles",
"Condensed matter physics",
"Quasiparticles",
"Matter"
] |
41,452,537 | https://en.wikipedia.org/wiki/A%20Slower%20Speed%20of%20Light | A Slower Speed of Light is a freeware video game developed by MIT Game Lab that demonstrates the effects of special relativity by gradually slowing down the speed of light to a walking pace. The game runs on the Unity engine using its open-source OpenRelativity toolkit.
Gameplay
In A Slower Speed of Light, the player controls the ghost of a young child who was killed in an unspecified accident. The child wants to "become one with light", but the speed of light is too fast for the child. This is solved through the use of magic orbs which, as each is collected, slow down the speed of light, until by the end it is at walking speed. These orbs are spread throughout the level. At the beginning of the game, walking around and collecting these orbs is easy; however, as the game progresses, the effects of special relativity become apparent. This gradually increases the difficulty of the game. After collecting all 100 orbs, a portal (as seen in the poster) appears. Entering the portal opens a tab explaining the effects of special relativity and shows the time in which the game was finished. The completion time is displayed both as the actual real-world time and as the time the character experienced, which differ due to the simulated effects of special relativity.
Effects of special relativity
As the game progresses, the light becomes slower, and therefore the effects of special relativity start to become more apparent, increasing the difficulty of the game. These effects include the Doppler Effect (red/blue-shifting of visible light and the shifting of ultraviolet and infrared into the visible spectrum), the Searchlight Effect (increased brightness in the direction of travel), Time Dilation (difference between the passage of time perceived by the player and the outside world), Length Contraction and Terrell Rotation (the perceived warping of the environment at near-light speeds), and the runtime effect (seeing objects in the past because of the speed of light).
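For a sense of the magnitudes involved, the snippet below evaluates the Lorentz factor and the longitudinal relativistic Doppler factor for a walking-speed player once the speed of light has been lowered to a few metres per second. The numerical values are illustrative only, and the code is not taken from the game or from OpenRelativity.

```python
import math

def lorentz_factor(v, c):
    """Time dilation / length contraction factor gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def doppler_factor(v, c, approaching=True):
    """Longitudinal relativistic Doppler factor f_observed / f_source."""
    beta = v / c
    ratio = (1 + beta) / (1 - beta)
    return math.sqrt(ratio) if approaching else math.sqrt(1.0 / ratio)

c_game = 2.0    # slowed-down speed of light in m/s (illustrative value)
v_walk = 1.4    # walking speed in m/s

gamma = lorentz_factor(v_walk, c_game)   # ~1.4: the player's clock runs noticeably slower
shift = doppler_factor(v_walk, c_game)   # ~2.4: light from objects ahead is strongly blue-shifted
```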
OpenRelativity
OpenRelativity is a toolkit designed for use with the proprietary Unity game engine. It was developed by MIT Game Lab during the development of A Slower Speed of Light. The toolkit allows for the accurate simulation of a 3D environment when light is slowed down. It is hosted on GitHub and has been published under the permissive MIT license.
Use in education
A Slower Speed of Light was developed in hopes of being used as an educational tool to explain special relativity in an easy-to-understand fashion. The game is meant to be used as an interactive learning tool for those interested in physics.
See also
Numerical relativity
Special relativity
References
External links
Unofficial Speedrun page
Official game page
2012 video games
Educational games
Freeware games
Linux games
MacOS games
Windows games
Special relativity
Video games developed in the United States | A Slower Speed of Light | [
"Physics"
] | 572 | [
"Special relativity",
"Theory of relativity"
] |
53,046,273 | https://en.wikipedia.org/wiki/Continuing%20airworthiness%20management%20organization | Continuing airworthiness management organisation (CAMO) is a civil aviation organization authorized to schedule and control continuing airworthiness activities on aircraft and their parts
The scope of the CAMO is to organise and manage all documents and publications for Maintenance Organizations Part 145 and Part M approved, like development and management of aircraft maintenance programmes fulfilled. A CAMO must also provide record keeping of maintenance performed. In other words, a CAMO is responsible to the Air Operator Certificate (AOC) holder. EASA has the power to give CAMO second privileges also but not in all cases. These second privileges allow the CAMO to conduct airworthiness review on aircraft, issue (or recommend for issue) Airworthiness Review Certificates and issue 'permit to fly' for maintenance check flights.
General requirements to be met by a CAMO are facilities (offices and documentation storage), a Continuing Airworthiness Management Exposition (CAME) which must be approved by the competent authority of the country or EASA and company procedures (to comply with Part M requirements).
A CAMO can also be the operator of the aircraft.
Personnel required to be employed in a CAMO are the Accountable Manager (who can be the same person for the CAMO and the operator), the Quality Manager (to ensure compliance with all EASA requirements) and appropriately qualified staff for airworthiness management. These personnel must be named in the CAME. If second privileges are held, Airworthiness Review Staff must also be employed.
Like any other aviation organisation a CAMO is audited by authorities and must fulfill all requirements. Findings in audits are categorized in levels.
A Level 1 finding is a serious hazard to flight safety; the approval to operate can be revoked until a satisfactory correction is made.
A Level 2 finding is not an immediate hazard to flight safety, but it must be addressed because it can lead to a Level 1 finding.
References
Aircraft maintenance
Aviation safety organizations
Civil aviation | Continuing airworthiness management organization | [
"Engineering"
] | 384 | [
"Aircraft maintenance",
"Aerospace engineering"
] |
53,046,631 | https://en.wikipedia.org/wiki/Multilinear%20multiplication | In multilinear algebra, applying a map that is the tensor product of linear maps to a tensor is called a multilinear multiplication.
Abstract definition
Let $F$ be a field of characteristic zero, such as $\mathbb{R}$ or $\mathbb{C}$.
Let be a finite-dimensional vector space over , and let be an order-d simple tensor, i.e., there exist some vectors such that . If we are given a collection of linear maps , then the multilinear multiplication of with is defined as the action on of the tensor product of these linear maps, namely
Since the tensor product of linear maps is itself a linear map, and because every tensor admits a tensor rank decomposition, the above expression extends linearly to all tensors. That is, for a general tensor , the multilinear multiplication is
where with is one of 's tensor rank decompositions. The validity of the above expression is not limited to a tensor rank decomposition; in fact, it is valid for any expression of as a linear combination of pure tensors, which follows from the universal property of the tensor product.
It is standard to use the following shorthand notations in the literature for multilinear multiplications: $(B_1, B_2, \ldots, B_d) \cdot \mathcal{A} := (B_1 \otimes B_2 \otimes \cdots \otimes B_d)(\mathcal{A})$ and $B_k \cdot_k \mathcal{A} := (\mathrm{Id}, \ldots, \mathrm{Id}, B_k, \mathrm{Id}, \ldots, \mathrm{Id}) \cdot \mathcal{A}$, where $\mathrm{Id}$ is the identity operator.
Definition in coordinates
In computational multilinear algebra it is conventional to work in coordinates. Assume that an inner product is fixed on and let denote the dual vector space of . Let be a basis for , let be the dual basis, and let be a basis for . The linear map is then represented by the matrix . Likewise, with respect to the standard tensor product basis , the abstract tensor is represented by the multidimensional array . Observe that
where is the jth standard basis vector of and the tensor product of vectors is the affine Segre map . It follows from the above choices of bases that the multilinear multiplication becomes
The resulting tensor lives in .
Element-wise definition
From the above expression, an element-wise definition of the multilinear multiplication is obtained. Indeed, since is a multidimensional array, it may be expressed as where are the coefficients. Then it follows from the above formulae that
where is the Kronecker delta. Hence, if , then
where the are the elements of as defined above.
Properties
Let be an order-d tensor over the tensor product of -vector spaces.
Since a multilinear multiplication is the tensor product of linear maps, we have the following multilinearity property (in the construction of the map):
Multilinear multiplication is a linear map:
It follows from the definition that the composition of two multilinear multiplications is also a multilinear multiplication:
where and are linear maps.
Observe specifically that multilinear multiplications in different factors commute,
if
Computation
The factor-k multilinear multiplication can be computed in coordinates as follows. Observe first that
Next, since
there is a bijective map, called the factor-k standard flattening, denoted by , that identifies with an element from the latter space, namely
where is the jth standard basis vector of , , and is the factor-k flattening matrix of whose columns are the factor-k vectors in some order, determined by the particular choice of the bijective map
In other words, the multilinear multiplication can be computed as a sequence of d factor-k multilinear multiplications, which themselves can be implemented efficiently as classic matrix multiplications.
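As an illustration of the procedure just described, the following NumPy sketch implements the factor-k multiplication by flattening, multiplying, and folding back, and then composes a full multilinear multiplication as a sequence of such products. The function names and the small random test are assumptions for the example, and the final einsum check verifies agreement with the element-wise definition given earlier.

```python
import numpy as np

def mode_k_product(A, B, k):
    """Factor-k multilinear multiplication: apply the matrix B along mode k of tensor A."""
    A_k = np.moveaxis(A, k, 0).reshape(A.shape[k], -1)        # factor-k flattening
    C_k = B @ A_k                                             # classic matrix multiplication
    new_shape = (B.shape[0],) + tuple(np.delete(A.shape, k))
    return np.moveaxis(C_k.reshape(new_shape), 0, k)          # fold back to a tensor

def multilinear_multiply(A, matrices):
    """Apply (B_1, ..., B_d) to A as a sequence of factor-k products."""
    for k, B in enumerate(matrices):
        A = mode_k_product(A, B, k)
    return A

# Tiny check against the element-wise definition for a random order-3 example
A = np.random.default_rng(0).standard_normal((2, 3, 4))
Bs = [np.random.default_rng(i).standard_normal((5, n)) for i, n in enumerate(A.shape, 1)]
T1 = multilinear_multiply(A, Bs)
T2 = np.einsum('ia,jb,kc,abc->ijk', *Bs, A)
assert np.allclose(T1, T2)
```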
Applications
The higher-order singular value decomposition (HOSVD) factorizes a tensor given in coordinates as the multilinear multiplication , where are orthogonal matrices and .
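Continuing the sketch above (it reuses mode_k_product and multilinear_multiply from the previous block), a plain HOSVD can be obtained by taking the left singular vectors of each factor-k flattening and applying their transposes to the tensor. This is an illustrative implementation under those assumptions, not a reference one.

```python
def hosvd(A):
    """Higher-order SVD: A equals the core tensor multiplied along each mode by U_k."""
    Us = []
    for k in range(A.ndim):
        A_k = np.moveaxis(A, k, 0).reshape(A.shape[k], -1)    # factor-k flattening
        U, _, _ = np.linalg.svd(A_k, full_matrices=False)
        Us.append(U)
    core = multilinear_multiply(A, [U.T for U in Us])         # all-orthogonal core tensor
    return core, Us

core, Us = hosvd(A)
A_rec = multilinear_multiply(core, Us)                        # exact reconstruction
assert np.allclose(A, A_rec)
```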
Further reading
Tensors
Multilinear algebra | Multilinear multiplication | [
"Engineering"
] | 735 | [
"Tensors"
] |
70,105,931 | https://en.wikipedia.org/wiki/Cole%E2%80%93Davidson%20equation | The Cole-Davidson equation is a model used to describe dielectric relaxation in glass-forming liquids. The equation for the complex permittivity is
where is the permittivity at the high frequency limit, where is the static, low frequency permittivity, and is the characteristic relaxation time of the medium. The exponent represents the exponent of the decay of the high frequency wing of the imaginary part, .
The Cole–Davidson equation is a generalization of the Debye relaxation keeping the initial increase of the low frequency wing of the imaginary part, $\varepsilon''(\omega) \sim \omega$ for $\omega\tau \ll 1$. Because this is also a characteristic feature of the Fourier transform of the stretched exponential function, it has been considered as an approximation of the latter, although nowadays an approximation by the Havriliak–Negami function or exact numerical calculation may be preferred.
Because the slopes of the peak of $\varepsilon''(\omega)$ in double-logarithmic representation are different, it is considered an asymmetric generalization, in contrast to the Cole–Cole equation.
The Cole–Davidson equation is the special case of the Havriliak–Negami relaxation with $\alpha = 1$.
The real and imaginary parts are
$$\varepsilon'(\omega) = \varepsilon_\infty + \Delta\varepsilon\,\cos(\beta\varphi)\,\cos^{\beta}(\varphi)$$
and
$$\varepsilon''(\omega) = \Delta\varepsilon\,\sin(\beta\varphi)\,\cos^{\beta}(\varphi),$$
where $\varphi = \arctan(\omega\tau)$.
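A short numerical sketch of these expressions is given below; the parameter values are illustrative, and the final check confirms that the real and imaginary parts agree with the complex form $\varepsilon_\infty + \Delta\varepsilon/(1+i\omega\tau)^{\beta}$ under the convention $\hat\varepsilon = \varepsilon' - i\varepsilon''$.

```python
import numpy as np

def cole_davidson(omega, eps_inf, eps_s, tau, beta):
    """Real and imaginary parts of the Cole-Davidson permittivity."""
    phi = np.arctan(omega * tau)
    delta_eps = eps_s - eps_inf
    eps_real = eps_inf + delta_eps * np.cos(beta * phi) * np.cos(phi) ** beta
    eps_imag = delta_eps * np.sin(beta * phi) * np.cos(phi) ** beta
    return eps_real, eps_imag

omega = np.logspace(-3, 3, 601)     # frequencies in units of 1/tau (illustrative)
re, im = cole_davidson(omega, eps_inf=2.0, eps_s=10.0, tau=1.0, beta=0.5)

# Cross-check against the complex form eps_inf + delta_eps / (1 + i*omega*tau)**beta
eps_hat = 2.0 + 8.0 / (1.0 + 1j * omega * 1.0) ** 0.5
assert np.allclose(re, eps_hat.real) and np.allclose(im, -eps_hat.imag)
```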
See also
Debye relaxation
Cole-Cole relaxation
Havriliak–Negami relaxation
Curie–von Schweidler law
References
Equations
Glass
Liquids
Electric and magnetic fields in matter | Cole–Davidson equation | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 265 | [
"Glass",
"Unsolved problems in physics",
"Phases of matter",
"Electric and magnetic fields in matter",
"Mathematical objects",
"Homogeneous chemical mixtures",
"Equations",
"Materials science",
"Condensed matter physics",
"Amorphous solids",
"Matter",
"Liquids"
] |
49,162,954 | https://en.wikipedia.org/wiki/Quick%20return%20mechanism | A quick return mechanism is an apparatus to produce a reciprocating motion in which the time taken for travel in return stroke is less than in the forward stroke. It is driven by a circular motion source (typically a motor of some sort) and uses a system of links with three turning pairs and a sliding pair. A quick-return mechanism is a subclass of a slider-crank linkage, with an offset crank.
Quick return is a common feature of tools in which the action is performed in only one direction of the stroke, such as shapers and powered saws, because it allows less time to be spent on returning the tool to its initial position.
History
During the early-nineteenth century, cutting was done with hand tools and cranks and was often slow. Joseph Whitworth changed this by creating the quick return mechanism in the mid-1800s. Using kinematics, he determined that the force and geometry of the rotating joint would affect the force and motion of the connected arm. From an engineering standpoint, the quick return mechanism impacted the technology of the Industrial Revolution by reducing the time spent on the unproductive return stroke of each revolution, thus reducing the amount of time needed for a cut or press.
Applications
Quick return mechanisms are found throughout the engineering industry in different machines:
Shaper
Screw press
Power-driven saw
Mechanical actuator
Revolver mechanisms
Design
The disc influences the force of the arm, which makes up the frame of reference of the quick return mechanism. The frame continues to an attached rod, which is connected to the circular disc. Powered by a motor, the disc rotates and the arm follows in the same direction (linear and left-to-right, typically) but at a different speed. When the disc nears a full revolution, the arm reaches its furthest position and returns to its initial position at a quicker rate, hence its name. Throughout the cut, the arm has a constant velocity. Upon returning to its initial position after reaching its maximum horizontal displacement, the arm reaches its highest velocity.
The quick return mechanism was modeled after the crank and slider (arm), and this is present in its appearance and function; however, the crank is usually hand powered and the arm has the same rate throughout an entire revolution, whereas the arm of a quick return mechanism returns at a faster rate. The "quick return" allows for the arm to function with less energy during the cut than the initial cycle of the disc.
Specifications
When using a machine that involves this mechanism, it is important not to force the machine beyond its maximum stress capacity; otherwise, the machine will break. The durability of the machine is related to the size of the arm and the velocity of the disc, since the arm might not be flexible enough to handle a certain speed. Creating a graphical layout for a quick return mechanism involves all inversions and motions, which is useful in determining the dimensions for a functioning mechanism. A layout specifies the dimensions of the mechanism by highlighting each part and its interactions within the system. These interactions include torque, force, velocity, and acceleration. By relating these concepts to their respective analyses (kinematics and dynamics), one can comprehend the effect each part has on another.
Mechanics
In order to derive the force vectors of these mechanisms, one must approach a mechanical design consisting of both kinematic and dynamic analyses.
Kinematic Analysis
Breaking the mechanism up into separate vectors and components allows us to create a kinematic analysis that can solve for the maximum velocity, acceleration, and force the mechanism is capable of in three-dimensional space. Most of the equations involved in the quick return mechanism setup originate from Hamilton's principle.
The position of the arm can be found at different times using the substitution of Euler's formula, $e^{i\theta} = \cos\theta + i\sin\theta$,
into the different components that have been pre-determined, according to the setup.
This substitution can solve for various radii and components of the displacement of the arm at different values. Trigonometry is needed for the complete understanding of the kinematic analyses of the mechanism, where the entire design can be transcribed onto a plane layout, highlighting all of the vector components.
An important concept for the analysis of the velocity of the disc relative to the arm is the angular velocity of the disc, $\omega = \dfrac{d\theta}{dt}$.
If one desires to calculate the velocity, one must derive the angles of interaction at a single moment of time, making this equation useful.
Dynamic Analysis
In addition to the kinematic analysis of a quick return mechanism, there is a dynamic analysis present. At certain lengths and attachments, the arm of the mechanism can be evaluated and then adjusted to certain preferences. For example, the differences in the forces acting upon the system at an instant can be represented by D'Alembert's principle. Depending on the structural design of the quick return mechanism, the law of cosines can be used to determine the angles and displacements of the arm. The ratio between the working stroke (engine) and the return stroke can be simplified through the manipulation of these concepts.
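As a worked example of this stroke ratio, the sketch below computes the working-to-return time ratio for the common crank-and-slotted-lever (shaper) layout at constant crank speed; the crank radius and pivot distance are illustrative values, not taken from the article.

```python
import math

def quick_return_time_ratio(crank_radius, centre_distance):
    """Time ratio (cutting stroke / return stroke) for a crank-and-slotted-lever
    quick return mechanism, assuming constant crank angular speed."""
    if not 0 < crank_radius < centre_distance:
        raise ValueError("crank radius must be smaller than the pivot distance")
    theta = math.asin(crank_radius / centre_distance)   # half-angle at the lever pivot
    cutting_angle = math.pi + 2 * theta                  # crank arc swept in the slow stroke
    return_angle = math.pi - 2 * theta                   # crank arc swept in the quick return
    return cutting_angle / return_angle

# Example: 100 mm crank on a 300 mm pivot distance gives a ratio of about 1.55
ratio = quick_return_time_ratio(0.100, 0.300)
```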
Despite similarities between quick return mechanisms, there are many different possibilities for the outline of all forces, speeds, lengths, motions, functions, and vectors in a mechanism.
See also
References
Mechanisms (engineering)
Mechanical power transmission | Quick return mechanism | [
"Physics",
"Engineering"
] | 1,061 | [
"Mechanical power transmission",
"Mechanics",
"Mechanical engineering",
"Mechanisms (engineering)"
] |
49,167,470 | https://en.wikipedia.org/wiki/Match%20Analysis | Match Analysis is a US company with headquarters in Emeryville, California. The company employs 70 staff in their offices and data collection facilities in California and Mexico City, Mexico.
The company provides video analysis tools and digital library archiving services supplying performance and physical tracking data to football (soccer) coaches, teams, and players. The objective is to improve individual and team performance and/or analyze opposition patterns of play to give tactical advantage.
Match Analysis records and verifies over 2,500 distinct events per football match with every touch by every player catalogued, synchronized against video feeds, and stored in a searchable video database.
History
Match Analysis was founded in 2000 by Mark Brunkhart, its current President, after he developed a system to help amateur football players see the game objectively.
The system evolved from a collection of printed reports and info graphics into video analysis software and statistical data tools supplied to professional and amateur football teams, governing bodies/professional organizations and media partners around the world.
Match Analysis is one of the pioneers of statistical analysis in football. In 2002, the company released Mambo Studio, the first video editing and retrieval system for football. In 2004, Tango Online was launched to replace printed reports with the first instant access online video database of a complete league.
In May 2012, Match Analysis acquired Edinburgh based Spinsight Ltd purchasing the intellectual property and other assets relating to its K2 Panoramic Video Camera System.
Match Analysis signed strategic alliances with Major League Soccer and Liga MX in 2013.
In addition Match Analysis's K2 Panoramic Video Camera System was implemented in every stadium across Major League Soccer and Liga MX in the summer of 2013.
During November 2015, Match Analysis participated in discussions with IFAB and FIFA at their headquarters in Zürich, Switzerland to advise on global standards for electronic performance and tracking systems.
In May 2016, Match Analysis announced the introduction of Tango VIP their new foundational technology platform for their extensive online presence.
Products
Match Analysis tools and services provide video indexing and archiving, statistical analysis, live data collection, player tracking, fitness reports, and performance analysis.
The company's product range includes Mambo Studio, K2 Panoramic Video, TrueView Visualizations, Tango Online, Tango Live, Tango ToGo, Player Tracking and Fitness Reports.
Clients
The company has worked with eight different national teams including Germany, the United States, and Mexico and has relationships with over 50 professional clubs. Match Analysis currently supports league-wide deals with Major League Soccer and Liga MX.
Over the past decade, Match Analysis has worked with almost every major professional club in North America and media outlets including the New York Times World Cup coverage.
Current Match Analysis clients include all 18 Liga MX clubs in Mexico, 17 MLS clubs, the Mexico national team, PRO Professional Referee Organization and a wide array of college and amateur sides.
References
External links
Association football equipment
Motion in computer vision
Tracking | Match Analysis | [
"Physics",
"Technology"
] | 582 | [
"Physical phenomena",
"Wireless locating",
"Tracking",
"Motion (physics)",
"Motion in computer vision"
] |
49,172,235 | https://en.wikipedia.org/wiki/Camp%20Thomas%20A.%20Scott | Camp Thomas A. Scott, located in Fort Wayne, Indiana, was a Railway Operating Battalion training center for the Pennsylvania Railroad from 1942 to 1944 and a prisoner of war camp during World War II. It was named for Thomas A. Scott, who served as the fourth president of the Pennsylvania Railroad from 1874 to 1880. As the United States Assistant Secretary of War in 1861, Scott was instrumental in using railroads for military purposes during the American Civil War.
Pennsylvania Railroad Training Center
Camp Scott was built in August 1942 as a training camp for U.S. Army Railway Operating Battalions. This made sense because Fort Wayne was a major hub for the Pennsylvania Railroad, and Camp Scott was constructed adjacent to Pennsylvania Railroad lines. The 717th, the 730th, and the 750th Railway Operating Battalions were all trained on Pennsylvania Railroad lines in Fort Wayne. The last battalion was deployed from Camp Scott in mid-1944.
Prisoner of war camp
Camp Scott was a branch camp of Camp Perry in Ohio. Camp Scott housed approximately 600 prisoners of war. Most of these prisoners were German and had served in the Afrika Korps, although some were Italians captured at the Battle of Anzio and the Battle of Monte Cassino in Italy. Like the rest of the United States, Fort Wayne suffered labor shortages due to wartime enlistment, and prisoners from Camp Scott were put to work in Fort Wayne and surrounding areas of Allen County, Indiana. Prisoners weeded and harvested potatoes for local farmers, cleared snow from Fort Wayne streets, and set pins at a local bowling alley. Following VE Day, the prisoners were gradually repatriated, and Camp Scott officially closed on November 16, 1945.
Uses after 1945
Camp Scott sat dormant until January 1946, when the Fort Wayne Housing Authority began the process of converting camp buildings into much-needed housing for returning American veterans and their families. In the years following, more housing was built in Fort Wayne, and the families living at Camp Scott gradually relocated to other homes. Camp Scott served as a temporary housing facility until August 1949. Over the next decades, the buildings were torn down, with the last building being demolished in 1977.
The City of Fort Wayne converted some of the land on which Camp Scott stood into a constructed wetland. It also serves as a facility for storing and treating stormwater run-off.
References
Eastes, Erick E. "'A By-Product of War': A History of Camp Thomas A. Scott 1942-1949" Old Fort News 49.2 (1986).
Hawfield, Michael. "World War II Camp Had Impact on City" The News-Sentinel 15 December 1990.
Camp Thomas A. Scott - Fort Wayne, Indiana - World War II Prisoner of War Camps on Waymarking.com
http://explorepahistory.com/story.php?storyId=1-9-10&chapter=1
World War II prisoner-of-war camps in the United States
Constructed wetlands
1942 establishments in Indiana
1945 disestablishments in Indiana
Buildings and structures in Fort Wayne, Indiana | Camp Thomas A. Scott | [
"Chemistry",
"Engineering",
"Biology"
] | 615 | [
"Bioremediation",
"Constructed wetlands",
"Environmental engineering"
] |
68,611,033 | https://en.wikipedia.org/wiki/James%20Marrow | Thomas James Marrow (born 23 November 1966) is a British scientist who is a professor of nuclear materials at the University of Oxford and holds the James Martin Chair in Energy Materials. He specialises in physical metallurgy, micromechanics, and X-ray crystallography of engineering materials, mainly ceramic matrix composite and nuclear graphite.
Biography
Early life and education
James Marrow was born on 23 November 1966 in Bromborough, Wirral to John Williams Marrow and Mary Elizabeth Marrow. He attended Wirral Grammar School for Boys, then graduated with a 1st Class Honours Master of Arts (M.A) in Natural Sciences (Materials Science) from the University of Cambridge in 1988, where he was a student at Clare College, Cambridge before pursuing and completing a Doctor of Philosophy degree in 1991. During his PhD, he studied the Fatigue mechanisms in embrittled duplex stainless steel and was supervised by Julia King.
Career
From 1992 to 1993, Marrow was appointed as postdoctoral research associate in the Department of Materials, University of Oxford, and a junior research fellow at Linacre College, Oxford, but moved with an Engineering and Physical Sciences Research Council (EPSRC) postdoctoral research fellowship to the School of Metallurgy and Materials, University of Birmingham. In 2001, he joined the Manchester Materials Science Centre, University of Manchester, as senior lecturer in physical metallurgy, where he became assistant director of Materials Performance Centre in 2002 and the director in 2009.
Marrow moved to the University of Oxford to become Oxford Martin School co-director of the school programme in Nuclear and Energy Materials from 2010 to 2015, Professor in Energy Materials, Department of Materials, Oxford University, and Fellow of Mansfield College, Oxford. Marrow is the Associate Head of the Department of Materials (Teaching).
Marrow is a council member of the UK Forum for Engineering Structural Integrity (FESI), UK representative for the European Energy Research Alliance Joint Programme on Nuclear Materials, member (ex-chair) of the OECD-NEA Expert Group on Innovative Structural Materials, independent advisor to the UK Office of Nuclear Regulation on materials/structural integrity, and UK representative on Graphite for BEIS to the Generation IV International Forum. Marrow is the co-director of the Nuclear Research Centre (NRC), which is a joint venture between the University of Bristol and the University of Oxford to train new nuclear scientists and engineers.
Personal life
Marrow married Daiva Kojelyte in 1998 and he is a father of a son and a daughter.
Research
Marrow's research focuses on the degradation of structural materials, the role of microstructure, and the mechanisms of materials ageing. A key aspect is the investigation of fundamental mechanisms of damage accumulation - including irradiation - using novel materials characterisation techniques. This has concentrated recently on computed X-ray tomography and strain mapping by digital image correlation and digital volume correlation, together with X-ray and neutron diffraction. He applies these techniques to study the degradation of Generation IV nuclear materials such as graphite and silicon carbide composites, as well as new materials for electrical energy storage.
Public engagement
Marrow is part of I'm a Scientist, Get me out of here! energy generation zone. He has also been a key developer and academic consultant for the Dissemination of IT for the Promotion of Materials Science (DoITPoMS). Global Cycling Network Tech (GCN Tech) interviewed Marrow about carbon fibre fatigue and strain in 2022.
See also
References
British materials scientists
Academics of the University of Oxford
1966 births
Living people
British nuclear engineers
Metallurgists
Alumni of the University of Cambridge | James Marrow | [
"Chemistry",
"Materials_science"
] | 727 | [
"Metallurgists",
"Metallurgy"
] |
68,611,074 | https://en.wikipedia.org/wiki/Joe%20Murphy%20%28contractor%29 | John "Joe" Murphy (1917 – 2 August 2000) was an Irish civil engineering contractor. In his early life he worked as a police officer in the Garda Síochána but moved to England in 1945 to work in construction with his brother John Murphy. After ten years Joe established his own company, Murphy Limited, which became known as "Grey Murphy" to distinguish it from his brother's "Green Murphy" (J Murphy and Sons). Grey Murphy specialised in below-ground works, while Green Murphy specialised in above-ground works. Grey Murphy did well during the 1960s building boom and grew to become one of the largest Irish-owned construction firms.
Early life
Joe Murphy was born as John Murphy in Cahersiveen, County Kerry in Ireland in 1917. He was educated at a national school at Knockeen, County Waterford, before joining the Garda Síochána (police service). Murphy travelled to England in 1945 to join his older brother, who was working in the construction industry there. Murphy's brother had adopted the name John when he arrived in England so Murphy adopted "Joseph" to avoid confusion. The pair worked as labourers before setting up their own sub-contractor. One of the firm's first projects was to remove hazards to shipping on the Dover-Calais route in the English Channel.
Grey Murphy
After 10 years in partnership with his brother, Murphy left the firm to establish his own company specialising in cable-laying. Joe Murphy's company was Murphy Limited (under JMCC Holdings), while his brother's firm was J Murphy and Sons. The firms distinguished from each other as "Grey Murphy" and "Green Murphy" respectively, for the colours of their company vehicles. Grey Murphy tended to focus on below-ground work and Green Murphy on above-ground. At one stage the two companies accounted for 10% of the UK construction market.
Murphy's company, as well as other Irish-owned contractors, did well during the 1960s building boom. Murphy became a member of the fashionable Irish Club in Eaton Square, London, which became a centre for industry gossip. Murphy and his company became renowned for valuing good workmanship, for paying high wages and for employing Irish nationals. His friend, the actor Joe Lynch, claimed that Murphy employed more Irishmen than any firm based in Ireland.
Grey Murphy's Irish subsidiary, JMSE, was accused of bribing the senior politician Ray Burke. The company was investigated by the Flood Tribunal established in 1997. Murphy was not called in front of the tribunal due to ill health but was interviewed by them in Guernsey, though no action was taken against Murphy or the company.
Personal life
Murphy's wife died in 1962, leaving him to raise a daughter and a three-month-old son. Murphy remarried in 1968, to the sister of his late wife; they had no further children. In the 1960s Murphy and his brother invested heavily in the Isle of Man-based bank the International Finance and Trust Corporation. The bank collapsed and both men lost millions of pounds. Murphy was briefly in crisis, but eventually recovered around 80% of his investment. Murphy later moved to Guernsey and lived there as a tax exile.
Murphy's second wife died in 1991. Murphy died of cancer at home in Guernsey on 2 August 2000. At the time his company was one of the top 10 largest Irish-owned building firms; Murphy was personally worth £36 million. The company entered administration in 2013 and was closed down.
References
1917 births
2000 deaths
People from Cahersiveen
Civil engineering contractors
20th-century Irish businesspeople
Irish emigrants to the United Kingdom | Joe Murphy (contractor) | [
"Engineering"
] | 733 | [
"Civil engineering",
"Civil engineering contractors"
] |
68,611,391 | https://en.wikipedia.org/wiki/Aquamarine%20%28gem%29 | Aquamarine is a pale-blue to light-green variety of the beryl family, with its name relating to water and sea. The color of aquamarine can be changed by heat, with a goal to enhance its physical appearance (though this practice is frowned upon by collectors and jewelers). It is the birth stone of March.
Aquamarine is a fairly common gemstone, rendering it more accessible for purchase, compared to other gems in the beryl family. Overall, its value is determined by weight, color, cut, and clarity.
It is transparent to translucent and possesses a hexagonal crystal system. Aquamarine mainly forms in granite pegmatites and hydrothermal veins, in a very lengthy process that can take millions of years.
Aquamarine occurs in many countries over the world, and is most commonly used for jewelry, decoration and its properties.
Aquamarine is mainly extracted through open-pit mining, however underground mining is also a possibility to access aquamarine reserves.
Aquamarine is a durable gemstone, but it is highly recommended to conserve it on its own to prevent damage/scratches.
Famous aquamarines include the Dom Pedro, the Roosevelt Aquamarine, the Hirsch Aquamarine, Queen Elizabeth's Tiara, Meghan Markle's ring, and the Schlumberger bow.
Name and etymology
The name aquamarine comes from aqua, the Latin word for water, and marine, deriving from the Latin marina, "of the sea". The word aquamarine was first used in the year 1677.
The word aquamarine has been used as a modifier for other minerals like aquamarine tourmaline, aquamarine emerald, aquamarine chrysolite, aquamarine sapphire, or aquamarine topaz.
Physical properties
Aquamarine is blue with hues of green, caused by trace amounts of iron found within the crystal structure. It can vary from pale to vibrant and transparent to translucent. Better transparency in aquamarine gemstones means that light may go through the crystal with less interference. The hexagonal crystal system is where aquamarine crystallizes. It forms prismatic crystals with a hexagonal cross-section. These crystals can be microscopic to enormous in size and frequently feature faces with vertical striating. The lustre of aquamarine ranges from vitreous to resinous. It can have a glass-like brilliance and a sheen when cut and polished correctly.
Chemical composition
Aquamarine has a chemical composition of Be3Al2Si6O18, also containing Fe2+. It belongs to the beryl family, being a beryllium aluminum silicate mineral. It is closely related to emerald, morganite, and heliodor. Aquamarine is chemically stable and resistant to most common chemicals and acids. It has a hardness of 7.5–8 on the Mohs scale of mineral hardness, which makes it a very suitable gem for everyday wear. While aquamarine often contains no inclusions, it may possess them, with content such as mica, hematite, saltwater, biotite, rutile or pyrite.
Geological formation
Aquamarine mainly forms in granite pegmatites (coarse-grained igneous rock) and hydrothermal veins. The remaining liquid that is left behind after granitic magma crystallizes is what gives rise to pegmatites. The residual fluids, which are rich in volatile elements and minerals such as silicon, aluminum, and beryllium, concentrate when the magma cools and solidifies.
Aquamarine may also be formed by hydrothermal fluids, which are hot, mineral-rich solutions. These liquids contain dissolved minerals and metals as they move through fissures and cavities in the crust of the Earth. Fractures, faults, and veins are just a few of the geological environments that hydrothermal systems can be linked to.
Beryllium is a necessary component for the production of aquamarine, a type of beryl. Although beryllium is a relatively uncommon element in the crust of the Earth, it can be found in concentrated forms in some geological settings. These include beryllium-rich hydrothermal systems and granite pegmatites, which contain large amounts of beryllium-bearing minerals.
The dissolved elements start to precipitate out of the solution and form crystals as the hydrothermal fluids cool and come into contact with the right minerals and circumstances. Crystals of beryl, which include aquamarine, begin to form in pegmatite veins and host rock fissures or cavities. Aquamarine crystals grow over long periods, which enables them to take on their distinctive hexagonal prismatic shape.
This is a very long process that can take millions of years to form. The settings in which aquamarine forms can vary and may lead to variations in gem quality, size, and color.
Value
The value of aquamarine is determined by its weight, color, cut, and clarity. Due to its relative abundance, aquamarine is comparatively less expensive than other gemstones within the beryl group, such as emerald or bixbite (red beryl), however it is typically more expensive than similarly colored gemstones such as blue topaz. Maxixe is a rarer variant of aquamarine, with its deep blue coloration, however, its color can fade due to sunlight. The color of maxixe is caused by NO3. Dark-blue maxixe color can be produced in green, pink or yellow beryl by irradiating it with high-energy radiation (gamma rays, neutrons or even X-rays). Naturally occurring blue hued aquamarine specimens are more expensive than those that have undergone heat treatment to reduce yellow tones caused by ferric iron. Cut aquamarines that are over 25 carats will have a lower price per carat than smaller ones of the same quality. Overall, the quality and color will vary depending on the source of the gem.
In culture
Aquamarine is the birth stone for the month of March. It has historically been used as a symbol for youth and happiness due to its color, which, along with its name, has made Western culture connect it with the ocean. Ancient tales have claimed that aquamarine came from the treasure chests of mermaids, which led sailors to use this gemstone as a lucky charm to protect against shipwreck. Additionally, ancient Romans believed this stone had healing properties, due to the stone being almost invisible when submerged in water.
The Chinese used it to make seals, and showpiece dolls. The Japanese used it to make netsuke.
The Egyptians, Greeks, Hebrews, and Sumerians all believed that aquamarine stones were worn by the High Priest of the Second Temple. It was said that these stones were engraved to represent the six tribes of Israel. Greeks also engraved designs into aquamarine 2 thousand years ago and turned them into intaglios.
In our modern era, aquamarine is mainly used for jewelry, decoration and its properties. It can be cut and shaped into rings, earrings, necklaces, and bracelets.
Aquamarine became a state gem for Colorado in 1971.
Occurrence
Aquamarine can be found in countries like Afghanistan, China, Kenya, Pakistan, Russia, Mozambique, the United States, Brazil, Nigeria, Madagascar, Zambia, Tanzania, Sri Lanka, Malawi, India, Zimbabwe, Australia, Myanmar, and Namibia. The state of Minas Gerais is a major source for aquamarine.
Aquamarine can mostly be found in granite pegmatites. It can also be found in veins of metamorphic rocks that became mineralized by hydrothermal activity.
The largest known example is the Dom Pedro aquamarine, found in Pedra Azul, Minas Gerais, Brazil, in the late 1980s. It weighs roughly 4.6 pounds, cut from a 100-pound aquamarine crystal, and measures 10,363 carats. It resides in the National Museum of Natural History in Washington.
Mining and extraction
The initial stages of the aquamarine mining process involve prospecting and exploration. Finding prospective locations or regions with aquamarine reserves is necessary. Geological mapping, remote sensing, sampling, and other methods are used by geologists and mining firms to locate potentially aquamarine-containing geological formations and structures. Preparation of the site is the next step, which includes removing any vegetation, leveling the land, and constructing the facilities - such as access roads and workspaces. It is possible to mine aquamarine using both open-pit and underground techniques. This will depend on the size of the operation, the features of the deposit, and environmental conditions.
The most popular technique for extracting aquamarine on a large scale is open-pit mining. In order to reveal the aquamarine-bearing ore, the soil, vegetation, and rock cover must be removed. The ore is extracted using trucks, bulldozers, and excavators, to remove the material.
Underground mining may occasionally be used to obtain aquamarine reserves. This process entails digging shafts and tunnels to reach the ore bodies or veins that contain gems. When the aquamarine deposit is deep or the surrounding rock is too hard for open-pit extraction, underground mining is used, even though it can be more difficult and expensive than open-pit mining.
After extraction, the ore containing aquamarine is delivered to a processing plant. To extract the aquamarine crystals from the surrounding rock and other minerals, the ore is crushed, processed, and occasionally cleaned. The aquamarine can be concentrated and purified using a variety of methods, such as magnetic separation, froth flotation, and gravity separation.
The aquamarine crystals are then sorted according to size, shape, color, and clarity following the initial processing. The gemstones are assessed and graded by gemologists and experts according to predetermined standards, such as the four C's (color, clarity, cut, and carat weight). Only the best aquamarine crystals are chosen to be used in jewelry made of gemstones.
Care and maintenance
Aquamarine is classified as a durable gem, however, it may still be damaged. In storage, it is advised to place it on its own, without the interruption of other gemstones to prevent scratches. Warm soapy water and a soft brush are the best ways to clean this gemstone, however, ultrasonic cleaners are relatively safe for aquamarine.
Alternative uses
Although aquamarine is mainly used for jewelry, aquamarine powder has proven to be a beneficial ingredient in cosmetics. It has a binding and skin protecting function that ensures protection of the skin from external influences.
Notable examples
See also
List of gemstones
List of minerals
References
Gemstones
Beryl group | Aquamarine (gem) | [
"Physics"
] | 2,162 | [
"Materials",
"Gemstones",
"Matter"
] |
68,613,853 | https://en.wikipedia.org/wiki/Leticia%20Myriam%20Torres%20Guerra | Leticia Myriam Torres Guerra (born September 9, 1955) is a Mexican chemist.
Her research work focuses on the development and synthesis of advanced materials such as semiconductors and their application as powders and films in renewable energy and sustainable decontamination projects.
In 2005, she was appointed head of the Faculty of Civil Engineering's Department of Ecomaterials and Energy at the Autonomous University of Nuevo León (UANL). As of 2019, she is the general director of the .
Biography
Leticia Myriam Torres Guerra was born in Monterrey on September 9, 1955. She graduated from UANL with a licentiate in industrial chemistry in 1976. She earned her doctorate in advanced ceramic materials at the University of Aberdeen in 1984. In 1985, she began her work as a research professor at UANL's Faculty of Chemical Sciences, and went on to receive the university's research award 15 times by 2010. She became a Level 3 member of the Sistema Nacional de Investigadores in 1986, the only woman to do so for ten years.
Other positions she has held are deputy director of research of the UANL Faculty of Chemical Sciences from 1995 to 2001, and deputy director of scientific and technological development of the National Council of Science and Technology (CONACYT) from 2011 to 2013. During 2014 and 2015 she was a certified leader in renewable energies and energy efficiency at Harvard University. She has been a member of the Mexican Academy of Sciences since 1999, the Mexican Materials Society since 2009, and the International Union of Materials Research Societies since 2017. She is on four committees of Mexico's .
Torres founded the Center for Research and Development of Ceramic Materials (active from 1990 to 1995) at UANL's Faculty of Chemical Sciences. She has carried out technological developments in collaboration with the industrial sector, including an agreement with the Vitro Group in 1996 to teach a master of science program with a specialty oriented to glass, and one with Cemex to implement a special UNI-EMPRESA scholarship program.
In 2019, she was named general director of the .
Research
Torres' work has focused on materials science; she began her research with the synthesis of advanced ceramic materials and crystal chemistry. Her most notable scientific investigations have focused on the synthesis and modification of semiconductors such as titanates, tantalates, and zirconates of alkali and alkaline earth metals for decontamination of air, soil, and water through photocatalysis, as well as their use in hydrogen production. The materials developed in her work group have shown high photoelectrocatalytic efficiency, allowing the development of prototypes of an "artificial leaf" to transform solar energy into chemical energy.
Awards and recognition
2012: "Flama, Vida y Mujer" award from the Autonomous University of Nuevo León
2015: "Master of Business Leadership" and "Master of Business Management" distinctions from the World Confederation of Businesses in Houston, Texas
2015: Medal of Civic Merit from the State of Nuevo León for her successful performance in the area of scientific research
2018: National Prize for Arts and Sciences in the field of Technology, Innovation, and Development
2019: Valor Regiomontano Award from the Universidad Regiomontana
Selected publications
Diagramas de Equilibrio de Fases. 2012. Patricia Quintana Owen, Leticia M. Torres-Martínez.
Fotosíntesis Artificial: Estudio de fundamentación social, económica, científica y tecnológica de la Fotosíntesis Artificial para la reducción de CO2 ambiental y la producción de energéticos sustentables en México. 2013. Alfredo Aguilar, Diego M.M. de la Escalera, Gisela Aguirre, Jessica Rangel, Jorge A. Ascencio, Leticia Torres, Edilso Reguera, Ricardo Gómez, Lorenzo Martínez.
References
External links
Leticia Myriam Torres Guerra at the Autonomous University of Nuevo León
1955 births
Alumni of the University of Aberdeen
Living people
Materials scientists and engineers
Members of the Mexican Academy of Sciences
Mexican chemists
Mexican women chemists
Women materials scientists and engineers
Autonomous University of Nuevo León alumni
Mexican scientists | Leticia Myriam Torres Guerra | [
"Materials_science",
"Technology",
"Engineering"
] | 855 | [
"Women materials scientists and engineers",
"Materials scientists and engineers",
"Women in science and technology",
"Materials science"
] |
68,614,556 | https://en.wikipedia.org/wiki/Agn%C3%A8s%20Sulem | Agnès Sulem (born 1959) is a French applied mathematician whose research topics include stochastic control, jump diffusion, and mathematical finance.
Education
Sulem earned a Ph.D. in 1983 at Paris Dauphine University, with the dissertation Résolution explicite d'Inéquations Quasi-Variationnelles associées à des problèmes de gestion de stock supervised by Alain Bensoussan.
Career
She is a director of research at the French Institute for Research in Computer Science and Automation (INRIA) in Paris, where she heads the MATHRISK project on mathematical risk handling. She is currently a professor at the University of Luxembourg in the Mathematics department. She is a coauthor of the book Applied Stochastic Control of Jump Diffusions (with Bernt Øksendal, Springer, 2005; 2nd ed., 2007; 3rd ed., 2019). Sulem is also an associate editor at the Journal of Mathematical Analysis and Applications and at the SIAM Journal on Financial Mathematics.
References
External links
Agnès Sulem publications indexed by INRIA
1959 births
Living people
French mathematicians
21st-century French women mathematicians
21st-century French mathematicians
Control theorists
Mathematical economists | Agnès Sulem | [
"Engineering"
] | 239 | [
"Control engineering",
"Control theorists"
] |
68,617,077 | https://en.wikipedia.org/wiki/Bacteriophage%20%CF%86Cb5 | Bacteriophage φCb5 is a bacteriophage that infects Caulobacter bacteria and other caulobacteria. The bacteriophage was discovered in 1970; it belongs to the genus Cebevirus of the family Steitzviridae and is the namesake of the genus. The bacteriophage is widely distributed in soil, freshwater lakes, streams and seawater, places where caulobacteria live, and it can be sensitive to salinity.
Description
The capsid has icosahedral geometries, and T = 3 symmetry. It does not have a viral envelope. The diameter is around 26 nm. The genomes are linear, positive single-stranded RNA, and about 3.4 kb in length. The genome segmentation is monopartite and has 2 or 3 ORFs. Viral replication occurs in the cytoplasm and entry into the bacterial cell occurs by penetration into the pilus. The routes of transmission are by contact.
The bacteriophage is similar to the RNA bacteriophages of Escherichia in that it is composed of a single positive single-stranded RNA molecule and a protein coat with two structural proteins, and it apparently contains the genetic ability to encode a subunit of the coat protein, a maturation-like protein and a similar RNA replicase. The φCb5 bacteriophage differs from Escherichia RNA bacteriophages in host specificity, salt sensitivity, and the presence of histidine, but not methionine, in the coat protein. As in related bacteriophages, the ORFs encode maturation, coat, RNA replicase, and lysis proteins, but unlike other members of Leviviricetes, the φCb5 lysis protein gene completely overlaps the RNA replicase gene in a different reading frame. The lysis protein of φCb5 is approximately twice as long as that of the distantly related bacteriophage MS2 and presumably contains two transmembrane helices.
References
Bacteriophages
Riboviria | Bacteriophage φCb5 | [
"Biology"
] | 435 | [
"Viruses",
"Riboviria"
] |
57,667,287 | https://en.wikipedia.org/wiki/Dirichlet%20hyperbola%20method | In number theory, the Dirichlet hyperbola method is a technique to evaluate the sum
$$F(x) = \sum_{n \le x} f(n),$$
where $f$ is a multiplicative function. The first step is to find a pair of multiplicative functions $g$ and $h$ such that, using Dirichlet convolution, we have $f = g * h$; the sum then becomes
$$F(x) = \sum_{n \le x} \sum_{mk = n} g(m)\, h(k),$$
where the inner sum runs over all ordered pairs $(m, k)$ of positive integers such that $mk = n$. In the Cartesian plane, these pairs lie on a hyperbola, and when the double sum is fully expanded, there is a bijection between the terms of the sum and the lattice points in the first quadrant on the hyperbolas of the form $mk = n$, where $n$ runs over the integers $1 \le n \le x$: for each such point $(m, k)$, the sum contains a term $g(m)\, h(k)$, and vice versa.
Let $a$ be a real number, not necessarily an integer, such that $1 \le a \le x$, and let $b = x/a$. Then the lattice points can be split into three overlapping regions: one region is bounded by $1 \le m \le a$ and $1 \le k \le x/m$, another region is bounded by $1 \le k \le b$ and $1 \le m \le x/k$, and the third is bounded by $1 \le m \le a$ and $1 \le k \le b$. In the diagram, the first region is the union of the blue and red regions, the second region is the union of the red and green, and the third region is the red. Note that this third region is the intersection of the first two regions. By the principle of inclusion and exclusion, the full sum is therefore the sum over the first region, plus the sum over the second region, minus the sum over the third region. This yields the formula
$$F(x) = \sum_{m \le a} g(m) \sum_{k \le x/m} h(k) \;+\; \sum_{k \le b} h(k) \sum_{m \le x/k} g(m) \;-\; \sum_{m \le a} g(m) \sum_{k \le b} h(k). \qquad (1)$$
Examples
Let $d(n)$ be the divisor-counting function, and let $D(x)$ be its summatory function:
$$D(x) = \sum_{n \le x} d(n).$$
Computing $D(x)$ naïvely requires factoring every integer in the interval $[1, x]$; an improvement can be made by using a modified Sieve of Eratosthenes, but this still requires time roughly linear in $x$. Since $d$ admits the Dirichlet convolution $d = 1 * 1$, taking $g = h = 1$ and $a = b = \sqrt{x}$ in formula (1) yields
$$D(x) = \sum_{m \le \sqrt{x}} \left\lfloor \frac{x}{m} \right\rfloor + \sum_{k \le \sqrt{x}} \left\lfloor \frac{x}{k} \right\rfloor - \left\lfloor \sqrt{x} \right\rfloor^2,$$
which simplifies to
$$D(x) = 2 \sum_{m \le \sqrt{x}} \left\lfloor \frac{x}{m} \right\rfloor - \left\lfloor \sqrt{x} \right\rfloor^2,$$
which can be evaluated in $O(\sqrt{x})$ operations.
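The following short Python sketch (not part of the original article; the function names are illustrative) implements this $O(\sqrt{x})$ evaluation and cross-checks it against a naive divisor count for a small argument.

```python
from math import isqrt

def divisor_summatory(x: int) -> int:
    """D(x) = sum of d(n) for n <= x, via the Dirichlet hyperbola method:
    D(x) = 2*sum_{m <= sqrt(x)} floor(x/m) - floor(sqrt(x))**2."""
    r = isqrt(x)
    return 2 * sum(x // m for m in range(1, r + 1)) - r * r

def d(n: int) -> int:
    """Naive divisor count, used only as a cross-check."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)

assert divisor_summatory(100) == sum(d(n) for n in range(1, 101))  # both give 482
print(divisor_summatory(10**12))  # needs only about 10**6 loop iterations
```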
The method also has theoretical applications: for example, Peter Gustav Lejeune Dirichlet introduced the technique in 1849 to obtain the estimate
$$D(x) = x \log x + (2\gamma - 1)x + O(\sqrt{x}),$$
where $\gamma$ is the Euler–Mascheroni constant.
References
Number theory
External links
Discussion of the Dirichlet hyperbola method for computational purposes | Dirichlet hyperbola method | [
"Mathematics"
] | 431 | [
"Discrete mathematics",
"Number theory"
] |
57,671,069 | https://en.wikipedia.org/wiki/Two%20capacitor%20paradox | The two capacitor paradox or capacitor paradox is a paradox, or counterintuitive thought experiment, in electric circuit theory. The thought experiment is usually described as follows:
Two identical capacitors are connected in parallel with an open switch between them. One of the capacitors is charged with a voltage of $V_i$, the other is uncharged. When the switch is closed, some of the charge $Q_i = C V_i$ on the first capacitor flows into the second, reducing the voltage on the first and increasing the voltage on the second. When a steady state is reached and the current goes to zero, the voltage on the two capacitors must be equal since they are connected together. Since they both have the same capacitance $C$, the charge will be divided equally between the capacitors, so each capacitor will have a charge of $Q_i/2$ and a voltage of $V_i/2$.
At the beginning of the experiment the total initial energy $W_\text{initial}$ in the circuit is the energy stored in the charged capacitor:
$$W_\text{initial} = \tfrac{1}{2} C V_i^2 .$$
At the end of the experiment the final energy $W_\text{final}$ is equal to the sum of the energy in the two capacitors:
$$W_\text{final} = \tfrac{1}{2} C \left(\tfrac{V_i}{2}\right)^2 + \tfrac{1}{2} C \left(\tfrac{V_i}{2}\right)^2 = \tfrac{1}{4} C V_i^2 .$$
Thus the final energy $W_\text{final}$ is equal to half of the initial energy $W_\text{initial}$. Where did the other half of the initial energy go?
Solutions
This problem has been discussed in electronics literature at least as far back as 1955. Unlike some other paradoxes in science, this paradox is not due to the underlying physics, but to the limitations of the 'ideal circuit' conventions used in circuit theory. The description specified above is not physically realizable if the circuit is assumed to be made of ideal circuit elements, as is usual in circuit theory. If the series resistance of the wires and conductors in the circuit is $R$, the initial current when the switch is closed is
$$I_0 = \frac{V_i}{R} .$$
If the wires connecting the two capacitors, the switch, and the capacitors themselves are idealized as having no electrical resistance or inductance as is usual, then closing the switch would connect points at different voltage with a perfect conductor, causing an infinite current to flow, which is impossible. Therefore a solution requires that one or more of the 'ideal' characteristics of the elements in the circuit be relaxed, which was not specified in the above description. The solution differs depending on which of the assumptions about the actual characteristics of the circuit elements is abandoned:
If the connecting wires are assumed to have any nonzero resistance at all, it is an RC circuit, and the current will decrease exponentially to zero. Since none of the original charge is lost, the final state of the capacitors will be as described above, with half the initial voltage on each capacitor. Since in this state the two capacitors together are left with half the energy, regardless of the amount of resistance, half of the initial energy will be dissipated as heat in the wire resistance (a numerical check of this is sketched below).
If the wires are assumed to have inductance but no resistance, the current will not be infinite, but the circuit does not have any energy dissipating components, so it will not settle to a steady state, as assumed in the description. It will constitute an LC circuit with no damping, so the charge will oscillate perpetually back and forth between the two capacitors; the voltage on the two capacitors and the current will vary sinusoidally. None of the initial energy will be lost, at any point the sum of the energy in the two capacitors and the energy stored in the magnetic field around the wires will equal the initial energy.
If the connecting wires, in addition to having inductance and no resistance, are assumed to have a nonzero length, the oscillating circuit will act as an antenna and lose energy by radiating electromagnetic waves (radio waves). The effect of this energy loss is exactly the same as if there were a resistance called the radiation resistance in the circuit, so the circuit will be equivalent to an RLC circuit. The oscillating current in the wires will be an exponentially decaying sinusoid. Since none of the original charge is lost, the final state of the capacitors will be as in the case of the resistor, with half the initial voltage on each. Since in this state the capacitors contain half the initial energy, the missing half of the energy will have been radiated away by the electromagnetic waves.
If in addition to nonzero length and inductance the wires are assumed to have resistance, the total energy loss will be the same, half the initial energy, but will be divided between the radiated electromagnetic waves and heat dissipated in the resistance.
Various additional solutions have been devised, based on more detailed assumptions about the characteristics of the components.
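The claim in the resistive case above, that the dissipated heat is half the initial energy for any value of the resistance, can be checked numerically. The following Python sketch is illustrative only and not part of the article: the component values are arbitrary, and the current is taken as the exponential discharge $I(t) = (V_i/R)\,e^{-2t/(RC)}$ of the equivalent RC circuit.

```python
import numpy as np

C = 1e-6   # capacitance of each capacitor, farads (assumed value)
V = 10.0   # initial voltage on the charged capacitor, volts (assumed value)

for R in (1.0, 100.0, 1e4):
    tau = R * C / 2.0                      # time constant of the equivalent RC circuit
    t = np.linspace(0.0, 20 * tau, 200_001)
    i = (V / R) * np.exp(-t / tau)         # discharge current i(t)
    heat = np.trapz(i**2 * R, t)           # energy dissipated in the resistance
    print(R, heat / (0.5 * C * V**2))      # prints ~0.5 for every R
```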
Alternate versions
There are several alternate versions of the paradox. One is the original circuit with the two capacitors initially charged with equal and opposite voltages $+V_i$ and $-V_i$. Another equivalent version is a single charged capacitor short circuited by a perfect conductor. In these cases in the final state the entire charge has been neutralized, the final voltage on the capacitors is zero, so the entire initial energy has vanished. The solutions to where the energy went are similar to those described in the previous section.
See also
List of paradoxes
References
Electrical circuits
Capacitors
Physical paradoxes
Thought experiments in physics | Two capacitor paradox | [
"Physics",
"Engineering"
] | 1,068 | [
"Physical quantities",
"Capacitors",
"Electronic engineering",
"Capacitance",
"Electrical engineering",
"Electrical circuits"
] |
57,672,909 | https://en.wikipedia.org/wiki/Thiosilicate | In chemistry and materials science, thiosilicate refers to materials containing anions of the formula [SiS4]4−. Derivatives where some sulfide is replaced by oxide are also called thiosilicates, examples being materials derived from the oxohexathiodisilicate anion. Silicon is tetrahedral in all thiosilicates and sulfur is bridging or terminal. Formally such materials are derived from silicon disulfide in analogy to the relationship between silicon dioxide and silicates. Thiosilicates are typically encountered as colorless solids. They are characteristically sensitive to hydrolysis. They belong to the class of chalcogenidotetrelates.
Materials science
The LISICON (LIthium Super Ionic CONductor) include thiosilicates, which are fast ion conductors. Thiosilicates and related thiogermanates are also of interest for infrared optics, since they only absorb low frequency IR modes.
References
Inorganic silicon compounds
Sulfides
Inorganic polymers
Sulfur ions | Thiosilicate | [
"Physics",
"Chemistry"
] | 200 | [
"Matter",
"Inorganic compounds",
"Inorganic polymers",
"Inorganic silicon compounds",
"Sulfur ions",
"Ions"
] |
57,673,633 | https://en.wikipedia.org/wiki/COSMIC%20functional%20size%20measurement | COSMIC functional size measurement is a method to measure a standard functional size of a piece of software. COSMIC is an acronym of COmmon Software Measurement International Consortium, a voluntary organization that has developed the method and is still expanding its use to more software domains.
The method
The "Measurement Manual" defines the principles, rules and a process for measuring a standard functional size of a piece of software. Functional size is a measure of the amount of functionality provided by the software, completely independent of any technical or quality considerations. The generic principles of functional size are described in the ISO/IEC 14143 standard. This method is also an International Standard by itself. The COSMIC standard is the first second-generation implementation of the ISO/IEC 14143 standard. There are also four first-generation implementations:
ISO/IEC 20926 - IFPUG function points
ISO/IEC 20968 - Mk II function points
ISO/IEC 24570 - Nesma function points
ISO/IEC 29881 - FiSMA function points
These first generation functional size measurement methods consisted of rules that are based on empirical results. Part of the terminology that deals with users and requirements has overlap with similar terms in software engineering. They work well for the software domains the rules were designed for, but for other domains, the rules need to be altered or extended. Key elements of a second generation functional size measurement method are:
Adoption of all measurement concepts from the ISO metrology
A defined measurement unit
Fully compliant with ISO/IEC 14143
Preferably domain independent
The method is based on principles rather than rules that are domain independent. The principles of the method are based on fundamental software engineering principles, which have been subsequently tested in practice.
The method may be used to size software that is dominated by functionality to maintain data, rather than software that predominantly manipulates data. As a consequence of measuring the size, the method can be used to establish benchmarks of (and subsequently estimates for) the effort, cost, quality and duration of software work.
The method can be used in a wide variety of domains, like business applications, real-time software, mobile apps, infrastructure software and operating systems. The method breaks down the Functional User Requirements of the software into combinations of the four data movements types:
Entry (E)
Exit (X)
Read (R)
Write (W)
The function point count provides measurement of software size, which is the sum of the data movements for a given functional requirement. It may be used to estimate (and benchmark) software project effort, cost, duration, quality and maintenance work.
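As an illustration only (the functional processes and their data movements below are invented, not drawn from the COSMIC documentation), a size in COSMIC Function Points (CFP) is obtained by counting one point per Entry, Exit, Read or Write:

```python
def cosmic_size(processes: dict) -> int:
    """Total COSMIC Function Points: one CFP per data movement (E, X, R, W)."""
    valid = {"E", "X", "R", "W"}
    total = 0
    for name, movements in processes.items():
        if not set(movements) <= valid:
            raise ValueError(f"unknown data movement type in {name!r}")
        total += len(movements)
    return total

example = {
    "create customer": ["E", "W", "X"],       # entry, write, confirmation exit
    "list customers": ["E", "R", "X", "X"],   # entry, read, data exit, error/message exit
}
print(cosmic_size(example))  # 7 CFP
```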
The foundation of the method is the ISO/IEC 19761 standard, which contains the definitions and basic principles that are described in more detail in the COSMIC measurement manual.
The applicability of the COSMIC functional size measurement method
Since the COSMIC method is based on generic software principles, these principles can be applied in various software domains. For a number of domains guidelines have been written to assist measurers to apply the COSMIC method in their domain:
Real-time Software Real-time software "controls an environment by receiving data, processing them, and returning the results sufficiently quickly to affect the environment at that time". The guideline describes how to use the generic principles in this environment.
Service Oriented Architecture (SOA) This is a software architecture where services are provided to the other components by application components, through a communication protocol over a network. A service is a discrete unit of functionality that can be accessed remotely and acted upon and updated independently, such as retrieving a credit card statement online. The guideline describes how to measure the functional size of distinct components.
Data WareHouse and Big Data is a field that treats ways to analyze, systematically extract information from, or otherwise deal with data sets that are too large or complex to be dealt with by traditional data-processing application software. The guideline describes how to transform the principles in that field to a functional size.
Business Application Software This is software designed to perform a group of coordinated functions, tasks, or activities for the benefit of the user. Examples of an application include a word processor, a spreadsheet, an accounting application, a web browser, an email client, a media player, a file viewer, a flight simulator or a photo editor. Business Application Software contrasts with system software, which is mainly involved with running the computer. The guideline describes how to deal with application specific features, like data storage and retrieval.
To explain the use of the method a number of case studies have been developed. The method is of particular validity in the estimation of cost of software undertakings.
The organization behind the method
The COSMIC organization commenced its work in 1998. Legally COSMIC is an incorporated not for profit organization under Canadian law. The organization grew informally to a global community of professionals. COSMIC is an open and democratic organization. The organization relies and will continue to rely on unpaid efforts by volunteers, who work on various aspects of the method, based on their professional interest.
References
External links
COSMIC website A public domain version of the COSMIC measurement manual and other technical reports
COSMIC Publications Public domain publications for the COSMIC method
Software metrics
Software engineering costs | COSMIC functional size measurement | [
"Mathematics",
"Engineering"
] | 1,160 | [
"Software engineering",
"Quantity",
"Metrics",
"Software metrics"
] |
57,674,142 | https://en.wikipedia.org/wiki/Photothermal%20time | Photothermal time (PTT) is the product of growing degree-days (GDD) and day length (DL, in hours) for each day: PTT = GDD × DL. It can be used to quantify the environment, as well as the timing of developmental stages of plants.
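A minimal sketch of the daily calculation in Python (not from the article; it assumes the common average-temperature definition of daily GDD with a base temperature, and the numbers are invented):

```python
def photothermal_time(t_max, t_min, day_length_h, t_base=10.0):
    """Daily photothermal time: PTT = GDD * DL.

    GDD is taken here as max(((t_max + t_min) / 2) - t_base, 0),
    a common (assumed) definition of daily growing degree-days.
    """
    gdd = max(((t_max + t_min) / 2.0) - t_base, 0.0)
    return gdd * day_length_h

print(photothermal_time(t_max=28.0, t_min=14.0, day_length_h=13.5))  # 148.5
```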
References
Product certification
Measurement
Ecology | Photothermal time | [
"Physics",
"Mathematics",
"Biology"
] | 63 | [
"Physical quantities",
"Quantity",
"Ecology",
"Measurement",
"Size"
] |
57,675,728 | https://en.wikipedia.org/wiki/Drug%20titration | Drug titration is the process of adjusting the dose of a medication for the maximum benefit without adverse effects.
When a drug has a narrow therapeutic index, titration is especially important, because the range between the dose at which a drug is effective and the dose at which side effects occur is small. Some examples of the types of drugs commonly requiring titration include insulin, anticonvulsants, blood thinners, anti-depressants, and sedatives.
Titrating off of a medication instead of stopping abruptly is recommended in some situations. Glucocorticoids should be tapered after extended use to avoid adrenal insufficiency.
Drug titration is also used in phase I of clinical trials. The experimental drug is given in increasing dosages until side effects become intolerable. A clinical trial in which a suitable dose is found is called a dose-ranging study.
See also
Therapeutic drug monitoring
Pituri – chewed as a stimulant (or, after extended use, a depressant) by Aboriginal Australians
References
Pharmacology | Drug titration | [
"Chemistry"
] | 224 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry",
"Medicinal chemistry stubs"
] |
59,327,836 | https://en.wikipedia.org/wiki/Bray%E2%80%93Moss%E2%80%93Libby%20model | In premixed turbulent combustion, Bray–Moss–Libby (BML) model is a closure model for a scalar field, built on the assumption that the reaction sheet is infinitely thin compared with the turbulent scales, so that the scalar can be found either at the state of burnt gas or unburnt gas. The model is named after Kenneth Bray, J. B. Moss and Paul A. Libby.
Mathematical description
Let us define a non-dimensional scalar variable or progress variable $c$ such that $c = 0$ at the unburnt mixture and $c = 1$ at the burnt gas side. For example, if $T_u$ is the unburnt gas temperature and $T_b$ is the burnt gas temperature, then the non-dimensional temperature can be defined as
$$c = \frac{T - T_u}{T_b - T_u}.$$
The progress variable could be any scalar, i.e., we could have chosen the concentration of a reactant as a progress variable. Since the reaction sheet is infinitely thin, at any point in the flow field, we can find the value of $c$ to be either unity or zero. The transition from zero to unity occurs instantaneously at the reaction sheet. Therefore, the probability density function for the progress variable is given by
$$P(c, \mathbf{x}, t) = \alpha(\mathbf{x}, t)\, \delta(c) + \beta(\mathbf{x}, t)\, \delta(1 - c),$$
where $\alpha$ and $\beta$ are the probability of finding unburnt and burnt mixture, respectively, and $\delta$ is the Dirac delta function. By definition, the normalization condition leads to
$$\alpha(\mathbf{x}, t) + \beta(\mathbf{x}, t) = 1.$$
It can be seen that the mean progress variable,
$$\bar{c}(\mathbf{x}, t) = \int_0^1 c\, P(c, \mathbf{x}, t)\, \mathrm{d}c = \beta(\mathbf{x}, t),$$
is nothing but the probability of finding burnt gas at location $\mathbf{x}$ and at the time $t$. The density function is completely described by the mean progress variable, as we can write (suppressing the variables $\mathbf{x}$ and $t$)
$$P(c) = (1 - \bar{c})\, \delta(c) + \bar{c}\, \delta(1 - c).$$
Assuming constant pressure and constant molecular weight, the ideal gas law can be shown to reduce to
$$\frac{\rho_u}{\rho} = 1 + \tau c,$$
where $\tau = (T_b - T_u)/T_u$ is the heat release parameter. Using the above relation, the mean density can be calculated as follows
$$\bar{\rho} = \int_0^1 \rho\, P(c)\, \mathrm{d}c = \rho_u (1 - \bar{c}) + \frac{\rho_u}{1 + \tau}\, \bar{c} .$$
The Favre averaging of the progress variable is given by
$$\tilde{c} = \frac{\overline{\rho c}}{\bar{\rho}} = \frac{1}{\bar{\rho}} \int_0^1 \rho\, c\, P(c)\, \mathrm{d}c = \frac{\rho_u\, \bar{c}}{(1 + \tau)\, \bar{\rho}} .$$
Combining the two expressions, we find
$$\tilde{c} = \frac{\bar{c}}{1 + \tau (1 - \bar{c})}$$
and hence
$$\bar{c} = \frac{(1 + \tau)\, \tilde{c}}{1 + \tau \tilde{c}} .$$
The density average is
$$\bar{\rho} = \frac{\rho_u}{1 + \tau \tilde{c}} .$$
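A quick numerical consistency check of these relations (the values of $\rho_u$, $\tau$ and $\bar{c}$ below are arbitrary and only serve to illustrate that the direct average of the density over the two-delta pdf agrees with $\rho_u/(1 + \tau \tilde{c})$):

```python
rho_u, tau, c_bar = 1.2, 6.0, 0.3   # unburnt density, heat release parameter, mean progress variable

# Direct average of rho = rho_u / (1 + tau*c) over P(c) = (1 - c_bar) delta(c) + c_bar delta(1 - c)
rho_mean = rho_u * ((1.0 - c_bar) + c_bar / (1.0 + tau))
# Favre-averaged progress variable, c_tilde = <rho c> / rho_mean
c_tilde = (rho_u * c_bar / (1.0 + tau)) / rho_mean

print(abs(rho_mean - rho_u / (1.0 + tau * c_tilde)) < 1e-12)               # True
print(abs(c_bar - (1.0 + tau) * c_tilde / (1.0 + tau * c_tilde)) < 1e-12)  # True
```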
General density function
If the reaction sheet is not assumed to be thin, then there is a chance that one can find a value for $c$ in between zero and unity, although in reality, the reaction sheet is mostly thin compared to turbulent scales. Nevertheless, the general form of the density function can be written as
$$P(c, \mathbf{x}, t) = \alpha\, \delta(c) + \beta\, \delta(1 - c) + \gamma\, f(c, \mathbf{x}, t),$$
where $\gamma$ is the probability of finding the progress variable which is undergoing reaction (where the transition from zero to unity is effected) and $f(c, \mathbf{x}, t)$ is the corresponding interior probability density, normalized so that $\int_0^1 f\, \mathrm{d}c = 1$. Here, we have
$$\alpha + \beta + \gamma = 1,$$
where $\gamma$ is negligible in most regions.
References
Fluid dynamics
Combustion
Turbulence | Bray–Moss–Libby model | [
"Chemistry",
"Engineering"
] | 472 | [
"Turbulence",
"Chemical engineering",
"Combustion",
"Piping",
"Fluid dynamics"
] |
54,351,003 | https://en.wikipedia.org/wiki/Radiation%20Budget%20Instrument | The Radiation Budget Instrument (RBI) is a scanning radiometer capable of measuring Earth's reflected sunlight and emitted thermal radiation. The project was cancelled on January 26, 2018; NASA cited technical, cost, and schedule issues and the impact of anticipated RBI cost growth on other programs.
RBI was scheduled to fly on the Joint Polar Satellite System 2 (JPSS-2) mission planned for launch in November 2021; the JPSS-3 mission planned for launch in 2026; and the JPSS-4 mission planned for launch in 2031. The one on JPSS-2 would have been the 14th in the series that started with the Earth radiation budget instruments launched in 1985, and would have extended the unique global climate measurements of the Earth's radiation budget provided by the Clouds and the Earth's Radiant Energy System (CERES) instruments since 1998.
References
External links
Electromagnetic radiation meters
Radiometry | Radiation Budget Instrument | [
"Physics",
"Technology",
"Engineering"
] | 182 | [
"Telecommunications engineering",
"Spectrum (physical sciences)",
"Electromagnetic radiation meters",
"Electromagnetic spectrum",
"Measuring instruments",
"Radiometry"
] |
54,358,158 | https://en.wikipedia.org/wiki/Mouse%20ear%20swelling%20test | The mouse ear swelling test is a toxicological test that aims to mimic human skin reactions to chemicals. It avoids post-mortem examination of tested animals.
References
See also
Local lymph node assay
Draize test
Freund's Complete Adjuvant
Toxicology
Allergology | Mouse ear swelling test | [
"Environmental_science"
] | 62 | [
"Toxicology",
"Toxicology stubs"
] |
51,503,229 | https://en.wikipedia.org/wiki/Montana%20flume | A Montana flume is a popular modification of the standard Parshall flume. The Montana flume removes the throat and discharge sections of the Parshall flume, resulting a flume that is lighter in weight, shorter in length, and less costly to manufacture. Montana flumes are used to measure surface waters, irrigations flows, industrial discharges, and wastewater treatment plant flows.
As a short-throated flume, the Montana flume has a single, specified point of measurement in the contracting section at which the level is measured. The Montana flume is described in US Bureau of Reclamation's Water Measurement Manual and two technical standards MT199127AG and MT199128AG by Montana State University.
As a modification of the Parshall flume, the design of the Montana flume is standardized under ASTM D1941, ISO 9826:1992, and JIS B7553-1993. The flumes are not patented and the discharge tables are not copyright protected.
A total of 22 standard sizes of Montana flumes have been developed, covering flow ranges from 0.005 cfs [0.1416 L/s] to 3,280 cfs [92,890 L/s].
Lacking the extended throat and discharge sections of the Parshall flume, Montana flumes are not intended for use under submerged conditions. Where submergence is possible, a full length Parshall flume should be used. Should submergence occur, investigations have been made into correcting the flow.
Under laboratory conditions the Parshall flume - upon which the Montana is based - can be expected to exhibit accuracies to within +/-2%, although field conditions make accuracies better than 5% doubtful.
Free-Flow Characteristics
The Montana Flume is a restriction with free-spilling discharge that accelerates flow from a sub-critical state (Fr~0.5) to a supercritical one (Fr>1).
The free-flow discharge can be summarized as
Q = C Ha^n
Where
Q is flow rate
C is the free-flow coefficient for the flume
Ha is the head at the primary point of measurement
n varies with flume size (See Table 1 below)
Montana flume discharge table for free flow conditions:
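A sketch of the free-flow calculation in Python (illustrative only; the coefficient C and exponent n used below are placeholders rather than values from the published discharge table, which vary with the flume size):

```python
def montana_free_flow(head_ft: float, C: float, n: float) -> float:
    """Free-flow discharge Q = C * Ha**n, with Ha the upstream head in feet."""
    return C * head_ft ** n

# Example numbers only: a hypothetical coefficient pair for illustration.
print(montana_free_flow(head_ft=0.5, C=4.0, n=1.55))
```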
Free-Flow vs. Submerged Flow
Free-Flow – when there is no “back water” to restrict flow through a flume. Only the single depth (primary point of measurement -Ha) needs to be measured to calculate the flow rate. A free flow also induces a hydraulic jump downstream of the flume.
Submerged Flow – when the water surface downstream of the flume is high enough to restrict flow through a flume, the flume is deemed to be submerged. Lacking the extended throat and discharge sections of the Parshall flume, the Montana flume has little resistance to the effects of submergence and as such it should be avoided. Where submerged flow is or may become present, there are several methods of correcting the situation: the flume may be raised above the channel floor, the downstream channel may be modified, or a different flume type may be used (typically a Parshall flume). Although commonly thought of as occurring at higher flow rates, submerged flow can exist at any flow level as it is a function of downstream conditions. In natural stream applications, submerged flow is frequently the result of vegetative growth on the downstream channel banks, sedimentation, or subsidence of the flume.
Construction
Montana flumes can be constructed from a variety of materials:
Fiberglass (wastewater applications due to its corrosion resistance)
Stainless steel (applications involving high temperatures / corrosive flow streams)
Galvanized steel (water rights / irrigation)
Concrete
Aluminum (portable applications)
Wood (temporary flow measurement)
Plastic (PVC or polycarbonate / Lexan)
Smaller Montana flumes tend to be fabricated from fiberglass and galvanized steel (depending upon the application), while larger Montana flumes can be fabricated from fiberglass (sizes up to 160") or concrete (160"-600").
In practice, it is unusual to see Montana flumes larger than 48 inches, as the need for free-spilling discharge cannot usually be met, downstream scour would be excessive, or other flume types better handle the flow.
Drawbacks
Montana flumes require free-spilling discharge off the flume (for free-flow conditions). To accommodate the drop in an existing channel either the flume must be raised above the channel floor (raising the upstream water level) or the downstream channel must be modified.
As with weirs, flumes can also have an effect on local fauna. Some species or certain life stages of the same species may be blocked by flumes due to relatively slow swim speeds or behavioral characteristics. The elevated nature of the Montana flume exacerbates this problem.
In earthen channels, upstream bypass may occur and downstream scour will occur unless the channel is armored.
Montana flumes smaller than 3 inches in size should not be used on unscreened sanitary flows, due to the likelihood of clogging.
The Montana flume is an empirical device. Interpolation between sizes is not an accurate method of developing intermediate size Montana flumes as the flumes are not scale models of each other. The 30-inch [76.2 cm] and 42-inch [106.7 cm] sizes are examples of intermediate sizes of Montana flumes that have crept into the marketplace without the backing of published research into their sizing and flow rates.
References
External links
Pictures of fiberglass, galvanized and stainless steel Montana flumes
Fluid mechanics
Hydraulic structures
Water supply infrastructure
Hydrology | Montana flume | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,172 | [
"Hydrology",
"Civil engineering",
"Fluid mechanics",
"Environmental engineering"
] |
51,506,064 | https://en.wikipedia.org/wiki/DPANN | DPANN is a superphylum of Archaea first proposed in 2013. Many members show novel signs of horizontal gene transfer from other domains of life. They are known as nanoarchaea or ultra-small archaea due to their smaller size (nanometric) compared to other archaea.
DPANN is an acronym formed by the initials of the first five groups discovered, Diapherotrites, Parvarchaeota, Aenigmarchaeota, Nanoarchaeota and Nanohaloarchaeota. Later Woesearchaeota and Pacearchaeota were discovered and proposed within the DPANN superphylum. In 2017, another phylum, Altiarchaeota, was placed into this superphylum. The monophyly of DPANN is not yet considered established, due to the high mutation rate of the included phyla, which can lead to the long-branch attraction (LBA) artifact, in which lineages are grouped basally or artificially at the base of the phylogenetic tree without being related. These analyses instead suggest that DPANN belongs to Euryarchaeota or is polyphyletic, occupying various positions within Euryarchaeota.
The DPANN groups together different phyla with a variety of environmental distribution and metabolism, ranging from symbiotic and thermophilic forms such as Nanoarchaeota, acidophiles like Parvarchaeota and non-extremophiles like Aenigmarchaeota and Diapherotrites. DPANN was also detected in nitrate-rich groundwater, on the water surface but not below, indicating that these taxa are still quite difficult to locate.
Since the recognition of the kingdom rank by the ICNP, the only validly published name for this group is kingdom Nanobdellati.
Characteristics
They are characterized by being small in size compared to other archaea (nanometric size) and in keeping with their small genome, they have limited but sufficient catabolic capacities to lead a free life, although many are thought to be episymbionts that depend on a symbiotic or parasitic association with other organisms. Many of their characteristics are similar or analogous to those of ultra-small bacteria (CPR group).
Limited metabolic capacities are a product of the small genome and are reflected in the fact that many lack central biosynthetic pathways for nucleotides, amino acids, and lipids; hence most DPANN archaea, such as the ARMAN archaea, rely on other microbes to meet their biological requirements. However, those that have the potential to live freely are fermentative and aerobic heterotrophs.
They are mostly anaerobic and have not been cultivated. They live in extreme environments such as thermophilic, hyperacidophilic, hyperhalophilic or metal-resistant; or also in the temperate environment of marine and lake sediments. They are rarely found on the ground or in the open ocean.
Classification
Diapherotrites. Found by phylogenetic analysis of the genomes recovered from the groundwater filtration of a gold mine abandoned in the USA.
Parvarchaeota and Micrarchaeota. Discovered in 2006 in acidic mine drainage from a US mine. They are of very small size and provisionally called ARMAN (Archaeal Richmond Mine acidophilic nanoorganisms).
Woesearchaeota and Pacearchaeota. They have been identified both in sediments and in surface waters of aquifers and lakes, abounding especially in saline conditions.
Aenigmarchaeota. Found in wastewater from mines and in sediments from hot springs.
Nanohalarchaeota. Distributed in environments with high salinity.
Nanoarchaeota. They were the first discovered (in 2002) in a hydrothermal source next to the coast of Iceland. They live as symbionts of other archaea.
Phylogeny
Taxonomy
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI).
Superphylum "DPANN" Rinke et al. 2013 (kingdom Nanobdellati)
Phylum Microcaldota Sakai et al. 2023
Class Microcaldia Sakai et al. 2023
Order ?Microcaldales Sakai et al. 2023
Phylum "Undinarchaeota" Dombrowski et al. 2020
Class "Undinarchaeia" Dombrowski et al. 2020
Order "Undinarchaeales" Dombrowski et al. 2020
Phylum "Huberarchaeota" Probst et al. 2019
Class "Huberarchaeia" corrig. Probst et al. 2019
Order "Huberarchaeales" Rinke et al. 2020
Phylum "Aenigmatarchaeota" corrig. Rinke et al. 2013 (DSEG, DUSEL2)
Class "Aenigmatarchaeia" corrig. Rinke et al. 2020
Order "Aenigmatarchaeales" corrig. Rinke et al. 2020
Phylum "Nanohalarchaeota" corrig. Rinke et al. 2013
Class "Nanohalobiia" corrig.La Cono et al. 2020
Order "Nanohalobiales" La Cono et al. 2020
Class "Nanohalarchaeia" corrig. Narasingarao et al. 2012
Order ?"Nanohalarchaeales"
Order ?"Nanohydrothermales" Xie et al. 2022
Order ?"Nucleotidisoterales" Xie et al. 2022
Phylum Altarchaeota Probst et al. 2018 (SM1)
Class "Altarchaeia" corrig. Probst et al. 2014
Order "Altarchaeales" corrig. Probst et al. 2014
Phylum "Iainarchaeota" corrig. Rinke et al. 2013 ["Diapherotrites" Rinke et al. 2013] (DUSEL-3)
Class "Iainarchaeia" Rinke et al. 2020
Order "Forterreales" Probst & Banfield 2017
Order "Iainarchaeales" Rinke et al. 2020
Phylum "Micrarchaeota" Baker & Dick 2013
Class "Micrarchaeia" Vazquez-Campos et al. 2021
Order "Anstonellales" Vazquez-Campos et al. 2021 (LFWA-IIIc)
Order "Burarchaeales" Vazquez-Campos et al. 2021 (LFWA-IIIb)
Order "Fermentimicrarchaeales" Kadnikov et al. 2020
Order "Gugararchaeales" Vazquez-Campos et al. 2021 (LFWA-IIIa)
Order "Micrarchaeales" Vazquez-Campos et al. 2021
Order "Norongarragalinales" Vazquez-Campos et al. 2021 (LFWA-II)
Phylum Nanobdellota Huber et al. 2023
Class Nanobdellia Kato et al. 2022
Order JAPDLS01
Order "Jingweiarchaeales" Rao et al. 2023 [DTBS01]
Order Nanobdellales Kato et al. 2022
Order "Pacearchaeales" (DHVE-5, DUSEL-1)
Order "Parvarchaeales" Rinke et al. 2020 (ARMAN 4 & 5)
Order "Tiddalikarchaeales" Vazquez-Campos et al. 2021 (LFW-252_1)
Order "Woesearchaeales" (DHVE-6)
Phylum ?"Mamarchaeota"
Order ?"Wiannamattarchaeales"
See also
List of Archaea genera
References
External links
Archaea taxa
Extremophiles
Superphyla | DPANN | [
"Biology",
"Environmental_science"
] | 1,673 | [
"Archaea",
"Organisms by adaptation",
"Extremophiles",
"Bacteria",
"Archaea taxa",
"Environmental microbiology"
] |
51,508,407 | https://en.wikipedia.org/wiki/Pseudo-ring | In mathematics, and more specifically in abstract algebra, a pseudo-ring is one of the following variants of a ring:
A rng, i.e., a structure satisfying all the axioms of a ring except for the existence of a multiplicative identity.
A set R with two binary operations + and ⋅ such that is an abelian group with identity 0, and and for all a, b, c in R.
An abelian group A equipped with a subgroup B and a multiplication making B a ring and A a B-module.
None of these definitions are equivalent, so it is best to avoid the term "pseudo-ring" or to clarify which meaning is intended.
See also
Semiring – an algebraic structure similar to a ring, but without the requirement that each element must have an additive inverse
References
Ring theory
Algebraic structures
Algebras | Pseudo-ring | [
"Mathematics"
] | 172 | [
"Algebra stubs",
"Mathematical structures",
"Algebras",
"Ring theory",
"Mathematical objects",
"Fields of abstract algebra",
"Algebraic structures",
"Algebra"
] |
71,535,495 | https://en.wikipedia.org/wiki/Kuratowski%27s%20intersection%20theorem | In mathematics, Kuratowski's intersection theorem is a result in general topology that gives a sufficient condition for a nested sequence of sets to have a non-empty intersection. Kuratowski's result is a generalisation of Cantor's intersection theorem. Whereas Cantor's result requires that the sets involved be compact, Kuratowski's result allows them to be non-compact, but insists that their non-compactness "tends to zero" in an appropriate sense. The theorem is named for the Polish mathematician Kazimierz Kuratowski, who proved it in 1930.
Statement of the theorem
Let (X, d) be a complete metric space. Given a subset A ⊆ X, its Kuratowski measure of non-compactness α(A) ≥ 0 is defined by
α(A) = inf { δ > 0 : A admits a finite cover by sets, each of diameter at most δ }.
Note that, if A is itself compact, then α(A) = 0, since every cover of A by open balls of arbitrarily small diameter will have a finite subcover. The converse is also true: if α(A) = 0, then A must be precompact, and indeed compact if A is closed. Also, if A is a subset of B, then α(A) ≤ α(B). In some sense, the quantity α(A) is a numerical description of "how non-compact" the set A is.
Now consider a sequence of sets An ⊆ X, one for each natural number n. Kuratowski's intersection theorem asserts that if these sets are non-empty, closed, decreasingly nested (i.e. An+1 ⊆ An for each n), and α(An) → 0 as n → ∞, then their infinite intersection
A1 ∩ A2 ∩ A3 ∩ ⋯
is a non-empty compact set.
The result also holds if one works with the ball measure of non-compactness or the separation measure of non-compactness, since these three measures of non-compactness are mutually Lipschitz equivalent; if any one of them tends to zero as n → ∞, then so must the other two.
References
Compactness theorems | Kuratowski's intersection theorem | [
"Mathematics"
] | 428 | [
"Compactness theorems",
"Theorems in topology"
] |
71,535,589 | https://en.wikipedia.org/wiki/Selenosulfide | In chemistry, a selenosulfide refers to distinct classes of inorganic and organic compounds containing sulfur and selenium. The organic derivatives contain Se-S bonds, whereas the inorganic derivatives are more variable.
Organic selenosulfides
These species are classified as both organosulfur and organoselenium compounds. They are hybrids of organic disulfides and organic diselenides.
Preparation, structure, and reactivity
Selenosulfides have been prepared by the reaction of selenyl halides with thiols (X = halide):
RSeX + R'SH → RSeSR' + HX
The equilibrium between diselenides and disulfides lies on the left:
RSeSeR + R'SSR' ⇌ 2 RSeSR'
Because of the facility of this equilibrium, many of the best characterized examples of selenosulfides are cyclic, whereby S-Se bonds are stabilized intramolecularly. One example is the 1,8-selenosulfide of naphthalene. The selenium-sulfur bond length is about 220 picometers, the average of a typical S-S and Se-Se bond.
Occurrence
Selenosulfide groups can be found in almost all living organisms as part of various peroxidase enzymes, such as glutathione peroxidase and thioredoxin reductase. They are formed by the oxidative coupling of selenocysteine and cysteine residues. This reaction is powered by the decomposition of cellular peroxides, which can be highly damaging and a source of oxidative stress. Selenocysteine has a lower reduction potential than cysteine, making it very suitable for proteins that are involved in antioxidant activity.
Selenosulfides have been identified in some species of Allium and in roasted coffee. The mammalian version of the protein thioredoxin reductase contains a selenocysteine residue which forms a thioselenide (analogous to a disulfide) upon oxidation.
Inorganic selenosulfides
Some inorganic selenide sulfide compounds are also known. The simplest is the material selenium sulfide, which has medicinal properties. It adopts the diverse structures of elemental sulfur but with some S atoms replaced by Se.
Other inorganic selenide sulfide compounds occur as minerals and as pigments. One example is antimony selenosulfide.
The pigment cadmium red consists of cadmium sulfoselenide. It is a solid solution of cadmium sulfide, which is yellow, and cadmium selenide, which is dark brown. It is used as an artist's pigment. Unlike the organic selenosulfides and unlike selenium sulfide itself, no S-Se bond exists in CdS1−xSex or in Sb2S3−xSex.
References
Organoselenium compounds
Organosulfur compounds
Cadmium compounds
Selenides
Semiconductor materials
Optical materials
Mixed anion compounds
Sulfides | Selenosulfide | [
"Physics",
"Chemistry"
] | 610 | [
"Matter",
"Mixed anion compounds",
"Organosulfur compounds",
"Semiconductor materials",
"Organic compounds",
"Materials",
"Optical materials",
"Ions"
] |
71,542,093 | https://en.wikipedia.org/wiki/Myrtenal | Myrtenal is a bicyclic monoterpenoid with the chemical formula C10H14O. It is a naturally occurring molecule that can be found in numerous plant species including Hyssopus officinalis, Salvia absconditiflora, and Cyperus articulatus.
Biological research
Myrtenal has been shown to inhibit acetylcholinesterase in vitro; acetylcholinesterase inhibition is a common approach in the treatment of Alzheimer's disease and dementia. In addition, myrtenal has been shown to have antioxidant properties in rats.
See also
Myrtenol
References
Aldehydes
Bicyclic compounds
Cycloalkenes
Monoterpenes | Myrtenal | [
"Chemistry"
] | 146 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
64,263,166 | https://en.wikipedia.org/wiki/Pantometrum%20Kircherianum | Pantometrum Kircherianum is a 1660 work by the Jesuit scholars Gaspar Schott and Athanasius Kircher. It was dedicated to Christian Louis I, Duke of Mecklenburg and printed in Würzburg by Johann Gottfried Schönwetter. It was a description, with building instructions, of a measuring device called the pantometer, that Kircher had developed some years before. The first edition include 32 copperplate illustrations.
Description of the pantometer
The name "pantometer" derives from Greek, in which "pan" means "all" and "metron" means "measure" - indicating that this instrument can be used to measure anything. As described in the book, it consisted of a square frame, a dioptra, and a disc that fitted within the square. The disc contained a built-in compass and a space for putting a sheet of paper. The disc could turn freely within the square, or be locked in a fixed position. Mounted on this apparatus was a movable ruler parallel to the edge of the square on which the dioptra was attached. An illustration in the book showed how the device could be used to measure the distance of objects by triangulating from two different points on a baseline.
The introduction to the book emphasised both the accuracy of the device and its ease of use, and stated that it could be used to "measure all, witness latitudes, longitudes, altitudes, depths and surfaces, terrestrial and celestial bodies, and whatever indeed we are accustomed to doing with other instruments."
Kircher's development of the pantometer
Kircher had mentioned the pantometer in his Specula Melitensis Encyclica noting that it was designed to help the Knights Hospitaller to solve "the most important mathematical and physical problems." It was a surveying tool that resembled a draughts board and could be used to calculate distances, weights and dimensions. In Magnes sive de Arte Magnetica (1643) Kircher described an "Instrumentum, Pantometrum, Ichnographicum Magneticum" which allowed all things to be measured. It was 'magnetic' because it incorporated a compass, and 'ichnographic' because it could be used in map-making.
According to Schott, Kircher had first conceived of it in the company of father Ziegler, perhaps as early as 1623. Schott had been with Kircher in 1631 when he first assembled the instrument and named it the 'pantometrum', sending an early example to the Holy Roman Emperor Frederick III. Kircher had certainly used the pantometer himself to take scientific measurements when he was lowered into the crater of Vesuvius in 1638.
Later editions and references
Pantometrum Kircherianum was reprinted by Cholinus in Frankfurt in 1668 and again in 1669. The work was referenced in books by a number of later writers, including Jacob Leupold's Theatrum Arithmetico-Geometricum (1727) and Christian Wolff's Mathematisches Lexikon (1747).
External links
digital copy of Pantometrum Kircherianum at the Max Planck Institute Library
digital copy of Pantometrum Kircherianum at the Bayerische Staatsbibliothek
See also
Graphometer
References
1660 in science
1660 in the Holy Roman Empire
1660 books
Dimensional instruments
Athanasius Kircher | Pantometrum Kircherianum | [
"Physics",
"Mathematics"
] | 704 | [
"Quantity",
"Dimensional instruments",
"Physical quantities",
"Size"
] |
64,266,300 | https://en.wikipedia.org/wiki/Schwartz%20topological%20vector%20space | In functional analysis and related areas of mathematics, Schwartz spaces are topological vector spaces (TVS) whose neighborhoods of the origin have a property similar to the definition of totally bounded subsets. These spaces were introduced by Alexander Grothendieck.
Definition
A Hausdorff locally convex space X with continuous dual space X′ is called a Schwartz space if it satisfies any of the following equivalent conditions:
For every closed convex balanced neighborhood U of the origin in X, there exists a neighborhood V of 0 in X such that for all real r > 0, V can be covered by finitely many translates of rU.
Every bounded subset of X is totally bounded, and for every closed convex balanced neighborhood U of the origin in X, there exists a neighborhood V of 0 in X such that for all real r > 0, there exists a bounded subset B of X such that V ⊆ B + rU.
Properties
Every quasi-complete Schwartz space is a semi-Montel space.
Every Fréchet Schwartz space is a Montel space.
The strong dual space of a complete Schwartz space is an ultrabornological space.
Examples and sufficient conditions
Vector subspaces of Schwartz spaces are Schwartz spaces.
The quotient of a Schwartz space by a closed vector subspace is again a Schwartz space.
The Cartesian product of any family of Schwartz spaces is again a Schwartz space.
The weak topology induced on a vector space by a family of linear maps valued in Schwartz spaces is a Schwartz space if the weak topology is Hausdorff.
The locally convex strict inductive limit of any countable sequence of Schwartz spaces (with each space TVS-embedded in the next space) is again a Schwartz space.
Counter-examples
No infinite-dimensional normed space is a Schwartz space.
There exist Fréchet spaces that are not Schwartz spaces and there exist Schwartz spaces that are not Montel spaces.
See also
References
Bibliography
Functional analysis
Topological vector spaces | Schwartz topological vector space | [
"Mathematics"
] | 360 | [
"Functions and mappings",
"Functional analysis",
"Vector spaces",
"Mathematical objects",
"Topological vector spaces",
"Space (mathematics)",
"Mathematical relations"
] |
64,267,601 | https://en.wikipedia.org/wiki/Buchdahl%27s%20theorem | In general relativity, Buchdahl's theorem, named after Hans Adolf Buchdahl, makes more precise the notion that there is a maximal sustainable density for ordinary gravitating matter. It gives an inequality between the mass and radius that must be satisfied for static, spherically symmetric matter configurations under certain conditions. In particular, for areal radius R, the mass M must satisfy
M ≤ 4Rc²/(9G),
where G is the gravitational constant and c is the speed of light. This inequality is often referred to as Buchdahl's bound. The bound has historically also been called Schwarzschild's limit as it was first noted by Karl Schwarzschild to exist in the special case of a constant-density fluid. However, this terminology should not be confused with the Schwarzschild radius, which is notably smaller than the radius at the Buchdahl bound.
Theorem
Consider a static, spherically symmetric solution to the Einstein equations (without cosmological constant) with matter confined to areal radius R, behaving as a perfect fluid with a density that does not increase outwards. (An areal radius R corresponds to a sphere of surface area 4πR². In curved spacetime the proper radius of such a sphere is not necessarily R.) Assume in addition that the density and pressure cannot be negative. The mass of this solution must satisfy
M ≤ 4Rc²/(9G).
For his proof of the theorem, Buchdahl uses the Tolman-Oppenheimer-Volkoff (TOV) equation.
Significance
The Buchdahl theorem is useful when looking for alternatives to black holes. Such attempts are often inspired by the information paradox, by the wish to explain (part of) the dark matter, or by the criticism that observations of black holes rest on excluding known astrophysical alternatives (such as neutron stars) rather than on direct evidence. However, to provide a viable alternative the object must sometimes be extremely compact and in particular violate the Buchdahl inequality. This implies that one of the assumptions of Buchdahl's theorem must be invalid. A classification scheme can be made based on which assumptions are violated.
Special Cases
Incompressible fluid
The special case of the incompressible fluid, i.e. constant density ρ0 for r ≤ R, is a historically important example as, in 1916, Schwarzschild noted for the first time that the mass could not exceed the value 4Rc²/(9G) for a given radius or the central pressure would become infinite. It is also a particularly tractable example. Within the star one finds
m(r) = (4/3) π ρ0 r³,
and using the TOV equation one obtains a central pressure
p(0) = ρ0 c² [1 − (1 − 2GM/(c²R))^(1/2)] / [3 (1 − 2GM/(c²R))^(1/2) − 1],
such that the central pressure, p(0), diverges as M → 4Rc²/(9G), i.e. as the compactness 2GM/(c²R) → 8/9.
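As a rough numerical illustration (not from the original article), the snippet below evaluates the central-pressure expression quoted above as the compactness 2GM/(c²R) approaches 8/9; the 12 km radius is an arbitrary, neutron-star-like choice.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def central_pressure_ratio(compactness):
    """p(0) / (rho0 c^2) for a uniform-density star, with
    compactness = 2GM/(c^2 R); diverges as compactness -> 8/9."""
    s = math.sqrt(1.0 - compactness)
    return (1.0 - s) / (3.0 * s - 1.0)

R = 12e3   # illustrative stellar radius of 12 km
for comp in (0.5, 0.7, 0.85, 0.888):
    M = comp * R * c**2 / (2.0 * G)   # mass implied by this compactness
    print("2GM/(c^2 R) = %.3f  M = %.2f Msun  p(0)/(rho0 c^2) = %.2f"
          % (comp, M / 1.989e30, central_pressure_ratio(comp)))
```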
Extensions
Extensions to Buchdahl's theorem generally either relax assumptions on the matter or on the symmetry of the problem, for instance by introducing anisotropic matter or rotation. In addition, one can also consider analogues of Buchdahl's theorem in other theories of gravity.
References
Mathematical theorems
1959 in science
Energy (physics) | Buchdahl's theorem | [
"Physics",
"Mathematics"
] | 587 | [
"Mathematical theorems",
"Physical quantities",
"Quantity",
"Energy (physics)",
"nan",
"Wikipedia categories named after physical quantities",
"Mathematical problems"
] |
67,227,898 | https://en.wikipedia.org/wiki/Edmond%E2%80%93Ogston%20model | The Edmond–Ogston model is a thermodynamic model proposed by Elizabeth Edmond and Alexander George Ogston in 1968 to describe phase separation of two-component polymer mixtures in a common solvent. At the core of the model is an expression for the Helmholtz free energy
that takes into account terms in the concentration of the polymers up to second order, and needs three virial coefficients B11, B22 and B12 as input. Here ci is the molar concentration of polymer i, R is the universal gas constant, T is the absolute temperature, and V is the system volume. It is possible to obtain explicit solutions for the coordinates of the critical point in terms of these virial coefficients. These coordinates involve the slope of the binodal and spinodal at the critical point, whose value can be obtained by solving a third-order polynomial in that slope, which can be done analytically using Cardano's method and choosing the root for which both critical concentrations are positive.
The spinodal can be expressed analytically too, and the Lambert W function has a central role to express the coordinates of binodal and tie-lines.
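The spinodal condition can be illustrated with a short sketch. The quadratic-virial free-energy density assumed below, f/RT = c1 ln c1 + c2 ln c2 + B11 c1² + 2 B12 c1 c2 + B22 c2², is the usual second-order form suggested by the description above but is an assumption here, as are the numerical coefficient values; the spinodal is where the determinant of the matrix of second derivatives of f with respect to the concentrations vanishes.

```python
import math

# Illustrative virial coefficients (hypothetical values, units of inverse concentration).
B11, B22, B12 = 0.05, 0.05, 0.15

def spinodal_c2(c1):
    """Spinodal branch c2(c1): zero of the Hessian determinant of the assumed
    free-energy density f/RT = c1*ln(c1) + c2*ln(c2) + B11*c1**2 + 2*B12*c1*c2 + B22*c2**2.
    Determinant condition: (1/c1 + 2*B11) * (1/c2 + 2*B22) = (2*B12)**2."""
    f11 = 1.0 / c1 + 2.0 * B11                  # d2f/dc1^2
    inv_c2 = 4.0 * B12**2 / f11 - 2.0 * B22     # equals 1/c2 on the spinodal
    return 1.0 / inv_c2 if inv_c2 > 0 else math.nan

for c1 in (2.0, 5.0, 10.0, 20.0):
    print(f"c1 = {c1:5.1f}   spinodal c2 = {spinodal_c2(c1):6.3f}")
```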
The model is closely related to the Flory–Huggins model.
The model and its solutions have been generalized to mixtures with an arbitrary number of components n, with n greater than or equal to 2.
References
Polymer chemistry
Solutions
Thermodynamic free energy | Edmond–Ogston model | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 263 | [
"Thermodynamics stubs",
"Thermodynamic properties",
"Physical quantities",
"Materials science",
"Homogeneous chemical mixtures",
"Thermodynamic free energy",
"Energy (physics)",
"Thermodynamics",
"Polymer chemistry",
"Solutions",
"Wikipedia categories named after physical quantities",
"Physica... |
44,321,440 | https://en.wikipedia.org/wiki/Supersonic%20flow%20over%20a%20flat%20plate | Supersonic flow over a flat plate is a classical fluid dynamics problem. There is no exact solution to it.
Physics
When a fluid flows at supersonic speed over a thin, sharp flat plate at a low angle of incidence and a low Reynolds number, a laminar boundary layer develops from the leading edge of the plate. Because this viscous boundary layer displaces the outer flow, the plate presents an effectively thickened ("fictitious") body to the stream, so that a curved, induced shock wave is generated at the leading edge of the plate.
The shock layer is the region between the shock wave and the plate surface. This shock layer can be further subdivided into regions of viscous and inviscid flow, according to the values of the Mach number, the Reynolds number and the surface temperature. However, if the entire layer is viscous, it is called a merged shock layer.
Solution to the Problem
This fluid dynamics problem can be solved by different numerical methods, each of which requires several simplifying assumptions; the result is an estimate of the shock-layer properties and the shock location. The results vary as the viscosity of the fluid, the Mach number and the angle of incidence change. Generally, for large angles of incidence, the variation of the Reynolds number has significant effects on the flow variables, whereas viscous effects are dominant on the upper surface of the plate as well as behind the trailing edge of the plate.
Different experimenters obtain different results depending on the assumptions they make to solve the problem.
The primary method generally used for this problem is:
Explicit Finite Difference Approach
This method uses the time-dependent Navier-Stokes equations, which is advantageous because of their inherent ability to evolve toward the correct steady-state solution.
The continuity, momentum and energy equations, together with some additional situational equations, are needed to solve the problem. MacCormack's time-marching technique is applied, and the flow-field variables are advanced at each grid point using Taylor series expansions. Initial and boundary conditions are then applied, and the solution converges to an approximate result.
These equations can be solved by using different algorithms to get better and efficient results with minimum errors.
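To illustrate the predictor-corrector idea behind MacCormack's technique without reproducing a full flat-plate Navier-Stokes solver, the sketch below applies the scheme to the one-dimensional inviscid Burgers equation; the grid size, time step, and initial data are arbitrary illustrative choices.

```python
import numpy as np

# MacCormack predictor-corrector time marching for u_t + (u^2/2)_x = 0
# (1-D inviscid Burgers equation), a minimal stand-in for the full
# time-dependent compressible equations used in the flat-plate problem.
nx, nt = 101, 60
dx, dt = 1.0 / (nx - 1), 0.004            # max|u|*dt/dx ~ 0.6 < 1 (CFL satisfied)
x = np.linspace(0.0, 1.0, nx)
u = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)   # smooth illustrative initial condition

flux = lambda v: 0.5 * v**2

for _ in range(nt):
    f = flux(u)
    # Predictor step: forward difference of the flux.
    u_star = u.copy()
    u_star[:-1] = u[:-1] - dt / dx * (f[1:] - f[:-1])
    # Corrector step: backward difference of the predicted flux, then average.
    f_star = flux(u_star)
    u_next = u.copy()
    u_next[1:-1] = 0.5 * (u[1:-1] + u_star[1:-1]
                          - dt / dx * (f_star[1:-1] - f_star[:-2]))
    u_next[0], u_next[-1] = u[0], u[-1]   # hold boundary values fixed
    u = u_next

print("u after time marching: min = %.3f, max = %.3f" % (u.min(), u.max()))
```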
References
On boundary-layer flow past two-dimensional obstacles, by F. T. Smith, P. W. M. Brighton, P. S. Jackson and J. C. R. Hunt, Department of Mathematics, Imperial College, London SW7 2BZ. http://www.cpom.org/people/jcrh/jfm-113
A numerical study of the viscous supersonic flow past a flat plate at large angles of incidence, by D. Drikakis and F. Durst, Lehrstuhl für Strömungsmechanik, Universität Erlangen-Nürnberg, Cauerstrasse 4, D-91058 Erlangen, Germany. https://www.deepdive.com/search?author=Durst%2C+F.&numPerPage=25
Receptivity of a supersonic boundary layer over a flat plate. Part 1. Wave structures and interactions, by Yanbao Ma and Xiaolin Zhong, Mechanical and Aerospace Engineering Department, University of California, Los Angeles, CA 90095, USA. http://www.journals.cambridge.org/article_S0022112003004786
Computational Fluid Dynamics The Basics with Applications By John D. Anderson, Jr.
Aerodynamics
Fluid dynamics | Supersonic flow over a flat plate | [
"Chemistry",
"Engineering"
] | 708 | [
"Chemical engineering",
"Aerodynamics",
"Aerospace engineering",
"Piping",
"Fluid dynamics"
] |
44,325,436 | https://en.wikipedia.org/wiki/Petroleomics | Petroleomics is the identification of the totality of the constituents of naturally occurring petroleum and crude oil using high resolution mass spectrometry. In addition to mass determination, petroleomic analysis sorts the chemical compounds into heteroatom class (nitrogen, oxygen and sulfur), type (degree of unsaturation, and carbon number). The name is a combination of petroleum and -omics (collective chemical characterization and quantification).
History
Mass spectrometry characterization of petroleum has been performed since the first commercial mass spectrometers were introduced in the 1940s. Early mass spectrometry was limited to relatively low molecular weight nonpolar species accessed mainly by electron ionization, with mass analysis performed on sector mass spectrometers. By the end of the 20th century, separations combined with mass spectrometric techniques such as gas chromatography-mass spectrometry and liquid chromatography-mass spectrometry had been used to characterize petroleum distillates such as gasoline, diesel, and gas oil.
The first petroleum analysis with electrospray ionization was demonstrated in 2000 by Zhan and Fenn, who studied the polar species in petroleum distillates with low-resolution MS. Electrospray ionization was coupled with high-resolution FT-ICR by Marshall and coworkers. To date, many studies on petroleomic analysis of crude oils have been published. Most work has been done by the group of Marshall at the National High Magnetic Field Laboratory (NHMFL) and Florida State University.
Ionization methods
Ionization of nonpolar petroleum components can be achieved by field desorption ionization and atmospheric pressure photoionization (APPI). Field desorption FT-ICR MS has enabled the identification of a large number of nonpolar components in crude oils that are not accessible by electrospray, such as benzo- and dibenzothiophenes, furans, cycloalkanes, and polycyclic aromatic hydrocarbons (PAHs). A drawback of field desorption is that it is slow, mainly due to the need to ramp the current to the emitter in order to volatilize and ionize the molecules. APPI can ionize both polar and nonpolar species, and an APPI spectrum can be generated in just a few seconds. However, APPI ionizes a broad range of compound classes and produces both protonated and molecular ion peaks, resulting in a complex mass spectrum.
Kendrick analysis
High mass resolution data analysis is usually undertaken by converting the mass spectra to the Kendrick mass scale, in which the mass of a methylene unit is set to exactly 14 (CH2 = 14.0000 instead of 14.01565 daltons). This rescaling of the data aids in the identification of homologous series according to alkylation, class (number of heteroatoms), and type (double bond equivalent, DBE, also called rings plus double bonds or degree of unsaturation). The scaled data is then used to obtain the Kendrick mass defect (KMD), which is given by
KMD = nominal Kendrick mass − Kendrick mass,
where the nominal Kendrick mass is the Kendrick mass rounded to the nearest integer. Double bond equivalent (DBE) is calculated according to
DBE = C − H/2 − X/2 + N/2 + 1,
where C = number of carbons, H = number of hydrogens, X = number of halogens and N = number of nitrogens.
Compounds with the same DBE have the same mass defect. Therefore, Kendrick normalization yields a set of series with identical mass defect that appear as horizontal rows in a plot of DBE versus Kendrick mass. The data can also be plotted as a 3D heat-map to indicate the relative intensity of the mass spectral peaks. From the Kendrick plot, the species with peaks in the mass spectrum can be sorted into compound classes by the number of nitrogen, oxygen and sulfur heteroatoms.
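The bookkeeping behind these formulas is small enough to show directly; in the sketch below, the molecular formula and measured mass are illustrative, and note that some authors define the Kendrick mass defect with the opposite sign.

```python
# Kendrick mass, Kendrick mass defect (KMD) and double bond equivalents (DBE)
# for an illustrative elemental composition.
CH2_EXACT = 14.01565          # exact mass of a methylene unit, Da

def kendrick_mass(iupac_mass):
    """Rescale so that CH2 weighs exactly 14.0000 Kendrick units."""
    return iupac_mass * 14.0 / CH2_EXACT

def kendrick_mass_defect(iupac_mass):
    km = kendrick_mass(iupac_mass)
    return round(km) - km         # nominal Kendrick mass minus Kendrick mass

def dbe(c, h, n=0, x=0):
    """Rings plus double bonds: DBE = C - H/2 - X/2 + N/2 + 1."""
    return c - h / 2 - x / 2 + n / 2 + 1

# Example: C20H34S (a hypothetical sulfur-class species), monoisotopic mass ~306.238 Da.
mass = 306.238
print(f"Kendrick mass   = {kendrick_mass(mass):.4f}")
print(f"Kendrick defect = {kendrick_mass_defect(mass):.4f}")
print(f"DBE             = {dbe(c=20, h=34):.1f}")
```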
The data can also be represented with a Van Krevelen diagram.
See also
Orbitrap
References
External links
Mass spectrometry
Petroleum | Petroleomics | [
"Physics",
"Chemistry"
] | 830 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Petroleum",
"Mass spectrometry",
"Chemical mixtures",
"Matter"
] |
47,413,207 | https://en.wikipedia.org/wiki/Visible%20Multi%20Object%20Spectrograph | The Visible Multi-Object Spectrograph (VIMOS) is a wide field imager and a multi-object spectrograph installed at the European Southern Observatory's Very Large Telescope (VLT), in Chile. The instrument used for deep astronomical surveys delivers visible images and spectra of up to 1,000 galaxies at a time. VIMOS images four rectangular areas of the sky, 7 by 8 arcminutes each, with gaps of 2 arcminutes between them. Its principal investigator was Olivier Le Fèvre.
The Franco-Italian instrument operates in the visible part of the spectrum from 360 to 1000 nanometers (nm). In the conceptual design phase, the multi-object spectrograph then called VIRMOS included an additional instrument, NIMOS, operating in the near-infrared spectrum of 1100–1800 nm.
Operating in the three different observation modes, direct imaging, multi-slit spectroscopy, and integral field spectroscopy, the main objective of the instrument is to study the early universe through massive redshift surveys, such as the VIMOS-VLT Deep Survey.
VIMOS saw its first light on 26 February 2002, and was subsequently mounted on the Nasmyth B focus of VLT's Melipal unit telescope (UT3).
It was retired in 2018 to make space for the return of CRIRES+.
Gallery
See also
List of instruments at the Very Large Telescope
References
Astronomical instruments
Telescope instruments
Spectrographs
2002 introductions
2002 establishments in Chile | Visible Multi Object Spectrograph | [
"Physics",
"Chemistry",
"Astronomy"
] | 303 | [
"Telescope instruments",
"Spectrum (physical sciences)",
"Spectrographs",
"Astronomical instruments",
"Spectroscopy"
] |
47,422,135 | https://en.wikipedia.org/wiki/Ernst%20K.%20Zinner | Ernst Kunibert Zinner (30 January 1937 – 30 July 2015) was an Austrian astrophysicist, known for his pioneering work in the analysis of stardust in the laboratory. He long had a position in the United States at the Laboratory for Space Physics (later part of the McDonnell Center for the Space Sciences) at Washington University in St. Louis, Missouri, where he had earned his doctorate. He came to the United States in the 1960s for graduate work. In addition, Zinner regularly taught at European universities, and other American institutions.
Personal life
Zinner was born on 30 January 1937 at Steyr, Austria, a small town about 100 miles west of Vienna. Although his father, Kunibert Zinner, was a renowned sculptor, Ernst was more interested as a boy in nature and science. Zinner's four younger siblings, and other relatives, live in Austria.
While on sabbatical later in his career, he met Brigitte Wopenka, a faculty member of the Institute of Analytical Chemistry in Vienna. She returned with him to the United States and they married in 1980. They had a son, Max Giacobini Zinner. The son now lives in New York City.
Education and career
Zinner obtained an undergraduate degree in physics from the Vienna University of Technology and started working. In the mid-1960s, he moved to St. Louis, Missouri to attend Washington University for graduate work. He earned his Ph.D. there in 1972 in high energy physics.
That year he was invited by Robert M. Walker to work at the Laboratory for Space Physics (later part of the McDonnell Center for the Space Sciences) at Washington University.
He also held positions at:
Max Planck Institute for Nuclear Physics (1980)
Vienna University of Technology (1980–82)
University of Pavia (1989)
University of Bern (1994)
Australian National University (1995)
Max Planck Institute for Chemistry (2001, 03, 04)
National Museum of Natural History (France) (2006)
Carnegie Institution for Science (2010)
University of Perugia (2011)
University of Granada (2013)
Zinner continued to work at the McDonnell Center for the Space Sciences for the rest of his career, in 1989 being named as a Research Professor of Physics and Earth and Planetary Sciences. He retired early in 2015.
Zinner was a member of the American Association for the Advancement of Science, the American Geophysical Union, and Sigma Xi. He was also a fellow of the American Physical Society, the Meteoritical Society, the Geochemical Society, and the European Association of Geochemistry.
Zinner had mantle cell lymphoma for the last 19 years of his life. He died on 30 July 2015 at the age of 78.
Research
Zinner's PhD research was in high energy physics. He subsequently studied the effects that the environment within the Solar System would have on the Moon and the parent bodies of meteors, using nuclear particle tracks, micrometeoroid craters, and elements in the solar wind. His later research was focused on the information contained in presolar grains carried by early meteorites. These grains were formed in atmospheres and explosions of stars outside of the Solar System. They can provide information about the history of stellar nucleosynthesis and the formation of the Solar System.
Since 1974, Zinner's research has involved ion microprobe analysis. He has worked with the Cameca IMS 3f instrument since 1982, and the Cameca NanoSIMS instrument since 2000. He led the Long Duration Exposure Facility. Zinner was instrumental in identifying, for the first time, material in meteorites that pre-dated the formation of the Solar System 4.6 billion years ago. Zinner and his colleagues found minute amounts of stardust - diamond and silicon carbide - that originated outside the solar system. Identification of these grains involved a measurement technique called secondary ion mass spectrometry (SIMS). Zinner and Ghislaine Crozaz expanded the use of SIMS to examine rare earth elements and applied this new technique to measure rare earth elements in thin sections of rocks and minerals.
Awards and honours
1987 Antarctic Service Medal, National Science Foundation
1997 J. Lawrence Smith Medal, National Academy of Sciences
1997 Leonard Medal of the Meteoritical Society
2010 Merle A. Tuve Fellow of the Carnegie Institution of Washington
2011 Fellow of the American Association for the Advancement of Science
Legacy
After his death, his family established an "Ernst Zinner Scholarship Fund" to support advanced cello students in the Community Music School at Webster University. Zinner had started learning cello at age 55, along with his son, then age 4.
References
1937 births
2015 deaths
Austrian physicists
Astrophysicists
Washington University in St. Louis physicists
Scientists from Missouri
Deaths from cancer in Missouri
Deaths from lymphoma in the United States
Washington University in St. Louis alumni
TU Wien alumni
Fellows of the American Physical Society | Ernst K. Zinner | [
"Physics"
] | 1,010 | [
"Astrophysicists",
"Astrophysics"
] |
49,182,501 | https://en.wikipedia.org/wiki/Music%20technology | Music technology is the study or the use of any device, mechanism, machine or tool by a musician or composer to make or perform music; to compose, notate, playback or record songs or pieces; or to analyze or edit music.
History
The earliest known application of technology to music was prehistoric peoples' use of a tool to hand-drill holes in bones to make simple flutes.
Ancient Egyptians developed stringed instruments, such as harps, lyres and lutes, which required making thin strings and some type of peg system for adjusting the pitch of the strings. Ancient Egyptians also used wind instruments such as double clarinets and percussion instruments such as cymbals.
In ancient Greece, instruments included the double-reed aulos and the lyre.
Numerous instruments are referred to in the Bible, including the cornu, pipe, lyre, harp, and bagpipe. During Biblical times, the cornu, flute, horn, pipe organ, pipe, and trumpet were also used.
During the Middle Ages, music notation was used to create a written record of the notes of plainchant melodies.
During the Renaissance music era (c. 1400–1600), the printing press was invented, allowing for sheet music to be mass-produced (previously having been hand-copied). This helped to spread musical styles more quickly and across a larger area.
During the Baroque era (c. 1600–1750), technologies for keyboard instruments developed, which led to improvements in the designs of pipe organs and the harpsichord, and the development of a new keyboard instrument in approximately 1700, the piano.
In the Classical era, Beethoven added new instruments to the orchestra such as the piccolo, contrabassoon, trombones, and untuned percussion in his Ninth Symphony.
During the Romantic music era (c. 1810–1900), one of the key ways that new compositions became known to the public was by the sales of sheet music, which amateur music lovers would perform at home on their piano or other instruments. In the 19th century, new instruments such as saxophones, euphoniums, Wagner tubas, and cornets were added to the orchestra.
Around the turn of the 20th century, with the invention and popularization of the gramophone record (commercialized in 1892), and radio broadcasting (starting on a commercial basis ca. 1919–1920), there was a vast increase in music listening, and it was easier to distribute music to a wider public.
The development of sound recording had a major influence on the development of popular music genres because it enabled recordings of songs and bands to be widely distributed. The invention of sound recording also gave rise to a new subgenre of classical music: the Musique concrete style of electronic composition.
The invention of multitrack recording enabled pop bands to overdub many layers of instrument tracks and vocals, creating new sounds that would not be possible in a live performance.
In the early 20th century, electric technologies such as electromagnetic pickups, amplifiers and loudspeakers were used to develop new electric instruments such as the electric piano (1929), electric guitar (1931), electro-mechanical organ (1934) and electric bass (1935). The 20th-century orchestra gained new instruments and new sounds. Some orchestra pieces used the electric guitar, electric bass or the Theremin.
The invention of the miniature transistor in 1947 enabled the creation of a new generation of synthesizers, which were used first in pop music in the 1960s. Unlike prior keyboard instrument technologies, synthesizer keyboards do not have strings, pipes, or metal tines. A synthesizer keyboard creates musical sounds using electronic circuitry, or, later, computer chips and software. Synthesizers became popular in the mass market in the early 1980s.
With the development of powerful microchips, a number of new electronic or digital music technologies were introduced in the 1980s and subsequent decades, including drum machines and music sequencers. Electronic and digital music technologies are any device, such as a computer, an electronic effects unit or software, that is used by a musician or composer to help make or perform music. The term usually refers to the use of electronic devices, computer hardware and computer software that is used in the performance, playback, composition, sound recording and reproduction, mixing, analysis and editing of music.
Mechanical technologies
Prehistoric eras
Findings from paleolithic archaeology sites suggest that prehistoric people used carving and piercing tools to create instruments. Archeologists have found Paleolithic flutes carved from bones in which lateral holes have been pierced. The disputed Divje Babe flute, a perforated cave bear femur, is at least 40,000 years old. Instruments such as the seven-holed flute and various types of stringed instruments, such as the Ravanahatha, have been recovered from the Indus Valley civilization archaeological sites. India has one of the oldest musical traditions in the world—references to Indian classical music (marga) are found in the Vedas, ancient scriptures of the Hindu tradition. The earliest and largest collection of prehistoric musical instruments was found in China and dates back to between 7000 and 6600 BC.
Ancient Egypt
In prehistoric Egypt, music and chanting were commonly used in magic and rituals, and small shells were used as whistles. Evidence of Egyptian musical instruments dates to the Predynastic period, when funerary chants played an important role in Egyptian religion and were accompanied by clappers and possibly the flute. The most reliable evidence of instrument technologies dates from the Old Kingdom, when technologies for constructing harps, flutes and double clarinets were developed. Percussion instruments, lyres and lutes were used by the Middle Kingdom. Metal cymbals were used by ancient Egyptians. In the early 21st century, interest in the music of the pharaonic period began to grow, inspired by the research of such foreign-born musicologists as Hans Hickmann. By the early 21st century, Egyptian musicians and musicologists led by the musicology professor Khairy El-Malt at Helwan University in Cairo had begun to reconstruct musical instruments of ancient Egypt, a project that is ongoing.
Indus Valley
The Indus Valley civilization has sculptures that show old musical instruments, like the seven-holed flute. Various types of stringed instruments and drums have been recovered from Harappa and Mohenjo Daro by excavations carried out by Sir Mortimer Wheeler.
References in the Bible
According to the Scriptures, Jubal was the father of harpists and organists (Gen. 4:20–21). The harp was among the chief instruments and the favorite of David, and it is referred to more than fifty times in the Bible. It was used at both joyful and mournful ceremonies, and its use was "raised to its highest perfection under David" (1 Sam. 16:23). Lockyer adds that "It was the sweet music of the harp that often dispossessed Saul of his melancholy (1 Sam. 16:14–23; 18:10–11). When the Jews were captive in Babylon they hung their harps up and refused to use them while in exile, earlier being part of the instruments used in the Temple (1 Kgs. 10:12). Another stringed instrument of the harp class, and one also used by the ancient Greeks, was the lyre. A similar instrument was the lute, which had a large pear-shaped body, long neck, and fretted fingerboard with head screws for tuning. Coins displaying musical instruments, the Bar Kochba Revolt coinage, were issued by the Jews during the Second Jewish Revolt against the Roman Empire of 132–135 AD. In addition to those, there was the psaltery, another stringed instrument that is referred to almost thirty times in Scripture. According to Josephus, it had twelve strings and was played with a quill, not with the hand. Another writer suggested that it was like a guitar, but with a flat triangular form and strung from side to side.
Among the wind instruments used in the biblical period were the cornet, flute, horn, organ, pipe, and trumpet. There were also silver trumpets and the double oboe. Werner concludes that from the measurements taken of the trumpets on the Arch of Titus in Rome and from coins, that "the trumpets were very high pitched with thin body and shrill sound." He adds that in War of the Sons of Light Against the Sons of Darkness, a manual for military organization and strategy discovered among the Dead Sea Scrolls, these trumpets "appear clearly capable of regulating their pitch pretty accurately, as they are supposed to blow rather complicated signals in unison." Whitcomb writes that the pair of silver trumpets were fashioned according to Mosaic law and were probably among the trophies that the Emperor Titus brought to Rome when he conquered Jerusalem. She adds that on the Arch raised to the victorious Titus, "there is a sculptured relief of these trumpets, showing their ancient form. (see photo)
The flute was commonly used for festal and mourning occasions, according to Whitcomb. "Even the poorest Hebrew was obliged to employ two flute players to perform at his wife's funeral." The shofar (the horn of a ram) is still used for special liturgical purposes such as the Jewish New Year services in orthodox communities. As such, it is not considered a musical instrument but an instrument of theological symbolism that has been intentionally kept to its primitive character. In ancient times it was used for warning of danger, to announce the new moon or beginning of Sabbath, or to announce the death of a notable. "In its strictly ritual usage it carried the cries of the multitude to God," writes Werner.
Among the percussion instruments were bells, cymbals, sistrum, tabret, hand drums, and tambourines. The tabret, or timbrel, was a small hand drum used for festive occasions and was considered a woman's instrument. In modern times it was often used by the Salvation Army. According to the Bible, when the children of Israel came out of Egypt and crossed the Red Sea, "Miriam took a timbrel in her hands; and all the women went out after her with timbrels and with dance."
Ancient Greece
In ancient Greece, instruments in all music can be divided into three categories, based on how sound is produced: string, wind, and percussion. The following were among the instruments used in the music of ancient Greece:
the lyre: a strummed and occasionally plucked string instrument, essentially a hand-held zither built on a tortoise-shell frame, generally with seven or more strings tuned to the notes of one of the modes. The lyre was used to accompany others or even oneself for recitation and song.
the kithara, also a strummed string instrument, more complicated than the lyre. It had a box-type frame with strings stretched from the cross-bar at the top to the sounding box at the bottom; it was held upright and played with a plectrum. The strings were tunable by adjusting wooden wedges along the cross-bar.
the aulos, usually double, consisting of two double-reed (like an oboe) pipes, not joined but generally played with a mouth-band to hold both pipes steadily between the player's lips. Modern reconstructions indicate that they produced a low, clarinet-like sound. There is some confusion about the exact nature of the instrument; alternate descriptions indicate single reeds instead of double reeds.
the Pan pipes, also known as panflute and syrinx (Greek συριγξ), (so-called for the nymph who was changed into a reed in order to hide from Pan) is an ancient musical instrument based on the principle of the stopped pipe, consisting of a series of such pipes of gradually increasing length, tuned (by cutting) to a desired scale. Sound is produced by blowing across the top of the open pipe (like blowing across a bottle top).
the hydraulis, a keyboard instrument, the forerunner of the modern organ. As the name indicates, the instrument used water to supply a constant flow of pressure to the pipes. Two detailed descriptions have survived: that of Vitruvius and Heron of Alexandria. These descriptions deal primarily with the keyboard mechanism and with the device by which the instrument was supplied with air. A well-preserved model in pottery was found at Carthage in 1885. Essentially, the air to the pipes that produce the sound comes from a wind chest connected by a pipe to a dome; air is pumped in to compress water, and the water rises in the dome, compressing the air, and causing a steady supply of air to the pipes.
In the Aeneid, Virgil makes numerous references to the trumpet. The lyre, kithara, aulos, hydraulis (water organ) and trumpet all found their way into the music of ancient Rome.
Roman Empire
The Romans may have borrowed the Greek method of enchiriadic notation to record their music if they used any notation at all. Four letters (in English notation 'A', 'G', 'F' and 'C') indicated a series of four succeeding tones. Rhythm signs, written above the letters, indicated the duration of each note. Roman art depicts various woodwinds, "brass", percussion and stringed instruments. Roman-style instruments are found in parts of the Empire where they did not originate and indicate that music was among the aspects of Roman culture that spread throughout the provinces.
Roman instruments include:
The Roman tuba was a long, straight bronze trumpet with a detachable, conical mouthpiece. Extant examples are about 1.3 metres long, and have a cylindrical bore from the mouthpiece to the point where the bell flares abruptly, similar to the modern straight trumpet seen in presentations of 'period music'. Since there were no valves, the tuba was capable only of a single overtone series. In the military, it was used for "bugle calls". The tuba is also depicted in art such as mosaics accompanying games (ludi) and spectacle events.
The cornu (Latin "horn") was a long tubular metal wind instrument that curved around the musician's body, shaped rather like an uppercase G. It had a conical bore (again like a French horn) and a conical mouthpiece. It may be hard to distinguish from the buccina. The cornu was used for military signals and on parade. The cornicen was a military signal officer who translated orders into calls. Like the tuba, the cornu also appears as accompaniment for public events and spectacle entertainments.
The tibia (Greek aulos – αὐλός), usually double, had two double-reed (as in a modern oboe) pipes, not joined but generally played with a mouth-band capistrum to hold both pipes steadily between the player's lips.
The askaules — a bagpipe.
Versions of the modern flute and panpipes.
The lyre, borrowed from the Greeks, was not a harp, but instead had a sounding body of wood or a tortoise shell covered with skin, and arms of animal horn or wood, with strings stretched from a cross bar to the sounding body.
The cithara was the premier musical instrument of ancient Rome and was played both in popular and elevated forms of music. Larger and heavier than a lyre, the cithara was a loud, sweet and piercing instrument with precision tuning ability.
The lute (pandura or monochord) was known by several names among the Greeks and Romans. In construction, the lute differs from the lyre in having fewer strings stretched over a solid neck or fret-board, on which the strings can be stopped to produce graduated notes. Each lute string is thereby capable of producing a greater range of notes than a lyre string. Although long-necked lutes are depicted in art from Mesopotamia as early as 2340–2198 BC, and also occur in Egyptian iconography, the lute in the Greco-Roman world was far less common than the lyre and cithara. The lute of the medieval West is thought to owe more to the Arab oud, from which its name derives (al ʿūd).
The hydraulic pipe organ (hydraulis), which worked by water pressure, was "one of the most significant technical and musical achievements of antiquity". Essentially, the air to the pipes that produce the sound comes from a mechanism of a wind-chest connected by a pipe to a dome; air is pumped in to compress water, and the water rises in the dome, compressing the air and causing a steady supply to reach the pipes (also see Pipe organ#History). The hydraulis accompanied gladiator contests and events in the arena, as well as stage performances.
Variations of a hinged wooden or metal device, called a scabellum used to beat time. Also, there were various rattles, bells and tambourines.
Drum and percussion instruments like timpani and castanets, the Egyptian sistrum, and brazen pans, served various musical and other purposes in ancient Rome, including backgrounds for rhythmic dance, celebratory rites like those of the Bacchantes and military uses.
The sistrum was a rattle consisting of rings strung across the cross-bars of a metal frame, which was often used for ritual purposes.
Cymbala (Lat. plural of cymbalum, from the Greek kymbalon) were small cymbals: metal discs with concave centres and turned rims, used in pairs which were clashed together.
Islamic world
A number of musical instruments later used in medieval European music were influenced by Arabic musical instruments, including the rebec (an ancestor of the violin) from the rebab and the naker from naqareh. Many European instruments have roots in earlier Eastern instruments that were adopted from the Islamic world. The Arabic rabāb, also known as the spiked fiddle, is the earliest known bowed string instrument and the ancestor of all European bowed instruments, including the rebec, the Byzantine lyra, and the violin.
The plucked and bowed versions of the rebab existed alongside each other. The bowed instruments became the rebec or rabel and the plucked instruments became the gittern. Curt Sachs linked this instrument with the mandola, the kopuz and the gambus, and named the bowed version rabâb.
The Arabic oud in Islamic music was the direct ancestor of the European lute. The oud is also cited as a precursor to the modern guitar. The guitar has roots in the four-string oud, brought to Iberia by the Moors in the 8th century. A direct ancestor of the modern guitar is the (Moorish guitar), which was in use in Spain by 1200. By the 14th century, it was simply referred to as a guitar.
The origin of automatic musical instruments dates back to the 9th century when the Persian Banū Mūsā brothers invented a hydropowered organ using exchangeable cylinders with pins, and also an automatic flute playing machine using steam power. These were the earliest automated mechanical musical instruments. The Banu Musa brothers' automatic flute player was the first programmable musical device, the first music sequencer, and the first example of repetitive music technology, powered by hydraulics.
In 1206, the Arab engineer Al-Jazari invented a programmable band of humanoid automata. According to Charles B. Fowler, the automata were a "robot band" which performed "more than fifty facial and body actions during each musical selection." It was also the first programmable drum machine. Among the four automaton musicians, two were drummers. It was a drum machine where pegs (cams) bumped into little levers that operated the percussion. The drummers could be made to play different rhythms and different drum patterns if the pegs were moved around.
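As a purely conceptual illustration (not a reconstruction of the historical mechanism), the peg-and-lever arrangement described above corresponds to a simple sequencer data structure in which each peg is a (time-step, drum) pair and moving a peg changes the pattern:

```python
# A toy pin-barrel sequencer: each "peg" is a (time_step, drum) pair,
# and re-arranging the pegs changes the rhythm that is played back.
pegs = [(0, "low"), (2, "low"), (4, "high"), (6, "low"), (7, "high")]
steps = 8

def render(pegs, steps):
    """Return one bar as a text grid, one row per drum."""
    drums = sorted({drum for _, drum in pegs})
    grid = {drum: ["."] * steps for drum in drums}
    for step, drum in pegs:
        grid[drum][step % steps] = "x"
    return "\n".join(f"{drum:>4}: {' '.join(row)}" for drum, row in grid.items())

print(render(pegs, steps))
# Moving the pegs one step later gives a different pattern, as with the movable cams.
print(render([(t + 1, d) for t, d in pegs], steps))
```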
Middle Ages
During the medieval music era (476 to 1400) the plainchant tunes used for religious songs were primarily monophonic (a single line, unaccompanied melody). In the early centuries of the medieval era, these chants were taught and spread by oral tradition ("by ear"). The earliest Medieval music did not have any kind of notational system for writing down melodies. As Rome tried to standardize the various chants across vast distances of its empire, a form of music notation was needed to write down the melodies. Various signs written above the chant texts, called neumes were introduced. By the ninth century, it was firmly established as the primary method of musical notation. The next development in musical notation was heighted neumes, in which neumes were carefully placed at different heights in relation to each other. This allowed the neumes to give a rough indication of the size of a given interval as well as the direction.
This quickly led to one or two lines, each representing a particular note, being placed on the music with all of the neumes relating back to them. The line or lines acted as a reference point to help the singer gauge which notes were higher or lower. At first, these lines had no particular meaning and instead had a letter placed at the beginning indicating which note was represented. However, the lines indicating middle C and the F a fifth below slowly became most common. The completion of the four-line staff is usually credited to Guido d' Arezzo (c. 1000–1050), one of the most important musical theorists of the Middle Ages. The neumatic notational system, even in its fully developed state, did not clearly define any kind of rhythm for the singing of notes or playing of melodies. The development of music notation made it faster and easier to teach melodies to new people, and facilitated the spread of music over long geographic distances.
Instruments used to perform medieval music include earlier, less mechanically sophisticated versions of a number of instruments that continue to be used in the 2010s. Medieval instruments include the flute, which was made of wood and could be made as a side-blown or end-blown instrument (it lacked the complex metal keys and airtight pads of 2010s-era metal flutes); the wooden recorder and the related instrument called the gemshorn; and the pan flute (a group of air columns attached together). Medieval music used many plucked string instruments like the lute, mandore, gittern and psaltery. The dulcimers, similar in structure to the psaltery and zither, were originally plucked, but became struck by hammers in the 14th century after the arrival of new technology that made metal strings possible.
Bowed strings were used as well. The bowed lyra of the Byzantine Empire was the first recorded European bowed string instrument. The Persian geographer Ibn Khurradadhbih of the 9th century (d. 911) cited the Byzantine lyra as a bowed instrument equivalent to the Arab rabāb and typical instrument of the Byzantines along with the urghun (organ), shilyani (probably a type of harp or lyre) and the salandj (probably a bagpipe). The hurdy-gurdy was a mechanical violin using a rosined wooden wheel attached to a crank to "bow" its strings. Instruments without sound boxes like the jaw harp were also popular in the time. Early versions of the organ, fiddle (or vielle), and trombone (called the sackbut) existed in the medieval era.
Renaissance
The Renaissance music era (c. 1400 to 1600) saw the development of many new technologies that affected the performance and distribution of songs and musical pieces. Around 1450, the printing press was invented, which made printed sheet music much less expensive and easier to mass-produce (prior to the invention of the printing press, all notated music was laboriously hand-copied). The increased availability of printed sheet music helped to spread musical styles more quickly and across a larger geographic area.
Many instruments originated during the Renaissance; others were variations of, or improvements upon, instruments that had existed previously in the medieval era. Brass instruments in the Renaissance were traditionally played by professionals. Some of the more common brass instruments that were played included:
Slide trumpet: Similar to the trombone of today except that instead of a section of the body sliding, only a small part of the body near the mouthpiece and the mouthpiece itself is stationary.
Cornett: Made of wood and was played like the recorder, but blown like a trumpet.
Trumpet: Early trumpets from the Renaissance era had no valves, and were limited to the tones present in the overtone series. They were also made in different sizes.
Sackbut: A different name for the trombone, which replaced the slide trumpet by the middle of the 15th century
Stringed instruments included:
Viol: This instrument, developed in the 15th century, commonly has six strings. It was usually played with a bow.
Lyre: Its construction is similar to a small harp, although instead of being plucked, it is strummed with a plectrum. Its strings varied in quantity from four, seven, and ten, depending on the era. It was played with the right hand, while the left hand silenced the notes that were not desired. Newer lyres were modified to be played with a bow.
Hurdy-gurdy: (Also known as the wheel fiddle), in which the strings are sounded by a wheel which the strings pass over. Its functionality can be compared to that of a mechanical violin, in that its bow (wheel) is turned by a crank. Its distinctive sound is mainly because of its "drone strings" which provide a constant pitch similar in their sound to that of bagpipes.
Gittern and mandore: these instruments were used throughout Europe. Forerunners of modern instruments including the mandolin and acoustic guitar.
Percussion instruments included:
Tambourine: The tambourine is a frame drum equipped with jingles that produce a sound when the drum is struck.
Jew's harp: An instrument that produces sound using shapes of the mouth and attempting to pronounce different vowels with one's mouth.
Woodwind instruments included:
Shawm: A typical shawm is keyless and is about a foot long with seven finger holes and a thumb hole. The pipes were also most commonly made of wood and many of them had carvings and decorations on them. It was the most popular double reed instrument of the Renaissance period; it was commonly used in the streets with drums and trumpets because of its brilliant, piercing, and often deafening sound. To play the shawm a person puts the entire reed in their mouth, puffs out their cheeks, and blows into the pipe whilst breathing through their nose.
Reed pipe: Made from a single short length of cane with a mouthpiece, four or five finger holes, and reed fashioned from it. The reed is made by cutting out a small tongue but leaving the base attached. It is the predecessor of the saxophone and the clarinet.
Hornpipe: Same as reed pipe but with a bell at the end.
Bagpipe/Bladderpipe: It used a bag made out of sheep or goat skin that would provide air pressure for a pipe. When the player takes a breath, the player only needs to squeeze the bag tucked underneath their arm to continue the tone. The mouth pipe has a simple round piece of leather hinged on to the bag end of the pipe and acts like a non-return valve. The reed is located inside the long metal mouthpiece, known as a bocal.
Panpipe: Designed to have sixteen wooden tubes with a stopper at one end and open on the other. Each tube is a different size (thereby producing a different tone), giving it a range of an octave and a half. The player can then place their lips against the desired tube and blow across it.
Transverse flute: The transverse flute is similar to the modern flute with a mouth hole near the stoppered end and finger holes along the body. The player blows in the side and holds the flute to the right side.
Recorder: It uses a whistle mouthpiece, which is a beak-shaped mouthpiece, as its main source of sound production. It is usually made with seven finger holes and a thumb hole.
Baroque
During the Baroque era of music (ca. 1600–1750), technologies for keyboard instruments developed, which led to improvements in the designs of pipe organs and harpsichords, and to the development of the first pianos. During the Baroque period, organ builders developed new types of pipes and reeds that created new tonal colors. Organ builders fashioned new stops that imitated various instruments, such as the viola da gamba. The Baroque period is often thought of as organ building's "golden age," as virtually every important refinement to the instrument was brought to a peak. Builders such as Arp Schnitger, Jasper Johannsen, Zacharias Hildebrandt and Gottfried Silbermann constructed instruments that displayed both exquisite craftsmanship and beautiful sound. These organs featured well-balanced mechanical key actions, giving the organist precise control over the pipe speech. Schnitger's organs featured particularly distinctive reed timbres and large Pedal and Rückpositiv divisions.
Harpsichord builders in the Southern Netherlands built instruments with two keyboards that could be used for transposition. These Flemish instruments served as the model for Baroque-era harpsichord construction in other nations. In France, the double keyboards were adapted to control different choirs of strings, making a more musically flexible instrument (e.g., the upper manual could be set to a quiet lute stop, while the lower manual could be set to a stop with multiple string choirs, for a louder sound). Instruments from the peak of the French tradition, by makers such as the Blanchet family and Pascal Taskin, are among the most widely admired of all harpsichords and are frequently used as models for the construction of modern instruments. In England, the Kirkman and Shudi firms produced sophisticated harpsichords of great power and sonority. German builders extended the sound repertoire of the instrument by adding sixteen-foot choirs, adding to the lower register and two-foot choirs, which added to the upper register.
The piano was invented during the Baroque era by the expert harpsichord maker Bartolomeo Cristofori (1655–1731) of Padua, Italy, who was employed by Ferdinando de' Medici, Grand Prince of Tuscany. Cristofori invented the piano at some point before 1700. While the clavichord allowed expressive control of volume, with harder or louder key presses creating louder sound (and vice versa) and fairly sustained notes, it was too quiet for large performances. The harpsichord produced a sufficiently loud sound, but offered little expressive control over each note. Pressing a harpsichord key harder or softer had no effect on the instrument's loudness. The piano offered the best of both, combining loudness with dynamic control. Cristofori's great success was solving, with no prior example, the fundamental mechanical problem of piano design: the hammer must strike the string, but not remain in contact with it (as a tangent remains in contact with a clavichord string) because this would damp the sound. Moreover, the hammer must return to its rest position without bouncing violently, and it must be possible to repeat the same note rapidly. Cristofori's piano action was a model for the many approaches to piano actions that followed. Cristofori's early instruments were much louder and had more sustain than the clavichord. Even though the piano was invented in 1700, the harpsichord and pipe organ continued to be widely used in orchestra and chamber music concerts until the end of the 1700s. It took time for the new piano to gain in popularity. By 1800, though, the piano generally was used in place of the harpsichord (although pipe organ continued to be used in church music such as Masses).
Classicism
From about 1790 onward, the Mozart-era piano underwent tremendous changes that led to the modern form of the instrument. This revolution was in response to a preference by composers and pianists for a more powerful, sustained piano sound, and was made possible by the ongoing Industrial Revolution, with resources such as high-quality steel piano wire for strings and precision casting for the production of iron frames. Over time, the tonal range of the piano was also increased from the five octaves of Mozart's day to the seven-plus octave range found on modern pianos.
Early technological progress owed much to the firm of Broadwood. John Broadwood joined with another Scot, Robert Stodart, and a Dutchman, Americus Backers, to design a piano in the harpsichord case—the origin of the "grand". They achieved this in about 1777. They quickly gained a reputation for the splendour and powerful tone of their instruments, and Broadwood built pianos that were progressively larger, louder, and more robustly constructed.
They sent pianos to both Joseph Haydn and Ludwig van Beethoven, and were the first firm to build pianos with a range of more than five octaves: five octaves and a fifth during the 1790s, six octaves by 1810 (Beethoven used the extra notes in his later works), and seven octaves by 1820. The Viennese makers similarly followed these trends; however, the two schools used different piano actions: Broadwood actions were more robust, while Viennese instruments were more sensitive.
Beethoven's instrumentation for orchestra added piccolo, contrabassoon, and trombones to the triumphal finale of his Symphony No. 5. A piccolo and a pair of trombones help deliver storm and sunshine in the Sixth. Beethoven's use of piccolo, contrabassoon, trombones, and untuned percussion in his Ninth Symphony expanded the sound of the orchestra.
Romanticism
During the Romantic music era (c. 1810 to 1900), one of the key ways that new compositions became known to the public was by the sales of sheet music, which amateur music lovers would perform at home on their piano or in chamber music groups, such as string quartets. Saxophones began to appear in some 19th-century orchestra scores. While the saxophone appears only as a featured solo instrument in some works, for example Maurice Ravel's orchestration of Modest Mussorgsky's Pictures at an Exhibition and Sergei Rachmaninoff's Symphonic Dances, it is included as an ensemble member in other works, such as Ravel's Boléro and Sergei Prokofiev's Romeo and Juliet Suites 1 and 2. The euphonium is featured in a few late Romantic and 20th-century works, usually playing parts marked "tenor tuba", including Gustav Holst's The Planets and Richard Strauss's Ein Heldenleben. The Wagner tuba, a modified member of the horn family, appears in Richard Wagner's cycle Der Ring des Nibelungen and several other works by Strauss, Béla Bartók, and others; it has a prominent role in Anton Bruckner's Symphony No. 7 in E Major. Cornets appear in Pyotr Ilyich Tchaikovsky's ballet Swan Lake, Claude Debussy's La Mer, and several orchestral works by Hector Berlioz.
The piano continued to undergo technological developments in the Romantic era, up until the 1860s. By the 1820s, the center of piano building innovation had shifted to Paris, where the Pleyel firm manufactured pianos used by Frédéric Chopin and the Érard firm manufactured those used by Franz Liszt. In 1821, Sébastien Érard invented the double escapement action, which incorporated a repetition lever (also called the balancier) that permitted repeating a note even if the key had not yet risen to its maximum vertical position. This facilitated rapid playing of repeated notes, a musical device exploited by Liszt. When the invention became public, as revised by Henri Herz, the double escapement action gradually became standard in grand pianos and is still incorporated into all grand pianos currently produced. Other improvements of the mechanism included the use of felt hammer coverings instead of layered leather or cotton. Felt, which was first introduced by Jean-Henri Pape in 1826, was a more consistent material, permitting wider dynamic ranges as hammer weights and string tension increased. The sostenuto pedal, invented in 1844 by Jean-Louis Boisselot and copied by the Steinway firm in 1874, allowed a wider range of effects.
One innovation that helped create the sound of the modern piano was the use of a strong iron frame. Also called the "plate", the iron frame sits atop the soundboard, and serves as the primary bulwark against the force of string tension that can exceed 20 tons in a modern grand. The single piece cast iron frame was patented in 1825 in Boston by Alpheus Babcock, combining the metal hitch pin plate (1821, claimed by Broadwood on behalf of Samuel Hervé) and resisting bars (Thom and Allen, 1820, but also claimed by Broadwood and Érard). The increased structural integrity of the iron frame allowed the use of thicker, tenser, and more numerous strings. In 1834, the Webster & Horsfal firm of Birmingham brought out a form of piano wire made from cast steel; according to Dolge it was "so superior to the iron wire that the English firm soon had a monopoly."
Other important advances included changes to the way the piano is strung, such as the use of a "choir" of three strings rather than two for all but the lowest notes, and the implementation of an over-strung scale, in which the strings are placed in two separate planes, each with its own bridge height. The mechanical action structure of the upright piano was invented in London, England in 1826 by Robert Wornum, and upright models became the most popular form of piano.
20th- and 21st-century music
With 20th-century music, there was a vast increase in music listening, as the radio gained popularity and phonographs were used to replay and distribute music. The invention of sound recording and the ability to edit music gave rise to new subgenres of classical music, including the acousmatic and Musique concrète schools of electronic composition. Sound recording was also a major influence on the development of popular music genres, because it enabled recordings of songs and bands to be widely distributed. The introduction of the multitrack recording system had a major influence on rock music, because it could do much more than record a band's performance. Using a multitrack system, a band and their music producer could overdub many layers of instrument tracks and vocals, creating new sounds that would not be possible in a live performance.
The 20th-century orchestra was far more flexible than its predecessors. In Beethoven's and Felix Mendelssohn's time, the orchestra was composed of a fairly standard core of instruments which was very rarely modified. As time progressed, and as the Romantic period saw the accepted instrumentation modified by composers such as Berlioz and Mahler, the 20th century saw that instrumentation could practically be hand-picked by the composer. Saxophones were used in some 20th-century orchestra scores such as Vaughan Williams' Symphonies No. 6 and 9 and William Walton's Belshazzar's Feast, and many other works as a member of the orchestral ensemble. In the 2000s, the modern orchestra became standardized with the modern instrumentation that includes a string section, woodwinds, brass instruments, percussion, piano, celeste, and even, for some 20th century or 21st-century works, electric instruments such as electric guitar, electric bass and/or electronic instruments such as the Theremin or synthesizer.
Electric and electro-mechanical
Electric music technology refers to musical instruments and recording devices that use electrical circuits, which are often combined with mechanical technologies. Examples of electric musical instruments include the electro-mechanical electric piano (invented in 1929), the electric guitar (invented in 1931), the electro-mechanical Hammond organ (developed in 1934) and the electric bass (invented in 1935). None of these electric instruments produce a sound that is audible to the performer or audience in a performance setting unless they are connected to instrument amplifiers and loudspeaker cabinets, which make them loud enough for performers and the audience to hear. Amplifiers and loudspeakers are separate from the instrument in the case of the electric guitar (which uses a guitar amplifier), electric bass (which uses a bass amplifier), some electric organs (which use a Leslie speaker or similar cabinet) and electric pianos. Some electric organs and electric pianos include the amplifier and speaker cabinet within the main housing for the instrument.
Electric piano
An electric piano is an electric musical instrument which produces sounds when a performer presses the keys of the piano-style musical keyboard. Pressing keys causes mechanical hammers to strike metal strings or tines, leading to vibrations which are converted into electrical signals by magnetic pickups, which are then connected to an instrument amplifier and loudspeaker to make a sound loud enough for the performer and audience to hear. Unlike a synthesizer, the electric piano is not an electronic instrument. Instead, it is an electromechanical instrument. Some early electric pianos used lengths of wire to produce the tone, like a traditional piano. Smaller electric pianos used short slivers of steel, metal tines or short wires to produce the tone. The earliest electric pianos were invented in the late 1920s.
Electric guitar
An electric guitar is a guitar that uses a pickup to convert the vibration of its strings into electrical impulses. The most common guitar pickup uses the principle of direct electromagnetic induction. The signal generated by an electric guitar is too weak to drive a loudspeaker, so it is amplified before being sent to a loudspeaker. The output of an electric guitar is an electric signal, and the signal can easily be altered by electronic circuits to add "color" to the sound. Often the signal is modified using electronic effects such as reverb and distortion. Invented in 1931, the electric guitar became a necessity as jazz guitarists sought to amplify their sound in the big band format.
Hammond organ
The Hammond organ is an electric organ, invented by Laurens Hammond and John M. Hanert and first manufactured in 1935. Various models have been produced, most of which use sliding drawbars to create a variety of sounds. Until 1975, Hammond organs generated sound by creating an electric current from rotating a metal tonewheel near an electromagnetic pickup. Around two million Hammond organs have been manufactured, and it has been described as one of the most successful organs. The organ is commonly used with, and associated with, the Leslie speaker. The organ was originally marketed and sold by the Hammond Organ Company to churches as a lower-cost alternative to the wind-driven pipe organ, or instead of a piano. It quickly became popular with professional jazz bandleaders, who found that the room-filling sound of a Hammond organ allowed small groups such as organ trios to replace an entire big band at a much lower cost.
Electric bass
The electric bass (or bass guitar) was invented in the 1930s, but it did not become commercially successful or widely used until the 1950s. It is a stringed instrument played primarily with the fingers or thumb, by plucking, slapping, popping, strumming, tapping, thumping, or picking with a plectrum, often known as a pick. The bass guitar is similar in appearance and construction to an electric guitar, but with a longer neck and scale length, and four to six strings or courses. The electric bass usually uses metal strings and an electromagnetic pickup which senses the vibrations in the strings. Like the electric guitar, the bass guitar is plugged into an amplifier and speaker for live performances.
Electronic or digital
Electronic or digital music technology is any device, such as a computer, an electronic effects unit or software, that is used by a musician or composer to help make or perform music. The term usually refers to the use of electronic devices, computer hardware and computer software that is used in the performance, composition, sound recording and reproduction, mixing, analysis and editing of music. Electronic or digital music technology is connected to both artistic and technological creativity. Musicians and music technology experts are constantly striving to devise new forms of expression through music, and they are physically creating new devices and software to enable them to do so. Although in the 2010s the term is most commonly used in reference to modern electronic devices and computer software such as digital audio workstations and Pro Tools digital sound recording software, electronic and digital musical technologies have precursors in the electric music technologies of the early 20th century, such as the electromechanical Hammond organ, which was developed in 1934. In the 2010s, the ontological range of music technology has greatly increased, and it may now be electronic, digital, software-based or indeed even purely conceptual.
A synthesizer is an electronic musical instrument that generates electric signals that are converted to sound through instrument amplifiers and loudspeakers or headphones. Synthesizers may either imitate existing sounds (instruments, vocal, natural sounds, etc.), or generate new electronic timbres or sounds that did not exist before. They are often played with an electronic musical keyboard, but they can be controlled via a variety of other input devices, including music sequencers, instrument controllers, fingerboards, guitar synthesizers, wind controllers, and electronic drums. Synthesizers without built-in controllers are often called sound modules, and are controlled using a controller device.
References
Sources
Further reading
External links
Sound recording
Audio electronics
Music history
Musical instruments | Music technology | [
"Engineering"
] | 9,298 | [
"Audio electronics",
"Audio engineering"
] |
49,183,008 | https://en.wikipedia.org/wiki/Lead%20abatement | Lead abatement includes lead-based paint abatement activities such as inspections, risk assessments, and removal. Lead abatement must be performed by educated, certified professionals with proper safety protocols to limit lead exposure. The goal is to permanently eliminate lead-based paint hazards, which can cause serious, permanent and irreversible health damage due to lead poisoning in children. This is especially important in home environments and in any facility with frequent visitation by children, particularly those built before 1978.
Techniques
Residential
There are various lead abatement techniques to remove residential lead-based paint and lead in household dusts. Encapsulation and enclosure make the hazard of lead-based paint inaccessible, while chemical stripping, abrasive removal, hand scraping, and component replacement are effective in permanently removing lead-based paints from households. Encapsulation refers to the technique that coats all lead-contaminated surfaces with a special liquid coating, which provides a long-lasting and effective barrier and prevents lead dust particles from being released. Enclosure refers to covering all lead-contaminated surfaces and objects with a solid, dust-tight barrier, which is effective in not exposing children to harmful lead paint but is not a permanent solution. Removal is the act of scraping, stripping, vacuuming, and blasting lead-based paint from contaminated surfaces. Replacement is the simple removal and substitution of only objects contaminated with lead, such as lead-painted doors and windows. However, residential lead abatement practices are relatively expensive, and some practices are ineffective and could even worsen the current situation.
Soils
Lead contaminated soil is one of the leading sources of lead poisoning for children in the United States. Soils with lead are especially prominent in urban landscapes and near old homes and child-occupied facilities that were built before 1978 (also known as target housing). Lead can get into soils via deposits from leaded gasoline (which was banned in the United States in 1996 by the Clean Air Act), degradation of leaded paint on nearby painted surfaces, exterior lead-based paint chippings and dust, and industrial sites. It is important that lead contaminated soils be properly disposed of as soon as possible. For risk assessment, the US EPA recommends that more than two dust and soil samples be taken. Remediation technologies and methods for lead contaminated soils include excavation and off-site disposal to permanently remove the contaminated soils from the site, containment technologies (such as asphalt capping) to reduce human health exposure and hazards, and traditional techniques (such as washing and stabilizing soil). In addition, it is important that the public has a comprehensive nationwide understanding of the hazards of lead contaminated soils and of their proper removal. Specific, official guidelines for removal or covering of lead-contaminated soil can be found in the US EPA guidance issued in 1994, also known as the Section 403 Guidance.
Lead abatement vs. RRP
Lead abatement and RRP (Renovation, Repair, and Painting) activities are similar in that they are both performed in target housing and child-occupied facilities. In the United States, they are both protected under OSHA 29 CFR 1926.62 and required to post signage in lead-related work areas.
However, even though the activities performed look similar, lead abatement and RRP activities are also very different. Lead abatement is a specialized activity that is performed to permanently eliminate lead-based paint hazards and is usually done based on orders from state or local governments after a serious lead-related incident. Meanwhile, RRP activities are only performed at the discretion and desire of the home or facility owner to temporarily minimize lead-related hazards for aesthetic or lead-unrelated purposes. Lead abatement techniques include encapsulation, enclosure, removal, and/or replacement, while RRP activities include modification or repair of painted doors, surface restoration, window repair, surface preparation that usually produces paint dust, removal of painted building components, cutting holes in painted surfaces to install insulation, and installation of interim controls that disturb existing painted surfaces. Thus, RRP activities carry higher risk than lead abatement techniques because they can further disturb the existing lead paint and introduce more problems.
Rules and regulations in the United States
Lead abatement, also known as lead-based paint activities, is regulated by the United States Environmental Protection Agency (EPA). Laws and policies involving lead abatement activities are enforced and kept in check by the EPA, local government, and state government. All lead-based paint activities intended by state governments and facility owners must receive proper authorization from the EPA before being carried out. Because working with lead poses health risks and even permanent damage to health, it is important that lead abatement workers, supervisors, inspectors, and other individuals dealing with lead-based paint strictly comply with the following regulations. Individuals performing lead abatement activities in the home environment or child-occupied facilities must be properly trained, take classes, and get certified; training programs providing instruction in such activities must be accredited and approved by the EPA; and these activities must be carried out in accordance with reliable, effective, and safe work practice standards, as defined by OSHA. Improper lead abatement or removal by the incorrect method or by an unlicensed individual can result in severe consequences. If lead abatement is improperly managed or carried out, chips and dust can pose additional health hazards. Furthermore, the current situation can worsen as well. To prevent this, Congress passed the Residential Lead-Based Paint Hazard Reduction Act of 1992, which ensures that contractors are well qualified and properly trained.
Lead-based paints
The production and consumption of lead-based paints is a worldwide issue. Many Asian countries, especially China, depend on the import, export, production, and consumption of paints, and many of the paints produced there are problematic. Paints used for toys, finger paints, interior and exterior wall paints, and woodenware paints are all produced in Asian countries, particularly China, and many contain far more lead than regulatory standards allow. The regulatory standards themselves can also be misleading: for example, in China the safe lead content criterion is based on soluble lead levels, not total lead levels, so paints can be labeled as safe even when they contain harmful concentrations of lead. A study also found that colored paints have higher lead concentrations than pure white paints, most likely because lead is added for additional color enhancement. Thus, even with regulatory standards for lead content in paints, paints containing lead are still produced around the world. Paints produced in developing countries have been observed to contain more lead than those produced in developed countries, and the production and consumption of lead-based paints is actually growing in developing countries, many of which lack the resources, personnel, technology, and tools to remove lead from their paints. As a result, many lead-related illnesses and defects, especially in children, are arising in these countries. In order to prevent the detrimental effects of the production and consumption of lead-based paints, the United Nations Environment Program (UNEP) has initiated the Global Alliance to Eliminate Lead Paint, which advocates the phase-out of paints containing lead. Through such programs, it is important that countries worldwide understand what lead-based paints are, what their effects and risks are, what alternatives exist, and what the proper removal methods and strategies are. Verification of paint content labels, correct adjustment of regulatory standards and criteria for safe lead content in paints, and awareness and education are crucial in abating lead-based paint and its harmful effects around the world.
See also
Environmental toxicology
History of the tetraethyllead controversy
Lead and crime hypothesis
Organolead chemistry
Pollution control
Lead abatement in the United States
References
External links
Abatement
Pollution control technologies | Lead abatement | [
"Chemistry",
"Engineering"
] | 1,624 | [
"Pollution control technologies",
"Environmental engineering"
] |
49,185,980 | https://en.wikipedia.org/wiki/Emission%20channeling | Emission channeling is an experimental technique for identifying the position of short-lived radioactive atoms in the lattice of a single crystal.
When the radioactive atoms decay, they emit fast charged particles (e.g., α-particles and β-particles). Because of their charge, the emitted particles interact in characteristic ways with the electrons and nuclei of the crystal atoms, giving rise to channeling and blocking directions for the particle escaping the crystal. The intensity (or yield) of the emitted particles is therefore dependent on the position of the detector relative to crystal planes and axes. This fact is used to infer the location of the radioactive species in the lattice by measuring the yield as a function of emission angle and comparing it to simulation results. For the simulations, the many-beam formalism can be employed, and resolutions below 1 Å are achievable.
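In practice, the comparison with simulations amounts to decomposing the measured angular yield into a weighted sum of simulated patterns for candidate lattice sites, with the weights giving the site fractions (as in the manganese example that follows). The sketch below illustrates such a decomposition; the simulated patterns, angular grid, and site names are illustrative assumptions rather than the output of a real many-beam calculation.

```python
import numpy as np

# Illustrative angular emission patterns for candidate lattice sites, all on the
# same grid of emission angles; real patterns would come from many-beam simulations.
angles = np.linspace(-3.0, 3.0, 61)                              # degrees about a crystal axis
sim_substitutional = 1.0 + 1.5 * np.exp(-(angles / 0.6) ** 2)    # channeling peak
sim_interstitial = 1.0 - 0.4 * np.exp(-(angles / 0.6) ** 2)      # blocking dip
sim_random = np.ones_like(angles)                                # isotropic (random) sites

# A synthetic "measured" yield built from assumed site fractions, for demonstration.
measured = 0.70 * sim_substitutional + 0.28 * sim_interstitial + 0.02 * sim_random

# Least-squares decomposition of the measured pattern into the simulated patterns;
# the fitted coefficients are the occupied-site fractions.
A = np.column_stack([sim_substitutional, sim_interstitial, sim_random])
fractions, *_ = np.linalg.lstsq(A, measured, rcond=None)

for site, f in zip(["substitutional", "interstitial", "random"], fractions):
    print(f"{site}: {f:.0%}")
```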
Among others, the technique has been used to determine the sites of manganese impurities implanted in semiconducting gallium arsenide: 70% occupy substitutional gallium sites and 28% are located at tetrahedral interstitial sites with arsenic as nearest neighbors.
See also
Channelling (physics)
References
External links
Radioactivity
Experimental physics | Emission channeling | [
"Physics",
"Chemistry"
] | 241 | [
"Experimental physics",
"Radioactivity",
"Nuclear physics"
] |
49,187,020 | https://en.wikipedia.org/wiki/Triple%20beam%20balance | The triple beam balance is an instrument used to measure weight or mass very precisely. Such devices typically have a reading error of ±0.05 grams. Its name refers to its three beams, where the middle beam is the largest, the far beam of medium size, and the front beam the smallest. The different sizes of the beams correspond to the different weight ranges and reading scales that each beam measures. Typically, the reading scale of the middle beam reads in 100 gram increments, the far beam in 10 gram increments, and the front beam can read from 0 to 10 grams. The triple beam balance can be used to measure mass directly from the objects, find mass by difference for liquids, and measure out substances.
Parts
The parts of triple beam balance are identified as the following.
Weighing pan - The area in which an object is placed in order to be weighed.
Base - The base rests underneath the weighing pan and can usually be customized to fit on a workbench or set up with tripod legs.
Beams - The three beams on the balance are used to set the level of precision, with each beam working at different increments (1-10 grams, 10 grams and 100 grams). When using the triple beam balance, it is recommended that one start with the lowest level of precision (e.g., 100 gram increments). For example, if an object weighs 327 grams, the 100 gram pointer will drop below the fixed mark on the 4th notch (400g); it will then need to be moved back to the third notch (300g). This process will then need to be repeated for the 10 gram increments (20g) and then the single figure units (7g).
Riders - The riders are the sliding pointers placed on top of the balance beams to indicate the mass in grams on the pan and beam.
Pointers - The scale pointer indicates when the mass of the object on the pan is balanced by the masses set on the beams.
Zero adjustment knob - This is used to manually adjust the triple beam balance to the 'zero' mark (check to ensure that the pointer is at zero before use).
Before using the triple beam balance, the scale pointer should be at zero. The zero adjustment knob can be used to adjust the scale pointer. The objects are placed on the pan and the riders are adjusted. The hundreds rider should be adjusted first, followed by the tens rider. The ones rider is adjusted until the scale pointer is at zero again.
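The reading procedure can be mimicked in a few lines of code. The sketch below decomposes a target mass into 100 gram, 10 gram and front-beam settings in the order described above; the function name and the 0.1 gram front-beam resolution are assumptions made for this example.

```python
def rider_settings(mass_g, front_resolution_g=0.1):
    """Decompose a mass into triple beam balance rider positions.

    Mirrors the recommended procedure: set the 100 gram rider first,
    then the 10 gram rider, then read the remainder on the front beam.
    """
    hundreds = int(mass_g // 100) * 100            # middle beam, 100 gram notches
    tens = int((mass_g - hundreds) // 10) * 10     # far beam, 10 gram notches
    remainder = mass_g - hundreds - tens           # front beam, continuous 0-10 grams
    front = round(remainder / front_resolution_g) * front_resolution_g
    return hundreds, tens, round(front, 1)

# Example: an object of 327.4 grams
h, t, f = rider_settings(327.4)
print(h, t, f)      # 300 20 7.4
print(h + t + f)    # 327.4
```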
See also
Analytical balance
Weighing scale
References
Weighing instruments
Laboratory equipment | Triple beam balance | [
"Physics",
"Technology",
"Engineering"
] | 517 | [
"Weighing instruments",
"Mass",
"Matter",
"Measuring instruments"
] |
49,190,541 | https://en.wikipedia.org/wiki/Cold%20blob | The cold blob in the North Atlantic (also called the North Atlantic warming hole) describes a cold temperature anomaly of ocean surface waters, affecting the Atlantic Meridional Overturning Circulation (AMOC) which is part of the thermohaline circulation, possibly related to global warming-induced melting of the Greenland ice sheet.
General
AMOC is driven by ocean temperature and salinity differences. The major possible mechanism causing the cold ocean surface temperature anomaly is based on the fact that freshwater decreases ocean water salinity, and through this process prevents colder waters from sinking. The observed freshwater increase probably originates from Greenland ice melt.
Research
2015 and earlier
Climate scientists Michael Mann of Penn State and Stefan Rahmstorf from the Potsdam Institute for Climate Impact Research suggested that the observed cold pattern during years of temperature records is a sign that the Atlantic Ocean's Meridional overturning circulation (AMOC) may be weakening. They published their findings, and concluded that the AMOC circulation shows exceptional slowdown in the last century, and that Greenland melt is a possible contributor. Tom Delworth of NOAA suggested that natural variability is also a factor, namely modes such as the North Atlantic Oscillation and the Atlantic Multidecadal Oscillation, which influence ocean temperatures through wind-driven processes. A 2014 study by Jon Robson et al. from the University of Reading concluded about the anomaly that the observations "...suggest that a substantial change in the AMOC is unfolding now." Another study by Didier Swingedouw concluded that the slowdown of AMOC in the 1970s may have been unprecedented over the last millennium.
2016
A study published in 2016, by researchers from the University of South Florida, Canada and the Netherlands, used GRACE satellite data to estimate freshwater flux from Greenland. They concluded that freshwater runoff is accelerating, and could eventually cause a disruption of AMOC in the future, which would affect Europe and North America.
Another study published in 2016 found further evidence for a considerable impact from sea level rise for the U.S. East Coast. The study confirms earlier research findings which identified the region as a hotspot for rising seas, with the potential for a sea level rise rate 3–4 times higher than the global average. The researchers attribute the possible increase to an ocean circulation mechanism called deep water formation, which is reduced due to AMOC slowdown, leading to more pockets of warmer water below the surface. Additionally, the study noted: "Our results suggest that higher carbon emission rates also contribute to increased [sea level rise] in this region compared to the global average".
Background
In 2005, British researchers noticed that the net flow of the northern Gulf Stream had decreased by about 30% since 1957. Coincidentally, scientists at Woods Hole had been measuring the freshening of the North Atlantic as Earth becomes warmer. Their findings suggested that precipitation increases in the high northern latitudes, and polar ice melts as a consequence. By flooding the northern seas with excessive fresh water, global warming could, in theory, divert the Gulf Stream waters that usually flow northward, past the British Isles and Norway, and cause them to instead circulate toward the equator. Were this to happen, Europe's climate would be seriously impacted.
Don Chambers from the USF College of Marine Science mentioned, "The major effect of a slowing AMOC is expected to be cooler winters and summers around the North Atlantic, and small regional increases in sea level on the North American coast." James Hansen and Makiko Sato stated, "AMOC slowdown that causes cooling ~1°C and perhaps affects weather patterns is very different from an AMOC shutdown that cools the North Atlantic several degrees Celsius; the latter would have dramatic effects on storms and be irreversible on the century time scale." Downturn of the Atlantic meridional overturning circulation, has been tied to extreme regional sea level rise.
Measurements
Since 2004, the RAPID program has been monitoring the ocean circulation.
See also
Abrupt climate change
The Blob (Pacific Ocean)
Deglaciation
Physical impacts of climate change
References
External links
Extended lecture by Stefan Rahmstorf about AMOC slowdown (May 27, 2016)
A Nasty Surprise in the Greenhouse (video about the shutdown of the thermohaline circulation, 2015)
Blizzard Jonas and the slowdown of the Gulf Stream System (RealClimate January 24, 2016)
Atlantic Ocean
Effects of climate change
Physical oceanography
Chemical oceanography
Anomalous weather | Cold blob | [
"Physics",
"Chemistry"
] | 904 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Weather",
"Chemical oceanography",
"Physical oceanography",
"Anomalous weather"
] |
49,190,858 | https://en.wikipedia.org/wiki/Palmqvist%20method | The Palmqvist method, or Palmqvist toughness test (after Sven Robert Palmqvist), is a common method to determine the fracture toughness of cemented carbides. In this case, the material's fracture toughness is given by the critical stress intensity factor KIc.
Approach
The Palmqvist method uses the lengths of the cracks emanating from a number of Vickers indentations to determine the fracture toughness. The Palmqvist fracture toughness is given by
W_K = 0.0028 · sqrt(HV · P / T)
in units of MPa·m^1/2,
where HV is the Vickers hardness in N/mm2 (or MPa) (i.e., 9.81 × the numerical HV value), P is the indentation load in N (typically 30 kgf is used) and T is the total crack length (mm) after application of the indenter.
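As a numerical illustration, the sketch below evaluates this relation for representative hardmetal values; the function name and the sample input numbers are assumptions made for the example.

```python
import math

def palmqvist_toughness(hv_number, load_n, total_crack_length_mm):
    """Palmqvist fracture toughness W_K in MPa*m^0.5.

    hv_number: Vickers hardness number (kgf/mm^2), converted to N/mm^2 below.
    load_n: indentation load in N (30 kgf is roughly 294 N).
    total_crack_length_mm: total length of the corner cracks, in mm.
    """
    hv_n_per_mm2 = 9.81 * hv_number                 # HV expressed in N/mm^2 (= MPa)
    return 0.0028 * math.sqrt(hv_n_per_mm2 * load_n / total_crack_length_mm)

# Example: a hardmetal with HV30 = 1500, a 294 N load and 0.30 mm total crack length
print(round(palmqvist_toughness(1500, 294.0, 0.30), 1))   # about 10.6
```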
Notes
Materials testing
Fracture mechanics | Palmqvist method | [
"Materials_science",
"Engineering"
] | 173 | [
"Structural engineering",
"Fracture mechanics",
"Materials science",
"Materials testing",
"Materials degradation"
] |
49,191,413 | https://en.wikipedia.org/wiki/Barium%20stannate | Barium stannate is an oxide of barium and tin with the chemical formula BaSnO3. It is a wide band gap semiconductor with a perovskite crystal structure.
References
Barium compounds
Stannates
Semiconductor materials
Perovskites | Barium stannate | [
"Chemistry"
] | 51 | [
"Semiconductor materials"
] |
42,877,569 | https://en.wikipedia.org/wiki/Relationship%20between%20mathematics%20and%20physics | The relationship between mathematics and physics has been a subject of study of philosophers, mathematicians and physicists since antiquity, and more recently also by historians and educators. Generally considered a relationship of great intimacy, mathematics has been described as "an essential tool for physics" and physics has been described as "a rich source of inspiration and insight in mathematics". Some of the oldest and most discussed themes are about the main differences between the two subjects, their mutual influence, the role of mathematical rigor in physics, and the problem of explaining the effectiveness of mathematics in physics.
In his work Physics, one of the topics treated by Aristotle is about how the study carried out by mathematicians differs from that carried out by physicists. Considerations about mathematics being the language of nature can be found in the ideas of the Pythagoreans: the convictions that "Numbers rule the world" and "All is number", and two millennia later were also expressed by Galileo Galilei: "The book of nature is written in the language of mathematics".
Historical interplay
Before giving a mathematical proof for the formula for the volume of a sphere, Archimedes used physical reasoning to discover the solution (imagining the balancing of bodies on a scale). Aristotle classified physics and mathematics as theoretical sciences, in contrast to practical sciences (like ethics or politics) and to productive sciences (like medicine or botany).
From the seventeenth century, many of the most important advances in mathematics appeared motivated by the study of physics, and this continued in the following centuries (although in the nineteenth century mathematics started to become increasingly independent from physics). The creation and development of calculus were strongly linked to the needs of physics: There was a need for a new mathematical language to deal with the new dynamics that had arisen from the work of scholars such as Galileo Galilei and Isaac Newton. The concept of the derivative was needed; Newton did not have the modern concept of limits and instead employed infinitesimals, which lacked a rigorous foundation at that time. During this period there was little distinction between physics and mathematics; as an example, Newton regarded geometry as a branch of mechanics.
Non-Euclidean geometry, as formulated by Carl Friedrich Gauss, János Bolyai, Nikolai Lobachevsky, and Bernhard Riemann, freed physics from the limitation of a single Euclidean geometry. A version of non-Euclidean geometry, called Riemannian geometry, enabled Einstein to develop general relativity by providing the key mathematical framework on which he fit his physical ideas of gravity.
In the 19th century Auguste Comte in his hierarchy of the sciences, placed physics and astronomy as less general and more complex than mathematics, as both depend on it. In 1900, David Hilbert in his 23 problems for the advancement of mathematical science, considered the axiomatization of physics as his sixth problem. The problem remains open.
In 1930, Paul Dirac introduced the Dirac delta function, which picks out a single value of a function when used in an integral.
The mathematical rigor of this function was in doubt until the mathematician Laurent Schwartz developed the theory of distributions.
Connections between the two fields sometimes only require identifying similar concepts under different names, as shown in the 1975 Wu–Yang dictionary, which related concepts of gauge theory to concepts of differential geometry.
Physics is not mathematics
Despite the close relationship between math and physics, they are not synonyms. In mathematics, objects can be defined exactly and logically related, but the objects need have no relationship to experimental measurements. In physics, definitions are abstractions or idealizations, approximations adequate when compared to the natural world. In 1960, Georg Rasch noted that no models are ever true, not even Newton's laws, emphasizing that models should not be evaluated based on truth but on their applicability for a given purpose. For example, Newton built a physical model around definitions like his second law of motion based on observations, leading to the development of calculus and highly accurate planetary mechanics, but later this definition was superseded by improved models of mechanics. Mathematics deals with entities whose properties can be known with certainty. According to David Hume, only statements that deal solely with ideas themselves, such as those encountered in mathematics, can be demonstrated to be true with certainty, while any conclusions pertaining to experiences of the real world can only be achieved via "probable reasoning". This leads to a situation that was put by Albert Einstein as "No number of experiments can prove me right; a single experiment can prove me wrong." The ultimate goal of research in pure mathematics is rigorous proof, while in physics heuristic arguments may sometimes suffice in leading-edge research. In short, the methods and goals of physicists and mathematicians are different. Nonetheless, according to Roland Omnès, the axioms of mathematics are not mere conventions, but have physical origins.
Role of rigor in physics
Rigor is indispensable in pure mathematics. But many definitions and arguments found in the physics literature involve concepts and ideas that are not up to the standards of rigor in mathematics.
For example, Freeman Dyson characterized quantum field theory as having two "faces". The outward face looks at nature, and there the predictions of quantum field theory are exceptionally successful. The inward face looks at mathematical foundations and finds inconsistency and mystery. The success of the physical theory comes despite its lack of rigorous mathematical backing.
Philosophical problems
Some of the problems considered in the philosophy of mathematics are the following:
Explain the effectiveness of mathematics in the study of the physical world: "At this point an enigma presents itself which in all ages has agitated inquiring minds. How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality?" —Albert Einstein, in Geometry and Experience (1921).
Clearly delineate mathematics and physics: For some results or discoveries, it is difficult to say to which area they belong: to the mathematics or to physics.
What is the geometry of physical space?
What is the origin of the axioms of mathematics?
How does the already existing mathematics influence in the creation and development of physical theories?
Is arithmetic analytic or synthetic? (from Kant, see Analytic–synthetic distinction)
What is essentially different between doing a physical experiment to see the result and making a mathematical calculation to see the result? (from the Turing–Wittgenstein debate)
Do Gödel's incompleteness theorems imply that physical theories will always be incomplete? (from Stephen Hawking)
Is mathematics invented or discovered? (millennia-old question, raised among others by Mario Livio)
Education
In recent times the two disciplines have most often been taught separately, despite all the interrelations between physics and mathematics. This led some professional mathematicians who were also interested in mathematics education, such as Felix Klein, Richard Courant, Vladimir Arnold and Morris Kline, to strongly advocate teaching mathematics in a way more closely related to the physical sciences. The initial courses of mathematics for college students of physics are often taught by mathematicians, despite the differences in "ways of thinking" of physicists and mathematicians about those traditional courses and how they are used in the physics classes thereafter.
See also
Non-Euclidean geometry
Fourier series
Conic section
Kepler's laws of planetary motion
Saving the phenomena
The Unreasonable Effectiveness of Mathematics in the Natural Sciences
Mathematical universe hypothesis
Zeno's paradoxes
Axiomatic system
Mathematical model
Empiricism
Logicism
Formalism
Mathematics of general relativity
Bourbaki
Experimental mathematics
History of Maxwell's equations
History of astronomy
Why Johnny Can't Add
Mathematical formulation of quantum mechanics
Scientific modelling
All models are wrong
References
Further reading
(part 1) (part 2).
External links
Gregory W. Moore – Physical Mathematics and the Future (July 4, 2014)
IOP Institute of Physics – Mathematical Physics: What is it and why do we need it? (September 2014)
Feynman explaining the differences between mathematics and physics in a video available on YouTube
Philosophy of physics
Philosophy of mathematics
History of science
Mathematics education
Physics education
Foundations of mathematics
History of mathematics
History of physics
mathematics | Relationship between mathematics and physics | [
"Physics",
"Mathematics",
"Technology"
] | 1,631 | [
"Philosophy of physics",
"Applied and interdisciplinary physics",
"Foundations of mathematics",
"History of science",
"Physics education",
"nan",
"History of science and technology"
] |
42,884,381 | https://en.wikipedia.org/wiki/Equilibrant%20force | In mechanics, an equilibrant force is a force which brings a body into mechanical equilibrium. According to Newton's second law, a body has zero acceleration when the vector sum of all the forces acting upon it is zero:
∑F = 0
Therefore, an equilibrant force is equal in magnitude and opposite in direction to the resultant of all the other forces acting on a body. The term has been attested since the late 19th century.
Example
Suppose that two known forces, which are going to be represented as vectors, A and B are pushing an object and an unknown equilibrant force, C, is acting to maintain that object in a fixed position. Force A points to the west and has a magnitude of 10 N and is represented by the vector <-10, 0>N. Force B points to the south and has a magnitude of 8.0 N and is represented by the vector <0, -8>N. Since these forces are vectors, they can be added by using the parallelogram rule or vector addition. This addition will look like A + B = <-10, 0>N + <0, -8>N = <-10, -8>N, which is the vector representation of the resultant force. By the Pythagorean theorem, the magnitude of the resultant force is [(-10)² + (-8)²]^(1/2) ≈ 12.8 N, which is also the magnitude of the equilibrant force. The angle of the equilibrant force can be found by trigonometry to be approximately 39 degrees north of east (equivalently, about 51 degrees east of north). Because the angle of the equilibrant force is opposite to that of the resultant force, adding 180 degrees to or subtracting 180 degrees from the resultant force's angle gives the equilibrant force's angle. Multiplying the resultant force vector by −1 gives the correct equilibrant force vector: <-10, -8>N × (−1) = <10, 8>N = C.
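The same arithmetic can be written out directly; the short sketch below reproduces this worked example using the component values given above.

```python
import math

# Known forces from the example, as (east, north) components in newtons.
force_a = (-10.0, 0.0)   # 10 N to the west
force_b = (0.0, -8.0)    # 8.0 N to the south

# Resultant of the known forces (component-wise vector addition).
resultant = (force_a[0] + force_b[0], force_a[1] + force_b[1])

# The equilibrant is equal in magnitude and opposite in direction to the resultant.
equilibrant = (-resultant[0], -resultant[1])

magnitude = math.hypot(*equilibrant)
angle_north_of_east = math.degrees(math.atan2(equilibrant[1], equilibrant[0]))

print(resultant)                       # (-10.0, -8.0)
print(equilibrant)                     # (10.0, 8.0)
print(round(magnitude, 1))             # 12.8 (newtons)
print(round(angle_north_of_east, 1))   # 38.7 (degrees north of east)
```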
References
External links
Equilibrium
Force | Equilibrant force | [
"Physics",
"Mathematics"
] | 426 | [
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics stubs",
"Classical mechanics",
"Wikipedia categories named after physical quantities",
"Matter"
] |
55,941,264 | https://en.wikipedia.org/wiki/Waldspurger%20formula | In representation theory of mathematics, the Waldspurger formula relates the special values of two L-functions of two related admissible irreducible representations. Let be the base field, be an automorphic form over , be the representation associated via the Jacquet–Langlands correspondence with . Goro Shimura (1976) proved this formula, when and is a cusp form; Günter Harder made the same discovery at the same time in an unpublished paper. Marie-France Vignéras (1980) proved this formula, when and is a newform. Jean-Loup Waldspurger, for whom the formula is named, reproved and generalized the result of Vignéras in 1985 via a totally different method which was widely used thereafter by mathematicians to prove similar formulas.
Statement
Let be a number field, be its adele ring, be the subgroup of invertible elements of , be the subgroup of the invertible elements of , be three quadratic characters over , , be the space of all cusp forms over , be the Hecke algebra of . Assume that, is an admissible irreducible representation from to , the central character of π is trivial, when is an archimedean place, is a subspace of such that . We suppose further that, is the Langlands -constant [ ; ] associated to and at . There is a such that .
Definition 1. The Legendre symbol
Comment. Because all the terms on the right have value either +1 or −1, the term on the left can only take values in the set {+1, −1}.
Definition 2. Let be the discriminant of .
Definition 3. Let .
Definition 4. Let be a maximal torus of , be the center of , .
Comment. It is not obvious though, that the function is a generalization of the Gauss sum.
Let be a field such that . One can choose a K-subspace of such that (i) ; (ii) . De facto, there is only one such modulo homothety. Let be two maximal tori of such that and . We can choose two elements of such that and .
Definition 5. Let be the discriminants of .
Comment. When the , the right hand side of Definition 5 becomes trivial.
We take to be the set {all the finite -places doesn't map non-zero vectors invariant under the action of to zero}, to be the set of (all -places is real, or finite and special).
Comments:
The case when and is a metaplectic cusp form
Let p be prime number, be the field with p elements, be the integer ring of . Assume that, , D is squarefree of even degree and coprime to N, the prime factorization of is . We take to the set to be the set of all cusp forms of level N and depth 0. Suppose that, .
Definition 1. Let be the Legendre symbol of c modulo d, . Metaplectic morphism
Definition 2. Let . Petersson inner product
Definition 3. Let . Gauss sum
Let be the Laplace eigenvalue of . There is a constant such that
Definition 4. Assume that . Whittaker function
Definition 5. Fourier–Whittaker expansion One calls the Fourier–Whittaker coefficients of .
Definition 6. Atkin–Lehner operator with
Definition 7. Assume that, is a Hecke eigenform. Atkin–Lehner eigenvalue with
Definition 8.
Let be the metaplectic version of , be a nice Hecke eigenbasis for with respect to the Petersson inner product. We note the Shimura correspondence by
Theorem [ , Thm 5.1, p. 60 ]. Suppose that , is a quadratic character with . Then
References
Representation theory
Algebraic number theory
Harmonic analysis
Langlands program | Waldspurger formula | [
"Mathematics"
] | 799 | [
"Langlands program",
"Fields of abstract algebra",
"Algebraic number theory",
"Representation theory",
"Number theory"
] |
55,943,839 | https://en.wikipedia.org/wiki/GPS%20Block%20IIIF | GPS Block IIIF, or GPS III Follow On (GPS IIIF), is the second set of GPS Block III satellites, consisting of up to 22 space vehicles. The United States Air Force began the GPS Block IIIF acquisition effort in 2016. On 14 September 2018, a manufacturing contract with options worth up to $7.2 billion was awarded to Lockheed Martin. The 22 satellites in Block IIIF are projected to start launching in 2027, with launches estimated to last through at least 2037.
System enhancements
Engineering efforts for Block IIIF satellites began upon contract award in 2016—a full 16 years after the government approved entry into the initial modernization efforts for GPS III in 2000. As a result, GPS Block IIIF introduces a number of improvements and novel capabilities compared to all previous GPS satellite blocks.
Improvements
Nuclear detonation detection system
Block IIIF satellites host a redesigned U.S. Nuclear Detonation Detection System (USNDS) capability that is both smaller and lighter than previous systems.
The USNDS is a worldwide system of space-based sensors and ground processing equipment designed to detect, identify, locate, characterize, and report nuclear detonations in the Earth's atmosphere and in space.
Fully-digital navigation
GPS IIIF satellites are the first to feature a 100% digital navigation payload.
The fully-digital navigation payload introduced by Block IIIF (SV11+) produces improved accuracy, better reliability, and stronger signals compared to the 70% digital navigation payload used by GPS Block III (SV01-SV10).
Improved satellite bus
GPS IIIF-03 and beyond (GPS III SV13+) will incorporate the Lockheed Martin LM2100 Combat Bus, an improvement on the LM2100M bus used in GPS III SV01 through SV12. The LM2100 Combat Bus provides improved resilience to cyber attacks, as well as improved spacecraft power, propulsion, and electronics.
Novel capabilities
Energetic charged particle sensor
GPS IIIF satellites will be the first GPS satellites to host an Energetic Charged Particle (ECP) sensor payload.
In March 2015, the U.S. Secretary of the Air Force enacted policy mandating all new Air Force satellite programs must include ECP sensors. Aggregating ECP data from multiple satellites allows for enhanced space domain awareness, enabling improved detection of space weather effects as well as differentiation between anomalies induced by hostile activity, the natural environment, or other non-hostile causes.
Search and rescue distress beacon detection
GPS IIIF will be the first GPS satellite block to have all space vehicles participate in the Cospas-Sarsat system. The Cospas-Sarsat system is an international collection of satellites spanning low-earth, medium-earth, and geostationary orbit satellites which all listen for 406 MHz distress signals generated by beacons on earth. Satellites relay distress signals to ground stations to initiate timely emergency response efforts.
Laser retro-reflector array
Adding laser retro-reflector arrays (LRAs) to all GPS IIIF Space Vehicles allows GPS monitoring stations on earth equipped with laser rangefinding equipment to determine much more precise 3D locations for every GPS IIIF satellite. This improves the ability of the GPS system to provide more accurate time/position fixes to GPS receivers. Estimates are that as more GPS satellites host LRAs, the location accuracy will improve from the one meter achievable today to one centimeter, an improvement of two orders of magnitude.
Unified S-band capability compliance
Block IIIF will be compliant with the Unified S-Band (USB) capabilities, allowing for consolidation of radio frequencies used for telemetry, tracking, and commanding of Block IIIF satellites.
Regional military protection capability
Regional Military Protection (RMP) is an anti-jamming technology for military GPS consumers. RMP involves directing a massively-amplified spot beam which only includes military GPS signals over a small geographic area. US/allied military GPS receivers located within the RMP spot beam's signal footprint are significantly more difficult for adversaries to jam due to the extremely-amplified signal strength in the area.
On-orbit servicing
GPS IIIF-03 and newer satellites (GPS III SV13+) will incorporate Lockheed-Martin's LM2100 Combat Bus. Satellites based on the Combat Bus are capable of hosting the "Augmentation System Port Interface" (ASPIN), an interface that allows for future on-orbit servicing and upgrade opportunities.
Launch history
The first GPS Block IIIF satellite is planned to launch in 2027.
Navigational signals
Note: none of the navigation signals that GPS Block IIIF satellites transmit are new in Block IIIF; all signals were first supported in previous generation (Block I, Block II, or Block III) GPS satellites.
Civilian
Design
GPS IIIF is an evolution of GPS III, which uses the A2100 bus as its core. The new models use the modernized LM2100 bus along with a fully digital navigation payload from L3Harris, a significant upgrade from the previous 70% digital payload used in GPS III.
An upgraded version known as the LM2100 Combat Bus will be used starting with the third service vehicle. It will enable on-orbit servicing at a later date, which may include hardware upgrades, component replacement, or refuelling.
Medium Earth Orbit Search and Rescue (MEOSAR) payloads are being provided by the Canadian government on behalf of the Canadian Armed Forces. The time it takes to detect and locate a distress signal will be reduced from an hour to five minutes, along with greatly improved accuracy in locating a distress beacon.
Laser Retroreflector Arrays (LRAs) will be built by the United States Naval Research Lab. This is a passive reflector system that improves accuracy and provides better ephemeris data. The National Geospatial-Intelligence Agency (NGA) will fund the integration costs of the LRA.
Other significant enhancements include: unified S-Band (USB) interface compliance, integration of hosted payloads including a redesigned United States Nuclear Detonation (NUDET) Detection System (USNDS) payload, Energetic Charged Particles (ECP) sensor, and Regional Military Protection (RMP) capabilities that provide the ability to deliver high-power regional Military Code (M-Code) signals in specific areas of intended effect.
The U.S. Air Force has identified four "technology insertion points" for GPS Block IIIF. These four points are the only four times during the block's lifecycle where new capabilities will be allowed to be introduced to Block IIIF satellites.
Technology Insertion Point 1 (estimated FY2026)
First Space Vehicle: GPS IIIF-01
Proposed/possible new functionality:
On Orbit Reprogrammable Digital Payload
High Power Amplifiers (SSPA's)
Regional Military Protection (RMP)
Technology Insertion Point 2 (estimated FY2028)
First Space Vehicle: GPS IIIF-07
Proposed/possible new functionality:
M-Code Space Service Volume
Technology Insertion Point 3 (estimated FY2030)
First Space Vehicle: GPS IIIF-13
Proposed/possible new functionality:
Near Real-Time Commanding
Advanced Clocks
Technology Insertion Point 4 (estimated FY2033)
First Space Vehicle: GPS IIIF-19
Proposed/possible new functionality:
TBD
Development
Space Segment (Satellites)
The U.S. Air Force employed a two-phase competitive bid acquisition process for the GPS Block IIIF satellites.
Phase One: Production Feasibility Assessment
On 5 May 2016, the U.S. Air Force awarded three Phase One Production Readiness Feasibility Assessment contracts for GPS III Space Vehicles (SV's) 11+, one each to Boeing Network and Space Systems, Lockheed Martin Space Systems Company, and Northrop Grumman Aerospace Systems. The phase one contracts were worth up to six million dollars each. During the phase one effort, both Boeing and Northrop Grumman demonstrated working navigation payloads.
Phase Two: Satellite Manufacturing
On 19 April 2017, the U.S. Air Force Space Command announced the start of the second phase of its acquisition strategy with the publication of a special notice for an "Industry Day" for companies planning on bidding for the contract to manufacture GPS III vehicles 11+. During the Industry Day event, the Air Force shared the tentative acquisition strategy which it will use to evaluate proposals, then solicited feedback from potential bidders.
In July 2017, the Deputy Director of the U.S. Air Force GPS Directorate stated the acquisition strategy for GPS Block IIIF would be to award the manufacturing contracts for all 22 Block IIIF satellites to the same contractor.
In November 2017, the Deputy Director of the U.S. Air Force's GPS Directorate announced the name of the second tranche of GPS III satellites was "GPS Block IIIF".
Also in November 2017, it was announced that development of the fully digital navigation payload for GPS Block IIIF satellites had been completed. The Block IIIA program schedule had been delayed multiple times due to issues with the navigation payload.
Bidding
While the Air Force originally expected to publish the formal Request For Proposals (RFP) for GPS Block IIIF production in September 2017, it was not released until 13 February 2018. The RFP was for a firm-fixed price (FFP) contract for a single company to manufacture all 22 space vehicles. All three participants from phase one (Boeing, Lockheed Martin, and Northrop Grumman) were believed to be likely to submit proposals. The government held a pre-proposal conference in El Segundo, California, on 15 March 2018 for potential bidders to ask the Air Force questions about the solicitation. The submission deadline for proposals was 16 April 2018.
The bid status of companies who participated in phase one, in alphabetical order:
Boeing: declined to submit a proposal
Lockheed Martin: submitted a proposal
Northrop Grumman: declined to submit a proposal
Funding
On 14 September 2018, the Air Force awarded a manufacturing contract with options worth up to US$7.2 billion to Lockheed Martin.
Control Segment (Ground-Based Command & Control)
GPS Block IIIF's ground control system of record will be the same used for GPS Block III, the Next Generation GPS Operational Control System (OCX).
In order to be able to command and control Block IIIF satellites, in April 2021 the U.S. Space Force awarded a $228 million contract to Raytheon Intelligence and Space called OCX Block 3F, which builds on the existing OCX Block 2 system and adds the ability to perform Launch and Checkout of Block IIIF satellites.
OCX Block 3F delivery was expected in July 2025, with operational acceptance expected in late 2027.
See also
BeiDou Navigation Satellite System
BeiDou-2 (COMPASS) navigation system
Galileo (satellite navigation)
GLONASS
Quasi-Zenith Satellite System
References
Global Positioning System | GPS Block IIIF | [
"Technology",
"Engineering"
] | 2,193 | [
"Global Positioning System",
"Wireless locating",
"Aircraft instruments",
"Aerospace engineering"
] |
55,943,952 | https://en.wikipedia.org/wiki/Subrepresentation | In representation theory, a subrepresentation of a representation (π, V) of a group G is a representation (π|W, W) such that W is a vector subspace of V that is invariant under the action, i.e. π(g)W ⊆ W for every g in G.
A nonzero finite-dimensional representation always contains a nonzero subrepresentation that is irreducible, a fact that can be seen by induction on dimension. This fact is generally false for infinite-dimensional representations.
If (π, V) is a representation of G, then there are the trivial subrepresentations given by the zero subspace and by V itself.
If f : V → W is an equivariant map between two representations, then its kernel is a subrepresentation of V and its image is a subrepresentation of W.
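As a small worked illustration (not taken from the article), the sketch below spells out two standard subrepresentations of the permutation representation of the symmetric group S2 on a two-dimensional space; the notation is chosen here purely for convenience.

```latex
% Permutation representation of S_2 on V = C^2: the transposition \sigma
% acts by swapping coordinates, \pi(\sigma)(x, y) = (y, x).
% Two invariant subspaces give subrepresentations of V:
%   W_1 = span{(1, 1)}   (each \pi(g) acts as the identity: the trivial representation)
%   W_2 = span{(1, -1)}  (\pi(\sigma) acts as -1: the sign representation)
% and V decomposes as their direct sum.
\[
  \pi(\sigma)(x,y) = (y,x), \qquad
  W_1 = \operatorname{span}\{(1,1)\}, \qquad
  W_2 = \operatorname{span}\{(1,-1)\}, \qquad
  V = W_1 \oplus W_2 .
\]
```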
References
Representation theory | Subrepresentation | [
"Mathematics"
] | 130 | [
"Representation theory",
"Fields of abstract algebra"
] |
41,454,040 | https://en.wikipedia.org/wiki/Ludwig%20Blattner | Ludwig Blattner (5 February 1880 – 29 October 1935) was a German-born inventor, film producer, director and studio owner in the United Kingdom, and developer of one of the earliest magnetic sound recording devices.
Career
Ludwig Blattner, also known as Louis Blattner, was a pioneer of early magnetic sound recording, licensing a steel wire-based design from German inventor Dr. Kurt Stille, and enhancing it to use steel tape instead of wire, thereby creating an early form of tape recorder. This device was marketed as the Blattnerphone. Whilst on a promotional tour of his sound recording technology in 1928 he would choose ladies from the audience to dance with to music being played from a Blattnerphone.
Prior to the First World War, Blattner was involved in the entertainment industry in the Liverpool City Region: he managed the "La Scala" cinema in Wallasey from 1912 to 1914, conducted the cinema's orchestra, and composed a waltz "The Ladies of Wallasey". In about 1920 he moved to Manchester where he managed a chain of cinemas. There, in 1923 he composed and published a piece of music about the film actress Pola Negri titled "Pola Negri Grand Souvenir March". Later in the 1920s, he bought the British film rights to Lion Feuchtwanger's novel Jew Süss although the film was not made until 1934 after Blattner had sold the rights to Gaumont British. In early 1928, press reports appeared saying that Blattner was planning a 400-acre "Hollywood, England" complex with a hospital, 150 room hotel, aeroplane club and the largest collection of studios in the world, for which he was planning to spend between 2 million and 5 million pounds. Blattner later formed the Ludwig Blattner Picture Corporation in Borehamwood in the studio complex that is now known as BBC Elstree Centre, buying the Ideal Film Company studio (formerly known as Neptune Studios) in 1928, renaming it as Blattner Studios. In 1928 his company produced a series of short films of musical performances such as "Albert Sandler and His Violin [Serenade – Schubert]" and "Teddy Brown and His Xylophone". The best known films produced by his film company were A Knight in London (1929) and My Lucky Star (1933), which was co-directed by Blattner. Films produced by other companies at the Blattner Studios included Dorothy Gish and Charles Laughton's first drama talkie Wolves (1930), the 1934 adaptation of Edgar Allan Poe's short story "The Tell-Tale Heart", Rookery Nook (1930) and A Lucky Sweep (1932).
Ludwig Blattner was also involved in an early colour motion picture process: in about 1929 he bought the rights for the use outside the USA of a lenticular colour process called Keller-Dorian cinematography. This process was then known as the Blattner Keller-Dorian process, which lost out to rival colour systems.
Ludwig Blattner originally intended the Blattnerphone to be used as a system of recording and playback for talking pictures, but the BBC saw its potential to record and "timeshift" BBC radio programmes for use with the BBC Empire Service, and rented several Blattnerphones from 1930 onwards, one of which was used to record King George V's speech at the opening of the India Round Table Conference on 12 November 1930. The 1932 BBC Year Book (covering November 1930 to October 1931) described its use. In 1939, the BBC used a Blattnerphone (not the later Marconi-Stille recorder) to record Prime Minister Neville Chamberlain's announcement to Britain of the outbreak of World War II.
In 1930, Blattner promoted a version of his Blattnerphone technology as one of the first telephone answering machines, and in 1931 Blattner promoted a version of the Blattnerphone as the Blattner Book Reader, an early audiobook playback system for the blind.
Despite being a "promoter of genius with far-seeing ideas about technical developments in sound and colour" according to the film director Michael Powell, business problems with the studio, due to the advent of rival talking picture systems, led to heavy financial loss, and in 1934 Joe Rock leased Elstree Studios from Ludwig Blattner, and bought it outright in 1936, a year after Blattner's suicide.
Personal life
Born into a Jewish family in Altona, Hamburg, Blattner first visited Great Britain in 1897. He appears to have returned later and worked for a while in the publicity department of Mellin's Food, probably arranged through family contact with Gustav Mellin. He moved to Birkenhead by 1901 and settled in New Brighton, Merseyside, where he married Margaret Mary Gracey; they had two British-born children, Gerry Blattner (born 1913 in Liverpool) and Betty Blattner (born 1914 in Cheshire). Both followed their father into the film business, Gerry as a producer and Betty as a makeup artist. Ludwig Blattner never became a British citizen, and during the First World War he was held in an internment camp, which interrupted his management of the Gaiety cinema in Wallasey. The hearsay-based suggestion in a 1968 letter by Jay Leyda that he married Else (also known as Elisabeth), the widow of Edmund Meisel, the composer of the score for Battleship Potemkin, some time after Meisel's death in 1930, is without any hard evidence. Indeed, he was resident with his wife Margaret Mary at the Country Club in Elstree when he took his own life in 1935.
Ludwig hanged himself at the Elstree Country Club in October 1935, when his son was 22 and his daughter was 21. Ludwig and Gerry were honoured by the naming of Blattner Close in Elstree in the mid-1990s.
References
External links
Blattnerphone at the BBC's A History of the World
Blattner Close at streetmap.co.uk
British film producers
Audio engineering
German emigrants to the United Kingdom
19th-century German Jews
20th-century German inventors
1881 births
Suicides by hanging in England
1935 suicides
1935 deaths | Ludwig Blattner | [
"Engineering"
] | 1,276 | [
"Electrical engineering",
"Audio engineering"
] |
41,454,149 | https://en.wikipedia.org/wiki/James%20B.%20Anderson | James Bernhard Anderson (November 16, 1935 – January 14, 2021) was an American chemist and physicist. From 1995 to 2014 he was Evan Pugh Professor of Chemistry and Physics at the Pennsylvania State University. He specialized in Quantum Chemistry by Monte Carlo methods, molecular dynamics of reactive collisions, kinetics and mechanisms of gas phase reactions, and rare-event theory.
Life
James Anderson was born in 1935 in Cleveland, Ohio to American-born parents of Swedish descent, Bertil and Lorraine Anderson. He was raised in Morgantown, West Virginia and spent his childhood summers on the island of Put-in-Bay, Ohio.
Anderson earned a B.S. in chemical engineering from the Pennsylvania State University, an M.S. from the University of Illinois, and an M.A. and Ph.D. from Princeton University.
Anderson married his wife Nancy Anderson (née Trotter) in 1958. They have three children and six grandchildren.
He died on January 14, 2021, in State College, Pennsylvania.
Career
Anderson began his professional career as an engineer in petrochemical research and development with Shell Chemical Company from 1958–60 in Deer Park, Texas. He began his academic career as a professor of chemical engineering at Princeton University in 1964 and continued as a professor of engineering at Yale University in 1968 before moving to the Pennsylvania State University in 1974. From 1995 until his retirement in 2014, he was Evan Pugh Professor of Chemistry and Physics at the Pennsylvania State University. Anderson also served as a visiting professor at Cambridge University, the University of Milan, the University of Kaiserslautern, the University of Göttingen, Free University of Berlin, and RWTH Aachen University.
Research
Anderson made key contributions in several areas of chemistry and physics. The main areas of impact are: reaction kinetics and molecular dynamics, the 'rare-event' approach to chemical reactions, Quantum Monte Carlo (QMC) methods, Monte Carlo simulation of radiative processes, and direct Monte Carlo simulation of reaction systems.
Anderson's first contributions were experimental and theoretical work in the area of nozzle-source molecular beams (supersonic beams) and the free jets and skimmers used for generating such beams. This research contributed to success in generating molecular beams of high energy and narrow velocity distributions.
Anderson's experiments with supersonic beams for the reaction HI + HI → H2 + I2 led him to early studies using classical trajectory methods. He carried out the first calculations of the F-H-H system with a study of the energy requirements for the reaction H + HF → H2 + F and followed this work with calculations for F + H2 → HF + H, a reaction basic to the understanding of molecular dynamics.
Trajectory calculations for the HI + HI reaction, a rare event, led to his work on predicting rare events in molecular dynamics by sampling trajectories crossing a surface in phase space. Initially called "variational theory of reaction rate" by James C. Keck (1960), it has since 1973 often been called "the reactive flux method." Anderson extended Keck's original method and defended it against a number of critics. The earliest applications were to three- and four-body reactions, but it has been extended to reactions in solution, to condensed matter, to protein folding, and most recently to enzyme-catalyzed reactions.
Anderson pioneered the development of the quantum Monte Carlo (QMC) method of simulating the Schrödinger equation. His 1975–76 papers were the first to describe applications of random walk methods to polyatomic systems and many-electron systems. Today, QMC methods are often the methods of choice for high accuracy for a range of systems: small and large molecules, molecules in solution, electron gas, clusters, solid materials, vibrating molecules, and many others.
Anderson succeeded in bringing the power of modern computers to the direct simulation of reacting systems. His extension of an earlier method for rarefied gas dynamics by Graeme Bird (1963) eliminates the use of differential equations and treats reaction kinetics on a probabilistic basis collision-by-collision. It is the method of choice for many low-density systems with coupled relaxation and reaction, and with non-equilibrium distributions. It has been applied to the complete simulation of detonations as well as to the prediction of ultra-fast detonations.
Awards and honors
Bausch & Lomb Award
Evan Pugh Medal (Silver), The Pennsylvania State University
Evan Pugh Medal (Gold), The Pennsylvania State University
National Science Foundation Graduate Fellowship
Fellow of the American Physical Society (1988)
Fellow of the American Association for the Advancement of Science
Faculty Scholar Medal, The Pennsylvania State University
Senior Research Award, Alexander von Humboldt Foundation, Bonn, Germany
Selected publications
See The Anderson Group webpage for a full list of publications.
Molecular Beams and Free Jets (Supersonic Beams)
Classical Trajectory Calculations
Rare Event Theory (Combined Phase-Space Trajectory Method)
Quantum Monte Carlo
J. B. Anderson, (Book) Quantum Monte Carlo: Origins, Development, Applications, Oxford University Press, 2007. .
Simulation of Radiative Processes
Direct Simulation of Chemical Reactions
Simulations of Enzyme-Catalyzed Reactions
References
1935 births
2021 deaths
Scientists from Cleveland
21st-century American physicists
Pennsylvania State University faculty
Fellows of the American Physical Society
Fellows of the American Association for the Advancement of Science
21st-century American chemists
American quantum physicists
Monte Carlo methods
Fellows of the American Academy of Microbiology | James B. Anderson | [
"Physics"
] | 1,102 | [
"Monte Carlo methods",
"Computational physics"
] |
41,465,868 | https://en.wikipedia.org/wiki/Shannon%20%28unit%29 | The shannon (symbol: Sh) is a unit of information named after Claude Shannon, the founder of information theory. IEC 80000-13 defines the shannon as the information content associated with an event when the probability of the event occurring is 1/2. It is understood as such within the realm of information theory, and is conceptually distinct from the bit, a term used in data processing and storage to denote a single instance of a binary signal. A sequence of n binary symbols (such as contained in computer memory or a binary data transmission) is properly described as consisting of n bits, but the information content of those n symbols may be more or less than n shannons depending on the a priori probability of the actual sequence of symbols.
The shannon also serves as a unit of the information entropy of an event, which is defined as the expected value of the information content of the event (i.e., the probability-weighted average of the information content of all potential events). Given a number of possible outcomes, unlike information content, the entropy has an upper bound, which is reached when the possible outcomes are equiprobable. The maximum entropy of n bits is n Sh. A further quantity for which the shannon is used is channel capacity, generally the maximum expected information content that can be transferred over a channel with negligible probability of error, typically expressed as an information rate.
Nevertheless, the term bits of information or simply bits is more often heard, even in the fields of information and communication theory, rather than shannons; just saying bits can therefore be ambiguous. Using the unit shannon is an explicit reference to a quantity of information content, information entropy or channel capacity, and is not restricted to binary data, whereas bits can as well refer to the number of binary symbols involved, as is the term used in fields such as data processing.
Similar units
The shannon is connected through constants of proportionality to two other units of information:
The hartley, a seldom-used unit, is named after Ralph Hartley, an electronics engineer interested in the capacity of communications channels. Although of a more limited nature, his early work, preceding that of Shannon, makes him recognized also as a pioneer of information theory. Just as the shannon describes the maximum possible information capacity of a binary symbol, the hartley describes the information that can be contained in a 10-ary symbol, that is, a digit value in the range 0 to 9 when the a priori probability of each value is 1/10. The conversion factor between the two units is given by log10(2) ≈ 0.301, so 1 Sh ≈ 0.301 hartley.
In mathematical expressions, the nat is a more natural unit of information, but 1 nat does not correspond to a case in which all possibilities are equiprobable, unlike with the shannon and hartley. In each case, formulae for the quantification of information capacity or entropy involve taking the logarithm of an expression involving probabilities. If base-2 logarithms are employed, the result is expressed in shannons; if base-10 (common) logarithms are used, the result is in hartleys; and if natural logarithms (base e) are used, the result is in nats. For instance, the information capacity of a 16-bit sequence (achieved when all 65536 possible sequences are equally probable) is given by log(65536), thus 16 Sh, about 11.09 nat, or about 4.82 hartleys.
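The following is a minimal sketch, not drawn from the article or from the IEC standard, showing how the same information content is expressed in shannons, nats and hartleys simply by changing the logarithm base; the function name and the example values are chosen here only for illustration.

```python
import math

def information_content(n_outcomes: int, base: float) -> float:
    """Information content of one of n equally likely outcomes, in units set by the log base."""
    return math.log(n_outcomes, base)

n = 65536  # all 16-bit sequences, assumed equally probable
print(information_content(n, 2))        # 16.0   shannons
print(information_content(n, math.e))   # ~11.09 nats
print(information_content(n, 10))       # ~4.82  hartleys
```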
Information measures
In information theory and derivative fields such as coding theory, one cannot quantify the 'information' in a single message (sequence of symbols) out of context, but rather a reference is made to the model of a channel (such as bit error rate) or to the underlying statistics of an information source. There are thus various measures of or related to information, all of which may use the shannon as a unit.
For instance, in the above example, a 16-bit channel could be said to have a channel capacity of 16 Sh, but when connected to a particular information source that only sends one of 8 possible messages, one would compute the entropy of its output as no more than 3 Sh. And if one already had been informed through a side channel in which set of 4 possible messages the message is, then one could calculate the mutual information of the new message (having 8 possible states) as no more than 2 Sh. Although there are infinite possibilities for a real number chosen between 0 and 1, so-called differential entropy can be used to quantify the information content of an analog signal, such as related to the enhancement of signal-to-noise ratio or confidence of a hypothesis test.
References
Units of information
unit | Shannon (unit) | [
"Mathematics"
] | 940 | [
"Units of information",
"Quantity",
"Units of measurement"
] |
41,467,632 | https://en.wikipedia.org/wiki/Character%20literal | A character literal is a type of literal in programming for the representation of a single character's value within the source code of a computer program.
Languages that have a dedicated character data type generally include character literals; these include C, C++, Java, and Visual Basic. Languages without character data types (like Python or PHP) will typically use strings of length 1 to serve the same purpose a character data type would fulfil. This simplifies the implementation and basic usage of a language but also introduces new scope for programming errors.
A common convention for expressing a character literal is to use a single quote (') for character literals, as contrasted by the use of a double quote (") for string literals. For example, 'a' indicates the single character a while "a" indicates the string a of length 1.
The representation of a character within the computer memory, in storage, and in data transmission, is dependent on a particular character encoding scheme. For example, an ASCII (or extended ASCII) scheme will use a single byte of computer memory, while a UTF-8 scheme will use one or more bytes, depending on the particular character being encoded.
Alternative ways to encode character values include specifying an integer value for a code point, such as an ASCII code value or a Unicode code point. This may be done directly via converting an integer literal to a character, or via an escape sequence.
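The snippet below is a small illustrative sketch in Python (which, as noted above, has no dedicated character type); the particular characters and code points are arbitrary examples showing length-1 strings standing in for character literals, conversion to and from integer code points, and the effect of the encoding scheme on storage size.

```python
# In Python a "character" is simply a string of length 1; quote style carries no type meaning.
ch = 'a'
s = "a"
assert ch == s and len(ch) == 1

# Converting between a character and its integer code point (ASCII / Unicode):
print(ord('a'))           # 97   -> code point of 'a'
print(chr(97))            # 'a'  -> character for code point 97
print('\x61', '\u0061')   # escape sequences that also denote 'a'

# Storage depends on the encoding scheme: 'a' is one byte in UTF-8, 'é' takes two.
print('a'.encode('utf-8'), 'é'.encode('utf-8'))  # b'a' b'\xc3\xa9'
```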
See also
String literal
XML Literals
– for multicharacter literals
References
Character encoding
Data types | Character literal | [
"Technology"
] | 319 | [
"Natural language and computing",
"Character encoding"
] |
65,762,490 | https://en.wikipedia.org/wiki/CYP139%20family | Cytochrome P450, family 139, also known as CYP139, is a cytochrome P450 monooxygenase family in bacteria. The first gene identified in this family is CYP139A1 from Mycobacterium tuberculosis. Most members of this family belong to subfamily A and are involved in the synthesis of secondary metabolites in many mycobacterial species.
References
139
Protein families | CYP139 family | [
"Biology"
] | 91 | [
"Protein families",
"Protein classification"
] |
65,764,965 | https://en.wikipedia.org/wiki/Solar-blind%20technology | Solar-blind technology is a set of technologies to produce images without interference from the Sun. This is done by using wavelengths of ultraviolet light that are totally absorbed by the ozone layer, so that sunlight provides no background, yet are still transmitted through the lower atmosphere. Wavelengths from 240 to 280 nm are completely absorbed by the ozone layer. Elements of this technology are ultraviolet light sources, ultraviolet image detectors, and filters that only transmit the range of wavelengths that are blocked by ozone. A system will also have a signal processing system and a way to display the results (image).
Ultraviolet sources
Ultraviolet illumination can be produced from longer wavelengths using non-linear optical materials, such as a second-harmonic generator. These materials must have a suitable birefringence in order to phase-match the frequency-doubled UV output. One compound used commercially is L-arginine phosphate monohydrate, known as LAP. Research is underway for substances that are strongly non-linear, have a suitable birefringence, are transparent in this part of the spectrum, and have a high resistance to laser damage.
Optical system
Normal glass does not transmit below 350 nm, so it is not used for optics in solar-blind systems. Instead calcium fluoride, fused silica, and magnesium fluoride are used as they are transparent to shorter wavelengths.
Filters
An optical filter can be used to block out visible light and near-ultraviolet light. It is important to have a high transmittance within the solar-blind spectrum, but to strongly block the other wavelengths.
Interference filters can pass 25% of the wanted rays, and reduce others by 1000 to 10,000 times. However they are unstable and have a narrow field of view.
Absorption filters may only pass 10% of the wanted UV, but can reject other wavelengths by a ratio of 10¹². They can have a wide field of view and are stable.
Ultraviolet detectors
Semiconductor ultraviolet detectors are solid state, and convert an ultraviolet photon into an electric pulse. If they are transparent to visible light, then they will not be sensitive to it.
Use
Solar-blind imaging can be used to detect corona discharge, in electrical infrastructure. Missile exhaust can be detected from the troposphere or ground. Also when looking down on the Earth from space, the Earth appears dark in this range, so rockets can be easily detected from above once they pass the ozone layer.
Israel, People's Republic of China, Russia, South Africa, United Kingdom, and United States are developing this technology.
References
Ultraviolet radiation
Military optical devices | Solar-blind technology | [
"Physics",
"Chemistry"
] | 503 | [
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Ultraviolet radiation"
] |
65,770,702 | https://en.wikipedia.org/wiki/Scythian%20metallurgy | From the 7th to 3rd Century BC, the Scythian people of the Pontic–Caspian steppe engaged in the widespread practice of metallurgy. Though Scythian society was heavily based around a nomadic, mobile lifestyle, the culture was capable of practicing metallurgy and of producing metal objects. Many works of Scythian metalworking have subsequently been found throughout the range of the people.
Description
The Scythians emerged as a people prior to the 7th Century BC, when they were first mentioned in historical records. The Scythian civilization consisted of a number of distinct tribal groups scattered across the Pontic Steppes, Caucasus, and Central Asia. Though primarily a nomadic people, the Scythians established a number of settlements across their territory; these establishments in turn allowed for the development of a sedentary society and the accompanying development of trade skills, including metalworking.
Scythian knowledge of metalworking likely originated with the peoples of Iran and China, with this knowledge spreading along trade routes and arriving in the steppes from the 2nd to 1st Millennium BC. Early Scythian metallurgy was centered around bronzeworking, as these skills had already been widely adopted by the Scythians' neighbors. The Minusinsk Basin of Siberia has been speculated as the origin point for the raw materials used in Bronze-age Scythian metallurgy, and Scythian access to this region fueled the peoples' later centuries of expansion. During the 8th Century BC Scythians were often employed by nations in the Near East and these returning soldiers may have brought knowledge of iron-working back to their homeland, and by the start of the 6th-century BC the practice was widespread in the Pontic steppes. In addition to bronze and iron working, gold and copper-working were also present in Scythian society; in his commentary on the Scythian people, Greek historian Herodotus remarked on their fondness for making things from gold and copper.
Metallurgy held a major place in Scythian society as metalworkers were needed to produce material goods to support the Scythian way of life. As a nomadic society with broad borders, the Scythians often raided neighboring peoples and as such required metal weaponry, particularly iron swords and bronze arrowheads. It has been speculated that the Scythians' use of stylized metal adornments may have been copied from their opponents during these conflicts. In addition, jewelry and other adornment was in demand among all classes of society, as can be seen from the discovery of metal adornments in the burial tombs attributed to the Scythians. One notable aspect of Scythian clothing was the widespread use of metal belts.
Other signs of Scythian metalworking can be found throughout sites attributed to the people. Several notable Scythian archeological sites contain the remnants of metalworking operations; at one settlement along the Dnieper, remnants of blast furnaces and slag have been found, implying the existence of a large metallurgical center. Studies of other Scythian sites have also led to the remains of metal workshops and tools being found, further supporting the theory that the Scythians were organized craftspeople. Scythian metalworkers were particularly renowned for the high quality of their copper crafting. During war, portable molds were brought to forge arrowheads for the Scythian cavalry. Scythian metallurgy also influenced the metallurgy of the Koban people of the North Caucasus.
References
See also
Scythian art
Scythian clothing
History of metallurgy
Scythia | Scythian metallurgy | [
"Chemistry",
"Materials_science"
] | 734 | [
"Metallurgy",
"History of metallurgy"
] |
60,773,258 | https://en.wikipedia.org/wiki/Tropifexor | Tropifexor is an investigational drug that acts as an agonist of the farnesoid X receptor (FXR). It was discovered by researchers from Novartis and Genomics Institute of the Novartis Research Foundation. Its synthesis and pharmacological properties were published in 2017. It was developed for the treatment of cholestatic liver diseases and nonalcoholic steatohepatitis (NASH). In combination with cenicriviroc, a CCR2 and CCR5 receptor inhibitor, it is undergoing a phase II clinical trial for NASH and liver fibrosis.
Rats treated orally with tropifexor (0.03 to 1 mg/kg) showed an upregulation of the FXR target genes, BSEP and SHP, and a down-regulation of CYP8B1. Its EC50 for FXR is between 0.2 and 0.26 nM depending on the biochemical assay.
The patent that covers tropifexor and related compounds was published in 2010.
References
Drugs developed by Novartis
Benzothiazoles
Farnesoid X receptor agonists
Isoxazoles
Tropanes
Carboxylic acids
Trifluoromethyl ethers
Cyclopropyl compounds | Tropifexor | [
"Chemistry"
] | 266 | [
"Carboxylic acids",
"Functional groups"
] |
60,776,979 | https://en.wikipedia.org/wiki/Polydiketoenamine | Polydiketoenamine (PDK) is a polymer discovered in 2019 that can be recycled over and over without loss of performance. It is obtained from carboxylic acids and polyamides. The compound contains a cross-linked network which gives it the properties of higher performance and chemical resistance. The mechanical reprocessing of PDK is done without degrading its properties or performance. When the compound is subjected to substantial heat, bonds break and re-form while the total number of bonds remains constant. Researchers at Lawrence Berkeley National Laboratory studied PDK and published the results in Nature Chemistry in April 2019. Submersion in an acidic solution breaks down the polymer to its original monomers and separates the monomers from additives.
See also
Enamine
References
Lawrence Berkeley National Laboratory
Organic polymers | Polydiketoenamine | [
"Chemistry"
] | 171 | [
"Polymer stubs",
"Organic polymers",
"Organic compounds",
"Organic chemistry stubs"
] |
60,777,446 | https://en.wikipedia.org/wiki/Barrel%20plating | Barrel plating is a form of electroplating used for plating a large number of smaller metal objects in one sitting. It consists of a non-conductive barrel-shaped cage in which the objects are placed before being subjected to the chemical bath in which they become plated. An important aspect of the barrel plating process is that the individual pieces establish a bipolar contact with one another — this results in high plating efficiency. However, because of the large amount of surface contact that the pieces have with each other, barrel plating is generally not recommended when precisely engineered or ornamental finishes are required.
Barrel plating began as a practice in the United States during the US Civil War. The harsh chemicals required, however, meant that it had to await the development of non-conductive and chemically resistant plastics (primarily perspex and polypropylene) before it could receive widespread use. By 2004, barrel plating had become widespread: it was estimated that as much as 70% of modern electroplating facilities used barrel plating techniques at that time.
References
Metal plating | Barrel plating | [
"Chemistry"
] | 223 | [
"Metallurgical processes",
"Coatings",
"Metal plating"
] |
60,777,557 | https://en.wikipedia.org/wiki/Value%20tree%20analysis | Value tree analysis is a multi-criteria decision-making (MCDM) method in which the attributes of each choice are weighted in order to arrive at a preference for the decision maker. Usually, the attribute-specific values of the choices are aggregated into an overall value. Decision analysts (DAs) distinguish two types of preference: value preferences, which compare alternatives when there is no uncertainty, and risk preferences, which capture the decision maker's attitude to risk taking under uncertainty. This article focuses on deterministic choices, namely value theory, and in particular on a decision analysis tool called a value tree.
History
The concept of utility was first used by Daniel Bernoulli (1738) in the 1730s while explaining the evaluation of the St Petersburg paradox, a specific uncertain gamble. He argued that money alone was not an adequate measure of value: for an individual, the worth of money was a non-linear function. This insight led to the emergence of utility theory, a numerical measure indicating how much value alternative choices have. With the development of decision analysis, utility played an important role in explaining economic behavior, and some utilitarian philosophers, such as Bentham and Mill, used it as a tool for building ethical theory. Nevertheless, there was at first no way of measuring an individual's utility function, and the theory mattered more in principle than in practice. Over time, utility theory was gradually placed on a solid theoretical foundation. Game theory came to be used to explain the behavior of rational actors engaging with others in situations of conflict. In 1944 John von Neumann and Oskar Morgenstern's Theory of Games and Economic Behavior was published. Afterwards, utility theory became one of the key tools that researchers and practitioners from statistics and operations research use to support decision makers facing difficult decisions. Decision analysts distinguish two sorts of preference; risk preferences describe the attitude of decision makers towards uncertainty.
Process
The goal of the value tree analysis process is to offer a well-organized way to think about and discuss alternatives and to support the subjective judgements that are critical for good decisions. The phases of the value tree analysis process are as follows:
Problem structuring:
defining the decision context
identifying the objectives
generating and identifying decision alternatives
creating a hierarchical model of the objectives
specifying the attributes
Preference elicitation
Recommended decision
Sensitivity analysis
These phases are usually laborious and iterative. For example, problem structuring, collection of related information, and modeling of the DM's preferences often require a lot of work. The DM's perception of the problem and preferences for outcomes not previously considered may change and evolve during this process.
Methodology
The value tree was developed as an effective technique for clarifying and refining goals and values in several respects. The tree gives a visual representation of problems that would otherwise be available only in verbal form, and it unites separate aspects, thoughts and opinions into a single visual representation, which brings greater clarity, stimulates creative thinking, and supports constructive communication.
We take the steps below to create a value tree, with an example to help illustrate each step:
Step1: Initial pool
Begin with a free brainstorming of all the values, by which we mean everything related to the decision: the goals and criteria, the demands, and anything else relevant to decision making. Write each value on a separate piece of paper.
(A) Begin the process with several things:
The essentials of your decision
The things that matter
The thing that you are looking for
The thing you want
Your passions, intentions, joys, ambition
The things that bring you joy
The things that you are fearful of
(B) Once you've exhausted your thoughts after this very open phase, consider the following topics to help you come up with comprehensive values, interests, and concerns related to your decision:
Stakeholders
Consider who is affected by the decision and what their values might be. Stakeholders may be family, friends, neighbors, society, offspring or other species, but they can be anyone who might be affected by your decision, whether intentional or not.
Basic human needs:
Physiological value - for example, health and nutrition
Safety value - feel safe
Social values - be loved and respected
Self-realizing value - doing and becoming "fit"
Cognitive value - eager to satisfy curiosity, know, explain and understand
Aesthetic value - experience beauty
Intangible consequences. We are most inclined to ignore intangible consequences, such as:
If you make this choice, how would you feel about yourself?
How do others see you making this choice?
Lack of awareness of these intangible consequences can easily lead to decisions we later regret. Moreover, if there is a disagreement between our intuitive and our thorough analysis of the decision, we are usually not aware of the underlying intangible consequences.
The pros and cons of the options you have seen:
For each option you can think of, what are its best and worst aspects? These point to values.
Special consideration of costs and risks. We tend to start our planning by thinking about the positive goals we hope to achieve. Considering costs and risks requires extra effort, but considering them is the first step to avoiding them.
Future values
Consider future impacts as well as current impacts. People tend to ignore or discount future consequences.
Imagine your own future, perhaps on your deathbed, reviewing this decision. What is important to you?
Step2: Clustering
When you run out of ideas, cluster them: move the pieces of paper around until similar ideas are gathered together.
Step3: Labeling
Mark each group with a higher-level value that holds its members together, to make each element clearer.
[Example]
As a simplified example, let us assume that some of the initial values we propose are self-actualization, family, safety, friends and health. Health, safety and self-actualization can be grouped together and labeled SELF, while family and friends can be grouped together and labeled OTHERS.
Step4: Moving up the tree
Seeing whether these groups can be grouped into still larger groups
[Example]
SELF and OTHERS group into OVERALL VALUE.
Step5: Moving down the tree
Also seeing if these groups can be divided into still smaller sub-groups.
[Example]
SELF-ACTUALIZATION could be divided into WORK and RECREATION.
Step6: Moving across the tree
Another valid way to bring new ideas to the tree is to ask whether any additional ideas can come out at a given level (moving across the tree).
[Example]
In addition to FAMILY and FRIENDS, we could add SOCIETY.
The diagram on the right shows the final result of the (still simplified) example. Bold, italic indicates the basic values that were not originally written by us, but were thought of when we tried to fill in the tree.
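To make the aggregation step concrete, the sketch below assumes a simple additive (weighted-sum) value model over the example tree above; the weights, scores, function name and representation are invented purely for illustration and are not part of any particular value tree tool.

```python
# Additive value-tree sketch: a node is either a leaf with a score in [0, 1]
# or an internal node whose children carry weights that sum to 1.
def overall_value(node):
    if "score" in node:
        return node["score"]
    return sum(weight * overall_value(child) for weight, child in node["children"])

tree = {  # OVERALL VALUE of one alternative
    "children": [
        (0.6, {"children": [           # SELF
            (0.5, {"score": 0.8}),     # health
            (0.2, {"score": 0.9}),     # safety
            (0.3, {"score": 0.6}),     # self-actualization
        ]}),
        (0.4, {"children": [           # OTHERS
            (0.7, {"score": 0.7}),     # family
            (0.3, {"score": 0.5}),     # friends
        ]}),
    ]
}
print(overall_value(tree))  # 0.712: a single aggregate value for this alternative
```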
Tool
PRIME Decisions
PRIME Decisions is a decision support tool that uses the PRIME method to analyze incomplete preference information. It also offers novel features that support an interactive decision process, including an elicitation tour. PRIME Decisions is seen as a catalyst for further applied work because its practitioners benefit from the explicit recognition of incomplete information.
Web-Hipre
Web-HIPRE, a Java applet, supports multiple criteria decision analysis and provides a common platform for individual and group decision making. Several people can access and work on the same model at any time. It is possible to define links to other websites, so that additional information such as maps or media files describing the criteria or alternatives can be attached to the model, which significantly improves the quality of decision support.
Application
Some indicators obtained from process analysis are of great help in value tree analysis, especially in the value decomposition of internal operational indicators, where the drivers of a first-level process indicator are usually secondary sub-process indicators. For instance, the new product launch cycle (from R&D project to production) is driven by two processes in the company: R&D and testing. A standardized R&D and testing process is a key success factor for improving the speed of innovation, and process indicators such as development cycle, test cycle and sample acceptance are the vital elements that drive the new product launch cycle indicator. Combining process analysis is therefore of great significance for the decomposition of indicator value, especially for internal operational indicators. The main application areas are illustrated below:
Application on business, production and services
Budget allocation
Allocating the annual engineering budget for products and projects is always a challenge. With value tree analysis, aspects such as strategic fit, which have no natural evaluation measure but may play a significant role in the decision, can be included in the analysis. Explicit modelling of the relevant facts is also likely to improve communication and provides a basis for justified decisions.
Selection of R&D programs
Since the risk in many R&D programmes is high, a well-founded justification may be as essential as the decision itself. Value tree analysis offers a tool to support the reasoning behind the selection of an R&D programme and to model the facts affecting the decision.
Developing and deciding on marketing strategies
For instance, the analysis of new strategies for merchandising gasoline and other products through full-facility service stations.
Application on public policy problems
Analysis of responses to environmental risks
For instance, organization of negotiations between several parties in order to identify compromise regulations for acid rain and identify the objectives of the regulations.
Negotiation for oil and gas leases
Carry out an evaluation report of subcontractors and analyze the criteria which should be used.
Comparisons between alternative energy sources
For instance, organizing a debate about nuclear power, aiding the decision process, and studying value differences between the decision-makers.
Political decisions
Application on medicine
Deciding on the optimal usage and inventory of blood in a blood bank
Helping individuals to understand the risks of different treatments
In addition to decision-making problems, value tree analysis also serves other purposes.
Identifying and reformulating options
Definition of objectives
Providing a common language for communication
Quantification of subjective variables
For instance, a scale which measures the worth of military targets.
Development of value-relevant indices
Application on empirical pilot study variable selection
Because value tree analysis requires little cost and computation, it is one of the best choices for time-sensitive variable selection in empirical pilot healthcare studies. Moreover, value tree analysis offers a well-structured and strategic process for decision-making, so that pilot study and patient data constraints can be accounted for and value for study stakeholders can be maximized.
Application on Coaching
Value tree analysis supports creative and critical thinking and organizes thoughts in a logical way. Moreover, when a decision comes up, value tree analysis can be an effective way to think about one's core goals and values; afterwards, one can actively look for decision opportunities using the analysis done before.
Software
The software tools of value tree analysis are shown in the picture below:
References
Quality
Reliability engineering
Risk analysis methodologies
Safety engineering | Value tree analysis | [
"Engineering"
] | 2,298 | [
"Safety engineering",
"Systems engineering",
"Reliability engineering"
] |
60,777,609 | https://en.wikipedia.org/wiki/Li%C3%B1%C3%A1n%27s%20flame%20speed | In combustion, Liñán's flame speed provides an estimate of the upper limit for the edge-flame propagation velocity when the flame curvature is small. The formula is named after Amable Liñán. When the flame thickness is much smaller than the mixing-layer thickness through which the edge flame is propagating, a flame speed can be defined as the propagation speed of the flame front with respect to a region far ahead of the flame. For small flame curvatures (flame stretch), each point of the flame front propagates at the laminar planar premixed speed corresponding to the local equivalence ratio just ahead of the flame. However, the flame front as a whole does not propagate at this speed, since the mixture ahead of the flame front undergoes thermal expansion due to heating by the flame front, which helps the flame front propagate faster with respect to the region far ahead of it. Liñán estimated the edge flame speed to be
U_F = S_L,st (ρ_u/ρ_b)^(1/2),
where ρ_u and ρ_b are the densities of the fluid far upstream and far downstream of the flame front. Here S_L,st is the stoichiometric value (at equivalence ratio φ = 1) of the planar flame speed S_L(φ). Due to the thermal expansion, streamlines diverge as they approach the flame and a pressure builds up just ahead of the flame.
The scaling law for the flame speed was verified experimentally. In the constant-density approximation, this influence of density variations disappears and the upper limit of the edge flame speed is given by the maximum value of the planar flame speed S_L(φ).
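As a purely numerical illustration of this scaling, the sketch below uses representative values chosen here (a planar speed of 0.4 m/s and an upstream-to-downstream density ratio of 7), not figures taken from the article.

```python
import math

def edge_flame_speed(s_l_st: float, rho_u: float, rho_b: float) -> float:
    """Upper-limit edge-flame speed from the scaling U_F ~ S_L,st * sqrt(rho_u / rho_b)."""
    return s_l_st * math.sqrt(rho_u / rho_b)

# Illustrative numbers only: S_L,st = 0.4 m/s and a density ratio of 7.
print(edge_flame_speed(0.4, rho_u=7.0, rho_b=1.0))  # ~1.06 m/s, about 2.6x the planar speed
```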
References
Fluid dynamics
Combustion | Liñán's flame speed | [
"Chemistry",
"Engineering"
] | 306 | [
"Piping",
"Chemical engineering",
"Combustion",
"Fluid dynamics"
] |
68,622,078 | https://en.wikipedia.org/wiki/HD%2063513 | HD 63513 (HR 3036) is a solitary star located in the southern circumpolar constellation Volans. It has an apparent magnitude of 6.38, placing it near the limit of naked-eye visibility. The star is situated at a distance of 634 light years but is receding with a heliocentric radial velocity of .
This object is a giant star with spectral characteristics between those of a G6 and a G8 giant. At present it has 3.14 times the mass of the Sun but has expanded to almost 13 times the Sun's radius. It shines at 102 solar luminosities from its enlarged photosphere at an effective temperature of 5,116 K, which gives it a yellow glow. HD 63513 has an iron abundance 102% that of the Sun, placing it at solar metallicity, and it spins modestly with a projected rotational velocity of .
References
Volans
G-type giants
063513
037773
Durchmusterung objects
3036
Volantis, 17 | HD 63513 | [
"Astronomy"
] | 208 | [
"Volans",
"Constellations"
] |
68,623,212 | https://en.wikipedia.org/wiki/Principal%20series%20%28spectroscopy%29 | In atomic emission spectroscopy, the principal series is a series of spectral lines caused when electrons move between p orbitals of an atom and the lowest available s orbital. These lines are usually found in the visible and ultraviolet portions of the electromagnetic spectrum. The principal series has given the letter p to the p atomic orbital and subshell.
The lines are absorption lines when an electron gains energy and moves from an s subshell to a p subshell. When electrons descend in energy they produce an emission spectrum. The term principal came about because this series of lines is observed both in absorption and in emission for alkali metal vapours. Other series of lines appear in the emission spectrum only and not in the absorption spectrum, and were named the sharp series and the diffuse series based on the appearance of the lines.
References
Atomic physics
Emission spectroscopy
Absorption spectroscopy | Principal series (spectroscopy) | [
"Physics",
"Chemistry",
"Astronomy"
] | 169 | [
"Spectroscopy stubs",
"Spectrum (physical sciences)",
"Emission spectroscopy",
"Quantum mechanics",
"Absorption spectroscopy",
"Astronomy stubs",
"Atomic physics",
" molecular",
"Atomic",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs",
" and optical physics"
] |
68,624,888 | https://en.wikipedia.org/wiki/Bengt%20Aurivillius | Bengt Aurivillius (4 December, 1918 in Linköping – 2 May, 1994 in St. Peter's Parish, Malmöhus County) was a Swedish chemist known for his research in metal and mixed oxides.
Education and career
Aurivillius received his basic scientific education at the then Stockholm University where he graduated in 1937 and earned a fil. lic. in 1943. By 1949, he had made some important discoveries about the oxidation of mixed metals, which became quite prominent in the world of chemistry. He completed his dissertation, "X-ray Examinations of Bismuth Oxifluoride and Mixed Oxides with Trivalent Bismuth", at Stockholm University in 1951. Aurivillius joined the Swedish National Defence Research Institute in 1952, where he worked first as a research engineer and later senior researcher. By 1960, Aurivillius was a docent of physical chemistry and acting senior lecturer at the Stockholm University. In 1965, he was appointed professor of inorganic chemistry at Lund University, a professorship he held until 1983. During the sixties, he worked in the field of crystallography alongside his wife, Karin Aurivillius.
Scientific research
Aurivillius is known for his study of bismuth compounds, including bismuth sesquioxide (Bi2O3) and bismuth layer-structured ferroelectrics based on the perovskite oxide structure, which were later named after him as the Aurivillius phases. He characterized the ferroelectric properties of these materials, which have become a family of materials for lead-free ceramics.
Personal life
Bengt Aurivillius was a member of the Aurivillius family; his father was the entomologist Christopher Aurivillius. His wife was the crystallographer Karin Aurivillius.
See also
Aurivillius phases
References
Academic staff of Stockholm University
20th-century Swedish chemists
1918 births
1994 deaths
Stockholm University alumni
Crystallographers
Inorganic chemists
Academic staff of Lund University
Solid state chemists | Bengt Aurivillius | [
"Chemistry",
"Materials_science"
] | 417 | [
"Crystallographers",
"Crystallography",
"Inorganic chemists",
"Solid state chemists"
] |
53,059,902 | https://en.wikipedia.org/wiki/Nondeterministic%20constraint%20logic | In theoretical computer science, nondeterministic constraint logic is a combinatorial system in which an orientation is given to the edges of a weighted undirected graph, subject to certain constraints. One can change this orientation by steps in which a single edge is reversed, subject to the same constraints. This is a form of reversible logic in that each sequence of edge orientation changes can be undone.
Reconfiguration problems for constraint logic, asking for a sequence of moves to connect certain states, connect all states, or reverse a specified edge have been proven to be PSPACE-complete. These hardness results form the basis for proofs that various games and puzzles are PSPACE-hard or PSPACE-complete.
Constraint graphs
In the simplest version of nondeterministic constraint logic, each edge of an undirected graph has weight either one or two. (The weights may also be represented graphically by drawing edges of weight one as red and edges of weight two as blue.) The graph is required to be a cubic graph: each vertex is incident to three edges, and additionally each vertex should be incident to an even number of red edges.
The edges are required to be oriented in such a way that at least two units of weight are oriented towards each vertex: there must be either at least one incoming blue edge, or at least two incoming red edges. An orientation can change by steps in which a single edge is reversed, respecting these constraints.
More general forms of nondeterministic constraint logic allow a greater variety of edge weights, more edges per vertex, and different thresholds for how much incoming weight each vertex must have. A graph with a system of edge weights and vertex thresholds is called a constraint graph. The restricted case where the edge weights are all one or two, the vertices require two units of incoming weight, and the vertices all have three incident edges with an even number of red edges, are called and/or constraint graphs.
The reason for the name and/or constraint graphs is that the two possible types of vertex in an and/or constraint graph behave in some ways like an AND gate and OR gate in Boolean logic. A vertex with two red edges and one blue edge behaves like an AND gate in that it requires both red edges to point inwards before the blue edge can be made to point outwards. A vertex with three blue edges behaves like an OR gate, with two of its edges designated as inputs and the third as an output, in that it requires at least one input edge to point inwards before the output edge can be made to point outwards.
Typically, constraint logic problems are defined around finding valid configurations of constraint graphs. Constraint graphs are undirected graphs with two types of edges:
red edges with weight 1
blue edges with weight 2
We use constraint graphs as computation models, where we think of the entire graph as a machine. A configuration of the machine consists of the graph along with a specific orientation of its edges. We call a configuration valid if it satisfies the inflow constraint: each vertex must have an incoming weight of at least 2. In other words, the sum of the weights of the edges directed into a given vertex must be at least 2.
We also define a move in a constraint graph to be the action of reversing the orientation of an edge, such that the resulting configuration is still valid.
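The sketch below makes the inflow constraint and the move rule concrete; the edge-list representation, function names and the toy triangle graph are invented here for illustration and are not the and/or constraint graphs used in the reductions discussed later.

```python
# Toy constraint-graph sketch.  Each edge is (u, v, weight) and orient[i] gives
# the vertex the i-th edge currently points to (its head).
def inflow(vertex, edges, orient):
    return sum(w for (u, v, w), head in zip(edges, orient) if head == vertex)

def is_valid(vertices, edges, orient, min_inflow=2):
    return all(inflow(x, edges, orient) >= min_inflow for x in vertices)

def try_move(vertices, edges, orient, i):
    """Reverse edge i if the resulting configuration is still valid; otherwise return None."""
    u, v, _ = edges[i]
    new_orient = list(orient)
    new_orient[i] = u if orient[i] == v else v
    return new_orient if is_valid(vertices, edges, new_orient) else None

# A triangle of blue (weight-2) edges oriented cyclically is a valid configuration,
# but reversing any single edge would starve some vertex, so no move is possible here.
vertices = [0, 1, 2]
edges = [(0, 1, 2), (1, 2, 2), (2, 0, 2)]
orient = [1, 2, 0]  # 0->1, 1->2, 2->0
print(is_valid(vertices, edges, orient))                         # True
print([try_move(vertices, edges, orient, i) for i in range(3)])  # [None, None, None]
```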
Formal definition of the Constraint logic problem
Suppose we are given a constraint graph, a starting configuration and an ending configuration. This problem asks if there exists a sequence of valid moves that takes it from the starting configuration to the ending configuration. This problem is PSPACE-complete for 3-regular or max-degree-3 graphs. The reduction follows from QSAT and is outlined below.
Variants
Planar Non-Deterministic Constraint Logic
The above problem is PSPACE-complete even if the constraint graph is planar, i.e. the graph can be drawn in such a way that no two edges cross each other. This reduction follows from Planar QSAT.
Edge Reversal
This problem is a special case of the previous one. It asks, given a constraint graph, if it is possible to reverse a specified edge by a sequence of valid moves. Note that this could be done by a sequence of valid moves so long as the last valid move reverses the desired edge. This problem has also been proven to be PSPACE-Complete for 3-regular or max-degree 3 graphs.
Constraint Graph Satisfaction
This problem asks if there exists an orientation of the edges that satisfies the inflow constraints, given an undirected constraint graph. This problem has been proven to be NP-complete.
Hard problems
The following problems, on and/or constraint graphs and their orientations, are PSPACE-complete:
Given an orientation and a specified edge e, testing whether there is a sequence of steps from the given orientation that eventually reverses edge e.
Testing whether one orientation can be changed into another one by a sequence of steps.
Given two edges e1 and e2 with specified directions, testing whether there are two orientations for the whole graph, one having the specified direction on e1 and the other having the specified direction on e2, that can be transformed into each other by a sequence of steps.
The proof that these problems are hard involves a reduction from quantified Boolean formulas, based on the logical interpretation of and/or constraint graphs. It requires additional gadgets for simulating quantifiers and for converting signals carried on red edges into signals carried on blue edges (or vice versa), which can all be accomplished by combinations of and-vertices and or-vertices.
These problems remain PSPACE-complete even for and/or constraint graphs that form planar graphs. The proof of this involves the construction of crossover gadgets that allow two independent signals to cross each other. It is also possible to impose an additional restriction, while preserving the hardness of these problems: each vertex with three blue edges can be required to be part of a triangle with a red edge. Such a vertex is called a protected or, and it has the property that (in any valid orientation of the whole graph) it is not possible for both of the blue edges in the triangle to be directed inwards. This restriction makes it easier to simulate these vertices in hardness reductions for other problems. Additionally, the constraint graphs can be required to have bounded bandwidth, and the problems on them will still remain PSPACE-complete.
Proof of PSPACE-hardness
The reduction follows from QSAT. In order to embed a QSAT formula, we need to create AND, OR, NOT, UNIVERSAL, EXISTENTIAL, and Converter (to change color) gadgets in the constraint graph. The idea goes as follows:
An AND vertex is a vertex such that it has two incident red edges (inputs) and one blue incident edge (output).
An OR vertex is a vertex such that it has three incident blue edges (two inputs, one output).
The other gadgets can also be created in this manner. The full construction is available on Erik Demaine's website, where it is also explained in an interactive way.
Applications
The original applications of nondeterministic constraint logic used it to prove the PSPACE-completeness of sliding block puzzles such as Rush Hour and Sokoban. To do so, one needs only to show how to simulate edges and edge orientations, and-vertices and or-vertices, and protected or vertices in these puzzles.
Nondeterministic constraint logic has also been used to prove the hardness of reconfiguration versions of classical graph optimization problems including the independent set, vertex cover, and dominating set, on planar graphs of bounded bandwidth. In these problems, one must change one solution to the given problem into another, by moving one vertex at a time into or out of the solution set while maintaining the property that at all times the remaining vertices form a solution.
Reconfiguration 3SAT
Given a 3-CNF formula and two satisfying assignments, this problem asks whether it is possible to find a sequence of steps that takes us from one assignment to the other, where in each step we are allowed to flip the value of a single variable. This problem can be shown PSPACE-complete via a reduction from the Nondeterministic Constraint Logic problem.
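A brute-force way to answer the reconfiguration question on very small formulas is to search the graph whose nodes are satisfying assignments and whose edges are single-variable flips; the sketch below (illustrative only, with a made-up clause encoding) does exactly that.

```python
# Breadth-first search over satisfying assignments; exponential, for tiny formulas only.
from collections import deque

def satisfies(clauses, assignment):
    # clauses: list of 3-tuples of literals, e.g. (1, -2, 3) means x1 or (not x2) or x3
    return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses)

def reconfigurable(clauses, start, goal):
    """start/goal: tuples of booleans, both assumed to satisfy the formula."""
    seen, queue = {start}, deque([start])
    while queue:
        a = queue.popleft()
        if a == goal:
            return True
        for i in range(len(a)):                        # flip one variable at a time
            b = a[:i] + (not a[i],) + a[i + 1:]
            if b not in seen and satisfies(clauses, b):
                seen.add(b)
                queue.append(b)
    return False
```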
Sliding-Block Puzzles
This problem asks whether we can reach a desired configuration in a sliding block puzzle given an initial configuration of the blocks. This problem is PSPACE-complete, even if the rectangles are dominoes.
Rush Hour
This problem asks whether we can reach the victory condition of rush hour puzzle given an initial configuration. This problem is PSPACE-complete, even if the blocks have size .
Dynamic Map Labeling
Given a static map, this problem asks whether there is a smooth dynamic labeling. This problem is also PSPACE-complete.
References
PSPACE-complete problems
Computational problems in graph theory
Reversible computing
Logical calculi
Reconfiguration | Nondeterministic constraint logic | [
"Physics",
"Mathematics"
] | 1,826 | [
"Computational problems in graph theory",
"Physical quantities",
"Time",
"Reconfiguration",
"PSPACE-complete problems",
"Mathematical logic",
"Reversible computing",
"Computational mathematics",
"Logical calculi",
"Computational problems",
"Graph theory",
"Mathematical relations",
"Spacetime... |
53,061,027 | https://en.wikipedia.org/wiki/Luspatercept | Luspatercept, sold under the brand name Reblozyl, is a medication used for the treatment of anemia in beta thalassemia and myelodysplastic syndromes.
The US Food and Drug Administration (FDA) considers it to be a first-in-class medication.
Medical uses
Luspatercept is indicated for the treatment of adults with transfusion-dependent anemia due to very low, low and intermediate-risk myelodysplastic syndromes (MDS) with ring sideroblasts, who had an unsatisfactory response to or are ineligible for erythropoietin-based therapy.
Luspatercept is indicated for the treatment of adults with transfusion-dependent anaemia associated with beta thalassaemia.
Side effects
Possible adverse effects include temporary bone pain, joint pains (arthralgias), dizziness, elevated blood pressure (hypertension) and elevated uric acid levels (hyperuricemia). There was also an increased risk of thrombosis (blood clots) in patients taking luspatercept who have risk factors for thrombosis.
Structure and mechanism
Luspatercept is a recombinant fusion protein derived from human activin receptor type IIb (ActRIIb) linked to a protein derived from immunoglobulin G. It binds TGF-β (transforming growth factor beta) superfamily ligands to reduce SMAD signaling. The reduction in SMAD signaling leads to enhanced erythroid maturation.
History
Phase III trials evaluated the efficacy of luspatercept for the treatment of anemia in the hematological disorders beta thalassemia and myelodysplastic syndromes.
It was developed by Acceleron Pharma in collaboration with Celgene.
The U.S. Food and Drug Administration (FDA) granted approval for luspatercept–aamt in November 2019, for the treatment of anemia (lack of red blood cells) in adult patients with beta thalassemia who require regular red blood cell (RBC) transfusions. Luspatercept was approved for medical use in the European Union in June 2020.
The U.S. Food and Drug Administration (FDA) awarded orphan drug status in 2013, and fast track designation in 2015.
Research
Luspatercept is being evaluated for use in adults with non-transfusion dependent beta thalassemia.
References
Drugs developed by Bristol Myers Squibb
Orphan drugs
Recombinant proteins | Luspatercept | [
"Biology"
] | 523 | [
"Recombinant proteins",
"Biotechnology products"
] |
53,063,316 | https://en.wikipedia.org/wiki/Volatilome | The volatilome (sometimes termed volatolome or volatome) contains all of the volatile metabolites as well as other volatile organic and inorganic compounds that originate from an organism, super-organism, or ecosystem. The atmosphere of a living planet could be regarded as its volatilome. While all volatile metabolites in the volatilome can be thought of as a subset of the metabolome, the volatilome also contains exogenously derived compounds that do not derive from metabolic processes (e.g. environmental contaminants), therefore the volatilome can be regarded as a distinct entity from the metabolome. The volatilome is a component of the 'aura' of molecules and microbes (the 'microbial cloud') that surrounds all organisms.
Odor profile
All volatile metabolites detectable by the human nose are termed an 'odour profile'. The association of altered odour profiles with disease states has long been documented in both eastern and western medicine, and recent advances in robotic sample introduction have increased interest in the volatilome as a source for biomarkers that can be used for non-invasive screening for disease. Volatile profiles can be collected via active or passive sampling and analysis is predominantly undertaken using gas chromatography–mass spectrometry, with a variety of direct or indirect sample introduction techniques.
See also
Electronic nose
References
Omics
Bioinformatics
Odor
Metabolism | Volatilome | [
"Chemistry",
"Engineering",
"Biology"
] | 303 | [
"Biological engineering",
"Bioinformatics",
"Omics",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
64,274,563 | https://en.wikipedia.org/wiki/Quasi-ultrabarrelled%20space | In functional analysis and related areas of mathematics, a quasi-ultrabarrelled space is a topological vector spaces (TVS) for which every bornivorous ultrabarrel is a neighbourhood of the origin.
Definition
A subset B0 of a TVS X is called a bornivorous ultrabarrel if it is a closed, balanced, and bornivorous subset of X and if there exists a sequence (Bi)i≥1 of closed, balanced, and bornivorous subsets of X such that Bi+1 + Bi+1 ⊆ Bi for all i = 0, 1, ....
In this case, the sequence (Bi)i≥1 is called a defining sequence for B0.
A TVS X is called quasi-ultrabarrelled if every bornivorous ultrabarrel in X is a neighbourhood of the origin.
Properties
A locally convex quasi-ultrabarrelled space is quasi-barrelled.
Examples and sufficient conditions
Ultrabarrelled spaces and ultrabornological spaces are quasi-ultrabarrelled.
Complete and metrizable TVSs are quasi-ultrabarrelled.
See also
Barrelled space
Countably barrelled space
Countably quasi-barrelled space
Infrabarreled space
Ultrabarrelled space
Uniform boundedness principle#Generalisations
References
Topological vector spaces | Quasi-ultrabarrelled space | [
"Mathematics"
] | 254 | [
"Topological vector spaces",
"Vector spaces",
"Space (mathematics)"
] |
64,278,078 | https://en.wikipedia.org/wiki/Metrizable%20topological%20vector%20space | In functional analysis and related areas of mathematics, a metrizable (resp. pseudometrizable) topological vector space (TVS) is a TVS whose topology is induced by a metric (resp. pseudometric). An LM-space is an inductive limit of a sequence of locally convex metrizable TVS.
Pseudometrics and metrics
A pseudometric on a set X is a map d : X × X → [0, ∞) satisfying the following properties:
d(x, x) = 0 for all x ∈ X;
Symmetry: d(x, y) = d(y, x) for all x, y ∈ X;
Subadditivity: d(x, z) ≤ d(x, y) + d(y, z) for all x, y, z ∈ X.
A pseudometric d is called a metric if it satisfies:
Identity of indiscernibles: for all x, y ∈ X, if d(x, y) = 0 then x = y.
Ultrapseudometric
A pseudometric d on X is called an ultrapseudometric or a strong pseudometric if it satisfies:
Strong/Ultrametric triangle inequality: d(x, z) ≤ max{d(x, y), d(y, z)} for all x, y, z ∈ X.
Pseudometric space
A pseudometric space is a pair consisting of a set and a pseudometric on such that 's topology is identical to the topology on induced by We call a pseudometric space a metric space (resp. ultrapseudometric space) when is a metric (resp. ultrapseudometric).
Topology induced by a pseudometric
If d is a pseudometric on a set X, then the collection of open balls:
B_r(z) := {x ∈ X : d(x, z) < r}, as z ranges over X and r ranges over the positive real numbers,
forms a basis for a topology on X that is called the d-topology or the pseudometric topology on X induced by d.
: If is a pseudometric space and is treated as a topological space, then unless indicated otherwise, it should be assumed that is endowed with the topology induced by
Pseudometrizable space
A topological space is called pseudometrizable (resp. metrizable, ultrapseudometrizable) if there exists a pseudometric (resp. metric, ultrapseudometric) on such that is equal to the topology induced by
Pseudometrics and values on topological groups
An additive topological group is an additive group endowed with a topology, called a group topology, under which addition and negation become continuous operators.
A topology on a real or complex vector space is called a vector topology or a TVS topology if it makes the operations of vector addition and scalar multiplication continuous (that is, if it makes into a topological vector space).
Every topological vector space (TVS) is an additive commutative topological group but not all group topologies on are vector topologies.
This is because despite it making addition and negation continuous, a group topology on a vector space may fail to make scalar multiplication continuous.
For instance, the discrete topology on any non-trivial vector space makes addition and negation continuous but does not make scalar multiplication continuous.
Translation invariant pseudometrics
If is an additive group then we say that a pseudometric on is translation invariant or just invariant if it satisfies any of the following equivalent conditions:
Translation invariance: d(x + z, y + z) = d(x, y) for all x, y, z ∈ X;
Value/G-seminorm
If X is a topological group then a value or G-seminorm on X (the G stands for Group) is a real-valued map p : X → ℝ with the following properties:
Non-negative: p(x) ≥ 0;
Subadditive: p(x + y) ≤ p(x) + p(y);
Symmetric: p(−x) = p(x);
where we call a G-seminorm a G-norm if it satisfies the additional condition:
Total/Positive definite: if p(x) = 0 then x = 0.
Properties of values
If is a value on a vector space then:
and for all and positive integers
The set {x ∈ X : p(x) = 0} is an additive subgroup of X.
Equivalence on topological groups
Pseudometrizable topological groups
An invariant pseudometric that doesn't induce a vector topology
Let X be a non-trivial (i.e. X ≠ {0}) real or complex vector space and let d be the translation-invariant trivial metric on X defined by d(x, x) = 0 and d(x, y) = 1 for all x ≠ y.
The topology that d induces on X is the discrete topology, which makes X into a commutative topological group under addition but does not form a vector topology on X, because X is disconnected while every vector topology is connected.
What fails is that scalar multiplication is not continuous on X.
This example shows that a translation-invariant (pseudo)metric is not enough to guarantee a vector topology, which leads us to define paranorms and F-seminorms.
Additive sequences
A collection of subsets of a vector space is called additive if for every set N in the collection there exists some set U in the collection such that U + U ⊆ N.
All of the above conditions are consequently necessary for a topology to form a vector topology.
Additive sequences of sets have the particularly nice property that they define non-negative continuous real-valued subadditive functions.
These functions can then be used to prove many of the basic properties of topological vector spaces and also show that a Hausdorff TVS with a countable basis of neighborhoods is metrizable. The following theorem is true more generally for commutative additive topological groups.
Assume that always denotes a finite sequence of non-negative integers and use the notation:
For any integers and
From this it follows that if consists of distinct positive integers then
It will now be shown by induction on that if consists of non-negative integers such that for some integer then
This is clearly true for and so assume that which implies that all are positive.
If all are distinct then this step is done, and otherwise pick distinct indices such that and construct from by replacing each with and deleting the element of (all other elements of are transferred to unchanged).
Observe that and (because ) so by appealing to the inductive hypothesis we conclude that as desired.
It is clear that and that so to prove that is subadditive, it suffices to prove that when are such that which implies that
This is an exercise.
If all are symmetric then if and only if from which it follows that and
If all are balanced then the inequality for all unit scalars such that is proved similarly.
Because is a nonnegative subadditive function satisfying as described in the article on sublinear functionals, is uniformly continuous on if and only if is continuous at the origin.
If all are neighborhoods of the origin then for any real pick an integer such that so that implies
If the set of all form basis of balanced neighborhoods of the origin then it may be shown that for any there exists some such that implies
Paranorms
If is a vector space over the real or complex numbers then a paranorm on is a G-seminorm (defined above) on that satisfies any of the following additional conditions, each of which begins with "for all sequences in and all convergent sequences of scalars ":
Continuity of multiplication: if is a scalar and are such that and then
Both of the conditions:
if and if is such that then ;
if then for every scalar
Both of the conditions:
if and for some scalar then ;
if then
Separate continuity:
if for some scalar then for every ;
if is a scalar, and then .
A paranorm is called total if in addition it satisfies:
Total/Positive definite: implies
Properties of paranorms
If p is a paranorm on a vector space X then the map d defined by d(x, y) := p(x − y) is a translation-invariant pseudometric on X that defines a vector topology on X.
If is a paranorm on a vector space then:
the set is a vector subspace of
with
If a paranorm p satisfies p(sx) ≤ |s| p(x) for all vectors x and scalars s, then p is absolutely homogeneous (i.e. equality holds) and thus p is a seminorm.
Examples of paranorms
If is a translation-invariant pseudometric on a vector space that induces a vector topology on (i.e. is a TVS) then the map defines a continuous paranorm on ; moreover, the topology that this paranorm defines in is
If is a paranorm on then so is the map
Every positive scalar multiple of a paranorm (resp. total paranorm) is again such a paranorm (resp. total paranorm).
Every seminorm is a paranorm.
The restriction of a paranorm (resp. total paranorm) to a vector subspace is a paranorm (resp. total paranorm).
The sum of two paranorms is a paranorm.
If and are paranorms on then so is Moreover, and This makes the set of paranorms on into a conditionally complete lattice.
Each of the following real-valued maps are paranorms on :
The real-valued maps and are paranorms on
If is a Hamel basis on a vector space then the real-valued map that sends (where all but finitely many of the scalars are 0) to is a paranorm on which satisfies for all and scalars
The function is a paranorm on that is balanced but nevertheless equivalent to the usual norm on Note that the function is subadditive.
Let be a complex vector space and let denote considered as a vector space over Any paranorm on is also a paranorm on
F-seminorms
If X is a vector space over the real or complex numbers then an F-seminorm on X (the F stands for Fréchet) is a real-valued map p : X → ℝ with the following four properties:
Non-negative: p(x) ≥ 0;
Subadditive: p(x + y) ≤ p(x) + p(y) for all x, y ∈ X;
Balanced: p(ax) ≤ p(x) for all x ∈ X and all scalars a satisfying |a| ≤ 1;
This condition guarantees that each set of the form {x ∈ X : p(x) ≤ r} or {x ∈ X : p(x) < r} for some r ≥ 0 is a balanced set.
For every x ∈ X, p(x/n) → 0 as n → ∞.
The sequence (1/n) can be replaced by any positive sequence converging to zero.
An F-seminorm is called an F-norm if in addition it satisfies:
Total/Positive definite: p(x) = 0 implies x = 0.
An F-seminorm is called monotone if it satisfies:
Monotone: for all non-zero and all real and such that
F-seminormed spaces
An F-seminormed space (resp. F-normed space) is a pair (X, p) consisting of a vector space X and an F-seminorm (resp. F-norm) p on X.
If (X, p) and (Z, q) are F-seminormed spaces then a map f : X → Z is called an isometric embedding if q(f(x) − f(y)) = p(x − y) for all x, y ∈ X.
Every isometric embedding of one F-seminormed space into another is a topological embedding, but the converse is not true in general.
Examples of F-seminorms
Every positive scalar multiple of an F-seminorm (resp. F-norm, seminorm) is again an F-seminorm (resp. F-norm, seminorm).
The sum of finitely many F-seminorms (resp. F-norms) is an F-seminorm (resp. F-norm).
If and are F-seminorms on then so is their pointwise supremum The same is true of the supremum of any non-empty finite family of F-seminorms on
The restriction of an F-seminorm (resp. F-norm) to a vector subspace is an F-seminorm (resp. F-norm).
A non-negative real-valued function on is a seminorm if and only if it is a convex F-seminorm, or equivalently, if and only if it is a convex balanced G-seminorm. In particular, every seminorm is an F-seminorm.
For any the map on defined by
is an F-norm that is not a norm.
If is a linear map and if is an F-seminorm on then is an F-seminorm on
Let be a complex vector space and let denote considered as a vector space over Any F-seminorm on is also an F-seminorm on
Properties of F-seminorms
Every F-seminorm is a paranorm and every paranorm is equivalent to some F-seminorm.
Every F-seminorm on a vector space is a value on In particular, and for all
Topology induced by a single F-seminorm
Topology induced by a family of F-seminorms
Suppose that L is a non-empty collection of F-seminorms on a vector space X and, for any finite subset F of L and any r > 0, let U(F, r) denote the set of all x ∈ X such that p(x) < r for every p in F.
The set forms a filter base on that also forms a neighborhood basis at the origin for a vector topology on denoted by Each is a balanced and absorbing subset of These sets satisfy
is the coarsest vector topology on making each continuous.
is Hausdorff if and only if for every non-zero there exists some such that
If is the set of all continuous F-seminorms on then
If is the set of all pointwise suprema of non-empty finite subsets of of then is a directed family of F-seminorms and
Fréchet combination
Suppose that is a family of non-negative subadditive functions on a vector space
The Fréchet combination of is defined to be the real-valued map
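With the standard choice of weights 2^(−i) (an assumption, since the formula itself is not reproduced above), the Fréchet combination of a countable family p_1, p_2, ... can be written as:

```latex
p(x) \;:=\; \sum_{i=1}^{\infty} \frac{1}{2^{i}}\,\frac{p_i(x)}{1 + p_i(x)}, \qquad x \in X .
```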
As an F-seminorm
Assume that is an increasing sequence of seminorms on and let be the Fréchet combination of
Then is an F-seminorm on that induces the same locally convex topology as the family of seminorms.
Since is increasing, a basis of open neighborhoods of the origin consists of all sets of the form as ranges over all positive integers and ranges over all positive real numbers.
The translation-invariant pseudometric on X induced by this F-seminorm is d(x, y) := p(x − y).
This metric was discovered by Fréchet in his 1906 thesis for the spaces of real and complex sequences with pointwise operations.
As a paranorm
If each is a paranorm then so is and moreover, induces the same topology on as the family of paranorms.
This is also true of the following paranorms on :
Generalization
The Fréchet combination can be generalized by use of a bounded remetrization function.
A bounded remetrization function is a continuous non-negative non-decreasing map R : [0, ∞) → [0, ∞) that has a bounded range, is subadditive (meaning that R(s + t) ≤ R(s) + R(t) for all s, t ≥ 0), and satisfies R(s) = 0 if and only if s = 0.
Examples of bounded remetrization functions include t ↦ arctan t, t ↦ t/(1 + t), and t ↦ min{t, 1}.
If is a pseudometric (respectively, metric) on and is a bounded remetrization function then is a bounded pseudometric (respectively, bounded metric) on that is uniformly equivalent to
Suppose that is a family of non-negative F-seminorm on a vector space is a bounded remetrization function, and is a sequence of positive real numbers whose sum is finite.
Then
defines a bounded F-seminorm that is uniformly equivalent to the
It has the property that for any net in if and only if for all
It is an F-norm if and only if the F-seminorms in the family separate points on X.
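One natural explicit form consistent with the description above (the exact formula is an assumption; r_i denotes the positive weights with finite sum, R the bounded remetrization function and p_i the given F-seminorms) is:

```latex
q(x) \;:=\; \sum_{i=1}^{\infty} r_i \, R\bigl(p_i(x)\bigr), \qquad x \in X .
```

Since R is non-decreasing and subadditive, each term is subadditive in x, and the finiteness of the sum of the weights together with the boundedness of R makes q a bounded F-seminorm.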
Characterizations
Of (pseudo)metrics induced by (semi)norms
A pseudometric (resp. metric) d is induced by a seminorm (resp. norm) on a vector space X if and only if d is translation invariant and absolutely homogeneous, which means that d(sx, sy) = |s| d(x, y) for all scalars s and all x, y ∈ X, in which case the function defined by p(x) := d(x, 0) is a seminorm (resp. norm) and the pseudometric (resp. metric) induced by p is equal to d.
Of pseudometrizable TVS
If is a topological vector space (TVS) (where note in particular that is assumed to be a vector topology) then the following are equivalent:
is pseudometrizable (i.e. the vector topology is induced by a pseudometric on ).
has a countable neighborhood base at the origin.
The topology on is induced by a translation-invariant pseudometric on
The topology on is induced by an F-seminorm.
The topology on is induced by a paranorm.
Of metrizable TVS
If is a TVS then the following are equivalent:
is metrizable.
is Hausdorff and pseudometrizable.
is Hausdorff and has a countable neighborhood base at the origin.
The topology on is induced by a translation-invariant metric on
The topology on is induced by an F-norm.
The topology on is induced by a monotone F-norm.
The topology on is induced by a total paranorm.
Of locally convex pseudometrizable TVS
If is TVS then the following are equivalent:
is locally convex and pseudometrizable.
has a countable neighborhood base at the origin consisting of convex sets.
The topology of is induced by a countable family of (continuous) seminorms.
The topology of is induced by a countable increasing sequence of (continuous) seminorms (increasing means that for all
The topology of is induced by an F-seminorm of the form:
where are (continuous) seminorms on
Quotients
Let be a vector subspace of a topological vector space
If is a pseudometrizable TVS then so is
If is a complete pseudometrizable TVS and is a closed vector subspace of then is complete.
If is metrizable TVS and is a closed vector subspace of then is metrizable.
If is an F-seminorm on then the map defined by
is an F-seminorm on that induces the usual quotient topology on If in addition is an F-norm on and if is a closed vector subspace of then is an F-norm on
Examples and sufficient conditions
Every seminormed space (X, p) is pseudometrizable with a canonical pseudometric given by d(x, y) := p(x − y) for all x, y ∈ X.
If (X, d) is a pseudometrizable TVS with a translation-invariant pseudometric d, then p(x) := d(x, 0) defines a paranorm. However, if d is a translation-invariant pseudometric on the vector space X (without the additional condition that (X, d) is a TVS), then the map p(x) := d(x, 0) need not be either an F-seminorm or a paranorm.
If a TVS has a bounded neighborhood of the origin then it is pseudometrizable; the converse is in general false.
If a Hausdorff TVS has a bounded neighborhood of the origin then it is metrizable.
Suppose is either a DF-space or an LM-space. If is a sequential space then it is either metrizable or else a Montel DF-space.
If is Hausdorff locally convex TVS then with the strong topology, is metrizable if and only if there exists a countable set of bounded subsets of such that every bounded subset of is contained in some element of
The strong dual space of a metrizable locally convex space (such as a Fréchet space) is a DF-space.
The strong dual of a DF-space is a Fréchet space.
The strong dual of a reflexive Fréchet space is a bornological space.
The strong bidual (that is, the strong dual space of the strong dual space) of a metrizable locally convex space is a Fréchet space.
If is a metrizable locally convex space then its strong dual has one of the following properties, if and only if it has all of these properties: (1) bornological, (2) infrabarreled, (3) barreled.
Normability
A topological vector space is seminormable if and only if it has a convex bounded neighborhood of the origin.
Moreover, a TVS is normable if and only if it is Hausdorff and seminormable.
Every metrizable TVS on a finite-dimensional vector space is a normable locally convex complete TVS, being TVS-isomorphic to Euclidean space. Consequently, any metrizable TVS that is normable must be infinite dimensional.
If is a metrizable locally convex TVS that possess a countable fundamental system of bounded sets, then is normable.
If is a Hausdorff locally convex space then the following are equivalent:
is normable.
has a (von Neumann) bounded neighborhood of the origin.
the strong dual space of is normable.
and if this locally convex space is also metrizable, then the following may be appended to this list:
the strong dual space of is metrizable.
the strong dual space of is a Fréchet–Urysohn locally convex space.
In particular, if a metrizable locally convex space (such as a Fréchet space) is normable then its strong dual space is not a Fréchet–Urysohn space and consequently, this complete Hausdorff locally convex space is also neither metrizable nor normable.
Another consequence of this is that if is a reflexive locally convex TVS whose strong dual is metrizable then is necessarily a reflexive Fréchet space, is a DF-space, both and are necessarily complete Hausdorff ultrabornological distinguished webbed spaces, and moreover, is normable if and only if is normable if and only if is Fréchet–Urysohn if and only if is metrizable. In particular, such a space is either a Banach space or else it is not even a Fréchet–Urysohn space.
Metrically bounded sets and bounded sets
Suppose that is a pseudometric space and
The set is metrically bounded or -bounded if there exists a real number such that for all ;
the smallest such is then called the diameter or -diameter of
If is bounded in a pseudometrizable TVS then it is metrically bounded;
the converse is in general false but it is true for locally convex metrizable TVSs.
Properties of pseudometrizable TVS
Every metrizable locally convex TVS is a quasibarrelled space, bornological space, and a Mackey space.
Every complete metrizable TVS is a barrelled space and a Baire space (and hence non-meager). However, there exist metrizable Baire spaces that are not complete.
If is a metrizable locally convex space, then the strong dual of is bornological if and only if it is barreled, if and only if it is infrabarreled.
If is a complete pseudometrizable TVS and is a closed vector subspace of then is complete.
The strong dual of a locally convex metrizable TVS is a webbed space.
If two complete metrizable TVS topologies on the same vector space (i.e. two F-space topologies) are comparable, that is, one is coarser than the other, then they are equal; this is no longer guaranteed to be true if any one of these metrizable TVSs is not complete. Said differently, if the two F-space topologies are different, then neither one contains the other as a subset. One particular consequence of this is, for example, that if X is a Banach space and Y is some other normed space on the same vector space whose norm-induced topology is finer than (or alternatively, is coarser than) that of X, then the only way that Y can be a Banach space (i.e. also be complete) is if these two norms are equivalent; if they are not equivalent, then Y can not be a Banach space.
As another consequence, if is a Banach space and is a Fréchet space, then the map is continuous if and only if the Fréchet space the TVS (here, the Banach space is being considered as a TVS, which means that its norm is "forgotten" but its topology is remembered).
A metrizable locally convex space is normable if and only if its strong dual space is a Fréchet–Urysohn locally convex space.
Any product of complete metrizable TVSs is a Baire space.
A product of metrizable TVSs is metrizable if and only if all but at most countably many of these TVSs have dimension 0.
A product of pseudometrizable TVSs is pseudometrizable if and only if all but at most countably many of these TVSs have the trivial topology.
Every complete metrizable TVS is a barrelled space and a Baire space (and thus non-meager).
The dimension of a complete metrizable TVS is either finite or uncountable.
Completeness
Every topological vector space (and more generally, a topological group) has a canonical uniform structure, induced by its topology, which allows the notions of completeness and uniform continuity to be applied to it.
If X is a metrizable TVS and d is a metric that defines X's topology, then it is possible that X is complete as a TVS (i.e. relative to its uniformity) but the metric d is not a complete metric (such metrics exist even for X = ℝ).
Thus, if is a TVS whose topology is induced by a pseudometric then the notion of completeness of (as a TVS) and the notion of completeness of the pseudometric space are not always equivalent.
The next theorem gives a condition for when they are equivalent:
If is a closed vector subspace of a complete pseudometrizable TVS then the quotient space is complete.
If is a vector subspace of a metrizable TVS and if the quotient space is complete then so is If is not complete then but not complete, vector subspace of
A Baire separable topological group is metrizable if and only if it is cosmic.
Subsets and subsequences
Let be a separable locally convex metrizable topological vector space and let be its completion. If is a bounded subset of then there exists a bounded subset of such that
Every totally bounded subset of a locally convex metrizable TVS is contained in the closed convex balanced hull of some sequence in that converges to
In a pseudometrizable TVS, every bornivore is a neighborhood of the origin.
If is a translation invariant metric on a vector space then for all and every positive integer
If is a null sequence (that is, it converges to the origin) in a metrizable TVS then there exists a sequence of positive real numbers diverging to such that
A subset of a complete metric space is closed if and only if it is complete. If a space is not complete, then is a closed subset of that is not complete.
If is a metrizable locally convex TVS then for every bounded subset of there exists a bounded disk in such that and both and the auxiliary normed space induce the same subspace topology on
Generalized series
As described in this article's section on generalized series, for any family of vectors from a TVS indexed by a set I, it is possible to define their sum as the limit of the net of finite partial sums, where the domain of the net is the collection of finite subsets of I directed by inclusion.
If and for instance, then the generalized series converges if and only if converges unconditionally in the usual sense (which for real numbers, is equivalent to absolute convergence).
If a generalized series converges in a metrizable TVS, then the set is necessarily countable (that is, either finite or countably infinite);
in other words, all but at most countably many will be zero and so this generalized series is actually a sum of at most countably many non-zero terms.
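In symbols (writing I for the index set, a notational assumption since the symbol is not shown above), the generalized sum is the limit of the net of finite partial sums:

```latex
\sum_{i \in I} x_i \;:=\; \lim_{A} \sum_{i \in A} x_i ,
\qquad A \subseteq I \ \text{finite, the finite subsets being directed by } \subseteq .
```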
Linear maps
If is a pseudometrizable TVS and maps bounded subsets of to bounded subsets of then is continuous.
Discontinuous linear functionals exist on any infinite-dimensional pseudometrizable TVS. Thus, a pseudometrizable TVS is finite-dimensional if and only if its continuous dual space is equal to its algebraic dual space.
If is a linear map between TVSs and is metrizable then the following are equivalent:
is continuous;
is a (locally) bounded map (that is, maps (von Neumann) bounded subsets of to bounded subsets of );
is sequentially continuous;
the image under of every null sequence in is a bounded set, where by definition a null sequence is a sequence that converges to the origin.
maps null sequences to null sequences;
Open and almost open maps
Theorem: If is a complete pseudometrizable TVS, is a Hausdorff TVS, and is a closed and almost open linear surjection, then is an open map.
Theorem: If is a surjective linear operator from a locally convex space onto a barrelled space (e.g. every complete pseudometrizable space is barrelled) then is almost open.
Theorem: If is a surjective linear operator from a TVS onto a Baire space then is almost open.
Theorem: Suppose is a continuous linear operator from a complete pseudometrizable TVS into a Hausdorff TVS If the image of is non-meager in then is a surjective open map and is a complete metrizable space.
Hahn-Banach extension property
A vector subspace of a TVS has the extension property if any continuous linear functional on can be extended to a continuous linear functional on
Say that a TVS has the Hahn-Banach extension property (HBEP) if every vector subspace of has the extension property.
The Hahn-Banach theorem guarantees that every Hausdorff locally convex space has the HBEP.
For complete metrizable TVSs there is a converse: every complete metrizable TVS with the Hahn-Banach extension property is locally convex.
If a vector space has uncountable dimension and if we endow it with the finest vector topology then this is a TVS with the HBEP that is neither locally convex nor metrizable.
See also
Notes
Proofs
References
Bibliography
Metric spaces
Topological vector spaces | Metrizable topological vector space | [
"Mathematics"
] | 5,951 | [
"Mathematical structures",
"Vector spaces",
"Topological vector spaces",
"Space (mathematics)",
"Metric spaces"
] |
57,687,305 | https://en.wikipedia.org/wiki/Dissimilar%20friction%20stir%20welding | Dissimilar friction stir welding (DFSW) is the application of friction stir welding (FSW), invented at The Welding Institute (TWI) in 1991, to join different base metals including aluminum, copper, steel, titanium, magnesium and other materials. It is based on solid-state welding, which means there is no melting. DFSW relies on the frictional heat generated by a simple tool to soften the materials and stir them together using both the tool's rotational and traverse movements. Initially, it was mainly used for joining aluminum base metals because fusion welding methods produce solidification defects in such joints, for example porosity along with thick intermetallic compounds. Over the last decade, DFSW has come to be regarded as an efficient method to join dissimilar materials. There are many advantages of DFSW in comparison with other welding methods, including low cost, user-friendliness, and an easy operation procedure, which has led to widespread use of friction stir welding for dissimilar joints. A welding tool, the base materials, a backing plate (fixture), and a milling machine are the required materials and equipment for DFSW. On the other hand, other welding methods, such as Shielded Metal Arc Welding (SMAW), typically need a highly skilled operator as well as quite expensive equipment.
Principle of operation
The mechanism of DFSW is very simple. A rotating tool plunges into the interface of the parent metals, and the heat generated by the friction between the tool shoulder surface and the top surface of the base metals leads to softening of the base materials. In other words, the rotational movement of the tool mixes and stirs the parent metals and creates a softened pasty mixture. Afterwards, the tool's traverse movement along the interface creates a joint. This results in a final bond that combines both mechanical and metallurgical bonding at the interface. These two bondings are critical in order to achieve proper mechanical properties. Butt and lap designs are the most common joint types in dissimilar friction stir welding (DFSW). Likewise, one material is generally harder than the other. In general, the hard and soft materials are placed on the advancing and retreating sides respectively during welding.
Tool Geometry
Tool configuration is an important factor to achieve a sound joint. The tool consists of two parts including tool shoulder and tool pin, as shown in below figure. The tool shoulder generates frictional heat, while the tool pin stirs the softened materials. Various pin and shoulder configurations may be used for DFSW. "Cylindrical", "rectangular", "triangular" and "threaded-cylindrical" are the most common tool pin profiles, while "featureless" and "scrolled" are the most common tool shoulder configurations. Tool material selection is dependent on the base materials to be joined. For example, for aluminum/copper joints, hot working alloy steel is generally used, while for harder metals such as titanium/aluminum joints, tungsten carbide is common.
Welding Parameters
In DFSW, mechanical properties mainly include tensile strength, hardness, yield strength, and elongation. Selecting optimum welding parameters results in achieving proper mechanical properties of the joint. Tool rotational speed (rpm), tool traverse speed (mm/min), tool tilt angle (degree), tool offset (mm), tool penetration (mm), and tool geometry are the most important welding parameters in DFSW. The tool center is typically placed on the centerline of the joint for similar joints such as aluminum/aluminum or copper/copper joints; in contrast, for dissimilar joints it is shifted towards the softer material, a shift called tool offset. Tool offset is a significant factor in achieving a joint with smaller welding defects and higher mechanical properties. Generally, the harder and softer materials are placed on the Advancing Side (AS) and Retreating Side (RS) respectively. Regardless of the tool geometry, which plays a critical role in the final mechanical and metallurgical properties of the weldment, the tool rotational speed and tool offset are taken into account as the most important welding parameters during DFSW.
Heat Generation
A non-consumable rotating tool is plunged into the interface of the parent materials. Frictional heat arising from the tool shoulder throughout welding plasticizes the parent materials, leading to local plastic deformation. The localized heat generated by the tool results from the following process. At the initial stage, it arises primarily from frictional heat between the plunged pin and the parent materials. Afterwards, it is mainly produced by the friction between the shoulder surface and the top surface of the base metals once the shoulder touches the top surface. Subsequently, the softened materials are stirred together by the rotating pin, resulting in a solid-state bond. Frigaard et al. showed that the tool rotational speed and the tool shoulder diameter are the main contributing factors in heat generation.
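To illustrate how these quantities enter, the sketch below evaluates the elementary sliding-friction estimate for a flat circular shoulder under uniform contact pressure, Q = (2/3) π μ p ω R³; this is a generic order-of-magnitude model rather than the specific expression of Frigaard et al., and all parameter values are invented.

```python
import math

def shoulder_heat_rate(mu, pressure, rpm, shoulder_radius):
    """Frictional heat rate (W) for a flat circular shoulder, pure sliding.

    Q = integral over the shoulder of (friction stress * sliding speed)
      = (2/3) * pi * mu * p * omega * R^3
    mu: friction coefficient, pressure: contact pressure (Pa),
    rpm: tool rotational speed, shoulder_radius: shoulder radius (m).
    """
    omega = 2 * math.pi * rpm / 60.0        # angular speed in rad/s
    return (2.0 / 3.0) * math.pi * mu * pressure * omega * shoulder_radius ** 3

# Hypothetical values: mu = 0.4, 50 MPa contact pressure, 1000 rpm, 10 mm shoulder radius
print(round(shoulder_heat_rate(0.4, 50e6, 1000, 0.010), 1), "W")   # a few kW, a typical FSW scale
```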
Material Flow
The mechanism of bonding in DFSW is based on two simple concepts. First, the stirred material, a mixed flow of soft and hard metals, is forged into the interface of the harder material, leading to a strong mechanical bond at the interface. Furthermore, a complementary metallurgical bond is formed at the interface, enhancing and improving the mechanical properties of the joint. Material flow throughout DFSW depends on various parameters, including the welding process parameters, tool geometry, and base materials. Tool geometry is the most important factor in achieving appropriate material flow.
Defects
The occurrence of welding defects in DFSW is quite common. Welding defects in DFSW include tunneling defects, fragment defects, cracks, voids, surface cavities or grooves, and excessive flash formation. Amongst these, the tunneling defect is the most common defect in DFSW, resulting from improper material flow throughout welding. It is mainly attributed to inappropriate selection of welding parameters, particularly welding speed, rotational speed, tool design and tool penetration, leading to either abnormal stirring or insufficient heat input. Formation of coarse fragments of the harder material within the matrix of the softer material is another typical defect observed only in DFSW. Generally, during DFSW, the pasty materials behave like a metal matrix composite such that the softer and harder materials act as the matrix and the reinforcement respectively. In fact, it is quite important to keep the harder material in relatively small pieces in order to achieve the best flow of material. Therefore, any factor that causes the formation of large pieces of the harder material leads to the appearance of fragment defects. Tool offset and tool pin design are taken into account as the most significant contributing factors in the formation of fragment defects in DFSW, since they can disturb the flow of material by allowing large pieces of the harder material to form within the matrix of the softer material; it is quite difficult to stir and mix the pasty materials when one of them is not relatively fine. In addition, fragment defects usually accompany other defects such as voids and cracks.
Typical Characteristics
DFSW shows various characteristics in terms of hardness distribution, tensile strength, microstructure, formation of intermetallic compounds as well as formation of a composite structure within the stir zone. The majority of the dissimilar joints fabricated by FSW demonstrate similar results.
Hardness
Since the base materials have different mechanical properties, the hardness distribution is not homogeneous, which can be attributed to two different reasons. First, the different mechanical properties of the base materials, including hardness, cause inhomogeneity in the weldments. Second, the different microstructures and grain sizes of the welding zones, including the stir zone, TMAZ, and HAZ, result in varying hardness. Moreover, the hardness in the nugget zone or stir zone is very inhomogeneous because of the formation of onion rings (composite structure) and IMCs. As a result, dissimilar joints show an inhomogeneous hardness distribution in the nugget zone or stir zone.
Microstructure
Four different welding zones, including the Stir Zone (SZ) or nugget zone, the Thermo-Mechanically Affected Zone (TMAZ), the Heat Affected Zone (HAZ) and the Base Metals (BM), are typically observed in dissimilar joints made by FSW. The microstructure of the weldment demonstrates a remarkable grain refinement in the stir zone along with elongation of the grains in the TMAZ. Intense plastic deformation caused by the tool action, i.e. its rotational and traverse movements, accounts for the notable grain refinement in the stir zone. Moreover, the HAZ presents relatively coarser grains, which can be attributed to a lower cooling rate in comparison with the other welding areas. Some phenomena are typical of dissimilar friction stir welding, including the formation of Intermetallic Compounds (IMCs) and the appearance of a Composite-like Structure (CS) in various patterns, specifically onion rings, as shown in the figure below. IMCs and CS enhance the mechanical behaviour of the joints depending on their condition, such as the thickness of the IMCs and the distribution pattern of the composite-like structure. Proper selection of welding parameters optimizes the formation of IMCs and CS, resulting in the highest mechanical properties. As pointed out before, rotational speed, welding speed, and tool offset, along with the tool pin, are the most important factors affecting the mechanical and metallurgical properties during DFSW. Unlike conventional fusion welding methods, which are accompanied by substantially thick interfacial IMCs, forming an interfacial metallurgical bond during DFSW is essential to achieve a sound joint. However, this bond should be kept at an optimum condition to enhance and improve the mechanical properties, i.e. it should be thin, uniform and continuous.
IMCs
IMCs are another typical phenomenon in DFSW. There exist some criteria for the IMCs in order to achieve a sound joint, including their thickness, uniformity and continuity. The most common types of IMCs appearing in aluminum/copper joints are Al4Cu9, Al2Cu3 and Al2Cu. The interface and the outer edges of the particles dispersed in the nugget zone are the two main places where IMCs form. Likewise, depending on the size of the particles of the harder material dispersed in the matrix of the softer material, coarse particles partially transform to IMCs, mostly around their outer edges, while fine particles completely transform to IMCs. It is worth noting that the average thickness of the IMCs is less than 2 micrometres. Therefore, particles that are smaller than 2 micrometres completely transform to IMCs, resulting in enhanced mechanical properties of the nugget zone.
Tensile Strength
Another important characteristic in DFSW is the final tensile strength. The majority of dissimilar weldments present a similar trend in tensile strength. There are two different materials in DFSW, one softer than the other; for example, in an aluminum to copper joint, aluminum is softer than copper. What would be the tensile strength of the joint? Is it more than both? Is it less than both? What is the requirement for a sound joint? The answer is that the tensile strength of a joint in DFSW is a fraction of the tensile strength of the softer material. Therefore, the final tensile strength of the weldment is usually less than the tensile strength of both base materials; however, in order to be acceptable in industry, it is usually more than 70 percent of the tensile strength of the softer material. The fracture behaviour of the tensile specimens shows that the majority of the joints fail at the interface with a brittle fracture, which can be attributed to the IMCs developed at the interface. Although the IMCs can successfully improve the tensile strength, the specimens show brittle fracture, which is one of the existing challenges in dissimilar joints fabricated by FSW.
Formation of composite structure
Because there are two different materials in DFSW, the formation of a composite structure within the nugget zone is inevitable. Typically, it appears in the form of onion rings in the nugget zone or stir zone of the softer matrix, as shown in the figure below. That is, fine particles of the material on the advancing side (the harder material) disperse throughout the stir zone of the retreating material (the softer material). This is the main reason for the inhomogeneous hardness distribution in the stir zone.
Challenge
FSW can be an efficient method for joining dissimilar materials, and the outcomes in terms of tensile strength, shear strength, and hardness distribution are promising. However, most of the joints fracture at the interface. Moreover, even those that rupture in the base metals show brittle behaviour, i.e. low elongation, which can be attributed to the formation of IMCs. There must be a balance between the tensile strength and the ductility of the weldments in order to use dissimilar weldments safely in industrial applications. In other words, proper ductility and toughness are required for some industrial applications, since the joints should possess proper resistance against impact and shock loading. The majority of the fabricated weldments are not sufficiently strong to be used for such applications. Therefore, it is worthwhile to focus current and future work on improving the toughness of the weldments while keeping the tensile strength at a proper value.
References
Welding
Friction
Friction stir welding | Dissimilar friction stir welding | [
"Physics",
"Chemistry",
"Engineering"
] | 2,660 | [
"Mechanical phenomena",
"Physical phenomena",
"Force",
"Friction",
"Physical quantities",
"Welding",
"Surface science",
"Mechanical engineering"
] |
57,687,722 | https://en.wikipedia.org/wiki/Spinterface | Spinterface is a term coined to indicate an interface between a ferromagnet and an organic semiconductor.
This is a widely investigated topic in molecular spintronics, since the role of interfaces plays a huge part in the functioning of a device. In particular, spinterfaces are widely studied in the scientific community because of their hybrid organic/inorganic composition. In fact, the hybridization between the metal and the organic material can be controlled by acting on the molecules, which are more responsive to electrical and optical stimuli than metals. This gives rise to the possibility of efficiently tuning the magnetic properties of the interface at the atomic scale.
History
The field of spintronics, which is the scientific field that aims to study the spin-dependent electron transport in solid-state devices, emerged in the last decades of the 20th century, first with the observation of the injection of a spin-polarized current from a ferromagnetic to a paramagnetic metal and subsequently with the discovery of tunnel magnetoresistance and giant magnetoresistance. The field evolved turning towards spin-orbit related phenomena, such as Rashba effect. Only more recently, spintronics has been extended to the organic world, with the idea of exploiting the weak spin-relaxation mechanisms of molecules in order to use them for spin transport. Research in this field started off with hybrid replicas of inorganic spintronic devices, such as spin valves and magnetic tunneling junctions, trying to obtain spin transport in molecular films. However some devices didn't behave as expected, for example vertical spin valves displaying a negative magnetoresistance. It was then quickly understood that the molecular layers don't just play a transport role but they can also act on the spin polarization of the ferromagnet at the interface. Because of this, the interest on ferromagnet/organic interfaces rapidly increased in the scientific community and the term "spinterface" was born. The research is currently aimed at building devices with interfaces engineered in order to tailor the spin injection.
Scientific interest
The shrinking of device sizes and the attention towards low power consumption applications has led to an ever-growing attention towards the physics of surfaces and interfaces, which play a fundamental role in the functioning of many applications. The breaking of the bulk symmetry which occurs at a surface leads to different physical and chemical properties, which are sometimes impossible to find in the bulk material. In particular, when a solid-state material is interfaced with another solid, the terminations of the two different materials influence each other by means of chemical bonds. The behavior of the interface is highly influenced by the properties of the materials. In particular, in spinterfaces, a metal and an organic semiconductor, which display very different electronic properties, are interfaced and they usually form a strong hybridization. With the final aim of being able to tune and change the electronic and magnetic behavior of the interface, spinterfaces are studied both by inserting them into spintronic devices and, on a more basic level, by investigating the growth of ultra-thin molecular layers on ferromagnetic substates with a surface science approach. The scope of building such interfaces is on one side to exploit the spin-polarized character of the electronic structure of the ferromagnet to induce a spin polarization in the molecular layer and, on the other hand, to influence the magnetic character of the ferromagnetic layer by means of hybridization. Combining this with the fact that usually molecules have a very high responsivity to stimuli (typically impossible to achieve in inorganic materials) there is the hope of being able to easily change the character of the hybridization, hence tuning the properties of the spinterface. This could give rise to a new class of spintronic devices, where the spinterface plays a fundamental and active role.
Physics and applications
Organic semiconductors are currently used in various applications, for example OLED displays, which can be flexible, thinner, faster and more power efficient than LCD screens, and organic field-effect transistors, intended for large, low-cost electronic products and biodegradable electronics.
In terms of spintronic applications, there are no available commercial devices yet, but the applied research is headed towards the use of spinterfaces mainly for magnetic tunneling junctions and organic spin valves.
Spin-Filtering
The physical principle that is mainly exploited when talking about spinterfaces is spin-filtering. This is simply schematized in the figure: when one considers the ferromagnet and the organic semiconductor on their own (panel a), the density of states (DOS) of the metal will be unbalanced between the two spin channels, with the difference of the up and down DOS at the Fermi level governing the spin polarization of the current flow; the DOS of the organic semiconductor will have no unbalance between the spin channels and will display localized energy levels, namely the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO), with zero DOS at the Fermi level. When the two materials are put into contact they influence each other's DOS at the interface: the main effects are a broadening of the molecular orbitals and a possible shift of their energy. These effects are in general spin-dependent, since they arise from the hybridization, which is strictly dependent on the DOS of the two materials, which is itself spin-unbalanced in the case of the ferromagnet. As a matter of example, panel b represents the case of a parallel injection of current, while panel c schematizes an antiparallel spin polarization of the current injected in the semiconductor. In this way, the injected current will be polarized according to the interface DOS at the Fermi level and, exploiting the fact that molecules usually have intrinsically weak spin-relaxation mechanisms, molecular layers are great candidates for spin transport applications. With a good material choice one is then able to filter the spins at the spinterface.
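As a toy numerical illustration of the filtering idea, the injected-current spin polarization can be estimated from the spin-resolved DOS at the Fermi level as P = (D↑ − D↓)/(D↑ + D↓); the sketch below simply applies this definition to a bare electrode and to a hypothetical hybridization-modified interface DOS, with all numbers invented.

```python
def spin_polarization(dos_up, dos_down):
    """P = (D_up - D_down) / (D_up + D_down), evaluated at the Fermi level."""
    return (dos_up - dos_down) / (dos_up + dos_down)

# Arbitrary illustrative DOS values (same units for both channels):
print(spin_polarization(0.7, 0.3))   # bare ferromagnetic electrode: +0.5
print(spin_polarization(0.2, 0.6))   # hypothetical spinterface DOS: -0.5 (sign inversion)
```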
Magnetic Tunneling Junction
Applied research on spinterfaces is often focused on studying the tunnel magnetoresistance (TMR) in hybrid magnetic tunneling junctions (MTJs). Conventional MTJs are composed of two ferromagnetic electrodes separated by an insulating layer, thin enough for electron tunneling events to be relevant. The idea of using spinterfaces consists in replacing the inorganic insulating barrier with an organic one. The motivation for this is given by the flexibility, low cost and higher spin-relaxation times of molecules and the possibility of chemically engineering the interfaces. The physical principle behind MTJs is that the tunneling of the junction is dependent on the relative orientation of the magnetization of the ferromagnetic electrodes. In fact, in the Jullière model, the tunneling current that passes through the junction is proportional to the sum of the products of the DOS of the single spin channels: for parallel alignment the conductance is proportional to D1↑D2↑ + D1↓D2↓, while for antiparallel alignment it is proportional to D1↑D2↓ + D1↓D2↑.
The picture of spin-dependent tunneling is represented in the figure, and what is observed is that usually there is a larger tunneling current in the case of parallel alignment of the electrode magnetizations. This is due to the fact that, in this case, the product of the two majority-spin densities of states is much larger than all the other terms, making the parallel conductance larger than the antiparallel one.
By changing the relative orientation of the magnetization of the electrodes it is possible to control the conductance state of the tunneling junction and use this principle for applications, for example read-heads of hard disk drives and MRAMs.
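The Jullière picture sketched above can be made concrete as follows: the snippet computes the parallel and antiparallel conductances from the spin-resolved DOS of the two electrodes and checks the result against the equivalent polarization form TMR = 2 P1 P2 / (1 − P1 P2); the numerical values are arbitrary illustrative choices.

```python
def julliere_tmr(d1_up, d1_dn, d2_up, d2_dn):
    """Tunnel magnetoresistance in the Jullière model (spin-independent transmission)."""
    g_parallel = d1_up * d2_up + d1_dn * d2_dn
    g_antiparallel = d1_up * d2_dn + d1_dn * d2_up
    return (g_parallel - g_antiparallel) / g_antiparallel

p1 = (0.7 - 0.3) / (0.7 + 0.3)                 # electrode-1 polarization
p2 = (0.8 - 0.2) / (0.8 + 0.2)                 # electrode-2 polarization
print(julliere_tmr(0.7, 0.3, 0.8, 0.2))        # from the DOS products: ~0.632
print(2 * p1 * p2 / (1 - p1 * p2))             # equivalent polarization form: same value
```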
If an organic material is inserted as tunneling barrier, the picture becomes more complex, as the formation of spin-hybridization-induced polarized states occurs. These states may affect the tunneling transmission coefficient, which is usually kept constant in the Jullière model. Barraud et al., in a Nature Physics paper, develop a spin transport model that takes into account the effect of the spinterface hybridization. What they observed is that the role of this hybridization in the spin tunneling process is not only relevant, but also capable of inverting the sign of the TMR. This opens the door to a new research front, aimed at tailoring the properties of spintronic devices through the right combination of ferromagnetic metals and molecules.
Spin Valves
Conventional spin valves are built in a very similar way with respect to magnetic tunneling junctions, the difference is that the two ferromagnetic electrodes are this time separated by a non-magnetic metal instead of an insulator. The physical principle exploited in this case is no longer related to tunneling but to electrical resistance.
The spin-polarized current, coming from one ferromagnetic electrode, can travel in a non-magnetic metal for a certain distance, given by the spin diffusion length of that metal. When the current enters another ferromagnetic material, the relative orientation of its magnetization with respect to the first electrode can lead to a change in the resistance of the junction: if the alignment of the magnetizations is parallel, the spin valve will exhibit a low resistance state, while, in the case of antiparallel alignment, reflection and spin-flip scattering events give rise to a high resistance state. From these considerations one can define and evaluate the magnetoresistance of the spin valve:
$$\mathrm{MR} = \frac{R_{\mathrm{AP}} - R_{\mathrm{P}}}{R_{\mathrm{P}}},$$
where $R_{\mathrm{AP}}$ and $R_{\mathrm{P}}$ are respectively the resistances for the antiparallel and parallel alignment.
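As a worked example of this definition (the resistance values below are assumed purely for illustration), a junction with $R_{\mathrm{P}} = 10\ \Omega$ and $R_{\mathrm{AP}} = 12\ \Omega$ would have

```latex
% Assumed resistance values, for illustration only.
\mathrm{MR} = \frac{R_{\mathrm{AP}} - R_{\mathrm{P}}}{R_{\mathrm{P}}}
            = \frac{12\ \Omega - 10\ \Omega}{10\ \Omega} = 0.2 = 20\,\%.
```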
The usual way of making both parallel and antiparallel alignment possible is either to pin one of the electrodes by means of exchange bias or to use different materials with different coercive fields for the two electrodes (pseudo spin valves). The proposed use of spinterfaces in spin valve applications is to interface one of the electrodes with a molecular layer, which is capable of tuning the magnetization properties of the electrode through a change in hybridization. This change of hybridization at the spinterface can in principle be induced both by light (making these systems suitable for ultra-fast applications) and by electric voltages. If this process is reversible, there is the possibility of switching from high to low resistance in a very effective way, making the devices faster and more efficient.
See also
Spintronics
Spin valve
Tunnel magnetoresistance
Giant magnetoresistance
Molecular electronics
Orbital hybridisation
References
Spintronics | Spinterface | [
"Physics",
"Materials_science"
] | 2,109 | [
"Spintronics",
"Condensed matter physics"
] |
57,687,737 | https://en.wikipedia.org/wiki/Incremental%20deformations | In solid mechanics, the linear stability analysis of an elastic solution is studied using the method of incremental deformations superposed on finite deformations. The method of incremental deformation can be used to solve static, quasi-static and time-dependent problems. The governing equations of the motion are those of classical mechanics, such as the conservation of mass and the balance of linear and angular momentum, which provide the equilibrium configuration of the material. The main corresponding mathematical framework is described in Raymond Ogden's book Non-linear elastic deformations and in Biot's book Mechanics of incremental deformations, which is a collection of his main papers.
Nonlinear Elasticity
Kinematics and Mechanics
Let $\mathcal{E}$ be a three-dimensional Euclidean space. Let $\mathcal{B}_0$ and $\mathcal{B}_a$ be two regions occupied by the material at two different instants of time. Let $\boldsymbol{\chi}$ be the deformation which transforms the tissue from $\mathcal{B}_0$, i.e. the material/reference configuration, to the loaded configuration $\mathcal{B}_a$, i.e. the current configuration. Let $\boldsymbol{\chi}$ be a $C^1$-diffeomorphism from $\mathcal{B}_0$ to $\mathcal{B}_a$, with $\mathbf{x} = \boldsymbol{\chi}(\mathbf{X})$ being the current position vector, given as a function of the material position $\mathbf{X}$. The deformation gradient is given by
$$\mathbf{F} = \operatorname{Grad}\,\mathbf{x} = \frac{\partial \boldsymbol{\chi}(\mathbf{X})}{\partial \mathbf{X}}.$$
Considering a hyperelastic material with an elastic strain energy density $W$, the Piola–Kirchhoff stress tensor $\mathbf{S}$ is given by $\mathbf{S} = \partial W/\partial \mathbf{F}$.
For a quasi-static problem, without body forces, the equilibrium equation is
$$\operatorname{Div}\,\mathbf{S} = \mathbf{0},$$
where $\operatorname{Div}$ denotes the divergence with respect to the material coordinates.
If the material is incompressible, i.e. the volume of every subdomain does not change during the deformation, a Lagrangian multiplier $p$ is typically introduced to enforce the internal isochoric constraint $\det \mathbf{F} = 1$. The expression of the Piola stress tensor then becomes
$$\mathbf{S} = \frac{\partial W}{\partial \mathbf{F}} - p\,\mathbf{F}^{-T}.$$
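As an illustration of these definitions, the following minimal Python sketch (not part of the article; the neo-Hookean energy and the parameter values are assumptions chosen only for concreteness) builds the deformation gradient for a homogeneous uniaxial stretch, checks the isochoric constraint, and evaluates the incompressible Piola stress:

```python
# Minimal sketch (assumed material model): incompressible neo-Hookean solid with
# W = mu/2 * (tr(F^T F) - 3), so that S = dW/dF - p*F^{-T} = mu*F - p*F^{-T}.
import numpy as np

mu = 1.0    # assumed shear modulus
lam = 1.2   # assumed stretch along the first axis

# Homogeneous isochoric uniaxial stretch: x1 = lam*X1, x2 = X2/sqrt(lam), x3 = X3/sqrt(lam)
F = np.diag([lam, lam**-0.5, lam**-0.5])
assert np.isclose(np.linalg.det(F), 1.0)   # isochoric constraint det F = 1

# Lagrange multiplier fixed by requiring traction-free lateral faces (S22 = S33 = 0)
p = mu / lam
S = mu * F - p * np.linalg.inv(F).T        # first Piola-Kirchhoff stress
print(np.round(S, 6))                      # only S[0, 0] = mu*(lam - lam**-2) is non-zero
```

For this assumed material the sketch reproduces the classical uniaxial result $S_{11} = \mu(\lambda - \lambda^{-2})$.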
Boundary conditions
Let be the boundary of , the reference configuration, and , the boundary of , the current configuration. One defines the subset of on which Dirichlet conditions are applied, while Neumann conditions hold on , such that . If is the displacement vector to be assigned at the portion and is the traction vector to be assigned to the portion , the boundary conditions can be written in mixed-form, such as
where is the displacement and the vector is the unit outward normal to .
Basic solution
The defined problem is called the boundary value problem (BVP). Hence, let be a solution of the BVP. Since the governing equations depend nonlinearly on the deformation gradient, this solution is generally not unique, and it depends on the geometrical and material parameters of the problem. One therefore employs the method of incremental deformations in order to highlight the existence of an adjacent solution at a critical value of a dimensionless parameter, called the control parameter, which "controls" the onset of the instability. This means that by increasing the value of this parameter, at a certain point new solutions appear. Hence, the selected basic solution is no longer the stable one; it becomes unstable. Physically, beyond this point the stored energy of the basic solution, i.e. the integral of the energy density over the whole domain, is larger than that of the new solutions. To restore equilibrium, the material moves to another configuration which has lower energy.
Method of incremental deformations superposed on finite deformations
To apply this method, one superposes a small displacement $\delta\mathbf{x}$ on the finite deformation basic solution $\mathbf{x}^{(0)}$, so that
$$\bar{\mathbf{x}} = \mathbf{x}^{(0)} + \delta\mathbf{x},$$
where $\bar{\mathbf{x}}$ is the perturbed position and $\delta\mathbf{x}$ maps the basic position vector into the perturbed configuration.
In the following, the incremental variables are indicated by $\delta(\bullet)$, while the perturbed ones are indicated by $\bar{(\bullet)}$.
Deformation gradient
The perturbed deformation gradient is given by
$$\bar{\mathbf{F}} = \mathbf{F} + \delta\mathbf{F}, \qquad \delta\mathbf{F} = (\operatorname{grad}\,\delta\mathbf{x})\,\mathbf{F},$$
where $\operatorname{grad}$ is the gradient operator with respect to the current configuration.
Stresses
The perturbed Piola stress is given by:
where denotes the contraction between two tensors, a fourth-order tensor and a second-order tensor . Since depends on through , its expression can be rewritten by emphasizing this dependence, as
If the material is incompressible, one gets
where is the increment in and is called the elastic moduli associated to the pairs .
It is useful to introduce the push-forward of the perturbed Piola stress, defined as
where is also known as the tensor of instantaneous moduli, whose components are:
.
Incremental governing equations
Expanding the equilibrium equation around the basic solution, one gets
Since is the solution to the equation at the zero-order, the incremental equation can be rewritten as
where $\operatorname{div}$ is the divergence operator with respect to the current configuration.
The incremental incompressibility constraint reads
Expanding this equation around the basic solution, as before, one gets
Incremental boundary conditions
Let and be the prescribed increments of and respectively. Hence, the perturbed boundary conditions are
where is the incremental displacement and .
Solution of the incremental problem
The incremental equations
represent the incremental boundary value problem (BVP) and define a system of partial differential equations (PDEs). The unknowns of the problem depend on the considered case. In the first case, i.e. the compressible case, there are three unknowns, namely the components of the incremental deformation , linked to the perturbed deformation by the relation . In the incompressible case, one also has to take into account the increment of the Lagrange multiplier , introduced to impose the isochoric constraint.
The main difficulty in solving this problem is to transform it into a form more suitable for implementing an efficient and robust numerical solution procedure. The one used in this area is the Stroh formalism. It was originally developed by Stroh for a steady-state elastic problem and allows the set of four PDEs with the associated boundary conditions to be transformed into a set of first-order ODEs with initial conditions. The number of equations depends on the dimension of the space in which the problem is set. To do this, one has to apply separation of variables and assume periodicity in a given direction, depending on the considered situation. In particular cases, the system can be rewritten in a compact form by using the Stroh formalism. Indeed, the shape of the system looks like
where is the vector which contains all the unknowns of the problem, is the only variable on which the rewritten problem depends, and the matrix is the so-called Stroh matrix, which has the following form
where each block is a matrix whose dimension depends on the dimension of the problem. Moreover, a crucial property of this approach is that , i.e. is the Hermitian conjugate of .
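A generic sketch of such a Stroh-type system is shown below; the symbols ($\boldsymbol{\eta}$ for the vector of unknowns, $z$ for the remaining independent variable, $\mathbf{G}$ for the Stroh matrix and its blocks) are assumptions chosen here only for illustration and are not taken from the references:

```latex
% Assumed notation: eta collects the unknowns, z is the variable left after
% separation of variables, and the block relation instantiates the Hermitian
% property mentioned in the text.
\frac{\mathrm{d}\boldsymbol{\eta}(z)}{\mathrm{d}z} = \mathbf{G}(z)\,\boldsymbol{\eta}(z),
\qquad
\mathbf{G} =
\begin{pmatrix}
  \mathbf{G}_1 & \mathbf{G}_2 \\
  \mathbf{G}_3 & \mathbf{G}_4
\end{pmatrix},
\qquad
\mathbf{G}_4 = \mathbf{G}_1^{\dagger} .
```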
Conclusion and remark
The Stroh formalism provides an optimal form to solve a great variety of elastic problems. Optimal means that one can construct an efficient numerical procedure to solve the incremental problem. By solving the incremental boundary value problem, one finds the relations among the material and geometrical parameters of the problem and the perturbation modes by which the wave propagates in the material, i.e. the modes that signal the instability. Everything depends on the selected parameter, denoted as the control one.
By this analysis, in a graph perturbation mode vs control parameter, the minimum value of the perturbation mode represents the first mode at which one can see the onset of the instability. For instance, in the picture, the first value of the mode in which the instability emerges is around since the trivial solution and does not have to be considered.
See also
Deformation (mechanics)
Elastic instability
Continuum mechanics
References
Elasticity (physics) | Incremental deformations | [
"Physics",
"Materials_science"
] | 1,534 | [
"Deformation (mechanics)",
"Physical phenomena",
"Physical properties",
"Elasticity (physics)"
] |
51,513,135 | https://en.wikipedia.org/wiki/Graph%20%28topology%29 | In topology, a branch of mathematics, a graph is a topological space which arises from a usual graph by replacing vertices by points and each edge by a copy of the unit interval $[0,1]$, where $0$ is identified with the point associated to one endpoint of the edge and $1$ with the point associated to the other endpoint. That is, as topological spaces, graphs are exactly the simplicial 1-complexes and also exactly the one-dimensional CW complexes.
Thus, in particular, it bears the quotient topology of the set
under the quotient map used for gluing. Here is the 0-skeleton (consisting of one point for each vertex ), are the closed intervals glued to it, one for each edge , and is the disjoint union.
The topology on this space is called the graph topology.
Subgraphs and trees
A subgraph of a graph is a subspace which is also a graph and whose nodes are all contained in the 0-skeleton of the graph. A subspace is a subgraph if and only if it consists of vertices and edges of the original graph and is closed.
A subgraph is called a tree if it is contractible as a topological space. This can be shown equivalent to the usual definition of a tree in graph theory, namely a connected graph without cycles.
Properties
The associated topological space of a graph is connected (with respect to the graph topology) if and only if the original graph is connected.
Every connected graph contains at least one maximal tree , that is, a tree that is maximal with respect to the order induced by set inclusion on the subgraphs of which are trees.
If a maximal tree is chosen in a graph, then the fundamental group of the graph equals the free group whose generators correspond bijectively to the edges of the graph that do not belong to the maximal tree; in fact, the graph is homotopy equivalent to a wedge sum of circles, one for each such edge.
Forming the topological space associated to a graph as above amounts to a functor from the category of graphs to the category of topological spaces.
Every covering space projecting to a graph is also a graph.
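As a minimal illustration of the fundamental group property above (this sketch is not part of the article; the function name and the example graph are chosen only for illustration): a maximal tree of a connected graph uses one fewer edge than there are vertices, so the remaining edges count the circles in the wedge.

```python
def fundamental_group_rank(num_vertices: int, num_edges: int) -> int:
    """Rank of the fundamental group of a connected graph: the number of edges
    left over after removing a maximal (spanning) tree, i.e. |E| - (|V| - 1)."""
    return num_edges - (num_vertices - 1)

# Example: the complete graph K4 has 4 vertices and 6 edges; a maximal tree uses
# 3 of them, so K4 is homotopy equivalent to a wedge of 3 circles and its
# fundamental group is free on 3 generators.
print(fundamental_group_rank(num_vertices=4, num_edges=6))  # 3
```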
See also
Graph homology
Topological graph theory
Nielsen–Schreier theorem, whose standard proof makes use of this concept.
References
Topological spaces | Graph (topology) | [
"Mathematics"
] | 425 | [
"Topological spaces",
"Mathematical structures",
"Topology",
"Space (mathematics)"
] |
51,514,596 | https://en.wikipedia.org/wiki/Tissue%20growth | Tissue growth is the process by which a tissue increases its size. In animals, tissue growth occurs during embryonic development, post-natal growth, and tissue regeneration. The fundamental cellular basis for tissue growth is the process of cell proliferation, which involves both cell growth and cell division occurring in parallel.
How cell proliferation is controlled during tissue growth to determine final tissue size is an open question in biology. Uncontrolled tissue growth is a cause of cancer.
Differential rates of cell proliferation within an organ can influence proportions, as can the orientation of cell divisions, and thus tissue growth contributes to shaping tissues along with other mechanisms of tissue morphogenesis.
Mechanisms of tissue growth control in animals
Mechanical control of tissue growth in animal skin
For some animal tissues, such as mammalian skin, it is clear that the growth of the skin is ultimately determined by the size of the body whose surface area the skin covers. This suggests that cell proliferation in skin stem cells within the basal layer is likely to be mechanically controlled to ensure that the skin covers the surface of the entire body. Growth of the body causes mechanical stretching of the skin, which is sensed by skin stem cells within the basal layer and consequently leads to both an increased rate of cell proliferation as well as promoting the planar orientation of stem cell divisions to produce new skin stem cells, rather than only producing differentiating supra-basal daughter cells.
Cell proliferation in skin stem cells within the basal layer can be driven by the mechanically-regulated YAP/TAZ family of transcriptional co-activators, which bind to TEAD-family DNA binding transcription factors in the nucleus to activate target gene expression and thereby drive cell proliferation.
For other animal tissues, such as the bones of the skeleton or the internal mammalian organs intestine, pancreas, kidney or brain, it remains unclear how developmental gene regulatory networks encoded in the genome lead to organs of such different sizes and proportions.
Hormonal control of tissue growth in the entire animal body
Although different animal tissues grow at different rates and produce organs of very different proportions, the overall growth rate of the entire animal body can be modulated by circulating hormones of the Insulin/IGF-1 family, which activate the PI3K/AKT/mTOR pathway in many cells of the body to increase the average rate of both cell growth and cell division, leading to increased cell proliferation rates in many tissues. In mammals, production of IGF-1 is induced by another circulating hormone called Growth Hormone. Excessive production of Growth Hormone or IGF-1 is responsible for giantism while insufficient production of these hormones is responsible for dwarfism.
Developmental control of tissue growth during adult tissue homeostasis
Adult animal tissues such as skin or intestine maintain their size but undergo constant turnover of cells by proliferation of stem cells and progenitor cells while undergoing an equivalent loss of differentiated daughter cells via sloughing off. Gradients of Wnt signaling pathway activity appear to have a fundamental role in maintaining proliferation of stem and progenitor cells, at least in the intestine, and possibly also in skin.
Regenerative tissue growth after wounding or other types of damage
Upon tissue damage, there is an upregulation in the activity of many pathways that control tissue growth, including the YAP/TAZ pathway, Wnt signaling pathway, and growth factors that activate the PI3K/AKT/mTOR pathway.
References
Developmental biology
Cell biology
Cell cycle
Cellular processes | Tissue growth | [
"Biology"
] | 699 | [
"Behavior",
"Cell biology",
"Developmental biology",
"Reproduction",
"Cellular processes",
"Cell cycle"
] |
51,516,730 | https://en.wikipedia.org/wiki/Gould%27s%20sequence | Gould's sequence is an integer sequence named after Henry W. Gould that counts how many odd numbers are in each row of Pascal's triangle. It consists only of powers of two, and begins:
1, 2, 2, 4, 2, 4, 4, 8, 2, 4, 4, 8, 4, 8, 8, 16, 2, 4, ...
For instance, the sixth number in the sequence is 4, because there are four odd numbers in the sixth row of Pascal's triangle (the entries 1, 5, 5, and 1 in the row 1, 5, 10, 10, 5, 1). Gould's sequence is also a fractal sequence.
Additional interpretations
The $n$th value in the sequence (starting from $n=0$) gives the highest power of 2 that divides the central binomial coefficient $\tbinom{2n}{n}$, and it gives the numerator of $2^n/n!$ (expressed as a fraction in lowest terms).
Gould's sequence also gives the number of live cells in the $n$th generation of the Rule 90 cellular automaton starting from a single live cell.
It has a characteristic growing sawtooth shape that can be used to recognize physical processes that behave similarly to Rule 90.
Related sequences
The binary logarithms (exponents in the powers of two) of Gould's sequence themselves form an integer sequence,
0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4, ...
in which the $n$th value gives the number of nonzero bits in the binary representation of the number $n$, a quantity sometimes called the binary weight or population count of $n$. Equivalently, the $n$th value in Gould's sequence is 2 raised to the number of nonzero bits in the binary representation of $n$.
Taking the sequence of exponents modulo two gives the Thue–Morse sequence.
The partial sums of Gould's sequence,
0, 1, 3, 5, 9, 11, 15, 19, 27, 29, 33, 37, 45, ...
count all odd numbers in the first $n$ rows of Pascal's triangle. These numbers grow proportionally to $n^{\log_2 3}$,
but with a constant of proportionality that oscillates between 0.812556... and 1, periodically as a function of $\log_2 n$.
Recursive construction and self-similarity
The first $2^i$ values in Gould's sequence may be constructed by recursively constructing the first $2^{i-1}$ values, and then concatenating the doubles of these first $2^{i-1}$ values. For instance, concatenating the first four values 1, 2, 2, 4 with their doubles 2, 4, 4, 8 produces the first eight values. Because of this doubling construction, the first occurrence of each power of two $2^i$ in this sequence is at position $2^i - 1$.
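This construction and the exponent formula from the previous section can be checked with a short Python sketch (not part of the article; the function names are chosen only for illustration):

```python
# Illustrative sketch: two equivalent ways to generate Gould's sequence --
# the recursive doubling construction and the 2**popcount(n) formula --
# checked against each other.

def gould_by_doubling(n_terms: int) -> list[int]:
    """Build the sequence by repeatedly appending the doubled prefix."""
    seq = [1]
    while len(seq) < n_terms:
        seq += [2 * x for x in seq]
    return seq[:n_terms]

def gould_by_popcount(n_terms: int) -> list[int]:
    """n-th value (0-indexed) is 2 raised to the number of 1 bits of n."""
    return [2 ** bin(n).count("1") for n in range(n_terms)]

print(gould_by_doubling(18))
# [1, 2, 2, 4, 2, 4, 4, 8, 2, 4, 4, 8, 4, 8, 8, 16, 2, 4]
assert gould_by_doubling(64) == gould_by_popcount(64)
```

Both constructions agree, since appending the doubled prefix corresponds to setting one more high-order bit in the index.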
Gould's sequence, the sequence of its exponents, and the Thue–Morse sequence are all self-similar: they have the property that the subsequence of values at even positions in the whole sequence equals the original sequence, a property they also share with some other sequences such as Stern's diatomic sequence. In Gould's sequence, the values at odd positions are double their predecessors, while in the sequence of exponents, the values at odd positions are one plus their predecessors.
History
The sequence is named after Henry W. Gould, who studied it in the early 1960s. However, the fact that these numbers are powers of two, with the exponent of the th number equal to the number of ones in the binary representation of , was already known to J. W. L. Glaisher in 1899.
Proving that the numbers in Gould's sequence are powers of two was given as a problem in the 1956 William Lowell Putnam Mathematical Competition.
References
Integer sequences
Factorial and binomial topics
Fractals
Scaling symmetries | Gould's sequence | [
"Physics",
"Mathematics"
] | 763 | [
"Sequences and series",
"Symmetry",
"Functions and mappings",
"Integer sequences",
"Mathematical structures",
"Mathematical analysis",
"Factorial and binomial topics",
"Recreational mathematics",
"Mathematical objects",
"Fractals",
"Number theory",
"Combinatorics",
"Mathematical relations",
... |
51,517,440 | https://en.wikipedia.org/wiki/Pomeranchuk%27s%20theorem | Pomeranchuk's theorem, named after Soviet physicist Isaak Pomeranchuk, states that the difference between the cross sections of the interactions of elementary particles (i.e. of a particle with another particle, and of the same particle with the corresponding antiparticle) approaches 0 as the energy in the center-of-mass system tends to infinity.
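Written out with notation assumed here only for illustration ($\sigma_{ab}$ and $\sigma_{a\bar b}$ for the total cross sections of the particle–particle and particle–antiparticle interactions, $E$ for the center-of-mass energy), the statement reads:

```latex
% Assumed notation: sigma_{ab}, sigma_{a\bar b} are total cross sections and E
% the center-of-mass energy; the theorem asserts the difference vanishes asymptotically.
\lim_{E \to \infty}\left[\sigma_{ab}(E) - \sigma_{a\bar b}(E)\right] = 0 .
```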
See also
Pomeron
References
.
Eponymous theorems of physics
Scattering theory | Pomeranchuk's theorem | [
"Physics",
"Chemistry"
] | 78 | [
"Scattering theory",
"Equations of physics",
"Eponymous theorems of physics",
"Scattering",
"Particle physics",
"Particle physics stubs",
"Physics theorems"
] |
54,366,204 | https://en.wikipedia.org/wiki/Rod%20Smallwood%20%28medical%20engineer%29 | Professor Rodney Harris Smallwood FREng, HonFRCP, FIET, FInstP, FIPEM (born 1945), known as Rod, is a British medical engineer and computer scientist.
Smallwood graduated in Physics from University College London, then studied solid-state physics at Lancaster University, before working for the National Health Service in Sheffield and gaining a PhD from the University of Sheffield.
He was appointed Professor of Medical Engineering and Head of the academic Medical Physics and Clinical Engineering Department at the University of Sheffield in 1995, took a computer science post in 2002, and subsequently became Professor of Computational Systems Biology and the Director of Research for Engineering.
He has served as president of the Institute of Physics and Engineering in Medicine.
References
External links
1945 births
Place of birth missing (living people)
Fellows of the Institute of Physics and Engineering in Medicine
Fellows of the Royal Academy of Engineering
Fellows of the Institution of Engineering and Technology
Living people
Alumni of the University of Sheffield | Rod Smallwood (medical engineer) | [
"Engineering"
] | 191 | [
"Institution of Engineering and Technology",
"Fellows of the Institution of Engineering and Technology"
] |
54,366,493 | https://en.wikipedia.org/wiki/Skeletocutis%20subvulgaris | Skeletocutis subvulgaris is a species of poroid, white rot fungus in the family Polyporaceae. Found in China, it was described as a new species in 1998 by mycologist Yu-Cheng Dai. It was named for its resemblance to Skeletocutis vulgaris. The type collection was made in Hongqi District, Jilin Province, where it was found growing on the rotting wood of Korean pine (Pinus koraiensis).
Description
The fungus has a soft, thin, crust-like fruit body forming strips that measure long by wide; these strips are sometimes joined to make larger patches. The pore surface is whitish with small pores numbering 6–8 per millimetre. S. subvulgaris has a dimitic hyphal system. Some of the hyphae of the dissepiment edges (the tissue between the pores) are encrusted with spiny crystals. The skeletal hyphae have a distinct lumen, which helps distinguish this species from the similar S. vulgaris. Spores of S. subvulgaris are roughly cylindrical, thin walled and hyaline, and measure 3.1–4.1 by 1.1–1.6 μm.
References
Fungi described in 1998
Fungi of China
subvulgaris
Taxa named by Yu-Cheng Dai
Fungus species | Skeletocutis subvulgaris | [
"Biology"
] | 280 | [
"Fungi",
"Fungus species"
] |
54,372,951 | https://en.wikipedia.org/wiki/GESTIS%20Substance%20Database | GESTIS Substance Database is a freely accessible online information system on chemical compounds. It is maintained by the Institut für Arbeitsschutz der Deutschen Gesetzlichen Unfallversicherung (IFA, Institute for Occupational Safety and Health of the German Social Accident Insurance). Information on occupational medicine and first aid is compiled by Henning Heberer and his team (TOXICHEM, Leuna).
The database contains information for the safe handling of hazardous substances and other chemical substances at work:
toxicology/ecotoxicology
important physical and chemical properties
application and handling
health effects
protective measures and such in case of danger (incl. first aid)
special regulations e.g. GHS classification and labelling according to CLP Regulation (pictograms, H phrases, P phrases).
The available information relates to about 9,400 substances. Data are updated immediately after publication of new official regulations or after the issue of new scientific results.
A mobile version of the GESTIS Substance Database, suitable for smartphones and tablets, is also available.
References
Literature
External links
GESTIS Substance Database
Biochemistry databases
Online databases
Occupational safety and health | GESTIS Substance Database | [
"Chemistry",
"Biology"
] | 234 | [
"Biochemistry",
"Biochemistry databases"
] |