Dataset columns: id (int64, 39 – 79M) · url (string, 32 – 168 chars) · text (string, 7 – 145k chars) · source (string, 2 – 105 chars) · categories (list, 1 – 6 items) · token_count (int64, 3 – 32.2k) · subcategories (list, 0 – 27 items)
35,439,884
https://en.wikipedia.org/wiki/Bio-inspired%20robotics
Bio-inspired robotic locomotion is a subcategory of bio-inspired design. It is about learning concepts from nature and applying them to the design of real-world engineered systems. More specifically, this field is about making robots that are inspired by biological systems, including biomimicry. Biomimicry is copying from nature, while bio-inspired design is learning from nature and making a mechanism that is simpler and more effective than the system observed in nature. Biomimicry has led to the development of a different branch of robotics called soft robotics. Biological systems have been optimized for specific tasks according to their habitat. However, they are multifunctional and are not designed for only one specific functionality. Bio-inspired robotics is about studying biological systems and looking for the mechanisms that may solve a problem in the engineering field. The designer should then try to simplify and enhance that mechanism for the specific task of interest. Bio-inspired roboticists are usually interested in biosensors (e.g. the eye), bioactuators (e.g. muscle), or biomaterials (e.g. spider silk). Most robots have some type of locomotion system. Thus, in this article different modes of animal locomotion and a few examples of the corresponding bio-inspired robots are introduced. Biolocomotion Biolocomotion, or animal locomotion, is usually categorized as below: Locomotion on a surface Locomotion on a surface may include terrestrial locomotion and arboreal locomotion. We will specifically discuss terrestrial locomotion in detail in the next section. Locomotion in a fluid Locomotion in a fluid includes swimming (for example, in a blood stream or in cell culture media) and flying. There are many swimming and flying robots designed and built by roboticists. Some of them use miniaturized motors or conventional MEMS actuators (such as piezoelectric, thermal, magnetic, etc.), while others use animal muscle cells as motors. Behavioral classification (terrestrial locomotion) There are many animals and insects moving on land with or without legs. We will discuss legged and limbless locomotion in this section, as well as climbing and jumping. Anchoring the feet is fundamental to locomotion on land. The ability to increase traction is important for slip-free motion on surfaces such as smooth rock faces and ice, and is especially critical for moving uphill. Numerous biological mechanisms exist for providing purchase: claws rely upon friction-based mechanisms; gecko feet upon van der Waals forces; and some insect feet upon fluid-mediated adhesive forces. Legged locomotion Legged robots may have one, two, four, six, or many legs depending on the application. One of the main advantages of using legs instead of wheels is moving over uneven terrain more effectively. Bipedal, quadrupedal, and hexapedal locomotion are among the most common types of legged locomotion in the field of bio-inspired robotics. RHex, a Reliable Hexapedal robot, and Cheetah are the two fastest running robots so far. Another hexapedal robot, inspired by cockroach locomotion, has been developed at Stanford University. This robot can run up to 15 body lengths per second and can achieve speeds of up to 2.3 m/s. The original version of this robot was pneumatically driven, while the new generation uses a single electric motor for locomotion. Limbless locomotion Terrain involving topography over a range of length scales can be challenging for most organisms and biomimetic robots. 
Such terrain is easily traversed by limbless organisms such as snakes. Several animals and insects, including worms, snails, caterpillars, and snakes, are capable of limbless locomotion. A review of snake-like robots is presented by Hirose et al. These robots can be categorized as robots with passive or active wheels, robots with active treads, and undulating robots using vertical waves or linear expansions. Most snake-like robots use wheels, which are high in friction when moving side to side but low in friction when rolling forward (and can be prevented from rolling backward). The majority of snake-like robots use either lateral undulation or rectilinear locomotion and have difficulty climbing vertically. Choset has recently developed a modular robot that can mimic several snake gaits, but it cannot perform concertina motion. Researchers at Georgia Tech have recently developed two snake-like robots called Scalybot. The focus of these robots is on the role of snake ventral scales in adjusting the frictional properties in different directions. These robots can actively control their scales to modify their frictional properties and move on a variety of surfaces efficiently. Researchers at CMU have developed both scaled and conventional actuated snake-like robots. Climbing Climbing is an especially difficult task because mistakes made by the climber may cause the climber to lose its grip and fall. Most robots have been built around a single functionality observed in their biological counterparts. Geckobots typically use van der Waals forces that work only on smooth surfaces. Inspired by geckos, scientists at Stanford University have artificially recreated the adhesive property of the gecko. Similar to the setae on a gecko's leg, millions of microfibers were placed and attached to a spring. The tip of each microfiber is sharp and pointed in usual circumstances, but upon actuation, the movement of the spring creates a stress which bends these microfibers and increases their contact area with the surface of a glass or wall. Using the same technology, gecko grippers were invented by NASA scientists for different applications in space. Stickybots use directional dry adhesives that work best on smooth surfaces. The Spinybot and RiSE robots are among the insect-like robots that use spines instead. Legged climbing robots have several limitations. They cannot handle large obstacles, since they are not flexible, and they require a wide space for moving. They usually cannot climb both smooth and rough surfaces or handle vertical-to-horizontal transitions as well. Jumping One of the tasks commonly performed by a variety of living organisms is jumping. Bharal, hares, kangaroos, grasshoppers, fleas, and locusts are among the best jumping animals. A miniature 7 g jumping robot inspired by the locust has been developed at EPFL that can jump up to 138 cm. The jump event is induced by releasing the tension of a spring. The highest-jumping miniature robot, "TAUB" (Tel Aviv University and Braude College of Engineering), is also inspired by the locust; it weighs 23 grams and jumps up to 365 cm. It uses torsion springs as energy storage and includes a wire and latch mechanism to compress and release the springs. ETH Zurich has reported a soft jumping robot based on the combustion of methane and laughing gas. The thermal gas expansion inside the soft combustion chamber drastically increases the chamber volume. This causes the 2 kg robot to jump up to 20 cm. 
The soft robot, inspired by a roly-poly toy, then reorients itself into an upright position after landing. Behavioral classification (aquatic locomotion) Swimming (piscine) It is calculated that when swimming some fish can achieve a propulsive efficiency greater than 90%. Furthermore, they can accelerate and maneuver far better than any man-made boat or submarine, and produce less noise and water disturbance. Therefore, many researchers studying underwater robots would like to copy this type of locomotion. Notable examples are the Essex University Computer Science Robotic Fish G9, and the Robot Tuna built by the Institute of Field Robotics, to analyze and mathematically model thunniform motion. The Aqua Penguin, designed and built by Festo of Germany, copies the streamlined shape and propulsion by front "flippers" of penguins. Festo have also built the Aqua Ray and Aqua Jelly, which emulate the locomotion of manta rays and jellyfish, respectively. In 2014, iSplash-II was developed by PhD student Richard James Clapham and Prof. Huosheng Hu at Essex University. It was the first robotic fish capable of outperforming real carangiform fish in terms of average maximum velocity (measured in body lengths/second) and endurance, the duration that top speed is maintained. This build attained swimming speeds of 11.6 BL/s (i.e. 3.7 m/s). The first build, iSplash-I (2014), was the first robotic platform to apply a full-body-length carangiform swimming motion, which was found to increase swimming speed by 27% over the traditional approach of a posterior-confined waveform. Morphological classification Modular Modular robots are typically capable of performing several tasks and are specifically useful for search and rescue or exploratory missions. Some of the featured robots in this category include a salamander-inspired robot developed at EPFL that can walk and swim, a snake-inspired robot developed at Carnegie Mellon University that has four different modes of terrestrial locomotion, and a cockroach-inspired robot that can run and climb on a variety of complex terrain. Humanoid Humanoid robots are robots that look human-like or are inspired by the human form. There are many different types of humanoid robots for applications such as personal assistance, reception, work in industry, or companionship. These types of robots are used for research purposes as well and were originally developed to build better orthoses and prostheses for human beings. Petman is one of the first and most advanced humanoid robots, developed at Boston Dynamics. Some humanoid robots, such as Honda ASIMO, are over-actuated. On the other hand, there are some humanoid robots, like the robot developed at Cornell University, that do not have any actuators and walk passively down a shallow slope. Swarming The collective behavior of animals has been of interest to researchers for several years. Ants can make structures like rafts to survive on rivers. Fish can sense their environment more effectively in large groups. Swarm robotics is a fairly new field, and the goal is to make robots that can work together, transfer data, build structures as a group, and so on. Soft Soft robots are robots composed entirely of soft materials and moved through pneumatic pressure, similar to an octopus or starfish. Such robots are flexible enough to move in very limited spaces (such as in the human body). 
The first multigait soft robot was developed in 2011 and the first fully integrated, independent soft robot (with soft batteries and control systems) was developed in 2015. See also Animal locomotion Biomimetics Biorobotics Biomechatronics Biologically inspired engineering Robotic materials Lists of types of robots References External links The Soft Robotics Toolkit Boston Dynamics Research for this Wikipedia entry was conducted as a part of a Locomotion Neuromechanics course (APPH 6232) offered in the School of Applied Physiology at Georgia Tech Research labs Poly-PEDAL Lab (Prof. Bob Full) Biomimetic Milisystems Lab (Prof. Ron Fearing) Biomimetics & Dexterous Manipulation Lab (Prof. Mark Cutkosky) Biomimetic Robotics Lab (Prof. Sangbae Kim) Harvard Microrobotics Lab (Prof. Rob Wood) Harvard Biodesign Lab (Prof. Conor Walsh) ETH Functional Material Lab (Prof. Wendelin Stark) Leg lab at MIT Center for Biologically Inspired Design at Georgia Tech Biologically Inspired Robotics Lab, Case Western Reserve University Biorobotics research group (S. Viollet/ F. Ruffier), Institute of Movement Science, CNRS/Aix-Marseille University (France) Center for Biorobotics, Tallinn University of Technology BioRob EPFL (Prof Auke Ijspeert) Robot locomotion Bionics Bioinspiration
Bio-inspired robotics
[ "Physics", "Engineering", "Biology" ]
2,421
[ "Physical phenomena", "Biological engineering", "Bionics", "Motion (physics)", "Robot locomotion", "Bioinspiration" ]
35,443,480
https://en.wikipedia.org/wiki/Hierarchy%20of%20hazard%20controls
Hierarchy of hazard control is a system used in industry to prioritize possible interventions to minimize or eliminate exposure to hazards. It is a widely accepted system promoted by numerous safety organizations. This concept is taught to managers in industry, to be promoted as standard practice in the workplace. It has also been used to inform public policy, in fields such as road safety. Various illustrations are used to depict this system, most commonly a triangle. The hazard controls in the hierarchy are, in order of decreasing priority: Elimination Substitution Engineering controls Administrative controls Personal protective equipment The system is not based on evidence of effectiveness; rather, it relies on whether the elimination of hazards is possible. Eliminating hazards allows workers to be free from the need to recognize and protect themselves against these dangers. Substitution is given lower priority than elimination because substitutes may also present hazards. Engineering controls depend on a well-functioning system and human behaviour, while administrative controls and personal protective equipment are inherently reliant on human actions, making them less reliable. History During the 1990s TB outbreak resulting from the HIV epidemic in the United States, the hierarchy of controls was described as a way for healthcare workers to mitigate their exposure to TB. The hierarchy can be summarized, from most to least preferable, as the following list states: "Substitution": Avoids the hazard, which is not possible in a healthcare setting. "Contain [the hazards] at their source": Using administrative controls, screen for a given health hazard (in this case, TB). This can include source control, which can involve masking an infected patient. "Engineering controls": This usually involves configuring isolation rooms and HVAC systems to prevent the spread of infection. "Establish barriers": Personal protective equipment, with respirators. Today's hierarchy differs in several respects but keeps the original idea. Components of the hierarchy Elimination Physical removal of the hazard is the most effective hazard control. For example, if employees must work high above the ground, the hazard can be eliminated by moving the piece they are working on to ground level to eliminate the need to work at heights. However, elimination of the hazard is often not possible because the task explicitly involves handling a hazardous agent. For example, construction professionals cannot remove the danger of asbestos when handling the hazardous agent is the core of the task. The most effective control measure is eliminating the hazard and its associated risks entirely. The simplest way to do this is by not introducing the hazard in the first place. For instance, the risk of falling from a height can be eliminated by performing the task at ground level. Eliminating hazards is often more cost-effective and feasible during the design or planning phase of a product, process, or workplace. At this stage, there is greater flexibility to design out hazards or incorporate risk controls that align with the intended function. Employers can also eliminate hazards by completely removing them—such as clearing trip hazards or disposing of hazardous chemicals, thus eliminating the risks they pose. If eliminating a hazard compromises the ability to produce the product or deliver the service, it is crucial to eliminate as many risks associated with the hazard as possible. 
Substitution Substitution, the second most effective hazard control, involves replacing something that produces a hazard with something that does not produce a hazard or produces a lesser hazard. However, to be an effective control, the new product must not produce unintended consequences. For example, since airborne dust can be hazardous, a product that is available with a larger particle size may be substituted for the finer product. Eliminating hazards and substituting safer alternatives can be challenging to implement within existing processes. These strategies are most effective when applied during the design or development phases of a workplace, tool, or procedure. At this stage, they often represent the most straightforward and cost-effective solutions. Additionally, they present a valuable opportunity when selecting new equipment or methods. The Prevention through Design approach emphasizes integrating safety considerations into the design of work tools, operations, and environments to enhance overall safety and efficiency. Engineering controls The third most effective means of controlling hazards is engineered controls. These do not eliminate hazards, but rather isolate people from hazards. Capital costs of engineered controls tend to be higher than those of less effective controls in the hierarchy; however, they may reduce future costs. A main part of engineering controls, "enclosure and isolation," creates a physical barrier between personnel and hazards, such as using remotely controlled equipment. As an example, fume hoods can remove airborne contaminants as a means of engineered control. Effective engineering controls are integral to the original equipment design and work to eliminate or block hazards at the source before they reach workers. They are designed to prevent users from modifying or tampering with the controls and require minimal action from users to function effectively. These controls operate seamlessly without disrupting the workflow or complicating tasks. While they may have higher initial costs compared to administrative controls or personal protective equipment (PPE), they often result in lower long-term operating expenses, especially when safeguarding multiple workers and potentially saving costs in other operational areas. Administrative controls Administrative controls are changes to the way people work. Examples of administrative controls include procedure changes, employee training, and installation of signs and warning labels, such as those in the Workplace Hazardous Materials Information System. Administrative controls do not remove hazards, but limit or prevent people's exposure to the hazards, such as by completing road construction at night when fewer people are driving. Administrative controls are ranked lower than elimination, substitution, and engineering controls because they do not directly remove or reduce workplace hazards. Instead, they manage workers' exposure by setting rules like limiting work times in contaminated areas. However, these measures have limitations since they do not address the hazard itself. Where possible, administrative controls should be combined with other control measures. Examples of administrative controls include: Implementing job rotation or work-rest schedules to limit individual exposure. Establishing a preventive maintenance program to ensure equipment is functioning properly. Scheduling high-exposure tasks during off-peak times when fewer workers are present. 
Restricting access to hazardous areas. Assigning tasks only to qualified personnel. Posting warning signs to alert workers of potential hazards. Personal protective equipment Personal protective equipment (PPE) includes gloves, Nomex clothing, overalls, Tyvek suits, respirators, hard hats, safety glasses, high-visibility clothing, and safety footwear. PPE is often the most important means of controlling hazards in fields such as health care and asbestos removal. However, considerable effort is needed to use PPE effectively, such as training in donning and doffing or testing the equipment. Additionally, some PPE, such as respirators, increases the physiological effort needed to complete a task and, therefore, may require medical examinations to ensure workers can use the PPE without risking their health. Employers should not depend solely on personal protective equipment (PPE) to manage hazards when more effective controls are available. While PPE can be beneficial, its effectiveness relies on correct and consistent use, and it may incur significant costs over time, especially when used daily for multiple workers. Employers must provide PPE when other control measures are still being developed or cannot adequately reduce hazardous exposure to safe levels. Personal protective equipment, including items like earplugs, goggles, respirators, and gloves, minimizes risks to health and safety when worn correctly. However, PPE and administrative controls do not eliminate hazards at their source, relying instead on human behavior and supervision. As a result, they are among the least effective methods for risk reduction when used alone. Role in prevention through design The hierarchy of controls is a core component of Prevention through Design, the concept of applying methods to minimize occupational hazards early in the design process. Prevention through Design emphasizes addressing hazards at the top of the hierarchy of controls (mainly through elimination and substitution) at the earliest stages of project development. NIOSH's Prevention through Design Initiative comprises "all of the efforts to anticipate and design out hazards to workers in facilities, work methods and operations, processes, equipment, tools, products, new technologies, and the organization of work." Variations on the NIOSH control hierarchy While the control hierarchy shown above is traditionally used in the United States and Canada, other countries or entities may use a slightly different structure. In particular, some add isolation above engineering controls instead of combining the two. The variation of the hierarchy used in the ARECC decision-making framework and process for industrial hygiene (IH) includes modification of the material or procedure to reduce hazards or exposures; this is sometimes considered a subset of the hazard substitution option, but it is explicitly taken there to mean that the efficacy of the modification for the situation at hand must be confirmed by the user. The ARECC version of the hierarchy also includes warnings as a distinct element to clarify the nature of the warning. In other systems, warnings are sometimes considered part of engineering controls and sometimes part of administrative controls. Use of hierarchical controls The hierarchy of controls serves as a valuable tool for safety professionals to determine the most effective methods for managing specific hazards. By following this hierarchy, employers can ensure they are implementing the best measures to protect their employees from potential risks. 
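The prioritization logic described above can be expressed as a short sketch. This is only an illustration, not part of any standard; the control names and the feasibility judgments are hypothetical placeholders.

```python
# Hypothetical sketch: rank the controls judged feasible for a hazard by the hierarchy.
HIERARCHY = [
    "elimination",
    "substitution",
    "engineering controls",
    "administrative controls",
    "personal protective equipment",
]

def rank_feasible_controls(feasible):
    """Return the feasible controls ordered from most to least effective.

    `feasible` is a set of control names judged practicable for a given hazard.
    When no single control fully protects workers, the caller combines several,
    starting from the top of the returned list.
    """
    ranked = [control for control in HIERARCHY if control in feasible]
    if not ranked:
        raise ValueError("no feasible control identified; reassess the hazard")
    return ranked

# Example: elimination is not practicable, so substitution is preferred,
# supplemented by administrative controls and PPE.
print(rank_feasible_controls({"substitution", "administrative controls",
                              "personal protective equipment"}))
```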
When encountering a hazard in the workplace, the hierarchy of hazard control provides a systematic approach to identify the most appropriate actions for controlling or eliminating that hazard. Additionally, it aids in developing a comprehensive hazard control plan for implementing the chosen measures effectively in the workplace. It is important to be aware of the following when using the hierarchy of controls: Use interim controls: If more time is needed to implement long-term solutions, interim controls chosen from as high in the hierarchy as feasible should be used in the meantime. Avoid introducing new hazards: Keep in mind that the selected controls should never directly or indirectly introduce new hazards. Make sure to perform a thorough safety analysis before implementing the selected controls. Use a combination of controls: If there is no single method that will fully protect workers, then a combination of controls should be used. See also ARECC - Decision-making framework and process used in the field of industrial hygiene (IH) to anticipate and recognize hazards, evaluate exposures, and control and confirm protection from risks Normalization of deviance – one reason people stop using effective prevention measures Notes References External links Canadian Centre for Occupational Health & Safety document Hierarchy of prevention and control measures on OSH Wiki (EU) Hazard analysis Occupational safety and health National Institute for Occupational Safety and Health
Hierarchy of hazard controls
[ "Engineering" ]
2,125
[ "Safety engineering", "Hazard analysis" ]
37,827,725
https://en.wikipedia.org/wiki/False%20diffusion
False diffusion is a type of error observed when the upwind scheme is used to approximate the convection term in convection–diffusion equations. The more accurate central difference scheme can be used for the convection term, but for grids with a cell Peclet number greater than 2 the central difference scheme is unstable, and the simpler upwind scheme is often used. The resulting error from the upwind differencing scheme has a diffusion-like appearance in two- or three-dimensional co-ordinate systems and is referred to as "false diffusion". False-diffusion errors in numerical solutions of convection-diffusion problems, in two and three dimensions, arise from the numerical approximations of the convection term in the conservation equations. Over the past 20 years many numerical techniques have been developed to solve convection-diffusion equations and none are problem-free, but false diffusion is one of the most serious problems and a major topic of controversy and confusion among numerical analysts. Definition False diffusion is defined as an error having a diffusion-like appearance, obtained when the upwind scheme is used in multidimensional cases to solve for the distribution of transported properties flowing non-orthogonally to one or more of the system's major axes. The error is absent when the flow is orthogonal or parallel to each major axis. Example In figure 1, u = 2 m/s and v = 2 m/s everywhere, so the velocity field is uniform and perpendicular to the diagonal (XX). The boundary condition for temperature on the north and west walls is 100 °C and on the east and south walls is 0 °C. This region is meshed into 10×10 equal grids. Consider two cases: (i) diffusion coefficient ≠ 0 and (ii) diffusion coefficient = 0. Case (i) In this case, heat from the west and south walls is carried by the convective flow towards the north and east walls. Heat is also diffused across the diagonal XX from the upper to the lower triangle. Figure 2 shows the approximate temperature distribution. Case (ii) In this case heat from the west and south walls is convected by the flow towards the north and east. There will be no diffusion across the diagonal XX, but when the upwind scheme is applied the results are similar to case (i), where actual diffusion is occurring. This error is known as false diffusion. Background In early approaches, derivatives in the differential form of the governing transport equation were replaced by finite difference approximations, usually central differencing approximations with second order accuracy. However, for large Peclet numbers (generally > 2) this approximation gives inaccurate results. It was recognized independently by several investigators that the less expensive but only first order accurate upwind scheme can be employed instead, but that this scheme produces results with false diffusion in multidimensional cases. Many new schemes have been developed to counter false diffusion, but a reliable, accurate and economical discretisation scheme is still unavailable. Reducing errors Finer mesh False diffusion with the upwind scheme is reduced by increasing the mesh density. In the results of figures 3 and 4 the false diffusion error is lowest in figure 4(b), which has the finest mesh. Other schemes False diffusion error can also be reduced by using schemes such as the power law scheme, the QUICK scheme, the exponential scheme, and SUCCA, among others. Improving the upwind scheme False diffusion with the simple upwind scheme occurs because the scheme does not take into account grid/flow direction inclination. 
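The diagonal-flow example above can be reproduced with a short numerical sketch. The following is a minimal assumed setup (a first-order upwind discretization of pure convection on a uniform grid with u = v > 0), not code taken from any reference; it shows the smearing across the diagonal that the exact solution does not have.

```python
# Minimal sketch (assumed discretization): steady pure convection of a scalar phi
# on an N x N grid with uniform velocity u = v (flow at 45 degrees to the grid),
# discretized with the first-order upwind scheme.
import numpy as np

N = 10                                     # interior cells in each direction
phi = np.zeros((N + 1, N + 1))             # phi[i, j]: i = y index (south->north), j = x index (west->east)
phi[:, 0] = 100.0                          # west boundary held at 100 (deg C)
phi[0, :] = 0.0                            # south boundary held at 0 (deg C)

# With u = v > 0 and dx = dy, the upwind-discretized steady equation
#   u*(phi_P - phi_W)/dx + v*(phi_P - phi_S)/dy = 0
# reduces to phi_P = 0.5*(phi_W + phi_S).  Because both velocities are positive,
# a single sweep from the south-west corner solves the discrete system exactly.
for i in range(1, N + 1):
    for j in range(1, N + 1):
        phi[i, j] = 0.5 * (phi[i, j - 1] + phi[i - 1, j])

# The exact solution of the continuous problem is 100 above the SW-NE diagonal and
# 0 below it; the computed field instead varies smoothly across the diagonal.
print(np.round(phi[1:, 1:], 1))
```

The gradual variation of the printed field across the diagonal, despite the absence of any physical diffusion, is the false diffusion introduced by the scheme.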
An approximate expression for the false-diffusion coefficient in two dimensions has been given by de Vahl Davis and Mallinson (1972): Γ_false = ρ U Δx Δy sin(2θ) / [4 (Δy sin³θ + Δx cos³θ)], where U is the resultant velocity and θ is the angle made by the velocity vector with the x direction. False diffusion is absent when the resultant flow is aligned with either of the sets of grid lines and is greatest when the flow direction is at 45° to the grid lines. Determining the accuracy of approximation for the convection term The accuracy of the upwind approximation for convection (UAC) can be examined by expanding the convected quantity in a Taylor series about the time t. Neglecting the higher-order terms, the error in the convected flux due to this approximation has the form of a flux produced by false diffusion, with an associated false-diffusion coefficient. The subscript fc is a reminder that this is a false diffusion arising from the estimate of the convected flux at the instant t using UAC. Skew upwind corner convection algorithm (SUCCA) SUCCA takes the local flow direction into account by introducing the influence of the upwind corner cells into the discretized conservation equation of the general governing transport equation. In Fig. 5, SUCCA is applied within a nine-cell grid cluster. Considering the SW corner inflow for cell P, the SUCCA equations express the convective transport of the conserved species in terms of the upwind corner-cell values. This formulation satisfies all the criteria of convergence and stability. In Fig. 6, as the mesh is refined, the upwind scheme gives more accurate results, but SUCCA offers a nearly exact solution and is more useful in avoiding multidimensional false diffusion errors. See also Computational fluid dynamics Navier–Stokes equations Numerical diffusion Finite volume method Taylor series References Further reading Computational fluid dynamics Numerical differential equations Numerical artifacts
False diffusion
[ "Physics", "Chemistry" ]
1,047
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
37,829,625
https://en.wikipedia.org/wiki/Combustion%20models%20for%20CFD
Combustion models for CFD refers to combustion models for computational fluid dynamics. Combustion is defined as a chemical reaction in which a fuel reacts with an oxidant to form products, accompanied by the release of energy in the form of heat. As an integral part of various engineering applications, such as internal combustion engines, aircraft engines, rocket engines, furnaces, and power station combustors, combustion must be addressed during the design, analysis and performance evaluation of these applications. Because of the added complexity of chemical kinetics and of representing the reacting flow mixture, appropriate physical models have to be incorporated into computational fluid dynamic (CFD) simulations of combustion. Hence the following discussion presents a general outline of the models commonly incorporated into CFD codes to model the combustion process. Overview Computational fluid dynamics modeling of combustion calls for the proper selection and implementation of a model able to faithfully represent the complex physical and chemical phenomena associated with any combustion process. The model should be competent enough to deliver information related to the species concentrations, their volumetric generation or destruction rates, and changes in system parameters such as enthalpy, temperature and mixture density. The model should be capable of solving the general transport equations for fluid flow and heat transfer as well as the additional equations of combustion chemistry and chemical kinetics, as required by the simulated environment. Critical considerations in combustion phenomena The major considerations for any combustion process include the mixing time scale and the reaction time scale of the process. The flame type and the manner in which the constituent flow streams mix also have to be taken into account. Apart from that, as far as the kinetic complexity of the reaction is concerned, the reaction proceeds in multiple steps, and what appears to be a simple one-line reaction is actually completed through a series of reactions. Also, the transport equations for the mass fractions of all the species, as well as for the enthalpy generated during the reaction, have to be solved. Hence even the simplest combustion reaction involves very tedious and rigorous calculation if all the intermediate steps of the combustion process, all transport equations and all flow equations have to be satisfied simultaneously. But with proper simplifying assumptions, computational fluid dynamic modeling of combustion reactions can be done without substantial compromise on the accuracy and convergence of the solution. The basic models used for this are covered in the following paragraphs. Simple chemical reacting system model This model takes into consideration only the final concentrations of species and considers only the global nature of the combustion process, in which the reaction proceeds infinitely fast as a single-step process, without much stress on the detailed kinetics involved. The reactants are assumed to react in stoichiometric proportions. The model also deduces a linear relationship between the mass fractions of fuel and oxidant and the non-dimensional mixture fraction. 
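As a sketch of that linear relationship (the notation here is an assumption, following common textbook treatments rather than a specific source), the fuel and oxidant mass fractions can be combined into a conserved scalar and normalized to give the mixture fraction:

$$\phi = s\,Y_{\mathrm{fu}} - Y_{\mathrm{ox}}, \qquad f = \frac{\phi - \phi_O}{\phi_F - \phi_O} = \frac{s\,Y_{\mathrm{fu}} - Y_{\mathrm{ox}} + Y_{\mathrm{ox},O}}{s\,Y_{\mathrm{fu},F} + Y_{\mathrm{ox},O}}$$

where $s$ is the stoichiometric mass of oxidant per unit mass of fuel and the subscripts $F$ and $O$ denote values in the fuel and oxidant streams. Because $\phi$ has no net chemical source, $f$ satisfies a source-free transport equation, and under the fast-chemistry assumption the fuel and oxidant mass fractions follow as piecewise-linear functions of $f$.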
The model also makes the additional assumption that the mass diffusion coefficients of all species are equal. Owing to this additional assumption, the model only solves one extra partial differential equation for the mixture fraction, and after solving the transport equation for the mixture fraction the corresponding mass fractions of fuel and oxidant are calculated. This model is well suited to a combustion environment where laminar diffusion effects are dominant and the combustion proceeds via non-premixed fuel and oxidant streams diffusing into each other, giving rise to a laminar flame. Eddy break-up model This model is used when turbulent mixing of the constituents has to be taken into consideration. The k/ε turbulent time scale is used to calculate the reaction rate. The turbulent dissipation rates of the fuel, oxidant and products are compared, and the minimum among them is taken as the rate of the reaction. The transport equations for the mass fractions of the constituents are solved using this rate of reaction. Apart from this, a mean enthalpy equation is also solved, and temperature, density and viscosity are calculated accordingly. The model can also be implemented when a finite-rate, kinetically controlled reaction is to be simulated. In such a situation the Arrhenius kinetic rate expression is also taken into account when deciding the rate of the reaction, and the rate of reaction is taken as the minimum of the turbulent dissipation rates of all the constituents and the Arrhenius kinetic rate. Since turbulent mixing governs the characteristics of this model, there exists a limit to the quality of the combustion simulation depending upon the type of turbulence model implemented to represent the flow. The model can also be modified to account for the mixing of fine structures during the turbulent reaction. This modification results in the eddy dissipation model, which considers the mass fraction of fine structures in its calculations. Laminar flamelet model This model approximates the turbulent flame as a series of laminar flamelet regions concentrated just around the stoichiometric surfaces of the reacting mixture. This model exploits experimental data for determining relations between the variables considered, such as mass fraction and temperature. The nature and type of dependence of the variables is deduced from experimental data obtained in laminar diffusion flame experiments, and a laminar flamelet relationship is derived from it. These relationships are then used to solve the transport equations for species mass fractions and mixture composition. The model is well suited to situations where the concentrations of minor species in the combustion are to be computed, such as quantifying the generation of pollutants. A simple enhancement to the model results in the flamelet time scale model, which takes finite-rate kinetics effects into consideration. The flamelet time scale model produces the steady laminar flamelet solution when the reaction proceeds very fast and captures finite-rate effects when the reaction chemistry is dominant. Presumed probability distribution function model This model takes a statistical approach to calculating variables such as species mass fractions, temperature and density, while the mixture composition is calculated at the grid points. All these variables are then calculated as functions of the mixture fraction, weighted by a presumed probability distribution function. 
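A minimal sketch of how mean quantities are evaluated in such a model (the standard presumed-PDF form; the symbols are assumptions here, not taken from a specific source): any scalar that is a known function of the mixture fraction, $\phi(f)$, has its mean obtained by weighting with the presumed probability density function $\tilde{P}(f)$, commonly chosen as a beta function parameterized by the local mean and variance of $f$:

$$\tilde{\phi} = \int_0^1 \phi(f)\,\tilde{P}(f)\,df$$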
The model can produce satisfactory results for turbulent reactive flows where convection effects due to the mean and fluctuating components of velocity are dominant. The model can be extended to adiabatic as well as non-adiabatic conditions. Conditional moment closure Conditional moment closure (CMC) is an advanced combustion model. The basic idea is to model the chemical source term based on conditional averages. The model was first introduced for non-premixed flows, and hence the conditioning is done on the mixture fraction. Other models The following are some of the other relevant models used for computational fluid dynamic modeling of combustion: The chemical equilibrium model The flamelet generated manifold model The flame surface density model The large eddy simulation model The chemical equilibrium model considers the effect of intermediate reactions during turbulent combustion. The concentrations of species are calculated when the combustion reaction reaches its equilibrium state. The species concentrations are calculated as functions of the mixture fraction using equilibrium calculation programs available for this purpose. The conditional closure model solves the transport equations for the mean components of the flow properties without considering the fluctuating composition of the reaction mixture. References Computational fluid dynamics Combustion engineering
Combustion models for CFD
[ "Physics", "Chemistry", "Engineering" ]
1,443
[ "Computational fluid dynamics", "Combustion engineering", "Industrial engineering", "Computational physics", "Fluid dynamics" ]
37,831,694
https://en.wikipedia.org/wiki/Gelfand%E2%80%93Zeitlin%20integrable%20system
In mathematics, the Gelfand–Zeitlin system (also written Gelfand–Zetlin system, Gelfand–Cetlin system, Gelfand–Tsetlin system) is an integrable system on conjugacy classes of Hermitian matrices. The system was named after the Gelfand–Zeitlin basis, an early example of a canonical basis, introduced by I. M. Gelfand and M. L. Cetlin in the 1950s; a complex version of this integrable system was introduced later. References External links http://ncatlab.org/nlab/show/Gelfand-Tsetlin+basis Integrable systems
Gelfand–Zeitlin integrable system
[ "Physics" ]
149
[ "Integrable systems", "Theoretical physics" ]
37,832,958
https://en.wikipedia.org/wiki/GelRed
GelRed is an intercalating nucleic acid stain used in molecular genetics for agarose gel DNA electrophoresis. GelRed structurally consists of two ethidium subunits that are bridged by a linear oxygenated spacer. GelRed is a fluorophore, and its optical properties are essentially identical to those of ethidium bromide. When exposed to ultraviolet light, it fluoresces with an orange color that strongly intensifies after binding to DNA. The substance is marketed as a less toxic and more sensitive alternative to ethidium bromide. GelRed is sold as a solution in anhydrous DMSO or ultrapurified water. GelRed is unable to cross cell membranes. See also Ethidium bromide GelGreen SYBR Green I Gel electrophoresis Phenanthridine Molecular genetics References Aromatic amines Iodides DNA-binding substances Carboxamides Quaternary ammonium compounds Phenanthridine dyes Staining dyes
GelRed
[ "Biology" ]
207
[ "Genetics techniques", "DNA-binding substances" ]
37,833,942
https://en.wikipedia.org/wiki/Cells%20Alive%20System
The Cells Alive System (CAS) is a line of commercial freezers manufactured by ABI Corporation, Ltd. of Chiba, Japan, claimed to preserve food with greater freshness than ordinary freezing by using electromagnetic fields and mechanical vibrations to limit the ice crystal formation that destroys food texture. They are also claimed to increase tissue survival without the tissue's water being replaced by cryogenically compatible fluids; whether they have any effect is unclear. The freezers have attracted attention from the food processing industry. References External links ABI Corporation's CAS product line at Alibaba.com Cooling technology Cryobiology
Cells Alive System
[ "Physics", "Chemistry", "Biology" ]
118
[ "Biochemistry", "Physical phenomena", "Phase transitions", "Cryobiology" ]
37,834,267
https://en.wikipedia.org/wiki/Binary%20Goppa%20code
In mathematics and computer science, the binary Goppa code is an error-correcting code that belongs to the class of general Goppa codes originally described by Valerii Denisovich Goppa, but the binary structure gives it several mathematical advantages over non-binary variants, also providing a better fit for common usage in computers and telecommunication. Binary Goppa codes have interesting properties suitable for cryptography in McEliece-like cryptosystems and similar setups. Construction and properties An irreducible binary Goppa code is defined by a polynomial g(x) of degree t over a finite field GF(2^m) with no repeated roots, and a sequence L_1, ..., L_n of n distinct elements from GF(2^m) that are not roots of g. Codewords belong to the kernel of the syndrome function, forming a subspace of {0,1}^n: Γ(g, L) = { c ∈ {0,1}^n : Σ_i c_i/(x − L_i) ≡ 0 mod g(x) }. The code defined by a tuple (g, L) has dimension at least n − mt and distance at least 2t + 1, thus it can encode messages of length at least n − mt using codewords of size n while correcting at least t errors. It possesses a convenient parity-check matrix H = V D, where V is the t-by-n Vandermonde-type matrix whose rows are (L_1^j, ..., L_n^j) for j = 0, ..., t − 1 and D is the diagonal matrix diag(1/g(L_1), ..., 1/g(L_n)). Note that this form of the parity-check matrix, being composed of a Vandermonde matrix V and a diagonal matrix D, shares its form with the check matrices of alternant codes, thus alternant decoders can be used on this form. Such decoders usually provide only a limited error-correcting capability (in most cases t/2 errors). For practical purposes, the parity-check matrix of a binary Goppa code is usually converted to a more computer-friendly binary form by a trace construction, which converts the t-by-n matrix over GF(2^m) to an mt-by-n binary matrix by writing the polynomial coefficients of the GF(2^m) elements on successive rows. Decoding Decoding of binary Goppa codes is traditionally done by the Patterson algorithm, which gives good error-correcting capability (it corrects all t design errors), and is also fairly simple to implement. The Patterson algorithm converts a syndrome to a vector of errors. The syndrome of a binary word c is expected to take the form s(x) ≡ Σ_i c_i/(x − L_i) mod g(x). The alternative form of the parity-check matrix based on the formula for s(x) can be used to produce such a syndrome with a simple matrix multiplication. The algorithm then computes v(x) = sqrt(1/s(x) − x) mod g(x). That fails when s(x) = 0, but that is the case when the input word is a codeword, so no error correction is necessary. v(x) is reduced to polynomials a(x) and b(x) using the extended Euclidean algorithm, so that a(x) ≡ b(x)·v(x) mod g(x), while deg(a) ≤ ⌊t/2⌋ and deg(b) ≤ ⌊(t − 1)/2⌋. Finally, the error locator polynomial is computed as σ(x) = a(x)² + x·b(x)². Note that in the binary case, locating the errors is sufficient to correct them, as there is only one other value possible. In non-binary cases a separate error correction polynomial has to be computed as well. If the original codeword was decodable and e was the binary error vector, then σ(x) = Π_{e_i = 1} (x − L_i). Factoring or evaluating all roots of σ(x) therefore gives enough information to recover the error vector and fix the errors. Properties and usage Binary Goppa codes viewed as a special case of Goppa codes have the interesting property that they correct the full t errors, while only t/2 errors in ternary and all other cases. Asymptotically, this error correcting capability meets the famous Gilbert–Varshamov bound. Because of the high error correction capacity compared to the code rate, and because of the form of the parity-check matrix (which is usually hardly distinguishable from a random binary matrix of full rank), binary Goppa codes are used in several post-quantum cryptosystems, notably the McEliece cryptosystem and the Niederreiter cryptosystem. References Elwyn R. Berlekamp, Goppa Codes, IEEE Transactions on information theory, Vol. IT-19, No. 
5, September 1973, https://web.archive.org/web/20170829142555/http://infosec.seu.edu.cn/space/kangwei/senior_thesis/Goppa.pdf Daniela Engelbert, Raphael Overbeck, Arthur Schmidt. "A summary of McEliece-type cryptosystems and their security." Journal of Mathematical Cryptology 1, 151–199. . Previous version: http://eprint.iacr.org/2006/162/ Daniel J. Bernstein. "List decoding for binary Goppa codes." http://cr.yp.to/codes/goppalist-20110303.pdf See also BCH codes Code rate Reed–Solomon error correction Coding theory
Binary Goppa code
[ "Mathematics" ]
883
[ "Discrete mathematics", "Coding theory" ]
37,834,746
https://en.wikipedia.org/wiki/Q-expansion%20principle
In mathematics, the q-expansion principle states that a modular form f has coefficients in a module M if its q-expansion at enough cusps resembles the q-expansion of a modular form g with coefficients in M. It was introduced by . References Modular forms
Q-expansion principle
[ "Mathematics" ]
55
[ "Modular forms", "Number theory" ]
48,284,539
https://en.wikipedia.org/wiki/Multiomics
Multiomics, multi-omics, integrative omics, "panomics" or "pan-omics" is a biological analysis approach in which the data sets are multiple "omes", such as the genome, proteome, transcriptome, epigenome, metabolome, and microbiome (i.e., a meta-genome and/or meta-transcriptome, depending upon how it is sequenced); in other words, the use of multiple omics technologies to study life in a concerted way. By combining these "omes", scientists can analyze complex biological big data to find novel associations between biological entities, pinpoint relevant biomarkers and build elaborate markers of disease and physiology. In doing so, multiomics integrates diverse omics data to find a coherently matching geno-pheno-envirotype relationship or association. The OmicTools service lists more than 99 pieces of software related to multiomic data analysis, as well as more than 99 databases on the topic. Systems biology approaches are often based upon the use of panomic analysis data. The American Society of Clinical Oncology (ASCO) defines panomics as referring to "the interaction of all biological functions within a cell and with other body functions, combining data collected by targeted tests ... and global assays (such as genome sequencing) with other patient-specific information." Single-cell multiomics A branch of the field of multiomics is the analysis of multilevel single-cell data, called single-cell multiomics. This approach gives us unprecedented resolution to look at multilevel transitions in health and disease at the single-cell level. An advantage over bulk analysis is the mitigation of confounding factors derived from cell-to-cell variation, allowing heterogeneous tissue architectures to be uncovered. Methods for parallel single-cell genomic and transcriptomic analysis can be based on simultaneous amplification or physical separation of RNA and genomic DNA. They allow insights that cannot be gathered solely from transcriptomic analysis, as RNA data do not contain non-coding genomic regions and information regarding copy-number variation, for example. An extension of this methodology is the integration of single-cell transcriptomes with single-cell methylomes, combining single-cell bisulfite sequencing with single-cell RNA-Seq. Other techniques to query the epigenome, such as single-cell ATAC-Seq and single-cell Hi-C, also exist. A different, but related, challenge is the integration of proteomic and transcriptomic data. One approach to perform such measurements is to physically separate single-cell lysates in two, processing half for RNA and half for proteins. The protein content of lysates can be measured by proximity extension assays (PEA), for example, which use DNA-barcoded antibodies. A different approach uses a combination of heavy-metal RNA probes and protein antibodies to adapt mass cytometry for multiomic analysis. Related to single-cell multiomics is the field of spatial omics, which assays tissues through omics readouts that preserve the relative spatial orientation of the cells in the tissue. The number of spatial omics methods published still lags behind the number of methods published for single-cell multiomics, but the numbers are catching up (Single-cell and Spatial methods). Multiomics and machine learning In parallel to the advances in high-throughput biology, machine learning applications to biomedical data analysis are flourishing. The integration of multi-omics data analysis and machine learning has led to the discovery of new biomarkers. 
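As an illustration of the kind of integration such methods perform (this sketch is not taken from mixOmics or any other specific tool, and the data are randomly generated placeholders), two omics matrices measured on the same samples can be projected onto shared latent components with canonical correlation analysis:

```python
# Hypothetical sketch: integrate two omics blocks (e.g. transcriptome and proteome)
# measured on the same samples with canonical correlation analysis (scikit-learn).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_samples = 60
transcriptome = rng.normal(size=(n_samples, 30))   # placeholder expression matrix
proteome = rng.normal(size=(n_samples, 20))        # placeholder protein abundance matrix

cca = CCA(n_components=2)
cca.fit(transcriptome, proteome)
rna_scores, prot_scores = cca.transform(transcriptome, proteome)

# Features with large weights in cca.x_weights_ / cca.y_weights_ on a well-correlated
# component would be examined as candidate cross-omics biomarkers.
for k in range(2):
    r = np.corrcoef(rna_scores[:, k], prot_scores[:, k])[0, 1]
    print(f"component {k}: canonical correlation = {r:.2f}")
```

In practice, sparse or regularized variants (as in mixOmics or RGCCA) are preferred when the number of features greatly exceeds the number of samples.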
For example, one of the methods of the mixOmics project implements a method based on sparse partial least squares regression for the selection of features (putative biomarkers). A unified and flexible statistical framework for heterogeneous data integration called "Regularized Generalized Canonical Correlation Analysis" (RGCCA) enables the identification of such putative biomarkers. This framework is implemented and made freely available within the RGCCA R package. Multiomics in health and disease Multiomics currently holds promise for filling gaps in the understanding of human health and disease, and many researchers are working on ways to generate and analyze disease-related data. The applications range from understanding host-pathogen interactions, infectious diseases and cancer, to better understanding chronic and complex non-communicable diseases and improving personalized medicine. Integrated Human Microbiome Project The second phase of the $170 million Human Microbiome Project was focused on integrating patient data with different omic datasets, considering host genetics, clinical information and microbiome composition. Phase one focused on the characterization of communities in different body sites. Phase two focused on the integration of multiomic data from host and microbiome in human diseases. Specifically, the project used multiomics to improve the understanding of the interplay of gut and nasal microbiomes with type 2 diabetes, gut microbiomes with inflammatory bowel disease, and vaginal microbiomes with pre-term birth. Systems Immunology The complexity of interactions in the human immune system has prompted the generation of a wealth of immunology-related multi-scale omic data. Multi-omic data analysis has been employed to gather novel insights about the immune response to infectious diseases, such as pediatric chikungunya, as well as noncommunicable autoimmune diseases. Integrative omics has also been employed extensively to understand the effectiveness and side effects of vaccines, a field called systems vaccinology. For example, multiomics was essential to uncover the association of changes in plasma metabolites and the immune system transcriptome with the response to vaccination against herpes zoster. List of software used for multi-omic analysis The Bioconductor project curates a variety of R packages aimed at integrating omic data: omicade4, for multiple co-inertia analysis of multi-omic datasets MultiAssayExperiment, offering a Bioconductor interface for overlapping samples IMAS, a package focused on using multi-omic data for evaluating alternative splicing bioCancer, a package for visualization of multiomic cancer data mixOmics, a suite of multivariate methods for data integration MultiDataSet, a package for encapsulating multiple data sets The RGCCA package implements a versatile framework for data integration. This package is freely available on the Comprehensive R Archive Network (CRAN). The OmicTools database further highlights R packages and other tools for multi-omic data analysis: PaintOmics, a web resource for visualization of multi-omics datasets SIGMA, a Java program focused on integrated analysis of cancer datasets iOmicsPASS, a tool in C++ for multiomic-based phenotype prediction Grimon, an R graphical interface for visualization of multiomic data Omics Pipe, a framework in Python for reproducibly automating multiomic data analysis Multiomic Databases A major limitation of classical omic studies is the isolation of only one level of biological complexity. 
For example, transcriptomic studies may provide information at the transcript level, but many different entities contribute to the biological state of the sample (genomic variants, post-translational modifications, metabolic products, interacting organisms, among others). With the advent of high-throughput biology, it is becoming increasingly affordable to make multiple measurements, allowing transdomain (e.g. RNA and protein levels) correlations and inferences. These correlations aid the construction of more complete biological networks, filling gaps in our knowledge. Integration of data, however, is not an easy task. To facilitate the process, groups have curated databases and pipelines to systematically explore multiomic data: Multi-Omics Profiling Expression Database (MOPED), integrating diverse animal models, The Pancreatic Expression Database, integrating data related to pancreatic tissue, LinkedOmics, connecting data from TCGA cancer datasets, OASIS, a web-based resource for general cancer studies, BCIP, a platform for breast cancer studies, C/VDdb, connecting data from several cardiovascular disease studies, ZikaVR, a multiomic resource for Zika virus data, Ecomics, a normalized multi-omic database for Escherichia coli data, GourdBase, integrating data from studies with gourds, MODEM, a database for multilevel maize data, SoyKB, a database for multilevel soybean data, ProteomicsDB, a multi-omics and multi-organism resource for life science research See also DisGeNET Pangenomics Hologenomics Omics List of omics topics in biology Systems Biology Network Medicine References Biology theories Molecular biology
Multiomics
[ "Chemistry", "Biology" ]
1,804
[ "Biochemistry", "Biology theories", "Molecular biology" ]
48,286,881
https://en.wikipedia.org/wiki/Lanthanum%20hydroxide
Lanthanum hydroxide is La(OH)3, a hydroxide of the rare-earth element lanthanum. Synthesis Lanthanum hydroxide can be obtained by adding an alkali such as ammonia to aqueous solutions of lanthanum salts such as lanthanum nitrate. This produces a gel-like precipitate that can then be dried in air. Alternatively, it can be produced by a hydration reaction (addition of water) of lanthanum oxide. Characteristics Lanthanum hydroxide does not react much with alkaline substances; however, it is slightly soluble in acidic solution. At temperatures above 330 °C it decomposes into lanthanum oxide hydroxide (LaOOH), which upon further heating decomposes into lanthanum oxide (La2O3): La(OH)3 → LaOOH + H2O 2 LaOOH → La2O3 + H2O Lanthanum hydroxide crystallizes in the hexagonal crystal system. Each lanthanum ion in the crystal structure is surrounded by nine hydroxide ions in a tricapped trigonal prismatic arrangement. References External links External MSDS 1 External MSDS 2 Lanthanum Oxide MSDS Lanthanum compounds Inorganic compounds Hydroxides
Lanthanum hydroxide
[ "Chemistry" ]
227
[ "Inorganic compounds", "Bases (chemistry)", "Hydroxides", "Inorganic compound stubs" ]
48,287,125
https://en.wikipedia.org/wiki/Phosphorus%20mononitride
Phosphorus mononitride is an inorganic compound with the chemical formula PN. Containing only phosphorus and nitrogen, this material is classified as a binary nitride. From the Lewis structure perspective, it can be represented with a P–N triple bond with a lone pair on each atom. It is isoelectronic with N2, CO, P2, CS and SiO. The compound is highly unstable under standard conditions, tending to rapidly self-polymerize. It can be isolated within argon and krypton matrices at cryogenic temperatures. Due to its instability, documentation of reactions with other molecules is limited. Most of its reactivity has thus far been probed and studied at transition metal centers. Phosphorus mononitride was the first phosphorus compound identified in the interstellar medium and is even thought to be an important molecule in the atmospheres of Jupiter and Saturn. Discovery and interstellar occurrence The existence of free, gas-phase phosphorus mononitride was confirmed spectroscopically in 1934 by Nobel laureate Gerhard Herzberg and coworkers. J. Curry, L. Herzberg, and G. Herzberg made the accidental discovery after observing new bands in the UV region from 2375 to 2992 Å following an electric discharge within an air-filled tube that had earlier been exposed to phosphorus. In 1987, phosphorus mononitride was detected in the Orion KL Nebula, the W51M nebula in Aquila, and Sagittarius B2 simultaneously by Turner, Bally, and Ziurys. Data from radio telescopes allowed for observation of rotational lines associated with the J = 2-1, 3-2, 5-4, and 6-5 transitions. In the following decades, a rapid expansion of interstellar PN observations ensued, with PN detected frequently alongside PO. Examples include detections within shocked regions of L1157, within the galactic center, in carbon-rich envelopes in CRL 2688 (alongside HCP), and in oxygen-rich envelopes toward VY Canis Majoris, TX Camelopardalis, R Cassiopeiae, and NML Cygni. ALMA data alongside spectroscopic measurements from the Rosetta probe have shown PN being carried from the comet 67P/Churyumov–Gerasimenko alongside the far more abundant PO. These observations may offer insight into how pre-biotic matter could be transported to planets. In cases where PN and PO are observed in the same region, the latter is more abundant. The consistency of the molecular ratio between these two interstellar molecules across many different interstellar clouds is thought to be a sign of a shared formation pathway between the two molecules. PN is mostly detected in hot, turbulent regions, where the shock-induced sputtering of dust grains is thought to contribute to its formation. However, it has also been confirmed in massive dense cores which are by comparison "cold and quiescent". In 2022, researchers used data from the ALMA Comprehensive High-resolution Extragalactic Molecular Inventory (ALCHEMI) project and reported evidence of phosphorus mononitride in giant molecular clouds within the galaxy NGC 253. This finding also marks phosphorus mononitride as the first phosphorus-containing molecule detected outside our galaxy. In 2023, Ziurys and coworkers showed the existence of PN and PO in WB89-621 (22.6 kpc from the galactic center) using rotational spectroscopy. Previously, phosphorus had only been observed in the inner Milky Way (within 12 kpc). Since supernovae do not occur in outer regions of the galaxy, the detection of these phosphorus-bearing molecules in WB89-621 provides evidence of additional alternative sources of phosphorus formation, such as non-explosive, lower-mass asymptotic giant branch stars. 
The levels detected were comparable to those in the Solar System. Electronic structure, spectral and bonding properties PN formation from gaseous phosphorus and nitrogen is endothermic: ½ P2 + ½ N2 → PN (ER = 117 ± 10 kJ/mol). Early mass spectrometry studies by Gingerich yielded a PN dissociation energy D0 of . It is predicted to have a high proton affinity (PA = ). Early rotational analysis of 24 of the bands from Herzberg's original study suggested a PN internuclear distance of 1.49 Å, intermediate between N2 (1.094 Å) and P2 (1.856 Å). The associated electronic transition, 1Π → 1Σ, was noted to be similar to that of the isoelectronic CS and SiO molecules. Later rotational spectroscopy studies aligned well with these findings; for example, analysis of millimeter-wave rotational PN spectra from a microwave spectrometer yielded a bond distance of 1.49085(2) Å. Infrared studies of gaseous PN at high temperatures assign its vibrational frequency (ωe) to 1337.24 cm-1 and its interatomic separation to 1.4869 Å. Simple comparisons to tabulated experimental and calculated bond lengths match well with a PN triple bond according to Pyykkö's triple-bond covalent radii. NBO analyses support a single neutral resonance structure with a PN triple bond and one lone pair on each atom. However, natural population analysis shows nitrogen as significantly negatively charged (-0.82603) and phosphorus as significantly positively charged (0.82603). This is in line with the large dipole moment and the partial ionic character reflected in the electron density contour plots. Monomeric PN in a krypton matrix gives rise to a single IR band at 1323 cm-1. Auer and Neese have produced calculated gas-phase 31P and 15N NMR chemical shifts of 51.61 and -344.71 ppm, respectively, at the CCSD(T)/p4 level of theory. However, different functionals and basis sets yield dramatically different predictions for chemical shielding, and so far experimental NMR shifts for phosphorus mononitride remain elusive. Molecular beam electric resonance spectroscopy has been used to determine the radio-frequency spectrum of phosphorus mononitride generated from P3N5 thermolysis; the results showed an experimental PN dipole moment (μ) of 2.7465 ± 0.0001 D, 2.7380 ± 0.001 D, and 2.7293 ± 0.0001 D for the first three vibrational levels, respectively. Its dipole moment is larger than that of PO (1.88 D), despite the greater electronegativity difference between the constituent P and O atoms and the similar bond length (1.476 Å). This is a result of the significant differences in bonding and charge distribution within the PN and PO molecules. The large PN dipole moment makes it very favorable for radio-astronomical studies in comparison to N2, which lacks a permanent dipole. With regard to the molecular orbitals of PN, direct analogies can be drawn to the bonding in the N2 molecule. It consists of a P-N σ bonding orbital (HOMO), with two perpendicular degenerate P-N pi bonding orbitals. Likewise, the LUMOs of PN, which consist of a degenerate PN pi-antibonding set, allow it to backbond with orbitals of appropriate symmetry. However, in comparison to N2, the HOMO of PN is higher in energy (est. -9.2 eV vs -12.2 eV) and the LUMOs are lower in energy (-2.3 eV vs -0.6 eV), making it both a better σ-donor and a better pi-acceptor as a ligand. The smaller HOMO-LUMO gap of PN, combined with its polar nature and low dissociation energy, contributes to its much greater reactivity than dinitrogen (including at the interstellar level). 
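The bond length quoted above is enough to estimate why PN shows up in millimeter-wave surveys: treating PN as a rigid rotor gives a rotational constant near 23.5 GHz, so the J = 2-1 line falls near 94 GHz. The short Python sketch below is illustrative only; the rigid-rotor model, the isotopic masses used, and the neglect of vibration-rotation corrections are assumptions, not values taken from the article.

```python
# Rigid-rotor estimate of the PN rotational constant from the bond length
# quoted above (r ~ 1.49085 Angstrom). Illustrative sketch only: it ignores
# vibration-rotation corrections, so it approximates Be rather than B0.
import math

H = 6.62607015e-34      # Planck constant, J*s
AMU = 1.66053907e-27    # atomic mass unit, kg

m_P, m_N = 30.973762, 14.003074          # 31P and 14N masses, u (assumed isotopes)
mu = (m_P * m_N) / (m_P + m_N) * AMU     # reduced mass, kg

r = 1.49085e-10                          # bond length, m
I = mu * r**2                            # moment of inertia, kg*m^2
B = H / (8 * math.pi**2 * I)             # rotational constant, Hz

print(f"B ~ {B / 1e9:.2f} GHz")               # ~23.6 GHz (measured B0 is slightly lower)
print(f"J=2-1 line ~ {4 * B / 1e9:.1f} GHz")  # nu = 2*B*J_upper ~ 94 GHz, in the mm band
```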
Preparation and formation Interstellar formation The pathways to the formation of PN are still not fully understood, but likely involve competing gas-phase reactions with other interstellar molecules. Important schemes are shown below along with competing exothermic reactions: PO + N → PN + O; PO + N → P + NO (competing). Another important, very exothermic formation reaction: PH + N → PN + H. From carbon-containing environments: P + CN → PN + C; N + CP → PN + C. An important destruction pathway: PN + N → N2 + P. The abundance of interstellar PN is additionally perturbed by cosmic-ray ionization, visual extinction, and adsorption/desorption from dust grains. Electric discharge Moldenhauer and Dörsam first generated transient PN in 1924 using an electric discharge through N2 and phosphorus vapors, where the characterized product was a notably robust powder containing equal parts phosphorus and nitrogen. This same method led to the first spectroscopic observation of PN by Herzberg and coworkers. PN has also been produced at room temperature using microwave discharges on mixtures of gaseous PCl3 and N2 under moderate vacuum. This preparation was employed to achieve high-resolution FTIR spectra of PN. Flash pyrolysis Atkins and Timms later generated PN via flash pyrolysis of P3N5 under high vacuum, allowing the recording of the PN infrared spectrum within a cryogenic krypton matrix. Solid triphosphorus pentanitride generates gaseous, free PN when heated under high vacuum. Monomeric PN can only be isolated in krypton or argon matrices at cryogenic temperatures. Upon warming, cyclotriphosphazene, which has D3h symmetry, is formed (before the krypton matrix melts). The (PN)3 trimer is planar and aromatic, with 15N-labelling experiments revealing a planar E' mode band at 1141 cm-1. No dimers or other oligomers are even transiently observed. Without a cryogenic matrix, these reactions result in the immediate formation of (PN)n polymers. Thermolysis experiments on dimethyl phosphoramidate have shown PN to form as a major decomposition product along with many other minor components, including the ·P=O radical and HOP=O. This contrasts with dimethyl methylphosphonate, in which these minor components become the major decomposition products, highlighting significantly divergent pathways. In 2023, Qian et al. proposed PN to be generated as a major product along with CO and cyclopentadienone byproducts when (o-phenyldioxyl)phosphinoazide is heated to 850 °C (following the loss of N2). However, efforts to observe free PN in argon matrices using this method were unsuccessful due to band overlaps. Dehalogenation of hexachlorophosphazene Schnöckel and coworkers later demonstrated an alternative synthesis involving the dehalogenation of hexachlorophosphazene with molten silver, with concomitant loss of AgCl. In both this route and the P3N5 thermolysis route, only trace P2 and P4 formation is detected even at high temperatures, showing that these reactions occur far from thermodynamic equilibrium. Anthracene release from dibenzo-7λ3-phosphanorbornadiene derivatives The aforementioned methods require very high temperatures which are incompatible with standard, homogeneous solution-state chemistry. In 2022, Cummins and coworkers prepared and isolated a molecular PN precursor, N3PA, which rapidly decomposes to N2, anthracene, and PN in solution at room temperature (t½ = 30 minutes). Under vacuum with heating to 42 °C, this dissociation is explosive. 
Reactivity Reactions of phosphorus mononitride with other molecules are rare and rather difficult to carry out. The formation of the intermediate (PN)3 trimer (which itself is only isolated in matrices) is highly favorable: 3 PN ⇌ (PN)3 (−334 ± 60 kJ/mol). PN generated either in the gas phase or in solution, if not trapped in noble gas matrices or at particular metal complexes, undergoes rapid self-polymerization even in cases where trapping agents such as dienes or alkynes are present (differentiating its reactivity profile from related molecules such as P2). Phosphorus mononitride's tendency to rapidly polymerize with itself has dominated its reactivity, greatly hindering both the study of its reactions with organic molecules and the diversity of products obtained. In 2023, a rare case of documented reactivity with an organic molecule was reported by Qian and coworkers, who demonstrated reversible photoisomerization between an o-benzoquinone-supported phosphinonitrene and o-benzoquinone-stabilized phosphorus mononitride at 10 K, which can be isolated in an argon matrix. Ligation, stabilization, and reactivity at transition metals The majority of documented well-defined PN reactivity has been carried out at transition metal centers. The electronic and molecular orbital similarities it shares with N2 make it a viable ligating species. While free PN is unstable, phosphorus mononitride has been prepared at metal coordination sites where it can exist as an isolable terminal ligand within a complex. In other cases, PN ligands exist only as transient, highly reactive intermediates featuring rich chemistry. As a terminal ligand, cases of both preferential P and N bonding modes have been discovered. Smith and co-workers isolated the first stable M-PN (and M-NP) complexes, using methodology to generate the PN moiety at metal sites. They reacted a tris(amido) Mo(VI) terminal phosphide complex with a tris(carbene)borate Fe(IV) terminal nitride, which undergo reductive coupling to form the corresponding neutral bridging PhB(iPr2Im)3Fe-NP-Mo(N3N) complex. Notably, the Mo-N-P bond angle in the bridging compound is nearly perfectly linear, with an N-P bond length of 1.509(6) Å (only slightly elongated from free PN, indicating significant multiple-bond character). Addition of 3 equivalents of the strongly Lewis basic tert-butyl isocyanide results in the release of the iron adduct as a [PhB(iPr2Im)3Fe-(CNtBu)3]+ cation in the second coordination sphere. The corresponding terminal linear Mo-PN anion can be isolated and converted to its linear Mo-NP isomer by exposure to white light in the solid state. The M-NP isomer of the ligand was determined to be more pi-acidic (N-P = 1.5913(1) Å and P-N = 1.5363(1) Å) and more thermodynamically stable than its P-bound isomer. Cummins and co-workers exploited their PN-releasing N3PA reagent to "trap" and isolate a stable terminal (dppe)(Cp*)Fe-NP complex as a BArF24 salt. The NP bond length in this case was very short at 1.493(2) Å, almost unperturbed from gaseous PN, which is consistent with minimal pi-backbonding from the iron center. Studies confirmed the NP binding mode (as opposed to PN) to be energetically preferred in this iron complex, creating a significant barrier to isomerization (thought to arise from Pauli repulsion effects). Studies of phosphorus mononitride chemistry at tris(amido) vanadium complexes undertaken by Cummins and coworkers provide the bulk of PN reactivity examples at transition metals to date. 
In this system, PN is synthetically generated at a vanadium center from the respective dibenzo-7λ3-phosphanorbornadiene derivative precursors. However, it is not stable as a terminal ligand, and instead immediately undergoes trimerization. Notably, a thermodynamic equilibrium exists between this trimer species, a dimer, and a non-observed monomeric intermediate fragment. The V-NP fragment undergoes singlet phosphinidene reactivity ([2+1] additions) with alkene and alkyne trapping agents, generating phosphiranes and phosphirenes respectively. The products generated from such additions exist in equilibrium (in the case of cis-4-octene and bis-trimethylsilylacetylene), where retention of the cis configuration of cis-4-octene is observed. Upon heating, these additions reverse, regenerating the V-NP dimer. Such reactivity stands in stark contrast to P2 as a ligand, which instead undergoes formal cycloaddition chemistry. Applications The robust nature of PN reaction products such as (PN)n suggests possible use in heat-resistant ceramics or as fire-suppressing materials. There has long been interest in studying PN and its reaction products such as (PN)n polymers, given their relevance as precursors/intermediates in the production of fertilizers. See also Triphosphorus pentanitride Phosphorus monoxide Diphosphorus Carbon monosulfide Silicon monoxide References Phosphorus-nitrogen compounds Solids
Phosphorus mononitride
[ "Physics", "Chemistry", "Materials_science" ]
3,643
[ "Solids", "Phases of matter", "Condensed matter physics", "Matter" ]
48,287,182
https://en.wikipedia.org/wiki/Phosphorus%20tetroxide
Diphosphorus tetroxide, or phosphorus tetroxide, is an inorganic compound of phosphorus and oxygen. It has the empirical chemical formula P2O4. Solid phosphorus tetroxide (also referred to as phosphorus(III,V) oxide) consists of variable mixtures of the mixed-valence oxides P4O7, P4O8 and P4O9. Preparation Phosphorus tetroxide can be produced by thermal decomposition of phosphorus trioxide, which disproportionates above 210 °C to form phosphorus tetroxide, with elemental phosphorus as a byproduct. In addition, phosphorus trioxide can be converted into phosphorus tetroxide by controlled oxidation with oxygen in carbon tetrachloride solution. Careful reduction of phosphorus pentoxide with red phosphorus at 450–525 °C also produces phosphorus tetroxide. References Phosphorus oxides Solids
Phosphorus tetroxide
[ "Physics", "Chemistry", "Materials_science" ]
176
[ "Solids", "Phases of matter", "Condensed matter physics", "Matter" ]
48,289,689
https://en.wikipedia.org/wiki/Colt%20Acetylene%20Flash%20Lantern
The Colt Acetylene Flash Lantern (or Colt Field Signal Lamp) was an acetylene signal lamp produced by the J. B. Colt Company and used by the United States military at the start of the 1900s. A patent for the device was filed in 1902. A description from maneuvers at Fort Riley detailed the device: The Colt's acetylene flash lantern was employed for night signals. The flash is produced by means of a key which causes a full flame to burst forth in the lantern for the length of time the key is pressed down; when the pressure is removed the light reduces to a minute jet, not visible to the receiving station. It is carried in three leather cases, one holding the tripod, one the generator, and the third the flash lantern, reading lamp, and remaining parts. It is assembled on an extension tripod, with the flash lantern on top, the generator attached to the legs beneath the lantern, and the reading lamp is placed on one leg near the lantern. The signals can be seen up to thirty miles with an ordinary field glass. References Further reading History of telecommunications Types of lamp Military communications Morse code Optical communications Military equipment of the United States
Colt Acetylene Flash Lantern
[ "Engineering" ]
240
[ "Optical communications", "Military communications", "Telecommunications engineering" ]
48,293,114
https://en.wikipedia.org/wiki/Radical%20fluorination
Radical fluorination is a type of fluorination reaction, complementary to nucleophilic and electrophilic approaches. It involves the reaction of an independently generated carbon-centered radical with an atomic fluorine source and yields an organofluorine compound. Historically, only three atomic fluorine sources were available for radical fluorination: fluorine (F2), hypofluorites (O–F based reagents), and XeF2. Their high reactivity, and the difficult handling of F2 and the hypofluorites, limited the development of radical fluorination compared to electrophilic and nucleophilic methods. The discovery that electrophilic N–F fluorinating agents can act as atomic fluorine sources led to a renaissance in radical fluorination. Various methodologies have since been developed for the radical formation of C–F bonds. The radical intermediates have been generated from carboxylic acids and boronic acid derivatives, by radical addition to alkenes, or by C–H and C–C bond activation. New sources of atomic fluorine are now emerging, such as metal fluoride complexes. Sources of atomic fluorine Fluorine gas Fluorine gas (F2) can act both as an electrophilic and an atomic source of fluorine. The weak F–F bond allows for homolytic cleavage. The reaction of F2 with organic compounds is, however, highly exothermic and can lead to non-selective fluorinations and C–C cleavage, as well as explosions. Only a few selective radical fluorination methods have been reported. The use of fluorine for radical fluorination is mainly limited to perfluorination reactions. O–F reagents The O–F bond of hypofluorites is relatively weak. For trifluoromethyl hypofluorite (CF3OF), it has been estimated to be . The ability of trifluoromethyl hypofluorite to transfer fluorine to alkyl radicals is notably demonstrated by reacting independently generated ethyl radicals, formed from ethene and tritium, in the presence of CF3OF. The high reactivity of hypofluorites has limited their application to selective radical fluorination. They can, however, be used as radical initiators for polymerization. XeF2 Xenon difluoride (XeF2) has mainly been used for radical fluorination in radical decarboxylative fluorination reactions. In this Hunsdiecker-type reaction, xenon difluoride is used both to generate the radical intermediate and as the fluorine transfer source. XeF2 can also be used to generate aryl radicals from arylsilanes, and act as an atomic fluorine source to furnish aryl fluorides. N–F reagents Selectfluor and N-fluorobenzenesulfonimide (NFSI) are traditionally used as electrophilic sources of fluorine, but their ability to transfer fluorine to alkyl radicals has recently been demonstrated. They are now commonly used as fluorine transfer agents to alkyl radicals. Others Examples of radical fluorination using bromine trifluoride (BrF3) and fluorinated solvents have been reported. Recent examples in radical fluorination suggest that in-situ generated metal fluoride complexes can also act as fluorine transfer agents to alkyl radicals. Radical fluorination methodologies Decarboxylative fluorination The thermolysis of t-butyl peresters has been used to generate alkyl radicals in the presence of NFSI and Selectfluor. The radical intermediates were efficiently fluorinated, demonstrating the ability of the two electrophilic fluorinating agents to transfer fluorine to alkyl radicals. Carboxylic acids can be used as radical precursors in radical fluorination methods. 
Metal catalysts such as silver and manganese have been used to induce the fluorodecarboxylation. The fluorodecarboxylation of carboxylic acids can also be triggered using photoredox catalysis. More specifically, phenoxyacetic acid derivatives have been shown to undergo fluorodecarboxylation when directly exposed to ultraviolet irradiation or via the use of a photosensitizer. Radical fluorination of alkenes Alkyl radicals generated from radical additions to alkenes have also been fluorinated. Hydrides and nitrogen-, carbon-, and phosphorus-centered radicals have been employed, yielding a wide range of fluorinated difunctionalized compounds. Fluorination of boronic acid derivatives Alkyl fluorides have been synthesized via radicals generated from boronic acid derivatives using silver. C(sp3)–H fluorination One major advantage of radical fluorination is that it allows the direct fluorination of remote C–H bonds. Metal catalysts such as manganese, copper, and tungsten have been used to promote the reaction. Metal-free C(sp3)–H fluorinations rely on the use of radical initiators (triethylborane, persulfates or N-oxyl radicals) or organic photocatalysts. Some methods have also been developed to selectively fluorinate benzylic C–H bonds. C–C bond activation Cyclobutanols and cyclopropanols have been used as radical precursors for the synthesis of β- or γ-fluoroketones. The strained rings undergo C–C bond cleavage in presence of a silver or an iron catalyst or when exposed to ultraviolet light in presence of a photosensitizer. Potential applications One potential application of radical fluorination is for efficiently accessing novel moieties to serve as building blocks in medicinal chemistry. Derivatives of propellane with reactive functional groups, such as the hydrochloride salt of 3-fluorobicyclo[1.1.1]pentan-1-amine, are accessible by this approach. References Free radical reactions Organofluorides
Radical fluorination
[ "Chemistry" ]
1,321
[ "Free radical reactions", "Organic reactions" ]
28,367,322
https://en.wikipedia.org/wiki/N-flake
An n-flake, polyflake, or Sierpinski n-gon, is a fractal constructed starting from an n-gon. This n-gon is replaced by a flake of smaller n-gons, such that the scaled polygons are placed at the vertices, and sometimes in the center. This process is repeated recursively to result in the fractal. Typically, there is also the restriction that the n-gons must touch yet not overlap. In two dimensions The most common variety of n-flake is two-dimensional (in terms of its topological dimension) and is formed of polygons. The four most common special cases are formed with triangles, squares, pentagons, and hexagons, but the construction can be extended to any polygon. Its boundary is the von Koch curve of varying types – depending on the n-gon – and infinitely many Koch curves are contained within. The fractals occupy zero area yet have an infinite perimeter. The scale factor r for any n-flake is r = 1/(2(1 + Σ cos(2πk/n))), where the sum runs over k = 1, ..., ⌊n/4⌋, the cosine is evaluated in radians, and n is the number of sides of the n-gon. The Hausdorff dimension of an n-flake is log m / log(1/r), where m is the number of polygons in each individual flake and r is the scale factor. Sierpinski triangle The Sierpinski triangle is an n-flake formed by successive flakes of three triangles. Each flake is formed by placing triangles scaled by 1/2 in each corner of the triangle they replace. Its Hausdorff dimension is equal to log 3/log 2 ≈ 1.585. This value is obtained because each iteration has 3 triangles that are scaled by 1/2. Vicsek fractal If a Sierpinski 4-gon were constructed from the given definition, the scale factor would be 1/2 and the fractal would simply be a square. A more interesting alternative, the Vicsek fractal, rarely called a quadraflake, is formed by successive flakes of five squares scaled by 1/3. Each flake is formed either by placing a scaled square in each corner and one in the center, or one on each side of the square and one in the center. Its Hausdorff dimension is equal to log 5/log 3 ≈ 1.4650. This value is obtained because each iteration has 5 squares that are scaled by 1/3. The boundary of the Vicsek fractal is a Type 1 quadratic Koch curve. Pentaflake A pentaflake, or Sierpinski pentagon, is formed by successive flakes of six regular pentagons. Each flake is formed by placing a pentagon in each corner and one in the center. Its Hausdorff dimension is equal to log 6/log(1 + φ) ≈ 1.8617, where φ is the golden ratio. This value is obtained because each iteration has 6 pentagons that are scaled by 1/(1 + φ). The boundary of a pentaflake is the Koch curve of 72 degrees. There is also a variation of the pentaflake that has no central pentagon. Its Hausdorff dimension equals log 5/log(1 + φ) ≈ 1.6723. This variation still contains infinitely many Koch curves, but they are somewhat more visible. Concentric patterns of pentaflake-boundary-shaped tiles can cover the plane, with the central point being covered by a third shape formed of segments of 72-degree Koch curve, also with 5-fold rotational and reflective symmetry. Hexaflake A hexaflake is formed by successive flakes of seven regular hexagons. Each flake is formed by placing a scaled hexagon in each corner and one in the center. Each iteration has 7 hexagons that are scaled by 1/3. Therefore the hexaflake has 7^(n−1) hexagons in its nth iteration, and its Hausdorff dimension is equal to log 7/log 3 ≈ 1.7712. The boundary of a hexaflake is the standard Koch curve of 60 degrees, and infinitely many Koch snowflakes are contained within. Also, the projection of the Cantor cube onto the plane orthogonal to its main diagonal is a hexaflake. The hexaflake has been applied in the design of antennas and optical fibers. 
Like the pentaflake, there is also a variation of the hexaflake, called the Sierpinski hexagon, that has no central hexagon. Its Hausdorff dimension equals log 6/log 3 ≈ 1.6309. This variation still contains infinitely many Koch curves of 60 degrees. Polyflake n-flakes of higher polygons also exist, though they are less common and usually do not have a central polygon. [If a central polygon is generated, the scale factor differs for odd and even n.] Some examples are shown below: the 7-flake through 12-flake. While it may not be obvious, these higher polyflakes still contain infinitely many Koch curves, but the angle of the Koch curves decreases as n increases. Their Hausdorff dimensions are slightly more difficult to calculate than those of lower n-flakes because their scale factor is less obvious. However, the Hausdorff dimension is always less than two but no less than one. An interesting limiting case is the ∞-flake: as the value of n increases, an n-flake's Hausdorff dimension approaches 1. In three dimensions n-flakes can be generalized to higher dimensions, in particular to a topological dimension of three. Instead of polygons, regular polyhedra are iteratively replaced. However, while there are an infinite number of regular polygons, there are only five regular, convex polyhedra. Because of this, three-dimensional n-flakes are also called platonic solid fractals. In three dimensions, the fractals' volume is zero. Sierpinski tetrahedron A Sierpinski tetrahedron is formed by successive flakes of four regular tetrahedra. Each flake is formed by placing a tetrahedron scaled by 1/2 in each corner. Its Hausdorff dimension is equal to log 4/log 2, which is exactly equal to 2. On every face there is a Sierpinski triangle and infinitely many are contained within. Hexahedron flake A hexahedron, or cube, flake defined in the same way as the Sierpinski tetrahedron is simply a cube and is not interesting as a fractal. However, there are two pleasing alternatives. One is the Menger sponge, where every cube is replaced by a three-dimensional ring of cubes. Its Hausdorff dimension is log 20/log 3 ≈ 2.7268. Another hexahedron flake can be produced in a manner similar to the Vicsek fractal extended to three dimensions. Every cube is divided into 27 smaller cubes and the center cross is retained, which is the opposite of the Menger sponge, where the cross is removed. However, it is not the Menger sponge complement. Its Hausdorff dimension is log 7/log 3 ≈ 1.7712, because a cross of 7 cubes, each scaled by 1/3, replaces each cube. Octahedron flake An octahedron flake, or Sierpinski octahedron, is formed by successive flakes of six regular octahedra. Each flake is formed by placing an octahedron scaled by 1/2 in each corner. Its Hausdorff dimension is equal to log 6/log 2 ≈ 2.5849. On every face there is a Sierpinski triangle and infinitely many are contained within. Dodecahedron flake A dodecahedron flake, or Sierpinski dodecahedron, is formed by successive flakes of twenty regular dodecahedra. Each flake is formed by placing a dodecahedron scaled by 1/(2 + φ) in each corner. Its Hausdorff dimension is equal to log 20/log(2 + φ) ≈ 2.3296. Icosahedron flake An icosahedron flake, or Sierpinski icosahedron, is formed by successive flakes of twelve regular icosahedra. Each flake is formed by placing an icosahedron scaled by 1/(1 + φ) in each corner. Its Hausdorff dimension is equal to log 12/log(1 + φ) ≈ 2.5819. 
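The two formulas above (the scale factor and the dimension log m/log(1/r)) are easy to check numerically against the values quoted for the Sierpinski triangle, the Vicsek fractal, the pentaflake, and the hexaflake. The sketch below is an illustrative reimplementation, not code from any cited source; the polygon counts m (n copies, or n + 1 with a central copy) follow the constructions described in the text.

```python
# Sketch: compute n-flake scale factors and Hausdorff dimensions, and check
# them against the values quoted in the article. The scale-factor formula
# r = 1 / (2 * (1 + sum_{k=1..floor(n/4)} cos(2*pi*k/n))) reproduces the
# quoted 1/2, 1/(1+phi), and 1/3 values.
import math

def scale_factor(n: int) -> float:
    """Side ratio of the sub-polygons in an n-flake whose copies just touch."""
    s = sum(math.cos(2 * math.pi * k / n) for k in range(1, n // 4 + 1))
    return 1.0 / (2.0 * (1.0 + s))

def hausdorff_dimension(m: int, r: float) -> float:
    """dim = log m / log(1/r) for m copies scaled by r."""
    return math.log(m) / math.log(1.0 / r)

cases = {
    "Sierpinski triangle": (3, scale_factor(3)),  # 3 copies at 1/2 -> 1.585
    "Vicsek fractal":      (5, 1.0 / 3.0),        # 5 copies at 1/3 -> 1.4650
    "pentaflake":          (6, scale_factor(5)),  # 6 copies        -> 1.8617
    "hexaflake":           (7, scale_factor(6)),  # 7 copies at 1/3 -> 1.7712
}
for name, (m, r) in cases.items():
    print(f"{name:20s} r = {r:.5f}  dim = {hausdorff_dimension(m, r):.4f}")
```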
See also List of fractals by Hausdorff dimension References External links Quadraflakes, Pentaflakes, Hexaflakes and more – includes Mathematica code to generate these fractals Javascript for covering the plane with 5-fold symmetric Pentaflake tiles. Fractals Fractal curves
N-flake
[ "Mathematics" ]
1,772
[ "Mathematical analysis", "Functions and mappings", "Mathematical objects", "Fractals", "Mathematical relations" ]
28,370,482
https://en.wikipedia.org/wiki/Caustic%20embrittlement
Caustic embrittlement is the phenomenon in which the material of a boiler becomes brittle due to the accumulation of caustic substances. Cause As water evaporates in the boiler, the concentration of sodium carbonate increases. In high-pressure boilers, sodium carbonate is used in the softening of water by the lime–soda process, and as a result some sodium carbonate may be left behind in the water. As the concentration of sodium carbonate increases, it undergoes hydrolysis to form sodium hydroxide, which makes the water alkaline. This alkaline water enters minute cracks in the inner walls of the boiler by capillary action. Inside the cracks, the water evaporates and the amount of hydroxide keeps increasing progressively. The concentrated, highly stressed area works as the anode and the dilute area works as the cathode. At the anode, sodium hydroxide attacks the surrounding material, dissolving the iron of the boiler as sodium ferrate and forming rust. This causes embrittlement of boiler parts such as rivets, bends and joints, which are under stress. Prevention Caustic embrittlement can be prevented by using sodium phosphate (Na3PO4) instead of sodium carbonate as the softening reagent. Its molecules enter the hair-line cracks and block them, so that any sodium hydroxide present cannot come into contact with the iron, and no reaction takes place. Adding tannin or lignin to boiler water also blocks the hair-line cracks and prevents infiltration of NaOH into these areas. Adding Na2SO4 to boiler water likewise blocks the hair-line cracks. References Further reading Corrosion
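For reference, the hydrolysis step described in the article above is usually written with the following equation in corrosion texts; the stoichiometry shown here is the commonly cited one and is supplied as an illustration rather than taken from this article.

```latex
\mathrm{Na_2CO_3 + H_2O \;\longrightarrow\; 2\,NaOH + CO_2}
```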
Caustic embrittlement
[ "Chemistry", "Materials_science" ]
348
[ "Metallurgy", "Corrosion", "Electrochemistry", "Electrochemistry stubs", "Materials degradation", "Physical chemistry stubs", "Chemical process stubs" ]
25,140,222
https://en.wikipedia.org/wiki/Joint%20spectral%20radius
In mathematics, the joint spectral radius is a generalization of the classical notion of spectral radius of a matrix to sets of matrices. In recent years this notion has found applications in a large number of engineering fields and is still a topic of active research. General description The joint spectral radius of a set of matrices is the maximal asymptotic growth rate of products of matrices taken in that set. For a finite (or more generally compact) set of matrices M, the joint spectral radius is defined as ρ(M) = lim_{k→∞} max { ‖A_1 A_2 ⋯ A_k‖^{1/k} : A_i ∈ M }. It can be proved that the limit exists and that the quantity actually does not depend on the chosen matrix norm (this is true for any norm but particularly easy to see if the norm is sub-multiplicative). The joint spectral radius was introduced in 1960 by Gian-Carlo Rota and Gilbert Strang, two mathematicians from MIT, but started attracting attention with the work of Ingrid Daubechies and Jeffrey Lagarias. They showed that the joint spectral radius can be used to describe smoothness properties of certain wavelet functions. A wide range of applications have been proposed since then. It is known that the joint spectral radius is NP-hard to compute or to approximate, even when the set consists of only two matrices, all of whose nonzero entries are constrained to be equal. Moreover, the question "is ρ ≤ 1?" is an undecidable problem. Nevertheless, in recent years much progress has been made on its understanding, and it appears that in practice the joint spectral radius can often be computed to satisfactory precision, and that it can moreover bring interesting insight into engineering and mathematical problems. Computation Approximation algorithms In spite of the negative theoretical results on the computability of the joint spectral radius, methods have been proposed that perform well in practice. Algorithms are even known which can reach an arbitrary accuracy in an a priori computable amount of time. These algorithms can be seen as trying to approximate the unit ball of a particular vector norm, called the extremal norm. One generally distinguishes between two families of such algorithms: the first family, called polytope norm methods, constructs the extremal norm by computing long trajectories of points. An advantage of these methods is that in favorable cases they can find the exact value of the joint spectral radius and provide a certificate that this is the exact value. The second family of methods approximates the extremal norm with modern optimization techniques, such as ellipsoid norm approximation, semidefinite programming, sum of squares, and conic programming. The advantage of these methods is that they are easy to implement, and in practice they generally provide the best bounds on the joint spectral radius. The finiteness conjecture Related to the computability of the joint spectral radius is the following conjecture: "For any finite set of matrices there is a product A_1 A_2 ⋯ A_t of matrices in this set such that ρ(A_1 A_2 ⋯ A_t)^{1/t} = ρ(M)." In the above equation ρ(A_1 A_2 ⋯ A_t) refers to the classical spectral radius of the matrix A_1 A_2 ⋯ A_t. This conjecture, proposed in 1995, was proven to be false in 2003. The counterexample provided in that reference uses advanced measure-theoretical ideas. Subsequently, many other counterexamples have been provided, including an elementary counterexample that uses simple combinatorial properties of matrices and a counterexample based on dynamical systems properties. Recently an explicit counterexample has also been proposed. 
Many questions related to this conjecture are still open, such as the question of whether it holds for pairs of binary matrices. Applications The joint spectral radius was introduced for its interpretation as a stability condition for discrete-time switching dynamical systems. Indeed, the switching system defined by x_{t+1} = A_{σ(t)} x_t, with A_{σ(t)} ∈ M at every step t, is stable if and only if ρ(M) < 1. The joint spectral radius became popular when Ingrid Daubechies and Jeffrey Lagarias showed that it rules the continuity of certain wavelet functions. Since then, it has found many applications, ranging from number theory and information theory to autonomous agents consensus and combinatorics on words. Related notions The joint spectral radius is the generalization of the spectral radius of a matrix to a set of several matrices. However, many more quantities can be defined when considering a set of matrices: The joint spectral subradius characterizes the minimal rate of growth of products in the semigroup generated by the set. The p-radius characterizes the rate of growth of the average of the norms of the products in the semigroup. The Lyapunov exponent of the set of matrices characterizes the rate of growth of the geometric average. References Further reading Control theory Linear algebra
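A crude way to obtain two-sided bounds on the joint spectral radius follows directly from the definition given in the article above: for any product length k, the maximum of ‖P‖^{1/k} over length-k products is an upper bound, and the maximum of ρ(P)^{1/k} is a lower bound, both converging as k grows. The sketch below is an illustrative brute-force implementation of these bounds using NumPy (an assumed dependency), not one of the polytope-norm or SDP methods mentioned above; its cost grows exponentially with k, so it is only usable for tiny sets and short products. The pair of matrices is a hypothetical toy example.

```python
# Brute-force bounds on the joint spectral radius of a finite set of matrices:
#   lower_k = max over length-k products P of rho(P)**(1/k)   (spectral radius)
#   upper_k = max over length-k products P of ||P||**(1/k)    (spectral norm)
# Both converge to the JSR as k -> infinity; cost is exponential in k.
from itertools import product
import numpy as np

def jsr_bounds(matrices, k):
    lower, upper = 0.0, 0.0
    for combo in product(matrices, repeat=k):
        P = np.linalg.multi_dot(combo) if k > 1 else combo[0]
        rho = max(abs(np.linalg.eigvals(P)))   # classical spectral radius of P
        nrm = np.linalg.norm(P, 2)             # sub-multiplicative spectral norm
        lower = max(lower, rho ** (1.0 / k))
        upper = max(upper, nrm ** (1.0 / k))
    return lower, upper

# Toy example (hypothetical pair of matrices, chosen only for illustration).
A0 = np.array([[1.0, 1.0], [0.0, 1.0]])
A1 = np.array([[1.0, 0.0], [1.0, 1.0]])
for k in (1, 2, 4, 8):
    lo, hi = jsr_bounds([A0, A1], k)
    print(f"k={k}: {lo:.4f} <= JSR <= {hi:.4f}")
```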
Joint spectral radius
[ "Mathematics" ]
919
[ "Applied mathematics", "Control theory", "Linear algebra", "Algebra", "Dynamical systems" ]
25,140,586
https://en.wikipedia.org/wiki/Conjoined%20gene
A conjoined gene (CG) is defined as a gene which gives rise to transcripts by combining at least part of one exon from each of two or more distinct known (parent) genes that lie on the same chromosome, are in the same orientation, and often (95%) translate independently into different proteins. In some cases, the transcripts formed by CGs are translated to form chimeric or completely novel proteins. Several alternative names are used for conjoined genes, including combined gene, complex gene, fusion gene, fusion protein, read-through transcript, co-transcribed genes, bridged genes, spanning genes, hybrid genes, locus-spanning transcripts, etc. At present, 800 CGs have been identified in the entire human genome by different research groups across the world, including Prakash et al., Akiva et al., Parra et al., Kim et al., and in the 1% of the human genome covered by the ENCODE pilot project. 36% of all these CGs could be validated experimentally using RT-PCR and sequencing techniques. However, only a very limited number of these CGs are found in public human genome resources such as the Entrez Gene database, the UCSC Genome Browser and the Vertebrate Genome Annotation (Vega) database. More than 70% of human conjoined genes are found to be conserved across other vertebrate genomes, with higher-order vertebrates showing more conservation, including the chimpanzee, the closest living relative of humans. Formation of CGs is not limited to the human genome; some CGs have also been identified in other eukaryotic genomes, including mouse and Drosophila. There are a few web resources which include information about some CGs in addition to other fusion genes, for example ChimerDB and HYBRIDdb. Another database, ConjoinG, is a comprehensive resource dedicated only to the 800 conjoined genes identified in the entire human genome. See also Gene expression References Genes Gene expression
Conjoined gene
[ "Chemistry", "Biology" ]
419
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
25,143,110
https://en.wikipedia.org/wiki/European%20Physical%20Journal%20A
The European Physical Journal A: Hadrons and Nuclei is an academic journal, recognized by the European Physical Society, presenting new and original research results in a variety of formats, including Regular Articles, Reviews, Tools for Experiment and Theory/Scientific Notes and Letters. Topics covered include: Hadron Physics Structure and Dynamics of Hadrons Baryon and Meson Spectroscopy Hadronic and Electroweak Interactions of Hadrons Nonperturbative Approaches to QCD Phenomenological Approaches to Hadron Physics Nuclear Physics Nuclear Structure and Reactions Structure and function of nanostructures Few-Body and Many-Body Systems Heavy-Ion Physics Hypernuclei Radioactive Beams Nuclear Astrophysics History Prior to 1998, the journal was named Zeitschrift für Physik A Hadrons and Nuclei. Thomas Walcher's term as Editor-in-Chief of EPJ A came to an end in 2006. In January 2007, Enzo de Sanctis started as new Editor-in-Chief and he was joined in July that year by Ulf-G. Meißner, who took charge of the theoretical papers while the experimental papers would be handled by de Sanctis. See also European Physical Journal Physics journals EDP Sciences academic journals Springer Science+Business Media academic journals Academic journals established in 1998 Nuclear physics journals
European Physical Journal A
[ "Physics" ]
265
[ "Nuclear physics journals", "Nuclear and atomic physics stubs", "Nuclear physics" ]
26,551,602
https://en.wikipedia.org/wiki/Limit%20%28mathematics%29
In mathematics, a limit is the value that a function (or sequence) approaches as the argument (or index) approaches some value. Limits of functions are essential to calculus and mathematical analysis, and are used to define continuity, derivatives, and integrals. The concept of a limit of a sequence is further generalized to the concept of a limit of a topological net, and is closely related to limit and direct limit in category theory. The limit inferior and limit superior provide generalizations of the concept of a limit which are particularly relevant when the limit at a point may not exist. Notation In formulas, a limit of a function is usually written as lim_{x→c} f(x) = L and is read as "the limit of f of x as x approaches c equals L". This means that the value of the function f(x) can be made arbitrarily close to L, by choosing x sufficiently close to c. Alternatively, the fact that a function f approaches the limit L as x approaches c is sometimes denoted by a right arrow (→), as in f(x) → L as x → c, which reads "f of x tends to L as x tends to c". History According to Hankel (1871), the modern concept of limit originates from Proposition X.1 of Euclid's Elements, which forms the basis of the Method of exhaustion found in Euclid and Archimedes: "Two unequal magnitudes being set out, if from the greater there is subtracted a magnitude greater than its half, and from that which is left a magnitude greater than its half, and if this process is repeated continually, then there will be left some magnitude less than the lesser magnitude set out." Grégoire de Saint-Vincent gave the first definition of limit (terminus) of a geometric series in his work Opus Geometricum (1647): "The terminus of a progression is the end of the series, which none progression can reach, even not if she is continued in infinity, but which she can approach nearer than a given segment." The modern definition of a limit goes back to Bernard Bolzano who, in 1817, developed the basics of the epsilon-delta technique to define continuous functions. However, his work remained unknown to other mathematicians until thirty years after his death. Augustin-Louis Cauchy in 1821, followed by Karl Weierstrass, formalized the definition of the limit of a function, which became known as the (ε, δ)-definition of limit. The modern notation of placing the arrow below the limit symbol is due to G. H. Hardy, who introduced it in his book A Course of Pure Mathematics in 1908. Types of limits In sequences Real numbers The expression 0.999... should be interpreted as the limit of the sequence 0.9, 0.99, 0.999, ... and so on. This sequence can be rigorously shown to have the limit 1, and therefore this expression is meaningfully interpreted as having the value 1. Formally, suppose a1, a2, ... is a sequence of real numbers. When the limit of the sequence exists, the real number L is the limit of this sequence if and only if for every real number ε > 0, there exists a natural number N such that for all n > N, we have |an − L| < ε. The common notation lim_{n→∞} an = L is read as: "The limit of an as n approaches infinity equals L" or "The limit as n approaches infinity of an equals L". The formal definition intuitively means that eventually, all elements of the sequence get arbitrarily close to the limit, since the absolute value |an − L| is the distance between an and L. Not every sequence has a limit. A sequence with a limit is called convergent; otherwise it is called divergent. One can show that a convergent sequence has only one limit. The limit of a sequence and the limit of a function are closely related. 
On one hand, the limit as approaches infinity of a sequence is simply the limit at infinity of a function —defined on the natural numbers . On the other hand, if X is the domain of a function and if the limit as approaches infinity of is for every arbitrary sequence of points in X − x0 which converges to , then the limit of the function as approaches is equal to . One such sequence would be . Infinity as a limit There is also a notion of having a limit "tend to infinity", rather than to a finite value . A sequence is said to "tend to infinity" if, for each real number , known as the bound, there exists an integer such that for each , That is, for every possible bound, the sequence eventually exceeds the bound. This is often written or simply . It is possible for a sequence to be divergent, but not tend to infinity. Such sequences are called oscillatory. An example of an oscillatory sequence is . There is a corresponding notion of tending to negative infinity, , defined by changing the inequality in the above definition to with A sequence with is called unbounded, a definition equally valid for sequences in the complex numbers, or in any metric space. Sequences which do not tend to infinity are called bounded. Sequences which do not tend to positive infinity are called bounded above, while those which do not tend to negative infinity are bounded below. Metric space The discussion of sequences above is for sequences of real numbers. The notion of limits can be defined for sequences valued in more abstract spaces, such as metric spaces. If is a metric space with distance function , and is a sequence in , then the limit (when it exists) of the sequence is an element such that, given , there exists an such that for each , we have An equivalent statement is that if the sequence of real numbers . Example: Rn An important example is the space of -dimensional real vectors, with elements where each of the are real, an example of a suitable distance function is the Euclidean distance, defined by The sequence of points converges to if the limit exists and . Topological space In some sense the most abstract space in which limits can be defined are topological spaces. If is a topological space with topology , and is a sequence in , then the limit (when it exists) of the sequence is a point such that, given a (open) neighborhood of , there exists an such that for every , is satisfied. In this case, the limit (if it exists) may not be unique. However it must be unique if is a Hausdorff space. Function space This section deals with the idea of limits of sequences of functions, not to be confused with the idea of limits of functions, discussed below. The field of functional analysis partly seeks to identify useful notions of convergence on function spaces. For example, consider the space of functions from a generic set to . Given a sequence of functions such that each is a function , suppose that there exists a function such that for each , Then the sequence is said to converge pointwise to . However, such sequences can exhibit unexpected behavior. For example, it is possible to construct a sequence of continuous functions which has a discontinuous pointwise limit. Another notion of convergence is uniform convergence. The uniform distance between two functions is the maximum difference between the two functions as the argument is varied. That is, Then the sequence is said to uniformly converge or have a uniform limit of if with respect to this distance. 
The uniform limit has "nicer" properties than the pointwise limit. For example, the uniform limit of a sequence of continuous functions is continuous. Many different notions of convergence can be defined on function spaces. This is sometimes dependent on the regularity of the space. Prominent examples of function spaces with some notion of convergence are Lp spaces and Sobolev space. In functions Suppose is a real-valued function and is a real number. Intuitively speaking, the expression means that can be made to be as close to as desired, by making sufficiently close to . In that case, the above equation can be read as "the limit of of , as approaches , is ". Formally, the definition of the "limit of as approaches " is given as follows. The limit is a real number so that, given an arbitrary real number (thought of as the "error"), there is a such that, for any satisfying , it holds that . This is known as the (ε, δ)-definition of limit. The inequality is used to exclude from the set of points under consideration, but some authors do not include this in their definition of limits, replacing with simply . This replacement is equivalent to additionally requiring that be continuous at . It can be proven that there is an equivalent definition which makes manifest the connection between limits of sequences and limits of functions. The equivalent definition is given as follows. First observe that for every sequence in the domain of , there is an associated sequence , the image of the sequence under . The limit is a real number so that, for all sequences , the associated sequence . One-sided limit It is possible to define the notion of having a "left-handed" limit ("from below"), and a notion of a "right-handed" limit ("from above"). These need not agree. An example is given by the positive indicator function, , defined such that if , and if . At , the function has a "left-handed limit" of 0, a "right-handed limit" of 1, and its limit does not exist. Symbolically, this can be stated as, for this example, , and , and from this it can be deduced doesn't exist, because . Infinity in limits of functions It is possible to define the notion of "tending to infinity" in the domain of , This could be considered equivalent to the limit as a reciprocal tends to 0: or it can be defined directly: the "limit of as tends to positive infinity" is defined as a value such that, given any real , there exists an so that for all , . The definition for sequences is equivalent: As , we have . In these expressions, the infinity is normally considered to be signed ( or ) and corresponds to a one-sided limit of the reciprocal. A two-sided infinite limit can be defined, but an author would explicitly write to be clear. It is also possible to define the notion of "tending to infinity" in the value of , Again, this could be defined in terms of a reciprocal: Or a direct definition can be given as follows: given any real number , there is a so that for , the absolute value of the function . A sequence can also have an infinite limit: as , the sequence . This direct definition is easier to extend to one-sided infinite limits. While mathematicians do talk about functions approaching limits "from above" or "from below", there is not a standard mathematical notation for this as there is for one-sided limits. 
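The one-sided behaviour described above is easy to see numerically. The sketch below is an illustration, not code from any source: it evaluates the positive indicator function near 0 from each side, and a reciprocal function at increasingly large arguments to illustrate a limit at infinity.

```python
# Numerical illustration of one-sided limits and a limit at infinity.
# H is the positive indicator function: the left-hand limit at 0 is 0, the
# right-hand limit is 1, so the two-sided limit at 0 does not exist.
def H(x: float) -> float:
    return 1.0 if x > 0 else 0.0

for h in (1e-1, 1e-3, 1e-6):
    print(f"H(-{h}) = {H(-h)},  H(+{h}) = {H(h)}")   # 0.0 vs 1.0

# f(x) = 1/x tends to 0 as x tends to +infinity: f(x) can be made as small
# as desired by taking x large enough.
f = lambda x: 1.0 / x
for x in (1e1, 1e4, 1e8):
    print(f"f({x:g}) = {f(x):.2e}")
```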
Nonstandard analysis In non-standard analysis (which involves a hyperreal enlargement of the number system), the limit of a sequence can be expressed as the standard part of the value of the natural extension of the sequence at an infinite hypernatural index n=H. Thus, Here, the standard part function "st" rounds off each finite hyperreal number to the nearest real number (the difference between them is infinitesimal). This formalizes the natural intuition that for "very large" values of the index, the terms in the sequence are "very close" to the limit value of the sequence. Conversely, the standard part of a hyperreal represented in the ultrapower construction by a Cauchy sequence , is simply the limit of that sequence: In this sense, taking the limit and taking the standard part are equivalent procedures. Limit sets Limit set of a sequence Let be a sequence in a topological space . For concreteness, can be thought of as , but the definitions hold more generally. The limit set is the set of points such that if there is a convergent subsequence with , then belongs to the limit set. In this context, such an is sometimes called a limit point. A use of this notion is to characterize the "long-term behavior" of oscillatory sequences. For example, consider the sequence . Starting from n=1, the first few terms of this sequence are . It can be checked that it is oscillatory, so has no limit, but has limit points . Limit set of a trajectory This notion is used in dynamical systems, to study limits of trajectories. Defining a trajectory to be a function , the point is thought of as the "position" of the trajectory at "time" . The limit set of a trajectory is defined as follows. To any sequence of increasing times , there is an associated sequence of positions . If is the limit set of the sequence for any sequence of increasing times, then is a limit set of the trajectory. Technically, this is the -limit set. The corresponding limit set for sequences of decreasing time is called the -limit set. An illustrative example is the circle trajectory: . This has no unique limit, but for each , the point is a limit point, given by the sequence of times . But the limit points need not be attained on the trajectory. The trajectory also has the unit circle as its limit set. Uses Limits are used to define a number of important concepts in analysis. Series A particular expression of interest which is formalized as the limit of a sequence is sums of infinite series. These are "infinite sums" of real numbers, generally written as This is defined through limits as follows: given a sequence of real numbers , the sequence of partial sums is defined by If the limit of the sequence exists, the value of the expression is defined to be the limit. Otherwise, the series is said to be divergent. A classic example is the Basel problem, where . Then However, while for sequences there is essentially a unique notion of convergence, for series there are different notions of convergence. This is due to the fact that the expression does not discriminate between different orderings of the sequence , while the convergence properties of the sequence of partial sums can depend on the ordering of the sequence. A series which converges for all orderings is called unconditionally convergent. It can be proven to be equivalent to absolute convergence. This is defined as follows. A series is absolutely convergent if is well defined. Furthermore, all possible orderings give the same value. Otherwise, the series is conditionally convergent. 
A surprising result for conditionally convergent series is the Riemann series theorem: depending on the ordering, the partial sums can be made to converge to any real number, as well as . Power series A useful application of the theory of sums of series is for power series. These are sums of series of the form Often is thought of as a complex number, and a suitable notion of convergence of complex sequences is needed. The set of values of for which the series sum converges is a circle, with its radius known as the radius of convergence. Continuity of a function at a point The definition of continuity at a point is given through limits. The above definition of a limit is true even if . Indeed, the function need not even be defined at . However, if is defined and is equal to , then the function is said to be continuous at the point . Equivalently, the function is continuous at if as , or in terms of sequences, whenever , then . An example of a limit where is not defined at is given below. Consider the function then is not defined (see Indeterminate form), yet as moves arbitrarily close to 1, correspondingly approaches 2: Thus, can be made arbitrarily close to the limit of 2—just by making sufficiently close to . In other words, This can also be calculated algebraically, as for all real numbers . Now, since is continuous in at 1, we can now plug in 1 for , leading to the equation In addition to limits at finite values, functions can also have limits at infinity. For example, consider the function where: As becomes extremely large, the value of approaches , and the value of can be made as close to as one could wish—by making sufficiently large. So in this case, the limit of as approaches infinity is , or in mathematical notation, Continuous functions An important class of functions when considering limits are continuous functions. These are precisely those functions which preserve limits, in the sense that if is a continuous function, then whenever in the domain of , then the limit exists and furthermore is . In the most general setting of topological spaces, a short proof is given below: Let be a continuous function between topological spaces and . By definition, for each open set in , the preimage is open in . Now suppose is a sequence with limit in . Then is a sequence in , and is some point. Choose a neighborhood of . Then is an open set (by continuity of ) which in particular contains , and therefore is a neighborhood of . By the convergence of to , there exists an such that for , we have . Then applying to both sides gives that, for the same , for each we have . Originally was an arbitrary neighborhood of , so . This concludes the proof. In real analysis, for the more concrete case of real-valued functions defined on a subset , that is, , a continuous function may also be defined as a function which is continuous at every point of its domain. Limit points In topology, limits are used to define limit points of a subset of a topological space, which in turn give a useful characterization of closed sets. In a topological space , consider a subset . A point is called a limit point if there is a sequence in such that . The reason why is defined to be in rather than just is illustrated by the following example. Take and . Then , and therefore is the limit of the constant sequence . But is not a limit point of . A closed set, which is defined to be the complement of an open set, is equivalently any set which contains all its limit points. 
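The continuity example above rests on a function that is undefined at the point but still has a limit there; in standard presentations this function is f(x) = (x² − 1)/(x − 1), which simplifies to x + 1 for x ≠ 1 and so has limit 2 at x = 1. The sketch below is an illustration under that assumption, not code from the article: it evaluates the function at points approaching 1 from both sides.

```python
# f(x) = (x**2 - 1)/(x - 1) is undefined at x = 1 (0/0), yet its values
# approach 2 as x approaches 1, since f(x) = x + 1 for every x != 1.
def f(x: float) -> float:
    return (x**2 - 1.0) / (x - 1.0)

for dx in (0.1, 0.01, 0.001, 1e-6):
    print(f"f(1 - {dx:g}) = {f(1 - dx):.6f}   f(1 + {dx:g}) = {f(1 + dx):.6f}")
# Both columns tend to 2, matching lim_{x->1} f(x) = 2.
```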
Derivative The derivative is defined formally as a limit. In the scope of real analysis, the derivative is first defined for real functions defined on a subset . The derivative at is defined as follows. If the limit of as exists, then the derivative at is this limit. Equivalently, it is the limit as of If the derivative exists, it is commonly denoted by . Properties Sequences of real numbers For sequences of real numbers, a number of properties can be proven. Suppose and are two sequences converging to and respectively. Sum of limits is equal to limit of sum Product of limits is equal to limit of product Inverse of limit is equal to limit of inverse (as long as ) Equivalently, the function is continuous about nonzero . Cauchy sequences A property of convergent sequences of real numbers is that they are Cauchy sequences. The definition of a Cauchy sequence is that for every real number , there is an such that whenever , Informally, for any arbitrarily small error , it is possible to find an interval of diameter such that eventually the sequence is contained within the interval. Cauchy sequences are closely related to convergent sequences. In fact, for sequences of real numbers they are equivalent: any Cauchy sequence is convergent. In general metric spaces, it continues to hold that convergent sequences are also Cauchy. But the converse is not true: not every Cauchy sequence is convergent in a general metric space. A classic counterexample is the rational numbers, , with the usual distance. The sequence of decimal approximations to , truncated at the th decimal place is a Cauchy sequence, but does not converge in . A metric space in which every Cauchy sequence is also convergent, that is, Cauchy sequences are equivalent to convergent sequences, is known as a complete metric space. One reason Cauchy sequences can be "easier to work with" than convergent sequences is that they are a property of the sequence alone, while convergent sequences require not just the sequence but also the limit of the sequence . Order of convergence Beyond whether or not a sequence converges to a limit , it is possible to describe how fast a sequence converges to a limit. One way to quantify this is using the order of convergence of a sequence. A formal definition of order of convergence can be stated as follows. Suppose is a sequence of real numbers which is convergent with limit . Furthermore, for all . If positive constants and exist such that then is said to converge to with order of convergence . The constant is known as the asymptotic error constant. Order of convergence is used for example the field of numerical analysis, in error analysis. Computability Limits can be difficult to compute. There exist limit expressions whose modulus of convergence is undecidable. In recursion theory, the limit lemma proves that it is possible to encode undecidable problems using limits. There are several theorems or tests that indicate whether the limit exists. These are known as convergence tests. Examples include the ratio test and the squeeze theorem. However they may not tell how to compute the limit. See also Asymptotic analysis: a method of describing limiting behavior Big O notation: used to describe the limiting behavior of a function when the argument tends towards a particular value or infinity Banach limit defined on the Banach space that extends the usual limits. 
Convergence of random variables Convergent matrix Limit in category theory Direct limit Inverse limit Limit of a function One-sided limit: either of the two limits of functions of a real variable x, as x approaches a point from above or below List of limits: list of limits for common functions Squeeze theorem: finds a limit of a function via comparison with two other functions Limit superior and limit inferior Modes of convergence An annotated index Notes References External links Convergence (mathematics) Real analysis Asymptotic analysis Differential calculus General topology
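The definition of order of convergence above lends itself to a direct numerical estimate: taking logarithms of |x_{n+1} − L| ≈ μ|x_n − L|^q for consecutive steps isolates q. The sketch below applies this to Newton's method for √2, which is used here purely as a convenient test sequence; the choice of sequence is an illustrative assumption, not something fixed by the text.

```python
# Minimal numerical sketch: estimating the order of convergence q from
# |x_{n+1} - L| ~ mu * |x_n - L|^q, using Newton's method for sqrt(2)
# as a convenient convergent test sequence.
import math

def newton_sqrt2_iterates(n):
    x, xs = 1.0, []
    for _ in range(n):
        x = 0.5 * (x + 2.0 / x)   # Newton step for f(x) = x^2 - 2
        xs.append(x)
    return xs

L = math.sqrt(2)
errs = [abs(x - L) for x in newton_sqrt2_iterates(4)]

# q is estimated from consecutive error ratios:
# q ~ log(e_{n+1} / e_n) / log(e_n / e_{n-1})
for n in range(1, len(errs) - 1):
    if errs[n + 1] == 0:
        break
    q = math.log(errs[n + 1] / errs[n]) / math.log(errs[n] / errs[n - 1])
    print(f"step {n}: estimated order q ~ {q:.2f}")
# The estimates cluster near 2, the quadratic convergence expected of Newton's method.
```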
Limit (mathematics)
[ "Mathematics" ]
4,432
[ "Sequences and series", "General topology", "Functions and mappings", "Convergence (mathematics)", "Mathematical structures", "Mathematical analysis", "Calculus", "Mathematical objects", "Topology", "Mathematical relations", "Asymptotic analysis", "Differential calculus" ]
26,556,669
https://en.wikipedia.org/wiki/Theory%20of%20sonics
The theory of sonics is a branch of continuum mechanics which describes the transmission of mechanical energy through vibrations. The birth of the theory of sonics is the publication of the book A treatise on transmission of power by vibrations in 1918 by the Romanian scientist Gogu Constantinescu. ONE of the fundamental problems of mechanical engineering is that of transmitting energy found in nature, after suitable transformation, to some point at which can be made available for performing useful work. The methods of transmitting power known and practised by engineers are broadly included in two classes: mechanical including hydraulic, pneumatic and wire rope methods; and electrical methods....According to the new system, energy is transmitted from one point to another, which may be at a considerable distance, by means of impressed variations of pressure or tension producing longitudinal vibrations in solid, liquid or gaseous columns. The energy is transmitted by periodic changes of pressure and volume in the longitudinal direction and may be described as wave transmission of power, or mechanical wave transmission. – Gogu Constantinescu Later on the theory was expanded in electro-sonic, hydro-sonic, sonostereo-sonic and thermo-sonic. The theory was the first chapter of compressible flow applications and has stated for the first time the mathematical theory of compressible fluid, and was considered a branch of continuum mechanics. The laws discovered by Constantinescu, used in sonicity are the same with the laws used in electricity. Book chapters The book A treatise on transmission of power by vibrations has the following chapters: Introductory Elementary physical principles Definitions Effects of capacity, inertia, friction, and leakage on alternating currents Waves in long pipes Alternating in long pipes allowing for Friction Theory of displacements – motors Theory of resonators High-frequency currents Charged lines Transformers George Constantinescu defined his work as follow. Theory of sonics: applications The Constantinesco synchronization gear, used on military aircraft in order to allow them to target opponents without damaging their own propellers. Automatic gear Sonic Drilling, was one of the first applications developed by Constantinescu. A sonic drill head works by sending high frequency resonant vibrations down the drill string to the drill bit, while the operator controls these frequencies to suit the specific conditions of the soil/rock geology. Torque Converter. A mechanical application of sonic theory on the transmission of power by vibrations. Power is transmitted from the engine to the output shaft through a system of oscillating levers and inertias. Sonic Engine Elementary physical principles If v is the velocity of which waves travel along the pipe, and n the number of the revolutions of the crank a, then the wavelength λ is: Assuming that the pipe is finite and closed at the point r situated at a distance which is multiple of λ, and considering that the piston is smaller than wavelength, at r the wave compression is stopped and reflected, the reflected wave traveling back along the pipe. 
Definitions Alternating fluid currents Considering any flow or pipes, if: ω = the area section of the pipe measured in square centimeters; v = the velocity of the fluid at any moment in centimeters per second; and i = the flow of liquid in cubic centimeters per second, then we have: i = vω Assuming that the fluid current is produced by a piston having a simple harmonic movement, in a piston cylinder having a section of Ω square centimeters. If we have: r = the equivalent of driving crank in centimeters a = the angular velocity of the crank or the pulsations in radians per second. n = the number of crank rotations per second. Then: The flow from the cylinder to the pipe is: i = I sin(at+φ) Where: I = raΩ (the maximum alternating flow in square centimeters per second; the amplitude of the flow.) t = time in seconds φ = the angle of the phase If T= period of a complete alternation (one revolution of the crank) then: a = 2πn; where n = 1/T The effective current can be defined by the equation: and the effective velocity is : The stroke volume δ will be given by the relation: Alternating pressures The alternating pressures are very similar to alternating currents in electricity. In a pipe where the currents are flowing, we will have: ; where H is the maximum alternating pressure measured in kilograms per square centimeter. the angle of phase; representing the mean pressure in the pipe. Considering the above formulas: the minimum pressure is and maximum pressure is If p1 is the pressure at an arbitrary point and p2 pressure in another arbitrary point: The difference is defined as instantaneous hydromotive force between point p1 and p2, H representing the amplitude. The effective hydromotive force will be: Friction In alternating current flowing through a pipe, there is friction at the surface of the pipe and also in the liquid itself. Therefore, the relation between the hydromotive force and current can be written as: ; where R = coefficient of friction in Using experiments R may be calculated from formula: ; Where: is the density of the liquid in kg per cm.3 l is the length of the pipe in cm. g is the gravitational acceleration in cm. per sec.2 is the section of the pipe in square centimeters. veff is the effective velocity d is the internal diameter of the pipe in centimeters. for water (an approximation from experimental data). h is the instantaneous hydromotive force If we introduce in the formula, we get: which is equivalent to: ; introducing k in the formula results in For pipes with a greater diameter, a greater velocity can be achieved for same value of k. The loss of power due to friction is calculated by: , putting h = Ri results in: Therefore: Capacity and condensers Definition: Hydraulic condensers are appliances for making alterations in value of fluid currents, pressures or phases of alternating fluid currents. The apparatus usually consists of a mobile solid body, which divides the liquid column, and is fixed elastically in a middle position such that it follows the movements of the liquid column. The principal function of hydraulic condensers is to counteract inertia effects due to moving masses. Notes References https://archive.org/stream/theoryofwavetran00consrich#page/n3/mode/2up http://www.rexresearch.com/constran/1constran.htm Constantinesco, G. Theory of Sonics: A Treatise on Transmission of Power by Vibrations. The Admiralty, London, 1918. Constantinesco, G., Sonics. Trans. Soc. of Engineers, London, June 1959 Clark, R.Edison, The Man Who Made the Future. 
Macdonald and Jane's, London, 1977. McNeil, I., George Constantinesco, 1881–1965 and the Development of Sonic Power Transmission. Excerpt from volume 54, Trans. of the Newcomen Society, London, 1982–83. Constantinesco, G., A Hundred Years of Development in Mechanical Engineering. Trans. Soc. of Engineers, London, Sept. 1954. http://www.gs-harper.com/Mining_Research/Power/Sonics005.asp Constantinesco, G. Transmission of Power the Present, the Future. Paper read before the North East Coast Institution of Engineers and Shipbuilders in Newcastle upon Tyne, on 4 December 1925. Reprinted by order of the council. North East Coast Institution of Engineers and Shipbuilders, Newcastle upon Tyne, 1926. https://web.archive.org/web/20090603102058/http://www.rri.ro/arh-art.shtml?lang=1&sec=9&art=3596 http://www.utcluj.ro/download/doctorat/Rezumat_Carmen_Bal.pdf http://www.rexresearch.com/constran/1constran.htm http://imtuoradea.ro/auo.fmte/files-2008/MECANICA_files/MARCU%20FLORIN%201.pdf http://dynamicsflorio.webs.com/arotmm.htm George Constantinescu Mathematical physics Romanian inventions
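The quantities defined in the book's Definitions section translate directly into a few lines of arithmetic. In the sketch below, the flow amplitude I = raΩ and the instantaneous flow i = I sin(at + φ) follow the text; the effective (root-mean-square) current I/√2 and the friction loss W = R·I_eff² are filled in by analogy with alternating-current electricity, since the corresponding equations are not reproduced above, and all numerical values are invented for illustration.

```python
# Hedged illustration of the alternating fluid-current quantities. The rms
# relation and the friction-loss formula are assumptions borrowed from the
# AC-electricity analogy; every numerical value here is made up.
import math

# Piston/crank parameters (hypothetical values)
r      = 2.0                       # equivalent crank radius, cm
n_rev  = 10.0                      # crank revolutions per second
Omega  = 20.0                      # piston cylinder section, cm^2
a      = 2 * math.pi * n_rev       # angular velocity of the crank, rad/s

I_max  = r * a * Omega             # amplitude of the alternating flow, cm^3/s
I_eff  = I_max / math.sqrt(2)      # effective (rms) current, assumed AC analogy

def flow(t, phi=0.0):
    """Instantaneous flow i = I sin(at + phi)."""
    return I_max * math.sin(a * t + phi)

# Friction: with hydromotive force h = R * i, the mean power lost is W = R * I_eff^2.
R_f    = 0.05                      # assumed friction coefficient (illustrative only)
W_loss = R_f * I_eff ** 2

print(f"flow amplitude  I     = {I_max:.1f} cm^3/s")
print(f"effective flow  I_eff = {I_eff:.1f} cm^3/s")
print(f"friction loss   W     = {W_loss:.1f} (illustrative units)")
```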
Theory of sonics
[ "Physics", "Mathematics" ]
1,696
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
36,378,824
https://en.wikipedia.org/wiki/Magnesiopascoite
Magnesiopascoite is a bright orange mineral with formula Ca2Mg(V10O28)·16H2O. It was discovered in the U.S. state of Utah and formally described in 2008. The mineral's name derives from its status as the magnesium analogue of pascoite. Description Magnesiopascoite is a member of the pascoite group and is the magnesium analogue of pascoite. It is transparent and bright orange in color, occurring as intergrown, parallel stackings of crystals up to several millimeters in the largest dimension. The crystals vary from tabular to equant to prismatic. The mineral dissolves slowly in water and quickly in cold, dilute hydrochloric acid. It decomposes rapidly when mildly heated, likely as a result of dehydration. Structure and composition The crystal structure of magnesiopascoite consists of the decavanadate anion (V10O28)6− and interstitial {Ca2Mg(H2O)16}6+ consisting of Mg(H2O)6 octahedra and seven-fold coordinated CaO2(H2O)5. The structure differs from that of pascoite primarily in cation coordination in the interstitial complex. In addition to calcium and magnesium, magnesiopascoite contains minute quantities of zinc and cobalt. History Joe Marty discovered specimens of magnesiopascoite in San Juan County, Utah, in the Blue Cap mine and the nearby Vanadium Queen mine. The mineral was named "magnesiopascoite" because it is the magnesium analogue of pascoite. The mineral and name were approved by the IMA Commission on New Minerals, Nomenclature and Classification (IMA 2007-025). Magnesiopascoite was described in 2008 in the journal Canadian Mineralogist. The two cotype specimens are held at the Natural History Museum of Los Angeles County in the US State of California. Occurrence In the area of the type locality, the reducing environment caused by carbonaceous material in the Salt Wash and Brushy Basin members of the Morrison Formation precipitated uranium and vanadium minerals from solution. After mining, subsequent leaching and oxidation by groundwater created magnesiopascoite. The mineral has been found in association with gypsum, martyite, montroseite, pyrite and rossite. References Bibliography External links Photos of magnesiopascoite from Mindat.org Monoclinic minerals Minerals in space group 12 Calcium minerals Magnesium minerals Vanadate minerals 16 Minerals described in 2008
Magnesiopascoite
[ "Chemistry" ]
540
[ "Hydrate minerals", "Hydrates" ]
36,379,613
https://en.wikipedia.org/wiki/R-spondin%204
R-spondin 4 is a protein that in humans is encoded by the RSPO4 gene, located on chromosome 20. This gene encodes a member of the R-spondin family of proteins, which share a common domain organization consisting of a signal peptide, a cysteine-rich/furin-like domain, a thrombospondin domain and a C-terminal basic region. The encoded protein may be involved in activation of Wnt/beta-catenin signaling pathways. Mutations in this gene are associated with anonychia. Alternative splicing results in multiple transcript variants (provided by RefSeq, Sep 2009). References Further reading Genes on human chromosome 20 Glycoproteins Extracellular matrix proteins

R-spondin 4
[ "Chemistry" ]
151
[ "Glycoproteins", "Glycobiology" ]
36,382,567
https://en.wikipedia.org/wiki/Ycf4%20protein%20domain
In molecular biology, the Ycf4 protein is involved in the assembly of the photosystem I (PSI) complex, which is part of the energy-harvesting process of photosynthesis. Without Ycf4, photosynthesis would be inefficient, affecting plant growth. Ycf4 is located in the thylakoid membrane of the chloroplast and is important for the light-dependent reactions of photosynthesis. To date, three thylakoid proteins involved in the stable accumulation of PSI have been identified: BtpA (InterPro), Ycf3 and Ycf4. The Ycf4 protein is firmly associated with the thylakoid membrane, presumably through a transmembrane domain. Ycf4 co-fractionates with a protein complex larger than PSI upon sucrose density gradient centrifugation of solubilised thylakoids. Ycf is an acronym standing for "hypothetical chloroplast open reading frame". References Protein families Protein domains Photosynthesis
Ycf4 protein domain
[ "Chemistry", "Biology" ]
218
[ "Protein stubs", "Photosynthesis", "Protein classification", "Biochemistry stubs", "Protein domains", "Biochemistry", "Protein families", "Chemical process stubs" ]
36,383,032
https://en.wikipedia.org/wiki/Mass%20generation
In theoretical physics, a mass generation mechanism is a theory that describes the origin of mass from the most fundamental laws of physics. Physicists have proposed a number of models that advocate different views of the origin of mass. The problem is complicated because the primary role of mass is to mediate gravitational interaction between bodies, and no theory of gravitational interaction reconciles with the currently popular Standard Model of particle physics. There are two types of mass generation models: gravity-free models and models that involve gravity. Background Electroweak theory and the Standard Model The Higgs mechanism is based on a symmetry-breaking scalar field potential, such as the quartic. The Standard Model uses this mechanism as part of the Glashow–Weinberg–Salam model to unify electromagnetic and weak interactions. This model was one of several that predicted the existence of the scalar Higgs boson. Gravity-free models In these theories, as in the Standard Model itself, the gravitational interaction either is not involved or does not play a crucial role. Technicolor Technicolor models break electroweak symmetry through gauge interactions, which were originally modeled on quantum chromodynamics. Coleman-Weinberg mechanism Coleman–Weinberg mechanism generates mass through spontaneous symmetry breaking. Other theories Unparticle physics and the unhiggs models posit that the Higgs sector and Higgs boson are scaling invariant. UV-Completion by Classicalization, in which the unitarization of the WW scattering happens by creation of classical configurations. Symmetry breaking driven by non-equilibrium dynamics of quantum fields above the electroweak scale. Asymptotically safe weak interactions based on some nonlinear sigma models. Models of composite W and Z vector bosons. Top quark condensate. Gravitational models Extra-dimensional Higgsless models use the fifth component of the gauge fields in place of the Higgs fields. It is possible to produce electroweak symmetry breaking by imposing certain boundary conditions on the extra dimensional fields, increasing the unitarity breakdown scale up to the energy scale of the extra dimension. Through the AdS/QCD correspondence this model can be related to technicolor models and to UnHiggs models, in which the Higgs field is of unparticle nature. Unitary Weyl gauge. If one adds a suitable gravitational term to the standard model action with gravitational coupling, the theory becomes locally scale-invariant (i.e. Weyl-invariant) in the unitary gauge for the local SU(2). Weyl transformations act multiplicatively on the Higgs field, so one can fix the Weyl gauge by requiring that the Higgs scalar be a constant. Preon and models inspired by preons such as the Ribbon model of Standard Model particles by Sundance Bilson-Thompson, based in braid theory and compatible with loop quantum gravity and similar theories. This model not only explains the origin of mass, but also interprets electric charge as a topological quantity (twists carried on the individual ribbons), and colour charge as modes of twisting. In the theory of superfluid vacuum, masses of elementary particles arise from interaction with a physical vacuum, similarly to the gap generation mechanism in superfluids. The low-energy limit of this theory suggests an effective potential for the Higgs sector that is different from the Standard Model's, yet it yields the mass generation. 
Under certain conditions, this potential gives rise to an elementary particle with a role and characteristics similar to the Higgs boson. References Standard Model Physics beyond the Standard Model Mass
Mass generation
[ "Physics", "Mathematics" ]
723
[ "Standard Model", "Scalar physical quantities", "Physical quantities", "Quantity", "Mass", "Unsolved problems in physics", "Size", "Particle physics", "Wikipedia categories named after physical quantities", "Physics beyond the Standard Model", "Matter" ]
52,116,400
https://en.wikipedia.org/wiki/Alexandra%20Olaya-Castro
Alexandra Olaya-Castro is a Colombian-born theoretical physicist, currently a Professor in the Department of Physics and Astronomy at University College London. She is known for her work on the quantum physics of biomolecular processes, specifically for her research on quantum effects in photosynthesis. She was the recipient of the Maxwell Medal and Prize in 2016 "for her contributions to the theory of quantum effects in bio-molecular systems". Early life and education Olaya-Castro completed an undergraduate degree in Physics Education at Universidad Distrital Francisco José de Caldas and later obtained a Master of Science in Physics at Universidad de Los Andes in 2002. She then moved to the UK to pursue a doctorate in physics at Somerville College, Oxford, where she obtained her DPhil in Physics with a thesis titled “Quantum correlations in multi-qubit-cavity systems”, supervised by Neil F. Johnson. Research and career Following her DPhil at the University of Oxford, Olaya-Castro was awarded a Junior Research Fellowship by Trinity College, also at the University of Oxford, from 2005 to 2008. There she began her research on quantum effects in photosynthesis. In 2008, Olaya-Castro was awarded an EPSRC Career Acceleration Fellowship hosted by University College London, where she started an independent research group investigating problems at the interface of quantum science and biology. She obtained a permanent Lecturer position at UCL in 2011 and was promoted to Reader in 2015. In 2016 she became the recipient of the Maxwell Medal and Prize from the Institute of Physics for her contribution to the theoretical understanding of quantum effects in biomolecules. In 2018, Olaya-Castro was promoted to full Professor at UCL, and in 2019 she was also appointed as the first vice-Dean for Equality, Diversity and Inclusion in the Mathematical and Physical Sciences. Olaya-Castro's current research interests lie in the theoretical understanding of the quantum-to-classical transition and in how quantum science can contribute to new theoretical and experimental explorations of the dynamics and control of biomolecular processes. Teaching Olaya-Castro teaches the 4th-year course in Advanced Quantum Theory attended by intercollegiate students from University College London, King's College London, Queen Mary University of London and Royal Holloway. Public engagement In 2015, she delivered a public talk at the Royal Institution, which is available as a podcast. Olaya-Castro's research was showcased at the 2016 Royal Society Summer Science Exhibition. In 2016 Olaya-Castro delivered a TEDx talk advocating for breaking socioeconomic and gender stereotypes through exploring what she calls "option B"; the talk, in Spanish, is titled El poder de la opción B para romper estereotipos. Awards and honours In 2003, she was awarded the Arthur H Cooke Memorial Prize for distinguished work by a first-year student, Department of Physics, University of Oxford. In 2005, she won a Junior Research Fellowship at Trinity College, University of Oxford. In 2008, she was awarded an EPSRC Career Acceleration Fellowship to pursue independent research. In 2016, she was awarded the Maxwell Medal and Prize. In 2024, she was awarded the Freedom of the City of London. Selected publications The most cited publications by Olaya-Castro to date are: GD Scholes, GR Fleming, A Olaya-Castro, R Van Grondelle. Lessons from nature about solar light harvesting.
(2011) Nature chemistry 3 (10), 763-774 A Olaya-Castro, CF Lee, FF Olsen, NF Johnson. Efficiency of energy transfer in a light-harvesting system under quantum coherence. (2008) Physical Review B 78 (8), 085115 GD Scholes, GR Fleming, LX Chen, A Aspuru-Guzik, A Buchleitner. Using coherence to enhance function in chemical and biophysical systems. (2017) Nature 543 (7647), 647-656 A Kolli, EJ O’Reilly, GD Scholes, A Olaya-Castro. The fundamental role of quantized vibrations in coherent light harvesting by cryptophyte algae. (2012) The Journal of chemical physics 137 (17), 174109 EJ O’Reilly, A Olaya-Castro. Non-classicality of the molecular vibrations assisting exciton energy transfer at room temperature. (2014) Nature communications 5 (1), 1-10 A Olaya-Castro, GD Scholes. Energy transfer from Förster–Dexter theory to quantum coherent light-harvesting. (2011) International Reviews in Physical Chemistry 30 (1), 49-77 F Fassioli, A Olaya-Castro. Distribution of entanglement in light-harvesting complexes and their quantum efficiency. (2010) New Journal of Physics 12 (8), 085006 F Fassioli, A Nazir, A Olaya-Castro. Quantum state tuning of energy transfer in a correlated environment. (2010)The Journal of Physical Chemistry Letters 1 (14), 2139-2143 Personal life Olaya-Castro is the mother of two children. References External links Living people Quantum physicists Theoretical physicists Alumni of Somerville College, Oxford University of Los Andes (Colombia) alumni Maxwell Medal and Prize recipients 21st-century British physicists 21st-century British women scientists 1976 births Academics of University College London Colombian women physicists Francisco José de Caldas District University alumni
Alexandra Olaya-Castro
[ "Physics" ]
1,111
[ "Theoretical physics", "Quantum physicists", "Theoretical physicists", "Quantum mechanics" ]
52,117,185
https://en.wikipedia.org/wiki/Design%20with%20memory
Design with Memory (記憶のデザイン, Kioku no dezain) is a value-adding approach to sustainable product design and architecture that was developed by Japanese industrial design professional Fumikazu Masuda, professor at Tokyo Zokei University, and American architect Tom Johnson as a design criterion for use in the fifth and sixth rounds of the International Design Resource Awards Competition (IDRA) between 1999 and 2003. Introduced at the time of the rising global sustainability movement in the 1990s, the term proposes a new way of looking at design with recycled, re-used and sustainable materials and identifies four pathways for approaching design to achieve this result of added value. The first mention of the term "Design with Memory" in the U.S. was in ARCADE magazine (1997). In an article discussing winning entries to the first three rounds of the Competition, architect Johnson said, “We have come to call it 'Designing with Memory' because sustainable design is based on the recognition of our interdependent relationship with the natural world around us – something we forgot when we had designed products and architecture with the idea that their materials could be wasted and landfilled. It is also 'Designing with Memory' because it means thinking about where the materials come from, how they are used, and where they will go next.” The first mention of the term “Design with Memory” in Japan was at the Japanese Design Research Center exhibit in Niigata, Japan (1999). With heightened global interest in sustainable development during this time, the DRA competitions organized under the theme of Design with Memory were funded in part by the Japanese Ministry of International Trade and Industry (MITI), and the exhibits in Japan were part of a national program of design education with associated seminars. It is said to be the first international design competition under the theme of sustainability to be sponsored by the Japanese government. History Origin of the term The Design Resource Awards Competition (1994–2003) was begun by Johnson Design Studio with a grant from the State of Washington's Clean Washington Center, with the goal of encouraging the development of commercially viable products made from recycled and sustainably harvested materials. The State of Washington's pioneering recycling programs were gathering much mixed waste paper, construction debris, plastics, metals and glass, but there were very few products being made with these materials. One of the main challenges to using the materials was their cost: successful new products would need to be of relatively high value. The Competition challenged designers with the problem of making high-value products with what were viewed as relatively low-quality, low-value materials. While the Competition began as a local effort, within the first year over half the entries came from outside the United States and the name was revised to the International Design Resource Awards (IDRA) Competition. The Competition highlighted that such a challenge was not unique to Washington State, but was a global issue which many governmental and environmental institutions around the world were looking to solve (see major donor list below). It is believed to be the first international design competition focused on encouraging the development of sustainable product design and architecture.
The core criteria for designer's submissions for IDRA competitions were: Contain a high degree of post-consumer recycled content or sustainably harvested material Demonstrate the ability to add value to the recycled or sustainable material and to increase its usage Be designed for future re-use or recyclability Be suitable for commercial production The Competition was open to student and professional entrants. The award was given out in three categories of student, professional, and honorable mention. Typically the Competition had five jurors from various backgrounds – education, marketing, material sciences, architecture, and product design. Early advocates and supporters of sustainable design from across these disciplines became jurors for the competition. Particularly well known Jurors included Wendy Brawer, Pliny Fisk III, Arunas Oslapas, and Fumikazu Masuda. The term "Design with Memory" came about after 4th year of the Competition in 1999, when a Japanese journalist Hiroyuki Kushida working on the Whole Earth Project in Tokyo, saw some of the award winners from the first three rounds of the Competition during a trip to U.S., and arranged for an exhibit to travel to Japan. While visiting the first exhibit in Japan, Architect Johnson met Professor Masuda for the first time and discovered that the phrase “sustainable design” translates in Japanese to “design with memory” (記憶のデザイン), and that this phrase also translated well back to English in the ways we thought about sustainable design - "Where do materials come from and where do they go next?” (Johnson interview, Sotokoto 2003) This collaboration subsequently led to a year long series of seminars for professional industrial designers and support for the 5th and 6th rounds of the Competition was provided by the Japanese Ministry of International Trade and Industry (MITI) together with other industry supporters such as Living Design Center OZONE in Tokyo. From 1994 to 2003 the Competition received over 1200 entries from over 20 countries with the 5th and 6th rounds of the Competition attracting more than 450 entries. The winning entries were published in various design books and magazines around the world. This story of "Spinning garbage into gold" was featured in various fashion and lifestyle magazines such as House Beautiful, Elle Decor, and An-An. It was one of the pioneering initiatives that aimed to position products made from recycled material in an elevated way by exhibiting them in high-end design shops, galleries, and museums. Origin of the concept An iconic example used by Professor Masuda and Architect Johnson to illustrate the historical precedent for “Design with Memory” is the ancient Japanese tea cup which has been broken and visibly repaired (also see kintsugi, wabi-sabi). "An 'old' value in eco design can be seen in an ancient Japanese tea ceremony cup that was broken and carefully repaired. This care and attention gives the cup added value as it becomes a cultural icon.” During the 5th and 6th rounds of the Competition the phrase came to represent a new design methodology for designing with recycled, re-used and sustainably harvested materials. It is based on making elements of the product’s materials and use self-evident, and thereby raising the value of the product for the user. “Through interaction with the entrants and jurors this phrase has grown to include complementary levels of meaning. 
For example, 'Design with Memory' could mean adding value to a design by employing the memory of the material’s previous use in the new work. It could also mean adding value to the project by actively employing the memory of the user, or, perhaps, the memory of the material or product itself in the new work.” Application The Japanese industrial design professor and American architect duo identified 4 pathways to accomplish Designing with Memory and introduced them as focus categories in the 5th and 6th rounds of IDRA Competition. Each pathway was illustrated through examples of award winning entries from the Competition: Design with Memory of the User - The goal is to reduce the amount of consumption by making products useful and desirable for a longer period of time. The method employed is to develop a strong relationship between the user and the product. A winning entry example of this strategy was the GreenPeace Activist Bag. It used a smoked natural rubber material coated on used sugar sacks made by members of an indigenous community in the Amazon delta as part of a carrier bag. Users knew every day that this product was made to make the native rubber trees more valuable standing than cut down for grazing and thus helping support their local community. All parts of the bag were compostable, so when it did wear out, it could go back into the earth. Design with Memory of the Material –This strategy is for finding successful paths to re-use materials. Materials that are discarded can be a rich resource for new product development. Adding value to the material through artistic design is the key to using them successfully. A winning entry example of this strategy is the elegant baskets woven from discarded industrial metal strapping. Design with Memory of the Product –The goal is to create a cycle of product, and product re-use, into the future. A winning example of this approach is by a porcelain product company that developed a way to regrind discarded and their broken porcelain products and form it into a new product. To illustrate to the purchaser the nature of the product they incorporated their traditional design motifs blended into the new forms. Design with Memory of Nature - The goal with this strategy is to make the product self-evidently integrating with natural cycles. A winning entry example of this strategy is the compostable computer keyboard. Technological advances are rapid and cause plastic products to be “out of date” quickly - leading to increased pressure on recycling programs and landfills. To address this problem, the keyboard keys and body are instead made from carrot and celery fibers bound with a starch based binder – 100% compostable after the electronic sheet and chord are removed. Major exhibitions Miami Center for Fine Arts, Re(f)use Exhibit including winners from the first International Design Resource Awards Program, 3-22 to 5-26, 1996 Washington State Convention Center, Exhibit of winners from the Second International Design Resource Awards Program, July 1996 IDEE, Tokyo, Whole Earth Exhibit, First exhibit of commercially available, sustainably designed products in Japan, Feb. 1999 Design Resource Core, Design with Memory Exhibit, October, 1999 Niigata, Japan Ozone Living Design Center, Design with Memory Exhibit, Jan.-Feb. 
2000, Tokyo, Japan Nopporo Community Center, Design with Memory Exhibit, March 2000, Sapporo, Japan Belfast Waterfront Hall, Design with Memory Exhibit, February 2001, Belfast, Northern Ireland Seattle Art Museum, Design with Memory Exhibition, June, 2001, Seattle, WA, USA Washington Convention Center, National Recycling Convention, Jan., 2002, Seattle, WA, USA First China International Design Festival, November 15–17, Qingdao, China Design Resource Core, Design with Memory Exhibit, Sept, 2003, Niigata, Japan Jacob Javits Center, International Contemporary Furniture Fair, May, 2003 New York, NY, USA Major Donors State of Washington, Clean Washington Center, USA UK Design Directorate, UK Craiganon Industrial Development Organization, Northern Ireland ENFO, Dublin, Republic of Ireland Japanese Ministry of Trade and Economic Development, Japan Phoebe Haas Charitable Trust, USA Arango Design Foundation, USA Eco-Design Institute, Tokyo, Japan References External links Homepage Inhabitat's interview with Fumi Masuda IDSA member profile Arunas Oslapas Whole Earth Project Inc. Green Map Living Design Center OZONE Product design
Design with memory
[ "Engineering" ]
2,158
[ "Product design", "Design" ]
52,122,213
https://en.wikipedia.org/wiki/Nanohole
Nanoholes are a class of nanostructured material consisting of nanoscale voids in a surface of a material. Not to be confused with nanofoam or nanoporous materials which support a network of voids permeating throughout the material (often in a disordered state), nanohole materials feature a regular hole pattern extending through a single surface. These can be thought of as the inverse of a nanopillar or nanowire structure. Uses Nanohole structures have been used for a variety of applications, ranging from superlenses produced from a metal nanohole array, to structured photovoltaic devices used to improve carrier extraction, and light absorption. Nanohole structures are also extensively utilized for the creation of photonic crystals, particularly for creating photonic crystal waveguides. See also Nanoporous material Nanopore Nanostructures References Nanotechnology Photonics
Nanohole
[ "Materials_science", "Engineering" ]
180
[ "Nanotechnology", "Materials science" ]
32,176,298
https://en.wikipedia.org/wiki/Translation%20functor
In mathematical representation theory, a translation functor is a functor taking representations of a Lie algebra to representations with a possibly different central character. Translation functors were introduced independently by and . Roughly speaking, the functor is given by taking a tensor product with a finite-dimensional representation, and then taking a subspace with some central character. Definition By the Harish-Chandra isomorphism, the characters of the center Z of the universal enveloping algebra of a complex reductive Lie algebra can be identified with the points of L⊗C/W, where L is the weight lattice and W is the Weyl group. If λ is a point of L⊗C/W then write χλ for the corresponding character of Z. A representation of the Lie algebra is said to have central character χλ if every vector v is a generalized eigenvector of the center Z with eigenvalue χλ; in other words if z∈Z and v∈V then (z − χλ(z))n(v)=0 for some n. The translation functor ψ takes representations V with central character χλ to representations with central character χμ. It is constructed in two steps: First take the tensor product of V with an irreducible finite dimensional representation with extremal weight λ−μ (if one exists). Then take the generalized eigenspace of this with eigenvalue χμ. References Representation theory Functors
Translation functor
[ "Mathematics" ]
297
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Functors", "Category theory", "Representation theory" ]
32,185,392
https://en.wikipedia.org/wiki/Alpha%20factor
The α-factor is a dimensionless quantity used to predict the solid–liquid interface type of a material during solidification. It was introduced by physicist Kenneth A. Jackson in 1958. In his model, crystal growth with larger values of α is smooth, whereas crystals growing at smaller α (below the threshold value of 2) have rough surfaces. Method According to John E. Gruzleski in his book Microstructure Development During Metalcasting (1996): where is the latent heat of fusion; is the Boltzmann constant; is the freezing temperature at equilibrium; is the number of nearest neighbours an atom has in the interface plane; and is the number of nearest neighbours in the bulk solid. As , where is the molar entropy of fusion of the material, According to Martin Glicksman in his book Principles of Solidification: An Introduction to Modern Casting and Crystal Growth Concepts (2011): where is the universal gas constant. is similar to previous, always < 1. References Materials science
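The relations referred to above can be sketched numerically. Because the symbols in the displayed equations are not reproduced, the forms used below, α = (L/(k_B·T_E))·(η/ν) per atom and the equivalent molar form α = (η/ν)·(ΔS_f/R), are reconstructions from the stated definitions and should be read as assumptions; the example values are likewise illustrative only.

```python
# Hedged sketch of the Jackson alpha-factor. The exact combination of symbols
# is reconstructed from the definitions in the text (latent heat L, Boltzmann
# constant k_B, equilibrium freezing temperature T_E, eta = nearest neighbours
# in the interface plane, nu = nearest neighbours in the bulk solid) and is an
# assumption, not a quotation from either book cited above.

K_B   = 1.380649e-23   # J/K
R_GAS = 8.314          # J/(mol*K)

def alpha_per_atom(L_atom, T_E, eta, nu):
    """Per-atom form: alpha = (L / (k_B * T_E)) * (eta / nu)."""
    return (L_atom / (K_B * T_E)) * (eta / nu)

def alpha_molar(delta_S_f, eta, nu):
    """Equivalent molar form: alpha = (eta / nu) * (DeltaS_f / R)."""
    return (eta / nu) * (delta_S_f / R_GAS)

# Illustrative numbers only: a molar entropy of fusion of about 10 J/(mol*K)
# and eta/nu = 0.5 give alpha well below the threshold value of 2, so a rough
# (non-faceted) solid-liquid interface would be predicted.
print(alpha_molar(delta_S_f=10.0, eta=3, nu=6))   # ~0.6
```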
Alpha factor
[ "Physics", "Materials_science", "Engineering" ]
206
[ "Materials science stubs", "Applied and interdisciplinary physics", "Materials science", "Condensed matter physics", "Condensed matter stubs" ]
32,185,446
https://en.wikipedia.org/wiki/Q-Gaussian%20distribution
The q-Gaussian is a probability distribution arising from the maximization of the Tsallis entropy under appropriate constraints. It is one example of a Tsallis distribution. The q-Gaussian is a generalization of the Gaussian in the same way that Tsallis entropy is a generalization of standard Boltzmann–Gibbs entropy or Shannon entropy. The normal distribution is recovered as q → 1. The q-Gaussian has been applied to problems in the fields of statistical mechanics, geology, anatomy, astronomy, economics, finance, and machine learning. The distribution is often favored for its heavy tails in comparison to the Gaussian for 1 < q < 3. For the q-Gaussian distribution is the PDF of a bounded random variable. This makes in biology and other domains the q-Gaussian distribution more suitable than Gaussian distribution to model the effect of external stochasticity. A generalized q-analog of the classical central limit theorem was proposed in 2008, in which the independence constraint for the i.i.d. variables is relaxed to an extent defined by the q parameter, with independence being recovered as q → 1. However, a proof of such a theorem is still lacking. In the heavy tail regions, the distribution is equivalent to the Student's t-distribution with a direct mapping between q and the degrees of freedom. A practitioner using one of these distributions can therefore parameterize the same distribution in two different ways. The choice of the q-Gaussian form may arise if the system is non-extensive, or if there is lack of a connection to small samples sizes. Characterization Probability density function The standard q-Gaussian has the probability density function where is the q-exponential and the normalization factor is given by Note that for the q-Gaussian distribution is the PDF of a bounded random variable. Cumulative density function For cumulative density function is where is the hypergeometric function. As the hypergeometric function is defined for but x is unbounded, Pfaff transformation could be used. For , Entropy Just as the normal distribution is the maximum information entropy distribution for fixed values of the first moment and second moment (with the fixed zeroth moment corresponding to the normalization condition), the q-Gaussian distribution is the maximum Tsallis entropy distribution for fixed values of these three moments. Related distributions Student's t-distribution While it can be justified by an interesting alternative form of entropy, statistically it is a scaled reparametrization of the Student's t-distribution introduced by W. Gosset in 1908 to describe small-sample statistics. In Gosset's original presentation the degrees of freedom parameter ν was constrained to be a positive integer related to the sample size, but it is readily observed that Gosset's density function is valid for all real values of ν. The scaled reparametrization introduces the alternative parameters q and β which are related to ν. Given a Student's t-distribution with ν degrees of freedom, the equivalent q-Gaussian has with inverse Whenever , the function is simply a scaled version of Student's t-distribution. It is sometimes argued that the distribution is a generalization of Student's t-distribution to negative and or non-integer degrees of freedom. However, the theory of Student's t-distribution extends trivially to all real degrees of freedom, where the support of the distribution is now compact rather than infinite in the case of ν < 0. 
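The reparametrization linking the q-Gaussian to Student's t-distribution can be written as a pair of small functions. Since the explicit relations are not reproduced above, the mapping used here, q = (ν + 3)/(ν + 1) with inverse ν = (3 − q)/(q − 1), is an assumption obtained by matching the tail exponents of the two densities.

```python
# Hedged sketch of the q <-> Student's-t correspondence described above. The
# concrete mapping below should be read as an assumption consistent with
# matching the tail exponents of the two density functions.

def q_from_nu(nu):
    """q-Gaussian parameter equivalent to a Student's t with nu degrees of freedom."""
    return (nu + 3.0) / (nu + 1.0)

def nu_from_q(q):
    """Inverse mapping, valid for 1 < q < 3."""
    return (3.0 - q) / (q - 1.0)

for nu in (1, 3, 10, 100):
    q = q_from_nu(nu)
    print(f"nu = {nu:>3}  ->  q = {q:.3f}  (and back: nu = {nu_from_q(q):.1f})")
# As nu grows, q tends to 1 and both distributions approach the ordinary Gaussian;
# nu = 1 (the Cauchy case) corresponds to q = 2.
```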
Three-parameter version As with many distributions centered on zero, the q-Gaussian can be trivially extended to include a location parameter μ. The density then becomes defined by Generating random deviates The Box–Muller transform has been generalized to allow random sampling from q-Gaussians. The standard Box–Muller technique generates pairs of independent normally distributed variables from equations of the following form. The generalized Box–Muller technique can generates pairs of q-Gaussian deviates that are not independent. In practice, only a single deviate will be generated from a pair of uniformly distributed variables. The following formula will generate deviates from a q-Gaussian with specified parameter q and where is the q-logarithm and These deviates can be transformed to generate deviates from an arbitrary q-Gaussian by Applications Physics It has been shown that the momentum distribution of cold atoms in dissipative optical lattices is a q-Gaussian. The q-Gaussian distribution is also obtained as the asymptotic probability density function of the position of the unidimensional motion of a mass subject to two forces: a deterministic force of the type (determining an infinite potential well) and a stochastic white noise force , where is a white noise. Note that in the overdamped/small mass approximation the above-mentioned convergence fails for , as recently shown. Finance Financial return distributions in the New York Stock Exchange, NASDAQ and elsewhere have been interpreted as q-Gaussians. See also Constantino Tsallis Tsallis statistics Tsallis entropy Tsallis distribution q-exponential distribution Q-Gaussian process Notes Further reading Juniper, J. (2007) , Centre of Full Employment and Equity, The University of Newcastle, Australia External links Tsallis Statistics, Statistical Mechanics for Non-extensive Systems and Long-Range Interactions Statistical mechanics Continuous distributions Probability distributions with non-finite variance
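The generalized Box–Muller procedure described above can be sketched in a few lines. The explicit formulas are not reproduced in the text, so the expressions used below, the q-logarithm ln_q(x) = (x^(1−q) − 1)/(1 − q), the internal parameter q′ = (1 + q)/(3 − q), and the deviate Z = sqrt(−2 ln_{q′}(U1))·cos(2πU2), follow the published method of Thistleton and co-workers and should be treated as assumptions here; the final rescaling to an arbitrary β is likewise an assumed convention.

```python
# Hedged sketch of the generalized Box-Muller sampler for q-Gaussian deviates.
# The exact expressions are assumptions based on the published method
# (Thistleton, Marsh, Nelson and Tsallis, 2007), since the article's own
# formulas are not reproduced above.
import math
import random

def q_log(x, q):
    """q-logarithm: ln_q(x) = (x^(1-q) - 1) / (1 - q), reducing to ln(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_gaussian_deviate(q, beta=0.5):
    """One q-Gaussian deviate for q < 3 (q = 1 recovers the ordinary normal case)."""
    u1, u2 = random.random(), random.random()
    q_prime = (1.0 + q) / (3.0 - q)          # parameter actually fed to ln_q
    z = math.sqrt(-2.0 * q_log(u1, q_prime)) * math.cos(2.0 * math.pi * u2)
    # Rescale the "standard" deviate to an arbitrary beta (assumed convention).
    return z / math.sqrt(beta * (3.0 - q))

print([round(q_gaussian_deviate(q=1.5), 3) for _ in range(5)])
```

A usage note: setting q close to 1 makes the samples indistinguishable from ordinary Gaussian deviates, while values of q between 1 and 3 produce progressively heavier tails.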
Q-Gaussian distribution
[ "Physics" ]
1,118
[ "Statistical mechanics" ]
32,185,806
https://en.wikipedia.org/wiki/Terahertz%20nondestructive%20evaluation
Terahertz nondestructive evaluation pertains to devices, and techniques of analysis occurring in the terahertz domain of electromagnetic radiation. These devices and techniques evaluate the properties of a material, component or system without causing damage. Terahertz imaging Terahertz imaging is an emerging and significant nondestructive evaluation (NDE) technique used for dielectric (nonconducting, i.e., an insulator) materials analysis and quality control in the pharmaceutical, biomedical, security, materials characterization, and aerospace industries. It has proved to be effective in the inspection of layers in paints and coatings, detecting structural defects in ceramic and composite materials and imaging the physical structure of paintings and manuscripts. The use of THz waves for non-destructive evaluation enables inspection of multi-layered structures and can identify abnormalities from foreign material inclusions, disbond and delamination, mechanical impact damage, heat damage, and water or hydraulic fluid ingression. This new method can play a significant role in a number of industries for materials characterization applications where precision thickness mapping (to assure product dimensional tolerances within a product and from product-to-product) and density mapping (to assure product quality within a product and from product-to-product) are required. Nondestructive evaluation Sensors and instruments are employed in the 0.1 to the 10 THz range for nondestructive evaluation, which includes detection. Terahertz Density Thickness Imager The Terahertz Density Thickness Imager is a nondestructive inspection method that employs terahertz energy for density and thickness mapping in dielectric, ceramic, and composite materials. This non-contact, single-sided terahertz electromagnetic measurement and imaging method characterizes micro-structure and thickness variation in dielectric (insulating) materials. This method was demonstrated for the Space Shuttle external tank sprayed-on foam insulation and has been designed for use as an inspection method for current and future NASA thermal protection systems and other dielectric material inspection applications where no contact can be made with the sample due to fragility and it is impractical to use ultrasonic methods. Rotational spectroscopy Rotational spectroscopy uses electromagnetic radiation in the frequency range from 0.1 to 4 terahertz (THz). This range includes millimeter-range wavelengths and is particularly sensitive to chemical molecules. The resulting THz absorption produces a unique and reproducible spectral pattern that identifies the material. THz spectroscopy can detect trace amounts of explosives in less than one second. Because explosives continually emit trace amounts of vapor, it should be possible to use these methods to detect concealed explosives from a distance. THz-wave radar THz-wave radar can sense gas leaks, chemicals and nuclear materials. In field tests, THz-wave radar detected chemicals at the 10-ppm level from 60 meters away. This method can be used in a fence line or aircraft mounted system that works day or night in any weather. It can locate and track chemical and radioactive plumes. THz-wave radar that can sense radioactive plumes from nuclear plants have detected plumes several kilometers away based on radiation-induced ionization effects in air. THz tomography THz tomography techniques are nondestructive methods that can use THz pulsed beam or millimeter-range sources to locate objects in 3D. 
These techniques include tomography, tomosynthesis, synthetic aperture radar and time of flight. Such techniques can resolve details on scales of less than one millimeter in objects that are several tens of centimeters in size. Passive/active imaging techniques Security imaging is currently being done by both active and passive methods. Active systems illuminate the subject with THz radiation whereas passive systems merely view the naturally occurring radiation from the subject. Evidently passive systems are inherently safe, whereas an argument can be made that any form of "irradiation" of a person is undesirable. In technical and scientific terms, however, the active illumination schemes are safe according to all current legislation and standards. The purpose of using active illumination sources is primarily to make the signal-to-noise ratio better. This is analogous to using a flash on a standard optical light camera when the ambient lighting level is too low. For security imaging purposes the operating frequencies are typically in the range 0.1 THz to 0.8 THz (100 GHz to 800 GHz). In this range skin is not transparent so the imaging systems can look through clothing and hair, but not inside the body. There are privacy issues associated with such activities, especially surrounding the active systems since the active systems, with their higher quality images, can show very detailed anatomical features. Active systems such as the L3 Provision and the Smiths eqo are actually mm-wave imaging systems rather than Terahertz imaging systems like Millitech systems. These widely deployed systems do not display images, avoiding any privacy issues. Instead they display generic "mannequin" outlines with any anomalous regions highlighted. Since security screening is looking for anomalous images, items like false legs, false arms, colostomy bags, body-worn urinals, body-worn insulin pumps, and external breast augmentations will show up. Note that breast implants, being under the skin, will not be revealed. Active imaging techniques can be used to perform medical imaging. Because THz radiation is biologically safe (non ionisant), it can be used in high resolution imaging to detect skin cancer. Space Shuttle inspections NASA Space Shuttle inspections are an example of this technology's application. After the Shuttle Columbia accident in 2003, Columbia Accident Investigation Board recommendation R3.2.1 stated “Initiate an aggressive program to eliminate all External Tank Thermal Protection System debris-shedding at the source….” To support this recommendation, inspection methods for flaws in foam are being evaluated, developed, and refined at NASA. STS-114 employed Space Shuttle Discovery, and was the first "Return to Flight" Space Shuttle mission following the Space Shuttle Columbia disaster. It launched at 10:39 EDT, 26 July 2005. During the STS-114 flight significant foam shedding was observed. Therefore, the ability to nondestructively detect and characterize crushed foam after that flight became a significant priority when it was believed that the staff processing the tank had crushed foam by walking on it or from hail damage when the shuttle was on the launch pad or during other preparations for launch. Additionally, density variations in the foam were also potential points of flaw initiation causing foam shedding. 
The innovation described below answered the call to develop a nondestructive, totally non-contact, non-liquid-coupled method that could simultaneously and precisely characterize thickness variation (from crushed foam due to worker handling and hail damage) and density variation in foam materials. It was critical to have a method that did not require fluid (water) coupling; i.e.; ultrasonic testing methods require water coupling. There are millions of dollars of ultrasonic equipment in the field and on the market that are used as thickness gauges and density meters. When terahertz nondestructive evaluation is fully commercialized into a more portable form, and becomes less expensive it will be able to replace the ultrasonic instruments for structural plastic, ceramic, and foam materials. The new instruments will not require liquid coupling thereby enhancing their usefulness in field applications and possibly for high-temperature in-situ applications where liquid coupling is not possible. A potential new market segment can be developed with this technology. See also Destructive testing Inspection Maintenance testing Product certification Quality control Risk-based inspection Failure analysis Forensic engineering Materials science Predictive maintenance Reliability engineering Stress testing Terahertz radiation Terahertz metamaterials References Further reading On this page also see the sections that follows for use of the Terahertz domain: Small Organic Molecular Crystals / Materials Properties (Glasses etc.), Understanding of Vibrational Modes at Terahertz Frequencies, Quantum Cascade Laser Applications, Implementation of Novel Sensing Paradigms, and Dynamics in Biomolecules. Original PhD. dissertation by Christopher D. Stoik, Lieutenant Colonel, USAF. December 2008. Free online article. Nondestructive testing Materials testing Materials science Terahertz technology
Terahertz nondestructive evaluation
[ "Physics", "Materials_science", "Engineering" ]
1,686
[ "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Materials science", "Electromagnetic spectrum", "Nondestructive testing", "Materials testing", "Terahertz technology" ]
33,775,358
https://en.wikipedia.org/wiki/Caesium%20hexafluorocobaltate%28IV%29
Caesium hexafluorocobaltate(IV) is a salt with the chemical formula Cs2CoF6. It can be synthesized by the reaction of and fluorine. The salt contains a rare example of a cobalt(IV) complex, the hexafluorocobaltate(IV) anion [CoF6]2-. It adopts the cubic K2PtCl6 structure, with lattice constant a = 8.91 Å and a Co–F bond length of 1.73 Å. The complex is ferromagnetic, and the ground-state electron configuration of Co(IV) is t2g3eg2. See also Percobaltate References Caesium compounds Cobalt complexes Fluoro complexes Metal halides Fluorometallates Ferromagnetic materials Cobalt compounds
Caesium hexafluorocobaltate(IV)
[ "Physics", "Chemistry" ]
153
[ "Inorganic compounds", "Ferromagnetic materials", "Salts", "Inorganic compound stubs", "Materials", "Metal halides", "Matter" ]
33,776,080
https://en.wikipedia.org/wiki/Kigumba%20Petroleum%20Institute
Kigumba Petroleum Institute, also referred to as Uganda Petroleum Institute or as Uganda Petroleum Institute, Kigumba (UPIK), is a government-owned, national center for training, research and consultancy in the field of petroleum exploration, recovery, refinement and responsible utilization in Uganda. Location The institute is located approximately north of the town of Kigumba, off of the Kigumba–Karuma Road, in Kiryandongo District, Western Uganda. This location lies approximately , by road, northeast of Masindi, the nearest large town in the sub-region. Uganda Petroleum Institute is located approximately , by road, northwest of Kampala, Uganda's capital and largest city. The coordinates of the Institute's campus are: 01°50'34.0"N, 32°01'09.0"E (Latitude:1.842778; Longitude:32.019167). History The institute was established in 2009 and admitted the first batch of students in 2010, with the objective of training personnel in petroleum-related skills, at certificate, diploma and undergraduate levels. In 2011, increased budgetary allocations were made towards the elevation of the institute from a vocational school to a fully-fledged International University. Financial assistance to the tune of US$8 million (UGX:20 billion), will be sought from the World Bank and Irish Aid, to achieve this goal. In November 2011, the Uganda Government began the process of elevating the Institute to University status. Recent developments In 2014, the institute introduced five new internationally recognized programs to graduate "highly qualified and specialized" technicians needed by oil companies across the world. The new plan proposes wide ranging overhaul of the curriculum and the introduction of five new diploma courses in oil studies. The institute also plans to work in close collaboration with the Ugandan oil industry to graduate over 220 students annually by the year 2019, up from 54 in 2014. Courses As of November 2019, the institute offers three diploma courses: Diploma in Petroleum Engineering Diploma in Upstream Petroleum Operations Diploma in Downstream Petroleum Operations. See also References External links Website of Uganda Petroleum Institute Kigumba (UPIK) Oil sector wants 30,000 workers Petroleum institute collapsing Grooming professionals in the oil sector Kigumba oil college in crisis Universities and colleges in Uganda Bunyoro sub-region Educational institutions established in 2009 Kiryandongo District 2009 establishments in Uganda Petroleum infrastructure in Uganda Petroleum engineering schools
Kigumba Petroleum Institute
[ "Engineering" ]
499
[ "Petroleum engineering", "Petroleum engineering schools", "Engineering universities and colleges" ]
33,777,105
https://en.wikipedia.org/wiki/Toxicologic%20Pathology
Toxicologic Pathology is a peer-reviewed academic journal covering the field of toxicology, pathology, and preclinical development. The editor-in-chief is Kevin Keane employed at Blueprint Medicines, Cambridge, Massachusetts. The journal was established in 1972 and is published by SAGE Publications in association with the Society of Toxicologic Pathology, the British Society of Toxicological Pathology, and the European Society of Toxicologic Pathology. Abstracting and indexing The journal is abstracted and indexed in Scopus and the Science Citation Index Expanded. According to the Journal Citation Reports, the journal has a 2018 impact factor of 1.382. References External links Journal's Homepage Society of Toxicologic Pathology British Society of Toxicologic Pathology European Society of Toxicologic Pathology SAGE Publishing academic journals English-language journals Toxicology journals Academic journals established in 1972 Pathology journals
Toxicologic Pathology
[ "Environmental_science" ]
177
[ "Toxicology journals", "Toxicology" ]
33,785,040
https://en.wikipedia.org/wiki/Ultrasensitivity
In molecular biology, ultrasensitivity describes an output response that is more sensitive to stimulus change than the hyperbolic Michaelis-Menten response. Ultrasensitivity is one of the biochemical switches in the cell cycle and has been implicated in a number of important cellular events, including exiting G2 cell cycle arrests in Xenopus laevis oocytes, a stage to which the cell or organism would not want to return. Ultrasensitivity is a cellular system which triggers entry into a different cellular state. Ultrasensitivity gives a small response to the first increments of the input signal, but increasing input then produces higher and higher levels of output. This acts to filter out noise, as small stimuli are ignored and a threshold concentration of the stimulus (input signal) is necessary to trigger the response, which then allows the system to become activated quickly. Ultrasensitive responses are represented by sigmoidal graphs, which resemble cooperativity. The quantification of ultrasensitivity is often performed approximately by the Hill equation, response = x^n / (K^n + x^n), where the Hill coefficient (n) provides a quantitative measure of the ultrasensitivity of the response. Historical development Zero-order ultrasensitivity was first described by Albert Goldbeter and Daniel Koshland Jr. in 1981 in a paper in the Proceedings of the National Academy of Sciences. They showed using mathematical modeling that modification of proteins by enzymes operating outside of first-order kinetics required only small changes in the concentration of the effector to produce larger changes in the amount of modified protein. This amplification provided added sensitivity in biological control, implicating its importance in many biological systems. Many biological processes are binary (ON-OFF), such as cell fate decisions, metabolic states, and signaling pathways. Ultrasensitivity is a switch that helps decision-making in such biological processes. For example, in the apoptotic process, a model showed that positive feedback arising from the inhibition of caspase 3 (Casp3) and Casp9 by inhibitors of apoptosis can bring about ultrasensitivity (bistability). This positive feedback cooperates with Casp3-mediated feedback cleavage of Casp9 to generate irreversibility in caspase activation (switch ON), which leads to cell apoptosis. Another model also showed similar but different positive feedback controls in Bcl-2 family proteins in the apoptotic process. Recently, Jeyeraman et al. have proposed that the phenomenon of ultrasensitivity may be further subdivided into three sub-regimes, separated by sharp stimulus threshold values: OFF, OFF-ON-OFF, and ON. Based on their model, they proposed that the OFF-ON-OFF sub-regime of ultrasensitivity is a switch-like adaptation which can be accomplished by coupling N phosphorylation–dephosphorylation cycles unidirectionally, without any explicit feedback loops. Other recent work has emphasized that not only is the topology of networks important for creating ultrasensitive responses, but that their composition (enzymes vs. transcription factors) strongly affects whether they will exhibit robust ultrasensitivity. Mathematical modeling suggests for a broad array of network topologies that a combination of enzymes and transcription factors tends to provide more robust ultrasensitivity than that seen in networks composed entirely of transcription factors or composed entirely of enzymes.
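The Hill form just mentioned, together with the EC90/EC10-based estimate discussed later in the article, can be illustrated numerically. The following Python snippet is only an illustrative sketch (the function names and the chosen values of K and n are ours, not taken from the cited literature); it evaluates Hill curves of increasing steepness and recovers the Hill coefficient from the ratio of stimulus levels that give 90% and 10% of the maximal response.

```python
import numpy as np

def hill_response(x, K=1.0, n=1.0):
    """Fractional response of the Hill equation: x^n / (K^n + x^n)."""
    return x**n / (K**n + x**n)

def apparent_hill_coefficient(stimulus, response):
    """Estimate n_H from the stimulus values giving 10% and 90% of maximal response,
    using n_H = log(81) / log(EC90 / EC10)."""
    r = response / response.max()
    ec10 = np.interp(0.1, r, stimulus)
    ec90 = np.interp(0.9, r, stimulus)
    return np.log(81.0) / np.log(ec90 / ec10)

x = np.logspace(-3, 3, 2000)
for n in (1, 2, 4, 8):
    n_est = apparent_hill_coefficient(x, hill_response(x, K=1.0, n=n))
    print(f"true n = {n}, EC90/EC10-based estimate = {n_est:.2f}")
```

For a Michaelian (n = 1) curve the stimulus must increase roughly 81-fold to move the response from 10% to 90% of maximum, whereas ultrasensitive curves traverse the same range over a much narrower stimulus window.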
Mechanisms Ultrasensitivity can be achieved through several mechanisms: Multistep mechanisms (examples: cooperativity and multisite phosphorylation) Buffering mechanisms (examples: decoy phosphorylation sites or stoichiometric inhibitors) Changes in localization (such as translocation across the nuclear envelope) Saturation mechanisms (also known as zero-order ultrasensitivity) Positive feedback Allovalency Non-Zero-Order Ultrasensitivity in Membrane Proteins Dissipative Allostery Multistep Mechanisms Multistep ultrasensitivity occurs when a single effector acts on several steps in a cascade. Successive cascade signals can result in higher levels of noise being introduced into the signal that can interfere with the final output. This is especially relevant for large cascades, such as the flagellar regulatory system, in which the master regulator signal is transmitted through multiple intermediate regulators before activating transcription. Cascade ultrasensitivity can reduce noise and therefore require less input for activation. Additionally, multiple phosphorylation events are an example of ultrasensitivity. Recent modeling has shown that multiple phosphorylation sites on membrane proteins could serve to locally saturate enzyme activity. Proteins at the membrane are greatly reduced in mobility compared to those in the cytoplasm, which means that a membrane-tethered enzyme acting upon a membrane protein will take longer to diffuse away. With the addition of multiple phosphorylation sites upon the membrane substrate, the enzyme can - by a combination of increased local concentration of enzyme and increased substrates - quickly reach saturation. Buffering Mechanisms Buffering mechanisms such as molecular titration can generate ultrasensitivity. In vitro, this can be observed for the simple mechanism: A + B <=> AB where the monomeric form of A is active and it can be inactivated by binding B to form the heterodimer AB. When the total concentration of B, B_T ( = [B] + [AB]), is much greater than the dissociation constant K_d, this system exhibits a threshold determined by the concentration B_T. At total concentrations of A, A_T ( = [A] + [AB]), lower than B_T, B acts as a buffer to free A, and nearly all A will be found as AB. However, at the equivalence point, when A_T ≈ B_T, B can no longer buffer the increase in A_T, so a small increase in A_T causes a large increase in free A. The strength of the ultrasensitivity of [A] to changes in A_T is determined by B_T/K_d. Ultrasensitivity occurs when this ratio is greater than one and increases as the ratio increases. Above the equivalence point, A_T and free A are again linearly related. In vivo, the synthesis of A and B as well as the degradation of all three components complicates the generation of ultrasensitivity. If the synthesis rates of A and B are equal, this system still exhibits ultrasensitivity at the equivalence point. One example of a buffering mechanism is protein sequestration, which is a common mechanism found in signalling and regulatory networks. In 2009, Buchler and Cross constructed a synthetic genetic network that was regulated by protein sequestration of a transcriptional activator by a dominant-negative inhibitor. They showed that this system results in a flexible ultrasensitive response in gene expression. It is flexible in that the degree of ultrasensitivity can be altered by changing expression levels of the dominant-negative inhibitor. Figure 1 in their article illustrates how an active transcription factor can be sequestered by an inhibitor into the inactive complex AB that is unable to bind DNA.
This type of mechanism results in an "all-or-none" response, or ultrasensitivity, when the concentration of the regulatory protein increases to the point of depleting the inhibitor. Robust buffering against any response exists below this concentration threshold, and when it is reached any small increase in input is amplified into a large change in output. Changes in localization Translocation Signal transduction is regulated in various ways, and one of these ways is translocation. Regulated translocation generates an ultrasensitive response in mainly three ways: Regulated translocation increases the local concentration of the signaling protein. When the concentration of the signaling protein is high enough to partially saturate the enzyme that inactivates it, an ultrasensitive response is generated. Translocation of multiple components of the signaling cascade, where the stimulus (input signal) causes translocation of both the signaling protein and its activator to the same subcellular compartment and thereby generates an ultrasensitive response, which increases the speed and accuracy of the signal. Translocation to a compartment which contains stoichiometric inhibitors. Translocation is one way of regulating signal transduction, and it can generate ultrasensitive switch-like responses or multistep-feedback loop mechanisms. A switch-like response will occur if translocation raises the local concentration of a signaling protein. For example, epidermal growth factor (EGF) receptors can be internalized through clathrin-independent endocytosis (CIE) and/or clathrin-dependent endocytosis (CDE) in a ligand concentration-dependent manner. The distribution of receptors into the two pathways was shown to be EGF concentration-dependent. In the presence of low concentrations of EGF, the receptor was exclusively internalized via CDE, whereas at high concentrations, receptors were equally distributed between CDE and CIE. Saturation mechanisms (Zero-order ultrasensitivity) Zero-order ultrasensitivity takes place under saturating conditions. For example, consider an enzymatic step with a kinase, phosphatase, and substrate. Steady-state levels of the phosphorylated substrate show an ultrasensitive response when there is enough substrate to saturate all available kinases and phosphatases. Under these conditions, small changes in the ratio of kinase to phosphatase activity can dramatically change the amount of phosphorylated substrate. This enhancement in the sensitivity of the steady-state phosphorylated substrate level to the ratio of kinase to phosphatase activity is termed zero-order to distinguish it from the first-order behavior described by Michaelis-Menten dynamics, wherein the steady-state concentration responds in a more gradual fashion than the switch-like behavior exhibited in ultrasensitivity. Using the notation from Goldbeter & Koshland, let W be a certain substrate protein and let W' be a covalently modified version of W. The conversion of W to W' is catalyzed by some enzyme E1 and the reverse conversion of W' to W is catalyzed by a second enzyme E2, according to the scheme W + E1 <=> WE1 -> W' + E1 and W' + E2 <=> W'E2 -> W + E2. The concentrations of all other necessary components (such as ATP) are assumed to be constant and are represented in the kinetic constants. Using the chemical equations above, mass-action rate equations can be written for each species, and the total concentration of each component is fixed by conservation: W_T = [W] + [W'] + [WE1] + [W'E2], E1_T = [E1] + [WE1], and E2_T = [E2] + [W'E2]. The zero-order mechanism assumes that the intermediate complexes are at quasi-steady state, i.e. that d[WE1]/dt ≈ 0 and d[W'E2]/dt ≈ 0.
In other words, the system is in a Michaelis-Menten steady state, which means that, to a good approximation, [WE1] and [W'E2] are constant. From these kinetic expressions one can solve for the steady-state fraction of modified substrate, [W']/W_T. When this fraction is plotted against the molar ratio of the two maximal enzyme activities, V1/V2, it can be seen that the W to W' conversion occurs over a much smaller change in the ratio than it would under first-order (non-saturating) conditions, which is the telltale sign of ultrasensitivity. Positive Feedback Positive feedback loops can cause ultrasensitive responses. An example of this is seen in the transcription of certain eukaryotic genes, in which non-cooperative transcription factor binding engages positive feedback loops of histone modification that result in an ultrasensitive activation of transcription. The binding of a transcription factor recruits histone acetyltransferases and methyltransferases. The acetylation and methylation of histones recruit more acetyltransferases and methyltransferases, resulting in a positive feedback loop. Ultimately, this results in activation of transcription. Additionally, positive feedback can induce bistability in the activation of cyclin B1-Cdk1 by its two regulators Wee1 and Cdc25C, leading to the cell's decision to commit to mitosis. The system cannot be stable at intermediate levels of cyclin B1, and the transition between the two stable states is abrupt when increasing levels of cyclin B1 switch the system from low to high activity. The system exhibits hysteresis: the levels of cyclin B1 at which it switches from low to high activity differ from those at which it switches from high to low. However, the emergence of a bistable system is highly influenced by the sensitivity of its feedback loops. It has been shown in Xenopus egg extracts that Cdc25C hyperphosphorylation is a highly ultrasensitive function of Cdk activity, displaying a high value of the Hill coefficient (approx. 11), and the dephosphorylation step of Ser 287 in Cdc25C (also involved in Cdc25C activation) is even more ultrasensitive, displaying a Hill coefficient of approximately 32. Allovalency A proposed mechanism of ultrasensitivity, called allovalency, suggests that activity "derives from a high local concentration of interaction sites moving independently of each other". Allovalency was first proposed when it was believed to occur in the pathway in which Sic1 is degraded in order for Cdk1-Clb (B-type cyclins) to allow entry into mitosis. Sic1 must be phosphorylated multiple times in order to be recognized and degraded by Cdc4 of the SCF complex. Since Cdc4 only has one recognition site for these phosphorylated residues, it was suggested that as the amount of phosphorylation increases, the likelihood that Sic1 is recognized and degraded by Cdc4 increases exponentially. This type of interaction was thought to be relatively immune to loss of any one site and easily tuned to any given threshold by adjusting the properties of individual sites. Assumptions for the allovalency mechanism were based on a general mathematical model that describes the interaction between a polyvalent disordered ligand and a single receptor site. It was later found that the ultrasensitivity in Cdk1 levels arising from degradation of Sic1 is in fact due to a positive feedback loop. Non-Zero-Order Ultrasensitivity in Membrane Proteins Modeling by Dushek et al. proposes a possible mechanism for ultrasensitivity outside of the zero-order regime.
For the case of membrane-bound enzymes acting on membrane-bound substrates with multiple enzymatic sites (such as tyrosine-phosphorylated receptors like the T-cell receptor), ultrasensitive responses could be seen, crucially dependent on three factors: 1) limited diffusion in the membrane, 2) multiple binding sites on the substrate, and 3) brief enzymatic inactivation following catalysis. Under these particular conditions, although the enzyme may be in excess of the substrate (first-order regime), the enzyme is effectively locally saturated with substrate due to the multiple binding sites, leading to switch-like responses. This mechanism of ultrasensitivity is independent of enzyme concentration; however, the signal is significantly enhanced depending on the number of binding sites on the substrate. Both conditional factors (limited diffusion and inactivation) are physiologically plausible but have yet to be experimentally confirmed. Dushek's modeling found increasing Hill cooperativity numbers with more substrate sites (phosphorylation sites) and with greater steric/diffusional hindrance between enzyme and substrate. This mechanism of ultrasensitivity based on local enzyme saturation arises partly from passive properties of slow membrane diffusion, and therefore may be generally applicable. Dissipative Allostery The bacterial flagellar motor has been proposed to follow a dissipative allosteric model, where ultrasensitivity comes as a combination of protein binding affinity and energy contributions from the proton motive force (see Flagellar motors and chemotaxis below). Impact of upstream and downstream components on a module's ultrasensitivity In a living cell, ultrasensitive modules are embedded in a bigger network with upstream and downstream components. These components may constrain the range of inputs that the module will receive as well as the range of the module's outputs that the network will be able to detect. Altszyler et al. (2014) studied how the effective ultrasensitivity of a modular system is affected by these restrictions. They found for some ultrasensitive motifs that dynamic range limitations imposed by downstream components can produce effective sensitivities much larger than that of the original module when considered in isolation. Hill Coefficient Ultrasensitive behavior is typically represented by a sigmoidal curve, as small alterations in the stimulus can trigger large changes in the response. One such relation is the Hill equation, response = x^n / (K^n + x^n), where n is the Hill coefficient, which quantifies the steepness of the sigmoidal stimulus-response curve and is therefore a sensitivity parameter. It is often used to assess the cooperativity of a system. A Hill coefficient greater than one is indicative of positive cooperativity, and thus the system exhibits ultrasensitivity. Systems with a Hill coefficient of 1 are noncooperative and follow classical Michaelis-Menten kinetics. Enzymes exhibiting noncooperative activity are represented by hyperbolic stimulus/response curves, compared to sigmoidal curves for cooperative (ultrasensitive) enzymes. In mitogen-activated protein kinase (MAPK) signaling (see example below), the ultrasensitivity of the signaling is supported by a sigmoidal stimulus/response curve that is comparable to an enzyme with a Hill coefficient of 4.0-5.0. This is even more ultrasensitive than the cooperative binding of oxygen by hemoglobin, which has a Hill coefficient of 2.8. Calculation From an operational point of view, the Hill coefficient can be calculated approximately as nH = log(81) / log(EC90/EC10),
where EC90 and EC10 are the input values needed to produce 90% and 10% of the maximal response, respectively. Response Coefficient Global sensitivity measures such as the Hill coefficient do not characterise the local behaviour of s-shaped curves. Instead, these features are well captured by the response coefficient, defined as the local logarithmic gain of the dose-response curve, i.e. the relative change in output produced by a relative change in input. In systems biology, such system responses are referred to as control coefficients. Specifically, the concentration control coefficients measure the response of concentrations to changes in a given input. In addition, within the framework of the more general biochemical control analysis, such responses can be described in terms of the individual local responses, called the elasticities. Link between Hill Coefficient and Response coefficient Altszyler et al. (2017) have shown that these ultrasensitivity measures can be linked: the Hill coefficient equals an average of the response coefficient taken over the relevant working range of the dose-response curve. Ultrasensitivity in function composition Consider two coupled ultrasensitive modules, disregarding effects of sequestration of molecular components between layers. In this case, the expression for the system's dose-response curve results from the mathematical composition of the functions which describe the input/output relationships of the isolated modules. Brown et al. (1997) have shown that the local ultrasensitivity of the different layers combines multiplicatively: the response coefficient of the composite system is the product of the response coefficients of the individual layers. In connection with this result, Ferrell et al. (1997) showed, for Hill-type modules, that the overall cascade global ultrasensitivity had to be less than or equal to the product of the global ultrasensitivity estimations of each cascade's layer, i.e. at most n1 n2, where n1 and n2 are the Hill coefficients of modules 1 and 2 respectively. Altszyler et al. (2017) have shown that the cascade's global ultrasensitivity can be analytically calculated in terms of the input values that delimit the Hill working range of the composite system, i.e. the input values for each layer for which the last layer reaches 10% and 90% of its maximal output level. It follows from this calculation that the system's Hill coefficient can be written as the product of two factors which characterize the local average sensitivities of each layer over the relevant input region. For the more general case of a cascade of several modules, the Hill coefficient can be expressed as the product of analogous layer-by-layer factors. Supramultiplicativity Several authors have reported the existence of supramultiplicative behavior in signaling cascades (i.e. the ultrasensitivity of the combination of layers is higher than the product of the individual ultrasensitivities), but in many cases the ultimate origin of supramultiplicativity remained elusive. The framework of Altszyler et al. (2017) naturally suggested a general scenario where supramultiplicative behavior could take place. This could occur when, for a given module, the corresponding Hill input working range is located in an input region with local ultrasensitivities higher than the global ultrasensitivity of the respective dose-response curve. Role in Cellular Processes MAP Kinase Signaling Cascade A ubiquitous signaling motif that exhibits ultrasensitivity is the MAPK (mitogen-activated protein kinase) cascade, which can take a graded input signal and produce a switch-like output, such as gene transcription or cell cycle progression. In this common motif, MAPK is activated by an earlier kinase in the cascade, called MAPK kinase, or MAPKK.
Similarly, MAPKK is activated by MAPKK kinase, or MAPKKK. These kinases are sequentially phosphorylated when MAPKKK is activated, usually via a signal received by a membrane-bound receptor protein. MAPKKK activates MAPKK, and MAPKK activates MAPK. Ultrasensitivity arises in this system due to several features: MAPK and MAPKK both require two separate phosphorylation events to be activated. The reversal of MAPK phosphorylation by specific phosphatases requires an increasing concentration of activation signals from each prior kinase to achieve an output of the same magnitude. The MAPKK is at a concentration above the KΜ for its specific phosphatase and MAPK is at a concentration above the KΜ for MAPKK. Besides the MAPK cascade, ultrasensitivity has also been reported in muscle glycolysis, in the phosphorylation of isocitrate dehydrogenase and in the activation of the calmodulin-dependent protein kinase II (CAMKII). An ultrasensitive switch has been engineered by combining a simple linear signaling protein (N-WASP) with one to five SH3 interaction modules that have autoinhibitory and cooperative properties. Addition of a single SH3 module created a switch that was activated in a linear fashion by exogenous SH3-binding peptide. Increasing number of domains increased ultrasensitivity. A construct with three SH3 modules was activated with an apparent Hill coefficient of 2.7 and a construct with five SH3 module was activated with an apparent Hill coefficient of 3.9. Translocation During G2 phase of the cell cycle, Cdk1 and cyclin B1 makes a complex and forms maturation promoting factor (MPF). The complex accumulates in the nucleus due to phosphorylation of the cyclin B1 at multiple sites, which inhibits nuclear export of the complex. Phosphorylation of Thr19 and Tyr15 residues of Cdk1 by Wee1 and MYT1 keeps the complex inactive and inhibits entry into mitosis whereas dephosphorylation of Cdk1 by CDC25C phosphatase at Thr19 and Tyr15 residues, activates the complex which is necessary in order to enter mitosis. Cdc25C phosphatase is present in the cytoplasm and in late G2 phase it is translocated into the nucleus by signaling such as PIK1, PIK3. The regulated translocation and accumulation of the multiple required signaling cascade components, MPF and its activator Cdc25, in the nucleus generates efficient activation of the MPF and produces switch-like, ultrasensitive entry into mitosis. The figure shows different possible mechanisms for how increased regulation of the localization of signaling components by the stimulus (input signal) shifts the output from Michaelian response to ultrasensitive response. When stimulus is regulating only inhibition of Cdk1-cyclinB1 nuclear export, the outcome is Michaelian response, Fig (a). But if the stimulus can regulate localization of multiple components of the signaling cascade, i.e. inhibition of Cdk1-cyclinB1 nuclear export and translocation of the Cdc25C to nucleus, then the outcome is ultrasensitive response, Fig (b). As more components of the signaling cascade are regulated and localized by the stimulus—i.e. inhibition of Cdk1-cyclinB1 nuclear export, translocation of the Cdc25C to the nucleus, and activation of Cdc25C—the output response becomes more and more ultrasensitive, Fig(c). Buffering (decoy) During mitosis, mitotic spindle orientation is essential for determining the site of cleavage furrowing and position of daughter cells for subsequent cell fate determination. 
This orientation is achieved by polarizing cortical factors and rapid alignment of the spindle with the polarity axis. In fruit flies, three cortical factors have been found to regulate the position of the spindle: heterotrimeric G protein α subunit (Gαi), Partner of Inscuteable (Pins), and Mushroom body defect (Mud). Gαi localizes at apical cortex to recruit Pins. Upon binding to GDP-bound Gαi, Pins is activated and recruits Mud to achieve polarized distribution of cortical factors. N-terminal tetratricopeptide repeats (TPRs) in Pins is the binding region for Mud, but is autoinhibited by intrinsic C-terminal GoLoco domains (GLs) in the absence of Gαi. Activation of Pins by Gαi binding to GLs is highly ultrasensitive and is achieved through the following decoy mechanism: GLs 1 and 2 act as a decoy domains, competing with the regulatory domain, GL3, for Gαi inputs. This intramolecular decoy mechanism allows Pins to establish its threshold and steepness in response to distinct Gαi concentration. At low Gαi inputs, the decoy GLs 1 and 2 are preferentially bound. At intermediate Gαi concentration, the decoys are nearly saturated, and GL3 begins to be populated. At higher Gαi concentration, the decoys are fully saturated and Gαi binds to GL3, leading to Pins activation. Ultrasensitivity of Pins in response to Gαi ensures that Pins is activated only at the apical cortex where Gαi concentration is above the threshold, allowing for maximal Mud recruitment. Switching Behavior of GTPases GTPases are enzymes capable of binding and hydrolyzing guanosine triphosphate (GTP). Small GTPases, such as Ran and Ras, can exist in either a GTP-bound form (active) or a GDP-bound form (inactive), and the conversion between these two forms grants them a switch-like behavior. As such, small GTPases are involved in multiple cellular events, including nuclear translocation and signaling. The transition between the active and inactive states is facilitated by guanine nucleotide exchange factors (GEFs) and GTPase activating proteins (GAPs). Computational studies on the switching behavior of GTPases have revealed that the GTPase-GAP-GEF system displays ultrasensitivity. In their study, Lipshtat et al. simulated the effects of the levels of GEF and GAP activation on the Rap activation signaling network in response to signals from activated α2-adrenergic (α2R) receptors, which lead to degradation of the activated Rap GAP. They found that the switching behavior of Rap activation was ultrasensitive to changes in the concentration (i.e. amplitude) and the duration of the α2R signal, yielding Hill coefficients of nH=2.9 and nH=1.7, respectively (a Hill coefficient greater than nH=1 is characteristic of ultrasensitivity ). The authors confirmed this experimentally by treating neuroblasts with HU-210, which activates RAP through degradation of Rap GAP. Ultrasensitivity was observed both in a dose-dependent manner (nH=5±0.2), by treating cells with different HU-210 concentrations for a fixed time, and in a duration-dependent manner (nH=8.6±0.8), by treating cells with a fixed HU-210 concentration during varying times. 
By further studying the system, the authors determined that the degree of responsiveness and ultrasensitivity was heavily dependent on two parameters: the initial ratio of the effective GAP and GEF rate constants, where the k's incorporate both the concentration of active GAP or GEF and their corresponding kinetic rates; and the signal impact, which is the product of the degradation rate of activated GAP and either the signal amplitude or the signal duration. The ratio parameter affects the steepness of the transition between the two states of the GTPase switch, with higher values (~10) leading to ultrasensitivity. The signal impact affects the switching point. Therefore, by depending on the ratio of concentrations rather than on individual concentrations, the switch-like behavior of the system can also be displayed outside of the zero-order regime. Ultrasensitivity and Neuronal Potentiation Persistent stimulation at the neuronal synapse can lead to markedly different outcomes for the post-synaptic neuron. Extended weak signaling can result in long-term depression (LTD), in which activation of the post-synaptic neuron requires a stronger signal than before LTD was initiated. In contrast, long-term potentiation (LTP) occurs when the post-synaptic neuron is subjected to a strong stimulus, and this results in strengthening of the neural synapse (i.e., less neurotransmitter signal is required for activation). In the CA1 region of the hippocampus, the decision between LTD and LTP is mediated solely by the level of intracellular Ca2+ at the post-synaptic dendritic spine. Low levels of Ca2+ (resulting from low-level stimulation) activate the protein phosphatase calcineurin, which induces LTD. Higher levels of Ca2+ result in activation of Ca2+/calmodulin-dependent protein kinase II (CaMKII), which leads to LTP. The difference in Ca2+ concentration required for a cell to undergo LTP is only marginally higher than for LTD, and because neurons show bistability (either LTP or LTD) following persistent stimulation, this suggests that one or more components of the system respond in a switch-like, or ultrasensitive, manner. Bradshaw et al. demonstrated that CaMKII (the LTP inducer) responds to intracellular calcium levels in an ultrasensitive manner, with <10% activity at 1.0 μM and ~90% activity at 1.5 μM, resulting in a Hill coefficient of ~8. Further experiments showed that this ultrasensitivity was mediated by cooperative binding of CaMKII by two molecules of calmodulin (CaM), and autophosphorylation of activated CaMKII leading to a positive feedback loop. In this way, intracellular calcium can induce a graded, non-ultrasensitive activation of calcineurin at low levels, leading to LTD, whereas the ultrasensitive activation of CaMKII results in a threshold intracellular calcium level that generates a positive feedback loop that amplifies the signal and leads to the opposite cellular outcome: LTP. Thus, binding of a single substrate to multiple enzymes with different sensitivities facilitates a bistable decision for the cell to undergo LTD or LTP. Ultrasensitivity in Development It has been suggested that zero-order ultrasensitivity may generate thresholds during development allowing for the conversion of a graded morphogen input to a binary switch-like response. Melen et al. (2005) have found evidence for such a system in the patterning of the Drosophila embryonic ventral ectoderm.
In this system, graded mitogen activated protein kinase (MAPK) activity is converted to a binary output, the all-or-none degradation of the Yan transcriptional repressor. They found that MAPK phosphorylation of Yan is both essential and sufficient for Yan's degradation. Consistent with zero-order ultrasensitivity, an increase in Yan protein lengthened the time required for degradation but had no effect on the border of Yan degradation in developing embryos. Their results are consistent with a situation where a large pool of Yan becomes either completely degraded or maintained. The particular response of each cell depends on whether the rate of reversible Yan phosphorylation by MAPK is greater or less than the rate of dephosphorylation. Thus, a small increase in MAPK phosphorylation can cause it to become the dominant process in the cell and lead to complete degradation of Yan. Multistep-feedback loop mechanism also leads to ultrasensitivity A multistep feedback loop mechanism can also lead to ultrasensitivity. One study engineered synthetic feedback loops using the yeast mating mitogen-activated protein (MAP) kinase pathway as a model system. In the yeast mating pathway, alpha-factor activates the receptor Ste2 and, through it, Ste4; activated Ste4 recruits the Ste5 complex to the membrane, allowing the membrane-localized PAK-like kinase Ste20 to activate the MAPKKK Ste11. Ste11 and the downstream kinases, Ste7 (MAPKK) and Fus3 (MAPK), are colocalized on the scaffold, and activation of the cascade leads to a transcriptional program. The authors used pathway modulators outside of the core cascade: Ste50 promotes activation of Ste11 by Ste20, while Msg5 is a MAPK phosphatase that deactivates Fus3. They built a circuit with enhanced ultrasensitive switch behavior by constitutively expressing the negative modulator Msg5 (the MAPK phosphatase) and inducibly expressing the positive modulator Ste50. The success of this recruitment-based engineering strategy suggests that it may be possible to reprogram cellular responses with high precision. Flagellar motors and chemotaxis The rotational direction of the E. coli flagellum is controlled by the flagellar motor switch. A ring of 34 FliM proteins around the rotor binds CheY, whose phosphorylation state determines whether the motor rotates in a clockwise or counterclockwise manner. The rapid switching mechanism is attributed to an ultrasensitive response, which has a Hill coefficient of ~10. This system has been proposed to follow a dissipative allosteric model, in which rotational switching is a result of both CheY binding and energy consumption from the proton motive force, which also powers the flagellar rotation. Development of a Synthetic Ultrasensitive Signaling Pathway Recently it has been shown that a Michaelian signaling pathway can be converted to an ultrasensitive signaling pathway by the introduction of two positive feedback loops. In this synthetic biology approach, Palani and Sarkar began with a linear, graded response pathway, a pathway that showed a proportional increase in signal output relative to the amount of signal input, over a certain range of inputs. This simple pathway was composed of a membrane receptor, a kinase and a transcription factor. Upon activation, the membrane receptor phosphorylates the kinase, which moves into the nucleus and phosphorylates the transcription factor, which turns on gene expression.
To transform this graded response system into an ultrasensitive, or switch-like, signaling pathway, the investigators created two positive feedback loops. In the engineered system, activation of the membrane receptor resulted in increased expression of both the receptor itself and the transcription factor. This was accomplished by placing a promoter specific for this transcription factor upstream of both genes. The authors were able to demonstrate that the synthetic pathway displayed high ultrasensitivity and bistability. Recent computational analysis of the effects of a signaling protein's concentration on the presence of an ultrasensitive response has come to complementary conclusions about how such concentrations influence the conversion of a graded response into an ultrasensitive one. Rather than focus on the generation of signaling proteins through positive feedback, however, the study instead focused on how the dynamics of a signaling protein's exit from the system influences the response. Soyer, Kuwahara, and Csikász-Nagy devised a signaling pathway composed of a protein (P) that possesses two possible states (unmodified P or modified P*) and can be modified by an incoming stimulus E. Furthermore, while the unmodified form, P, is permitted to enter or leave the system, P* is only allowed to leave (i.e. it is not generated elsewhere). After varying the parameters of this system, the researchers discovered that the modification of P to P* can shift between a graded response and an ultrasensitive response via the modification of the exit rates of P and P* relative to each other. The transition between an ultrasensitive response to E and a graded response to E was generated when the two rates went from highly similar to highly dissimilar, irrespective of the kinetics of the conversion from P to P* itself. This finding suggests at least two things: 1) the simplifying assumption that the levels of signaling molecules stay constant in a system can severely limit the understanding of ultrasensitivity's complexity; and 2) it may be possible to induce or inhibit ultrasensitivity artificially by regulating the rates of the entry and exit of signaling molecules occupying a system of interest. Limitations in Modularity It has been shown that the integration of a given synthetic ultrasensitive module with upstream and downstream components often alters its information-processing capabilities. These effects must be taken into account in the design process. See also Logistic function Heaviside step function Stimulus (physiology) Hill equation (biochemistry) References Molecular biology
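As a numerical companion to the "Saturation mechanisms (zero-order ultrasensitivity)" section above, the following Python sketch (the function names, parameter values and the two Km choices are ours, purely for illustration) solves the steady state of a single kinase/phosphatase cycle and compares how much the kinase-to-phosphatase activity ratio must change to drive the substrate from 10% to 90% modified under saturating versus non-saturating conditions.

```python
import numpy as np
from scipy.optimize import brentq

def modified_fraction(v1, v2, wt, k1, k2):
    """Steady-state fraction of modified substrate W' in a kinase/phosphatase cycle:
    the forward rate v1*[W]/(k1+[W]) balances the reverse rate v2*[W']/(k2+[W']),
    with [W] + [W'] = wt (a simplified Michaelis-Menten treatment of each enzyme)."""
    def balance(w):
        return v1 * w / (k1 + w) - v2 * (wt - w) / (k2 + (wt - w))
    w = brentq(balance, 1e-12, wt - 1e-12)  # unmodified substrate at steady state
    return (wt - w) / wt

wt = 1.0
ratios = np.logspace(-2, 2, 801)  # kinase-to-phosphatase activity ratio V1/V2
for label, km in (("saturating (Km = 0.01 * total substrate)", 0.01),
                  ("non-saturating (Km = 10 * total substrate)", 10.0)):
    frac = np.array([modified_fraction(r, 1.0, wt, km, km) for r in ratios])
    r10 = np.interp(0.1, frac, ratios)  # ratio giving 10% modified substrate
    r90 = np.interp(0.9, frac, ratios)  # ratio giving 90% modified substrate
    print(f"{label}: fold-change in V1/V2 for 10% -> 90% conversion = {r90 / r10:.1f}")
```

Under these assumptions the saturated (zero-order) cycle flips over a far narrower range of the activity ratio than the non-saturated, Michaelian cycle, which is the switch-like behavior described by Goldbeter and Koshland.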
Ultrasensitivity
[ "Chemistry", "Biology" ]
7,883
[ "Biochemistry", "Molecular biology" ]
42,011,185
https://en.wikipedia.org/wiki/Quotient%20type
In the field of type theory in computer science, a quotient type is a data type which respects a user-defined equality relation. A quotient type defines an equivalence relation on elements of the type - for example, we might say that two values of the type Person are equivalent if they have the same name; formally p1 == p2 if p1.name == p2.name. In type theories which allow quotient types, an additional requirement is made that all operations must respect the equivalence between elements. For example, if f is a function on values of type Person, it must be the case that for two Persons p1 and p2, if p1 == p2 then f(p1) == f(p2). Quotient types are part of a general class of types known as algebraic data types. In the early 1980s, quotient types were defined and implemented as part of the Nuprl proof assistant, in work led by Robert L. Constable and others. Quotient types have been studied in the context of Martin-Löf type theory, dependent type theory, higher-order logic, and homotopy type theory. Definition To define a quotient type, one typically provides a data type together with an equivalence relation on that type, for example, Person // ==, where == is a user-defined equality relation. The elements of the quotient type are equivalence classes of elements of the original type. Quotient types can be used to define modular arithmetic. For example, if Integer is a data type of integers, can be defined by saying that if the difference is even. We then form the type of integers modulo 2: Integer // The operations on integers, +, - can be proven to be well-defined on the new quotient type. Variations In type theories that lack quotient types, setoids (sets explicitly equipped with an equivalence relation) are often used instead of quotient types. However, unlike with setoids, many type theories may require a formal proof that any functions defined on quotient types are well-defined. Properties Quotient types are part of a general class of types known as algebraic data types. Just as product types and sum types are analogous to the cartesian product and disjoint union of abstract algebraic structures, quotient types reflect the concept of set-theoretic quotients, sets whose elements are partitioned into equivalence classes by a given equivalence relation on the set. Algebraic structures whose underlying set is a quotient are also termed quotients. Examples of such quotient structures include quotient sets, groups, rings, categories and, in topology, quotient spaces. References See also Algebraic data type Product type Setoid Sum type Data types Type theory Composite data types
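The modular-arithmetic example above can be sketched even in a language without built-in quotient types by choosing a canonical representative for each equivalence class and making every operation respect it. The following Python sketch is only an illustration of the idea (the class name and methods are ours, not taken from Nuprl or any other system mentioned above):

```python
class IntMod2:
    """Integers modulo 2: two integers are identified when their difference is even."""

    def __init__(self, n: int):
        # Normalize to a canonical representative (0 or 1) so that equivalent
        # integers construct equal values.
        self.rep = n % 2

    def __eq__(self, other: "IntMod2") -> bool:
        return self.rep == other.rep

    def __hash__(self) -> int:
        return hash(self.rep)

    def __add__(self, other: "IntMod2") -> "IntMod2":
        # Addition is well-defined on the quotient: the result does not depend
        # on which representatives were used to build the operands.
        return IntMod2(self.rep + other.rep)

    def __repr__(self) -> str:
        return f"IntMod2({self.rep})"


assert IntMod2(3) == IntMod2(7)                # 3 - 7 is even, so they are identified
assert IntMod2(3) + IntMod2(4) == IntMod2(17)  # the operation respects the equivalence
```

In a proof assistant such as Nuprl, or in homotopy type theory, the well-definedness obligation (that functions send equivalent inputs to equivalent outputs) is discharged formally rather than by a normalization convention like the one used in this sketch.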
Quotient type
[ "Mathematics" ]
587
[ "Type theory", "Mathematical logic", "Mathematical structures", "Mathematical objects" ]
42,012,361
https://en.wikipedia.org/wiki/Cosheaf
In topology, a branch of mathematics, a cosheaf is a dual notion to that of a sheaf that is useful in studying Borel-Moore homology. Definition We associate to a topological space X its category of open sets Open(X), whose objects are the open sets of X, with a (unique) morphism from U to V whenever U ⊆ V. Fix a category C. Then a precosheaf F (with values in C) is a covariant functor F : Open(X) → C, i.e., F consists of, for each open set U of X, an object F(U) in C, and for each inclusion of open sets U ⊆ V, a morphism i_{U,V} : F(U) → F(V) in C, such that i_{U,U} = id_{F(U)} for all U and i_{V,W} ∘ i_{U,V} = i_{U,W} whenever U ⊆ V ⊆ W. Suppose now that C is an abelian category that admits small colimits. Then a cosheaf is a precosheaf F for which the sequence ⊕_{i,j} F(U_i ∩ U_j) → ⊕_i F(U_i) → F(U) → 0 is exact for every collection {U_i} of open sets, where U = ⋃_i U_i and the first map is the difference of the morphisms induced by the inclusions U_i ∩ U_j ⊆ U_i and U_i ∩ U_j ⊆ U_j. (Notice that this is dual to the sheaf condition.) Approximately, exactness at F(U) means that every element over U can be represented as a finite sum of elements that live over the smaller opens U_i, while exactness at ⊕_i F(U_i) means that, when we compare two such representations of the same element, their difference must be captured by a finite collection of elements living over the intersections U_i ∩ U_j. Equivalently, F is a cosheaf if for all open sets U and V, F(U ∪ V) is the pushout of F(U) ← F(U ∩ V) → F(V), and for any upward-directed family of open sets U_α, the canonical morphism colim_α F(U_α) → F(⋃_α U_α) is an isomorphism. One can show that this definition agrees with the previous one. This one, however, has the benefit of making sense even when C is not an abelian category. Examples A motivating example of a precosheaf of abelian groups is the singular precosheaf, sending an open set U to C_*(U), the free abelian group of singular chains on U. In particular, there is a natural inclusion C_*(U) → C_*(V) whenever U ⊆ V. However, this fails to be a cosheaf because a singular simplex cannot be broken up into smaller pieces. To fix this, we let s : C_*(U) → C_*(U) be the barycentric subdivision homomorphism and define Ĉ_*(U) to be the colimit of the diagram C_*(U) → C_*(U) → C_*(U) → ⋯ in which each map is s. In the colimit, a simplex is identified with all of its barycentric subdivisions. One can show using the Lebesgue number lemma that the precosheaf sending U to Ĉ_*(U) is in fact a cosheaf. Fix a continuous map f : Y → X of topological spaces. Then the precosheaf (on X) of topological spaces sending U to f^{-1}(U) is a cosheaf. Notes References Algebraic topology Category theory Sheaf theory
Cosheaf
[ "Mathematics" ]
509
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Algebraic topology", "Topology stubs", "Sheaf theory", "Topology", "Category theory", "Mathematical relations", "Fields of abstract algebra" ]
42,012,426
https://en.wikipedia.org/wiki/C11H13NO6
{{DISPLAYTITLE:C11H13NO6}} The molecular formula C11H13NO6 (molar mass: 255.23 g/mol, exact mass: 255.0743 u) may refer to: Caramboxin Diroximel fumarate Molecular formulas
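The molar mass quoted above follows directly from standard atomic weights; the short Python sketch below (an illustration only, with atomic weights rounded to three decimal places) reproduces the figure for C11H13NO6.

```python
# Approximate standard atomic weights (g/mol), rounded to three decimals
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

composition = {"C": 11, "H": 13, "N": 1, "O": 6}  # C11H13NO6

molar_mass = sum(ATOMIC_WEIGHT[el] * count for el, count in composition.items())
print(f"molar mass of C11H13NO6 ~= {molar_mass:.2f} g/mol")  # about 255.23 g/mol
```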
C11H13NO6
[ "Physics", "Chemistry" ]
63
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
42,013,569
https://en.wikipedia.org/wiki/St.%20Johns%20Racquet%20Center
Portland Tennis & Education (formally St. Johns Racquet Center) offers a variety of classes, drills, mixers, & private lessons for tennis & pickleball players of all levels. As a public facility, we welcome everyone. Our court fees are a flat rate that include ball machine access & racquet rental at no extra cost! Racquet stringing service is also offered at affordable prices. Every dollar earned on our public courts is poured directly back into our nonprofit program that offers academic support, tennis & athletic enrichment, life skills, family resources & mental health support to K-12 students enrolled in our after school & summer programs. Every time you play tennis or pickleball, you 'play a point' for PT&E students & families Since our founding in 1996, 100% of 12th grade PT&E program graduates have graduated from high school on-time and gone on to pursue the post-secondary path of their choosing! History The St. Johns Racquet Center was planned in 1976 but delayed until 1979 after problems with shipment from the manufacturer Hess Building Company. The 27,500 ft. prefabricated building cost US$648,000 (US$ adjusted for inflation) was designed by Richard L. Glassford and Associations and manufactured in the Midwest United States. The total construction cost reached US$883,537 (US$ adjusted for inflation), most of which came from Economic Development Administration, when the building was erected. A failed plan in 1981 called for part of the racquet center be made a roller rink. In October 1981, a National Association of Intercollegiate Athletics (NAIA) round robin tournament was held at the racquet center. The maximum capacity of the building in accordance to the fire code is 20 people. Threats to close the center came in 1983 from Portland Parks & Recreation commissioner Charles Jordan. Instead the hours of operations were cut. A racquetball club known as the "Smashers" was organized at the center in 1984. The center held a table tennis tournament in 1987 and 1988. Plans to allow a private company operate the center were drawn up in 1994 but were quickly abandoned. A similar plan came up in 2006 and also failed. A plan to tear the center down to construct an apartment building was proposed in 2007 but was shelved and it was never recommended again. The center hosts several Portland Interscholastic League tennis matches. It is currently operated by Portland After-School Tennis & Education (PASTE). See also List of sports venues in Portland, Oregon References External links St Johns Racquet Center — PortlandOregon.gov [./Http://www.ptande.org Portland Tennis & Education] St Johns Racquet Club — TennisPoint.com 1979 establishments in Oregon Sports venues completed in 1979 Sports venues in Portland, Oregon Tennis venues in the United States Racquetball in the United States Parks in Portland, Oregon Buildings and structures in St. Johns, Portland, Oregon Prefabricated buildings
St. Johns Racquet Center
[ "Engineering" ]
610
[ "Building engineering", "Prefabricated buildings" ]
42,013,806
https://en.wikipedia.org/wiki/Littlewood%27s%20Tauberian%20theorem
In mathematics, Littlewood's Tauberian theorem is a strengthening of Tauber's theorem introduced by John Edensor Littlewood in 1911. Statement Littlewood showed the following: If an = O(1/n), and as x ↑ 1 we have ∑ an x^n → s, then ∑ an converges to s. Hardy and Littlewood later showed that the hypothesis on an could be weakened to the "one-sided" condition an ≥ –C/n for some constant C. However, in some sense the condition is optimal: Littlewood showed that if cn is any unbounded sequence then there is a series with |an| ≤ |cn|/n which is divergent but Abel summable. History Littlewood later described his discovery of the proof of his Tauberian theorem. Alfred Tauber's original theorem was similar to Littlewood's, but with the stronger hypothesis that an = o(1/n). Hardy had proved a similar theorem for Cesàro summation with the weaker hypothesis an = O(1/n), and suggested to Littlewood that the same weaker hypothesis might also be enough for Tauber's theorem. In spite of the fact that the hypothesis in Littlewood's theorem seems only slightly weaker than the hypothesis in Tauber's theorem, Littlewood's proof was far harder than Tauber's, though Jovan Karamata later found an easier proof. Littlewood's theorem follows from the later Hardy–Littlewood Tauberian theorem, which is in turn a special case of Wiener's Tauberian theorem, which itself is a special case of various abstract Tauberian theorems about Banach algebras. Examples References Tauberian theorems
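The interplay between Abel summability and ordinary convergence in the statement above can be illustrated numerically. The following Python sketch (an illustration only; the choice of series is ours) uses an = (−1)^(n+1)/n, which satisfies an = O(1/n): its Abel means ∑ an x^n approach ln 2 as x ↑ 1, and, consistent with the theorem, the partial sums of ∑ an converge to the same value.

```python
import math

def a(n):
    # a_n = (-1)^(n+1) / n, which satisfies a_n = O(1/n)
    return (-1) ** (n + 1) / n

N = 200_000

# Partial sums of the series itself
partial = sum(a(n) for n in range(1, N + 1))

# Abel means: sum of a_n * x^n for x approaching 1 from below
for x in (0.9, 0.99, 0.999, 0.9999):
    abel = sum(a(n) * x**n for n in range(1, N + 1))
    print(f"x = {x}: Abel mean = {abel:.6f}")

print(f"partial sum of first {N} terms = {partial:.6f}")
print(f"ln 2                           = {math.log(2):.6f}")
```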
Littlewood's Tauberian theorem
[ "Mathematics" ]
331
[ "Theorems in mathematical analysis", "Tauberian theorems" ]
42,017,954
https://en.wikipedia.org/wiki/List%20of%20cosmological%20computation%20software
This List of Cosmological Computation Software catalogs the tools and programs used by scientists in cosmological research. In the past few decades, the accelerating technological evolution has profoundly enhanced astronomical instrumentation, enabling more precise observations and expanding the breadth and depth of data collection by several orders of magnitude. Simultaneously, the exponential growth in computational power has enabled the creation of computer simulations that reveal details with unprecedented resolution and accuracy. For performing computer simulations of the cosmos and analyzing data from both cosmological experiments and simulations, many advanced methods and computational software codes are developed every year. These codes are widely used by researchers all across the globe, in all various fields and topics of cosmology. The computational software used in cosmology can be classified into the following major classes: Cosmological Boltzmann codes: These codes are used for calculating the theoretical power spectrum given the cosmological parameters. These codes are capable of calculating the power spectrum from the standard LCDM model or its derivatives. Some of the most used CMB Boltzmann codes are CMBFAST, CAMB, CMBEASY, CLASS, CMBAns etc. Cosmological parameter estimator: The parameter estimation codes are used for calculating the best-fit parameters from the observation data. The ready to use codes available for this purpose are CosmoMC, AnalyzeThis, SCoPE etc. Newtonian cosmological simulation codes GADGET GADGET, named "GAlaxies with Dark matter and Gas intEracT" is a code written in C++ for cosmological N-body/Smoothed-particle hydrodynamics (SPH) simulations on massively parallel computers with distributed memory. Its first version was developed by German astrophysicist, Volker Springel and was published in 2000. It was followed by two more official public versions, with GADGET-2 released in 2005 and GADGET-4 released in 2020, which is the most recent public version of the software suite currently. GADGET is capable to address a wide array of astrophysically interesting problems, e.g. the dynamics of the gaseous intergalactic medium, star formation and its regulation by feedback processes, colliding and merging galaxies, as well as the formation of large-scale structure in the Universe. AREPO AREPO is a massively parallel code for gravitational N-body systems, hydrodynamics and magnetohydrodynamics (MHD). It is named after the enigmatic word AREPO in the Latin palindromic sentence "sator arepo tenet opera rotas", the Sator Square. The first version of AREPO was written and published by Volker Springel in 2010, with further development by Rüdiger Pakmor and contributions by many other authors. The Arepo code utilizes an unstructured Voronoi-mesh and was designed to blend the benefits of finite-volume hydrodynamics and SPH. Primarily optimized for cosmological simulations, especially galaxy formation, Arepo supports a high dynamic range in space and time. RAMSES GIZMO GIZMO is a flexible, massively parallel, multi-physics simulation code, written in ANSI C by Philip F. Hopkins. The code offers diverse methods to solve fluid equations. It also introduces novel methods, which optimize the resolution of simulations and minimize common errors found in previous methods that limited the accuracy of prior solvers. 
Originating from GADGET (hence the name "GIZMO", a play on words), the code maintains compatibility in naming/use conventions as well as input/output, making it user-friendly for those familiar with GADGET. PKDGRAV3 StePS StePS, which stands for "STEreographically Projected cosmological Simulations" is a freely available code that implements a novel N-body simulation method that models an infinite universe within a finite sphere with isotropic boundary conditions to follow the evolution of the large-scale structure. Unlike traditional methods, which use unrealistic periodic boundary conditions for numerical simplicity, StePS offers a more observation-aligned approach. This technique enables detailed simulations of an infinite universe using less memory and provides results that are more in line with the observed universe geometry and topology. Relativistic cosmological simulation codes gevolution CosmoGRaPH CosmoGRaPH (Cosmological General Relativity And (Perfect fluid | Particle) Hydrodynamics) is a C++ code used to explore cosmological problems in a fully general relativistic setting. It was developed by James Mertens and Chi Tian and was published in 2016. The code implements various novel methods for numerically solving the Einstein field equations, including an N-body solver, full AMR capabilities via SAMRAI, and raytracing. Cosmological Boltzmann codes CMBFAST CMBFAST is a computer code, developed by Uroš Seljak and Matias Zaldarriaga (based on a Boltzmann code written by Edmund Bertschinger, Chung-Pei Ma and Paul Bode) for computing the power spectrum of the cosmic microwave background anisotropy. It is the first efficient program to do so, reducing the time taken to compute the anisotropy from several days to a few minutes by using a novel semi-analytic line-of-sight approach. CAMB Code for Anisotropies in the Microwave Background by Antony Lewis and Anthony Challinor. The code was originally based on CMBFAST. Later several developments are made to make it a faster and more accurate and compatible with the present research. The code is written in an object oriented manner to make it more user friendly. CMBEASY CMBEASY is a software package written by Michael Doran, Georg Robbers and Christian M. Müller. The code is based on the CMBFAST package. CMBEASY is fully object oriented C++. This considerably simplifies manipulations and extensions of the CMBFAST code. In addition, a powerful Spline class can be used to easily store and visualize data. Many features of the CMBEASY package are also accessible via a graphical user interface. This may be helpful for gaining intuition, as well as for instruction purposes. CLASS The purpose of the Cosmic Linear Anisotropy Solving System is to simulate the evolution of linear perturbations in the universe and to compute CMB and large scale structure observables. CLASS is written in plain C to achieve high performance, yet its modular structure emulates the architecture and philosophy of classes in object-oriented languages for enhanced readability and modularity. The name "CLASS" also derives from its object-oriented style, mimicking the notion of a class. Parameter estimation packages AnalizeThis AnalizeThis is a parameter estimation package used by cosmologists. It comes with the CMBEASY package. The code is written in C++ and uses the global metropolis algorithm for estimation of cosmological parameters. The code was developed by Michael Doran, for parameter estimation using WMAP-5 likelihood. 
However, the code was not updated after 2008 for the new CMB experiments. Hence this package is currently not in use by the CMB research community. The package comes up with a nice GUI. CosmoMC CosmoMC is a Fortran 2003 Markov chain Monte Carlo (MCMC) engine for exploring cosmological parameter space. The code does brute force (but accurate) theoretical matter power spectrum and Cl calculations using CAMB. CosmoMC uses a simple local Metropolis algorithm along with an optimized fast-slow sampling method. This fast-slow sampling method provides faster convergence for the cases with many nuisance parameters like Planck. CosmoMC package also provides subroutines for post processing and plotting of the data. CosmoMC was written by Antony Lewis in 2002 and later several versions are developed to keep the code up-to date with different cosmological experiments. It is presently the most used cosmological parameter estimation code. SCoPE SCoPE/Slick Cosmological Parameter Estimator is a newly developed cosmological MCMC package written by Santanu Das in C language. Apart from standard global metropolis algorithm the code uses three unique technique named as 'delayed rejection' that increases the acceptance rate of a chain, 'pre-fetching' that helps an individual chain to run on parallel CPUs and 'inter-chain covariance update' that prevents clustering of the chains allowing faster and better mixing of the chains. The code is capable of faster computation of cosmological parameters from WMAP and Planck data. Other packages MADCAP — Microwave Anisotropy Data Computational Analysis Package developed by Borrill et al. SIToolBox — SI Toolbox is a package for estimating the isotropy violation in the CMB sky. It is developed by Das et al. and it consists of several Fortran subroutines and stand-alone facilities, that can be used to estimate the BipoSH coefficients from non statistically isotropic (nSI) skymaps. RECFAST — Software was developed by Seager, Sasselov, and Scott and used to calculate the recombination history of the universe. The package is used by cosmological boltzmann codes (CMBFast, CAMB etc.) TOAST — Time Ordered Astrophysics Scalable Tools, developed and designed by Theodore Kisner, Reijo Keskitalo, Jullian Borrill et al. It "generalizing the problem of CMB map-making to the reduction of any pointed time-domain data, and ensuring that the analysis of exponentially growing datasets scales to the largest HPC systems available". Commander - Commander is an Optimal Monte-carlo Markov chAiN Driven EstimatoR which implements fast and efficient end-to-end CMB posterior exploration through Gibbs sampling. It was developed by Hans Kristian Eriksen et al. Likelihood software packages Different cosmology experiments, in particular the CMB experiments like WMAP and Planck measures the temperature fluctuations in the CMB sky and then measure the CMB power spectrum from the observed skymap. But for parameter estimation the χ² is required. Therefore, all these CMB experiments come up with their own likelihood software. WMAP Likelihood Package Planck Likelihood Software See also Lambda-CDM Physical cosmology Observational cosmology Computational astrophysics UniverseMachine High-performance computing Notes Physical cosmology Cosmic background radiation Cosmological computation software Scientific simulation software
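The global Metropolis sampling that packages such as CosmoMC and SCoPE build on can be illustrated with a deliberately simplified sketch. The Python code below is not taken from any of those packages; it samples a toy two-parameter Gaussian likelihood (standing in for a real cosmological likelihood such as the WMAP or Planck ones, and for parameters loosely resembling H0 and n_s) with a basic Metropolis algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta):
    """Toy stand-in for a cosmological likelihood: an uncorrelated Gaussian
    around fiducial parameter values (roughly H0-like and n_s-like)."""
    fiducial = np.array([70.0, 0.96])
    sigma = np.array([2.0, 0.02])
    return -0.5 * np.sum(((theta - fiducial) / sigma) ** 2)

def metropolis(log_like, start, proposal_width, n_steps=20000):
    """Basic Metropolis sampler: propose a symmetric Gaussian step and accept it
    with probability min(1, L_new / L_old)."""
    chain = np.empty((n_steps, len(start)))
    theta = np.array(start, dtype=float)
    logl = log_like(theta)
    for i in range(n_steps):
        proposal = theta + proposal_width * rng.standard_normal(len(theta))
        logl_prop = log_like(proposal)
        if np.log(rng.random()) < logl_prop - logl:
            theta, logl = proposal, logl_prop
        chain[i] = theta
    return chain

chain = metropolis(log_likelihood, start=[65.0, 1.0],
                   proposal_width=np.array([1.0, 0.01]))
burned = chain[5000:]  # discard burn-in before quoting parameter constraints
print("posterior means:", burned.mean(axis=0))
print("posterior std devs:", burned.std(axis=0))
```

A production code replaces the toy likelihood with calls to a Boltzmann code (CAMB, CLASS, etc.) plus an experiment's likelihood software, and adds the convergence diagnostics, fast-slow sampling and parallel-chain machinery described above.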
List of cosmological computation software
[ "Physics", "Astronomy", "Technology" ]
2,145
[ "Lists of software", "Computing-related lists", "Theoretical physics", "Astrophysics", "Physical cosmology", "Astronomical sub-disciplines" ]
42,019,789
https://en.wikipedia.org/wiki/Apicidin
Apicidin is a fungal metabolite, as well as a histone deacetylase inhibitor. References Histone deacetylase inhibitors
Apicidin
[ "Chemistry" ]
32
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
46,633,733
https://en.wikipedia.org/wiki/List%20of%20water%20supply%20and%20sanitation%20by%20country
This list of water supply and sanitation by country provides information on the status of water supply and sanitation at a national or, in some cases, also regional level. Water supply and sanitation by country Water supply and sanitation in Afghanistan Water supply and sanitation in Algeria Water supply and sanitation in Argentina Water supply and sanitation in Australia Water supply and sanitation in Bangladesh Water supply and sanitation in Belgium Water supply and sanitation in Benin Water supply and sanitation in Bolivia Water supply and sanitation in Brazil Water supply and sanitation in Burkina Faso Water supply and sanitation in Cambodia Water supply and sanitation in Canada Water supply and sanitation in Chile Water supply and sanitation in China Water supply and sanitation in Colombia Water supply and sanitation in Costa Rica Water supply and sanitation in Cuba Water supply and sanitation in Denmark Water supply and sanitation in the Dominican Republic Water supply and sanitation in Ecuador Water supply and sanitation in Egypt Water supply and sanitation in El Salvador Water supply and sanitation in England Water supply and sanitation in Ethiopia Water supply and sanitation in France Water supply and sanitation in Germany Water supply and sanitation in Ghana Water supply and sanitation in Gibraltar Water supply and sanitation in Greece Water supply and sanitation in Guatemala Water supply and sanitation in Guyana Water supply and sanitation in Haiti Water supply and sanitation in Honduras Water supply and sanitation in Hong Kong Water supply and sanitation in India Water supply and sanitation in Indonesia Water supply and sanitation in Iran Water supply and sanitation in Iraq Water supply and sanitation in the Republic of Ireland Water supply and sanitation in Israel Water supply and sanitation in Italy Water supply and sanitation in Jamaica Water supply and sanitation in Japan Water supply and sanitation in Jordan Water supply and sanitation in Kenya Water supply and sanitation in Lebanon Water supply and sanitation in Malaysia Water supply and sanitation in Mexico Water supply and sanitation in Morocco Water supply and sanitation in Mozambique Water supply and sanitation in Namibia Water supply and sanitation in the Netherlands Water supply and sanitation in New Zealand Water supply and sanitation in Nicaragua Water supply and sanitation in Nigeria Water supply and sanitation in Pakistan Water supply and sanitation in the Palestinian territories Water supply and sanitation in Panama Water supply and sanitation in Paraguay Water supply and sanitation in Peru Water supply and sanitation in the Philippines Water supply and sanitation in Portugal Water supply and sanitation in Russia Water supply and sanitation in Rwanda Water supply and sanitation in Saudi Arabia Water supply and sanitation in Scotland Water supply and sanitation in Senegal Water supply and sanitation in Sierra Leone Water supply and sanitation in Singapore Water supply and sanitation in South Africa Water supply and sanitation in South Sudan Water supply and sanitation in Spain Water supply and sanitation in Syria Water supply and sanitation in Tanzania Water supply and sanitation in Trinidad and Tobago Water supply and sanitation in Tunisia Water supply and sanitation in Turkey Water supply and sanitation in Uganda Water supply and sanitation in the United Kingdom Water supply and sanitation in the United States Water supply and sanitation in Uruguay Water supply and sanitation in Venezuela Water supply and 
sanitation in Vietnam Water supply and sanitation in Wales Water supply and sanitation in Yemen Water supply and sanitation in Zambia Water supply and sanitation in Zimbabwe Lists by region Water supply and sanitation in the European Union Water supply and sanitation in Latin America Water supply and sanitation in Sub-Saharan Africa List of responsibilities in the water supply and sanitation sector in Latin America and the Caribbean List of water resource management by country This list of water resources management by country provides information on the status of water resource management at a national level. List by country: Water resources management in Argentina Water resources management in Brazil Water resources management in Chile Water resources management in Colombia Water resources management in Costa Rica Water resources management in the Dominican Republic Water resources management in modern Egypt Water resources management in El Salvador Water resources management in Guatemala Water resources management in Honduras Water resources management in Jamaica Water resources management in Mexico Water resources management in Nicaragua Water resources management in Pakistan Water resources management in Peru Water resources management in Syria Water resources management in Uruguay See also List of countries by access to clean water List of countries by proportion of the population using improved sanitation facilities List of abbreviations used in sanitation Popular pages amongst water supply and sanitation by country WikiProject Sanitation WikiProject Water WikiProject Water supply and sanitation by country Sanitation Water supply Sanitation Sewerage Water supply and sanitation
List of water supply and sanitation by country
[ "Chemistry", "Engineering", "Environmental_science" ]
820
[ "Sewerage", "Environmental engineering", "Water pollution" ]
46,635,821
https://en.wikipedia.org/wiki/Regenerative%20Medicine%20%28journal%29
Regenerative Medicine is a peer-reviewed medical journal covering stem cell research and regenerative medicine. It was established in 2006 and is published by Future Medicine. The editor-in-chief is Chris Mason (University College London). Regenerative Medicine has an online sister community site called RegMedNet. RegMedNet is a free-to-join website that publishes news on regenerative medicine and cell therapy research, policy and business, editorials from leaders in the field and free educational webinars. Abstracting and indexing The journal is abstracted and indexed in Biological Abstracts, BIOSIS Previews, Biotechnology Citation Index, Chemical Abstracts, EMBASE/Excerpta Medica, EMCare, Index Medicus/MEDLINE/PubMed, Science Citation Index Expanded, and Scopus. According to the Journal Citation Reports, the journal has a 2016 impact factor of 2.868, ranking it 15th out of 21 journals in the category "Cell & Tissue Engineering" and 23rd out of 77 journals in the category "Engineering, Biomedical". References External links English-language journals Regenerative medicine journals Academic journals established in 2006 Future Science Group academic journals
Regenerative Medicine (journal)
[ "Biology" ]
238
[ "Regenerative medicine journals", "Stem cell research" ]
46,636,927
https://en.wikipedia.org/wiki/DMAC1
Transmembrane protein 261 is a protein that in humans is encoded by the TMEM261 gene located on chromosome 9. TMEM261 is also known as C9ORF123 (Chromosome 9 Open Reading Frame 123), Transmembrane Protein C9orf123, and DMAC1 (Distal membrane-arm assembly complex protein 1). Gene features TMEM261 is located at 9p24.1; its length is 91,891 base pairs (bp) on the reverse strand. Its neighbouring gene is PTPRD, located at 9p23-p24.3, also on the reverse strand, which encodes protein tyrosine phosphatase receptor type delta. TMEM261 has 2 exons and 1 intron, and 6 primary transcript variants; the largest mRNA transcript variant consists of 742 bp with a protein 129 amino acids (aa) in length and 13,500 daltons (Da) in size, and the smallest coding transcript variant is 381 bp with a protein 69 aa long and 6,100 Da in size. Protein features TMEM261 is a protein consisting of 112 amino acids, with a molecular weight of 11.8 kDa. The isoelectric point is predicted to be 10.2, whilst its posttranslational modification value is 9.9. Structure TMEM261 contains a domain of unknown function, DUF4536 (pfam15055), predicted as a helical membrane-spanning domain about 45 aa (Cys 47-Ser 92) in length with no known domain relationships. Two further transmembrane helical domains are predicted, of lengths 18 aa (Val 52-Ala 69) and 23 aa (Pro 81-Ala 102). There is also a low-complexity region spanning 25 aa (Thr 14-Ala 39). The tertiary structure of TMEM261 has not yet been determined. However, its protein secondary structure is mostly composed of coiled-coil regions, with beta strands and alpha helices found within the transmembrane and domain-of-unknown-function regions. The N-terminal region of TMEM261 is composed of a disordered region which contains the low-complexity region that is not highly conserved amongst orthologues. Modifications An N-myristoylation domain is shown to be present in most TMEM261 protein variants. Post-translational modifications include myristoylation of the N-terminal glycine residue (Gly2) of the TMEM261 protein as well as phosphorylation of threonine 31. Interactions Proteins shown to interact with TMEM261 include NAAA (protein-protein interaction), QTRT1 (RNA-protein interaction), ZC4H2 (DNA-protein interaction) and ZNF454 (DNA-protein interaction). It has also been shown to interact with APP (protein-protein interaction), ARHGEF38 (protein-protein interaction) and HNRNPD (RNA-protein interaction). Additional predicted transcription factor binding sites (DNA-protein interactions) include one binding site for MEF2C, a monocyte-specific enhancement factor involved in muscle-cell regulation, particularly in the cardiovascular system, and two binding sites for GATA1, a globin transcription factor 1 involved in the regulation of erythroblast development. Expression TMEM261 shows ubiquitous expression in humans and is detected in almost all tissue types. It shows tissue-enriched gene (TEG) expression when compared to housekeeping gene (HKG) expression. Its highest expression is seen in the heart (overall relative expression 94%), particularly in heart fibroblast cells, the thymus (overall relative expression 90%), and the thyroid (overall relative expression 93%), particularly in thyroid glandular cells. Staining intensity of cancer cells showed intermediate to high expression in breast, colorectal, ovarian, skin, urothelial, and head and neck cells. Function Currently the function of TMEM261 is unknown.
However, gene amplification and rearrangements of its locus have been associated with various cancers including colorectal cancer, breast cancer and lymphomas. Evolution Orthologues The orthologues and homologues of TMEM261 are limited to vertebrates, its oldest homologue dates to that of the cartilaginous fishes which diverged from Homo sapiens 462.5 million years ago. The protein primary structure of TMEM261 shows higher overall conservation in mammals, however high conservation of the domain of unknown function (DUF4536) to the C-terminus region is seen in all orthologues, including distant homologues. The protein structure of TMEM261 shows conservation across most orthologues. Paralogues TMEM261 has no known paralogs. References External links PubMed NCBI gene record GeneCards UCSC Genome Browser Expasy Bioinformatics Resource Portal SDSC Biology Workbench Uniprot HUGO Further reading Proteins Genes
DMAC1
[ "Chemistry" ]
1,059
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
46,638,227
https://en.wikipedia.org/wiki/Heterogeneous%20catalytic%20reactor
The design of heterogeneous catalytic reactors places emphasis on catalyst effectiveness factors and on the implications of heat and mass transfer. Heterogeneous catalytic reactors are among the most commonly utilized chemical reactors in the chemical engineering industry. Types of reactors Heterogeneous catalytic reactors are commonly classified by the relative motion of the catalyst particles. Reactors with insignificant motion of catalyst particles Fixed bed reactors A fixed bed reactor is a cylindrical tube filled with catalyst pellets, with reactants flowing through the bed and being converted into products. The catalyst may have multiple configurations, including one large bed, several horizontal beds, several parallel packed tubes, or multiple beds in their own shells. The various configurations may be adapted depending on the need to maintain temperature control within the system. Connecting two reactors in series, with the option to dose oxidant between the stages, can under optimal conditions increase the product yield in oxidation catalysis. By dosing intermediates or products between the stages, valuable information about the reaction pathways can also be obtained. The catalyst pellets may be spherical, cylindrical, or randomly shaped, and typically range from 0.25 cm to 1.0 cm in diameter. The flow in a fixed bed reactor is typically downward. Trickle-bed reactors A trickle-bed reactor is a fixed bed where liquid flows without filling the spaces between particles. As with fixed bed reactors, the liquid typically flows downward, while gas flows upward. The primary use for trickle-bed reactors is hydrotreatment reactions (hydrodesulfurization and hydrodemetalation of heavy crude oil, hydrodeasphaltenization of coal tar). This reactor is often utilized to handle feeds with extremely high boiling points. Moving bed reactors A moving bed reactor has a fluid phase that passes up through a packed bed. Solid is fed into the top of the reactor, moves down, and is removed at the bottom. Moving bed reactors require special control valves to maintain close control of the solids; for this reason, moving bed reactors are used less frequently than the two reactor types above. Moving bed reactors are most suitable for solid contents below 10% and are generally used where the solids (primarily catalyst) have a high surface area due to their micron-scale size. Rotating bed reactors A rotating bed reactor (RBR) holds a packed bed fixed within a basket with a central hole. When the basket is spinning immersed in a fluid phase, the inertial forces created by the spinning motion force the fluid outwards, thereby creating a circulating flow through the rotating packed bed. The rotating bed reactor is a rather new invention that shows high rates of mass transfer and good fluid mixing. RBR-type reactors have frequently been applied in high-value biocatalysis reactions, offering convenient reuse of immobilized enzymes while preventing mechanical damage of the solid-phase catalysts. RBR constructions are also emerging in the nuclear energy industry to purify liquid waste on the scale of hundreds of cubic metres. Reactors with significant motion of catalyst particles Fluidized bed reactors A fluidized bed reactor suspends small particles of catalyst by the upward motion of the fluid to be reacted. The fluid is typically a gas with a flow rate high enough to mix the particles without carrying them out of the reactor. The particles are much smaller than those in the reactors described above, typically on the scale of 10-300 microns. One key advantage of using a fluidized bed reactor is the ability to achieve a highly uniform temperature in the reactor. Fluidized bed reactors are well suited for biocatalysts or enzymes doped on solids, since the solids are fluidized by the working fluid and there is no mechanical impact on them. Slurry reactors A slurry reactor contains the catalyst in a powdered or granular form. This reactor is typically used when one reactant is a gas and the other a liquid, while the catalyst is a solid. The reactant gas is put through the liquid and dissolved; it then diffuses onto the catalyst surface. Slurry reactors can use very fine particles, which can lead to problems in separating the catalyst from the liquid. Trickle-bed reactors do not have this problem, which is a major advantage of the trickle bed; unfortunately, the larger particles in a trickle bed mean a much lower reaction rate. Overall, the trickle bed is simpler, the slurry reactor usually has a higher reaction rate, and the fluidized bed is somewhere in between. References Hill, Charles G. An Introduction to Chemical Engineering Kinetics and Reactor Design. New York: Wiley, 1977. H. Mallin, J. Muschiol, E. Byström, U. T. Bornscheuer, ChemCatChem, 5 (2013) 3529-3532 SpinChem Rotating Bed Reactor Technology Catalysis Chemical reactors
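The catalyst effectiveness factor mentioned at the start of this article quantifies how much internal diffusion resistance reduces the observed reaction rate inside a pellet. For a first-order reaction in a spherical pellet the classical textbook result is η = 3(φ·coth φ − 1)/φ², with Thiele modulus φ = R·sqrt(k/D_eff). The sketch below simply evaluates that relation; the rate constant, effective diffusivity and pellet radii are illustrative assumptions, not data for any specific reactor.

```python
import math

def effectiveness_factor_sphere(k, d_eff, radius):
    """Internal effectiveness factor for a first-order reaction in a
    spherical catalyst pellet (classical Thiele-modulus result)."""
    phi = radius * math.sqrt(k / d_eff)   # Thiele modulus
    if phi < 1e-6:                        # kinetic limit: no diffusion resistance
        return 1.0
    return 3.0 * (phi / math.tanh(phi) - 1.0) / phi ** 2

# Illustrative numbers only: k in 1/s, D_eff in m^2/s, pellet radius in m.
k, d_eff = 5.0, 1.0e-6
for radius in (0.00125, 0.0025, 0.005):   # 0.25-1.0 cm diameter pellets
    eta = effectiveness_factor_sphere(k, d_eff, radius)
    print(f"R = {radius * 100:.3f} cm  ->  effectiveness factor = {eta:.2f}")
```

The numbers show the usual trade-off discussed above: larger pellets (as used in fixed and trickle beds) suffer a lower effectiveness factor, while the very fine particles of slurry and fluidized beds approach unity.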
Heterogeneous catalytic reactor
[ "Chemistry", "Engineering" ]
978
[ "Catalysis", "Chemical reaction engineering", "Chemical equipment", "Chemical reactors", "Chemical kinetics" ]
46,644,107
https://en.wikipedia.org/wiki/Sofituzumab%20vedotin
Sofituzumab vedotin (INN; development code DMUC5754A) is a monoclonal antibody designed for the treatment of ovarian cancer. This drug was developed by Genentech/Roche. Sofituzumab vedotin is an antibody-drug conjugate that targets MUC16, a protein that is overexpressed in several types of cancer including ovarian and pancreatic cancer. The conjugate consists of a human anti-nectin-4 antibody linked to the cytotoxic agent MMAE, which is released after internalization by the cancer cell. In addition to its direct cytotoxic effect, sofituzumab vedotin may also mediate antitumor activity through signal transduction inhibition, antibody-dependent cellular cytotoxicity, and complement-dependent cytotoxicity. Clinical trials have shown promising results in the treatment of ovarian and pancreatic cancer. References Monoclonal antibodies for tumors Antibody-drug conjugates Experimental cancer drugs
Sofituzumab vedotin
[ "Biology" ]
224
[ "Antibody-drug conjugates" ]
40,588,530
https://en.wikipedia.org/wiki/Stable%20isotope%20ratio
The term stable isotope has a meaning similar to stable nuclide, but is preferably used when speaking of nuclides of a specific element. Hence, the plural form stable isotopes usually refers to isotopes of the same element. The relative abundance of such stable isotopes can be measured experimentally (isotope analysis), yielding an isotope ratio that can be used as a research tool. Theoretically, such stable isotopes could include the radiogenic daughter products of radioactive decay, used in radiometric dating. However, the expression stable-isotope ratio is preferably used to refer to isotopes whose relative abundances are affected by isotope fractionation in nature. This field is termed stable isotope geochemistry. Stable-isotope ratios Measurement of the ratios of naturally occurring stable isotopes (isotope analysis) plays an important role in isotope geochemistry, but stable isotopes (mostly hydrogen, carbon, nitrogen, oxygen and sulfur) are also finding uses in ecological and biological studies. Other workers have used oxygen isotope ratios to reconstruct historical atmospheric temperatures, making them important tools for paleoclimatology. These isotope systems for lighter elements that exhibit more than one primordial isotope for each element have been under investigation for many years in order to study processes of isotope fractionation in natural systems. The long history of study of these elements is in part because the proportions of stable isotopes in these light and volatile elements is relatively easy to measure. However, recent advances in isotope ratio mass spectrometry (i.e. multiple-collector inductively coupled plasma mass spectrometry) now enable the measurement of isotope ratios in heavier stable elements, such as iron, copper, zinc, molybdenum, etc. Applications The variations in oxygen and hydrogen isotope ratios have applications in hydrology since most samples lie between two extremes, ocean water and Arctic/Antarctic snow. Given a sample of water from an aquifer, and a sufficiently sensitive tool to measure the variation in the isotopic ratio of hydrogen in the sample, it is possible to infer the source, be it ocean water or precipitation seeping into the aquifer, and even to estimate the proportions from each source. Stable isotopologues of water are also used in partitioning water sources for plant transpiration and groundwater recharge. Another application is in paleotemperature measurement for paleoclimatology. For example, one technique is based on the variation in isotopic fractionation of oxygen by biological systems with temperature. Species of Foraminifera incorporate oxygen as calcium carbonate in their shells. The ratio of the oxygen isotopes oxygen-16 and oxygen-18 incorporated into the calcium carbonate varies with temperature and the oxygen isotopic composition of the water. This oxygen remains "fixed" in the calcium carbonate when the foraminifera dies, falls to the sea bed, and its shell becomes part of the sediment. It is possible to select standard species of foraminifera from sections through the sediment column, and by mapping the variation in oxygen isotopic ratio, deduce the temperature that the Forminifera encountered during life if changes in the oxygen isotopic composition of the water can be constrained. Paleotemperature relationships have also enabled isotope ratios from calcium carbonate in barnacle shells to be used to infer the movement and home foraging areas of the sea turtles and whales on which some barnacles grow. 
In ecology, carbon and nitrogen isotope ratios are widely used to determine the broad diets of many free-ranging animals. They have been used to determine the broad diets of seabirds, and to identify the geographical areas where individuals spend the breeding and non-breeding season in seabirds and passerines. Numerous ecological studies have also used isotope analyses to understand migration, food-web structure, diet, and resource use, such as hydrogen isotopes to measure how much energy from stream-side trees supports fish growth in aquatic habitats. Determining diets of aquatic animals using stable isotopes has been particularly common, as direct observations are difficult. They also enable researchers to measure how human interactions with wildlife, such as fishing, may alter natural diets. In forensic science, research suggests that the variation in certain isotope ratios in drugs derived from plant sources (cannabis, cocaine) can be used to determine the drug's continent of origin. In food science, stable isotope ratio analysis has been used to determine the composition of beer, shoyu sauce and dog food. Stable isotope ratio analysis also has applications in doping control, to distinguish between endogenous and exogenous (synthetic) sources of hormones. The accurate measurement of stable isotope ratios relies on proper procedures of analysis, sample preparation and storage. Chondrite meteorites are classified using the oxygen isotope ratios. In addition, an unusual signature of carbon-13 confirms the non-terrestrial origin for organic compounds found in carbonaceous chondrites, as in the Murchison meteorite. The uses of stable isotope ratios described above pertain to measurements of naturally occurring ratios. Scientific research also relies on the measurement of stable isotope ratios that have been artificially perturbed by the introduction of isotopically enriched material into the substance, process or system under study. Isotope dilution involves adding enriched stable isotope to a substance in order to quantify the amount of that substance by measuring the resulting isotope ratios. Isotope labeling uses enriched isotope to label a substance in order to trace its progress through, for example, a chemical reaction, metabolic pathway or biological system. Some applications of isotope labeling rely on the measurement of stable isotope ratios to accomplish this. See also Radiocarbon dating Isotope analysis Hydrogen isotope biogeochemistry Bibliography Allègre C.J., 2008. Isotope Geology (Cambridge University Press). Faure G., Mensing T.M. (2004), Isotopes: Principles and Applications (John Wiley & Sons). Hoefs J., 2004. Stable Isotope Geochemistry (Springer Verlag). Sharp Z., 2006. Principles of Stable Isotope Geochemistry (Prentice Hall). References Isotopes
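Stable-isotope measurements such as those described above are normally reported in delta notation, δ = (R_sample/R_standard − 1) × 1000 (in per mil), and source partitioning in hydrology often reduces to a two-endmember mixing calculation between, for example, ocean water and local precipitation. The sketch below shows both steps as a rough illustration; the sample ratio and the endmember δ values are invented for demonstration, and the linear mixing of δ values is itself an approximation.

```python
def delta_per_mil(r_sample, r_standard):
    """Delta notation: deviation of an isotope ratio from a standard, in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

def mixing_fraction(delta_sample, delta_a, delta_b):
    """Fraction of endmember A in a two-endmember (A, B) mixture,
    assuming delta values mix linearly."""
    return (delta_sample - delta_b) / (delta_a - delta_b)

# Hypothetical 18O/16O sample ratio; VSMOW-like reference value as the standard.
r_standard = 2005.2e-6
d_sample = delta_per_mil(1993.0e-6, r_standard)

# Hypothetical endmembers: ocean water near 0 per mil, local precipitation at -12 per mil.
f_ocean = mixing_fraction(d_sample, delta_a=0.0, delta_b=-12.0)
print(f"delta 18O of sample: {d_sample:.2f} per mil")
print(f"estimated fraction of ocean water: {f_ocean:.2f}")
```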
Stable isotope ratio
[ "Physics", "Chemistry" ]
1,235
[ "Isotopes", "Nuclear physics" ]
40,590,117
https://en.wikipedia.org/wiki/Modular%20crate%20electronics
Modular crate electronics are a general type of electronics and support infrastructure commonly used for trigger electronics and data acquisition in particle detectors. These types of electronics are common in such detectors because all the electronic pathways are made by discrete physical cables connecting together logic blocks on the fronts of modules. This allows circuits to be designed, built, tested, and deployed very quickly (in days or weeks) as an experiment is being put together. Then the modules can all be removed and used again when the experiment is done. A crate is a box (chassis) that mounts in an electronics rack with an opening in the front facing the user. There are rails on the top and bottom of the crate that extend from the open (user) end to the back end of the crate. The back end of the crate contains power and data connectors that modules connect to. Electronics modules slide into the crate along the rails and plug into the power/data connectors at the back. Modules have signal connectors, controls, and lights on their faceplate that are used to interact with other modules. Some modules just draw power from the backplane connectors and have all of their data inputs and outputs on the front plate. Other modules take inputs or controls to and from the backplane or have their behavior controlled from the backplane. Some types of modules have active circuitry inside them, and act almost as small computers; others are not stateful at all and are only dumb single components. Types of crate systems There are a number of types of modular crate electronic systems used on particle physics experiments. RENATRAN The very first standard for crate electronics was Renatran, which itself was derived from the Esone Standard published in 1964. This standard was in use mainly in France in nuclear research. The Renatran system consisted of a 5U rackable crate that could accept up to 8 single-width or up to 4 double-width plug-in units, with the backplane supplying several power rails, as well as serial and parallel communications between modules, and between the rack and external equipment such as printers and computers. Each plug-in unit had the dials, indicators and connectors on the front, and a single screw-mated 24-pin connector (Souriau 8196-17, no longer produced) on the rear to connect to the backplane. Certain units had additional connectors on the rear, either duplicating those on the front panel for a more permanent installation, or providing extra ports for specific purposes, such as daisy-chaining counting modules or linking level comparators together. A plug-in unit generally accomplished a single task, such as giving out a clock signal, inverting signal polarity, attenuating or amplifying signals, and more. NIM The simplest and one of the earliest crate module standards is the NIM (Nuclear Instrumentation Module) standard. A NIM crate only has power on the backplane; there is no data bus or data connectors. The NIM backplane connector is an irregular arrangement of individual pins into sockets in the crate. NIM modules typically have multiple single logic blocks on the front with both inputs and outputs on the front panel. A typical NIM module might be, say, four discriminators on the front panel, or three AND gates. NIM modules can be hot swapped, since there are no data connectors at the back. CAMAC A later crate standard is Computer Automated Measurement and Control, or CAMAC. CAMAC modules are much thinner than NIM modules.
The backplane connector of a CAMAC module is a card-edge connector; because of the possibility of misaligning the connectors when plugging in, CAMAC modules are not hot swappable. The CAMAC backplane carries a signaling protocol for the crate controller to set the values of registers in modules (for configuration) and to read values of registers (for data acquisition). Due to the slowness of the data communication along the backplane, once FASTBUS was invented, CAMAC modules were mostly used where computer configuration was needed but not for data acquisition. FASTBUS FASTBUS is a crate/module standard developed later than the other two for high-speed parallel data acquisition. Rather than individual components, FASTBUS modules tend to be data acquisition modules with many input connectors on the front, while the stored data is read out on the backplane. The connectors on the back of a FASTBUS module are two parallel pin sockets on the module and pins sticking out of the backplane. The main connector in a FASTBUS crate covers about the bottom 2/3 of the module. There is also an upper connector that consists of pass-through pins to the back side of the backplane; this allows custom modules to be plugged in there. FASTBUS modules are much taller than the other types of crate modules, so the crates are correspondingly taller. The FASTBUS backplane is a full data bus where any module can negotiate to be master of the bus to send or receive data. VME VME (VMEbus) is a bus originally designed to provide an expansion bus for the Motorola 68000 series processor, but it also became a modular electronics crate standard. The first editions of VME are three pins wide, with pin sockets on the modules and pins on the backplane. In later editions, the physical standard expanded the connectors with two more rows of pins/sockets on the edges for grounding. VME is mostly designed as a computer bus, so its modules are largely data acquisition modules, not modular electronics. PXI PCI eXtensions for Instrumentation (PXI) is one of several modular electronic instrumentation platforms in current use. These platforms are used as a basis for building electronic test equipment, automation systems, and modular laboratory instruments. AdvancedTCA The Advanced Telecom Computing Architecture is an open standard for crates. In addition to power supply and data buses, it also defines a management infrastructure, which allows an array of maintenance tasks to be performed remotely. The standard is governed by the PICMG consortium. The cards used in AdvancedTCA crates are called Advanced Mezzanine Cards (AMCs), and their requirements are specified independently in their own standard. MicroTCA MicroTCA is an open, modular standard based upon AdvancedTCA, but with a smaller form factor. Initially developed for applications in telecommunications, it has since outgrown its initial purpose, with modules developed for military, aerospace and scientific use. Like AdvancedTCA, it uses AMCs, which makes cards interchangeable between the two standards. See also Bus (computing) Blade server References Experimental particle physics
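To make the bus-based idea concrete, the following is a purely hypothetical Python model of the CAMAC-style scheme described above: a crate controller that configures modules by writing registers over the backplane and collects data by reading them back. The class and method names are invented for illustration only and do not correspond to any real driver or library API.

```python
class Module:
    """Hypothetical plug-in module: a small bank of addressable registers."""
    def __init__(self, name, n_registers=16):
        self.name = name
        self.registers = [0] * n_registers

class CrateController:
    """Hypothetical crate controller that addresses modules over the backplane
    by slot number and register address (CAMAC-like read/write cycles)."""
    def __init__(self, n_slots=24):
        self.slots = [None] * n_slots

    def insert(self, slot, module):
        self.slots[slot] = module

    def write(self, slot, register, value):      # configuration cycle
        self.slots[slot].registers[register] = value

    def read(self, slot, register):              # data-acquisition cycle
        return self.slots[slot].registers[register]

# Example: configure a threshold on a (hypothetical) discriminator module,
# then read the same register back as if collecting data.
crate = CrateController()
crate.insert(3, Module("discriminator"))
crate.write(slot=3, register=0, value=250)       # set threshold
print(crate.read(slot=3, register=0))            # -> 250
```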
Modular crate electronics
[ "Physics" ]
1,340
[ "Experimental physics", "Particle physics", "Experimental particle physics" ]
29,866,108
https://en.wikipedia.org/wiki/B-theorem
In mathematics, the B-theorem is a result in finite group theory formerly known as the B-conjecture. The theorem states that if C is the centralizer of an involution of a finite group, then every component of C/O(C) is the image of a component of C. References Theorems about finite groups Conjectures that have been proved
B-theorem
[ "Mathematics" ]
68
[ "Algebra stubs", "Conjectures that have been proved", "Mathematical problems", "Mathematical theorems", "Algebra" ]
39,286,085
https://en.wikipedia.org/wiki/Charles%20Goodyear%20Medal
The Charles Goodyear Medal is the highest honor conferred by the American Chemical Society, Rubber Division. Established in 1941, the award is named after Charles Goodyear, the discoverer of vulcanization, and consists of a gold medal, a framed certificate and prize money. The medal honors individuals for "outstanding invention, innovation, or development which has resulted in a significant change or contribution to the nature of the rubber industry". Awardees give a lecture at an ACS Rubber Division meeting, and publish a review of their work in the society's scientific journal Rubber Chemistry and Technology. Recipients Source: 1941 David Spence – Diamond Rubber Co. researcher noted for synthesizing isoprene for use in synthetic rubber 1942 Lorin B. Sebrell – Goodyear Research Director noted for his work on organic accelerators for vulcanization 1944 Waldo L. Semon – early developer of synthetic rubber, in particular Ameripol for B. F. Goodrich 1946 Ira Williams – duPont developer of Neoprene 1948 George Oenslager – B. F. Goodrich chemist known for pioneering vulcanization accelerator chemistry 1949 Harry L. Fisher – 69th national president of the American Chemical Society, and an authority on the chemistry of vulcanization 1950 Carroll C. Davis – first editor of Rubber Chemistry and Technology, serving from 1928 to 1957, and developer of the first practical oxygen-aging test in the industry and the use of antioxidants in rubber 1951 William C. Geer – B. F. Goodrich pioneer in studying rubber ageing, and developer of early aircraft de-icing systems 1952 Howard E. Simmons – Dupont chemist that discovered the Simmons–Smith reaction 1953 John T. Blake – Research director at Simplex Wire and Cable company, pioneered understanding of rubber as an electrical insulator 1954 George S. Whitby – Head of the University of Akron rubber laboratory, for many years the only teacher of rubber chemistry in the USA 1955 Ray P. Dinsmore – Goodyear pioneer of the use of rayon as a reinforcing material in auto tires 1956 Sidney M. Cadwell – United States Rubber Company researcher noted as discoverer of antioxidants for rubber. 1957 Arthur W. Carpenter – past president of ASTM, known for contributions to quality control for rubber 1958 Joseph C. Patrick – Thiokol Chemical Company inventor of first American synthetic elastomer – Thiokol (polymer) 1959 Fernley H. Banbury – Farrel Corporation executive and inventor of the Banbury mixer 1960 William B. Wiegand – researcher at Columbian Carbon Co. who demonstrated the effect of carbon black particle size on rubber reinforcement 1961 Herbert A. Winkelmann – B. F. Goodrich developer of first commercially feasible antioxidant 1962 Melvin Mooney – United States Rubber Company physicist and rheologist responsible for the Mooney viscometer and the Mooney-Rivlin solid constitutive relation 1963 William J. Sparks – Exxon chemist and co-inventor of Butyl rubber 1964 Arthur E. Juve – B. F. Goodrich Director of Technology who developed oil-resistant rubber compositions, lab tests for tire treads, and improvements in manufacture of rubber products and the processing of synthetic rubber 1965 Benjamin S. Garvey - worked for B.F. Goodrich and Pennsalt Chemicals. Dr. Garvey developed the "10 Gram Evaluation Process." 1966 Edward A. 
Murphy – Dunlop researcher credited with invention of latex foam, first marketed as Dunlopillo 1967 Norman Bekkedahl - pioneered understanding of Glass transition in elastomers, and former Deputy Chief of the Polymers Division at the National Bureau of Standards 1968 Paul J. Flory – Cornell University pioneer in the physical chemistry of macromolecules, later a Nobel laureate 1969 Robert M. Thomas – Exxon chemist and co-inventor of Butyl rubber 1970 Samuel D. Gehman – Goodyear physicist noted for development of a modulus-based measurement of rubber's glass transition temperature 1971 Harold J. Osterhof - inventor of Pliofilm, a plasticized rubber hydrochloride cast film, and director of research at Goodyear Tire & Rubber Co. 1972 Frederick W. Stavely - Firestone researcher responsible for development of synthetic polyisoprene a.k.a. "coral rubber" 1973 Arnold M. Collins – polychloroprene developer at DuPont 1974 Joseph C. Krejci – Phillips researcher known for developing oil furnace method to make carbon black 1975 Otto Bayer – head of the research group at IG Farben that discovered the polyaddition for the synthesis of polyurethanes out of polyisocyanate and polyol 1976 Earl L. Warrick – Dow Corning pioneer of silicone elastomer chemistry and inventor of Silly Putty 1977 James D. D'Ianni – Goodyear scientist noted for contributions in the development of synthetic rubber 1978 Frank Herzegh – Goodrich inventor of the first successful tubeless tire and owner of patents for over 100 inventions in the field of tire technology 1979 Francis P. Baldwin – Exxon Chief Scientist noted for his work on chemical modifications of low functionality elastomers 1980 Samuel E. Horne, Jr. – Goodrich chemist who first polymerized synthetic polyisoprene using Ziegler catalyst 1981 John D. Ferry – University of Wisconsin–Madison chemistry professor noted for co-authoring the Williams–Landel–Ferry equation 1982 Adolf Schallamach – MRPRA researcher who pioneered understanding of the mechanisms of tire traction, abrasion and wear 1983 J. Reid Shelton – professor at Case Western University known for contributions to understanding of oxidation and antioxidants in rubber, and for application of laser-Raman spectroscopy to the study of sulfur vulcanization 1984 Herman E. Schroeder – R&D Director at DuPont and a pioneer in the development of tire cord adhesion and specialty elastomers 1985 Maurice Morton – Inaugural director of the Institute of Rubber Research at the University of Akron 1986 Leonard Mullins – MRPRA research director who first described the effect of prior overloads on rubber's stress-strain curve (i.e. the Mullins effect) 1987 Norman R. Legge – Shell Oil Company researcher and pioneer of thermoplastic elastomers 1988 Herman F. Mark – Polytechnic Institute of Brooklyn faculty known as the "father of polymer science" for his early work focused on the crystal structure of natural rubber and other polymers 1989 Jean-Marie Massoubre – Michelin researcher associated with early development of the radial tire 1990 Alan N. Gent – University of Akron professor who contributed to understanding adhesion physics, and fracture of rubbery, crystalline and glassy polymers 1991 Edwin J. Vandenberg – chemist at Hercules Inc. known for discovery of isotactic polypropylene and the development of Ziegler-type catalysts 1992 Ronald S. 
Rivlin – MRPRA physicist and developer of finite elasticity theory for elastomers 1993 Leo Mandelkern – Florida State University Distinguished Professor of Chemistry, pioneered understanding of crystallization in polymers 1994 Alan G. Thomas – MRPRA physicist and developer of fracture mechanics theory for elastomers 1995 Aubert Y. Coran – Monsanto researcher responsible for invention of thermoplastic elastomer Geolast 1996 Siegfried Wolff – Degussa scientist who first recognized the potential for using silica in tire treads to reduce rolling resistance 1997 Adel F. Halasa – Goodyear scientist who developed a terpolymer rubber of styrene, isoprene and butadiene (SIBR) that was used in the Aquatred tire 1998 Jean-Baptiste Donnet – CNRS pioneer in surface chemistry of carbon black 1999 James E. Mark – University of Cincinnati pioneer in molecular dynamics computer simulations of rubber elasticity 2000 Jack L. Koenig – Case Western Reserve University professor who pioneered spectroscopic methods of polymer characterization 2001 Yasuyuki Tanaka – Tokyo University of Agriculture and Technology professor noted for elucidating the molecular structure of natural rubber 2003 Graham J. Lake – former pro cricketer and MRPRA pioneer in understanding fatigue behavior of rubber 2006 Robert F. Landel – Caltech Jet Propulsion Laboratory physical chemist noted for co-authoring the Williams–Landel–Ferry equation 2007 Karl A. Grosch – Uniroyal scientist who pioneered in the study of friction and abrasion in relation to tire traction and wear 2008 Joseph P. Kennedy – University of Akron Polymer Science professor and inventor of the polystyrene-polyisobutylene-polystyrene triblock polymeric coating on the Taxus Drug-eluting stent 2009 James L. White – University of Akron Polymer Engineering professor who developed numerical models of rubber rheological behavior in batch and continuous mixing machines 2010 Edward Kresge – Exxon Chief Polymer Scientist who developed tailored molecular weight density EPDM elastomers 2011 Joseph Kuczkowski – Goodyear chemist who elucidated mechanisms of antioxidant function, resulting in the commercialization of several new antioxidant systems 2012 C. Michael Roland – Naval Research Lab scientist recognized for blast and impact protection using elastomers, and for diverse contributions to elastomer science 2013 Russell A. Livigni – Gencorp scientist known for discovery and development of barium-based catalysts for the polymerization of butadiene and its copolymerization with styrene to give high trans rubbers with low vinyl content 2014 Alan D. Roberts – TARRC physicist noted for contributions to understanding friction and contact in elastomers, in particular the JKR equation 2015 Sudhin Datta – ExxonMobil Chemical scientist noted for development of Vistamaxx propylene-based elastomers. 2016 Georg Bohm- Bridgestone scientist noted for development of electron beam pre-curing of elastomers 2017 Judit Puskas – Ohio State University scientist noted as co-inventor of the polymer used on the Taxus-brand coronary stent 2018 Eric Baer – Case Western Reserve University professor noted for contributions to understanding elastomeric polyolefins and rubber toughening of brittle polymers, and for founding the university's Department of Macromolecular Science and Engineering. 2019 Roderic Quirk – University of Akron professor noted for contributions to anionic polymerization technology that is used to produce butadiene, isoprene and styrene homo and block copolymers. 
2020 Nissim Calderon – Goodyear Tire & Rubber Company researcher who first demonstrated olefin metathesis and later applied it to development of new elastomers, copolymers, terpolymers, alternating copolymers and oligomers. 2021 Joseph DeSimone – American chemist, inventor, entrepreneur and co-founder of Carbon, the 3D Manufacturing company that commercialized his Continuous Liquid Interface Production (CLIP) technology. 2022 Timothy B. Rhyne and Steven M. Cron – Michelin engineers who jointly invented and developed non-pneumatic tire technology for the Tweel and Uptis tires. 2023 Christopher Macosko - University of Minnesota professor emeritus who invented a rheometer for the rubber industry and co-founded Rheometric Scientific. 2024 Katrina Cornish - Ohio State University professor known for development of alternative sources of natural rubber. 2025 Gert Heinrich - TU Dresden professor known for contributions to "statistical-mechanical and constitutive continuum theory, molecular dynamics, friction theory and fracture mechanics" of polymers. See also List of engineering awards List of chemistry awards International Rubber Science Hall of Fame: Another ACS award Melvin Mooney Distinguished Technology Award Sparks-Thomas award References External links The ACS Rubber Division Oral histories of several medal winners Awards of the American Chemical Society Awards established in 1941 Materials science awards Chemical engineering awards Rubber industry
Charles Goodyear Medal
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
2,420
[ "Chemical engineering", "Materials science", "Chemical engineering awards", "Science and technology awards", "Materials science awards" ]
39,287,520
https://en.wikipedia.org/wiki/N%C3%A6s%20jernverk
Næs Ironworks (also Næs verk) in Holt (now part of Tvedestrand municipality, in Aust-Agder county, Norway), was an ironworks which started operation in 1665 under the name "Baaseland Værk". The blast furnace and foundry were located at the Båsland farm, while the associated forge was located a kilometer further east, by the Storelva river at Næs. The blast furnace was new, and not an extension of the Barbu jernverk at Arendal which ceased operations in the 1650s. "Baaseland Værk" took the Næs name when the blast furnace operation was concentrated by the Storelva in 1738. About 1840 the firm was renamed Jacob Aall & Søn. It ceased operation in 1959. History While Ulrich Schnell was the sole owner of Baaselands Værk, he decided to relocate the blast furnace to the Storelva to take advantage of the water power available. In 1738 operations were centralized at Næs, and the name of the operation was changed to Næs Jernverk. Meanwhile, a new dam to power the hammers was built in the Storelva. Supporting infrastructure was also built; for example, in 1740 a permanent school building was constructed. During the 1750s and 1760s favorable economic conditions allowed Schnell to expand the business into a significant undertaking. Tvedestrand harbor was the center for shipping the products. The iron ore was supplied from Arendal, from the Lyngroth mines in Froland, and from the Solberg mine in Holt. The smelter used charcoal for fuel, and farmers in the surrounding district (Holt, Vegarshei, Amli) were required to burn timber to charcoal and deliver the charcoal to the iron works. In 1799 Jacob Aall bought the Næs Ironworks for 170,000 Norwegian rigsdalers. He both improved and expanded the Næs Ironworks such that it became arguably the country's best-operated blast furnace operation, known both for its well-constructed furnaces and for its foundry products. During the war with England from 1807 to 1814 Aall made a special effort to import wheat from Denmark for the people of the parish who supported the Næs Ironworks. In 1820 the iron works had its own savings bank, as well as health and social security systems for its employees. The works also had approximately 70 smallholdings (small farms) that workers lived on and operated. In 1830 the blast furnace complex was doubled in capacity, allowing the casting of larger items. In 1837 this capability was directed at construction of the cast-iron bridge at Fosstveit, a few kilometers further down the Storelva. This Fosstvedt bridge was proposed for protection in 2002 as part of the Norwegian national protection plan for roads, bridges and related heritage constructions. The Norwegian Directorate for Cultural Heritage formally declared the bridge protected under the Cultural Heritage Act in 2008. The ironworks also produced a statue of Christian Krohg, the nation's first public monument of cast iron, which was unveiled in Christiania on 17 May 1833. In 1840 Aall's son Benjamin Nicolay became an active member of the firm and the firm's name was changed to "Jacob Aall & Søn". Åmotbrua, originally built in 1851-1852 to cross the Drammen River near the mouth of the Simoa creek at Åmot in Modum, is now a pedestrian suspension bridge over the Aker River in Grünerløkka, Oslo. It was Norway's second chain bridge (of three), with cast-iron chains produced by the Næs Ironworks.
In 1853 another ironworks, Egelands Verk at Eikeland village in Gjerstad municipality of Aust-Agder, was purchased for 80,000 Norwegian speciedalers and was operated as a subsidiary of the undertaking at Næs. In the 1850s new technologies emerged that made the use of charcoal-fueled blast furnaces outdated, but the Næs works chose to continue to rely on the old technology and focused instead on developing specialized niche products. These products included horseshoe nails, axes, files, crucible steel products and rolling mill products. The company was declared bankrupt in 1884. The Egeland Works were sold and later abandoned. Operations at Næs were restarted under the aegis of a corporation, in which the Aall family was able to retain a majority share. The new company was named A/S Jacob Aall & Søn. In addition to the blast furnace operation, the company ran a pulp mill and included forestry, agriculture, mining and milling activities. In 1886, it built a new blast furnace following the Swedish pattern, and specialized crucible steel production was expanded. At the turn of the century the ironworks employed 400 men, 120 of them permanent employees. Further financial setbacks occurred and the blast furnaces were closed for good in 1909. The ironworks was selected as the millennium site for Aust-Agder county. The works were added to the list of priority technical and industrial cultural heritage sites by the Norwegian Directorate for Cultural Heritage. Museum In December 1966 the forge, the crucible steel mill and the steel production building were declared protected, along with the fixtures and production equipment, as part of Norway's technical heritage. A/S Jacob Aall & Søn donated the buildings for use as a museum, along with the area where they were standing. The Naes Ironworks Museum (Næs Jernverksmuseum) was established with a board consisting of the Norwegian Museum of Science and Technology, the Norwegian Directorate for Cultural Heritage and the Aust-Agder Museum, with administration the responsibility of the latter. References Metal companies of Norway Iron and steel mills Millennium sites
Næs jernverk
[ "Chemistry" ]
1,216
[ "Iron and steel mills", "Metallurgical facilities" ]
39,288,495
https://en.wikipedia.org/wiki/Mitochondrial%20fusion
Mitochondria are dynamic organelles with the ability to fuse and divide (fission), forming constantly changing tubular networks in most eukaryotic cells. These mitochondrial dynamics, first observed over a hundred years ago are important for the health of the cell, and defects in dynamics lead to genetic disorders. Through fusion, mitochondria can overcome the dangerous consequences of genetic malfunction. The process of mitochondrial fusion involves a variety of proteins that assist the cell throughout the series of events that form this process. Process overview When cells experience metabolic or environmental stresses, mitochondrial fusion and fission work to maintain functional mitochondria. An increase in fusion activity leads to mitochondrial elongation, whereas an increase in fission activity results in mitochondrial fragmentation. The components of this process can influence programmed cell death and lead to neurodegenerative disorders such as Parkinson's disease. Such cell death can be caused by disruptions in the process of either fusion or fission. The shapes of mitochondria in cells are continually changing via a combination of fission, fusion, and motility. Specifically, fusion assists in modifying stress by integrating the contents of slightly damaged mitochondria as a form of complementation. By enabling genetic complementation, fusion of the mitochondria allows for two mitochondrial genomes with different defects within the same organelle to individually encode what the other lacks. In doing so, these mitochondrial genomes generate all of the necessary components for a functional mitochondrion. With mitochondrial fission The combined effects of continuous fusion and fission give rise to mitochondrial networks. The mechanisms of mitochondrial fusion and fission are regulated by proteolysis and posttranslational modifications. The actions of fission, fusion and motility cause the shapes of mitochondria to continually change. The changes in balance between the rates of mitochondrial fission and fusion directly affect the wide range of mitochondrial lengths that can be observed in different cell types. Rapid fission and fusion of the mitochondria in cultured fibroblasts has been shown to promote the redistribution of mitochondrial green fluorescent protein (GFP) from one mitochondrion to all of the other mitochondria. This process can occur in a cell within a time period as short as an hour. The significance of mitochondrial fission and fusion is distinct for nonproliferating neurons, which are unable to survive without mitochondrial fission. Such nonproliferating neurons cause two human diseases known as dominant optic atrophy and Charcot Marie Tooth disease type 2A, which are both caused by fusion defects. Though the importance of these processes is evident, it is still unclear why mitochondrial fission and fusion are necessary for nonproliferating cells. Regulation Many gene products that control mitochondrial fusion have been identified, and can be reduced to three core groups which also control mitochondrial fission. These groups of proteins include mitofusins, OPA1/Mgm1, and Drp1/Dnm1. All of these molecules are GTP hydrolyzing proteins (GTPases) that belong to the dynamin family. Mitochondrial dynamics in different cells are understood by the way in which these proteins regulate and bind to each other. These GTPases in control of mitochondrial fusion are well conserved between mammals, flies, and yeast. 
Mitochondrial fusion mediators differ between the outer and inner membranes of the mitochondria. Specific membrane-anchored dynamin family members, known as Mfn1 and Mfn2, mediate fusion between mitochondrial outer membranes. These two proteins are the mitofusins found in humans, and they can alter the morphology of affected mitochondria when over-expressed. However, a single dynamin family member, known as OPA1 in mammals, mediates fusion between mitochondrial inner membranes. These regulating proteins of mitochondrial fusion are organism-dependent; in Drosophila (fruit flies) and yeasts, the process is controlled by the mitochondrial transmembrane GTPase Fzo. In Drosophila, Fzo is found in postmeiotic spermatids, and the dysfunction of this protein results in male sterility. However, a deletion of Fzo1 in budding yeast results in smaller, spherical mitochondria due to the lack of mitochondrial DNA (mtDNA). Apoptosis The balance between mitochondrial fusion and fission in cells is dictated by the up- and down-regulation of mitofusins, OPA1/Mgm1, and Drp1/Dnm1. Apoptosis, or programmed cell death, begins with the breakdown of mitochondria into smaller pieces. This process results from up-regulation of Drp1/Dnm1 and down-regulation of mitofusins. Later in the apoptosis cycle, an alteration of OPA1/Mgm1 activity within the inner mitochondrial membrane occurs. The role of the OPA1 protein is to protect cells against apoptosis by inhibiting the release of cytochrome c. Once this protein is altered, there is a change in the cristae structure, release of cytochrome c, and activation of the destructive caspase enzymes. These resulting changes indicate that inner mitochondrial membrane structure is linked with regulatory pathways in influencing cell life and death. OPA1 plays both a genetic and a molecular role in mitochondrial fusion and in cristae remodeling during apoptosis. OPA1 exists in two forms: the first is soluble and found in the intermembrane space, and the second is an integral inner-membrane form; the two work together to restructure and shape the cristae during and after apoptosis. OPA1 blocks intramitochondrial cytochrome c redistribution, which precedes remodeling of the cristae. OPA1 functions to protect cells with mitochondrial dysfunction due to Mfn deficiencies, particularly those lacking both Mfn1 and Mfn2, but it plays a greater role in cells with only Mfn1 deficiencies as opposed to Mfn2 deficiencies. This supports the idea that OPA1 function depends on the amount of Mfn1 present in the cell to promote mitochondrial elongation. In mammals Both proteins, Mfn1 and Mfn2, can act either together or separately during mitochondrial fusion. Mfn1 and Mfn2 are 81% similar to each other and about 51% similar to the Drosophila protein Fzo. Results from a study of the impact of fusion on mitochondrial structure revealed that Mfn-deficient cells appeared either elongated (the majority) or small and spherical upon observation. The Mfn protein has three different methods of action: Mfn1 homotypic oligomers, Mfn2 homotypic oligomers and Mfn1-Mfn2 heterotypic oligomers. It has been suggested that the type of cell determines the method of action, but it has yet to be concluded whether Mfn1 and Mfn2 perform the same function in the process or whether they are separate. Cells lacking this protein are subject to severe cellular defects such as poor cell growth, heterogeneity of mitochondrial membrane potential and decreased cellular respiration.
Mitochondrial fusion plays an important role in the process of embryonic development, as shown through the Mfn1 and Mfn2 proteins. Using Mfn1 and Mfn2 knock-out mice, which die in utero at midgestation due to a placental deficiency, mitochondrial fusion was shown not to be essential for cell survival in vitro, but necessary for embryonic development and cell survival throughout later stages of development. Mfn1 Mfn2 double knock-out mice, which die even earlier in development, were distinguished from the "single" knock-out mice. Mouse embryo fibroblasts (MEFs) originated from the double knock-out mice, which do survive in culture even though there is a complete absence of fusion, but parts of their mitochondria show a reduced mitochondrial DNA (mtDNA) copy number and lose membrane potential. This series of events causes problems with adenosine triphosphate (ATP) synthesis. The Mitochondrial Inner/Outer Membrane Fusion (MMF) Family The Mitochondrial Inner/Outer Membrane Fusion (MMF) Family (TC# 9.B.25) is a family of proteins that play a role in mitochondrial fusion events. This family belongs to the larger Mitochondrial Carrier (MC) Superfamily. The dynamic nature of mitochondria is critical for function. Chen and Chan (2010) have discussed the molecular basis of mitochondrial fusion, its protective role in neurodegeneration, and its importance in cellular function. The mammalian mitofusins Mfn1 and Mfn2, GTPases localized to the outer membrane, mediate outer-membrane fusion. OPA1, a GTPase associated with the inner membrane, mediates subsequent inner-membrane fusion. Mutations in Mfn2 or OPA1 cause neurodegenerative diseases. Mitochondrial fusion enables content mixing within a mitochondrial population, thereby preventing permanent loss of essential components. Cells with reduced mitochondrial fusion show a subpopulation of mitochondria that lack mtDNA nucleoids. Such mtDNA defects lead to respiration-deficient mitochondria, and their accumulation in neurons leads to impaired outgrowth of cellular processes and consequent neurodegeneration. Family members A representative list of the proteins belonging to the MMF family is available in the Transporter Classification Database. 9.B.25.1.1 - The mitochondrial inner/outer membrane fusion complex, Fzo/Mgm1/Ugo1. Only the Ugo1 protein is a member of the MC superfamily. 9.B.25.2.1 - The mammalian mitochondrial membrane fusion complex, Mitofusin 1 (Mfn1)/Mfn2/Optical Atrophy Protein 1 (OPA1) complex. This subfamily includes mitofusins 1 and 2. Mitofusins: Mfn1 and Mfn2 Mfn1 and Mfn2 (TC# 9.B.25.2.1; Q8IWA4 and O95140, respectively), in mammalian cells are required for mitochondrial fusion, Mfn1 and Mfn2 possess functional distinctions. For instance, the formation of tethered structures in vitro occurs more readily when mitochondria are isolated from cells overexpressing Mfn1 than Mfn2. In addition, Mfn2 specifically has been shown to associate with Bax and Bak (Bcl-2 family, TC#1.A.21), resulting in altered Mfn2 activity, indicating that the mitofusins possess unique functional characteristics. Lipidic holes may open on opposing bilayers as intermediates, and fusion in cardiac myocytes is coupled with outer mitochondrial membrane destabilization that is opportunistically employed during the mitochondrial permeability transition. Mutations in Mfn2 (but not Mfn1) result in the neurological disorder Charcot-Marie-Tooth syndrome. 
These mutations can be complemented by the formation of Mfn1–Mfn2CMT2A hetero-oligomers but not homo-oligomers of Mfn2+–Mfn2CMT2A. This suggests that within the Mfn1–Mfn2 hetero-oligomeric complex, each molecule is functionally distinct. It also suggests that control of the expression levels of each protein likely represents the most basic form of regulation to alter mitochondrial dynamics in mammalian tissues. Indeed, the expression levels of Mfn1 and Mfn2 vary according to cell or tissue type, as does the mitochondrial morphology. Yeast mitochondrial fusion proteins In yeast, three proteins are essential for mitochondrial fusion. Fzo1 (P38297) and Mgm1 (P32266) are conserved guanosine triphosphatases that reside in the outer and inner membranes, respectively. At each membrane, these conserved proteins are required for the distinct steps of membrane tethering and lipid mixing. The third essential component is Ugo1, an outer membrane protein with a region distantly related to a region in the Mitochondrial Carrier (MC) family. Hoppins et al. (2009) showed that Ugo1 is a modified member of this family, containing three transmembrane domains and existing as a dimer, a structure that is critical for the fusion function of Ugo1. Their analyses of Ugo1 indicate that it is required for both outer and inner membrane fusion after membrane tethering, indicating that it operates at the lipid-mixing step of fusion. This role is distinct from that of the fusion dynamin-related proteins and thus demonstrates that at each membrane a single fusion protein is not sufficient to drive the lipid-mixing step. Instead, this step requires a more complex assembly of proteins. The formation of a fusion pore has not yet been demonstrated. The Ugo1 protein is a member of the MC superfamily. See also Mitochondrial fission Mitochondrial carriers MFN1 MFN2 OPA1 DNM1 Transporter Classification Database References Mitochondrial genetics Cell anatomy Cell biology Protein families Membrane proteins Transmembrane proteins Transmembrane transporters Transport proteins Integral membrane proteins
Mitochondrial fusion
[ "Biology" ]
2,748
[ "Protein families", "Cell biology", "Protein classification", "Membrane proteins" ]
39,288,501
https://en.wikipedia.org/wiki/Planar%20transmission%20line
Planar transmission lines are transmission lines with conductors, or in some cases dielectric (insulating) strips, that are flat, ribbon-shaped lines. They are used to interconnect components on printed circuits and integrated circuits working at microwave frequencies because the planar type fits in well with the manufacturing methods for these components. Transmission lines are more than simply interconnections. With simple interconnections, the propagation of the electromagnetic wave along the wire is fast enough to be considered instantaneous, and the voltages at each end of the wire can be considered identical. If the wire is longer than a large fraction of a wavelength (one tenth is often used as a rule of thumb), these assumptions are no longer true and transmission line theory must be used instead. With transmission lines, the geometry of the line is precisely controlled (in most cases, the cross-section is kept constant along the length) so that its electrical behaviour is highly predictable. At lower frequencies, these considerations are only necessary for the cables connecting different pieces of equipment, but at microwave frequencies the distance at which transmission line theory becomes necessary is measured in millimetres. Hence, transmission lines are needed within circuits. The earliest type of planar transmission line was conceived during World War II by Robert M. Barrett. It is known as stripline, and is one of the four main types in modern use, along with microstrip, suspended stripline, and coplanar waveguide. All four of these types consist of a pair of conductors (although in three of them, one of these conductors is the ground plane). Consequently, they have a dominant mode of transmission (the mode is the field pattern of the electromagnetic wave) that is identical, or near-identical, to the mode found in a pair of wires. Other planar types of transmission line, such as slotline, finline, and imageline, transmit along a strip of dielectric, and substrate-integrated waveguide forms a dielectric waveguide within the substrate with rows of posts. These types cannot support the same mode as a pair of wires, and consequently they have different transmission properties. Many of these types have a narrower bandwidth and in general produce more signal distortion than pairs of conductors. Their advantages depend on the exact types being compared, but can include low loss and a better range of characteristic impedance. Planar transmission lines can be used for constructing components as well as interconnecting them. At microwave frequencies it is often the case that individual components in a circuit are themselves larger than a significant fraction of a wavelength. This means they can no longer be treated as lumped components, that is, treated as if they existed at a single point. Lumped passive components are often impractical at microwave frequencies, either for this reason, or because the values required are impractically small to manufacture. A pattern of transmission lines can be used for the same function as these components. Whole circuits, called distributed-element circuits, can be built this way. The method is often used for filters. This method is particularly appealing for use with printed and integrated circuits because these structures can be manufactured with the same processes as the rest of the assembly simply by applying patterns to the existing substrate. This gives the planar technologies a big economic advantage over other types, such as coaxial line. 
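As a rough illustration of the one-tenth-wavelength rule of thumb mentioned above, the length at which an interconnection starts to need transmission-line treatment can be estimated from the operating frequency and the effective permittivity of the medium. The sketch below is illustrative only; the 0.1λ threshold, the function name, and the example permittivity are assumptions chosen for the example, not fixed design rules.

```python
# Illustrative estimate of when an interconnect must be treated as a
# transmission line, using the common one-tenth-wavelength rule of thumb.
SPEED_OF_LIGHT = 3.0e8  # metres per second (approximate free-space value)

def electrical_length_threshold(frequency_hz, effective_permittivity=1.0, fraction=0.1):
    """Return the length (metres) at which a line reaches `fraction` of a wavelength."""
    wavelength = SPEED_OF_LIGHT / (frequency_hz * effective_permittivity ** 0.5)
    return fraction * wavelength

# At 50 Hz mains frequency the threshold is hundreds of kilometres...
print(electrical_length_threshold(50))         # ~600 km
# ...but at 10 GHz in a dielectric with effective permittivity ~4 it is ~1.5 mm.
print(electrical_length_threshold(10e9, 4.0))  # ~0.0015 m
```

This is why, at microwave frequencies, even the short runs between components on a board fall under transmission line theory.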
Some authors make a distinction between transmission line, a line that uses a pair of conductors, and waveguide, a line that either does not use conductors at all, or just uses one conductor to constrain the wave in the dielectric. Others use the terms synonymously. This article includes both kinds, so long as they are in a planar form. Names used are the common ones and do not necessarily indicate the number of conductors. The term waveguide when used unadorned, means the hollow, or dielectric filled, metal kind of waveguide, which is not a planar form. General properties Planar transmission lines are those transmission lines in which the conductors are essentially flat. The conductors consist of flat strips, and there are usually one or more ground planes parallel to the flat surface of the conductors. The conductors are separated from the ground planes, sometimes with air between them but more often with a solid dielectric material. Transmission lines can also be constructed in non-planar formats such as wires or coaxial line. As well as interconnections, there are a wide range of circuits that can be implemented in transmission lines. These include filters, power dividers, directional couplers, impedance matching networks, and choke circuits to deliver biasing to active components. The principal advantage of the planar types is that they can be manufactured using the same processes used to make printed circuits and integrated circuits, particularly through the photolithography process. The planar technologies are thus particularly well suited to mass production of such components. Making circuit elements out of transmission lines is most useful at microwave frequencies. At lower frequencies the longer wavelength makes these components too bulky. At the highest microwave frequencies planar transmission line types are generally too lossy and waveguide is used instead. Waveguide, however, is bulkier and more expensive to manufacture. At still higher frequencies dielectric waveguide (such as optical fibre) becomes the technology of choice, but there are planar types of dielectric waveguide available. The most widely used planar transmission lines (of any kind) are stripline, microstrip, suspended stripline, and coplanar waveguide. Modes An important parameter for transmission lines is the mode of transmission employed. The mode describes the electromagnetic field patterns caused by the geometry of the transmission structure. It is possible for more than one mode to exist simultaneously on the same line. Usually, steps are taken to suppress all modes except the desired one. But some devices, such as the dual-mode filter, rely on the transmission of more than one mode. TEM mode The mode found on ordinary conductive wires and cables is the transverse electromagnetic mode (TEM mode). This is also the dominant mode on some planar transmission lines. In the TEM mode, the field strength vectors for the electric and magnetic field are both transverse to the direction of travel of the wave and orthogonal to each other. An important property of the TEM mode is that it can be used at low frequencies, all the way down to zero (i.e. DC). Another feature of the TEM mode is that on an ideal transmission line (one that meets the Heaviside condition) there is no change of line transmission parameters (characteristic impedance and signal group velocity) with the frequency of transmission. 
Because of this, ideal TEM transmission lines do not suffer from dispersion, a form of distortion in which different frequency components travel at different velocities. Dispersion "smears out" the wave shape (which may represent the transmitted information) in the direction of the line length. All other modes suffer from dispersion, which puts a limit on the bandwidth achievable. Quasi-TEM modes Some planar types, notably microstrip, do not have a homogeneous dielectric; it is different above and below the line. Such geometries cannot support a true TEM mode; there is some component of the electromagnetic field parallel to the direction of the line, although the transmission can be nearly TEM. Such a mode is referred to as quasi-TEM. In a TEM line, discontinuities such as gaps and posts (used to construct filters and other devices) have an impedance that is purely reactive: they can store energy, but do not dissipate it. In most quasi-TEM lines, these structures additionally have a resistive component to the impedance. This resistance is a result of radiation from the structure and causes the circuit to be lossy. The same problem occurs at bends and corners of the line. These problems can be mitigated by using a high permittivity material as the substrate, which causes a higher proportion of the wave to be contained in the dielectric, making for a more homogeneous transmission medium and a mode closer to TEM. Transverse modes In hollow metal waveguides and optical waveguides there are an unlimited number of other transverse modes that can occur. However, the TEM mode cannot be supported since it requires two or more separate conductors to propagate. The transverse modes are classified as either transverse electric (TE, or H modes) or transverse magnetic (TM, or E modes) according to whether, respectively, all of the electric field, or all of the magnetic field is transverse. There is always a longitudinal component of one field or the other. The exact mode is identified by a pair of indices counting the number of wavelengths or half-wavelengths along specified transverse dimensions. These indices are usually written without a separator: for instance, TE10. The exact definition depends on whether the waveguide is rectangular, circular, or elliptical. For waveguide resonators a third index is introduced to the mode for half-wavelengths in the longitudinal direction. A feature of TE and TM modes is that there is a definite cutoff frequency below which transmission will not take place. The cutoff frequency depends on mode and the mode with the lowest cutoff frequency is called the dominant mode. Multi-mode propagation is generally undesirable. Because of this, circuits are often designed to operate in the dominant mode at frequencies below the cutoff of the next highest mode. Only one mode, the dominant mode, can exist in this band. Some planar types that are designed to operate as TEM devices can also support TE and TM modes unless steps are taken to suppress them. The ground planes or shielding enclosures can behave as hollow waveguides and propagate these modes. Suppression can take the form of shorting screws between the ground planes or designing the enclosure to be too small to support frequencies as low as the operational frequencies of the circuit. Similarly, coaxial cable can support circular TE and TM modes that do not require the centre conductor to propagate, and these modes can be suppressed by reducing the diameter of the cable. 
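The cutoff behaviour described above can be illustrated with the standard closed-form expression for the cutoff frequencies of a rectangular guide of broad dimension a and narrow dimension b. The sketch below assumes an air-filled guide with the dimensions of a common X-band size (roughly 22.86 mm × 10.16 mm); the function name and the numerical example are illustrative assumptions, not values taken from this article.

```python
from math import sqrt

C0 = 3.0e8  # free-space speed of light, m/s (approximate)

def cutoff_frequency(m, n, a, b, rel_permittivity=1.0):
    """Cutoff frequency (Hz) of the TEmn/TMmn mode of a rectangular waveguide.
    a, b are the broad and narrow internal dimensions in metres."""
    return (C0 / (2 * sqrt(rel_permittivity))) * sqrt((m / a) ** 2 + (n / b) ** 2)

a, b = 22.86e-3, 10.16e-3            # X-band guide dimensions (illustrative)
print(cutoff_frequency(1, 0, a, b))  # dominant TE10 mode: about 6.56 GHz
print(cutoff_frequency(2, 0, a, b))  # next mode, TE20:    about 13.1 GHz
# Single-mode operation is therefore confined to roughly 6.6-13.1 GHz,
# which is why a guide of this size is normally used over about 8.2-12.4 GHz.
```

The same reasoning explains why shielding enclosures are kept small and coaxial cables thin: the dimensions are chosen so that every unwanted TE or TM mode has its cutoff above the operating band.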
Longitudinal-section modes Some transmission line structures are unable to support a pure TE or TM mode, but can support modes that are a linear superposition of TE and TM modes. In other words, they have a longitudinal component of both electric and magnetic field. Such modes are called hybrid electromagnetic (HEM) modes. A subset of the HEM modes is the longitudinal-section modes. These come in two varieties: longitudinal-section electric (LSE) modes and longitudinal-section magnetic (LSM) modes. LSE modes have an electric field that is zero in one transverse direction, and LSM modes have a magnetic field that is zero in one transverse direction. LSE and LSM modes can occur in planar transmission line types with non-homogeneous transmission media. Structures that are unable to support a pure TE or TM mode, if they are able to support transmissions at all, must necessarily do so with a hybrid mode. Other important parameters The characteristic impedance of a line is the impedance encountered by a wave travelling along the line; it depends only on the line geometry and materials and is not changed by the line termination. It is necessary to match the characteristic impedance of the planar line to the impedance of the systems to which it is connected. Many filter designs require lines with a number of different characteristic impedances, so it is an advantage for a technology to have a good range of achievable impedances. Narrow lines have a higher impedance than broad lines. The highest impedance achievable is limited by the resolution of the manufacturing process, which imposes a limit on how narrow the lines can be made. The lower limit is determined by the width of line at which unwanted transverse resonance modes might arise. Q factor (or just Q) is the ratio of energy stored to energy dissipated per cycle. It is the main parameter characterising the quality of resonators. In transmission line circuits, resonators are frequently constructed of transmission line sections to build filters and other devices. Their Q factor limits the steepness of the filter skirts and its selectivity. The main factors determining the Q of a planar type are the permittivity of the dielectric (high permittivity increases Q) and the dielectric losses, which decrease Q. Other factors that lower Q are the resistance of the conductor and radiation losses. Substrates There are a wide range of substrates that are used with planar technologies. For printed circuits, glass-reinforced epoxy (FR-4 grade) is commonly used. High permittivity ceramic-PTFE laminates (e.g. Rogers Corporation 6010 board) are expressly intended for microwave applications. At the higher microwave frequencies, a ceramic material such as aluminium oxide (alumina) might be used for hybrid microwave integrated circuits (MICs). At the very highest microwave frequencies, in the millimetre band, a crystalline substrate might be used such as sapphire or quartz. Monolithic microwave integrated circuits (MMICs) will have substrates composed of the semiconductor material of which the chip is built, such as silicon or gallium arsenide, or an oxide deposited on the chip, such as silicon dioxide. The electrical properties of the substrate of most interest are the relative permittivity (εr) and the loss tangent (tan δ). The relative permittivity determines the characteristic impedance of a given line width and the group velocity of signals travelling on it.
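As a concrete illustration of how line width, substrate height, and permittivity together set the characteristic impedance and the signal velocity, the sketch below uses the widely quoted Hammerstad closed-form approximation for microstrip. It is only an approximation, valid for thin conductors and moderate width-to-height ratios, and the example dimensions (a 3 mm line on 1.6 mm FR-4 with εr ≈ 4.4) and the function name are assumptions for illustration, not values from this article.

```python
from math import log, pi, sqrt

def microstrip_z0(width, height, er):
    """Approximate characteristic impedance (ohms) and effective permittivity of a
    microstrip line of width `width` on a substrate of thickness `height`
    (same units) and relative permittivity `er`, using Hammerstad's formulas."""
    u = width / height
    e_eff = (er + 1) / 2 + (er - 1) / 2 / sqrt(1 + 12 / u)
    if u <= 1:
        e_eff += (er - 1) / 2 * 0.04 * (1 - u) ** 2
        z0 = 60 / sqrt(e_eff) * log(8 / u + u / 4)
    else:
        z0 = 120 * pi / (sqrt(e_eff) * (u + 1.393 + 0.667 * log(u + 1.444)))
    return z0, e_eff

z0, e_eff = microstrip_z0(3.0, 1.6, 4.4)   # roughly a 50 ohm line on 1.6 mm FR-4
velocity = 3.0e8 / sqrt(e_eff)             # phase velocity, m/s
print(round(z0, 1), round(e_eff, 2), velocity)
# Narrower lines give higher impedance; higher permittivity gives lower
# impedance and a slower wave, hence shorter printed components.
```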
High permittivity results in smaller printed components, aiding miniaturisation. In quasi-TEM types, permittivity determines how much of the field will be contained within the substrate and how much is in the air above it. The loss tangent is a measure of the dielectric losses. It is desirable to have this as small as possible, especially in circuits which require high Q. Mechanical properties of interest include the thickness and mechanical strength required of the substrate. In some types, such as suspended stripline and finline, it is advantageous to make the substrate as thin as possible. Delicate semiconductor components mounted on a flexing substrate can become damaged. A hard, rigid material such as quartz might be chosen as the substrate to avoid this problem, rather than an easier-to-machine board. In other types, such as homogeneous stripline, it can be much thicker. Printed antennae that are conformal to the device shape require flexible, and hence very thin, substrates. The thickness required for electrical performance depends on the permittivity of the material. Surface finish is an issue; some roughness may be required to ensure adhesion of the metallisation, but too much causes conductor losses (as the consequent roughness of the metallisation becomes significant compared with the skin depth). Thermal properties can be important. Thermal expansion changes the electrical properties of lines and can break plated through holes. Types Stripline Stripline is a strip conductor embedded in a dielectric between two ground planes. It is usually constructed as two sheets of dielectric clamped together with the stripline pattern on one side of one sheet. The main advantage of stripline over its principal rival, microstrip, is that transmission is purely in the TEM mode and is free of dispersion, at least over the distances encountered in stripline applications. Stripline is capable of supporting TE and TM modes but these are not generally used. The main disadvantage is that it is not as easy as microstrip to incorporate discrete components. For any that are incorporated, cutouts have to be provided in the dielectric and they are not accessible once assembled. Suspended stripline Suspended stripline is a type of air stripline in which the substrate is suspended between the ground planes with an air gap above and below. The idea is to minimise dielectric losses by having the wave travel through air. The purpose of the dielectric is only for mechanical support of the conductor strip. Since the wave is travelling through the mixed media of air and dielectric, the transmission mode is not truly TEM, but a thin dielectric renders this effect negligible. Suspended stripline is used in the mid microwave frequencies where it is superior to microstrip with respect to losses, but not as bulky or expensive as waveguide. Other stripline variants The idea of two-conductor stripline is to compensate for air gaps between the two substrates. Small air gaps are inevitable because of manufacturing tolerances and the thickness of the conductor. These gaps can promote radiation away from the line between the ground planes. Printing identical conductors on both boards ensures the fields are equal in both substrates and the electric field in the gaps due to the two lines cancels out. Usually, one line is made slightly undersize to prevent small misalignments effectively widening the line, and consequently reducing the characteristic impedance.
The bilateral suspended stripline has more of the field in the air and almost none in the substrate leading to higher Q, compared to standard suspended stripline. The disadvantage of doing this is that the two lines have to be bonded together at intervals less than a quarter wavelength apart. The bilateral structure can also be used to couple two independent lines across their broad side. This gives much stronger coupling than side-by-side coupling and allows coupled-line filter and directional coupler circuits to be realised that are not possible in standard stripline. Microstrip Microstrip consists of a strip conductor on the top surface of a dielectric layer and a ground plane on the bottom surface of the dielectric. The electromagnetic wave travels partly in the dielectric and partly in the air above the conductor resulting in quasi-TEM transmission. Despite the drawbacks of the quasi-TEM mode, microstrip is often favoured for its easy compatibility with printed circuits. In any case, these effects are not so severe in a miniaturised circuit. Another drawback of microstrip is that it is more limited than other types in the range of characteristic impedances that it can achieve. Some circuit designs require characteristic impedances of or more. Microstrip is not usually capable of going that high so either those circuits are not available to the designer or a transition to another type has to be provided for the component requiring the high impedance. The tendency of microstrip to radiate is generally a disadvantage of the type, but when it comes to creating antennae it is a positive advantage. It is very easy to make a patch antenna in microstrip, and a variant of the patch, the planar inverted-F antenna, is the most widely used antenna in mobile devices. Microstrip variants Suspended microstrip has the same aim as suspended stripline; to put the field into air rather than the dielectric to reduce losses and dispersion. The reduced permittivity results in larger printed components, which limits miniaturisation, but makes the components easier to manufacture. Suspending the substrate increases the maximum frequency at which the type can be used. Inverted microstrip has similar properties to suspended microstrip with the additional benefit that most of the field is contained in the air between the conductor and the groundplane. There is very little stray field above the substrate available to link to other components. Trapped inverted microstrip shields the line on three sides preventing some higher order modes that are possible with the more open structures. Placing the line in a shielded box completely avoids any stray coupling but the substrate must now be cut to fit the box. Fabricating a complete device on one large substrate is not possible using this structure. Coplanar waveguide and coplanar strips Coplanar waveguide (CPW) has the return conductors on top of the substrate in the same plane as the main line, unlike stripline and microstrip where the return conductors are ground planes above or below the substrate. The return conductors are placed either side of the main line and made wide enough that they can be considered to extend to infinity. Like microstrip, CPW has quasi-TEM propagation. CPW is simpler to manufacture; there is only one plane of metallization and components can be surface mounted whether they are connected in series (spanning a break in the line) or shunt (between the line and the ground). 
Shunt components in stripline and microstrip require a connection through to the bottom of the substrate. CPW is also easier to miniaturise; its characteristic impedance depends on the ratio of the line width to the distance between return conductors rather than the absolute value of line width. Despite its advantages, CPW has not proved popular. A disadvantage is that return conductors take up a large amount of board area that cannot be used for mounting components, though it is possible in some designs to achieve a greater density of components than microstrip. More seriously, there is a second mode in CPW that has zero frequency cutoff called the slotline mode. Since this mode cannot be avoided by operating below it, and multiple modes are undesirable, it needs to be suppressed. It is an odd mode, meaning that the electric potentials on the two return conductors are equal and opposite. Thus, it can be suppressed by bonding the two return conductors together. This can be achieved with a bottom ground plane (conductor-backed coplanar waveguide, CBCPW) and periodic plated through holes, or periodic air bridges on the top of the board. Both these solutions detract from the basic simplicity of CPW. Coplanar variants Coplanar strips (also coplanar stripline or differential line) are usually used only for RF applications below the microwave band. The lack of a ground plane leads to a poorly defined field pattern and the losses from stray fields are too great at microwave frequencies. On the other hand, the lack of ground planes means that the type is amenable to embedding in multi-layer structures. Slotline A slotline is a slot cut in the metallisation on top of the substrate. It is the dual of microstrip, a dielectric line surrounded by conductor instead of a conducting line surrounded by dielectric. The dominant propagation mode is hybrid, quasi-TE with a small longitudinal component of electric field. Slotline is essentially a balanced line, unlike stripline and microstrip, which are unbalanced lines. This type makes it particularly easy to connect components to the line in shunt; surface mount components can be mounted bridging across the line. Another advantage of slotline is that high impedance lines are easier to achieve. Characteristic impedance increases with line width (compare microstrip where it decreases with width) so there is no issue with printing resolution for high impedance lines. A disadvantage of slotline is that both characteristic impedance and group velocity vary strongly with frequency, resulting in slotline being more dispersive than microstrip. Slotline also has a relatively low Q. Slotline variants Antipodal slotline is used where very low characteristic impedances are required. With dielectric lines, low impedance means narrow lines (the opposite of the case with conducting lines) and there is a limit to the thinness of line that can be achieved because of the printing resolution. With the antipodal structure, the conductors can even overlap without any danger of short-circuiting. Bilateral slotline has advantages similar to those of bilateral air stripline. Substrate-integrated waveguide Substrate-integrated waveguide (SIW), also called laminated waveguide or post-wall waveguide, is a waveguide formed in the substrate dielectric by constraining the wave between two rows of posts or plated through holes and ground planes above and below the substrate. The dominant mode is a quasi-TE mode. 
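Because the rows of posts approximate the solid side walls of a rectangular waveguide, an SIW is often analysed as an equivalent dielectric-filled rectangular guide of slightly reduced width, from which the quasi-TE10 cutoff follows directly. The sketch below uses a commonly quoted empirical width correction; the correction coefficient, the function name, and the example dimensions are assumptions based on published SIW design rules rather than values from this article.

```python
from math import sqrt

C0 = 3.0e8  # free-space speed of light, m/s (approximate)

def siw_te10_cutoff(post_spacing_w, via_diameter_d, via_pitch_p, er):
    """Approximate TE10 cutoff frequency (Hz) of a substrate-integrated waveguide.
    post_spacing_w: centre-to-centre distance between the two rows of vias (m)
    via_diameter_d, via_pitch_p: via diameter and longitudinal pitch (m)
    er: relative permittivity of the substrate."""
    # Equivalent width of the dielectric-filled rectangular guide
    w_eff = post_spacing_w - via_diameter_d ** 2 / (0.95 * via_pitch_p)
    return C0 / (2 * w_eff * sqrt(er))

# Example: a 7.2 mm wide SIW with 0.8 mm vias on a 1.5 mm pitch in a substrate with er = 2.2
print(siw_te10_cutoff(7.2e-3, 0.8e-3, 1.5e-3, 2.2))  # roughly 15 GHz
```

As with hollow waveguide, the structure is then operated above this cutoff but below the cutoff of the next mode.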
SIW is intended as a cheaper alternative to hollow metal waveguide while retaining many of its benefits. The greatest benefit is that, as an effectively enclosed waveguide, it has considerably less radiation loss than microstrip. There is no unwanted coupling of stray fields to other circuit components. SIW also has high Q and high power handling, and, as a planar technology, is easier to integrate with other components. SIW can be implemented on printed circuit boards or as low-temperature co-fired ceramic (LTCC). The latter is particularly suited to implementing SIW. Active circuits are not directly implemented in SIW: the usual technique is to implement the active part in stripline through a stripline-to-SIW transition. Antennae can be created directly in SIW by cutting slots in the ground planes. A horn antenna can be made by flaring the rows of posts at the end of a waveguide. SIW variants There is an SIW version of ridge waveguide. Ridge waveguide is a rectangular hollow metal waveguide with an internal longitudinal wall part-way across the E-plane. The principal advantage of ridge waveguide is that it has a very wide bandwidth. Ridge SIW is not very easy to implement in printed circuit boards because the equivalent of the ridge is a row of posts that only go part-way through the board. But the structure can be created more easily in LTCC. Finline Finline consists of a sheet of metallised dielectric inserted into the E-plane of a rectangular metal waveguide. This mixed format is sometimes called quasi-planar. The design is not intended to generate waveguide modes in the rectangular waveguide as such: instead, a line is cut in the metallisation exposing the dielectric and it is this that acts as a transmission line. Finline is thus a type of dielectric waveguide and can be viewed as a shielded slotline. Finline is similar to ridge waveguide in that the metallisation of the substrate represents the ridge (the "fin") and the finline represents the gap. Filters can be constructed in ridge waveguide by varying the height of the ridge in a pattern. A common way of manufacturing these is to take a thin sheet of metal with pieces cut out (typically, a series of rectangular holes) and insert this in the waveguide in much the same way as finline. A finline filter is able to implement patterns of arbitrary complexity whereas the metal insert filter is limited by the need for mechanical support and integrity. Finline has been used at frequencies up to and experimentally tested to at least . At these frequencies it has a considerable advantage over microstrip for its low loss and it can be manufactured with similar low-cost printed circuit techniques. It is also free of radiation since it is completely enclosed in the rectangular waveguide. A metal insert device has an even lower loss because it is air dielectric, but has very limited circuit complexity. A full waveguide solution for a complex design retains the low loss of air dielectric, but it would be much bulkier than finline and significantly more expensive to manufacture. A further advantage of finline is that it can achieve a particularly wide range of characteristic impedances. Biasing of transistors and diodes cannot be achieved in finline by feeding bias current down the main transmission line, as is done in stripline and microstrip, since the finline is not a conductor. Separate arrangements have to be made for biasing in finline. 
Finline variants Unilateral finline is the simplest design and easiest to manufacture but bilateral finline has lower loss, as with bilateral suspended stripline, and for similar reasons. The high Q of bilateral finline often makes it the choice for filter applications. Antipodal finline is used where very low characteristic impedance is required. The stronger the coupling between the two planes, the lower the impedance. Insulated finline is used in circuits that contain active components needing bias lines. The Q of insulated finline is lower than other finline types so it is otherwise not usually used. Imageline Imageline, also image line or image guide, is a planar form of dielectric slab waveguide. It consists of a strip of dielectric, often alumina, on a metal sheet. In this type, there is no dielectric substrate extending in all horizontal directions, only the dielectric line. It is so called because the ground plane acts as a mirror resulting in a line that is equivalent to a dielectric slab without the ground plane of twice the height. It shows promise for use at the higher microwave frequencies, around , but it is still largely experimental. For instance Q factors in the thousands are theoretically possible but radiation from bends and losses in the dielectric-metal adhesive significantly reduce this figure. A disadvantage of imageline is that the characteristic impedance is fixed at a single value of about . Imageline supports TE and TM modes. The dominant TE and TM modes have a cutoff frequency of zero, unlike hollow metal waveguides whose TE and TM modes all have a finite frequency below which propagation cannot occur. As the frequency approaches zero, the longitudinal component of field diminishes and the mode asymptotically approaches the TEM mode. Imageline thus shares the property of being able to propagate waves at arbitrarily low frequencies with the TEM type lines, although it cannot actually support a TEM wave. Despite this, imageline is not a suitable technology at lower frequencies. A drawback of imageline is that it must be precisely machined as surface roughness increases radiation losses. Imageline variants and other dielectric lines In insular imageline a thin layer of low permittivity insulator is deposited over the metal ground plane and the higher permittivity imageline is set on top of this. The insulating layer has the effect of reducing conductor losses. This type also has lower radiation losses on straight sections, but like the standard imageline, radiation losses are high at bends and corners. Trapped imageline overcomes this drawback, but is more complex to manufacture since it detracts from the simplicity of the planar structure. Ribline is a dielectric line machined from the substrate as a single piece. It has similar properties to insular imageline. Like imageline, it must be precisely machined. Strip dielectric guide is a low permittivity strip (usually plastic) placed on a high permittivity substrate such as alumina. The field is largely contained in the substrate between the strip and the ground plane. Because of this, this type does not have the precise machining requirements of standard imageline and ribline. Inverted strip dielectric guide has lower conductor losses because the field in the substrate has been moved away from the conductor, but it has higher radiation losses. 
Multiple layers Multilayer circuits can be constructed in printed circuits or monolithic integrated circuits, but LTCC is the most amenable technology for implementing planar transmission lines as multilayers. In a multilayer circuit at least some of the lines will be buried, completely enclosed by dielectric. The losses will not, therefore, be as low as with a more open technology, but very compact circuits can be achieved with multilayer LTCC. Transitions Different parts of a system may be best implemented in different types. Transitions between the various types are therefore required. Transitions between types using unbalanced conductive lines are straightforward: this is mostly a matter of providing continuity of the conductor through the transition and ensuring a good impedance match. The same can be said for transitions to non-planar types such as coaxial. A transition between stripline and microstrip needs to ensure that both ground planes of the stripline are adequately electrically bonded to the microstrip ground plane. One of these groundplanes can be continuous through the transition, but the other ends at the transition. There is a similar issue with the microstrip to CPW transition shown at C in the diagram. There is only one ground plane in each type but it changes from one side of the substrate to the other at the transition. This can be avoided by printing the microstrip and CPW lines on opposite sides of the substrate. In this case, the ground plane is continuous on one side of the substrate but a via is required on the line at the transition. Transitions between conductive lines and dielectric lines or waveguides are more complex. In these cases, a change of mode is required. Transitions of this sort consist of forming some kind of antenna in one type that acts as a launcher into the new type. Examples of this are coplanar waveguide (CPW) or microstrip converted to slotline or substrate-integrated waveguide (SIW). For wireless devices, transitions to the external antennae are also required. Transitions to and from finline can be treated in a similar way to slotline. However, it is more natural for finline transitions to go to waveguide; the waveguide is already there. A simple transition into waveguide consists of a smooth exponential taper (Vivaldi antenna) of the finline from a narrow line to the full height of the waveguide. The earliest application of finline was to launch into circular waveguide. A transition from a balanced to an unbalanced line requires a balun circuit. An example of this is CPW to slotline. Example D in the diagram shows this kind of transition and features a balun consisting of a dielectric radial stub. The component shown thus in this circuit is an air bridge bonding the two CPW ground planes together. All transitions have some insertion loss and add to the complexity of the design. It is sometimes advantageous to design with a single integrated type for the whole device to minimise the number of transitions even when the compromise type is not optimal for each of the component circuits. History The development of planar technologies was driven at first by the needs of the US military, but today they can be found in mass-produced household items such as mobile phones and satellite TV receivers. According to Thomas H. Lee, Harold A. Wheeler may have experimented with coplanar lines as early as the 1930s, but the first documented planar transmission line was stripline, invented by Robert M. 
Barrett of the Air Force Cambridge Research Center, and published by Barrett and Barnes in 1951. Although publication did not occur until the 1950s, stripline had actually been used during World War II. According to Barrett, the first stripline power divider was built by V. H. Rumsey and H. W. Jamieson during this period. As well as issuing contracts, Barrett encouraged research in other organisations, including the Airborne Instruments Laboratory Inc. (AIL). Microstrip followed soon after in 1952 and is due to Grieg and Engelmann. The quality of common dielectric materials was at first not good enough for microwave circuits, and consequently, their use did not become widespread until the 1960s. Stripline and microstrip were commercial rivals. Stripline was the brand name of AIL who made air stripline. Microstrip was made by ITT. Later, dielectric-filled stripline under the brand name triplate was manufactured by Sanders Associates. Stripline became a generic term for dielectric filled stripline and air stripline or suspended stripline is now used to distinguish the original type. Stripline was initially preferred to its rival because of the dispersion issue. In the 1960s, the need to incorporate miniature solid-state components in MICs swung the balance to microstrip. Miniaturisation also leads to favouring microstrip because its disadvantages are not so severe in a miniaturised circuit. Stripline is still chosen where operation over a wide band is required. The first planar slab dielectric line, imageline, is due to King in 1952. King initially used semicircular imageline, making it equivalent to the already well-studied circular rod dielectric. Slotline, the first printed planar dielectric line type, is due to Cohn in 1968. Coplanar waveguide is due to Wen in 1969. Finline, as a printed technology, is due to Meier in 1972, although Robertson created finline-like structures much earlier (1955–56) with metal inserts. Robertson fabricated circuits for diplexers and couplers and coined the term finline. SIW was first described by Hirokawa and Ando in 1998. At first, components made in planar types were made as discrete parts connected together, usually with coaxial lines and connectors. It was quickly realised that the size of circuits could be hugely reduced by directly connecting components together with planar lines within the same housing. This led to the concept of hybrid MICs: hybrid because lumped components were included in the designs connected together with planar lines. Since the 1970s, there has been a great proliferation in new variations of the basic planar types to aid miniaturisation and mass production. Further miniaturisation became possible with the introduction of MMICs. In this technology, the planar transmission lines are directly incorporated in the semiconductor slab in which the integrated circuit components have been manufactured. The first MMIC, an X band amplifier, is due to Pengelly and Turner of Plessey in 1976. Circuit gallery A small selection of the many circuits that can be constructed with planar transmission lines are shown in the figure. Such circuits are a class of distributed-element circuits. Microstrip and slotline types of directional couplers are shown at A and B respectively. Generally, a circuit form in conducting lines like stripline or microstrip has a dual form in dielectric line such as slotline or finline with the roles of the conductor and insulator reversed. 
The line widths of the two types are inversely related; narrow conducting lines result in high impedance, but in dielectric lines, the result is low impedance. Another example of dual circuits is the bandpass filter consisting of coupled lines shown at C in conductor form and at D in dielectric form. Each section of line acts as a resonator in the coupled lines filters. Another kind of resonator is shown in the SIW bandpass filter at E. Here posts placed in the centre of the waveguide act as resonators. Item F is a slotline hybrid ring featuring a mixture of both CPW and slotline feeds into its ports. The microstrip version of this circuit requires one section of the ring to be three-quarters wavelength long. In the slotline/CPW version all sections are one-quarter wavelength because there is a 180° phase inversion at the slotline junction. References Bibliography Barrett, R. M., "Etched sheets serve as microwave components", Electronics, vol. 25, pp. 114–118, June 1952. Barrett, R. M.; Barnes, M. H., "Microwave printed circuits", Radio TV News, vol. 46, 16 September 1951. Becherrawy, Tamer, Electromagnetism: Maxwell Equations, Wave Propagation and Emission, Wiley, 2013 . Bhartia, Prakash; Pramanick, Protap, "Fin-line characteristics and circuits", ch. 1 in, Button, Kenneth J., Topics in Millimeter Wave Technology: Volume 1, Elsevier, 2012 . Bhat, Bharathi; Koul, Shiban K., Stripline-like Transmission Lines for Microwave Integrated Circuits, New Age International, 1989 . Blank, Jon; Buntschuh, Charles, "Directional couplers", ch. 7 in, Ishii, T. Koryu, Handbook of Microwave Technology: Volume 1: Components and Devices, Academic Press, 2013 . Chang, Kai; Hsieh, Lung-Hwa, Microwave Ring Circuits and Related Structures, Wiley, 2004 . Cohn, S. B., "Slot line – an alternative transmission medium for integrated circuits", G-MTT International Microwave Symposium, pp. 104–109, 1968. Connor, F. R., Wave Transmission, Edward Arnold, 1972 . Das, Annapurna; Das, Sisir K., Microwave Engineering, Tata McGraw-Hill, 2009 . Edwards, Terry; Steer, Michael, Foundations for Microstrip Circuit Design, Wiley, 2016 . Fang, D. G., Antenna Theory and Microstrip Antennas, CRC Press, 2009 . Flaviis, Franco De, "Guided waves", ch. 5 in, Chen, Wai-Kai (ed.), The Electrical Engineering Handbook, Academic Press, 2004 . Garg, Ramesh, Microstrip Antenna Design Handbook, Artech House, 2001 . Garg, Ramesh; Bahl, Inder; Bozzi, Maurizio, Microstrip Lines and Slotlines, Artech House, 2013 . Grebennikov, Andrei, RF and Microwave Transmitter Design, Wiley, 2011 . Grieg, D. D.; Engelmann, H. F., "Microstrip – A new transmission technique for the kilomegacycle range", Proceedings of the IRE, vol. 40, iss. 12, pp. 1644–1650, December 1952. Heinen, Stefan; Klein, Norbert, "RF and microwave communication – systems, circuits and devices", ch. 36 in, Waser, Rainer (ed), Nanoelectronics and Information Technology, Wiley, 2012 . Helszajn, J., Ridge Waveguides and Passive Microwave Components, IET, 2000 . Hirowkawa, J.; Ando, M, "Single-layer feed waveguide consisting of posts for plane TEM wave excitation in parallel plates", IEEE Transactions on Antennas and Propagation, vol. 46, iss. 5, pp. 625–630, May 1998. Hunter, I. C., Theory and Design of Microwave Filters, IET, 2001 . Ishii, T. K., "Synthesis of distributed circuits", ch. 45 in, Chen, Wai-Kai (ed.), The Circuits and Filters Handbook, 2nd edition, CRC Press, 2002 . 
Jarry, Pierre; Beneat, Jacques, Design and Realizations of Miniaturized Fractal Microwave and RF Filters, Wiley, 2009 . King, D. D., "Dielectric image line", Journal of Applied Physics, vol. 23, no. 6, pp. 699–700, June 1952. King, D. D., "Properties of dielectric image lines", IRE Transactions on Microwave Theory and Techniques, vol. 3, iss. 2, pp. 75–81, March 1955. Kneppo, I.; Fabian, J.; Bezousek, P.; Hrnicko, P.; Pavel, M., Microwave Integrated Circuits, Springer, 2012 . Knox, R. M., Toulios, P. P., Onoda, G. Y., Investigation of the Use of Microwave Image Line Integrated Circuits for Use in Radiometers and Other Microwave Devices in X-band and Above, NASA technical report no. CR 112107, August 1972. Kouzaev, Geunnadi A.; Deen, M. Jamal; Nikolova, Natalie K., "Transmission lines and passive components", ch. 2 in, Deen, M. Jamal (ed.), Advances in Imaging and Electron Physics: Volume 174: Silicon-Based Millimeter-Wave Technology, Academic Press, 2012 . Lee, Thomas H., Planar Microwave Engineering, Cambridge University Press, 2004 . Maas, Stephen A., Practical Microwave Circuits, Artech House, 2014 . Maaskant, Rob, "Fast analysis of periodic antennas and metamaterial based waveguides", ch. 3 in, Mittra, Raj (ed.), Computational Electromagnetics: Recent Advances and Engineering Applications, Springer, 2013 . Maichen, Wolfgang, Digital Timing Measurements, Springer, 2006 . Maloratsky, Leo, Passive RF and Microwave Integrated Circuits, Elsevier, 2003 . Mazierska, Janina; Jacob, Mohan, "High-temperature superconducting planar filters for wireless communication", ch. 6 in, Kiang, Jean-Fu (ed.), Novel Technologies for Microwave and Millimeter – Wave Applications, Springer, 2013 . Meier, Paul J., "Two new integrated-circuit media with special advantages at millimeter wavelengths", 1972 IEEE GMTT International Microwave Symposium, 22–24 May 1972. Menzel, Wolfgang, "Integrated fin-line components for communications, radar, and radiometer applications", ch. 6 in, Button, Kenneth J. (ed.), Infrared and Millimeter Waves: Volume 13: Millimeter Components and Techniques, Part IV, Elsevier, 1985 . Molnar, J. A., Analysis of FIN line Feasibility for W-Band Attenuator Applications, Naval Research Lab Report 6843, 11 June 1991, Defense Technical Information Center accession no. ADA237721. Oliner, Arthur A., "The evolution of electromagnetic waveguides", ch. 16 in, Sarkar et al., History of Wireless, John Wiley and Sons, 2006 . Osterman, Michael D.; Pecht, Michael, "Introduction", ch. 1 in, Pecht, Michael (ed.), Handbook of Electronic Package Design, CRC Press, 1991 . Paolo, Franco Di, Networks and Devices Using Planar Transmission Lines, CRC Press, 2000 . Pengelly, R. S.; Turner, J. A., "Monolithic broadband GaAs FET amplifiers", Electronics Letters, vol. 12, pp. 251–252, May 1976. Pfeiffer, Ullrich, "Millimeter-wave packaging", ch. 2 in, Liu, Pfeiffer, Gaucher, Grzyb, Advanced Millimeter-wave Technologies: Antennas, Packaging and Circuits, Wiley, 2009 . Räisänen, Antti V.; Lehto, Arto, Radio Engineering for Wireless Communication and Sensor Applications, Artech House, 2003 . Rao, R. S., Microwave Engineering, PHI Learning, 2012 . Robertson, S. D., "The ultra-bandwidth finline coupler", IRE Transactions on Microwave Theory and Techniques, vol. 3, iss. 6, pp. 45–48, December 1955. Rogers, John W. M.; Plett, Calvin, Radio Frequency Integrated Circuit Design, Artech House, 2010 . Rosloniec, Stanislaw, Fundamental Numerical Methods for Electrical Engineering, Springer, 2008 . 
Russer, P.; Biebl, E., "Fundamentals", ch. 1 in, Luy, Johann-Friedrich; Russer, Peter (eds.), Silicon-Based Millimeter-Wave Devices, Springer, 2013 . Sander, K. F.; Reed, G. A. L., Transmission and Propagation of Electromagnetic Waves, Cambridge University Press, 1986 . Schantz, Hans G., The Art and Science of Ultrawideband Antennas, Artech House, 2015 . Simons, Rainee N., Coplanar Waveguide Circuits, Components, and Systems, Wiley, 2004 . Sisodia, M. L.; Gupta, Vijay Laxmi, Microwaves: Introduction to Circuits, Devices and Antennas, New Age International, 2007 . Srivastava, Ganesh Prasad; Gupta, Vijay Laxmi, Microwave Devices and Circuit Design, PHI Learning, 2006 . Tan, Boon-Kok, Development of Coherent Detector Technologies for Sub-Millimetre Wave Astronomy Observations, Springer, 2015 . Teshirogi, Tasuku, Modern Millimeter-wave Technologies, IOS Press, 2001 . Wallace, Richard; Andreasson, Krister, Introduction to RF and Microwave Passive Components, Artech House, 2015 . Wanhammar, Lars, Analog Filters using MATLAB, Springer, 2009 . Wen, C. P., "Coplanar waveguide: a surface strip transmission line suitable for nonreciprocal gyromagnetic device applications", IEEE Transactions on Microwave Theory and Techniques, vol. 17, iss. 12, pp. 1087–1090, December 1969. Wolff, Ingo, Coplanar Microwave Integrated Circuits, Wiley, 2006 . Wu, Ke; Zhu, Lei; Vahldieck, Ruediger, "Microwave passive components", ch. 7 in, Chen, Wai-Kai (ed.), The Electrical Engineering Handbook, Academic Press, 2004 . Wu, Xuan Hui; Kishk, Ahmed, Analysis and Design of Substrate Integrated Waveguide Using Efficient 2D Hybrid Method, Morgan & Claypool, 2010 . Yarman, Binboga Siddik, Design of Ultra Wideband Antenna Matching Networks, Springer, 2008 . Yeh, C; Shimabukuro, F, The Essence of Dielectric Waveguides, Springer, 2008 . Zhang, Kequian; Li, Dejie, Electromagnetic Theory for Microwaves and Optoelectronics, Springer, 2013 . Distributed element circuits Microwave technology Signal cables Printed circuit board manufacturing
Planar transmission line
[ "Engineering" ]
10,127
[ "Electrical engineering", "Electronic engineering", "Printed circuit board manufacturing", "Distributed element circuits" ]
39,291,030
https://en.wikipedia.org/wiki/Male%20reproductive%20alliances
Male reproductive alliances can best be understood within the context of traditional male–male competition, as a specific case of cooperative competition. Such cooperative behavior, however, does not necessarily result in the equal sharing of resources among cooperating individuals. Cooperation often requires that individuals decrease their own fitness to increase the fitness of another. This behavior becomes even more striking when it occurs within the context of cooperative reproduction, where individuals decrease their own reproductive fitness to improve the reproductive fitness of another. In some species, males cooperate by forming alliances between related or non-related individuals to gain access to females and prevent other males from mating. Such alliances often result in the monopolization of mating opportunities by one dominant male. The resulting unequal sharing of mating opportunities contradicts the traditional male–male competition over access to females that natural selection implies, making male reproductive alliances an ideal case to study the costs and benefits associated with subordinate individual cooperation. The cost of cooperation (a decrease in fitness) makes it difficult to reconcile the principles of natural selection and cooperation unless there are specific circumstances that make cooperation favorable. Despite the apparently contradictory nature of cooperation, it does occur in a variety of species. Male reproductive alliances have been documented in bottlenose dolphins (Tursiops sp.), slender mongooses (Galerella sanguinea), lions (Panthera leo), chimpanzees (Pan troglodytes), and other primates. However, such behavior may or may not have evolved within the context of reproduction. Alliances may improve an individual's fitness by either improving foraging capabilities or lessening the cost of defending territories. The reproductive tradeoffs for males participating in reproductive alliances depend on the extent to which mating is shared among alliance members, and the extent to which alliance membership incurs a reproductive fitness advantage over competing as a single male. Three mechanisms have been hypothesized to reconcile the principles of natural selection and cooperation: kin selection, direct reciprocity and mutualism. Separate cases have provided evidence supporting all three of the routes described. Male alliances have been hypothesized to have evolved within the context of kin selection in red howler monkeys, within the context of direct reciprocity in savanna baboons and within the context of mutualism in lions. Determining the evolutionary context of cooperative behavior can be difficult. Two things to consider regarding male alliances are whether the coalition comprises related or unrelated individuals and how stable the coalitions are. Male alliances involve complex interactions with many costs and benefits, making the study of such cooperative behavior both difficult and fascinating. Kin selection and inclusive fitness When coalitions are composed of relatives, the contradictory nature of male reproductive alliances is easily resolved through inclusive fitness theory. The theory of inclusive fitness, proposed by Hamilton (1964), states that individuals can enhance their own reproductive fitness by securing the reproductive success of their relatives.
For kin selection to increase the reproductive fitness of the altruist, according to Hamilton (1964), as cited in Nowak (2006), the coefficient of relatedness between the donor and recipient of the altruistic act must be greater than the cost-to-benefit ratio of the altruistic act (r > c/b). In other words, the reproductive benefit gained by the recipient of the altruistic act times the coefficient of relatedness must be greater than the reproductive cost of the individual performing the altruistic act (rb > c). Therefore, in alliances composed of closely related individuals, where there is a large coefficient of relatedness, it is widely believed that inclusive fitness has been the principal driving force for the evolution of male reproductive cooperation. Pope (1990) demonstrated that it was advantageous for both dominant and subordinate male red howler monkeys to be members of a male alliance. While males formed coalitions of both related and unrelated individuals, related troops were more stable and lasted longer than unrelated troops. Such findings indicate that despite the low reproductive success of subordinate males, given that dominant males secured all mating opportunities, it was still beneficial to be in a group as opposed to being alone, particularly if that coalition was composed of related individuals. The study suggested that for red howler monkey coalitions, kin selection was the primary mechanism involved in the formation of male alliances. While subordinate males decreased their direct fitness by cooperating with dominant males, they increased their inclusive fitness by cooperating with relatives. Male philopatry, or the behavior of remaining in natal groups, in many cases sets the stage for such alliances among relatives. Later, Pope (2013) also discovered that the reproductive success of males within coalitions increases with increasing relatedness. That being said, while being a member of a male alliance has direct reproductive benefits for individuals, alliances come with a cost. Status shifts and mate guarding behavior often result in injury to the participants. Direct reciprocity Coalitions can also provide a reproductive benefit without comprising related individuals. In some species, coalitions of nonrelatives are fairly stable over time, providing valuable reproductive benefits for the individuals involved. Lion coalitions composed of 2–6 individuals were initially thought to be made up of related individuals; however, it is now known that a large percentage of lion alliances (42%) are composed of unrelated individuals. Such findings indicate that kin selection is not the only driving force behind the formation of male coalitions. It is often unclear how male coalitions of unrelated individuals arise. The hypothesis of direct reciprocity was proposed by Trivers (1971) to explain altruistic behaviors among nonrelatives. Direct reciprocity states that individuals are more likely to cooperate with individuals that they are likely to encounter again. The altruist performs a behavior that benefits another individual but decreases their own fitness. Repayment for the altruistic act follows later when the two individuals meet again, and the former altruist becomes the receiver of the altruistic act and the former receiver becomes the altruist.
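Returning briefly to the kin-selection condition stated above, a purely hypothetical numerical illustration of Hamilton's rule may help; the relatedness, benefit, and cost values below are invented for illustration and are not drawn from the howler monkey or other studies cited here.

```latex
% Hamilton's rule: an altruistic act is favoured when r*b > c, where r is the
% coefficient of relatedness, b the benefit to the recipient, and c the cost to
% the altruist (all in units of expected offspring).  Hypothetical numbers:
\[
  rb > c
  \qquad\text{e.g.}\quad
  \underbrace{0.5 \times 3}_{\text{full sibling}} = 1.5 > 1,
  \qquad\text{but}\quad
  \underbrace{0.125 \times 3}_{\text{first cousin}} = 0.375 < 1 .
\]
```

An analogous threshold, w > c/b on the probability that the partners meet again, governs the direct reciprocity mechanism discussed in what follows.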
Direct reciprocity can operate between unrelated individuals because the benefit to the receiver is greater than the cost to the altruist; therefore, when both individuals have been on the receiving end of their partnership they have both increased their fitness. The simplest explanation for direct reciprocity is a tit-for-tat model in which individuals who have been on the receiving end of an altruistic act are more likely to cooperate in the future. The tit-for-tat strategy is successful in societies where there is a tendency for individuals not to cooperate. A more fitting strategy for situations in which cooperation is already common is the win-stay, lose-shift strategy. In the win-stay, lose-shift strategy, individuals repeat their last act, whether cooperation or defection, when they are doing well, and change their behavior when they are doing poorly. For direct reciprocity to lead to stable cooperative behavior, the probability of the two cooperative individuals encountering each other again must be greater than the cost-to-benefit ratio of the altruistic act (w > c/b). Regardless of the strategy, there are very few documented cases of direct altruism in general, and even fewer cases of reciprocal altruism within the confines of reproduction, with the exception of savanna baboons. Subordinate male savanna baboons form friendly relations with dominant males, who rely on the subordinate males' help to secure a monopolization of the females against intruding males. The friendly relationship between the dominant male and the subordinate males is only temporary; eventually the subordinate males work together to overturn the dominant male, ending his reproductive monopoly and thus allowing subordinate males to temporarily secure mating opportunities, until a new dominant male emerges. The key to this hypothesis is that the dominant male is the newest member of the coalition. Since the subordinate males have lived together prior to the dominant male's arrival, researchers suggest that such cohabitation has allowed the baboons to come to a mutual "agreement" that they will work together to overturn the dominant male. The coalitions formed by the lower ranking males have been interpreted as direct reciprocity because the cooperative subordinate males have presumably cooperated before. However, while the savanna baboon behavior can be viewed as reciprocal altruism, the evidence is weak and their behavior often deviates from the framework of reciprocal altruism. Mutualism Another explanation for male alliances is mutualism. Mutualism benefits both cooperating individuals, in that the immediate benefit of cooperating outweighs its costs. Lions provide an interesting case of mutualism. Male lions cannot afford not to cooperate with one another. Single male lions cannot successfully defend and maintain access to females against invading coalitions of males, which can invade the pride and force the single male to retreat. Therefore, males cooperate to gain and defend access to females. Resident male alliances maintain access to females in that pride, and sire all offspring during their control of the pride. Solitary lions rarely gain access to a pride; however, when they do, they kill all resident cubs before beginning their tenure. Therefore, it is of vital importance for the males' reproductive success to be a member of a male coalition and defend females against intruding males.
Mutualism Another explanation for male alliances is mutualism. Mutualism benefits both cooperating individuals, in that the immediate benefit associated with cooperating outweighs the costs associated with cooperating. Lions provide an interesting case of mutualism. Male lions cannot afford not to cooperate with one another. Single male lions cannot successfully defend and maintain access to females against invading coalitions of males, which can invade the pride and force the single male to retreat. Therefore, males cooperate to gain and defend access to females. Resident male alliances maintain access to the females in that pride and sire all offspring during their control of the pride. Solitary lions rarely gain access into coalitions; however, when they do, they kill all resident cubs before beginning their tenure. Therefore, it is of vital importance for a male's reproductive success to be a member of a male coalition and to defend females against intruding males. While it was originally believed that male lions formed groups of related individuals, it is now clear that they may also form alliances with unrelated individuals. Male lion alliances are extremely successful in preventing solitary males from invading their pride. Packer et al. (1991) found that while lions often form coalitions with nonrelatives, they do so only under specific circumstances. The degree of relatedness of coalition members is related to coalition size. Small coalitions are often composed of unrelated individuals, while large coalitions are largely composed of related individuals. Although larger coalitions result in more offspring per capita than small coalitions, large coalitions are only composed of close relatives. In large groups only a select few males were successful in mating, resulting in a large discrepancy in reproductive fitness among coalition members. In small groups no male was dominant and individuals displayed similar reproductive fitness. Therefore, researchers propose that in lions cooperation between non-relatives has evolved in cases where there is little variance in mating opportunities among coalition members and males share equally in mating opportunities. Larger coalitions result in greater variability of mating opportunities, and therefore researchers suggest that kinship (an indirect fitness benefit) is necessary for the maintenance and success of larger male lion coalitions. In smaller groups male lions are thought to share a “mutual dependence”. Smaller groups are composed largely of unrelated individuals, which eliminates the possibility of kin selection. Without cooperation, males would not be able to defend their pride. Mutualism appears to be the driving force in the formation of small male lion alliances, whereas kin selection appears to be of greater importance in the formation and maintenance of larger coalitions. Such findings demonstrate that the formation of male alliances not only occurs differently between species but also within species. Super alliances Adding to the variability of male alliances, some species such as bottlenose dolphins form at least two levels of male alliances. The first level, termed first order alliances, forms to guard access to females and closely resembles the alliances previously discussed in primate species. Second order alliances form between two first order alliances. The resulting “super alliance” works to herd females and defend them, preventing outside males from gaining access to females. While first order alliances can be very stable, often lasting over ten years, second order alliances are more dynamic. On average, males in first order alliances are more closely related than males in second order alliances, which do not display genetic relatedness. Scientists suggest that the different levels of alliances in bottlenose dolphins have arisen from different evolutionary contexts. Inclusive fitness is suggested as the driving force behind first order alliances of related individuals. However, first order alliances composed of unrelated individuals, as well as second order alliances of unrelated individuals, also occur, indicating that there may be another mechanism favoring the formation of male alliances. Male Sarasota Bay common bottlenose dolphins with the defensive advantages of male pair-bonding range more widely than unpaired males and encounter more unrelated females. Nonetheless, research also supports earlier predictions of high female promiscuity, which would decrease the value of male alliances. 
There are also potential mutualistic benefits for individuals involved in the alliance. Such benefits may include more effective female guarding, which in turn further enhances the reproductive success of alliance members in the case of the lance-tailed manakin. These findings further demonstrate that the formation of male alliances is highly variable and context dependent. Conclusions There have been many hypotheses set forward to explain the formation and stability of male alliances, most notably kin selection, direct reciprocity and mutualism. While there are many factors that dictate the formation and stability of male reproductive alliances, scientists propose that the formation of male alliances is largely a result of a need for males to cooperate in order to gain access to females. If there is intense competition over access to females, males may form alliances if there is a greater reproductive benefit to being a member of an alliance than to being a solitary male. In male-biased populations, male cooperative reproductive behavior is rare. However, there are extraordinary cases in which cooperation is favorable, such as those outlined in red howler monkey, savanna baboon, lion and bottlenose dolphin communities. References Ethology
Male reproductive alliances
[ "Biology" ]
2,699
[ "Behavioural sciences", "Ethology", "Behavior" ]
39,291,986
https://en.wikipedia.org/wiki/Linear%20seismic%20inversion
Inverse modeling is a mathematical technique in which the objective is to determine the physical properties of the subsurface of an earth region that has produced a given seismogram. Cooke and Schneider (1983) defined it as the calculation of the earth's structure and physical parameters from some set of observed seismic data. The underlying assumption in this method is that the collected seismic data are from an earth structure that matches the cross-section computed from the inversion algorithm. Some common earth properties that are inverted for include acoustic velocity, formation and fluid densities, acoustic impedance, Poisson's ratio, formation compressibility, shear rigidity, porosity, and fluid saturation. The method has long been useful for geophysicists and can be categorized into two broad types: deterministic and stochastic inversion. Deterministic inversion methods are based on comparison of the output of an earth model with the observed field data and continuous updating of the earth model parameters to minimize a function, which is usually some form of difference between the model output and the field observations. As such, this method of inversion, under which linear inversion falls, is posed as a minimization problem, and the accepted earth model is the set of model parameters that minimizes the objective function in producing a numerical seismogram that best compares with the collected field seismic data. On the other hand, stochastic inversion methods are used to generate constrained models, as used in reservoir flow simulation, using geostatistical tools like kriging. As opposed to deterministic inversion methods, which produce a single set of model parameters, stochastic methods generate a suite of alternate earth model parameters which all obey the model constraint. However, the two methods are related, as the result of a deterministic model is the average of all the possible non-unique solutions of the stochastic methods. Since seismic linear inversion is a deterministic inversion method, the stochastic method will not be discussed beyond this point. Linear inversion The deterministic nature of linear inversion requires a functional relationship which models, in terms of the earth model parameters, the seismic variable to be inverted. This functional relationship is some mathematical model derived from the fundamental laws of physics and is more often called a forward model. The aim of the technique is to minimize a function which depends on the difference between the convolution of the forward model with a source wavelet and the field-collected seismic trace. As in the field of optimization, this function to be minimized is called the objective function and, in conventional inverse modeling, is simply the difference between the convolved forward model and the seismic trace. As mentioned earlier, different types of variables can be inverted for, but for clarity these variables will be referred to as the impedance series of the earth model. In the following subsections we describe in more detail, in the context of linear inversion as a minimization problem, the different components that are necessary to invert seismic data. Forward model The centerpiece of seismic linear inversion is the forward model, which models the generation of the experimental data collected. According to Wiggins (1972), it provides a functional (computational) relationship between the model parameters and calculated values for the observed traces. 
Depending on the seismic data collected, this model may vary from the classical wave equations for predicting particle displacement or fluid pressure for sound wave propagation through rock or fluids, to variants of these classical equations. For example, the forward model in Tarantola (1984) is the wave equation for pressure variation in a liquid medium during seismic wave propagation, in which the bulk modulus, the density, the source of acoustic waves, and the pressure variation appear as the physical quantities. By assuming constant-velocity layers with plane interfaces, Kanasewich and Chiu (1985) instead used the brachistochrone model of Johann Bernoulli for the travel time t of a ray along its path, where x, y, z are depth coordinates and vi is the constant velocity between interfaces i − 1 and i. In Cooke and Schneider (1983), the model is a synthetic trace generation algorithm expressed as in Eqn. 3, s(t) = w(t) * R(t), where s(t) is the synthetic trace, w(t) the source wavelet, R(t) the reflectivity function (generated in the Z-domain by a recursive formula), and * denotes convolution. In whatever form the forward model appears, it is important that it not only predicts the collected field data, but also models how the data are generated. Thus, the forward model of Cooke and Schneider (1983) can only be used to invert CMP data, since the model invariably assumes no spreading loss by mimicking the response of a laterally homogeneous earth to a plane-wave source. Objective function An important numerical process in inverse modeling is to minimize the objective function, which is a function defined in terms of the difference between the collected field seismic data and the numerically computed seismic data. Classical objective functions include the sum of squared deviations between experimental and numerical data, as in the least squares methods, the sum of the magnitudes of the differences between field and numerical data, or some variant of these definitions. Irrespective of the definition used, the numerical solution of the inverse problem is the earth model that minimizes the objective function. In addition to the objective function, other constraints, such as known model parameters and known layer interfaces in some regions of the earth, are also incorporated in the inverse modeling procedure. These constraints, according to Francis (2006), help to reduce the non-uniqueness of the inversion solution by providing a priori information that is not contained in the inverted data, while Cooke and Schneider (1983) report that they are useful in controlling noise and when working in a geophysically well-known area. 
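Before turning to the linearized algebra, the convolutional forward model of Eqn. 3 can be illustrated with a short numerical sketch: a synthetic trace is generated by convolving a source wavelet with a reflectivity series derived from a layered impedance model. The choice of a Ricker wavelet and all numerical values below are illustrative assumptions, not taken from Cooke and Schneider (1983).

```python
import numpy as np

def ricker(dt, f0, length=0.128):
    """Ricker (Mexican-hat) wavelet sampled at interval dt with peak frequency f0 (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def reflectivity_from_impedance(z):
    """Reflection coefficients at layer interfaces: R_i = (z_{i+1} - z_i) / (z_{i+1} + z_i)."""
    z = np.asarray(z, dtype=float)
    return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

def synthetic_trace(impedance, dt=0.002, f0=30.0):
    """Forward model s(t) = w(t) * R(t): convolve the wavelet with the reflectivity series."""
    r = reflectivity_from_impedance(impedance)
    w = ricker(dt, f0)
    return np.convolve(r, w, mode="same")

# Hypothetical three-layer impedance model (density x velocity), one value per time sample.
impedance = np.concatenate([np.full(50, 2.0e6), np.full(50, 3.5e6), np.full(50, 2.8e6)])
trace = synthetic_trace(impedance)
```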
Mathematical analysis of generalized linear inversion procedure The objective of the mathematical analysis of inverse modeling is to cast the generalized linear inverse problem into simple matrix algebra by considering all the components described in the previous sections, viz. the forward model, the objective function, etc. Generally, the numerically generated seismic data are non-linear functions of the earth model parameters. To remove the non-linearity and create a platform for the application of linear algebra concepts, the forward model is linearized by expansion in a Taylor series, as carried out below. For more details see Wiggins (1972) and Cooke and Schneider (1983). Consider a set of seismic field observations F_obs,i, for i = 1, ..., n, and a set of earth model parameters to be inverted for, p_j, for j = 1, ..., m. The field observations can then be represented as a vector F_obs and the modeled data as F(p), where p is the vectorial representation of the model parameters and F(p) gives the observations as a function of the earth parameters. Similarly, for a guess q of the model parameters, F(q) is the vector of numerically computed seismic data obtained using the forward model of Sec. 1.3. The Taylor series expansion of F(p) about q is F(p) = F(q) + (∂F/∂q)(p − q) + higher-order terms. On linearization by dropping the non-linear terms (terms in (p − q) of order 2 and above), the equation becomes F(p) − F(q) = (∂F/∂q)(p − q). Considering that F has n components and that p and q have m components, the discrete form of this expansion (Eqn. 5) results in a system of n linear equations in m variables whose matrix form is d = Ax (Eqn. 10). Here d is called the difference vector in Cooke and Schneider (1983); it has size n × 1 and its components are the differences between the observed trace and the numerically computed seismic data. x = p − q is the corrector vector, of size m × 1, while A is called the sensitivity matrix. It has size n × m, and its components are such that each column is the partial derivative of a component of the forward function with respect to one of the unknown earth model parameters. Similarly, each row is the partial derivative of a component of the numerically computed seismic trace with respect to all unknown model parameters. Solution algorithm F(q) is computed from the forward model, while F_obs is the experimental data. Thus, d is a known quantity. On the other hand, x is unknown and is obtained by solution of Eqn. 10. This equation is theoretically solvable only when A is invertible, that is, if it is a square matrix, so that the number of observations is equal to the number of unknown earth parameters. If this is the case, the unknown corrector vector x is solved for as x = A^-1 d, using any of the classical direct or iterative solvers for a set of linear equations. In most seismic inversion applications there are more observations than earth parameters to be inverted for, i.e. n > m, leading to a system of equations that is mathematically over-determined. As a result, Eqn. 10 is not exactly solvable and an exact solution is not obtainable. An estimate of the corrector vector is obtained using the least squares procedure, which finds the corrector vector that minimizes E = e^T e, the sum of the squares of the error e. The error is given by e = d − Ax. In the least squares procedure, the corrector vector that minimizes E is obtained as x = (A^T A)^-1 A^T d. From the above discussion, the objective function is defined as either the L1 or L2 norm of the difference vector d or of the error e. 
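The least-squares corrector step just described can be written in a few lines of code. The sketch below builds the sensitivity matrix by finite differences for an arbitrary forward function and solves the over-determined system d = Ax in the least-squares sense; it is a generic illustration under the assumption that the forward function accepts and returns NumPy arrays, not the specific implementation of Cooke and Schneider (1983).

```python
import numpy as np

def sensitivity_matrix(forward, q, eps=1e-6):
    """n x m matrix A with A[i, j] = dF_i/dp_j, estimated by finite differences.
    `forward` maps an m-vector of parameters to an n-vector of synthetic data."""
    f0 = forward(q)
    A = np.zeros((f0.size, q.size))
    for j in range(q.size):
        dq = q.copy()
        dq[j] += eps
        A[:, j] = (forward(dq) - f0) / eps
    return A

def corrector_vector(forward, q, observed):
    """Least-squares solution of A x = d, i.e. x = (A^T A)^-1 A^T d."""
    d = observed - forward(q)            # difference vector
    A = sensitivity_matrix(forward, q)   # sensitivity matrix
    x, *_ = np.linalg.lstsq(A, d, rcond=None)
    return x
```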
The generalized procedure for inverting any experimental seismic data for impedance or other earth parameters, using the mathematical theory for inverse modeling described above, is shown in Fig. 1 and proceeds as follows: (1) an initial guess of the model impedance is provided to initiate the inversion process; (2) synthetic seismic data are computed by the forward model, using the current model impedance; (3) the difference vector is computed as the difference between the experimental and synthetic seismic data; (4) the sensitivity matrix is computed at this value of the impedance profile; (5) using the sensitivity matrix and the difference vector from step (3), the corrector vector is calculated; (6) a new impedance profile is obtained by adding the corrector vector to the current guess (Eqn. 14). The L1 or L2 norm of the computed corrector vector is then compared with a provided tolerance value. If the computed norm is less than the tolerance, the numerical procedure is concluded and the inverted impedance profile for the earth region is given by Eqn. 14. On the other hand, if the norm is greater than the tolerance, iterations through steps (2)–(6) are repeated, but with the updated impedance profile computed from Eqn. 14. Fig. 2 shows a typical example of impedance profile updating during the successive iteration process. According to Cooke and Schneider (1983), use of the corrected guess from Eqn. 14 as the new initial guess during iteration reduces the error. Parameterization of the earth model space Irrespective of the variable to be inverted for, the earth's impedance is a continuous function of depth (or of time in seismic data), and for the numerical linear inversion technique to be applicable to this continuous physical model, the continuous properties have to be discretized and/or sampled at discrete intervals along the depth of the earth model. Thus, the total depth over which model properties are to be determined is a necessary starting point for the discretization. Commonly, as shown in Fig. 3, these properties are sampled at closely spaced discrete intervals over this depth to ensure high resolution of the impedance variation along the earth's depth. The impedance value inverted by the algorithm represents the average value within each discrete interval. Considering that the inverse modeling problem is only theoretically solvable when the number of discrete intervals for sampling the properties is equal to the number of observations in the trace to be inverted, high-resolution sampling will lead to a large matrix which will be very expensive to invert. Furthermore, the matrix may be singular for dependent equations, the inversion can be unstable in the presence of noise, and the system may be under-constrained if parameters other than the primary variables inverted for are desired. In relation to desired parameters other than impedance, Cooke and Schneider (1983) give these as including the source wavelet and a scale factor. Finally, by treating constraints as known impedance values in some layers or discrete intervals, the number of unknown impedance values to be solved for is reduced, leading to greater accuracy in the results of the inversion algorithm. 
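Before the worked examples, the whole loop of steps (1)–(6) with the tolerance test can be summarized in a short, self-contained sketch. The demonstration uses a hypothetical linear temperature–depth forward model of the kind discussed in the first example below; all numbers are illustrative assumptions, not values from the cited sources.

```python
import numpy as np

def invert(forward, q0, observed, tol=1e-8, max_iter=50, eps=1e-6):
    """Generalized linearized inversion: iterate steps (2)-(6) until the corrector norm < tol."""
    q = np.array(q0, dtype=float)
    for _ in range(max_iter):
        d = observed - forward(q)                    # step (3): difference vector
        A = np.zeros((d.size, q.size))               # step (4): sensitivity matrix
        for j in range(q.size):
            dq = q.copy()
            dq[j] += eps
            A[:, j] = (forward(dq) - forward(q)) / eps
        x, *_ = np.linalg.lstsq(A, d, rcond=None)    # step (5): least-squares corrector vector
        q = q + x                                    # step (6): updated model (Eqn. 14)
        if np.linalg.norm(x) < tol:                  # convergence test against the tolerance
            return q
    return q

# Hypothetical demonstration with a linear temperature-depth model T(z) = p1 + p2*z.
depths = np.array([10.0, 20.0, 30.0, 40.0])                 # observation depths (illustrative)
observed = 15.0 + 0.03 * depths                              # "field" data from assumed true parameters
forward = lambda p: p[0] + p[1] * depths
print(invert(forward, q0=[0.0, 0.0], observed=observed))     # recovers approximately [15.0, 0.03]
```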
Inversion examples Temperature inversion from Marescot (2010) Source: We start with an example of inverting for earth parameter values from the temperature-depth distribution in a given earth region. Although this example does not directly relate to seismic inversion, since no traveling acoustic waves are involved, it nonetheless introduces a practical application of the inversion technique in a manner that is easy to comprehend, before moving on to seismic applications. In this example, the temperature of the earth is measured at discrete locations in a well bore by placing temperature sensors at the target depths. By assuming a forward model of a linear distribution of temperature with depth, two parameters are inverted for from the temperature-depth measurements. The forward model is given by Eqn. 15, T(z) = p1 + p2 z, where p1 and p2 are the earth parameters and z is depth. Thus, the dimension of the parameter vector is 2, i.e. the number of parameters inverted for is 2. The objective of this inversion algorithm is to find the pair (p1, p2) that minimizes the difference between the observed temperature distribution and that obtained using the forward model of Eqn. 15. With the number of temperature observations denoted n, the components of the forward model are written as Fi = p1 + p2 zi for each observation depth zi. Marescot (2010) presents results for a simple case in which the temperature was observed at two depths. These experimental data were inverted to obtain values for the two earth parameters p1 and p2. For a more general case with a large number of temperature observations, Fig. 4 shows the final linear forward model obtained using the inverted values of p1 and p2. The figure shows a good match between experimental and numerical data. Wave travel time inversion from Marescot (2010) Source: This example inverts for earth layer velocities from recorded seismic wave travel times. Fig. 5 shows the initial velocity guesses and the travel times recorded in the field, while Fig. 6a shows the inverted heterogeneous velocity model, which is the solution of the inversion algorithm obtained after 30 iterations. As seen in Fig. 6b, there is good agreement between the final travel times obtained from the forward model using the inverted velocities and the field-recorded travel times. Using these solutions, the ray path was reconstructed and is shown to be highly tortuous through the earth model, as shown in Fig. 7. Seismic trace inversion from Cooke and Schneider (1983) This example, taken from Cooke and Schneider (1983), shows inversion of a CMP seismic trace for the earth model impedance (product of density and velocity) profile. The seismic trace inverted is shown in Fig. 8, while Fig. 9a shows the inverted impedance profile together with the initial impedance input to the inversion algorithm. Also recorded alongside the seismic trace is an impedance log of the earth region, as shown in Fig. 9b. The figures show good agreement between the recorded impedance log and the impedance numerically inverted from the seismic trace. References Further reading Backus, G. 1970. "Inference from inadequate and inaccurate data". Proceedings of the National Academy of Sciences of the United States of America 65, no. 1. Backus, G., and F. Gilbert. 1968. "The Resolving Power of Gross Earth Data". Geophysical Journal of the Royal Astronomical Society 16 (2): 169–205. Backus, G. E., and J. F. Gilbert. 1967. "Numerical applications of a formalism for geophysical inverse problems". Geophysical Journal of the Royal Astronomical Society 13 (1–3): 247. Bamberger, A., G. Chavent, C. Hemon, and P. Lailly. 1982. "Inversion of normal incidence seismograms". Geophysics 47 (5): 757–770. Clayton, R. W., and R. H. Stolt. 1981. "A Born-WKBJ inversion method for acoustic reflection data". Geophysics 46 (11): 1559–1567. Franklin, J. N. 1970. "Well-posed stochastic extensions of ill-posed linear problems". Journal of Mathematical Analysis and Applications 31 (3): 682. Parker, R. L. 1977. "Understanding inverse theory". Annual Review of Earth and Planetary Sciences 5: 35–64. Rawlinson, N. 2000. "Inversion of Seismic Data for Layered Crustal Structure". Ph.D. diss., Monash University. Wang, B., and L. W. Braile. 1996. "Simultaneous inversion of reflection and refraction seismic data and application to field data from the northern Rio Grande rift". Geophysical Journal International 125 (2): 443–458. Weglein, A. B., H. Y. Zhang, A. C. Ramirez, F. Liu, and J. E. M. Lira. 2009. "Clarifying the underlying and fundamental meaning of the approximate linear inversion of seismic data". Geophysics 74 (6): WCD1–WCD13. Mathematical modeling Geological techniques Seismology measurement
Linear seismic inversion
[ "Mathematics" ]
3,451
[ "Applied mathematics", "Mathematical modeling" ]
39,295,868
https://en.wikipedia.org/wiki/Lely%20method
The Lely method, also known as the Lely process or Lely technique, is a crystal growth technology used for producing silicon carbide crystals for the semiconductor industry. The patent for this method was filed in the Netherlands in 1954 and in the United States in 1955 by Jan Anthony Lely of Philips Electronics. The patent was subsequently granted on 30 September 1958. The method was later refined by D. R. Hamilton et al. in 1960, and by V. P. Novikov and V. I. Ionov in 1968. Overview The Lely method produces bulk silicon carbide crystals through the process of sublimation. Silicon carbide powder is loaded into a graphite crucible, which is purged with argon gas and heated to approximately . The silicon carbide near the outer walls of the crucible sublimes and is deposited on a graphite rod near the center of the crucible, which is at a lower temperature. Several modified versions of the Lely process exist; most commonly, the silicon carbide is heated from the bottom of the crucible rather than from the walls, and is deposited on the lid. Other modifications include varying the temperature, temperature gradient, argon pressure, and geometry of the system. Typically, an induction furnace is used to achieve the required temperatures of . See also Acheson process Czochralski method Sublimation sandwich method References Crystallography Materials science Thin film deposition
Lely method
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
290
[ "Materials science stubs", "Applied and interdisciplinary physics", "Thin film deposition", "Coatings", "Thin films", "Materials science", "Crystallography stubs", "Crystallography", "Condensed matter physics", "nan", "Planes (geometry)", "Solid state engineering" ]
49,529,442
https://en.wikipedia.org/wiki/Zinc%20finger%2C%20cchc%20domain%20containing%2023
Zinc finger, CCHC domain containing 23 is a protein that in humans is encoded by the ZCCHC23 gene. References Human proteins
Zinc finger, cchc domain containing 23
[ "Chemistry" ]
29
[ "Biochemistry stubs", "Protein stubs" ]
49,530,308
https://en.wikipedia.org/wiki/Neutron%20magnetic%20imaging
Neutrons are spin 1/2 particles that interact with magnetic induction fields via the Zeeman interaction. This interaction is both rather large and simple to describe. Several neutron scattering techniques have been developed to use thermal neutrons to characterize magnetic micro- and nanostructures. Polarized small-angle neutron scattering (SANS) Small-angle neutron scattering is a technique which is especially suited for the study of nanoparticles. It has, for example, been used extensively for the study of ferrofluids. More recently, polarized SANS has become more widely available and a wide range of studies has been performed. Polarized SANS makes it possible either to probe the internal structure of magnetic nanoparticles, via measurement of the magnetic form factor, or to probe the magnetic interactions between magnetic nanoparticles, via the structure factor. In a few cases, polarized grazing-incidence SANS has been performed on magnetic systems. A few polarized neutron SANS spectrometers are available across the world: D33 at the Institut Laue-Langevin (ILL) in Grenoble, France; PA20 at the CEA Laboratoire Léon Brillouin (LLB) in Saclay, France (CEA Saclay site); SANS-I and KWS-1 and KWS-2 at the Forschungsneutronenquelle Heinz Maier-Leibnitz (FRM II) in Garching, Germany; and V4 at the Helmholtz-Zentrum Berlin. Polarized neutron reflectometry Polarized neutron reflectometry allows the probing of magnetic thin films and ultra-thin films. Polarized reflectivity measurements allow the magnitude and direction of the magnetic induction in magnetic heterostructures to be measured with a depth resolution on the order of 2-3 nm for films with thicknesses ranging from 5 to 100 nm. A number of polarized neutron reflectometers are available across the world: Platypus at ANSTO in Sydney, Australia; the C5 spectrometer at NRC Canada Chalk River Labs in Chalk River, Canada; the D3 reflectometer at NRC Canada Chalk River Labs in Chalk River, Canada; D17 and SuperADAM at the Institut Laue-Langevin (ILL) in Grenoble, France; PRISM (alternate) at the CEA Laboratoire Léon Brillouin (LLB) in Saclay, France; N-REX+, MIRA, TREFF@NoSpec and MARIA at the Forschungsneutronenquelle Heinz Maier-Leibnitz (FRM II) in Garching, Germany; REFLEX and REMUR at the Joint Institute for Nuclear Research IBR-2 in Dubna, Russia; AMOR at the Paul Scherrer Institute (PSI) in Villigen, Switzerland; SURF, CRISP, INTER, Offspec and polREF at the ISIS neutron source (ISIS) in Oxfordshire, United Kingdom; NG1 and NG7 at the NIST Center for Neutron Research (NCNR) in Gaithersburg, Maryland, United States; and Magnetism at the Spallation Neutron Source (ORNL) in Oak Ridge, Tennessee, United States. A catalogue of neutron reflectometers is available at www.reflectometry.net. Polarized Neutron Radiography and Tomography Precession techniques The neutron precession in an induction field is expressed as dμ/dt = γn μ × B, where μ is the neutron magnetic moment, B is the local magnetic induction at the neutron position, and γn is the neutron gyromagnetic ratio. For neutrons, the gyromagnetic ratio is approximately −1.83 × 10^8 rad s−1 T−1 (note that for neutrons the g factor is negative and equal to −3.83). Bulk systems Neutron radiography can be used to map the distribution of an induction field in space. In order to perform such experiments, the neutron beam is initially polarized; it then interacts with the induction field of interest, and the neutron precession is measured with a neutron analyzer in front of the 2D detector. The beam can be polarized either with supermirrors or with polarized 3He gas. 
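The size of the precession signal for a given field configuration can be estimated directly from the relation above. The short sketch below computes the precession angle accumulated by a monochromatic neutron crossing a region of uniform induction, taking the neutron velocity from the de Broglie relation; the field strength, path length, and wavelength are illustrative assumptions chosen to contrast a bulk sample with a thin film.

```python
import numpy as np

H = 6.62607015e-34        # Planck constant, J s
M_N = 1.67492749804e-27   # neutron mass, kg
GAMMA_N = 1.832e8         # magnitude of the neutron gyromagnetic ratio, rad s^-1 T^-1

def precession_angle(B_tesla, path_m, wavelength_m):
    """Precession angle phi = gamma_n * B * t, with t = L / v and v = h / (m_n * lambda)."""
    v = H / (M_N * wavelength_m)      # de Broglie velocity of the neutron
    t = path_m / v                    # time spent inside the field region
    return GAMMA_N * B_tesla * t      # accumulated precession angle, in radians

wavelength = 4e-10                    # 4 angstrom thermal neutrons (illustrative choice)
print(np.degrees(precession_angle(0.1, 1e-2, wavelength)))   # ~1 cm bulk region in 0.1 T: many turns
print(np.degrees(precession_angle(1.0, 1e-6, wavelength)))   # ~1 micron film in 1 T: only ~10 degrees
```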
Thin film structures The neutron precession in an induction field is rather small. Thus, in the case of thin films (~1 μm thick), the neutron interaction is rather small. In order to obtain a measurable signal, it has therefore been proposed that a grazing incidence geometry could be used. In such a geometry, the interaction is enhanced since the neutron travels a longer path inside the induction field. Such measurements, however, assume that the planar structure of the system is homogeneous and that the induction varies only through the depth of the magnetic film. The magnetisation depth profile was measured in thick CoZr films in which the magnetic anisotropy field was "engineered" during deposition. A very thorough description of the measurement process can be found in the cited literature. Phase imaging Phase contrast (or dark field) imaging has recently been developed for neutron radiography and tomography. It has been applied to visualize magnetic domains in several types of systems: soft magnetic alloys; magnetic vortices in low-Tc superconductors; and superconductors. Scanning magnetic neutron imaging Magnetic neutron radiography is currently limited in spatial resolution due to the need to analyze the neutron polarization, which results in losses in spatial resolution. It has been proposed that neutron scanning imaging could be performed by using microbeams. It is, however, only possible to produce one-dimensional microbeams due to the intrinsic limitation in neutron flux. Hence this technique can presently be applied only to one-dimensional problems. See also Tomography Tomographic reconstruction Neutron Tomography References Small-angle scattering Neutron scattering Imaging
Neutron magnetic imaging
[ "Chemistry" ]
1,116
[ "Scattering", "Neutron scattering" ]
49,534,415
https://en.wikipedia.org/wiki/Mitochondrial%20calcium%20uniporter
The mitochondrial calcium uniporter (MCU) is a transmembrane protein that allows the passage of calcium ions from a cell's cytosol into mitochondria. Its activity is regulated by MICU1 and MICU2, which together with the MCU make up the mitochondrial calcium uniporter complex. The MCU is one of the primary routes of mitochondrial calcium uptake, and flow is dependent on the membrane potential of the inner mitochondrial membrane and the concentration of calcium in the cytosol relative to the concentration in the mitochondria. Balancing calcium concentration is necessary to increase the cell's energy supply and regulate cell death. Calcium is balanced through the MCU in conjunction with the sodium-calcium exchanger. The MCU has a very low affinity for calcium, so the cytosolic calcium concentration needs to be approximately 5–10 μM for significant transport of calcium into the mitochondria. Mitochondria are closely associated with the endoplasmic reticulum (ER) at contact sites; the ER contains stores of cellular calcium ions for calcium signaling. The presence of inositol 1,4,5-trisphosphate (IP3) triggers the release of calcium from these intracellular stores, which creates microdomains of high calcium concentration between the ER and the mitochondria, creating the conditions for the MCU to take up calcium. Ruthenium red and Ru360 are typical reagents used to experimentally block the MCU to study its properties and role in mitochondrial signaling. MICU1 and MICU2 MICU1 Mitochondrial calcium uptake 1 (MICU1) is a single-pass membrane protein that contains two binding domains. This protein was discovered only a few months before the MCU. MICU1 was used as bait to identify the core of the mitochondrial calcium uniporter. Once both MICU1 and the MCU had been discovered, scientists made some intriguing observations regarding the two proteins. MICU1 and MCU share similar RNA sequences and the same pattern of expression, and they interact with one another at the inner mitochondrial membrane. MICU1 was first found through the use of an siRNA screen. The functions of MICU1 are still being studied; however, MICU1 plays some important roles at the inner mitochondrial membrane. MICU1 helps to stabilize the entire mitochondrial calcium uniporter complex, and it also limits the amount of calcium that enters the mitochondria when calcium concentrations are low. Along with limiting the entry of calcium into the mitochondrial matrix, it functions alongside the MCU to keep the accumulated calcium inside the matrix of the mitochondria, and it promotes ion specificity by preventing aberrant loading of transition metals into the mitochondria. MICU2 Mitochondrial calcium uptake 2 (MICU2) is another protein of the inner mitochondrial membrane. It works alongside MICU1 and shares roughly 25% of the same sequence. MICU2 works with MICU1 and the MCU to reduce the amount of calcium coming into the matrix. It has been shown that when both MICU1 and MICU2 are sequestered, calcium uptake is reduced; however, when MICU1 is sequestered and MICU2 is activated, normal calcium flow resumes. It has also been shown that all three proteins, MCU, MICU1, and MICU2, are part of a single complex, the mitochondrial calcium uniporter complex. Research using a CRISPR/Cas9 technique has found that MICU1 and MICU2 play other roles as well. They are essential for cell growth, cell invasion, and cell replication. References Transmembrane transporters Calcium signaling Mitochondria
Mitochondrial calcium uniporter
[ "Chemistry" ]
776
[ "Mitochondria", "Calcium signaling", "Metabolism", "Signal transduction" ]
49,539,168
https://en.wikipedia.org/wiki/No-SCAR%20genome%20editing
No-SCAR genome editing is an editing method that is able to manipulate the Escherichia coli (E. coli) genome. The system relies on recombineering, whereby DNA sequences are combined and manipulated through homologous recombination. No-SCAR is able to manipulate the E. coli genome without the use of the chromosomal markers detailed in previous recombineering methods. Instead, the λ-Red recombination system facilitates donor DNA integration while Cas9 cleaves double-stranded DNA to counter-select against wild-type cells. Although λ-Red and Cas9 genome editing are widely used technologies, the no-SCAR method is novel in combining the two functions; this technique is able to establish point mutations, gene deletions, and short sequence insertions at several genomic loci with increased efficiency and in less time. Background and overview λ-red recombineering system The λ-red recombineering system was published in 1998 and allows for insertions, deletions, or mutations of E. coli genes. In this system, the red operon from bacteriophage λ is transfected into E. coli cells to facilitate incorporation of linear target DNA into the E. coli genome. The bacteriophage λ-red operon consists of the exo, bet, and gam genes which, together, are responsible for recombineering. Phage λ exonuclease (exo) degrades transfected linear target DNA from the 5’ end. Beta binds to the resulting single-stranded 3’ end and incorporates it into the target DNA to form the recombinant DNA. Phage λ gamma is necessary to inhibit E. coli nuclease activity and protect the transformed linear DNA in vivo. Following induction of λ-red operon activity, a linear, double-stranded cassette encoding a selectable marker, such as antibiotic resistance, is transformed into the cells in place of the target gene and incorporated into the DNA behind a specific inducible promoter. This allows for growth selection of the recombinant cells, with the proper insertion location verified using polymerase chain reaction (PCR). Specific incorporation can be achieved by including flanking PCR primers around the inserted linear DNA that are complementary to the targeted insertion site. After selection of the recombinants, a second transformation is needed to remove the selective marker. A plasmid expressing flippase (FLP) can be transformed into the recombined cells; FLP specifically cleaves FLP recognition target sites (FRTs) flanking the antibiotic resistance gene. While this successfully removes the selective marker from the genome, it leaves FRT scars in place of the target gene. The λ-red system has also been optimized for scarless recombination; however, this is a two-step system consisting of selection and counterselection. In this case, a gene cassette with a dual selectable marker can be incorporated into the DNA at the specific location of mutagenesis. After selection of recombinants, a subsequent transformation to transfect linear DNA with the desired mutation is performed, which will then be homologously recombined into the cellular DNA in place of the marker. Therefore, counterselection against the cells containing the marker needs to be performed in order to identify the cells that have successfully incorporated the linear DNA into the target sequence. This can be verified using PCR screening. The method of recombination detailed above is advantageous as it provides an alternative to the low-efficiency, laborious, and multi-step recombination processes using endonucleases and ligases. 
Therefore, λ-red recombination offers more freedom in the genomic alterations that are possible, since these are not governed by the locations of restriction enzyme recognition sites. However, it also has many limitations. Multiple rounds of transformations can increase the risk of error and increase recombination time. Therefore, efficiency can be low (0.1–10% for point mutations; 10−5–10−6 for insertions, deletions, or replacements) and requires long growth into workable colonies between transformations and recombinant selections. Altogether, this contributes to a lengthy and inefficient mutagenesis procedure for even single mutations. This technique can also leave scars that can contribute to destabilization of the chromosome and impact the success of the manipulation. CRISPR/Cas9 recombination A more recent method for genome editing uses CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) sequences and the endonuclease Cas9 (CRISPR-associated protein 9). These components are an integral part of the immune response of some bacteria and have been repurposed for genome engineering. In this system, sequences matching foreign bacteriophage or plasmid DNA are incorporated as "spacer" sequences into the bacterial genome, located between repeating CRISPR loci. Cas endonucleases are able to initiate double-strand breaks within these foreign DNAs (the “protospacers”) at sequences complementary to the transcribed CRISPR RNAs (crRNAs), thus degrading them. A conserved protospacer-adjacent motif (PAM, sequence 5’-NGG-3’) located immediately downstream of the protospacer in the cellular genome is necessary for Cas9 cleavage. Together, this system allows for adaptive immunity to dynamic viral genetic material. In 2013, methods for harnessing this system for use in editing mutations or insertions of specific E. coli sequences were published. This method includes constructing a plasmid consisting of the cas9 gene and CRISPR loci containing the matching target DNA, called single guide RNA (sgRNA). After expression is induced, Cas9 is able to identify the target DNA sequence of the cellular genome by finding the sgRNA complement and to initiate strand breaks in the E. coli genome. This allows a transfected, linear DNA sequence to be incorporated into the genome, relying on the cellular machinery to use the linear DNA, flanked by homologous regions specific to the cleaved location, as a template for rebuilding via homology-directed repair. This method is advantageous in that it allows multiple mutations to be introduced into the genome in one experiment. It also has vastly improved efficiency over previous technologies and allows for almost any kind of mutation with ease. These advantages make the CRISPR/Cas9 recombination technique the most promising for application in human health. However, major drawbacks exist, including, most prominently, off-target cleavage that can result in unintended genome disruptions. Integrating λ-red and Cas9 recombination Reisch and Prather pioneered a technique that combines both the λ-red and CRISPR/Cas9 recombination systems to form a novel methodology called no-SCAR (Scarless Cas9 Assisted Recombineering) for E. coli genome modifications. In this method, a plasmid containing the gene for Cas9 expression (cas9) is first transformed into E. coli cells. After selecting for the transformants using antibiotic resistance, another plasmid containing the targeted gene of interest in the form of sgRNA and the λ-red operon is transformed. 
After induced expression of the λ-red recombineering system, linear DNA to be incorporated into the E. coli genome is transformed into the cells. The expression of Cas9 and the sgRNA is then induced, which results in Cas9 locating the E. coli target DNA based on the sgRNA complement. Cas9 is able to initiate a double-strand break, and the λ-red system is able to bring the linear DNA to the E. coli genome for homologous recombination. The cells are then cured of the plasmid containing the specific sgRNA, and the next plasmid containing the next sgRNA target sequence can be transformed and the process repeated. This system is able to modify numerous genome locations rapidly. In instances in which a small number of mutations are introduced into the genome, the time efficiency of no-SCAR is comparable to other methods. However, with a large number of alterations, this technique is superior. Methodology Plasmids SCAR-less editing uses plasmids for genome modification created using circular polymerase extension cloning. Briefly, in a one-step reaction, this process can assemble and clone multiple inserts into any vector and requires no restriction enzyme digestion, ligation, or homologous recombination. Therefore, this method contributes to the cost-effectiveness and throughput of the no-SCAR system. The no-SCAR technique uses a two-plasmid system; this is because co-transformation of both the cas9-containing plasmid and the sgRNA plasmid results in cell death. More specifically, the cell lethality is a consequence of Cas9 cleaving the E. coli DNA that matches the sgRNA. In order to circumvent this issue, multiple plasmids are used to maintain expressional control of both cas9 and the sgRNA. The PTET promoter plays an integral role in the expression of both plasmids. It drives transcriptional expression of both the sgRNA of interest and cas9 in the host cell upon induction with anhydrotetracycline (aTc). In the absence of the inducer, the PTET promoter is repressed by the constitutively expressed tetracycline repressor protein (TetR). Therefore, the presence of TetR in the host cell prior to the introduction of both the sgRNA and cas9 is a measure to avoid cell lethality. The first plasmid used in the pioneering work of Reisch and Prather is composed of: the cas9 gene under the control of the PTET promoter; the tetR gene, which codes for the tetracycline repressor protein, under the control of a constitutive promoter; and the cmr gene for chloramphenicol resistance. It was observed that leaky expression of Cas9 occurred even without induction of the PTET promoter. Therefore, to avoid cell death, a transfer messenger RNA (ssrA) tag was included in the plasmid downstream of the cas9 gene. In the event of leaky Cas9 expression, the C-terminal ssrA tag would be recognized by the ClpP protease and Cas9 would be degraded, allowing better expressional control of the protein. Together, these components make up the pCas9cr4 plasmid and allow targeting of the host cell chromosome. The second plasmid used in the no-SCAR method consists of: the sgRNA of interest, expressed under the control of the PTET promoter; the three genes that make up the λ-red system (exo, bet, and gam), under the control of the ParaB promoter, which is induced by arabinose; and the gene conferring resistance to aminoglycosides such as spectinomycin and streptomycin. These components make up the pKDsg-XXX plasmid, which facilitates λ-red-mediated alterations of the E. coli genome, where -XXX denotes the targeted sequence to modify. 
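Since Cas9 can only cut where the chosen protospacer lies immediately upstream of an NGG PAM, selecting the -XXX target for a pKDsg-XXX plasmid amounts to scanning the locus of interest for protospacer-PAM pairs. The sketch below is a hypothetical helper for such a scan on a single strand; the 20-nt protospacer length and the example sequence are assumptions for illustration only and are not taken from Reisch and Prather.

```python
import re

def find_protospacers(sequence: str, spacer_len: int = 20):
    """Return (start, protospacer, PAM) tuples for every NGG PAM found on the given strand."""
    sequence = sequence.upper()
    hits = []
    # A lookahead is used so that overlapping NGG motifs are all reported.
    for m in re.finditer(r"(?=([ACGT]GG))", sequence):
        pam_start = m.start()
        if pam_start >= spacer_len:                       # need a full protospacer upstream
            protospacer = sequence[pam_start - spacer_len:pam_start]
            hits.append((pam_start - spacer_len, protospacer, m.group(1)))
    return hits

# Hypothetical 60-bp stretch of an E. coli locus (illustrative only).
locus = "ATGGCTAAAGTTCAGGCTTACGTTGCTGACGGTAAAGATCCGTTCTGGCGTAAAGAAGGT"
for start, spacer, pam in find_protospacers(locus):
    print(start, spacer, pam)
```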
In summary, theoretically, the plasmids for use in this method can be constructed to allow for complete customization of the protocol. One stipulation is that each of the two constructed plasmids should have distinct selectable markers, such as genes conferring resistance to two different antibiotics, to allow for targeted selection and counterselection. Furthermore, the pCas9cr4 plasmid can be purchased from Addgene (ID 62655) for direct implementation in a no-SCAR recombination experiment, and the pKDsgRNA-p15 (ID 62656) and pKDsgRNA-ack (ID 62654) plasmids can also be purchased from Addgene. Transformations The next step in the no-SCAR protocol is to transform the pCas9cr4 and pKDsg-XXX plasmids and linear oligonucleotides into the E. coli cells themselves. In order to achieve this, the cells are made electrocompetent; one such method, as used by Reisch and Prather, is a glycerol/mannitol density step gradient, which is fast and simple. This allows for transformation through electroporation and introduction of the plasmids and DNA into the cells. In order to increase transformation efficiency, transformed cells are grown in super optimal broth to expedite the recovery process after transformation. In this method, two subsequent transformations must be performed in order to incorporate the pCas9cr4 and pKDsg-XXX plasmids necessary for recombination. This is because preliminary studies found that when both Cas9 and the sgRNA were expressed at the same time without the linear DNA to be incorporated into the genome, cell death was induced due to the disruption of the crucial gene. Therefore, the expression of the recombination machinery and the sgRNA was kept under strict control, and the plasmids were transformed stepwise to reduce cell lethality. To ensure the pCas9cr4 plasmid was first successfully admitted to the cells, the cells were grown on a plate containing chloramphenicol; the pCas9cr4 plasmid contained the cmr gene, conferring resistance to chloramphenicol, which ensured only the successful recombinants grew. Triplicate plates of 10 μL of recovered cultures, as well as dilutions of 10−1, 10−2, and 10−3, were then spotted and incubated overnight at 30 °C, and CFU (colony forming unit) assessments were then made to identify successful mutants using the miniaturized plating method described previously. Once the mutation of interest was screened using overnight growth on M9 minimal medium plates supplemented with glycerol, chloroacetate and SOC (super optimal broth with catabolite repression), colonies were patched onto selective plates using a toothpick and incubated at 30 °C for two nights. After sufficient colony growth, cells were transfected with the pKDsg-XXX plasmid containing the aadA gene, and as a result were resistant to the aminoglycosides spectinomycin and streptomycin. To ensure that the pKDsg-XXX plasmid was successfully admitted to the cells, the cells were grown on a plate containing both chloramphenicol and spectinomycin to select for cells containing both the pCas9cr4 and pKDsg-XXX plasmids. Following successful recombination of the linear DNA into the target genome, plasmid origins and markers can be re-used as a result of the plasmid curing method. The pKDsgRNA plasmid contains a temperature-sensitive open reading frame; when the cells are grown at 37 °C, the plasmid can no longer be maintained and is lost. This allows for easy plasmid curing that does not require any additional reagents. 
This is useful because upon curing of the pKDsg-XXX plasmid, another pKDsg-XXX plasmid with a different sgRNA can subsequently be transfected into the E. coli cells to introduce further mutations into the target cellular sequences. After all mutations have been introduced, both plasmids should be cured. Unfortunately, the pCas9cr4 plasmid lacks an inherent curing mechanism, so Reisch and Prather devised a plasmid curing mechanism by introducing a pKDsgRNA whose sgRNA is complementary to the pCas9cr4 plasmid. Specifically, they constructed a pKDsg-p15A, which targeted the p15A origin of replication of the pCas9cr4 plasmid. After recombinants were selected for, expression of Cas9 and the sgRNA was induced with the addition of aTc. After plating on selective plates and growing at 37 °C, they observed no colony formation on the LB plates containing chloramphenicol, indicating loss of the pCas9cr4 plasmid due to a Cas9-mediated double-strand break in the plasmid. Additional plasmid mini-preps demonstrated that neither plasmid was retained in the cells, therefore indicating plasmid curing. This technique can easily be applied to curing other plasmids as well. Recombinations Oligonucleotides used for subsequent recombinations should follow several guidelines to help maximize success. First, the optimal oligo length for the transfected linear DNA is between 60 and 90 base pairs. This guideline is based on previous observations that this length has the highest allelic replacement efficiency. Longer oligos are prone to forming hairpin structures that are not only inhibitory, but also more expensive to synthesize. Shorter oligos have lower hybridization energies, resulting in decreased stability of the oligo on the chromosomal target. Of this sequence, at least 15 base pairs should be homologous to the target sequence at both the 5’ and 3’ ends to provide sufficient oligo annealing. Another consideration is the inclusion of phosphorothioate bonds. In a phosphorothioate bond, one of the non-bridging oxygens of the phosphate linkage between nucleotides is replaced with a sulfur atom, changing the chemical properties. This modification is necessary in oligo design because it results in decreased susceptibility to exonuclease degradation. At maximum, four of these bonds should be situated near the 5’ end of the oligo. Too many phosphorothioate bonds can be problematic; each modified site creates a chiral center, which can lead to a racemic mixture of isomers with varying characteristics and properties. Further optimization can be achieved by designing oligos that target the lagging strand in DNA replication, because targeting the lagging strand results in a higher frequency of recombinants. In DNA replication, RNA primers must be inserted along the lagging strand so that DNA polymerase is able to synthesize the strand in the 5’ to 3’ direction. This discontinuous synthesis results in the lagging strand being more exposed, which allows for easier beta-mediated annealing of the oligo to the target DNA than annealing to the leading strand. Finally, the mismatch repair (MMR) machinery offers the cell inherent protection against nucleotide base mismatches. Therefore, any mutations introduced in the oligonucleotides can be targeted by the MMR machinery. There are several ways to avoid this. First, the use of an E. coli strain that is MMR deficient will eradicate this issue. However, this also results in a higher rate of other random mutations throughout the genome. 
Another method to reduce mismatch repair is to bury the mutations of interest within other silent mutations. Since silent mutations do not often cause catastrophic effects, they are poorly detected by the MMR machinery. The presence of modified bases will also help to evade the MMR machinery because they are not recognized. Finally, long segments of mutations will be less affected by MMR than short segments of mutations. Transformed oligonucleotides are incorporated into the cellular DNA through λ-red- and Cas9 endonuclease-mediated homologous recombination. λ-red is activated when arabinose binds to expressed AraC, inducing dimerization of the AraC protein and subsequent DNA binding to activate the promoter, prior to oligonucleotide transfection. This step is followed by induced expression of the Cas9 nuclease and the sgRNA through binding of anhydrotetracycline to the PTET promoter. Cas9, with the guidance of the transformed sgRNA, identifies the complementary E. coli target sequence and initiates a double-strand break. One crucial aspect is that the target gene must be in close proximity upstream of the PAM sequence. Fortunately, the PAM NGG sequence occurs at 424,651 instances on both strands of the E. coli chromosome, so this method is not limited in its targeted specificity. Meanwhile, dimerized λ gam binds to the host RecBCD and SbcCD nucleases, inhibiting all of their known activities, which prevents degradation of the linear foreign DNA. λ exo binds to double-stranded linear transformed DNA and processively degrades it in a 5’ to 3’ direction. Exo is a globular, trimeric protein that forms a ring shape with a hollow center that positions the linear DNA for cleavage. One side of the ring is large enough to admit double-stranded DNA, but the other end can only accommodate single-stranded DNA, providing insight into the exo mechanism of action. This process results in single-stranded 3’ overhangs on the linear DNA. Subsequently, λ beta, a member of a recombinase family, binds to the 3’ overhangs and mediates annealing to the complementary E. coli DNA. This process occurs through invasion of the single-stranded 3’ overhang into the complementary target DNA on the lagging strand during DNA synthesis, allowing beta-facilitated annealing. In summary, this protocol allows for almost unlimited genome targeting provided that one stipulation is considered: the method is limited to targeting E. coli sequences that are located directly upstream of a PAM NGG sequence, so experiments must be designed to accommodate this restriction. PCR screen Once colonies are selected, transformants are genotyped using allele-specific PCR. In this process, a mutant PCR primer is used to select for the mutant over the wild-type genotype. If the mutant genotype is present, it anneals to the 3’ end of the PCR primer, while the wild-type genotype results in mismatched DNA at the 3’ end. The mismatch between the 3’ end of the primer and the wild type prevents primer extension, and thus only the mutant genotype produces a PCR product. Hot-start Taq polymerase lacking 3’ to 5’ exonuclease activity was used for colony PCR of the putative mutants. The SCAR-less method is able to induce point mutations, oligonucleotide-mediated deletions, and short sequence insertions with high efficiency. 
In the case of point mutations, oligonucleotides targeted to the lagging strand are designed to alter either the PAM sequence or the protospacer sequence within 12 base pairs of the PAM, as this is the most important region for Cas9 specificity. With this process, both nonsense and missense mutations are possible. However, the subsequent transformations differ between nonsense and missense mutations: double transformations, where the pCas9cr4 plasmid and oligonucleotide are transformed simultaneously, are more efficient for nonsense mutations, while single transformations, where each plasmid and oligonucleotide are transformed independently, are better suited for missense mutations, as demonstrated by colony PCR. The method is also able to successfully delete large regions of chromosomal DNA using oligonucleotides designed with upstream and downstream homology to the area of deletion. In the case of deletions, a single transformation protocol is more efficient and shows a 100% deletion rate following colony PCR. Further work should be pursued to determine the causes of the different transformation efficiencies for the various mutation types. Short sequence insertions are also possible using the SCAR-less method. In order to circumvent oligonucleotide length constraints, linear dsDNA sequences are used to insert fragments into the chromosome. Because the system uses a plasmid that expresses the exo and gam genes from the λ-red system, it is able to use oligonucleotide recombineering techniques for sequence insertion. More specifically, the dsDNA strands facilitate insertion through the mechanism of single-strand intermediates at the replication fork. Previous methods were successful in inserting 30 bp into the chromosome, while the SCAR-less method was able to insert a sequence ten times greater (300 bp). More importantly, PCR screens identified a 100% insertion rate in three separate experiments. Although recombination efficiency decreases over time, the SCAR-less method is expected to successfully facilitate insertions of over 1 kb of sequence. Limitations The current system is not able to simultaneously select against more than one target as a result of difficulties in DNA synthesis and ligation-independent cloning methods. Essentially this means that the highly repetitive sequences necessary for multiple sgRNAs in a single plasmid are constrained by flaws in the biological machinery necessary to express them and target them to more than one target. Future improvements are necessary to allow simultaneous genomic manipulations using several distinct sgRNAs in a single plasmid. Challenges underlying plasmid recombination and loss of protospacer sequences also affect the no-SCAR method and require further improvements. Poor plasmid recombination underlies escape-frequency difficulties that may impact the efficiency of genetic manipulation. Applications Research The no-SCAR method is more efficient than standard cloning techniques, including ssDNA recombineering, as well as other scar-free genome editing techniques. The method allows for an unlimited number of single-step genomic alterations without the use of selectable markers. The great advantage of the method, specifically in terms of research applications, is the speed of the protocol. In the seminal paper by Reisch and Prather, the no-SCAR method required 5 days of work, including the cloning step necessary for sgRNA targeting. 
Once the cells contain the pCas9cr4 plasmid, subsequent experiments can be completed in as little as 3 days, allowing for rapid genome editing. Currently, the no-SCAR method is faster than any other method published and is an attractive option for researchers interested in studying the effects of numerous modifications. When compared specifically to other scar-free genome editing techniques, including the method published by Datsenko and Wanner, the no-SCAR method is less time-consuming when multiple mutations are desired. Two components of the method mediate this rapid mutation ability. First, the pCas9cr4 plasmid is retained in the cells after the first iteration, thus preventing repeated transformation of the plasmid into the cells. In instances where three genome modifications are desired, for example, this means that the no-SCAR method is able to mediate these mutations four days faster than the next fastest method. Second, the wild-type gene is never removed from the chromosome. This means that PCR screening is able to more quickly identify numerous mutants because essential wild-type genes can be targeted with ease. Human health The discovery of the CRISPR/Cas9 genome editing system has revolutionized genetic research. In terms of human health, it has applications both to specific diseases and to stem cell systems that model these same diseases. In stem cell research, the CRISPR system has been successfully applied to a wide spectrum of diseases. Mutations in the CFTR gene in cultured intestinal stem cells from cystic fibrosis patients were corrected using CRISPR/Cas. Subsequent applications built upon simple sequence corrections and successfully repaired a chromosomal inversion abnormality in hemophilia A. Both applications demonstrate the utility of pairing CRISPR/Cas with stem cell models in the study and treatment of genetic disease. With the advent of patient-derived induced pluripotent stem cells (iPSCs), the applicability of CRISPR/Cas is further strengthened. To date, CRISPR methods have successfully repaired disease-associated genetic mutations in 1) blood disorders such as β-thalassemia, 2) immunological deficiencies such as severe combined immunodeficiency (SCID) and 3) neuromuscular diseases such as Duchenne muscular dystrophy. More importantly, the correction of these genetic mutations provides a potential future vehicle for cell and gene therapies in which the patient’s own repaired stem cells can be re-implanted. The no-SCAR method, as an improvement of the CRISPR/Cas system, will play an important role in modeling human disease using iPS cells and, in the future, in treating these same diseases. References Genome editing
No-SCAR genome editing
[ "Engineering", "Biology" ]
5,918
[ "Genetics techniques", "Genetic engineering", "Genome editing" ]
50,660,598
https://en.wikipedia.org/wiki/750%20GeV%20diphoton%20excess
The 750 GeV diphoton excess in particle physics was an anomaly in data collected at the Large Hadron Collider (LHC) in 2015, which could have been an indication of a new particle or resonance. The anomaly was absent in data collected in 2016, suggesting that the diphoton excess was a statistical fluctuation. In the interval between the December 2015 and August 2016 results, the anomaly generated considerable interest in the scientific community, including about 500 theoretical studies. The hypothetical particle was denoted by the Greek letter Ϝ (pronounced digamma) in the scientific literature, owing to the decay channel in which the anomaly occurred. The data, however, were always less than five standard deviations (sigma) different from what would be expected if there were no new particle, and, as such, the anomaly never reached the accepted level of statistical significance required to announce a discovery in particle physics. After the August 2016 results, interest in the anomaly sank as it was considered a statistical fluctuation. Indeed, a Bayesian analysis of the anomaly found that whilst data collected in 2015 constituted "substantial" evidence for the digamma on the Jeffreys scale, data collected in 2016 combined with that collected in 2015 was evidence against the digamma. December 2015 data On December 15, 2015, the ATLAS and CMS collaborations at CERN presented results from the second operational run of the Large Hadron Collider (LHC) at the centre-of-momentum energy of 13 TeV, the highest ever achieved in proton-proton collisions. Among the results, the invariant mass distribution of pairs of high-energy photons produced in the collisions showed an excess of events compared to the Standard Model prediction at around 750 GeV. The statistical significance of the deviation was reported to be 3.9 and 3.4 standard deviations (locally) for the two experiments, respectively. The excess could have been explained by the production of a new particle (the digamma) with a mass of about 750 GeV/c2 that decayed into two photons. The cross-section at 13 TeV centre-of-momentum energy required to explain the excess, multiplied by the branching fraction into two photons, was estimated to be of the order of a few femtobarns (fb). This result, while unexpected, was compatible with previous experiments, and in particular with the LHC measurements at a lower centre-of-momentum energy of 8 TeV. August 2016 data Analysis of a larger sample of data, collected by ATLAS and CMS in the first half of 2016, did not confirm the existence of the Ϝ particle, indicating that the excess seen in 2015 was a statistical fluctuation. Implications for particle physics research The non-observation of the 750 GeV bump in follow-up searches by the ATLAS and CMS experiments had a significant impact on the particle physics community. The event highlighted the desire in the community for the LHC to discover a fundamentally new particle, and the difficulties in searching for a signal which is unknown a priori. See also Sgoldstino References Hypothetical elementary particles Hypothetical composite particles Physics beyond the Standard Model 2015 in science 2016 in science Large Hadron Collider Obsolete theories in physics
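As a side note on the significance figures quoted above, the following short Python sketch (illustrative only, not part of the article) converts a local significance in standard deviations into the corresponding one-sided p-value, including the conventional five-sigma discovery threshold.

```python
# Convert local significances (in sigma) into one-sided p-values.
from scipy.stats import norm

for z in (3.4, 3.9, 5.0):
    p_local = norm.sf(z)          # one-sided tail probability of a z-sigma excess
    print(f"{z:.1f} sigma  ->  local p-value ~ {p_local:.2e}")

# Note: these are *local* p-values; the global significance is lower once the
# look-elsewhere effect (searching over many mass hypotheses) is accounted for.
```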
750 GeV diphoton excess
[ "Physics" ]
643
[ "Theoretical physics", "Unsolved problems in physics", "Particle physics", "Hypothetical elementary particles", "Physics beyond the Standard Model", "Obsolete theories in physics" ]
50,664,282
https://en.wikipedia.org/wiki/Hesse%27s%20principle%20of%20transfer
In geometry, Hesse's principle of transfer (German: Übertragungsprinzip) states that if the points of the projective line P1 are depicted by a rational normal curve in Pn, then the group of the projective transformations of Pn that preserve the curve is isomorphic to the group of the projective transformations of P1 (this is a generalization of the original Hesse's principle, in a form suggested by Wilhelm Franz Meyer). It was originally introduced by Otto Hesse in 1866, in a more restricted form. It influenced Felix Klein in the development of the Erlangen program. Since its original conception, it has been generalized by many mathematicians, including Klein, Fano, and Cartan. See also Rational normal curve References Original reference Hesse, L. O. (1866). "Ein Uebertragungsprinzip", Crelle's Journal. Further reading Hawkins, Thomas (1988). "Hesse's principle of transfer and the representation of Lie algebras", Archive for History of Exact Sciences, 39(1), pp. 41–73. Projective geometry Invariant theory Group theory Symmetry Birational geometry
Hesse's principle of transfer
[ "Physics", "Mathematics" ]
230
[ "Symmetry", "Group actions", "Group theory", "Fields of abstract algebra", "Geometry", "Geometry stubs", "Invariant theory" ]
50,668,615
https://en.wikipedia.org/wiki/TGF-beta%20receptor%20family
The transforming growth factor beta (TGFβ) receptors are a family of serine/threonine kinase receptors involved in TGF beta signaling pathway. These receptors bind growth factor and cytokine signaling proteins in the TGF-beta family such as TGFβs (TGFβ1, TGFβ2, TGFβ3), bone morphogenetic proteins (BMPs), growth differentiation factors (GDFs), activin and inhibin, myostatin, anti-Müllerian hormone (AMH), and NODAL. TGFβ family receptors TGFβ family receptors are grouped into three types, type I, type II, and type III. There are seven type I receptors, termed the activin-like receptors (ALK1–7), five type II receptors, and one type III receptor, for a total of 13 TGFβ superfamily receptors. In the transduction pathway, ligand-bound type II receptors activate type I receptors by phosphorylation, which then autophosphorylate and bind SMAD. The Type I receptors have a glycine-serine (GS, or TTSGSGSG) repeat motif of around 30 AA, a target of type II activity. At least three, and perhaps four to five of the serines and threonines in the GS domain, must be phosphorylated to fully activate TbetaR-1. Type I ALK1 (ACVRL1) ALK2 (ACVR1A) ALK3 (BMPR1A) ALK4 (ACVR1B) ALK5 (TGFβR1) ALK6 (BMPR1B) ALK7 (ACVR1C) Type II TGFβR2 BMPR2 ACVR2A ACVR2B AMHR2 (AMHR) Type III Unlike the Type I and II receptors which are kinases, TGFBR3 has a Zona pellucida-like domain. Its core domain binds TGF-beta family ligands and its heparan sulfate chains bind bFGF. It acts as a reservoir of ligand for TGF-beta receptors. TGFβR3 (β-glycan) References TGF beta receptors Transmembrane receptors Protein families
TGF-beta receptor family
[ "Chemistry", "Biology" ]
486
[ "Transmembrane receptors", "Protein families", "Protein classification", "Signal transduction" ]
45,307,162
https://en.wikipedia.org/wiki/Tarextumab
Tarextumab (formerly OMP-59R5) is a fully human monoclonal antibody targeting the Notch 2/3 receptors. It is being tested as a possible treatment for cancer. In January 2015, the US FDA granted orphan drug designation to tarextumab for the treatment of pancreatic cancer and lung cancer. Two early stage clinical trials have reported encouraging results. See also Notch signaling pathway, e.g. in embryo tissue development References Experimental cancer drugs Monoclonal antibodies
Tarextumab
[ "Chemistry" ]
102
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
45,313,845
https://en.wikipedia.org/wiki/CRISPR/Cas%20tools
CRISPR-Cas design tools are computer software platforms and bioinformatics tools used to facilitate the design of guide RNAs (gRNAs) for use with the CRISPR/Cas gene editing system. CRISPR-Cas The CRISPR/Cas (clustered regularly interspaced short palindromic repeats/CRISPR associated nucleases) system was originally discovered to be an acquired immune response mechanism used by archaea and bacteria. It has since been adopted for use as a tool in the genetic engineering of higher organisms. Designing an appropriate gRNA is an important element of genome editing with the CRISPR/Cas system. A gRNA can and at times does have unintended interactions ("off-targets") with other locations of the genome of interest. For a given candidate gRNA, these tools report its list of potential off-targets in the genome thereby allowing the designer to evaluate its suitability prior to embarking on any experiments. Scientists have also begun exploring the mechanics of the CRISPR/Cas system and what governs how good, or active, a gRNA is at directing the Cas nuclease to a specific location of the genome of interest. As a result of this work, new methods of assessing a gRNA for its 'activity' have been published, and it is now best practice to consider both the unintended interactions of a gRNA as well as the predicted activity of a gRNA at the design stage. Table The below table lists available tools and their attributes. References Genetic engineering Genome editing
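To make the two design criteria discussed above concrete — enumerating candidate gRNAs next to an NGG PAM and listing their potential off-target matches — here is a toy Python sketch. It is not the algorithm of any particular tool covered by the article; the genome string and mismatch budget are hypothetical placeholders.

```python
# Toy sketch of two checks a gRNA design tool performs: enumerate 20-nt spacer
# candidates adjacent to an NGG PAM, then count genome sites matching each
# spacer within a mismatch budget (a crude stand-in for off-target scoring).
# This is not any listed tool's algorithm; the genome string is a placeholder.

def revcomp(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def candidate_spacers(genome: str, spacer_len: int = 20):
    """Yield (strand, pam_position, spacer) for every NGG PAM on either strand."""
    for strand, seq in (("+", genome), ("-", revcomp(genome))):
        for i in range(spacer_len, len(seq) - 2):
            if seq[i + 1:i + 3] == "GG":              # i is the N of the NGG PAM
                yield strand, i, seq[i - spacer_len:i]

def matching_sites(spacer: str, genome: str, max_mismatches: int = 3) -> int:
    """Count sites on either strand matching the spacer within the mismatch budget
    (the on-target site itself is included in the count)."""
    hits, n = 0, len(spacer)
    for seq in (genome, revcomp(genome)):
        for i in range(len(seq) - n + 1):
            if sum(a != b for a, b in zip(seq[i:i + n], spacer)) <= max_mismatches:
                hits += 1
    return hits

genome = "ATGCGTACCGGATTTACGGCTAGCTAGGCCGTACGGTTAGGCATCGGAGGACGTTAAGG"  # toy sequence
for strand, pos, spacer in candidate_spacers(genome):
    print(strand, pos, spacer, "matching sites:", matching_sites(spacer, genome))
```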
CRISPR/Cas tools
[ "Chemistry", "Engineering", "Biology" ]
312
[ "Genetics techniques", "Biological engineering", "Genome editing", "Genetic engineering", "Molecular biology" ]
35,456,546
https://en.wikipedia.org/wiki/Ricci%20calculus
In mathematics, Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields on a differentiable manifold, with or without a metric tensor or connection. It is also the modern name for what used to be called the absolute differential calculus (the foundation of tensor calculus), tensor calculus or tensor analysis developed by Gregorio Ricci-Curbastro in 1887–1896, and subsequently popularized in a paper written with his pupil Tullio Levi-Civita in 1900. Jan Arnoldus Schouten developed the modern notation and formalism for this mathematical framework, and made contributions to the theory, during its applications to general relativity and differential geometry in the early twentieth century. The basis of modern tensor analysis was developed by Bernhard Riemann in a paper from 1861. A component of a tensor is a real number that is used as a coefficient of a basis element for the tensor space. The tensor is the sum of its components multiplied by their corresponding basis elements. Tensors and tensor fields can be expressed in terms of their components, and operations on tensors and tensor fields can be expressed in terms of operations on their components. The description of tensor fields and operations on them in terms of their components is the focus of the Ricci calculus. This notation allows an efficient expression of such tensor fields and operations. While much of the notation may be applied with any tensors, operations relating to a differential structure are only applicable to tensor fields. Where needed, the notation extends to components of non-tensors, particularly multidimensional arrays. A tensor may be expressed as a linear sum of the tensor product of vector and covector basis elements. The resulting tensor components are labelled by indices of the basis. Each index has one possible value per dimension of the underlying vector space. The number of indices equals the degree (or order) of the tensor. For compactness and convenience, the Ricci calculus incorporates Einstein notation, which implies summation over indices repeated within a term and universal quantification over free indices. Expressions in the notation of the Ricci calculus may generally be interpreted as a set of simultaneous equations relating the components as functions over a manifold, usually more specifically as functions of the coordinates on the manifold. This allows intuitive manipulation of expressions with familiarity of only a limited set of rules. Applications Tensor calculus has many applications in physics, engineering and computer science including elasticity, continuum mechanics, electromagnetism (see mathematical descriptions of the electromagnetic field), general relativity (see mathematics of general relativity), quantum field theory, and machine learning. Working with a main proponent of the exterior calculus Élie Cartan, the influential geometer Shiing-Shen Chern summarizes the role of tensor calculus:In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians. In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. 
In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus. Notation for indices Basis-related distinctions Space and time coordinates Where a distinction is to be made between the space-like basis elements and a time-like element in the four-dimensional spacetime of classical physics, this is conventionally done through indices as follows: The lowercase Latin alphabet is used to indicate restriction to 3-dimensional Euclidean space, which take values 1, 2, 3 for the spatial components; and the time-like element, indicated by 0, is shown separately. The lowercase Greek alphabet is used for 4-dimensional spacetime, which typically take values 0 for time components and 1, 2, 3 for the spatial components. Some sources use 4 instead of 0 as the index value corresponding to time; in this article, 0 is used. Otherwise, in general mathematical contexts, any symbols can be used for the indices, generally running over all dimensions of the vector space. Coordinate and index notation The author(s) will usually make it clear whether a subscript is intended as an index or as a label. For example, in 3-D Euclidean space and using Cartesian coordinates; the coordinate vector shows a direct correspondence between the subscripts 1, 2, 3 and the labels , , . In the expression , is interpreted as an index ranging over the values 1, 2, 3, while the , , subscripts are only labels, not variables. In the context of spacetime, the index value 0 conventionally corresponds to the label . Reference to basis Indices themselves may be labelled using diacritic-like symbols, such as a hat (ˆ), bar (¯), tilde (˜), or prime (′) as in: to denote a possibly different basis for that index. An example is in Lorentz transformations from one frame of reference to another, where one frame could be unprimed and the other primed, as in: This is not to be confused with van der Waerden notation for spinors, which uses hats and overdots on indices to reflect the chirality of a spinor. Upper and lower indices Ricci calculus, and index notation more generally, distinguishes between lower indices (subscripts) and upper indices (superscripts); the latter are not exponents, even though they may look as such to the reader only familiar with other parts of mathematics. In the special case that the metric tensor is everywhere equal to the identity matrix, it is possible to drop the distinction between upper and lower indices, and then all indices could be written in the lower position. Coordinate formulae in linear algebra such as for the product of matrices may be examples of this. But in general, the distinction between upper and lower indices should be maintained. Covariant tensor components A lower index (subscript) indicates covariance of the components with respect to that index: Contravariant tensor components An upper index (superscript) indicates contravariance of the components with respect to that index: Mixed-variance tensor components A tensor may have both upper and lower indices: Ordering of indices is significant, even when of differing variance. However, when it is understood that no indices will be raised or lowered while retaining the base symbol, covariant indices are sometimes placed below contravariant indices for notational convenience (e.g. with the generalized Kronecker delta). 
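For concreteness, a standard statement of how these components behave under a change of coordinates — with the index and coordinate names chosen here for illustration rather than taken from the article — is:

```latex
% Transformation of components under a passive change of coordinates
% x^{\nu} \to \bar{x}^{\mu} (names chosen for illustration):
A_{\bar{\mu}} = \frac{\partial x^{\nu}}{\partial \bar{x}^{\mu}}\, A_{\nu}
\quad\text{(covariant)},
\qquad
A^{\bar{\mu}} = \frac{\partial \bar{x}^{\mu}}{\partial x^{\nu}}\, A^{\nu}
\quad\text{(contravariant)},
\qquad
T^{\bar{\alpha}}{}_{\bar{\beta}} =
\frac{\partial \bar{x}^{\alpha}}{\partial x^{\gamma}}\,
\frac{\partial x^{\delta}}{\partial \bar{x}^{\beta}}\, T^{\gamma}{}_{\delta}
\quad\text{(mixed)}.
```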
Tensor type and degree The number of each upper and lower indices of a tensor gives its type: a tensor with upper and lower indices is said to be of type , or to be a type- tensor. The number of indices of a tensor, regardless of variance, is called the degree of the tensor (alternatively, its valence, order or rank, although rank is ambiguous). Thus, a tensor of type has degree . Summation convention The same symbol occurring twice (one upper and one lower) within a term indicates a pair of indices that are summed over: The operation implied by such a summation is called tensor contraction: This summation may occur more than once within a term with a distinct symbol per pair of indices, for example: Other combinations of repeated indices within a term are considered to be ill-formed, such as an expression in which both occurrences of the repeated index are lower (raising one of them would be fine), or in which the same index appears twice as a lower index within one factor (contracting it against an upper index, or renaming one occurrence, would be fine). The reason for excluding such formulae is that although these quantities could be computed as arrays of numbers, they would not in general transform as tensors under a change of basis. Multi-index notation If a tensor has a list of all upper or lower indices, one shorthand is to use a capital letter for the list: where and . Sequential summation A pair of vertical bars around a set of all-upper indices or all-lower indices (but not both), associated with contraction with another set of indices when the expression is completely antisymmetric in each of the two sets of indices: means a restricted sum over index values, where each index is constrained to being strictly less than the next. More than one group can be summed in this way, for example: When using multi-index notation, an underarrow is placed underneath the block of indices: where Raising and lowering indices By contracting an index with a non-singular metric tensor, the type of a tensor can be changed, converting a lower index to an upper index or vice versa: The base symbol in many cases is retained (e.g. using where appears here), and when there is no ambiguity, repositioning an index may be taken to imply this operation. Correlations between index positions and invariance This table summarizes how the manipulation of covariant and contravariant indices fit in with invariance under a passive transformation between bases, with the components of each basis set in terms of the other reflected in the first column. The barred indices refer to the final coordinate system after the transformation. The Kronecker delta is used, see also below. The table has one column each for the basis transformation, the component transformation and the resulting invariant, and one row each for covectors (covariant vectors, 1-forms) and for vectors (contravariant vectors). General outlines for index notation and operations Tensors are equal if and only if every corresponding component is equal; e.g., tensor equals tensor if and only if for all . Consequently, there are facets of the notation that are useful in checking that an equation makes sense (an analogous procedure to dimensional analysis). Free and dummy indices Indices not involved in contractions are called free indices. Indices used in contractions are termed dummy indices, or summation indices. A tensor equation represents many ordinary (real-valued) equations The components of tensors (like , etc.) are just real numbers. Since the indices take various integer values to select specific components of the tensors, a single tensor equation represents many ordinary equations. 
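As a small worked illustration of the summation convention and of how a single index equation stands for several ordinary ones (an example constructed here, not taken from the article):

```latex
% One free index (\alpha) and one summed dummy index (\beta) in n = 4 dimensions:
a^{\alpha} = \Lambda^{\alpha}{}_{\beta}\, b^{\beta}
\equiv \sum_{\beta=0}^{3} \Lambda^{\alpha}{}_{\beta}\, b^{\beta},
\qquad\text{i.e. four equations, for instance }
a^{0} = \Lambda^{0}{}_{0} b^{0} + \Lambda^{0}{}_{1} b^{1}
 + \Lambda^{0}{}_{2} b^{2} + \Lambda^{0}{}_{3} b^{3}.
```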
If a tensor equality has free indices, and if the dimensionality of the underlying vector space is , the equality represents equations: each index takes on every value of a specific set of values. For instance, if is in four dimensions (that is, each index runs from 0 to 3 or from 1 to 4), then because there are three free indices (), there are 43 = 64 equations. Three of these are: This illustrates the compactness and efficiency of using index notation: many equations which all share a similar structure can be collected into one simple tensor equation. Indices are replaceable labels Replacing any index symbol throughout by another leaves the tensor equation unchanged (provided there is no conflict with other symbols already used). This can be useful when manipulating indices, such as using index notation to verify vector calculus identities or identities of the Kronecker delta and Levi-Civita symbol (see also below). An example of a correct change is: whereas an erroneous change is: In the first replacement, replaced and replaced everywhere, so the expression still has the same meaning. In the second, did not fully replace , and did not fully replace (incidentally, the contraction on the index became a tensor product), which is entirely inconsistent for reasons shown next. Indices are the same in every term The free indices in a tensor expression always appear in the same (upper or lower) position throughout every term, and in a tensor equation the free indices are the same on each side. Dummy indices (which implies a summation over that index) need not be the same, for example: as for an erroneous expression: In other words, non-repeated indices must be of the same type in every term of the equation. In the above identity, line up throughout and occurs twice in one term due to a contraction (once as an upper index and once as a lower index), and thus it is a valid expression. In the invalid expression, while lines up, and do not, and appears twice in one term (contraction) and once in another term, which is inconsistent. Brackets and punctuation used once where implied When applying a rule to a number of indices (differentiation, symmetrization etc., shown next), the bracket or punctuation symbols denoting the rules are only shown on one group of the indices to which they apply. If the brackets enclose covariant indices – the rule applies only to all covariant indices enclosed in the brackets, not to any contravariant indices which happen to be placed intermediately between the brackets. Similarly if brackets enclose contravariant indices – the rule applies only to all enclosed contravariant indices, not to intermediately placed covariant indices. Symmetric and antisymmetric parts Symmetric part of tensor Parentheses, ( ), around multiple indices denotes the symmetrized part of the tensor. When symmetrizing indices using to range over permutations of the numbers 1 to , one takes a sum over the permutations of those indices for , and then divides by the number of permutations: For example, two symmetrizing indices mean there are two indices to permute and sum over: while for three symmetrizing indices, there are three indices to sum over and permute: The symmetrization is distributive over addition; Indices are not part of the symmetrization when they are: not on the same level, for example; within the parentheses and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example; Here the and indices are symmetrized, is not. 
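The explicit low-order symmetrization formulas referred to above, written out here for reference (standard definitions; the index names are illustrative):

```latex
T_{(\mu\nu)} = \tfrac{1}{2}\!\left(T_{\mu\nu} + T_{\nu\mu}\right),
\qquad
T_{(\mu\nu\rho)} = \tfrac{1}{3!}\!\left(T_{\mu\nu\rho} + T_{\nu\rho\mu} + T_{\rho\mu\nu}
 + T_{\mu\rho\nu} + T_{\rho\nu\mu} + T_{\nu\mu\rho}\right),
\qquad
T_{(\alpha|\beta|\gamma)} = \tfrac{1}{2}\!\left(T_{\alpha\beta\gamma} + T_{\gamma\beta\alpha}\right)
\quad\text{($\beta$ excluded from the symmetrization).}
```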
Antisymmetric or alternating part of tensor Square brackets, [ ], around multiple indices denotes the antisymmetrized part of the tensor. For antisymmetrizing indices – the sum over the permutations of those indices multiplied by the signature of the permutation is taken, then divided by the number of permutations: where is the generalized Kronecker delta of degree , with scaling as defined below. For example, two antisymmetrizing indices imply: while three antisymmetrizing indices imply: as for a more specific example, if represents the electromagnetic tensor, then the equation represents Gauss's law for magnetism and Faraday's law of induction. As before, the antisymmetrization is distributive over addition; As with symmetrization, indices are not antisymmetrized when they are: not on the same level, for example; within the square brackets and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example; Here the and indices are antisymmetrized, is not. Sum of symmetric and antisymmetric parts Any tensor can be written as the sum of its symmetric and antisymmetric parts on two indices: as can be seen by adding the above expressions for and . This does not hold for other than two indices. Differentiation For compactness, derivatives may be indicated by adding indices after a comma or semicolon. Partial derivative While most of the expressions of the Ricci calculus are valid for arbitrary bases, the expressions involving partial derivatives of tensor components with respect to coordinates apply only with a coordinate basis: a basis that is defined through differentiation with respect to the coordinates. Coordinates are typically denoted by , but do not in general form the components of a vector. In flat spacetime with linear coordinatization, a tuple of differences in coordinates, , can be treated as a contravariant vector. With the same constraints on the space and on the choice of coordinate system, the partial derivatives with respect to the coordinates yield a result that is effectively covariant. Aside from use in this special case, the partial derivatives of components of tensors do not in general transform covariantly, but are useful in building expressions that are covariant, albeit still with a coordinate basis if the partial derivatives are explicitly used, as with the covariant, exterior and Lie derivatives below. To indicate partial differentiation of the components of a tensor field with respect to a coordinate variable , a comma is placed before an appended lower index of the coordinate variable. This may be repeated (without adding further commas): These components do not transform covariantly, unless the expression being differentiated is a scalar. This derivative is characterized by the product rule and the derivatives of the coordinates where is the Kronecker delta. Covariant derivative The covariant derivative is only defined if a connection is defined. For any tensor field, a semicolon () placed before an appended lower (covariant) index indicates covariant differentiation. Less common alternatives to the semicolon include a forward slash () or in three-dimensional curved space a single vertical bar (). The covariant derivative of a scalar function, a contravariant vector and a covariant vector are: where are the connection coefficients. For an arbitrary tensor: An alternative notation for the covariant derivative of any tensor is the subscripted nabla symbol . 
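For reference, the standard coordinate expressions for the covariant derivative of a scalar, a contravariant vector and a covariant vector, with the connection coefficients written as Γ (supplied here as an illustration in the usual conventions):

```latex
\nabla_{\gamma}\,\phi = \partial_{\gamma}\phi,
\qquad
\nabla_{\gamma} A^{\alpha} = \partial_{\gamma} A^{\alpha}
 + \Gamma^{\alpha}{}_{\gamma\beta}\, A^{\beta},
\qquad
\nabla_{\gamma} A_{\alpha} = \partial_{\gamma} A_{\alpha}
 - \Gamma^{\beta}{}_{\gamma\alpha}\, A_{\beta}.
```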
For the case of a vector field : The covariant formulation of the directional derivative of any tensor field along a vector may be expressed as its contraction with the covariant derivative, e.g.: The components of this derivative of a tensor field transform covariantly, and hence form another tensor field, despite subexpressions (the partial derivative and the connection coefficients) separately not transforming covariantly. This derivative is characterized by the product rule: Connection types A Koszul connection on the tangent bundle of a differentiable manifold is called an affine connection. A connection is a metric connection when the covariant derivative of the metric tensor vanishes: An affine connection that is also a metric connection is called a Riemannian connection. A Riemannian connection that is torsion-free (i.e., for which the torsion tensor vanishes: ) is a Levi-Civita connection. The for a Levi-Civita connection in a coordinate basis are called Christoffel symbols of the second kind. Exterior derivative The exterior derivative of a totally antisymmetric type tensor field with components (also called a differential form) is a derivative that is covariant under basis transformations. It does not depend on either a metric tensor or a connection: it requires only the structure of a differentiable manifold. In a coordinate basis, it may be expressed as the antisymmetrization of the partial derivatives of the tensor components: This derivative is not defined on any tensor field with contravariant indices or that is not totally antisymmetric. It is characterized by a graded product rule. Lie derivative The Lie derivative is another derivative that is covariant under basis transformations. Like the exterior derivative, it does not depend on either a metric tensor or a connection. The Lie derivative of a type tensor field along (the flow of) a contravariant vector field may be expressed using a coordinate basis as This derivative is characterized by the product rule and the fact that the Lie derivative of a contravariant vector field along itself is zero: Notable tensors Kronecker delta The Kronecker delta is like the identity matrix when multiplied and contracted: The components are the same in any basis and form an invariant tensor of type , i.e. the identity of the tangent bundle over the identity mapping of the base manifold, and so its trace is an invariant. Its trace is the dimensionality of the space; for example, in four-dimensional spacetime, The Kronecker delta is one of the family of generalized Kronecker deltas. The generalized Kronecker delta of degree may be defined in terms of the Kronecker delta by (a common definition includes an additional multiplier of on the right): and acts as an antisymmetrizer on indices: Torsion tensor An affine connection has a torsion tensor : where are given by the components of the Lie bracket of the local basis, which vanish when it is a coordinate basis. For a Levi-Civita connection this tensor is defined to be zero, which for a coordinate basis gives the equations Riemann curvature tensor If this tensor is defined as then it is the commutator of the covariant derivative with itself: since the connection is torsionless, which means that the torsion tensor vanishes. This can be generalized to get the commutator for two covariant derivatives of an arbitrary tensor as follows: which are often referred to as the Ricci identities. 
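The coordinate-basis expressions behind the last two subsections, written out here for reference (standard forms, up to the usual sign conventions):

```latex
T^{\lambda}{}_{\mu\nu} = \Gamma^{\lambda}{}_{\mu\nu} - \Gamma^{\lambda}{}_{\nu\mu}
\quad\text{(torsion in a coordinate basis)},
\qquad
R^{\rho}{}_{\sigma\mu\nu} = \partial_{\mu}\Gamma^{\rho}{}_{\nu\sigma}
 - \partial_{\nu}\Gamma^{\rho}{}_{\mu\sigma}
 + \Gamma^{\rho}{}_{\mu\lambda}\Gamma^{\lambda}{}_{\nu\sigma}
 - \Gamma^{\rho}{}_{\nu\lambda}\Gamma^{\lambda}{}_{\mu\sigma}
\quad\text{(Riemann curvature)},
\qquad
\left(\nabla_{\mu}\nabla_{\nu} - \nabla_{\nu}\nabla_{\mu}\right) A^{\rho}
 = R^{\rho}{}_{\sigma\mu\nu}\, A^{\sigma}
\quad\text{(Ricci identity, torsion-free case)}.
```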
Metric tensor The metric tensor is used for lowering indices and gives the length of any space-like curve where is any smooth strictly monotone parameterization of the path. It also gives the duration of any time-like curve where is any smooth strictly monotone parameterization of the trajectory. See also Line element. The inverse matrix of the metric tensor is another important tensor, used for raising indices: See also Abstract index notation Connection Curvilinear coordinates Tensors in curvilinear coordinates Differential form Differential geometry Exterior algebra Hodge star operator Holonomic basis Matrix calculus Metric tensor Multilinear algebra Multilinear subspace learning Penrose graphical notation Regge calculus Ricci calculus Ricci decomposition Tensor (intrinsic definition) Tensor calculus Tensor field Vector analysis Notes References Sources Further reading External links Calculus Differential geometry Tensors
Ricci calculus
[ "Mathematics", "Engineering" ]
4,485
[ "Tensors", "Calculus" ]
35,457,105
https://en.wikipedia.org/wiki/Gassmann%27s%20equation
Gassmann's equations are a set of two equations describing the isotropic elastic constants of an ensemble consisting of an isotropic, homogeneous porous medium with a fully connected pore space, saturated by a compressible fluid at pressure equilibrium. First published in German by Fritz Gassmann, the original work was only translated into English long after the adoption of the equations in standard geophysical practice. Gassmann's equations remain the most common way of performing fluid substitution—predicting the elastic behaviour of a porous medium under a different saturant to the one measured. Procedure These formulations are from Avseth et al. (2006). Given an initial set of velocities and densities, , , and corresponding to a rock with an initial set of fluids, you can compute the velocities and densities of the rock with another set of fluids. Often these velocities are measured from well logs, but they might also come from a theoretical model. Step 1: Extract the dynamic bulk and shear moduli from , , and : Step 2: Apply Gassmann's relation, of the following form, to transform the saturated bulk modulus: where and are the rock bulk moduli saturated with fluid 1 and fluid 2, and are the bulk moduli of the fluids themselves, and is the rock's porosity. Step 3: Leave the shear modulus unchanged (rigidity is independent of fluid type): Step 4: Correct the bulk density for the change in fluid: Step 5: Recompute the fluid-substituted velocities. Rearranging for Ksat Given Let and then Or, expanded Assumptions Load-induced pore pressure is homogeneous and identical in all pores This assumption implies that the shear modulus of the saturated rock is the same as the shear modulus of the dry rock. Porosity does not change with different saturating fluids Gassmann fluid substitution requires that the porosity remain constant. The assumption is that, all other things being equal, different saturating fluids should not affect the porosity of the rock. This does not take into account diagenetic processes, such as cementation or dissolution, that vary with changing geochemical conditions in the pores. For example, quartz cement is more likely to precipitate in water-filled pores than it is in hydrocarbon-filled ones (Worden and Morad, 2000). So the same rock may have different porosity in different locations due to the local water saturation. Frequency effects are negligible in the measurements Gassmann's equations are essentially the low-frequency limit of Biot's more general equations of motion for poroelastic materials. At seismic frequencies (10–100 Hz), the error in using Gassmann's equation may be negligible. However, when constraining the necessary parameters with sonic measurements at logging frequencies (~20 kHz), this assumption may be violated. A better option, yet more computationally intense, would be to use Biot's frequency-dependent equation to calculate the fluid substitution effects. If the output from this process is to be integrated with seismic data, the obtained elastic parameters must also be corrected for dispersion effects. Rock frame is not altered by the saturating fluid Gassmann's equations assume no chemical interactions between the fluids and the solids. References Geophysics
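The five-step procedure above translates directly into code. Below is a minimal Python sketch assuming the standard form of Gassmann's relation as given by Avseth et al.; the numerical inputs in the usage example are illustrative placeholders, not values from the article.

```python
# Minimal sketch of the fluid-substitution procedure described above (Steps 1-5),
# assuming the standard form of Gassmann's relation; all input values below are
# illustrative placeholders.
from math import sqrt

def gassmann_fluid_substitution(vp1, vs1, rho1, k_fl1, rho_fl1,
                                k_fl2, rho_fl2, k_mineral, phi):
    """Return (vp2, vs2, rho2) after replacing fluid 1 with fluid 2.

    vp1, vs1 [m/s], rho1 [kg/m^3]: measured velocities/density with fluid 1.
    k_fl*, k_mineral [Pa]: fluid and mineral bulk moduli; phi: porosity (fraction).
    """
    # Step 1: dynamic moduli from velocities and density
    mu1 = rho1 * vs1**2
    k_sat1 = rho1 * (vp1**2 - (4.0 / 3.0) * vs1**2)

    # Step 2: Gassmann's relation, rearranged for the new saturated bulk modulus
    a = (k_sat1 / (k_mineral - k_sat1)
         - k_fl1 / (phi * (k_mineral - k_fl1))
         + k_fl2 / (phi * (k_mineral - k_fl2)))
    k_sat2 = a * k_mineral / (1.0 + a)

    # Step 3: shear modulus is unchanged by the pore fluid
    mu2 = mu1

    # Step 4: correct the bulk density for the change in fluid
    rho2 = rho1 + phi * (rho_fl2 - rho_fl1)

    # Step 5: recompute the fluid-substituted velocities
    vp2 = sqrt((k_sat2 + (4.0 / 3.0) * mu2) / rho2)
    vs2 = sqrt(mu2 / rho2)
    return vp2, vs2, rho2

# Illustrative brine-to-gas substitution (placeholder values):
print(gassmann_fluid_substitution(
    vp1=3000.0, vs1=1600.0, rho1=2300.0,       # rock saturated with brine
    k_fl1=2.8e9, rho_fl1=1030.0,               # brine
    k_fl2=0.1e9, rho_fl2=200.0,                # gas
    k_mineral=36.0e9, phi=0.25))
```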
Gassmann's equation
[ "Physics" ]
695
[ "Applied and interdisciplinary physics", "Geophysics" ]
35,457,722
https://en.wikipedia.org/wiki/Classical%20capacity
In quantum information theory, the classical capacity of a quantum channel is the maximum rate at which classical data can be sent over it error-free in the limit of many uses of the channel. Holevo, Schumacher, and Westmoreland proved the following least upper bound on the classical capacity of any quantum channel : where is a classical-quantum state of the following form: is a probability distribution, and each is a density operator that can be input to the channel . Achievability using sequential decoding We briefly review the HSW coding theorem (the statement of the achievability of the Holevo information rate for communicating classical data over a quantum channel). We first review the minimal amount of quantum mechanics needed for the theorem. We then cover quantum typicality, and finally we prove the theorem using a recent sequential decoding technique. Review of quantum mechanics In order to prove the HSW coding theorem, we really just need a few basic things from quantum mechanics. First, a quantum state is a unit trace, positive operator known as a density operator. Usually, we denote it by , , , etc. The simplest model for a quantum channel is known as a classical-quantum channel: The meaning of the above notation is that inputting the classical letter at the transmitting end leads to a quantum state at the receiving end. It is the task of the receiver to perform a measurement to determine the input of the sender. If it is true that the states are perfectly distinguishable from one another (i.e., if they have orthogonal supports such that for ), then the channel is a noiseless channel. We are interested in situations for which this is not the case. If it is true that the states all commute with one another, then this is effectively identical to the situation for a classical channel, so we are also not interested in these situations. So, the situation in which we are interested is that in which the states have overlapping support and are non-commutative. The most general way to describe a quantum measurement is with a positive operator-valued measure (POVM). We usually denote the elements of a POVM as . These operators should satisfy positivity and completeness in order to form a valid POVM: The probabilistic interpretation of quantum mechanics states that if someone measures a quantum state using a measurement device corresponding to the POVM , then the probability for obtaining outcome is equal to and the post-measurement state is if the person measuring obtains outcome . These rules are sufficient for us to consider classical communication schemes over cq channels. Quantum typicality The reader can find a good review of this topic in the article about the typical subspace. Gentle operator lemma The following lemma is important for our proofs. It demonstrates that a measurement that succeeds with high probability on average does not disturb the state too much on average: Lemma: [Winter] Given an ensemble with expected density operator , suppose that an operator such that succeeds with high probability on the state : Then the subnormalized state is close in expected trace distance to the original state : (Note that is the nuclear norm of the operator so that Tr.) The following inequality is useful for us as well. 
It holds for any operators , , such that : The quantum information-theoretic interpretation of the above inequality is that the probability of obtaining outcome from a quantum measurement acting on the state is upper bounded by the probability of obtaining outcome on the state summed with the distinguishability of the two states and . Non-commutative union bound Lemma: [Sen's bound] The following bound holds for a subnormalized state such that and with , ... , being projectors: We can think of Sen's bound as a "non-commutative union bound" because it is analogous to the following union bound from probability theory: where are events. The analogous bound for projector logic would be if we think of as a projector onto the intersection of subspaces. However, the above bound only holds if the projectors , ..., are commuting (choosing , , and gives a counterexample). If the projectors are non-commuting, then Sen's bound is the next best thing and suffices for our purposes here. HSW theorem with the non-commutative union bound We now prove the HSW theorem with Sen's non-commutative union bound. We divide up the proof into a few parts: codebook generation, POVM construction, and error analysis. Codebook Generation. We first describe how Alice and Bob agree on a random choice of code. They have the channel and a distribution . They choose classical sequences according to the IID distribution . After selecting them, they label them with indices as . This leads to the following quantum codewords: The quantum codebook is then . The average state of the codebook is then where . POVM Construction. Sen's bound from the above lemma suggests a method for Bob to decode a state that Alice transmits. Bob should first ask "Is the received state in the average typical subspace?" He can do this operationally by performing a typical subspace measurement corresponding to . Next, he asks in sequential order, "Is the received codeword in the conditionally typical subspace?" This is in some sense equivalent to the question, "Is the received codeword the transmitted codeword?" He can ask these questions operationally by performing the measurements corresponding to the conditionally typical projectors . Why should this sequential decoding scheme work well? The reason is that the transmitted codeword lies in the typical subspace on average: where the inequality follows from the first typicality property. Also, the projectors are "good detectors" for the states (on average) because the following condition holds from conditional quantum typicality: Error Analysis. The probability of detecting the codeword correctly under our sequential decoding scheme is equal to where we make the abbreviation . (Observe that we project into the average typical subspace just once.) Thus, the probability of an incorrect detection for the codeword is given by and the average error probability of this scheme is equal to Instead of analyzing the average error probability, we analyze the expectation of the average error probability, where the expectation is with respect to the random choice of code: Our first step is to apply Sen's bound to the above quantity. But before doing so, we should rewrite the above expression just slightly, by observing that Substituting into the expression above (and forgetting about the small term for now) gives an upper bound of We then apply Sen's bound to this expression with and the sequential projectors as , , ..., . 
This gives the upper bound Due to concavity of the square root, we can bound this expression from above by where the second bound follows by summing over all of the codewords not equal to the codeword (this sum can only be larger). We now focus exclusively on showing that the term inside the square root can be made small. Consider the first term: where the first inequality follows from the earlier relation and the second inequality follows from the gentle operator lemma and the properties of unconditional and conditional typicality. Consider now the second term and the following chain of inequalities: The first equality follows because the codewords and are independent since they are different. The second equality follows from the relation above. The first inequality follows from the third typicality property. Continuing, we have The first inequality follows from and exchanging the trace with the expectation. The second inequality follows from the second property of conditional typicality. The next two are straightforward. Putting everything together, we get our final bound on the expectation of the average error probability: Thus, as long as we choose , there exists a code with vanishing error probability. See also Entanglement-assisted classical capacity Quantum capacity Quantum information theory Typical subspace References Quantum information theory Limits of computation
Classical capacity
[ "Physics" ]
1,665
[ "Physical phenomena", "Limits of computation" ]
35,461,390
https://en.wikipedia.org/wiki/Magneto-inertial%20fusion
Magneto-inertial fusion (MIF) describes a class of fusion power devices that combine aspects of magnetic confinement fusion and inertial confinement fusion in an attempt to lower the cost of fusion devices. MIF uses magnetic fields to confine an initial warm, low-density plasma, then compresses that plasma to fusion conditions using an impulsive driver or "liner." The concept is also known as magnetized target fusion (MTF) and magnitnoye obzhatiye (MAGO) in Russia. Magneto-inertial fusion approaches differ in the degree of magnetic organization present in the initial target, as well as the nature and speed of the imploding liner. Laser, solid, liquid and plasma liners have all been proposed. Magneto-inertial fusion begins with a warm dense plasma target containing a magnetic field. The plasma's conductivity prevents it from crossing magnetic field lines. Compressing the target amplifies the magnetic field. Since the magnetic field reduces particle transport, the field insulates the target from the liner. History The MIF concept traces its history to comments by Andrei Sakharov in the 1950s, who noted that a magnetic field in a foil could be compressed and could, in theory, reach millions of gauss. The concept was not picked up until the 1960s, when Evgeny Velikhov at the Kurchatov Institute began small-scale experiments using metal foils that were imploded by an external magnetic field. It was realized that the cost of the metal liners would likely be higher than the value of the electricity they would produce, the "kopeck problem", and they considered the idea of using a reusable liquid metal liner instead. At a 1971 meeting of fusion researchers, Ramy Shanny of the United States Naval Research Laboratory (NRL) talked to Velikhov about his ideas. Shanny asked how such a system would be stabilized against Rayleigh–Taylor instability during the collapse. Velikhov misunderstood the question, thinking he was being asked how it would be stabilized against gravity within the drum. He replied that they would spin it. Shanny, believing Velikhov was saying that spinning would address Rayleigh–Taylor problems, performed the calculations and found that it did indeed stabilize these instabilities. On his return to the NRL, Shanny began a liquid liner program known as Linus. The idea was to spin a cylinder filled with a liquid metal rapidly enough that the metal would be forced to the outside of the cylinder and leave an opening in the center where plasma would be injected. Additional metal would then be forced into the cylinder using pistons or similar means, causing the opening in the center to close and the plasma to rapidly collapse. The Linus program was successful to a point, but as the scale of the compression ramped up, the system began to face the problem that the collapsing metal would squeeze the plasma out of the ends of the cylinder more rapidly than expected, too rapidly to complete the compression. Looking for solutions to this problem, they began to adapt the recently discovered field-reversed configuration (FRC), which causes the plasma to organize itself into a self-stable form. By injecting the plasma as an FRC, it would not squirt out the ends. Interest in mechanical compression waned as the researchers turned to studying FRCs. In popular fiction The starships in Mike Kupari's novel Her Brother's Keeper are propelled in part by magneto-inertial fusion rockets. 
See also Inertial confinement fusion (ICF) Magnetized Liner Inertial Fusion Magnetized target fusion Helion Energy General Fusion Notes References Fusion power
Magneto-inertial fusion
[ "Physics", "Chemistry" ]
752
[ "Nuclear fusion", "Fusion power", "Plasma physics" ]
35,470,585
https://en.wikipedia.org/wiki/Adiabatic%20wall
In thermodynamics, an adiabatic wall between two thermodynamic systems does not allow heat or chemical substances to pass across it, in other words there is no heat transfer or mass transfer. In theoretical investigations, it is sometimes assumed that one of the two systems is the surroundings of the other. Then it is assumed that the work transferred is reversible within the surroundings, but in thermodynamics it is not assumed that the work transferred is reversible within the system. The assumption of reversibility in the surroundings has the consequence that the quantity of work transferred is well defined by macroscopic variables in the surroundings. Accordingly, the surroundings are sometimes said to have a reversible work reservoir. Along with the idea of an adiabatic wall is that of an adiabatic enclosure. It is easily possible that a system has some boundary walls that are adiabatic and others that are not. When some are not adiabatic, then the system is not adiabatically enclosed, though adiabatic transfer of energy as work can occur across the adiabatic walls. The adiabatic enclosure is important because, according to one widely cited author, Herbert Callen, "An essential prerequisite for the measurability of energy is the existence of walls that do not permit the transfer of energy in the form of heat." In thermodynamics, it is customary to assume a priori the physical existence of adiabatic enclosures, though it is not customary to label this assumption separately as an axiom or numbered law. Construction of the concept of an adiabatic enclosure Definitions of transfer of heat In theoretical thermodynamics, respected authors vary in their approaches to the definition of quantity of heat transferred. There are two main streams of thinking. One is from a primarily empirical viewpoint (which will here be referred to as the thermodynamic stream), to define heat transfer as occurring only by specified macroscopic mechanisms; loosely speaking, this approach is historically older. The other (which will here be referred to as the mechanical stream) is from a primarily theoretical viewpoint, to define it as a residual quantity after transfers of energy as macroscopic work, between two bodies or closed systems, have been determined for a process, so as to conform with the principle of conservation of energy or the first law of thermodynamics for closed systems; this approach grew in the twentieth century, though was partly manifest in the nineteenth. Thermodynamic stream of thinking In the thermodynamic stream of thinking, the specified mechanisms of heat transfer are conduction and radiation. These mechanisms presuppose recognition of temperature; empirical temperature is enough for this purpose, though absolute temperature can also serve. In this stream of thinking, quantity of heat is defined primarily through calorimetry. Though its definition of them differs from that of the mechanical stream of thinking, the empirical stream of thinking nevertheless presupposes the existence of adiabatic enclosures. It defines them through the concepts of heat and temperature. These two concepts are coordinately coherent in the sense that they arise jointly in the description of experiments of transfer of energy as heat. 
Mechanical stream of thinking In the mechanical stream of thinking about a process of transfer of energy between two bodies or closed systems, heat transferred is defined as a residual amount of energy transferred after the energy transferred as work has been determined, assuming for the calculation the law of conservation of energy, without reference to the concept of temperature. There are five main elements of the underlying theory. The existence of states of thermodynamic equilibrium, determinable by precisely one (called the non-deformation variable) more variable of state than the number of independent work (deformation) variables. That a state of internal thermodynamic equilibrium of a body have a well defined internal energy, that is postulated by the first law of thermodynamics. The universality of the law of conservation of energy. The recognition of work as a form of energy transfer. The universal irreversibility of natural processes. The existence of adiabatic enclosures. The existence of walls permeable only to heat. Axiomatic presentations of this stream of thinking vary slightly, but they intend to avoid the notions of heat and of temperature in their axioms. It is essential to this stream of thinking that heat is not presupposed as being measurable by calorimetry. It is essential to this stream of thinking that, for the specification of the thermodynamic state of a body or closed system, in addition to the variables of state called deformation variables, there be precisely one extra real-number-valued variable of state, called the non-deformation variable, though it should not be axiomatically recognized as an empirical temperature, even though it satisfies the criteria for one. Accounts of the adiabatic wall The authors Buchdahl, Callen, and Haase make no mention of the passage of radiation, thermal or coherent, across their adiabatic walls. Carathéodory explicitly discusses problems with respect to thermal radiation, which is incoherent, and he was probably unaware of the practical possibility of laser light, which is coherent. Carathéodory in 1909 says that he leaves such questions unanswered. For the thermodynamic stream of thinking, the notion of empirical temperature is coordinately presupposed in the notion of heat transfer for the definition of an adiabatic wall. For the mechanical stream of thinking, the exact way in which the adiabatic wall is defined is important. In the presentation of Carathéodory, it is essential that the definition of the adiabatic wall should in no way depend upon the notions of heat or temperature. This is achieved by careful wording and reference to transfer of energy only as work. Buchdahl is careful in the same way. Nevertheless, Carathéodory explicitly postulates the existence of walls that are permeable only to heat, that is to say impermeable to work and to matter, but still permeable to energy in some unspecified way. One might be forgiven for inferring from this that heat is energy in transfer across walls permeable only to heat, and that such exist as undefined postulated primitives. In the widely cited presentation of Callen, the notion of an adiabatic wall is introduced as a limit of a wall that is poorly conductive of heat. Although Callen does not here explicitly mention temperature, he considers the case of an experiment with melting ice, done on a summer's day, when, the reader may speculate, the temperature of the surrounds would be higher. 
Nevertheless, when it comes to a hard core definition, Callen does not use this introductory account. He eventually defines an adiabatic enclosure as does Carathéodory, that it passes energy only as work, and does not pass matter. Accordingly, he defines heat, therefore, as energy that is transferred across the boundary of a closed system other than by work. As suggested for example by Carathéodory and used for example by Callen, the favoured instance of an adiabatic wall is that of a Dewar flask. A Dewar flask has rigid walls. Nevertheless, Carathéodory requires that his adiabatic walls shall be imagined to be flexible, and that the pressures on these flexible walls be adjusted and controlled externally so that the walls are not deformed, unless a process is undertaken in which work is transferred across the walls. The work considered by Carathéodory is pressure-volume work. Another text considers asbestos and fiberglass as good examples of materials that constitute a practicable adiabatic wall. The mechanical stream of thinking thus regards the adiabatic enclosure's property of not allowing the transfer of heat across itself as a deduction from the Carathéodory axioms of thermodynamics. References Bibliography Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, . Beattie, J.A., Oppenheim, I. (1979). Principles of Thermodynamics, Elsevier, Amsterdam, . Born, M. (1921). Kritische Betrachtungen zur traditionellen Darstellung der Thermodynamik, Physik. Zeitschr. 22: 218–224. Bryan, G.H. (1907). Thermodynamics. An Introductory Treatise dealing mainly with First Principles and their Direct Applications, B.G. Teubner, Leipzig. Buchdahl, H.A. (1957/1966). The Concepts of Classical Thermodynamics, Cambridge University Press, London. Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, second edition, John Wiley & Sons, New York, . A translation may be found here . A partly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA. Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081. Kirkwood, J.G., Oppenheim, I. (1961). Chemical Thermodynamics, McGraw–Hill, New York. * Thermodynamics
Adiabatic wall
[ "Physics", "Chemistry", "Mathematics" ]
2,014
[ "Thermodynamics", "Dynamical systems" ]
43,474,263
https://en.wikipedia.org/wiki/Radiation%20Safety%20Information%20Computational%20Center
The Radiation Safety Information Computational Center (RSICC) collects, analyzes, maintains, and distributes computer software and data sets in the areas of radiation transport and safety. The RSICC is operated by Oak Ridge National Laboratory in Oak Ridge, Tennessee. The primary sponsors of the RSICC are the U.S. Department of Energy, the U.S. Department of Homeland Security, and the U.S. Nuclear Regulatory Commission. The center began as the Radiation Shielding Information Center (RSIC) in 1962, but was renamed to RSICC in August 1996 to better reflect the scope of computer code technology at the center (i.e., radiation transport and safety). The RSICC staff maintain an online software catalog at their website, which site visitors may browse (though no search functionality is provided). The software in the catalog cover a broad range of nuclear computational tools, providing in-depth coverage of radiation transport and safety topics for nuclear science and engineering, in support of modeling and simulation. Registered users may request software from the repository. With a few specific exceptions, a "cost recovery fee" is required to recoup the cost associated with RSICC operations before software requests are fulfilled. The European counterpart to the RSICC is the Organisation for Economic Co-operation and Development's (OECD) Nuclear Energy Agency (NEA) Data Bank. See also Safety code (nuclear reactor) External links Radiation Safety Information Computational Center OECD NEA Data Bank Nuclear technology Nuclear safety and security Oak Ridge National Laboratory
Radiation Safety Information Computational Center
[ "Physics" ]
317
[ "Nuclear technology", "Nuclear physics" ]
43,474,691
https://en.wikipedia.org/wiki/Caking
Caking is a powder's tendency to form lumps or masses. The formation of lumps interferes with packaging, transport, flowability, and consumption. Usually caking is undesirable, but it is useful when pressing powdered substances into pills or briquettes. Granular materials can also be subject to caking, particularly those that are hygroscopic such as salt, sugar, and many chemical fertilizers. Anticaking agents are commonly added to control caking. Caking properties must be considered when designing and constructing bulk material handling equipment. Powdered substances that need to be stored, and flow smoothly at some time in the future, are often pelletized or made into pills. Mechanism Caking mechanisms depend on the nature of the material. Caking is a consequence of chemical reactions of grain surfaces. Often these reactions involve adsorption of water vapor or other gases. Crystalline solids often cake by formation of liquid bridge between microcrystals and subsequent fusion of a solid bridge. Amorphous materials can cake by glass transitions and changes in viscosity. Polymorphic phase transitions can also induce caking. The caking process can involve electrostatic attractions or the formation of weak chemical bonds between particles. Anticaking agents Anticaking agents are chemical compounds that prevent caking. Some anticaking agents function by absorbing excess moisture or by coating particles and making them water-repellent. Calcium silicate (CaSiO3), a common anti-caking agent added to table salt, absorbs both water and oil. Anticaking agents are also used in non-food items such as road salt, fertilisers, cosmetics, synthetic detergents, and in manufacturing applications. References Chemical engineering Particle technology
Caking
[ "Chemistry", "Engineering" ]
362
[ "Particle technology", "Chemical engineering", "nan", "Environmental engineering" ]
43,478,202
https://en.wikipedia.org/wiki/Ogden%E2%80%93Roxburgh%20model
The Ogden–Roxburgh model is an approach published in 1999 which extends hyperelastic material models to allow for the Mullins effect. It is used in several commercial finite element codes, and is named after R.W. Ogden and D. G. Roxburgh. The fundamental idea of the approach can already be found in a paper by De Souza Neto et al. from 1994. The basis of pseudo-elastic material models is a hyperelastic second Piola–Kirchhoff stress \( \tilde{\boldsymbol{S}}_0 \), which is derived from a suitable strain energy density function \( W(\boldsymbol{C}) \): \( \tilde{\boldsymbol{S}}_0 = 2\,\partial W / \partial \boldsymbol{C} \). The key idea of pseudo-elastic material models is that the stress during the first loading process is equal to the basic stress \( \tilde{\boldsymbol{S}}_0 \). Upon unloading and reloading, \( \tilde{\boldsymbol{S}}_0 \) is multiplied by a positive softening function \( \eta \). The function \( \eta \) thereby depends on the strain energy \( W(\boldsymbol{C}) \) of the current load and its maximum \( W_{\max} \) in the history of the material: \( \boldsymbol{S} = \eta(W, W_{\max})\,\tilde{\boldsymbol{S}}_0 \), with \( \eta = 1 \) on primary loading (\( W = W_{\max} \)) and \( 0 < \eta < 1 \) on unloading and reloading (\( W < W_{\max} \)). It was shown that this idea can also be used to extend arbitrary inelastic material models for softening effects. References Continuum mechanics Elasticity (physics) Rubber properties Solid mechanics
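A minimal numerical sketch of the pseudo-elastic idea is given below. It assumes the error-function softening form used in several finite element implementations of the Ogden–Roxburgh approach, with placeholder material parameters r, m and beta; the scalar stress stand-ins and the numbers are purely illustrative, not values from the original paper.

```python
import math

def softening_eta(W, W_max, r=3.0, m=0.5, beta=0.1):
    """Ogden-Roxburgh-type softening function eta(W, W_max).

    Uses an error-function form adopted in several FE codes:
        eta = 1 - (1/r) * erf((W_max - W) / (m + beta * W_max))
    r, m and beta are material parameters (placeholder values here).
    """
    if W_max <= 0.0:
        return 1.0  # virgin material: no softening yet
    return 1.0 - (1.0 / r) * math.erf((W_max - W) / (m + beta * W_max))

def pseudo_elastic_stress(S0, W, W_max):
    """Scale the basic (hyperelastic) stress S0 by the softening function.

    On the primary loading path W == W_max, so eta == 1 and S == S0;
    on unloading/reloading W < W_max, eta < 1 and the response is softened.
    """
    return softening_eta(W, W_max) * S0

if __name__ == "__main__":
    W_max = 2.0  # largest strain energy reached so far (illustrative, MPa)
    for W, S0 in [(2.0, 1.20), (1.0, 0.80), (0.2, 0.30)]:
        print(f"W={W:4.1f}  eta={softening_eta(W, W_max):.3f}  "
              f"S={pseudo_elastic_stress(S0, W, W_max):.3f}")
```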
Ogden–Roxburgh model
[ "Physics", "Materials_science" ]
212
[ "Solid mechanics", "Physical phenomena", "Continuum mechanics", "Elasticity (physics)", "Deformation (mechanics)", "Classical mechanics", "Mechanics", "Physical properties" ]
36,383,128
https://en.wikipedia.org/wiki/C21H14O15
{{DISPLAYTITLE:C21H14O15}} The molecular formula C21H14O15 (molar mass: 506.33 g/mol, exact mass: 506.03327 u) may refer to: Nonahydroxytriphenic acid Sanguisorbic acid Valoneic acid Molecular formulas
C21H14O15
[ "Physics", "Chemistry" ]
74
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
36,387,287
https://en.wikipedia.org/wiki/C20H17F3N2O4
{{DISPLAYTITLE:C20H17F3N2O4}} The molecular formula C20H17F3N2O4 (molar mass: 406.355 g/mol) may refer to: Floctafenine Tasquinimod Molecular formulas
C20H17F3N2O4
[ "Physics", "Chemistry" ]
62
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
36,387,873
https://en.wikipedia.org/wiki/Kinetic%20inhibitor
Kinetic inhibitors are an evolving technology within the class of Low Dosage Hydrate Inhibitors (LDHI); they are polymers and copolymers (or a mix thereof), the most common of which is polyvinylcaprolactam (PVCap). These inhibitors are primarily utilized to retard the formation of clathrate hydrates. This problem is most prominent in flow lines in which hydrocarbons and water flow together. The high pressures and cold temperatures to which flow lines can be exposed provide an environment in which clathrate hydrates can form and plug the line. The inhibitors generally slow the formation of the hydrates enough that the fluid reaches storage without causing a blockage. Structure There may be variations in the type and structure of the polymer used as the inhibitor. In general, however, the inhibitor acts at the cavity in the forming hydrate where a hydrocarbon guest would usually reside: instead of the hydrocarbon, the polymer's alkyl group penetrates the cavity, and the carbonyl of its amide group then hydrogen-bonds to the surface of the hydrate, thus hindering further hydrate formation. Uses These inhibitors chemically retard the formation of clathrate hydrates. The dosage of such inhibitors in the fluid is usually 0.3% to 0.5% by weight of the fluid, compared to 10% to 50% by weight for thermodynamic inhibitors used to prevent clathrate formation. Due to this and other factors, kinetic inhibitors are more cost-effective than thermodynamic inhibitors. However, if the fluid is in a static environment (e.g. a packer fluid), kinetic inhibitors have limited effect, as hydrates will still form given sufficient time. In such cases the only viable option is a thermodynamic inhibitor. References Fluid mechanics
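A small back-of-the-envelope calculation illustrates the dosage difference quoted above; the fluid mass used is an arbitrary illustrative figure, not a field value.

```python
def inhibitor_mass_kg(fluid_mass_kg, dose_wt_percent):
    """Mass of inhibitor needed for a dose expressed in weight percent of the fluid."""
    return fluid_mass_kg * dose_wt_percent / 100.0

if __name__ == "__main__":
    fluid = 10_000.0  # kg of produced fluid (illustrative)
    print("Kinetic inhibitor at 0.3-0.5 wt%:",
          inhibitor_mass_kg(fluid, 0.3), "to", inhibitor_mass_kg(fluid, 0.5), "kg")
    print("Thermodynamic inhibitor at 10-50 wt%:",
          inhibitor_mass_kg(fluid, 10), "to", inhibitor_mass_kg(fluid, 50), "kg")
```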
Kinetic inhibitor
[ "Engineering" ]
366
[ "Civil engineering", "Fluid mechanics" ]
36,388,598
https://en.wikipedia.org/wiki/Electrical%20isolation%20test
In electrical engineering, an electrical isolation test is a direct current (DC) or alternating current (AC) resistance test that is performed on sub-systems of an electronic system to verify that a specified level of isolation resistance is met. Isolation testing may also be conducted between one or more electrical circuits of the same subsystem. The test often reveals problems that occurred during assembly, such as defective components, improper component placement, and insulator defects that may cause inadvertent shorting or grounding to chassis, in turn, compromising electrical circuit quality and product safety. Isolation resistance measurements may be achieved using a high input impedance ohmmeter, digital multimeter (DMM) or current-limited Hipot test instrument. The selected equipment should not over-stress sensitive electronic components comprising the subsystem. The test limits should also consider semiconductor components within the subsystem that may be activated by the potentials imposed by each type of test instrumentation. A minimum acceptable resistance value is usually specified (typically in the mega ohm (MΩ) range per circuit tested). Multiple circuits having a common return may be tested simultaneously, provided the minimum allowable resistance value is based on the number of circuits in parallel. Five basic isolation test configurations exist: Single Un-referenced End-Circuit – isolation between one input signal and circuit chassis/common ground. Multiple Un-referenced End-Circuits with a single return – isolation between several input signals and circuit chassis/common ground. Subsystem with Isolated Common – isolation between signal input and common ground. Common Chassis Ground – isolation between circuit common and chassis (chassis grounded). Isolated Circuit Common – isolation between circuit common and chassis (chassis floating). Isolation measurements are made with the assembly or subsystem unpowered and disconnected from any support equipment. See also Dielectric withstand test Electrical breakdown Galvanic isolation References Electrical tests
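The note above about basing the minimum allowable resistance on the number of circuits in parallel can be illustrated with a short sketch. The 100 MΩ per-circuit limit and the simple worst case of every circuit sitting exactly at that limit are assumptions for illustration, not values from any particular standard.

```python
def grouped_isolation_limit(single_circuit_limit_megaohm, n_circuits):
    """Minimum acceptable reading when n circuits sharing a common return are
    measured together: n equal resistances in parallel divide the limit by n.
    Assumes the worst case of every circuit sitting right at the limit.
    """
    return single_circuit_limit_megaohm / n_circuits

def passes(measured_megaohm, single_circuit_limit_megaohm, n_circuits=1):
    """True if a (possibly grouped) isolation measurement meets the scaled limit."""
    return measured_megaohm >= grouped_isolation_limit(single_circuit_limit_megaohm, n_circuits)

if __name__ == "__main__":
    # Example: 100 megaohm per-circuit requirement, 8 circuits tested at once.
    print(grouped_isolation_limit(100.0, 8))   # 12.5 megaohm group threshold
    print(passes(20.0, 100.0, n_circuits=8))   # True: 20 megaohm reading is acceptable
```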
Electrical isolation test
[ "Engineering" ]
385
[ "Electrical engineering", "Electrical tests" ]
36,390,145
https://en.wikipedia.org/wiki/Uberon
The Uber-anatomy ontology (Uberon) is a comparative anatomy ontology representing a variety of structures found in animals, such as lungs, muscles, bones, feathers and fins. These structures are connected to other structures via relationships such as part-of and develops-from. One of the uses of this ontology is to integrate data from different biological databases, and other species-specific ontologies such as the Foundational Model of Anatomy. References External links http://uberon.org Biological databases Comparative anatomy Anatomical terminology Ontology (information science)
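A minimal sketch of how such part-of, is-a and develops-from relationships form a traversable graph is shown below. It uses the third-party networkx package and made-up term labels rather than real Uberon identifiers, purely to illustrate the data structure.

```python
import networkx as nx

# Toy fragment of an anatomy ontology; labels are illustrative stand-ins,
# not terms taken from an actual Uberon release.
G = nx.DiGraph()
edges = [
    ("left lung", "lung", "is_a"),
    ("lung", "respiratory system", "part_of"),
    ("alveolus", "lung", "part_of"),
    ("lung", "lung bud", "develops_from"),
]
for subject, obj, relation in edges:
    G.add_edge(subject, obj, relation=relation)

# Walk every term reachable from a structure, mimicking how annotations from
# different databases can be integrated once mapped onto shared ontology terms.
print(sorted(nx.descendants(G, "alveolus")))
```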
Uberon
[ "Biology" ]
114
[ "Bioinformatics", "Biological databases" ]
36,390,214
https://en.wikipedia.org/wiki/Dconf
dconf is a low-level configuration system and settings management tool. Its main purpose is to provide a back end to GSettings on platforms that don't already have configuration storage systems. It depends on GLib. It is part of GNOME as of version 3, and is a replacement for GConf. Overview dconf is a simple key-based configuration system. Keys exist in an unstructured database (but it is intended that keys that logically belong together are grouped together). Change notification is supported. Stacking of multiple configuration sources is supported. Mandatory keys are supported. The stacking can be done at "mount points". For example, the global system configuration can be mounted at a dedicated mount point inside each user's configuration space. A single configuration source may appear at multiple points in the hierarchy. For example, in addition to stacking over a user's normal keys, the system default keys may also appear at a separate location for inspection and modification by a system policy configuration utility. PolicyKit integration is planned so that a normal user may temporarily gain the ability to, for example, write to the system-wide keys. This means that programs like the GNOME Display Manager configuration utility no longer have to be run as root. dconf is loosely the GNOME equivalent of the Windows Registry. Software architecture Since a typical GNOME login consists of thousands of reads and ideally zero writes, dconf is optimized for reads. Typically, reading a key from dconf involves zero system calls and zero context switches. This is achieved with a simple file format that doubles both as the storage format for data in dconf and as an IPC mechanism between the clients and the server. Avoiding round trips and context switches is desirable in itself, but the real advantage comes from allowing the I/O scheduler in the kernel to do a better job by saturating it with requests coming from all of the applications trying to read their keys (as opposed to a common configuration server serially requesting a single key at a time). Having all of the keys in a single compact binary format also avoids the intense fragmentation problems currently experienced by the tree-of-directories-of-XML-files approach. Writes are less optimized: they traverse the bus and are handled by a "writer", a D-Bus service, in the ordinary way. Change notification is also handled by the writer. The reason for having a bus service at all is that getting the clients to synchronize on writing would be very difficult. The writer service doesn't have to be activated until the first write operation is performed. The service is completely stateless and can start and stop dynamically. The list of change notifications that an individual client is interested in is maintained by the bus daemon (as a D-Bus signal watch/match list). dconf database One dconf database consists of a single file in binary format, i.e. it is not a text file. The format is defined as GVDB (GVariant Database file). It is a simple database file format that stores a mapping from strings to GVariant values in a way that is extremely efficient for lookups. The GNOME database file for each user is by default ~/.config/dconf/user, a file expected to be in GVDB format. GVariant GVariant is a strongly typed variant datatype used for all the values stored in dconf; it can contain one or more values along with information about the type of the values. 
A GVariant may contain simple types, like integers or Boolean values, or complex types, like an array of two strings or a dictionary of key-value pairs. A GVariant is also immutable: once it has been created, neither its type nor its content can be modified further. GVariant is useful whenever data needs to be serialized, for example when sending method parameters over D-Bus, or when saving settings using GSettings. GVariant is part of GLib. https://developer.gnome.org/glib/stable/glib-GVariant.html https://git.gnome.org/browse/glib/tree/glib/gvariant.c GSettings The GSettings class provides a high-level API for applications to store and retrieve their own settings. In Debian, the utility program /usr/bin/gsettings is contained in the package libglib2.0-bin. GSettings is part of GIO, which is part of GLib (Debian package libglib2.0-0). https://developer.gnome.org/gio/stable/GSettings.html https://git.gnome.org/browse/glib/tree/gio/gsettings.c Documentation A system administrator's guide for dconf is available. Since version 0.2, dconf is licensed under the LGPL version 2.1 "or later". History Release history References External links https://wiki.gnome.org/Projects/dconf https://download.gnome.org/sources/dconf/ https://gitlab.gnome.org/GNOME/dconf/ Configuration management GNOME Software that uses Meson
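A short example of the GSettings API described above, written in Python via PyGObject, is sketched below. It assumes a GNOME-style desktop where the org.gnome.desktop.interface schema is installed and where dconf is the GSettings backend; treat the schema and key names as assumptions rather than guarantees for any given system.

```python
# Requires PyGObject (the "gi" bindings) and installed GSettings schemas.
import gi
gi.require_version("Gio", "2.0")
from gi.repository import Gio

settings = Gio.Settings.new("org.gnome.desktop.interface")

# Read a key: GSettings returns a typed value, backed (by default) by dconf.
print(settings.get_string("gtk-theme"))

# Write a key: the change is routed to the dconf writer service over D-Bus,
# while subsequent reads stay in-process against the memory-mapped database.
settings.set_string("gtk-theme", "Adwaita")
Gio.Settings.sync()  # flush pending writes before the script exits
```

On such a system the read path never leaves the process, while the write goes through the bus, which matches the read-optimized design described in the entry.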
Dconf
[ "Engineering" ]
1,124
[ "Systems engineering", "Configuration management" ]
36,391,174
https://en.wikipedia.org/wiki/Circle%20diagram
The circle diagram (also known as the Heyland diagram or Heyland circle) is a graphical representation of the performance of an electrical machine, drawn in terms of the locus of the machine's input voltage and current. It was first conceived by Heyland in 1894 and Bernhard Arthur Behrend in 1895. A newer variant devised by Ossanna in 1899 is often named the Ossanna diagram, Ossanna circle, Heyland-Ossanna diagram or Heyland-Ossanna circle. In 1910, Sumec further improved the diagram by also incorporating the rotor resistance; this version is called the Sumec diagram or Sumec circle. The circle diagram can be drawn for alternators, synchronous motors, transformers, and induction motors. The Heyland diagram is an approximate representation of a circle diagram applied to induction motors, which assumes that stator input voltage, rotor resistance and rotor reactance are constant and that stator resistance and core loss are zero. Another common circle diagram form is that described in the two constant air-gap induction motor images shown here, where Rs, Xs are the stator resistance and leakage reactance; Rr', Xr', s are the rotor resistance and leakage reactance referred to the stator, and the rotor slip; Rc, Xm are the core and mechanical losses and the magnetization reactance; Vs is the impressed stator voltage; I0 = OO', IBL = OA and I1 = OV are the no-load current, blocked-rotor current and operating current; Φ0, ΦBL are the no-load and blocked-rotor angles; Pmax, sPmax, PFmax, Tmax, sTmax are the maximum output power and its related slip, the maximum power factor, and the maximum torque and its related slip; η1, s1, PF1, Φ1 are the efficiency, slip, power factor and power-factor angle at the operating current; and AB represents the rotor power input, which divided by the synchronous speed equals the starting torque. The circle diagram is drawn using data obtained from no-load and either short-circuit or, in the case of machines, blocked-rotor tests, by fitting a half-circle to the points O' and A. Beyond the error inherent in the constant air-gap assumption, the circle diagram introduces errors due to rotor reactance and rotor resistance variations caused by magnetic saturation and rotor frequency over the range from no-load to operating speed.
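The construction described above (fitting a half-circle to the no-load point O' and the blocked-rotor point A) can be sketched numerically. The convention that the voltage phasor lies on the vertical axis, and all test figures below, are illustrative assumptions rather than data from a real machine.

```python
import math

def circle_from_tests(I0, phi0_deg, Ibl, phibl_deg):
    """Locate the no-load point O', the blocked-rotor point A and the circle
    they define.  A current I at power-factor angle phi plots at
    (I*sin(phi), I*cos(phi)), i.e. reactive component horizontal, active
    component vertical.  The circle's diameter is taken parallel to the
    horizontal axis through O', as in the classical construction.
    """
    phi0, phibl = math.radians(phi0_deg), math.radians(phibl_deg)
    x0, y0 = I0 * math.sin(phi0), I0 * math.cos(phi0)        # point O'
    x1, y1 = Ibl * math.sin(phibl), Ibl * math.cos(phibl)    # point A
    # Centre lies on the line y = y0; equate its distances to O' and A.
    xc = (x1**2 - x0**2 + (y1 - y0)**2) / (2.0 * (x1 - x0))
    radius = xc - x0
    return (x0, y0), (x1, y1), (xc, y0), 2.0 * radius

if __name__ == "__main__":
    # Illustrative test data: no-load 10 A at 75 deg, blocked-rotor 50 A at
    # 60 deg, both referred to rated voltage.
    O_point, A_point, centre, diameter = circle_from_tests(10, 75, 50, 60)
    print("O' =", O_point, "A =", A_point, "centre =", centre, "diameter =", diameter)
```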
Circle diagram
[ "Technology", "Engineering" ]
493
[ "Engines", "Electric motors", "Synchronous machines", "Electromechanical engineering", "Mechanical engineering by discipline", "Electrical engineering" ]
36,392,268
https://en.wikipedia.org/wiki/Severfield
Severfield plc is a North Yorkshire based structural steel contractor. By turnover it is the largest in the UK, and amongst the biggest in Europe, with a capacity of 165,000 tons per year. Landmark works include London's 2012 Olympic Stadium, The Shard, Wimbledon Centre Court roof, Emirates Stadium and Paris Philharmonic Hall. The firm has acquired businesses across structural steel market sectors within the UK and participates with JSW Group in two Mumbai based joint ventures, JSW Severfield Structures Ltd and indirectly, JSW Structural Metal Decking Ltd. History The business was founded in 1978 as a partnership named Severfield-Reeve; moved to Dalton in 1980, and incorporated as Severfield Reeve Structural Engineers Ltd in 1983. As a public company it was known from 1988 to 1999 as Severfield-Reeve plc; from 1999 to 2014 as Severfield-Rowen plc, and then adopted its current title. Listings In July 1988, Severfield-Reeve plc was quoted on the Unlisted Securities Market, then moved up to the London Stock Exchange on 8 June 1995. , Severfield plc is a component of the FTSE SmallCap Index. Acquisitions and new businesses (UK) Steel and UK Steel In 1991, Severfield Reeve acquired 80% of steel trading business, Steel (UK) Ltd and bought out the remaining minority shareholdings in 2009. The company ceased trading in 2002; was renamed Stable Move Ltd in 2005, and dissolved in 2013 owing £1,161,732 to Severfield plc and eliminating shareholder value. In 2006, Severfield established Steel UK Ltd as a 50% joint venture with Sheffield based steel stockholder Murray Metals Group Ltd to negotiate steel purchase prices for both partners. The company had been incorporated in 2005 as Stable Move Ltd. Severfield and Murray bought steel at prices agreed with suppliers by Steel UK and the company did not trade itself. The joint venture was dissolved in 2013. Meat processing equipment In 1995, Severfield's meat processing safety equipment subsidiary Manabo (UK) Ltd began trading. Initially the business was a 75% joint venture with the original technology developer but in 1996, Severfield Reeve bought out all remaining minority shareholdings. The established Scandinavian distributor for Manabo's products agreed to purchase from the new company. Manabo manufactured at sites around Thirsk and, from 1997, operated a chain mail glove factory in Bataszek, Hungary. Such was the anticipated transformation of the structural steel business that by 1997, The Times described Severfield as a specialist engineer and supplier of equipment for the meat and poultry processing industry. However, in the same year company founder John Severs blamed Manabo's six monthly £902,000 losses on the meat industry being reluctant to change its ways in health and hygiene.The glove business and company were sold in 2000 after a further £2 million loss in 1998, and asset write-down of £2.8 million in 1999. Severfield Reeve Projects Ltd In 1991, subsidiary Severfield Reeve Projects Ltd was incorporated, initially to carry out construction at Severfield's plants. It began to undertake main contracting for others in 1997. Severfield Reeve Projects diversified into house building in 2002, purchasing on its own account a site in Bagby for two traditional, five bedroom dwellings with gardens. In 2008, it launched a property investment division with the purchase of distribution warehouses for £7.1 million. The value of the initial investments was written down by £2.1 million in 2009. 
New projects and investments by Severfield Reeve Projects Ltd stopped in 2009, and it closed in 2011. Kennedy Watts Partnership Ltd In 1999, Severfield purchased 25.1% of Sheffield based design bureau Kennedy Watts Partnership Ltd for £464,000 in shares and cash. The shareholding was restructured in 2008 so that it was held through an intermediary, Last Exit Ltd. Kennedy Watts Partnership Ltd was placed into administration in 2013; into liquidation in 2014, and dissolved in 2016 with a deficit to creditors of £292,000 and elimination of all shareholder stakes. Work platform In 2006, Severfield patented a craned or self climbing work platform that attaches to steelwork at height, and mounts its own powered cherry picker. The apparatus was invented to assist erection of steel structures where conditions are not suitable for safe operation of conventional access equipment. In 2009, the work platform project was abandoned and £2.4 million development costs written off. Tonnage Severfield structural, fabricated steel output, relative to capacity and UK market: , the JSW Severfield Structures Ltd joint venture in India has a capacity of 100,000 tons of fabricated steel per annum. Locations Severfield plc is headquartered on the former RAF Dalton near Thirsk and is a significant employer there. It also has sites in Sherburn, Lostock, Bolton, Bridlington, Ballinamallard, and European offices at Zevenbergen. Joint ventures JSW Severfield Structures Ltd and JSW Structural Metal Decking Ltd are located in Mumbai. The Construction Metal Forming Ltd cold formed construction steel joint venture manufactures steel decking, and light gauge framing, in Mamhilad and Magor. Safety Accident statistics Severfield plc's RIDDOR ratios have improved, and are lower than its industry average. Senior executives are remunerated according to the accident frequency rate of the business units they oversee. Site fall denied In January 2021, a steel erector employed by Severfield fell down a staircase at Google's Kings Cross site and was unfit to work for six days. Severfield suspected fraud and summarily dismissed him, but an Employment Tribunal determined the incident occurred; that the employee was unfairly dismissed, and awarded £2,721 in compensation. There was no order for payment in lieu of notice because, post dismissal, the erector took immediate, better remunerated employment elsewhere. Hazardous welding fumes In March 2020, Severfield was served a Health and Safety Executive Improvement Notice, because of hazardous welding fumes at its Lostock works. The fault was corrected by April 2020. Lifting chain failure In 2019, a link weld in the chain set being used to lift a 2.5 ton beam failed, although rated to over twice that capacity. The lifting equipment had recently been independently inspected. Similar sets from the Turkey based manufacturer were withdrawn from use and allegedly exhibited faulty welds. The beam was partially bolted in place so did not fall. Damage to Leadenhall Tower During 2017 redevelopment of 22 Bishopsgate, a suspended girder struck Leadenhall Tower. Severfield was the steel frame subcontractor to Multiplex's Bishopsgate site. Nobody was hurt. Hand tool vibration In November 2017, Severfield was served a Health and Safety Executive Improvement Notice, because of tools causing excessive hand arm vibrations, at its Dalton site. New working practices were applied by March 2018. 
Forklift driver fatality In 2016, the Health and Safety Executive fined the firm £135,000, plus £46,020 costs following a 2013 incident when a 27 year old forklift driver was fatally crushed at its Dalton site. Site unloading In 2015, Severfield and its haulage contractors adopted customised trailers for delivering fabricated steel to construction sites. They are fitted with exclusion barriers to prevent unauthorised access whilst unloading by forklift, and fall arrest equipment to protect riggers on the trailer when unloading by crane. Leadenhall Tower bolts In November 2014, two embrittled bolts, purchased from a supplier, broke and fell from Leadenhall Tower. Another descended in January 2015. Severfield announced an anticipated £6 million charge for bolt remediation works in 2015, and final settlement in 2019. Faulty crane plate clamp In 2012, Severfield settled the claim from a welder who had been moving steelwork with a crane and faulty plate clamp. A ten foot long, and two foot wide, beam fell and crushed his foot. White finger In 2011, a plater at Severfield's Dalton facility suffered permanent damage to his hands caused by vibrating tools provided by the firm. He developed hand arm vibration syndrome commonly known as white finger. Clyde Arc Bridge Severfield subsidiary Watson Steel Structures Ltd fabricated the Clyde Arc Bridge in 2007. It had to be closed in 2008 because a clevis connector failed and a 35 metre long tension bar fell onto the carriageway. Another clevis was found to be cracked and it was decided to replace all 14 tension bars in the structure. Watson Steel Structures Ltd claimed £1.8 million from Macalloy, the clevis supplier, alleging its product was faulty. Macalloy denied the claim and countered Watson Steel Structures Ltd had only specified minimum yield stress for the components. Epoxy resin paint In 2005 Severfield dismissed a painter, at its newly acquired Sherburn site, who suffered allergic industrial contact dermatitis following exposure to epoxy resin paint. He claimed compensation. Severfield initially denied, but then accepted responsibility just before the 2007 High Court hearing. Mr Recorder Salter went on to award the painter £113,168.15 damages including £50,000 for loss of future earnings. The firm appealed but in 2008, Lord Justice Keene's judgement rejected its arguments and increased the award for future earnings loss to £90,000. Waltham Abbey fall In 2002, a 29 year old steel fixer working for Severfield's Steelcraft Erection Services Ltd fell from a new Sainsbury's distribution centre, and suffered spinal injuries; broken ribs, and a punctured lung. Project gallery Controversies Blyth battery plant The Guardian reported in September 2022 that Severfield had been impacted by cancelled and delayed works at Britishvolt's challenged Blyth gigafactory. It declined comment to the newspaper. Manchester Ship Canal In 2022, Severfield issued a legal claim against Davymarkham Ltd relating to 2015 bridgeworks over the Manchester Ship Canal. Davymarkham Ltd had been dissolved in 2021, but was restored by court order to face proceedings alongside its insurers and Fairfield Engineering Solutions Ltd. Late supplier payments In July 2019, subsidiary Severfield Design and Build Ltd was suspended from the UK Government's Prompt Payment Code for failing to pay suppliers on time. The firm submitted an action plan to the Chartered Institute of Credit Management and was reinstated by November 2019. 
Carrington Power Station In 2013, Severfield contracted with the Duro Felguera group to provide steelwork for the new Carrington Power Station. Duro Felguera's UK subsidiary refused to pay a December 2014 stage payment. Severfield obtained adjudication under the Housing Grants, Construction and Regeneration Act 1996 for £2,470,231.97, and then sought summary judgement from the High Court of Justice to enforce payment for a reduced amount of £1,445,495.78. Judge Stuart-Smith refused because part of the sum related to a power plant and was therefore excluded from the 1996 Act, and the adjudicator's jurisdiction. Duro Felguera argued it was in fact owed money by Severfield because of overpayments. Severfield finally obtained a judgement from Mr Justice Coulson in 2017 for £2,774,077.91 (or £1,760,480.27 up to 2014) but by then Duro Felguera UK Ltd had entered liquidation and recovery was limited to what Duro Felguera in Spain could be obliged to pay under a parent company guarantee for the period up to 2014. Losses on Leadenhall Tower During 2013, the group acknowledged substantial contractual losses in relation to Leadenhall Tower in the City of London. This prompted a major restructuring of the business; departure of the Chief Executive, and a £45m rights issue. Severfield's 2012 accounts included a £9.9 million charge for losses at Leadenhall Tower, plus a further £10.2 million on other delinquent contracts. Wimbledon roof leak In 2011, the Daily Mirror alleged the retractable roof over Wimbledon Centre Court leaked during a quarter final tennis match. Severfield completed the 3,000 ton moving roof in 2009. The All England Club stated the leaks were in the permanent roof, not the mobile section, and were "part of the normal drainage process". Discrimination by association In 2011, Severfield decided to reduce the number of welders at its Sherburn site by selectively not renewing the contract of an employee who was caring for his disabled wife. He claimed discrimination by association. Severfield could not satisfy the employment tribunal there was any other reason for the dismissal and the welder was awarded £10,500 compensation with a recommendation he should be re-employed. Remuneration of founder Shareholders rebelled against a 2007 payment of £1.6 million to retiring Managing Director, John Severs. They refused to pass a resolution at AGM to authorise the payment which had already been made. Property transactions In 2001, directors of the company, including John Severs, purchased the headquarters property for £14 million. In 2007 the company bought it back again for £23.5 million. Both transactions were endorsed by the independent directors. Fischer and Bartlett roof leaks In 1989, a subsidiary of Georg Fischer AG built a distribution warehouse near Coventry. The shallow pitch roof leaked, exacerbated by deflection of supporting steelwork. It sued the builders and designer. Severfield-Reeve plc was the steelwork subcontractor and met specifications supplied to it by the designer. In 1994, Severfield-Reeve plc agreed to pay £175,000 to the building owner in return for an indemnity against all parties in the matter. In 2009, Severfield became third party in a claim relating to the leaking roof at a potato processing plant in Airdrie constructed by its then subsidiary Atlas Ward Structures. Severfield agreed joint liability with the building's main contractors. 
Lord Menzies in the Outer House of the Court of Session was asked to choose between remediation options for the Airdrie premises, the alternatives differing in cost and quantum of damages. He drew attention to the similar decision that had faced Judge Hicks in the Fischer warehouse case. See also References External links 1978 establishments in the United Kingdom Companies based in North Yorkshire Construction and civil engineering companies established in 1978 Companies listed on the London Stock Exchange Construction and civil engineering companies of the United Kingdom Hambleton District Steel companies of the United Kingdom Structural steel 1978 establishments in England Companies established in the 20th century
Severfield
[ "Engineering" ]
2,986
[ "Structural engineering", "Structural steel" ]
48,302,751
https://en.wikipedia.org/wiki/Dissimilatory%20metal-reducing%20microorganisms
Dissimilatory metal-reducing microorganisms are a group of microorganisms (both bacteria and archaea) that can perform anaerobic respiration utilizing a metal as terminal electron acceptor rather than molecular oxygen (O2), which is the terminal electron acceptor reduced to water (H2O) in aerobic respiration. The most common metals used for this end are iron [Fe(III)] and manganese [Mn(IV)], which are reduced to Fe(II) and Mn(II) respectively, and most microorganisms that reduce Fe(III) can reduce Mn(IV) as well. But other metals and metalloids are also used as terminal electron acceptors, such as vanadium [V(V)], chromium [Cr(VI)], molybdenum [Mo(VI)], cobalt [Co(III)], palladium [Pd(II)], gold [Au(III)], and mercury [Hg(II)]. Conditions and mechanisms for dissimilatory metal reduction Dissimilatory metal reducers are a diverse group of microorganisms, which is reflected in the factors that affect the different forms of metal reduction. The process of dissimilatory metal reduction occurs in the absence of oxygen (O2), but dissimilatory metal reducers include both obligate (strict) anaerobes, such as the family Geobacteraceae, and facultative anaerobes, such as Shewanella spp. As well, across the dissimilatory metal reducers species, various electron donors are used in the oxidative reaction that is coupled to metal reduction. For instance, some species are limited to small organic acids and hydrogen (H2), whereas others may oxidize aromatic compounds. In certain instances, such as Cr(VI) reduction, the use of small organic compounds can optimize the rate of metal reduction. Another factor that influences metal respiration is environmental acidity. Although acidophilic and alkaliphilic dissimilatory metal reducers exist, the neutrophilic metal reducers group contains the most well-characterized genera. In soil and sediment environments, where the pH is often neutral, metals like iron are found in their solid oxidized forms, and exhibit variable reduction potential, which can affect their use by microorganisms. Due to the impermeability of the cell wall to minerals and the insolubility of metal oxides, dissimilatory metal reducers have developed ways to reduce metals extracellularly via electron transfer. Cytochromes c, which are transmembrane proteins, play an important role in transporting electrons from the cytosol to enzymes attached to the outside of the cell. The electrons are then further transported to the terminal electron acceptor via direct interaction between the enzymes and the metal oxide. In addition to establishing direct contact, dissimilatory metal reducers also display the ability to perform ranged metal reduction. For instance, some species of dissimilatory metal reducers produce compounds that can dissolve insoluble minerals or act as electron shuttles, enabling them to perform metal reduction from a distance. Other organic compounds frequently found in soils and sediments, such as humic acids, may also act as electron shuttles. In biofilms, nanowires and multistep electron hopping (in which electrons jump from cell to cell towards the mineral) have also been suggested as methods for reducing metals without requiring direct cell contact. It has been proposed that cytochromes c are involved in both of these mechanisms. In nanowires, for instance, cytochromes c function as the final component that transfers electrons to the metal oxide. 
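As an illustration of the coupling between oxidation of a small organic acid and dissimilatory Fe(III) reduction discussed above, the overall stoichiometry with acetate as the electron donor can be written in a simplified form (under environmental pH the carbon is often expressed as bicarbonate instead of CO2):

\[
\mathrm{CH_3COO^-} + 8\,\mathrm{Fe^{3+}} + 2\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{CO_2} + 8\,\mathrm{Fe^{2+}} + 7\,\mathrm{H^+}
\]

Each acetate supplies eight electrons and each Fe(III) centre accepts exactly one, which is why a comparatively large amount of iron oxide must be reduced for every unit of organic carbon oxidized.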
Terminal electron acceptors A wide range of Fe(III)-bearing minerals have been observed to function as terminal electron acceptors, including magnetite, hematite, goethite, lepidocrocite, ferrihydrite, hydrous ferric oxide, smectite, illite, jarosite, among others. Secondary mineral formation In natural systems, secondary minerals may form as a byproduct of bacterial metal reduction. Commonly observed secondary minerals produced during experimental bio-reduction by dissimilatory metal reducers include magnetite, siderite, green rust, vivianite, and hydrous Fe(II)-carbonate. Genera that include dissimilatory metal reducers Albidiferax (Betaproteobacteria) Shewanella (Gammaproteobacteria) Geobacter (Deltaproteobacteria) Geothrix fermentans (Acidobacteria) Deferribacter (Deferribacteres) Thermoanaerobacter (Firmicutes) References Bacteria Metabolism Extremophiles
Dissimilatory metal-reducing microorganisms
[ "Chemistry", "Biology", "Environmental_science" ]
1,022
[ "Prokaryotes", "Organisms by adaptation", "Extremophiles", "Cellular processes", "Bacteria", "Biochemistry", "Environmental microbiology", "Microorganisms", "Metabolism" ]
48,306,673
https://en.wikipedia.org/wiki/Pharmaceutical%20bioinformatics
Pharmaceutical bioinformatics is a research field related to bioinformatics but with the focus on studying biological and chemical processes in the pharmaceutical area; to understand how xenobiotics interact with the human body and the drug discovery process. Introduction Whereas traditional bioinformatics is a wide subject it has a large focus on molecular biology, pharmaceutical bioinformatics more specifically targets chemical-biological interaction and exploratory focus of chemical and biological interactors using e.g. cheminformatics and chemometrics methods. Methods include, apart from many general bioinformatics methods, ligand-based modeling such as Quantitative structure–activity relationship (QSAR) and proteochemometrics, computer-aided molecular design, chembioinformatics databases, algorithms for chemical software, and biopharmaceutical chemistry including analyses of biological activity and other issues related to drug discovery. In silico metabolism prediction One of the major fields within pharmaceutical bioinformatics is the in silico metabolism prediction of drug candidates. This field is in turn divided into three tasks; Predicting the occurrence of an interaction between a compound and an enzyme, Predicting the location in the compound that takes part in the interaction, i.e. the site of metabolism (SOM), Predicting the outcome from the interaction, i.e. the resulting metabolite product. There are several existing tools trying to solve these tasks, e.g. SMARTCyp and MetaPrint2D predicts the SOM for chemical compounds. Software and tools There are many software tools for pharmaceutical bioinformatics. An example of an open source tool is the Bioclipse workbench. Conferences One conference specific to Pharmaceutical Bioinformatics is "International Conference on Pharmaceutical Bioinformatics" (ICPB) (http://www.icpb.net) References Further reading Introduction to Pharmaceutical Bioinformatics, (), Oakleaf Academic, 2020 Bioinformatics Drug discovery
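A minimal ligand-based modelling sketch is shown below: it computes a few physicochemical descriptors of the kind commonly used as QSAR inputs. It assumes the RDKit package is installed; RDKit is not one of the tools named in this entry (Bioclipse, SMARTCyp, MetaPrint2D) and is used here purely for illustration.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

smiles = "CC(=O)Oc1ccccc1C(=O)O"   # aspirin, as an example xenobiotic
mol = Chem.MolFromSmiles(smiles)

descriptors = {
    "MolWt": Descriptors.MolWt(mol),        # molecular weight
    "LogP": Descriptors.MolLogP(mol),       # lipophilicity estimate
    "TPSA": Descriptors.TPSA(mol),          # topological polar surface area
    "HBD": Descriptors.NumHDonors(mol),     # hydrogen-bond donors
    "HBA": Descriptors.NumHAcceptors(mol),  # hydrogen-bond acceptors
}
# Such a vector would form one row of a QSAR training table, with measured
# biological activity as the target variable.
print(descriptors)
```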
Pharmaceutical bioinformatics
[ "Chemistry", "Engineering", "Biology" ]
410
[ "Biological engineering", "Life sciences industry", "Drug discovery", "Bioinformatics", "Medicinal chemistry" ]
33,787,267
https://en.wikipedia.org/wiki/Nuclear%20resonance%20vibrational%20spectroscopy
Nuclear resonance vibrational spectroscopy is a synchrotron-based technique that probes vibrational energy levels. The technique, often called NRVS, is specific for samples that contain nuclei that respond to Mössbauer spectroscopy, most commonly iron. The method exploits the high resolution offered by synchrotron light sources, which enables the resolution of vibrational fine structure, especially those vibrations that are coupled to the position of the Fe centre(s). The method is popularly applied to problems in bioinorganic chemistry, materials science, and geophysics. A novel aspect of the method is the ability to determine the 3D-trajectory of iron atoms within vibrational modes, providing a unique appraisal of DFT-prediction accuracy. Other names for this method include nuclear inelastic scattering (NIS), nuclear inelastic absorption (NIA), nuclear resonant inelastic x-ray scattering (NRIXS), and phonon assisted Mössbauer effect. Experimental set-up In the experimental setup, X-rays are released from the particle beam by an undulator; a high-resolution monochromator produces a beam with small energy dispersion (typically 1.0 meV). The sample is irradiated with photons chosen around the resonance of the Mössbauer isotope and further information is provided for the specific isotope. Typical parameters for the experimental scan are –20 meV below recoil-free resonance energy to +100 meV above it. The number of scans (often recorded for 5 seconds every 0.2 meV) depends on the amount of Mössbauer-active nuclei in the sample. The number of photons absorbed by the sample at any wavelength are measured by detecting the fluorescence emitted from the excited atom with an avalanche photodiode detector. The resulting raw spectrum contains a high-intensity resonance that corresponds to the nuclear excited state of the probed nucleus. For bulk samples, the technique detects natural abundance 57Fe. For many dilute or biological samples, the sample is often enriched in 57Fe. References Vibrational spectroscopy Scientific techniques
Nuclear resonance vibrational spectroscopy
[ "Physics", "Chemistry" ]
429
[ "Vibrational spectroscopy", "Spectroscopy", "Spectrum (physical sciences)" ]
33,791,047
https://en.wikipedia.org/wiki/Geometrically%20and%20materially%20nonlinear%20analysis%20with%20imperfections%20included
Geometrically and materially nonlinear analysis with imperfections included (GMNIA) is a structural analysis method designed to verify the strength capacity of a structure; it accounts for both plasticity and buckling failure modes. GMNIA is currently considered the most sophisticated, and potentially the most accurate, method of numerical buckling strength verification. References Structural analysis
Geometrically and materially nonlinear analysis with imperfections included
[ "Mathematics", "Engineering" ]
71
[ "Structural engineering", "Applied mathematics", "Structural analysis", "Applied mathematics stubs", "Mechanical engineering", "Aerospace engineering" ]
33,791,075
https://en.wikipedia.org/wiki/Copper%20conductor
Copper has been used in electrical wiring since the invention of the electromagnet and the telegraph in the 1820s. The invention of the telephone in 1876 created further demand for copper wire as an electrical conductor. Copper is the electrical conductor in many categories of electrical wiring. Copper wire is used in power generation, power transmission, power distribution, telecommunications, electronics circuitry, and countless types of electrical equipment. Copper and its alloys are also used to make electrical contacts. Electrical wiring in buildings is the most important market for the copper industry. Roughly half of all copper mined is used to manufacture electrical wire and cable conductors. Properties of copper Electrical conductivity Electrical conductivity is a measure of how well a material transports an electric charge. This is an essential property in electrical wiring systems. Copper has the highest electrical conductivity rating of all non-precious metals: the electrical resistivity of copper = 16.78 nΩ•m at 20 °C. The theory of metals in their solid state helps to explain the unusually high electrical conductivity of copper. In a copper atom, the outermost 4s energy zone, or conduction band, is only half filled, so many electrons are able to carry electric current. When an electric field is applied to a copper wire, the conduction of electrons accelerates towards the electropositive end, thereby creating a current. These electrons encounter resistance to their passage by colliding with impurity atoms, vacancies, lattice ions, and imperfections. The average distance travelled between collisions, defined as the mean free path, is inversely proportional to the resistivity of the metal. What is unique about copper is its long mean free path (approximately 100 atomic spacings at room temperature). This mean free path increases rapidly as copper is chilled. Because of its superior conductivity, annealed copper became the international standard to which all other electrical conductors are compared. In 1913, the International Electrotechnical Commission defined the conductivity of commercially pure copper in its International Annealed Copper Standard, as 100% IACS = 58.0 MS/m at 20 °C, decreasing by 0.393%/°C. Because commercial purity has improved over the last century, copper conductors used in building wire often slightly exceed the 100% IACS standard. The main grade of copper used for electrical applications is electrolytic-tough pitch (ETP) copper (CW004A or ASTM designation C11040). This copper is at least 99.90% pure and has an electrical conductivity of at least 101% IACS. ETP copper contains a small percentage of oxygen (0.02 to 0.04%). If high conductivity copper needs to be welded or brazed or used in a reducing atmosphere, then specially-pure oxygen-free copper (CW008A or ASTM designation C10100) may be used; it is about 1% more conductive (i.e., achieves a minimum of 101% IACS). Several electrically conductive metals are less dense than copper, but require larger cross sections to carry the same current and may not be usable when limited space is a major requirement. Aluminium has 61% of the conductivity of copper. The cross sectional area of an aluminium conductor must be 56% larger than copper for the same current carrying capability. The need to increase the thickness of aluminium wire restricts its use in many applications, such as in small motors and automobiles. 
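A short calculation along the lines of the copper-versus-aluminium comparison above is sketched here. The aluminium resistivity is derived from the 61% conductivity figure quoted in the text rather than taken from a handbook, and the wire dimensions are arbitrary examples; the exact cross-section penalty depends on which aluminium alloy and standard are being compared.

```python
# DC resistance comparison using the 20 degC resistivity for annealed copper
# quoted above (16.78 nOhm*m) and treating aluminium as 61% of copper's
# conductivity (an assumption of this sketch).
RHO_CU = 16.78e-9          # ohm*m, annealed copper at 20 degC
RHO_AL = RHO_CU / 0.61     # ohm*m, implied by the 61% conductivity figure

def wire_resistance(rho, length_m, area_mm2):
    """DC resistance of a conductor: R = rho * L / A (area converted to m^2)."""
    return rho * length_m / (area_mm2 * 1e-6)

def equal_resistance_area(area_cu_mm2):
    """Aluminium cross-section needed to match a copper conductor's resistance."""
    return area_cu_mm2 * RHO_AL / RHO_CU

if __name__ == "__main__":
    print(wire_resistance(RHO_CU, 100, 2.5))   # 100 m of 2.5 mm^2 copper, ohms
    print(wire_resistance(RHO_AL, 100, 2.5))   # same size in aluminium, ohms
    print(equal_resistance_area(2.5))          # aluminium area for the same R, mm^2
```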
However, in some applications such as aerial electric power transmission cables, aluminium predominates, and copper is rarely used. Silver, a precious metal, is the only metal with a higher electrical conductivity than copper. The electrical conductivity of silver is 106% of that of annealed copper on the IACS scale, and the electrical resistivity of silver = 15.9 nΩ•m at 20 °C. The high cost of silver combined with its low tensile strength limits its use to special applications, such as joint plating and sliding contact surfaces, and plating for the conductors in high-quality coaxial cables used at frequencies above 30 MHz. Tensile strength Tensile strength measures the force required to pull an object such as rope, wire, or a structural beam to the point where it breaks. The tensile strength of a material is the maximum amount of tensile stress it can take before breaking. Copper's higher tensile strength (200–250 N/mm2 annealed) compared to aluminium (100 N/mm2 for typical conductor alloys) is another reason why copper is used extensively in the building industry. Copper's high strength resists stretching, neck-down, creep, nicks and breaks, and thereby also prevents failures and service interruptions. Copper is much heavier than aluminum for conductors of equal current carrying capacity, so the high tensile strength is offset by its increased weight. Ductility Ductility is a material's ability to deform under tensile stress. This is often characterized by the material's ability to be stretched into a wire. Ductility is especially important in metalworking because materials that crack or break under stress cannot be hammered, rolled, or drawn (drawing is a process that uses tensile forces to stretch metal). Copper has a higher ductility than alternate metal conductors with the exception of gold and silver. Because of copper's high ductility, it is easy to draw down to diameters with very close tolerances. Strength and ductility combination Usually, the stronger a metal is, the less pliable it is. This is not the case with copper. A unique combination of high strength and high ductility makes copper ideal for wiring systems. At junction boxes and at terminations, for example, copper can be bent, twisted, and pulled without stretching or breaking. Creep resistance Creep is the gradual deformation of a material from constant expansions and contractions under varying load conditions. This process has adverse effects on electrical systems: terminations can become loose, causing connections to heat up or create dangerous arcing. Copper has excellent creep characteristics that minimizes loosening at connections. For other metal conductors that creep, extra maintenance is required to check terminals periodically and ensure that screws remain tightened to prevent arcing and overheating. Corrosion resistance Corrosion is the unwanted breakdown and weakening of a material due to chemical reactions. Copper generally resists corrosion from moisture, humidity, industrial pollution, and other atmospheric influences. However, any corrosion oxides, chlorides, and sulfides that do form on copper are somewhat conductive. Under many application conditions copper is higher on the galvanic series than other common structural metals, meaning that copper wire is less likely to be corroded in wet conditions. However, any more anodic metals in contact with copper will be corroded since they will essentially be sacrificed to the copper. 
Coefficient of thermal expansion Metals and other solid materials expand upon heating and contract upon cooling. This is an undesirable occurrence in electrical systems. Copper has a low coefficient of thermal expansion for an electrical conducting material. Aluminium, an alternate common conductor, expands nearly one third more than copper under increasing temperatures. This higher degree of expansion, along with aluminium's lower ductility, can cause electrical problems when bolted connections are improperly installed. By using proper hardware, such as spring pressure connections and cupped or split washers at the joint, it may be possible to create aluminium joints that compare in quality to copper joints. Thermal conductivity Thermal conductivity is the ability of a material to conduct heat. In electrical systems, high thermal conductivity is important for dissipating waste heat, particularly at terminations and connections. Copper has a 60% higher thermal conductivity rating than aluminium, so it is better able to reduce thermal hot spots in electrical wiring systems. Solderability Soldering is a process whereby two or more metals are joined together by a heating process using a filler material that has a much lower melting point than the metal to be joined. This is a desirable property in electrical systems. Copper is readily soldered to make durable connections when necessary. Ease of installation The strength, hardness, and flexibility of copper make it very easy to work with. Copper wiring can be installed simply and easily with no special tools, washers, pigtails, or joint compounds. Its flexibility makes it easy to join, while its hardness helps keep connections securely in place. It has good strength for pulling wire through tight places, including conduits. It can be bent or twisted easily without breaking. It can be stripped and terminated during installation or service with far less danger of nicks or breaks. And it can be connected without the use of special lugs and fittings. The combination of all of these factors makes it easy for electricians to install copper wire. Types Solid and stranded Solid wire consists of one strand of copper metal wire, bare or surrounded by an insulator. Single-strand copper conductors are typically used as magnet wire in motors and transformers. They are relatively rigid, do not bend easily, and are typically installed in permanent, infrequently handled, and low flex applications. Stranded wire has a group of copper wires braided or twisted together. Stranded wire is more flexible and easier to install than a large single-strand wire of the same cross section. Stranding improves wire life in applications with vibration. A particular cross-section of a stranded conductor gives it essentially the same resistance characteristics as a single-strand conductor, but with added flexibility. Cable A copper cable consists of two or more copper wires running side by side and bonded, twisted or braided together to form a single assembly. Electrical cables may be made more flexible by stranding the wires. Copper wires in a cable may be bare or they may be plated to reduce oxidation with a thin layer of another metal, most often tin but sometimes gold or silver. Plating may lengthen wire life and makes soldering easier. Twisted pair and coaxial cables are designed to inhibit electromagnetic interference, prevent radiation of signals, and to provide transmission lines with defined characteristics. Shielded cables are encased in foil or wire mesh. 
Applications Electrolytic-tough pitch (ETP) copper, a high-purity copper that contains oxygen as an alloying agent, represents the bulk of electrical conductor applications because of its high electrical conductivity and improved annealability. ETP copper is used for power transmission, power distribution, and telecommunications. Common applications include building wire, motor windings, electrical cables, and busbars. Oxygen-free coppers are used to resist hydrogen embrittlement when extensive amounts of cold work is needed, and for applications requiring higher ductility (e.g., telecommunications cable). When hydrogen embrittlement is a concern and low electrical resistivity is not required, phosphorus may be added to copper. For certain applications, copper alloy conductors are preferred instead of pure copper, especially when higher strengths or improved abrasion and corrosion resistance properties are required. However, relative to pure copper, the higher strength and corrosion resistance benefits that are offered by copper alloys are offset by their lower electrical conductivities. Design engineers weigh the advantages and disadvantages of the various types of copper and copper alloy conductors when determining which type to specify for a specific electrical application. An example of a copper alloy conductor is cadmium copper wire, which is used for railroad electrification in North America. In Britain the BPO (later Post Office Telecommunications) used cadmium copper aerial lines with 1% cadmium for extra strength; for local lines 40 lb/mile (1.3 mm dia) and for toll lines 70 lb/mile (1.7 mm dia). Some of the major application markets for copper conductors are summarized below. Electrical wiring Electrical wiring distributes electric power inside residential, commercial, or industrial buildings, mobile homes, recreational vehicles, boats, and substations at voltages up to 600 V. The thickness of the wire is based on electric current requirements in conjunction with safe operating temperatures. Solid wire is used for smaller diameters; thicker diameters are stranded to provide flexibility. Conductor types include non-metallic/non-metallic corrosion-resistant cable (two or more insulated conductors with a nonmetallic outer sheath), armored or BX cable (cables are surrounded by a flexible metal enclosure), metal clad cable, service entrance cable, underground feeder cable, TC cable, fire resistant cable, and mineral insulated cable, including mineral-insulated copper-clad cable. Copper is commonly used for building wire because of its conductivity, strength, and reliability. Over the life of a building wire system, copper can also be the most economical conductor. Copper used in building wire has a conductivity rating of 100% IACS or better. Copper building wire requires less insulation and can be installed in smaller conduits than when lower-conductivity conductors are used. Also, comparatively, more copper wire can fit in a given conduit than conductors with lower conductivities. This greater wire fill is a special advantage when a system is rewired or expanded. Copper building wire is compatible with brass and quality plated screws. The wire provides connections that will not corrode or creep. It is not, however, compatible with aluminium wire or connectors. If the two metals are joined, a galvanic reaction can occur. Anodic corrosion during the reaction can disintegrate the aluminium. 
This is why most appliance and electrical equipment manufacturers use copper lead wires for connections to building wiring systems. All-copper building wiring refers to buildings in which the inside electrical service is carried exclusively over copper wiring. In all-copper homes, copper conductors are used in circuit breaker panels, branch circuit wiring (to outlets, switches, lighting fixtures and the like), and in dedicated branches serving heavy-load appliances (such as ranges, ovens, clothes dryers and air conditioners). Attempts to replace copper with aluminium in building wire were curtailed in most countries when it was found that aluminium connections gradually loosened due to their inherent slow creep, combined with the high resistivity and heat generation of aluminium oxidation at joints. Spring-loaded contacts have largely alleviated this problem with aluminium conductors in building wire, but some building codes still forbid the use of aluminium. For branch-circuit sizes, virtually all basic wiring for lights, outlets and switches is made from copper. The market for aluminium building wire today is mostly confined to larger gauge sizes used in supply circuits. Electrical wiring codes give the allowable current rating for standard sizes of conductors. The current rating of a conductor varies depending on the size, allowable maximum temperature, and the operating environment of the conductor. Conductors used in areas where cool air is free to circulate around the wires are generally permitted to carry more current than the small sized conductor encased in an underground conduit run with many similar conductors adjacent to it. The practical temperature ratings of insulated copper conductors are mostly due to the limitations of the insulation material or of the temperature rating of the attached equipment. Communications wiring Twisted pair cable Twisted pair cabling is the most popular network cable and is often used in data networks for short and medium length connections (up to 100 meters or 328 feet). This is due to its relatively lower costs compared to optical fiber and coaxial cable. Unshielded twisted pair (UTP) cables are the primary cable type for telephone usage. In the late 20th century, UTPs emerged as the most common cable in computer networking cables, especially as patch cables or temporary network connections. They are increasingly used in video applications, primarily in security cameras. UTP plenum cables that run above ceilings and inside walls use a solid copper core for each conductor, which enables the cable to hold its shape when bent. Patch cables, which connect computers to wall plates, use stranded copper wire because they are expected to be flexed during their lifetimes. UTPs are the best balanced-line wires available. However, they are the easiest to tap into. When interference and security are concerns, shielded cable or fiber-optic cable is often considered. UTP cables include: Category 3 cable, now the minimum requirement by the FCC (USA) for every telephone connection; Category 5e cable, 100-MHz enhanced pairs for running Gigabit Ethernet (1000BASE-T); and Category 6 cable, where each pair runs 250 MHz for improved 1000BASE-T performance. In copper twisted pair wire networks, copper cable certification is achieved through a thorough series of tests in accordance with Telecommunications Industry Association (TIA) or International Organization for Standardization (ISO) standards. 
Coaxial cable Coaxial cables were extensively used in mainframe computer systems and were the first major type of cable used for Local Area Networks (LAN). Common applications for coaxial cable today include computer network (Internet) and instrumentation data connections, video and CATV distribution, RF and microwave transmission, and feedlines connecting radio transmitters and receivers with their antennas. While coaxial cables can go longer distances and have better protection from EMI than twisted pairs, coaxial cables are harder to work with and more difficult to run from offices to the wiring closet. For these reasons, coaxial cable is now generally being replaced with less expensive UTP cables, or with fiber-optic cables where more capacity is needed. Today, many CATV companies still run coaxial cables into homes. These cables, however, are increasingly connected to a fiber optic data communications system outside of the home. Most building management systems use proprietary copper cabling, as do paging/audio speaker systems. Security monitoring and entry systems still often depend on copper, although fiber cables are also used. Structured cabling Most telephone lines can share voice and data simultaneously. Pre-digital quad telephone wiring in homes is unable to handle communications needs for multiple phone lines, Internet service, video communications, data transmission, fax machines, and security services. Crosstalk, static interference, inaudible signals, and interrupted service are common problems with outdated wiring. Computers connected to old-fashioned communications wiring often experience poor Internet performance. Structured cabling is the general term for 21st-century on-premises wiring for high-capacity telephone, video, data-transmission, security, control, and entertainment systems. Installations usually include a central distribution panel where all connections are made, as well as outlets with dedicated connections for phone, data, TV and audio jacks. Structured cabling enables computers to communicate with each other error-free and at high speeds while resisting interference from various electrical sources, such as household appliances and external communications signals. Networked computers are able to share high-speed Internet connections simultaneously. Structured cabling can also connect computers with printers, scanners, telephones, fax machines, and even home security systems and home entertainment equipment. Quad-shielded RG-6 coaxial cable can carry a large number of TV channels at the same time. A star wiring pattern, where the wiring to each jack extends to a central distribution device, facilitates flexibility of services, problem identification, and better signal quality. This pattern has advantages over daisy-chain loops. Installation tools, tips, and techniques for networked wiring systems using twisted pairs, coaxial cables, and connectors for each are available. Structured cabling competes with wireless systems in homes. While wireless systems certainly have convenience advantages, they also have drawbacks compared with copper-wired systems: the higher bandwidth of systems using Category 5e wiring typically supports more than ten times the speeds of wireless systems for faster data applications and more channels for video applications. In addition, wireless systems can be a security risk, as they can transmit sensitive information to unintended users with similar receiver devices. 
Wireless systems are more susceptible to interference from other devices and systems, which can compromise performance. Certain geographic areas and some buildings may be unsuitable for wireless installations, just as some buildings may present difficulties installing wires. Power distribution Power distribution is the final stage in the delivery of electricity for an end use. A power distribution system carries electricity from the transmission system to consumers. Power cables are used for the transmission and distribution of electric power, either outdoors or inside buildings. Details on the various types of power cables are available. Copper is the preferred conductor material for underground transmission lines operating at high and extra-high voltages to 400 kV. The predominance of copper underground systems stems from its higher volumetric electrical and thermal conductivities compared to other conductors. These beneficial properties for copper conductors conserve space, minimize power loss, and maintain lower cable temperatures. Copper continues to dominate low-voltage lines in mines and underwater applications, as well as in electric railroads, hoists, and other outdoor services. Aluminium, either alone or reinforced with steel, is the preferred conductor for overhead transmission lines due to its lighter weight and lower cost. Appliance conductors Appliance conductors for domestic applications and instruments are manufactured from bunch-stranded soft wire, which may be tinned for soldering or phase identification. Depending upon loads, insulation can be PVC, neoprene, ethylene propylene, polypropylene filler, or cotton. Automotive conductors Automotive conductors require insulation that is resistant to elevated temperatures, petroleum products, humidity, fire, and chemicals. PVC, neoprene, and polyethylene are the most common insulators. Potentials range from 12 V for electrical systems to between 300 V - 15,000 V for instruments, lighting, and ignition systems. Magnet wire Magnet wire or winding wire is used in windings of electric motors, transformers, inductors, generators, headphones, loudspeaker coils, hard drive head positioners, electromagnets, and other devices. Most often, magnetic wire is composed of fully annealed, electrolytically refined copper to allow closer winding when making electromagnetic coils. The wire is coated with a range of polymeric insulations, including varnish, rather than the thicker plastic or other types of insulation commonly used on electrical wire. High-purity oxygen-free copper grades are used for high-temperature applications in reducing atmospheres or in motors or generators cooled by hydrogen gas. Splice closures for copper cables A copper splice closure is defined as an enclosure, and the associated hardware, that is intended to restore the mechanical and environmental integrity of one or more copper cables entering the enclosure and providing some internal function for splicing, termination, or interconnection. Types of closures As stated in Telcordia industry requirements document GR-3151, there are two principal configurations for closures: butt closures and in-line closures. Butt closures permit cables to enter the closure from one end only. This design may also be referred to as a dome closure. These closures can be used in a variety of applications, including branch splicing. In-line closures provide for the entry of cables at both ends of the closure. 
They can be used in a variety of applications, including branch splicing and cable access. In-line closures can also be used in a butt configuration by restricting cable access to one end of the closure. A copper splice closure is defined by the functional design characteristics and, for the most part, is independent of specific deployment environments or applications. At this time, Telcordia has identified two types of copper closures: Environmentally Sealed Closures (ESCs) Free-Breathing Closures (FBCs) ESCs provide all of the features and functions expected of a typical splice closure in an enclosure that prevents the intrusion of liquid and vapor into the closure interior. This is accomplished through the use of an environmental sealing system such as rubber gaskets or hot-melt adhesives. Some ESCs use pressurized air to help keep moisture out of the closure. FBCs provide all of the features and functions expected of a typical splice closure that prevents the intrusion of wind-driven rain, dust, and insects. Such a closure, however, permits the free exchange of air with the outside environment. Therefore, it is possible that condensation will form inside the closure. It is thus necessary to provide adequate drainage to prevent the accumulation of water inside the closure. Future trends Copper will continue to be the predominant material in most electrical wire applications, especially where space considerations are important. The automotive industry for decades has considered the use of smaller-diameter wires in certain applications. Some manufacturers are beginning to use copper alloys such as copper-magnesium (CuMg), which has less conductivity but more strength than pure copper. Due to the need to increase the transmission of high-speed voice and data signals, the surface quality of copper wire is expected to continue to improve. Demands for better drawability and movement towards zero defects in copper conductors are expected to continue. A minimum mechanical strength requirement for magnet wire may evolve in order to improve formability and prevent excessive stretching of wire during high-speed coiling operations. It does not seem likely that standards for copper wire purity will increase beyond the current minimum value of 101% IACS. Although 6-nines copper (99.9999% pure) has been produced in small quantities, it is extremely expensive and probably unnecessary for most commercial applications such as magnet, telecommunications, and building wire. The electrical conductivity of 6-nines copper and 4-nines copper (99.99% pure) is nearly the same at ambient temperature, although the higher-purity copper has a higher conductivity at cryogenic temperatures. Therefore, for non-cryogenic temperatures, 4-nines copper will probably remain the dominant material for most commercial wire applications. Theft During the 2000s commodities boom, copper prices increased worldwide, increasing the incentive for criminals to steal copper from power supply and communications cables. Iranian Minister of ICT has replaced copper with fiber optic because of theft. See also Annealing by short circuit Copper cable certification Copper sulfide Copper-clad aluminium wire Copper-clad steel Galvanization Magnet wire Mineral-insulated copper-clad cable Passivation (chemistry) Solderability References Copper Electrical wiring Power cables
Copper conductor
[ "Physics", "Engineering" ]
5,288
[ "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring" ]
26,563,427
https://en.wikipedia.org/wiki/Plasma%20shaping
Magnetically confined fusion plasmas such as those generated in tokamaks and stellarators are characterized by a typical shape. Plasma shaping is the study of the plasma shape in such devices, and is particularly important for next step fusion devices such as ITER. This shape partly conditions the performance of the plasma. Tokamaks, in particular, are axisymmetric devices, and therefore one can completely define the shape of the plasma by its cross-section. History Early fusion reactor designs tended to have circular cross-sections simply because they were easy to design and understand. Generally, fusion machines using a toroidal layout, like the tokamak and most stellarators, arrange their magnetic fields so the ions and electrons in the plasma travel around the torus at high velocities. However, as the circumference of a path on the outside of the plasma area is longer than one on the inside, this caused several effects that disrupted the stability of the plasma. During the 1960s a number of different methods were used to try to address these problems. Generally they used a combination of several magnetic fields to cause the net magnetic field inside the device to be twisted into a helix. Ions and electrons following these lines found themselves moving to the inside and then outside of the plasma, mixing it and suppressing some of the most obvious instabilities. In the 1980s, further research along these lines demonstrated that further advances were possible by using external current-carrying coils to make the lines not just helical, but non-symmetric as well. This led to a series of experiments using C- and D-shaped plasma volumes. By increasing the current in one (or more) shaping coils to a high enough degree, one (or more) 'X-points' can be created. An X-point is defined as a point in space at which the poloidal field has zero magnitude. The magnetic flux surface that intersects with the X-point is called the separatrix, and, as all flux surfaces external to this surface are unconfined, the separatrix defines the last closed flux surface (LCFS). Formerly, the LCFS was established by inserting a material limiter into the plasma, which fixed the plasma temperature and potential (among other quantities) to be equal to that of the limiter. Plasma that escaped the LCFS would do so with no preferential direction, potentially damaging instruments. By establishing an X-point and separatrix, the plasma edge is uncoupled from the vessel walls, and exhausted heat and plasma particles are preferentially diverted towards a known region of the vessel near the X-point. Cross-section In the simple case of a plasma with up-down symmetry, the plasma cross-section is defined using a combination of four parameters: the plasma elongation, κ = b/a, where a is the plasma minor radius and b is the height of the plasma measured from the equatorial plane, the plasma triangularity, δ, defined as the horizontal distance between the plasma major radius and the X-point, normalized to the minor radius a, the angle between the horizontal and the plasma last closed flux surface (LCFS) at the low field side, the angle between the horizontal and the plasma last closed flux surface (LCFS) at the high field side. In general (no up-down symmetry), there can be an upper triangularity and a lower triangularity. Tokamaks can have negative triangularity. External links Triangularity - with diagram and source Ellipticity - with diagram and source References Fusion power
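The shape parameters defined above are often folded into a single analytic description of the plasma boundary. The sketch below uses a Miller-type parametrization, a common convention in the tokamak literature rather than a formula given in this article; the major radius, minor radius, elongation and triangularity are ITER-like illustrative values.

```python
import numpy as np

def miller_boundary(R0, a, kappa, delta, n=200):
    """Sample a Miller-type plasma boundary (R, Z points along the LCFS)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    x = np.arcsin(delta)                      # triangularity enters through arcsin(delta)
    R = R0 + a * np.cos(theta + x * np.sin(theta))
    Z = kappa * a * np.sin(theta)
    return R, Z

# ITER-like example values (illustrative only, not taken from this article).
R, Z = miller_boundary(R0=6.2, a=2.0, kappa=1.7, delta=0.33)
print(f"height above midplane   : {Z.max():.2f} m (= kappa * a)")
print(f"inboard/outboard radius : {R.min():.2f} m / {R.max():.2f} m (= R0 -/+ a)")
```

Setting delta negative in the same expression produces the negative-triangularity shapes mentioned above.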
Plasma shaping
[ "Physics", "Chemistry" ]
716
[ "Nuclear fusion", "Fusion power", "Plasma physics" ]
26,564,321
https://en.wikipedia.org/wiki/WIMP%20Argon%20Programme
The WIMP Argon Programme (WARP) is an experiment at Laboratori Nazionali del Gran Sasso, Italy, for the research of cold dark matter. It aims to detect nuclear recoils in liquid argon induced by weakly interacting massive particles (WIMP) through scintillation light; the apparatus can also detect ionization so to exclude interactions of photons and electrons. The experiment is a recognized CERN experiment (RE15). Collaboration members Università degli Studi di Pavia e INFN P.Benetti, E.Calligarich, M.Cambiaghi, C.Montanari(+), A.Rappoldi, G.L.Raselli, M.Roncadelli, M.Rossella, C.Vignoli Laboratori Nazionali del Gran Sasso (INFN-LNGS) M. Antonello, O.Palamara, L.Pandola, C.Rubbia(*), E.Segreto, A.Szelc Università degli Studi dell'Aquila e INFN R.Acciarri, M. Antonello, N. Canci, F.Cavanna, F.Di Pompeo(++), L.Grandi(++) (++ also at INFN-LNGS) Università degli Studi di Napoli e INFN F.Carbonara, A.Cocco, G.Fiorillo, G.Mangano Princeton University Department of Physics F.Calaprice, C.Galbiati, B.Loer, R.Saldanha Università degli Studi di Padova e INFN B.Baibussinov, S.Centro, M.B.Ceolin, G.Meng, F.Pietropaolo, S.Ventura References External links WARP: Wimp Argon Programme, on the LNGS website. Experiments for dark matter search CERN experiments
WIMP Argon Programme
[ "Physics" ]
423
[ "Dark matter", "Unsolved problems in physics", "Experiments for dark matter search", "Particle physics", "Particle physics stubs" ]
26,567,008
https://en.wikipedia.org/wiki/Gallium-68%20generator
A germanium-68/gallium-68 generator is a device used to extract the positron-emitting isotope 68Ga of gallium from a source of decaying germanium-68. The parent isotope 68Ge has a half-life of 271 days and can be easily utilized for in-hospital production of generator produced 68Ga. Its decay product gallium-68 (with a half-life of only 68 minutes, inconvenient for transport) is extracted and used for certain positron emission tomography nuclear medicine diagnostic procedures, where the radioisotope's relatively short half-life and emission of positrons for creation of 3-dimensional PET scans, are useful. Parent isotope (68Ge) source The parent isotope germanium-68 is the longest-lived (271 days) of the radioisotopes of germanium. It has been produced by several methods. In the U.S., it is primarily produced in proton accelerators: At Los Alamos National Laboratory, it may be separated out as a product of proton capture, after proton irradiation of Nb-encapsulated gallium metal. At Brookhaven National Laboratories, 40 MeV proton irradiation of a gallium metal target produces germanium-68 by proton capture and double neutron knockout, from gallium-69 (the most common of two stable isotopes of gallium). This reaction is: 69Ga(p,2n)68Ge. A Russian source produces germanium-68 from accelerator-produced helium ion (alpha) irradiation of zinc-66, again after knockout of two neutrons, in the nuclear reaction 66Zn(α,2n)68Ge. Mechanism of generator function When loaded with the parent isotope germanium-68, these generators function similarly to technetium-99m generators, in both cases using a process similar to ion chromatography. The stationary phase is either metal-free or alumina, TiO2 or SnO2, onto which germanium-68 is adsorbed. The use of metal-free columns allows direct labeling of 68Ga without prepurification, hence making production of gallium-68-radiolabeled compounds more convenient. The mobile phase is a solvent able to elute (wash out) gallium-68 (III) (68Ga3+) after it has been produced by electron capture decay from the immobilized (absorbed) germanium-68. Currently, such 68Ga (III) is easily eluted with a few mL of 0.05 M, 0.1 M or 1.0 M hydrochloric acid from generators using metal-free tin dioxide or titanium dioxide adsorbents, respectively, within 1 to 2 minutes. With generators of tin dioxide and titanium dioxide-based adsorbents, there once remained more than an hour of pharmaceutical preparation to attach the gallium-68 (III) as a tracer to the pharmaceutical molecules DOTATOC or DOTA-TATE, so that the total preparation time for the resulting radiopharmaceutical is typically longer than the 68Ga isotope half-life. This fact required that these radiopharmaceuticals be made on-site in most cases, and the on-site generator is required to minimize the time losses. However, new kits such as "NETSPOT" for more rapidly preparing Ga-68 edotreotide or DOTATATE from Ga-68 (III) ions have increased the flexibility of sourcing of this radiopharmaceutical for Ga-68 endocrine receptor (octreotide) scans. With NETSPOT the preparation of the Ga-68 DOTATATE is immediate once the Ga-68 has been acquired from the generator and mixed with the reagent. Indications for gallium-68 PET scanning Gallium-67 citrate salt imaging is useful for imaging old or sterile abscesses. Gallium-68 is useful in direct tumor imaging, especially leukocyte-derived malignancies and prostate cancer metastases. 
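Because the parent half-life (271 days) is so much longer than the daughter half-life (68 minutes), the gallium-68 activity in a freshly eluted generator regrows towards its equilibrium value within a few daughter half-lives. The sketch below simply applies the standard parent-daughter buildup relation to the half-lives quoted above; it is a generic illustration, not a calculation from this article.

```python
import math

T_HALF_GE68_H = 271 * 24.0   # parent half-life in hours (271 days)
T_HALF_GA68_H = 68.0 / 60.0  # daughter half-life in hours (68 minutes)

def ga68_regrowth_fraction(t_hours):
    """Fraction of equilibrium Ga-68 activity regrown t hours after a full elution."""
    lam_p = math.log(2) / T_HALF_GE68_H
    lam_d = math.log(2) / T_HALF_GA68_H
    # Standard parent-daughter buildup; since lam_d >> lam_p this is ~ 1 - exp(-lam_d * t).
    return (lam_d / (lam_d - lam_p)) * (1.0 - math.exp(-(lam_d - lam_p) * t_hours))

for hours in (1, 2, 4, 6):
    print(f"{hours} h after elution: {100 * ga68_regrowth_fraction(hours):.0f}% of equilibrium activity")
```

On this picture roughly 90% of the available activity is back within about four hours, which is why such generators can be eluted several times per working day.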
See also Isotopes of germanium Positron emission tomography Technetium-99m generator References External links M.D. Anderson article on automated synthesis of tracer molecules from gallium-68 in as little as 20 minutes, for PET scan uses. Radiopharmaceuticals Radioactivity Gallium Medical physics
Gallium-68 generator
[ "Physics", "Chemistry" ]
905
[ "Applied and interdisciplinary physics", "Medicinal radiochemistry", "Radiopharmaceuticals", "Medical physics", "Nuclear physics", "Chemicals in medicine", "Radioactivity" ]
28,381,575
https://en.wikipedia.org/wiki/Partial%20stroke%20testing
Partial stroke testing (or PST) is a technique used in a control system to allow the user to test a percentage of the possible failure modes of a shut down valve without the need to physically close the valve. PST is used to assist in determining that the safety function will operate on demand. PST is most often used on high integrity emergency shutdown valves (ESDVs) in applications where closing the valve will have a high cost burden yet proving the integrity of the valve is essential to maintaining a safe facility. In addition to ESDVs PST is also used on high integrity pressure protection systems or HIPPS. Partial stroke testing is not a replacement for the need to fully stroke valves as proof testing is still a mandatory requirement. Standards Partial stroke testing is an accepted petroleum industry standard technique and is also quantified in detail by regulatory bodies such as the International Electrotechnical Commission (IEC) and the Instrument Society of Automation (ISA). The following are the standards appropriate to these bodies. IEC61508 – Functional safety of electrical/electronic/programmable electronic safety-related systems IEC61511 – Functional safety – Safety instrumented systems for the process industry sector ANSI/ISA-84.00.01 – Functional safety: Safety instrumented systems for the process industry sector (an ANSI standard) These standards define the requirements for safety related systems and describe how to quantify the performance of PST systems Measuring safety performance IEC61508 adapts a safety life cycle approach to the management of plant safety. During the design phase of this life cycle of a safety system the required safety performance level is determined using techniques such as Markov analysis, FMEA, fault tree analysis and Hazop. These techniques allow the user to determine the potential frequency and consequence of hazardous activities and to quantify the level of risk. A common method for this quantification is the safety integrity level. This is quantified from one to four with level four being the most hazardous. Once the SIL level is determined this specifies the required performance level of the safety systems during the operational phase of the plant. The metric for measuring the performance of a safety function is called the average probability of failure on demand (or PFDavg) and this correlates to the SIL level as follows One method of calculating the PFDavg for a basic safety function with no redundancy is using the formula PFDavg = [(1-PTC)×λD×(TIFC/2)] + [PTC×λD×(TIPST/2)] Where: PTC = Proof test coverage of the partial stroke test. λD = The dangerous failure rate of the safety function. TIFC = The full closure interval, i.e. how often the valve must be full closed for testing. TIPST = The partial stroke test interval. The proof test coverage is a measure of how effective the partial stroke test is and the higher the PTC the greater the effect of the test. Benefits The benefits of using PST are not limited to simply the safety performance but gains can also be made in the production performance of a plant and the capital cost of a plant. These are summarised as follows Safety benefits Gains can be made in the following areas by the use of PST. Reducing the probability of failure on demand. Production benefits There are a number of areas where production efficiency can be improved by the successful implementation of a PST system: Extension of the time between compulsory plant shutdowns. 
Predicting potential valve failures, facilitating the pre-ordering of spare parts. Prioritisation of maintenance tasks. Drawbacks The main drawback of all PST systems is the increased probability of causing an accidental activation of the safety system and thus a plant shutdown. This is the primary concern operators have about PST systems, and for this reason many PST systems remain dormant after installation. Different techniques mitigate this issue in different ways, but all systems carry some inherent risk. In addition, in some cases a PST cannot be performed due to the limitations inherent in the process or the valve being used. Further, as the PST introduces a disturbance into the process or system, it may not be appropriate for some processes or systems that are sensitive to disturbances. Finally, a PST cannot always differentiate between different faults or failures within the valve and actuator assembly, thus limiting the diagnostic capability. Techniques There are a number of different techniques available for partial stroke testing, and the selection of the most appropriate technique depends on the main benefits the operator is trying to gain. Mechanical Jammers Mechanical jammers are devices inserted into the valve and actuator assembly that physically prevent the valve from moving past a certain point. These are used in cases where accidentally shutting the valve would have severe consequences, or in any application where the end user prefers a mechanical device. Typical benefits of this type of device are as follows: The devices assure metal-to-metal prevention of stroke past the specified set point. Unlike some electronic systems, there is no need to commission and calibrate controls or continually train personnel, resulting in additional significant cost savings. The devices are vibration resistant, making them highly reliable. The risk associated with having an ESD event occur at the time of a manual mechanical PST may be considered statistically insignificant and allows a rational consideration of the advantages mechanical devices offer. Modular design allows for the addition of limit switches, potentiometers, remote control operation, etc. The test is a comprehensive test of the logic solver and all final elements; only the sensing elements of the safety function are not tested. The valve is tested at the designed operating speed as it simulates an ESD event. Jammers have a very low probability of causing a spurious trip. However, opinions differ on whether these devices are suitable for functional safety systems, as the safety function is offline for the duration of the test. Modern mechanical PST devices may be automated. Examples of this kind of device include direct interface products that mount between the valve and the actuator and may use cams fitted to the valve stem. Other methods include adjustable actuator end stops. Pneumatic valve positioners The basic principle behind partial stroke testing is that the valve is moved to a predetermined position in order to determine the performance of the shutdown valve. This led to the adaptation of pneumatic positioners used on flow control valves for use in partial stroke testing. These systems are often suitable for use on shutdown valves up to and including SIL3. The main benefits are: Elimination of the cost of manual testing. Tracking and records of the PST tests for optimum safety monitoring. 
When the positioner is connected to the Safety System, the date and result of the test are registered in the Sequence of Events, for Insurance purposes. Remote access to valve diagnostics from the control room, with action oriented reports for predictive maintenance. The main benefit of these systems is that positioners are common equipment on plants and thus operators are familiar with the operation of these systems, however the primary drawback is the increased risk of spurious trip caused by the introduction of additional control components that are not normally used on on/off valves. These systems are however limited to use on pneumatically actuated valves. Electrical relay systems These systems use an electrical switch to de-energise the solenoid valve and use an electrical relay attached to the actuator to re-energise the solenoid coil when the desired PST point is reached. Electronic control systems Electronic control systems use a configurable electronic module that connects between the supply from the ESD system and the solenoid valve. In order to perform a test the timer de-energises the solenoid valve to simulate a shutdown and re-energises the solenoid when the required degree of partial stroke is reached. These systems are fundamentally a miniature PLC dedicated to the testing of the valve. Due to their nature these devices do not actually form part of the safety function and are therefore 100% fail safe. With the addition of a pressure sensor and/or a position sensor for feedback timer systems are also capable of providing intelligent diagnostics in order to diagnose the performance of all components including the valve, actuator and solenoid valves. In addition timers are capable of operating with any type of fluid power actuator and can also be used with subsea valves where the solenoid valve is located top-side. Integrated solenoid valve systems Another technique is to embed the control electronics into a solenoid valve enclosure removing the need for additional control boxes. In addition there is no need to change the control schematic as no dedicated components are required. References External links International Electrotechnical Commission Instrument Society of America products/automation-systems/partial-stroke-test-pst-of-actuators Paladon Systems PST Controller Rotork Smart Valve Monitor Dynatorque D-Stop Mechanical Partial Stroke Test Device Foxboro PST positioner IMI Precision Engineering - Maxseal ICO4-PST Val Controls A/S Neles ValvGuard Intelligent Solenoid for ESD Valves and critical ON-OFF valves Safety engineering Safety equipment Valves
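The PFDavg formula quoted earlier in this article can be evaluated directly. In the sketch below the failure rate, proof test coverage and test intervals are invented illustrative numbers, and the SIL bands used for the comparison are the standard IEC 61508 low-demand ranges, which are an assumption here since the article's correlation table is not reproduced.

```python
def pfd_avg(lambda_d_per_h, ptc, ti_full_h, ti_pst_h):
    """PFDavg = (1 - PTC) * lambda_D * (TI_FC / 2) + PTC * lambda_D * (TI_PST / 2)."""
    return (1.0 - ptc) * lambda_d_per_h * ti_full_h / 2 + ptc * lambda_d_per_h * ti_pst_h / 2

def sil_band(pfd):
    """Map a PFDavg onto the usual IEC 61508 low-demand SIL bands (assumed, see text)."""
    if 1e-5 <= pfd < 1e-4: return "SIL 4"
    if 1e-4 <= pfd < 1e-3: return "SIL 3"
    if 1e-3 <= pfd < 1e-2: return "SIL 2"
    if 1e-2 <= pfd < 1e-1: return "SIL 1"
    return "outside the SIL 1-4 bands"

HOURS_PER_YEAR = 8760
# Illustrative inputs: dangerous failure rate 2e-6 per hour, 70% PST coverage,
# full-stroke proof test every 4 years, partial stroke test every 3 months.
with_pst = pfd_avg(2e-6, 0.70, 4 * HOURS_PER_YEAR, 0.25 * HOURS_PER_YEAR)
without_pst = pfd_avg(2e-6, 0.0, 4 * HOURS_PER_YEAR, 0.0)
print(f"with PST   : PFDavg = {with_pst:.2e} -> {sil_band(with_pst)}")
print(f"without PST: PFDavg = {without_pst:.2e} -> {sil_band(without_pst)}")
```

With these made-up numbers the partial stroke test reduces PFDavg by roughly a factor of three; whether that moves the function into a better SIL band depends on the starting point and the chosen test intervals.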
Partial stroke testing
[ "Physics", "Chemistry", "Engineering" ]
1,898
[ "Systems engineering", "Safety engineering", "Physical systems", "Valves", "Hydraulics", "Piping" ]
52,127,203
https://en.wikipedia.org/wiki/Pacific%20Islands%20Ocean%20Observing%20System
The Pacific Islands Ocean Observing System (PacIOOS) is a nonprofit association and one of eleven such associations in the U.S. Integrated Ocean Observing System, funded in part by the National Oceanic and Atmospheric Administration (NOAA). The PacIOOS area covers eight time zones, and 2300 individual islands associated with the U.S. Observation priorities are public safety, direct economic value, and environmental preservation. Among ocean characteristics reported are: Currents forecast Shoreline impacts such as high sea level Buoy water characteristics including salinity, turbidity, and temperature The PacIOOS website is hosted by the University of Hawaii at Manoa, and provides interactive graphs and map viewers. References Oceanography Earth sciences organizations Hydrology organizations Meteorological organizations Oceanographic organizations Environmental data
Pacific Islands Ocean Observing System
[ "Physics", "Environmental_science" ]
156
[ "Oceanography", "Hydrology", "Hydrology organizations", "Applied and interdisciplinary physics" ]
52,131,913
https://en.wikipedia.org/wiki/Circulisporites
Circulisporites is a genus of plants. It is known from Triassic spores and pollen grains from the Ipswich coalfield in Queensland, Australia. References External links Prehistoric plant genera Enigmatic plant taxa
Circulisporites
[ "Biology" ]
43
[ "Enigmatic plant taxa", "Plants" ]
52,133,355
https://en.wikipedia.org/wiki/Effects%20of%20legalized%20cannabis
The use of cannabis as a recreational drug has been outlawed in many countries for several decades. As a result of long-fought legalization efforts, several countries such as Uruguay and Canada, as well as several states in the US, have legalized the production, sale, possession, and recreational and/or medical usage of cannabis. The broad legalization of cannabis in this fashion can have numerous effects on the economy and society in which it is legalized. General evidence There is evidence that ready access to legal cannabis is associated with a number of health harms including increased need for emergency medicine, and increased incidence of cannabis use disorder and road traffic accidents. Effects on crime and law enforcement Studies indicate that cannabis decriminalization and legalization lead to fewer cannabis-related arrests. A 2019 analysis of Prince George's County, Maryland found a 54% decrease in county-wide arrest rates for cannabis possession following decriminalization. However, this coincided with a 1,031% increase in cannabis possession citations, suggesting a possible "net-widening effect" where more individuals overall faced enforcement action through citations rather than arrests. Research suggests that cannabis legalization and decriminalization have little impact on general crime rates, with some studies indicating potential decreases in certain areas. A fifty-state study comparing violent and property crime rates between 2010-2014 found that crime rates tended to be slightly higher in states where cannabis was completely prohibited, though this difference was not statistically significant. Effects on youth and public Health Research has identified several potential risks of adolescent cannabis use following legalization, including impaired cognitive functioning, increased risk of developing cannabis dependence, elevated rates of school dropout, elevated risk of developing psychotic illnesses, and increased rate of engaging in risky behaviors. Weekly cannabis use under age 18 has been associated with decreased intelligence among those who develop persistent dependence. Studies indicate adolescents may be more adversely affected by heavy use than adults, particularly in domains of learning, memory, and working memory. A 2021 study from the Cato Institute, a libertarian think tank, found that many strong claims made by both advocates and critics of state-level marijuana legalization are substantially overstated and in some cases entirely without real-world support. The study concluded that state legalizations have generally had minor effects, with tax revenue being a notable exception that has exceeded some expectations. United States A 2015 study found that medical marijuana legalization increased use and abuse by those under and over the age of 21. A 2017 study found that frequency of marijuana use by students increased significantly after recreational legalization and that increase was especially large for females and for Black and Hispanic students. While overall arrests have decreased, racial disparities in cannabis-related arrests persist post-legalization. In Washington state, following retail legalization, disparities in arrest rates between Black and White adults grew from 2.5 times higher to 5 times higher, despite an overall 87% decrease in possession arrests for both groups. 
A 2017 study found that the introduction of medical marijuana laws caused a reduction in violent crime in Americans states that border Mexico: "The reduction in crime is strongest for counties close to the border (less than 350 km), and for crimes that relate to drug trafficking. In addition, we find that [medical marijuana laws] in inland states lead to a reduction in crime in the nearest border state. Our results are consistent with the theory that decriminalization of the production and distribution of marijuana leads to a reduction in violent crime in markets that are traditionally controlled by Mexican drug trafficking organisations." A 2020 study found that junk food sales increased between 3.2 and 4.5 percent in states that had legalized cannabis. A 2022 study found that legalization had led to a 20% increase in use of cannabis in the US. Pharmaceutical companies had lower returns. Health effects Cannabinoid hyperemesis syndrome, resulting from heavy cannabis use, is characterized by nausea, vomiting, and abdominal pain. It can lead to severe dehydration, seizures, kidney failure, and cardiac arrest, with at least eight reported deaths in the United States. Since its documentation in 2004, there has been a significant rise in reported cases. Accurate tracking of the condition is difficult due to inconsistent recording in medical records. Researchers estimate that up to one-third of near-daily cannabis users in the U.S. may experience symptoms, ranging from mild to severe, affecting approximately six million people. According to data from the nonprofit Health Care Cost Institute, cannabis-related diagnoses among individuals under 65 with employer-paid insurance increased by over 50 percent nationwide between 2016 and 2022, rising from approximately 341,000 to 522,000. Legalization has led to a decreased perception of cannabis use as "risky" and "potentially harmful". A 2013 study showed that 32.8% of people surveyed in Utah, a state where Marijuana use is illegal, believed that they had a risk of harm from Marijuana consumption, whereas only 18.8% of people surveyed in Washington, a state where adult-use is legal, believed they had a risk of harm. Economic impact In 2019, the US gained a total of 1.7 billion dollars in tax revenue due to the legalization of marijuana. In 2021, that number more than doubled to 3.7 billion dollars. The increase in tax revenue being a driving factor in the legalization of marijuana is similar to the effects of the repeal of prohibition. After prohibition was abolished, the percentage of federal government revenue coming from alcohol increased about 7% in the US. Legalization is anticipated to reduce the resources expended on arrests and prosecution for marijuana-related crimes. A 2007 analysis found that legalization could result in a potential savings of $10.7 billion per year. A 2010 report predicted that full marijuana legalization could save the United States more than $13 billion a year, with $8 billion of that amount resulting from no longer having to enforce prohibition. The legalization of marijuana has created significant job opportunities. The industry supports nearly 430,000 full-time jobs, with projections suggesting this could rise to over 1.75 million jobs in the near future. With over 100,000 jobs created in 2021, there was approximately a 33% increase from the previous year, significantly outpacing the projected 8% increase in jobs in the business and financial sector. 
Colorado In Colorado, effects since 2014 include increased state revenues, violent crime decreased, and an increase in homeless population. One Colorado hospital has received a 15% increase in babies born with THC in their blood. Since legalization, public health and law enforcement officials in Colorado have grappled with a number of issues, serving as a model for policy problems that come with legalization. Marijuana-related hospital visits have nearly doubled between 2011, prior to legalization, and 2014. Top public health administrators in Colorado have cited the increased potency of today's infused products, often referred to as "edibles", as a cause for concern. They have also highlighted the risk that edibles pose to children, as they are often undistinguishable from ordinary foods once they are removed from their packaging. Youth usage has also been a major aspect of the debate surrounding marijuana legalization and a concern for state officials. Overall youth usage rates have increased, although not enough to be deemed statistically significant. Looking at students in the eighth, tenth, and twelfth grades, a survey study published in the Journal of the American Medical Association found that usage rates had not increased among any of the different age groups in Colorado, although statistically significant increases in usage rates amongst eighth and tenth graders were reported in Washington. Oregon Oregon legalized cannabis in November 2014. Effects have included an increase in cannabis-related calls to the Oregon state poison center, an increase in perception among youth that marijuana use is harmful, a decrease in arrest rates for cannabis related offenses, stores sold $250 million in cannabis products which resulted in $70 million in state tax revenue (higher than a predicted $36 million in revenue), 10% decrease in violent crime, and 13% drop in murder rate. Washington D.C. Washington D.C. legalized cannabis in 2015. Cannabis possession arrests decreased 98% from 2014 to 2015 and all cannabis offenses dropped by 85%. Uruguay Effects of cannabis legalization in Uruguay since 2013 include other countries in the region loosening laws concerning cannabis and lower costs of illegal cannabis. The percentage of female prisoners has fallen. See also Cannabis rights Drug liberalization Drug Policy Alliance Green rush Harm reduction Legality of cannabis Legality of the War on Drugs National Organization for the Reform of Marijuana Laws References Further reading Cannabis law reform Cannabis smoking Drug control law Entheogens Euphoriants Herbalism Medicinal plants Hemp Biofuels Fiber plants Herbs Non-food crops
Effects of legalized cannabis
[ "Chemistry" ]
1,774
[ "Drug control law", "Regulation of chemicals" ]
52,133,580
https://en.wikipedia.org/wiki/List%20of%20drugs%20by%20year%20of%20discovery
The following is a table of drugs organized by their year of discovery. Naturally occurring chemicals in plants, including alkaloids, have been used since pre-history. In the modern era, plant-based drugs have been isolated, purified and synthesised anew. Synthesis of drugs has led to novel drugs, including those that have not existed before in nature, particularly drugs based on known drugs which have been modified by chemical or biological processes. Antiquity Prehistory Archaeological evidence indicates that the use of medicinal plants dates back to the Paleolithic age. 4th millennium BCE In ancient Egypt, herbs are mentioned in Egyptian medical papyri, depicted in tomb illustrations, or on rare occasions found in medical jars containing trace amounts of herbs. Medical recipes from 4000 BCE were for liquid preparations rather than solids. In the 4th millennium BCE, Soma (drink) and Haoma are named, but is not clear what ingredients were used to prepare them. 3rd millennium BCE 2nd millennium BCE Written around 1600 BCE, the Edwin Smith Papyrus describes the use of many herbal drugs. The Ebers Papyrus – one of the most important medical papyri of ancient Egypt – was written around 1550 BCE, and covers more than 700 drugs, mainly of plant origin. The first references to pills were found on papyri in ancient Egypt, and contained bread dough, honey, or grease. Medicinal ingredients such as plant powders or spices were mixed in and formed by hand to make little balls, or pills. The papyri also describe how to prepare herbal teas, poultices, ointments, eye drops, suppositories, enemas, laxatives, etc. Aloe vera was used in the 2nd millennium BCE. 1st millennium BCE In Greece, Theophrastus of Eresos wrote Historia Plantarum in the 4th century BCE. Seeds likely used for herbalism have been found in archaeological sites of Bronze Age China dating from the Shang dynasty (c. 1600 BCE–c. 1046 BCE). Over a hundred of the 224 drugs mentioned in the Huangdi Neijing – an early Chinese medical text – are herbs. Herbs also commonly featured in the medicine of ancient India, where the principal treatment for diseases was diet. Opioids are among the world's oldest known drugs. Use of the opium poppy for medical, recreational, and religious purposes can be traced to the 4th century BCE, when Hippocrates wrote about it for its analgesic properties, stating, "Divinum opus est sedare dolores." ("Divine work is the easing of pain") 1st century CE In ancient Greece, pills were known as ("something to be swallowed"). Pliny the Elder, who lived from 23–79 CE, first gave a name to what we now call pills, calling them . Pliny also wrote Naturalis Historia a collection of 38 books and the first pharmacopoea. Pedanius Dioscorides wrote De Materia Medica (c. 40 – 90 CE); this book dominated the area of drug knowledge for some 1500 years until the 1600s. Jojoba was used in the 1st millennium CE. 2nd century CE Aelius Galenus wrote more than 11 books about drugs, also use terra sigillata with kaolinite and goats blood to produce tablets. Post-classical to Early modern Drugs developed in the post-classical (circa 500 to 1450) or early modern eras (circa 1453 to 1789). 6th–11th century CE In middle age ointments were a common dosage form. 11th century CE Avicenna separates Medicine and Pharmacy, in 1025 published his book The Canon of Medicine, an encyclopedia of medicine formed by five books. Drugs mentioned by Avicenna include agaric, scammony and euphorbium. 
The latex of Euphorbia resinifera contains resiniferatoxin, an ultra potent capsaicin analog. Desensitization to resiniferatoxin is tested in clinical trials to treat neuropathic pain. 16th century CE Paracelsus expounded the concept of dose response in his Third Defense, where he stated that "Solely the dose determines that a thing is not a poison." This was used to defend his use of inorganic substances in medicine, as outsiders frequently criticized Paracelsus' chemical agents as too toxic to be used as therapeutic agents. Paracelsus discovered that the alkaloids in opium are far more soluble in alcohol than in water. Having experimented with various opium concoctions, Paracelsus came across a specific tincture of opium that was of considerable use in reducing pain. He called this preparation laudanum. For over a thousand years South American indigenous peoples have chewed Erythroxylon coca leaves, which contain alkaloids such as cocaine. Coca leaf remains have been found with ancient Peruvian mummies. There is also evidence coca leaves were used as an anesthetic. In 1569, Spanish botanist Nicolás Monardes described the indigenous peoples' practice of chewing a mixture of tobacco and coca leaves to induce "great contentment". 1400s Nicotine (Tobacco) 18th century CE In 1778 John Mudge created the first inhaler devices. In 1747, James Lind, surgeon of HMS Salisbury, conducted the first clinical trial ever recorded, in which he studied how citrus fruit could cure scurvy. Modern 19th century CE In the 1830s chemist Justus von Liebig began the synthesis of organic molecules, stating that "The production of all organic substances no longer belongs just to living organisms." In 1832 he produced chloral hydrate, the first synthetic sleeping drug. In 1833 French chemist Anselme Payen was the first to discover an enzyme, diastase. In 1834, François Mothes and Joseph Dublanc created a method to produce a single-piece gelatin capsule that was sealed with a drop of gelatin solution. In 1853 Alexander Wood was the first physician to use a hypodermic needle to dispense drugs via injection. In 1858 Dr. M. Sales Giron invented the first pressurized inhaler. Amphetamine was first synthesized in 1887 in Germany by Romanian chemist Lazăr Edeleanu, who named it phenylisopropylamine; its stimulant effects remained unknown until 1927, when it was independently resynthesized by Gordon Alles and reported to have sympathomimetic properties. Shortly after amphetamine, methamphetamine was synthesized from ephedrine in 1893 by Japanese chemist Nagai Nagayoshi. Three decades later, in 1919, methamphetamine hydrochloride was synthesized by pharmacologist Akira Ogata via reduction of ephedrine using red phosphorus and iodine. 20th century CE In 1901 Jōkichi Takamine isolated and synthesized the first hormone, adrenaline. In 1907 Alfred Bertheim synthesized arsphenamine, the first man-made antibiotic. In 1927 Erik Rotheim patented the first aerosol spray can. In 1933 Robert Pauli Scherer created a method to develop softgels. William Roberts' studies of penicillin were continued by Alexander Fleming, who in 1928 concluded that penicillin had an antibiotic effect. In 1944 Howard Florey and Ernst Boris Chain mass-produced penicillin. In 1948 Raymond P. Ahlquist published his seminal work dividing adrenoceptors into α- and β-adrenoceptor subtypes, which allowed a better understanding of drugs' mechanisms of action. In 1987, after the Montreal Protocol, CFC inhalers were phased out and HFA inhalers replaced them. 
In 1987 the CRISPR sequences were discovered by Yoshizumi Ishino; in the following century the technique would be used for genome editing. 21st century CE The 21st century began with the publication of the first human genome sequences by the Human Genome Project on 12 February 2001. This allowed drug development and research to shift from the traditional approach to drug discovery, in which molecules were isolated from plants or animals, or new molecules were created, and then tested for usefulness in treating human illness, to pharmacogenomics, the study of how genes respond to drugs. Another field that benefited from the Human Genome Project is pharmacogenetics, the study of inherited genetic differences in drug metabolic pathways that can affect individual responses to drugs, in terms of both therapeutic and adverse effects. Study of the human genome has also made it possible to identify which genes are responsible for illnesses, to develop drugs for rare diseases, and to treat illness through gene therapy. In 2015 a simplified form of CRISPR editing using Cas9 was applied in humans, and an even simpler enzyme, Cas12a, which in bacteria helps defend against viral genetic damage, was also described. These advances are improving personalized medicine and enabling precision medicine. * MA = Monoclonal antibody SM = Small molecule ACT = Adoptive cell transfer See also List of drugs Lists of molecules History of medicine List of pharmaceutical laboratories by year of foundation Lists of diseases by year of discovery Discovery and development of beta2 agonists Pharmacopoeia Edwin Smith Papyrus De Materia Medica Shennong Ben Cao Jing The Canon of Medicine The Book of Healing References External links The Canon of Medicine (text) Year Medical history-related lists
List of drugs by year of discovery
[ "Chemistry", "Biology" ]
1,900
[ "Life sciences industry", "Medicinal chemistry", "Drug discovery" ]
52,136,524
https://en.wikipedia.org/wiki/Papagayo%20Jet
The Papagayo Jet, also referred to as the Papagayo Wind or the Papagayo Wind Jet, are strong intermittent winds that blow approximately 70 km north of the Gulf of Papagayo, after which they are named. The jet winds travel southwest from the Caribbean and the Gulf of Mexico to the Pacific Ocean through a pass in the Cordillera mountains at Lake Nicaragua. The jet follows the same path as the northeast trade winds in this region; however, due to a unique combination of synoptic scale meteorology and orographic phenomena, the jet winds can reach much greater speeds than their trade wind counterparts. That is to say, the winds occur when cold high-pressure systems from the North American continent meet warm moist air over the Caribbean and Gulf of Mexico, generating winds that are then funneled through a mountain pass in the Cordillera. The Papagayo Jet is also not unique to this region. There are two other breaks in the Cordillera where this same phenomenon occurs, one at the Chivela Pass in Mexico and another at the Panama Canal, producing the Tehuano (Tehuantepecer) and the Panama jets respectively. The Papagayo Jet also induces mesoscale meteorology phenomena that influence the pacific waters hundreds of kilometers off the Nicaraguan and Costa Rican shores. When the jet wind surges, it creates cyclonic and anticyclonic eddies, Ekman transport, and upwelling that contribute to the creation of the Costa Rica Dome off the western coast of Central America in the Western Hemisphere Warm Pool (WHWP). The relatively cold, nutrient-rich waters of the dome, in comparison to the surrounding WHWP, create an ideal habitat for a number of species making the Papagayo Wind Jet important for biodiversity in the Eastern Tropical Pacific. Formation In North and Central America, during the Northern Hemisphere winter, high-pressure systems are created between the equator and 35th parallels north via atmospheric circulation. Air near the equator is warmed by the sun. This heated air is more buoyant than colder air so it rises and is then pushed poleward by more air rising from below. Once the air reaches northern latitudes it begins to cool and as a result it falls back towards to the Earth's surface. As the air is falling it exerts more pressure downward on the surface, creating a high-pressure systems. This cold, high-pressure air mass then travels equatorward. Air masses repeatedly move in this loop, but due to the Coriolis force, this convection is not perfectly aligned south to north. In actuality, the air moves clockwise in the Northern Hemisphere, as it moves from the equator to higher latitudes and then back to the equator again. The air travelling clockwise off the North American continent is cold and dense, with a high pressure. As it moves southwest over the Caribbean and the Gulf of Mexico, it meets warm, moist air with a comparatively low pressure. This establishes a dramatic pressure gradient, causing the cold, high-pressure air to flow quickly into the low-pressure area. This is analogous to air flowing rapidly out of a balloon when the neck of the balloon is left open. The air in the balloon has a higher pressure than the surrounding air so the air flows out of the balloon until the pressure inside and outside of the balloon is equal. If Central America were topographically flat, the air would flow uninterrupted from the Caribbean to the Pacific Ocean; however, the Cordillera mountains, which run along the west coast of Central America, block this flow. 
As a result, the air is funneled into a narrow mountain pass near Lake Nicaragua and the Gulf of Papagayo, creating the Papagayo Jet. Again, the balloon example serves as an analogy of how the Papagayo Jet forms; the air moving out of the balloon cannot escape all at once because there is only a small opening that allows for the release of air. The narrow opening of the balloon facilitates the creation of a wind because the air velocity increases through the balloon neck. Like the wind blowing through the balloon neck, the Papagayo winds reach high speeds as they travel through the break in the Cordillera. For context, Papagayo jet winds have mean speeds of and can reach speeds of up to , in comparison with average trade wind speeds of 25 km/h. Once the Papagayo Jet winds reach the Pacific Ocean they slow considerably and merge with the trade winds. Papagayo wind jet surges can occur intermittently every few weeks and last several days during the Northern Hemisphere winter. The jet is most prominent in the winter months because the pressure gradient is largest between the two air masses during this time of the year. The greater the difference in temperature between the two air masses, the faster the air will flow from an area of high pressure into an area of low pressure. In the spring, summer, and fall months the air mass from the North American continent is much warmer so the resultant air flow is less dramatic and the wind speeds are not as high. In sum, wind speeds in the Papagayo Jet will be high through the months of November to March, peaking in February, then they will be reduced from April to August, and finally diminish completely in September. Influence on the Costa Rica Dome The Papagayo Jet winds are strong enough to influence the ocean waters off the west coast of Central America, namely being one of the factors responsible for the Costa Rica Dome. The Costa Rica Dome is a roughly circular area of anomalously cold water in the eastern tropical Pacific. It has a diameter of approximately 300-500 kilometers and is centered approximately 300 kilometers west of the Gulf of Papagayo. The waters surrounding the dome (known as the Western Hemisphere Warm Pool) are considerably warmer due to heating from the sun, given the region's proximity to the equator. The existence of the Costa Rica Dome can be attributed to a multitude of mesoscale oceanic effects; however, the Papagayo Jet plays a considerable role in the size, movement, and continued existence of the dome throughout the year. As the Papagayo winds blow during the winter months they cool the surface ocean water in their path causing the extension of the Costa Rica Dome east (from 300 to approximately 1000 kilometers in diameter) to the Nicaraguan and Costa Rican coastlines. The mechanism for this cooling is explained by the influence of the Papagayo winds on the ocean surface currents. As the winds blow southwest over the Pacific they create cyclonic and anticyclonic coastal eddies on the water surface due to Ekman pumping. These coastal eddies generate the upwelling of cold water from greater ocean depths where the rising cold water then mixes with the warmer water near the surface and subsequently lowers sea surface temperatures. Therefore, the Papagayo Jet indirectly cools the coastal waters off the shores of Nicaragua and Costa Rica, extending the Costa Rica Dome. During the winter months, the coastal eddies and by extension the Papagayo Jet, are thought to be the primary drivers of the dome. 
Model simulations indicate that without the Papagayo Jet the Costa Rica Dome would not grow to such a large extent and may not even persist year-round. Impact on Regional Biodiversity The Papagayo Jet is an important meteorological phenomenon when considering ocean biodiversity in the eastern tropical Pacific. The jet plays a key role in lowering sea surface temperatures, through the influence on the Costa Rica Dome. The movement and growth of the dome is caused by the seasonal variability of the jet where the annual upwelling and mixing caused by the Papagayo Jet during dome extension allows for the transport of nutrient-rich cold waters to the surface. If the jet was a permanent feature (and by extension, the dome was also permanent) there would be no seasonal transport of nutrients via cold water upwelling. Indirect evidence of this nutrient transport can be seen in satellite imagery showing increased chlorophyll production in the surface waters directly under the path of the jet. The dome has also shown to be an area with increased zooplankton biomass as well as an area inhabited by blue whales who seem to follow the dome as it migrates in the eastern tropical Pacific waters. See also Hadley Cell References Winds Atmospheric dynamics
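The chain from wind surge to upwelling described above can be put into rough numbers using the standard bulk formula for wind stress and the textbook Ekman transport and Ekman pumping relations. In the sketch below the wind speed is within the range quoted in this article, while the drag coefficient, air and seawater densities, latitude, and jet half-width are conventional or assumed values rather than figures given here, so the output should be read as an order-of-magnitude estimate only.

```python
import math

RHO_AIR = 1.2        # kg/m^3, near-surface air density (assumed)
RHO_SEA = 1025.0     # kg/m^3, surface seawater density (assumed)
C_DRAG = 1.3e-3      # bulk drag coefficient (conventional value)
OMEGA = 7.292e-5     # Earth's rotation rate, rad/s

def coriolis(lat_deg):
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

wind_speed = 20.0                           # m/s, a strong Papagayo surge
tau = RHO_AIR * C_DRAG * wind_speed ** 2    # wind stress, N/m^2
f = coriolis(10.0)                          # roughly the latitude of the Gulf of Papagayo

ekman_transport = tau / (RHO_SEA * f)       # m^2/s of volume transport per metre of jet width
jet_half_width = 100e3                      # m, assumed scale over which the stress falls off
w_ekman = tau / (RHO_SEA * f * jet_half_width)   # crude Ekman pumping velocity

print(f"wind stress            : {tau:.2f} N/m^2")
print(f"Ekman transport        : {ekman_transport:.0f} m^2/s per metre of jet width")
print(f"Ekman pumping velocity : {86400 * w_ekman:.0f} m/day (order of magnitude)")
```

Even as a crude estimate, vertical velocities of metres to tens of metres per day along the jet edges are enough to lift cooler, nutrient-rich water towards the surface, consistent with the cooling and chlorophyll signals described above.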
Papagayo Jet
[ "Chemistry" ]
1,684
[ "Atmospheric dynamics", "Fluid dynamics" ]
52,138,025
https://en.wikipedia.org/wiki/Regulation%20of%20transcription%20in%20cancer
Generally, in progression to cancer, hundreds of genes are silenced or activated. Although silencing of some genes in cancers occurs by mutation, a large proportion of carcinogenic gene silencing is a result of altered DNA methylation (see DNA methylation in cancer). DNA methylation causing silencing in cancer typically occurs at multiple CpG sites in the CpG islands that are present in the promoters of protein coding genes. Altered expression of microRNAs also silences or activates many genes in progression to cancer (see microRNAs in cancer). Altered microRNA expression occurs through hyper/hypo-methylation of CpG sites in CpG islands in the promoters controlling transcription of the microRNAs. Silencing of DNA repair genes through methylation of CpG islands in their promoters appears to be especially important in progression to cancer (see methylation of DNA repair genes in cancer).
CpG islands in promoters
In humans, about 70% of promoters located near the transcription start site of a gene (proximal promoters) contain a CpG island. CpG islands are generally 200 to 2000 base pairs long, have a C:G base pair content >50%, and are regions of DNA in which a cytosine nucleotide followed by a guanine nucleotide occurs frequently in the linear sequence of bases along the 5′ → 3′ direction. Genes may also have distant promoters (distal promoters), and these frequently contain CpG islands as well. An example is the promoter of the DNA repair gene ERCC1, where the CpG island-containing promoter is located about 5,400 nucleotides upstream of the coding region of the ERCC1 gene. CpG islands also occur frequently in promoters of functional noncoding RNAs such as microRNAs.
Transcription silencing due to methylation of CpG islands
In humans, DNA methylation occurs at the 5′ position of the pyrimidine ring of cytosine residues within CpG sites to form 5-methylcytosines. The presence of multiple methylated CpG sites in the CpG island of a promoter causes stable inhibition (silencing) of the gene. Silencing of transcription of a gene may be initiated by other mechanisms, but this is often followed by methylation of CpG sites in the promoter CpG island to make the silencing stable.
Transcription silencing/activation in cancers
In cancers, loss of expression of genes occurs about 10 times more frequently by transcription silencing (caused by promoter hypermethylation of CpG islands) than by mutations. As Vogelstein et al. point out, in a colorectal cancer there are usually about 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations. In contrast, in colon tumors, about 600 to 800 CpG islands in gene promoters are heavily methylated, while the same CpG islands are not methylated in the adjacent normal-appearing colonic mucosa. Using gene set enrichment analysis, 569 out of 938 gene sets were hypermethylated and 369 were hypomethylated in cancers. Hypomethylation of CpG islands in promoters results in increased transcription of the genes or gene sets affected. One study listed 147 specific genes with colon cancer-associated hypermethylated promoters and 27 with hypomethylated promoters, along with the frequency with which these hyper/hypo-methylations were found in colon cancers. At least 10 of those genes had hypermethylated promoters in nearly 100% of colon cancers. The study also indicated 11 microRNAs whose promoters were hypermethylated in colon cancers at frequencies between 50% and 100% of cancers.
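The definition given under "CpG islands in promoters" above (a few hundred to a couple of thousand base pairs, more than 50% C:G content, and frequent CpG dinucleotides) is easy to turn into a simple sequence scan. The sketch below is only an illustration of that definition; the 200 bp window, the step size, and the commonly used observed/expected CpG cutoff of 0.6 are conventional choices, not values taken from this article.

```python
# Illustrative scan for CpG-island-like windows: GC content above 50% and a
# high observed/expected CpG ratio. Window size, step, and cutoffs are
# conventional illustrative choices.
def cpg_island_like(seq, min_gc=0.5, min_obs_exp=0.6):
    """Check whether a DNA window looks CpG-island-like."""
    seq = seq.upper()
    if not seq:
        return False
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    gc_fraction = (c + g) / n
    observed_cpg = seq.count("CG")
    expected_cpg = (c * g) / n          # commonly used expectation for CpG count
    obs_exp = observed_cpg / expected_cpg if expected_cpg else 0.0
    return gc_fraction > min_gc and obs_exp > min_obs_exp

def scan(seq, window=200, step=50):
    """Yield start positions of windows meeting the CpG-island-like criteria."""
    for start in range(0, max(len(seq) - window + 1, 1), step):
        if cpg_island_like(seq[start:start + window]):
            yield start

# Toy usage with a short artificial sequence:
toy = ("CG" * 60 + "AT" * 40) * 2
print(list(scan(toy)))
```

Real CpG-island annotation pipelines use more careful criteria, but the same three quantities (length, GC fraction, and CpG density) are at their core.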
MicroRNAs (miRNAs) are small endogenous RNAs that pair with sequences in messenger RNAs to direct post-transcriptional repression. On average, each microRNA represses the expression of several hundred target genes. Thus microRNAs silenced by promoter hypermethylation may allow enhanced expression of hundreds to thousands of genes in a cancer.
Transcription inhibition and activation by nuclear microRNAs
For more than 20 years, microRNAs have been known to act in the cytoplasm, where they degrade or repress the messenger RNAs of specific target genes (see microRNA history). However, recently, Gagnon et al. showed that as many as 75% of microRNAs may be shuttled back into the nucleus of cells. Some nuclear microRNAs have been shown to mediate transcriptional gene activation or transcriptional gene inhibition.
DNA repair genes with hyper/hypo-methylated promoters in cancers
DNA repair genes are frequently repressed in cancers due to hypermethylation of CpG islands within their promoters. In head and neck squamous cell carcinomas at least 15 DNA repair genes have frequently hypermethylated promoters; these genes are XRCC1, MLH3, PMS1, RAD51B, XRCC3, RAD54B, BRCA1, SHFM1, GEN1, FANCE, FAAP20, SPRTN, SETMAR, HUS1, and PER1. About seventeen types of cancer are frequently deficient in one or more DNA repair genes due to hypermethylation of their promoters. As summarized in one review article, promoter hypermethylation of the DNA repair gene MGMT occurs in 93% of bladder cancers, 88% of stomach cancers, 74% of thyroid cancers, 40%–90% of colorectal cancers, and 50% of brain cancers. Promoter hypermethylation of LIG4 occurs in 82% of colorectal cancers. The same review indicates that promoter hypermethylation of NEIL1 occurs in 62% of head and neck cancers and in 42% of non-small-cell lung cancers; promoter hypermethylation of ATM occurs in 47% of non-small-cell lung cancers; promoter hypermethylation of MLH1 occurs in 48% of squamous cell carcinomas; and promoter hypermethylation of FANCB occurs in 46% of head and neck cancers. On the other hand, the promoters of two genes, PARP1 and FEN1, were hypomethylated, and these genes were over-expressed in numerous cancers. PARP1 and FEN1 are essential genes in the error-prone and mutagenic DNA repair pathway microhomology-mediated end joining. When this pathway is over-active, the excess mutations it causes can lead to cancer. PARP1 is over-expressed in tyrosine kinase-activated leukemias, in neuroblastoma, in testicular and other germ cell tumors, and in Ewing's sarcoma. FEN1 is over-expressed in the majority of cancers of the breast, prostate, stomach, pancreas, and lung, as well as in neuroblastomas.
DNA damage appears to be the primary underlying cause of cancer. If accurate DNA repair is deficient, DNA damage tends to accumulate. Such excess DNA damage can increase mutational errors during DNA replication due to error-prone translesion synthesis. Excess DNA damage can also increase epigenetic alterations due to errors during DNA repair. Such mutations and epigenetic alterations can give rise to cancer (see malignant neoplasms). Thus, CpG island hyper/hypo-methylation in the promoters of DNA repair genes is likely central to progression to cancer.
See also
Eukaryotic transcription
Gene expression
Transcriptional regulation
Cancer epigenetics
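The hyper- and hypomethylation frequencies summarized above come from comparing methylation levels in tumors against normal tissue. The sketch below shows one minimal way such a promoter-level comparison could be coded from methylation beta values; the function name, the toy numbers, and the 0.2 difference threshold are hypothetical illustrative choices, not the method of the studies cited.

```python
# Minimal sketch: flagging a promoter as hyper- or hypomethylated in tumors.
# Beta values (fraction methylated, 0-1) and the 0.2 threshold are arbitrary
# illustrative choices, not parameters from the studies discussed above.
from statistics import mean

def classify_promoter(tumor_betas, normal_betas, delta=0.2):
    """Return 'hyper', 'hypo', or 'unchanged' from the mean beta difference."""
    diff = mean(tumor_betas) - mean(normal_betas)
    if diff >= delta:
        return "hyper"
    if diff <= -delta:
        return "hypo"
    return "unchanged"

# Toy example: a promoter heavily methylated in tumor but not in normal mucosa.
print(classify_promoter([0.72, 0.68, 0.75], [0.08, 0.11, 0.06]))  # -> 'hyper'
```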
Regulation of transcription in cancer
[ "Chemistry", "Biology" ]
1,579
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
25,148,935
https://en.wikipedia.org/wiki/Schoenflies%20problem
In mathematics, the Schoenflies problem or Schoenflies theorem of geometric topology is a sharpening of the Jordan curve theorem by Arthur Schoenflies. For Jordan curves in the plane it is often referred to as the Jordan–Schoenflies theorem.
Original formulation
The original formulation of the Schoenflies problem states that not only does every simple closed curve in the plane separate the plane into two regions, one (the "inside") bounded and the other (the "outside") unbounded, but also that these two regions are homeomorphic to the inside and outside of a standard circle in the plane. An alternative statement is that if C is a simple closed curve in the plane, then there is a homeomorphism f of the plane such that f(C) is the unit circle. Elementary proofs can be found in the literature. The result can first be proved for polygons, when the homeomorphism can be taken to be piecewise linear and the identity map off some compact set; the case of a continuous curve is then deduced by approximating by polygons. The theorem is also an immediate consequence of Carathéodory's extension theorem for conformal mappings.
If the curve is smooth then the homeomorphism can be chosen to be a diffeomorphism. Proofs in this case rely on techniques from differential topology. Although direct proofs are possible (starting for example from the polygonal case), existence of the diffeomorphism can also be deduced by using the smooth Riemann mapping theorem for the interior and exterior of the curve, in combination with the Alexander trick for diffeomorphisms of the circle and a result on smooth isotopy from differential topology.
Such a theorem is valid only in two dimensions. In three dimensions there are counterexamples such as Alexander's horned sphere. Although they separate space into two regions, those regions are so twisted and knotted that they are not homeomorphic to the inside and outside of a normal sphere.
Proofs of the Jordan–Schoenflies theorem
For smooth or polygonal curves, the Jordan curve theorem can be proved in a straightforward way. Indeed, the curve has a tubular neighbourhood, defined in the smooth case by the field of unit normal vectors to the curve or in the polygonal case by the points at distance less than ε from the curve. In a neighbourhood of a differentiable point on the curve, there is a coordinate change in which the curve becomes the diameter of an open disk. Taking a point not on the curve, a straight line aimed at the curve starting at the point will eventually meet the tubular neighbourhood; the path can be continued next to the curve until it meets the disk. It will meet it on one side or the other. This proves that the complement of the curve has at most two connected components. On the other hand, using the Cauchy integral formula for the winding number, it can be seen that the winding number is constant on connected components of the complement of the curve, is zero near infinity, and increases by 1 when crossing the curve. Hence the curve separates the plane into exactly two components, its "interior" and its "exterior", the latter being unbounded. The same argument works for a piecewise differentiable Jordan curve.
Polygonal curve
Given a simple closed polygonal curve in the plane, the piecewise linear Jordan–Schoenflies theorem states that there is a piecewise linear homeomorphism of the plane, with compact support, carrying the polygon onto a triangle and taking the interior and exterior of one onto the interior and exterior of the other.
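The winding-number step in the argument above can be made explicit. The display below is the standard Cauchy-integral expression for the winding number, quoted here only as an illustration; the symbols γ (the curve) and z0 (a point not on it) are introduced for that purpose.

$$
n(\gamma, z_0) \;=\; \frac{1}{2\pi i}\oint_{\gamma}\frac{dz}{z - z_0}
$$

This integer varies continuously with z0 off the curve, hence is constant on each connected component of the complement, vanishes for z0 far from the curve, and changes by ±1 when z0 crosses γ, which is exactly the behaviour used above to distinguish the two components.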
The interior of the polygon can be triangulated by small triangles, so that the edges of the polygon form edges of some of the small triangles. Piecewise linear homeomorphisms can be made up from special homeomorphisms obtained by removing a diamond from the plane and taking a piecewise affine map, fixing the edges of the diamond but moving one diagonal into a V shape. Compositions of homeomorphisms of this kind give rise to piecewise linear homeomorphisms of compact support; they fix the outside of a polygon and act in an affine way on a triangulation of the interior. A simple inductive argument shows that it is always possible to remove a free triangle (one for which the intersection with the boundary is a connected set made up of one or two edges), leaving a simple closed Jordan polygon. The special homeomorphisms described above, or their inverses, provide piecewise linear homeomorphisms which carry the interior of the larger polygon onto the polygon with the free triangle removed. Iterating this process, it follows that there is a piecewise linear homeomorphism of compact support carrying the original polygon onto a triangle. Because the homeomorphism is obtained by composing finitely many homeomorphisms of the plane of compact support, the piecewise linear homeomorphism in the statement of the piecewise linear Jordan–Schoenflies theorem has compact support.
As a corollary, it follows that any homeomorphism between simple closed polygonal curves extends to a homeomorphism between their interiors. For each polygon there is a homeomorphism of a given triangle onto the closure of its interior. The three homeomorphisms (the given homeomorphism between the two polygonal curves and the boundary restrictions of the two triangle homeomorphisms) yield a single homeomorphism of the boundary of the triangle. By the Alexander trick this homeomorphism can be extended to a homeomorphism of the closure of the interior of the triangle. Reversing this process, this homeomorphism yields a homeomorphism between the closures of the interiors of the polygonal curves.
Continuous curve
The Jordan–Schoenflies theorem for continuous curves can be proved using Carathéodory's theorem on conformal mapping. It states that the Riemann mapping between the interior of a simple Jordan curve and the open unit disk extends continuously to a homeomorphism between their closures, mapping the Jordan curve homeomorphically onto the unit circle. To prove the theorem, Carathéodory's theorem can be applied to the two regions on the Riemann sphere defined by the Jordan curve. This will result in homeomorphisms between their closures and the closed disks |z| ≤ 1 and |z| ≥ 1. The homeomorphisms from the Jordan curve to the circle will differ by a homeomorphism of the circle which can be extended to the unit disk (or its complement) by the Alexander trick. Composition with this homeomorphism will yield a pair of homeomorphisms which match on the Jordan curve and therefore define a homeomorphism of the Riemann sphere carrying the Jordan curve onto the unit circle.
The continuous case can also be deduced from the polygonal case by approximating the continuous curve by a polygon. The Jordan curve theorem is first deduced by this method. The Jordan curve is given by a continuous function on the unit circle. It and the inverse function from its image back to the unit circle are uniformly continuous. So, dividing the circle up into small enough intervals, there are points on the curve such that the line segments joining adjacent points lie close to the curve, say within ε. Together these line segments form a polygonal curve.
If it has self-intersections, these must also create polygonal loops. Erasing these loops results in a polygonal curve without self-intersections which still lies close to the curve; some of its vertices might not lie on the curve, but they all lie within a neighbourhood of the curve. The polygonal curve divides the plane into two regions, one bounded region U and one unbounded region V. Both U and V ∪ ∞ are continuous images of the closed unit disk. Since the original curve is contained within a small neighbourhood of the polygonal curve, the images of slightly smaller concentric open disks entirely miss the original curve, and their union excludes a small neighbourhood of the curve. One of the images is a bounded open set consisting of points around which the curve has winding number one; the other is an unbounded open set consisting of points of winding number zero. Repeating for a sequence of values of ε tending to 0 leads to a union of open path-connected bounded sets of points of winding number one and a union of open path-connected unbounded sets of winding number zero. By construction these two disjoint open path-connected sets fill out the complement of the curve in the plane.
Given the Jordan curve theorem, the Jordan–Schoenflies theorem can be proved as follows.
The first step is to show that a dense set of points on the curve are accessible from the inside of the curve, i.e. they are at the end of a line segment lying entirely in the interior of the curve. In fact, a given point on the curve is arbitrarily close to some point in the interior, and there is a smallest closed disk about that point which intersects the curve only on its boundary; those boundary points are close to the original point on the curve and by construction are accessible.
The second step is to prove that, given finitely many accessible points Ai on the curve connected to line segments AiBi in its interior, there are disjoint polygonal curves in the interior with vertices on each of the line segments such that their distance to the original curve is arbitrarily small. This requires tessellations of the plane by uniformly small tiles such that if two tiles meet they have a side or a segment of a side in common: examples are the standard hexagonal tessellation, or the standard brickwork tiling by rectangles or squares with common or stretcher bonds. It suffices to construct a polygonal path whose distance to the Jordan curve is arbitrarily small. Orient the tessellation so that no side of a tile is parallel to any AiBi. The size of the tiles can be taken arbitrarily small. Take the union of all the closed tiles containing at least one point of the Jordan curve. Its boundary is made up of disjoint polygonal curves. If the size of the tiles is sufficiently small, the endpoints Bi will lie in the interior of exactly one of the polygonal boundary curves. Its distance to the Jordan curve is less than twice the diameter of the tiles, so it is arbitrarily small.
The third step is to prove that any homeomorphism f between the curve and a given triangle can be extended to a homeomorphism between the closures of their interiors. In fact, take a sequence ε1, ε2, ε3, ... decreasing to zero. Choose finitely many points Ai on the Jordan curve Γ with successive points less than ε1 apart. Make the construction of the second step with tiles of diameter less than ε1 and take Ci to be the points on the polygonal curve Γ1 intersecting AiBi. Take the points f(Ai) on the triangle.
Fix an origin in the triangle Δ and scale the triangle to get a smaller one, Δ1, at a distance less than ε1 from the original triangle. Let Di be the points at the intersection of the radius through f(Ai) and the smaller triangle. There is a piecewise linear homeomorphism F1 of the polygonal curve onto the smaller triangle carrying Ci onto Di. By the polygonal Jordan–Schoenflies theorem it extends to a homeomorphism F1 between the closures of their interiors. Now carry out the same process for ε2 with a new set of points on the Jordan curve. This will produce a second polygonal path Γ2 between Γ1 and Γ. There is likewise a second triangle Δ2 between Δ1 and Δ. The line segments for the accessible points on Γ divide the polygonal region between Γ2 and Γ1 into a union of polygonal regions; similarly the radii for the corresponding points on Δ divide the region between Δ2 and Δ1 into a union of polygonal regions. The homeomorphism F1 can be extended to homeomorphisms between the different polygons, agreeing on common edges (closed intervals on line segments or radii). By the polygonal Jordan–Schoenflies theorem, each of these homeomorphisms extends to the interior of the polygon. Together they yield a homeomorphism F2 of the closure of the interior of Γ2 onto the closure of the interior of Δ2; F2 extends F1. Continuing in this way produces polygonal curves Γn and triangles Δn with a homeomorphism Fn between the closures of their interiors; Fn extends Fn − 1. The regions inside the Γn increase to the region inside Γ, and the triangles Δn increase to Δ. The homeomorphisms Fn patch together to give a homeomorphism F from the interior of Γ onto the interior of Δ. By construction it has limit f on the boundary curves Γ and Δ. Hence F is the required homeomorphism.
The fourth step is to prove that any homeomorphism between Jordan curves can be extended to a homeomorphism between the closures of their interiors. By the result of the third step, it is sufficient to show that any homeomorphism of the boundary of a triangle extends to a homeomorphism of the closure of its interior. This is a consequence of the Alexander trick. (The Alexander trick also establishes a homeomorphism between the solid triangle and the closed disk: the homeomorphism is just the natural radial extension of the projection of the triangle onto its circumcircle with respect to its circumcentre.)
The final step is to prove that, given two Jordan curves, there is a homeomorphism of the plane of compact support carrying one curve onto the other. In fact, each Jordan curve lies inside the same large circle and, in the interior of the large circle, radii join two fixed diagonally opposite points of the circle to each curve. Each configuration divides the plane into the exterior of the large circle, the interior of the Jordan curve, and the region between the two; the radii cut this last region into two bounded regions bounded by Jordan curves (each formed of two radii, a semicircle, and one of the halves of the Jordan curve). Take the identity homeomorphism of the large circle, piecewise linear homeomorphisms between the two pairs of radii, and homeomorphisms between the two pairs of halves of the Jordan curves given by linear reparametrization. The four homeomorphisms patch together on the boundary arcs to yield a homeomorphism of the plane given by the identity off the large circle and carrying one Jordan curve onto the other.
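The Alexander trick invoked in the fourth step admits an explicit formula. The version below, the radial (cone) extension, is standard and is quoted here only for illustration, with f denoting the given homeomorphism of the unit circle:

$$
F(x) \;=\; \begin{cases} |x|\, f\!\left(\dfrac{x}{|x|}\right), & x \neq 0,\\[4pt] 0, & x = 0, \end{cases}
$$

which defines a homeomorphism of the closed unit disk extending f. Transporting it to the solid triangle by the radial projection onto the circumcircle mentioned above gives the required extension of a boundary homeomorphism of the triangle.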
Smooth curve
Proofs in the smooth case depend on finding a diffeomorphism between the interior/exterior of the curve and the closed unit disk (or its complement in the extended plane). This can be achieved, for example, by using the smooth Riemann mapping theorem, for which a number of direct methods are available, for example through the Dirichlet problem on the curve or Bergman kernels. (Such diffeomorphisms will be holomorphic on the interior and exterior of the curve; more general diffeomorphisms can be constructed more easily using vector fields and flows.) Regarding the smooth curve as lying inside the extended plane or 2-sphere, these analytic methods produce maps, smooth up to the boundary, between the closures of the interior/exterior of the smooth curve and those of the unit circle. The two identifications of the smooth curve and the unit circle will differ by a diffeomorphism of the unit circle. On the other hand, a diffeomorphism f of the unit circle can be extended to a diffeomorphism F of the unit disk by the Alexander extension:
F(re^(iθ)) = r·e^(i[ψ(r)g(θ) + (1 − ψ(r))θ]),
where ψ is a smooth function with values in [0,1], equal to 0 near 0 and 1 near 1, and f(e^(iθ)) = e^(ig(θ)), with g(θ + 2π) = g(θ) + 2π. Composing one of the diffeomorphisms with the Alexander extension allows the two diffeomorphisms to be patched together to give a homeomorphism of the 2-sphere which restricts to a diffeomorphism on the closed unit disk and on the closure of its complement, carrying them onto the closures of the interior and exterior of the original smooth curve. By the isotopy theorem in differential topology, the homeomorphism can be adjusted to a diffeomorphism on the whole 2-sphere without changing it on the unit circle. This diffeomorphism then provides the smooth solution to the Schoenflies problem.
The Jordan–Schoenflies theorem can be deduced using differential topology. In fact it is an immediate consequence of the classification up to diffeomorphism of smooth oriented 2-manifolds with boundary. Indeed, the smooth curve divides the 2-sphere into two parts. By the classification each is diffeomorphic to the unit disk and, taking into account the isotopy theorem, they are glued together by a diffeomorphism of the boundary. By the Alexander trick, such a diffeomorphism extends to the disk itself. Thus there is a diffeomorphism of the 2-sphere carrying the smooth curve onto the unit circle.
On the other hand, the diffeomorphism can also be constructed directly using the Jordan–Schoenflies theorem for polygons and elementary methods from differential topology, namely flows defined by vector fields. When the Jordan curve is smooth (parametrized by arc length) the unit normal vectors give a non-vanishing vector field X0 in a tubular neighbourhood U0 of the curve. Take a polygonal curve in the interior of the curve, close to the boundary and transverse to the curve (at the vertices the vector field should be strictly within the angle formed by the edges). By the piecewise linear Jordan–Schoenflies theorem, there is a piecewise linear homeomorphism, affine on an appropriate triangulation of the interior of the polygon, taking the polygon onto a triangle. Take an interior point P in one of the small triangles of the triangulation. It corresponds to a point Q in the image triangle. There is a radial vector field on the image triangle, formed of straight lines pointing towards Q. This gives a series of lines in the small triangles making up the polygon. Each defines a vector field Xi on a neighbourhood Ui of the closure of the triangle.
Each vector field is transverse to the sides, provided that Q is chosen in "general position" so that it is not collinear with any of the finitely many edges in the triangulation. Translating if necessary, it can be assumed that P and Q are at the origin 0. On the triangle containing P the vector field can be taken to be the standard radial vector field. Similarly the same procedure can be applied to the outside of the smooth curve, after applying a Möbius transformation to map it into the finite part of the plane and ∞ to 0. In this case the neighbourhoods Ui of the triangles are given negative indices i. Take the vector fields Xi with a negative sign, pointing away from the point at infinity. Together U0 and the Ui's with i ≠ 0 form an open cover of the 2-sphere. Take a smooth partition of unity ψi subordinate to the cover by the Ui and set X = Σi ψi Xi. Then X is a smooth vector field on the 2-sphere vanishing only at 0 and ∞. It has index 1 at 0 and 1 at ∞. Near 0 the vector field equals the radial vector field pointing towards 0. If αt is the smooth flow defined by X, the point 0 is an attracting point and ∞ a repelling point. As t tends to +∞ the flow sends points to 0, while as t tends to −∞ points are sent to ∞. Replacing X by f⋅X, with f a smooth positive function, changes the parametrization of the integral curves of X but not the integral curves themselves. For an appropriate choice of f, equal to 1 outside a small annulus near 0, the integral curves starting at points of the smooth curve will all reach the smaller circle bounding the annulus at the same time s. The diffeomorphism αs therefore carries the smooth curve onto this small circle. A scaling transformation, fixing 0 and ∞, then carries the small circle onto the unit circle. Composing these diffeomorphisms gives a diffeomorphism carrying the smooth curve onto the unit circle.
Generalizations
There does exist a higher-dimensional generalization, due to Brown and, independently, Mazur, which is also called the generalized Schoenflies theorem. It states that, if an (n − 1)-dimensional sphere S is embedded into the n-dimensional sphere Sn in a locally flat way (that is, the embedding extends to that of a thickened sphere), then the pair (Sn, S) is homeomorphic to the pair (Sn, Sn−1), where Sn−1 is the equator of the n-sphere. Brown and Mazur received the Veblen Prize for their contributions. Both the Brown and Mazur proofs are considered "elementary" and use inductive arguments.
The Schoenflies problem can be posed in categories other than the topologically locally flat category: does a smoothly (piecewise-linearly) embedded (n − 1)-sphere in the n-sphere bound a smooth (piecewise-linear) n-ball? For n = 4, the problem is still open for both categories. See Mazur manifold. For n ≥ 5 the question in the smooth category has an affirmative answer, and follows from the h-cobordism theorem.
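For completeness, the flow αt used in the smooth-curve argument above is, by definition, the solution of the ordinary differential equation

$$
\frac{d}{dt}\,\alpha_t(x) \;=\; X\big(\alpha_t(x)\big), \qquad \alpha_0(x) = x,
$$

so the map αs appearing there is the time-s diffeomorphism of this flow; this is the standard definition and is recorded here only as a reminder.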
Schoenflies problem
[ "Mathematics" ]
4,511
[ "Mathematical theorems", "Homeomorphisms", "Theorems in topology", "Geometric topology", "Topology", "Differential topology", "Mathematical problems" ]