High-frequency vibrating screens are the most important screening machines primarily utilised in the mineral processing industry. They are used to separate feeds containing solid and crushed ores down to less than 200 μm in size, and are applicable to both perfectly wetted and dried feed. The frequency of the screen is mainly controlled by an electromagnetic vibrator which is mounted above and directly connected to the screening surface. Its high-frequency characteristics differentiate it from a normal vibrating screen. High-frequency vibrating screens usually operate at an inclined angle, traditionally varying between 0° and 25°, and can go up to a maximum of 45°. They operate with a low stroke and a frequency ranging from 1500 to 9000 RPM. The frequency of a high frequency screen can be fixed or variable. A variable-frequency screen is more versatile in tackling varied material conditions, such as differing particle size distributions and moisture contents, and achieves higher efficiency through incremental increases in frequency. G-force plays an important role in determining the specific screening capacity of a screen in terms of TPH per square metre, and it increases with the square of the frequency.
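To make the frequency dependence concrete, here is a minimal sketch of the standard simple-harmonic relation a = ω²r, where the amplitude r is half the stroke; the function name and the example speeds and stroke are illustrative assumptions, not values from the article.

```python
import math

def screen_g_force(rpm: float, stroke_mm: float) -> float:
    """Peak acceleration of the screen motion, expressed in g.

    Uses the simple-harmonic relation a = omega^2 * r, where the
    amplitude r is half the stroke (the throw).
    """
    omega = 2 * math.pi * rpm / 60.0      # angular frequency, rad/s
    r = (stroke_mm / 1000.0) / 2.0        # amplitude, m
    return omega ** 2 * r / 9.81          # peak acceleration divided by g

# G-force grows with the square of speed: doubling the RPM quadruples it.
print(round(screen_g_force(3000, 1.6), 1))   # ~8.1 g
print(round(screen_g_force(6000, 1.6), 1))   # ~32.2 g
```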
Pre-treatment of the feed is often required before the use of the high-frequency screen, as the apertures in the screen may become blocked easily.
High frequency screens have become more standardized and widely adopted in materials classification processes. They allow efficient cuts and fine separations, which can provide high purity and precise sizing control of the product (for fine particle sizes of roughly 0.074–1.5 mm). [ 1 ] Common industrial applications include dewatering of materials, processing of powder in coal, ores and minerals, wood pelleting, fractionated reclaimed asphalt pavement, and the food, pharmaceutical and chemical industries. Fineness of the products and system capacities vary over a huge range between different models, to satisfy individual application requirements. High frequency screens are also used effectively to process manufactured sand, for size segregation and for removal of silt finer than 75 microns. Removing fine particles calls for a high G-force, which is achieved by running at higher frequencies of around 5000 to 6000 RPM.
Most commonly, high frequency screens are used to separate reclaimed asphalt pavement (RAP) into multiple sizes and fractions, which allows producers to take full advantage of the recycled materials. RAP is a recycled material that is reused in new pavement construction; any recycled products are worth as much as what they replace. [ 2 ] Compared to conventional screening methods, which are often limited to producing unacceptable size fractions, high frequency screens can size material more efficiently to obtain a finer product. Another advantage of using high frequency screens to recycle reclaimed materials is that the aggregate and oil they recover can be reused, reducing the amount of new material required. The capital cost of the process is therefore lowered while a high quality of the asphalt mixture is maintained. Moreover, a high frequency screen applies intensive vibration directly onto the screen media; such high RPM allows asphalt pavement material to achieve higher stratification and separate at a faster rate. [ 3 ]
In mineral processing, such as for metal ores (e.g. iron, tin, tungsten, tantalum etc.) and nonferrous ores (e.g. lead, zinc, gold, silver and industrial sand etc.), high frequency screens have a crucial role. After the ore is comminuted, high frequency screens are used as classifiers which select material small enough to enter the next stage for recovery, for example the closed grinding circuit (e.g. a recirculating network with a ball mill). Firstly, the screen takes out the coarse particles and recirculates them back to the grinding mill. Secondly, the fine-grained material is unloaded in a timely manner, avoiding over-crushing caused by re-grinding. [ 4 ] Using high frequency screens in mineral processing makes it easy to meet the fineness requirement for recovery and achieves a smaller size separation, reducing the capacity needed in the comminution stage and the overall energy consumption. This improves the grade of the final product and provides better recovery and screening efficiency.
A high frequency vibrating screen achieves a high efficiency of separation and differs from its counterparts in that it breaks down the surface tension between particles. The high RPM also increases the stratification of material, so particles separate at a much higher rate; separation cannot take place without stratification. Furthermore, since the screen vibrates vertically, there is a 'popcorn effect' whereby the coarser particles are lifted higher and finer particles stay closer to the screen, which increases the probability of separation. In some high frequency vibrating screens the flow rate of the feed can be controlled; this is proportional to the 'popcorn effect': if the flow rate is lowered, the effect is also decreased. Limitations of the high frequency vibrating screen are that the fine screens are very fragile and are susceptible to becoming blocked very easily. Over time the separation efficiency will drop and the screen will need to be replaced. [ 5 ]
An alternative to the high frequency vibrating screen is the rotary sifter. A rotary sifter uses a screen which rotates in a circular motion, and the finer particles are sifted through the apertures. It is also generally used for finer separations, between 12 mm and 45 μm particle size. The rotary sifter is usually chosen based on the nature of the substance being separated: whey, yeast bread mix, cheese powder or fertilizers, for example. It is often preferred in the non-metallurgical industry and operates in a way that achieves a dust- and noise-free environment. Its limitation is that it cannot handle as high a capacity as the high frequency vibrating screen. Both types of equipment, however, achieve a high screening efficiency. [ 6 ]
A conventional, general design for a high frequency vibrating screen consists of a mainframe, screen web, eccentric block, electric motor, rub spring and coupler. [ 7 ] The two most common types of vibrators which induce the high frequency vibrations are hydraulic and electric vibrators; [ 8 ] the electric vibrators are either electric motors or solenoids. [ 6 ] Common designs for screening decks are either single or double deck. In addition, another feature of high frequency vibrating screens is the static side plates, which provide benefits such as a smaller support structure, less noise, longer life, and hence less maintenance. In industry, the screens are operated at a tilted angle of up to 40°. The high frequency (1500–7200 RPM) and low amplitude (1.2–2.0 mm) characteristics lead to a vertical-elliptical movement that rapidly transports oversized particles down the screen, [ 9 ] creating a thin bed of particles that improves the efficiency and capacity of the screen.
Stationary screens are typically used in plants and not moved around. In the mineral processing industry, equipment often has to be moved to different sites depending on the jobs taken up by a company. Mobile screens are thus another viable design for companies who have to move their equipment often. These include wheel-mounted and track-mounted plants which allow for easy transportation and movement of the screens.
The screening performance is affected significantly by various factors such as equipment capacity and angle of inclination, in which the performance can be measured by screening efficiency and flux of the product. [ 5 ]
Flux is defined as the amount of a desired component (undersize material) that is carried through the screening media from the feed, per unit time per unit area. [ 12 ] Screening efficiency is expressed as the ratio of the amount of material that actually passes through the apertures, divided by the amount in the feed that theoretically should pass. Commercially perfect screening is considered to be 95% efficient, [ 6 ] provided the process is operated with an appropriate feed concentration and particle sizes. Generally, a suitable particle size difference between sieve cut and feed should be no more than 30%. [ 5 ] High screening efficiency reduces the amount of already-qualified (correctly sized) grains in the circulating load, thus increasing the processing capacity of the mill.
The equipment capacity is almost directly proportional to screen width, while increasing the length provides additional chances for passage and usually leads to increased transmission and efficiency. In general, the standard screen length should be two to three times the width. [ 5 ] However, certain special situations, such as restricted space, may require a different design.
The angle of inclination can be designed based on the desired mineral grain. For example, the wet sieving angle is generally around 25 ± 2° for a concentrator. Increasing the slope of a screen effectively reduces the aperture by the cosine of the angle of inclination. [ 5 ] At the same time, the material also moves across the screen faster, which leads to more rapid stratification. [ 5 ] [ 6 ] However, performance tends to decrease beyond a certain point: when the slope of the deck is too high, most particles remain in the oversize stream instead of passing through the apertures, yielding lower flux.
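As a small worked illustration of the cosine rule just mentioned, the sketch below computes the effective (projected) aperture of an inclined deck; the function and values are hypothetical.

```python
import math

def effective_aperture(aperture_mm: float, incline_deg: float) -> float:
    """Projected aperture presented to a particle falling vertically onto
    a deck inclined at the given angle: a_eff = a * cos(theta)."""
    return aperture_mm * math.cos(math.radians(incline_deg))

# A 1.0 mm aperture on a 25-degree deck behaves like a ~0.91 mm opening.
print(round(effective_aperture(1.0, 25), 3))   # 0.906
```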
The purpose of the vibration is to introduce particles to the gaps in the screen repeatedly. The frequency of the screen must be high enough to prevent particles from blocking the apertures, and the maximum height of the particle trajectory should occur when the screen surface is at its lowest point. Based on this principle, there is an optimum frequency and amplitude of vibration. [ 5 ]
Transmission refers to the fraction of desired particles that pass through the apertures in the screen. At low frequency, screening efficiency is high but blinding is severe. Blinding decreases as frequency increases, but the particles then have more difficulty passing through the apertures. When designing a high frequency vibrating screen, an optimum combination of frequency and amplitude must therefore be chosen, [ 5 ] depending on the specific application.
The separation efficiency is simply a measure of the amount of material removed by the screen compared to the theoretical amount that should have been removed. Screen efficiency can be obtained using different equations, depending on whether the desired product is the oversize or the undersize fraction from the screen.
The screen efficiency based on the oversize (E_o) is given by:

E_o = [Q_ms(o) × (1 − M_u(o))] / [Q_ms(f) × (1 − M_u(f))]
The screen efficiency based on the undersize (E_u) is then given by:

E_u = [Q_ms(u) × M_u(u)] / [Q_ms(f) × M_u(f)]
where Q_ms(o) is the mass flow rate of solids in the screen overflow, Q_ms(f) is the mass flow rate of solids in the feed, Q_ms(u) is the mass flow rate of solids in the screen underflow, M_u(o) is the mass fraction of undersize in the overflow, M_u(f) is the mass fraction of undersize in the feed, and M_u(u) is the mass fraction of undersize in the underflow. [ 6 ]
The overall efficiency (E) is given by:

E = E_o × E_u
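The sketch below evaluates these three efficiency formulas for a hypothetical, mass-balanced example; the flow rates and fractions are invented for illustration only.

```python
def screen_efficiencies(q_f, q_o, q_u, mu_f, mu_o, mu_u):
    """Two-product screen efficiencies from solids mass flow rates (q_*)
    and undersize mass fractions (mu_*) of feed (f), overflow (o) and
    underflow (u), following the formulas above."""
    e_o = (q_o * (1 - mu_o)) / (q_f * (1 - mu_f))  # oversize recovered to overflow
    e_u = (q_u * mu_u) / (q_f * mu_f)              # undersize recovered to underflow
    return e_o, e_u, e_o * e_u                     # overall E = E_o * E_u

# Hypothetical example: 100 t/h feed containing 40% undersize.
e_o, e_u, e = screen_efficiencies(q_f=100, q_o=62, q_u=38,
                                  mu_f=0.40, mu_o=0.06, mu_u=0.95)
print(f"E_o = {e_o:.2f}, E_u = {e_u:.2f}, E = {e:.2f}")  # 0.97, 0.90, 0.88
```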
In the process of sizing minerals there are often rules of thumb that need to be followed in order to achieve maximum efficiency in the separation.
The selection of the screen type is based on the materials that the equipment will be used to process. A significant problem occurs when the screen is not suitable for the material fed to it: the material blinds the apertures, and regular maintenance is required. Different types of screens have been developed to counter this problem. An example is the "self-cleaning" wire; these wires are free to vibrate, which increases resistance to blinding, as particles are shaken off the wires and apertures. However, there is a trade-off with screening efficiency. [ 6 ]
The high frequency vibrating screen will often be used as a secondary screen, since its purpose is to separate the finer minerals. This not only ensures good separation efficiency, it also helps to extend the life-time of the screen. Blinding can occur significantly if particle sizes are not within the screen's design criteria. [ 5 ]
Another problem that is often encountered is that particles clump together due to moisture. This clumping results in an undesired increase in effective particle size, so that the clumped particles cannot pass through the apertures into the product stream. It is recommended that screening at less than around 5 mm aperture size be performed on perfectly dry material. [ 6 ] A heated screen deck may be used to evaporate the moisture in the feed; it also breaks the surface tension between the screen wire and the particles. An alternative is to run the feed through a dryer before it enters the high frequency vibrating screen.
High frequency vibrating screens are widely used in many industrial processes, so large quantities of waste product are released into the environment. It is important that these waste streams are treated, since untreated waste will damage the environment over a sustained period of time.
An established post-treatment system is classification processing. In this system, the waste streams are separated into different types of waste materials, classified as recyclable, hazardous, organic or inorganic. Generally, waste materials are separated using mechanical separation and manual separation. [ 13 ] [ 14 ] Mechanical separation is used for separating metals and other materials that may be harmful to the environment, and also to prepare the waste stream for manual separation. Manual separation has two types of sorting: positive sorting and negative sorting. [ 13 ] [ 14 ] Positive sorting collects reusable waste such as recyclable and organic materials, while negative sorting collects unusable waste such as hazardous and inorganic materials. After this separation process, the recyclable materials are transferred for reuse. The organic wastes are often treated using thermochemical processes (e.g. combustion, pyrolysis etc.) or biological treatment (microbial decomposition). [ 13 ] [ 14 ] The products obtained from these waste organic materials take the form of refuse-derived fuel (RDF). RDF can be used in many ways to generate electricity, or even used along with traditional fuels in coal power plants. The remaining hazardous and unwanted inorganic wastes are transferred to landfill for disposal. These post-treatment processes are crucial to sustaining the environment.
Research on high frequency screens has led to new developments which enhance the operation and performance of the equipment. These include the stacking of up to 5 individual screen decks placed on top of one another and operating in parallel. A divider system splits the feed slurry to each Stack Sizer screen, then to each screen deck on the machine. Each screen deck has an undersize and an oversize collection pan, each feeding a common outlet. Stacking the machines thus allows more production while using less space. [ 15 ] Another new development is the fabrication of Polyweb urethane screen surfaces that have openings as fine as 45 μm and open areas of 35%–45%, enabling the screen to separate finer particles. These screens can be used for both wet and dry applications, and urethane formulation is still being refined. Research and development thus continues to be invested in high frequency screening equipment, to improve overall separation efficiency and to lower costs. [ 16 ]
To further optimize the performance of high frequency vibrating equipment, a "variable speed" hydraulic vibrator has been developed to drive the screen decks. It utilizes fluid hydraulic force, converted into rotary power, to generate high frequency vibration. [ 17 ] This modification allows equipment to operate at a higher frequency range, up to 8200 RPM, compared to conventional electric vibrators. Special electric vibrator motors with variable frequency ranging from 3000 to 9000 RPM are also used, and have proved to be more efficient and trouble-free, with less maintenance. The induced vibration also creates excellent conditions for separating finer particles and improves the contacting probability of the material. Another variation that can be applied to the equipment is the "rotary tensioning system", which provides a quicker screen media change. [ 10 ] A single piece of equipment can therefore serve multiple applications, since different feed sizes can be handled by replacing the screens with very little downtime, improving the economic benefits of plants. One further important change is the multi-slope deck, which makes the high frequency screen more efficient, with incremental gains in throughput and efficiency in the same screening area. | https://en.wikipedia.org/wiki/High-frequency_vibrating_screens |
A high-integrity pressure protection system (HIPPS) is a type of safety instrumented system (SIS) designed to prevent over-pressurization of a plant, such as a chemical plant or oil refinery . The HIPPS will shut off the source of the high pressure before the design pressure of the system is exceeded, thus preventing loss of containment through rupture ( explosion ) of a line or vessel. Therefore, a HIPPS is considered a barrier between a high-pressure and a low-pressure section of an installation. [ 1 ]
In traditional systems over-pressure is dealt with through relief systems . A relief system will open an alternative outlet for the fluids in the system once a set pressure is exceeded, to avoid further build-up of pressure in the protected system. This alternative outlet generally leads to a flare or venting system to safely dispose of the excess fluids. A relief system aims at removing any excess inflow of fluids for safe disposal, whereas a HIPPS aims at stopping the inflow of excess fluids and containing them in the system.
Conventional relief systems have disadvantages such as release of (flammable and toxic) process fluids or their combustion products in the environment and often a large footprint of the installation. With increasing environmental awareness, relief systems are not always an acceptable solution. However, because of their simplicity, relatively low cost and wide availability, conventional relief systems are still often applied.
HIPPS provides a solution to protect equipment in cases where a conventional relief system is not acceptable or practical, for example because of environmental constraints or the footprint and cost of a full relief and flare installation.
HIPPS is an instrumented safety system that is designed and built in accordance with the IEC 61508 and IEC 61511 standards.
The international standards IEC 61508 and 61511 refer to safety functions and Safety Instrumented Systems (SIS) when discussing a device to protect equipment, personnel and environment. Older standards use terms like safety shutdown systems , emergency shutdown systems or last layers of defence.
A system that closes the source of over-pressure within a specified time, with at least the same reliability as a safety relief valve, is usually called a HIPPS. Such a HIPPS is a complete functional loop consisting of initiators (e.g. pressure transmitters) that detect the high pressure, a logic solver that processes the inputs and votes on a trip, and final elements (e.g. actuated block valves) that perform the actual shut-off.
A typical arrangement has three pressure transmitters (PT) connected to a logic solver. The solver decides, based on 2-out-of-3 ( 2oo3 ) voting, whether or not to activate the final elements; a 1oo2 solenoid panel determines which valve to close. The final elements here consist of two block valves that stop flow to the downstream facilities to prevent them from exceeding their maximum pressure. The operator of the plant is warned through a pressure alarm (PA) that the HIPPS has been activated.
This system has a high degree of redundancy: the pressure transmitters are triplicated and voted 2oo3, so a single failed transmitter causes neither a spurious trip nor a missed demand, and the final elements are duplicated (1oo2), so either block valve alone can isolate the flow.
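As an illustration of the voting logic described above, here is a minimal sketch of a 2oo3 trip decision; it is a toy model of the logic solver's behaviour, not vendor code, and the setpoint and readings are invented.

```python
def vote_2oo3(pt_trips: list) -> bool:
    """2-out-of-3 voting: demand a trip when at least two transmitters agree."""
    return sum(pt_trips) >= 2

def hipps_demand(pressures_bar, trip_setpoint_bar):
    """Trip demand for the final elements from three pressure readings."""
    trips = [p >= trip_setpoint_bar for p in pressures_bar]
    return vote_2oo3(trips)

# One transmitter failing high does not trip the plant; two genuine highs do.
print(hipps_demand([95, 60, 61], trip_setpoint_bar=90))   # False
print(hipps_demand([95, 96, 61], trip_setpoint_bar=90))   # True
```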
One must not treat the above design as the only means of realizing the HIPPS definition. A HIPPS should always be thought of generically, as a highly reliable means of isolating a source of high pressure when downstream flow has been blocked, thereby isolating the upstream equipment. Whether the source of high pressure is a pump (in the case of liquid) or a gas compressor (in the case of gas), the aim of the HIPPS is to shut down the pump or compressor creating the high-pressure condition in a reliable and safe manner.
Ever-increasing flow rates, in combination with environmental constraints, have driven the widespread and rapid acceptance of HIPPS as the ultimate protection system over the last decades.
The International Electrotechnical Commission (IEC) introduced the IEC 61508 and IEC 61511 standards in 1998 and 2003 respectively. These are performance-based, non-prescriptive standards which provide a detailed framework and a life-cycle approach for the design, implementation and management of safety systems, applicable to a variety of sectors with different levels of risk definition. These standards also apply to HIPPS.
IEC 61508 mainly focuses on electrical/electronic/programmable safety-related systems, although it also provides a framework for safety-related systems based on other technologies, including mechanical systems. IEC 61511 was added by the IEC specifically for designers, integrators and users of safety instrumented systems, and covers the other parts of the safety loop (sensors and final elements) in more detail.
The basis for the design of a safety instrumented system is the required Safety Integrity Level (SIL). The SIL is obtained during the risk analysis of a plant or process and represents the required risk reduction. The SIS shall meet the requirements of the applicable SIL, which ranges from 1 to 4. The IEC standards define the requirements of each SIL for the lifecycle of the equipment, including design and maintenance. The SIL also defines a required probability of failure on demand (PFD) for the complete loop, as well as architectural constraints for the loop and its different elements.
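For low-demand mode, IEC 61508 associates each SIL with a band of average PFD; the sketch below maps a computed PFD onto that band. Note, as the next paragraph stresses, that PFD alone does not establish a SIL — architectural constraints and qualitative requirements must also be met.

```python
def sil_from_pfd(pfd_avg: float):
    """SIL band for a given average probability of failure on demand,
    per the IEC 61508 low-demand-mode table. Returns None outside SIL 1-4.
    Meeting the PFD band is necessary but not sufficient for the SIL."""
    bands = [(1e-5, 1e-4, 4), (1e-4, 1e-3, 3), (1e-3, 1e-2, 2), (1e-2, 1e-1, 1)]
    for low, high, sil in bands:
        if low <= pfd_avg < high:
            return sil
    return None

print(sil_from_pfd(5e-4))   # 3 -- a PFD of 5e-4 falls in the SIL 3 band
```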
The requirements for a HIPPS should not be reduced to a PFD level only; the qualitative requirements and architectural constraints form an integral part of the requirements for an instrumented protection system such as a HIPPS.
The European standard EN 12186 (formerly DIN G491) and, more specifically, EN 14382 (formerly DIN 3381) have been used for the past decades in (mechanically) instrumented overpressure protection systems. These standards prescribe the requirements for over-pressure protection systems, and their components, in gas plants. Not only the response time and accuracy of the loop, but also safety factors for over-sizing the actuator of the final element, are dictated by these standards. Independent design verification and testing to prove compliance with the EN 14382 standard is mandatory; therefore, users often refer to this standard for HIPPS design. | https://en.wikipedia.org/wiki/High-integrity_pressure_protection_system |
Until relatively recently, there were few alternatives for removing deleterious iron particles from a process stream. Magnetic separation was typically limited and only moderately effective: magnetic separators that used permanent magnets could generate fields of low intensity only, which worked well in removing ferrous tramp but not fine paramagnetic particles. High-intensity magnetic separators, which are effective in collecting very fine paramagnetic particles, were developed in response.
Current is passed through the coil, creating a magnetic field which magnetizes the expanded steel matrix ring. The paramagnetic matrix material behaves like a magnet in this field and thereby attracts the fines. The ring is rinsed while it is in the magnetic field, and all the non-magnetic particles are carried away with the rinse water. Then, as the ring leaves the magnetic zone, it is flushed and a vacuum of about −0.3 bar is applied to remove the magnetic particles attached to the matrix ring.
A high-gradient magnetic separator separates magnetic from non-magnetic particles ( concentrate and tails ) in the feed slurry . In a typical circuit, this feed comes from the intermediate thickener underflow pump through a linear screen and passive matrix; the tailings go to the tailings thickener, and the product goes to the launder through vacuum tanks.
Ion separation is another application of magnetic separation. The separation is driven by the magnetic field, which induces a separating force. The force then differentiates between heavier and lighter ions, causing the separation. This phenomenon has been demonstrated at test-bench and pilot scale. [ 1 ] | https://en.wikipedia.org/wiki/High-intensity_magnetic_separator |
A high-intensity radiated field ( HIRF ) is radio-frequency energy of a strength sufficient to adversely affect either a living organism or the performance of a device subjected to it. A microwave oven is an example of this principle put to controlled, safe use. Radio-frequency (RF) energy is non-ionizing electromagnetic radiation – its effects on tissue are through heating.
Electronic components are affected via rectification of the RF and a corresponding shift in the bias points of the components in the field. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
The U.S. Food and Drug Administration (FDA), and U.S. Federal Communications Commission (FCC) set limits for the amounts of RF energy exposure permitted in a standard work-day.
The U.S. Federal Aviation Administration (FAA) and industry EMC leaders have periodically met since 1980 to assess the adequacy of the requirements protecting civil avionics from outside interference. In 1986 the FAA Technical Center contracted for a definition of the electromagnetic environment for civil aviation; this study was performed by the Electromagnetic Compatibility Analysis Center (ECAC). The study showed levels of exposure to this threat as much as four orders of magnitude (10,000 times) higher than the then-current civil aircraft EMC susceptibility test certification standard of 1 volt/meter (DO-160). This environment was also two orders of magnitude (100 times) higher than the then-prevailing military avionics systems test standards ( MIL-STD 461 /462).
An RF electromagnetic wave has both an electric and a magnetic component (electric field and magnetic field), and it is often convenient to express the intensity of the RF environment at a given location in terms of units specific to each component. For example, the unit "volts per meter" (V/m) is used to express the strength of the electric field (electric "field strength"), and the unit "amperes per meter" (A/m) is used to express the strength of the magnetic field (magnetic "field strength"). Another commonly used unit for characterizing the total electromagnetic field is "power density." Power density is most appropriately used when the point of measurement is far enough away from an antenna to be located in the "far-field" zone of the antenna. | https://en.wikipedia.org/wiki/High-intensity_radiated_field |
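As a quick illustration of the units in the preceding paragraph, the far-field (plane-wave) relation between electric field strength and power density is S = E²/η₀ with η₀ ≈ 377 Ω; the sketch below applies it, and the example field values are illustrative.

```python
def power_density_w_per_m2(e_field_v_per_m: float) -> float:
    """Far-field plane-wave power density S = E^2 / eta_0, eta_0 ~ 377 ohms."""
    return e_field_v_per_m ** 2 / 377.0

# Power density scales with the square of field strength:
print(power_density_w_per_m2(1.0))     # ~2.65e-3 W/m^2 at 1 V/m
print(power_density_w_per_m2(100.0))   # 100x the field -> 10,000x the power density
```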
High-level design (HLD) explains the architecture that would be used to develop a system . The architecture diagram provides an overview of an entire system, identifying the main components that would be developed for the product and their interfaces.
The HLD can use non-technical to mildly technical terms which should be understandable to the administrators of the system. In contrast, low-level design further exposes the logical detailed design of each of these elements for use by engineers and programmers . HLD documentation should cover the planned implementation of both software and hardware.
In both cases, the high-level design should be a complete view of the entire system, breaking it down into smaller parts that are more easily understood. To minimize the maintenance overhead as construction proceeds and the lower-level design is done, it is best that the high-level design is elaborated only to the degree needed to satisfy these needs.
A high-level design document or HLDD adds the necessary details to the current project description to represent a suitable model for building. This document includes a high-level architecture diagram depicting the structure of the system, such as the hardware, database architecture, application architecture (layers), application flow (navigation), security architecture and technology architecture. [ 1 ]
A high-level design provides an overview of a system, product, service, or process.
Such an overview helps supporting components be compatible with one another.
The highest-level design should briefly describe all platforms, systems, products, services, and processes that it depends on, and include any important changes that need to be made to them.
In addition, there should be brief consideration of all significant commercial, legal, environmental, security, safety, and technical risks, along with any issues and assumptions.
The idea is to mention every work area briefly, clearly delegating the ownership of more detailed design activity whilst also encouraging effective collaboration between the various project teams.
Today, most high-level designs require contributions from a number of experts, representing many distinct professional disciplines.
Finally, every type of end-user should be identified in the high-level design and each contributing design should give due consideration to customer experience . | https://en.wikipedia.org/wiki/High-level_design |
High-Mobility Group or HMG is a group of chromosomal proteins that are involved in the regulation of DNA-dependent processes such as transcription , replication , recombination , and DNA repair . [ 1 ]
HMG proteins were originally isolated from mammalian cells, and named according to their electrophoretic mobility in polyacrylamide gels. [ 2 ]
The HMG proteins are subdivided into 3 superfamilies, each containing a characteristic functional domain: HMGA (containing the AT-hook domain), HMGB (containing the HMG-box domain), and HMGN (containing the nucleosomal binding domain).
Proteins containing any of the above domains embedded in their sequence are known as HMG-motif proteins. HMG-box proteins are found in a variety of eukaryotic organisms.
HMG proteins are thought to play a significant role in various human disorders. Disruptions and rearrangements in the genes coding for some of the HMG proteins are associated with some common benign tumors. Antibodies to HMG proteins are found in patients with autoimmune diseases. The SRY gene on the Y chromosome, responsible for male sexual differentiation, contains an HMG-box domain. A member of the HMG family of proteins, HMGB1 , has also been shown to have an extracellular activity as a chemokine , attracting neutrophils and mononuclear inflammatory cells to the infected liver . [ 3 ] High-mobility group proteins such as HMO1 [ 4 ] alter DNA architecture by binding, bending and looping it. Furthermore, these HMG-box DNA-binding proteins increase the flexibility of the DNA upon binding. [ 5 ]
In mammalian cells , the HMG non-histone proteins can modulate the activity of major DNA repair pathways including base excision repair , mismatch repair , nucleotide excision repair and double-strand break repair . [ 6 ]
| https://en.wikipedia.org/wiki/High-mobility_group |
The High-performance Integrated Virtual Environment (HIVE) is a distributed computing environment used for healthcare-IT and biological research, including analysis of Next Generation Sequencing (NGS) data, preclinical, clinical and post-market data, adverse events, metagenomic data, etc. [ 1 ] It is currently supported and continuously developed by the US Food and Drug Administration (government domain), George Washington University (academic domain), and by DNA-HIVE, WHISE-Global and Embleema (commercial domain). HIVE currently operates fully functionally within the US FDA, supporting a wide variety (over 60) of regulatory research and regulatory review projects, as well as MDEpiNet medical device postmarket registries. Academic deployments of HIVE are used for research activities and publications in NGS analytics, cancer research and microbiome research, and in educational programs for students at GWU. Commercial enterprises use HIVE for oncology, microbiology, vaccine manufacturing, gene editing, healthcare-IT, harmonization of real-world data, and in preclinical research and clinical studies.
HIVE is a massively parallel distributed computing environment where the distributed storage library and the distributed computational powerhouse are linked seamlessly. [ 2 ] The system is both robust and flexible due to maintaining both storage and the metadata database on the same network. [ 3 ] The distributed storage layer of software is the key component for file and archive management and is the backbone for the deposition pipeline. The data deposition back-end allows automatic uploads and downloads of external datasets into HIVE data repositories. The metadata database can be used to maintain specific information about extremely large files ingested into the system (big data) as well as metadata related to computations run on the system. This metadata then allows details of a computational pipeline to be brought up easily in the future in order to validate or replicate experiments. Since the metadata is associated with the computation, it stores the parameters of any computation in the system eliminating manual record keeping. [ citation needed ]
HIVE differs from other object-oriented databases in that it implements a set of unified APIs to search, view, and manipulate data of all types. The system also provides a highly secure hierarchical access control and permission system, allowing data access privileges to be determined in a finely granular manner without creating a multiplicity of rules in the security subsystem. The security model, designed for sensitive data, provides comprehensive control and auditing functionality in compliance with HIVE's designation as a FISMA Moderate system. [ 4 ]
The FDA launched HIVE Open Source as a platform to support end-to-end needs for NGS analytics: https://github.com/FDA/fda-hive
The HIVE biocompute harmonization platform is at the core of the High-throughput Sequencing Computational Standards for Regulatory Sciences (HTS-CSRS) project. Its mission is to provide the scientific community with a framework to harmonize biocomputing, promote interoperability, and verify bioinformatics protocols ( https://hive.biochemistry.gwu.edu/htscsrs ). For more information, see the project description on the FDA Extramural Research page ( https://www.fda.gov/ScienceResearch/SpecialTopics/RegulatoryScience/ucm491893.htm ).
Sub-clusters of scalable, high-performance, high-density compute cores serve as a powerhouse for extra-large distributed, parallelized computations of NGS algorithmics. The system is extremely scalable, with deployment instances ranging from a single HIVE-in-a-box appliance to massive enterprise-level systems of thousands of compute units. | https://en.wikipedia.org/wiki/High-performance_Integrated_Virtual_Environment |
High-performance liquid chromatography ( HPLC ), formerly referred to as high-pressure liquid chromatography , is a technique in analytical chemistry used to separate, identify, and quantify specific components in mixtures. The mixtures can originate from food, chemical, pharmaceutical, [ 1 ] biological, environmental and agricultural sources, among others, and are dissolved into liquid solutions. [ citation needed ]
It relies on high-pressure pumps, which deliver mixtures of various solvents, called the mobile phase , that flow through the system, collecting the sample mixture on the way and delivering it into a cylinder called the column, which is filled with solid particles of adsorbent material , called the stationary phase . [ 2 ]
Each component in the sample interacts differently with the adsorbent material, causing different migration rates for each component. [ citation needed ] These different rates lead to separation as the species flow out of the column into a specific detector, such as a UV detector. The output of the detector is a graph called a chromatogram. Chromatograms are graphical representations of signal intensity versus time or volume, showing peaks which represent components of the sample. Each component appears at its respective time, called its retention time, with a peak area proportional to its amount. [ 2 ]
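To illustrate how a retention time and peak area are read off a chromatogram, here is a minimal sketch on a synthetic single-peak trace; the peak position, width and height are invented for the example.

```python
import numpy as np

# Synthetic chromatogram: one Gaussian peak on a flat baseline.
t = np.linspace(0, 10, 2001)                               # time, minutes
signal = 50 * np.exp(-((t - 4.2) ** 2) / (2 * 0.05 ** 2))  # detector response

retention_time = t[np.argmax(signal)]    # time of the peak apex
area = np.sum(signal) * (t[1] - t[0])    # peak area ~ amount of the component

print(f"retention time = {retention_time:.2f} min, area = {area:.2f}")
```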
HPLC is widely used for manufacturing ( e.g. , during the production process of pharmaceutical and biological products), [ 3 ] [ 4 ] legal ( e.g. , detecting performance enhancement drugs in urine), [ 5 ] research ( e.g. , separating the components of a complex biological sample, or of similar synthetic chemicals from each other), and medical ( e.g. , detecting vitamin D levels in blood serum) purposes. [ 6 ]
Chromatography can be described as a mass transfer process involving adsorption and/or partition . As mentioned, HPLC relies on pumps to pass a pressurized liquid and a sample mixture through a column filled with adsorbent, leading to the separation of the sample components. The active component of the column, the adsorbent, is typically a granular material made of solid particles ( e.g. , silica , polymers, etc.), 1.5–50 μm in size, on which various reagents can be bonded. [ 7 ] [ 8 ] The components of the sample mixture are separated from each other due to their different degrees of interaction with the adsorbent particles. The pressurized liquid is typically a mixture of solvents ( e.g. , water, buffers , acetonitrile and/or methanol ) and is referred to as a "mobile phase". Its composition and temperature play a major role in the separation process by influencing the interactions taking place between sample components and adsorbent. [ 9 ] These interactions are physical in nature, such as hydrophobic (dispersive), dipole–dipole and ionic, most often a combination. [ 10 ] [ 11 ]
The liquid chromatograph is a complex instrument [ 12 ] built on sophisticated and delicate technology. To operate the system properly, the user needs a basic understanding of how the device performs its data processing, in order to avoid incorrect data and distorted results. [ 13 ] [ 14 ] [ 15 ]
HPLC is distinguished from traditional ("low pressure") liquid chromatography because operational pressures are significantly higher (around 50–1400 bar), while ordinary liquid chromatography typically relies on the force of gravity to pass the mobile phase through the packed column. Due to the small sample amount separated in analytical HPLC, typical column dimensions are 2.1–4.6 mm diameter, and 30–250 mm length. Also HPLC columns are made with smaller adsorbent particles (1.5–50 μm in average particle size). This gives HPLC superior resolving power (the ability to distinguish between compounds) when separating mixtures, which makes it a popular chromatographic technique. [ citation needed ]
The schematic of an HPLC instrument typically includes solvent reservoirs, one or more pumps, a solvent degasser , a sampler, a column, and a detector. The solvents are prepared in advance according to the needs of the separation; they pass through the degasser to remove dissolved gasses, are mixed to become the mobile phase, then flow through the sampler, which brings the sample mixture into the mobile phase stream, which in turn carries it into the column. The pumps deliver the desired flow and composition of the mobile phase through the stationary phase inside the column, then directly into a flow-cell inside the detector. The detector generates a signal proportional to the amount of sample component emerging from the column, allowing for quantitative analysis of the sample components. The detector also marks the time of emergence, the retention time, which serves for initial identification of the component. More advanced detectors also provide additional information specific to the analyte's characteristics, such as a UV-VIS spectrum or mass spectrum , which can provide insight into its structural features. Detectors in common use include UV/Vis, photodiode array (PDA) / diode array and mass spectrometry detectors. [ citation needed ]
A digital microprocessor and user software control the HPLC instrument and provide data analysis. Some models of mechanical pumps in an HPLC instrument can mix multiple solvents together at ratios that change over time, generating a composition gradient in the mobile phase. Most HPLC instruments also have a column oven that allows the temperature at which the separation is performed to be adjusted. [ citation needed ]
The sample mixture to be separated and analyzed is introduced, in a discrete small volume (typically microliters), into the stream of mobile phase percolating through the column. The components of the sample move through the column, each at a different velocity, which is a function of specific physical interactions with the adsorbent, the stationary phase. The velocity of each component depends on its chemical nature, on the nature of the stationary phase (inside the column) and on the composition of the mobile phase. The time at which a specific analyte elutes (emerges from the column) is called its retention time. The retention time, measured under particular conditions, is an identifying characteristic of a given analyte. [ citation needed ]
Many different types of columns are available, filled with adsorbents varying in particle size, porosity , and surface chemistry. The use of smaller particle size packing materials requires the use of higher operational pressure ("backpressure") and typically improves chromatographic resolution (the degree of peak separation between consecutive analytes emerging from the column). Sorbent particles may be ionic, hydrophobic or polar in nature. [ citation needed ]
The most common mode of liquid chromatography is reversed phase , whereby the mobile phases used include any miscible combination of water or buffers with various organic solvents (the most common being acetonitrile and methanol). Some HPLC techniques use water-free mobile phases (see normal-phase chromatography below). The aqueous component of the mobile phase may contain acids (such as formic, phosphoric or trifluoroacetic acid ) or salts to assist in the separation of the sample components. The composition of the mobile phase may be kept constant ("isocratic elution mode") or varied ("gradient elution mode") during the chromatographic analysis. Isocratic elution is typically effective in the separation of simple mixtures. Gradient elution is required for complex mixtures, whose components interact to varying degrees with the stationary and mobile phases. This is why in gradient elution the composition of the mobile phase is varied, typically from low to high eluting strength. The eluting strength of the mobile phase is reflected by analyte retention times: high eluting strength speeds up elution (shortening retention times). For example, a typical gradient profile in reversed phase chromatography might start at 5% acetonitrile (in water or aqueous buffer) and progress linearly to 95% acetonitrile over 5–25 minutes. Periods of constant mobile phase composition (plateaus) may also be part of a gradient profile. For example, the mobile phase composition may be kept constant at 5% acetonitrile for 1–3 min, followed by a linear change up to 95% acetonitrile. [ citation needed ]
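A gradient profile of the kind just described can be written down directly; the sketch below encodes a hypothetical hold-then-ramp program (hold at 5% B for 2 min, ramp linearly to 95% B by 20 min), where %B stands for the organic component.

```python
def percent_b(t_min: float) -> float:
    """Mobile phase %B (e.g. acetonitrile) for a hypothetical gradient:
    hold at 5% for 2 min, ramp linearly to 95% at 20 min, then hold."""
    if t_min <= 2.0:
        return 5.0
    if t_min >= 20.0:
        return 95.0
    return 5.0 + (95.0 - 5.0) * (t_min - 2.0) / (20.0 - 2.0)

for t in (0, 2, 11, 20, 25):
    print(t, percent_b(t))   # 5, 5, 50, 95, 95 %B
```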
The chosen composition of the mobile phase depends on the intensity of interactions between various sample components ("analytes") and stationary phase ( e.g. , hydrophobic interactions in reversed-phase HPLC). Depending on their affinity for the stationary and mobile phases, analytes partition between the two during the separation process taking place in the column. This partitioning process is similar to that which occurs during a liquid–liquid extraction but is continuous, not step-wise. [ citation needed ]
In the example using a water/acetonitrile gradient, the more hydrophobic components elute (come off the column) late; then, once the mobile phase becomes richer in acetonitrile ( i.e. , a mobile phase of higher eluting strength), their elution speeds up. [ citation needed ]
The choice of mobile phase components, additives (such as salts or acids) and gradient conditions depends on the nature of the column and sample components. Often a series of trial runs is performed with the sample in order to find the HPLC method which gives adequate separation. [ citation needed ]
Prior to HPLC, scientists used benchtop column liquid chromatographic techniques. Liquid chromatographic systems were largely inefficient because the flow rate of solvents depended on gravity. Separations took many hours, and sometimes days, to complete. Gas chromatography (GC) at the time was more powerful than liquid chromatography (LC); however, it was obvious that gas phase separation and analysis of very polar, high molecular weight biopolymers was impossible. [ 16 ] GC was ineffective for many life science and health applications because biomolecules are mostly non-volatile and thermally unstable at the high temperatures of GC. [ 17 ] As a result, alternative methods were hypothesized which would soon result in the development of HPLC. [ citation needed ]
Following on the seminal work of Martin and Synge in 1941, it was predicted by Calvin Giddings , [ 18 ] Josef Huber, and others in the 1960s that LC could be operated in a high-efficiency mode by reducing the packing-particle diameter substantially below the typical LC (and GC) level of 150 μm and using pressure to increase the mobile phase velocity. [ 16 ] These predictions underwent extensive experimentation and refinement from the 1960s through the 1970s, continuing to the present day. [ 19 ] Early developmental research began to improve LC particles, for example the historic Zipax, a superficially porous particle. [ 20 ]
The 1970s brought about many developments in hardware and instrumentation. Researchers began using pumps and injectors to make a rudimentary design of an HPLC system. [ 21 ] Gas amplifier pumps were ideal because they operated at constant pressure and did not require leak-free seals or check valves for steady flow and good quantitation. [ 17 ] Hardware milestones were made at Dupont IPD (Industrial Polymers Division) such as a low-dwell-volume gradient device being utilized as well as replacing the septum injector with a loop injection valve. [ 17 ]
While instrumentation developments were important, the history of HPLC is primarily about the history and evolution of particle technology . [ 17 ] [ 22 ] After the introduction of porous layer particles, there has been a steady trend to reduced particle size to improve efficiency. [ 17 ] However, by decreasing particle size, new problems arose. The practical disadvantages stem from the excessive pressure drop needed to force mobile fluid through the column and the difficulty of preparing a uniform packing of extremely fine materials. [ 23 ] Every time particle size is reduced significantly, another round of instrument development usually must occur to handle the pressure. [ 19 ] [ 17 ]
Partition chromatography was one of the first kinds of chromatography that chemists developed, and is barely used these days. [ 24 ] The partition coefficient principle has been applied in paper chromatography , thin layer chromatography , gas phase and liquid–liquid separation applications. The 1952 Nobel Prize in chemistry was earned by Archer John Porter Martin and Richard Laurence Millington Synge for their development of the technique, which was used for their separation of amino acids . [ 25 ] Partition chromatography uses a retained solvent, on the surface or within the grains or fibers of an "inert" solid supporting matrix as with paper chromatography; or takes advantage of some coulombic and/or hydrogen donor interaction with the stationary phase. Analyte molecules partition between a liquid stationary phase and the eluent. Just as in hydrophilic interaction chromatography (HILIC; a sub-technique within HPLC), this method separates analytes based on differences in their polarity. HILIC most often uses a bonded polar stationary phase and a mobile phase made primarily of acetonitrile with water as the strong component. Partition HPLC has been used historically on unbonded silica or alumina supports. Each works effectively for separating analytes by relative polar differences. HILIC bonded phases have the advantage of separating acidic , basic and neutral solutes in a single chromatographic run. [ 26 ]
The polar analytes diffuse into a stationary water layer associated with the polar stationary phase and are thus retained. The stronger the interactions between the polar analyte and the polar stationary phase (relative to the mobile phase) the longer the elution time. The interaction strength depends on the functional groups part of the analyte molecular structure, with more polarized groups ( e.g. , hydroxyl-) and groups capable of hydrogen bonding inducing more retention. Coulombic (electrostatic) interactions can also increase retention. Use of more polar solvents in the mobile phase will decrease the retention time of the analytes, whereas more hydrophobic solvents tend to increase retention times. [ citation needed ]
Normal–phase chromatography was one of the first kinds of HPLC that chemists developed, but has decreased in use over the last decades. Also known as normal-phase HPLC (NP-HPLC), this method separates analytes based on their affinity for a polar stationary surface such as silica; hence it is based on analyte ability to engage in polar interactions (such as hydrogen-bonding or dipole-dipole type of interactions) with the sorbent surface. NP-HPLC uses a non-polar, non-aqueous mobile phase ( e.g. , chloroform ), and works effectively for separating analytes readily soluble in non-polar solvents. The analyte associates with and is retained by the polar stationary phase. Adsorption strengths increase with increased analyte polarity. The interaction strength depends not only on the functional groups present in the structure of the analyte molecule, but also on steric factors . The effect of steric hindrance on interaction strength allows this method to resolve (separate) structural isomers . [ citation needed ]
The use of more polar solvents in the mobile phase will decrease the retention time of analytes, whereas more hydrophobic solvents tend to induce slower elution (increased retention times). Very polar solvents such as traces of water in the mobile phase tend to adsorb to the solid surface of the stationary phase forming a stationary bound (water) layer which is considered to play an active role in retention. This behavior is somewhat peculiar to normal phase chromatography because it is governed almost exclusively by an adsorptive mechanism ( i.e. , analytes interact with a solid surface rather than with the solvated layer of a ligand attached to the sorbent surface; see also reversed-phase HPLC below). Adsorption chromatography is still somewhat used for structural isomer separations in both column and thin-layer chromatography formats on activated (dried) silica or alumina supports. [ citation needed ]
Partition- and NP-HPLC fell out of favor in the 1970s with the development of reversed-phase HPLC because of poor reproducibility of retention times due to the presence of a water or protic organic solvent layer on the surface of the silica or alumina chromatographic media. This layer changes with any changes in the composition of the mobile phase ( e.g. , moisture level) causing drifting retention times. [ citation needed ]
Recently, partition chromatography has become popular again with the development of HILIC bonded phases, which demonstrate improved reproducibility, and due to a better understanding of the range of usefulness of the technique.
The use of displacement chromatography is rather limited, and is mostly used for preparative chromatography. The basic principle is based on a molecule with a high affinity for the chromatography matrix (the displacer) which is used to compete effectively for binding sites, and thus displace all molecules with lesser affinities. [ 27 ] There are distinct differences between displacement and elution chromatography. In elution mode, substances typically emerge from a column in narrow, Gaussian peaks. Wide separation of peaks, preferably to baseline, is desired in order to achieve maximum purification. The speed at which any component of a mixture travels down the column in elution mode depends on many factors. But for two substances to travel at different speeds, and thereby be resolved, there must be substantial differences in some interaction between the biomolecules and the chromatography matrix. Operating parameters are adjusted to maximize the effect of this difference. In many cases, baseline separation of the peaks can be achieved only with gradient elution and low column loadings. Thus, two drawbacks to elution mode chromatography, especially at the preparative scale, are operational complexity, due to gradient solvent pumping, and low throughput, due to low column loadings. Displacement chromatography has advantages over elution chromatography in that components are resolved into consecutive zones of pure substances rather than "peaks". Because the process takes advantage of the nonlinearity of the isotherms, a larger column feed can be separated on a given column with the purified components recovered at significantly higher concentration. [ citation needed ]
Reversed phase HPLC (RP-HPLC) [ 28 ] is the most widespread mode of chromatography. It has a non-polar stationary phase and an aqueous, moderately polar mobile phase. In reversed phase methods, substances are retained in the system the more hydrophobic they are. For the retention of organic materials, the stationary phases packed inside the columns consist mainly of porous granules of silica gel in various shapes, mostly spherical, at various diameters (1.5, 2, 3, 5, 7, 10 μm) and pore diameters (60, 100, 150, 300 Å), on whose surfaces various hydrocarbon ligands such as C3, C4, C8 and C18 are chemically bonded. There are also polymeric hydrophobic particles that serve as stationary phases when solutions at extreme pH are needed, or hybrid silica polymerized with organic substances. The longer the hydrocarbon ligand on the stationary phase, the longer the sample components can be retained. Most current methods for separating biomedical materials use C18-type columns, sometimes known by trade names such as ODS (octadecylsilane) or RP-18 (Reversed Phase 18).
The most common RP stationary phases are based on a silica support, which is surface-modified by bonding RMe2SiCl, where R is a straight-chain alkyl group such as C18H37 or C8H17.
With such stationary phases, retention time is longer for lipophilic molecules, whereas polar molecules elute more readily (emerge early in the analysis). A chromatographer can increase retention times by adding more water to the mobile phase, thereby making the interactions of the hydrophobic analyte with the hydrophobic stationary phase relatively stronger. Similarly, an investigator can decrease retention time by adding more organic solvent to the mobile phase. RP-HPLC is so commonly used among biologists and life science users that it is often incorrectly referred to as just "HPLC" without further specification. The pharmaceutical industry also regularly employs RP-HPLC to qualify drugs before their release. [ citation needed ]
RP-HPLC operates on the principle of hydrophobic interactions, which originate from the high symmetry in the dipolar water structure and play the most important role in all processes in life science. RP-HPLC allows the measurement of these interactive forces. The binding of the analyte to the stationary phase is proportional to the contact surface area around the non-polar segment of the analyte molecule upon association with the ligand on the stationary phase. This solvophobic effect is dominated by the force of water for "cavity-reduction" around the analyte and the C18 chain versus the complex of both. The energy released in this process is proportional to the surface tension of the eluent (water: 7.3 × 10⁻⁶ J/cm², methanol: 2.2 × 10⁻⁶ J/cm²) and to the hydrophobic surface areas of the analyte and the ligand respectively. The retention can be decreased by adding a less polar solvent (methanol, acetonitrile ) into the mobile phase to reduce the surface tension of water. Gradient elution uses this effect by automatically reducing the polarity and the surface tension of the aqueous mobile phase during the course of the analysis.
Structural properties of the analyte molecule can play an important role in its retention characteristics. In theory, an analyte with a larger hydrophobic surface area (C–H, C–C, and generally non-polar atomic bonds, such as S–S and others) is retained longer, as it does not interact with the water structure. On the other hand, analytes with a higher polar surface area (as a result of the presence of polar groups, such as -OH, -NH 2 , COO − or -NH 3 + in their structure) are less retained, as they are better integrated into water. The interactions with the stationary phase can also be affected by steric effects, or exclusion effects, whereby a very large molecule may have only restricted access to the pores of the stationary phase, where the interactions with surface ligands (alkyl chains) take place. Such surface hindrance typically results in less retention.
Retention time increases with the hydrophobic (non-polar) surface area of the molecules. For example, branched-chain compounds can elute more rapidly than their corresponding linear isomers because their overall surface area is lower. Similarly, organic compounds with single C–C bonds frequently elute later than those with a C=C double bond or a C≡C triple bond, as the double or triple bond makes the molecule more compact than a single C–C bond.
Another important factor is the mobile phase pH, since it can change the hydrophobic character of an ionizable analyte. For this reason most methods use a buffering agent , such as sodium phosphate , to control the pH. Buffers serve multiple purposes: they control the pH, which affects the ionization state of ionizable analytes; they affect the charge on the ionizable silica surface of the stationary phase between the bonded-phase ligands; and in some cases they even act as ion-pairing agents to neutralize analyte charge. Ammonium formate is commonly added in mass spectrometry to improve detection of certain analytes by the formation of analyte-ammonium adducts . A volatile organic acid such as acetic acid , or most commonly formic acid , is often added to the mobile phase if mass spectrometry is used to analyze the column effluents.
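To make the pH effect concrete, the following minimal sketch applies the Henderson–Hasselbalch relation to a hypothetical weak-acid analyte; the pKa of 4.8 is an assumed example value, not a figure from the text. The ionized, more polar form is the one that is poorly retained in RP-HPLC, which is why controlling pH with a buffer stabilizes retention.

```python
# Henderson-Hasselbalch sketch: fraction of a weak-acid analyte present as
# the polar, poorly retained anion at a given mobile-phase pH.
# The pKa of 4.8 is a hypothetical example value.
def ionized_fraction_weak_acid(ph, pka=4.8):
    """Fraction of a monoprotic weak acid present as its anion."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

for ph in (2.0, 4.8, 7.0):
    print(f"pH {ph}: {ionized_fraction_weak_acid(ph):.1%} ionized")
    # ~0.2% at pH 2, 50% at pH 4.8, ~99.4% at pH 7
```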
Trifluoroacetic acid (TFA) as an additive to the mobile phase is widely used for complex mixtures of biomedical samples, mostly peptides and proteins, mostly with UV-based detectors. TFA is rarely used in mass spectrometry methods, due to residues it can leave in the detector and solvent delivery system, which interfere with the analysis and detection. However, TFA can be highly effective in improving the retention of analytes such as carboxylic acids in applications utilizing other detectors such as UV-VIS, as it is a fairly strong organic acid. The effects of acids and buffers vary by application, but generally improve chromatographic resolution when dealing with ionizable components.
Reversed phase columns are quite difficult to damage compared to normal silica columns, thanks to the shielding effect of the bonded hydrophobic ligands; however, most reversed phase columns consist of alkyl-derivatized silica particles and are prone to hydrolysis of the silica at extreme pH conditions in the mobile phase. Most types of RP columns should not be used with aqueous bases, as these will hydrolyze the underlying silica particle and dissolve it. There are selected brands of hybrid or reinforced silica-based RP column particles which can be used at extreme pH conditions. The use of extremely acidic conditions is also not recommended, as such conditions might hydrolyze the silica as well as corrode the inside walls of the metallic parts of the HPLC equipment.
As a rule, in most cases RP-HPLC columns should be flushed with clean solvent after use to remove residual acids or buffers, and stored in an appropriate composition of solvent. Some biomedical applications require a non-metallic environment for optimal separation. For such sensitive cases, a test for the metal content of a column is to inject a sample which is a mixture of 2,2'- and 4,4'- bipyridine . Because the 2,2'-bipy can chelate the metal, the shape of the peak for the 2,2'-bipy will be distorted (tailed) when metal ions are present on the surface of the silica . [ citation needed ]
Size-exclusion chromatography ( SEC ) [ 29 ] separates polymer molecules and biomolecules based on differences in their molecular size (actually by a particle's Stokes radius ). The separation process is based on the ability of sample molecules to permeate through the pores of gel spheres packed inside the column, and is dependent on the relative size of the analyte molecules and the pore size of the packing. The process also relies on the absence of any interactions with the packing material surface.
Two types of SEC are usually distinguished: gel filtration chromatography, which uses aqueous mobile phases, and gel permeation chromatography, which uses organic solvents.
The separation principle in SEC is based on the full or partial penetration of the high-molecular-weight substances of the sample into the porous stationary-phase particles during their transport through the column. The mobile-phase eluent is selected in such a way that it totally prevents interactions with the stationary phase's surface. Under these conditions, the smaller the molecule, the deeper it can penetrate into the pore space, and the longer its movement through the column takes. On the other hand, the bigger the molecular size, the higher the probability that the molecule will not fully penetrate the pores of the stationary phase, and may even travel around them; thus, it will be eluted earlier. The molecules are therefore separated in order of decreasing molecular weight, with the largest molecules eluting from the column first and smaller molecules eluting later. Molecules larger than the pore size do not enter the pores at all and elute together as the first peak in the chromatogram; this elution volume is called the total exclusion volume, which defines the exclusion limit for a particular column. Small molecules permeate fully through the pores of the stationary phase particles and are eluted last, marking the end of the chromatogram, and may appear as a total penetration marker.
In biomedical sciences SEC is generally considered a low-resolution chromatography technique, and thus it is often reserved for the final, "polishing" step of a purification. It is also useful for determining the tertiary structure and quaternary structure of purified proteins. SEC is used primarily for the analysis of large molecules such as proteins or polymers. SEC also works in a preparative way by trapping the smaller molecules in the pores of the particles. The larger molecules simply pass by the pores, as they are too large to enter them. Larger molecules therefore flow through the column more quickly than smaller molecules: that is, the smaller the molecule, the longer the retention time.
This technique is widely used for the molecular weight determination of polysaccharides. SEC is the official technique (suggested by European pharmacopeia) for the molecular weight comparison of different commercially available low-molecular weight heparins . [ 30 ]
Ion-exchange chromatography ( IEC ) or ion chromatography ( IC ) [ 31 ] is an analytical technique for the separation and determination of ionic solutes in aqueous samples of environmental and industrial origin, such as samples from the metal industry, industrial waste water, biological systems, pharmaceutical samples, food, etc. Retention is based on the attraction between solute ions and charged sites bound to the stationary phase. Solute ions with the same charge as the charged sites on the column are repelled and elute without retention, while solute ions with a charge opposite to that of the charged sites are retained on the column. Solute ions that are retained on the column can be eluted from it by changing the mobile phase composition, for example by increasing its salt concentration, changing its pH, or raising the column temperature.
Types of ion exchangers include polystyrene resins , cellulose and dextran ion exchangers (gels), and controlled-pore glass or porous silica gel . Polystyrene resins allow cross-linking, which increases the stability of the chains. Higher cross-linking reduces swelling, which increases the equilibration time and ultimately improves selectivity. Cellulose and dextran ion exchangers possess larger pore sizes and low charge densities, making them suitable for protein separation.
In general, ion exchangers favor the binding of ions of higher charge and smaller radius.
An increase in the concentration of the counter-ion (with respect to the functional groups of the resin) reduces the retention time, as it creates strong competition with the solute ions. A decrease in pH reduces the retention time in cation exchange, while an increase in pH reduces the retention time in anion exchange. By lowering the pH of the solvent in a cation exchange column, for instance, more hydrogen ions are available to compete for positions on the anionic stationary phase, thereby eluting weakly bound cations.
This form of chromatography is widely used in the following applications: water purification, preconcentration of trace components, ligand-exchange chromatography, ion-exchange chromatography of proteins, high-pH anion-exchange chromatography of carbohydrates and oligosaccharides, and others.
High performance affinity chromatography (HPAC) [ 32 ] works by passing a sample solution through a column packed with a stationary phase that contains an immobilized biologically active ligand. The ligand is in fact a substrate that has a specific binding affinity for the target molecule in the sample solution. The target molecule binds to the ligand, while the other molecules in the sample solution pass through the column, having little or no retention. The target molecule is then eluted from the column using a suitable elution buffer.
This chromatographic process relies on the capability of the bonded active substances to form stable, specific, and reversible complexes thanks to their biological recognition of certain specific sample components. The formation of these complexes involves the participation of common molecular forces such as the Van der Waals interaction , electrostatic interaction, dipole-dipole interaction, hydrophobic interaction, and the hydrogen bond. An efficient, biospecific bond is formed by a simultaneous and concerted action of several of these forces in the complementary binding sites.
Aqueous normal-phase chromatography ( ANP ) is also called hydrophilic interaction liquid chromatography ( HILIC ). [ 33 ] This is a chromatographic technique which encompasses the mobile phase region between reversed-phase chromatography (RP) and organic normal phase chromatography (ONP). HILIC is used to achieve unique selectivity for hydrophilic compounds, [ 34 ] showing normal phase elution order, using "reversed-phase solvents", i.e., relatively polar, mostly non-aqueous solvents in the mobile phase. [ 33 ] Many biological molecules, especially those found in biological fluids, are small polar compounds that are not well retained by reversed-phase HPLC. This has made hydrophilic interaction LC (HILIC) an attractive alternative and a useful approach for the analysis of polar molecules. Additionally, because HILIC routinely uses aqueous mixtures of polar organic solvents such as ACN and methanol, it can be easily coupled to MS. [ 34 ]
A separation in which the mobile phase composition remains constant throughout the procedure is termed isocratic (meaning constant composition ). The word was coined by Csaba Horvath who was one of the pioneers of HPLC. [ 35 ] [ 36 ]
The mobile phase composition does not have to remain constant. A separation in which the mobile phase composition is changed during the separation process is described as a gradient elution . [ 37 ] [ 38 ] For example, a gradient can start at 10% methanol in water, and end at 90% methanol in water after 20 minutes. The two components of the mobile phase are typically termed "A" and "B"; A is the "weak" solvent which allows the solute to elute only slowly, while B is the "strong" solvent which rapidly elutes the solutes from the column. In reversed-phase chromatography , solvent A is often water or an aqueous buffer, while B is an organic solvent miscible with water, such as acetonitrile , methanol, THF , or isopropanol .
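The example gradient above can be written down as a simple time program. The sketch below linearly interpolates the %B composition, which is how an instrument's gradient table is commonly interpreted; the start, end, and duration values are the ones from the example in the text.

```python
# Linear gradient program from the example above: 10% B (methanol) at t = 0,
# rising to 90% B at t = 20 min. Real instruments take such time/%B tables;
# this function just interpolates the composition at any time point.
def percent_b(t_min, start=10.0, end=90.0, duration_min=20.0):
    """Percent of the 'strong' solvent B at time t for a linear gradient."""
    if t_min <= 0.0:
        return start
    if t_min >= duration_min:
        return end
    return start + (end - start) * t_min / duration_min

print([percent_b(t) for t in (0, 5, 10, 20)])  # [10.0, 30.0, 50.0, 90.0]
```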
In isocratic elution, peak width increases with retention time linearly according to the equation for N, the number of theoretical plates. This can be a major disadvantage when analyzing a sample that contains analytes with a wide range of retention factors. Using a weaker mobile phase lengthens the runtime and causes the slowly eluting peaks to broaden, leading to reduced sensitivity. A stronger mobile phase would improve the runtime and the broadening of later peaks, but results in diminished peak separation, especially for quickly eluting analytes, which may have insufficient time to fully resolve. This issue is addressed through the changing mobile phase composition of gradient elution.
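A minimal sketch of this linear broadening, assuming a fixed plate count (the value of 10,000 plates is an assumption for illustration): solving N = 16 (tR / w)² for the baseline width gives w = 4 tR / √N, so width grows in direct proportion to retention time.

```python
# For a fixed plate count N, solving N = 16 * (tR / w)**2 for the baseline
# width gives w = 4 * tR / sqrt(N): width grows linearly with retention time.
# N = 10,000 plates is an assumed illustrative value.
from math import sqrt

def baseline_peak_width(t_r_min, n_plates=10_000):
    """Baseline width (min) of a peak eluting at t_r_min, isocratic elution."""
    return 4.0 * t_r_min / sqrt(n_plates)

for t_r in (2.0, 10.0, 30.0):
    print(f"tR = {t_r:5.1f} min -> width = {baseline_peak_width(t_r):.2f} min")
    # 0.08 min, 0.40 min, 1.20 min: late peaks are 15x broader here
```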
By starting from a weaker mobile phase and strengthening it during the runtime, gradient elution decreases the retention of the later-eluting components so that they elute faster, giving narrower (and taller) peaks for most components, while also allowing for the adequate separation of earlier-eluting components. This also improves the peak shape for tailed peaks, as the increasing concentration of the organic eluent pushes the tailing part of a peak forward. This also increases the peak height (the peak looks "sharper"), which is important in trace analysis. The gradient program may include sudden "step" increases in the percentage of the organic component, or different slopes at different times – all according to the desire for optimum separation in minimum time.
In isocratic elution, the retention order does not change if the column dimensions (length and inner diameter) change – that is, the peaks elute in the same order. In gradient elution, however, the elution order may change as the dimensions or flow rate change, if they are not scaled up or down according to the change. [ 39 ]
The driving force in reversed phase chromatography originates in the high order of the water structure. The role of the organic component of the mobile phase is to reduce this high order and thus reduce the retarding strength of the aqueous component.
The theory of high-performance liquid chromatography (HPLC) is, at its core, the same as general chromatography theory. [ 40 ] This theory has been used as the basis for system-suitability tests, as can be seen in the USP Pharmacopeia, [ 41 ] which are a set of quantitative criteria that test the suitability of the HPLC system for the required analysis at any step of it.
This relation is also represented as a normalized, unit-less factor known as the retention factor , or retention parameter, which is the experimental measurement of the capacity ratio, as shown in the Figure of Performance Criteria as well. t R is the retention time of the specific component and t 0 is the time it takes for a non-retained substance to elute through the system without any retention; it is therefore called the void time.
The ratio between the retention factors, k', of two adjacent peaks in the chromatogram is used to evaluate the degree of separation between them, and is called the selectivity factor , α, as shown in the Performance Criteria graph.
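The two definitions just given translate directly into code. In this sketch the void time and retention times are illustrative numbers, not measured data.

```python
# Retention factor and selectivity factor as defined in the text.
# The void time t0 and the retention times are illustrative numbers.
def retention_factor(t_r, t_0):
    """k' = (tR - t0) / t0 (dimensionless)."""
    return (t_r - t_0) / t_0

def selectivity_factor(k1, k2):
    """alpha = k2' / k1' for two adjacent peaks, conventionally >= 1."""
    return k2 / k1

t0 = 1.0                           # min, void time
k1 = retention_factor(4.0, t0)     # -> 3.0
k2 = retention_factor(5.2, t0)     # -> 4.2
print(k1, k2, selectivity_factor(k1, k2))  # 3.0 4.2 ~1.4
```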
The plate count N as a criterion for system efficiency was developed for isocratic conditions, i.e., a constant mobile phase composition throughout the run. In gradient conditions, where the mobile phase changes with time during the chromatographic run, it is more appropriate to use the parameter peak capacity P c as a measure of the system efficiency. [ 42 ] The peak capacity in chromatography is defined as the number of peaks that can be separated within a retention window for a specific pre-defined resolution factor, usually ~1. It can also be envisioned as the runtime measured in the number of peaks' average widths. The equation is shown in the Figure of the performance criteria. In this equation, tg is the gradient time and w(ave) is the average peak width at the base.
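Since the figure with the equation is not reproduced here, a commonly used form of this definition is Pc ≈ 1 + tg / w(ave); the sketch below evaluates it with illustrative numbers.

```python
# Peak capacity for a gradient run: the number of average-width peaks that
# fit into the gradient time, Pc ~ 1 + tg / w(ave). Values are illustrative.
def peak_capacity(t_gradient_min, avg_base_width_min):
    return 1.0 + t_gradient_min / avg_base_width_min

print(peak_capacity(20.0, 0.2))  # -> 101.0 peaks in a 20 min gradient
```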
The parameters are largely derived from two sets of chromatographic theory: plate theory (as part of partition chromatography ), and the rate theory of chromatography / Van Deemter equation . Of course, they can be put in practice through analysis of HPLC chromatograms, although rate theory is considered the more accurate theory.
They are analogous to the calculation of the retention factor for a paper chromatography separation, but describe how well HPLC separates a mixture into two or more components that are detected as peaks (bands) on a chromatogram. The HPLC parameters are: the efficiency factor ( N ), the retention factor (kappa prime), and the separation factor (alpha). Together, these factors are variables in a resolution equation, which describes how well two components' peaks are separated or overlap each other. These parameters are mostly only used for describing HPLC reversed phase and HPLC normal phase separations, since those separations tend to be more subtle than other HPLC modes ( e.g. , ion exchange and size exclusion).
Void volume is the amount of space in a column that is occupied by solvent. It is the space within the column that is outside of the column's internal packing material. Void volume is measured on a chromatogram as the first component peak detected, which is usually the solvent that was present in the sample mixture; ideally the sample solvent flows through the column without interacting with the column, but is still detectable as distinct from the HPLC solvent. The void volume is used as a correction factor.
Efficiency factor ( N ) practically measures how sharp component peaks on the chromatogram are, as the ratio of the component peak's retention time to the width of the peak at its widest point (at the baseline). Peaks that are tall, sharp, and relatively narrow indicate that the separation method efficiently removed a component from the mixture, i.e., high efficiency. Efficiency is very dependent upon the HPLC column and the HPLC method used. The efficiency factor is synonymous with plate number and the 'number of theoretical plates'.
Retention factor ( kappa prime ) measures how long a component of the mixture is retained by the column, determined from the retention time of its peak in the chromatogram (since HPLC chromatograms are a function of time). Each chromatogram peak has its own retention factor ( e.g. , kappa 1 for the retention factor of the first peak). This factor may be corrected for by the void volume of the column.
Separation factor ( alpha ) is a relative comparison of how well two neighboring components of the mixture were separated ( i.e. , two neighboring bands on a chromatogram). This factor is defined in terms of a ratio of the retention factors of a pair of neighboring chromatogram peaks, and may also be corrected for by the void volume of the column. The greater the separation factor value is over 1.0, the better the separation, until about 2.0, beyond which an HPLC method is probably not needed for the separation.
Resolution equations relate the three factors such that high efficiency and separation factors improve the resolution of component peaks in an HPLC separation.
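One widely taught form of such a resolution equation is the Purnell equation, Rs = (√N / 4) · ((α − 1)/α) · (k₂′/(1 + k₂′)); the sketch below evaluates it with illustrative values. Note that this particular form is an assumption here, as the exact equation used varies between texts.

```python
# Purnell form of the resolution equation, combining efficiency (N),
# selectivity (alpha), and retention (k2') -- illustrative values only.
from math import sqrt

def resolution(n_plates, alpha, k2):
    """Rs = (sqrt(N)/4) * ((alpha - 1)/alpha) * (k2/(1 + k2))."""
    return (sqrt(n_plates) / 4.0) * ((alpha - 1.0) / alpha) * (k2 / (1.0 + k2))

print(round(resolution(10_000, 1.4, 4.2), 2))  # ~5.77 -> baseline-resolved
```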
The internal diameter (ID) of an HPLC column is an important parameter. [ 43 ] Reducing it can influence the detection response, owing to the reduced lateral diffusion of the solute band. It can also affect the separation selectivity when flow rate and injection volumes are not scaled down or up proportionally to the smaller or larger diameter used, both in isocratic and in gradient modes. [ 44 ] It determines the quantity of analyte that can be loaded onto the column. Larger diameter columns are usually seen in preparative applications, such as the purification of a drug product for later use. [ 45 ] Low-ID columns have improved sensitivity and lower solvent consumption in the recent ultra-high performance liquid chromatography (UHPLC). [ 46 ]
Larger ID columns (over 10 mm) are used to purify usable amounts of material because of their large loading capacity.
Analytical scale columns (4.6 mm) have been the most common type of columns, though narrower columns [ 46 ] are rapidly gaining in popularity. They are used in traditional quantitative analysis of samples and often use a UV-Vis absorbance detector .
Narrow-bore columns (1–2 mm) are used for applications when more sensitivity is desired, either with special UV-vis detectors, fluorescence detection, or with other detection methods like liquid chromatography-mass spectrometry.
Capillary columns (under 0.3 mm) are used almost exclusively with alternative detection means such as mass spectrometry . They are usually made from fused silica capillaries, rather than the stainless steel tubing that larger columns employ.
Most traditional HPLC is performed with the stationary phase attached to the outside of small spherical silica particles (very small beads). These particles come in a variety of sizes with 5 μm beads being the most common. Smaller particles generally provide more surface area and better separations, but the pressure required for optimum linear velocity increases by the inverse of the particle diameter squared. [ 47 ] [ 48 ] [ 49 ]
According to the equations [ 50 ] for column velocity, efficiency, and backpressure , reducing the particle diameter by half while keeping the size of the column the same will double the column velocity and efficiency, but increase the backpressure fourfold. Small-particle HPLC can also decrease peak width broadening. [ 51 ] Larger particles are used in preparative HPLC (column diameters 5 cm up to >30 cm) and for non-HPLC applications such as solid-phase extraction .
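A minimal sketch of these scaling rules, encoding only the simple proportionalities quoted above (plate count N ∝ 1/dp and, at a fixed linear velocity, backpressure ∝ 1/dp²); the 5 μm to 2.5 μm halving mirrors the example in the text.

```python
# Scaling sketch for particle diameter dp at a fixed column length:
# plate count N ~ 1/dp and, at a fixed linear velocity, backpressure ~ 1/dp**2.
def particle_size_scaling(dp_old_um, dp_new_um):
    ratio = dp_old_um / dp_new_um
    return {"efficiency_gain": ratio,         # N doubles when dp halves
            "backpressure_gain": ratio ** 2}  # pressure quadruples

print(particle_size_scaling(5.0, 2.5))
# {'efficiency_gain': 2.0, 'backpressure_gain': 4.0}
```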
Many stationary phases are porous to provide greater surface area. Small pores provide greater surface area while larger pore size has better kinetics, especially for larger analytes. For example, a protein which is only slightly smaller than a pore might enter the pore but does not easily leave once inside.
Pumps vary in pressure capacity, but their performance is measured on their ability to yield a consistent and reproducible volumetric flow rate . Pressure may reach as high as 60 MPa (6000 lbf/in 2 ), or about 600 atmospheres. Modern HPLC systems have been improved to work at much higher pressures, and therefore are able to use much smaller particle sizes in the columns (<2 μm). These "ultra high performance liquid chromatography" systems or UHPLCs, which could also be known as ultra high pressure chromatography systems, [ 52 ] can work at up to 120 MPa (17,405 lbf/in 2 ), or about 1200 atmospheres. [ 53 ] The term "UPLC" [ 54 ] is a trademark of the Waters Corporation , but is sometimes used to refer to the more general technique of UHPLC.
HPLC detectors fall into two main categories: universal or selective. Universal detectors typically measure a bulk property ( e.g. , refractive index ) by measuring a difference of a physical property between the mobile phase and the mobile phase with solute, while selective detectors measure a solute property ( e.g. , UV-Vis absorbance ) by simply responding to the physical or chemical property of the solute. [ 55 ] HPLC most commonly uses a UV-Vis absorbance detector ; however, a wide range of other chromatography detectors can be used. A universal detector that complements UV-Vis absorbance detection is the charged aerosol detector (CAD). Another commonly utilized detector is the refractive index detector, which provides readings by measuring the changes in the refractive index of the eluent as it moves through the flow cell. In certain cases, it is possible to use multiple detectors; for example, LCMS normally combines UV-Vis with a mass spectrometer.
When used with an electrochemical detector (ECD), HPLC-ECD selectively detects neurotransmitters such as norepinephrine , dopamine , serotonin , glutamate , GABA, acetylcholine and others in neurochemical analysis research applications. [ 56 ] HPLC-ECD detects neurotransmitters down to the femtomolar range. Other methods to detect neurotransmitters include liquid chromatography-mass spectrometry, ELISA, or radioimmunoassays.
Large numbers of samples can be automatically injected onto an HPLC system by the use of HPLC autosamplers. In addition, HPLC autosamplers apply an injection volume and technique which is exactly the same for each injection; consequently, they provide a high degree of injection volume precision.
It is possible to enable sample stirring within the sampling-chamber, thus promoting homogeneity. [ 57 ]
HPLC has many applications in both laboratory and clinical science. It is a common technique used in pharmaceutical development, as it is a dependable way to obtain and ensure product purity. [ 58 ] While HPLC can produce extremely high quality (pure) products, it is not always the primary method used in the production of bulk drug materials. [ 59 ] According to the European pharmacopoeia, HPLC is used in only 15.5% of syntheses. [ 60 ] However, it plays a role in 44% of syntheses in the United States pharmacopoeia. [ 61 ] This could possibly be due to differences in monetary and time constraints, as HPLC on a large scale can be an expensive technique. An increase in specificity, precision, and accuracy that occurs with HPLC unfortunately corresponds to an increase in cost.
This technique is also used for the detection of illicit drugs in various samples. [ 62 ] The most common method of drug detection has been the immunoassay , [ 63 ] which is much more convenient. However, convenience comes at the cost of specificity and coverage of a wide range of drugs, so HPLC has been used as an alternative method as well. As HPLC is a method of determining (and possibly increasing) purity, using HPLC alone to evaluate concentrations of drugs is somewhat insufficient. Therefore, HPLC in this context is often performed in conjunction with mass spectrometry . [ 64 ] Using liquid chromatography-mass spectrometry (LC-MS) instead of gas chromatography-mass spectrometry (GC-MS) circumvents the necessity for derivatizing with acetylating or alkylation agents, which can be a burdensome extra step. [ 65 ] LC-MS has been used to detect a variety of agents like doping agents, drug metabolites, glucuronide conjugates, amphetamines, opioids, cocaine, BZDs, ketamine, LSD, cannabis, and pesticides. [ 66 ] [ 67 ] Performing HPLC in conjunction with mass spectrometry reduces the absolute need for standardizing HPLC experimental runs.
Similar assays can be performed for research purposes, detecting concentrations of potential clinical candidates like anti-fungal and asthma drugs. [ 68 ] This technique is also useful for observing multiple species in collected samples, but requires the use of standard solutions when information about species identity is sought. It is used as a method to confirm the results of synthesis reactions, as purity is essential in this type of research. However, mass spectrometry is still the more reliable way to identify species.
Medical uses of HPLC typically employ a mass spectrometer (MS) as the detector, so the technique is called LC-MS [ 69 ] or LC-MS/MS for tandem MS, where two types of MS are operated sequentially. [ 70 ] When the HPLC instrument is connected to more than one detector, it is called a hyphenated LC system. [ citation needed ] Pharmaceutical applications [ 71 ] are the major users of HPLC, LC-MS and LC-MS/MS. [ 72 ] This includes drug development [ 73 ] and pharmacology, which is the scientific study of the effects of drugs and chemicals on living organisms, [ 74 ] personalized medicine, [ 75 ] public health [ 76 ] [ 77 ] and diagnostics. [ 78 ] While urine is the most common medium for analyzing drug concentrations, blood serum is the sample collected for most medical analyses with HPLC. [ 79 ] One of the most important roles of LC-MS and LC-MS/MS in the clinical lab is newborn screening (NBS) for metabolic disorders [ 80 ] and follow-up diagnostics. [ 81 ] [ 82 ] The infants' samples come in the form of dried blood spots (DBS), [ 83 ] which are simple to prepare and transport, enabling safe and accessible diagnostics, both locally and globally.
Other methods of detection of molecules that are useful for clinical studies have been tested against HPLC, namely immunoassays. In one example, competitive protein binding assays (CPBA) and HPLC were compared for sensitivity in the detection of vitamin D, which is useful for diagnosing vitamin D deficiencies in children; the sensitivity and specificity of this CPBA were found to reach only 40% and 60%, respectively, of the capacity of HPLC. [ 84 ] While HPLC is an expensive tool, its accuracy is nearly unparalleled. | https://en.wikipedia.org/wiki/High-performance_liquid_chromatography
High-performance thin-layer chromatography ( HPTLC ) serves as an extension of thin-layer chromatography (TLC), offering robustness, simplicity, speed, and efficiency in the quantitative analysis of compounds. [ 1 ] This TLC-based analytical technique enhances compound resolution for quantitative analysis. Some of these improvements involve employing higher-quality TLC plates with finer particle sizes in the stationary phase, leading to improved resolution. [ 2 ] Additionally, the separation can be further refined through repeated plate development using a multiple development device. As a result, HPTLC provides superior resolution and lower limits of detection (LODs). [ 3 ]
Advantages of HPTLC: [ 1 ]
HPTLC comprises three modes: linear mode, circular mode, and anticircular mode. Among these modes, the anticircular mode stands out as the fastest in theory and practice within the realm of HPTLC. This mode achieves separation by allowing the mobile phase to enter the plate layer precisely along an outer circular path, after which it flows toward the center at a nearly constant speed. This approach maximizes sample capacity while minimizing time, layer, and mobile phase consumption, making it the most cost-effective HPTLC technique. The narrow spot-path unique to anticircular HPTLC facilitates automated quantification. When compared to the linear and circular modes, the anticircular mode demonstrates superior separation and significantly heightened sensitivity, especially at higher Rf-values. [ 2 ]
To begin HPTLC, a stationary phase has to be selected to separate the different compounds within a mixture. Around 90% of all pharmaceutical separations are performed on normal phase silica gel; however, other stationary phases such as alumina can be used for samples with dissociating compounds, and cellulose for ionic compounds. [ 4 ] The reverse-phase HPTLC method (similar in methodology to reverse-phase TLC) is used for compounds with high polarity. After the selection of the stationary phase, plates are generally washed with methanol and dried in an oven to remove excess solvent. [ 5 ]
Selection of the mobile phase is one of the most important processes of HPTLC and follows a 'trial and error' pathway. However, the ' PRISMA ' system stands as a guideline for finding the optimal mobile phase. [ 1 ] The mobile phase is dependent on the absorptivity of the stationary phase and the composition of the compound of interest. [ 5 ] The compound is first tested with solutions such as diethyl ether , ethanol , dichloromethane , and chloroform for normal phase HPTLC, or solutions such as methanol , acetonitrile , and tetrahydrofuran for reverse phase HPTLC. The retardation factors ( R f) of the compounds with the selected solvent are then analyzed, and the solvent that gives the largest R f is chosen to be the mobile phase for the compound. Then, the mobile solvent strength is tested against hexane (for normal HPTLC) and water (for reverse-phase HPTLC) to determine the need for adjustment. [ 5 ] [ 6 ]
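As a minimal sketch of this screening step, the Rf values below are computed from hypothetical plate measurements (all distances are made-up examples, not data from the text); the solvent giving the largest Rf is then selected, as described above.

```python
# Mobile-phase screening by Rf: compute Rf for each candidate solvent and
# pick the largest. All distances (in mm) are hypothetical measurements.
def retardation_factor(spot_distance_mm, solvent_front_mm):
    """Rf = distance moved by the compound / distance moved by the front."""
    return spot_distance_mm / solvent_front_mm

front_mm = 70.0
candidates = {"methanol": 42.0, "acetonitrile": 55.0, "tetrahydrofuran": 31.0}
rf_values = {s: retardation_factor(d, front_mm) for s, d in candidates.items()}
print(max(rf_values, key=rf_values.get), rf_values)  # acetonitrile is chosen
```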
Notable HPTLC devices such as the Linomat 5 and the Automatic TLC Sampler 4 (ATS 4) by CAMAG function very similarly by having the automated 'spray-on' sample application technique. [ 4 ] [ 5 ] This automated 'spray-on' technique is useful to overcome the uncertainty in droplet size and position when the sample is applied to the TLC plate by hand. Additionally, automation provides high resolution and narrow bands since the solvent evaporates immediately as the sample makes contact with the plate. [ 4 ] One approach to automation has been the use of piezoelectric devices and inkjet printers for applying the sample. [ 7 ] Alternatively, the Nanomat 4 and ATS 4 by CAMAG are manually operated where the sample is applied via spot application using a capillary pipette. [ 4 ] [ 5 ]
For chromatographic development, HPTLC plates are usually developed in saturated twin-trough chambers with filter paper for optimal outcomes. [ 5 ] [ 6 ] However, flat-bottom chambers and horizontal-development chambers are also used for specific compounds. A general procedure for the HPTLC device goes as follows. [ 5 ] A fitted filter paper is placed in the rear trough of the chamber, and the mobile phase is poured through the rear trough to ensure complete solvent absorption by the filter paper. The chamber is then tilted to ~45° so both troughs are equal in solvent volume and left to equilibrate for ~20 minutes. [ 5 ] Finally, the HPTLC plate is placed in the chamber to develop. Between each sample reading, the mobile phase and filter paper are changed to ensure the best outcomes.
The spot capacity (analogous to peak capacity in HPLC ) can be increased by developing the plate with two different solvents, using two-dimensional chromatography . [ 8 ] The procedure begins with development of a sample loaded plate with first solvent. After removing it, the plate is rotated 90° and developed with a second solvent.
HPTLC finds extensive application in various fields, including pharmaceutical industries, clinical chemistry, forensic chemistry, biochemistry, cosmetology, food and drug analysis, environmental analysis, and more, owing to its numerous advantages. It distinguishes itself by being the only chromatographic method capable of presenting results as images and offers simplicity, cost-effectiveness, parallel analysis of samples, high sample capacity, rapid results, and the option for multiple detection methods.
Le Roux's research team assessed HPTLC for determining salbutamol serum levels in clinical trials and concluded that it is a suitable method for analyzing serum samples. [ 3 ]
HPTLC has proven valuable in lichenology for analyzing and identifying lichen substances . Compared to standard TLC, the technique offers several advantages for screening lichen compounds: it allows twice as many samples to be run on one plate, requires significantly less solvent (4 mL per plate versus 250 mL), completes chromatographic separation in under 10 minutes per plate, and can detect substances at much lower concentrations. The method's increased sensitivity has enabled detection of previously unidentified lichen compounds and revealed greater chemical variation within lichen species. Since the early 1990s, HPTLC has been used as an improved alternative to standard TLC for routine screening of lichen substances, though proper plate drying is critical as the technique is more sensitive to atmospheric humidity than standard TLC. [ 9 ]
HPTLC has also been used successfully in the separation of various lipid subclasses, with reproducible and promising results obtained for 20 different lipid subclasses. Numerous reports related to clinical medicine studies have been published in various journals. As a result, HPTLC is now strongly recommended for drug analysis in serum and other tissues. [ 7 ] | https://en.wikipedia.org/wiki/High-performance_thin-layer_chromatography |
A high-power field ( HPF ), when used in relation to microscopy , refers to the field of view under the maximum magnification power of the objective being used. Often, this represents a 400-fold magnification when referenced in scientific papers.
Area per high-power field for some microscope types:
The area provides a reference unit, for example in reference ranges for urine tests . [ 3 ]
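For reference, the area of a high-power field can be derived from the eyepiece field number and the objective magnification: the field-of-view diameter equals the field number divided by the objective magnification. The field number of 22 mm and the 40× objective (400× total with a 10× eyepiece) in the sketch below are typical illustrative values, not figures from the text.

```python
# Area of a high-power field from the eyepiece field number and the
# objective magnification. Values are typical illustrative choices.
from math import pi

def hpf_area_mm2(field_number_mm=22.0, objective_magnification=40.0):
    diameter_mm = field_number_mm / objective_magnification  # field of view
    return pi * (diameter_mm / 2.0) ** 2

print(round(hpf_area_mm2(), 3))  # ~0.238 mm^2 per high-power field
```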
Used for grading of soft tissue tumors: grading, usually on a scale of I to III, is based on the degree of differentiation, the average number of mitoses per high-power field , cellularity, pleomorphism , and an estimate of the extent of necrosis (presumably a reflection of the rate of growth). Mitotic counts and necrosis are the most important predictors. [ 4 ]
The following grading is part of classification of breast cancer :
| https://en.wikipedia.org/wiki/High-power_field
High-power impulse magnetron sputtering (HIPIMS or HiPIMS, also known as high-power pulsed magnetron sputtering , HPPMS) is a method for physical vapor deposition of thin films which is based on magnetron sputter deposition . HIPIMS utilises extremely high power densities of the order of kW⋅cm −2 in short pulses (impulses) of tens of microseconds at low duty cycle (on/off time ratio) of < 10%. Distinguishing features of HIPIMS are a high degree of ionisation of the sputtered metal and a high rate of molecular gas dissociation which result in high density of deposited films. The ionization and dissociation degree increase according to the peak cathode power. The limit is determined by the transition of the discharge from glow to arc phase. The peak power and the duty cycle are selected so as to maintain an average cathode power similar to conventional sputtering (1–10 W⋅cm −2 ).
HIPIMS is used for:
HIPIMS plasma is generated by a glow discharge where the discharge current density can reach several A⋅cm −2 , whilst the discharge voltage is maintained at several hundred volts. [ 1 ] The discharge is homogeneously distributed across the surface of the cathode (target) however above a certain threshold of current density it becomes concentrated in narrow ionization zones that move along a path known as the target erosion "racetrack". [ 2 ]
HIPIMS generates a high density plasma of the order of 10 13 ions⋅cm −3 [ 1 ] containing high fractions of target metal ions. The main ionisation mechanism is electron impact, which is balanced by charge exchange, diffusion, and plasma ejection in flares. The ionisation rates depend on the plasma density. The ionisation degree of the metal vapour is a strong function of the peak current density of the discharge. At high current densities, sputtered ions with charge 2+ and higher – up to 5+ for V – can be generated. The appearance of target ions with charge states higher than 1+ is responsible for a potential secondary electron emission process that has a higher emission coefficient than the kinetic secondary emission found in conventional glow discharges. The establishment of a potential secondary electron emission may enhance the current of the discharge. HIPIMS is typically operated in short pulse (impulse) mode with a low duty cycle in order to avoid overheating of the target and other system components. In every pulse the discharge goes through several stages: [ 1 ]
The negative voltage (bias voltage) applied to the substrate influences the energy and direction of motion of the positively charged particles that hit the substrate. The on-off cycle has a period on the order of milliseconds. Because the duty cycle is small (< 10%), only low average cathode power is the result (1–10 kW). The target can cool down during the "off time", thereby maintaining process stability. [ 3 ]
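The duty-cycle arithmetic behind this can be sketched directly; the peak power density, pulse length, and repetition period below are illustrative values chosen within the ranges quoted earlier in the article, not measurements from a specific system.

```python
# Average cathode power density from peak power density and duty cycle.
# The 2 kW/cm^2 peak, 100 us pulse, and 20 ms period are illustrative
# values (duty cycle here: 0.5%, well below the <10% quoted in the text).
def average_power_density_w_cm2(peak_kw_cm2, pulse_us, period_ms):
    duty_cycle = (pulse_us * 1e-6) / (period_ms * 1e-3)
    return peak_kw_cm2 * 1000.0 * duty_cycle

print(average_power_density_w_cm2(2.0, 100.0, 20.0))  # -> 10.0 W/cm^2
```

This shows how a kW/cm²-class peak power density averages down to the 1–10 W/cm² range of conventional sputtering, allowing the target to cool between pulses.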
The discharge that maintains HIPIMS is a high-current glow discharge, which is transient or quasistationary . Each pulse remains a glow up to a critical duration after which it transits to an arc discharge . If pulse length is kept below the critical duration, the discharge operates in a stable fashion indefinitely.
Initial observations by fast camera imaging [ 2 ] in 2008 were recorded independently, [ 4 ] demonstrated with better precision, [ 5 ] and confirmed, [ 6 ] showing that most ionization processes occur in spatially very limited ionization zones. The drift velocity of these zones was measured to be of the order of 10 4 m/s, [ 5 ] which is only about 10% of the electron drift velocity.
Substrate pretreatment in a plasma environment is required prior to deposition of thin films on mechanical components such as automotive parts, metal cutting tools and decorative fittings. The substrates are immersed in a plasma and biased to a high voltage of a few hundred volts. This causes high energy ion bombardment that sputters away any contamination. In cases when the plasma contains metal ions, they can be implanted into the substrate to a depth of a few nm. HIPIMS is used to generate a plasma with a high density and high proportion of metal ions. When looking at the film-substrate interface in cross-section, one can see a clean interface. Epitaxy or atomic registry is typical between the crystal of a nitride film and the crystal of a metal substrate when HIPIMS is used for pretreatment. [ 7 ] HIPIMS has been used for the pretreatment of steel substrates for the first time in February 2001 by A.P. Ehiasarian. [ 8 ]
Substrate biasing during pretreatment uses high voltages, which require purpose-designed arc detection and suppression technology. Dedicated DC substrate biasing units provide the most versatile option as they maximize substrate etch rates, minimise substrate damage, and can operate in systems with multiple cathodes. An alternative is the use of two HIPIMS power supplies synchronised in a master–slave configuration: one to establish the discharge and one to produce a pulsed substrate bias [ 9 ]
Thin films deposited by HIPIMS at discharge current density > 0.5 A⋅cm −2 have a dense columnar structure with no voids. The deposition of copper films by HIPIMS was reported for the first time by V. Kouznetsov for the application of filling 1 μm vias with aspect ratio of 1:1.2 [ 10 ]
Transition metal nitride (CrN) thin films were deposited by HIPIMS for the first time in February 2001 by A.P. Ehiasarian. [ 11 ] The first thorough investigation of films deposited by HIPIMS by TEM demonstrated a dense microstructure, free of large scale defects. [ 8 ] The films had a high hardness , good corrosion resistance and low sliding wear coefficient. [ 8 ] The commercialisation of HIPIMS hardware that followed made the technology accessible to the wider scientific community and triggered developments in a number of areas.
Similarly to what is witnessed in a conventional reactive sputter deposition process, HiPIMS has also been used to attain oxide- or nitride-based films on several substrates, as seen in the list below. However, as is characteristic of these methods, such reactive depositions exhibit significant hysteresis and need to be carefully examined to identify the optimal operating points. Significant overviews of reactive HiPIMS were published by André Anders [ 12 ] and Kubart et al. [ 13 ]
The following materials have, among others, been deposited successfully by HIPIMS:
HIPIMS has been successfully applied for the deposition of thin films in industry, particularly on cutting tools. The first HIPIMS coating units appeared on the market in 2006.
The gold version of the Apple iPhone 12 Pro uses this process on the structural stainless steel band that also serves as the device's antenna system. [ 22 ]
The main advantages of HIPIMS coatings include a denser coating morphology [ 23 ] and an increased ratio of hardness to Young's modulus compared to conventional PVD coatings. Whereas comparable conventional nano-structured (Ti,Al)N coatings have a hardness of 25 GPa and a Young's modulus of 460 GPa, the hardness of the new HIPIMS coating is higher than 30 GPa with a Young's modulus of 368 GPa. The ratio between hardness and Young's modulus is a measure of the toughness properties of the coating. The desirable condition is high hardness with a relatively small Young's modulus, such as can be found in HIPIMS coatings. Recently, innovative applications of HIPIMS coated surfaces for biomedical applications were reported by Rtimi et al. [ 24 ] | https://en.wikipedia.org/wiki/High-power_impulse_magnetron_sputtering |
A gas cylinder is a pressure vessel for storage and containment of gases at above atmospheric pressure . Gas storage cylinders may also be called bottles . Inside the cylinder the stored contents may be in a state of compressed gas, vapor over liquid, supercritical fluid , or dissolved in a substrate material, depending on the physical characteristics of the contents. A typical gas cylinder design is elongated, standing upright on a flattened or dished bottom end or foot ring, with the cylinder valve screwed into the internal neck thread at the top for connecting to the filling or receiving apparatus. [ 1 ]
Gas cylinders may be grouped by several characteristics, such as construction method, material, pressure group, class of contents, transportability, and re-usability. [ 2 ]
The size of a pressurised gas container that may be classed as a gas cylinder is typically 0.5 litres to 150 litres. Smaller containers may be termed gas cartridges, and larger may be termed gas tubes, tanks, or other specific type of pressure vessel. A gas cylinder is used to store gas or liquefied gas at pressures above normal atmospheric pressure. [ 2 ] In South Africa, a gas storage cylinder implies a refillable transportable container with a water capacity volume of up to 150 litres. Refillable transportable cylindrical containers from 150 to 3,000 litres water capacity are referred to as tubes. [ 1 ]
In the United States, " bottled gas " typically refers to liquefied petroleum gas . "Bottled gas" is sometimes used in medical supply, especially for portable oxygen tanks . Packaged industrial gases are frequently called "cylinder gas", though "bottled gas" is sometimes used. The term propane tank is also used for cylinders for propane. [ citation needed ]
The United Kingdom and other parts of Europe more commonly refer to "bottled gas" when discussing any usage, whether industrial, medical, or liquefied petroleum. In contrast, what is called liquefied petroleum gas in the United States is known generically in the United Kingdom as "LPG" and it may be ordered by using one of several trade names , or specifically as butane or propane , depending on the required heat output. [ citation needed ]
The term cylinder in this context is sometimes confused with tank , the latter being an open-top or vented container that stores liquids under gravity, though the term scuba tank is commonly used to refer to a compressed gas cylinder used for breathing gas supply to an underwater breathing apparatus .
Since fibre-composite materials have been used to reinforce pressure vessels, various types of cylinder distinguished by the construction method and materials used have been defined: [ 7 ] [ 8 ]
Assemblies comprising a group of cylinders mounted together for combined use or transport:
All-metal cylinders are the most rugged and usually the most economical option, but are relatively heavy. Steel is generally the most resistant to rough handling and most economical, and is often lighter than aluminium for the same working pressure, capacity, and form factor due to its higher specific strength. The inspection interval of industrial steel cylinders has increased from 5 or 6 years to 10 years. Diving cylinders that are used in water must be inspected more often; intervals tend to range between 1 and 5 years. Steel cylinders are typically withdrawn from service after 70 years, or may continue to be used indefinitely providing they pass periodic inspection and testing. [ citation needed ] When they were found to have inherent structural problems, certain steel and aluminium alloys were withdrawn from service, or discontinued from new production, while existing cylinders may require different inspection or testing, but remain in service provided they pass these tests. [ citation needed ]
For very high pressures, composites have a greater mass advantage. Due to the very high tensile strength of carbon fiber reinforced polymer , these vessels can be very light, but are more expensive to manufacture. [ 12 ] Filament wound composite cylinders are used in fire fighting breathing apparatus, high altitude climbing, and oxygen first aid equipment because of their low weight, but are rarely used for diving, due to their high positive buoyancy . They are occasionally used when portability for accessing the dive site is critical, such as in cave diving where the water surface is far from the cave entrance. [ 13 ] [ 14 ] Composite cylinders certified to ISO-11119-2 or ISO-11119-3 may only be used for underwater applications if they are manufactured in accordance with the requirements for underwater use and are marked "UW". [ 15 ]
Cylinders reinforced with or made from a fibre reinforced material usually must be inspected more frequently than metal cylinders, e.g. , every 5 instead of 10 years, and must be inspected more thoroughly than metal cylinders as they are more susceptible to impact damage. They may also have a limited service life. [ citation needed ] Fibre composite cylinders were originally specified for a limited life span of 15, 20 or 30 years, but this has been extended when they proved to be suitable for longer service. [ citation needed ]
The Type 1 pressure vessel is a seamless cylinder normally made of cold-extruded aluminum or forged steel . [ 16 ] The pressure vessel comprises a cylindrical section of even wall thickness, with a thicker base at one end, and domed shoulder with a central neck to attach a cylinder valve or manifold at the other end.
Occasionally other materials may be used. Inconel has been used for non-magnetic and highly corrosion resistant oxygen compatible spherical high-pressure gas containers for the US Navy's Mk-15 and Mk-16 mixed gas rebreathers, and a few other military rebreathers.
Most aluminum cylinders are flat bottomed, allowing them to stand upright on a level surface, but some were manufactured with domed bottoms.
Aluminum cylinders are usually manufactured by cold extrusion of aluminum billets in a process which first presses the walls and base, then trims the top edge of the cylinder walls, followed by press forming the shoulder and neck. The final structural process is machining the neck outer surface, boring and cutting the neck threads and O-ring groove. The cylinder is then heat-treated, tested and stamped with the required permanent markings. [ 17 ]
Steel cylinders are often used because they are harder and more resistant to external surface impact and abrasion damage, and can tolerate higher temperatures without affecting material properties. They also may have a lower mass than aluminium cylinders with the same gas capacity , due to considerably higher specific strength . Steel cylinders are more susceptible than aluminium to external corrosion, particularly in seawater, and may be galvanized or coated with corrosion barrier paints to resist corrosion damage. It is not difficult to monitor external corrosion, and repair the paint when damaged, and steel cylinders which are well maintained have a long service life, often longer than aluminium cylinders, as they are not susceptible to fatigue damage when filled within their safe working pressure limits.
Steel cylinders are manufactured with domed (convex) and dished (concave) bottoms. The dished profile allows them to stand upright on a horizontal surface, and is the standard shape for industrial cylinders. The cylinders used for emergency gas supply on diving bells are often this shape, and commonly have a water capacity of about 50 litres ("J"). Domed bottoms give a larger volume for the same cylinder mass, and are the standard for scuba cylinders up to 18 litres water capacity, though some concave bottomed cylinders have been marketed for scuba. Domed end industrial cylinders may be fitted with a press-fitted foot ring to allow upright standing. [ 18 ] [ 19 ]
Steel alloys used for gas cylinder manufacture are authorised by the manufacturing standard. For example, the US standard DOT 3AA requires the use of open-hearth, basic oxygen, or electric steel of uniform quality. Approved alloys include 4130X, NE-8630, 9115, 9125, Carbon-boron and Intermediate manganese, with specified constituents, including manganese and carbon, and molybdenum, chromium, boron, nickel or zirconium. [ 20 ]
Steel cylinders may be manufactured from steel plate discs stamped from annealed plate or coil, which are lubricated and cold drawn to a cylindrical cup form by a hydraulic press; the cup is annealed and drawn again in two or three stages until the final diameter and wall thickness are reached. They generally have a domed base if intended for the scuba market, so they cannot stand up by themselves. For industrial use, a dished base allows the cylinder to stand on its end on a flat surface. After forming the base and side walls, the top of the cylinder is trimmed to length, heated, and hot spun to form the shoulder and close the neck. This process thickens the material of the shoulder. The cylinder is heat-treated by quenching and tempering to provide the best strength and toughness. The cylinders are machined to provide the neck thread and o-ring seat (if applicable), then chemically cleaned or shot-blasted inside and out to remove mill-scale. After inspection and hydrostatic testing they are stamped with the required permanent markings, followed by external coating with a corrosion barrier paint or hot dip galvanising and final inspection. [ 21 ] [ 4 ]
A related method is to start with seamless steel tube of a suitable diameter and wall thickness, manufactured by a process such as the Mannesmann process , and to close both ends by the hot spinning process. This method is particularly suited to high pressure gas storage tubes , which usually have a threaded neck opening at both ends, so that both ends are processed alike. When a neck opening is only required at one end, the base is spun first and dressed inside for a uniform smooth surface, then the process of closing the shoulder and forming the neck is the same as for the pressed plate method. [ 4 ]
An alternative production method is backward extrusion of a heated steel billet, similar to the cold extrusion process for aluminium cylinders, followed by hot drawing and bottom forming to reduce wall thickness, and trimming of the top edge in preparation for shoulder and neck formation by hot spinning. The other processes are much the same for all production methods. [ 22 ] [ 4 ]
The neck of the cylinder is the part of the end which is shaped as a narrow concentric cylinder, and internally threaded to fit a cylinder valve. There are several standards for neck threads, which include parallel threads where the seal is by an O-ring gasket, and taper threads which seal along the contact surface by deformation of the contact surfaces, and on thread tape or sealing compound . [ 3 ]
Type 2 is hoop wrapped with fibre reinforced resin over the cylindrical part of the cylinder, where circumferential load is highest. The fibres share the circumferential load with the metal core, and achieve a significant weight saving due to efficient stress distribution and high specific strength and stiffness of the composite. The core is a seamless metal cylinder, manufactured in any of the ways suitable for a type 1 cylinder, but with thinner walls, as they only carry about half the load, mainly the axial load. Hoop winding is at an angle to the length axis of close to 90°, so the fibres carry negligible axial load. [ 4 ]
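A minimal thin-wall pressure-vessel sketch illustrates why the circumferential load dominates: the hoop stress P·r/t is twice the axial stress P·r/(2t), which is why the hoop wrap carries roughly half the total load. The pressure and geometry values below are assumed for illustration only.

```python
# Thin-wall pressure-vessel stresses: hoop stress sigma_h = P*r/t is twice
# the axial stress P*r/(2*t). Pressure and geometry are assumed values.
def wall_stresses_mpa(pressure_mpa, inner_radius_mm, wall_thickness_mm):
    hoop = pressure_mpa * inner_radius_mm / wall_thickness_mm
    axial = hoop / 2.0
    return hoop, axial

# 20 MPa (~200 bar) service pressure, 90 mm inner radius, 5 mm wall:
print(wall_stresses_mpa(20.0, 90.0, 5.0))  # (360.0, 180.0) MPa
```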
Type 3 is wrapped over the entire cylinder except for the neck, and the metal liner is mainly to make the cylinder gas tight, so very little load is carried by the liner. Winding angles are optimised to carry all the loads (axial and circumferential) from the pressurised gas in the cylinder. Only the neck metal is exposed on the outside. This construction can save in the order of 30% of the mass compared with type 2, as the fibre composite has a higher specific strength than the metal of the type 2 liner that it replaces. [ 4 ]
Type 4 is wrapped in the same way as type 3, but the liner is non-metallic. A metal neck boss is fitted to the shoulder of the plastic liner before winding, and this carries the neck threads for the cylinder valve. The outside of the neck of the insert is not covered by the fibre wrapping, and may have axial ridges to engage with a wrench or clamp for torsional support when fitting or removing the cylinder valve. There is a mass reduction compared with type 3 due to the lower density of the plastic liner. [ 4 ]
A welded gas cylinder comprises two or more shell components joined by welding. The most commonly used material is steel, but stainless steel, aluminium and other alloys can be used when they are better suited to the application. Steel is strong, resistant to physical damage, easy to weld, relatively low in cost and usually adequate for corrosion resistance, making for an economical product.
The components of the shell are usually domed ends, and often a rolled cylindrical centre section. The ends are usually domed by cold pressing from a circular blank, and may be drawn in two or more stages to get the final shape, which is generally semi-elliptical in section. The end blank is typically punched from sheet, drawn to the required section, edges trimmed to size and necked for overlap where appropriate, and hole(s) for the neck and other fittings punched. The neck boss is inserted from the concave side and welded in place before shell assembly. [ 23 ]
Smaller cylinders are typically assembled from a top and bottom dome, with an equatorial weld seam. Larger cylinders with a longer cylindrical body comprise dished ends circumferentially welded to a rolled central cylindrical section with a single longitudinal welded seam. Welding is typically automated gas metal arc welding . [ 23 ]
Typical accessories which are welded to the outside of the cylinder include a foot ring, a valve guard with lifting handles, and a neck boss threaded for the valve. Occasionally other through-shell and external fittings are also welded on. [ 23 ]
After welding, the assembly may be heat treated for stress-relief and to improve mechanical characteristics, cleaned by shotblasting , and coated with a protective and decorative coating. Testing and inspection for quality control will take place at various stages of production. [ 23 ]
The transportation of high-pressure cylinders is regulated by many governments throughout the world. Various levels of testing are generally required by the governing authority for the country in which the cylinder is to be transported while filled. In the United States, this authority is the United States Department of Transportation (DOT). Similarly in the UK, the European transport regulations (ADR) are implemented by the Department for Transport (DfT). For Canada, this authority is Transport Canada (TC). Cylinders may have additional requirements placed on design and/or performance by independent testing agencies such as Underwriters Laboratories (UL). Each manufacturer of high-pressure cylinders is required to have an independent quality agent that inspects the product for quality and safety.
Within the UK the " competent authority ", the Department for Transport (DfT), implements the regulations; appointment of authorised cylinder testers is conducted by the United Kingdom Accreditation Service (UKAS), which makes recommendations to the Vehicle Certification Agency (VCA) for approval of individual bodies.
There are a variety of tests that may be performed on various cylinders. Some of the most common types of tests are hydrostatic test , burst test, ultimate tensile strength , Charpy impact test and pressure cycling.
During the manufacturing process, vital information is usually stamped or permanently marked on the cylinder. This information usually includes the type of cylinder, the working or service pressure, the serial number, the date of manufacture, the manufacturer's registered code and sometimes the test pressure. Other information may also be stamped, depending on the regulation requirements.
High-pressure cylinders that are used multiple times, as most are, can be hydrostatically or ultrasonically tested and visually examined every few years. [ 24 ] In the United States, hydrostatic or ultrasonic testing is required either every five years or every ten years, depending on the cylinder and its service.
Cylinder neck thread can be to any one of several standards. Both taper thread sealed with thread tape and parallel thread sealed with an O-ring have been found satisfactory for high pressure service, but each has advantages and disadvantages for specific use cases, and if there are no regulatory requirements, the type may be chosen to suit the application. [ 3 ]
A tapered thread provides simple assembly, but requires high torque to establish a reliable seal, which causes high radial forces in the neck, and the thread can be remade only a limited number of times before it is excessively deformed. Its service life can be extended somewhat by always returning the same fitting to the same cylinder and by avoiding over-tightening. [ 3 ]
In Australia, Europe and North America, tapered neck threads are generally preferred for inert, flammable, corrosive and toxic gases, but when aluminium cylinders are used for oxygen service to United States Department of Transportation (DOT) or Transport Canada (TC) specifications in North America, the cylinders must have parallel thread. DOT and TC allow UN pressure vessels to have tapered or parallel threaded openings. In the US, 49 CFR Part 171.11 applies, and in Canada, CSA B340-18 and CSA B341-18. In Europe and other parts of the world, tapered thread is preferred for cylinder inlets for oxidising gases. [ 3 ]
Scuba cylinders typically have a much shorter interval between internal inspections, which makes tapered thread less satisfactory because of the limited number of times a tapered-thread valve can be re-fitted before it wears out, [ 3 ] so parallel thread is generally used for this application. [ 1 ]
Parallel thread can be tightened sufficiently to form a good seal with the O-ring without lubrication, which is an advantage when the lubricant may react with the O-ring or the contents. Repeated secure installations are possible with different combinations of valve and cylinder provided they have compatible thread and correct O-ring seals. Parallel thread is more likely to give the technician warning of residual internal pressure by leaking or extruding the O-ring before catastrophic failure when the O-ring seal is broken during removal of the valve. The O-ring size must be correct for the combination of cylinder and valve, and the material must be compatible with the contents and any lubricant used. [ 3 ]
Gas cylinders usually have an angle stop valve at one end, and the cylinder is usually oriented so the valve is on top. During storage, transportation, and handling when the gas is not in use, a cap may be screwed over the protruding valve to protect it from damage or breaking off in case the cylinder were to fall over. Instead of a cap, cylinders sometimes have a protective collar or neck ring around the valve assembly which has an opening for access to fit a regulator or other fitting to the valve outlet, and access to operate the valve. Installation of valves for high pressure aluminum alloy cylinders is described in the guidelines: CGA V-11, Guideline for the Installation of Valves into High Pressure Aluminum Alloy Cylinders and ISO 13341, Transportable gas cylinders—Fitting of valves to gas cylinders. [ 3 ]
The valves on industrial, medical and diving cylinders usually have threads or connection geometries of different handedness, sizes and types depending on the category of gas, making it more difficult to use the wrong gas by mistake. For example, a hydrogen cylinder valve outlet does not fit an oxygen regulator and supply line, a combination which could result in catastrophe. Some fittings use a right-hand thread, while others use a left-hand thread ; left-hand thread fittings are usually identifiable by notches or grooves cut into them, and are usually used for flammable gases.
In the United States, valve connections are sometimes referred to as CGA connections , since the Compressed Gas Association (CGA) publishes guidelines on which connections to use for which gases. For example, an argon cylinder may have a "CGA 580" connection on the valve. High purity gases sometimes use CGA-DISS (" Diameter Index Safety System ") connections.
Medical gases may use the Pin Index Safety System to prevent incorrect connection of gases to services.
In the European Union, DIN connections are more common than in the United States.
In the UK, the British Standards Institution sets the standards. Included among these is the use of left-hand threaded valves for flammable gas cylinders (most commonly brass BS4 valves for non-corrosive cylinder contents, or stainless steel BS15 valves for corrosive contents). Non-flammable gas cylinders are fitted with right-hand threaded valves (most commonly brass BS3 valves for non-corrosive contents, or stainless steel BS14 valves for corrosive contents). [ 25 ]
When the gas in the cylinder is to be used at low pressure, the cap is taken off and a pressure-regulating assembly is attached to the stop valve. This attachment typically has a pressure regulator with upstream (inlet) and downstream (outlet) pressure gauges, and a further downstream needle valve and outlet connection. For gases that remain gaseous under ambient storage conditions, the upstream pressure gauge can be used to estimate how much gas is left in the cylinder according to pressure. For gases that are liquid under storage, e.g., propane, the cylinder pressure depends on the vapor pressure of the contents, and does not fall until the cylinder is nearly exhausted, although it varies with the temperature of the cylinder contents. The regulator is adjusted to control the downstream pressure, which limits the maximum flow of gas out of the cylinder at the pressure shown by the downstream gauge. For some purposes, such as shielding gas for arc welding, the regulator will also have a flowmeter on the downstream side.
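For a gas that remains gaseous in storage, the upstream gauge reading translates into remaining contents in a straightforward way. Below is a minimal sketch using the ideal-gas approximation; the cylinder size and pressure are hypothetical figures, and real gases deviate noticeably from this estimate at high pressure:

```python
def remaining_free_gas_litres(water_capacity_l: float,
                              gauge_pressure_bar: float) -> float:
    """Approximate free gas remaining, in litres at ~1 bar ambient.

    Ideal-gas estimate: stored gas volume scales linearly with absolute
    pressure. Compressibility corrections matter above ~200 bar.
    """
    return water_capacity_l * gauge_pressure_bar

# Hypothetical 50 L cylinder reading 120 bar on the upstream gauge
print(remaining_free_gas_litres(50.0, 120.0))  # ~6000 L of free gas
```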
The regulator outlet connection is attached to whatever needs the gas supply.
Because the contents are under pressure and are sometimes hazardous materials , handling bottled gases is regulated. Regulations may include chaining bottles to prevent falling and damaging the valve, proper ventilation to prevent injury or death in case of leaks and signage to indicate the potential hazards. If a compressed gas cylinder falls over, causing the valve block to be sheared off, the rapid release of high-pressure gas may cause the cylinder to be violently accelerated, potentially causing property damage, injury, or death. To prevent this, cylinders are normally secured to a fixed object or transport cart with a strap or chain. They can also be stored in a safety cabinet .
In a fire, the pressure in a gas cylinder rises in direct proportion to its absolute temperature . If the internal pressure exceeds the mechanical limitations of the cylinder and there are no means to safely vent the pressurized gas to the atmosphere, the vessel will fail mechanically. If the vessel contents are flammable, this event may result in a "fireball". [ 26 ] Oxidisers such as oxygen and fluorine will produce a similar effect by accelerating combustion in the area affected. If the cylinder's contents are liquid, but become a gas at ambient conditions, this is commonly referred to as a boiling liquid expanding vapour explosion (BLEVE). [ 27 ]
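The proportionality is the isochoric gas law. As a worked illustration with assumed figures, a cylinder filled to 200 bar at 293 K and heated to 600 K in a fire reaches approximately

$$P_2 = P_1\,\frac{T_2}{T_1} = 200\ \mathrm{bar} \times \frac{600\ \mathrm{K}}{293\ \mathrm{K}} \approx 410\ \mathrm{bar},$$

well above the rated working pressure.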
Medical gas cylinders in the UK and some other countries have a fusible plug of Wood's metal in the valve block between the valve seat and the cylinder. This plug melts at a comparatively low temperature (70 °C) and allows the contents of the cylinder to escape to the surroundings before the cylinder is significantly weakened by the heat, lessening the risk of explosion.
A more common pressure relief device is a simple burst disc installed in the base of the valve between the cylinder and the valve seat. A burst disc is a small metal gasket engineered to rupture at a pre-determined pressure. Some burst discs are backed with a low-melting-point metal, so that the valve must be exposed to excessive heat before the burst disc can rupture.
The Compressed Gas Association publishes a number of booklets and pamphlets on safe handling and use of bottled gases.
There is a wide range of standards relating to the manufacture, use and testing of pressurised gas cylinders and related components. Some examples are listed here.
Gas cylinders are often color-coded , but the codes are not standard across different jurisdictions and are sometimes unregulated. Cylinder color cannot safely be used for positive product identification; cylinders carry labels to identify the gas they contain.
The Indian Standard for Gas Cylinder Color Code applies to the identification of the contents of gas cylinders intended for medical use. Each cylinder shall be painted externally in the colours corresponding to its gaseous contents. [ 35 ]
The following are example cylinder sizes and do not constitute an industry standard.
(US DOT specs define material, making, and maximum pressure in psi. They are comparable to Transport Canada specs, which show pressure in bars . A 3E-1800 in DOT nomenclature would be a TC 3EM 124 in Canada. [ 36 ] )
For larger volumes, high pressure gas storage units known as tubes are available. They generally have a larger diameter and length than high pressure cylinders, and usually have a tapped neck at both ends. They may be mounted alone or in groups on trailers, permanent bases, or intermodal transport frames . Due to their length, they are mounted horizontally on mobile structures. In general usage they are often manifolded together and managed as a unit. [ 37 ] [ 38 ]
Groups of similar size cylinders may be mounted together and connected to a common manifold system to provide larger storage capacity than a single standard cylinder. This is commonly called a cylinder bank or a gas storage bank. The manifold may be arranged to allow simultaneous flow from all the cylinders, or, for a cascade filling system , where gas is tapped off cylinders according to the lowest positive pressure difference between storage and destination cylinder, being a more efficient use of pressurised gas. [ 39 ]
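The cascade selection rule described above can be sketched in a few lines; the pressures are hypothetical and the model ignores temperature changes during transfer:

```python
def next_storage_cylinder(storage_pressures_bar, destination_pressure_bar):
    """Pick the storage cylinder with the lowest pressure that still
    exceeds the destination pressure (the cascade filling rule)."""
    usable = [p for p in storage_pressures_bar if p > destination_pressure_bar]
    return min(usable) if usable else None  # None: bank exhausted for this fill

# Hypothetical bank at 180, 220 and 300 bar; destination cylinder at 150 bar
print(next_storage_cylinder([180, 220, 300], 150))  # -> 180
```

Drawing from the lowest usable storage pressure first preserves the high-pressure cylinders for the end of the fill, which is what makes the cascade more efficient than discharging the whole bank in parallel.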
A gas cylinder quad, also known as a gas cylinder bundle, is a group of high pressure cylinders mounted on a transport and storage frame. There are commonly 16 cylinders, each of about 50 litres capacity, mounted upright in four rows of four on a square base, within a square-plan frame with lifting points on top; the base may have fork-lift slots. The cylinders are usually interconnected by a manifold for use as a unit, but many variations in layout and structure are possible. [ 9 ] | https://en.wikipedia.org/wiki/High-pressure_gas_storage_cylinder
High-pressure torsion (HPT) is a severe plastic deformation technique used to refine the microstructure of materials by applying both high pressure and torsional strain. [ 1 ] HPT involves compressing a material between two anvils while simultaneously rotating one of the anvils, inducing shear deformation. [ 2 ] It was introduced in 1935 by P.W. Bridgman , who developed early methods to apply extreme strain under high pressures in material processing. [ 3 ] This process is widely used in materials science to create ultrafine-grained and nanostructured metallic and non-metallic materials, engineer crystal lattice defects, control phase transformations, synthesize new materials or investigate mechanisms underlying some natural phenomena.
HPT leads to significant grain refinement, resulting in materials with enhanced mechanical properties such as increased tensile strength and hardness . HPT also has applications in producing metals with enhanced superplasticity , improving the toughness of alloys, and creating materials with unique properties like high wear resistance. Researchers use HPT to study fundamental aspects of deformation and phase transition under extreme conditions. Additionally, HPT is being explored for potential applications in the biomedical and energy fields due to the enhancement of functional properties. Progress in HPT science and technology has opened new possibilities in the development of advanced materials with superior mechanical and functional properties. [ 4 ]
| https://en.wikipedia.org/wiki/High-pressure_torsion
High-redundancy actuation (HRA) is a new approach to fault-tolerant control in the area of mechanical actuation.
The basic idea is to use many small actuation elements, so that a fault in one element has only a minor effect on the overall system. This way, a high-redundancy actuator can remain functional even after several elements have failed. This property is also called graceful degradation .
Fault-tolerant operation in the presence of actuator faults requires some form of redundancy. Actuators are essential because they are used to keep the system stable and to bring it into the desired state. Both require a certain amount of power or force to be applied to the system. No control approach can work unless the actuators produce this necessary force.
So the common solution is to err on the side of safety by over-actuation: much more control action than strictly necessary is built into the system. For critical systems, the normal approach involves straightforward replication of the actuators. Often three or four actuators are used in parallel in aircraft flight control systems, even if one would be sufficient from a control point of view. So if one actuator fails, the remaining actuators can keep the system operating. While this approach is certainly successful, it also makes the system expensive, heavy and inefficient.
The idea of the high-redundancy actuation (HRA) is inspired by the human musculature. A muscle is composed of many individual muscle cells, each of which provides only a minute contribution to the force and the travel of the muscle. These properties allow the muscle as a whole to be highly resilient to damage of individual cells.
The aim of high redundancy actuation is not to produce man-made muscles, but to use the same principle of cooperation in technical actuators to provide intrinsic fault tolerance. To achieve this, a high number of small actuator elements are assembled in parallel and in series to form one actuator (see Series and parallel circuits ).
Faults within the actuator will affect the maximum capability, but through robust control, full performance can be maintained without either adaptation or reconfiguration. Some form of condition monitoring is necessary to provide warnings to the operator calling for maintenance . But this monitoring has no influence on the system itself, unlike in adaptive methods or control reconfiguration , which simplifies the design of the system significantly.
The HRA is an important new approach within the overall area of fault-tolerant control, using concepts of reliability engineering on a mechanical level. When applicable, it can provide actuators that have graceful degradation, and that continue to operate at close to nominal performance even in the presence of multiple faults in the actuator elements.
An important feature of the high-redundancy actuation is that the actuator elements are connected both in parallel and in series. While the parallel arrangement is commonly used, the configuration in series is rarely employed, because it is perceived to be less efficient.
However, there is one fault that is difficult to deal with in a parallel arrangement: the locking up of one actuator element. Because parallel actuator elements always have the same extension, one locked-up element can render the whole assembly useless. It is possible to mitigate this by guarding the elements against locking or by limiting the force exerted by a single element, but these measures both reduce the effectiveness of the system and introduce new points of failure.
The analysis of the serial configuration shows that it remains operational when one element is locked-up. This fact is important for the High Redundancy Actuator, as fault tolerance is required for different fault types. The goal of the HRA project is to use parallel and serial actuator elements to accommodate both the blocking and the inactivity (loss of force) of an element.
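A deliberately crude bookkeeping model illustrates how a combined series/parallel grid degrades gracefully under both fault types. All numbers and the fault semantics here are illustrative assumptions for this sketch, not taken from the HRA literature:

```python
# Each element nominally contributes travel 1 and force 1.
# "locked": still transmits force but adds no travel.
# "inactive": moves freely but transmits no force.
TRAVEL = {"ok": 1, "locked": 0, "inactive": 1}
FORCE = {"ok": 1, "locked": 1, "inactive": 0}

def series(branch):
    """Series stack: travels add; force is limited by the weakest element."""
    return (sum(TRAVEL[s] for s in branch), min(FORCE[s] for s in branch))

def assembly(branches):
    """Parallel branches share a common travel and sum their forces."""
    caps = [series(b) for b in branches]
    return (min(t for t, _ in caps), sum(f for _, f in caps))

print(assembly([["ok", "ok"], ["ok", "ok"]]))        # (2, 2): nominal
print(assembly([["locked", "ok"], ["ok", "ok"]]))    # (1, 2): travel degrades, still moves
print(assembly([["inactive", "ok"], ["ok", "ok"]]))  # (2, 1): force degrades, still pushes
```

In a purely parallel arrangement the locked element would pin the travel to zero; the series pairing inside each branch is what turns a lock-up into mere degradation.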
The basic idea of high-redundancy actuation is technology agnostic: it should be applicable to a wide range of actuator technology, including different kinds of linear actuators and rotational actuators.
However, initial experiments are performed with electric actuators , especially with electromechanical and electromagnetic technology. Compared to pneumatic actuators , electrical drives allow much finer control of position and force. | https://en.wikipedia.org/wiki/High-redundancy_actuation
A high-refractive-index polymer (HRIP) is a polymer that has a refractive index greater than 1.50. [ 1 ]
Such materials are required for anti-reflective coating and photonic devices such as light emitting diodes (LEDs) and image sensors . [ 1 ] [ 2 ] [ 3 ] The refractive index of a polymer is based on several factors which include polarizability , chain flexibility, molecular geometry and the polymer backbone orientation. [ 4 ] [ 5 ]
As of 2004, the highest refractive index for a polymer was 1.76. [ 6 ] Substituents with high molar fractions or high-n nanoparticles in a polymer matrix have been introduced to increase the refractive index in polymers. [ 7 ]
A typical polymer has a refractive index of 1.30–1.70, but a higher refractive index is often required for specific applications. The refractive index is related to the molar refractivity , structure and weight of the monomer. In general, high molar refractivity and low molar volumes increase the refractive index of the polymer. [ 1 ]
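The statement that high molar refractivity and low molar volume raise the refractive index is captured by the Lorentz–Lorenz relation, quoted here in its standard textbook form:

$$\frac{n^2 - 1}{n^2 + 2} = \frac{R_M}{V_M}$$

where $R_M$ is the molar refractivity and $V_M$ the molar volume; increasing the ratio $R_M / V_M$ drives $n$ upward.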
Optical dispersion is an important property of an HRIP. It is characterized by the Abbe number . A high refractive index material will generally have a small Abbe number, i.e. a high optical dispersion. [ 8 ] Many applications require a low birefringence along with a high refractive index, which can be achieved by using different functional groups in the initial monomer used to make the HRIP. Aromatic monomers both increase the refractive index and decrease the optical anisotropy, and thus the birefringence. [ 7 ]
A high clarity (optical transparency) is also desired in a high refractive index polymer. The clarity is dependent on the refractive indexes of the polymer and of the initial monomer. [ 9 ]
When looking at thermal stability, the typical variables measured include the glass transition , initial decomposition temperature , degradation temperature and melting temperature range. [ 2 ] Thermal stability can be measured by thermogravimetric analysis and differential scanning calorimetry . Polyesters are considered thermally stable, with a degradation temperature of 410 °C. The decomposition temperature changes depending on the substituent attached to the monomer used in the polymerization of the high refractive index polymer; longer alkyl substituents result in lower thermal stability. [ 7 ]
Most applications favor polymers which are soluble in as many solvents as possible. Highly refractive polyesters and polyimides are soluble in common organic solvents such as dichloromethane , methanol , hexanes , acetone and toluene . [ 2 ] [ 7 ]
The synthesis route depends on the HRIP type. The Michael polyaddition is used for a polyimide because it can be carried out at room temperature and can be used for step-growth polymerization . This synthesis was first achieved with polyimidothioethers, resulting in optically transparent polymers with a high refractive index. [ 2 ] Polycondensation reactions are also common for making high refractive index polymers, such as polyesters and polyphosphonates. [ 7 ] [ 10 ]
High refractive indices have been achieved either by introducing substituents with high molar refractions (intrinsic HRIPs) or by combining high-n nanoparticles with polymer matrixes (HRIP nanocomposites).
Sulfur -containing substituents including linear thioether and sulfone , cyclic thiophene , thiadiazole and thianthrene are the most commonly used groups for increasing refractive index of a polymer. [ 11 ] [ 12 ] [ 13 ] Polymers with sulfur-rich thianthrene and tetrathiaanthracene moieties exhibit n values above 1.72, depending on the degree of molecular packing.
Halogen elements, especially bromine and iodine , were the earliest components used for developing HRIPs. In 1992, Gaudiana et al. reported a series of polymethylacrylate compounds containing lateral brominated and iodinated carbazole rings. They had refractive indices of 1.67–1.77 depending on the components and numbers of the halogen substituents. [ 14 ] However, recent applications of halogen elements in microelectronics have been severely limited by the WEEE directive and RoHS legislation adopted by the European Union to reduce potential pollution of the environment. [ 15 ]
Phosphorus -containing groups, such as phosphonates and phosphazenes , often exhibit high molar refractivity and optical transmittance in the visible light region. [ 3 ] [ 16 ] [ 17 ] Polyphosphonates have high refractive indices due to the phosphorus moiety even if they have chemical structures analogous to polycarbonates . [ 18 ] Shaver et al. reported a series of polyphosphonates with varying backbones, reaching the highest refractive index reported for polyphosphonates at 1.66. [ 10 ] In addition, polyphosphonates exhibit good thermal stability and optical transparency; they are also suitable for casting into plastic lenses. [ 19 ]
Organometallic components result in HRIPs with good film forming ability and relatively low optical dispersion. Polyferrocenylsilanes [ 20 ] and polyferrocenes containing phosphorus spacers and phenyl side chains show unusually high n values (n=1.74 and n=1.72). [ 21 ] They might be good candidates for all-polymer photonic devices because of their intermediate optical dispersion between organic polymers and inorganic glasses .
Hybrid techniques which combine an organic polymer matrix with highly refractive inorganic nanoparticles could result in high n values. The factors affecting the refractive index of a high-n nanocomposite include the characteristics of the polymer matrix, the nanoparticles, and the hybrid technology between the inorganic and organic components. The refractive index of a nanocomposite can be estimated as $n_{comp} = \Phi_p n_p + \Phi_{org} n_{org}$, where $n_{comp}$, $n_p$ and $n_{org}$ stand for the refractive indices of the nanocomposite, nanoparticle and organic matrix, respectively, and $\Phi_p$ and $\Phi_{org}$ represent the volume fractions of the nanoparticles and organic matrix, respectively. [ 22 ] The nanoparticle load is also important in designing HRIP nanocomposites for optical applications, because excessive concentrations increase the optical loss and decrease the processability of the nanocomposites. The choice of nanoparticles is often influenced by their size and surface characteristics. In order to increase optical transparency and reduce Rayleigh scattering of the nanocomposite, the diameter of the nanoparticle should be below 25 nm. [ 23 ] Direct mixing of nanoparticles with the polymer matrix often results in the undesirable aggregation of nanoparticles – this is avoided by modifying their surface. The most commonly used nanoparticles for HRIPs include TiO 2 ( anatase , n=2.45; rutile , n=2.70), [ 24 ] ZrO 2 (n=2.10), [ 25 ] amorphous silicon (n=4.23), PbS (n=4.20) [ 26 ] and ZnS (n=2.36). [ 27 ] Polyimides have high refractive indices and thus are often used as the matrix for high-n nanoparticles. The resulting nanocomposites exhibit a tunable refractive index ranging from 1.57 to 1.99. [ 28 ]
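As a quick illustration of the mixing rule, the sketch below plugs in index values quoted above; the 30% particle loading is an assumed figure for the example:

```python
def composite_index(n_particle: float, n_matrix: float,
                    phi_particle: float) -> float:
    """Volume-fraction-weighted estimate of a nanocomposite refractive index."""
    return phi_particle * n_particle + (1.0 - phi_particle) * n_matrix

# Example: 30 vol% rutile TiO2 (n = 2.70) in a polyimide matrix (n ~ 1.70)
print(composite_index(2.70, 1.70, 0.30))  # ~2.00
```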
A microlens array is a key component of optoelectronics, optical communications, CMOS image sensors and displays . Polymer-based microlenses are easier to make and are more flexible than conventional glass-based lenses. The resulting devices use less power, are smaller in size and are cheaper to produce. [ 1 ]
Another application of HRIPs is in immersion lithography . Introduced as a new circuit manufacturing technique around 2009, it uses both photoresists and high refractive index fluids. The photoresist needs an n value greater than 1.90. It has been shown that non-aromatic, sulfur-containing HRIPs are the best materials for an optical photoresist system. [ 1 ]
Light-emitting diodes (LEDs) are a common solid-state light source. High-brightness LEDs (HBLEDs) are often limited by the relatively low light extraction efficiency due to the mismatch of the refractive indices between the LED material ( GaN , n=2.5) and the organic encapsulant ( epoxy or silicone, n=1.5). Higher light outputs can be achieved by using an HRIP as the encapsulant. [ 29 ] | https://en.wikipedia.org/wiki/High-refractive-index_polymer |
High Resolution Melt ( HRM ) analysis is a powerful technique in molecular biology for the detection of mutations , polymorphisms and epigenetic differences in double-stranded DNA samples. It was discovered and developed by Idaho Technology and the University of Utah. [ 1 ] It has advantages over other genotyping technologies, namely:
HRM analysis is performed on double stranded DNA samples. Typically the user will use polymerase chain reaction (PCR) prior to HRM analysis to amplify the DNA region in which their mutation of interest lies. In the sample tube there are now many copies of the DNA region of interest. This region that is amplified is known as the amplicon.
After the PCR process the HRM analysis begins. The process is simply a precise warming of the amplicon DNA from around 50 °C up to around 95 °C. At some point during this process, the melting temperature of the amplicon is reached and the two strands of DNA separate or "melt" apart.
The key to HRM is to monitor this separation of strands in real-time. This is achieved by using a fluorescent dye. The dyes that are used for HRM are known as intercalating dyes and have a unique property. They bind specifically to double-stranded DNA and when they are bound they fluoresce brightly. In the absence of double stranded DNA they have nothing to bind to and they only fluoresce at a low level.
At the beginning of the HRM analysis there is a high level of fluorescence in the sample because of the billions of copies of the amplicon. As the sample is heated and the two strands of the DNA melt apart, the amount of double-stranded DNA decreases and the fluorescence falls. The HRM machine follows this process by measuring the fluorescence, and plots the data as a graph known as a melt curve, showing the level of fluorescence versus temperature.
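A melt curve is usually read off via its negative first derivative, −dF/dT, whose peak marks the apparent melting temperature. The sketch below shows that post-processing step; the fluorescence trace is an idealized placeholder, not real instrument output:

```python
import numpy as np

# Placeholder data: a 50-95 deg C ramp and an idealized sigmoidal melt
temps = np.linspace(50.0, 95.0, 451)          # deg C, 0.1 deg steps
fluor = 1.0 / (1.0 + np.exp(temps - 84.0))    # fluorescence, melting near 84 deg C

melt_rate = -np.gradient(fluor, temps)        # -dF/dT
tm = temps[np.argmax(melt_rate)]              # derivative peak = apparent Tm
print(f"Apparent Tm: {tm:.1f} deg C")         # ~84.0
```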
The melting temperature of the amplicon, at which the two DNA strands come apart, is predictable: it depends on the sequence of the DNA bases. Samples from two different people that are identical across the amplified region should give exactly the same shaped melt curve. However, if one person has a mutation in the amplified DNA region, this will alter the temperature at which the DNA strands melt apart, so the two melt curves appear different. The difference may be tiny, perhaps a fraction of a degree, but because the HRM machine monitors this process in "high resolution", it is possible to document these changes accurately and therefore to identify whether a mutation is present.
Things become slightly more complicated because organisms carry two (or more) copies of each gene, known as alleles . If a sample taken from a patient is amplified using PCR, both copies of the DNA region of interest (alleles) are amplified, so when looking for a mutation there are three possibilities:
These three scenarios are known as "wild type", "heterozygote" and "homozygote" respectively. Each gives a melt curve that is slightly different, and with a high quality HRM assay it is possible to distinguish between all three.
Homozygous allelic variants are characterised by a temperature shift in the melt curve produced by HRM analysis, whereas heterozygotes are characterised by a change in melt curve shape. This is due to base-pair mismatching generated by destabilised heteroduplex annealing between wild-type and variant strands. These differences are easily seen on the resulting melt curve, and the melt profile differences between genotypes can be enhanced visually by generating a difference curve. [ 2 ]
Conventional SNP typing methods are typically time-consuming and expensive, requiring several probe-based assays to be multiplexed together or the use of DNA microarrays. HRM is more cost-effective and reduces the need to design multiple pairs of primers and to purchase expensive probes. The HRM method has been successfully used to detect a single G to A substitution in the gene Vssc (Voltage Sensitive Sodium Channel), which confers resistance to the acaricide permethrin in scabies mites. This mutation results in a coding change in the protein (G1535D). Analysis by HRM of scabies mites collected from suspected permethrin-susceptible and permethrin-tolerant populations showed distinct melting profiles. The amplicons from the sensitive mites had a higher melting temperature than those from the tolerant mites, as expected from the higher thermostability of the G:C base pair. [ 3 ]
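The direction of such a shift can be rationalized with a common rough formula for amplicon melting temperature. The approximation and the example sequence below are illustrative assumptions, not data from the study:

```python
def approx_tm(seq: str) -> float:
    """Rough amplicon Tm estimate: Tm = 81.5 + 0.41*(%GC) - 675/N."""
    n = len(seq)
    gc_percent = 100.0 * (seq.count("G") + seq.count("C")) / n
    return 81.5 + 0.41 * gc_percent - 675.0 / n

wild_type = "ATGGCTGGTCGCTAAGGCATCCGATGGCTA"     # hypothetical amplicon
mutant = wild_type.replace("G", "A", 1)          # single G->A substitution
print(approx_tm(wild_type) - approx_tm(mutant))  # positive: losing a G:C pair lowers Tm
```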
In a field more relevant to clinical diagnostics, HRM has been shown to be suitable in principle for the detection of mutations in the breast cancer susceptibility genes BRCA1 and BRCA2. More than 400 mutations have been identified in these genes. Sequencing of genes is the gold standard for identifying mutations, but it is time-consuming and labour-intensive, and is often preceded by techniques used to identify heteroduplex DNA, which add further time and labour. HRM offers a faster and more convenient closed-tube method of assessing the presence of mutations and gives a result which can be further investigated if it is of interest. In a study carried out by Scott et al. in 2006, [ 4 ] three cell lines harbouring different BRCA mutations were used to assess the HRM methodology. It was found that the melting profiles of the resulting PCR products could be used to distinguish the presence or absence of a mutation in the amplicon. Similarly in 2007, Krypuy et al. [ 5 ] showed that careful design of HRM assays (with regard to primer placement) could successfully detect mutations in the TP53 gene, which encodes the tumour suppressor protein p53, in clinical samples of breast and ovarian cancer. Both studies highlighted the fact that changes in the melting profile can take the form of a shift in the melting temperature or an obvious difference in the shape of the melt curve; both of these parameters are a function of the amplicon sequence.
The consensus is that HRM is a cost efficient method that can be employed as an initial screen for samples suspected of harbouring polymorphisms or mutations. This would reduce the number of samples which need to be investigated further using more conventional methods.
Currently there are many methods used to determine the zygosity status of a gene at a particular locus. These methods include the use of PCR with specifically designed probes to detect gene variants (SNP typing is the simplest case). In cases where longer stretches of variation are implicated, post-PCR analysis of the amplicons may be required; changes in enzyme restriction, electrophoretic and chromatographic profiles can be measured. These methods are usually more time-consuming and increase the risk of amplicon contamination in the laboratory, because of the need to work with high concentrations of amplicons post-PCR. The use of HRM reduces the analysis time and the risk of contamination. HRM is a more cost-effective solution, and the high resolution element not only allows the determination of homo- and heterozygosity, but also yields information about the type of homo- or heterozygosity, with different gene variants giving rise to different melt curve shapes. A study by Gundry et al. in 2003 [ 6 ] showed fluorescent labelling of one primer (in the pair) to be favourable over using an intercalating dye such as SYBR Green I . However, progress has been made in the development and use of improved intercalating dyes [ 7 ] which reduce the issue of PCR inhibition and concerns over non-saturating intercalation of the dye.
The HRM methodology has also been exploited to provide a reliable analysis of the methylation status of DNA. This is of significance since changes to the methylation status of tumour suppressor genes, genes that regulate apoptosis and DNA repair, are characteristics of cancers and also have implications for responses to chemotherapy. For example, cancer patients can be more sensitive to treatment with DNA alkylating agents if the promoter of the DNA repair gene MGMT of the patient is methylated. In a study which tested the methylation status of the MGMT promoter on 19 colorectal samples, 8 samples were found to be methylated. [ 8 ] Another study compared the predictive power of MGMT promoter methylation in 83 high grade glioma patients obtained by either MSP , pyrosequencing , and HRM. The HRM method was found to be at least equivalent to pyrosequencing in quantifying the methylation level. [ 9 ]
Methylated DNA can be treated by bisulphite modification, which converts non-methylated cytosines to uracil . Therefore, PCR products resulting from a template that was originally unmethylated will have a lower melting point than those derived from a methylated template. HRM also offers the possibility of determining the proportion of methylation in a given sample by comparing it to a standard curve generated by mixing different ratios of methylated and non-methylated DNA. This can give information on the degree of methylation in a tumour, and thus an indication of the character of the tumour and how far it deviates from what is "normal".
HRM also is practically advantageous for use in diagnostics, due to its capacity to be adapted to high throughput screening testing, and again it minimises the possibility of amplicon spread and contamination within a laboratory, owing to its closed-tube format.
To follow the transition of dsDNA (double-stranded) to ssDNA (single-stranded), intercalating dyes are employed. These dyes show differential fluorescence emission dependent on their association with double-stranded or single-stranded DNA. SYBR Green I is a first generation dye for HRM. It fluoresces when intercalated into dsDNA and not ssDNA. Because it may inhibit PCR at high concentrations, it is used at sub-saturating concentrations. Recently, some researchers have discouraged the use of SYBR Green I for HRM, [ 10 ] claiming that substantial protocol modifications are required. This is because it is suggested that the lack of accuracy may result from "dye jumping", where dye from a melted duplex may get reincorporated into regions of dsDNA which had not yet melted. [ 6 ] [ 10 ] New saturating dyes such as LC Green and LC Green Plus, ResoLight, EvaGreen, Chromofy and SYTO 9 are available on the market and have been used successfully for HRM. However, some groups have successfully used SYBR Green I for HRM with the Corbett Rotorgene instruments [ 11 ] and advocate the use of SYBR Green I for HRM applications.
High resolution melting assays typically involve qPCR amplification followed by a melting curve collected using a fluorescent dye. Due to the sensitivity of high-resolution melting analysis, it is necessary to carefully consider PCR cycling conditions, template DNA quality, and melting curve parameters. [ 12 ] For accurate and repeatable results, PCR thermal cycling conditions must be optimized to ensure that the desired DNA region is amplified with high specificity and minimal bias between sequence variants. The melting curve is typically performed across a broad range of temperatures in small (~0.3 °C) increments, each held long enough (~10 seconds) for the DNA to reach equilibrium at each temperature step.
In addition to typical primer design considerations, the design of primers for high-resolution melting assays involves maximizing the thermodynamic differences between PCR products belonging to different genotypes. Smaller amplicons generally yield greater melting temperature variation than longer amplicons, but the variability cannot be predicted by eye. For this reason, it is critical to accurately predict the melting curve of PCR products when designing primers intended to distinguish sequence variants. Specialty software, such as uMelt [ 13 ] and DesignSignatures , [ 14 ] is available to help design primers that maximize melting curve variability specifically for high-resolution melting assays. | https://en.wikipedia.org/wiki/High-resolution_melting_analysis
High-temperature corrosion is a mechanism of corrosion that takes place when gas turbines , diesel engines , furnaces or other machinery come in contact with hot gas containing certain contaminants. Fuel sometimes contains vanadium compounds or sulfates, which can form low melting point compounds during combustion. These molten salts are strongly corrosive to stainless steel and other alloys that are normally resistant to corrosion at high temperatures. Other types of high-temperature corrosion include high-temperature oxidation , [ 1 ] sulfidation, and carbonization . High temperature oxidation and other corrosion types are commonly modeled using the Deal-Grove model to account for diffusion and reaction dynamics.
Two types of sulfate -induced hot corrosion are generally distinguished: Type I takes place above the melting point of sodium sulfate , whereas Type II occurs below the melting point of sodium sulfate but in the presence of small amounts of SO 3 . [ 2 ] [ 3 ]
In Type I, the protective oxide scale is dissolved by the molten salt. Sulfur is released from the salt and diffuses into the metal substrate, forming grey- or blue-colored aluminum or chromium sulfides. With the aluminum or chromium thus sequestered, the steel cannot rebuild a protective oxide layer after the salt layer has been removed. Alkali sulfates are formed from sulfur trioxide and sodium-containing compounds. As the formation of vanadates is preferred, sulfates form only if the amount of alkali metals exceeds the corresponding amount of vanadium. [ 3 ]
The same kind of attack has been observed for potassium sulfate and magnesium sulfate .
Vanadium is present in petroleum , especially from Canada , the western United States , Venezuela and the Caribbean region, often bound to porphyrine in organometallic complexes. [ 4 ] These complexes become concentrated in the higher-boiling fractions, which then form the base of heavy residual fuel oils . Residues of sodium, primarily from sodium chloride and spent oil treatment chemicals, are also present in this petroleum fraction. Combusting fuel containing more than 100 ppm of sodium and vanadium will yield ash capable of causing fuel ash corrosion . [ 4 ]
Most fuels contain small traces of vanadium . The vanadium is oxidized to different vanadates . Molten vanadates present as deposits on metal can flux oxide scales and passivation layers . Furthermore, the presence of vanadium accelerates the diffusion of oxygen through the fused salt layer to the metal substrate. Vanadates can be present in semiconducting or ionic form, where the semiconducting form has significantly higher corrosivity as the oxygen is transported via oxygen vacancies . The ionic form, in contrast, transports oxygen by diffusion of the entire vanadate, which is significantly slower. The semiconducting form is rich in vanadium pentoxide. [ 3 ] [ 5 ]
At high temperatures, or when there is a lower availability of oxygen, the refractory oxides vanadium dioxide and vanadium trioxide form. These more reduced forms of vanadium do not promote corrosion. However, at the conditions most common during burning, vanadium pentoxide is formed. Together with sodium oxide , vanadates of various composition ratios are formed. Vanadates of composition approximating Na 2 O.6 V 2 O 5 have the highest corrosion rates at temperatures between 593 °C and 816 °C; at lower temperatures the vanadate is in the solid state, and at higher temperatures vanadates with a higher proportion of vanadium contribute the most to the corrosion rate. [ 5 ] [ 3 ]
The solubility of the passivation layer oxides in the molten vanadates depends on the composition of the oxide layer. Iron(III) oxide is readily soluble in vanadates between Na 2 O.6 V 2 O 5 and 6 Na 2 O.V 2 O 5 at temperatures below 705 °C, in amounts up to the mass of the vanadate itself. This composition range is common for ashes, which aggravates the problem. Chromium(III) oxide , nickel(II) oxide , and cobalt(II) oxide are less soluble in vanadates; they convert the vanadates to the less corrosive ionic form, and their vanadates are tightly adherent, refractory, and act as oxygen barriers. [ 5 ] [ 3 ]
The rate of corrosion caused by vanadates can be lowered by reducing the amount of excess air available for combustion to preferentially form the refractory oxides, using refractory coatings on the exposed surfaces, or using high-chromium alloys, such as 50% Ni/50% Cr or 40% Ni/60% Cr. [ 6 ]
The presence of sodium in a sodium-to-vanadium ratio of 1:3 gives the lowest melting point and must be avoided. This melting point of 535 °C can cause problems at hot spots of the engine such as piston crowns , valve seats , and turbochargers . [ 5 ] [ 3 ]
Lead can form a low-melting slag capable of fluxing protective oxide scales. [ 7 ] [ 8 ] Lead is more often known for causing stress corrosion cracking in common materials that are exposed to molten lead. The cracking tendency of lead has been known for some time, since most iron based alloys, including those used in steel containers and vessels for molten lead baths, usually fail due to cracking. [ 9 ] | https://en.wikipedia.org/wiki/High-temperature_corrosion |
High-temperature oxidation refers to a scale-forming oxidation process involving a metallic object and atmospheric oxygen that produces corrosion at elevated temperatures. [ 1 ] [ 2 ] [ 3 ]
High-temperature oxidation is a kind of high-temperature corrosion . Other kinds of high-temperature corrosion include high-temperature sulfidation and carbonization. [ 4 ] [ 5 ] High temperature oxidation and other corrosion types are commonly modelled using the Deal-Grove model to account for diffusion and reaction processes.
High-temperature oxidation generally occurs via the following chemical reaction between oxygen (O 2 ) and a metal M: [ 2 ]

$$n\,\mathrm{M} + \tfrac{k}{2}\,\mathrm{O}_2 \rightarrow \mathrm{M}_n\mathrm{O}_k$$
According to Wagner's theory of oxidation, oxidation rate is controlled by partial ionic and electronic conductivities of oxides and their dependence on the chemical potential of the metal or oxygen in the oxide. [ 6 ]
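To make the diffusion/reaction picture concrete, here is a minimal sketch of Deal-Grove-style oxide growth; the rate constants are placeholders, not measured values for any particular metal or atmosphere:

```python
import math

def oxide_thickness(t: float, A: float, B: float, tau: float = 0.0) -> float:
    """Deal-Grove oxide thickness x(t), solving x^2 + A*x = B*(t + tau).

    B is the parabolic rate constant (diffusion-limited regime);
    B/A is the linear rate constant (reaction-limited regime).
    """
    return (A / 2.0) * (math.sqrt(1.0 + 4.0 * B * (t + tau) / A**2) - 1.0)

# Placeholder constants: A in um, B in um^2/h, t in hours
for t in (0.1, 1.0, 10.0):
    print(t, round(oxide_thickness(t, A=0.165, B=0.0117), 4))
```

At short times growth is roughly linear in t (reaction-limited); at long times it approaches the parabolic law x ≈ sqrt(B·t) (diffusion-limited), which is the regime Wagner's theory addresses.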
| https://en.wikipedia.org/wiki/High-temperature_oxidation
High-temperature superconductivity ( high- T c or HTS ) is superconductivity in materials with a critical temperature (the temperature below which the material behaves as a superconductor) above 77 K (−196.2 °C; −321.1 °F), the boiling point of liquid nitrogen . [ 1 ] They are "high-temperature" only relative to previously known superconductors, which function only closer to absolute zero. The first high-temperature superconductor was discovered in 1986 by IBM researchers Georg Bednorz and K. Alex Müller . [ 2 ] [ 3 ] Although the critical temperature is around 35.1 K (−238.1 °C; −396.5 °F), this material was modified by Ching-Wu Chu to make the first high-temperature superconductor with critical temperature 93 K (−180.2 °C; −292.3 °F). [ 4 ] Bednorz and Müller were awarded the Nobel Prize in Physics in 1987 "for their important break-through in the discovery of superconductivity in ceramic materials". [ 5 ] Most high- T c materials are type-II superconductors .
The major advantage of high-temperature superconductors is that they can be cooled using liquid nitrogen, [ 2 ] in contrast to previously known superconductors, which require expensive and hard-to-handle coolants, primarily liquid helium . A second advantage of high- T c materials is they retain their superconductivity in higher magnetic fields than previous materials. This is important when constructing superconducting magnets , a primary application of high- T c materials.
The majority of high-temperature superconductors are ceramics , rather than the previously known metallic materials. Ceramic superconductors are suitable for some practical uses but encounter manufacturing issues. For example, most ceramics are brittle , which complicates wire fabrication. [ 6 ]
The main class of high-temperature superconductors is copper oxides combined with other metals, especially the rare-earth barium copper oxides (REBCOs) such as yttrium barium copper oxide (YBCO). The second class of high-temperature superconductors in the practical classification is the iron-based compounds . [ 7 ] [ 8 ] Magnesium diboride is sometimes included in high-temperature superconductors: It is relatively simple to manufacture, but it superconducts only below 39 K (−234.2 °C), which makes it unsuitable for liquid nitrogen cooling.
Superconductivity was discovered by Kamerlingh Onnes in 1911, in a metal solid. Ever since, researchers have attempted to create superconductivity at higher temperatures [ 9 ] with the goal of finding a room-temperature superconductor . [ 10 ] By the late 1970s, superconductivity was observed in several metallic compounds (in particular Nb -based, such as NbTi , Nb 3 Sn , and Nb 3 Ge ) at temperatures that were much higher than those for elemental metals and which could even exceed 20 K (−253.2 °C).
In 1986, at the IBM research lab near Zürich in Switzerland, Bednorz and Müller were looking for superconductivity in a new class of ceramics : the copper oxides, or cuprates . In that year, Bednorz and Müller discovered superconductivity in lanthanum barium copper oxide (LBCO), a lanthanum -based cuprate perovskite material, which had a transition temperature of 35 K (Nobel Prize in Physics, 1987). [ 11 ] It was soon found that replacing the lanthanum with yttrium (i.e., making YBCO) raised the critical temperature above 90 K. [ 12 ] Their results were soon confirmed [ 13 ] by many groups. [ 14 ]
In 1987, Philip W. Anderson gave the first theoretical description of these materials, based on the resonating valence bond (RVB) theory , [ 15 ] but a full understanding of these materials is still developing today. These superconductors are now known to possess a d -wave pair symmetry. The first proposal that high-temperature cuprate superconductivity involves d -wave pairing was made in 1987 by N. E. Bickers, Douglas James Scalapino and R. T. Scalettar, [ 16 ] followed by three subsequent theories in 1988 by Masahiko Inui, Sebastian Doniach, Peter J. Hirschfeld and Andrei E. Ruckenstein, [ 17 ] using spin-fluctuation theory, and by Claudius Gros , Didier Poilblanc, T. Maurice Rice and F. C. Zhang, [ 18 ] and by Gabriel Kotliar and Jialin Liu, identifying d -wave pairing as a natural consequence of the RVB theory. [ 19 ] The confirmation of the d -wave nature of the cuprate superconductors was made by a variety of experiments, including the direct observation of the d -wave nodes in the excitation spectrum through angle resolved photoemission spectroscopy (ARPES), the observation of a half-integer flux in tunneling experiments, and indirectly from the temperature dependence of the penetration depth, specific heat and thermal conductivity.
Until 2001 the cuprates were thought to be the only true high-temperature superconductors. In that year MgB 2 , with a T c of 39 K, was discovered by Akimitsu and colleagues. This was followed in 2006 by Hosono and coworkers with an iron-based layered oxypnictide compound with a T c of 56 K. [ 20 ] These temperatures are below those of the cuprates but well above those of conventional superconductors. [ 21 ]
In 2014, evidence that fractional particles can occur in quasi two-dimensional magnetic materials was reported by École Polytechnique Fédérale de Lausanne (EPFL) scientists, [ 22 ] lending support to Anderson's theory of high-temperature superconductivity. [ 23 ] In 2014 and 2015, hydrogen sulfide ( H 2 S ) at extremely high pressures (around 150 gigapascals) was first predicted and then confirmed to be a high-temperature superconductor with a transition temperature of 80 K. [ 24 ] [ 25 ] [ 26 ]
In 2018, a research team from the Department of Physics, Massachusetts Institute of Technology , discovered superconductivity in bilayer graphene with one layer twisted at an angle of approximately 1.1 degrees, after cooling and applying a small electric charge. Although the experiments were not carried out in a high-temperature environment, the results resemble those of high-temperature rather than classical superconductors, given that no foreign atoms need to be introduced. [ 27 ] The superconductivity effect came about as a result of electrons twisted into a vortex between the graphene layers, called " skyrmions ". These act as a single particle and can pair up across the graphene's layers, leading to the basic conditions required for superconductivity. [ 28 ]
In 2019 it was discovered that lanthanum hydride ( LaH 10 ) becomes a superconductor at 250 K under a pressure of 170 gigapascals. [ 29 ] [ 26 ]
In 2020, a room-temperature superconductor (critical temperature 288 K) made from hydrogen, carbon and sulfur under pressures of around 270 gigapascals was described in a paper in Nature . [ 30 ] [ 31 ] However, in 2022 the article was retracted by the editors because the validity of background subtraction procedures had been called into question. All nine authors maintain that the raw data strongly support the main claims of the paper. [ 32 ]
In 2023 a study reported superconductivity at room temperature and ambient pressure in highly oriented pyrolytic graphite with dense arrays of nearly parallel line defects. [ 33 ]
As of 2021, [ 34 ] the superconductor with the highest transition temperature at ambient pressure was the cuprate of mercury, barium, and calcium, at around 133 K (−140 °C). [ 35 ] Other superconductors have higher recorded transition temperatures – for example lanthanum superhydride at 250 K (−23 °C), but these only occur at high pressure. [ 36 ]
The "high-temperature" superconductor class has had many definitions.
The label high- T c should be reserved for materials with critical temperatures greater than the boiling point of liquid nitrogen . However, a number of materials – including the original discovery and recently discovered pnictide superconductors – have critical temperatures below 77 K (−196.2 °C) but nonetheless are commonly referred to in publications as high- T c class. [ 43 ] [ 44 ]
A substance with a critical temperature above the boiling point of liquid nitrogen, together with a high critical magnetic field and critical current density (above which superconductivity is destroyed), would greatly benefit technological applications. In magnet applications, the high critical magnetic field may prove more valuable than the high T c itself. Some cuprates have an upper critical field of about 100 tesla. However, cuprate materials are brittle ceramics that are expensive to manufacture and not easily turned into wires or other useful shapes. Furthermore, high-temperature superconductors do not form large, continuous superconducting domains, rather clusters of microdomains within which superconductivity occurs. They are therefore unsuitable for applications requiring actual superconductive currents, such as magnets for magnetic resonance spectrometers. [ 45 ] For a solution to this (powders), see HTS wire .
There has been considerable debate regarding high-temperature superconductivity coexisting with magnetic ordering in YBCO, [ 46 ] iron-based superconductors , several ruthenocuprates and other exotic superconductors, and the search continues for other families of materials. HTS are Type-II superconductors , which allow magnetic fields to penetrate their interior in quantized units of flux, meaning that much higher magnetic fields are required to suppress superconductivity. The layered structure also gives a directional dependence to the magnetic field response.
All known high- T c superconductors are Type-II superconductors. In contrast to Type-I superconductors , which expel all magnetic fields due to the Meissner effect , Type-II superconductors allow magnetic fields to penetrate their interior in quantized units of flux, creating "holes" or "tubes" of normal metallic regions in the superconducting bulk called vortices . Consequently, high- T c superconductors can sustain much higher magnetic fields.
Cuprate superconductors are a family of high-temperature superconducting materials made of layers of copper oxides ( CuO 2 ) alternating with layers of other metal oxides, which act as charge reservoirs. At ambient pressure, cuprate superconductors are the highest temperature superconductors known.
Cuprates have a structure close to that of a two-dimensional material. Their superconducting properties are determined by electrons moving within weakly coupled copper-oxide ( CuO 2 ) layers. Neighbouring layers contain ions such as lanthanum , barium , strontium , or other atoms that act to stabilize the structures and dope electrons or holes onto the copper-oxide layers. The undoped "parent" or "mother" compounds are Mott insulators with long-range antiferromagnetic order at sufficiently low temperatures. Single band models are generally considered to be enough to describe the electronic properties.
The cuprate superconductors adopt a perovskite structure. The copper-oxide planes are checkerboard lattices with squares of O 2− ions with a Cu 2+ ion at the centre of each square. The unit cell is rotated by 45° from these squares. Chemical formulae of superconducting materials contain fractional numbers to describe the doping required for superconductivity.
Several families of cuprate superconductors have been identified. They can be categorized by their elements and the number of adjacent copper-oxide layers in each superconducting block. For example, YBCO and BSCCO can be referred to as Y123 and Bi2201/Bi2212/Bi2223 depending on the number of layers in each superconducting block ( n ). The superconducting transition temperature peaks at an optimal doping value ( p =0.16) and an optimal number of layers in each block, typically three.
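A widely used empirical parametrization of this superconducting "dome" (due to Presland and Tallon; the numerical constant below is their fit to many cuprate families, not a universal law) expresses the transition temperature as a parabola in the hole doping p per copper atom:

```latex
T_c(p) \;\approx\; T_{c,\max}\left[\,1 - 82.6\,(p - 0.16)^2\,\right]
```

which peaks at exactly the optimal doping p = 0.16 quoted above and vanishes near p ≈ 0.05 and p ≈ 0.27.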
Iron-based superconductors contain layers of iron and a pnictogen (such as arsenic or phosphorus ), a chalcogen , or a crystallogen . This is currently the family with the second highest critical temperature, behind the cuprates. Interest in their superconducting properties began in 2006 with the discovery of superconductivity in LaFePO at 4 K (−269.15 °C) [ 49 ] and gained much greater attention in 2008 after the analogous material LaFeAs(O,F) [ 50 ] was found to superconduct at up to 43 K (−230.2 °C) under pressure. [ 51 ] The highest critical temperatures in the iron-based superconductor family exist in thin films of FeSe, [ 52 ] [ 53 ] [ 54 ] where a critical temperature in excess of 100 K (−173 °C) was reported in 2014. [ 55 ]
Since the original discoveries, several families of iron-based superconductors have emerged.
Most undoped iron-based superconductors show a tetragonal-orthorhombic structural phase transition followed at lower temperature by magnetic ordering, similar to the cuprate superconductors. [ 66 ] However, they are poor metals rather than Mott insulators and have five bands at the Fermi surface rather than one. [ 48 ] The phase diagram emerging as the iron-arsenide layers are doped is remarkably similar, with the superconducting phase close to or overlapping the magnetic phase. Strong evidence that the T c value varies with the As–Fe–As bond angles has already emerged and shows that the optimal T c value is obtained with undistorted FeAs 4 tetrahedra. [ 67 ] The symmetry of the pairing wavefunction is still widely debated, but an extended s -wave scenario is currently favoured.
Magnesium diboride is occasionally referred to as a high-temperature superconductor [ 68 ] because its T c value of 39 K (−234.2 °C) is above that historically expected for BCS superconductors. However, it is more generally regarded as the highest T c conventional superconductor, the increased T c resulting from two separate bands being present at the Fermi level .
In 1991, Hebard et al. discovered fulleride superconductors, [ 69 ] in which alkali-metal atoms are intercalated between C 60 molecules.
In 2008 Ganin et al. demonstrated superconductivity at temperatures of up to 38 K (−235.2 °C) for Cs 3 C 60 . [ 70 ]
P-doped Graphane was proposed in 2010 to be capable of sustaining high-temperature superconductivity. [ 71 ]
On 31 December 2023, "Global Room-Temperature Superconductivity in Graphite" was published in the journal Advanced Quantum Technologies, claiming to demonstrate superconductivity at room temperature and ambient pressure in highly oriented pyrolytic graphite with dense arrays of nearly parallel line defects. [ 72 ]
In 1999, Anisimov et al. conjectured superconductivity in nickelates, proposing nickel oxides as direct analogs to the cuprate superconductors. [ 73 ] Superconductivity in an infinite-layer nickelate, Nd 0.8 Sr 0.2 NiO 2 , was reported at the end of 2019 with a superconducting transition temperature between 9 and 15 K (−264.15 and −258.15 °C). [ 74 ] [ 75 ] This superconducting phase is observed in oxygen-reduced thin films created by the pulsed laser deposition of Nd 0.8 Sr 0.2 NiO 3 onto SrTiO 3 substrates that is then reduced to Nd 0.8 Sr 0.2 NiO 2 via annealing the thin films at 533–553 K (260–280 °C) in the presence of CaH 2 . [ 76 ] The superconducting phase is only observed in the oxygen-reduced film and is not seen in oxygen-reduced bulk material of the same stoichiometry, suggesting that the strain induced by the oxygen reduction of the Nd 0.8 Sr 0.2 NiO 2 thin film changes the phase space to allow for superconductivity. [ 77 ] It is further important to remove excess hydrogen from the CaH 2 reduction, since otherwise topotactic hydrogen may prevent superconductivity. [ 78 ]
An important practical advantage of high-temperature superconductors is that they can be cooled with liquid nitrogen, which can be produced relatively cheaply, even on-site. The higher temperatures additionally help to avoid some of the problems that arise at liquid helium temperatures, such as the formation of plugs of frozen air that can block cryogenic lines and cause unanticipated and potentially hazardous pressure buildup. [ 79 ] [ 80 ]
The question of how superconductivity arises in high-temperature superconductors is one of the major unsolved problems of theoretical condensed matter physics . The mechanism that causes the electrons in these crystals to form pairs is not known. Despite intensive research and many promising leads, an explanation has so far eluded scientists. One reason for this is that the materials in question are generally very complex, multi-layered crystals (for example, BSCCO ), making theoretical modelling difficult.
Improving the quality and variety of samples is also the subject of considerable research, both with the aim of improved characterisation of the physical properties of existing compounds, and of synthesizing new materials, often with the hope of increasing T c . Technological research focuses on making HTS materials in sufficient quantities to make their use economically viable, [ 81 ] as well as on optimizing their properties in relation to applications . [ 82 ] Metallic hydrogen has been proposed as a room-temperature superconductor, and some experimental observations have detected the occurrence of the Meissner effect . [ 83 ] [ 84 ] LK-99 , copper - doped lead-apatite, has also been proposed as a room-temperature superconductor.
Multiple hypotheses attempt to account for HTS.
Resonating-valence-bond theory, proposed by P. W. Anderson, attributes the pairing to spin-singlet bonds in the doped Mott insulator.
The spin fluctuation hypothesis [ 85 ] proposes that electron pairing in high-temperature superconductors is mediated by short-range spin waves known as paramagnons . [ 86 ] [ 87 ]
Gubser, Hartnoll, Herzog, and Horowitz proposed holographic superconductivity, which uses holographic duality or AdS/CFT correspondence theory as a possible explanation of high-temperature superconductivity in certain materials. [ 88 ]
Weak coupling theory suggests superconductivity emerges from antiferromagnetic spin fluctuations in a doped system. [ 89 ] It predicts that the pairing wave function of cuprate HTS should have a d x 2 -y 2 symmetry. Thus, determining whether the pairing wave function has d -wave symmetry is essential to test the spin fluctuation mechanism. That is, if the HTS order parameter (a pairing wave function as in Ginzburg–Landau theory ) does not have d -wave symmetry, then a pairing mechanism related to spin fluctuations can be ruled out. (Similar arguments can be made for iron-based superconductors but the different material properties allow a different pairing symmetry.)
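For concreteness, the d x 2 -y 2 order parameter discussed here is conventionally written as a momentum-dependent gap (a standard textbook form, with a the in-plane lattice constant, not specific to any one cuprate):

```latex
\Delta(\mathbf{k}) \;=\; \frac{\Delta_0}{2}\left[\cos(k_x a) - \cos(k_y a)\right]
```

This gap changes sign under a 90° rotation and vanishes along the nodal directions k x = ±k y , which is the signature probed by the phase-sensitive junction experiments described below.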
Interlayer coupling theory proposes that a layered structure consisting of BCS-type ( s -wave symmetry) superconductors can explain superconductivity by itself. [ 90 ] By introducing an additional tunnelling interaction between layers, this model explained the anisotropic symmetry of the order parameter as well as the emergence of HTS.
To resolve this question, experiments such as photoemission spectroscopy , NMR , and specific heat measurements have been conducted. The results remain ambiguous, with some reports supporting d symmetry and others supporting s symmetry.
Such explanations assume that superconductive properties can be treated by mean-field theory . They also do not consider that, in addition to the superconductive gap, the pseudogap must be explained. The cuprate layers are insulating, and the superconductors are doped with interlayer impurities to make them metallic.
The transition temperature can be maximized by varying the dopant concentration. The simplest example is La 2 CuO 4 , which consists of alternating CuO 2 and LaO layers that are insulating when pure. When 8% of the La is replaced by Sr, the latter acts as a dopant, contributing holes to the CuO 2 layers, and making the sample metallic. The Sr impurities also act as electronic bridges, enabling interlayer coupling. Proceeding from this picture, some theories argue that the pairing interaction is with phonons , as in conventional superconductors with Cooper pairs . While the undoped materials are antiferromagnetic, even a few percent of impurity dopants introduce a smaller pseudogap in the CuO 2 planes that is also caused by phonons. The gap decreases with increasing charge carriers, and as it nears the superconductive gap, the latter reaches its maximum. The transition temperature is then argued to be due to the percolating behaviour of the carriers, which follow zig-zag percolative paths, largely in metallic domains in the CuO 2 planes, until blocked by charge density wave domain walls , where they use dopant bridges to cross over to a metallic domain of an adjacent CuO 2 plane. The transition temperature maxima are reached when the host lattice has weak bond-bending forces, which produce strong electron–phonon interactions at the interlayer dopants. [ 91 ]
An experiment based on flux quantization of a three-grain ring of YBa 2 Cu 3 O 7 (YBCO) was proposed to test the symmetry of the order parameter in the HTS. The symmetry of the order parameter could best be probed at the junction interface as the Cooper pairs tunnel across a Josephson junction or weak link. [ 92 ] It was expected that a half-integer flux, that is, a spontaneous magnetization could only occur for a junction of d symmetry superconductors. But, even if the junction experiment is the strongest method to determine the symmetry of the HTS order parameter, the results have been ambiguous. John R. Kirtley and C. C. Tsuei thought that the ambiguous results came from the defects inside the HTS, leading them to an experiment where both clean limit (no defects) and dirty limit (maximal defects) were considered simultaneously. [ 93 ] Spontaneous magnetization was clearly observed in YBCO, which supported the d symmetry of the order parameter in YBCO. But, since YBCO is orthorhombic , it might inherently have an admixture of s symmetry. By tuning their technique, they found an admixture of s symmetry in YBCO within about 3%. [ 94 ] Also, they found a pure d x 2 −y 2 order parameter symmetry in tetragonal Tl 2 Ba 2 CuO 6 . [ 95 ]
The lack of exact theoretical computations on such strongly interacting electron systems has complicated attempts to validate the spin-fluctuation hypothesis. However, most theoretical calculations, including phenomenological and diagrammatic approaches, converge on magnetic fluctuations as the pairing mechanism.
In a superconductor, the flow of electrons cannot be resolved into individual electrons, but instead consists of pairs of bound electrons, called Cooper pairs. In conventional superconductors, these pairs are formed when an electron moving through the material distorts the surrounding crystal lattice, which attracts another electron and forms a bound pair. This is sometimes called the "water bed" effect. Each Cooper pair requires a certain minimum energy to be displaced, and if the thermal fluctuations in the crystal lattice are smaller than this energy the pair can flow without dissipating energy. Electron flow without resistance is superconductivity.
In a high- T c superconductor, the mechanism is extremely similar to a conventional superconductor, except that phonons play virtually no role, replaced by spin-density waves. Just as all known conventional superconductors are strong phonon systems, all known high- T c superconductors are strong spin-density wave systems, within close vicinity of a magnetic transition to, for example, an antiferromagnet. When an electron moves in a high- T c superconductor, its spin creates a spin-density wave around it. This spin-density wave in turn causes a nearby electron to fall into the spin depression created by the first electron (water-bed). When the system temperature is lowered, more spin density waves and Cooper pairs are created, eventually leading to superconductivity. High- T c systems are magnetic systems due to the Coulomb interaction, creating a strong Coulomb repulsion between electrons. This repulsion prevents pairing of the Cooper pairs on the same lattice site. Instead, pairing occurs at near-neighbor lattice sites. This is the so-called d -wave pairing, where the pairing state has a node (zero) at the origin.
Examples of high- T c cuprate superconductors include YBCO and BSCCO , which are the most known materials that achieve superconductivity above the boiling point of liquid nitrogen. | https://en.wikipedia.org/wiki/High-temperature_superconductivity |
High-throughput screening ( HTS ) is a method for scientific discovery especially used in drug discovery and relevant to the fields of biology , materials science [ 1 ] and chemistry . [ 2 ] [ 3 ] Using robotics , data processing/control software, liquid handling devices, and sensitive detectors, high-throughput screening allows a researcher to quickly conduct millions of chemical, genetic, or pharmacological tests. Through this process one can quickly recognize active compounds, antibodies, or genes that modulate a particular biomolecular pathway. The results of these experiments provide starting points for drug design and for understanding the noninteraction or role of a particular location.
The key labware or testing vessel of HTS is the microtiter plate , which is a small container, usually disposable and made of plastic, that features a grid of small, open divots called wells . In general, microplates for HTS have 96, 192, 384, 1536, 3456 or 6144 wells. These are all multiples of 96, reflecting the original 96-well microplate with its 8 x 12 grid of wells at 9 mm spacing. Most of the wells contain test items, depending on the nature of the experiment. These could be different chemical compounds dissolved e.g. in an aqueous solution of dimethyl sulfoxide (DMSO). The wells could also contain cells or enzymes of some type. (The other wells may be empty or contain pure solvent or untreated samples, intended for use as experimental controls .)
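As a concrete illustration of this grid layout, the sketch below converts between alphanumeric well labels (rows A–H, columns 1–12 on a 96-well plate) and 0-based row-major indices, the kind of bookkeeping any plate-handling script needs. The helper names are ours for illustration, not from any particular library:

```python
import string

def well_to_index(label: str, n_cols: int = 12) -> int:
    """Convert a well label like 'B7' to a 0-based row-major index."""
    row = string.ascii_uppercase.index(label[0].upper())  # 'A' -> 0
    col = int(label[1:]) - 1                              # '7' -> 6
    return row * n_cols + col

def index_to_well(i: int, n_cols: int = 12) -> str:
    """Inverse mapping: 0-based index back to a label like 'B7'."""
    row, col = divmod(i, n_cols)
    return f"{string.ascii_uppercase[row]}{col + 1}"

assert well_to_index("A1") == 0
assert index_to_well(95) == "H12"   # last well of an 8 x 12, 96-well plate
```

Denser formats keep the same convention with more rows and columns (for example, a 384-well plate is 16 x 24 at 4.5 mm spacing).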
A screening facility typically holds a library of stock plates , whose contents are carefully catalogued, and each of which may have been created by the lab or obtained from a commercial source. These stock plates themselves are not directly used in experiments; instead, separate assay plates are created as needed. An assay plate is simply a copy of a stock plate, created by pipetting a small amount of liquid (often measured in nanoliters ) from the wells of a stock plate to the corresponding wells of a completely empty plate.
To prepare for an assay , the researcher fills each well of the plate with some biological entity that they wish to conduct the experiment upon, such as a protein , cells , or an animal embryo . After some incubation time has passed to allow the biological matter to absorb, bind to, or otherwise react (or fail to react) with the compounds in the wells, measurements are taken across all the plate's wells, either manually or by a machine. Manual measurements are often necessary when the researcher is using microscopy to (for example) seek changes or defects in embryonic development caused by the wells' compounds, looking for effects that a computer could not easily determine by itself. Otherwise, a specialized automated analysis machine can run a number of experiments on the wells (such as shining polarized light on them and measuring reflectivity, which can be an indication of protein binding). In this case, the machine outputs the result of each experiment as a grid of numeric values, with each number mapping to the value obtained from a single well. A high-capacity analysis machine can measure dozens of plates in the space of a few minutes like this, generating thousands of experimental datapoints very quickly.
Depending on the results of this first assay, the researcher can perform follow up assays within the same screen by "cherrypicking" liquid from the source wells that gave interesting results (known as "hits") into new assay plates, and then re-running the experiment to collect further data on this narrowed set, confirming and refining observations.
Automation is an essential element in HTS's usefulness. Typically, an integrated robot system consisting of one or more robots transports assay-microplates from station to station for sample and reagent addition, mixing, incubation, and finally readout or detection. An HTS system can usually prepare, incubate, and analyze many plates simultaneously, further speeding the data-collection process. HTS robots that can test up to 100,000 compounds per day currently exist. [ 4 ] [ 5 ] Automatic colony pickers pick thousands of microbial colonies for high throughput genetic screening. [ 6 ] The term uHTS or ultra-high-throughput screening refers (circa 2008) to screening in excess of 100,000 compounds per day. [ 7 ]
With the ability to rapidly screen diverse compounds (such as small molecules or siRNAs ) to identify active compounds, HTS has led to an explosion in the rate of data generated in recent years. [ 8 ] Consequently, one of the most fundamental challenges in HTS experiments is to glean biochemical significance from mounds of data, which relies on the development and adoption of appropriate experimental designs and analytic methods for both quality control and hit selection. [ 9 ] HTS research is one of the fields that have a feature described by John Blume, Chief Science Officer for Applied Proteomics, Inc., as follows: soon, if a scientist does not understand some statistics or rudimentary data-handling technologies, he or she may not be considered to be a true molecular biologist and, thus, will simply become "a dinosaur." [ 10 ]
High-quality HTS assays are critical in HTS experiments. The development of high-quality HTS assays requires the integration of both experimental and computational approaches for quality control (QC). Three important means of QC are (i) good plate design, (ii) the selection of effective positive and negative chemical/biological controls, and (iii) the development of effective QC metrics to measure the degree of differentiation so that assays with inferior data quality can be identified. [ 11 ] A good plate design helps to identify systematic errors (especially those linked with well position) and determine what normalization should be used to remove/reduce the impact of systematic errors on both QC and hit selection. [ 9 ]
Effective analytic QC methods serve as a gatekeeper for excellent quality assays. In a typical HTS experiment, a clear distinction between a positive control and a negative reference such as a negative control is an index for good quality. Many quality-assessment measures have been proposed to measure the degree of differentiation between a positive control and a negative reference. Signal-to-background ratio, signal-to-noise ratio, signal window, assay variability ratio, and Z-factor have been adopted to evaluate data quality. [ 9 ] [ 12 ] Strictly standardized mean difference ( SSMD ) has recently been proposed for assessing data quality in HTS assays. [ 13 ] [ 14 ]
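A minimal sketch of two of these metrics follows, using their standard definitions (the Z'-factor of Zhang et al., and the control-based SSMD under an independence assumption). The function names and simulated control values are illustrative only:

```python
import numpy as np

def z_factor(pos, neg):
    """Z'-factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Values above ~0.5 are usually taken to indicate an excellent assay."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

def ssmd(pos, neg):
    """Strictly standardized mean difference between the control groups,
    assuming independence: (mean_pos - mean_neg) / sqrt(var_pos + var_neg)."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    return (pos.mean() - neg.mean()) / np.sqrt(pos.var(ddof=1) + neg.var(ddof=1))

rng = np.random.default_rng(0)
pos = rng.normal(100, 5, 32)   # positive-control wells
neg = rng.normal(20, 5, 32)    # negative-control wells
print(f"Z' = {z_factor(pos, neg):.2f}, SSMD = {ssmd(pos, neg):.1f}")
```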
A compound with a desired size of effects in an HTS is called a hit. The process of selecting hits is called hit selection. The analytic methods for hit selection in screens without replicates (usually in primary screens) differ from those with replicates (usually in confirmatory screens). For example, the z-score method is suitable for screens without replicates whereas the t-statistic is suitable for screens with replicates. The calculation of SSMD for screens without replicates also differs from that for screens with replicates. [ 9 ]
For hit selection in primary screens without replicates, the easily interpretable measures are average fold change, mean difference, percent inhibition, and percent activity. However, they do not capture data variability effectively. The z-score method and SSMD can capture data variability, based on the assumption that every compound has the same variability as a negative reference in the screens. [ 15 ] [ 16 ] However, outliers are common in HTS experiments, and methods such as the z-score are sensitive to outliers and can be problematic. As a consequence, robust methods such as the z*-score method, SSMD*, the B-score method, and quantile-based methods have been proposed and adopted for hit selection. [ 5 ] [ 9 ] [ 17 ] [ 18 ]
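The sketch below illustrates the robust z*-score idea, replacing the mean and standard deviation with the median and scaled median absolute deviation (MAD) of the negative reference. The 1.4826 factor makes the MAD consistent with the standard deviation for normal data, and the −3 cutoff is a common but arbitrary choice:

```python
import numpy as np

def robust_zscore(values, reference):
    """z*-score: like the z-score, but built from the median and MAD of the
    negative reference, making it far less sensitive to outlier wells."""
    values = np.asarray(values, float)
    ref = np.asarray(reference, float)
    med = np.median(ref)
    mad = np.median(np.abs(ref - med)) * 1.4826  # consistent with sd for normal data
    return (values - med) / mad

# Flag compounds whose signal is suppressed beyond -3 robust SDs:
readings = np.array([98.0, 101.0, 45.0, 99.5, 12.0])
neg_ref = np.array([100.0, 97.0, 102.0, 99.0, 101.0, 250.0])  # one outlier well
hits = robust_zscore(readings, neg_ref) < -3
print(hits)  # the 45.0 and 12.0 wells stand out despite the outlier control
```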
In a screen with replicates, we can directly estimate variability for each compound; as a consequence, we should use SSMD or the t-statistic, which do not rely on the strong assumption that the z-score and z*-score rely on. One issue with the use of the t-statistic and associated p-values is that they are affected by both sample size and effect size. [ 19 ] They come from testing for no mean difference, and thus are not designed to measure the size of compound effects. For hit selection, the major interest is the size of effect in a tested compound. SSMD directly assesses the size of effects. [ 20 ] SSMD has also been shown to be better than other commonly used effect sizes. [ 21 ] The population value of SSMD is comparable across experiments and, thus, we can use the same cutoff for the population value of SSMD to measure the size of compound effects. [ 22 ]
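A small sketch of this distinction, assuming replicate-wise differences from the negative reference are available for each compound; note how the t-statistic equals the SSMD estimate scaled by √n, so it grows with replicate count while SSMD measures effect size alone:

```python
import numpy as np

def ssmd_replicates(diffs):
    """Method-of-moments SSMD estimate for one compound, given the
    replicate-wise differences from the negative reference."""
    d = np.asarray(diffs, float)
    return d.mean() / d.std(ddof=1)

def t_statistic(diffs):
    """One-sample t-statistic on the same differences."""
    d = np.asarray(diffs, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

d = np.array([-12.1, -10.8, -13.0])   # triplicate differences for one compound
print(ssmd_replicates(d))             # effect size, independent of n
print(t_statistic(d))                 # = SSMD * sqrt(n); grows with n
```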
Unique distributions of compounds across one or many plates can be employed either to increase the number of assays per plate or to reduce the variance of assay results, or both. The simplifying assumption made in this approach is that any N compounds in the same well will not typically interact with each other, or the assay target, in a manner that fundamentally changes the ability of the assay to detect true hits.
For example, imagine a plate wherein compound A is in wells 1–2–3, compound B is in wells 2–3–4, and compound C is in wells 3–4–5. In an assay of this plate against a given target, a hit in wells 2, 3, and 4 would indicate that compound B is the most likely agent, while also providing three measurements of compound B's efficacy against the specified target. Commercial applications of this approach involve combinations in which no two compounds ever share more than one well, to reduce the (second-order) possibility of interference between pairs of compounds being screened.
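A toy deconvolution of the example above (the layout dictionary is exactly the three-compound, five-well scheme just described, not a commercial design):

```python
from collections import Counter

# Hypothetical layout: each compound spans three adjacent wells.
layout = {"A": {1, 2, 3}, "B": {2, 3, 4}, "C": {3, 4, 5}}
hit_wells = {2, 3, 4}  # wells where the assay fired

# The most likely agent is the compound whose well set best matches the hits.
scores = Counter({c: len(wells & hit_wells) for c, wells in layout.items()})
best, n_matched = scores.most_common(1)[0]
print(best, n_matched)  # -> B 3 (all three of B's wells are hits)
```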
Automation and low volume assay formats were leveraged by scientists at the NIH Chemical Genomics Center (NCGC) to develop quantitative HTS (qHTS), a paradigm to pharmacologically profile large chemical libraries through the generation of full concentration-response relationships for each compound. With accompanying curve fitting and cheminformatics software, qHTS data yield the half-maximal effective concentration (EC50), maximal response, and Hill coefficient (nH) for each compound in the library, enabling the assessment of nascent structure-activity relationships (SAR). [ 23 ]
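The concentration-response fitting step might look like the following sketch, which fits a four-parameter Hill (logistic) curve to simulated data with SciPy. The parameter names, seed values, and noise level are illustrative, not NCGC's actual pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, ec50, nh):
    """Four-parameter logistic (Hill) concentration-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / c) ** nh)

conc = np.logspace(-9, -4, 8)                  # molar test concentrations
resp = hill(conc, 0, 100, 1e-6, 1.2)           # idealized responses
resp += np.random.default_rng(1).normal(0, 3, conc.size)  # assay noise

popt, _ = curve_fit(hill, conc, resp, p0=[0, 100, 1e-6, 1.0], maxfev=10000)
bottom, top, ec50, nh = popt
print(f"EC50 = {ec50:.2e} M, max response = {top:.0f}, Hill slope = {nh:.2f}")
```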
In March 2010, research was published demonstrating an HTS process allowing 1,000 times faster screening (100 million reactions in 10 hours) at one-millionth the cost (using 10 −7 times the reagent volume) of conventional techniques, using drop-based microfluidics. [ 23 ] Drops of fluid separated by oil replace microplate wells and allow analysis and hit sorting while reagents are flowing through channels.
In 2010, researchers developed a silicon sheet of lenses that can be placed over microfluidic arrays to allow the fluorescence measurement of 64 different output channels simultaneously with a single camera. [ 24 ] This process can analyze 200,000 drops per second.
In 2013, researchers disclosed an approach with small molecules from plants. In general, it is essential to provide high-quality proof-of-concept validations early in the drug discovery process. Here, technologies that enable the identification of potent, selective, and bioavailable chemical probes are of crucial interest, even if the resulting compounds require further optimization for development into a pharmaceutical product. The nuclear receptor RORα, a protein that has been targeted for more than a decade in the search for potent and bioavailable agonists, was used as an example of a very challenging drug target. Hits are confirmed at the screening step due to the bell-shaped curve. This method is very similar to the quantitative HTS method (screening and hit confirmation at the same time), except that it greatly decreases the number of data points and can easily screen more than 100,000 biologically relevant compounds. [ 25 ]
Switching from an orbital shaker, which required milling times of 24 hours and at least 10 mg of drug compound, to a ResonantAcoustic mixer, Merck reported reducing processing time to less than 2 hours on only 1–2 mg of drug compound per well. Merck also indicated the acoustic milling approach allows for the preparation of high dose nanosuspension formulations that could not be obtained using conventional milling equipment. [ 26 ]
Whereas traditional HTS drug discovery uses purified proteins or intact cells, more recent development of the technology is associated with the use of intact living organisms, like the nematode Caenorhabditis elegans and zebrafish ( Danio rerio ). [ 27 ]
In 2016–2018, plate manufacturers began producing specialized chemistry to allow for mass production of ultra-low adherent, cell-repellent surfaces, which facilitated the rapid development of HTS-amenable assays to address cancer drug discovery in 3D tissues such as organoids and spheroids, a more physiologically relevant format. [ 28 ] [ 29 ] [ 30 ]
HTS is a relatively recent innovation, made feasible largely through modern advances in robotics and high-speed computer technology. It still takes a highly specialized and expensive screening lab to run an HTS operation, so in many cases a small- to moderate-size research institution will use the services of an existing HTS facility rather than set up one for itself.
There is a trend in academia for universities to be their own drug discovery enterprise. [ 31 ] These facilities, which normally are found only in industry, are now increasingly found at universities as well. UCLA , for example, features an open-access HTS laboratory, the Molecular Screening Shared Resources (MSSR, UCLA), which can screen more than 100,000 compounds a day on a routine basis. The open-access policy ensures that researchers from all over the world can take advantage of this facility without lengthy intellectual property negotiations. With a compound library of over 200,000 small molecules, the MSSR has one of the largest compound decks of any university on the west coast. The MSSR also features full functional genomics capabilities (genome-wide siRNA, shRNA, cDNA and CRISPR) which are complementary to small molecule efforts: functional genomics leverages HTS capabilities to execute genome-wide screens which examine the function of each gene in the context of interest by either knocking each gene out or overexpressing it. Parallel access to a high-throughput small molecule screen and a genome-wide screen enables researchers to perform target identification and validation for a given disease, or mode-of-action determination on a small molecule. The most accurate results can be obtained by use of "arrayed" functional genomics libraries, i.e. each library contains a single construct such as a single siRNA or cDNA. Functional genomics is typically paired with high content screening using e.g. epifluorescent microscopy or laser scanning cytometry.
The University of Illinois also has a facility for HTS, as does the University of Minnesota. The Life Sciences Institute at the University of Michigan houses the HTS facility in the Center for Chemical Genomics. Columbia University has an HTS shared resource facility with ~300,000 diverse small molecules and ~10,000 known bioactive compounds available for biochemical, cell-based and NGS-based screening. The Rockefeller University has an open-access HTS Resource Center HTSRC (The Rockefeller University, HTSRC ), which offers a library of over 380,000 compounds. Northwestern University's High Throughput Analysis Laboratory supports target identification, validation, assay development, and compound screening. The non-profit Sanford Burnham Prebys Medical Discovery Institute also has a long-standing HTS facility in the Conrad Prebys Center for Chemical Genomics which was part of the MLPCN. The non-profit Scripps Research Molecular Screening Center (SRMSC) [ 32 ] continues to serve academia across institutes post-MLPCN era. The SRMSC uHTS facility maintains one of the largest library collections in academia, presently at well over 665,000 small molecule entities, and routinely screens the full collection or sub-libraries in support of multi-PI grant initiatives.
In the United States, the National Institutes of Health or NIH has created a nationwide consortium of small-molecule screening centers to produce innovative chemical tools for use in biological research. The Molecular Libraries Probe Production Centers Network, or MLPCN, performs HTS on assays provided by the research community, against a large library of small molecules maintained in a central molecule repository. In addition, the NIH created the National Center for Advancing Translational Sciences or NCATS, housed in Shady Grove, Maryland, which carries out small molecule and RNAi screens in collaboration with academic laboratories. Of note, the small molecule screening uses 1536-well plates, a capability rarely seen in academic screening laboratories, that allows one to carry out quantitative HTS in which each compound is tested across four to five orders of magnitude of concentration. [ 23 ]
High-valent iron commonly denotes compounds and intermediates in which iron is found in a formal oxidation state > +3 that show a number of bonds > 6 with a coordination number ≤ 6. The ferrate(VI) ion [FeO 4 ] 2− was the first structure in this class synthesized. The synthetic compounds discussed below contain highly oxidized iron in general, as the concepts are closely related.
Oxoferryl species are common examples of high-valent iron complexes. Such compounds are prepared by oxidation of ferrous complexes with iodosobenzene : [ 1 ] [ 2 ]
Several syntheses of oxoiron(IV) species have been reported. The simplest are mixed-metal oxides of the form MFeO 3 , with M=Ba, Ca, or Sr. However, those compounds do not have discrete iron anions. [ 3 ]
Isolated oxoiron(IV) species are known with more complicated ligands. These compounds model biological complexes such as cytochrome P450 , NO synthase , and isopenicillin N synthase. Two such reported compounds are thiolate-ligated oxoiron(IV) and cyclam-acetate oxoiron(IV). [ 4 ]
Thiolate-ligated oxoiron(IV) is formed by the oxidation of a precursor, [Fe II (TMCS)](PF 6 ) (TMCS = 1-mercaptoethyl-4,8,11-trimethyl-1,4,8,11-tetraza cyclotetradecane), and 3-5 equivalents of H 2 O 2 at −60 ˚C in methanol . The iron(IV) compound is deep blue in color and shows intense absorption features at 460 nm, 570 nm, 850 nm, and 1050 nm. This species Fe IV (=O)(TMCS)+ is stable at −60 ˚C, but decomposition is reported as temperature increases. Compound 2 was identified by Mössbauer spectroscopy , high resolution electrospray ionization mass spectrometry (ESI-MS), X-ray absorption spectroscopy , extended X-ray absorption fine structure (EXAFS), ultraviolet–visible spectroscopy (UV-vis), Fourier-transform infrared spectroscopy (FT-IR), and results were compared to density functional theory (DFT) calculations. [ 5 ]
Tetramethylcyclam oxoiron(IV) is formed by the reaction of Fe II (TMC)(OTf) 2 , TMC = 1,4,8,11-tetramethyl-1,4,8,11-tetraazacyclotetradecane; OTf = CF 3 SO 3 , with iodosylbenzene (PhIO) in CH 3 CN at −40 ˚C. A second method for formation of cyclam oxoiron(IV) is reported as the reaction of Fe II (TMC)(OTf) 2 with 3 equivalents of H 2 O 2 for 3 hours. This species is pale green in color and has an absorption maximum at 820 nm. It is reported to be stable for at least 1 month at −40 ˚C. It has been characterized by Mössbauer spectroscopy, ESI-MS, EXAFS, UV-vis, Raman spectroscopy , and FT-IR. [ 6 ]
High-valent iron bispidine complexes can oxidize cyclohexane to cyclohexanol and cyclohexanone in 35% yield with an alcohol to ketone ratio up to 4. [ 7 ]
Fe V TAML(=O), TAML = tetra-amido macrocyclic ligand , is formed by the reaction of [Fe III (TAML)(H 2 O)](PPh 4 ) with 2-5 equivalents of meta-chloroperbenzoic acid at −60 ˚C in n-butyronitrile. This deep green compound (two λ max at 445 and 630 nm respectively) is stable at 77 K. The stabilization of Fe(V) is attributed to the strong π–donor capacity of deprotonated amide nitrogens. [ 8 ]
Ferrate(VI) is found in the inorganic anion [FeO 4 ] 2− . It has been isolated as the potassium salt, potassium ferrate. It is a strong water-stable oxidizing agent . Its solutions are stable at high pH .
Nitridoiron [ 9 ] and imidoiron [ 10 ] compounds are closely related to iron-dinitrogen chemistry . [ 11 ] The biological significance of nitridoiron(V) porphyrins has been reviewed. [ 12 ] [ 13 ] A widely applicable method to generate high-valent nitridoiron species is the thermal or photochemical oxidative elimination of molecular nitrogen from an azide complex.
Several structurally characterized nitridoiron(IV) compounds exist. [ 14 ] [ 15 ] [ 16 ]
The first nitridoiron(V) compound was synthesised and characterized by Wagner and Nakamoto (1988, 1989) using photolysis and Raman spectroscopy at low temperatures. [ 17 ] [ 18 ]
A second Fe VI species apart from the ferrate(VI) ion, [(Me 3 cy-ac)FeN](PF 6 ) 2 , has been reported. This species is formed by oxidation followed by photolysis to yield the Fe(VI) species. Characterization of the Fe(VI) complex was done by Mössbauer spectroscopy, EXAFS, IR, and DFT calculations. Unlike the ferrate(VI) ion, compound 5 is diamagnetic . [ 19 ]
Bridged μ-nitrido di-iron phthalocyanine compounds such as iron(II) phthalocyanine catalyze the oxidation of methane to methanol , formaldehyde , and formic acid using hydrogen peroxide as sacrificial oxidant. [ 20 ] [ 21 ]
Nitridoiron(IV) and nitridoiron(V) species were first explored theoretically in 2002. [ 22 ] | https://en.wikipedia.org/wiki/High-valent_iron |
High-voltage transformer fire barriers , also known as transformer firewalls , transformer ballistic firewalls , or transformer blast walls , are outdoor countermeasures against a fire or explosion involving a single transformer from damaging adjacent transformers. These barriers compartmentalize transformer fires and explosions involving combustible transformer oil .
High-voltage transformer fire barriers are typically located in electrical substations , but may also be attached to buildings, such as valve halls or manufacturing plants with large electrical distribution systems , such as pulp and paper mills . Outdoor transformer fire barriers that are attached at least on one side to a building are referred to as wing walls .
The primary North American document that deals with outdoor high-voltage transformer fire barriers is NFPA 850. [ 1 ] NFPA 850 outlines that outdoor oil-insulated transformers should be separated from adjacent structures and from each other by firewalls , spatial separation, or other approved means for the purpose of limiting the damage and potential spread of fire from a transformer failure.
Instead of a passive barrier, fire protection water spray systems are sometimes used to cool a transformer to prevent damage if exposed to radiation heat transfer from a fire involving oil released from another transformer that has failed. [ 2 ]
Mechanical systems designed to quickly depressurize the transformer oil tank after the occurrence of an electrical fault [ 3 ] can minimize the chance that a transformer tank will rupture given a minor fault, but are not effective on major internal faults. [ 4 ]
Transformer oils are available with sufficiently low combustibility that a fire will not continue after an internal electrical fault. These fluids include those approved by FM Global . [ 5 ] FM Data Sheet 5-4 indicates different levels of protection depending on the type of fluid used. Alternatives include, but are not limited to, esters and silicone oil . [ 6 ]
In the semiconductor industry , the term high-κ dielectric refers to a material with a high dielectric constant (κ, kappa ), as compared to silicon dioxide . High-κ dielectrics are used in semiconductor manufacturing processes where they are usually used to replace a silicon dioxide gate dielectric or another dielectric layer of a device. The implementation of high-κ gate dielectrics is one of several strategies developed to allow further miniaturization of microelectronic components, colloquially referred to as extending Moore's Law .
Sometimes these materials are called "high-k" (pronounced "high kay"), instead of "high-κ" (high kappa).
Silicon dioxide ( SiO 2 ) has been used as a gate oxide material for decades. As metal–oxide–semiconductor field-effect transistors (MOSFETs) have decreased in size, the thickness of the silicon dioxide gate dielectric has steadily decreased to increase the gate capacitance (per unit area) and thereby drive current (per device width), raising device performance. As the thickness scales below 2 nm , leakage currents due to tunneling increase drastically, leading to high power consumption and reduced device reliability. Replacing the silicon dioxide gate dielectric with a high-κ material allows a physically thicker layer at the same gate capacitance, avoiding the associated leakage effects.
The gate oxide in a MOSFET can be modeled as a parallel plate capacitor. Ignoring quantum mechanical and depletion effects from the Si substrate and gate, the capacitance C of this parallel plate capacitor is given by C = κ ε 0 A / t , where κ is the relative dielectric constant of the material (3.9 for silicon dioxide), ε 0 is the permittivity of free space, A is the capacitor area, and t is the thickness of the oxide insulator.
Since leakage limitation constrains further reduction of t , an alternative method to increase gate capacitance is to alter κ by replacing silicon dioxide with a high-κ material. In such a scenario, a thicker gate oxide layer might be used which can reduce the leakage current flowing through the structure as well as improving the gate dielectric reliability .
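A back-of-the-envelope sketch of this trade-off, using the parallel-plate formula above and the common figure of merit of equivalent oxide thickness (EOT). The κ ≈ 25 used for HfO2 is an often-quoted approximate value, not a precise material constant:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cap_density(kappa: float, t_m: float) -> float:
    """Gate capacitance per unit area (F/m^2) of a parallel-plate stack."""
    return kappa * EPS0 / t_m

def eot(kappa: float, t_m: float, kappa_sio2: float = 3.9) -> float:
    """Equivalent oxide thickness: the SiO2 thickness giving the same
    capacitance density as the high-k layer."""
    return t_m * kappa_sio2 / kappa

# A 5 nm HfO2 film matches the capacitance of sub-1-nm SiO2 while being far
# thicker, so tunnelling leakage drops sharply.
print(f"{eot(25, 5e-9) * 1e9:.2f} nm EOT")    # ~0.78 nm
print(f"{cap_density(25, 5e-9):.3e} F/m^2")
```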
The drain current I D for a MOSFET can be written (using the gradual channel approximation) as I D,sat = ( W / L ) μ C inv ( V G − V th ) 2 / 2 , where W is the transistor width, L is the channel length, μ is the channel carrier mobility (assumed constant here), C inv is the capacitance density of the gate dielectric when the underlying channel is in inversion, V G is the voltage applied to the gate, and V th is the threshold voltage.
The term V G − V th is limited in range due to reliability and room temperature operation constraints, since a too large V G would create an undesirable, high electric field across the oxide. Furthermore, V th cannot easily be reduced below about 200 mV, because leakage currents due to increased oxide leakage (that is, assuming high-κ dielectrics are not available) and subthreshold conduction raise stand-by power consumption to unacceptable levels. (See the industry roadmap, [ 1 ] which limits threshold to 200 mV, and Roy et al. [ 2 ] ). Thus, according to this simplified list of factors, an increased I D,sat requires a reduction in the channel length or an increase in the gate dielectric capacitance.
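To make the scaling explicit, here is a toy calculation with the gradual-channel formula above (all numbers are illustrative, not process data): doubling the gate capacitance density doubles the drive current when V G − V th is held fixed by the constraints just described.

```python
def id_sat(width, length, mobility, c_inv, vg, vth):
    """Saturation drain current from the gradual channel approximation."""
    return (width / length) * mobility * c_inv * (vg - vth) ** 2 / 2.0

# Illustrative numbers only: same geometry and overdrive, doubled C_inv.
base = id_sat(1e-6, 45e-9, 0.02, 0.017, 1.0, 0.2)
high_k = id_sat(1e-6, 45e-9, 0.02, 0.034, 1.0, 0.2)
print(high_k / base)  # ~2.0
```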
Replacing the silicon dioxide gate dielectric with another material adds complexity to the manufacturing process. Silicon dioxide can be formed by oxidizing the underlying silicon, ensuring a uniform, conformal oxide and high interface quality. As a consequence, development efforts have focused on finding a material with a requisitely high dielectric constant that can be easily integrated into a manufacturing process. Other key considerations include band alignment to silicon (which may alter leakage current), film morphology, thermal stability, maintenance of a high mobility of charge carriers in the channel and minimization of electrical defects in the film/interface. Materials which have received considerable attention are hafnium silicate , zirconium silicate , hafnium dioxide and zirconium dioxide , typically deposited using atomic layer deposition .
It is expected that defect states in the high-κ dielectric can influence its electrical properties. Defect states can be measured for example by using zero-bias thermally stimulated current, zero-temperature-gradient zero-bias thermally stimulated current spectroscopy , [ 3 ] [ 4 ] or inelastic electron tunneling spectroscopy (IETS).
Industry has employed oxynitride gate dielectrics since the 1990s, wherein a conventionally formed silicon oxide dielectric is infused with a small amount of nitrogen. The nitride content subtly raises the dielectric constant and is thought to offer other advantages, such as resistance against dopant diffusion through the gate dielectric.
In 2000, Gurtej Singh Sandhu and Trung T. Doan of Micron Technology initiated the development of atomic layer deposition high-κ films for DRAM memory devices. This helped drive cost-effective implementation of semiconductor memory , starting with 90-nm node DRAM. [ 5 ] [ 6 ]
In early 2007, Intel announced the deployment of hafnium -based high-κ dielectrics in conjunction with a metallic gate for components built on 45 nanometer technologies, and has shipped it in the 2007 processor series codenamed Penryn . [ 7 ] [ 8 ] At the same time, IBM announced plans to transition to high-κ materials, also hafnium-based, for some products in 2008. While not identified, the most likely dielectric used in such applications are some form of nitrided hafnium silicates ( HfSiON ). HfO 2 and HfSiO are susceptible to crystallization during dopant activation annealing. NEC Electronics has also announced the use of a HfSiON dielectric in their 55 nm UltimateLowPower technology. [ 9 ] However, even HfSiON is susceptible to trap-related leakage currents, which tend to increase with stress over device lifetime. This leakage effect becomes more severe as hafnium concentration increases. There is no guarantee, however, that hafnium will serve as a de facto basis for future high-κ dielectrics. The 2006 ITRS roadmap predicted the implementation of high-κ materials to be commonplace in the industry by 2010. | https://en.wikipedia.org/wiki/High-κ_dielectric |
The High Accuracy Radial Velocity Planet Searcher ( HARPS ) is a high-precision echelle planet-finding spectrograph installed in 2002 on the ESO's 3.6m telescope at La Silla Observatory in Chile . The first light was achieved in February 2003. HARPS has discovered over 130 exoplanets to date, the first in 2004, making it the second most successful planet finder after the Kepler space telescope . It is a second-generation radial-velocity spectrograph, based on experience with the ELODIE and CORALIE instruments. [ 1 ]
HARPS can attain a precision of 0.97 m/s (3.5 km/h), [ 2 ] making it one of only two instruments worldwide with such accuracy. This is due to a design in which the target star and a reference spectrum from a thorium lamp are observed simultaneously using two identical optic fibre feeds, and to careful attention to mechanical stability: the instrument sits in a vacuum vessel which is temperature-controlled to within 0.01 kelvins. [ 3 ] The precision and sensitivity of the instrument are such that it incidentally produced the best available measurement of the thorium spectrum. Planet detection is in some cases limited by the seismic pulsations of the star observed rather than by limitations of the instrument. [ 4 ]
The principal investigator on the HARPS is Michel Mayor who, along with Didier Queloz and Stéphane Udry , have used the instrument to characterize the Gliese 581 planetary system , home to one of the smallest known exoplanets orbiting a normal star, and two super-Earths whose orbits lie in the star's habitable zone . [ 5 ]
It was initially used for a survey of one thousand stars.
Since October 2012, the HARPS spectrograph has had the precision to detect a new category of planets: habitable super-Earths. This sensitivity was expected from simulations of stellar intrinsic signals and actual observations of planetary systems. Currently, HARPS can detect habitable super-Earths only around low-mass stars, as these are more affected by the gravitational tug of their planets and have habitable zones close to the host star. [ 6 ]
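The dependence on stellar mass follows from the standard radial-velocity semi-amplitude formula. The sketch below (rounded constants, circular orbit, edge-on inclination assumed) shows that a 5 Earth-mass planet on a 30-day orbit induces roughly 1 m/s on a Sun-like star (near the HARPS precision quoted above) but more than twice that on a 0.3 solar-mass M dwarf:

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN, M_EARTH = 1.989e30, 5.972e24
DAY = 86400.0

def rv_semi_amplitude(m_planet, m_star, period_s, e=0.0, sin_i=1.0):
    """Stellar radial-velocity semi-amplitude K (m/s) induced by one planet."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet * sin_i
            / ((m_star + m_planet) ** (2 / 3) * math.sqrt(1 - e ** 2)))

# A 5 Earth-mass planet with a 30-day period:
for name, m_star in [("Sun-like star", 1.0 * M_SUN), ("M dwarf", 0.3 * M_SUN)]:
    k = rv_semi_amplitude(5 * M_EARTH, m_star, 30 * DAY)
    print(f"{name}: K = {k:.2f} m/s")
```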
This is an incomplete list of exoplanets discovered by the HARPS. The list is sorted by the date of the discovery's announcement. As of December 2017, the list contains 134 exoplanets.
HEAO-1 was an X-ray telescope launched in 1977. HEAO-1 surveyed the sky in the X-ray portion of the electromagnetic spectrum (0.2 keV – 10 MeV), providing nearly constant monitoring of X-ray sources near the ecliptic poles and more detailed studies of a number of objects by observations lasting 3–6 hours. It was the first of NASA 's three High Energy Astronomy Observatories , launched on August 12, 1977, aboard an Atlas rocket with a Centaur upper stage, and operated until 9 January 1979. During that time, it scanned the X-ray sky almost three times.
HEAO 1 included four X-ray and gamma-ray astronomy instruments, known as A1, A2, A3, and A4 (before launch, HEAO 1 was known as HEAO A ). The orbital inclination was about 22.7 degrees. HEAO 1 re-entered the Earth's atmosphere on 15 March 1979.
The A1 , or Large-Area Sky Survey (LASS) instrument, covered the 0.25–25 keV energy range, using seven large proportional counters. [ 2 ] It was designed, operated, and managed at the Naval Research Laboratory (NRL) under the direction of Principal Investigator Dr. Herbert D. Friedman, and the prime contractor was TRW . The HEAO A-1 X-Ray Source Catalog included 842 discrete X-ray sources. [ 3 ]
The A2 , or Cosmic X-ray Experiment (CXE) , from the Goddard Space Flight Center , covered the 2–60 keV energy range with high spatial and spectral resolution. The Principal Investigators were Dr. Elihu A. Boldt and Dr. Gordon P. Garmire. [ 4 ]
The A3 , or Modulation Collimator (MC) instrument, provided high-precision positions of X-ray sources, accurate enough to permit follow-up observations to identify optical and radio counterparts. It was provided by the Center for Astrophysics ( Smithsonian Astrophysical Observatory and the Harvard College Observatory , SAO/HCO). [ 5 ] Principal Investigators were Dr. Daniel A. Schwartz of SAO and Dr. Hale V. Bradt of MIT.
The A4 , or Hard X-ray / Low Energy Gamma-ray Experiment , used sodium iodide (NaI) scintillation counters to cover the energy range from about 20 keV to 10 MeV. [ 6 ] It consisted of seven clustered modules, of three distinct designs, in a roughly hexagonal array. [ 7 ] Each detector was actively shielded by surrounding CsI scintillators, in active-anti-coincidence, so that an extraneous particle or gamma-ray event from the side or rear would be vetoed electronically, and rejected.
(It was discovered in early balloon flights by experimenters in the 1960s that passive collimators or shields, made of materials such as lead, actually increase the undesired background rate, due to the intense showers of secondary particles and photons produced by the extremely high energy (GeV) particles characteristic of the space radiation environment.)
A plastic anti-coincidence scintillation shield, essentially transparent to gamma-ray photons, protected the detectors from high-energy charged particles entering from the front.
For all seven modules, the unwanted background effects of particles or photons entering from the rear was suppressed by a "phoswich" design, in which the active NaI detecting element was optically coupled to a layer of CsI on its rear surface, which was in turn optically coupled to a single photomultiplier tube for each of the seven units.
Because the NaI has a much faster response time (~0.25 μs) than the CsI (~1 μs), electronic pulse shape discriminators could distinguish good events in the NaI from mixed events accompanied by a simultaneous interaction in the CsI.
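A schematic illustration of how such a discriminator can separate the two decay times follows. This is an idealized sketch with noiseless exponential pulses; the 0.5 μs cut and the function names are ours, not the flight electronics:

```python
import numpy as np

TAU_NAI, TAU_CSI = 0.25e-6, 1.0e-6   # scintillation decay times from the text

def decay_time(t, pulse):
    """Estimate the exponential decay constant of a digitized pulse tail."""
    tail = pulse > 0.05 * pulse.max()
    slope, _ = np.polyfit(t[tail], np.log(pulse[tail]), 1)
    return -1.0 / slope

def accept_event(t, pulse, cut=0.5e-6):
    """Keep events whose decay looks like fast NaI light; veto slower CsI
    (or mixed) pulses, mimicking the phoswich pulse-shape discriminator."""
    return decay_time(t, pulse) < cut

t = np.linspace(0, 5e-6, 500)
nai_pulse = np.exp(-t / TAU_NAI)
csi_pulse = np.exp(-t / TAU_CSI)
print(accept_event(t, nai_pulse), accept_event(t, csi_pulse))  # True False
```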
The largest, or High Energy Detector (HED), occupied the central position and covered the upper range from ~120 keV to 10 MeV, with a field-of-view (FOV) collimated to 37° FWHM . Its NaI detector was 5 inches (13 cm) in diameter by 3 inches (7.6 cm) thick. The extreme penetrating power of photons in this energy range made it necessary to operate the HED in electronic anti-coincidence with the surrounding CsI and also the six other detectors of the hexagon.
Two Low Energy Detectors (LEDs) were located in positions 180° apart on opposite sides of the hexagon. They had thin, ~3 mm thick NaI detectors, also 5 inches (13 cm) in diameter, covering the energy range from ~10–200 keV. Their FOVs were defined to fan-shaped beams of 1.7° x 20° FWHM by passive, parallel slat-plate collimators. The slats of the two LEDs were inclined at ±30° to the nominal HEAO scanning direction, crossing each other at 60°. Thus, working together, they covered a wide field of view, but could localize celestial sources with a precision determined by their narrow 1.7° fields.
The four Medium Energy Detectors (MEDs), with a nominal energy range of 80 keV – 3 MeV, had NaI detector crystals 3 inches (7.6 cm) in diameter by 1 inch (2.5 cm) thick, and occupied the four remaining positions in the hexagon of modules. They had circular FOVs with a 17° FWHM.
The primary data from A4 consisted of "event-by-event" telemetry, listing each good (i.e., un-vetoed) event in the NaI detectors. The experiment had the flexibility to tag each event with its pulse height (proportional to its energy), and a one or two byte time tag, allowing precision timing of objects such as gamma-ray bursts and pulsars .
Results of the experiment included a catalog of the positions and intensities of hard X-ray (10–200 keV) sources, [ 8 ] a strong observational basis for extremely strong magnetic fields (of order 10 13 G) on the rotating neutron stars associated with Her X-1 [ 9 ] [ 10 ] and 4U 0115+634, a definitive diffuse component spectrum between 13 and 200 keV, discovery of the power-law shape of the Cygnus X-1 power density spectrum, and discovery of slow intensity cycles in the X-Ray sources SMC X-1 and LMC X-4, resulting in approximately 15 Ph.D theses and ~100 scientific publications.
The A4 instrument was provided and managed by the University of California at San Diego, under the direction of Prof. Laurence E. Peterson , in collaboration with the X-ray group at MIT , where the initial A4 data reduction was performed under the direction of Prof. Walter H. G. Lewin . | https://en.wikipedia.org/wiki/High_Energy_Astronomy_Observatory_1 |
The last of NASA's three High Energy Astronomy Observatories , HEAO 3 was launched 20 September 1979 on an Atlas-Centaur launch vehicle, into a nearly circular, 43.6 degree inclination low Earth orbit with an initial perigee of 486.4 km.
The normal operating mode was a continuous celestial scan, spinning approximately once every 20 min about the spacecraft z-axis, which was nominally pointed at the Sun.
Total mass of the observatory at launch was 2,660.0 kilograms (5,864.3 lb). [ 1 ]
HEAO 3 included three scientific instruments: the first a cryogenic high-resolution germanium gamma-ray spectrometer , and two devoted to cosmic-ray observations.
The scientific objectives of the mission's three experiments are summarized below.
The HEAO "C-1" instrument (as it was known before launch) was a sky-survey experiment, operating in the hard X-ray and low-energy gamma-ray bands.
The gamma-ray spectrometer was especially designed to search for the 511 keV gamma-ray line produced by the annihilation of positrons in stars, galaxies, and the interstellar medium (ISM), nuclear gamma-ray line emission expected from the interactions of cosmic rays in the ISM, the radioactive products of cosmic nucleosynthesis , and nuclear reactions due to low-energy cosmic rays.
In addition, careful study was made of the spectral and time variations of known hard X-ray sources.
The experimental package contained four cooled, p-type high-purity Ge gamma-ray detectors with a total volume of about 100 cm 3 , enclosed in a thick (6.6 cm average) caesium iodide (CsI) scintillation shield in active anti-coincidence [ 2 ] to suppress extraneous background.
The experiment was capable of measuring gamma-ray energies falling within the energy interval from 0.045 to 10 MeV. The Ge detector system had an initial energy resolution better than 2.5 keV at 1.33 MeV and a line sensitivity from 1.E-4 to 1.E-5 photons/cm 2 -s, depending on the energy. Key experimental parameters were (1) a geometry factor of 11.1 cm 2 -sr, (2) an effective area of ~75 cm 2 at 100 keV, (3) a field of view of ~30 deg FWHM at 45 keV, and (4) a time resolution of less than 0.1 ms for the germanium detectors and 10 s for the CsI detectors. The gamma-ray spectrometer operated until 1 June 1980, when its cryogen was exhausted. [ 3 ] [ 4 ] The energy resolution of the Ge detectors was subject to degradation (roughly proportional to energy and time) due to radiation damage. [ 5 ] The primary data are available from the NASA HEASARC [ 6 ] and at JPL. They include instrument, orbit, and aspect data plus some spacecraft housekeeping information on 1600-bpi binary tapes. Some of this material has subsequently been archived on more modern media. [ 7 ] The experiment was proposed, developed, and managed by the Jet Propulsion Laboratory of the California Institute of Technology, under the direction of Dr. Allan S. Jacobson .
The HEAO C-2 experiment measured the relative composition of the isotopes of the primary cosmic rays between beryllium and iron (Z from 4 to 26) and the elemental abundances up to tin (Z=50). Cerenkov counters and hodoscopes , together with the Earth's magnetic field, formed a spectrometer. They determined charge and mass of cosmic rays to a precision of 10% for the most abundant elements over the momentum range from 2 to 25 GeV/c (c = speed of light). Scientific direction was by Principal Investigators Prof. Bernard Peters and Dr. Lydie Koch-Miramond. The primary data base has been archived at the Centre d'Études Nucléaires de Saclay and the Danish Space Research Institute. Information on the data products is given by Engelman et al. 1985. [ 8 ]
The purpose of the HEAO C-3 experiment was to measure the charge spectrum of cosmic-ray nuclei over the nuclear charge (Z) range from 17 to 120, in the energy interval 0.3 to 10 GeV/nucleon, and to characterize cosmic-ray sources, processes of nucleosynthesis, and propagation modes. The detector consisted of a double-ended instrument of upper and lower hodoscopes and three dual-gap ion chambers. The two ends were separated by a Cerenkov radiator. The geometrical factor was 4 cm 2 -sr. The ion chambers could resolve charge to 0.24 charge units at low energy and 0.39 charge units at high energy and high Z. The Cerenkov counter could resolve 0.3 to 0.4 charge units. Binns et al. [ 9 ] give more details.
The experiment was proposed and managed by the Space Radiation Laboratory of the California Institute of Technology (Caltech), under the direction of Principal Investigator Prof. Edward C. Stone , Jr. of Caltech, and Dr. Martin H. Israel, and Dr. Cecil J. Waddington.
The HEAO 3 Project was the final mission in the High Energy Astronomy Observatory series, which was managed by the NASA Marshall Space Flight Center (MSFC), where the project scientist was Dr. Thomas A. Parnell, and the project manager was Dr. John F. Stone. The prime contractor was TRW . | https://en.wikipedia.org/wiki/High_Energy_Astronomy_Observatory_3 |
High Energy Transient Explorer 1 ( HETE-1 ) was a NASA astronomical satellite with international participation (mainly Japan and France ).
The concept of a satellite capable of multi-wavelength observations of gamma-ray bursts (GRB) was discussed at the Santa Cruz, California meeting on GRBs in 1981. In 1986, the first realistic implementation of the HETE concept was proposed by a Massachusetts Institute of Technology (MIT)-led international team. This concept, which was adopted, emphasized accurate locations and multi-wavelength coverage as the primary scientific goals for a sharply focused small-satellite mission that would ultimately solve the gamma-ray burst mystery.
In 1989, NASA approved funding for a low-cost "University Class" explorer satellite to search for GRBs. In 1992, the HETE-1 program was funded, and the design and construction of HETE-1 began. The original spacecraft contractor for HETE-1 was AeroAstro, Inc., of Herndon, Virginia . AeroAstro was responsible for the spacecraft bus, including power, communications, attitude control, and computers.
The instrument complement for HETE-1 consisted of: four wide-field gamma-ray detectors , supplied by the CESR of Toulouse , France ; a wide-field coded-aperture X-ray imager, supplied by a collaboration of Los Alamos National Laboratory (LANL) and the Institute of Physical and Chemical Research ( RIKEN ) of Tokyo , Japan ; and four wide-field near-UV CCD cameras, supplied by the Center for Space Research at the Massachusetts Institute of Technology.
Due to the tragic fate of HETE-1 and the continuing timeliness of GRB science, NASA agreed to a reflight of the HETE-1 satellite, using flight-spare hardware from the first satellite. In July 1997, funding for a second HETE satellite was granted, with a target launch date in early 2000. [ 2 ]
The prime objective of HETE-1 was to carry out the first multi-wavelength study of GRBs with ultraviolet (UV), X-ray , and gamma-ray instruments mounted on a single, compact spacecraft. A unique feature of the HETE-1 mission was its capability to localize GRBs with ~10 arcseconds accuracy in near real time aboard the spacecraft, and to transmit these positions directly to a network of receivers at existing ground-based observatories enabling rapid, sensitive follow-up studies in the radio , infrared (IR), and visible light bands. [ 3 ]
The satellite bus for the HETE-1 satellite was designed and built by AeroAstro, Inc. (USA) of Herndon, Virginia . The HETE-1 spacecraft was Sun-pointing with four solar panels connected to the bottom of the spacecraft bus. Spacecraft attitude was to be controlled by magnetic torque coils and a momentum wheel. [ 3 ]
The Omnidirectional Gamma-Ray Spectrometer was designed to operate from 6 keV to greater than 1 MeV. The instrument consisted of four wide-field gamma-ray detectors with a total effective area of 120 cm² (19 sq in). Because the HETE satellite remained trapped within the launch vehicle after a battery failure, the experiment was never able to operate. [ 4 ]
The Ultraviolet Transient Camera Array was designed to provide accurate directional information on transient events, and to assist with spacecraft attitude determination. The instrument consisted of four ultraviolet Charge-coupled device (CCD) cameras operating in the 5 to 7 eV range. [ 5 ]
The Widefield X-ray Monitor was designed to perform X-ray studies of gamma-ray bursts. The instrument consisted of coded aperture cameras, sensitive in the 2-25 keV energy range, and with location accuracy to ~ 10 arcminutes or better. [ 6 ]
The HETE-1 satellite was launched with the Argentine satellite SAC-B. HETE-1 was lost during the launch on 4 November 1996, at 17:08:56 UTC , from Wallops Flight Facility (WFF), launch area-3 . The Pegasus XL launch vehicle achieved a good orbit, but the explosive bolts that were to release HETE-1 and SAC-B from their Dual Payload Attach Fitting (DPAF) envelope failed to fire, dooming both satellites. The battery on the launch vehicle's third stage that powered these bolts had cracked during the ascent. Unable to deploy its solar panels, HETE-1 lost power several days after launch. [ 3 ]
HETE-1 re-entered on 7 April 2002.
| https://en.wikipedia.org/wiki/High_Energy_Transient_Explorer_1 |
The High Performance Wireless Research and Education Network ( HPWREN ) is a network research program, funded by the National Science Foundation . The program includes the creation, demonstration, and evaluation of a non-commercial, prototype, high-performance, wide-area, wireless network in its Southern California service area.
The HPWREN program is a collaborative , interdisciplinary and multi- institutional cyber-infrastructure for research and education purposes. The program also provides data, and data transmission capabilities, to emergency first responders in its service area.
Currently, the HPWREN network is used for network analysis research, and it also provides high-speed Internet access to field researchers.
The network's service area covers Southern California , specifically San Diego , Riverside , and Imperial counties.
The network includes backbone nodes located at the University of California, San Diego (UC San Diego) and San Diego State University (SDSU) campuses, as well as a number of "hard-to-reach" areas in remote environments.
The HPWREN backbone itself operates primarily in the licensed spectrum, and project researchers use off-the-shelf technology to create a redundant topology. Access links often use license-exempt radios.
In 2002, HPWREN researchers conducted an expedition to locate the SEALAB II/III habitat located off Scripps Pier in La Jolla, California . [ 1 ] From the MV Kellie Chouest and utilizing a Scorpio ROV to find the site, researchers were able to conduct a live multicast from ship to shore. [ 1 ]
The network spans from the Southern California coast to the inland valleys, on to the high mountains (reaching more than 8700 feet), and out to the remote desert. The network's longest link is 72 miles (116 km) – reaching from the San Diego Supercomputer Center to San Clemente Island . | https://en.wikipedia.org/wiki/High_Performance_Wireless_Research_and_Education_Network |
High Price: A Neuroscientist's Journey of Self-Discovery That Challenges Everything You Know About Drugs and Society is a 2013 book by psychologist and neuroscientist Carl Hart , [ 1 ] combining memoir , scientific assessment, and policy recommendation. Hart recounts his own experiences growing up in a poor African-American neighborhood in Miami, surrounded by violence and drug use, and views it through his research as a neuroscientist investigating the effects of drugs. He argues for an end to the punitive war on drugs that he finds to be based on race, class and misconceptions, in favor of evidence-based policies. [ 2 ] [ 3 ]
Writing in the New York Times , John Tierney found High Price to be "a fascinating combination of memoir and social science: wrenching scenes of deprivation and violence accompanied by calm analysis of historical data and laboratory results." [ 4 ] In Scientific American , Anna Kuchment recommended High Price , writing, "Hart's account of rising from the projects to the ivory tower is as poignant as his call to change the way society thinks about race, drugs and poverty." [ 5 ] Publishers Weekly wrote, "Combining memoir, popular science, and public policy, Hart’s study lambasts current drug laws as draconian and repressive, arguing that they’re based more on assumptions about race and class than on a real understanding of the physiological and societal effects of drugs. ... His is a provocative clarion call for students of sociology and policy-makers alike." [ 3 ]
High Price won the PEN/E. O. Wilson Literary Science Writing Award in 2014. [ 6 ] [ 7 ] | https://en.wikipedia.org/wiki/High_Price_(book) |
High availability ( HA ) is a characteristic of a system that aims to ensure an agreed level of operational performance, usually uptime , for a higher than normal period. [ 1 ]
There is now more dependence on these systems as a result of modernization. For instance, in order to carry out their regular daily tasks, hospitals and data centers need their systems to be highly available. Availability refers to the ability of the user community to obtain a service or good, access the system, whether to submit new work, update or alter existing work, or collect the results of previous work. If a user cannot access the system, it is – from the user's point of view – unavailable . [ 2 ] Generally, the term downtime is used to refer to periods when a system is unavailable.
High availability is a property of network resilience , the ability to "provide and maintain an acceptable level of service in the face of faults and challenges to normal operation." [ 3 ] Threats and challenges for services can range from simple misconfiguration over large scale natural disasters to targeted attacks. [ 4 ] As such, network resilience touches a very wide range of topics. In order to increase the resilience of a given communication network, the probable challenges and risks have to be identified and appropriate resilience metrics have to be defined for the service to be protected. [ 5 ]
The importance of network resilience is continuously increasing, as communication networks are becoming a fundamental component in the operation of critical infrastructures. [ 6 ] Consequently, recent efforts focus on interpreting and improving network and computing resilience with applications to critical infrastructures. [ 7 ] As an example, one can consider as a resilience objective the provisioning of services over the network, instead of the services of the network itself. This may require coordinated response from both the network and from the services running on top of the network. [ 8 ]
These services include:
Resilience and survivability are used interchangeably according to the specific context of a given study. [ 9 ]
There are three principles of systems design in reliability engineering that can help achieve high availability.
A distinction can be made between scheduled and unscheduled downtime. Typically, scheduled downtime is a result of maintenance that is disruptive to system operation and usually cannot be avoided with a currently installed system design. Scheduled downtime events might include patches to system software that require a reboot or system configuration changes that only take effect upon a reboot. In general, scheduled downtime is usually the result of some logical, management-initiated event. Unscheduled downtime events typically arise from some physical event, such as a hardware or software failure or environmental anomaly. Examples of unscheduled downtime events include power outages, failed CPU or RAM components (or possibly other failed hardware components), an over-temperature related shutdown, logically or physically severed network connections, security breaches, or various application , middleware , and operating system failures.
If users can be warned away from scheduled downtimes, then the distinction is useful. But if the requirement is for true high availability, then downtime is downtime whether or not it is scheduled.
Many computing sites exclude scheduled downtime from availability calculations, assuming that it has little or no impact upon the computing user community. By doing this, they can claim to have phenomenally high availability, which might give the illusion of continuous availability . Systems that exhibit truly continuous availability are comparatively rare and higher priced, and most have carefully implemented specialty designs that eliminate any single point of failure and allow online hardware, network, operating system, middleware , and application upgrades, patches, and replacements. For certain systems, scheduled downtime does not matter, for example, system downtime at an office building after everybody has gone home for the night.
Availability is usually expressed as a percentage of uptime in a given year. The following table shows the downtime that will be allowed for a particular percentage of availability, presuming that the system is required to operate continuously. Service level agreements often refer to monthly downtime or availability in order to calculate service credits to match monthly billing cycles. The following table shows the translation from a given availability percentage to the corresponding amount of time a system would be unavailable.
The terms uptime and availability are often used interchangeably but do not always refer to the same thing. For example, a system can be "up" with its services not "available" in the case of a network outage . Or a system undergoing software maintenance can be "available" to be worked on by a system administrator , but its services do not appear "up" to the end user or customer. The subject of the terms is thus important here: whether the focus of a discussion is the server hardware, server OS, functional service, software service/process, or similar, it is only if there is a single, consistent subject of the discussion that the words uptime and availability can be used synonymously.
A simple mnemonic rule states that 5 nines allows approximately 5 minutes of downtime per year. Variants can be derived by multiplying or dividing by 10: 4 nines is 50 minutes and 3 nines is 500 minutes. In the opposite direction, 6 nines is 0.5 minutes (30 sec) and 7 nines is 3 seconds.
Another memory trick to calculate the allowed downtime duration for an n-nines availability percentage is to use the formula 8.64 × 10⁴⁻ⁿ seconds per day.
For example, 90% ("one nine") yields the exponent 4 − 1 = 3, and therefore the allowed downtime is 8.64 × 10³ seconds per day.
Also, 99.999% ("five nines") gives the exponent 4 − 5 = −1, and therefore the allowed downtime is 8.64 × 10⁻¹ seconds per day.
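As an illustration, this rule is easy to compute directly; the helper below is a minimal sketch (the function name is ours, not from any cited source):

```python
def downtime_per_day_seconds(n: int) -> float:
    """Allowed downtime in seconds per day for an n-nines availability level."""
    return 8.64 * 10 ** (4 - n)

for n in range(1, 6):
    print(f"{n} nine(s): {downtime_per_day_seconds(n):.3f} s/day")
# 5 nines gives 0.864 s/day, about 315 s/year, i.e. roughly 5 minutes.
```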
Percentages of a particular order of magnitude are sometimes referred to by the number of nines or "class of nines" in the digits. For example, electricity that is delivered without interruptions ( blackouts , brownouts or surges ) 99.999% of the time would have 5 nines reliability, or class five. [ 10 ] In particular, the term is used in connection with mainframes [ 11 ] [ 12 ] or enterprise computing, often as part of a service-level agreement .
Similarly, percentages ending in a 5 have conventional names, traditionally the number of nines, then "five", so 99.95% is "three nines five", abbreviated 3N5. [ 13 ] [ 14 ] This is casually referred to as "three and a half nines", [ 15 ] but this is incorrect: a 5 is only a factor of 2, while a 9 is a factor of 10, so a 5 is 0.3 nines (per the formula below: log₁₀ 2 ≈ 0.3): [ note 2 ] 99.95% availability is 3.3 nines, not 3.5 nines. [ 16 ] More simply, going from 99.9% availability to 99.95% availability is a factor of 2 (0.1% to 0.05% unavailability), but going from 99.95% to 99.99% availability is a factor of 5 (0.05% to 0.01% unavailability), over twice as much. [ note 3 ]
A formulation of the class of 9s c based on a system's unavailability x would be c = ⌊−log₁₀ x⌋ (cf. Floor and ceiling functions ).
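A minimal Python sketch of this formulation, assuming the floor-based definition given above:

```python
import math

def class_of_nines(unavailability: float) -> int:
    # c = floor(-log10(x)), where x is the unavailability
    return math.floor(-math.log10(unavailability))

print(class_of_nines(0.00001))  # 5, i.e. 99.999% ("five nines")
print(class_of_nines(0.0005))   # 3, i.e. 99.95% is class 3 (about 3.3 nines)
```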
A similar measurement is sometimes used to describe the purity of substances.
In general, the number of nines is not often used by a network engineer when modeling and measuring availability because it is hard to apply in formulas. More often, the unavailability expressed as a probability (like 0.00001), or a downtime per year, is quoted. Availability specified as a number of nines is often seen in marketing documents. [ citation needed ] The use of the "nines" has been called into question, since it does not appropriately reflect that the impact of unavailability varies with its time of occurrence. [ 17 ] For large numbers of 9s, the "unavailability" index (a measure of downtime rather than uptime) is easier to handle. For example, this is why an "unavailability" rather than availability metric is used in hard disk or data link bit error rates .
Sometimes the humorous term "nine fives" (55.5555555%) is used to contrast with "five nines" (99.999%), [ 18 ] [ 19 ] [ 20 ] though this is not an actual goal, but rather a sarcastic reference to something totally failing to meet any reasonable target.
Availability measurement is subject to some degree of interpretation. A system that has been up for 365 days in a non-leap year might have been eclipsed by a network failure that lasted for 9 hours during a peak usage period; the user community will see the system as unavailable, whereas the system administrator will claim 100% uptime. However, given the true definition of availability, the system will be approximately 99.9% available, or three nines (8751 hours of available time out of 8760 hours per non-leap year). Also, systems experiencing performance problems are often deemed partially or entirely unavailable by users, even when the systems are continuing to function. Similarly, unavailability of select application functions might go unnoticed by administrators yet be devastating to users – a true availability measure is holistic.
Availability must be measured to be determined, ideally with comprehensive monitoring tools ("instrumentation") that are themselves highly available. If there is a lack of instrumentation, systems supporting high volume transaction processing throughout the day and night, such as credit card processing systems or telephone switches, are often inherently better monitored, at least by the users themselves, than systems which experience periodic lulls in demand.
An alternative metric is mean time between failures (MTBF).
Recovery time (or estimated time of repair (ETR), also known as recovery time objective (RTO)) is closely related to availability: it is the total time required for a planned outage or the time required to fully recover from an unplanned outage. Another metric is mean time to recovery (MTTR). Recovery time could be infinite with certain system designs and failures, i.e. full recovery is impossible. One such example is a fire or flood that destroys a data center and its systems when there is no secondary disaster recovery data center.
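For illustration, the textbook steady-state relation between these two metrics is availability = MTBF / (MTBF + MTTR); this identity is standard reliability engineering rather than something stated in the cited sources:

```python
def availability_from_mtbf_mttr(mtbf: float, mttr: float) -> float:
    """Steady-state availability given mean time between failures and mean time to recovery."""
    return mtbf / (mtbf + mttr)

# A system failing every 1000 hours and taking 1 hour to recover:
print(f"{availability_from_mtbf_mttr(mtbf=1000.0, mttr=1.0):.4%}")  # 99.9001%
```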
Another related concept is data availability , that is the degree to which databases and other information storage systems faithfully record and report system transactions. Information management often focuses separately on data availability, or Recovery Point Objective , in order to determine acceptable (or actual) data loss with various failure events. Some users can tolerate application service interruptions but cannot tolerate data loss.
A service level agreement ("SLA") formalizes an organization's availability objectives and requirements.
High availability is one of the primary requirements of the control systems in unmanned vehicles and autonomous maritime vessels . If the controlling system becomes unavailable, the Ground Combat Vehicle (GCV) or ASW Continuous Trail Unmanned Vessel (ACTUV) would be lost.
On one hand, adding more components to an overall system design can undermine efforts to achieve high availability because complex systems inherently have more potential failure points and are more difficult to implement correctly. While some analysts would put forth the theory that the most highly available systems adhere to a simple architecture (a single, high-quality, multi-purpose physical system with comprehensive internal hardware redundancy), this architecture suffers from the requirement that the entire system must be brought down for patching and operating system upgrades. More advanced system designs allow for systems to be patched and upgraded without compromising service availability (see load balancing and failover ). High availability requires less human intervention to restore operation in complex systems; the reason for this being that the most common cause for outages is human error. [ 21 ]
On the other hand, redundancy is used to create systems with high levels of availability (e.g. popular ecommerce websites). In this case it is required to have high levels of failure detectability and avoidance of common cause failures.
If redundant parts are used in parallel and fail independently (e.g. by not being within the same data center), they can exponentially increase the availability and make the overall system highly available. If you have N parallel components, each having availability X, then you can use the following formula: [ 22 ] [ 23 ]
Availability of parallel components = 1 − (1 − X)^N
So, for example, if each of your components has only 50% availability, by using 10 of these components in parallel you can achieve 99.9023% availability.
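A minimal sketch of that calculation (the function name is illustrative):

```python
def parallel_availability(x: float, n: int) -> float:
    """Availability of n independent parallel components, each with availability x."""
    return 1 - (1 - x) ** n

print(f"{parallel_availability(0.5, 10):.4%}")  # 99.9023%
```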
Two kinds of redundancy are passive redundancy and active redundancy.
Passive redundancy is used to achieve high availability by including enough excess capacity in the design to accommodate a performance decline. The simplest example is a boat with two separate engines driving two separate propellers. The boat continues toward its destination despite failure of a single engine or propeller. A more complex example is multiple redundant power generation facilities within a large system involving electric power transmission . Malfunction of single components is not considered to be a failure unless the resulting performance decline exceeds the specification limits for the entire system.
Active redundancy is used in complex systems to achieve high availability with no performance decline. Multiple items of the same kind are incorporated into a design that includes a method to detect failure and automatically reconfigure the system to bypass failed items using a voting scheme. This is used with complex computing systems that are linked. Internet routing is derived from early work by Birman and Joseph in this area. [ 24 ] [ non-primary source needed ] Active redundancy may introduce more complex failure modes into a system, such as continuous system reconfiguration due to faulty voting logic.
Zero downtime system design means that modeling and simulation indicates mean time between failures significantly exceeds the period of time between planned maintenance , upgrade events, or system lifetime. Zero downtime involves massive redundancy, which is needed for some types of aircraft and for most kinds of communications satellites . Global Positioning System is an example of a zero downtime system.
Fault instrumentation can be used in systems with limited redundancy to achieve high availability. Maintenance actions occur during brief periods of downtime only after a fault indicator activates. Failure is only significant if this occurs during a mission critical period.
Modeling and simulation is used to evaluate the theoretical reliability for large systems. The outcome of this kind of model is used to evaluate different design options. A model of the entire system is created, and the model is stressed by removing components. Redundancy simulation involves the N-x criteria. N represents the total number of components in the system. x is the number of components used to stress the system. N-1 means the model is stressed by evaluating performance with all possible combinations where one component is faulted. N-2 means the model is stressed by evaluating performance with all possible combinations where two components are faulted simultaneously.
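A toy sketch of generating the N-1 and N-2 stress cases (the component names and the evaluation stub are hypothetical):

```python
from itertools import combinations

components = ["feed_a", "feed_b", "generator", "switch", "controller"]

def evaluate_performance(faulted):
    # Stand-in for a real reliability model of the degraded system.
    return f"simulate system with {set(faulted)} removed"

for x in (1, 2):  # the "x" in the N-x criteria
    for faulted in combinations(components, x):
        print(evaluate_performance(faulted))
```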
A survey among academic availability experts in 2010 ranked reasons for unavailability of enterprise IT systems. All reasons refer to not following best practice in each of the following areas (in order of importance): [ 25 ]
A book on the factors themselves was published in 2003. [ 26 ]
In a 1998 report from IBM Global Services , unavailable systems were estimated to have cost American businesses $4.54 billion in 1996, due to lost productivity and revenues. [ 27 ] | https://en.wikipedia.org/wiki/High_availability |
High availability software is software used to ensure that systems are running and available most of the time. High availability is a high percentage of time that the system is functioning. It can be formally defined as (1 − (downtime / total time)) × 100%. Although the minimum required availability varies by task, systems typically attempt to achieve 99.999% (5-nines) availability. This characteristic is weaker than fault tolerance , which typically seeks to provide 100% availability, albeit with significant price and performance penalties.
High availability software is measured by its performance when a subsystem fails, its ability to resume service in a state close to the state of the system at the time of the original failure, and its ability to perform other service-affecting tasks (such as software upgrades or configuration changes) in a manner that eliminates or minimizes down time. All faults that affect availability – hardware, software, and configuration – need to be addressed by High Availability Software to maximize availability.
You can add redundancy to achieve high availability. If done properly, adding redundancy can exponentially increase the availability and make the overall system highly available. If you have N redundant, parallel hosts, each having availability X, then you can use the following formula: [ 1 ] [ 2 ]
Availability of parallel and redundant components = 1 − (1 − X)^N
So, for example, if each of your hosts has only 50% availability, by using 10 hosts in parallel you can achieve 99.9023% availability.
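The formula can also be inverted to estimate how many redundant hosts a target availability requires; the derivation below is ours, not from the cited sources:

```python
import math

def hosts_needed(x: float, target: float) -> int:
    """Smallest N satisfying 1 - (1 - x)**N >= target."""
    return math.ceil(math.log(1 - target) / math.log(1 - x))

print(hosts_needed(x=0.50, target=0.999))    # 10
print(hosts_needed(x=0.99, target=0.99999))  # 3
```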
Note that redundancy doesn’t always lead to higher availability. In fact, redundancy increases complexity, which in turn reduces availability. According to Marc Brooker, to take advantage of redundancy, ensure that: [ 3 ]
Typical high availability software provides features that:
Enable hardware and software redundancy :
These features include:
A service is not available if it cannot service all the requests being placed on it. The “scale-out” property of a system refers to the ability to create multiple copies of a subsystem to address increasing demand, and to efficiently distribute incoming work to these copies ( Load balancing (computing) ) preferably without shutting down the system. High availability software should enable scale-out without interrupting service.
Enable active/standby communication (notably Checkpointing) :
Active subsystems need to communicate to standby subsystems to ensure that the standby is ready to take over where the active left off. High Availability Software can provide communications abstractions like redundant message and event queues to help active subsystems in this task. Additionally, an important concept called “checkpointing” is exclusive to highly available software. In a checkpointed system, the active subsystem identifies all of its critical state and periodically updates the standby with any changes to this state. This idea is commonly abstracted as a distributed hash table – the active writes key/value records into the table and both the active and standby subsystems read from it. Unlike a “cloud” distributed hash table ( Chord (peer-to-peer) , Kademlia , etc.) a checkpoint is fully replicated. That is, all records in the “checkpoint” hash table are readable so long as one copy is running. [ 4 ] Another technique, called an application checkpoint, periodically saves the entire state of a program. [ 5 ]
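The following toy sketch (the class and method names are ours, not a real high availability API) illustrates why a fully replicated checkpoint lets a standby resume where the active left off:

```python
class Checkpoint:
    """Toy fully replicated key/value checkpoint table."""

    def __init__(self):
        self._replicas = []

    def attach(self, table: dict):
        self._replicas.append(table)

    def write(self, key, value):
        for table in self._replicas:  # full replication, unlike a cloud DHT
            table[key] = value

active_view, standby_view = {}, {}
cp = Checkpoint()
cp.attach(active_view)
cp.attach(standby_view)
cp.write("last_committed_txn", 42)               # active records critical state
assert standby_view["last_committed_txn"] == 42  # standby can resume from here
```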
Enable in-service upgrades :
In Service Software Upgrade is the ability to upgrade software without degrading service. It is typically implemented in redundant systems by executing what is called a “rolling” upgrade—upgrading the standby while the active provides service, failing over, and then upgrading the old active. Another important feature is the ability to rapidly fall back to an older version of the software and configuration if the new version fails. [ 6 ] [ 7 ]
Minimize standby latency and ensure standby correctness :
Standby latency is defined as the time between when a standby is told to become active and when it is actually providing service. “Hot” standby systems are those that actively update internal state in response to active system checkpoints, resulting in millisecond down times. “Cold” standby systems are offline until the active fails and typically restart from a “baseline” state. For example, many cloud solutions will restart a virtual machine on another physical machine if the underlying physical machine fails. “Cold” fail over standby latency can range from 30+ seconds to several minutes. Finally, “warm” standby is an informal term encompassing all systems that are running yet must do some internal processing before becoming active. For example, a warm standby system might be handling low priority jobs – when the active fails it aborts these jobs and reads the active's checkpointed state before resuming service. Warm standby latencies depend on how much data is checkpointed but typically have a few seconds latency.
High availability software can help engineers create complex system architectures that are designed to minimize the scope of failures and to handle specific failure modes. A “normal” failure is defined as one which can be handled by the software architecture, while a “catastrophic” failure is defined as one which is not handled. A catastrophic failure therefore causes a service outage. However, the software can still greatly increase availability by automatically returning to an in-service state as soon as the catastrophic failure is remedied.
The simplest configuration (or “redundancy model”) is 1 active, 1 standby, or 1+1. Another common configuration is N+1 (N active, 1 standby), which reduces total system cost by having fewer standby subsystems. Some systems use an all-active model, which has the advantage that “standby” subsystems are being constantly validated.
Configurations can also be defined with active, hot standby, and cold standby (or idle) subsystems, extending the traditional “active+standby” nomenclature to “active+standby+idle” (e.g. 5+1+1). Typically, “cold standby” or “idle” subsystems are active for lower priority work. Sometimes these systems are located far away from their redundant pair in a strategy called geographic redundancy. [ 8 ] This architecture seeks to avoid loss of service from physically-local events (fire, flood, earthquake) by separating redundant machines.
Sophisticated policies can be specified by high availability software to differentiate software from hardware faults, and to attempt time-delayed restarts of individual software processes, entire software stacks, or entire systems.
In the past 20 years telecommunication networks and other complex software systems have become essential parts of business and recreational activities.
“At the same time [as the economy is in a downturn], 60% almost -- that's six out of 10 businesses -- require 99.999. That's four nines or five nines of availability and uptime for their mission-critical line-of-business applications.
And 9% of the respondents, so that's almost one out of 10 companies, say that they need greater than five nines of uptime. So what that means is, no downtime. In other words, you have got to really have bulletproof, bombproof applications and hardware systems. So you know, what do you use? Well one thing you have high-availability clusters or you have the more expensive and more complex fault-tolerance servers.” [ 9 ]
Telecommunications : High Availability Software is an essential component of telecommunications equipment since a network outage can result in significant loss in revenue for telecom providers and telephone access to emergency services is an important public safety issue.
Defense/Military : Recently High Availability Software has found its way into defense projects as an inexpensive way to provide availability for crewed and uncrewed vehicles [ 10 ]
Space : High Availability Software is proposed for use of non-radiation hardened equipment in space environments. Radiation hardened electronics is significantly more expensive and lower performance than off-the-shelf equipment. But High Availability Software running on a single or pair of rad-hardened controllers can manage many redundant high performance non-rad-hard computers, potentially failing over and resetting them in the event of a fault. [ 11 ]
Typical cloud services provide a set of networked computers (typically virtual machines) running a standard server OS like Linux. Computers can often communicate with other instances within the same data center for free (tenant network) and with outside computers for a fee. The cloud infrastructure may provide simple fault detection and restart at the virtual machine level. However, restarts can take several minutes, resulting in lower availability. Additionally, cloud services cannot detect software failures within the virtual machines. High Availability Software running inside the cloud virtual machines can detect software (and virtual machine) failures in seconds and can use checkpointing to ensure that standby virtual machines are ready to take over service.
The Service Availability Forum defines standards for application-aware High Availability. [ 12 ] | https://en.wikipedia.org/wiki/High_availability_software |
High dynamic range ( HDR ), also known as wide dynamic range , extended dynamic range , or expanded dynamic range , is a signal with a higher dynamic range than usual.
The term is often used in discussing the dynamic ranges of images , videos , audio or radio . It may also apply to the means of recording, processing, and reproducing such signals including analog and digitized signals . [ 1 ]
In this context, the term high dynamic range means there is a large amount of variation in light levels within a scene or an image. The dynamic range refers to the range of luminosity between the brightest area and the darkest area of that scene or image.
High dynamic range imaging ( HDRI ) refers to the set of imaging technologies and techniques that allow the dynamic range of images or videos to be increased. It covers the acquisition, creation, storage, distribution and display of images and videos. [ 2 ]
Modern films have often been shot with cameras featuring a higher dynamic range, and legacy films can be post-converted even if manual intervention will be needed for some frames (as when black-and-white films are converted to color). [ citation needed ] Also, special effects, especially those that mix real and synthetic footage, require both HDR shooting and rendering . [ citation needed ] HDR video is also needed in applications that demand high accuracy for capturing temporal aspects of changes in the scene. This is important in monitoring of some industrial processes such as welding, in predictive driver assistance systems in automotive industry, in surveillance video systems, and other applications.
In photography and videography , a technique commonly named high dynamic range ( HDR ) allows the dynamic range of photos and videos to be captured beyond the native capability of the camera. It consists of capturing multiple frames of the same scene but with different exposures and then combining them into one, resulting in an image with a dynamic range higher than the individually captured frames. [ 3 ] [ 4 ]
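A simplified sketch of the combining step, a toy exposure fusion rather than any specific camera's algorithm (the weighting scheme and frame data here are invented for illustration):

```python
import numpy as np

def fuse_exposures(frames):
    """Blend frames (pixel values in [0, 1]) shot at different exposures,
    weighting well-exposed pixels near mid-grey over clipped ones."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

rng = np.random.default_rng(0)
dark = np.clip(rng.random((4, 4)) * 0.3, 0.0, 1.0)  # underexposed frame
bright = np.clip(dark + 0.6, 0.0, 1.0)              # longer-exposure frame
print(fuse_exposures([dark, bright]))
```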
Some of the sensors on modern phones and cameras may even combine the two images on-chip. This also allows a wider dynamic range to be directly available to the user for display or processing without in-pixel compression. Some cameras designed for use in security applications can capture HDR videos by automatically providing two or more images for each frame, with changing exposure. For example, a sensor for 30fps video will give out 60fps with the odd frames at a short exposure time and the even frames at a longer exposure time. [ citation needed ]
Modern CMOS image sensors can often capture high dynamic range images from a single exposure. [ 5 ] This reduces the need to use the multi-exposure HDR capture technique.
High dynamic range images are used in extreme dynamic range applications like welding or automotive work. In security cameras the term used instead of HDR is "wide dynamic range". [ citation needed ]
Because of the nonlinearity of some sensors image artifacts can be common. [ citation needed ]
High-dynamic-range rendering (HDRR) is the real-time rendering and display of virtual environments using a dynamic range of 65,535:1 or higher (used in computer, gaming, and entertainment technology). [ 6 ] HDRR does not require a HDR display and originally used tone mapping to display the rendering on a standard dynamic range display.
The technologies used to store, transmit, display and print images have limited dynamic range. When captured or created images have a higher dynamic range, they must be tone mapped in order to reduce that dynamic range. [ citation needed ]
High-dynamic-range formats for image and video files are able to store more dynamic range than traditional 8-bit gamma formats. These formats include:
High dynamic range (HDR) is also the common name of a technology for transmitting high dynamic range videos and images to compatible displays. The technology also improves other aspects of transmitted images, such as color gamut .
In this context,
On January 4, 2016, the Ultra HD Alliance announced their certification requirements for an HDR display. [ 23 ] [ 24 ] The HDR display must have either a peak brightness of over 1000 cd/m² and a black level less than 0.05 cd/m² (a contrast ratio of at least 20,000:1) or a peak brightness of over 540 cd/m² and a black level less than 0.0005 cd/m² (a contrast ratio of at least 1,080,000:1). [ 23 ] [ 24 ] The two options allow for different types of HDR displays such as LCD and OLED . [ 24 ]
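The quoted contrast ratios follow directly from dividing peak brightness by black level:

```python
print(1000 / 0.05)   # 20000.0, i.e. the "at least 20,000:1" option
print(540 / 0.0005)  # 1080000.0, i.e. the "at least 1,080,000:1" option
```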
Some options to use HDR transfer functions that better match the human visual system other than a conventional gamma curve include the HLG and perceptual quantizer (PQ). [ 22 ] [ 25 ] [ 26 ] HLG and PQ require a bit depth of 10-bits per sample. [ 22 ] [ 25 ]
The dynamic range of a display refers to the range of luminosity the display can reproduce, from the black level to its peak brightness. [ citation needed ] The contrast of a display refers to the ratio between the luminance of the brightest white and the darkest black that a monitor can produce. [ 27 ] Multiple technologies have been developed to increase the dynamic range of displays.
In May 2003, BrightSide Technologies demonstrated the first HDR display at the Display Week Symposium of the Society for Information Display . The display used an array of individually-controlled LEDs behind a conventional LCD panel in a configuration known as " local dimming ". BrightSide later introduced a variety of related display and video technologies enabling visualization of HDR content. [ 28 ] In April 2007, BrightSide Technologies was acquired by Dolby Laboratories . [ 29 ]
OLED displays have high contrast. MiniLED improves contrast. [ citation needed ]
In the 1970s and 1980s, Steve Mann invented the Generation-1 and Generation-2 "Digital Eye Glass" as a vision aid to help people see better with some versions being built into welding helmets for HDR vision. [ 30 ] [ 31 ] [ 32 ] [ 33 ] [ 34 ] [ 35 ]
In audio, the term high dynamic range means there is a lot of variation in the levels of the sound. Here, the dynamic range refers to the range between the highest volume and the lowest volume of the sound.
XDR (audio) is used to provide higher-quality audio when using microphone sound systems or recording onto cassette tapes.
HDR Audio is a dynamic mixing technique used in EA Digital Illusions CE Frostbite Engine to allow relatively louder sounds to drown out softer sounds. [ 36 ]
Dynamic range compression is a set of techniques used in audio recording and communication to put high-dynamic-range material through channels or media of lower dynamic range. Optionally, dynamic range expansion is used to restore the original high dynamic range on playback.
In radio, high dynamic range is important especially when there are potentially interfering signals. Measures such as spurious-free dynamic range are used to quantify the dynamic range of various system components such as frequency synthesizers. HDR concepts are important in both conventional and software-defined radio design.
In many fields, instruments need to have a very high dynamic range. For example, in seismology , HDR accelerometers are needed, as in the ICEARRAY instruments . | https://en.wikipedia.org/wiki/High_dynamic_range |
High-energy-density physics (HEDP) is a subfield of physics intersecting condensed matter physics , nuclear physics , astrophysics and plasma physics . It has been defined as the physics of matter and radiation at energy densities in excess of about 100 GJ/m³, equivalent to pressures of about 1 Mbar (or roughly 1 million times atmospheric pressure). [ 1 ]
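Since energy density and pressure share the same SI unit (1 J/m³ = 1 Pa), the threshold converts directly; a quick check:

```python
energy_density = 100e9        # 100 GJ/m^3, the HEDP threshold
pressure_pa = energy_density  # numerically identical in pascals
print(pressure_pa / 1e11)     # 1.0 Mbar (1 Mbar = 10^11 Pa)
print(pressure_pa / 101_325)  # ~987,000 standard atmospheres, roughly a million
```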
High energy density (HED) science includes the study of condensed matter at densities common to the deep interiors of giant planets, and hot plasmas typical of stellar interiors. [ 2 ] This multidisciplinary field provides a foundation for understanding a wide variety of astrophysical observations and understanding and ultimately controlling the fusion regime. Specifically, thermonuclear ignition by inertial confinement in the laboratory – as well as the transition from planets to brown dwarfs and stars in nature – takes place via the HED regime. A wide variety of new and emerging experimental capabilities ( National Ignition Facility (NIF), Jupiter Laser Facility (JLF), etc.) together with the push towards Exascale Computing help make this new scientific frontier rich with discovery. [ 3 ]
The HED domain is often defined by an energy density (units of pressure ) above 1 Mbar = 100 GPa ≈ 1 million atmospheres. This is comparable to the energy density of a chemical bond such as in a water molecule. Thus at 1 Mbar, chemistry as we know it changes. Experiments at NIF now routinely probe matter at 100 Mbar. At these "atomic pressure" conditions the energy density is comparable to that of the inner-core electrons, so the atoms themselves change. The dense HED regime includes highly degenerate matter, with interatomic spacing less than the de Broglie wavelength. This is similar to the quantum regime achieved at low temperatures [ 4 ] (e.g. Bose–Einstein condensation ); however, unlike the low-temperature analog, this HED regime simultaneously probes interatomic separations less than the Bohr radius . This opens an entirely new quantum mechanical domain, where core electrons – not just valence electrons – determine material properties, giving rise to core-electron chemistry and a new structural complexity in solids. Potential exotic electronic, mechanical, and structural behaviors of such matter include room-temperature superconductivity , high-density electrides , first-order fluid-fluid transitions, and new insulator-metal transitions. Such matter is likely quite common throughout the universe, existing in the more than 1000 recently discovered exoplanets . [ 3 ]
HED conditions at higher temperatures are important to the birth and death of stars and to controlling thermonuclear fusion in the laboratory. Take as an example the birth and cooling of a neutron star . The central part of a star ~8–20 times the mass of the Sun fuses its way to iron and cannot go further, since iron has the highest binding energy per nucleon of any element. As the iron core accumulates to ~1.4 solar masses, electron degeneracy pressure gives way against gravity and the core collapses. Initially the star cools by the rapid emission of neutrinos . The outer Fe surface layer (~10⁹ K) gives rise to spontaneous pair production, then reaches a temperature where the radiation pressure is comparable to the thermal pressure and where thermal pressure is comparable to coulomb interactions . [ 3 ]
Recent discoveries include metallic fluid hydrogen and superionic water . [ 3 ] | https://en.wikipedia.org/wiki/High_energy_density_physics |
High fidelity (often shortened to hi-fi or, rarely, HiFi ) is the high-quality reproduction of sound . [ 1 ] It is popular with audiophiles and home audio enthusiasts. Ideally, high-fidelity equipment has inaudible noise and distortion , and a flat (neutral, uncolored) frequency response within the human hearing range . [ 2 ]
High fidelity contrasts with the lower-quality " lo-fi " sound produced by inexpensive audio equipment, AM radio , or the inferior quality of sound reproduction that can be heard in recordings made until the late 1940s.
Bell Laboratories began experimenting with various recording techniques in the early 1930s. Performances by Leopold Stokowski and the Philadelphia Orchestra were recorded in 1931 and 1932 using telephone lines between the Academy of Music in Philadelphia and the Bell labs in New Jersey. Some multitrack recordings were made on optical sound film, which led to new advances used primarily by MGM (as early as 1937) and Twentieth Century Fox Film Corporation (as early as 1941). RCA Victor began recording performances by several orchestras using optical sound around 1941, resulting in higher-fidelity masters for 78-rpm discs . During the 1930s, Avery Fisher , an amateur violinist, began experimenting with audio design and acoustics . He wanted to make a radio that would sound like he was listening to a live orchestra and achieve high fidelity to the original sound. After World War II , Harry F. Olson conducted an experiment whereby test subjects listened to a live orchestra through a hidden variable acoustic filter. The results proved that listeners preferred high-fidelity reproduction, once the noise and distortion introduced by early sound equipment was removed. [ citation needed ]
Beginning in 1948, several innovations created the conditions that made major improvements in home audio quality possible:
In the 1950s, audio manufacturers employed the phrase high fidelity as a marketing term to describe records and equipment intended to provide faithful sound reproduction. Many consumers found the difference in quality compared to the then-standard AM radios and 78-rpm records readily apparent and bought high-fidelity phonographs and 33⅓ LPs such as RCA 's New Orthophonics and London's FFRR (Full Frequency Range Recording, a UK Decca system). Audiophiles focused on technical characteristics and bought individual components, such as separate turntables, radio tuners, phono stages , preamplifiers , power amplifiers and loudspeakers. Some enthusiasts even assembled their own loudspeaker systems. With the advent of integrated multi-speaker console systems in the 1950s, hi-fi became a generic term for home sound equipment, to some extent displacing phonograph and record player .
In the late 1950s and early 1960s, the development of stereophonic equipment and recordings led to the next wave of home-audio improvement, and in common parlance stereo displaced hi-fi . Records were now played on a stereo (stereophonic phonograph). In the world of the audiophile, however, the concept of high fidelity continued to refer to the goal of highly accurate sound reproduction and to the technological resources available for approaching that goal. This period is regarded as the "Golden Age of Hi-Fi", when vacuum tube equipment manufacturers of the time produced many models considered superior by modern audiophiles, and just before solid state ( transistorized ) equipment was introduced to the market, subsequently replacing tube equipment as the mainstream technology.
In the 1960s, the FTC with the help of the audio manufacturers came up with a definition to identify high-fidelity equipment so that the manufacturers could clearly state if they meet the requirements and reduce misleading advertisements. [ 4 ]
A popular type of system for reproducing music beginning in the 1970s was the integrated music centre —which combined a phonograph turntable, AM-FM radio tuner, tape player, preamplifier, and power amplifier in one package, often sold with its own separate, detachable or integrated speakers. These systems advertised their simplicity. The consumer did not have to select and assemble individual components or be familiar with impedance and power ratings. Purists generally avoid referring to these systems as high fidelity, though some are capable of very good quality sound reproduction.
Audiophiles in the 1970s and 1980s preferred to buy each component separately. That way, they could choose models of each component with the specifications that they desired. In the 1980s, several audiophile magazines became available, offering reviews of components and articles on how to choose and test speakers, amplifiers, and other components.
Listening tests are used by hi-fi manufacturers, audiophile magazines, and audio engineering researchers and scientists. If a listening test is done in such a way that the listener who is assessing the sound quality of a component or recording can see the components that are being used for the test (e.g., the same musical piece listened to through a tube power amplifier and a solid-state amplifier), then it is possible that the listener's pre-existing biases towards or against certain components or brands could affect their judgment. To respond to this issue, researchers began to use blind tests , in which listeners cannot see the components being tested. A commonly used variant of this test is the ABX test . A subject is presented with two known samples (sample A , the reference, and sample B , an alternative), and one unknown sample X, for three samples total. X is randomly selected from A and B , and the subject identifies X as being either A or B . Although there is no way to prove that a certain methodology is transparent , [ 5 ] a properly conducted double-blind test can prove that a method is not transparent.
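The statistics behind an ABX session reduce to binomial reasoning; the sketch below is our framing rather than a prescribed scoring method:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of scoring `correct` or better out of `trials` by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(abx_p_value(12, 16), 3))  # 0.038, unlikely to be chance
print(round(abx_p_value(9, 16), 3))   # 0.402, consistent with guessing
```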
Blind tests are sometimes used as part of attempts to ascertain whether certain audio components (such as expensive, exotic cables) have any subjectively perceivable effect on sound quality. Data gleaned from these blind tests is not accepted by some audiophile magazines such as Stereophile and The Absolute Sound in their evaluations of audio equipment. John Atkinson , current editor of Stereophile , stated that he once purchased a solid-state amplifier, the Quad 405, in 1978 after seeing the results from blind tests, but came to realize months later that "the magic was gone" until he replaced it with a tube amp. [ 6 ] Robert Harley of The Absolute Sound wrote, in 2008, that: "...blind listening tests fundamentally distort the listening process and are worthless in determining the audibility of a certain phenomenon." [ 7 ]
Doug Schneider, editor of the online Soundstage network, argued the opposite in 2009. [ 8 ] [ 9 ] He stated: "Blind tests are at the core of the decades' worth of research into loudspeaker design done at Canada's National Research Council (NRC). The NRC researchers knew that for their result to be credible within the scientific community and to have the most meaningful results, they had to eliminate bias, and blind testing was the only way to do so." Many Canadian companies such as Axiom, Energy, Mirage, Paradigm, PSB, and Revel use blind testing extensively in designing their loudspeakers. Audio professional Dr. Sean Olive of Harman International shares this view. [ 10 ]
Stereophonic sound provided a partial solution to the problem of reproducing the sound of live orchestral performers by creating separation among instruments, the illusion of space, and a phantom central channel. An attempt to enhance reverberation was tried in the 1970s through quadraphonic sound . Consumers did not want to pay the additional costs and space required for the marginal improvements in realism. With the rise in popularity of home theater , however, multi-channel playback systems became popular, and many consumers were willing to tolerate the six to eight channels required in a home theater.
In addition to spatial realism, the playback of music must be subjectively free from noise, such as hiss or hum, to achieve realism. The compact disc (CD) provides about 90 decibels of dynamic range , [ 11 ] which exceeds the 80 dB dynamic range of music as normally perceived in a concert hall. [ 12 ] Audio equipment must be able to reproduce frequencies high enough and low enough to be realistic. The human hearing range, for healthy young persons, is 20 Hz to 20,000 Hz. [ 13 ] Most adults can't hear higher than 15,000 Hz. [ 11 ] CDs are capable of reproducing frequencies as low as 0 Hz and as high as 22,050 Hz, making them adequate for reproducing the frequency range that most humans can hear. [ 11 ] The equipment must also provide no noticeable distortion of the signal or emphasis or de-emphasis of any frequency in this frequency range.
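A quick check of these figures; the ~98 dB value below is the standard theoretical ceiling for 16-bit quantization, while the ~90 dB cited above is a practical figure:

```python
bit_depth = 16        # bits per sample on a CD
sample_rate = 44_100  # Hz, the CD sampling rate

print(sample_rate / 2)          # 22050.0 Hz upper limit, as cited above
print(6.02 * bit_depth + 1.76)  # ~98 dB theoretical quantization ceiling
```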
Integrated , mini , or lifestyle systems (also known by the older terms music centre or midi system [ 14 ] [ 15 ] ) contain one or more sources such as a CD player , a tuner , or a cassette tape deck together with a preamplifier and a power amplifier in one box. A limitation of an "integrated" system is that failure of any one component can possibly lead to the need to replace the entire unit, as components are not readily swapped in or out of a system merely by plugging and unplugging cables, and may not even have been made available by the manufacturer to allow piecemeal repairs.
Although some high-end audio manufacturers do produce integrated systems, such products are generally disparaged by audiophiles , who prefer to build a system from separates (or components ), often with each item from a different manufacturer specialising in a particular component. This provides the most flexibility for piece-by-piece upgrades and repairs.
A preamplifier and a power amplifier in one box is called an integrated amplifier ; with a tuner added, it is a receiver . A monophonic power amplifier is called a monoblock and is often used for powering a subwoofer . Other modules in the system may include components like cartridges , tonearms , hi-fi turntables , digital media players , DVD players that play a wide variety of discs including CDs , CD recorders , MiniDisc recorders, hi-fi videocassette recorders (VCRs) and reel-to-reel tape recorders . Signal modification equipment can include equalizers and noise-reduction systems .
This modularity allows the enthusiast to spend as little or as much as they want on a component to suit their specific needs, achieve a desired sound, and add components as desired. Also, failure of any component of an integrated system can render it unusable, while the unaffected components of a modular system may continue to function. A modular system introduces the complexity of cabling multiple components and often having different remote controls for each unit.
Some modern hi-fi equipment can be digitally connected using fiber optic TOSLINK cables, USB ports (including one to play digital audio files), or Wi-Fi support.
Another modern component is the music server consisting of one or more computer hard drives that hold music in the form of computer files . When the music is stored in an audio file format that is lossless such as FLAC , Monkey's Audio or WMA Lossless , the computer playback of recorded audio can serve as an audiophile-quality source for a hi-fi system. There is now a push from certain streaming services to offer hi-fi services.
Streaming services typically have a modified dynamic range and possibly bit rates lower than audiophile standards. [ citation needed ] Tidal and others have launched a hi-fi tier that includes access to FLAC and Master Quality Authenticated studio masters for many tracks through the desktop version of the player. This integration is also available for high-end audio systems. | https://en.wikipedia.org/wiki/High_fidelity |
High frequency data refers to time-series data collected at an extremely fine scale. Advances in computational power in recent decades have made it possible to collect high frequency data accurately and process it efficiently. [ 1 ] Largely used in the financial field, high frequency data provides observations at very frequent intervals that can be used to understand market behaviors, dynamics, and micro-structures. [ 2 ]
High frequency data collections were originally formulated by amassing tick-by-tick market data, in which each single 'event' (transaction, quote, price movement, etc.) is characterized by a 'tick', or one logical unit of information. Due to the large number of ticks in a single day, high frequency data collections generally contain a large amount of data, allowing high statistical precision. [ 3 ] High frequency observations across one day of a liquid market can equal the amount of daily data collected in 30 years. [ 3 ]
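The 30-years comparison is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch; the 6.5-hour session length and the one-tick-per-second rate are illustrative assumptions, not properties of any particular market:

```python
# Intraday ticks vs. decades of daily closes.
TRADING_DAY_SECONDS = 6.5 * 3600   # assumed 6.5-hour trading session
TICKS_PER_SECOND = 1.0             # assumed average tick rate for a liquid asset
TRADING_DAYS_PER_YEAR = 252        # common convention

ticks_per_day = TRADING_DAY_SECONDS * TICKS_PER_SECOND
daily_obs_30_years = TRADING_DAYS_PER_YEAR * 30

print(f"Ticks in one day:           {ticks_per_day:,.0f}")    # 23,400
print(f"Daily closes over 30 years: {daily_obs_30_years:,}")  # 7,560
```

Even a fraction of a tick per second therefore yields more observations in a day than three decades of daily closes, which is the sense of the comparison.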
Due to the introduction of electronic forms of trading and Internet -based data providers, high frequency data has become much more accessible and can allow one to follow price formation in real-time. This has resulted in a large new area of research in the high frequency data field, where academics and researchers use the characteristics of high frequency data to develop adequate models for predicting future market movements and risks. [ 3 ] Model predictions cover a wide range of market behaviors including volume , volatility , price movement, and placement optimization. [ 4 ]
Both regulatory agencies and academia maintain an ongoing interest in transaction data and limit order book data, from which broader implications of trade and market behaviors, as well as market outcomes and dynamics, can be assessed using high frequency data models. Regulatory agencies take a large interest in these models because liquidity and price risks are not fully understood in terms of newer forms of automated trading applications. [ 4 ]
High frequency data studies derive value from their ability to trace irregular market activities over a period of time, allowing a better understanding of price, trading activity, and behavior. Due to the importance of timing in market events, high frequency data requires analysis using point processes , which depend on observations and history to characterize random occurrences of events. [ 4 ] This understanding was first developed by 2003 Nobel Prize in Economics winner Robert Fry Engle III , who specializes in developing financial econometric analysis methods using financial data and point processes. [ 4 ]
High frequency data are primarily used in financial research and stock market analysis. Whenever a trade, quote, or electronic order is processed, the relating data are collected and entered in a time-series format. As such, high frequency data are often referred to as transaction data. [ 4 ]
There are five broad levels of high frequency data that are obtained and used in market research and analysis:
Individual trade data collected at a certain interval within a time series. [ 4 ] There are two main variables to describe a single point of trade data: the time of the transaction, and a vector known as a 'mark', which characterizes the details of the transaction event. [ 5 ]
Data collected details both trades and quotes, including price changes and direction, time stamps, and volume. Such information can be found at the TAQ ( Trade and Quote ) database operated by the NYSE . [ 4 ] Where trade data details the exchange of a transaction itself, quote data details the optimal trading conditions for a given exchange. This information can indicate halts in exchanges and both opening and closing quotes. [ 6 ]
Using systems that have been completely computerized, the depth of the market can be assessed using limit order activities that occur in the background of a given market. [ 4 ]
This data level displays the full information surrounding limit order activities, and can create a reproduction of the trade flow at any given time using information on time stamps, cancellations, and buyer/seller identification. [ 4 ]
Snapshots of the order book activities can be recorded on equidistant time grids to limit the need to reproduce the order book. This, however, limits the ability to analyse individual trades, and is therefore more useful for understanding dynamics than for studying the interaction between the order book and trading. [ 4 ]
In financial analysis, high frequency data can be organized in differing time scales from minutes to years. [ 3 ] As high frequency data comes in a largely disaggregated form over a time series compared to lower frequency methods of data collection, it contains various unique characteristics that alter the way the data are understood and analyzed. Robert Fry Engle III categorizes these distinct characteristics as irregular temporal spacing, discreteness, diurnal patterns, and temporal dependence. [ 7 ]
High frequency data involves collecting a large amount of data over a time series, and the individual observations tend to be spaced irregularly over time. This is especially clear in financial market analysis, where transactions may occur in rapid sequence or after a prolonged period of inactivity. [ 7 ]
High frequency data largely incorporates pricing and transactions, which institutional rules prevent from rising or falling drastically within a short period of time. This results in data changes measured in single ticks. [ 7 ] This limited ability to fluctuate makes the data discrete, as in stock market exchange, where popular stocks tend to stay within 5 ticks of movement. Due to this level of discreteness, high frequency data tends to exhibit a high level of kurtosis . [ 7 ]
Analysis first made by Engle and Russell in 1998 notes that high frequency data follows a diurnal pattern , with the duration between trades being smallest at the open and the close of the market. Some foreign markets, which operate 24 hours a day, still display a diurnal pattern based on the time of the day. [ 7 ]
Due largely to discreteness in prices, high frequency data are temporally dependent. The spread forced by small tick differences in buying and selling prices creates a trend that pushes the price in a particular direction. Similarly, the duration and transaction rates between trades tend to cluster, denoting dependence on the temporal changes of price. [ 7 ]
In an observation noted by Robert Fry Engle III , the growing availability of higher-frequency data moved financial data collection from yearly, to monthly, to very frequent intervals. This movement is not infinite, however, but faces a limit once all transactions are recorded. [ 5 ] Engle termed data at this limiting frequency ultra-high frequency data . An outstanding quality of this maximum frequency is extremely irregular spacing in time, a consequence of the fully disaggregated collection. [ 5 ] Rather than breaking the sequence of ultra-high frequency data into time intervals, which would essentially cause a loss of data and reduce the set to a lower frequency, methods and models such as the autoregressive conditional duration model can be used to account for the varying waiting times between observations. [ 5 ] Effective handling of ultra-high frequency data can be used to increase the accuracy of econometric analyses. This can be accomplished with two processes: data cleaning and data management. [ 6 ]
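A minimal simulation sketch of the autoregressive conditional duration idea mentioned above, in the spirit of Engle and Russell's ACD(1,1) specification; the parameter values are arbitrary illustrations, not estimates from any dataset:

```python
import random

# ACD(1,1): expected duration psi_i = omega + alpha*x_{i-1} + beta*psi_{i-1};
# observed duration x_i = psi_i * eps_i, with eps_i ~ Exponential(1).
omega, alpha, beta = 0.1, 0.2, 0.7   # illustrative parameters (alpha + beta < 1)

random.seed(42)
psi, x = 1.0, 1.0
durations = []
for _ in range(10_000):
    psi = omega + alpha * x + beta * psi   # conditional expected waiting time
    x = psi * random.expovariate(1.0)      # draw the next inter-event duration
    durations.append(x)

mean = sum(durations) / len(durations)
print(f"Sample mean duration: {mean:.3f}")  # theory: omega/(1-alpha-beta) = 1.0
```

Because a long duration raises the expected length of the next one, simulated waiting times cluster, reproducing the temporal dependence described earlier without discarding the irregular spacing of the data.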
Data cleaning , or data cleansing , is the process of utilizing algorithmic functions to remove unnecessary, irrelevant, and incorrect data from high frequency data sets. [ 6 ] Ultra-high frequency data analysis requires a clean sample of records to be useful for study. As collection velocities increase, more errors and irrelevant data are likely to enter the collection. [ 6 ] Errors can be attributed to human error , both intentional (e.g. 'dummy' quotes) and unintentional (e.g. typing mistakes ), or to computer error, which occurs with technical failures. [ 8 ]
Data management refers to the process of selecting a specific time series of interest within a set of ultra-high frequency data to be pulled and organized for the purpose of an analysis. Various transactions may be reported at the same time and at different price levels, and econometric models generally require one observation at each time stamp, necessitating some form of data aggregation for proper analysis. [ 6 ] Data management efforts can effectively remedy ultra-high frequency data characteristics including irregular spacing, bid-ask bounce, and market opening and closing effects. [ 6 ]
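A minimal sketch of the cleaning and management steps just described, using pandas; the column names, the toy records and the non-positive-price filter are assumptions for illustration only:

```python
import pandas as pd

# Toy tick records: several trades share a timestamp, and one is clearly bad.
ticks = pd.DataFrame({
    "ts":    ["09:30:00.01", "09:30:00.01", "09:30:00.01",
              "09:30:00.05", "09:30:00.05"],
    "price": [100.02, 0.0, 100.03, 100.01, 100.02],  # 0.0 is an obvious error
    "size":  [200, 100, 300, 150, 50],
})

# Cleaning: drop records with non-positive prices (dummy quotes, typos, etc.).
clean = ticks[ticks["price"] > 0]

# Management: models usually want one observation per time stamp, so
# aggregate simultaneous trades, e.g. by median price and total size.
agg = clean.groupby("ts").agg(price=("price", "median"), size=("size", "sum"))
print(agg)
```

Real cleaning rules are considerably richer (stale quotes, out-of-sequence stamps, bid-ask crossings), but the drop-then-aggregate pattern is the core of both steps.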
A study published in the journal Freshwater Biology focusing on episodic weather effects on lakes highlights the use of high frequency data to further understand meteorological drivers and the consequences of "events", or sudden changes to the physical, chemical, and biological parameters of a lake. [ 9 ] Due to advances in data collection technology and human networks, coupled with the placement of high frequency monitoring stations at a variety of lake types, these events can be explored more effectively. The use of high frequency data in these studies is noted to be an important factor in enabling analyses of rapidly occurring weather changes at lakes, such as wind speed and rainfall, improving understanding of lake capacities to handle events in the wake of increasing storm severity and climate change . [ 9 ]
High frequency data has been found to be useful in the forecasting of inflation. A study by Michele Mondugno in the International Journal of Forecasting indicates that the use of daily and monthly data at a high frequency has generally improved the forecast accuracy of total CPI inflation in the United States. [ 10 ] The study compared lower frequency models with one that considered all variables at a high frequency. It was ultimately found that the increased accuracy of both the highly volatile transport and energy components of prices in the high frequency inflation model led to greater performance and more accurate results. [ 10 ]
The use of half-life estimation to evaluate speeds of mean reversion in economic and financial variables has faced sampling issues: a half-life of about 13.53 years would require 147 years of annual data according to early AR process models . [ 11 ] As a result, some scholars have utilized high frequency data to estimate the half-life of annual data. While high frequency data faces some limitations in recovering the true half-life, mainly through estimator bias, a high frequency ARMA model has been found to estimate half-life consistently and effectively with long annual data. [ 11 ] | https://en.wikipedia.org/wiki/High_frequency_data
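A minimal sketch of that calculation, simulating a mean-reverting AR(1) series and converting the estimated autoregressive coefficient into a half-life; the simple lag-1 least-squares estimate stands in for the more elaborate ARMA-based estimators discussed in the source:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Simulate a mean-reverting AR(1): y_t = rho * y_{t-1} + e_t.
rho_true = 0.95   # implies half-life ln(0.5)/ln(0.95) ~ 13.5 periods
y = np.zeros(5_000)
for t in range(1, y.size):
    y[t] = rho_true * y[t - 1] + rng.standard_normal()

# Estimate rho by lag-1 least squares and convert to a half-life.
y_lag, y_now = y[:-1], y[1:]
rho_hat = (y_lag @ y_now) / (y_lag @ y_lag)
half_life = math.log(0.5) / math.log(rho_hat)

print(f"rho_hat = {rho_hat:.4f}, half-life ~ {half_life:.1f} periods")
```

With annual data, 13.5 periods means 13.5 years, and the slight downward bias of the least-squares estimate in short samples is exactly the estimator bias the literature worries about.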
A high performance positioning system (HPPS) is a type of positioning system consisting of electromechanical equipment (e.g. an assembly of linear stages and rotary stages ) that is capable of moving an object in a three-dimensional space within a work envelope. Positioning can be done point to point or along a desired path of motion . Position is typically defined in six degrees of freedom , including linear position in an x,y,z cartesian coordinate system and angular orientation in yaw, pitch, and roll. HPPS are used in many manufacturing processes to move an object (tool or part) smoothly and accurately in six degrees of freedom, along a desired path, at a desired orientation, with high acceleration , high deceleration , high velocity and low settling time . It is designed to stop its motion quickly and accurately place the moving object at its desired final position and orientation with minimal jitter.
HPPS requires the structural characteristics of low moving mass and high stiffness. The resulting system characteristic is a high value for the lowest natural frequency of the system. A high natural frequency allows the motion controller to drive the system at high servo bandwidth , which means that the HPPS can reject any motion-disturbing input acting at a frequency below the bandwidth. For higher frequency disturbances such as floor vibration , acoustic noise , motor cogging, bearing jitter and cable carrier rattling, HPPS may employ structural composite materials for damping and isolation mounts for vibration attenuation. Unlike articulated robots, whose links are connected by revolute joints, HPPS links typically consist of sliding joints, which are relatively stiffer than revolute joints. That is why high performance positioning systems are often referred to as cartesian robots .
HPPS, driven by linear motors, can move at a combined high velocity on the order of 3-5 m/s, high accelerations of 5-7 g, at micron or sub-micron positioning accuracy, with settling times on the order of milliseconds and servo bandwidth of 30-50 Hz. Ball screw actuators, by comparison, have a typical bandwidth of 10-20 Hz, and belt-driven actuators about 5-10 Hz. The bandwidth of an HPPS is about 1/3 of the lowest natural frequency, which is in the range of 90-150 Hz. Settling to within +/- 1% of constant velocity, or +/- 1 um of jitter, after high acceleration or high deceleration respectively, takes an estimated 3 bandwidth periods. For example, a 50 Hz servo bandwidth, having a 1 / 50 · 1000 = 20 msec period, will settle to 1 um position accuracy within an estimated 3 · 20 = 60 msec. The lowest natural frequency equals the square root of system stiffness divided by moving inertia; a typical linear recirculating bearing rail of a high performance positioning stage has a stiffness on the order of 100-300 N/um. Such performance is required in semiconductor process equipment, electronics assembly lines , numerically controlled machine tools , coordinate-measuring machines , 3D printing , pick-and-place machines , drug discovery assaying and many more. At their highest performance, HPPS may use a granite base for thermal stability and flat surfaces, air bearings for jitter-free motion, brushless linear motors for non-contact, frictionless actuation with high force and low inertia, and a laser interferometer for sub-micron position feedback. By contrast, a typical 6 degrees of freedom articulated robot with 1 m reach has a structural stiffness on the order of 1 N/um. That is why articulated robots are best employed as automation equipment in processes that require position repeatability on the order of hundreds of microns, such as robot welding , paint robots , palletizers and many more.
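The rules of thumb in the paragraph above chain together into a short calculation. A minimal sketch, assuming the text's own relations (lowest natural frequency = √(stiffness/moving mass), servo bandwidth ≈ 1/3 of that frequency, settling ≈ 3 bandwidth periods); the stiffness and moving-mass values are illustrative:

```python
import math

stiffness = 200e6    # N/m (200 N/um, mid-range of the 100-300 N/um figure)
moving_mass = 350.0  # kg, assumed moving inertia of the stage

# Lowest natural frequency in Hz: f_n = sqrt(k/m) / (2*pi).
f_natural = math.sqrt(stiffness / moving_mass) / (2 * math.pi)

# Rules of thumb from the text: bandwidth ~ f_n / 3, settling ~ 3 periods.
bandwidth = f_natural / 3
settling_ms = 3 * (1 / bandwidth) * 1000

print(f"f_n ~ {f_natural:.0f} Hz, bandwidth ~ {bandwidth:.0f} Hz, "
      f"settling ~ {settling_ms:.0f} ms")   # ~120 Hz, ~40 Hz, ~75 ms
```

These illustrative inputs land squarely in the ranges quoted above: a natural frequency near 120 Hz, a servo bandwidth near 40 Hz, and settling within tens of milliseconds.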
The original HPPS were developed at Anorad Corporation (now Rockwell Automation ) in the 1980s, after the invention of brushless linear motors by Anorad's founder and CEO, Anwar Chitayat . Initially, HPPS were used for high precision manufacturing processes in semiconductor applications such as Applied Materials , PCB inspection at Orbotech , and high velocity machine tools at Ford . [ 1 ] In parallel, linear motor technology and its integration into HPPS expanded around the world. As a result, in 1996 Siemens integrated its CNC with Anorad linear motors to drive a 20 m long maskant machine at Boeing for chemical milling of aircraft wings . [ 2 ] In 1997 FANUC licensed Anorad's linear motor technology and integrated it as a complete solution with their CNC product line. [ 3 ] And in 1998, Rockwell Automation acquired Anorad to compete with Siemens and Fanuc in providing complete linear motor solutions to drive high velocity machine tools in automotive transfer lines . [ 4 ] Today linear motors are used in hundreds of thousands of high performance positioning systems, which drive manufacturing processes around the world. Their market is expected to grow, according to some studies, at 4.4% a year and reach $1.5B in 2025. [ 5 ]
System specification (technical standard) is an official interface between the application requirements (problem), as described by the user (customer) and the design (solution) as optimized by the developer (supplier).
HPPS configuration is typically optimized for maximum structural stiffness with maximum damping and minimum inertia, smallest Abbe error at the point of interest (POI), with minimum components and maximum maintainability.
System analysis is a process of understanding the relationships between design parameters, operating conditions, environmental variables and system performance, based on system modeling and analysis tools.
Component sizing is the process of selecting standard parts from component suppliers, or designing a custom part for manufacturing.
System testing is an iterative process of system development, intended to validate system analysis modeling, proof of concept, the safety factor of performance specifications, and acceptance testing. | https://en.wikipedia.org/wiki/High_performance_positioning_system
In science and engineering the study of high pressure examines its effects on materials and the design and construction of devices, such as a diamond anvil cell , which can create high pressure . High pressure usually means pressures of thousands (kilobars) or millions (megabars) of times atmospheric pressure (about 1 bar or 100,000 Pa).
Percy Williams Bridgman received a Nobel Prize in 1946 for advancing this area of physics by two orders of magnitude in pressure (from 400 MPa to 40 GPa). The list of founding fathers of this field also includes Harry George Drickamer , Tracy Hall , Francis P. Bundy , Leonid F. Vereschagin [ ru ] , and Sergey M. Stishov [ ru ] .
It was by applying high pressure as well as high temperature to carbon that synthetic diamonds were first produced, one of many discoveries in this field. Almost any material when subjected to high pressure will compact itself into a denser form; for example, quartz (also called silica or silicon dioxide ) will first adopt a denser form known as coesite , then upon application of even higher pressure, form stishovite . These two forms of silica were first discovered by high-pressure experimenters, and only later found in nature at the site of a meteor impact.
Chemical bonding is likely to change under high pressure, when the P·V term in the free energy becomes comparable to the energies of typical chemical bonds – i.e. at around 100 GPa. Among the most striking changes are the metallization of oxygen at 96 GPa (rendering oxygen a superconductor), and the transition of sodium from a nearly-free-electron metal to a transparent insulator at ~200 GPa. At sufficiently high compression, however, all materials will metallize . [ 1 ]
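The ~100 GPa threshold can be checked with a one-line estimate. A minimal sketch, assuming a typical atomic volume of about 10 cubic ångströms, a round number chosen purely for illustration:

```python
# When does the P*V term rival a chemical bond energy (a few eV)?
P = 100e9        # pressure, Pa (100 GPa)
V = 10e-30       # assumed typical atomic volume, m^3 (10 cubic angstroms)
EV = 1.602e-19   # joules per electronvolt

pv_ev = P * V / EV
print(f"P*V ~ {pv_ev:.1f} eV per atom")  # ~6 eV, comparable to a covalent bond
```

At 100 GPa the P·V contribution per atom is of the order of a few electronvolts, the same scale as typical covalent bond energies, which is why bonding rearranges around this pressure.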
High-pressure experimentation has led to the discovery of the types of minerals which are believed to exist in the deep mantle of the Earth, such as silicate perovskite , which is thought to make up half of the Earth's bulk, and post-perovskite , which occurs at the core-mantle boundary and explains many anomalies inferred for that region. [ citation needed ]
Pressure "landmarks": typical pressures reached by large-volume presses are up to 30–40 GPa, pressures that can be generated inside diamond anvil cells are ~1000 GPa, [ 2 ] pressure in the center of the Earth is 364 GPa, and highest pressures ever achieved in shock waves are over 100,000 GPa. [ 3 ] | https://en.wikipedia.org/wiki/High_pressure |
A high pressure jet is a stream of pressurized fluid that is released from an environment at a significantly higher pressure than ambient pressure from a nozzle or orifice, due to operational or accidental release . [ 1 ] In the field of safety engineering , the release of toxic and flammable gases has been the subject of many R&D studies because of the major risk they pose to the health and safety of workers, equipment and the environment . [ 2 ] Intentional or accidental releases may occur in industrial settings like natural gas processing plants, oil refineries and hydrogen storage facilities. [ 2 ]
A main focus during a risk assessment process is the estimation of the gas cloud's extension and dissipation , important parameters that allow engineers to evaluate and establish safety limits that must be respected in order to minimize the possible damage after a high pressure release. [ 3 ]
When a pressurized gas is released, the velocity of the flow will heavily depend on the pressure difference between stagnation pressure and downstream pressure. By assuming an isentropic expansion of an ideal gas from its stagnation conditions (P 0 , where the velocity of the gas is zero) to downstream conditions (P 1 , at the exit plane of the nozzle or orifice), the subsonic flow rate of the source term is given by Ramskill's formulation, which in its standard isentropic form reads: [ 4 ]

$$\dot{m} = C_d A P_0 \sqrt{\frac{2\gamma}{\gamma-1}\,\frac{M}{Z R T_0}\left[\left(\frac{P_1}{P_0}\right)^{2/\gamma} - \left(\frac{P_1}{P_0}\right)^{(\gamma+1)/\gamma}\right]}$$
As the ratio between the downstream pressure and the stagnation pressure decreases, the flow rate of the ideal gas will increase. This behavior continues until a critical value is reached (in air, P 1 /P 0 is roughly 0.528, [ 5 ] dependent on the heat capacity ratio , γ), changing the condition of the jet from a non-choked flow to a choked flow . This leads to a newly defined expression for the aforementioned pressure ratio and, subsequently, for the flow rate equation.
The critical value for the pressure ratio is defined as:

$$\left(\frac{P_1}{P_0}\right)_{\mathrm{crit}} = \left(\frac{2}{\gamma+1}\right)^{\gamma/(\gamma-1)}$$

which evaluates to approximately 0.528 for air (γ ≈ 1.4).
This newly defined ratio can then be used to determine the flow rate for a sonic choked flow, which in the standard isentropic form reads:

$$\dot{m} = C_d A P_0 \sqrt{\gamma\,\frac{M}{Z R T_0}\left(\frac{2}{\gamma+1}\right)^{(\gamma+1)/(\gamma-1)}}$$
The flow rate equation for a choked flow has a fixed velocity, which is the speed of sound of the medium, where the Mach number equals 1:

$$v = \sqrt{\frac{\gamma R T_1}{M}}$$
It is important to note that if P 1 keeps decreasing, no change in flow rate will occur once the ratio is already below the critical value, unless P 0 also changes (assuming that the orifice/nozzle exit area and upstream temperature stay the same).
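A minimal calculator for the relations above, written for an ideal gas in the standard isentropic form reconstructed here; the orifice size, discharge coefficient and stagnation conditions in the example are illustrative assumptions:

```python
import math

R = 8.314  # J/(mol K), universal gas constant

def mass_flow(P0, T0, P_amb, M, gamma, area, Cd=0.85, Z=1.0):
    """Ideal-gas isentropic orifice flow; returns (regime, kg/s)."""
    crit = (2 / (gamma + 1)) ** (gamma / (gamma - 1))  # critical pressure ratio
    ratio = P_amb / P0
    if ratio <= crit:
        # Choked (sonic) flow: flux is fixed by P0 and T0 alone.
        flux = P0 * math.sqrt(gamma * M / (Z * R * T0)
                              * (2 / (gamma + 1)) ** ((gamma + 1) / (gamma - 1)))
        return "choked", Cd * area * flux
    # Subsonic flow: depends on the actual downstream pressure ratio.
    term = ratio ** (2 / gamma) - ratio ** ((gamma + 1) / gamma)
    flux = P0 * math.sqrt(2 * gamma / (gamma - 1) * M / (Z * R * T0) * term)
    return "subsonic", Cd * area * flux

# Illustrative example: methane at 10 bar, 288 K, 10 mm hole, into 1 atm air.
area = math.pi * (0.010 / 2) ** 2
print(mass_flow(P0=10e5, T0=288.0, P_amb=1.013e5, M=0.016, gamma=1.32, area=area))
```

For these inputs the pressure ratio (≈0.10) is well below the critical value (≈0.54 for methane), so the release is choked and lowering the ambient pressure further would not change the flow rate, exactly as stated above.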
An under-expanded jet is one that manifests when the pressure at downstream conditions (at the end of a nozzle or orifice) is greater than the pressure of the environment into which the gas is released. It is said to be under-expanded since the gas will continue to expand, trying to reach the same pressure as its surroundings. When under-expanded, the jet will have the characteristics of a compressible flow , a condition in which pressure variations are significant enough to have a strong effect on the velocity (which can exceed the speed of sound of the gas), density and temperature. [ 6 ] It is important to note that as the jet expands and entrains gas from the surrounding medium, it behaves more and more like an incompressible fluid , allowing the general structure of a jet to be described as a compressible nearfield zone close to the exit plane, followed by a transition region and an essentially incompressible farfield zone. [ 1 ]
Further classification of the jet can be related to how the nearfield zone develops due to the compressible effects that govern it. [ 1 ] When the jet first exits the orifice or nozzle, it will expand very quickly, resulting in an over-expansion of the flow (which also reduces the temperature and density of the flow as quickly as it depressurizes). Gas that has expanded to a pressure lower than that of the surrounding fluid will be compressed inwards, causing an increase in the pressure of the flow. If this re-compression leads to the fluid having a higher pressure than the surrounding fluid, another expansion will occur.
This process repeats until the pressure difference between the ambient pressure and the jet pressure is null (or close to null). [ 7 ] Compression and expansion are accomplished through a series of shock waves , formed as a result of Prandtl-Meyer expansion and compression waves. [ 8 ]
Development of the aforementioned shock waves is related to the difference in pressure between the stagnation or downstream conditions and the ambient conditions (η 0 = P 0 /P amb and η e = P 1 /P amb , respectively), as well as the Mach number (Ma = V/V c , where V is the velocity of the flow and V c is the speed of sound of the medium). With varying pressure ratios, under-expanded jets are commonly classified as moderately, highly, or extremely under-expanded, in order of increasing η e : [ 1 ]
Among accidental scenarios, natural gas releases have become particularly relevant within the process industry. [ 3 ] With a typical composition of about 94.7% methane , [ 9 ] it is important to consider the damage this gas can cause when it is released. Methane is a non-toxic, flammable gas that, at higher concentrations, can behave as an asphyxiant by displacing oxygen from the lungs. [ 10 ] The main concern with methane is its flammability and the potential damage to the surroundings if the high pressure jet were to ignite into a jet fire . [ 11 ]
Three parameters that must be considered when dealing with flammable gases are the flash point (FP), the upper flammability limit (UFL) and the lower flammability limit (LFL), as they are set values for any compound at a specific pressure and temperature. In the fire triangle model, three components are needed to induce a combustion reaction: a fuel , an oxidizing agent and heat .
When the release happens into an environment filled with air , the oxidizing agent will be oxygen (present at a roughly constant concentration of 21% in air at standard conditions). [ 12 ] A few centimeters from the exit plane, the concentration of natural gas is too high, and that of oxygen too low, to support any combustion reaction; but as the high pressure jet develops, its components dilute as air entrainment increases, enriching the jet with oxygen. Assuming a constant oxygen concentration in the surrounding air, the jet must dilute enough to fall below its UFL and enter its flammability range. Within this range, a flammable mixture exists and any source of heat can initiate the reaction. [ 13 ]
To properly judge the damage and potential risk that a jet fire can generate, several studies have examined the maximum distance that the cloud generated by the jet can reach. As dilution of the jet continues due to air entrainment in the farfield, dropping below the UFL, the maximum distance that the flammable mixture can reach is the point at which the concentration of the cloud equals the LFL of the gas, as this is the lowest concentration that permits the formation of a flammable mixture between air and natural gas at standard conditions (the LFL for natural gas is 4% [ 9 ] ).
Considering a free jet at sub-critical pressure (beyond the nearfield zone), the mean volume-fraction axial concentration decay of any gas released in air can be written, in the hyperbolic form commonly used for momentum-dominated jets (with K an empirical decay constant, d the source diameter and z the axial distance): [ 14 ]

$$\frac{C(z)}{C_0} = K\,\frac{d}{z}\sqrt{\frac{\rho_e}{\rho_a}}$$
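A minimal sketch inverting that decay law to find the distance at which the centreline concentration falls to the LFL; the decay constant K = 5 and the density values are illustrative assumptions:

```python
import math

# Assumed axial decay law: C(z)/C0 = K * (d/z) * sqrt(rho_gas / rho_air).
K = 5.0         # assumed empirical decay constant
d = 0.010       # source (or pseudo-source) diameter, m
rho_gas = 0.68  # methane density at ambient conditions, kg/m^3
rho_air = 1.20  # air density at ambient conditions, kg/m^3
C_LFL = 0.04    # lower flammability limit of natural gas (4 vol%)

# Solve C(z)/C0 = C_LFL for z, with pure gas at the source (C0 = 1).
z_lfl = K * d * math.sqrt(rho_gas / rho_air) / C_LFL
print(f"Centreline LFL distance ~ {z_lfl:.2f} m")  # ~0.94 m for these inputs
```

Note how the hazard distance scales linearly with the source diameter and inversely with the LFL, which is why the pseudo-diameter discussed below matters so much for under-expanded releases.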
Experiments on high pressure jets have to be limited in the size and complexity of the scenario due to the inherent dangers and expense of the experiments themselves. Alternative methods of gathering data, such as representative models , can be used to predict the maximum extent that the gas cloud at its LFL concentration can reach. Simpler models, such as a Gaussian gas dispersion model (e.g., SCREEN3) or an integral model (e.g., PHAST), can be useful for a quick, qualitative overview of how the jet may extend. However, their inability to properly simulate jet-obstacle interactions makes them unsuitable beyond preliminary calculations. This is why Computational Fluid Dynamics (CFD) simulations are generally preferred for more complex scenarios. [ 15 ]
Although several approaches to CFD simulation exist, a common one is the finite volume method , which discretizes the volume into smaller cells of varying shapes. Each cell represents a fluid-filled volume to which the scenario's parameters are applied. Each modeled cell solves a set of conservation equations of mass , momentum and energy , along with the continuity equation . Fluid-obstacle interaction is then modeled with varying algorithms based on the turbulence closure model used. [ 16 ] The more cells in the volume, the better the quality of the simulation, but the longer the simulation time. Convergence problems can arise where large momentum, mass and energy gradients appear in the volume. The regions where these problems are expected (like the nearfield zone of the jet) need a higher number of cells to achieve gradual changes from one cell to the next. Ideally, through CFD simulations, a simpler model can be derived which, for a specific set of scenarios, gives results with accuracy and precision similar to the CFD simulation itself. [ 17 ]
Through a set of small scale experiments at varying pressures, Birch et al. formulated an equation that allows the estimation of a virtual surface source, based on the conservation of mass between the exit plane of the orifice and the virtual surface. [ 18 ] This approach makes it possible to simulate a compressible, under-expanded jet as an incompressible, fully-expanded jet. As a consequence, a simpler CFD model can be simulated by using the resulting diameter (named the pseudo-diameter ) as the new exit plane; under the common simplifying assumptions of isothermal expansion and unchanged velocity between the two planes, it takes the form: [ 19 ]

$$d_{ps} = d\,\sqrt{C_d\,\frac{P_1}{P_{amb}}}$$
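A minimal sketch of that substitution; the discharge coefficient and the isothermal, equal-velocity assumptions are the Birch-style simplifications named above, not the full published model:

```python
import math

def pseudo_diameter(d_exit, P_exit, P_amb, Cd=0.85):
    """Notional-source diameter from mass conservation between the exit
    plane and a fully expanded plane at ambient pressure, assuming an
    ideal gas, isothermal expansion and equal velocity at both planes."""
    return d_exit * math.sqrt(Cd * P_exit / P_amb)

# Example: sonic exit at 5 bar through a 10 mm orifice into 1 atm.
d_ps = pseudo_diameter(0.010, 5e5, 1.013e5)
print(f"Pseudo-diameter ~ {d_ps*1000:.1f} mm")  # ~20.5 mm
```

Feeding this enlarged diameter into an incompressible jet model (such as the decay law of the previous section) sidesteps the need to resolve the shock-cell structure of the nearfield.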
In the process industry, a variety of scenarios exist in which a high pressure jet release incident can occur. Leakage from LNG storage facilities or NG pipeline systems [ 20 ] can escalate into a jet fire and, through a domino effect , cause heavy damage to the workforce, equipment and surrounding environment. For the different scenarios that may happen, safety protocols have to be engineered that set minimum distances between equipment and the workforce, along with preventive systems that reduce the danger of the potential incident. The following are some of the most common scenarios that may be encountered in an industrial environment: [ 19 ] [ 21 ] [ 22 ] | https://en.wikipedia.org/wiki/High_pressure_jet
High production volume chemicals ( HPV chemicals ) are produced or imported into the United States in quantities of at least 1 million pounds (about 500 short tons) per year. [ 1 ] In OECD countries , HPV chemicals are defined as those produced at levels greater than 1,000 metric tons per producer/importer per year in at least one member country/region. [ 2 ] A list of HPV chemicals serves as an overall priority list, from which chemicals are selected to gather data for a screening information dataset (SIDS), for testing and for initial hazard assessment.
In 1987, member countries of the Organisation for Economic Co-operation and Development decided to investigate existing chemicals. In 1991, they agreed to begin by focusing on high production volume (HPV) chemicals, where production volume was used as a surrogate for data on occupational, consumer, and environmental exposure . [ 3 ] Each country agreed to "sponsor" the assessment of a proportion of the HPV chemicals. Countries also agreed on a minimum set of required information, the screening information dataset (SIDS). The six tests are: acute toxicity , chronic toxicity , developmental toxicity / reproductive toxicity , mutagenicity , ecotoxicity and environmental fate. Using SIDS and detailed exposure data, OECD's High Production Volume Chemicals Programme conducted initial risk assessments to screen chemicals and to identify any need for further work.
During the late 1990s, OECD member countries began to assess chemical categories and to use quantitative structure–activity relationship (QSAR) results to create OECD guidance documents, as well as a computerized QSAR toolbox. [ 4 ] In 1998, the global chemical industry , organized in the International Council of Chemical Associations (ICCA) initiative, offered to join OECD efforts. The ICCA promised to sponsor by 2013 about 1,000 substances from the OECD's HPV chemicals list "to establish as priorities for investigation", based on "presumed wide dispersive use, production in two or more global regions or similarity to another chemical, which met either of these criteria". OECD in turn agreed to refocus and to "increase transparency, efficiency and productivity and allow longer-term planning for governments and industry". The OECD refocus was on initial hazard assessments of HPV chemicals only, and no longer extensive exposure information gathering and evaluation. Detailed exposure assessments within national (or regional) programmes and priority setting activities were postponed as post-SIDS work.
On October 9, 1998, EPA Administrator Carol Browner sent letters to the CEOs of more than 900 chemical companies that manufacture HPV chemicals, asking them to participate in EPA's voluntary testing initiative, the so-called "HPV Challenge Program". The Environmental Defense Fund , the American Petroleum Institute , and American Chemistry Council joined in the effort. [ 5 ]
The OECD list of HPV chemicals keeps changing. A 2004 list of 143 pages contained 4,842 entries. [ 6 ] A 2007 list was published in 2009. [ 7 ]
As of 2009 the EPA's HPV list had 2,539 chemicals, while the HPV Challenge Program chemical list contained only 1,973 chemicals because inorganic chemicals and polymers were not included. [ 8 ]
The EPA has published an online list of HPV chemicals since 2010. The list is unnumbered and has no footnotes. [ 1 ]
The "Strategic Approach to International Chemicals Management" ( SAICM ) is a policy for achieving safe production and use of chemicals worldwide by 2020, developed with stakeholders from more than 140 countries, signed by 100 governments, adopted by the UNEP Governing Council in February 2006.
The Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) proposal and the European Chemicals Agency will help the EU to fulfill objectives of SAICM. [ 9 ]
The Stockholm Convention on Persistent Organic Pollutants has aimed to control production, use, trade, disposal and release of twelve Persistent organic pollutants (POPs); the European Community has proposed five additional chemicals. The Convention bans deliberate production and use of POPs, bans the development of new POPs, and aims at minimizing releases of unintentionally produced POPs. The Convention has so far been ratified by the European Community, 18 member states and the two accession countries.
The 1976 Toxic Substances Control Act requires the EPA to "compile, keep current, and publish a list of each chemical substance that is manufactured or processed in the United States". In 1998, the EPA reported the most heavily used HPV chemicals in commerce were largely untested: 43% of 2,800 HPV chemicals had no basic toxicity data or screening level data at all, 50% had incomplete screening data, and only 7% of the HPV chemicals had a complete set of screening level toxicity data. However, screening level data, even if they indicated a problem, were not sufficient to restrict the use of a compound. [ 10 ] In 1986, 2003, 2005, and in 2011 EPA issued regulations to amend and update the TSCA inventory.
As of April 2010, about 84,000 chemicals were on the TSCA inventory, per a GAO report. [ 11 ] TSCA Section 4 gives EPA the authority to demand chemical testing. [ 12 ]
In 1982, U.S. manufacturers, processors, and importers of 75 chemicals that the International Agency for Research on Cancer had found to cause cancers in animals, but whose carcinogenicity in humans was uncertain, were surveyed. Epidemiologic studies on human health had been completed or were in progress for only 13 of the 75 chemicals. Eighteen of the 75 were HPV chemicals, and epidemiologic studies had been completed or were in progress for only eight of them. The largest number of chemicals (19) were drugs, none of which had been epidemiologically studied. Seven chemicals that had been studied were used as pesticides. [ 13 ]
In 1997 the Environmental Defense Fund reported in “Toxic Ignorance” the results of its analysis of the availability of basic health test data on HPV chemicals: only 29% of the HPV chemicals in the US met minimum data requirements. [ 14 ] In 1998 the EPA published a report, Chemical Hazard Data Availability Study, showing "55% of TRI chemicals have had full SIDS testing, while only 7% of other chemicals have full test data". [ 15 ] They wrote
"...of the 830 companies making HPV chemicals in the US, 148 companies have NO SIDS data available on their chemicals; an additional 459 companies sell products for which, on average, half or less of SIDS tests are available. Only 21 companies (or 3% of the 830 companies) have all SIDS tests available for their chemicals. The basic set of test data costs about $200,000 per chemical."
In 1999, the European Union (EU) published a study about how many EU-HPV chemicals were publicly available in a comprehensive chemical data base called IUCLID : Only 14% of the EU-HPV chemicals had data at the level of the base-set, 65% had less than base-set, and 21% had no data available. The authors concluded, "more data [were] publicly available than most previous studies" had shown. [ 16 ]
In 2004, one of the partners in EPA's HPV Challenge Program assessed 532 up to then unsponsored chemicals, whether they were "orphaned" or not, and found:
Since 2009, the EPA has required companies to perform toxicity testing on merely 34 chemicals. In 2011, the EPA announced, but as of 2013 had yet to finalize, plans to require testing for 23 additional chemicals, for a total of 57. The EPA has prioritized 83 chemicals for risk assessment , and initiated seven assessments in 2012, with plans to start 18 additional assessments in 2013 and 2014. [ 11 ] In 2007, EPA began ToxCast , which uses "automated chemical screening technologies (called "high-throughput screening assays") to expose living cells or isolated proteins to chemicals". [ 18 ] [ 19 ]
In 2009, EPA reported that it had developed a system called ACToR (Aggregated Computational Toxicology Resource), which pooled chemical research, data and screening tools from multiple federal agencies, including the National Toxicology Program/ National Institute of Environmental Health Sciences, the National Center for Advancing Translational Sciences and the Food and Drug Administration. | https://en.wikipedia.org/wiki/High_production_volume_chemicals
A high-resistance connection ( HRC ) is a hazard that results from loose or poor connections in traditional electrical accessories and switchgear which can cause heat to develop, capable of starting a fire. [ 1 ]
Glowing connections occur when relatively high current flows through a relatively high-resistance junction. The heat comes from power dissipation . This energy, when dissipated in a small junction area, can generate temperatures above 1000 °C (1800 °F) and can ignite most flammable materials. [ 2 ]
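The heating mechanism is ordinary Joule dissipation concentrated in a very small junction. A minimal sketch; the current and resistance values are illustrative of a loose terminal on a fully loaded household branch circuit:

```python
# Joule heating in a bad connection: P = I^2 * R.
current = 15.0    # A, a typical fully loaded branch circuit
resistance = 0.5  # ohm, assumed resistance of a loose or oxidised junction

power = current ** 2 * resistance
print(f"Dissipated in the junction: {power:.0f} W")  # ~113 W
```

Roughly a hundred watts released in a contact spot of a few square millimetres readily drives it incandescent, whereas the same 0.5 Ω distributed along metres of cable would cause no noticeable heating.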
An example extract from the National Union of Teachers (NUT) Fire Safety Brief: [ 3 ]
Bad wiring junctions can occur in equipment, cords, or in-situ wiring and especially in a defective switch, socket, plug, wiring connection and even at the circuit breaker or fuse panels. Terminal screws loosened by vibration, improper tightening or other causes offer increased resistance to the current, with consequent heating and potential thermal creep , which will cause the termination to loosen further and exacerbate the heating effect. In North America, high resistance junctions are sometimes observed at the terminations of aluminum wire circuits, where oxidation has caused increased resistance, resulting in thermal creep. No technology located in a circuit breaker or fuse panel could detect a high-resistance wiring fault as no measurable characteristic exists that differentiates a glow fault from normal branch circuit operation. Power fault circuit interrupters ( PFCI ) located in receptacles are designed to prevent fires caused by glowing connections in wiring or panels. [ citation needed ] From the receptacle a PFCI can detect the voltage drop when high current exists in a high resistance junction. In a properly designed and maintained circuit, substantial voltage drops should never occur. [ citation needed ] Proper wire terminations inside equipment, such as appliances, and cords prevent high-resistance connections that could lead to fires.
Thermal monitoring of the connection, with an HRC device placed close to the probable location where a fault may develop, is key to providing early warning or isolation to reduce the risk of fire.
Safety devices such as fuses and residual-current devices (RCDs) are unable to detect the thermal rise and disconnect the electrical supply, because they cannot sense an HRC. A safety device [ 4 ] to prevent HRCs operates by monitoring for the abnormal thermal rise and will act before ignition, smoke or a burning odour develops in the electrical accessory or installation. | https://en.wikipedia.org/wiki/High_resistance_connection
High technology ( high tech or high-tech ), also known as advanced technology ( advanced tech ) or exotechnology , [ 1 ] [ failed verification ] is technology that is at the cutting edge : the highest form of technology available. [ 2 ] It can be defined as either the most complex or the newest technology on the market. [ 3 ] The opposite of high tech is low technology , referring to simple, often traditional or mechanical technology; for example, a slide rule is a low-tech calculating device. [ 4 ] [ 5 ] [ 6 ] When high tech becomes old, it becomes low tech, as with vacuum tube electronics. Further, high tech is related to the concept of mid-tech, that is, a balance between the two opposite extremes of low tech and high tech. Mid-tech could be understood as an inclusive middle that combines the efficiency and versatility of digital/automated technology with low tech's potential for autonomy and resilience. [ 7 ]
Startups working on high technologies (or developing new high technologies) are sometimes referred to as deep tech ; the term may also refer to disruptive innovations or those based on scientific discoveries. [ 8 ]
High tech, as opposed to high-touch , may refer to self-service experiences that do not require human interaction. [ 9 ]
The phrase was used in a 1958 The New York Times story advocating " atomic energy " for Europe: "... Western Europe, with its dense population and its high technology ...." [ 10 ] Robert Metz used the term in a financial column in 1969, saying Arthur H. Collins of Collins Radio "controls a score of high technology patents in a variety of fields" [ 11 ] and in a 1971 article used the abbreviated form, "high tech". [ 12 ]
A widely used classification of high-technological industries was provided by the OECD in 2006. [ 13 ] It is based on the intensity of research and development activities used in these industries within OECD countries, resulting in four distinct categories. [ 14 ]
In the 21st century, the high tech industry is a significant part of several advanced economies. [ 15 ] The Israeli economy has the highest ratio in the world, with the high tech sector accounting for 20% of the economy. High tech makes up 9.3% of the American economy according to Statista [ 16 ] and CTech . [ 17 ]
Multiple cities and hubs have been described as global startup ecosystems . GSER publishes a yearly ranking of global startup ecosystems. [ 18 ] [ 19 ] The study does yearly reports ranking the top 40 global startup hubs. [ 20 ]
The following is a list of the 15 largest exporting countries of high tech products by value in millions of United States dollars , according to the United Nations . [ 21 ] | https://en.wikipedia.org/wiki/High_tech |
High temperature hydrogen attack (HTHA) , also called hot hydrogen attack or methane reaction , is a problem which concerns steels operating at elevated temperatures (typically above 400 °C (752 °F)) in hydrogen-rich atmospheres, such as refineries , petrochemical and other chemical facilities and, possibly, high pressure steam boilers . It is not to be confused with hydrogen embrittlement . [ 1 ]
If a steel is exposed to very hot hydrogen , the high temperature enables the hydrogen molecules to dissociate and to diffuse into the alloy as individual diffusible atoms. There are two stages to the damage: first, the dissolved atomic hydrogen reacts with carbon in the steel (typically present as iron carbides) to form methane; second, the methane molecules, being too large to diffuse out of the metal, accumulate at grain boundaries and inclusions, building up internal pressure that nucleates cavities and fissures.
HTHA can be managed by using a different steel alloy, one in which the carbides formed with other alloying elements, such as chromium and molybdenum , are more stable than iron carbides. [ 4 ] Surface oxide layers are ineffective as protection, as they are immediately reduced by the hydrogen, forming water vapour.
Later-stage damage in the steel component can be seen using ultrasonic examination, which detects the large defects created by methane pressure. [ 5 ] [ 4 ] These large defects in a stressed component are usually the cause of failure in service, which is usually catastrophic as hot, flammable hydrogen gas escapes rapidly.
| https://en.wikipedia.org/wiki/High_temperature_hydrogen_attack
High time-resolution astrophysics (HTRA) is a branch of astronomy / astrophysics concerned with measuring and studying astronomical phenomena on time scales of 1 second and smaller. This branch of astronomy has developed alongside higher-efficiency detectors and larger telescopes, which collect more photons per second, and better computers to store and analyse the vast amounts of data acquired in one night.
Previously known objects such as gamma-ray burst optical transients and pulsars now fall into this category, although this relatively new field is concentrated in the optical/ infrared regime, and firm limits on what qualifies as high time-resolution have yet to be set. | https://en.wikipedia.org/wiki/High_time-resolution_astrophysics
High viscosity mixers are mixers designed for mixing materials with laminar mixing processes because the ingredients have such high viscosities that a turbulent mixing phase cannot be obtained at all or cannot be obtained without a high amount of heat. The process can be used for high viscosity liquid to liquid mixing or for paste mixing combining liquid and solid ingredients. Some products that may require laminar mixing in a high viscosity mixer include putties , chewing gum , and soaps . [ 1 ] The end product usually starts at several hundred thousand centipoise and can reach as high as several million centipoise.
Typical mixers used for this purpose are of the Double Arm, Double Planetary or Planetary Disperser design. Models are built to include many features such as vacuum and jacketing to remove air and to control the temperature of the mixture. Capacities are available from 1/2 pint to several thousand gallons. [ 2 ] | https://en.wikipedia.org/wiki/High_viscosity_mixer |
High voltage electricity refers to electrical potential large enough to cause injury or damage. In certain industries, high voltage refers to voltage above a certain threshold. Equipment and conductors that carry high voltage warrant special safety requirements and procedures .
High voltage is used in electrical power distribution , in cathode-ray tubes , to generate X-rays and particle beams , to produce electrical arcs , for ignition, in photomultiplier tubes , and in high-power amplifier vacuum tubes , as well as other industrial, military and scientific applications.
The numerical definition of high voltage depends on context. Two factors considered in classifying a voltage as high voltage are the possibility of causing a spark in air, and the danger of electric shock by contact or proximity.
The International Electrotechnical Commission and its national counterparts ( IET , IEEE , VDE , etc.) define high voltage as above 1000 V for alternating current , and at least 1500 V for direct current . [ 1 ]
In the United States, the American National Standards Institute (ANSI) establishes nominal voltage ratings for 60 Hz electric power systems over 100 V. Specifically, ANSI C84.1-2020 defines high voltage as 115 kV to 230 kV, extra-high voltage as 345 kV to 765 kV, and ultra-high voltage as 1,100 kV. [ 2 ] British Standard BS 7671 :2008 defines high voltage as any voltage difference between conductors that is higher than 1000 VAC or 1500 V ripple-free DC, or any voltage difference between a conductor and Earth that is higher than 600 VAC or 900 V ripple-free DC. [ 3 ]
Electricians may only be licensed for particular voltage classes in some jurisdictions. [ 4 ] For example, an electrical license for a specialized sub-trade such as installation of HVAC systems, fire alarm systems, closed-circuit-television systems may be authorized to install systems energized up to only 30 volts between conductors, and may not be permitted to work on mains-voltage circuits. The general public may consider household mains circuits (100 to 250 VAC), which carry the highest voltages they normally encounter, to be high voltage .
Voltages over approximately 50 volts can usually cause dangerous amounts of current to flow through a human being who touches two points of a circuit, so safety standards are more restrictive around such circuits.
In automotive engineering , high voltage is defined as voltage in range 30 to 1000 VAC or 60 to 1500 VDC. [ 5 ]
The definition of extra-high voltage (EHV) again depends on context. In electric power transmission engineering, EHV is classified as voltages in the range of 345,000– 765,000 V. [ 6 ] In electronics systems, a power supply that provides greater than 275,000 volts is called an EHV Power Supply , and is often used in experiments in physics. The accelerating voltage for a television cathode ray tube may be described as extra-high voltage or extra-high tension (EHT), compared to other voltage supplies within the equipment. This type of supply ranges from 5 kV to about 30 kV.
The Unicode text character representing "high voltage" is U+26A1, the symbol "⚡︎" .
The common static electric sparks seen under low-humidity conditions always involve voltage well above 700 V. For example, sparks to car doors in winter can involve voltages as high as 20,000 V. [ 7 ]
Electrostatic generators such as Van de Graaff generators and Wimshurst machines can produce voltages approaching one million volts, but only at very small currents, so their discharges typically don't last long enough to cause damage. Induction coils operate on the flyback effect, producing voltages greater than the turns ratio multiplied by the input voltage. They typically produce higher currents than electrostatic machines, but each doubling of desired output voltage roughly doubles the weight due to the amount of wire required in the secondary winding. Thus scaling them to higher voltages by adding more turns of wire can become impractical. The Cockcroft-Walton multiplier can be used to multiply the voltage produced by an induction coil. It generates DC using diode switches to charge a ladder of capacitors. Tesla coils utilize resonance, are lightweight, and do not require semiconductors.
The largest scale sparks are those produced naturally by lightning . An average bolt of negative lightning carries a current of 30 to 50 kiloamperes, transfers a charge of 5 coulombs , and dissipates 500 megajoules of energy (120 kg TNT equivalent , or enough to light a 100-watt light bulb for approximately 2 months). However, an average bolt of positive lightning (from the top of a thunderstorm) may carry a current of 300 to 500 kiloamperes, transfer a charge of up to 300 coulombs, have a potential difference up to 1 gigavolt (a billion volts), and may dissipate 300 GJ of energy (72 tons TNT, or enough energy to light a 100-watt light bulb for up to 95 years). A negative lightning strike typically lasts for only tens of microseconds, but multiple strikes are common. A positive lightning strike is typically a single event, but the larger peak current may flow for hundreds of milliseconds, making it considerably more energetic than negative lightning.
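The light-bulb comparisons above are straightforward energy-over-power arithmetic, easy to verify:

```python
# How long would a 100 W bulb run on a lightning bolt's energy?
SECONDS_PER_DAY = 86_400
BULB_WATTS = 100.0

for label, energy_j in [("negative bolt", 500e6), ("positive bolt", 300e9)]:
    seconds = energy_j / BULB_WATTS   # time = energy / power
    print(f"{label}: {seconds / SECONDS_PER_DAY:,.0f} days "
          f"(~{seconds / (365.25 * SECONDS_PER_DAY):.1f} years)")
```

This reproduces the quoted figures: about 58 days (roughly two months) for an average negative bolt and about 95 years for an energetic positive one.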
The dielectric breakdown strength of dry air, at Standard Temperature and Pressure (STP), between spherical electrodes is approximately 33 kV/cm. [ 8 ] This is only a rough guide, since the actual breakdown voltage is highly dependent upon the electrode shape and size. Strong electric fields (from high voltages applied to small or pointed conductors) often produce violet-colored corona discharges in air, as well as visible sparks. Voltages below about 500–700 volts cannot produce easily visible sparks or glows in air at atmospheric pressure, so by this rule these voltages are "low". However, under conditions of low atmospheric pressure (such as in high-altitude aircraft ), or in an environment of noble gas such as argon or neon , sparks appear at much lower voltages. 500 to 700 volts is not a fixed minimum for producing spark breakdown, but it is a rule-of-thumb. For air at STP, the minimum sparkover voltage is around 327 volts, as noted by Friedrich Paschen . [ 9 ]
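Paschen's observation can be reproduced numerically. A minimal sketch of the Paschen-law breakdown voltage for air; the constants A and B and the secondary-emission coefficient are textbook approximations that vary between sources, so the computed minimum lands near, not exactly on, the measured 327 V:

```python
import math

# Paschen's law: V = B*p*d / (ln(A*p*d) - ln(ln(1 + 1/gamma_se)))
A = 15.0         # 1/(cm*Torr), approximate ionisation constant for air
B = 365.0        # V/(cm*Torr), approximate value for air
GAMMA_SE = 0.01  # assumed secondary-electron emission coefficient

def breakdown_voltage(pd_torr_cm):
    denom = math.log(A * pd_torr_cm) - math.log(math.log(1 + 1 / GAMMA_SE))
    return B * pd_torr_cm / denom if denom > 0 else float("inf")

# Sweep p*d to locate the Paschen minimum.
candidates = [x / 100 for x in range(20, 500)]   # 0.20 to 4.99 Torr*cm
v_min, pd_min = min((breakdown_voltage(pd), pd) for pd in candidates)
print(f"Minimum ~ {v_min:.0f} V at p*d ~ {pd_min:.2f} Torr*cm")
```

The curve also shows why sparks appear at much lower voltages at reduced pressure: lowering the gas pressure moves p·d from large atmospheric values down toward the minimum, reducing the voltage required, as noted above for high-altitude aircraft.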
While lower voltages do not, in general, jump a gap that is present before the voltage is applied, interrupting an existing current flow with a gap often produces a low-voltage spark or arc . As the contacts are separated, a few small points of contact become the last to separate. The current becomes constricted to these small hot spots , causing them to become incandescent, so that they emit electrons (through thermionic emission ). Even a small 9 V battery can spark noticeably by this mechanism in a darkened room. The ionized air and metal vapour (from the contacts) form plasma , which temporarily bridges the widening gap. If the power supply and load allow sufficient current to flow, a self-sustaining arc may form. Once formed, an arc may be extended to a significant length before breaking the circuit. Attempting to open an inductive circuit often forms an arc, since the inductance provides a high-voltage pulse whenever the current is interrupted. AC systems make sustained arcing somewhat less likely, since the current returns to zero twice per cycle. The arc is extinguished every time the current goes through a zero crossing , and must reignite during the next half-cycle to maintain the arc.
Unlike an ohmic conductor, the resistance of an arc decreases as the current increases. This makes unintentional arcs in an electrical apparatus dangerous since even a small arc can grow large enough to damage equipment and start fires if sufficient current is available. Intentionally produced arcs, such as used in lighting or welding , require some element in the circuit to stabilize the arc's current/voltage characteristics.
Electrical transmission and distribution lines for electric power typically use voltages between tens and hundreds of kilovolts. The lines may be overhead or underground. High voltage is used in power transmission to reduce ohmic losses when transporting electricity over long distances.
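The loss reduction is simple to quantify. A minimal sketch, assuming a fixed line resistance and a fixed power delivered; the figures are illustrative, and the power factor is ignored for simplicity:

```python
# Ohmic line loss for the same delivered power at different voltages.
power = 100e6    # W, power to deliver (100 MW)
line_res = 10.0  # ohm, assumed total line resistance

for volts in (115e3, 345e3, 765e3):
    current = power / volts          # I = P / V
    loss = current ** 2 * line_res   # P_loss = I^2 * R
    print(f"{volts/1e3:>4.0f} kV: I = {current:7,.0f} A, "
          f"loss = {loss/1e6:5.2f} MW ({100*loss/power:.2f}%)")
```

Because the loss falls with the square of the current, raising the voltage from 115 kV to 345 kV cuts the ohmic loss by roughly a factor of nine for the same delivered power.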
High voltage is used in the production of semiconductors to sputter thin metal films onto the surface of the wafer . It is also used for electrostatic flocking , coating objects with small fibers that stand on end.
Spark gaps were used historically as an early form of radio transmission. Similarly, lightning discharges in the atmosphere of Jupiter are thought to be the source of the planet's powerful radio frequency emissions. [ 10 ]
High voltages have been used in landmark chemistry and particle physics experiments and discoveries. Electric arcs were used in the isolation and discovery of the element argon from atmospheric air. Induction coils powered early X-ray tubes. Moseley used an X-ray tube to determine the atomic numbers of a selection of metallic elements from the spectra they emitted when used as anodes. High voltage is used for generating electron beams for microscopy . Cockcroft and Walton invented the voltage multiplier to transmute lithium atoms in lithium oxide into helium by accelerating hydrogen atoms.
Voltages greater than 50 V applied across dry unbroken human skin can cause heart fibrillation if they produce electric currents in body tissues that happen to pass through the chest area. The voltage at which there is the danger of electrocution depends on the electrical conductivity of dry human skin. Living human tissue can be protected from damage by the insulating characteristics of dry skin up to around 50 volts. If the same skin becomes wet, if there are wounds, or if the voltage is applied to electrodes that penetrate the skin, then even voltage sources below 40 V can be lethal.
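The 50 V rule of thumb follows from Ohm's law and representative body resistances. A minimal sketch; the resistance figures are rough orders of magnitude, not medical data for any individual:

```python
# I = V / R: hand-to-hand current for assumed skin conditions.
VOLTAGE = 50.0                       # V, the threshold discussed above
scenarios = {"dry skin": 100_000.0,  # ohm, rough order of magnitude
             "wet skin": 1_000.0}    # ohm, far lower once skin is wet

for label, resistance in scenarios.items():
    current_ma = VOLTAGE / resistance * 1000
    print(f"{label}: ~{current_ma:.1f} mA")
```

Dry skin keeps the current at a harmless fraction of a milliampere, while wet skin allows tens of milliamperes, the regime in which sustained current across the chest can trigger fibrillation; this is why wet or broken skin makes even modest voltages dangerous.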
Accidental contact with any high voltage supplying sufficient energy may result in severe injury or death. This can occur as a person's body provides a path for current flow, causing tissue damage and heart failure. Other injuries can include burns from the arc generated by the accidental contact. These burns can be especially dangerous if the victim's airway is affected. Injuries may also be suffered as a result of the physical forces experienced by people who fall from a great height or are thrown a considerable distance.
Low-energy exposure to high voltage may be harmless, such as the spark produced in a dry climate when touching a doorknob after walking across a carpeted floor. The voltage can be in the thousand-volt range, but the average current is low.
The standard precautions to avoid injury include working under conditions that prevent electrical energy from flowing through the body, particularly through the heart region, such as between the arms or between an arm and a leg. Electricity can flow between two conductors in high voltage equipment, and the body can complete the circuit. To prevent this, the worker should wear insulating clothing such as rubber gloves, use insulated tools, and avoid touching the equipment with more than one hand at a time. An electrical current can also flow between the equipment and earth ground. To prevent that, the worker should stand on an insulated surface such as a rubber mat. Safety equipment is tested regularly to ensure it is still protecting the user. Test regulations vary according to country. Testing companies can test at up to 300,000 volts and offer services from glove testing to elevated working platform (EWP) testing.
Contact with or close approach to line conductors presents a danger of electrocution . Contact with overhead wires can result in injury or death. Metal ladders, farm equipment, boat masts, construction machinery, aerial antennas , and similar objects are frequently involved in fatal contact with overhead wires. Unauthorized persons climbing on power pylons or electrical apparatus are also frequently the victims of electrocution. [ 11 ] At very high transmission voltages even a close approach can be hazardous, since the high voltage may arc across a significant air gap.
Digging into a buried cable can also be dangerous to workers at an excavation site. Digging equipment (either hand tools or machine driven) that contacts a buried cable may energize piping or the ground in the area, resulting in electrocution of nearby workers. A fault in a high-voltage transmission line or substation may result in high currents flowing along the surface of the earth, producing an earth potential rise that also presents a danger of electric shock.
For high voltage and extra-high voltage transmission lines, specially trained personnel use " live line " techniques to allow hands-on contact with energized equipment. In this case the worker is electrically connected to the high-voltage line but thoroughly insulated from the earth so that he is at the same electrical potential as that of the line. Since training for such operations is lengthy, and still presents a danger to personnel, only very important transmission lines are subject to maintenance while live. Outside these properly engineered situations, insulation from earth does not guarantee that no current flows to earth—as grounding or arcing to ground can occur in unexpected ways, and high-frequency currents can burn even an ungrounded person. Touching a transmitting antenna is dangerous for this reason, and a high-frequency Tesla coil can sustain a spark with only one endpoint.
Protective equipment on high-voltage transmission lines normally prevents formation of an unwanted arc, or ensures that it is quenched within tens of milliseconds. Electrical apparatus that interrupts high-voltage circuits is designed to safely direct the resulting arc so that it dissipates without damage. High voltage circuit breakers often use a blast of high pressure air, a special dielectric gas (such as SF 6 under pressure), or immersion in mineral oil to quench the arc when the high voltage circuit is broken.
Wiring in equipment such as X-ray machines and lasers requires care. The high voltage section is kept physically distant from the low voltage side to reduce the possibility of an arc forming between the two. To avoid coronal losses, conductors are kept as short as possible and free of sharp points. If insulated, the plastic coating should be free of air bubbles which result in coronal discharges within the bubbles.
A high voltage is not necessarily dangerous if it cannot deliver substantial current . Although electrostatic machines such as Van de Graaff generators and Wimshurst machines can produce voltages approaching one million volts, they deliver only a brief sting. That is because the current is low, i.e. relatively few electrons move. These devices have a limited amount of stored energy, so the average current produced is low and usually flows for a short time, with impulses peaking in the 1 A range for a nanosecond. [ 12 ] [ 13 ]
The discharge may involve extremely high voltage over very short periods, but to produce heart fibrillation, an electric power supply must produce a significant current in the heart muscle continuing for many milliseconds , and must deposit a total energy in the range of millijoules or higher. A relatively high current at anything more than about fifty volts can therefore be medically significant and potentially fatal.
During the discharge, these machines apply high voltage to the body for only a millionth of a second or less. So a low current is applied for a very short time, and the number of electrons involved is very small.
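For a sense of scale, the energy in such a discharge can be estimated by treating the source as a charged capacitor; the capacitance and voltage below are illustrative assumptions, not measured values:

```python
# Rough energy of an electrostatic discharge modeled as a capacitor, E = C V^2 / 2
C = 100e-12   # ~100 pF: order of magnitude for a charged human body or small machine
V = 20e3      # 20 kV: a strong static spark
E = 0.5 * C * V ** 2
print(f"{E * 1e3:.1f} mJ")   # ~20 mJ total, but delivered in well under a microsecond
```

Even though the total energy can reach the millijoule range, it is deposited far faster than the many milliseconds of sustained current that fibrillation requires.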
Despite Tesla coils superficially appearing similar to Van de Graaff generators, they are not electrostatic machines and can produce significant radio frequency currents continuously. The current supplied to a human body will be relatively constant as long as contact is maintained, unlike with electrostatic machines which generally take longer to build up charges, and the voltage will be much higher than the break-down voltage of human skin. As a consequence, the output of a Tesla coil can be dangerous or even fatal.
Depending on the prospective short-circuit current available at a switchgear line-up, a hazard is presented to maintenance and operating personnel due to the possibility of a high-intensity electric arc . Maximum temperature of an arc can exceed 10,000 kelvins , and the radiant heat, expanding hot air, and explosive vaporization of metal and insulation material can cause severe injury to unprotected workers. Such switchgear line-ups and high-energy arc sources are commonly present in electric power utility substations and generating stations, industrial plants and large commercial buildings. In the United States, the National Fire Protection Association has published a guideline standard NFPA 70E for evaluating and calculating arc flash hazard , and provides standards for the protective clothing required for electrical workers exposed to such hazards in the workplace.
Even voltages insufficient to break down air can supply enough energy to ignite atmospheres containing flammable gases or vapours, or suspended dust. For example, hydrogen gas, natural gas , or petrol/ gasoline vapor mixed with air can be ignited by sparks produced by electrical apparatus. Examples of industrial facilities with hazardous areas are petrochemical refineries, chemical plants , grain elevators , and coal mines .
Measures taken to prevent such explosions include the classification of hazardous areas into zones and the use of electrical apparatus designed for those areas, such as intrinsically safe or explosion-proof equipment.
In recent years, standards for explosion hazard protection have become more uniform between European and North American practice. The "zone" system of classification is now used in modified form in U.S. National Electrical Code and in the Canadian Electrical Code . Intrinsic safety apparatus is now approved for use in North American applications.
Electrical discharges, including partial discharge and corona , can produce small quantities of toxic gases, which in a confined space can be a health hazard. These gases include oxidizers such as ozone and various oxides of nitrogen . They are readily identified by their characteristic odor or color, so contact time can be minimized. Nitric oxide is invisible but has a sweet odor. It oxidizes to nitrogen dioxide within a few minutes, which has a yellow or reddish-brown color depending on concentration and a chlorine-like smell reminiscent of a swimming pool. Ozone is invisible but has a pungent smell like that of the air after a lightning storm. It is a short-lived species, and half of it breaks down into O 2 within a day at normal temperatures and atmospheric pressure.
Hazards due to lightning include direct strikes on persons or property. However, lightning can also create dangerous voltage gradients in the earth, as well as an electromagnetic pulse , and can charge extended metal objects such as telephone cables, fences, and pipelines to dangerous voltages that can be carried many miles from the site of the strike. Although many of these objects are not normally conductive, very high voltage can cause the electrical breakdown of such insulators, causing them to act as conductors. These transferred potentials are dangerous to people, livestock, and electronic apparatus. Lightning strikes also start fires and explosions, which result in fatalities, injuries, and property damage. For example, each year in North America, thousands of forest fires are started by lightning strikes.
Measures to control lightning can mitigate the hazard; these include lightning rods , shielding wires, and bonding of electrical and structural parts of buildings to form a continuous enclosure. | https://en.wikipedia.org/wiki/High_voltage |
A high water mark is a point that represents the maximum rise of a body of water over land. Such a mark is often the result of a flood , but high water marks may reflect an all-time high, an annual high (highest level to which water rose that year) or the high point for some other division of time. Knowledge of the high water mark for an area is useful in managing the development of that area, particularly in making preparations for flood surges. [ 1 ] High water marks from floods have been measured for planning purposes since at least as far back as the civilizations of ancient Egypt . [ 2 ] It is a common practice to create a physical marker indicating one or more of the highest water marks for an area, usually with a line at the level to which the water rose, and a notation of the date on which this high water mark was set. This may be a free-standing flood level sign or other marker, or it may be affixed to a building or other structure that was standing at the time of the flood that set the mark. [ 3 ]
A high water mark is not necessarily an actual physical mark, [ 4 ] but it is possible for water rising to a high point to leave a lasting physical impression such as floodwater staining. A landscape marking left by the high water mark of ordinary tidal action may be called a strandline and is typically composed of debris left by high tide. The area at the top of a beach where debris is deposited is an example of this phenomenon. Where there are tides , this line is formed by the highest position of the tide, and moves up and down the beach on a fortnightly cycle . [ 5 ] The debris is chiefly composed of rotting seaweed , but can also include a large amount of litter , either from ships at sea or from sewage outflows. [ 6 ]
The strandline is an important habitat for a variety of animals . In parts of the United Kingdom , sandhoppers such as Talitrus saltator and the seaweed fly Coelopa frigida are abundant in the rotting seaweed, and these invertebrates provide food for shore birds such as the rock pipit , turnstone [ 6 ] and pied wagtail , [ 7 ] and mammals such as brown hares , foxes , voles and mice . [ 5 ]
One kind of high water mark is the ordinary high water mark or average high water mark , the high water mark that can be expected to be produced by a body of water in non-flood conditions. The ordinary high water mark may have legal significance and is often used to demarcate property boundaries . [ 8 ] The ordinary high water mark has also been used for other legal demarcations. For example, a 1651 analysis of laws passed by the English Parliament notes that for persons granted the title Admiral of the English Seas , "the Admirals power extended even to the high water mark, and into the main streams". [ 9 ]
In the United States , the high water mark is also significant because the United States Constitution gives Congress the authority to legislate for waterways, and the high water mark is used to determine the geographic extent of that authority. Federal regulations (33 CFR 328.3(e)) define the "ordinary high water mark" (OHWM) as "that line on the shore established by the fluctuations of water and indicated by physical characteristics such as a clear, natural line impressed on the bank, shelving, changes in the character of soil, destruction of terrestrial vegetation, the presence of litter and debris, or other appropriate means that consider the characteristics of the surrounding areas". [ 10 ] For the purposes of Section 404 of the Clean Water Act , the OHWM defines the lateral limits of federal jurisdiction over non-tidal water bodies in the absence of adjacent wetlands. For the purposes of Sections 9 and 10 of the Rivers and Harbors Act of 1899 , the OHWM defines the lateral limits of federal jurisdiction over traditional navigable waters of the US. [ 11 ] The OHWM is used by the United States Army Corps of Engineers , the United States Environmental Protection Agency , and other federal agencies to determine the geographical extent of their regulatory programs. Likewise, many states use similar definitions of the OHWM for the purposes of their own regulatory programs.
In 2016, the Court of Appeals of Indiana ruled that land below the OHWM (as defined by common law) along Lake Michigan is held by the state in trust for public use. [ 12 ] | https://en.wikipedia.org/wiki/High_water_mark |
Higher-dimensional Einstein gravity is any of various physical theories that attempt to generalise to higher dimensions the results of Albert Einstein 's well-established, standard four-dimensional theory of gravitation, general relativity . This attempt at generalisation has been strongly influenced in recent decades by string theory .
At present, this work can probably be most fairly described as extended theoretical speculation. Currently, it has no direct observational or experimental support, in contrast to four-dimensional general relativity. However, this theoretical work has opened the possibility of proving the existence of extra dimensions. This is best demonstrated by the proof of Roberto Emparan and Harvey Reall that there is a 'black ring' solution in 5 dimensions. If such a 'black ring' could be produced in a particle accelerator such as the Large Hadron Collider , this would provide evidence that higher dimensions exist.
The higher-dimensional generalization of the Kerr metric was discovered by Robert Myers and Malcolm Perry . [ 1 ] Like the Kerr metric, the Myers–Perry metric has spherical horizon topology. The construction involves making a Kerr–Schild ansatz ; by a similar method, the solution has been generalized to include a cosmological constant . The black ring is a solution of five-dimensional general relativity. It inherits its name from the fact that its event horizon is topologically S 1 × S 2 . This is in contrast to other known black hole solutions in five dimensions which have horizon topology S 3 .
In 2014, Hari Kunduri and James Lucietti proved the existence of a black hole with lens space topology of type L (2, 1) in five dimensions. [ 2 ] This was next extended to all L (p, 1) with positive integer p by Shinya Tomizawa and Masato Nozawa in 2016, [ 3 ] and finally, in a 2022 preprint, to all L (p, q) and any dimension by Marcus Khuri and Jordan Rainone. [ 4 ] [ 5 ] Unlike a black ring, a black lens does not necessarily need to rotate, but all examples found so far need a matter field coming from the extra dimensions to remain stable.
In four dimensions, Hawking proved that the topology of the event horizon of a non-rotating black hole must be spherical. [ 6 ] Because the proof uses the Gauss–Bonnet theorem , it does not generalize to higher dimensions. The discovery of black ring solutions in five dimensions [ 7 ] shows that other topologies are allowed in higher dimensions, but it is unclear precisely which topologies are allowed. It has been shown that the horizon must be of positive Yamabe type, meaning that it must admit a metric of positive scalar curvature . [ 8 ] | https://en.wikipedia.org/wiki/Higher-dimensional_Einstein_gravity |
In mathematics and logic , a higher-order logic (abbreviated HOL ) is a form of logic that is distinguished from first-order logic by additional quantifiers and, sometimes, stronger semantics . Higher-order logics with their standard semantics are more expressive, but their model-theoretic properties are less well-behaved than those of first-order logic.
The term "higher-order logic" is commonly used to mean higher-order simple predicate logic . Here "simple" indicates that the underlying type theory is the theory of simple types , also called the simple theory of types . Leon Chwistek and Frank P. Ramsey proposed this as a simplification of ramified theory of types specified in the Principia Mathematica by Alfred North Whitehead and Bertrand Russell . Simple types is sometimes also meant to exclude polymorphic and dependent types. [ 1 ]
First-order logic quantifies only variables that range over individuals; second-order logic also quantifies over sets; third-order logic also quantifies over sets of sets, and so on.
Higher-order logic is the union of first-, second-, third-, ..., n th-order logic; i.e., higher-order logic admits quantification over sets that are nested arbitrarily deeply.
There are two possible semantics for higher-order logic.
In the standard or full semantics , quantifiers over higher-type objects range over all possible objects of that type. For example, a quantifier over sets of individuals ranges over the entire powerset of the set of individuals. Thus, in standard semantics, once the set of individuals is specified, this is enough to specify all the quantifiers. HOL with standard semantics is more expressive than first-order logic. For example, HOL admits categorical axiomatizations of the natural numbers , and of the real numbers , which are impossible with first-order logic. However, by a result of Kurt Gödel , HOL with standard semantics does not admit an effective , sound, and complete proof calculus . [ 2 ] The model-theoretic properties of HOL with standard semantics are also more complex than those of first-order logic. For example, the Löwenheim number of second-order logic is already larger than the first measurable cardinal , if such a cardinal exists. [ 3 ] The Löwenheim number of first-order logic, in contrast, is ℵ 0 , the smallest infinite cardinal.
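For instance, the categorical axiomatization of the natural numbers rests on the second-order induction axiom, which quantifies over all subsets $P$ of the domain. The following is a standard formulation, shown here for illustration only:

```latex
% Second-order induction: together with the first-order Peano axioms for
% zero and successor, this single axiom pins down the natural numbers up
% to isomorphism under standard (full) semantics.
\forall P \,\bigl( P(0) \land \forall n\,\bigl(P(n) \rightarrow P(S(n))\bigr)
                 \;\rightarrow\; \forall n\, P(n) \bigr)
```

Under Henkin semantics, by contrast, $P$ ranges only over the sets included in the chosen domain of the interpretation, so non-standard models survive.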
In Henkin semantics , a separate domain is included in each interpretation for each higher-order type. Thus, for example, quantifiers over sets of individuals may range over only a subset of the powerset of the set of individuals. HOL with these semantics is equivalent to many-sorted first-order logic , rather than being stronger than first-order logic. In particular, HOL with Henkin semantics has all the model-theoretic properties of first-order logic, and has a complete, sound, effective proof system inherited from first-order logic.
Higher-order logics include the offshoots of Church 's simple theory of types [ 4 ] and the various forms of intuitionistic type theory . Gérard Huet has shown that unifiability is undecidable in a type-theoretic flavor of third-order logic, [ 5 ] [ 6 ] [ 7 ] [ 8 ] that is, there can be no algorithm to decide whether an arbitrary equation between second-order (let alone arbitrary higher-order) terms has a solution.
Up to a certain notion of isomorphism , the powerset operation is definable in second-order logic. Using this observation, Jaakko Hintikka established in 1955 that second-order logic can simulate higher-order logics in the sense that for every formula of a higher-order logic, one can find an equisatisfiable formula for it in second-order logic. [ 9 ]
The term "higher-order logic" is assumed in some context to refer to classical higher-order logic. However, modal higher-order logic has been studied as well. According to several logicians, Gödel's ontological proof is best studied (from a technical perspective) in such a context. [ 10 ] | https://en.wikipedia.org/wiki/Higher-order_logic |
In algebra , a higher-order operad is a higher-dimensional generalization of an operad .
This abstract algebra -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Higher-order_operad |
In multilinear algebra , the higher-order singular value decomposition ( HOSVD ) of a tensor is a specific orthogonal Tucker decomposition . It may be regarded as one type of generalization of the matrix singular value decomposition . It has applications in computer vision , computer graphics , machine learning , scientific computing , and signal processing . Some aspects can be traced as far back as F. L. Hitchcock in 1928, [ 1 ] but it was L. R. Tucker who developed the general Tucker decomposition for third-order tensors in the 1960s, [ 2 ] [ 3 ] [ 4 ] further advocated by L. De Lathauwer et al. , [ 5 ] whose Multilinear SVD work employs the power method, and by Vasilescu and Terzopoulos, who developed the M-mode SVD, a parallel algorithm that employs the matrix SVD.
The term higher order singular value decomposition (HOSVD) was coined by De Lathauwer, but the algorithm commonly referred to in the literature as the HOSVD and attributed to either Tucker or De Lathauwer was developed by Vasilescu and Terzopoulos. [ 6 ] [ 7 ] [ 8 ] Robust and L1-norm -based variants of HOSVD have also been proposed. [ 9 ] [ 10 ] [ 11 ] [ 12 ]
For the purpose of this article, the abstract tensor $\mathcal{A}$ is assumed to be given in coordinates with respect to some basis as an M-way array, also denoted by $\mathcal{A} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_m \times \cdots \times I_M}$, where M is the number of modes and the order of the tensor. $\mathbb{C}$ denotes the complex numbers, which include both the real numbers $\mathbb{R}$ and the purely imaginary numbers.
Let $\mathcal{A}_{[m]} \in \mathbb{C}^{I_m \times (I_1 I_2 \cdots I_{m-1} I_{m+1} \cdots I_M)}$ denote the standard mode-$m$ flattening of $\mathcal{A}$, so that the left index of $\mathcal{A}_{[m]}$ corresponds to the $m$-th index of $\mathcal{A}$ and the right index of $\mathcal{A}_{[m]}$ corresponds to all other indices of $\mathcal{A}$ combined. Let $\mathbf{U}_m \in \mathbb{C}^{I_m \times I_m}$ be a unitary matrix containing a basis of the left singular vectors of $\mathcal{A}_{[m]}$, such that the $j$-th column $\mathbf{u}_j$ of $\mathbf{U}_m$ corresponds to the $j$-th largest singular value of $\mathcal{A}_{[m]}$. Observe that the mode/factor matrix $\mathbf{U}_m$ does not depend on the specific definition of the mode-$m$ flattening. By the properties of the multilinear multiplication , we have
$$\mathcal{A} = \mathcal{A} \times (\mathbf{I}, \mathbf{I}, \ldots, \mathbf{I}) = \mathcal{A} \times (\mathbf{U}_1 \mathbf{U}_1^H, \mathbf{U}_2 \mathbf{U}_2^H, \ldots, \mathbf{U}_M \mathbf{U}_M^H) = \left( \mathcal{A} \times (\mathbf{U}_1^H, \mathbf{U}_2^H, \ldots, \mathbf{U}_M^H) \right) \times (\mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_M),$$
where $\cdot^H$ denotes the conjugate transpose . The second equality holds because the $\mathbf{U}_m$ are unitary matrices. Define now the core tensor
$$\mathcal{S} := \mathcal{A} \times (\mathbf{U}_1^H, \mathbf{U}_2^H, \ldots, \mathbf{U}_M^H).$$
Then the HOSVD [ 5 ] of $\mathcal{A}$ is the decomposition
$$\mathcal{A} = \mathcal{S} \times (\mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_M).$$
The above construction shows that every tensor has a HOSVD.
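The construction above translates directly into code. The following is a minimal NumPy sketch (an illustration under stated assumptions, not a reference implementation); the function names are assumptions:

```python
import numpy as np

def hosvd(A):
    """Full HOSVD: returns core tensor S and factor matrices U_1..U_M
    with A = S x_1 U_1 x_2 U_2 ... x_M U_M (mode-m products)."""
    def mode_product(T, M, m):
        # multiply mode m of tensor T on the left by matrix M
        return np.moveaxis(np.tensordot(M, np.moveaxis(T, m, 0), axes=1), 0, m)

    U = []
    for m in range(A.ndim):
        Am = np.moveaxis(A, m, 0).reshape(A.shape[m], -1)  # mode-m flattening
        Um, _, _ = np.linalg.svd(Am, full_matrices=True)   # left singular vectors
        U.append(Um)

    S = A
    for m, Um in enumerate(U):                             # core: A x_m U_m^H
        S = mode_product(S, Um.conj().T, m)
    return S, U

# sanity check: reconstruct A from its HOSVD
A = np.random.rand(3, 4, 5)
S, U = hosvd(A)
A_rec = S
for m, Um in enumerate(U):
    A_rec = np.moveaxis(np.tensordot(Um, np.moveaxis(A_rec, m, 0), axes=1), 0, m)
assert np.allclose(A, A_rec)
```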
As in the case of the compact singular value decomposition of a matrix, where the rows and columns corresponding to vanishing singular values are dropped, it is also possible to consider a compact HOSVD , which is very useful in applications.
Assume that $\mathbf{U}_m \in \mathbb{C}^{I_m \times R_m}$ is a matrix with unitary columns containing a basis of the left singular vectors corresponding to the nonzero singular values of the standard factor-$m$ flattening $\mathcal{A}_{[m]}$ of $\mathcal{A}$. Let the columns of $\mathbf{U}_m$ be sorted such that the $r_m$-th column $\mathbf{u}_{r_m}$ of $\mathbf{U}_m$ corresponds to the $r_m$-th largest nonzero singular value of $\mathcal{A}_{[m]}$. Since the columns of $\mathbf{U}_m$ form a basis for the image of $\mathcal{A}_{[m]}$, we have
$$\mathcal{A}_{[m]} = \mathbf{U}_m \mathbf{U}_m^H \mathcal{A}_{[m]} = \bigl( \mathcal{A} \times_m (\mathbf{U}_m \mathbf{U}_m^H) \bigr)_{[m]},$$
where the first equality is due to the properties of orthogonal projections (in the Hermitian inner product) and the last equality is due to the properties of multilinear multiplication. As flattenings are bijective maps and the above formula is valid for all $m = 1, 2, \ldots, M$, we find as before that
$$\mathcal{A} = \mathcal{A} \times (\mathbf{U}_1 \mathbf{U}_1^H, \mathbf{U}_2 \mathbf{U}_2^H, \ldots, \mathbf{U}_M \mathbf{U}_M^H) = \left( \mathcal{A} \times (\mathbf{U}_1^H, \mathbf{U}_2^H, \ldots, \mathbf{U}_M^H) \right) \times (\mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_M) = \mathcal{S} \times (\mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_M),$$
where the core tensor $\mathcal{S}$ is now of size $R_1 \times R_2 \times \cdots \times R_M$.
The multilinear rank [ 1 ] of $\mathcal{A}$ is denoted rank-$(R_1, R_2, \ldots, R_M)$. The multilinear rank is a tuple in $\mathbb{N}^M$ where $R_m := \mathrm{rank}(\mathcal{A}_{[m]})$. Not all tuples in $\mathbb{N}^M$ are multilinear ranks. [ 13 ] The multilinear ranks are bounded by $1 \le R_m \le I_m$ and must satisfy the constraint $R_m \le \prod_{i \ne m} R_i$. [ 13 ]
The compact HOSVD is a rank-revealing decomposition in the sense that the dimensions of its core tensor correspond with the components of the multilinear rank of the tensor.
The following geometric interpretation is valid for both the full and compact HOSVD. Let $(R_1, R_2, \ldots, R_M)$ be the multilinear rank of the tensor $\mathcal{A}$. Since $\mathcal{S} \in \mathbb{C}^{R_1 \times R_2 \times \cdots \times R_M}$ is a multidimensional array, we can expand it as follows:
$$\mathcal{S} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \cdots \sum_{r_M=1}^{R_M} s_{r_1, r_2, \ldots, r_M} \, \mathbf{e}_{r_1} \otimes \mathbf{e}_{r_2} \otimes \cdots \otimes \mathbf{e}_{r_M},$$
where $\mathbf{e}_{r_m}$ is the $r_m$-th standard basis vector of $\mathbb{C}^{R_m}$. By definition of the multilinear multiplication, it holds that
$$\mathcal{A} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \cdots \sum_{r_M=1}^{R_M} s_{r_1, r_2, \ldots, r_M} \, \mathbf{u}_{r_1} \otimes \mathbf{u}_{r_2} \otimes \cdots \otimes \mathbf{u}_{r_M},$$
where the $\mathbf{u}_{r_m}$ are the columns of $\mathbf{U}_m \in \mathbb{C}^{I_m \times R_m}$. It is easy to verify that $B = \{ \mathbf{u}_{r_1} \otimes \mathbf{u}_{r_2} \otimes \cdots \otimes \mathbf{u}_{r_M} \}_{r_1, r_2, \ldots, r_M}$ is an orthonormal set of tensors. This means that the HOSVD can be interpreted as a way to express the tensor $\mathcal{A}$ with respect to a specifically chosen orthonormal basis $B$, with the coefficients given as the multidimensional array $\mathcal{S}$.
Let $\mathcal{A} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_M}$ be a tensor of rank-$(R_1, R_2, \ldots, R_M)$, where $\mathbb{C}$ contains the reals $\mathbb{R}$ as a subset.
The strategy for computing the Multilinear SVD and the M-mode SVD was introduced in the 1960s by L. R. Tucker , [ 3 ] further advocated by L. De Lathauwer et al. , [ 5 ] and by Vasilescu and Terzopoulos. [ 8 ] [ 6 ] The term HOSVD was coined by Lieven De Lathauwer, but the algorithm typically referred to in the literature as HOSVD was introduced by Vasilescu and Terzopoulos [ 6 ] [ 8 ] under the name M-mode SVD. It is a parallel computation that employs the matrix SVD to compute the orthonormal mode matrices.
Sources: [ 6 ] [ 8 ]
A strategy that is significantly faster when some or all $R_m \ll I_m$ consists of interlacing the computation of the core tensor and the factor matrices: after the factor matrix $\mathbf{U}_m$ of each mode is computed, the tensor is immediately contracted with $\mathbf{U}_m^H$ along that mode, so that subsequent modes operate on a smaller partial core. [ 14 ] [ 15 ] [ 16 ]
The HOSVD can be computed in-place via the Fused In-place Sequentially Truncated Higher Order Singular Value Decomposition (FIST-HOSVD) [ 16 ] algorithm by overwriting the original tensor by the HOSVD core tensor, significantly reducing the memory consumption of computing HOSVD.
In applications, such as those mentioned below, a common problem consists of approximating a given tensor $\mathcal{A} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_M}$ by one with a reduced multilinear rank. Formally, if the multilinear rank of $\mathcal{A}$ is denoted rank-$(R_1, R_2, \ldots, R_M)$, then computing the optimal $\bar{\mathcal{A}}$ that approximates $\mathcal{A}$ for a given reduced rank-$(\bar{R}_1, \bar{R}_2, \ldots, \bar{R}_M)$ is a nonlinear non-convex $\ell_2$-optimization problem
$$\min_{\bar{\mathcal{A}} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_M}} \frac{1}{2} \| \mathcal{A} - \bar{\mathcal{A}} \|_F^2 \quad \text{s.t.} \quad \mathrm{rank}\text{-}(\bar{R}_1, \bar{R}_2, \ldots, \bar{R}_M),$$
where $(\bar{R}_1, \bar{R}_2, \ldots, \bar{R}_M) \in \mathbb{N}^M$ is the reduced multilinear rank with $1 \le \bar{R}_m < R_m \le I_m$, and the norm $\| \cdot \|_F$ is the Frobenius norm .
A simple idea for trying to solve this optimization problem is to truncate the (compact) SVD in either the classic or the interlaced computation. A classically truncated HOSVD is obtained by replacing the SVD of each flattening $\mathcal{A}_{[m]}$ in the classic computation with a truncated SVD that retains only the first $\bar{R}_m$ left singular vectors, while a sequentially truncated HOSVD (or successively truncated HOSVD ) is obtained by truncating, in the same way, the SVD of each flattening of the partially reduced tensor in the interlaced computation.
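A minimal NumPy sketch of the sequentially truncated variant follows (an assumption-laden illustration, not the reference implementation of [14]–[16]); each mode is truncated to a prescribed rank and the partial core is shrunk immediately, so later SVDs act on smaller arrays:

```python
import numpy as np

def st_hosvd(A, ranks):
    """Sequentially truncated HOSVD: returns a core of shape `ranks`
    and factor matrices with orthonormal columns."""
    S = A
    U = []
    for m, r in enumerate(ranks):
        Sm = np.moveaxis(S, m, 0).reshape(S.shape[m], -1)      # flatten current core
        Um = np.linalg.svd(Sm, full_matrices=False)[0][:, :r]  # leading r vectors
        U.append(Um)
        # shrink mode m right away: S <- S x_m Um^H
        S = np.moveaxis(np.tensordot(Um.conj().T, np.moveaxis(S, m, 0), axes=1), 0, m)
    return S, U

S, U = st_hosvd(np.random.rand(10, 10, 10), (3, 3, 3))
print(S.shape)   # (3, 3, 3)
```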
The HOSVD is most commonly applied to the extraction of relevant information from multi-way arrays.
Starting in the early 2000s, Vasilescu addressed causal questions by reframing the data analysis, recognition and synthesis problems as multilinear tensor problems. The power of the tensor framework was showcased by decomposing and representing an image in terms of its causal factors of data formation, in the context of Human Motion Signatures for gait recognition, [ 18 ] face recognition— TensorFaces [ 19 ] [ 20 ] and computer graphics—TensorTextures. [ 21 ]
The HOSVD has been successfully applied to signal processing and big data, e.g., in genomic signal processing. [ 22 ] [ 23 ] [ 24 ] These applications also inspired a higher-order GSVD (HO GSVD) [ 25 ] and a tensor GSVD. [ 26 ]
A combination of HOSVD and SVD also has been applied for real-time event detection from complex data streams (multivariate data with space and time dimensions) in disease surveillance . [ 27 ]
It is also used in tensor product model transformation -based controller design. [ 28 ] [ 29 ]
The concept of HOSVD was carried over to functions by Baranyi and Yam via the TP model transformation . [ 28 ] [ 29 ] This extension led to the definition of the HOSVD-based canonical form of tensor product functions and Linear Parameter Varying system models [ 30 ] and to convex hull manipulation based control optimization theory, see TP model transformation in control theories .
HOSVD was proposed to be applied to multi-view data analysis [ 31 ] and was successfully applied to in silico drug discovery from gene expression. [ 32 ]
L1-Tucker is the L1-norm -based, robust variant of Tucker decomposition . [ 10 ] [ 11 ] L1-HOSVD is the analogue of HOSVD for the solution of L1-Tucker. [ 10 ] [ 12 ] | https://en.wikipedia.org/wiki/Higher-order_singular_value_decomposition
The higher-order sinusoidal input describing functions (HOSIDF) were first introduced [ 1 ] by dr. ir. P.W.J.M. Nuij . The HOSIDFs are an extension of the sinusoidal input describing function [ 2 ] which describe the response ( gain and phase ) of a system at harmonics of the base frequency of a sinusoidal input signal. The HOSIDFs bear an intuitive resemblance to the classical frequency response function and define the periodic output of a stable, causal , time invariant nonlinear system to a sinusoidal input signal:
$$u(t) = \gamma \sin(\omega_0 t + \varphi_0)$$
This output is denoted by $y(t)$ and consists of harmonics of the input frequency:
$$y(t) = \sum_{k=0}^{K} |H_k(\omega_0, \gamma)| \, \gamma^k \cos\bigl( k(\omega_0 t + \varphi_0) + \angle H_k(\omega_0, \gamma) \bigr)$$
Defining the single-sided spectra of the input and output as $U(\omega)$ and $Y(\omega)$, such that $|U(\omega_0)| = \gamma$, yields the definition of the k-th order HOSIDF:
$$H_k(\omega_0, \gamma) = \frac{Y(k\omega_0, \gamma)}{U^k(\omega_0, \gamma)}$$
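As a rough numerical illustration (not from the source), the HOSIDFs of a simple static nonlinearity can be estimated by exciting it with one sinusoid and reading the harmonics off an FFT; all names and parameter choices below are assumptions:

```python
import numpy as np

def hosidf(nl, gamma, f0=1.0, K=5, fs=256.0, periods=8):
    """Estimate H_k(w0, gamma) for k = 1..K from one sinusoidal experiment.
    `nl` maps an array of input samples to an array of output samples."""
    n = int(fs * periods / f0)       # whole number of periods -> clean FFT bins
    t = np.arange(n) / fs
    u = gamma * np.sin(2 * np.pi * f0 * t)
    U = np.fft.rfft(u) * 2 / n       # single-sided spectra
    Y = np.fft.rfft(nl(u)) * 2 / n
    b = periods                      # FFT bin index of f0
    return [Y[k * b] / U[b] ** k for k in range(1, K + 1)]

# cubic stiffness: an odd nonlinearity, so only odd harmonics appear
H = hosidf(lambda x: x + 0.1 * x ** 3, gamma=2.0)
print([abs(h) for h in H])           # H_2 and H_4 are ~0
```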
The application and analysis of the HOSIDFs is advantageous both when a nonlinear model is already identified and when no model is known yet. In the latter case the HOSIDFs require few model assumptions and can easily be identified while requiring no advanced mathematical tools. Moreover, even when a model is already identified, the analysis of the HOSIDFs often yields significant advantages over the use of the identified nonlinear model. First of all, the HOSIDFs are intuitive in their identification and interpretation, while other nonlinear model structures often yield limited direct information about the behavior of the system in practice. Furthermore, the HOSIDFs provide a natural extension of the widely used sinusoidal describing functions when nonlinearities cannot be neglected. In practice the HOSIDFs have two distinct applications: due to their ease of identification, they provide a tool for on-site testing during system design; and their application to (nonlinear) controller design for nonlinear systems has been shown to yield significant advantages over conventional time-domain-based tuning. | https://en.wikipedia.org/wiki/Higher-order_sinusoidal_input_describing_function
In mathematics , a higher spin alternating sign matrix is a generalisation of the alternating sign matrix ( ASM ), where the columns and rows sum to an integer r (the spin ) rather than simply summing to 1 as in the usual alternating sign matrix definition. HSASMs are square matrices whose elements may be integers in the range − r to + r . When traversing any row or column of an ASM or HSASM, the partial sum of its entries must always be non-negative. [ 1 ]
High spin ASMs have found application in statistical mechanics and physics , where they have been found to represent symmetry groups in ice crystal formation.
Some typical examples of HSASMs are shown below:
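A minimal NumPy sketch (an illustration, not from the source) that checks the defining conditions; the example matrices are assumptions chosen to satisfy or violate the definition, and partial sums are checked from both ends of each line, following the usual ASM-style convention:

```python
import numpy as np

def is_hsasm(M, r):
    """Check whether integer matrix M is an r-spin HSASM: square, entries
    in [-r, r], every row and column sums to r, and all partial sums along
    rows and columns (from either end) are non-negative."""
    M = np.asarray(M)
    if M.ndim != 2 or M.shape[0] != M.shape[1]:
        return False
    if np.abs(M).max() > r:
        return False
    if not (M.sum(axis=0) == r).all() or not (M.sum(axis=1) == r).all():
        return False
    for T in (M, M.T):                  # rows first, then columns
        for line in T:
            if line.cumsum().min() < 0 or line[::-1].cumsum().min() < 0:
                return False
    return True

# r = 2 examples: any sum of two ASMs works, e.g. the all-ones 2x2 matrix
print(is_hsasm([[1, 1], [1, 1]], 2))                      # True
print(is_hsasm([[1, 1, 0], [1, -1, 2], [0, 2, 0]], 2))    # True
print(is_hsasm([[2, 0], [-1, 3]], 2))                     # False: entry 3 > r
```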
The set of HSASMs is a superset of the ASMs. The extreme points of the convex hull of the set of r -spin HSASMs are themselves integer multiples of the usual ASMs.
This combinatorics -related article is a stub . You can help Wikipedia by expanding it .
This article about statistical mechanics is a stub . You can help Wikipedia by expanding it .
This article about matrices is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Higher_spin_alternating_sign_matrix |
Higher sulfur oxides are a group of chemical compounds with the formula SO 3+x where x lies between 0 and 1. They contain peroxo (O−O) groups, and the oxidation state of sulfur is +6 as in SO 3 .
Monomeric SO 4 can be isolated at low temperatures (below 78 K) following the reaction of SO 3 and atomic oxygen or photolysis of SO 3 – ozone mixtures. The favoured structure incorporates the peroxo group in a three-membered S–O–O ring.
Colourless polymeric condensates are formed in the reaction of gaseous SO 3 or SO 2 with O 2 in a silent electric discharge. The structure of the polymers is based on β-SO 3 (one of the three forms of solid SO 3 ) with oxide bridges (−O−) replaced randomly by peroxide bridges (−O−O−). As such these compounds are non-stoichiometric .
This article about chemical compounds is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Higher_sulfur_oxides |
In the field of land drainage , a highland carrier is a watercourse that conveys drainage water coming from higher in the catchment across or around a lower, drained area of land, but has little or no connection with the drainage network of that drained area. [ 1 ] Such a carrier is enclosed by levees .
This hydrology article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Highland_carrier |
Highly Available Storage ( HAST ) is a protocol and tool set for FreeBSD written by Pawel Jakub Dawidek, a core FreeBSD developer.
HAST provides a block device to be synchronized between two servers for use as a file system. The two machines comprise a cluster, where each machine is a cluster node. HAST uses a Primary-Secondary (or Master-Slave) configuration, so only one cluster node is active at a time.
HAST-provided devices appear like disk devices in the /dev/hast/ directory in FreeBSD, and can be used like standard block devices. HAST is similar to a RAID1 (mirror) where each RAID component is provided across the network by one cluster node. [ 1 ]
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Highly_Available_STorage |
HASA (highly accelerated stress audit) is a proven test method developed to find manufacturing- or production-process-induced defects in electronics and electro-mechanical assemblies before those products are released to market. HASA is a form of HASS (highly accelerated stress screening) – a powerful testing tool for improving product reliability, reducing warranty costs and increasing customer satisfaction.
Since HASS levels are more aggressive than those of conventional screening tools, a proof-of-screen (POS) procedure is used to establish their effectiveness in revealing production-induced defects. A POS is vital to determine that the HASS stresses are capable of revealing production defects, but are not so extreme as to remove significant life from the test item. Instituting HASS to screen the product is an excellent tool for maintaining a high level of robustness, and it reduces the test time required to screen a product, resulting in long-term savings. Ongoing HASS screening assures that any weak components or manufacturing process degradations are quickly detected and corrected. HASS is not intended to be a rigid process that has an endpoint. It is a dynamic process that may need modification or adjustment over the life of the product.
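Screening and audit effectiveness is ultimately statistical. As a rough illustration (the numbers below are assumptions, not from any HASS/HASA standard), a binomial model gives the per-lot audit sample size needed to catch a given defect rate with a chosen confidence:

```python
import math

def sample_size_for_detection(p, confidence=0.95):
    """Smallest sample size n such that, at true defect rate p, at least
    one defective unit appears in the sample with the given probability:
    1 - (1 - p)**n >= confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

print(sample_size_for_detection(0.01))   # 299 units to catch a 1% defect rate
```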
HASS aids in the detection of early life failures. HASA's primary purpose is to monitor manufacturing and prevent any defects from being introduced during the process. A carefully determined HASA sampling plan must be designed that will quickly signal when process quality has been degraded. | https://en.wikipedia.org/wiki/Highly_accelerated_stress_audit |
The highly accelerated stress test (HAST) method was first proposed by Jeffrey E. Gunn, Sushil K. Malik, and Purabi M. Mazumdar of IBM. [ 1 ]
The acceleration factor for elevated humidity is empirically derived to be
$$AF_{RH} = \left( \frac{RH_s}{RH_o} \right)^n,$$
where $RH_s$ is the stressed humidity, $RH_o$ is the operating-environment humidity, and $n$ is an empirically derived constant (usually $1 < n < 5$).
The acceleration factor for elevated temperature is derived to be
$$AF_T = \exp\left[ \frac{E_a}{k} \left( \frac{1}{T_o} - \frac{1}{T_s} \right) \right],$$
where $E_a$ is the activation energy for the temperature-induced failure (most often 0.7 eV for electronics), $k$ is the Boltzmann constant , $T_o$ is the operating temperature in kelvins , and $T_s$ is the stressed temperature.
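A quick numerical illustration of the two factors (the stress and use conditions are assumptions chosen for the example):

```python
import math

def hast_af(rh_s, rh_o, t_s, t_o, n=3.0, ea=0.7):
    """Humidity acceleration (RHs/RHo)^n times the Arrhenius temperature
    term; temperatures in kelvins, ea in eV. Their product is the total
    unbiased-HAST acceleration factor stated next."""
    k = 8.617e-5                                     # Boltzmann constant, eV/K
    af_rh = (rh_s / rh_o) ** n
    af_t = math.exp((ea / k) * (1 / t_o - 1 / t_s))
    return af_rh * af_t

# e.g. 130 C / 85 %RH stress vs. 30 C / 60 %RH use conditions
print(f"{hast_af(85, 60, 130 + 273.15, 30 + 273.15):.0f}")   # ~2200x
```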
Therefore, the total acceleration factor for unbiased HAST testing is the product of the two: $AF = AF_{RH} \times AF_T$. | https://en.wikipedia.org/wiki/Highly_accelerated_stress_test
Highly charged ions (HCI) are ions in very high charge states due to the loss of many or most of their bound electrons by energetic collisions or high-energy photon absorption. Examples are 13-fold ionized iron , Fe 13+ or Fe XIV in spectroscopic notation , found in the Sun's corona , or naked uranium , U 92+ (U XCIII in spectroscopic notation), which is bare of all bound electrons, and which requires very high energy for its production. HCI are found in stellar corona , in active galactic nuclei , in supernova remnants , and in accretion disks . Most of the visible matter found in the universe consists of highly charged ions. [ 1 ] High temperature plasmas used for nuclear fusion energy research also contain HCI generated by the plasma-wall interaction (see Tokamak ). In the laboratory, HCI are investigated by means of heavy ion particle accelerators and electron beam ion traps . [ 2 ] They might have applications in improving atomic clocks , advances in quantum computing , and more accurate measurement of fundamental physical constants . [ 3 ] | https://en.wikipedia.org/wiki/Highly_charged_ion |
A highly hazardous chemical , also called a harsh chemical, is a substance classified by the American Occupational Safety and Health Administration as material that is both toxic and reactive and whose potential for human injury is high if released. Highly hazardous chemicals may cause cancer, birth defects, genetic damage, miscarriage, injury, and death from relatively small exposures.
Highly hazardous chemicals include: | https://en.wikipedia.org/wiki/Highly_hazardous_chemical |
The HART Communication Protocol (Highway Addressable Remote Transducer) is a hybrid analog+digital industrial automation open protocol. Its most notable advantage is that it can communicate over legacy 4–20 mA analog instrumentation current loops, sharing the pair of wires used by the analog-only host systems. HART is widely used in process and instrumentation systems ranging from small automation applications up to highly sophisticated industrial applications.
Based on the OSI model , HART resides at Layer 7, the Application Layer. Layers 3–6 are not used. [ 1 ] When sent over a 4–20 mA loop, it uses Bell 202 frequency-shift keying for layer 1, but it is often converted to RS-485 or RS-232.
According to Emerson, [ 2 ] due to the huge installed base of 4–20 mA systems throughout the world, the HART Protocol is one of the most popular industrial protocols today. HART has served as a good transition protocol for users who wished to keep their legacy 4–20 mA signals but wanted to implement a "smart" protocol.
The protocol was developed by Rosemount Inc. , built on the early Bell 202 communications standard, in the mid-1980s as a proprietary digital communication protocol for their smart field instruments. Soon it evolved into HART, and in 1986 it was made an open protocol . Since then, the capabilities of the protocol have been enhanced by successive revisions to the specification.
There are two main operational modes of HART instruments: point-to-point (analog/digital) mode, and multi-drop mode.
In point-to-point mode the digital signals are overlaid on the 4–20 mA loop current. Both the 4–20 mA current and the digital signal are valid signalling protocols between the controller and measuring instrument or final control element.
The polling address of the instrument is set to "0". Only one instrument can be put on each instrument cable signal pair. One signal, generally specified by the user, is specified to be the 4–20 mA signal. Other signals are sent digitally on top of the 4–20 mA signal. For example, pressure can be sent as 4–20 mA, representing a range of pressures, and temperature can be sent digitally over the same wires. In point-to-point mode, the digital part of the HART protocol can be seen as a kind of digital current loop interface .
In multi-drop mode the analog loop current is fixed at 4 mA and it is possible to have more than one instrument on a signal loop.
HART revisions 3 through 5 allowed polling addresses of the instruments to be in the range 1–15. HART revision 6 allowed addresses 1 to 63; HART revision 7 allows addresses 0 to 63. Each instrument must have a unique address.
The request HART packet has the following structure: preamble, start byte, address (which specifies the slave, specifies the master, and indicates burst mode), command, number of data bytes, status, data, and checksum.
Currently all newer devices implement a five-byte preamble, since anything longer reduces the communication speed. However, masters are responsible for backwards compatibility. Master communication to a new device starts with the maximum preamble length (20 bytes), which is reduced once the preamble size for the current device is determined.
The preamble is: FF FF FF FF FF (the byte 0xFF repeated five times).
The start byte contains the master number and signals that the communication packet is starting.
The address field specifies the destination address as implemented in one of the HART addressing schemes. The original scheme used only four bits to specify the device address, which limited the number of devices to 16, including the master.
The newer scheme uses 38 bits to specify the device address. This address is requested from the device using either Command 0 or Command 11.
The command field is a one-byte numerical value indicating which command is to be executed.
Command 0 and Command 11 are used to request the device number.
The byte-count field specifies the number of communication data bytes to follow.
The status field is absent for the master and is two bytes for the slave. This field is used by the slave to inform the master whether it completed the task and what its current health status is.
Data contained in this field depends on the command to be executed.
The checksum is the XOR of all bytes from the start byte through the last byte of the data field, inclusive.
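A sketch of that computation (the example frame bytes are hypothetical, not taken from the HART specification):

```python
def hart_checksum(frame: bytes) -> int:
    """XOR (longitudinal parity) of every byte from the start byte through
    the last data byte, per the description above; preamble bytes (0xFF)
    are excluded from `frame`."""
    c = 0
    for b in frame:
        c ^= b
    return c

# hypothetical frame: start byte, short address, command 0, byte count 0
frame = bytes([0x02, 0x80, 0x00, 0x00])
print(hex(hart_checksum(frame)))   # 0x82
```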
Each manufacturer that participates in the HART convention is assigned an identification number. This number is communicated as part of the basic device identification command used when first connecting to a device. | https://en.wikipedia.org/wiki/Highway_Addressable_Remote_Transducer_Protocol |
In mathematics , Higman's lemma states that the set $\Sigma^*$ of finite sequences over a finite alphabet $\Sigma$, as partially ordered by the subsequence relation, is a well partial order . That is, if $w_1, w_2, \ldots \in \Sigma^*$ is an infinite sequence of words over a finite alphabet $\Sigma$, then there exist indices $i < j$ such that $w_i$ can be obtained from $w_j$ by deleting some (possibly none) symbols. More generally, the set of sequences is well-quasi-ordered even when $\Sigma$ is not necessarily finite but is itself well-quasi-ordered, and the subsequence ordering is generalized into an "embedding" quasi-order that allows the replacement of symbols by earlier symbols in the well-quasi-ordering of $\Sigma$. This is a special case of the later Kruskal's tree theorem . It is named after Graham Higman , who published it in 1952.
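A small Python illustration (not from the source) of the subsequence embedding and of searching a finite prefix of a sequence for the pair $i < j$ that the lemma guarantees; the function names are assumptions:

```python
def embeds(u: str, v: str) -> bool:
    """True if u can be obtained from v by deleting symbols,
    i.e. u is a subsequence of v."""
    it = iter(v)                     # membership tests consume the iterator
    return all(ch in it for ch in u)

def find_good_pair(words):
    """Higman's lemma guarantees any infinite sequence over a finite
    alphabet contains i < j with words[i] embedding into words[j];
    this searches a finite prefix for such a pair."""
    for j in range(len(words)):
        for i in range(j):
            if embeds(words[i], words[j]):
                return i, j
    return None

print(find_good_pair(["ba", "ab", "aab"]))   # (1, 2): "ab" embeds in "aab"
```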
Let $\Sigma$ be a well-quasi-ordered alphabet of symbols (in particular, $\Sigma$ could be finite and ordered by the identity relation). Suppose for a contradiction that there exist infinite bad sequences, i.e. infinite sequences of words $w_1, w_2, w_3, \ldots \in \Sigma^*$ such that no $w_i$ embeds into a later $w_j$. Then there exists an infinite bad sequence of words $W = (w_1, w_2, w_3, \ldots)$ that is minimal in the following sense: $w_1$ is a word of minimum length from among all words that start infinite bad sequences; $w_2$ is a word of minimum length from among all infinite bad sequences that start with $w_1$; $w_3$ is a word of minimum length from among all infinite bad sequences that start with $w_1, w_2$; and so on. In general, $w_i$ is a word of minimum length from among all infinite bad sequences that start with $w_1, \ldots, w_{i-1}$.
Since no $w_i$ can be the empty word , we can write $w_i = a_i z_i$ for $a_i \in \Sigma$ and $z_i \in \Sigma^*$. Since $\Sigma$ is well-quasi-ordered, the sequence of leading symbols $a_1, a_2, a_3, \ldots$ must contain an infinite increasing sequence $a_{i_1} \le a_{i_2} \le a_{i_3} \le \cdots$ with $i_1 < i_2 < i_3 < \cdots$.
Now consider the sequence of words $w_1, \ldots, w_{i_1 - 1}, z_{i_1}, z_{i_2}, z_{i_3}, \ldots$. Because $z_{i_1}$ is shorter than $w_{i_1} = a_{i_1} z_{i_1}$, this sequence is "more minimal" than $W$, and so it must contain a word $u$ that embeds into a later word $v$. But $u$ and $v$ cannot both be $w_j$'s, because then the original sequence $W$ would not be bad. Similarly, it cannot be that $u$ is a $w_j$ and $v$ is a $z_{i_k}$, because then $w_j$ would also embed into $w_{i_k} = a_{i_k} z_{i_k}$. And similarly, it cannot be that $u = z_{i_j}$ and $v = z_{i_k}$ with $j < k$, because then $w_{i_j} = a_{i_j} z_{i_j}$ would embed into $w_{i_k} = a_{i_k} z_{i_k}$ (using that $a_{i_j} \le a_{i_k}$ for the leading symbols). In every case we arrive at a contradiction.
The ordinal type of $\Sigma^*$ is related to the ordinal type of $\Sigma$ as follows: [ 1 ] [ 2 ]
$$o(\Sigma^*) = \begin{cases} \omega^{\omega^{o(\Sigma)-1}}, & o(\Sigma) \text{ finite}; \\ \omega^{\omega^{o(\Sigma)+1}}, & o(\Sigma) = \varepsilon_\alpha + n \text{ for some } \alpha \text{ and some finite } n; \\ \omega^{\omega^{o(\Sigma)}}, & \text{otherwise}. \end{cases}$$
Higman's lemma has been calibrated in reverse mathematics (in terms of subsystems of second-order arithmetic ) as equivalent to $ACA_0$ over the base theory $RCA_0$. [ 3 ]
This combinatorics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Higman's_lemma |
A higraph is a diagramming object that formalizes relations into a visual structure. It was developed by David Harel in 1988. Higraphs extend mathematical graphs by including notions of depth and orthogonality . In particular, nodes in a higraph can contain other nodes inside them, creating a hierarchy. The idea was initially developed for applications to databases , knowledge representation , and the behavioral specification of complex concurrent systems using the higraph-based language of statecharts.
Higraphs are widely used in industrial applications like UML . Recently they have been used by philosophers to formally study the use of diagrams in mathematical proofs and reasoning. | https://en.wikipedia.org/wiki/Higraph |
Louis-Marie Hilaire Bernigaud de Grange, Count ( Comte ) de Chardonnet (1 May 1839 – 11 March 1924) was a French engineer and industrialist from Besançon , and inventor of artificial silk .
In the late 1870s, Chardonnet was working with Louis Pasteur on a remedy to the epidemic that was destroying French silkworms . Failure to clean up a spill in the darkroom resulted in Chardonnet's discovery of nitrocellulose as a potential replacement for real silk . Realizing the value of such a discovery, Chardonnet began to develop his new product. [ 1 ]
He called his new invention "Chardonnet silk" ( soie de Chardonnet ) and displayed it in the Paris Exhibition of 1889. [ 2 ] Unfortunately, Chardonnet's material was extremely flammable, and was subsequently replaced with other, more stable materials.
He was the first to patent artificial silk, although Georges Audemars had invented a variety called rayon in 1855.
This French engineer or inventor biographical article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hilaire_de_Chardonnet |
Hilary Ann Priestley is a British mathematician. She is a professor at the University of Oxford and a Fellow of St Anne's College, Oxford , where she has been Tutor in Mathematics since 1972. [ 2 ]
Hilary Priestley introduced ordered separable topological spaces ; such topological spaces are now usually called Priestley spaces in her honour. [ 3 ] The term " Priestley duality " is also used for her application of these spaces in the representation theory of distributive lattices . [ 4 ]
This article about a mathematician is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hilary_Priestley |
Hilary Stevenson (12 January 1947 – 5 October 1994) was a food scientist and professor from Northern Ireland who made significant scientific contributions to the study of food irradiation in the 1980s and 1990s.
Mary Hill (Hilary) Morrison was born in Coleraine , County Londonderry on 12 January 1947. [ 1 ] She was the only daughter of James Stewart Morrison and Elizabeth Morrison (née Martin). [ 1 ] She was raised at Drumaduan in the townland of Ballyrashane and attended Coleraine High School . [ 2 ]
Stevenson attended Queen's University Belfast where she graduated with first class honours in Chemistry in 1969 and Agriculture in 1970. [ 1 ] [ 2 ] She completed her M.Sc. in Food Science and Microbiology in 1971 from the University of Strathclyde . [ 1 ] By 1981 she completed her doctoral studies, with her PhD dissertation focusing on metabolism of minerals by poultry. [ 1 ]
Stevenson joined the Department of Agriculture for Northern Ireland (DANI) after her undergraduate degrees, becoming an agricultural inspector. At this time she also lectured at the Loughry College of Agriculture and Food Technology . By 1974 she was promoted to senior scientific officer and had transferred to the Agricultural Chemistry Research Division. At this time she took up a position as university lecturer in the Department of Agriculture and Food Science at Queen's University Belfast. [ 2 ]
In the early years of her research career, Stevenson focused on the vitamin content of peas before and after processing, later shifting her interest to the absorption of minerals by sheep. [ 1 ] By the 1980s, her research had narrowed to poultry nutrition, publishing work on chickens and geese in UK agricultural journals such as British Poultry Science and the Journal of the Science of Food and Agriculture . [ 1 ] Stevenson's most significant scientific contribution was her research on food irradiation , which she took up in the 1980s. [ 2 ] She became an influential expert in the area, publishing over forty papers on the topic, collaborating with international partners on research studies, and organizing gatherings under the auspices of the United Nations. Her work on irradiation was recognized in 1993 when she was appointed to the Order of the British Empire (OBE) . [ 1 ]
Stevenson was an active member of several professional organizations, including the Northern Ireland branch of the Institute of Food Science and Technology , the Nutrition Society , and the journal British Poultry Science , and was a fellow of the Royal Society of Chemistry . [ 2 ]
She married Noel Stevenson in 1976 and the couple lived in Lisburn , County Antrim . [ 2 ] She died at the age of 47 on 5 October 1994, after a prolonged illness. [ 2 ] | https://en.wikipedia.org/wiki/Hilary_Stevenson |
In mathematics , Hilbert's Nullstellensatz (German for "theorem of zeros", or more literally, "zero-locus-theorem") is a theorem that establishes a fundamental relationship between geometry and algebra . This relationship is the basis of algebraic geometry . It relates algebraic sets to ideals in polynomial rings over algebraically closed fields . This relationship was discovered by David Hilbert , who proved the Nullstellensatz in his second major paper on invariant theory in 1893 (following his seminal 1890 paper in which he proved Hilbert's basis theorem ).
Let k {\displaystyle k} be a field (such as the rational numbers ) and K {\displaystyle K} be an algebraically closed field extension of k {\displaystyle k} (such as the complex numbers ). Consider the polynomial ring k [ X 1 , … , X n ] {\displaystyle k[X_{1},\ldots ,X_{n}]} and let I {\displaystyle I} be an ideal in this ring. The algebraic set V ( I ) {\displaystyle \mathrm {V} (I)} defined by this ideal consists of all n {\displaystyle n} -tuples x = ( x 1 , … , x n ) {\displaystyle \mathbf {x} =(x_{1},\dots ,x_{n})} in K n {\displaystyle K^{n}} such that f ( x ) = 0 {\displaystyle f(\mathbf {x} )=0} for all f {\displaystyle f} in I {\displaystyle I} . Hilbert's Nullstellensatz states that if p is some polynomial in k [ X 1 , … , X n ] {\displaystyle k[X_{1},\ldots ,X_{n}]} that vanishes on the algebraic set V ( I ) {\displaystyle \mathrm {V} (I)} , i.e. p ( x ) = 0 {\displaystyle p(\mathbf {x} )=0} for all x {\displaystyle \mathbf {x} } in V ( I ) {\displaystyle \mathrm {V} (I)} , then there exists a natural number r {\displaystyle r} such that p r {\displaystyle p^{r}} is in I {\displaystyle I} . [ 1 ]
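A minimal example showing why the exponent r is needed (my own illustration, not from the source):

```latex
% Take I = (X^2) in k[X].  Then V(I) = {0}, and p = X vanishes on V(I),
% yet p itself is not in I; only p^2 = X^2 lies in I, so r = 2 here.
\[
  I = (X^{2}), \qquad \mathrm{V}(I) = \{0\}, \qquad
  p = X \notin I, \qquad p^{2} \in I .
\]
```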
An immediate corollary is the weak Nullstellensatz : The ideal I ⊆ k [ X 1 , … , X n ] {\displaystyle I\subseteq k[X_{1},\ldots ,X_{n}]} contains 1 if and only if the polynomials in I {\displaystyle I} do not have any common zeros in K n . Specializing to the case k = K = C , n = 1 {\displaystyle k=K=\mathbb {C} ,n=1} , one immediately recovers a restatement of the fundamental theorem of algebra : a polynomial P in C [ X ] {\displaystyle \mathbb {C} [X]} has a root in C {\displaystyle \mathbb {C} } if and only if deg P ≠ 0. For this reason, the (weak) Nullstellensatz has been referred to as a generalization of the fundamental theorem of algebra for multivariable polynomials. [ 2 ] The weak Nullstellensatz may also be formulated as follows: if I is a proper ideal in k [ X 1 , … , X n ] , {\displaystyle k[X_{1},\ldots ,X_{n}],} then V( I ) cannot be empty , i.e. there exists a common zero for all the polynomials in the ideal in every algebraically closed extension of k . This is the reason for the name of the theorem, the full version of which can be proved easily from the 'weak' form using the Rabinowitsch trick . The assumption of considering common zeros in an algebraically closed field is essential here; for example, the elements of the proper ideal ( X 2 + 1) in R [ X ] {\displaystyle \mathbb {R} [X]} do not have a common zero in R . {\displaystyle \mathbb {R} .}
With the notation common in algebraic geometry, the Nullstellensatz can also be formulated as
$$\operatorname{I}(\operatorname{V}(J)) = {\sqrt {J}}$$
for every ideal J . Here, √ J {\displaystyle {\sqrt {J}}} denotes the radical of J and I( U ) is the ideal of all polynomials that vanish on the set U .
In this way, taking k = K {\displaystyle k=K} we obtain an order-reversing bijective correspondence between the algebraic sets in K n and the radical ideals of K [ X 1 , … , X n ] . {\displaystyle K[X_{1},\ldots ,X_{n}].} In fact, more generally, one has a Galois connection between subsets of the space and subsets of the algebra, where " Zariski closure " and "radical of the ideal generated" are the closure operators .
As a particular example, consider a point P = ( a 1 , … , a n ) ∈ K n {\displaystyle P=(a_{1},\dots ,a_{n})\in K^{n}} . Then I ( P ) = ( X 1 − a 1 , … , X n − a n ) {\displaystyle I(P)=(X_{1}-a_{1},\ldots ,X_{n}-a_{n})} . More generally,
Conversely, every maximal ideal of the polynomial ring K [ X 1 , … , X n ] {\displaystyle K[X_{1},\ldots ,X_{n}]} (note that K {\displaystyle K} is algebraically closed) is of the form ( X 1 − a 1 , … , X n − a n ) {\displaystyle (X_{1}-a_{1},\ldots ,X_{n}-a_{n})} for some a 1 , … , a n ∈ K {\displaystyle a_{1},\ldots ,a_{n}\in K} .
As another example, an algebraic subset W in K n is irreducible (in the Zariski topology) if and only if I ( W ) {\displaystyle I(W)} is a prime ideal.
There are many known proofs of the theorem. Some are non-constructive , such as the first one. Others are constructive, being based on algorithms for expressing 1 or $p^{r}$ as a linear combination of the generators of the ideal.
Zariski's lemma asserts that if a field is finitely generated as an associative algebra over a field K , then it is a finite field extension of K (that is, it is also finitely generated as a vector space ). If m {\displaystyle {\mathfrak {m}}} is a maximal ideal of K [ X 1 , … , X n ] {\displaystyle K[X_{1},\ldots ,X_{n}]} for algebraically closed K , then Zariski's lemma implies that K [ X 1 , … , X n ] / m {\displaystyle K[X_{1},\ldots ,X_{n}]/{\mathfrak {m}}} is a finite field extension of K , and thus, by algebraic closure, must be K . From this, it follows that there is an a = ( a 1 , … , a n ) ∈ K n {\displaystyle a=(a_{1},\dots ,a_{n})\in K^{n}} such that X i − a i ∈ m {\displaystyle X_{i}-a_{i}\in {\mathfrak {m}}} for i = 1 , … , n {\displaystyle i=1,\dots ,n} . In other words,
for some a = ( a 1 , … , a n ) ∈ K n {\displaystyle a=(a_{1},\dots ,a_{n})\in K^{n}} . But m a {\displaystyle {\mathfrak {m}}_{a}} is clearly maximal, so m = m a {\displaystyle {\mathfrak {m}}={\mathfrak {m}}_{a}} . This is the weak Nullstellensatz: every maximal ideal of K [ X 1 , … , X n ] {\displaystyle K[X_{1},\ldots ,X_{n}]} for algebraically closed K is of the form m a = ( X 1 − a 1 , … , X n − a n ) {\displaystyle {\mathfrak {m}}_{a}=(X_{1}-a_{1},\ldots ,X_{n}-a_{n})} for some a = ( a 1 , … , a n ) ∈ K n {\displaystyle a=(a_{1},\dots ,a_{n})\in K^{n}} . Because of this close relationship, some texts refer to Zariski's lemma as the weak Nullstellensatz or as the 'algebraic version' of the weak Nullstellensatz. [ 3 ] [ 4 ]
The full Nullstellensatz can also be proved directly from Zariski's lemma without employing the Rabinowitsch trick. Here is a sketch of a proof using this lemma. [ 5 ]
Let A = K [ X 1 , … , X n ] {\displaystyle A=K[X_{1},\ldots ,X_{n}]} ( K an algebraically closed field), J an ideal of A, and V = V ( J ) {\displaystyle V=\mathrm {V} (J)} the common zeros of J in K n {\displaystyle K^{n}} . Clearly, J ⊆ I ( V ) {\displaystyle {\sqrt {J}}\subseteq \mathrm {I} (V)} , where I ( V ) {\displaystyle \mathrm {I} (V)} is the ideal of polynomials in A vanishing on V . To show the opposite inclusion, let f ∉ J {\displaystyle f\not \in {\sqrt {J}}} . Then f ∉ p {\displaystyle f\not \in {\mathfrak {p}}} for some prime ideal p ⊇ J {\displaystyle {\mathfrak {p}}\supseteq J} in A . Let R = ( A / p ) [ 1 / f ¯ ] {\displaystyle R=(A/{\mathfrak {p}})[1/{\bar {f}}]} , where f ¯ {\displaystyle {\bar {f}}} is the image of f under the natural map A → A / p {\displaystyle A\to A/{\mathfrak {p}}} , and m {\displaystyle {\mathfrak {m}}} be a maximal ideal in R . By Zariski's lemma, R / m {\displaystyle R/{\mathfrak {m}}} is a finite extension of K , and thus, is K since K is algebraically closed. Let x i {\displaystyle x_{i}} be the images of X i {\displaystyle X_{i}} under the natural map A → A / p → R → R / m ≅ K {\displaystyle A\to A/{\mathfrak {p}}\to R\to R/{\mathfrak {m}}\cong K} . It follows that, by construction, x = ( x 1 , … , x n ) ∈ V {\displaystyle x=(x_{1},\ldots ,x_{n})\in V} but f ( x ) ≠ 0 {\displaystyle f(x)\neq 0} , so f ∉ I ( V ) {\displaystyle f\notin \mathrm {I} (V)} .
The following constructive proof of the weak form is one of the oldest proofs (the strong form results from the Rabinowitsch trick , which is also constructive).
The resultant of two polynomials depending on a variable x and other variables is a polynomial in the other variables that is in the ideal generated by the two polynomials, and has the following properties: if one of the polynomials is monic in x , every zero (in the other variables) of the resultant may be extended into a common zero of the two polynomials.
The proof is as follows.
If the ideal is principal , generated by a non-constant polynomial p that depends on x , one chooses arbitrary values for the other variables. The fundamental theorem of algebra asserts that this choice can be extended to a zero of p .
In the case of several polynomials p 1 , … , p n , {\displaystyle p_{1},\ldots ,p_{n},} a linear change of variables allows to suppose that p 1 {\displaystyle p_{1}} is monic in the first variable x . Then, one introduces n − 1 {\displaystyle n-1} new variables u 2 , … , u n , {\displaystyle u_{2},\ldots ,u_{n},} and one considers the resultant
As R is in the ideal generated by p 1 , … , p n , {\displaystyle p_{1},\ldots ,p_{n},} the same is true for the coefficients in R of the monomials in u 2 , … , u n . {\displaystyle u_{2},\ldots ,u_{n}.} So, if 1 is in the ideal generated by these coefficients, it is also in the ideal generated by p 1 , … , p n . {\displaystyle p_{1},\ldots ,p_{n}.} On the other hand, if these coefficients have a common zero, this zero can be extended to a common zero of p 1 , … , p n , {\displaystyle p_{1},\ldots ,p_{n},} by the above property of the resultant.
This proves the weak Nullstellensatz by induction on the number of variables.
A Gröbner basis is an algorithmic concept that was introduced in 1973 by Bruno Buchberger . It is presently fundamental in computational geometry . A Gröbner basis is a special generating set of an ideal from which most properties of the ideal can easily be extracted. Those that are related to the Nullstellensatz are the following:
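Gröbner bases make both the weak Nullstellensatz test (is 1 in the ideal?) and the ideal membership test effective. A minimal sketch using SymPy's groebner; the polynomials are my own examples, not taken from the article:

```python
from sympy import symbols, groebner

x, y = symbols('x y')

# Weak Nullstellensatz test: (x^2 + y^2 - 1, x^2 + y^2 - 4) contains the
# constant 3, hence 1, so its Groebner basis collapses to {1} and the two
# circles have no common zero.
G = groebner([x**2 + y**2 - 1, x**2 + y**2 - 4], x, y, order='lex')
print(G.exprs)  # expected: [1]

# Ideal membership: f lies in the ideal iff its normal form modulo the
# Groebner basis is zero.  Here f = (x^2 - y)*(x^2 + y) + (y^2 - x).
H = groebner([x**2 - y, y**2 - x], x, y, order='lex')
f = x**4 - x
_, remainder = H.reduce(f)
print(remainder)  # expected: 0
```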
The Nullstellensatz is subsumed by a systematic development of the theory of Jacobson rings , which are those rings in which every radical ideal is an intersection of maximal ideals. Given Zariski's lemma, proving the Nullstellensatz amounts to showing that if k is a field, then every finitely generated k -algebra R (necessarily of the form R = k [ t 1 , ⋯ , t n ] / I {\textstyle R=k[t_{1},\cdots ,t_{n}]/I} ) is Jacobson. More generally, one has the following theorem:
Other generalizations proceed from viewing the Nullstellensatz in scheme-theoretic terms as saying that for any field k and nonzero finitely generated k -algebra R , the morphism S p e c R → S p e c k {\textstyle \mathrm {Spec} \,R\to \mathrm {Spec} \,k} admits a section étale-locally (equivalently, after base change along S p e c L → S p e c k {\textstyle \mathrm {Spec} \,L\to \mathrm {Spec} \,k} for some finite field extension L / k {\textstyle L/k} ). In this vein, one has the following theorem:
Serge Lang gave an extension of the Nullstellensatz to the case of infinitely many generators:
In all of its variants, Hilbert's Nullstellensatz asserts that some polynomial g belongs or does not belong to an ideal generated, say, by f 1 , ..., f k ; we have g = f^r in the strong version, g = 1 in the weak form. This means the existence or non-existence of polynomials g 1 , ..., g k such that g = f 1 g 1 + ... + f k g k . The usual proofs of the Nullstellensatz are not constructive (non-effective), in the sense that they give no way to compute the g i .
It is thus a rather natural question to ask if there is an effective way to compute the g i (and the exponent r in the strong form) or to prove that they do not exist. To solve this problem, it suffices to provide an upper bound on the total degree of the g i : such a bound reduces the problem to a finite system of linear equations that may be solved by usual linear algebra techniques. Any such upper bound is called an effective Nullstellensatz .
A related problem is the ideal membership problem , which consists in testing if a polynomial belongs to an ideal. For this problem also, a solution is provided by an upper bound on the degree of the g i . A general solution of the ideal membership problem provides an effective Nullstellensatz, at least for the weak form.
In 1925, Grete Hermann gave an upper bound for the ideal membership problem that is doubly exponential in the number of variables. In 1982, Mayr and Meyer gave an example where the g i have a degree that is at least doubly exponential, showing that every general upper bound for the ideal membership problem is at least doubly exponential in the number of variables.
Since most mathematicians at the time assumed the effective Nullstellensatz was at least as hard as ideal membership, few mathematicians sought a bound better than double-exponential. In 1987, however, W. Dale Brownawell gave an upper bound for the effective Nullstellensatz that is simply exponential in the number of variables. [ 10 ] Brownawell's proof relied on analytic techniques valid only in characteristic 0, but, one year later, János Kollár gave a purely algebraic proof, valid in any characteristic, of a slightly better bound.
In the case of the weak Nullstellensatz, Kollár's bound is the following: [ 11 ]
If d is the maximum of the degrees of the f i , this bound may be simplified to
An improvement due to M. Sombra is [ 12 ]
His bound improves Kollár's as soon as at least two of the degrees that are involved are lower than 3.
We can formulate a certain correspondence between homogeneous ideals of polynomials and algebraic subsets of a projective space, called the projective Nullstellensatz , that is analogous to the affine one. To do that, we introduce some notations. Let R = k [ t 0 , … , t n ] . {\displaystyle R=k[t_{0},\ldots ,t_{n}].} The homogeneous ideal,
is called the maximal homogeneous ideal (see also irrelevant ideal ). As in the affine case, we let: for a subset S ⊆ P n {\displaystyle S\subseteq \mathbb {P} ^{n}} and a homogeneous ideal I of R ,
By f = 0 on S {\displaystyle f=0{\text{ on }}S} we mean: for all homogeneous coordinates ( a 0 : ⋯ : a n ) {\displaystyle (a_{0}:\cdots :a_{n})} of a point of S we have f ( a 0 , … , a n ) = 0 {\displaystyle f(a_{0},\ldots ,a_{n})=0} . This implies that the homogeneous components of f are also zero on S and thus that I P n ( S ) {\displaystyle \operatorname {I} _{\mathbb {P} ^{n}}(S)} is a homogeneous ideal. Equivalently, I P n ( S ) {\displaystyle \operatorname {I} _{\mathbb {P} ^{n}}(S)} is the homogeneous ideal generated by homogeneous polynomials f that vanish on S . Now, for any homogeneous ideal I ⊆ R + {\displaystyle I\subseteq R_{+}} , by the usual Nullstellensatz, we have:
and so, like in the affine case, we have: [ 13 ]
The Nullstellensatz also holds for the germs of holomorphic functions at a point of complex n -space C n . {\displaystyle \mathbb {C} ^{n}.} Precisely, for each open subset U ⊆ C n , {\displaystyle U\subseteq \mathbb {C} ^{n},} let O C n ( U ) {\displaystyle {\mathcal {O}}_{\mathbb {C} ^{n}}(U)} denote the ring of holomorphic functions on U ; then O C n {\displaystyle {\mathcal {O}}_{\mathbb {C} ^{n}}} is a sheaf on C n . {\displaystyle \mathbb {C} ^{n}.} The stalk O C n , 0 {\displaystyle {\mathcal {O}}_{\mathbb {C} ^{n},0}} at, say, the origin can be shown to be a Noetherian local ring that is a unique factorization domain .
If f ∈ O C n , 0 {\displaystyle f\in {\mathcal {O}}_{\mathbb {C} ^{n},0}} is a germ represented by a holomorphic function f ~ : U → C {\displaystyle {\widetilde {f}}:U\to \mathbb {C} } , then let V 0 ( f ) {\displaystyle V_{0}(f)} be the equivalence class of the set
where two subsets X , Y ⊆ C n {\displaystyle X,Y\subseteq \mathbb {C} ^{n}} are considered equivalent if X ∩ U = Y ∩ U {\displaystyle X\cap U=Y\cap U} for some neighborhood U of 0. Note V 0 ( f ) {\displaystyle V_{0}(f)} is independent of a choice of the representative f ~ . {\displaystyle {\widetilde {f}}.} For each ideal I ⊆ O C n , 0 , {\displaystyle I\subseteq {\mathcal {O}}_{\mathbb {C} ^{n},0},} let V 0 ( I ) {\displaystyle V_{0}(I)} denote V 0 ( f 1 ) ∩ ⋯ ∩ V 0 ( f r ) {\displaystyle V_{0}(f_{1})\cap \dots \cap V_{0}(f_{r})} for some generators f 1 , … , f r {\displaystyle f_{1},\ldots ,f_{r}} of I . It is well-defined; i.e., is independent of a choice of the generators.
For each subset X ⊆ C n {\displaystyle X\subseteq \mathbb {C} ^{n}} , let
It is easy to see that I 0 ( X ) {\displaystyle I_{0}(X)} is an ideal of O C n , 0 {\displaystyle {\mathcal {O}}_{\mathbb {C} ^{n},0}} and that I 0 ( X ) = I 0 ( Y ) {\displaystyle I_{0}(X)=I_{0}(Y)} if X ∼ Y {\displaystyle X\sim Y} in the sense discussed above.
The analytic Nullstellensatz then states: [ 14 ] for each ideal I ⊆ O C n , 0 {\displaystyle I\subseteq {\mathcal {O}}_{\mathbb {C} ^{n},0}} ,
$${\sqrt {I}} = I_{0}(V_{0}(I)),$$
where the left-hand side is the radical of I . | https://en.wikipedia.org/wiki/Hilbert's_Nullstellensatz |
In mathematics Hilbert's basis theorem asserts that every ideal of a polynomial ring over a field has a finite generating set (a finite basis in Hilbert's terminology).
In modern algebra , rings whose ideals have this property are called Noetherian rings . Every field and the ring of integers are Noetherian rings. So, the theorem can be generalized and restated as: every polynomial ring over a Noetherian ring is also Noetherian .
The theorem was stated and proved by David Hilbert in 1890 in his seminal article on invariant theory, [ 1 ] where he solved several problems on invariants. In this article, he also proved two other fundamental theorems on polynomials, the Nullstellensatz (zero-locus theorem) and the syzygy theorem (theorem on relations). These three theorems were the starting point of the interpretation of algebraic geometry in terms of commutative algebra . In particular, the basis theorem implies that every algebraic set is the intersection of a finite number of hypersurfaces .
Another aspect of this article had a great impact on mathematics of the 20th century; this is the systematic use of non-constructive methods . For example, the basis theorem asserts that every ideal has a finite generator set, but the original proof does not provide any way to compute it for a specific ideal. This approach was so astonishing for mathematicians of that time that the first version of the article was rejected by Paul Gordan , the greatest specialist of invariants of that time, with the comment "This is not mathematics. This is theology." [ 2 ] Later, he recognized "I have convinced myself that even theology has its merits." [ 3 ]
If R {\displaystyle R} is a ring , let R [ X ] {\displaystyle R[X]} denote the ring of polynomials in the indeterminate X {\displaystyle X} over R {\displaystyle R} . Hilbert proved that if R {\displaystyle R} is "not too large", in the sense that R {\displaystyle R} is Noetherian, then the same must be true for R [ X ] {\displaystyle R[X]} . Formally,
Hilbert's Basis Theorem. If R {\displaystyle R} is a Noetherian ring, then R [ X ] {\displaystyle R[X]} is a Noetherian ring. [ 4 ]
Corollary. If R {\displaystyle R} is a Noetherian ring, then R [ X 1 , … , X n ] {\displaystyle R[X_{1},\dotsc ,X_{n}]} is a Noetherian ring.
Hilbert proved the theorem (for the special case of multivariate polynomials over a field ) in the course of his proof of finite generation of rings of invariants . [ 1 ] The theorem is interpreted in algebraic geometry as follows: every algebraic set is the set of the common zeros of finitely many polynomials.
Hilbert's proof is highly non-constructive : it proceeds by induction on the number of variables, and, at each induction step uses the non-constructive proof for one variable less. Introduced more than eighty years later, Gröbner bases allow a direct proof that is as constructive as possible: Gröbner bases produce an algorithm for testing whether a polynomial belongs to the ideal generated by other polynomials. So, given an infinite sequence of polynomials, one can construct algorithmically the list of those polynomials that do not belong to the ideal generated by the preceding ones. Gröbner basis theory implies that this list is necessarily finite, and is thus a finite basis of the ideal. However, for deciding whether the list is complete, one must consider every element of the infinite sequence, which cannot be done in the finite time allowed to an algorithm.
Theorem. If R {\displaystyle R} is a left (resp. right) Noetherian ring , then the polynomial ring R [ X ] {\displaystyle R[X]} is also a left (resp. right) Noetherian ring.
Suppose a ⊆ R [ X ] {\displaystyle {\mathfrak {a}}\subseteq R[X]} is a non-finitely generated left ideal. Then by recursion (using the axiom of dependent choice ) there is a sequence of polynomials { f 0 , f 1 , … } {\displaystyle \{f_{0},f_{1},\ldots \}} such that if b n {\displaystyle {\mathfrak {b}}_{n}} is the left ideal generated by f 0 , … , f n − 1 {\displaystyle f_{0},\ldots ,f_{n-1}} then f n ∈ a ∖ b n {\displaystyle f_{n}\in {\mathfrak {a}}\setminus {\mathfrak {b}}_{n}} is of minimal degree . By construction, { deg ( f 0 ) , deg ( f 1 ) , … } {\displaystyle \{\deg(f_{0}),\deg(f_{1}),\ldots \}} is a non-decreasing sequence of natural numbers . Let a n {\displaystyle a_{n}} be the leading coefficient of f n {\displaystyle f_{n}} and let b {\displaystyle {\mathfrak {b}}} be the left ideal in R {\displaystyle R} generated by a 0 , a 1 , … {\displaystyle a_{0},a_{1},\ldots } . Since R {\displaystyle R} is Noetherian the chain of ideals
$$(a_{0}) \subseteq (a_{0}, a_{1}) \subseteq (a_{0}, a_{1}, a_{2}) \subseteq \cdots$$
must terminate. Thus b = ( a 0 , … , a N − 1 ) {\displaystyle {\mathfrak {b}}=(a_{0},\ldots ,a_{N-1})} for some integer N {\displaystyle N} . So in particular,
$$a_{N} = \sum_{i<N} u_{i} a_{i} \quad \text{for some } u_{i} \in R.$$
Now consider
$$g = \sum_{i<N} u_{i} X^{\deg(f_{N}) - \deg(f_{i})} f_{i},$$
whose leading term is equal to that of f N {\displaystyle f_{N}} ; moreover, g ∈ b N {\displaystyle g\in {\mathfrak {b}}_{N}} . However, f N ∉ b N {\displaystyle f_{N}\notin {\mathfrak {b}}_{N}} , which means that f N − g ∈ a ∖ b N {\displaystyle f_{N}-g\in {\mathfrak {a}}\setminus {\mathfrak {b}}_{N}} has degree less than f N {\displaystyle f_{N}} , contradicting the minimality.
Let a ⊆ R [ X ] {\displaystyle {\mathfrak {a}}\subseteq R[X]} be a left ideal. Let b {\displaystyle {\mathfrak {b}}} be the set of leading coefficients of members of a {\displaystyle {\mathfrak {a}}} . This is obviously a left ideal over R {\displaystyle R} , and so is finitely generated by the leading coefficients of finitely many members of a {\displaystyle {\mathfrak {a}}} ; say f 0 , … , f N − 1 {\displaystyle f_{0},\ldots ,f_{N-1}} . Let d {\displaystyle d} be the maximum of the set { deg ( f 0 ) , … , deg ( f N − 1 ) } {\displaystyle \{\deg(f_{0}),\ldots ,\deg(f_{N-1})\}} , and let b k {\displaystyle {\mathfrak {b}}_{k}} be the set of leading coefficients of members of a {\displaystyle {\mathfrak {a}}} , whose degree is ≤ k {\displaystyle \leq k} . As before, the b k {\displaystyle {\mathfrak {b}}_{k}} are left ideals over R {\displaystyle R} , and so are finitely generated by the leading coefficients of finitely many members of a {\displaystyle {\mathfrak {a}}} , say
with degrees ≤ k {\displaystyle \leq k} . Now let a ∗ ⊆ R [ X ] {\displaystyle {\mathfrak {a}}^{*}\subseteq R[X]} be the left ideal generated by:
We have a ∗ ⊆ a {\displaystyle {\mathfrak {a}}^{*}\subseteq {\mathfrak {a}}} and claim also a ⊆ a ∗ {\displaystyle {\mathfrak {a}}\subseteq {\mathfrak {a}}^{*}} . Suppose for the sake of contradiction this is not so. Then let h ∈ a ∖ a ∗ {\displaystyle h\in {\mathfrak {a}}\setminus {\mathfrak {a}}^{*}} be of minimal degree, and denote its leading coefficient by a {\displaystyle a} .
Thus our claim holds, and a = a ∗ {\displaystyle {\mathfrak {a}}={\mathfrak {a}}^{*}} which is finitely generated.
Note that the only reason we had to split into two cases was to ensure that the powers of X {\displaystyle X} multiplying the factors were non-negative in the constructions.
Let R {\displaystyle R} be a Noetherian commutative ring . Hilbert's basis theorem has some immediate corollaries .
Formal proofs of Hilbert's basis theorem have been verified through the Mizar project (see HILBASIS file ) and Lean (see ring_theory.polynomial ). | https://en.wikipedia.org/wiki/Hilbert's_basis_theorem |
Hilbert's eighteenth problem is one of the 23 Hilbert problems set out in a celebrated list compiled in 1900 by mathematician David Hilbert . It asks three separate questions about lattices and sphere packing in Euclidean space. [ 1 ]
The first part of the problem asks whether there are only finitely many essentially different space groups in n {\displaystyle n} -dimensional Euclidean space . This was answered affirmatively by Bieberbach .
The second part of the problem asks whether there exists a polyhedron which tiles 3-dimensional Euclidean space but is not the fundamental region of any space group; that is, which tiles but does not admit an isohedral (tile- transitive ) tiling. Such tiles are now known as anisohedral . In asking the problem in three dimensions, Hilbert was probably assuming that no such tile exists in two dimensions; this assumption later turned out to be incorrect.
The first such tile in three dimensions was found by Karl Reinhardt in 1928. The first example in two dimensions was found by Heesch in 1935. [ 2 ] The related einstein problem asks for a shape that can tile space but not with an infinite cyclic group of symmetries.
The third part of the problem asks for the densest sphere packing or packing of other specified shapes. Although it expressly includes shapes other than spheres, it is generally taken as equivalent to the Kepler conjecture .
In 1998, American mathematician Thomas Callister Hales gave a computer-aided proof of the Kepler conjecture. It shows that the most space-efficient way to pack spheres is in a pyramid shape. [ 3 ] | https://en.wikipedia.org/wiki/Hilbert's_eighteenth_problem |
In mathematics , Hilbert's fourth problem in the 1900 list of Hilbert's problems is a foundational question in geometry . In one statement derived from the original, it was to find — up to an isomorphism — all geometries that have an axiomatic system of the classical geometry ( Euclidean , hyperbolic and elliptic ), with those axioms of congruence that involve the concept of the angle dropped, and the 'triangle inequality', regarded as an axiom, added.
If one assumes the continuity axiom in addition, then, in the case of the Euclidean plane, we come to the problem posed by Jean Gaston Darboux : "To determine all the calculus of variation problems in the plane whose solutions are all the plane straight lines." [ 1 ]
There are several interpretations of the original statement of David Hilbert . Nevertheless, a solution was sought, with the German mathematician Georg Hamel being the first to contribute to the solution of Hilbert's fourth problem. [ 2 ]
A recognized solution was given by Soviet mathematician Aleksei Pogorelov in 1973. [ 3 ] [ 4 ] In 1976, Armenian mathematician Rouben V. Ambartzumian proposed another proof of Hilbert's fourth problem. [ 5 ]
Hilbert discusses the existence of non-Euclidean geometry and non-Archimedean geometry :
...a geometry in which all the axioms of ordinary euclidean geometry hold, and in particular all the congruence axioms except the one of the congruence of triangles (or all except the theorem of the equality of the base angles in the isosceles triangle), and in which, besides, the proposition that in every triangle the sum of two sides is greater than the third is assumed as a particular axiom. [ 6 ]
Due to the idea that a 'straight line' is defined as the shortest path between two points, he mentions how congruence of triangles is necessary for Euclid's proof that a straight line in the plane is the shortest distance between two points. He summarizes as follows:
The theorem of the straight line as the shortest distance between two points and the essentially equivalent theorem of Euclid about the sides of a triangle, play an important part not only in number theory but also in the theory of surfaces and in the calculus of variations. For this reason, and because I believe that the thorough investigation of the conditions for the validity of this theorem will throw a new light upon the idea of distance, as well as upon other elementary ideas, e. g., upon the idea of the plane, and the possibility of its definition by means of the idea of the straight line, the construction and systematic treatment of the geometries here possible seem to me desirable. [ 6 ]
Desargues's theorem :
If two triangles lie on a plane such that the lines connecting corresponding vertices of the triangles meet at one point, then the three points, at which the prolongations of three pairs of corresponding sides of the triangles intersect, lie on one line.
The necessary condition for solving Hilbert's fourth problem is the requirement that a metric space that satisfies the axioms of this problem should be Desarguesian, i.e.,:
For Desarguesian spaces Georg Hamel proved that every solution of Hilbert's fourth problem can be represented in a real projective space R P n {\displaystyle RP^{n}} or in a convex domain of R P n {\displaystyle RP^{n}} if one determines the congruence of segments by equality of their lengths in a special metric for which the lines of the projective space are geodesics.
Metrics of this type are called flat or projective .
Thus, the solution of Hilbert's fourth problem was reduced to the solution of the problem of constructive determination of all complete flat metrics.
Hamel solved this problem under the assumption of high regularity of the metric. [ 2 ] However, as simple examples show, the class of regular flat metrics is smaller than the class of all flat metrics. The axioms of geometries under consideration imply only a continuity of the metrics. Therefore, to solve Hilbert's fourth problem completely it is necessary to determine constructively all the continuous flat metrics.
Before 1900, the Cayley–Klein model of Lobachevsky geometry in the unit disk was already known; in this model geodesic lines are chords of the disk and the distance between points is defined as the logarithm of the cross-ratio of a quadruple of points. For two-dimensional Riemannian metrics, Eugenio Beltrami (1835–1900) proved that flat metrics are the metrics of constant curvature. [ 7 ]
For multidimensional Riemannian metrics this statement was proved by E. Cartan in 1930.
In 1890, for solving problems on the theory of numbers, Hermann Minkowski introduced a notion of the space that nowadays is called the finite-dimensional Banach space . [ 8 ]
Let F 0 ⊂ E n {\displaystyle F_{0}\subset \mathbb {E} ^{n}} be a compact convex hypersurface in a Euclidean space defined by
where the function F = F ( y ) {\displaystyle F=F(y)} satisfies the following conditions:
The length of the vector OA is defined by:
A space with this metric is called a Minkowski space .
The hypersurface F 0 {\displaystyle F_{0}} is convex and can be irregular. The defined metric is flat.
Let M and T M = { ( x , y ) | x ∈ M , y ∈ T x M } {\displaystyle TM=\{(x,y)|x\in M,y\in T_{x}M\}} be a smooth finite-dimensional manifold and its tangent bundle, respectively. The function F ( x , y ) : T M → [ 0 , + ∞ ) {\displaystyle F(x,y)\colon TM\rightarrow [0,+\infty )} is called Finsler metric if
( M , F ) {\displaystyle (M,F)} is then called a Finsler space .
Let U ⊂ ( E n + 1 , ‖ ⋅ ‖ E ) {\displaystyle U\subset (\mathbb {E} ^{n+1},\|\cdot \|_{\mathbb {E} })} be a bounded open convex set with the boundary of class C 2 and positive normal curvatures. Similarly to the Lobachevsky space, the hypersurface ∂ U {\displaystyle \partial U} is called the absolute of Hilbert's geometry. [ 9 ]
Hilbert's distance (see fig.) is defined by
The distance d U {\displaystyle d_{U}} induces the Hilbert–Finsler metric F U {\displaystyle F_{U}} on U . For any x ∈ U {\displaystyle x\in U} and y ∈ T x U {\displaystyle y\in T_{x}U} (see fig.), we have
The metric is symmetric and flat. In 1895, Hilbert introduced this metric as a generalization of the Lobachevsky geometry. If the hypersurface ∂ U {\displaystyle \partial U} is an ellipsoid, then we have the Lobachevsky geometry.
In 1930, Funk introduced a non-symmetric metric. It is defined in a domain bounded by a closed convex hypersurface and is also flat.
Georg Hamel was first to contribute to the solution of Hilbert's fourth problem. [ 2 ] He proved the following statement.
Theorem . A regular Finsler metric F ( x , y ) = F ( x 1 , … , x n , y 1 , … , y n ) {\displaystyle F(x,y)=F(x_{1},\ldots ,x_{n},y_{1},\ldots ,y_{n})} is flat if and only if it satisfies the conditions:
Consider the set of all oriented lines on a plane. Each line is defined by the parameters ρ {\displaystyle \rho } and φ , {\displaystyle \varphi ,} where ρ {\displaystyle \rho } is the distance from the origin to the line, and φ {\displaystyle \varphi } is the angle between the line and the x -axis. Then the set of all oriented lines is homeomorphic to a circular cylinder of radius 1 with the area element d S = d ρ d φ {\displaystyle dS=d\rho \,d\varphi } . Let γ {\displaystyle \gamma } be a rectifiable curve on a plane. Then the length of γ {\displaystyle \gamma } is
$$L = \frac{1}{4} \iint_{\Omega} n(\rho, \varphi)\, d\rho \, d\varphi ,$$
where Ω {\displaystyle \Omega } is the set of lines that intersect the curve γ {\displaystyle \gamma } , and n ( ρ , φ ) {\displaystyle n(\rho ,\varphi )} is the number of intersections of the line with γ {\displaystyle \gamma } .
Crofton proved this statement in 1870. [ 10 ]
A similar statement holds for a projective space.
In 1966, in his talk at the International Mathematical Congress in Moscow, Herbert Busemann introduced a new class of flat metrics. On a set of lines on the projective plane R P 2 {\displaystyle RP^{2}} he introduced a completely additive non-negative measure σ {\displaystyle \sigma } , which satisfies the following conditions:
If we consider a σ {\displaystyle \sigma } -metric in an arbitrary convex domain Ω {\displaystyle \Omega } of a projective space R P 2 {\displaystyle RP^{2}} , then condition 3) should be replaced by the following:
for any set H such that H is contained in Ω {\displaystyle \Omega } and the closure of H does not intersect the boundary of Ω {\displaystyle \Omega } , the inequality
Using this measure, the σ {\displaystyle \sigma } -metric on R P 2 {\displaystyle RP^{2}} is defined by
$$|x, y| = \sigma(\tau[x, y]),$$
where τ [ x , y ] {\displaystyle \tau [x,y]} is the set of straight lines that intersect the segment [ x , y ] {\displaystyle [x,y]} .
The triangle inequality for this metric follows from Pasch's theorem .
Theorem . The σ {\displaystyle \sigma } -metric on R P 2 {\displaystyle RP^{2}} is flat, i.e., the geodesics are the straight lines of the projective space.
But Busemann was far from the idea that σ {\displaystyle \sigma } -metrics exhaust all flat metrics. He wrote, "The freedom in the choice of a metric with given geodesics is for non-Riemannian metrics so great that it may be doubted whether there really exists a convincing characterization of all Desarguesian spaces" . [ 11 ]
The following theorem was proved by Pogorelov in 1973. [ 3 ] [ 4 ]
Theorem . Any two-dimensional continuous complete flat metric is a σ {\displaystyle \sigma } -metric.
Thus Hilbert's fourth problem for the two-dimensional case was completely solved.
A consequence of this is that if you glue two copies of the same planar convex shape boundary to boundary, with an angular twist between them, you get a 3D object without crease lines, the two faces being developable .
In 1976, Ambartsumian proposed another proof of Hilbert's fourth problem. [ 5 ]
His proof uses the fact that in the two-dimensional case the whole measure can be restored by its values on biangles, and thus be defined on triangles in the same way as the area of a triangle is defined on a sphere. Since the triangle inequality holds, it follows that this measure is positive on non-degenerate triangles and is determined on all Borel sets . However, this structure can not be generalized to higher dimensions because of Hilbert's third problem solved by Max Dehn .
In the two-dimensional case, polygons with the same volume are scissors-congruent. As was shown by Dehn this is not true for a higher dimension.
For the three-dimensional case, Pogorelov proved the following theorem.
Theorem. Any three-dimensional regular complete flat metric is a σ {\displaystyle \sigma } -metric.
However, in the three-dimensional case σ {\displaystyle \sigma } -measures can take either positive or negative values. The necessary and sufficient conditions for the regular metric defined by the function of the set σ {\displaystyle \sigma } to be flat are the following three conditions:
Moreover, Pogorelov showed that any complete continuous flat metric in the three-dimensional case is the limit of regular σ {\displaystyle \sigma } -metrics with the uniform convergence on any compact sub-domain of the metric's domain. He called them generalized σ {\displaystyle \sigma } -metrics.
Thus Pogorelov could prove the following statement.
Theorem. In the three-dimensional case any complete continuous flat metric is a σ {\displaystyle \sigma } -metric in generalized meaning.
Busemann, in his review to Pogorelov’s book "Hilbert’s Fourth Problem" wrote, "In the spirit of the time Hilbert restricted himself to n = 2, 3 and so does Pogorelov.
However, this has doubtless pedagogical reasons, because he addresses a wide class of readers. The real difference is between n = 2 and n > 2. Pogorelov's method works for n > 3, but requires greater technicalities". [ 12 ]
The multi-dimensional case of the Fourth Hilbert problem was studied by Szabo. [ 13 ] In 1986, he proved, as he wrote, the generalized Pogorelov theorem.
Theorem. Each n -dimensional Desarguesian space of the class C n + 2 , n > 2 {\displaystyle C^{n+2},n>2} , is generated by the Blaschke–Busemann construction.
A σ {\displaystyle \sigma } -measure that generates a flat measure has the following properties:
An example was given of a flat metric not generated by the Blaschke–Busemann construction. Szabo described all continuous flat metrics in terms of generalized functions.
Hilbert's fourth problem is also closely related to the properties of convex bodies . A convex polyhedron is called a zonotope if it is the Minkowski sum of segments. A convex body which is a limit of zonotopes in the Blaschke–Hausdorff metric is called a zonoid . For zonoids, the support function is represented by
$$h(u) = \int_{S^{n-1}} |\langle u, v \rangle| \, d\sigma(v), \qquad (1)$$
where σ ( u ) {\displaystyle \sigma (u)} is an even positive Borel measure on a sphere S n − 1 {\displaystyle S^{n-1}} .
The Minkowski space is generated by the Blaschke–Busemann construction if and only if the support function of the indicatrix has the form of (1), where σ ( u ) {\displaystyle \sigma (u)} is even and not necessarily of positive Borel measure. [ 14 ] The bodies bounded by such hypersurfaces are called generalized zonoids .
The octahedron | x 1 | + | x 2 | + | x 3 | ≤ 1 {\displaystyle |x_{1}|+|x_{2}|+|x_{3}|\leq 1} in the Euclidean space E 3 {\displaystyle E^{3}} is not a generalized zonoid. From the above statement it follows that the flat metric of Minkowski space with the norm ‖ x ‖ = max { | x 1 | , | x 2 | , | x 3 | } {\displaystyle \|x\|=\max\{|x_{1}|,|x_{2}|,|x_{3}|\}} is not generated by the Blaschke–Busemann construction.
A correspondence was found between the planar n -dimensional Finsler metrics and special symplectic forms on the Grassmann manifold G ( n + 1 , 2 ) {\displaystyle G(n+1,2)} in E n + 1 {\displaystyle E^{n+1}} . [ 15 ]
Periodic solutions of Hilbert's fourth problem have also been considered:
Another exposition of Hilbert's fourth problem can be found in work of Paiva. [ 17 ] | https://en.wikipedia.org/wiki/Hilbert's_fourth_problem |
In analysis , a branch of mathematics, Hilbert's inequality states that
$$\left| \sum_{m \neq n} \frac{u_{m} \overline{u_{n}}}{m - n} \right| \leq \pi \sum_{m} |u_{m}|^{2}$$
for any sequence u 1 , u 2 ,... of complex numbers. It was first demonstrated by David Hilbert with the constant 2 π instead of π ; the sharp constant was found by Issai Schur . It implies that the discrete Hilbert transform is a bounded operator in ℓ 2 .
Let ( u m ) be a sequence of complex numbers. If the sequence is infinite, assume that it is square-summable:
$$\sum_{m} |u_{m}|^{2} < \infty .$$
Hilbert's inequality (see Steele (2004) ) asserts that
$$\left| \sum_{m \neq n} \frac{u_{m} \overline{u_{n}}}{m - n} \right| \leq \pi \sum_{m} |u_{m}|^{2}.$$
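A quick numerical sanity check of the finite form of the inequality (a sketch of mine; the random sequence is arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 200
u = rng.normal(size=M) + 1j * rng.normal(size=M)

# Kernel K[m, n] = 1/(m - n) for m != n, 0 on the diagonal.
idx = np.arange(M)
diff = idx[:, None] - idx[None, :]
K = np.where(diff != 0, 1.0 / np.where(diff != 0, diff, 1), 0.0)

lhs = abs(u @ K @ np.conj(u))          # |sum_{m != n} u_m conj(u_n)/(m - n)|
rhs = np.pi * np.sum(np.abs(u) ** 2)   # pi * sum |u_m|^2
print(lhs <= rhs)  # expected: True
```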
In 1973, Montgomery & Vaughan reported several generalizations of Hilbert's inequality, considering the bilinear forms
and
where x 1 , x 2 ,..., x m are distinct real numbers modulo 1 (i.e. they belong to distinct classes in the quotient group R / Z ) and λ 1 ,..., λ m are distinct real numbers. Montgomery and Vaughan's generalizations of Hilbert's inequality are then given by
and
where
is the distance from s to the nearest integer, and min + denotes the smallest positive value. Moreover, if
then the following inequalities hold:
and | https://en.wikipedia.org/wiki/Hilbert's_inequality |
In number theory , Hilbert's irreducibility theorem , conceived by David Hilbert in 1892, states that every finite set of irreducible polynomials in a finite number of variables, with rational number coefficients, admits a common specialization of a proper subset of the variables to rational numbers such that all the polynomials remain irreducible. It is a prominent theorem in number theory.
Hilbert's irreducibility theorem. Let
$$f_{1}(X_{1}, \ldots, X_{r}, Y_{1}, \ldots, Y_{s}), \ldots, f_{n}(X_{1}, \ldots, X_{r}, Y_{1}, \ldots, Y_{s})$$
be irreducible polynomials in the ring
$$\mathbb{Q}(X_{1}, \ldots, X_{r})[Y_{1}, \ldots, Y_{s}].$$
Then there exists an r -tuple of rational numbers ( a 1 , ..., a r ) such that
$$f_{1}(a_{1}, \ldots, a_{r}, Y_{1}, \ldots, Y_{s}), \ldots, f_{n}(a_{1}, \ldots, a_{r}, Y_{1}, \ldots, Y_{s})$$
are irreducible in the ring
$$\mathbb{Q}[Y_{1}, \ldots, Y_{s}].$$
Remarks.
Hilbert's irreducibility theorem has numerous applications in number theory and algebra . For example:
It has been reformulated and generalized extensively, by using the language of algebraic geometry . See thin set (Serre) . | https://en.wikipedia.org/wiki/Hilbert's_irreducibility_theorem |
Hilbert's lemma was proposed at the end of the 19th century by mathematician David Hilbert . The lemma describes a property of the principal curvatures of surfaces. It may be used to prove Liebmann's theorem that a compact surface with constant Gaussian curvature must be a sphere. [ 1 ]
Given a three-dimensional manifold that is smooth and differentiable over a patch containing the point p , where k and m are defined as the principal curvatures and K ( x ) is the Gaussian curvature at a point x , if k has a maximum at p , m has a minimum at p , and k is strictly greater than m at p , then K ( p ) is a non-positive real number. [ 2 ]
| https://en.wikipedia.org/wiki/Hilbert's_lemma |
Hilbert's paradox of the Grand Hotel ( colloquial : Infinite Hotel Paradox or Hilbert's Hotel ) is a thought experiment which illustrates a counterintuitive property of infinite sets . It is demonstrated that a fully occupied hotel with infinitely many rooms may still accommodate additional guests, even infinitely many of them, and this process may be repeated infinitely often. The idea was introduced by David Hilbert in a 1925 lecture " Über das Unendliche ", reprinted in ( Hilbert 2013 , p.730), and was popularized through George Gamow 's 1947 book One Two Three... Infinity . [ 1 ] [ 2 ]
Hilbert imagines a hypothetical hotel with rooms numbered 1, 2, 3, and so on with no upper limit. This is called a countably infinite number of rooms. Initially every room is occupied, and yet new visitors arrive, each expecting their own room. A normal, finite hotel could not accommodate new guests once every room is full. However, it can be shown that the existing guests and newcomers — even an infinite number of them — can each have their own room in the infinite hotel.
With one additional guest, the hotel can accommodate them and the existing guests if infinitely many guests simultaneously move rooms. The guest currently in room 1 moves to room 2, the guest currently in room 2 to room 3, and so on, moving every guest from their current room n to room n +1. The infinite hotel has no final room, so every guest has a room to go to. After this, room 1 is empty and the new guest can be moved into that room. By repeating this procedure, it is possible to make room for any finite number of new guests. In general, when k guests seek a room, the hotel can apply the same procedure and move every guest from room n to room n + k .
It is also possible to accommodate a countably infinite number of new guests: just move the person occupying room 1 to room 2, the guest occupying room 2 to room 4, and, in general, the guest occupying room n to room 2 n (2 times n ), and all the odd-numbered rooms (which are countably infinite) will be free for the new guests.
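The reassignment maps for these first two cases are simple enough to write down explicitly; a minimal sketch (the function names are mine):

```python
def after_k_new_guests(n: int, k: int) -> int:
    """The guest in room n moves to room n + k, freeing rooms 1..k."""
    return n + k

def after_countably_many(n: int) -> int:
    """The guest in room n moves to room 2n, freeing every odd room."""
    return 2 * n

def new_guest_room(i: int) -> int:
    """The i-th newcomer of the infinite group takes the i-th odd room."""
    return 2 * i - 1

# Old and new guests never collide: evens versus odds.
old = {after_countably_many(n) for n in range(1, 1001)}
new = {new_guest_room(i) for i in range(1, 1001)}
print(old.isdisjoint(new))  # expected: True
```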
It is possible to accommodate countably infinitely many coachloads of countably infinite passengers each, by several different methods. Most methods depend on the seats in the coaches being already numbered (or use the axiom of countable choice ). In general any pairing function can be used to solve this problem. For each of these methods, consider a passenger's seat number on a coach to be n {\displaystyle n} , and their coach number to be c {\displaystyle c} , and the numbers n {\displaystyle n} and c {\displaystyle c} are then fed into the two arguments of the pairing function .
Send the guest in room i {\displaystyle i} to room 2 i {\displaystyle 2^{i}} , then put the first coach's load in rooms 3 n {\displaystyle 3^{n}} , the second coach's load in rooms 5 n {\displaystyle 5^{n}} ; in general for coach number c {\displaystyle c} we use the rooms p c n {\displaystyle p_{c}^{n}} where p c {\displaystyle p_{c}} is the c {\displaystyle c} th odd prime number . This solution leaves certain rooms empty (which may or may not be useful to the hotel); specifically, all numbers that are not prime powers , such as 15 or 847, will no longer be occupied. (So, strictly speaking, this shows that the number of arrivals is less than or equal to the number of vacancies created. It is easier to show, by an independent means, that the number of arrivals is also greater than or equal to the number of vacancies, and thus that they are equal , than to modify the algorithm to an exact fit.) (The algorithm works equally well if one interchanges n {\displaystyle n} and c {\displaystyle c} , but whichever choice is made, it must be applied uniformly throughout.)
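A sketch of this assignment using SymPy's prime (which returns the k-th prime, so prime(c + 1) is the c-th odd prime); the function names are mine:

```python
from sympy import prime

def existing_guest_room(i: int) -> int:
    """Guest currently in room i moves to room 2**i."""
    return 2 ** i

def coach_room(c: int, n: int) -> int:
    """Seat n of coach c goes to room p_c ** n, p_c the c-th odd prime."""
    return prime(c + 1) ** n

print(coach_room(1, 1), coach_room(2, 1))  # expected: 3 5 (rooms 3^1, 5^1)
print(coach_room(1, 2))                    # expected: 9  (room 3^2)
```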
Each person of a certain seat s {\displaystyle s} and coach c {\displaystyle c} can be put into room 2 s 3 c {\displaystyle 2^{s}3^{c}} (presuming c = 0 for the people already in the hotel, 1 for the first coach, etc.). Because every number has a unique prime factorization , it is easy to see all people will have a room, while no two people will end up in the same room. For example, the person in room 2592 ( 2 5 3 4 {\displaystyle 2^{5}3^{4}} ) was sitting in the 5th seat of the 4th coach. Like the prime powers method, this solution leaves certain rooms empty.
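The decoding direction makes the injectivity concrete; a minimal sketch of mine:

```python
def room_for(seat: int, coach: int) -> int:
    """Unique room 2**s * 3**c (coach 0 = guests already in the hotel)."""
    return 2 ** seat * 3 ** coach

def seat_and_coach(room: int) -> tuple[int, int]:
    """Invert the assignment by reading off the exponents of 2 and 3."""
    s = c = 0
    while room % 2 == 0:
        room //= 2
        s += 1
    while room % 3 == 0:
        room //= 3
        c += 1
    return s, c

print(room_for(5, 4))        # expected: 2592
print(seat_and_coach(2592))  # expected: (5, 4)
```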
This method can also easily be expanded for infinite nights, infinite entrances, etc. ( 2 s 3 c 5 n 7 e . . . {\displaystyle 2^{s}3^{c}5^{n}7^{e}...} )
For each passenger, compare the lengths of n {\displaystyle n} and c {\displaystyle c} as written in any positional numeral system , such as decimal . (Treat each hotel resident as being in coach #0.) If either number is shorter, add leading zeroes to it until both values have the same number of digits. Interleave the digits to produce a room number: its digits will be [first digit of coach number]-[first digit of seat number]-[second digit of coach number]-[second digit of seat number]-etc. The hotel (coach #0) guest in room number 1729 moves to room 01070209 (i.e., room 1,070,209). The passenger on seat 1234 of coach 789 goes to room 01728394 (i.e., room 1,728,394).
Unlike the prime powers solution, this one fills the hotel completely, and we can reconstruct a guest's original coach and seat by reversing the interleaving process. First add a leading zero if the room has an odd number of digits. Then de-interleave the number into two numbers: the coach number consists of the odd-numbered digits and the seat number is the even-numbered ones. Of course, the original encoding is arbitrary, and the roles of the two numbers can be reversed (seat-odd and coach-even), so long as it is applied consistently.
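Both directions of the digit-interleaving scheme in a short sketch (my own helper names):

```python
def interleave(coach: int, seat: int) -> int:
    """Zero-pad to equal width, then alternate coach and seat digits."""
    c, s = str(coach), str(seat)
    width = max(len(c), len(s))
    c, s = c.zfill(width), s.zfill(width)
    return int(''.join(a + b for a, b in zip(c, s)))

def deinterleave(room: int) -> tuple[int, int]:
    """Recover (coach, seat); pad to an even digit count first."""
    d = str(room)
    if len(d) % 2 == 1:
        d = '0' + d
    return int(d[0::2]), int(d[1::2])

print(interleave(0, 1729))     # expected: 1070209  (hotel guest, coach 0)
print(interleave(789, 1234))   # expected: 1728394
print(deinterleave(1728394))   # expected: (789, 1234)
```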
Those already in the hotel will be moved to room ( n 2 + n ) / 2 {\displaystyle (n^{2}+n)/2} , or the n {\displaystyle n} th triangular number . Those in a coach will be in room ( ( c + n − 1 ) 2 + c + n − 1 ) / 2 + n {\displaystyle ((c+n-1)^{2}+c+n-1)/2+n} , or the ( c + n − 1 ) {\displaystyle (c+n-1)} triangular number plus n {\displaystyle n} . In this way all the rooms will be filled by one, and only one, guest.
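A short check that the triangular-number assignment is a bijection on an initial block (a sketch; the enumeration bounds are arbitrary):

```python
def triangular_room(coach: int, seat: int) -> int:
    """Coach 0 = existing guests, whose seat is their current room number."""
    if coach == 0:
        return seat * (seat + 1) // 2
    k = coach + seat - 1
    return k * (k + 1) // 2 + seat

# Every room 1..300 is hit exactly once by some (coach, seat) pair.
rooms = sorted(triangular_room(c, s)
               for c in range(40) for s in range(1, 40)
               if triangular_room(c, s) <= 300)
print(rooms == list(range(1, 301)))  # expected: True
```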
This pairing function can be demonstrated visually by structuring the hotel as a one-room-deep, infinitely tall pyramid . The pyramid's topmost row is a single room: room 1; its second row is rooms 2 and 3; and so on. The column formed by the set of rightmost rooms will correspond to the triangular numbers. Once they are filled (by the hotel's redistributed occupants), the remaining empty rooms form the shape of a pyramid exactly identical to the original shape. Thus, the process can be repeated for each infinite set. Doing this one at a time for each coach would require an infinite number of steps, but by using the prior formulas, a guest can determine what their room "will be" once their coach has been reached in the process, and can simply go there immediately.
Let S := { ( a , b ) ∣ a , b ∈ N } {\displaystyle S:=\{(a,b)\mid a,b\in \mathbb {N} \}} . S {\displaystyle S} is countable since N {\displaystyle \mathbb {N} } is countable, hence we may enumerate its elements s 1 , s 2 , … {\displaystyle s_{1},s_{2},\dots } . Now if s n = ( a , b ) {\displaystyle s_{n}=(a,b)} , assign the b {\displaystyle b} th guest of the a {\displaystyle a} th coach to the n {\displaystyle n} th room (consider the guests already in the hotel as guests of the 0 {\displaystyle 0} th coach). Thus we have a function assigning each person to a room; furthermore, this assignment does not skip over any rooms.
Suppose the hotel is next to an ocean, and an infinite number of car ferries arrive, each bearing an infinite number of coaches, each with an infinite number of passengers. This is a situation involving three "levels" of infinity , and it can be solved by extensions of any of the previous solutions.
The prime factorization method can be applied by adding a new prime number for every additional layer of infinity ( 2 s 3 c 5 f {\displaystyle 2^{s}3^{c}5^{f}} , with f {\displaystyle f} the ferry).
The prime power solution can be applied with further exponentiation of prime numbers, resulting in very large room numbers even given small inputs. For example, the passenger in the second seat of the third bus on the second ferry (address 2-3-2) would raise the 2nd odd prime (5) to 49, which is the result of the 3rd odd prime (7) being raised to the power of his seat number (2). This room number would have over thirty decimal digits.
The interleaving method can be used with three interleaved "strands" instead of two. The passenger with the address 2-3-2 would go to room 232, while the one with the address 4935-198-82217 would go to room #008,402,912,391,587 (the leading zeroes can be removed).
Anticipating the possibility of any number of layers of infinite guests, the hotel may wish to assign rooms such that no guest will need to move, no matter how many guests arrive afterward. One solution is to convert each arrival's address into a binary number in which ones are used as separators at the start of each layer, while a number within a given layer (such as a guest's coach number) is represented with that many zeroes. Thus, a guest with the prior address 2-5-1-3-1 (five infinite layers) would go to room 10010000010100010 (decimal 73890).
As an added step in this process, one zero can be removed from each section of the number; in this example, the guest's new room is 101000011001 (decimal 2585). This ensures that every room could be filled by a hypothetical guest. If no infinite sets of guests arrive, then only rooms that are a power of two will be occupied.
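A sketch of both variants of this encoding (my own function names), reproducing the two room numbers above:

```python
def room_uncompressed(address: list[int]) -> int:
    """One '1' separator starting each layer, then n zeros for the value n."""
    bits = ''.join('1' + '0' * n for n in address)
    return int(bits, 2)

def room_compressed(address: list[int]) -> int:
    """Same encoding, but with one zero removed from each section."""
    bits = ''.join('1' + '0' * (n - 1) for n in address)
    return int(bits, 2)

print(room_uncompressed([2, 5, 1, 3, 1]))  # expected: 73890
print(room_compressed([2, 5, 1, 3, 1]))    # expected: 2585
```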
Hilbert's paradox is a veridical paradox : it leads to a counter-intuitive result that is provably true. The statements "there is a guest to every room" and "no more guests can be accommodated" are not equivalent when there are infinitely many rooms.
Initially, this state of affairs might seem to be counter-intuitive. The properties of infinite collections of things are quite different from those of finite collections of things. The paradox of Hilbert's Grand Hotel can be understood by using Cantor's theory of transfinite numbers . Thus, in an ordinary (finite) hotel with more than one room, the number of odd-numbered rooms is obviously smaller than the total number of rooms. However, in Hilbert's Grand Hotel, the quantity of odd-numbered rooms is not smaller than the total "number" of rooms. In mathematical terms, the cardinality of the subset containing the odd-numbered rooms is the same as the cardinality of the set of all rooms. Indeed, infinite sets are characterized as sets that have proper subsets of the same cardinality. For countable sets (sets with the same cardinality as the natural numbers ) this cardinality is ℵ 0 {\displaystyle \aleph _{0}} . [ 3 ]
Rephrased, for any countably infinite set, there exists a bijective function which maps the countably infinite set to the set of natural numbers, even if the countably infinite set contains the natural numbers. For example, the set of rational numbers—those numbers which can be written as a quotient of integers—contains the natural numbers as a subset, but is no bigger than the set of natural numbers since the rationals are countable: there is a bijection from the naturals to the rationals. | https://en.wikipedia.org/wiki/Hilbert's_paradox_of_the_Grand_Hotel |
In mathematics , Hilbert's program , formulated by German mathematician David Hilbert in the early 1920s, [ 1 ] was a proposed solution to the foundational crisis of mathematics , when early attempts to clarify the foundations of mathematics were found to suffer from paradoxes and inconsistencies. As a solution, Hilbert proposed to ground all existing theories to a finite, complete set of axioms , and provide a proof that these axioms were consistent . Hilbert proposed that the consistency of more complicated systems, such as real analysis , could be proven in terms of simpler systems. Ultimately, the consistency of all of mathematics could be reduced to basic arithmetic .
Gödel's incompleteness theorems , published in 1931, showed that Hilbert's program was unattainable for key areas of mathematics. In his first theorem, Gödel showed that any consistent system with a computable set of axioms which is capable of expressing arithmetic can never be complete: it is possible to construct a statement that can be shown to be true, but that cannot be derived from the formal rules of the system. In his second theorem, he showed that such a system could not prove its own consistency, so it certainly cannot be used to prove the consistency of anything stronger. This refuted Hilbert's assumption that a finitistic system could be used to prove its own consistency, let alone the consistency of anything stronger.
The main goal of Hilbert's program was to provide secure foundations for all mathematics. In particular, this should include: a formalization of all mathematics in a precise formal language; completeness, a proof that all true mathematical statements can be proved in the formalism; consistency, a proof (preferably using only finitistic reasoning about finite mathematical objects) that no contradiction can be obtained in the formalism; conservation, a proof that any result about "real objects" obtained using reasoning about "ideal objects" can be proved without them; and decidability, an algorithm for deciding the truth or falsity of any mathematical statement.
Kurt Gödel showed that most of the goals of Hilbert's program were impossible to achieve, at least if interpreted in the most obvious way. Gödel's second incompleteness theorem shows that any consistent theory powerful enough to encode addition and multiplication of integers cannot prove its own consistency. This presents a challenge to Hilbert's program.
Many current lines of research in mathematical logic , such as proof theory and reverse mathematics , can be viewed as natural continuations of Hilbert's original program. Much of it can be salvaged by changing its goals slightly (Zach 2005), and with suitable modifications some of it has been successfully completed. | https://en.wikipedia.org/wiki/Hilbert's_program |
Hilbert's 16th problem was posed by David Hilbert at the Paris conference of the International Congress of Mathematicians in 1900, as part of his list of 23 problems in mathematics . [ 1 ]
The original problem was posed as the Problem of the topology of algebraic curves and surfaces ( Problem der Topologie algebraischer Kurven und Flächen ).
Actually the problem consists of two similar problems in different branches of mathematics:
The first problem is still unsolved for n = 8. It is therefore what is usually meant when talking about Hilbert's sixteenth problem in real algebraic geometry . The second problem also remains unsolved: no upper bound for the number of limit cycles is known for any n > 1, and this is what is usually meant by Hilbert's sixteenth problem in the field of dynamical systems .
The Spanish Royal Society for Mathematics published an explanation of Hilbert's sixteenth problem. [ 2 ]
In 1876, Harnack investigated algebraic curves in the real projective plane and found that curves of degree n could have no more than ( n − 1 ) ( n − 2 ) 2 + 1 {\displaystyle {\frac {(n-1)(n-2)}{2}}+1} separate connected components . Furthermore, he showed how to construct curves that attained that upper bound, and thus that it was the best possible bound. Curves with that number of components are called M-curves .
Hilbert had investigated the M-curves of degree 6, and found that the 11 components always were grouped in a certain way. His challenge to the mathematical community now was to completely investigate the possible configurations of the components of the M-curves.
Furthermore, he requested a generalization of Harnack's curve theorem to algebraic surfaces and a similar investigation of surfaces with the maximum number of components.
Here we are going to consider polynomial vector fields in the real plane, that is, a system of differential equations of the form: d x d t = P ( x , y ) , d y d t = Q ( x , y ) , {\displaystyle {\frac {dx}{dt}}=P(x,y),\qquad {\frac {dy}{dt}}=Q(x,y),}
where both P and Q are real polynomials of degree n .
These polynomial vector fields were studied by Poincaré , who had the idea of abandoning the search for finding exact solutions to the system, and instead attempted to study the qualitative features of the collection of all possible solutions.
Among many important discoveries, he found that the limit set of such a solution need not be a stationary point , but could rather be a periodic solution. Such solutions are called limit cycles .
The second part of Hilbert's 16th problem is to decide an upper bound for the number of limit cycles in polynomial vector fields of degree n and, similar to the first part, investigate their relative positions.
It was shown in 1991/1992 by Yulii Ilyashenko and Jean Écalle that every polynomial vector field in the plane has only finitely many limit cycles (a 1923 article by Henri Dulac claiming a proof of this statement had been shown to contain a gap in 1981). This statement is not obvious, since it is easy to construct smooth ( C ∞ {\displaystyle C^{\infty }} ) vector fields in the plane with infinitely many concentric limit cycles. [ 3 ]
The question whether there exists a finite upper bound H ( n ) for the number of limit cycles of planar polynomial vector fields of degree n remains unsolved for any n > 1. ( H (1) = 0 since linear vector fields do not have limit cycles.) Evgenii Landis and Ivan Petrovsky claimed a solution in the 1950s, but it was shown to be wrong in the early 1960s. Quadratic plane vector fields with four limit cycles are known. [ 3 ] An example of numerical visualization of four limit cycles in a quadratic plane vector field can be found in [ 4 ] [ 5 ] . In general, the difficulties in estimating the number of limit cycles by numerical integration are due to the nested limit cycles with very narrow regions of attraction, which are hidden attractors , and semi-stable limit cycles.
In his speech, Hilbert presented the problems as: [ 6 ]
The upper bound of closed and separate branches of an algebraic curve of degree n was decided by Harnack (Mathematische Annalen, 10); from this arises the further question as to the relative positions of the branches in the plane.
As to curves of degree 6, I have – admittedly in a rather elaborate way – convinced myself that the 11 branches that they can have according to Harnack can never all be separate; rather, there must exist one branch which has another branch running in its interior and nine branches running in its exterior, or the other way around. It seems to me that a thorough investigation of the relative positions of the separate branches when their number is the maximum is of great interest, and similarly the corresponding investigation of the number, shape and position of the sheets of an algebraic surface in space – it is not yet even known how many sheets a surface of degree 4 in three-dimensional space can maximally have. (cf. Rohn, Flächen vierter Ordnung, Preisschriften der Fürstlich Jablonowskischen Gesellschaft, Leipzig 1886)
Hilbert continues: [ 6 ]
Following this purely algebraic problem I would like to raise a question that, it seems to me, can be attacked by the same method of continuous coefficient changing, and whose answer is of similar importance to the topology of the families of curves defined by differential equations – that is the question of the upper bound and position of the Poincaré boundary cycles (cycles limites) for a differential equation of first order of the form: d y d x = Y X {\displaystyle {\frac {dy}{dx}}={\frac {Y}{X}}}
where X , Y are integral rational functions of the n th degree in x , y , or written homogeneously: X ( y d z − z d y ) + Y ( z d x − x d z ) + Z ( x d y − y d x ) = 0 , {\displaystyle X(y\,dz-z\,dy)+Y(z\,dx-x\,dz)+Z(x\,dy-y\,dx)=0,}
where X , Y , Z mean integral rational homogeneous functions of the n th degree in x , y , z , and the latter are to be considered functions of the parameter t . | https://en.wikipedia.org/wiki/Hilbert's_sixteenth_problem |
Hilbert's sixth problem is to axiomatize those branches of physics in which mathematics is prevalent. It occurs on the widely cited list of Hilbert's problems in mathematics that he presented in the year 1900. [ 1 ] In its common English translation, the explicit statement reads: "To treat in the same manner, by means of axioms, those physical sciences in which mathematics plays an important part; in the first rank are the theory of probabilities and mechanics."
Hilbert gave a further explanation of this problem and its possible specific forms.
David Hilbert himself devoted much of his research to the sixth problem; [ 3 ] in particular, he worked in those fields of physics that arose after he stated the problem.
In the 1910s, celestial mechanics evolved into general relativity . Hilbert and Emmy Noether corresponded extensively with Albert Einstein on the formulation of the theory. [ 4 ]
In the 1920s, mechanics of microscopic systems evolved into quantum mechanics . Hilbert, with the assistance of John von Neumann , L. Nordheim , and E. P. Wigner , worked on the axiomatic basis of quantum mechanics (see Hilbert space ). [ 5 ] At the same time, but independently, Dirac formulated quantum mechanics in a way that is close to an axiomatic system, as did Hermann Weyl with the assistance of Erwin Schrödinger .
In the 1930s, probability theory was put on an axiomatic basis by Andrey Kolmogorov , using measure theory .
Since the 1960s, following the work of Arthur Wightman and Rudolf Haag , modern quantum field theory can also be considered close to an axiomatic description.
In the 1990s-2000s the problem of "the limiting processes, there merely indicated, which lead from the atomistic view to the laws of motion of continua" was approached by many groups of mathematicians. Main recent results are summarized by Laure Saint-Raymond , [ 6 ] Marshall Slemrod, [ 7 ] Alexander N. Gorban and Ilya Karlin . [ 8 ]
In 2025, a group of mathematicians made the claim that they had derived the full set of fluid equations, including the compressible Euler and incompressible Navier-Stokes-Fourier equations , directly from Newton's laws. As of May 2025, their work is being examined by other mathematicians. [ 9 ] [ 10 ]
Hilbert’s sixth problem was a proposal to expand the axiomatic method outside the existing mathematical disciplines, to physics and beyond. This expansion requires the development of a semantics of physics, including a formal analysis of the notion of physical reality. [ 11 ] Two fundamental theories capture the majority of the fundamental phenomena of physics: general relativity and quantum field theory.
Hilbert considered general relativity an essential part of the foundation of physics. [ 13 ] [ 14 ] However, quantum field theory is not logically consistent with general relativity, indicating the need for a still-unknown theory of quantum gravity , in which the semantics of physics is expected to play a central role. Hilbert's sixth problem thus remains open. [ 15 ] Nevertheless, in recent years it has fostered research on the foundations of physics with a particular emphasis on the role of logic and the precision of language, leading to some interesting results: a direct realization of the uncertainty principle from Cauchy's definition of 'derivative' and the unravelling of a semantic obstacle in the path of any axiomatic theory of quantum gravity, [ 16 ] the unravelling of a logical tautology in the quantum tests of the equivalence principle, [ 17 ] and the formal unprovability of the first of Maxwell's equations. [ 18 ] Regarding the problem of "developing mathematically the limiting processes [...] which lead from the atomistic view to the laws of motion of continua", an active area of research is focused on deriving the continuum equations of fluid motion and of elastic solids from atomistic particle-based descriptions. For example, a derivation of the equations of laminar viscous flow and of viscoelasticity has been achieved starting all the way from an atomistic, microscopically reversible Hamiltonian, [ 19 ] and this has subsequently been generalized from classical mechanics to relativistic mechanics. | https://en.wikipedia.org/wiki/Hilbert's_sixth_problem |
Hilbert's tenth problem is the tenth on the list of mathematical problems that the German mathematician David Hilbert posed in 1900. It is the challenge to provide a general algorithm that, for any given Diophantine equation (a polynomial equation with integer coefficients and a finite number of unknowns), can decide whether the equation has a solution with all unknowns taking integer values.
For example, the Diophantine equation 3 x 2 − 2 x y − y 2 z − 7 = 0 {\displaystyle 3x^{2}-2xy-y^{2}z-7=0} has an integer solution: x = 1 , y = 2 , z = − 2 {\displaystyle x=1,\ y=2,\ z=-2} . By contrast, the Diophantine equation x 2 + y 2 + 1 = 0 {\displaystyle x^{2}+y^{2}+1=0} has no such solution.
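Both claims are easy to check by direct computation, as a quick Python sketch (purely illustrative) shows:

```python
# Verify the stated integer solution of the first equation.
x, y, z = 1, 2, -2
assert 3*x**2 - 2*x*y - y**2*z - 7 == 0

# The second equation has no solutions at all, since
# x^2 + y^2 + 1 >= 1 for all integers x and y; a finite
# search over a box of candidates illustrates this.
assert all(a*a + b*b + 1 != 0
           for a in range(-50, 51) for b in range(-50, 51))
print("checks passed")
```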
Hilbert's tenth problem has been solved, and it has a negative answer: such a general algorithm cannot exist. This is the result of combined work of Martin Davis , Yuri Matiyasevich , Hilary Putnam and Julia Robinson that spans 21 years, with Matiyasevich completing the theorem in 1970. [ 1 ] [ 2 ] [ 3 ] The theorem is now known as Matiyasevich's theorem or the MRDP theorem (an initialism for the surnames of the four principal contributors to its solution).
When all coefficients and variables are restricted to be positive integers, the related problem of polynomial identity testing becomes a decidable (exponentiation-free) variation of Tarski's high school algebra problem , sometimes denoted H S I ¯ . {\displaystyle {\overline {HSI}}.} [ 4 ]
Hilbert formulated the problem as follows: [ 5 ]
Given a Diophantine equation with any number of unknown quantities and with rational integral numerical coefficients: To devise a process according to which it can be determined in a finite number of operations whether the equation is solvable in rational integers.
The words "process" and "finite number of operations" have been taken to mean that Hilbert was asking for an algorithm . The term "rational integral" simply refers to the integers, positive, negative or zero: 0, ±1, ±2, ... . So Hilbert was asking for a general algorithm to decide whether a given polynomial Diophantine equation with integer coefficients has a solution in integers.
Hilbert's problem is not concerned with finding the solutions. It only asks whether, in general, we can decide whether one or more solutions exist. The answer to this question is negative, in the sense that no "process can be devised" for answering that question. In modern terms, Hilbert's 10th problem is an undecidable problem .
In a Diophantine equation, there are two kinds of variables: the parameters and the unknowns. The Diophantine set consists of the parameter assignments for which the Diophantine equation is solvable. A typical example is the linear Diophantine equation in two unknowns, a 1 x + a 2 y = a 3 , {\displaystyle a_{1}x+a_{2}y=a_{3},}
where the equation is solvable if and only if the greatest common divisor gcd ( a 1 , a 2 ) {\displaystyle \gcd(a_{1},a_{2})} evenly divides a 3 {\displaystyle a_{3}} . The set of all ordered triples ( a 1 , a 2 , a 3 ) {\displaystyle (a_{1},a_{2},a_{3})} satisfying this restriction is called the Diophantine set defined by a 1 x + a 2 y = a 3 {\displaystyle a_{1}x+a_{2}y=a_{3}} .
In these terms, Hilbert's tenth problem asks whether there is an algorithm to determine if the Diophantine set corresponding to an arbitrary polynomial is non-empty.
The problem is generally understood in terms of the natural numbers (that is, the non-negative integers) rather than arbitrary integers. However, the two problems are equivalent: any general algorithm that can decide whether a given Diophantine equation has an integer solution could be modified into an algorithm that decides whether a given Diophantine equation has a natural-number solution, and vice versa. By Lagrange's four-square theorem , every natural number is the sum of the squares of four integers, so we could rewrite every natural-valued parameter in terms of the sum of the squares of four new integer-valued parameters. Similarly, since every integer is the difference of two natural numbers, we could rewrite every integer parameter as the difference of two natural parameters. [ 3 ] Furthermore, we can always rewrite a system of simultaneous equations p 1 = 0 , … , p k = 0 {\displaystyle p_{1}=0,\ldots ,p_{k}=0} (where each p i {\displaystyle p_{i}} is a polynomial) as a single equation p 1 2 + ⋯ + p k 2 = 0 {\displaystyle p_{1}^{\,2}+\cdots +p_{k}^{\,2}=0} .
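The four-square rewriting in particular is completely mechanical. A brute-force Python sketch (illustrative only; a real implementation would use a faster decomposition method):

```python
from itertools import product

def four_squares(n):
    """Return integers (a, b, c, d) with a^2 + b^2 + c^2 + d^2 == n.

    Such a decomposition exists for every natural number n by
    Lagrange's four-square theorem; here it is found by brute force.
    """
    bound = int(n**0.5) + 1
    for a, b, c, d in product(range(bound + 1), repeat=4):
        if a*a + b*b + c*c + d*d == n:
            return a, b, c, d

print(four_squares(7))   # (1, 1, 1, 2)
print(four_squares(30))  # (0, 1, 2, 5)
```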
A recursively enumerable set can be characterized as one for which there exists an algorithm that will ultimately halt when a member of the set is provided as input, but may continue indefinitely when the input is a non-member. It was the development of computability theory (also known as recursion theory) that provided a precise explication of the intuitive notion of algorithmic computability, thus making the notion of recursive enumerability perfectly rigorous. It is evident that Diophantine sets are recursively enumerable (also known as semi-decidable). This is because one can arrange all possible tuples of values of the unknowns in a sequence and then, for a given value of the parameter(s), test these tuples, one after another, to see whether they are solutions of the corresponding equation. The unsolvability of Hilbert's tenth problem is a consequence of the surprising fact that the converse is true:
Every recursively enumerable set is Diophantine.
This result is variously known as Matiyasevich's theorem (because he provided the crucial step that completed the proof) and the MRDP theorem (for Yuri Matiyasevich , Julia Robinson , Martin Davis , and Hilary Putnam ). Because there exists a recursively enumerable set that is not computable, the unsolvability of Hilbert's tenth problem is an immediate consequence. In fact, more can be said: there is a polynomial
p ( a , x 1 , … , x n ) {\displaystyle p(a,x_{1},\ldots ,x_{n})} with integer coefficients such that the set of values of a {\displaystyle a} for which the equation p ( a , x 1 , … , x n ) = 0 {\displaystyle p(a,x_{1},\ldots ,x_{n})=0}
has solutions in natural numbers is not computable. So, not only is there no general algorithm for testing Diophantine equations for solvability, but there is none even for this family of single-parameter equations.
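The recursive enumerability of Diophantine sets described above is easy to make concrete. A Python sketch of the semi-decision procedure (illustrative; it halts exactly when a solution exists):

```python
from itertools import count, product

def semi_decide(poly, n_vars):
    """Search natural-number tuples in ever larger boxes; halt with a
    witness iff the equation poly(...) == 0 has a solution."""
    for bound in count():
        for xs in product(range(bound + 1), repeat=n_vars):
            if poly(*xs) == 0:
                return xs

# x^2 - y - 2 = 0 has the natural-number solution x = 2, y = 2.
print(semi_decide(lambda x, y: x*x - y - 2, 2))  # (2, 2)
```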
In the 1950s, Martin Davis showed that every recursively enumerable set S {\displaystyle S} can be put into the normal form a ∈ S ⟺ ∃ y ∀ k ≤ y ∃ x 1 , … , x n [ p ( a , k , y , x 1 , … , x n ) = 0 ] , {\displaystyle a\in S\iff \exists y\,\forall k\leq y\,\exists x_{1},\ldots ,x_{n}\,[p(a,k,y,x_{1},\ldots ,x_{n})=0],} where p {\displaystyle p} is a polynomial with integer coefficients. Purely formally, it is only the bounded universal quantifier that stands in the way of this being a definition of a Diophantine set.
Using a non-constructive but easy proof, he derives as a corollary to this normal form that the set of Diophantine sets is not closed under complementation, by showing that there exists a Diophantine set whose complement is not Diophantine. Because the recursively enumerable sets also are not closed under complementation, he conjectures that the two classes are identical.
Using properties of the Pell equation, Julia Robinson proved that her hypothesis J.R. (the existence of a Diophantine relation of roughly exponential growth) implies that EXP (the exponential relation) is Diophantine, as are the binomial coefficients, the factorial, and the primes.
The Matiyasevich/MRDP theorem relates two notions—one from computability theory, the other from number theory—and has some surprising consequences. Perhaps the most surprising is the existence of a universal Diophantine equation: there is a polynomial p ( a , n , x 1 , … , x k ) {\displaystyle p(a,n,x_{1},\ldots ,x_{k})} such that, for every Diophantine set S {\displaystyle S} , there is a number n 0 {\displaystyle n_{0}} with S = { a ∣ ∃ x 1 , … , x k [ p ( a , n 0 , x 1 , … , x k ) = 0 ] } . {\displaystyle S=\{a\mid \exists x_{1},\ldots ,x_{k}\,[p(a,n_{0},x_{1},\ldots ,x_{k})=0]\}.}
This is true simply because Diophantine sets are exactly the recursively enumerable sets, which in turn are exactly the sets that can be recognized by Turing machines . It is a well-known property of Turing machines that there exist universal Turing machines, capable of executing any algorithm.
Hilary Putnam has pointed out that for any Diophantine set S {\displaystyle S} of positive integers, there is a polynomial q ( x , y 1 , … , y k ) {\displaystyle q(x,y_{1},\ldots ,y_{k})}
such that S {\displaystyle S} consists of exactly the positive numbers among the values assumed by q {\displaystyle q} as
the variables x , y 1 , … , y k {\displaystyle x,y_{1},\ldots ,y_{k}}
range over all natural numbers. This can be seen as follows: If p ( x , y 1 , … , y k ) = 0 {\displaystyle p(x,y_{1},\ldots ,y_{k})=0}
provides a Diophantine definition of S {\displaystyle S} , then it suffices to set q = x ( 1 − p 2 ) : {\displaystyle q=x\left(1-p^{2}\right):} if p = 0 {\displaystyle p=0} then q = x {\displaystyle q=x} , while if p ≠ 0 {\displaystyle p\neq 0} then 1 − p 2 ≤ 0 {\displaystyle 1-p^{2}\leq 0} and so q ≤ 0 {\displaystyle q\leq 0} .
So, for example, there is a polynomial for which the positive part of its range is exactly the prime numbers. (On the other hand, no nonconstant polynomial can take only prime values.) The same holds for other recursively enumerable sets of natural numbers: the factorial, the binomial coefficients, the Fibonacci numbers, etc.
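As a concrete illustration, a Python sketch of Putnam's construction for the set of composite numbers (the Diophantine definition x = (y+2)(z+2) and all names here are chosen for illustration):

```python
from itertools import product

# x is composite iff p(x, y, z) = x - (y + 2)(z + 2) = 0 for some naturals y, z.
def p(x, y, z):
    return x - (y + 2) * (z + 2)

# Putnam's trick: the positive values of q are exactly the composites.
def q(x, y, z):
    return x * (1 - p(x, y, z) ** 2)

values = {q(x, y, z) for x, y, z in product(range(30), repeat=3)}
print(sorted(v for v in values if v > 0)[:10])
# [4, 6, 8, 9, 10, 12, 14, 15, 16, 18]
```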
Other applications concern what logicians refer to as Π 1 0 {\displaystyle \Pi _{1}^{0}} propositions, sometimes also called propositions of Goldbach type . [ b ] These are like Goldbach's conjecture , in stating that all natural numbers possess a certain property that is algorithmically checkable for each particular number. [ c ] The Matiyasevich/MRDP theorem implies that each such proposition is equivalent to a statement that asserts that some particular Diophantine equation has no solutions in natural numbers. [ d ] A number of important and celebrated problems are of this form: in particular, Fermat's Last Theorem , the Riemann hypothesis , and the four color theorem . In addition the assertion that particular formal systems such as Peano arithmetic or ZFC are consistent can be expressed as Π 1 0 {\displaystyle \Pi _{1}^{0}} sentences. The idea is to follow Kurt Gödel in coding proofs by natural numbers in such a way that the property of being the number representing a proof is algorithmically checkable.
Π 1 0 {\displaystyle \Pi _{1}^{0}} sentences have the special property that if they are false, that fact will be provable in any of the usual formal systems. This is because the falsity amounts to the existence of a counter-example that can be verified by simple arithmetic. So if a Π 1 0 {\displaystyle \Pi _{1}^{0}} sentence is such that neither it nor its negation is provable in one of these systems, that sentence must be true. [ citation needed ]
A particularly striking form of Gödel's incompleteness theorem is also a consequence of the Matiyasevich/MRDP theorem:
Let p ( a , x 1 , … , x k ) = 0 {\displaystyle p(a,x_{1},\ldots ,x_{k})=0} provide a Diophantine definition of a non-computable set. Let A {\displaystyle A} be an algorithm that outputs a sequence of natural numbers n {\displaystyle n} such that the corresponding equation p ( n , x 1 , … , x k ) = 0 {\displaystyle p(n,x_{1},\ldots ,x_{k})=0}
has no solutions in natural numbers. Then there is a number n 0 {\displaystyle n_{0}} that is not output by A {\displaystyle A} while in fact the equation p ( n 0 , x 1 , … , x k ) = 0 {\displaystyle p(n_{0},x_{1},\ldots ,x_{k})=0}
has no solutions in natural numbers.
To see that the theorem is true, it suffices to notice that if there were no such number n 0 {\displaystyle n_{0}} , one could algorithmically test membership of a number n {\displaystyle n} in this non-computable set by simultaneously running the algorithm A {\displaystyle A} to see whether n {\displaystyle n} is output while also checking all possible k {\displaystyle k} -tuples of natural numbers seeking a solution of the equation p ( n , x 1 , … , x k ) = 0 , {\displaystyle p(n,x_{1},\ldots ,x_{k})=0,}
and we may associate an algorithm A {\displaystyle A} with any of the usual formal systems such as Peano arithmetic or ZFC by letting it systematically generate consequences of the axioms and then output a number n {\displaystyle n} whenever a sentence of the form ¬ ∃ x 1 , … , x k [ p ( n , x 1 , … , x k ) = 0 ] {\displaystyle \neg \exists x_{1},\ldots ,x_{k}\,[p(n,x_{1},\ldots ,x_{k})=0]}
is generated. Then the theorem tells us that either a false statement of this form is proved or a true one remains unproved in the system in question.
We may speak of the degree of a Diophantine set as being the least degree of a polynomial in an equation defining that set. Similarly, we can call the dimension of such a set the fewest unknowns in a defining equation. Because of the existence of a universal Diophantine equation, it is clear that there are absolute upper bounds to both of these quantities, and there has been much interest in determining these bounds.
Already in the 1920s Thoralf Skolem showed that any Diophantine equation is equivalent to one of degree 4 or less. His trick was to introduce new unknowns by equations setting them equal to the square of an unknown or the product of two unknowns. Repetition of this process results in a system of second degree equations; then an equation of degree 4 is obtained by summing the squares. So every Diophantine set is trivially of degree 4 or less. It is not known whether this result is best possible.
Julia Robinson and Yuri Matiyasevich showed that every Diophantine set has dimension no greater than 13. Later, Matiyasevich sharpened their methods to show that 9 unknowns suffice. Although it may well be that this result is not the best possible, there has been no further progress. [ e ] So, in particular, there is no algorithm for testing Diophantine equations with 9 or fewer unknowns for solvability in natural numbers. For the case of rational integer solutions (as Hilbert had originally posed it), the 4-squares trick shows that there is no algorithm for equations with no more than 36 unknowns. But Zhi-Wei Sun showed that the problem for integers is unsolvable even for equations with no more than 11 unknowns.
Martin Davis studied algorithmic questions involving the number of solutions of a Diophantine equation. Hilbert's tenth problem asks whether or not that number is 0. Let A = { 0 , 1 , 2 , 3 , … , ℵ 0 } {\displaystyle A=\{0,1,2,3,\ldots ,\aleph _{0}\}} and let C {\displaystyle C} be a proper non-empty subset of A {\displaystyle A} . Davis proved that there is no algorithm to test a given Diophantine equation to determine whether the number of its solutions is a member of the set C {\displaystyle C} . Thus there is no algorithm to determine whether the number of solutions of a Diophantine equation is finite, odd, a perfect square, a prime, etc.
The proof of the MRDP theorem has been formalized in Rocq (previously known as Coq ). [ 8 ]
Although Hilbert posed the problem for the rational integers, it can be just as well asked for many rings (in particular, for any ring whose number of elements is countable ). Obvious examples are the rings of integers of algebraic number fields as well as the rational numbers .
There has been much work on Hilbert's tenth problem for the rings of integers of algebraic number fields. Basing themselves on earlier work by Jan Denef and Leonard Lipschitz and using class field theory, Harold N. Shapiro and Alexandra Shlapentokh were able to prove:
Hilbert's tenth problem is unsolvable for the ring of integers of any algebraic number field whose Galois group over the rationals is abelian .
Shlapentokh and Thanases Pheidas (independently of one another) obtained the same result for algebraic number fields admitting exactly one pair of complex conjugate embeddings.
The problem for the ring of integers of algebraic number fields other than those covered by the results above remains open. Likewise, despite much interest, the problem for equations over the rationals remains open. Barry Mazur has conjectured that for any variety over the rationals, the topological closure over the reals of the set of solutions has only finitely many components. [ 9 ] This conjecture implies that the integers are not Diophantine over the rationals, and so if this conjecture is true, a negative answer to Hilbert's Tenth Problem would require a different approach than that used for other rings.
In 2024, Peter Koymans and Carlo Pagano published a claimed proof that Hilbert’s 10th problem is undecidable for every ring of integers using additive combinatorics . [ 10 ] [ 11 ] Another team of mathematicians subsequently claimed another proof of the same result, using different methods. [ 10 ] [ 12 ] | https://en.wikipedia.org/wiki/Hilbert's_tenth_problem |
In differential geometry , Hilbert's theorem (1901) states that there exists no complete regular surface S {\displaystyle S} of constant negative Gaussian curvature K {\displaystyle K} immersed in R 3 {\displaystyle \mathbb {R} ^{3}} . This theorem answers the question for the negative case of which surfaces in R 3 {\displaystyle \mathbb {R} ^{3}} can be obtained by isometrically immersing complete manifolds with constant curvature .
The proof of Hilbert's theorem is elaborate and requires several lemmas . The idea is to show the nonexistence of an isometric immersion φ = ψ ∘ exp p : S ′ ⟶ R 3 {\displaystyle \varphi =\psi \circ \exp _{p}:S'\longrightarrow \mathbb {R} ^{3}} of a plane S ′ {\displaystyle S'} into the real space R 3 {\displaystyle \mathbb {R} ^{3}} . This proof is essentially the same as in Hilbert's paper, although based on the books of Do Carmo and Spivak .
Observations : In order to have a more manageable treatment, but without loss of generality , the curvature may be considered equal to minus one, K = − 1 {\displaystyle K=-1} . There is no loss of generality, since we are dealing with constant curvatures, and similarities of R 3 {\displaystyle \mathbb {R} ^{3}} multiply K {\displaystyle K} by a constant. The exponential map exp p : T p ( S ) ⟶ S {\displaystyle \exp _{p}:T_{p}(S)\longrightarrow S} is a local diffeomorphism (in fact a covering map, by the Cartan–Hadamard theorem); therefore, it induces an inner product on the tangent space of S {\displaystyle S} at p {\displaystyle p} : T p ( S ) {\displaystyle T_{p}(S)} . Furthermore, S ′ {\displaystyle S'} denotes the geometric surface T p ( S ) {\displaystyle T_{p}(S)} with this inner product. If ψ : S ⟶ R 3 {\displaystyle \psi :S\longrightarrow \mathbb {R} ^{3}} is an isometric immersion, the same holds for φ = ψ ∘ exp p : S ′ ⟶ R 3 . {\displaystyle \varphi =\psi \circ \exp _{p}:S'\longrightarrow \mathbb {R} ^{3}.}
The first lemma is independent from the other ones, and will be used at the end as the counter statement to reject the results from the other lemmas.
Lemma 1 : The area of S ′ {\displaystyle S'} is infinite. Proof's Sketch: The idea of the proof is to create a global isometry between H {\displaystyle H} and S ′ {\displaystyle S'} . Then, since H {\displaystyle H} has an infinite area, S ′ {\displaystyle S'} will have it too. The fact that the hyperbolic plane H {\displaystyle H} has an infinite area follows by computing the surface integral with the corresponding coefficients of the First fundamental form . To obtain these, the hyperbolic plane can be defined as the plane with the following inner product around a point q ∈ R 2 {\displaystyle q\in \mathbb {R} ^{2}} with coordinates ( u , v ) {\displaystyle (u,v)} : E = ⟨ ∂ u , ∂ u ⟩ = 1 , F = ⟨ ∂ u , ∂ v ⟩ = 0 , G = ⟨ ∂ v , ∂ v ⟩ = e 2 u {\displaystyle E=\langle \partial _{u},\partial _{u}\rangle =1,\quad F=\langle \partial _{u},\partial _{v}\rangle =0,\quad G=\langle \partial _{v},\partial _{v}\rangle =e^{2u}}
Since the hyperbolic plane is unbounded, the limits of the integral are infinite , and the area can be calculated through ∫ − ∞ + ∞ ∫ − ∞ + ∞ E G − F 2 d u d v = ∫ − ∞ + ∞ ∫ − ∞ + ∞ e u d u d v = ∞ . {\displaystyle \int _{-\infty }^{+\infty }\!\!\int _{-\infty }^{+\infty }{\sqrt {EG-F^{2}}}\,du\,dv=\int _{-\infty }^{+\infty }\!\!\int _{-\infty }^{+\infty }e^{u}\,du\,dv=\infty .}
Next, a map is needed which shows that the global information from the hyperbolic plane can be transferred to the surface S ′ {\displaystyle S'} , i.e. a global isometry. φ : H → S ′ {\displaystyle \varphi :H\rightarrow S'} will be the map, whose domain is the hyperbolic plane and whose image is the 2-dimensional manifold S ′ {\displaystyle S'} , which carries the inner product from the surface S {\displaystyle S} with negative curvature. φ {\displaystyle \varphi } will be defined via the exponential map, its inverse, and a linear isometry between their tangent spaces, ψ : T p ( H ) ⟶ T p ′ ( S ′ ) . {\displaystyle \psi :T_{p}(H)\longrightarrow T_{p'}(S').}
That is φ = exp p ′ ∘ ψ ∘ exp p − 1 , {\displaystyle \varphi =\exp _{p'}\circ \psi \circ \exp _{p}^{-1},}
where p ∈ H , p ′ ∈ S ′ {\displaystyle p\in H,p'\in S'} . That is to say, the starting point p ∈ H {\displaystyle p\in H} goes to the tangent plane of H {\displaystyle H} through the inverse of the exponential map. Then it travels from one tangent plane to the other through the isometry ψ {\displaystyle \psi } , and then down to the surface S ′ {\displaystyle S'} with another exponential map.
The following step involves the use of polar coordinates , ( ρ , θ ) {\displaystyle (\rho ,\theta )} and ( ρ ′ , θ ′ ) {\displaystyle (\rho ',\theta ')} , around p {\displaystyle p} and p ′ {\displaystyle p'} respectively. The requirement will be that the axes are mapped to each other, that is θ = 0 {\displaystyle \theta =0} goes to θ ′ = 0 {\displaystyle \theta '=0} . Then φ {\displaystyle \varphi } preserves the first fundamental form. In a geodesic polar system, the Gaussian curvature K {\displaystyle K} can be expressed as K = − ( G ) ρ ρ G . {\displaystyle K=-{\frac {({\sqrt {G}})_{\rho \rho }}{\sqrt {G}}}.}
In addition, K {\displaystyle K} is constant and fulfills the following differential equation: ( G ) ρ ρ + K ⋅ G = 0. {\displaystyle ({\sqrt {G}})_{\rho \rho }+K\cdot {\sqrt {G}}=0.}
Since H {\displaystyle H} and S ′ {\displaystyle S'} have the same constant Gaussian curvature, they are locally isometric ( Minding's Theorem ). That means that φ {\displaystyle \varphi } is a local isometry between H {\displaystyle H} and S ′ {\displaystyle S'} . Furthermore, from Hadamard's theorem it follows that φ {\displaystyle \varphi } is also a covering map. Since S ′ {\displaystyle S'} is simply connected, φ {\displaystyle \varphi } is a homeomorphism, and hence, a (global) isometry. Therefore, H {\displaystyle H} and S ′ {\displaystyle S'} are globally isometric, and because H {\displaystyle H} has an infinite area, then S ′ = T p ( S ) {\displaystyle S'=T_{p}(S)} has an infinite area, as well. ◻ {\displaystyle \square }
Lemma 2 : For each p ∈ S ′ {\displaystyle p\in S'} exists a parametrization x : U ⊂ R 2 ⟶ S ′ , p ∈ x ( U ) {\displaystyle x:U\subset \mathbb {R} ^{2}\longrightarrow S',\qquad p\in x(U)} , such that the coordinate curves of x {\displaystyle x} are asymptotic curves of x ( U ) = V ′ {\displaystyle x(U)=V'} and form a Tchebyshef net.
Lemma 3 : Let V ′ ⊂ S ′ {\displaystyle V'\subset S'} be a coordinate neighborhood of S ′ {\displaystyle S'} such that the coordinate curves are asymptotic curves in V ′ {\displaystyle V'} . Then the area A of any quadrilateral formed by the coordinate curves is smaller than 2 π {\displaystyle 2\pi } .
The next goal is to show that x {\displaystyle x} is a parametrization of S ′ {\displaystyle S'} .
Lemma 4 : For a fixed t {\displaystyle t} , the curve x ( s , t ) , − ∞ < s < + ∞ {\displaystyle x(s,t),-\infty <s<+\infty } , is an asymptotic curve with s {\displaystyle s} as arc length.
The following 2 lemmas together with lemma 8 will demonstrate the existence of a parametrization x : R 2 ⟶ S ′ {\displaystyle x:\mathbb {R} ^{2}\longrightarrow S'}
Lemma 5 : x {\displaystyle x} is a local diffeomorphism.
Lemma 6 : x {\displaystyle x} is surjective .
Lemma 7 : On S ′ {\displaystyle S'} there are two differentiable linearly independent vector fields which are tangent to the asymptotic curves of S ′ {\displaystyle S'} .
Lemma 8 : x {\displaystyle x} is injective .
Proof of Hilbert's Theorem: First, it will be assumed that an isometric immersion from a complete surface S {\displaystyle S} with negative curvature exists: ψ : S ⟶ R 3 {\displaystyle \psi :S\longrightarrow \mathbb {R} ^{3}}
As stated in the observations, the tangent plane T p ( S ) {\displaystyle T_{p}(S)} is endowed with the metric induced by the exponential map exp p : T p ( S ) ⟶ S {\displaystyle \exp _{p}:T_{p}(S)\longrightarrow S} . Moreover, φ = ψ ∘ exp p : S ′ ⟶ R 3 {\displaystyle \varphi =\psi \circ \exp _{p}:S'\longrightarrow \mathbb {R} ^{3}} is an isometric immersion, and Lemmas 5, 6, and 8 show the existence of a parametrization x : R 2 ⟶ S ′ {\displaystyle x:\mathbb {R} ^{2}\longrightarrow S'} of the whole S ′ {\displaystyle S'} , such that the coordinate curves of x {\displaystyle x} are the asymptotic curves of S ′ {\displaystyle S'} (this was provided by Lemma 4). Therefore, S ′ {\displaystyle S'} can be covered by a union of "coordinate" quadrilaterals Q n {\displaystyle Q_{n}} with Q n ⊂ Q n + 1 {\displaystyle Q_{n}\subset Q_{n+1}} . By Lemma 3, the area of each quadrilateral is smaller than 2 π {\displaystyle 2\pi } , and hence the area of S ′ {\displaystyle S'} is at most 2 π {\displaystyle 2\pi } . On the other hand, by Lemma 1, the area of S ′ {\displaystyle S'} is infinite. This is a contradiction and the proof is concluded. ◻ {\displaystyle \square } | https://en.wikipedia.org/wiki/Hilbert's_theorem_(differential_geometry) |
The third of Hilbert's list of mathematical problems , presented in 1900, was the first to be solved. The problem is related to the following question: given any two polyhedra of equal volume , is it always possible to cut the first into finitely many polyhedral pieces which can be reassembled to yield the second? Based on earlier writings by Carl Friedrich Gauss , [ 1 ] David Hilbert conjectured that this is not always possible. This was confirmed within the year by his student Max Dehn , who proved that the answer in general is "no" by producing a counterexample. [ 2 ]
The answer for the analogous question about polygons in 2 dimensions is "yes" and had been known for a long time; this is the Wallace–Bolyai–Gerwien theorem .
Unknown to Hilbert and Dehn, Hilbert's third problem had also been proposed independently by Władysław Kretkowski for a mathematical contest held in 1882 by the Academy of Arts and Sciences of Kraków , and was solved by Ludwik Antoni Birkenmajer with a different method than Dehn's. Birkenmajer did not publish the result, and the original manuscript containing his solution was rediscovered years later. [ 3 ]
The formula for the volume of a pyramid , one-third of the product of base area and height, had been known to Euclid . Still, all proofs of it involve some form of limiting process or calculus , notably the method of exhaustion or, in more modern form, Cavalieri's principle . Similar formulas in plane geometry can be proven with more elementary means. Gauss regretted this defect in two of his letters to Christian Ludwig Gerling , who proved that two symmetric tetrahedra are equidecomposable . [ 3 ]
Gauss's letters were the motivation for Hilbert: is it possible to prove the equality of volume using elementary "cut-and-glue" methods? Because if not, then an elementary proof of Euclid's result is also impossible.
Dehn's proof is an instance in which abstract algebra is used to prove an impossibility result in geometry . Other examples are doubling the cube and trisecting the angle .
Two polyhedra are called scissors-congruent if the first can be cut into finitely many polyhedral pieces that can be reassembled to yield the second. Any two scissors-congruent polyhedra have the same volume. Hilbert asks about the converse .
For every polyhedron P {\displaystyle P} , Dehn defines a value, now known as the Dehn invariant D ( P ) {\displaystyle \operatorname {D} (P)} , with the property that,
if P {\displaystyle P} is cut into polyhedral pieces P 1 , P 2 , … P n {\displaystyle P_{1},P_{2},\dots P_{n}} , then D ( P ) = D ( P 1 ) + D ( P 2 ) + ⋯ + D ( P n ) . {\displaystyle \operatorname {D} (P)=\operatorname {D} (P_{1})+\operatorname {D} (P_{2})+\cdots +\operatorname {D} (P_{n}).} In particular, if two polyhedra are scissors-congruent, then they have the same Dehn invariant. He then shows that every cube has Dehn invariant zero while every regular tetrahedron has non-zero Dehn invariant. Therefore, these two shapes cannot be scissors-congruent.
A polyhedron's invariant is defined based on the lengths of its edges and the angles between its faces. If a polyhedron is cut into two, some edges are cut into two, and the corresponding contributions to the Dehn invariants should therefore be additive in the edge lengths. Similarly, if a polyhedron is cut along an edge, the corresponding angle is cut into two. Cutting a polyhedron typically also introduces new edges and angles; their contributions must cancel out. The angles introduced when a cut passes through a face add to π {\displaystyle \pi } , and the angles introduced around an edge interior to the polyhedron add to 2 π {\displaystyle 2\pi } . Therefore, the Dehn invariant is defined in such a way that integer multiples of angles of π {\displaystyle \pi } give a net contribution of zero.
All of the above requirements can be met by defining D ( P ) {\displaystyle \operatorname {D} (P)} as an element of the tensor product of the real numbers R {\displaystyle \mathbb {R} } (representing lengths of edges) and the quotient space R / ( Q π ) {\displaystyle \mathbb {R} /(\mathbb {Q} \pi )} (representing angles, with all rational multiples of π {\displaystyle \pi } replaced by zero). [ 4 ] For some purposes, this definition can be made using the tensor product of modules over Z {\displaystyle \mathbb {Z} } (or equivalently of abelian groups ), while other aspects of this topic make use of a vector space structure on the invariants, obtained by considering the two factors R {\displaystyle \mathbb {R} } and R / ( Q π ) {\displaystyle \mathbb {R} /(\mathbb {Q} \pi )} to be vector spaces over Q {\displaystyle \mathbb {Q} } and taking the tensor product of vector spaces over Q {\displaystyle \mathbb {Q} } . This choice of structure in the definition does not make a difference in whether two Dehn invariants, defined in either way, are equal or unequal.
For any edge e {\displaystyle e} of a polyhedron P {\displaystyle P} , let ℓ ( e ) {\displaystyle \ell (e)} be its length and let θ ( e ) {\displaystyle \theta (e)} denote the dihedral angle of the two faces of P {\displaystyle P} that meet at e {\displaystyle e} , measured in radians and considered modulo rational multiples of π {\displaystyle \pi } . The Dehn invariant is then defined as D ( P ) = ∑ e ℓ ( e ) ⊗ θ ( e ) {\displaystyle \operatorname {D} (P)=\sum _{e}\ell (e)\otimes \theta (e)} where the sum is taken over all edges e {\displaystyle e} of the polyhedron P {\displaystyle P} . [ 4 ] It is a valuation .
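A small sympy-based sketch of the computation behind Dehn's argument (illustrative; the irrationality of arccos(1/3)/π is a theorem, not something the script proves):

```python
import sympy as sp

# Dihedral angles: every edge of a cube meets at pi/2; every edge of a
# regular tetrahedron meets at arccos(1/3).
cube_angle = sp.pi / 2
tet_angle = sp.acos(sp.Rational(1, 3))

# In R/(Q*pi), rational multiples of pi vanish, so each cube edge
# contributes length (x) 0, giving D(cube) = 0.
print(cube_angle / sp.pi)       # 1/2, a rational number

# arccos(1/3)/pi is irrational, so the six equal edge terms of a regular
# tetrahedron cannot cancel, giving D(tetrahedron) != 0.
print(sp.N(tet_angle / sp.pi))  # 0.3918266...
```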
In light of Dehn's theorem above, one might ask "which polyhedra are scissors-congruent"? Sydler (1965) showed that two polyhedra are scissors-congruent if and only if they have the same volume and the same Dehn invariant. [ 5 ] Børge Jessen later extended Sydler's results to four dimensions. [ 6 ] In 1990, Dupont and Sah provided a simpler proof of Sydler's result by reinterpreting it as a theorem about the homology of certain classical groups . [ 7 ]
Debrunner showed in 1980 that the Dehn invariant of any polyhedron with which all of three-dimensional space can be tiled periodically is zero. [ 8 ]
Jessen also posed the question of whether the analogue of these results remained true for spherical geometry and hyperbolic geometry . In these geometries, Dehn's method continues to work, and shows that when two polyhedra are scissors-congruent, their Dehn invariants are equal. However, it remains an open problem whether pairs of polyhedra with the same volume and the same Dehn invariant, in these geometries, are always scissors-congruent. [ 9 ]
Hilbert's original question was more complicated: given any two tetrahedra T 1 and T 2 with equal base area and equal height (and therefore equal volume), is it always possible to find a finite number of tetrahedra, so that when these tetrahedra are glued in some way to T 1 and also glued to T 2 , the resulting polyhedra are scissors-congruent?
Dehn's invariant can be used to yield a negative answer also to this stronger question. | https://en.wikipedia.org/wiki/Hilbert's_third_problem |
Hilbert's thirteenth problem is one of the 23 Hilbert problems set out in a celebrated list compiled in 1900 by David Hilbert . It entails proving whether a solution exists for all 7th-degree equations using algebraic (variant: continuous ) functions of two arguments . It was first presented in the context of nomography , and in particular "nomographic construction" — a process whereby a function of several variables is constructed using functions of two variables. The variant for continuous functions was resolved affirmatively in 1957 by Vladimir Arnold when he proved the Kolmogorov–Arnold representation theorem , but the variant for algebraic functions remains unresolved.
Using the methods pioneered by Ehrenfried Walther von Tschirnhaus (1683), Erland Samuel Bring (1786), and George Jerrard (1834), William Rowan Hamilton showed in 1836 that every seventh-degree equation can be reduced via radicals to the form x 7 + a x 3 + b x 2 + c x + 1 = 0 {\displaystyle x^{7}+ax^{3}+bx^{2}+cx+1=0} .
Regarding this equation, Hilbert asked whether its solution, x , considered as a function of the three variables a , b and c , can be expressed as the composition of a finite number of two-variable functions.
Hilbert originally posed his problem for algebraic functions (Hilbert 1927, "...Existenz von algebraischen Funktionen...", i.e., "...existence of algebraic functions..."; also see Abhyankar 1997, Vitushkin 2004). However, Hilbert also asked in a later version of this problem whether there is a solution in the class of continuous functions .
A generalization of the second ("continuous") variant of the problem is the following question: can every continuous function of three variables be expressed as a composition of finitely many continuous functions of two variables? The affirmative answer to this general question was given in 1957 by Vladimir Arnold , then only nineteen years old and a student of Andrey Kolmogorov . Kolmogorov had shown in the previous year that any function of several variables can be constructed with a finite number of three-variable functions. Arnold then expanded on this work to show that only two-variable functions were in fact required, thus answering Hilbert's question when posed for the class of continuous functions.
Arnold later returned to the algebraic version of the problem, jointly with Goro Shimura (Arnold and Shimura 1976). | https://en.wikipedia.org/wiki/Hilbert's_thirteenth_problem |
Hilbert C*-modules are mathematical objects that generalise the notion of Hilbert spaces (which are themselves generalisations of Euclidean space ),
in that they endow a linear space with an " inner product " that takes values in a C*-algebra .
They were first introduced in the work of Irving Kaplansky in 1953 ,
which developed the theory for commutative , unital algebras (though Kaplansky observed that the assumption of a unit element was not "vital"). [ 1 ]
In the 1970s the theory was extended to non-commutative C*-algebras independently by William Lindall Paschke [ 2 ] and Marc Rieffel ,
the latter in a paper that used Hilbert C*-modules to construct a theory of induced representations of C*-algebras. [ 3 ]
Hilbert C*-modules are crucial to Kasparov's formulation of KK-theory , [ 4 ] and provide the right framework to extend the notion
of Morita equivalence to C*-algebras. [ 5 ] They can be viewed as the generalization
of vector bundles to noncommutative C*-algebras and as such play an important role in noncommutative geometry ,
notably in C*-algebraic quantum group theory , [ 6 ] [ 7 ] and groupoid C*-algebras.
Let A {\displaystyle A} be a C*-algebra (not assumed to be commutative or unital), its involution denoted by ∗ {\displaystyle {}^{*}} . An inner-product A {\displaystyle A} -module (or pre-Hilbert A {\displaystyle A} -module ) is a complex linear space E {\displaystyle E} equipped with a compatible right A {\displaystyle A} -module structure, together with a map ⟨ ⋅ , ⋅ ⟩ : E × E ⟶ A {\displaystyle \langle \cdot ,\cdot \rangle :E\times E\longrightarrow A}
that satisfies the following properties: it is linear in its second argument; ⟨ x , y a ⟩ = ⟨ x , y ⟩ a {\displaystyle \langle x,ya\rangle =\langle x,y\rangle a} for all x , y ∈ E {\displaystyle x,y\in E} and a ∈ A {\displaystyle a\in A} ; ⟨ y , x ⟩ = ⟨ x , y ⟩ ∗ {\displaystyle \langle y,x\rangle =\langle x,y\rangle ^{*}} ; and ⟨ x , x ⟩ ≥ 0 {\displaystyle \langle x,x\rangle \geq 0} , with ⟨ x , x ⟩ = 0 {\displaystyle \langle x,x\rangle =0} only if x = 0 {\displaystyle x=0} .
An analogue to the Cauchy–Schwarz inequality holds for an inner-product A {\displaystyle A} -module E {\displaystyle E} : [ 10 ] ⟨ x , y ⟩ ∗ ⟨ x , y ⟩ ≤ ‖ ⟨ x , x ⟩ ‖ ⟨ y , y ⟩ {\displaystyle \langle x,y\rangle ^{*}\langle x,y\rangle \leq \Vert \langle x,x\rangle \Vert \,\langle y,y\rangle }
for x {\displaystyle x} , y {\displaystyle y} in E {\displaystyle E} .
On the pre-Hilbert module E {\displaystyle E} , define a norm by ‖ x ‖ = ‖ ⟨ x , x ⟩ ‖ 1 / 2 . {\displaystyle \Vert x\Vert =\Vert \langle x,x\rangle \Vert ^{1/2}.}
The norm-completion of E {\displaystyle E} , still denoted by E {\displaystyle E} , is said to be a Hilbert A {\displaystyle A} -module or a Hilbert C*-module over the C*-algebra A {\displaystyle A} .
The Cauchy–Schwarz inequality implies the inner product is jointly continuous in norm and can therefore be extended to the completion.
The action of A {\displaystyle A} on E {\displaystyle E} is continuous: for all x {\displaystyle x} in E {\displaystyle E} , ‖ x a ‖ ≤ ‖ x ‖ ‖ a ‖ . {\displaystyle \Vert xa\Vert \leq \Vert x\Vert \,\Vert a\Vert .}
Similarly, if ( e λ ) {\displaystyle (e_{\lambda })} is an approximate unit for A {\displaystyle A} (a net of self-adjoint elements of A {\displaystyle A} for which a e λ {\displaystyle ae_{\lambda }} and e λ a {\displaystyle e_{\lambda }a} tend to a {\displaystyle a} for each a {\displaystyle a} in A {\displaystyle A} ), then for x {\displaystyle x} in E {\displaystyle E} , x e λ ⟶ x . {\displaystyle xe_{\lambda }\longrightarrow x.}
Whence it follows that E A {\displaystyle EA} is dense in E {\displaystyle E} , and x 1 A = x {\displaystyle x1_{A}=x} when A {\displaystyle A} is unital.
Let ⟨ E , E ⟩ A {\displaystyle \langle E,E\rangle _{A}} denote the linear span of { ⟨ x , y ⟩ : x , y ∈ E } ; {\displaystyle \{\langle x,y\rangle :x,y\in E\};}
then the closure of ⟨ E , E ⟩ A {\displaystyle \langle E,E\rangle _{A}} is a two-sided ideal in A {\displaystyle A} . Two-sided ideals are C*-subalgebras and therefore possess approximate units. One can verify that E ⟨ E , E ⟩ A {\displaystyle E\langle E,E\rangle _{A}} is dense in E {\displaystyle E} . In the case when ⟨ E , E ⟩ A {\displaystyle \langle E,E\rangle _{A}} is dense in A {\displaystyle A} , E {\displaystyle E} is said to be full . This does not generally hold.
Since the complex numbers C {\displaystyle \mathbb {C} } are a C*-algebra with an involution given by complex conjugation , a complex Hilbert space H {\displaystyle {\mathcal {H}}} is a Hilbert C {\displaystyle \mathbb {C} } -module under scalar multiplication by complex numbers and its inner product.
If X {\displaystyle X} is a locally compact Hausdorff space and E {\displaystyle E} a vector bundle over X {\displaystyle X} with projection π : E → X {\displaystyle \pi \colon E\to X} and a Hermitian metric g {\displaystyle g} , then the space of continuous sections of E {\displaystyle E} is a Hilbert C ( X ) {\displaystyle C(X)} -module. Given sections σ , ρ {\displaystyle \sigma ,\rho } of E {\displaystyle E} and f ∈ C ( X ) {\displaystyle f\in C(X)} the right action is defined by ( σ f ) ( x ) = σ ( x ) f ( x ) , {\displaystyle (\sigma f)(x)=\sigma (x)f(x),}
and the inner product is given by ⟨ σ , ρ ⟩ ( x ) = g ( σ ( x ) , ρ ( x ) ) . {\displaystyle \langle \sigma ,\rho \rangle (x)=g(\sigma (x),\rho (x)).}
The converse holds as well: Every countably generated Hilbert C*-module over a commutative unital C*-algebra A = C ( X ) {\displaystyle A=C(X)} is isomorphic to the space of sections vanishing at infinity of a continuous field of Hilbert spaces over X {\displaystyle X} . [ citation needed ]
Any C*-algebra A {\displaystyle A} is a Hilbert A {\displaystyle A} -module with the action given by right multiplication in A {\displaystyle A} and the inner product ⟨ a , b ⟩ = a ∗ b {\displaystyle \langle a,b\rangle =a^{*}b} . By the C*-identity, the Hilbert module norm coincides with C*-norm on A {\displaystyle A} .
The (algebraic) direct sum of n {\displaystyle n} copies of A {\displaystyle A} , A n = ⨁ i = 1 n A , {\displaystyle A^{n}=\bigoplus _{i=1}^{n}A,}
can be made into a Hilbert A {\displaystyle A} -module by defining ⟨ ( a i ) , ( b i ) ⟩ = ∑ i a i ∗ b i . {\displaystyle \langle (a_{i}),(b_{i})\rangle =\sum _{i}a_{i}^{*}b_{i}.}
If p {\displaystyle p} is a projection in the C*-algebra M n ( A ) {\displaystyle M_{n}(A)} , then p A n {\displaystyle pA^{n}} is also a Hilbert A {\displaystyle A} -module with the same inner product as the direct sum.
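A numerical sketch of this inner product for A = M 2 ( C ) {\displaystyle A=M_{2}(\mathbb {C} )} and E = A 3 {\displaystyle E=A^{3}} (illustrative only; it checks that ⟨ x , x ⟩ {\displaystyle \langle x,x\rangle } is a positive element of A {\displaystyle A} ):

```python
import numpy as np

def inner(x, y):
    """A-valued inner product <(a_i), (b_i)> = sum_i a_i* b_i on A^n."""
    return sum(a.conj().T @ b for a, b in zip(x, y))

rng = np.random.default_rng(0)
x = [rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
     for _ in range(3)]

g = inner(x, x)
# <x, x> is positive: self-adjoint with nonnegative spectrum.
print(np.allclose(g, g.conj().T))       # True
print(np.linalg.eigvalsh(g) >= -1e-12)  # [ True  True]
# The Hilbert module norm ||x|| = ||<x, x>||^(1/2), using the C*-norm on A:
print(np.sqrt(np.linalg.norm(g, 2)))
```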
One may also consider the following subspace of elements in the countable direct product of A {\displaystyle A}
Endowed with the obvious inner product (analogous to that of A n {\displaystyle A^{n}} ), the resulting Hilbert A {\displaystyle A} -module is called the standard Hilbert module over A {\displaystyle A} .
The fact that there is a unique infinite-dimensional separable Hilbert space up to isomorphism
has a generalization to Hilbert modules in the form of the Kasparov stabilization theorem , which states
that if E {\displaystyle E} is a countably generated Hilbert A {\displaystyle A} -module, there is an isometric isomorphism E ⊕ ℓ 2 ( A ) ≅ ℓ 2 ( A ) . {\displaystyle E\oplus \ell ^{2}(A)\cong \ell ^{2}(A).} [ 11 ]
Let E {\displaystyle E} and F {\displaystyle F} be two Hilbert modules over the same
C*-algebra A {\displaystyle A} . These are then Banach spaces, so it is possible to
speak of the Banach space of bounded linear maps L ( E , F ) {\displaystyle {\mathcal {L}}(E,F)} ,
normed by the operator norm.
The adjointable and compact adjointable operators are subspaces of this Banach space
defined using the inner product structures on E {\displaystyle E} and F {\displaystyle F} .
In the special case where A {\displaystyle A} is C {\displaystyle \mathbb {C} } these reduce to bounded and compact operators on Hilbert spaces respectively.
A map (not necessarily linear) T : E → F {\displaystyle T\colon E\to F} is defined to be adjointable if there
is another map T ∗ : F → E {\displaystyle T^{*}\colon F\to E} , known as the adjoint
of T {\displaystyle T} , such that for every e ∈ E {\displaystyle e\in E} and f ∈ F {\displaystyle f\in F} , ⟨ T e , f ⟩ = ⟨ e , T ∗ f ⟩ . {\displaystyle \langle Te,f\rangle =\langle e,T^{*}f\rangle .}
Both T {\displaystyle T} and T ∗ {\displaystyle T^{*}} are then automatically linear
and also A {\displaystyle A} -module maps. The
closed graph theorem can be used to show that they are also bounded.
Analogously to the adjoint of operators on Hilbert spaces, T ∗ {\displaystyle T^{*}} is unique (if it exists) and itself adjointable with adjoint T {\displaystyle T} . If S : F → G {\displaystyle S\colon F\to G} is a second adjointable map, S T {\displaystyle ST} is adjointable with adjoint T ∗ S ∗ {\displaystyle T^{*}S^{*}} .
The adjointable operators E → F {\displaystyle E\to F} form a subspace B ( E , F ) {\displaystyle \mathbb {B} (E,F)} of L ( E , F ) {\displaystyle {\mathcal {L}}(E,F)} , which is complete in the operator norm.
In the case F = E {\displaystyle F=E} , the space B ( E , E ) {\displaystyle \mathbb {B} (E,E)} of
adjointable operators from E {\displaystyle E} to itself is denoted B ( E ) {\displaystyle \mathbb {B} (E)} , and is a
C*-algebra. [ 12 ]
Given e ∈ E {\displaystyle e\in E} and f ∈ F {\displaystyle f\in F} , the map | f ⟩ ⟨ e | : E → F {\displaystyle |f\rangle \langle e|\colon E\to F} is defined, analogously to the rank one operators of Hilbert spaces, to be | f ⟩ ⟨ e | ( x ) = f ⟨ e , x ⟩ . {\displaystyle |f\rangle \langle e|(x)=f\,\langle e,x\rangle .}
This is adjointable with adjoint | e ⟩ ⟨ f | {\displaystyle |e\rangle \langle f|} .
The compact adjointable operators K ( E , F ) {\displaystyle \mathbb {K} (E,F)} are defined to be the closed span
of { | f ⟩ ⟨ e | : e ∈ E , f ∈ F } {\displaystyle \{|f\rangle \langle e|:e\in E,\,f\in F\}}
in B ( E , F ) {\displaystyle \mathbb {B} (E,F)} .
As with the bounded operators, K ( E , E ) {\displaystyle \mathbb {K} (E,E)} is denoted K ( E ) {\displaystyle \mathbb {K} (E)} . This is a
(closed, two-sided) ideal of B ( E ) {\displaystyle \mathbb {B} (E)} . [ 13 ]
If A {\displaystyle A} and B {\displaystyle B} are C*-algebras, an ( A , B ) {\displaystyle (A,B)} C*-correspondence
is a Hilbert B {\displaystyle B} -module equipped with a left action of A {\displaystyle A} by
adjointable maps that is faithful. (NB: Some authors require the left action to be
non-degenerate instead.) These objects are used in the formulation of Morita equivalence
for C*-algebras, appear in the construction of Toeplitz and Cuntz-Pimsner algebras, [ 14 ] and can be employed to put the structure of a bicategory on the collection of C*-algebras. [ 15 ]
If E {\displaystyle E} is an ( A , B ) {\displaystyle (A,B)} and F {\displaystyle F} a ( B , C ) {\displaystyle (B,C)} correspondence,
the algebraic tensor product E ⊙ F {\displaystyle E\odot F} of E {\displaystyle E} and F {\displaystyle F} as vector spaces inherits left and right A {\displaystyle A} - and C {\displaystyle C} -module
structures respectively.
It can also be endowed with the C {\displaystyle C} -valued sesquilinear form defined on
pure tensors by ⟨ e 1 ⊗ f 1 , e 2 ⊗ f 2 ⟩ = ⟨ f 1 , ⟨ e 1 , e 2 ⟩ f 2 ⟩ . {\displaystyle \langle e_{1}\otimes f_{1},e_{2}\otimes f_{2}\rangle =\langle f_{1},\langle e_{1},e_{2}\rangle f_{2}\rangle .}
This is positive semidefinite, and the Hausdorff completion of E ⊙ F {\displaystyle E\odot F} in the resulting seminorm is denoted E ⊗ B F {\displaystyle E\otimes _{B}F} . The left- and right-actions of A {\displaystyle A} and C {\displaystyle C} extend to make this an ( A , C ) {\displaystyle (A,C)} correspondence. [ 16 ]
The collection of C*-algebras can then be endowed with
the structure of a bicategory, with C*-algebras as
objects, ( A , B ) {\displaystyle (A,B)} correspondences as
arrows B → A {\displaystyle B\to A} , and isomorphisms of correspondences (bijective module maps that preserve
inner products) as 2-arrows. [ 17 ]
Given a C*-algebra A {\displaystyle A} , and an ( A , A ) {\displaystyle (A,A)} correspondence E {\displaystyle E} ,
its Toeplitz algebra T ( E ) {\displaystyle {\mathcal {T}}(E)} is defined as the universal algebra
for Toeplitz representations (defined below).
The classical Toeplitz algebra can be recovered
as a special case, and the Cuntz-Pimsner algebras
are defined as particular quotients of Toeplitz algebras. [ 18 ]
In particular, graph algebras , crossed products by Z {\displaystyle \mathbb {Z} } , and the Cuntz algebras are all quotients of specific Toeplitz algebras.
A Toeplitz representation [ 19 ] of E {\displaystyle E} in a C*-algebra D {\displaystyle D} is a pair ( S , ϕ ) {\displaystyle (S,\phi )} of a linear map S : E → D {\displaystyle S\colon E\to D} and a homomorphism ϕ : A → D {\displaystyle \phi \colon A\to D} such that S ( x ) ∗ S ( y ) = ϕ ( ⟨ x , y ⟩ ) {\displaystyle S(x)^{*}S(y)=\phi (\langle x,y\rangle )} and S ( a x ) = ϕ ( a ) S ( x ) {\displaystyle S(ax)=\phi (a)S(x)} for all x , y ∈ E {\displaystyle x,y\in E} and a ∈ A {\displaystyle a\in A} .
The Toeplitz algebra T ( E ) {\displaystyle {\mathcal {T}}(E)} is the universal Toeplitz representation.
That is, there is a Toeplitz representation ( T , ι ) {\displaystyle (T,\iota )} of E {\displaystyle E} in T ( E ) {\displaystyle {\mathcal {T}}(E)} such that if ( S , ϕ ) {\displaystyle (S,\phi )} is any Toeplitz representation
of E {\displaystyle E} (in an arbitrary algebra D {\displaystyle D} ) there is a unique *-homomorphism Φ : T ( E ) → D {\displaystyle \Phi \colon {\mathcal {T}}(E)\to D} such that S = Φ ∘ T {\displaystyle S=\Phi \circ T} and ϕ = Φ ∘ ι {\displaystyle \phi =\Phi \circ \iota } . [ 20 ]
If A {\displaystyle A} is taken to be the algebra of complex numbers, and E {\displaystyle E} the vector space C n {\displaystyle \mathbb {C} ^{n}} , endowed with the natural ( C , C ) {\displaystyle (\mathbb {C} ,\mathbb {C} )} -bimodule structure, the corresponding Toeplitz algebra
is the universal algebra generated by n {\displaystyle n} isometries with mutually orthogonal
range projections. [ 21 ]
In particular, T ( C ) {\displaystyle {\mathcal {T}}(\mathbb {C} )} is the universal algebra generated by
a single isometry, which is the classical Toeplitz algebra. | https://en.wikipedia.org/wiki/Hilbert_C*-module |
The Hilbert basis of a convex cone C is a minimal set of integer vectors in C such that every integer vector in C is a conical combination of the vectors in the Hilbert basis with integer coefficients.
Given a lattice L ⊂ Z d {\displaystyle L\subset \mathbb {Z} ^{d}} and a convex polyhedral cone with generators a 1 , … , a n ∈ Z d {\displaystyle a_{1},\ldots ,a_{n}\in \mathbb {Z} ^{d}} , C = { λ 1 a 1 + ⋯ + λ n a n ∣ λ 1 , … , λ n ≥ 0 } ⊂ R d , {\displaystyle C=\{\lambda _{1}a_{1}+\cdots +\lambda _{n}a_{n}\mid \lambda _{1},\ldots ,\lambda _{n}\geq 0\}\subset \mathbb {R} ^{d},}
we consider the monoid C ∩ L {\displaystyle C\cap L} . By Gordan's lemma , this monoid is finitely generated, i.e., there exists a finite set of lattice points { x 1 , … , x m } ⊂ C ∩ L {\displaystyle \{x_{1},\ldots ,x_{m}\}\subset C\cap L} such that every lattice point x ∈ C ∩ L {\displaystyle x\in C\cap L} is an integer conical combination of these points: x = λ 1 x 1 + ⋯ + λ m x m , λ 1 , … , λ m ∈ Z , λ i ≥ 0. {\displaystyle x=\lambda _{1}x_{1}+\cdots +\lambda _{m}x_{m},\qquad \lambda _{1},\ldots ,\lambda _{m}\in \mathbb {Z} ,\ \lambda _{i}\geq 0.}
The cone C is called pointed if x , − x ∈ C {\displaystyle x,-x\in C} implies x = 0 {\displaystyle x=0} . In this case there exists a unique minimal generating set of the monoid C ∩ L {\displaystyle C\cap L} —the Hilbert basis of C . It is given by the set of irreducible lattice points: An element x ∈ C ∩ L {\displaystyle x\in C\cap L} is called irreducible if it can not be written as the sum of two non-zero elements, i.e., x = y + z {\displaystyle x=y+z} implies y = 0 {\displaystyle y=0} or z = 0 {\displaystyle z=0} .
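A brute-force Python sketch of this irreducibility criterion for a cone in Z 2 {\displaystyle \mathbb {Z} ^{2}} (illustrative only; the search box and function name are artifacts of the example, and real computations use dedicated software such as Normaliz):

```python
from itertools import product

def hilbert_basis_2d(a1, a2, box=20):
    """Brute-force the Hilbert basis of the pointed cone generated by
    a1, a2 in Z^2, searching lattice points in a finite box."""
    det = a1[0] * a2[1] - a1[1] * a2[0]

    def in_cone(p):
        # p = l1*a1 + l2*a2 with l1, l2 >= 0, solved by Cramer's rule;
        # s = det*l1 and t = det*l2, so require matching signs.
        s = p[0] * a2[1] - p[1] * a2[0]
        t = a1[0] * p[1] - a1[1] * p[0]
        return det != 0 and s * det >= 0 and t * det >= 0

    pts = {p for p in product(range(-box, box + 1), repeat=2)
           if p != (0, 0) and in_cone(p)}
    # x is irreducible iff it is not a sum of two nonzero cone points.
    return sorted(p for p in pts
                  if not any((p[0] - q[0], p[1] - q[1]) in pts
                             for q in pts if q != p))

print(hilbert_basis_2d((1, 0), (1, 2)))  # [(1, 0), (1, 1), (1, 2)]
```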
| https://en.wikipedia.org/wiki/Hilbert_basis_(linear_programming) |
In mathematics , the Hilbert cube , named after David Hilbert , is a topological space that provides an instructive example of some ideas in topology . Furthermore, many interesting topological spaces can be embedded in the Hilbert cube; that is, can be viewed as subspaces of the Hilbert cube (see below).
The Hilbert cube is best defined as the topological product of the intervals [ 0 , 1 / n ] {\displaystyle [0,1/n]} for n = 1 , 2 , 3 , 4 , … . {\displaystyle n=1,2,3,4,\ldots .} That is, it is a cuboid of countably infinite dimension , where the lengths of the edges in each orthogonal direction form the sequence ( 1 / n ) n ∈ N . {\displaystyle \left(1/n\right)_{n\in \mathbb {N} }.}
The Hilbert cube is homeomorphic to the product of countably infinitely many copies of the unit interval [ 0 , 1 ] . {\displaystyle [0,1].} In other words, it is topologically indistinguishable from the unit cube of countably infinite dimension. Some authors use the term "Hilbert cube" to mean this Cartesian product instead of the product of the [ 0 , 1 n ] {\displaystyle \left[0,{\tfrac {1}{n}}\right]} . [ 1 ]
If a point in the Hilbert cube is specified by a sequence ( a n ) n ∈ N {\displaystyle \left(a_{n}\right)_{n\in \mathbb {N} }} with 0 ≤ a n ≤ 1 / n , {\displaystyle 0\leq a_{n}\leq 1/n,} then a homeomorphism to the infinite dimensional unit cube is given by h ( a ) n = n ⋅ a n . {\displaystyle h(a)_{n}=n\cdot a_{n}.}
It is sometimes convenient to think of the Hilbert cube as a metric space , indeed as a specific subset of a separable Hilbert space (that is, a Hilbert space with a countably infinite Hilbert basis).
For these purposes, it is best not to think of it as a product of copies of [ 0 , 1 ] , {\displaystyle [0,1],} but instead as [ 0 , 1 ] × [ 0 , 1 / 2 ] × [ 0 , 1 / 3 ] × ⋯ ; {\displaystyle [0,1]\times [0,1/2]\times [0,1/3]\times \cdots ;} as stated above, for topological properties, this makes no difference.
That is, an element of the Hilbert cube is an infinite sequence ( x n ) n ∈ N {\displaystyle \left(x_{n}\right)_{n\in \mathbb {N} }} that satisfies 0 ≤ x n ≤ 1 / n . {\displaystyle 0\leq x_{n}\leq 1/n.}
Any such sequence belongs to the Hilbert space ℓ 2 , {\displaystyle \ell _{2},} so the Hilbert cube inherits a metric from there. One can show that the topology induced by the metric is the same as the product topology in the above definition.
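These facts are easy to probe numerically on truncations. A small Python sketch follows (the number of retained coordinates and the random points are arbitrary; truncation introduces a tail error controlled by the tail of the series ∑ 1/n²):

```python
import math
import random

def cube_point(n_coords, rng):
    """A truncated Hilbert-cube point: 0 <= x_n <= 1/n for n = 1..N."""
    return [rng.random() / n for n in range(1, n_coords + 1)]

def l2_distance(p, q):
    """The metric inherited from the Hilbert space l^2."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

rng = random.Random(0)
p, q = cube_point(1000, rng), cube_point(1000, rng)

# Coordinates differ by at most 1/n, so every distance is bounded by
# sqrt(sum 1/n^2) = pi/sqrt(6) ~ 1.28: the cube has finite diameter in l^2.
print(l2_distance(p, q) <= math.pi / math.sqrt(6))

# The coordinatewise map h(a)_n = n * a_n lands in the unit cube [0,1]^N,
# implementing the homeomorphism mentioned above.
h = [n * x for n, x in enumerate(p, start=1)]
print(all(0.0 <= x <= 1.0 for x in h))
```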
As a product of compact Hausdorff spaces , the Hilbert cube is itself a compact Hausdorff space as a result of the Tychonoff theorem .
The compactness of the Hilbert cube can also be proved without the axiom of choice by constructing a continuous function from the usual Cantor set onto the Hilbert cube.
In ℓ 2 , {\displaystyle \ell _{2},} no point has a compact neighbourhood (thus, ℓ 2 {\displaystyle \ell _{2}} is not locally compact ). One might expect that all of the compact subsets of ℓ 2 {\displaystyle \ell _{2}} are finite-dimensional. The Hilbert cube shows that this is not the case. But the Hilbert cube fails to be a neighbourhood of any point p {\displaystyle p} because its side becomes smaller and smaller in each dimension, so that an open ball around p {\displaystyle p} of any fixed radius e > 0 {\displaystyle e>0} must go outside the cube in some dimension.
The Hilbert cube is a convex set, whose span is dense in the whole space, but whose interior is empty. This situation is impossible in finite dimensions. The closed tangent cone to the cube at the zero vector is the whole space.
Let K {\displaystyle K} be any infinite-dimensional, compact, convex subset of ℓ 2 {\displaystyle \ell _{2}} ; or more generally, any such subset of a locally convex topological vector space such that K {\displaystyle K} is also metrizable; or more generally still, any such subset of a metrizable space such that K {\displaystyle K} is also an absolute retract . Then K {\displaystyle K} is homeomorphic to the Hilbert cube. [ 2 ]
Every subset of the Hilbert cube inherits from the Hilbert cube the properties of being both metrizable (and therefore T4 ) and second countable . It is more interesting that the converse also holds: Every second countable T4 space is homeomorphic to a subset of the Hilbert cube.
In particular, every G δ -subset of the Hilbert cube is a Polish space , a topological space homeomorphic to a separable and complete metric space. Conversely, every Polish space is homeomorphic to a G δ -subset of the Hilbert cube. [ 3 ] | https://en.wikipedia.org/wiki/Hilbert_cube |
In parallel processing , the Hilbert curve scheduling method turns a multidimensional task allocation problem into a one-dimensional space filling problem using Hilbert curves , assigning related tasks to locations with higher levels of proximity. [ 1 ] Other space filling curves may also be used in various computing applications for similar purposes. [ 2 ]
The SLURM job scheduler which is used on a number of supercomputers uses a best fit algorithm based on Hilbert curve scheduling in order to optimize locality of task assignments. [ 2 ]
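The core primitive behind such schedulers is the map from grid coordinates to position along the curve. The Python sketch below is the standard iterative bit-manipulation version of this map for an n × n grid, with n a power of two; the grid size and task coordinates are illustrative, and SLURM's internal implementation differs in its details:

```python
def rot(n, x, y, rx, ry):
    """Rotate/flip a quadrant so that each level of the recursion
    sees a canonically oriented sub-square."""
    if ry == 0:
        if rx == 1:
            x, y = n - 1 - x, n - 1 - y
        x, y = y, x
    return x, y

def xy2d(n, x, y):
    """Index of grid point (x, y) along the Hilbert curve filling an
    n x n grid, where n is a power of two."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = rot(n, x, y, rx, ry)
        s //= 2
    return d

# Sorting tasks on a 2-D mesh by Hilbert index keeps tasks that are close
# on the mesh close in the 1-D ordering -- the locality the scheduler wants.
tasks = [(0, 0), (3, 3), (0, 1), (2, 2), (1, 0)]   # illustrative coordinates
print(sorted(tasks, key=lambda t: xy2d(4, *t)))
```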
| https://en.wikipedia.org/wiki/Hilbert_curve_scheduling |
Hilbert Spectroscopy uses Hilbert transforms to analyze broad spectrum signals from gigahertz to terahertz frequency radio. [ 1 ] One suggested use is to quickly analyze liquids inside airport passenger luggage. [ 1 ]
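The mathematical step underlying the method, recovering envelope and instantaneous frequency from the analytic signal built with the Hilbert transform, can be illustrated generically in Python. The test signal and sampling rate below are made up for the demonstration and say nothing about the actual gigahertz-to-terahertz instrumentation:

```python
import numpy as np
from scipy.signal import hilbert   # returns the analytic signal x + i*H(x)

fs = 1000.0                        # sampling rate in Hz (illustrative)
t = np.arange(0.0, 1.0, 1.0 / fs)
# An amplitude-modulated 50 Hz carrier stands in for a measured signal.
x = (1.0 + 0.5 * np.cos(2 * np.pi * 3.0 * t)) * np.cos(2 * np.pi * 50.0 * t)

analytic = hilbert(x)
envelope = np.abs(analytic)                    # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency, Hz

print(round(envelope.max(), 2), round(inst_freq.mean(), 1))  # ~1.5, ~50.0
```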
| https://en.wikipedia.org/wiki/Hilbert_spectroscopy |
In logic , more specifically proof theory , a Hilbert system , sometimes called Hilbert calculus , Hilbert-style system , Hilbert-style proof system , Hilbert-style deductive system or Hilbert–Ackermann system , is a type of formal proof system attributed to Gottlob Frege [ 1 ] and David Hilbert . [ 2 ] These deductive systems are most often studied for first-order logic , but are of interest for other logics as well.
It is defined as a deductive system that generates theorems from axioms and inference rules, [ 3 ] [ 4 ] [ 5 ] especially if the only postulated inference rule is modus ponens . [ 6 ] [ 7 ] Every Hilbert system is an axiomatic system , and many authors use only that less specific term for their Hilbert systems, [ 8 ] [ 9 ] [ 10 ] without mentioning any more specific one. In this context, "Hilbert systems" are contrasted with natural deduction systems, [ 3 ] in which no axioms are used, only inference rules.
While all sources that refer to an "axiomatic" logical proof system characterize it simply as a logical proof system with axioms, sources that use variants of the term "Hilbert system" sometimes define it in different ways, which will not be used in this article. For instance, Troelstra defines a "Hilbert system" as a system with axioms and with → E {\displaystyle {\rightarrow }E} and ∀ I {\displaystyle {\forall }I} as the only inference rules. [ 11 ] A specific set of axioms is also sometimes called "the Hilbert system", [ 12 ] or "the Hilbert-style calculus". [ 13 ] Sometimes, "Hilbert-style" is used to convey the type of axiomatic system that has its axioms given in schematic form, [ 2 ] as in the § Schematic form of P2 below—but other sources use the term "Hilbert-style" as encompassing both systems with schematic axioms and systems with a rule of substitution, [ 14 ] as this article does. The use of "Hilbert-style" and similar terms to describe axiomatic proof systems in logic is due to the influence of Hilbert and Ackermann 's Principles of Mathematical Logic (1928). [ 2 ]
Most variants of Hilbert systems take a characteristic tack in the way they balance a trade-off between logical axioms and rules of inference . [ 1 ] [ 6 ] [ 15 ] [ 11 ] Hilbert systems can be characterised by the choice of a large number of schemas of logical axioms and a small set of rules of inference . Systems of natural deduction take the opposite tack, including many deduction rules but very few or no axiom schemas. [ 3 ] The most commonly studied Hilbert systems have either just one rule of inference – modus ponens , for propositional logics – or two – with generalisation , to handle predicate logics , as well – and several infinite axiom schemas. Hilbert systems for alethic modal logics , sometimes called Hilbert-Lewis systems , additionally require the necessitation rule . Some systems use a finite list of concrete formulas as axioms instead of an infinite set of formulas via axiom schemas, in which case the uniform substitution rule is required. [ 14 ]
A characteristic feature of the many variants of Hilbert systems is that the context is not changed in any of their rules of inference, while both natural deduction and sequent calculus contain some context-changing rules. [ 16 ] Thus, if one is interested only in the derivability of tautologies , not of hypothetical judgments, then one can formalize the Hilbert system in such a way that its rules of inference contain only judgments of a rather simple form. The same cannot be done with the other two deduction systems: as context is changed in some of their rules of inference, they cannot be formalized so that hypothetical judgments could be avoided – not even if we want to use them just for proving derivability of tautologies.
In a Hilbert system, a formal deduction (or proof ) is a finite sequence of formulas in which each formula is either an axiom or is obtained from previous formulas by a rule of inference. These formal deductions are meant to mirror natural-language proofs, although they are far more detailed.
Suppose Γ {\displaystyle \Gamma } is a set of formulas, considered as hypotheses . For example, Γ {\displaystyle \Gamma } could be a set of axioms for group theory or set theory . The notation Γ ⊢ ϕ {\displaystyle \Gamma \vdash \phi } means that there is a deduction that ends with ϕ {\displaystyle \phi } using as axioms only logical axioms and elements of Γ {\displaystyle \Gamma } . Thus, informally, Γ ⊢ ϕ {\displaystyle \Gamma \vdash \phi } means that ϕ {\displaystyle \phi } is provable assuming all the formulas in Γ {\displaystyle \Gamma } .
Hilbert systems are characterized by the use of numerous schemas of logical axioms . An axiom schema is an infinite set of axioms obtained by substituting all formulas of some form into a specific pattern. The set of logical axioms includes not only those axioms generated from this pattern, but also any generalization of one of those axioms. A generalization of a formula is obtained by prefixing zero or more universal quantifiers on the formula; for example ∀ y ( ∀ x P x y → P t y ) {\displaystyle \forall y(\forall xPxy\to Pty)} is a generalization of ∀ x P x y → P t y {\displaystyle \forall xPxy\to Pty} .
The following are some Hilbert systems that have been used in propositional logic . One of them, the § Schematic form of P2 , is also considered a Frege system .
Axiomatic proofs have been used in mathematics since the famous Ancient Greek textbook, Euclid 's Elements of Geometry , c. 300 BC. But the first known fully formalized proof system that thereby qualifies as a Hilbert system dates back to Gottlob Frege 's 1879 Begriffsschrift . [ 9 ] [ 17 ] Frege's system used only implication and negation as connectives, [ 18 ] and it had six axioms, [ 17 ] which were these ones: [ 19 ] [ 20 ]
1. P → ( Q → P )
2. ( R → ( Q → P ) ) → ( ( R → Q ) → ( R → P ) )
3. ( R → ( Q → P ) ) → ( Q → ( R → P ) )
4. ( Q → P ) → ( ¬ P → ¬ Q )
5. ¬ ¬ P → P
6. P → ¬ ¬ P
These were used by Frege together with modus ponens and a rule of substitution (which was used but never precisely stated) to yield a complete and consistent axiomatization of classical truth-functional propositional logic. [ 19 ]
Jan Łukasiewicz showed that, in Frege's system, "the third axiom is superfluous since it can be derived from the preceding two axioms, and that the last three axioms can be replaced by the single sentence C C N p N q C q p {\displaystyle CCNpNqCqp} ". [ 20 ] Taken out of Łukasiewicz's Polish notation into modern notation, this means ( ¬ p → ¬ q ) → ( q → p ) {\displaystyle (\neg p\rightarrow \neg q)\rightarrow (q\rightarrow p)} . Hence, Łukasiewicz is credited [ 17 ] with this system of three axioms:
1. p → ( q → p )
2. ( p → ( q → r ) ) → ( ( p → q ) → ( p → r ) )
3. ( ¬ p → ¬ q ) → ( q → p )
Just like Frege's system, this system uses a substitution rule and uses modus ponens as an inference rule. [ 17 ] The exact same system was given (with an explicit substitution rule) by Alonzo Church , [ 21 ] who referred to it as the system P 2, [ 21 ] [ 22 ] and helped popularize it. [ 22 ]
One may avoid using the rule of substitution by giving the axioms in schematic form, using them to generate an infinite set of axioms. Hence, using Greek letters to represent schemas (metalogical variables that may stand for any well-formed formulas ), the axioms are given as: [ 9 ] [ 22 ]
φ → ( ψ → φ )
( φ → ( ψ → χ ) ) → ( ( φ → ψ ) → ( φ → χ ) )
( ¬ φ → ¬ ψ ) → ( ψ → φ )
The schematic version of P 2 is attributed to John von Neumann , [ 17 ] and is used in the Metamath "set.mm" formal proof database. [ 22 ] In fact, the very idea of using axiom schemas to replace the rule of substitution is attributed to von Neumann. [ 23 ] The schematic version of P 2 has also been attributed to Hilbert , and named H {\displaystyle {\mathcal {H}}} in this context. [ 24 ]
Systems for propositional logic whose inference rules are schematic are also called Frege systems ; as the authors that originally defined the term "Frege system" [ 25 ] note, this actually excludes Frege's own system, given above, since it had axioms instead of axiom schemas. [ 23 ]
As an example, a proof of A → A {\displaystyle A\to A} in P 2 is given below. First, the axioms are given names:
(A1) φ → ( ψ → φ )
(A2) ( φ → ( ψ → χ ) ) → ( ( φ → ψ ) → ( φ → χ ) )
(A3) ( ¬ φ → ¬ ψ ) → ( ψ → φ )
And the proof is as follows:
1. ( A → ( ( A → A ) → A ) ) → ( ( A → ( A → A ) ) → ( A → A ) ) (instance of (A2))
2. A → ( ( A → A ) → A ) (instance of (A1))
3. ( A → ( A → A ) ) → ( A → A ) (from (1) and (2) by modus ponens)
4. A → ( A → A ) (instance of (A1))
5. A → A (from (3) and (4) by modus ponens)
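Checking such a derivation is purely mechanical, which the following minimal Python sketch makes explicit. The tuple encoding and helper names are our own invention, not a standard library; only the three schemas above and modus ponens are supported:

```python
# Formulas as nested tuples: atoms are strings such as 'A',
# ('->', p, q) is an implication and ('~', p) a negation.
IMP, NOT = '->', '~'

AXIOMS = [
    (IMP, 'a', (IMP, 'b', 'a')),                               # (A1)
    (IMP, (IMP, 'a', (IMP, 'b', 'c')),
          (IMP, (IMP, 'a', 'b'), (IMP, 'a', 'c'))),            # (A2)
    (IMP, (IMP, (NOT, 'a'), (NOT, 'b')), (IMP, 'b', 'a')),     # (A3)
]

def match(pattern, formula, env):
    """Match a schema against a concrete formula; bare strings in the
    pattern are schematic variables.  Returns an extended substitution,
    or None if matching fails."""
    if isinstance(pattern, str):
        if pattern in env:
            return env if env[pattern] == formula else None
        return {**env, pattern: formula}
    if isinstance(formula, tuple) and len(formula) == len(pattern) \
            and formula[0] == pattern[0]:
        for p, f in zip(pattern[1:], formula[1:]):
            env = match(p, f, env)
            if env is None:
                return None
        return env
    return None

def is_axiom(f):
    return any(match(ax, f, {}) is not None for ax in AXIOMS)

def check(proof):
    """Every line must be an axiom instance or follow from two earlier
    lines by modus ponens."""
    return all(
        is_axiom(f) or any(proof[j] == (IMP, proof[k], f)
                           for j in range(i) for k in range(i))
        for i, f in enumerate(proof))

A = 'A'
proof = [
    (IMP, (IMP, A, (IMP, (IMP, A, A), A)),
          (IMP, (IMP, A, (IMP, A, A)), (IMP, A, A))),          # (A2) instance
    (IMP, A, (IMP, (IMP, A, A), A)),                           # (A1) instance
    (IMP, (IMP, A, (IMP, A, A)), (IMP, A, A)),                 # MP on 1, 2
    (IMP, A, (IMP, A, A)),                                     # (A1) instance
    (IMP, A, A),                                               # MP on 3, 4
]
print(check(proof))   # True
```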
There is an unlimited number of axiomatisations of predicate logic, since for any logic there is freedom in choosing axioms and rules that characterise that logic. We describe here a Hilbert system with nine axioms and just the rule modus ponens, which we call the one-rule axiomatisation and which describes classical equational logic. We deal with a minimal language for this logic, where formulas use only the connectives ¬ {\displaystyle \lnot } and → {\displaystyle \to } and only the quantifier ∀ {\displaystyle \forall } . Later we show how the system can be extended to include additional logical connectives, such as ∧ {\displaystyle \land } and ∨ {\displaystyle \lor } , without enlarging the class of deducible formulas.
The first four logical axiom schemas allow (together with modus ponens) for the manipulation of logical connectives.
P1. ϕ → ϕ
P2. ϕ → ( ψ → ϕ )
P3. ( ϕ → ( ψ → χ ) ) → ( ( ϕ → ψ ) → ( ϕ → χ ) )
P4. ( ¬ ϕ → ¬ ψ ) → ( ψ → ϕ )
The axiom P1 is redundant, as it follows from P3, P2 and modus ponens (see proof ). These axioms describe classical propositional logic ; without axiom P4 we get positive implicational logic . Minimal logic is achieved either by adding instead the axiom P4m, or by defining ¬ ϕ {\displaystyle \lnot \phi } as ϕ → ⊥ {\displaystyle \phi \to \bot } .
Intuitionistic logic is achieved by adding axioms P4i and P5i to positive implicational logic, or by adding axiom P5i to minimal logic. Both P4i and P5i are theorems of classical propositional logic.
Note that these are axiom schemas, which represent infinitely many specific instances of axioms. For example, P1 might represent the particular axiom instance p → p {\displaystyle p\to p} , or it might represent ( p → q ) → ( p → q ) {\displaystyle \left(p\to q\right)\to \left(p\to q\right)} : the ϕ {\displaystyle \phi } is a place where any formula can be placed. A variable such as this that ranges over formulae is called a 'schematic variable'.
With a second rule of uniform substitution (US), we can change each of these axiom schemas into a single axiom, replacing each schematic variable by some propositional variable that isn't mentioned in any axiom to get what we call the substitutional axiomatisation. Both formalisations have variables, but where the one-rule axiomatisation has schematic variables that are outside the logic's language, the substitutional axiomatisation uses propositional variables that do the same work by expressing the idea of a variable ranging over formulae with a rule that uses substitution.
The next three logical axiom schemas provide ways to add, manipulate, and remove universal quantifiers.
Q5. ∀ x ( ϕ ) → ϕ [ x := t ] , where the term t is substitutable for x in ϕ
Q6. ∀ x ( ϕ → ψ ) → ( ∀ x ( ϕ ) → ∀ x ( ψ ) )
Q7. ϕ → ∀ x ( ϕ ) , where the variable x is not free in ϕ
These three additional rules extend the propositional system to axiomatise classical predicate logic . Likewise, these three rules extend the system for intuitionistic propositional logic (with P1-3 and P4i and P5i) to intuitionistic predicate logic .
Universal quantification is often given an alternative axiomatisation using an extra rule of generalisation, in which case the rules Q6 and Q7 are redundant.
The final axiom schemas are required to work with formulas involving the equality symbol.
I8. x = x , for every variable x
I9. ( x = y ) → ( ϕ [ z := x ] → ϕ [ z := y ] )
It is common to include in a Hilbert system only axioms for the logical operators implication and negation towards functional completeness . Given these axioms, it is possible to form conservative extensions of the deduction theorem that permit the use of additional connectives. These extensions are called conservative because if a formula φ involving new connectives is rewritten as a logically equivalent formula θ involving only negation, implication, and universal quantification, then φ is derivable in the extended system if and only if θ is derivable in the original system. When fully extended, a Hilbert system will resemble more closely a system of natural deduction . | https://en.wikipedia.org/wiki/Hilbert_system |
In mathematics , particularly in dynamical systems , the Hilbert–Arnold problem is an unsolved problem concerning the estimation of limit cycles . It asks whether in a generic finite-parameter family of smooth vector fields on a sphere with a compact parameter base, the number of limit cycles is uniformly bounded across all parameter values. The problem is historically related to Hilbert's sixteenth problem and was first formulated by Russian mathematicians Vladimir Arnold and Yulij Ilyashenko in the 1980s. [ 1 ]
The problem arises from considering modern approaches to Hilbert's sixteenth problem . While Hilbert's original question focused on polynomial vector fields , mathematical attention shifted toward properties of generic families within certain classes. Unlike polynomial systems, typical smooth systems on a sphere can have arbitrarily many hyperbolic limit cycles that persist under small perturbations . However, the question of uniform boundedness across parameter families remains meaningful and forms the basis of the Hilbert–Arnold problem. [ 2 ]
Due to the compactness of both the parameter base and phase space, the Hilbert–Arnold problem can be reduced to a local problem studying bifurcations of special degenerate vector fields. This leads to the concept of polycycles— cyclically ordered sets of singular points connected by phase curve arcs—and their cyclicity, which measures the number of limit cycles born in bifurcations.
The local version of the Hilbert–Arnold problem asks whether the maximum cyclicity of nontrivial polycycles in generic k-parameter families (known as the bifurcation number B ( k ) {\displaystyle B(k)} ) is finite, and seeks explicit upper bounds. [ 3 ] The local Hilbert–Arnold problem has been solved for k = 1 {\displaystyle k=1} and k = 2 {\displaystyle k=2} , with B ( 1 ) = 1 {\displaystyle B(1)=1} and B ( 2 ) = 2 {\displaystyle B(2)=2} . For k = 3 {\displaystyle k=3} , a solution strategy exists but remains incomplete. A simplified version considering only elementary polycycles (where all vertices are elementary singular points with at least one nonzero eigenvalue ) has been more thoroughly studied. Ilyashenko and Yakovenko proved in 1995 that the elementary bifurcation number E ( k ) {\displaystyle E(k)} is finite for all k > 0 {\displaystyle k>0} . [ 4 ]
In 2003, mathematician Vadim Kaloshin established the explicit bound E ( k ) < 25 k 2 {\displaystyle E(k)<25^{k^{2}}} . [ 5 ] | https://en.wikipedia.org/wiki/Hilbert–Arnold_problem |
The Hilbert–Bernays paradox is a distinctive paradox belonging to the family of the paradoxes of reference . It is named after David Hilbert and Paul Bernays .
The paradox appears in Hilbert and Bernays' Grundlagen der Mathematik and is used by them to show that a sufficiently strong consistent theory cannot contain its own reference functor. [ 1 ] Although it has gone largely unnoticed in the course of the 20th century, it has recently been rediscovered and appreciated for the distinctive difficulties it presents. [ 2 ]
Just as the semantic property of truth seems to be governed by the naive schema: ' p ' is true if and only if p
(where single quotes refer to the linguistic expression inside the quotes), the semantic property of reference seems to be governed by the naive schema: (R) if a exists, then the referent of ' a ' is identical with a
Let us suppose however that, for every expression e in the language, the language also contains a name <e> for that expression, and consider a name h for (natural) numbers satisfying: (H) h is the expression '(the referent of < h > ) + 1'
Suppose that, for some number n : (1) n = the referent of < h >
Then, surely, the referent of <h> exists, and so does (the referent of <h> )+1. By (R), it then follows that: (2) the referent of < (the referent of < h > ) + 1 > = (the referent of < h > ) + 1
Therefore, by (H) and the principle of indiscernibility of identicals , it is the case that: (3) the referent of < h > = (the referent of < h > ) + 1
But, by two more applications of the indiscernibility of identicals, (1) and (3) yield: (4) n = n + 1
Alas, (4) is absurd, since no number is identical with its successor.
Since, given the diagonal lemma , every sufficiently strong theory will have to accept something like (H), absurdity can only be avoided either by rejecting the principle of naive reference (R) or by rejecting classical logic (which validates the reasoning from (R) and (H) to absurdity). On the first approach, typically whatever one says about the Liar paradox carries over smoothly to the Hilbert–Bernays paradox. [ 3 ] The paradox presents instead distinctive difficulties for many solutions pursuing the second approach: for example, solutions to the Liar paradox that reject the law of excluded middle (which is not used by the Hilbert–Bernays paradox) have denied that there is such a thing as the referent of h ; [ 4 ] solutions to the Liar paradox that reject the law of noncontradiction (which is likewise not used by the Hilbert–Bernays paradox) have claimed that h refers to more than one object. [ 2 ] | https://en.wikipedia.org/wiki/Hilbert–Bernays_paradox |
In mathematical logic , the Hilbert–Bernays provability conditions , named after David Hilbert and Paul Bernays , are a set of requirements for formalized provability predicates in formal theories of arithmetic (Smith 2007:224).
These conditions are used in many proofs of Kurt Gödel 's second incompleteness theorem . They are also closely related to axioms of provability logic .
Let T be a formal theory of arithmetic with a formalized provability predicate Prov( n ) , which is expressed as a formula of T with one free number variable. For each formula φ in the theory, let #(φ) be the Gödel number of φ . The Hilbert–Bernays provability conditions are:
1. If T proves a formula φ , then T proves Prov(#(φ)) .
2. For every formula φ , T proves Prov(#(φ)) → Prov(#(Prov(#(φ)))) .
3. T proves that Prov(#(φ → ψ)) and Prov(#(φ)) imply Prov(#(ψ)) .
Note that Prov is a predicate of numbers, and it is a provability predicate in the sense that the intended interpretation of Prov(#(φ)) is that there exists a number that codes for a proof of φ . Formally, what is required of Prov are the above three conditions.
In the more concise notation of provability logic , letting T ⊢ φ {\displaystyle T\vdash \varphi } denote " T {\displaystyle T} proves φ {\displaystyle \varphi } " and ◻ φ {\displaystyle \Box \varphi } denote Prov ( # ( φ ) ) {\displaystyle {\text{Prov}}(\#(\varphi ))} :
1. If T ⊢ φ {\displaystyle T\vdash \varphi } then T ⊢ ◻ φ {\displaystyle T\vdash \Box \varphi }
2. T ⊢ ◻ φ → ◻ ◻ φ {\displaystyle T\vdash \Box \varphi \to \Box \Box \varphi }
3. T ⊢ ◻ ( φ → ψ ) → ( ◻ φ → ◻ ψ ) {\displaystyle T\vdash \Box (\varphi \to \psi )\to (\Box \varphi \to \Box \psi )}
The Hilbert–Bernays provability conditions, combined with the diagonal lemma , allow proving both of Gödel's incompleteness theorems concisely. Indeed, the main effort of Gödel's proofs lay in showing that these conditions (or equivalent ones) and the diagonal lemma hold for Peano arithmetic; once these are established, the proofs can be formalized easily.
Using the diagonal lemma, there is a formula ρ {\displaystyle \rho } such that T ⊩ ρ ↔ ¬ P r o v ( # ( ρ ) ) {\displaystyle T\Vdash \rho \leftrightarrow \neg Prov(\#(\rho ))} .
For the first theorem only the first and third conditions are needed.
The condition that T is ω-consistent is generalized by the condition that if for every formula φ , if T proves Prov(#(φ)) , then T proves φ . Note that this indeed holds for an ω -consistent T because Prov(#(φ)) means that there is a number coding for the proof of φ , and if T is ω -consistent then going through all natural numbers one can actually find such a particular number a , and then one can use a to construct an actual proof of φ in T .
Suppose T could have proven ρ {\displaystyle \rho } . We then would have the following theorems in T :
1. ρ
2. ρ → ¬Prov(#(ρ)) (from the construction of ρ )
3. ¬Prov(#(ρ)) (from 1 and 2)
4. Prov(#(ρ)) (by condition no. 1, applied to the proof of ρ )
Thus T proves both P r o v ( # ( ρ ) ) {\displaystyle Prov(\#(\rho ))} and ¬ P r o v ( # ( ρ ) ) {\displaystyle \neg Prov(\#(\rho ))} . But if T is consistent, this is impossible, and we are forced to conclude that T does not prove ρ {\displaystyle \rho } .
Now let us suppose T could have proven ¬ ρ {\displaystyle \neg \rho } . We then would have the following theorems in T :
1. ¬ρ
2. Prov(#(ρ)) (from 1 and the construction of ρ )
3. ρ (from 2, by the generalized ω -consistency condition)
Thus T proves both ρ {\displaystyle \rho } and ¬ ρ {\displaystyle \neg \rho } . But if T is consistent, this is impossible, and we are forced to conclude that T does not prove ¬ ρ {\displaystyle \neg \rho } .
To conclude, T can prove neither ρ {\displaystyle \rho } nor ¬ ρ {\displaystyle \neg \rho } .
Using Rosser's trick , one need not assume that T is ω -consistent. However, one would need to show that the first and third provability conditions hold for Prov R , Rosser's provability predicate, rather than for the naive provability predicate Prov. This follows from the fact that for every formula φ , Prov(#(φ)) holds if and only if Prov R (#(φ)) holds.
An additional condition used is that T proves that Prov R (#(φ)) implies ¬Prov R (#(¬φ)) . This condition holds for every T that includes logic and very basic arithmetic (as elaborated in Rosser's trick#The Rosser sentence ).
Using Rosser's trick, ρ is defined using Rosser's provability predicate, instead of the naive provability predicate. The first part of the proof remains unchanged, except that the provability predicate is replaced with Rosser's provability predicate there, too.
The second part of the proof no longer uses ω-consistency, and is changed to the following:
Suppose T could have proven ¬ ρ {\displaystyle \neg \rho } . We then would have the following theorems in T :
1. ¬ρ
2. Prov R (#(ρ)) (from 1 and the construction of ρ )
3. Prov R (#(¬ρ)) (by condition no. 1, applied to the proof of ¬ρ )
4. ¬Prov R (#(¬ρ)) (from 2 and the additional condition)
Thus T proves both P r o v R ( # ( ¬ ρ ) ) {\displaystyle Prov^{R}(\#(\neg \rho ))} and ¬ P r o v R ( # ( ¬ ρ ) ) {\displaystyle \neg Prov^{R}(\#(\neg \rho ))} . But if T is consistent, this is impossible, and we are forced to conclude that T does not prove ¬ ρ {\displaystyle \neg \rho } .
We assume that T proves its own consistency, i.e. that:
For every formula φ :
It is possible to show by using condition no. 1 on the latter theorem, followed by repeated use of condition no. 3, that:
And using T proving its own consistency it follows that:
We now use this to show that T is not consistent:
Thus T proves both P r o v ( # ( ρ ) ) {\displaystyle Prov(\#(\rho ))} and ¬ P r o v ( # ( ρ ) ) {\displaystyle \neg Prov(\#(\rho ))} , hence T is inconsistent. | https://en.wikipedia.org/wiki/Hilbert–Bernays_provability_conditions |
In mathematics , the Hilbert–Burch theorem describes the structure of some free resolutions of a quotient of a local or graded ring in the case that the quotient has projective dimension 2. Hilbert ( 1890 ) proved a version of this theorem for polynomial rings , and Burch ( 1968 , p. 944) proved a more general version. Several other authors later rediscovered and published variations of this theorem. Eisenbud (1995 , theorem 20.15) gives a statement and proof.
If R is a local ring with an ideal I and
0 → R m → f R n → R → R / I → 0 {\displaystyle 0\to R^{m}\,{\xrightarrow {f}}\,R^{n}\to R\to R/I\to 0}
is a free resolution of the R - module R / I , then m = n – 1 and the ideal I is aJ where a is a regular element of R and J , a depth-2 ideal, is the first Fitting ideal Fitt 1 I {\displaystyle \operatorname {Fitt} _{1}I} of I , i.e., the ideal generated by the determinants of the minors of size m of the matrix of f .
| https://en.wikipedia.org/wiki/Hilbert–Burch_theorem |
In algebra , the Hilbert–Kunz function of a local ring ( R , m ) of prime characteristic p is the function f ( q ) = length R ⁡ ( R / m [ q ] ) {\displaystyle f(q)=\operatorname {length} _{R}(R/m^{[q]})}
where q is a power of p and m [ q ] is the ideal generated by the q -th powers of elements of the maximal ideal m . [ 1 ]
The notion was introduced by Ernst Kunz , who used it to characterize a regular ring as a Noetherian ring in which the Frobenius morphism is flat . If d is the dimension of the local ring, Monsky showed that f(q)/(q^d) is c+O(1/q) for some real constant c. This constant, the "Hilbert-Kunz multiplicity", is greater than or equal to 1. Watanabe and Yoshida strengthened some of Kunz's results, showing that in the unmixed case, the ring is regular precisely when c=1.
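For monomial examples the function can be computed by brute-force counting. The Python sketch below treats the one-dimensional ring R = F_p[x, y]/(xy), chosen purely for illustration: here R/m^[q] = k[x, y]/(xy, x^q, y^q) has monomial basis {1, x, …, x^(q−1), y, …, y^(q−1)}, so f(q) = 2q − 1 and the Hilbert–Kunz multiplicity comes out as c = 2:

```python
from itertools import product

def hk_length(q):
    """Length of R/m^[q] for R = F_p[x, y]/(xy): count the monomials
    x^a y^b with 0 <= a, b < q that are not divisible by xy
    (these monomials form a k-basis of the quotient)."""
    return sum(1 for a, b in product(range(q), repeat=2)
               if a == 0 or b == 0)

# f(q) = 2q - 1, so f(q)/q^d = 2 - 1/q with d = 1: the multiplicity is c = 2.
for e in range(1, 6):
    q = 2 ** e                     # q ranges over powers of p = 2
    print(q, hk_length(q), hk_length(q) / q)
```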
Hilbert–Kunz functions and multiplicities have been studied for their own sake. Brenner and Trivedi have treated local rings coming from the homogeneous co-ordinate rings of smooth projective curves, using techniques from algebraic geometry . Han, Monsky, and Teixeira have treated diagonal hypersurfaces and various related hypersurfaces . But there is no known technique for determining the Hilbert–Kunz function or c in general. In particular the question of whether c is always rational wasn't settled until recently (by Brenner—it needn't be, and indeed can be transcendental). Hochster and Huneke related Hilbert-Kunz multiplicities to " tight closure " and Brenner and Monsky used Hilbert–Kunz functions to show that localization need not preserve tight closure. The question of how c behaves as the characteristic goes to infinity (say for a hypersurface defined by a polynomial with integer coefficients) has also received attention; once again open questions abound.
A comprehensive overview is to be found in Craig Huneke's article "Hilbert-Kunz multiplicities and the F-signature" arXiv:1409.0467. This article is also found on pages 485-525 of the Springer volume "Commutative Algebra: Expository Papers Dedicated to David Eisenbud on the Occasion of His 65th Birthday", edited by Irena Peeva.
| https://en.wikipedia.org/wiki/Hilbert–Kunz_function |
In mathematics , and in particular in the field of algebra , a Hilbert–Poincaré series (also known under the name Hilbert series ), named after David Hilbert and Henri Poincaré , is an adaptation of the notion of dimension to the context of graded algebraic structures (where the dimension of the entire structure is often infinite). It is a formal power series in one indeterminate, say t {\displaystyle t} , where the coefficient of t n {\displaystyle t^{n}} gives the dimension (or rank) of the sub-structure of elements homogeneous of degree n {\displaystyle n} . It is closely related to the Hilbert polynomial in cases when the latter exists; however, the Hilbert–Poincaré series describes the rank in every degree, while the Hilbert polynomial describes it only in all but finitely many degrees, and therefore provides less information. In particular the Hilbert–Poincaré series cannot be deduced from the Hilbert polynomial even if the latter exists. In good cases, the Hilbert–Poincaré series can be expressed as a rational function of its argument t {\displaystyle t} .
Let K be a field, and let V = ⨁ i ∈ N V i {\displaystyle V=\textstyle \bigoplus _{i\in \mathbb {N} }V_{i}} be an N {\displaystyle \mathbb {N} } - graded vector space over K , where each subspace V i {\displaystyle V_{i}} of vectors of degree i is finite-dimensional. Then the Hilbert–Poincaré series of V is the formal power series ∑ n ∈ N dim ⁡ ( V n ) t n {\displaystyle \sum _{n\in \mathbb {N} }\dim(V_{n})\,t^{n}} .
A similar definition can be given for an N {\displaystyle \mathbb {N} } -graded R -module over any commutative ring R in which each submodule of elements homogeneous of a fixed degree n is free of finite rank; it suffices to replace the dimension by the rank. Often the graded vector space or module of which the Hilbert–Poincaré series is considered has additional structure, for instance, that of a ring, but the Hilbert–Poincaré series is independent of the multiplicative or other structure.
Example: Since there are ( n + k k ) {\displaystyle \textstyle {\binom {n+k}{k}}} monomials of degree k in variables X 0 , … , X n {\displaystyle X_{0},\dots ,X_{n}} (by induction, say), one can deduce that the sum of the Hilbert–Poincaré series of K [ X 0 , … , X n ] {\displaystyle K[X_{0},\dots ,X_{n}]} is the rational function 1 / ( 1 − t ) n + 1 {\displaystyle 1/(1-t)^{n+1}} . [ 2 ]
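This identity is easy to spot-check numerically. A short sketch using the sympy library (the variable names are ours):

```python
from sympy import symbols, series, binomial

t = symbols('t')
n = 2                                   # the ring K[X_0, X_1, X_2]

# Expand the rational function 1/(1 - t)^(n + 1) as a power series ...
expansion = series(1 / (1 - t) ** (n + 1), t, 0, 8).removeO()

# ... and compare each coefficient with the number of degree-k monomials.
for k in range(8):
    assert expansion.coeff(t, k) == binomial(n + k, k)
print(expansion)
```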
Suppose M is a finitely generated graded module over A [ x 1 , … , x n ] , deg x i = d i {\displaystyle A[x_{1},\dots ,x_{n}],\deg x_{i}=d_{i}} with an Artinian ring (e.g., a field) A . Then the Poincaré series of M is a polynomial with integral coefficients divided by ∏ ( 1 − t d i ) {\displaystyle \prod (1-t^{d_{i}})} . [ 3 ] The standard proof today is an induction on n . Hilbert's original proof made a use of Hilbert's syzygy theorem (a projective resolution of M ), which gives more homological information.
Here is a proof by induction on the number n of indeterminates. If n = 0 {\displaystyle n=0} , then, since M has finite length , M k = 0 {\displaystyle M_{k}=0} if k is large enough. Next, suppose the theorem is true for n − 1 {\displaystyle n-1} and consider the exact sequence of graded modules (exact degree-wise), with the notation N ( l ) k = N k + l {\displaystyle N(l)_{k}=N_{k+l}} ,
0 → K → M ( − d n ) → x n M → C → 0 , {\displaystyle 0\to K\to M(-d_{n})\,{\xrightarrow {x_{n}}}\,M\to C\to 0,}
where K and C denote the kernel and cokernel of multiplication by x n {\displaystyle x_{n}} .
Since the length is additive, Poincaré series are also additive. Hence, we have: P ( K , t ) − P ( M ( − d n ) , t ) + P ( M , t ) − P ( C , t ) = 0. {\displaystyle P(K,t)-P(M(-d_{n}),t)+P(M,t)-P(C,t)=0.}
We can write P ( M ( − d n ) , t ) = t d n P ( M , t ) {\displaystyle P(M(-d_{n}),t)=t^{d_{n}}P(M,t)} . Since K is killed by x n {\displaystyle x_{n}} , we can regard it as a graded module over A [ x 1 , … , x n − 1 ] {\displaystyle A[x_{1},\dots ,x_{n-1}]} ; the same is true for C . The theorem thus now follows from the inductive hypothesis.
An example of graded vector space is associated to a chain complex , or cochain complex C of vector spaces; the latter takes the form ⋯ → C i − 1 → C i → C i + 1 → ⋯ {\displaystyle \cdots \to C^{i-1}\to C^{i}\to C^{i+1}\to \cdots } .
The Hilbert–Poincaré series (here often called the Poincaré polynomial) of the graded vector space ⨁ i C i {\displaystyle \bigoplus _{i}C^{i}} for this complex is
The Hilbert–Poincaré polynomial of the cohomology , with cohomology spaces H j = H j ( C ), is
A famous relation between the two is that there is a polynomial Q ( t ) {\displaystyle Q(t)} with non-negative coefficients, such that P C ( t ) − P H ( t ) = ( 1 + t ) Q ( t ) . {\displaystyle P_{C}(t)-P_{H}(t)=(1+t)Q(t).} | https://en.wikipedia.org/wiki/Hilbert–Poincaré_series |
The Hildebrand solubility parameter (δ) provides a numerical estimate of the degree of interaction between materials and can be a good indication of solubility , particularly for nonpolar materials such as many polymers . Materials with similar values of δ are likely to be miscible .
The Hildebrand solubility parameter is the square root of the cohesive energy density : δ = Δ H v − R T V m {\displaystyle \delta ={\sqrt {\frac {\Delta H_{v}-RT}{V_{m}}}}} , where Δ H v {\displaystyle \Delta H_{v}} is the heat of vaporization , V m {\displaystyle V_{m}} the molar volume, R the gas constant and T the absolute temperature (the R T {\displaystyle RT} term converts the enthalpy of vaporization into an energy of vaporization).
The cohesive energy density is the amount of energy needed to completely remove a unit volume of molecules from their neighbours to infinite separation (an ideal gas ). This is equal to the heat of vaporization of the compound divided by its molar volume in the condensed phase. In order for a material to dissolve, these same interactions need to be overcome, as the molecules are separated from each other and surrounded by the solvent. In 1936 Joel Henry Hildebrand suggested the square root of the cohesive energy density as a numerical value indicating solvency behavior. [ 1 ] This later became known as the "Hildebrand solubility parameter". Materials with similar solubility parameters will be able to interact with each other, resulting in solvation , miscibility or swelling.
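The computation is a one-liner once the heat of vaporization and molar volume are known. In the Python sketch below the inputs are round, hexane-like numbers chosen for illustration rather than tabulated data:

```python
import math

R = 8.314        # gas constant, J/(mol*K)
T = 298.15       # temperature, K

# Hexane-like, illustrative inputs (round numbers, not tabulated data):
dHvap = 31.0e3   # enthalpy of vaporization, J/mol
Vm = 131.6e-6    # molar volume, m^3/mol

delta_SI = math.sqrt((dHvap - R * T) / Vm)   # Pa^(1/2)
delta_MPa = delta_SI / 1.0e3                 # MPa^(1/2)
delta_cal = delta_MPa / 2.045483             # cal^(1/2) cm^(-3/2)

print(f"{delta_MPa:.1f} MPa^1/2 = {delta_cal:.1f} cal^1/2 cm^-3/2")
# roughly 14.7 MPa^1/2, i.e. about 7.2 cal^1/2 cm^-3/2
```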
Its principal utility is that it provides simple predictions of phase equilibrium based on a single parameter that is readily obtained for most materials. These predictions are often useful for nonpolar and slightly polar ( dipole moment < 2 debyes ) systems without hydrogen bonding. It has found particular use in predicting solubility and swelling of polymers by solvents. More complicated three-dimensional solubility parameters, such as Hansen solubility parameters , have been proposed for polar molecules.
The principal limitation of the solubility parameter approach is that it applies only to associated solutions ("like dissolves like" or, technically speaking, positive deviations from Raoult's law ); it cannot account for negative deviations from Raoult's law that result from effects such as solvation or the formation of electron donor–acceptor complexes. Like any simple predictive theory, it can inspire overconfidence; it is best used for screening with data used to verify the predictions.
The conventional units for the solubility parameter are ( calories per cm 3 ) 1/2 , or cal 1/2 cm −3/2 . The SI units are J 1/2 m −3/2 , equivalent to the pascal 1/2 . 1 calorie is equal to 4.184 J.
1 cal 1/2 cm −3/2 = (523/125 J) 1/2 (10 −2 m) −3/2 = (4.184 J) 1/2 (0.01 m) −3/2 = 2.045483 × 10 3 J 1/2 m −3/2 = 2.045483 (10 6 J/m 3 ) 1/2 = 2.045483 MPa 1/2 .
Given the non-exact nature of the use of δ, it is often sufficient to say that the number in MPa 1/2 is about twice the number in cal 1/2 cm −3/2 .
Where the units are not given, for example, in older books, it is usually safe to assume the non-SI unit.
From the table, poly(ethylene) has a solubility parameter of 7.9 cal 1/2 cm −3/2 . Good solvents are likely to be diethyl ether and hexane . (However, PE only dissolves at temperatures well above 100 °C.) Poly(styrene) has a solubility parameter of 9.1 cal 1/2 cm −3/2 , and thus ethyl acetate is likely to be a good solvent. Nylon 6,6 has a solubility parameter of 13.7 cal 1/2 cm −3/2 , and ethanol is likely to be the best solvent of those tabulated. However, the latter is polar, and thus we should be very cautious about using just the Hildebrand solubility parameter to make predictions.
| https://en.wikipedia.org/wiki/Hildebrand_solubility_parameter |
In biomechanics , Hill's muscle model refers to the 3-element model consisting of a contractile element (CE) in series with a lightly damped elastic spring element (SE) and in parallel with a lightly damped elastic parallel element (PE). Within this model, the estimated force-velocity relation for the CE element is usually modeled by what is commonly called Hill's equation, which was based on careful experiments involving tetanized muscle contraction where various muscle loads and associated velocities were measured. It was derived by the famous physiologist Archibald Vivian Hill , who had already won the Nobel Prize in Physiology or Medicine by 1938, when he introduced this model and equation. He continued to publish in this area through 1970. There are many forms of the basic "Hill-based" or "Hill-type" models, with hundreds of publications having used this model structure for experimental and simulation studies. Most major musculoskeletal simulation packages make use of this model.
This is a popular state equation applicable to skeletal muscle that has been stimulated to show Tetanic contraction . It relates tension to velocity with regard to the internal thermodynamics . The equation is
( v + b ) ( F + a ) = b ( F 0 + a ) {\displaystyle (v+b)(F+a)=b(F_{0}+a)}
where
F is the tension (or load) in the muscle
v is the velocity of contraction
F 0 is the maximum isometric tension generated in the muscle
a is the coefficient of shortening heat
b = a ⋅ v 0 / F 0
v 0 is the maximum velocity, when F = 0
Although Hill's equation looks very much like the van der Waals equation , the former has units of energy dissipation , while the latter has units of energy . Hill's equation demonstrates that the relationship between F and v is hyperbolic . Therefore, the higher the load applied to the muscle, the lower the contraction velocity. Similarly, the higher the contraction velocity, the lower the tension in the muscle. This hyperbolic form has been found to fit the empirical data only during isotonic contractions near resting length. [ 1 ]
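Solving Hill's relation for tension makes the hyperbola easy to tabulate. A small Python sketch with normalized parameters (the ratio a/F 0 ≈ 0.25 is a typical ballpark figure, not a measured value):

```python
import numpy as np

def hill_force(v, F0, a, b):
    """Tension as a function of shortening velocity, from
    (F + a)(v + b) = (F0 + a) * b solved for F."""
    return b * (F0 + a) / (v + b) - a

F0 = 1.0                # peak isometric tension (normalized)
a = 0.25 * F0           # shortening-heat coefficient
v_max = 1.0             # shortening velocity at zero load (normalized)
b = a * v_max / F0      # fixed by requiring F(v_max) = 0

v = np.linspace(0.0, v_max, 6)
print(np.round(hill_force(v, F0, a, b), 3))
# Tension falls from F0 at v = 0 to 0 at v = v_max along a hyperbola.
```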
The muscle tension decreases as the shortening velocity increases. This feature has been attributed to two main causes. The major cause appears to be the loss in tension as the cross bridges in the contractile element break and then reform in a shortened condition. The second cause appears to be the fluid viscosity in both the contractile element and the connective tissue. Whichever the cause of loss of tension, it is a viscous friction and can therefore be modeled as a fluid damper . [ 2 ]
The three-element Hill muscle model is a representation of the muscle mechanical response. The model is constituted by a contractile element ( CE ) and two non-linear spring elements , one in series ( SE ) and another in parallel ( PE ). The active force of the contractile element comes from the force generated by the actin and myosin cross-bridges at the sarcomere level. It is fully extensible when inactive but capable of shortening when activated. The connective tissues ( fascia , epimysium , perimysium and endomysium ) that surround the contractile element influences the muscle's force-length curve. The parallel element represents the passive force of these connective tissues and has a soft tissue mechanical behavior. The parallel element is responsible for the muscle passive behavior when it is stretched , even when the contractile element is not activated. The series element represents the tendon and the intrinsic elasticity of the myofilaments. It also has a soft tissue response and provides energy storing mechanism. [ 2 ] [ 3 ]
The net force-length characteristics of a muscle is a combination of the force-length characteristics of both active and passive elements. The forces in the contractile element, in the series element and in the parallel element, F C E {\displaystyle F^{CE}} , F S E {\displaystyle F^{SE}} and F P E {\displaystyle F^{PE}} , respectively, satisfy
On the other hand, the muscle length L {\displaystyle L} and the lengths L C E {\displaystyle L^{CE}} , L S E {\displaystyle L^{SE}} and L P E {\displaystyle L^{PE}} of those elements satisfy
During isometric contractions the series elastic component is under tension and therefore is stretched a finite amount. Because the overall length of the muscle is kept constant, the stretching of the series element can only occur if there is an equal shortening of the contractile element itself. [ 2 ]
The forces in the parallel, series and contractile elements are defined by: F P E ( λ f ) = F 0 f P E ( λ f ) , F S E ( λ S E , λ C E ) = F 0 f S E ( λ S E , λ C E ) , F C E ( λ C E , λ ˙ C E , a ) = F 0 f L C E ( λ C E ) f V C E ( λ ˙ C E ) a , ( 4 ) {\displaystyle F^{PE}(\lambda _{f})=F_{0}f^{PE}(\lambda _{f}),\qquad F^{SE}(\lambda ^{SE},\lambda ^{CE})=F_{0}f^{SE}(\lambda ^{SE},\lambda ^{CE}),\qquad F^{CE}(\lambda ^{CE},{\dot {\lambda }}^{CE},a)=F_{0}f_{L}^{CE}(\lambda ^{CE})f_{V}^{CE}({\dot {\lambda }}^{CE})a,\qquad (4)} where λ f , λ C E , λ S E {\textstyle \lambda _{f},\lambda _{CE},\lambda _{SE}} are strain measures for the different elements defined by: λ f = L L 0 , λ C E = L C E L 0 , λ S E = L L C E , ( 5 ) {\displaystyle \lambda _{f}={\frac {L}{L_{0}}},\quad \lambda ^{CE}={\frac {L^{CE}}{L_{0}}},\quad \lambda ^{SE}={\frac {L}{L^{CE}}},\qquad (5)} where L {\textstyle L} is the deformed muscle length and L C E {\textstyle L^{CE}} is the deformed muscle length due to motion of the contractile element, both from equation (3). L 0 {\textstyle L_{0}} is the rest length of the muscle. λ f {\displaystyle \lambda _{f}} can be split as λ f = λ S E λ C E {\textstyle \lambda _{f}=\lambda ^{SE}\lambda ^{CE}} . The force term, F 0 {\displaystyle F_{0}} , is the peak isometric muscle force and the functions f P E , f S E , f L C E , f V C E {\textstyle f^{PE},f^{SE},f_{L}^{CE},f_{V}^{CE}} are given by: f P E ( λ f ) = { 2 c A ( λ f − 1 ) e c ( λ f − 1 ) 2 , λ f > 1 0 , otherwise , ( 6 ) f S E ( λ S E , λ C E ) = { 0.1 ( e 100 λ C E ( λ S E − 1 ) − 1 ) , λ S E ≥ 1 0 , otherwise , ( 7 ) f L C E ( λ C E ) = { − 4 ( λ C E − 1 ) 2 + 1 , 0.5 ≤ λ C E ≤ 1.5 0 , otherwise , ( 8 ) f V C E ( λ ˙ C E ) = { 0 , λ ˙ C E < − 10 s − 1 − 1 arctan ( 5 ) arctan ( − 0.5 λ ˙ C E ) + 1 , − 10 s − 1 ≤ λ ˙ C E ≤ 2 s − 1 π 4 arctan ( 5 ) + 1 , λ ˙ C E > 2 s − 1 , ( 9 ) {\displaystyle {\begin{array}{lcr}f^{PE}(\lambda _{f})={\begin{cases}2cA(\lambda _{f}-1)e^{c(\lambda _{f}-1)^{2}},&\lambda _{f}>1\\{\text{0}},&{\text{otherwise}}\end{cases}},&(6)\\[4pt]f^{SE}(\lambda ^{SE},\lambda ^{CE})={\begin{cases}0.1(e^{100\lambda ^{CE}(\lambda ^{SE}-1)}-1),&\lambda ^{SE}\geq 1\\{\text{0}},&{\text{otherwise}}\end{cases}},&(7)\\[4pt]f_{L}^{CE}(\lambda ^{CE})={\begin{cases}-4(\lambda ^{CE}-1)^{2}+1,&0.5\leq \lambda ^{CE}\leq 1.5\\{\text{0}},&{\text{otherwise}}\end{cases}},&(8)\\[4pt]f_{V}^{CE}({\dot {\lambda }}^{CE})={\begin{cases}{\text{0}},&{\dot {\lambda }}^{CE}<-10s^{-1}\\-{\frac {1}{\arctan(5)}}\arctan(-0.5{\dot {\lambda }}^{CE})+1,&-10s^{-1}\leq {\dot {\lambda }}^{CE}\leq 2s^{-1}\\{\frac {\pi }{4\arctan(5)}}+1,&{\dot {\lambda }}^{CE}>2s^{-1}\end{cases}},&(9)\end{array}}}
where c , A {\displaystyle c,A} are empirical constants. The function a ( t ) {\displaystyle a(t)} from equation (4) represents the muscle activation. It is defined based on the ordinary differential equation: d a ( t ) d t = 1 τ r i s e ( 1 − a ( t ) ) u ( t ) + 1 τ f a l l ( a m i n − a ( t ) ) ( 1 − u ( t ) ) , ( 10 ) {\displaystyle {\frac {da(t)}{dt}}={\frac {1}{\tau _{rise}}}(1-a(t))u(t)+{\frac {1}{\tau _{fall}}}(a_{min}-a(t))(1-u(t)),\qquad (10)} where τ r i s e , τ f a l l {\displaystyle \tau _{rise},\tau _{fall}} are time constants related to rise and decay for muscle activation and a m i n {\displaystyle a_{min}} is a minimum bound, all determined from experiments. u ( t ) {\displaystyle u(t)} is the neural excitation that leads to muscle contraction. [ 4 ] [ 5 ]
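Equation (10) can be integrated directly. The forward-Euler Python sketch below uses illustrative time constants (20 ms rise, 200 ms fall are ballpark figures, not values for a specific muscle):

```python
import numpy as np

def simulate_activation(u, dt=0.001, tau_rise=0.02, tau_fall=0.2, a_min=0.01):
    """Forward-Euler integration of equation (10):
    da/dt = (1 - a) u / tau_rise + (a_min - a)(1 - u) / tau_fall."""
    a = np.empty_like(u)
    a[0] = a_min
    for i in range(1, len(u)):
        da = (1.0 - a[i - 1]) * u[i - 1] / tau_rise \
             + (a_min - a[i - 1]) * (1.0 - u[i - 1]) / tau_fall
        a[i] = a[i - 1] + dt * da
    return a

t = np.arange(0.0, 1.0, 0.001)
u = ((t > 0.1) & (t < 0.5)).astype(float)   # a 0.4 s burst of excitation
a = simulate_activation(u)
print(round(a.max(), 3))   # rises toward 1 during the burst, decays after
```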
Muscles present viscoelasticity , therefore a viscous damper may be included in the model when the dynamics of the second-order critically damped twitch are considered. One common model for muscular viscosity is an exponential form damper, where
is added to the model's global equation, where k {\displaystyle k} and a {\displaystyle a} are constants. [ 2 ] | https://en.wikipedia.org/wiki/Hill's_muscle_model |
Hill's spherical vortex is an exact solution of the Euler equations that is commonly used to model a vortex ring . The solution is also used to model the velocity distribution inside a spherical drop of one fluid moving at a constant velocity through another fluid at small Reynolds number. [ 1 ] The vortex is named after Micaiah John Muller Hill who discovered the exact solution in 1894. [ 2 ] The two-dimensional analogue of this vortex is the Lamb–Chaplygin dipole .
The solution is described in the spherical polar coordinates system ( r , θ , ϕ ) {\displaystyle (r,\theta ,\phi )} with corresponding velocity components ( v r , v θ , 0 ) {\displaystyle (v_{r},v_{\theta },0)} . The velocity components are identified from Stokes stream function ψ ( r , θ ) {\displaystyle \psi (r,\theta )} as follows
v r = 1 r 2 sin ⁡ θ ∂ ψ ∂ θ , v θ = − 1 r sin ⁡ θ ∂ ψ ∂ r . {\displaystyle v_{r}={\frac {1}{r^{2}\sin \theta }}{\frac {\partial \psi }{\partial \theta }},\qquad v_{\theta }=-{\frac {1}{r\sin \theta }}{\frac {\partial \psi }{\partial r}}.}
The Hill's spherical vortex is described by [ 3 ]
where U {\displaystyle U} is a constant freestream velocity far away from the origin and a {\displaystyle a} is the radius of the sphere within which the vorticity is non-zero. For r ≥ a {\displaystyle r\geq a} , the vorticity is zero and the solution described above in that range is nothing but the potential flow past a sphere of radius a {\displaystyle a} . The only non-zero vorticity component for r ≤ a {\displaystyle r\leq a} is the azimuthal component that is given by
Note that here the parameters U {\displaystyle U} and a {\displaystyle a} can be scaled out by non-dimensionalization.
Hill's spherical vortex with a superimposed swirling motion was given by Keith Moffatt in 1969. [ 4 ] Earlier discussions of similar problems were provided by William Mitchinson Hicks in 1899. [ 5 ] The solution was also discovered by Kelvin H. Pendergast in 1956, in the context of plasma physics, [ 6 ] as there exists a direct connection between these fluid flows and plasma physics (see the connection between Hicks equation and Grad–Shafranov equation ). The motion ( v r , v θ , v ϕ ) {\displaystyle (v_{r},v_{\theta },v_{\phi })} in the axial (or, meridional) plane is described by the Stokes stream function ψ {\displaystyle \psi } as before. The azimuthal motion v ϕ {\textstyle v_{\phi }} is given by
where
where J 3 / 2 {\displaystyle J_{3/2}} and J 5 / 2 {\displaystyle J_{5/2}} are the Bessel functions of the first kind. Unlike the Hill's spherical vortex without any swirling motion, the problem here contains an arbitrary parameter k a {\displaystyle ka} . A general class of solutions of the Euler's equation describing propagating three-dimensional vortices without change of shape is provided by Keith Moffatt in 1986. [ 7 ] | https://en.wikipedia.org/wiki/Hill's_spherical_vortex |
In solid-state physics , the Hill limit is a critical distance defined in a lattice of actinide or rare-earth atoms. [ 1 ] These atoms have partially filled 4 f {\displaystyle 4f} or 5 f {\displaystyle 5f} levels in their valence shell, which are therefore responsible for the main interaction between each atom and its environment. In this context, the Hill limit r H {\displaystyle r_{H}} is defined as twice the radius of the f {\displaystyle f} -orbital. [ 2 ] Therefore, if two atoms of the lattice are separated by a distance greater than the Hill limit, the overlap of their f {\displaystyle f} -orbitals becomes negligible. A direct consequence is the absence of hopping for the f electrons, i.e., their localization on the ion sites of the lattice.
Localized f electrons lead to paramagnetic materials since the remaining unpaired spins are stuck in their orbitals. However, when the rare-earth lattice (or a single atom) is embedded in a metallic one ( intermetallic compound ), interactions with the conduction band allow the f electrons to move through the lattice even for interatomic distances above the Hill limit. | https://en.wikipedia.org/wiki/Hill_limit_(solid-state) |
The Hill reaction is the light-driven transfer of electrons from water to Hill reagents (non-physiological oxidants) in a direction against the chemical potential gradient as part of photosynthesis . Robin Hill discovered the reaction in 1937. He demonstrated that the process by which plants produce oxygen is separate from the process that converts carbon dioxide to sugars.
The evolution of oxygen during the light-dependent steps in photosynthesis (Hill reaction) was proposed and proven by British biochemist Robin Hill . He demonstrated that isolated chloroplasts would make oxygen (O 2 ) but not fix carbon dioxide (CO 2 ). This is evidence that the light and dark reactions occur at different sites within the cell. [ 1 ] [ 2 ] [ 3 ]
Hill's finding was that the origin of oxygen in photosynthesis is water (H 2 O) not carbon dioxide (CO 2 ) as previously believed. Hill's observation of chloroplasts in dark conditions and in the absence of CO 2 , showed that the artificial electron acceptor was oxidized but not reduced, terminating the process, but without production of oxygen and sugar. This observation allowed Hill to conclude that oxygen is released during the light-dependent steps (Hill reaction) of photosynthesis. [ 4 ]
Hill also discovered Hill reagents, artificial electron acceptors that participate in the light reaction, such as Dichlorophenolindophenol (DCPIP), a dye that changes color when reduced. These dyes permitted the finding of electron transport chains during photosynthesis.
Further studies of the Hill reaction were made in 1957 by plant physiologist Daniel I. Arnon . Arnon studied the Hill reaction using a natural electron acceptor, NADP. He demonstrated the light-independent reaction, observing the reaction under dark conditions with an abundance of carbon dioxide. He found that carbon fixation was independent of light. Arnon effectively separated the light-dependent reaction, which produces ATP, NADPH, H + and oxygen, from the light-independent reaction that produces sugars.
Photosynthesis is the process in which light energy is absorbed and converted to chemical energy. This chemical energy is eventually used in the conversion of carbon dioxide to sugar in plants.
During photosynthesis, natural electron acceptor NADP is reduced to NADPH in chloroplasts. [ 5 ] The following equilibrium reaction takes place.
A reduction reaction that stores energy as NADPH: NADP + + H + + 2 e − → NADPH
An oxidation reaction as NADPH's energy is used elsewhere: NADPH → NADP + + H + + 2 e −
Ferredoxin–NADP + reductase is an enzyme that catalyzes the reduction reaction. It is easy to oxidize NADPH but difficult to reduce NADP + , hence a catalyst is beneficial. Cytochromes are conjugate proteins that contain a haem group. [ 5 ] The iron atom from this group undergoes redox reactions: Fe 3+ + e − ⇌ Fe 2+
The light-dependent redox reaction takes place before the light-independent reaction in photosynthesis. [ 6 ]
Isolated chloroplasts placed under light conditions but in the absence of CO 2 , reduce and then oxidize artificial electron acceptors, allowing the process to proceed. Oxygen (O 2 ) is released as a byproduct, but not sugar (CH 2 O).
Chloroplasts placed under dark conditions and in the absence of CO 2 , oxidize the artificial acceptor but do not reduce it, terminating the process, without production of oxygen or sugar. [ 4 ]
The rates of phosphorylation and of reduction of an electron acceptor such as ferricyanide increase together upon the addition of phosphate , magnesium (Mg), and ADP . The presence of these three components is important for maximal reductive and phosphorylative activity. Similar increases in the rate of ferricyanide reduction can be stimulated by a dilution technique. Dilution does not cause a further increase in the rate at which ferricyanide is reduced after the addition of ADP, phosphate, and Mg to a treated chloroplast suspension. ATP inhibits the rate of ferricyanide reduction. Studies of light intensities revealed that the effect was largely on the light-independent steps of the Hill reaction. These observations are explained in terms of a proposed mechanism in which phosphate esterifies during the electron transport reactions that reduce ferricyanide, while the rate of electron transport is limited by the rate of phosphorylation. An increase in the rate of phosphorylation increases the rate at which electrons are transported in the electron transport system. [ 7 ]
It is possible to introduce an artificial electron acceptor into the light reaction, such as a dye that changes color when it is reduced. These are known as Hill reagents. These dyes permitted the finding of electron transport chains during photosynthesis. Dichlorophenolindophenol (DCPIP), an example of these dyes, is widely used by experimenters. DCPIP is a dark blue solution that becomes lighter as it is reduced. It provides experimenters with a simple visual test and easily observable light reaction. [ 8 ]
In another approach to studying photosynthesis, light-absorbing pigments such as chlorophyll can be extracted from chloroplasts. Like so many important biological systems in the cell, the photosynthetic system is ordered and compartmentalized in a system of membranes . [ 9 ] | https://en.wikipedia.org/wiki/Hill_reaction |
Discovered in 1937 by Robin Hill , Hill reagents allowed the discovery of electron transport chains during photosynthesis .
These are dyes that act as artificial electron acceptors, changing color when they are reduced.
An example of a Hill reagent is 2,6-dichlorophenolindophenol ( DCPIP ).
| https://en.wikipedia.org/wiki/Hill_reagent |
In astronomy , the Hills cloud (also called the inner Oort cloud [ 1 ] and inner cloud [ 2 ] ) is a theoretical vast circumstellar disc , interior to the Oort cloud , whose outer border would be located at around 20,000 to 30,000 astronomical units (AU) from the Sun , and whose inner border, less well defined, is hypothetically located at 250–1500 AU, well beyond planetary and Kuiper Belt object orbits—but distances might be much greater. If it exists, the Hills cloud likely contains roughly 5 times as many comets as the Oort cloud. [ 3 ]
The need for the Hills cloud hypothesis is intimately connected with the dynamics of the Oort cloud: Oort cloud comets are continually perturbed in their environment. A non-negligible fraction leave the Solar System , or tumble into the inner system where they evaporate, fall into the Sun, or collide with or are ejected by the giant planets . Hence, the Oort cloud should have been depleted long ago, but it is still well supplied with comets.
The Hills cloud hypothesis addresses this persistence by postulating a densely populated inner Oort region, the "Hills cloud". Objects ejected from the Hills cloud are likely to end up in the classical Oort cloud region, replenishing it. [ 4 ] The Hills cloud likely holds the largest concentration of comets in the whole Solar System.
The existence of the Hills cloud is plausible, since many bodies have been found there already. It should be denser than the Oort cloud. [ 5 ] [ 6 ] Gravitational interaction with the closest stars and tidal effects from the galaxy have given circular orbits to the comets in the Oort cloud, which may not be the case for the comets in the Hills cloud. The Hills cloud's total mass is unknown; some scientists think it would be many times more massive than the outer Oort cloud.
Between 1932 and 1981, astronomers believed that the Oort cloud, proposed by Ernst Öpik and Jan Oort , and the Kuiper belt were the only reservoirs of comets in the Solar System.
In 1932, Estonian astronomer Ernst Öpik hypothesized that comets originated in a cloud orbiting at the outer boundary of the Solar System. [ 7 ] In 1950, this idea was revived independently by Dutch astronomer Jan Oort to explain an apparent contradiction: comets are destroyed after several passes through the inner Solar System, so if any had existed for several billion years (since the beginning of the Solar System), none could still be observed now. [ 8 ]
For his study, Oort selected the 46 comets best observed between 1850 and 1952. The distribution of the reciprocals of their semi-major axes showed a peak in frequency that suggested the existence of a reservoir of comets between 40,000 and 150,000 AU (0.6 and 2.4 ly) away. This reservoir, located at the limits of the Sun's sphere of influence , would be subject to stellar disturbances likely to expel cloud comets outwards or impel them inwards.
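Oort's reasoning can be made concrete with a toy calculation: binning the reciprocal semi-major axes (1/a) of long-period comets reveals a pile-up at very small 1/a, pointing to orbits with very distant aphelia. The semi-major axes below are synthetic stand-ins chosen for illustration, not Oort's original data.

```python
# Minimal sketch of Oort's 1/a argument with synthetic data: a pile-up of
# comets at very small 1/a (i.e., very large semi-major axis a, in AU)
# suggests a distant reservoir.

# Hypothetical semi-major axes (AU) for a handful of long-period comets.
semi_major_axes_au = [45_000, 60_000, 52_000, 110_000, 48_000,
                      75_000, 1_200, 3_500, 58_000, 66_000]

reciprocals = [1.0 / a for a in semi_major_axes_au]

# Count how many orbits fall in the 1/a band Oort associated with a
# reservoir at roughly 40,000-150,000 AU.
in_band = [r for r in reciprocals if 1 / 150_000 <= r <= 1 / 40_000]
print(f"{len(in_band)} of {len(reciprocals)} orbits "
      "cluster in the 40,000-150,000 AU band")
```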
In the 1980s, astronomers realized that the main cloud could have an internal section that would start at about 3,000 AU from the Sun and continue up to the classical cloud at 20,000 AU. Most estimates place the population of the Hills cloud at about 20 trillion comets (about five to ten times that of the outer cloud), although the number could be ten times greater than that. [ 9 ]
The main model of an "inner cloud" was proposed in 1981 by the astronomer Jack G. Hills , from the Los Alamos Laboratory, who gave the region its name. He calculated that the passage of a star near the Solar System could have triggered a "comet rain," thereby causing extinctions on Earth.
His research suggested that the orbits of most cloud comets have a semi-major axis of 10,000 AU, much closer to the Sun than the proposed distance of the Oort cloud. [ 5 ] Moreover, the influence of the surrounding stars and that of the galactic tide should have sent the Oort cloud comets either closer to the Sun or outside of the Solar System. To account for these issues, Hills proposed the presence of an inner cloud, which would have tens or hundreds of times as many comet nuclei as the outer halo. [ 5 ] Thus, it would be a possible source of new comets to resupply the tenuous outer cloud.
In the following years other astronomers searched for the Hills cloud and studied long-period comets . Among them were Sidney van den Bergh and Mark E. Bailey, who suggested structures for the Hills cloud in 1982 and 1983, respectively. [ 10 ] In 1986, Bailey stated that the majority of comets in the Solar System were located not in the Oort cloud region but closer in, in an internal cloud, on orbits with a semi-major axis of 5,000 AU. [ 10 ] The research was further expanded by the studies of Victor Clube and Bill Napier (1987) and of R. B. Stothers (1988). [ 10 ]
However, the Hills cloud gained major interest in 1991, [ 11 ] when scientists revisited Hills' theory. [ a ]
Oort cloud comets are constantly disturbed by their surroundings and by distant objects. A significant number either leave the Solar System or fall much closer to the Sun. The Oort cloud should therefore have broken apart long ago, but it still remains intact. The Hills cloud proposal provides a possible explanation; J. G. Hills and other scientists have suggested that it replenishes the comets of the outer Oort cloud. [ 12 ]
It is also likely that the Hills cloud is the largest concentration of comets in the whole Solar System. [ 10 ] The Hills cloud should be much denser than the outer Oort cloud: if it exists, it spans somewhere between 5,000 and 20,000 AU from the Sun, whereas the Oort cloud spans between 20,000 and 50,000 AU (0.3 and 0.8 ly). [ 13 ]
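A quick unit check of the quoted extents, using the standard conversion of about 63,241 AU per light-year:

```python
# Convert the quoted cloud extents from AU to light-years (1 ly ~ 63,241 AU)
# to check the parenthetical figures above.
AU_PER_LY = 63_241.0

for label, inner_au, outer_au in [
    ("Hills cloud (hypothetical)", 5_000, 20_000),
    ("Oort cloud", 20_000, 50_000),
]:
    print(f"{label}: {inner_au / AU_PER_LY:.2f} to {outer_au / AU_PER_LY:.2f} ly")
```

The Oort cloud bounds come out to about 0.32 and 0.79 ly, consistent with the rounded 0.3 and 0.8 ly quoted above.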
The mass of the Hills cloud is not known. Some scientists believe it could be five times more massive than the Oort cloud. [ 3 ] Mark E. Bailey estimated the mass of the Hills cloud at 13.8 Earth masses if the majority of the bodies are located at 10,000 AU. [ 10 ]
If the analyses of comets are representative of the whole, the vast majority of Hills cloud objects consist of various ices, such as water, methane, ethane, carbon monoxide and hydrogen cyanide. [ 14 ] However, the discovery of the object 1996 PW , an asteroid on an orbit typical of a long-period comet, suggests that the cloud may also contain rocky objects. [ 15 ]
Analyses of carbon and nitrogen isotopic ratios, both in comets of the Oort cloud families and in bodies of the Jupiter family, show little difference between the two, despite their distinctly remote regions of origin. This suggests that both originated in a protoplanetary disk , [ 16 ] a conclusion also supported by studies of comet cloud sizes and by the recent impact study of comet Tempel 1 . [ 17 ]
Many scientists think that the Hills cloud formed from a close (800 AU) encounter between the Sun and another star within the first 800 million years of the Solar System , which could explain the eccentric orbit of 90377 Sedna , an object that should not be where it is, being influenced neither by Jupiter nor by Neptune , nor by tidal effects. [ 18 ] The Hills cloud might then be "younger" than the Oort cloud . However, only Sedna and two other sednoids ( 2012 VP 113 and 541132 Leleākūhonua ) exhibit these irregularities; for 2000 OO 67 and 2006 SQ 372 this theory is not necessary, because both orbit close to the Solar System's gas giants .
Bodies in the Hills cloud are thought to consist mostly of water ice, methane and ammonia. Astronomers suspect that many long-period comets, such as Comet Hyakutake , originate from the Hills cloud.
In their article announcing the discovery of Sedna, Mike Brown and his colleagues asserted that they had observed the first Oort cloud object. They observed that, unlike scattered disc objects such as Eris, Sedna's perihelion (76 AU) is too remote for the gravitational influence of Neptune to have played a role in its evolution. [ 19 ] The authors regarded Sedna as an "inner Oort cloud object", located along the ecliptic between the Kuiper belt and the more spherical part of the Oort cloud. [ 20 ] [ 21 ] However, Sedna is much closer to the Sun than expected for objects in the Hills cloud, and its inclination is close to that of the planets and the Kuiper belt.
Considerable mystery surrounds 2008 KV 42 , whose retrograde orbit suggests that it may originate from the Hills cloud or perhaps the Oort cloud. [ 22 ] The same applies to the damocloids , whose origins are uncertain, such as the namesake of that category, 5335 Damocles .
Astronomers suspect that several comets come from the same region as the Hills cloud; in particular, they focus on those with aphelia greater than 1,000 AU (and thus from a region farther out than the Kuiper belt) but less than 10,000 AU (otherwise they would be too close to the outer Oort cloud).
Some famous comets reach great distances and are candidates for Hills cloud objects. For example, Comet Lovejoy , discovered on 15 March 2007 by Australian astronomer Terry Lovejoy , had an inbound aphelion distance of around 1,800 AU. Comet Hyakutake, discovered in 1996 by amateur astronomer Yuji Hyakutake , has an outbound aphelion of 3,500 AU. Comet McNaught , discovered on 7 August 2006 in Australia by Robert H. McNaught , became one of the brightest comets of recent decades, with an aphelion of 4,100 AU. Comet Machholz , discovered on 27 August 2004 by amateur astronomer Donald Machholz , came from about 5,000 AU.
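The aphelion window just described lends itself to a simple filter. The sketch below applies it to the comets listed above, using the approximate aphelion distances quoted in the text:

```python
# Classify comets as Hills cloud candidates using the aphelion window from
# the text: farther than ~1,000 AU (beyond the Kuiper belt region) but
# closer than ~10,000 AU (short of the outer Oort cloud). The aphelion
# values are the approximate figures quoted above.
candidates = {
    "Comet Lovejoy (2007)": 1_800,
    "Comet Hyakutake": 3_500,
    "Comet McNaught": 4_100,
    "Comet Machholz": 5_000,
}

for name, aphelion_au in candidates.items():
    verdict = "Hills cloud candidate" if 1_000 < aphelion_au < 10_000 else "outside window"
    print(f"{name}: aphelion ~{aphelion_au:,} AU -> {verdict}")
```

All four fall inside the window, which is why they are cited as candidate Hills cloud objects.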
Sedna is a dwarf planet discovered by Michael E. Brown , Chad Trujillo and David L. Rabinowitz on 14 November 2003. Spectroscopic measurements show that its surface composition is similar to that of other trans-Neptunian objects : it is mainly composed of a mixture of water, methane and nitrogen ices with tholins . Its surface is one of the reddest in the Solar System.
Sedna may be the first detected object originating from the Hills cloud, depending on the definition used: here the Hills cloud region is defined as comprising objects with orbits between 1,500 and 10,000 AU. [ 23 ]
Sedna is, however, much closer than the supposed distance of the Hills cloud. The planetoid, discovered at a distance of about 13 billion kilometres (87 AU) from the Sun, travels on an elliptical orbit with a period of 11,400 years, a perihelion of only 76 AU at its closest approach (the next to occur in 2076), and an aphelion of 936 AU at its farthest point.
However, Sedna is not considered a Kuiper belt object, because its orbit never brings it into the Kuiper belt region inside 50 AU. Sedna is a " detached object " and thus is not in a resonance with Neptune .
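The orbital figures quoted above are mutually consistent under Kepler's third law: for an orbit around the Sun, the period in years is approximately the semi-major axis in AU raised to the power 3/2. A quick check:

```python
# Consistency check of Sedna's quoted orbit via Kepler's third law
# (P in years ~ a**1.5 with a in AU, for orbits around the Sun).
perihelion_au = 76.0
aphelion_au = 936.0

semi_major_axis_au = (perihelion_au + aphelion_au) / 2.0  # ~506 AU
period_years = semi_major_axis_au ** 1.5                  # ~11,400 years

print(f"semi-major axis ~ {semi_major_axis_au:.0f} AU, "
      f"period ~ {period_years:,.0f} years")
```

This closely reproduces the quoted 11,400-year period.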
The trans-Neptunian object 2012 VP 113 , announced on 26 March 2014, has an orbit similar to Sedna's, with a perihelion significantly detached from Neptune. Its orbit lies between 80 and 400 AU from the Sun.
Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Local Hole → Observable universe → Universe Each arrow ( → ) may be read as "within" or "part of". | https://en.wikipedia.org/wiki/Hills_cloud |
In population genetics , the Hill–Robertson effect , or Hill–Robertson interference , is a phenomenon first identified by Bill Hill and Alan Robertson in 1966. [ 1 ] It provides an explanation as to why there may be an evolutionary advantage to genetic recombination .
In a population of finite effective size that is subject to natural selection , varying degrees of linkage disequilibrium (LD) will occur. These can be caused by genetic drift or by mutation , and they tend to slow down the process of evolution by natural selection. [ 2 ]
This is most easily seen by considering the case of disequilibria caused by mutation:
Consider a population of individuals whose genome has only two genes, a and b . If an advantageous mutant ( A ) of gene a arises in a given individual, that individual's descendants will, through natural selection, become more frequent in the population over time. However, if a separate advantageous mutant ( B ) of gene b arises before A has gone to fixation, and happens to arise in an individual who does not carry A , then individuals carrying B and individuals carrying A will be in competition. If recombination is present, then individuals carrying both A and B (of genotype AB ) will eventually arise. Provided there are no negative epistatic effects of carrying both, individuals of genotype AB will have a greater selective advantage than aB or Ab individuals, and AB will hence go to fixation. However, if there is no recombination, AB individuals can only occur if the latter mutation ( B ) happens to occur in an Ab individual. The chance of this happening depends on the frequency of new mutations and on the size of the population, but it is in general unlikely unless A is already fixed, or nearly fixed. Hence one should expect the time between the A mutation arising and the population becoming fixed for AB to be much longer in the absence of recombination; recombination thus allows evolution to progress faster, as the simulation sketch below illustrates. [Note: This effect is often erroneously equated with "clonal interference", which happens when A and B mutations arise in different wild-type ( ab ) individuals and describes the ensuing competition between Ab and aB lineages.] [ 2 ] There tends to be a correlation between the rate of recombination and the likelihood that the preferred haplotype (labeled AB in the above example) goes to fixation in a population. [ 3 ]
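A hedged illustration of this argument: the two-locus Wright–Fisher sketch below starts with A and B on different genetic backgrounds and omits new mutation, so the AB haplotype can form only through recombination. Population size, selection strength, and recombination rate are arbitrary illustrative choices, not values from Hill and Robertson's paper.

```python
# Minimal two-locus Wright-Fisher sketch of the Hill-Robertson effect.
# A and B start on different backgrounds (only Ab and aB present) and there
# is no new mutation, so the AB haplotype can arise only via recombination.
import random

N = 200          # haploid population size (arbitrary illustrative value)
S = 0.05         # multiplicative advantage per beneficial (uppercase) allele
GENERATIONS = 600
HAPS = ("ab", "Ab", "aB", "AB")

def fitness(h):
    # One factor of (1 + S) per beneficial allele; no epistasis.
    return (1 + S) ** sum(c.isupper() for c in h)

def next_generation(counts, recomb_rate, rng):
    # Selection: weight each haplotype by its count times its fitness.
    weights = [counts[h] * fitness(h) for h in HAPS]
    new_counts = dict.fromkeys(HAPS, 0)
    for _ in range(N):
        if rng.random() < recomb_rate:
            # Free recombination: locus 1 from one parent, locus 2 from another.
            p1, p2 = rng.choices(HAPS, weights=weights, k=2)
            child = p1[0] + p2[1]
        else:
            child = rng.choices(HAPS, weights=weights, k=1)[0]
        new_counts[child] += 1
    return new_counts

def ab_fixes(recomb_rate, rng):
    counts = {"ab": N - 20, "Ab": 10, "aB": 10, "AB": 0}
    for _ in range(GENERATIONS):
        counts = next_generation(counts, recomb_rate, rng)
        if counts["AB"] == N:
            return True
        # If either beneficial allele is lost, AB can never form (no mutation).
        if counts["Ab"] + counts["AB"] == 0 or counts["aB"] + counts["AB"] == 0:
            return False
    return False

rng = random.Random(1)
TRIALS = 20
for rate in (0.0, 0.5):
    fixed = sum(ab_fixes(rate, rng) for _ in range(TRIALS))
    print(f"recombination rate {rate}: AB fixed in {fixed}/{TRIALS} runs")
```

With the recombination rate set to zero, AB never fixes in this simplified setting; with free recombination it fixes in a substantial fraction of runs, mirroring the verbal argument.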
Joe Felsenstein (1974) [ 4 ] showed this effect to be mathematically identical to the Fisher–Muller model proposed by R. A. Fisher (1930) [ 5 ] and H. J. Muller (1932), [ 6 ] although the verbal arguments were substantially different. Although the Hill–Robertson effect is usually thought of as describing a disproportionate buildup of fitness-reducing (relative to fitness-increasing) LD over time, these effects also have immediate consequences for mean population fitness. [ 7 ] | https://en.wikipedia.org/wiki/Hill–Robertson_effect |