Heating, ventilation, and air conditioning (HVAC) is the use of various technologies to control the temperature, humidity, and purity of the air in an enclosed space. Its goal is to provide thermal comfort and acceptable indoor air quality. HVAC system design is a subdiscipline of mechanical engineering, based on the principles of thermodynamics, fluid mechanics, and heat transfer. "Refrigeration" is sometimes added to the field's abbreviation as HVAC&R or HVACR, or "ventilation" is dropped, as in HACR (as in the designation of HACR-rated circuit breakers). HVAC is an important part of residential structures such as single-family homes, apartment buildings, hotels, and senior living facilities; of medium to large industrial and office buildings such as skyscrapers and hospitals; of vehicles such as cars, trains, airplanes, ships and submarines; and of marine environments, where safe and healthy building conditions are regulated with respect to temperature and humidity, using fresh air from outdoors.

Ventilating or ventilation (the "V" in HVAC) is the process of exchanging or replacing air in any space to provide high indoor air quality; it involves temperature control, oxygen replenishment, and removal of moisture, odors, smoke, heat, dust, airborne bacteria, carbon dioxide, and other gases. Ventilation removes unpleasant smells and excessive moisture, introduces outside air, and keeps interior air circulating. Building ventilation methods are categorized as mechanical (forced) or natural. [1]

The three major functions of heating, ventilation, and air conditioning are interrelated, especially with the need to provide thermal comfort and acceptable indoor air quality within reasonable installation, operation, and maintenance costs. HVAC systems can be used in both domestic and commercial environments. HVAC systems can provide ventilation and maintain pressure relationships between spaces. The means of air delivery and removal from spaces is known as room air distribution. [2]

In modern buildings, the design, installation, and control systems of these functions are integrated into one or more HVAC systems. For very small buildings, contractors normally estimate the capacity and type of system needed and then design the system, selecting the appropriate refrigerant and the various components needed. For larger buildings, building service designers, mechanical engineers, or building services engineers analyze, design, and specify the HVAC systems. Specialty mechanical contractors and suppliers then fabricate, install and commission the systems. Building permits and code-compliance inspections of the installations are normally required for all sizes of buildings.

Although HVAC is executed in individual buildings or other enclosed spaces (like NORAD's underground headquarters), the equipment involved is in some cases an extension of a larger district heating (DH) or district cooling (DC) network, or a combined DHC network. In such cases, the operating and maintenance aspects are simplified and metering becomes necessary to bill for the energy that is consumed, and in some cases energy that is returned to the larger system. For example, at a given time one building may be utilizing chilled water for air conditioning and the warm water it returns may be used in another building for heating, or for the overall heating portion of the DHC network (likely with energy added to boost the temperature).
[3][4][5] Basing HVAC on a larger network helps provide an economy of scale that is often not possible for individual buildings, for utilizing renewable energy sources such as solar heat, [6][7][8] winter's cold, [9][10] the cooling potential in some places of lakes or seawater for free cooling, and the enabling function of seasonal thermal energy storage. Utilizing natural sources for HVAC can significantly benefit the environment and promote awareness of alternative methods.

HVAC is based on inventions and discoveries made by Nikolay Lvov, Michael Faraday, Rolla C. Carpenter, Willis Carrier, Edwin Ruud, Reuben Trane, James Joule, William Rankine, Sadi Carnot, Alice Parker and many others. [11] Multiple inventions within this time frame preceded the beginnings of the first comfort air conditioning system, which was designed in 1902 by Alfred Wolff (Cooper, 2003) for the New York Stock Exchange, while Willis Carrier equipped the Sackett-Wilhelms Printing Company with the process AC unit the same year. Coyne College was the first school to offer HVAC training in 1899. [12] The first residential AC was installed by 1914, and by the 1950s there was "widespread adoption of residential AC". [13] The invention of the components of HVAC systems went hand-in-hand with the Industrial Revolution, and new methods of modernization, higher efficiency, and system control are constantly being introduced by companies and inventors worldwide.

Heaters are appliances whose purpose is to generate heat (i.e. warmth) for the building. This can be done via central heating. Such a system contains a boiler, furnace, or heat pump to heat water, steam, or air in a central location such as a furnace room in a home, or a mechanical room in a large building. The heat can be transferred by convection, conduction, or radiation. Space heaters are used to heat single rooms and consist of only a single unit.

Heaters exist for various types of fuel, including solid fuels, liquids, and gases. Another type of heat source is electricity, normally heating ribbons composed of high-resistance wire (see Nichrome). This principle is also used for baseboard heaters and portable heaters. Electrical heaters are often used as backup or supplemental heat for heat pump systems.

The heat pump gained popularity in the 1950s in Japan and the United States. [14] Heat pumps can extract heat from various sources, such as environmental air, exhaust air from a building, or the ground. Heat pumps transfer heat from outside the structure into the air inside. Initially, heat pump HVAC systems were only used in moderate climates, but with improvements in low-temperature operation and reduced loads due to more efficient homes, they are increasing in popularity in cooler climates. They can also operate in reverse to cool an interior.

In the case of heated water or steam, piping is used to transport the heat to the rooms. Most modern hot water boiler heating systems have a circulator, which is a pump, to move hot water through the distribution system (as opposed to older gravity-fed systems). The heat can be transferred to the surrounding air using radiators, hot water coils (hydro-air), or other heat exchangers. The radiators may be mounted on walls or installed within the floor to produce floor heat. The use of water as the heat transfer medium is known as hydronics. The heated water can also supply an auxiliary heat exchanger to supply hot water for bathing and washing.
Warm air systems distribute heated air through ductwork systems of supply and return air through metal or fiberglass ducts. Many systems use the same ducts to distribute air cooled by an evaporator coil for air conditioning. The air supply is normally filtered through air filters to remove dust and pollen particles. [15]

The use of furnaces, space heaters, and boilers as a method of indoor heating could result in incomplete combustion and the emission of carbon monoxide, nitrogen oxides, formaldehyde, volatile organic compounds, and other combustion byproducts. Incomplete combustion occurs when there is insufficient oxygen; the inputs are fuels containing various contaminants and the outputs are harmful byproducts, most dangerously carbon monoxide, which is a tasteless and odorless gas with serious adverse health effects. [16] Without proper ventilation, carbon monoxide can be lethal at concentrations of 1000 ppm (0.1%). However, at several hundred ppm, carbon monoxide exposure induces headaches, fatigue, nausea, and vomiting. Carbon monoxide binds with hemoglobin in the blood, forming carboxyhemoglobin, reducing the blood's ability to transport oxygen. The primary health concerns associated with carbon monoxide exposure are its cardiovascular and neurobehavioral effects. Carbon monoxide can cause atherosclerosis (the hardening of arteries) and can also trigger heart attacks. Neurologically, carbon monoxide exposure reduces hand-eye coordination, vigilance, and continuous performance. It can also affect time discrimination. [17]

Ventilation is the process of changing or replacing air in any space to control the temperature or remove any combination of moisture, odors, smoke, heat, dust, airborne bacteria, or carbon dioxide, and to replenish oxygen. It plays a critical role in maintaining a healthy indoor environment by preventing the buildup of harmful pollutants and ensuring the circulation of fresh air. Different methods, such as natural ventilation through windows and mechanical ventilation systems, can be used depending on the building design and air quality needs. Ventilation often refers to the intentional delivery of outside air to the building's indoor space. It is one of the most important factors for maintaining acceptable indoor air quality in buildings. Although ventilation plays a key role in indoor air quality, it may not be sufficient on its own. [18] A clear understanding of both indoor and outdoor air quality parameters is needed to improve the performance of ventilation in terms of ... [19] In scenarios where outdoor pollution would deteriorate indoor air quality, other treatment devices such as filtration may also be necessary. [20]

Methods for ventilating a building may be divided into mechanical/forced and natural types. [21] Mechanical, or forced, ventilation is provided by an air handling unit (AHU) and used to control indoor air quality. Excess humidity, odors, and contaminants can often be controlled via dilution or replacement with outside air. However, in humid climates more energy is required to remove excess moisture from ventilation air. Kitchens and bathrooms typically have mechanical exhausts to control odors and sometimes humidity. Factors in the design of such systems include the flow rate (which is a function of the fan speed and exhaust vent size) and noise level. Direct-drive fans are available for many applications and can reduce maintenance needs.
In summer, ceiling fans and table/floor fans circulate air within a room for the purpose of reducing the perceived temperature by increasing evaporation of perspiration on the skin of the occupants. Because hot air rises, ceiling fans may be used to keep a room warmer in the winter by circulating the warm stratified air from the ceiling to the floor. Natural ventilation is the ventilation of a building with outside air without using fans or other mechanical systems. It can be via operable windows, louvers, or trickle vents when spaces are small and the architecture permits. ASHRAE defined Natural ventilation as the flow of air through open windows, doors, grilles, and other planned building envelope penetrations , and as being driven by natural and/or artificially produced pressure differentials. [ 1 ] Natural ventilation strategies also include cross ventilation , which relies on wind pressure differences on opposite sides of a building. By strategically placing openings, such as windows or vents, on opposing walls, air is channeled through the space to enhance cooling and ventilation. Cross ventilation is most effective when there are clear, unobstructed paths for airflow within the building. In more complex schemes, warm air is allowed to rise and flow out high building openings to the outside ( stack effect ), causing cool outside air to be drawn into low building openings. Natural ventilation schemes can use very little energy, but care must be taken to ensure comfort. In warm or humid climates, maintaining thermal comfort solely via natural ventilation might not be possible. Air conditioning systems are used, either as backups or supplements. Air-side economizers also use outside air to condition spaces, but do so using fans, ducts, dampers, and control systems to introduce and distribute cool outdoor air when appropriate. An important component of natural ventilation is air change rate or air changes per hour : the hourly rate of ventilation divided by the volume of the space. For example, six air changes per hour means an amount of new air, equal to the volume of the space, is added every ten minutes. For human comfort, a minimum of four air changes per hour is typical, though warehouses might have only two. Too high of an air change rate may be uncomfortable, akin to a wind tunnel which has thousands of changes per hour. The highest air change rates are for crowded spaces, bars, night clubs, commercial kitchens at around 30 to 50 air changes per hour. [ 22 ] Room pressure can be either positive or negative with respect to outside the room. Positive pressure occurs when there is more air being supplied than exhausted, and is common to reduce the infiltration of outside contaminants. [ 23 ] Natural ventilation [ 24 ] is a key factor in reducing the spread of airborne illnesses such as tuberculosis, the common cold, influenza, meningitis or COVID-19. Opening doors and windows are good ways to maximize natural ventilation, which would make the risk of airborne contagion much lower than with costly and maintenance-requiring mechanical systems. Old-fashioned clinical areas with high ceilings and large windows provide the greatest protection. Natural ventilation costs little and is maintenance free, and is particularly suited to limited-resource settings and tropical climates, where the burden of TB and institutional TB transmission is highest. 
In settings where respiratory isolation is difficult and climate permits, windows and doors should be opened to reduce the risk of airborne contagion. Natural ventilation requires little maintenance and is inexpensive. [25] However, natural ventilation is not practical in many buildings because of climate, which means those facilities need effective mechanical ventilation systems and/or ceiling-level UV or far-UV air disinfection systems. Ventilation is measured in terms of air changes per hour (ACH). As of 2023, the CDC recommends that all spaces have a minimum of 5 ACH. [26] For hospital rooms with airborne contagions, the CDC recommends a minimum of 12 ACH. [27] The challenges in facility ventilation are public unawareness, [28][29] ineffective government oversight, poor building codes that are based on comfort levels, poor system operations, poor maintenance, and lack of transparency. [30]

UVC, or ultraviolet germicidal irradiation, is a function used in modern air conditioners which reduces airborne viruses, bacteria, and fungi through the use of a built-in LED UV light that emits a gentle glow across the evaporator. As the cross-flow fan circulates the room air, any viruses are guided through the sterilization module's irradiation range, rendering them instantly inactive. [31]

An air conditioning system, or a standalone air conditioner, provides cooling and/or humidity control for all or part of a building. Air conditioned buildings often have sealed windows, because open windows would work against the system intended to maintain constant indoor air conditions. Outside fresh air is generally drawn into the system by a vent into a mixing chamber, where it mixes with the space return air. The mixed air then enters an indoor or outdoor heat exchanger section, where it is cooled before being guided to the space, creating positive air pressure. The percentage of return air made up of fresh air can usually be manipulated by adjusting the opening of this vent. Typical fresh air intake is about 10% of the total supply air. [citation needed]

Air conditioning and refrigeration are provided through the removal of heat. Heat can be removed through radiation, convection, or conduction. The heat transfer media used in refrigeration systems, such as water, air, ice, and chemicals, are referred to as refrigerants. A refrigerant is employed either in a heat pump system, in which a compressor is used to drive the thermodynamic refrigeration cycle, or in a free cooling system that uses pumps to circulate a cool refrigerant (typically water or a glycol mix). The air conditioning capacity must be sufficient for the area being cooled; an underpowered system leads to power wastage and inefficient operation.

The refrigeration cycle uses four essential elements to cool: the compressor, condenser, metering device, and evaporator. In variable climates, the system may include a reversing valve that switches from heating in winter to cooling in summer. By reversing the flow of refrigerant, the heat pump refrigeration cycle is changed from cooling to heating or vice versa. This allows a facility to be heated and cooled by a single piece of equipment, by the same means, and with the same hardware.
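The air change rate (ACH) figures quoted above translate directly into the ventilation airflow needed for a given room volume. The following is a minimal illustrative sketch in Python; the function name, room dimensions, and airflow are invented example values, while the 5 ACH and 12 ACH targets are the CDC figures cited above.

def air_changes_per_hour(supply_airflow_m3_per_h: float, room_volume_m3: float) -> float:
    """Air changes per hour = hourly ventilation airflow divided by the space volume."""
    return supply_airflow_m3_per_h / room_volume_m3

# Hypothetical 4 m x 5 m x 2.5 m room (50 m^3) ventilated at 250 m^3/h:
room_volume = 4 * 5 * 2.5                     # m^3
print(air_changes_per_hour(250, room_volume)) # 5.0 ACH, the general CDC target cited above
# A 12 ACH airborne-isolation target would need 12 * 50 = 600 m^3/h for the same room.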
Free cooling systems can have very high efficiencies, and are sometimes combined with seasonal thermal energy storage so that the cold of winter can be used for summer air conditioning. Common storage mediums are deep aquifers or a natural underground rock mass accessed via a cluster of small-diameter, heat-exchanger-equipped boreholes. Some systems with small storages are hybrids, using free cooling early in the cooling season, and later employing a heat pump to chill the circulation coming from the storage. The heat pump is added-in because the storage acts as a heat sink when the system is in cooling (as opposed to charging) mode, causing the temperature to gradually increase during the cooling season. Some systems include an "economizer mode", which is sometimes called a "free-cooling mode". When economizing, the control system will open (fully or partially) the outside air damper and close (fully or partially) the return air damper. This will cause fresh, outside air to be supplied to the system. When the outside air is cooler than the demanded cool air, this will allow the demand to be met without using the mechanical supply of cooling (typically chilled water or a direct expansion "DX" unit), thus saving energy. The control system can compare the temperature of the outside air vs. return air, or it can compare the enthalpy of the air, as is frequently done in climates where humidity is more of an issue. In both cases, the outside air must be less energetic than the return air for the system to enter the economizer mode. Central, "all-air" air-conditioning systems (or package systems) with a combined outdoor condenser/evaporator unit are often installed in North American residences, offices, and public buildings, but are difficult to retrofit (install in a building that was not designed to receive it) because of the bulky air ducts required. [ 32 ] (Minisplit ductless systems are used in these situations.) Outside of North America, packaged systems are only used in limited applications involving large indoor space such as stadiums, theatres or exhibition halls. An alternative to packaged systems is the use of separate indoor and outdoor coils in split systems . Split systems are preferred and widely used worldwide except in North America. In North America, split systems are most often seen in residential applications, but they are gaining popularity in small commercial buildings. Split systems are used where ductwork is not feasible or where the space conditioning efficiency is of prime concern. [ 33 ] The benefits of ductless air conditioning systems include easy installation, no ductwork, greater zonal control, flexibility of control, and quiet operation. [ 34 ] In space conditioning, the duct losses can account for 30% of energy consumption. [ 35 ] The use of minisplits can result in energy savings in space conditioning as there are no losses associated with ducting. With the split system, the evaporator coil is connected to a remote condenser unit using refrigerant piping between an indoor and outdoor unit instead of ducting air directly from the outdoor unit. Indoor units with directional vents mount onto walls, suspended from ceilings, or fit into the ceiling. Other indoor units mount inside the ceiling cavity so that short lengths of duct handle air from the indoor unit to vents or diffusers around the rooms. Split systems are more efficient and the footprint is typically smaller than the package systems. 
On the other hand, package systems tend to have a slightly lower indoor noise level compared to split systems, since the fan motor is located outside.

Dehumidification (air drying) in an air conditioning system is provided by the evaporator. Since the evaporator operates at a temperature below the dew point, moisture in the air condenses on the evaporator coil tubes. This moisture is collected at the bottom of the evaporator in a pan and removed by piping to a central drain or onto the ground outside. A dehumidifier is an air-conditioner-like device that controls the humidity of a room or building. It is often employed in basements, which have a higher relative humidity because of their lower temperature (and propensity for damp floors and walls). In food retailing establishments, large open chiller cabinets are highly effective at dehumidifying the internal air. Conversely, a humidifier increases the humidity of a building. The HVAC components that dehumidify the ventilation air deserve careful attention because outdoor air constitutes most of the annual humidity load for nearly all buildings. [36]

All modern air conditioning systems, even small window package units, are equipped with internal air filters. [citation needed] These are generally of a lightweight gauze-like material, and must be replaced or washed as conditions warrant. For example, a building in a high-dust environment, or a home with furry pets, will need to have the filters changed more often than buildings without these dirt loads. Failure to replace these filters as needed will contribute to a lower heat exchange rate, resulting in wasted energy, shortened equipment life, and higher energy bills; low air flow can result in iced-over evaporator coils, which can completely stop airflow. Additionally, very dirty or plugged filters can cause overheating during a heating cycle, which can result in damage to the system or even fire.

Because an air conditioner moves heat between the indoor coil and the outdoor coil, both must be kept clean. This means that, in addition to replacing the air filter at the evaporator coil, it is also necessary to regularly clean the condenser coil. Failure to keep the condenser clean will eventually result in harm to the compressor, because the condenser coil is responsible for discharging both the indoor heat (as picked up by the evaporator) and the heat generated by the electric motor driving the compressor.

HVAC is significantly responsible for promoting the energy efficiency of buildings, as the building sector consumes the largest percentage of global energy. [37] Since the 1980s, manufacturers of HVAC equipment have been making an effort to make the systems they manufacture more efficient. This was originally driven by rising energy costs, and has more recently been driven by increased awareness of environmental issues. Additionally, improvements to HVAC system efficiency can also help increase occupant health and productivity. [38] In the US, the EPA has imposed tighter restrictions over the years. There are several methods for making HVAC systems more efficient.

In the past, water heating was more efficient for heating buildings and was the standard in the United States. Today, forced air systems can double for air conditioning and are more popular. Forced air systems, which are now widely used in churches, schools, and high-end residences, offer several benefits; a drawback is the installation cost, which can be slightly higher than that of traditional HVAC systems.
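Returning to the economizer ("free-cooling") mode described earlier, the changeover test (outside air must be less energetic than return air) can be sketched as a simple comparison. This is a minimal illustrative sketch, not an actual controller: the function name and example values are invented, and real systems also manage damper positions, minimum outdoor-air fractions, and safety interlocks.

def economizer_enabled(outside_temp_c, return_temp_c,
                       outside_enthalpy=None, return_enthalpy=None):
    """Enable free cooling only when outside air is less energetic than return air.
    In humid climates the comparison is made on enthalpy rather than dry-bulb temperature."""
    if outside_enthalpy is not None and return_enthalpy is not None:
        return outside_enthalpy < return_enthalpy
    return outside_temp_c < return_temp_c

# When enabled, the control system opens the outside-air damper and closes the
# return-air damper (fully or partially), meeting the cooling demand without
# running chilled water or a DX unit.
if economizer_enabled(outside_temp_c=16.0, return_temp_c=24.0):
    print("Economizer mode: use outside air for cooling")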
Energy efficiency can be improved even more in central heating systems by introducing zoned heating. This allows a more granular application of heat, similar to non-central heating systems. Zones are controlled by multiple thermostats. In water heating systems the thermostats control zone valves , and in forced air systems they control zone dampers inside the vents which selectively block the flow of air. In this case, the control system is very critical to maintaining a proper temperature. Forecasting is another method of controlling building heating by calculating the demand for heating energy that should be supplied to the building in each time unit. Ground source, or geothermal, heat pumps are similar to ordinary heat pumps, but instead of transferring heat to or from outside air, they rely on the stable, even temperature of the earth to provide heating and air conditioning. Many regions experience seasonal temperature extremes, which would require large-capacity heating and cooling equipment to heat or cool buildings. For example, a conventional heat pump system used to heat a building in Montana's −57 °C (−70 °F) low temperature or cool a building in the highest temperature ever recorded in the US—57 °C (134 °F) in Death Valley , California, in 1913 would require a large amount of energy due to the extreme difference between inside and outside air temperatures. A metre below the earth's surface, however, the ground remains at a relatively constant temperature. Utilizing this large source of relatively moderate temperature earth, a heating or cooling system's capacity can often be significantly reduced. Although ground temperatures vary according to latitude, at 1.8 metres (6 ft) underground, temperatures generally only range from 7 to 24 °C (45 to 75 °F). Photovoltaic solar panels offer a new way to potentially decrease the operating cost of air conditioning. Traditional air conditioners run using alternating current, and hence, any direct-current solar power needs to be inverted to be compatible with these units. New variable-speed DC-motor units allow solar power to more easily run them since this conversion is unnecessary, and since the motors are tolerant of voltage fluctuations associated with variance in supplied solar power (e.g., due to cloud cover). Energy recovery systems sometimes utilize heat recovery ventilation or energy recovery ventilation systems that employ heat exchangers or enthalpy wheels to recover sensible or latent heat from exhausted air. This is done by transfer of energy from the stale air inside the home to the incoming fresh air from outside. The performance of vapor compression refrigeration cycles is limited by thermodynamics . [ 39 ] These air conditioning and heat pump devices move heat rather than convert it from one form to another, so thermal efficiencies do not appropriately describe the performance of these devices. The Coefficient of performance (COP) measures performance, but this dimensionless measure has not been adopted. Instead, the Energy Efficiency Ratio ( EER ) has traditionally been used to characterize the performance of many HVAC systems. EER is the Energy Efficiency Ratio based on a 35 °C (95 °F) outdoor temperature. To more accurately describe the performance of air conditioning equipment over a typical cooling season a modified version of the EER, the Seasonal Energy Efficiency Ratio ( SEER ), or in Europe the ESEER , is used. 
SEER ratings are based on seasonal temperature averages instead of a constant 35 °C (95 °F) outdoor temperature. The current industry minimum SEER rating is 14. Engineers have pointed out some areas where the efficiency of the existing hardware could be improved. For example, the fan blades used to move the air are usually stamped from sheet metal, an economical method of manufacture, but as a result they are not aerodynamically efficient. A well-designed blade could reduce the electrical power required to move the air by a third. [40]

Demand-controlled kitchen ventilation (DCKV) is a building controls approach to controlling the volume of kitchen exhaust and supply air in response to the actual cooking loads in a commercial kitchen. Traditional commercial kitchen ventilation systems operate at 100% fan speed independent of the volume of cooking activity, and DCKV technology changes that to provide significant fan energy and conditioned air savings. By deploying smart sensing technology, both the exhaust and supply fans can be controlled to capitalize on the affinity laws for motor energy savings, reduce makeup air heating and cooling energy, increase safety, and reduce ambient kitchen noise levels. [41]

Air cleaning and filtration removes particles, contaminants, vapors and gases from the air. The filtered and cleaned air is then used in heating, ventilation, and air conditioning. Air cleaning and filtration should be taken into account when protecting building environments. [42] Contaminants, if present, can be redistributed by HVAC systems if they are not removed or filtered properly. Clean air delivery rate (CADR) is the amount of clean air an air cleaner provides to a room or space. When determining CADR, the amount of airflow in a space is taken into account. For example, an air cleaner with a flow rate of 30 cubic metres (1,000 cu ft) per minute and an efficiency of 50% has a CADR of 15 cubic metres (500 cu ft) per minute. Along with CADR, filtration performance is very important for indoor air quality; it depends on the size of the particle or fiber, the filter packing density and depth, and the airflow rate. [42]

The HVAC industry is a worldwide enterprise, with roles including operation and maintenance, system design and construction, equipment manufacturing and sales, and education and research. The HVAC industry was historically regulated by the manufacturers of HVAC equipment, but regulating and standards organizations such as HARDI (Heating, Air-conditioning and Refrigeration Distributors International), ASHRAE, SMACNA, ACCA (Air Conditioning Contractors of America), the Uniform Mechanical Code, the International Mechanical Code, and AMCA have been established to support the industry and encourage high standards and achievement. (UL as an omnibus agency is not specific to the HVAC industry.)

The starting point for both cooling and heating load estimates is the exterior climate and the specified interior conditions. However, before taking up the heat load calculation, it is necessary to find the fresh air requirements for each area in detail, as pressurization is an important consideration. ISO 16813:2006 is one of the ISO building environment standards. [43] It establishes the general principles of building environment design.
It takes into account the need to provide a healthy indoor environment for the occupants as well as the need to protect the environment for future generations and promote collaboration among the various parties involved in building environmental design for sustainability. ISO 16813 is applicable to new construction and the retrofit of existing buildings. [44] The building environmental design standard sets out several aims. [44]

In the United States, federal licensure is generally handled through EPA certification (for installation and service of HVAC devices). Many U.S. states have licensing for boiler operation, and some U.S. cities may have additional labor laws that apply to HVAC professionals.

Many HVAC engineers are members of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). ASHRAE regularly organizes two annual technical committees and publishes recognized standards for HVAC design, which are updated every four years. [55] Another popular society is AHRI, which provides regular information on new refrigeration technology and publishes relevant standards and codes. Codes such as the UMC and IMC do include much detail on installation requirements, however. Other useful reference materials include items from SMACNA, ACGIH, and technical trade journals.

American design standards are legislated in the Uniform Mechanical Code or International Mechanical Code. In certain states, counties, or cities, either of these codes may be adopted and amended via various legislative processes. These codes are updated and published by the International Association of Plumbing and Mechanical Officials (IAPMO) or the International Code Council (ICC), respectively, on a three-year code development cycle. Typically, local building permit departments are charged with enforcement of these standards on private and certain public properties.

An HVAC technician is a tradesman who specializes in heating, ventilation, air conditioning, and refrigeration. HVAC technicians in the US can receive training through formal training institutions, where most earn associate degrees. Training for HVAC technicians includes classroom lectures and hands-on tasks, and can be followed by an apprenticeship wherein the recent graduate works alongside a professional HVAC technician for a temporary period. [56] HVAC technicians who have been trained can also be certified in areas such as air conditioning, heat pumps, gas heating, and commercial refrigeration.

The Chartered Institution of Building Services Engineers (CIBSE) is a body that covers the essential services (systems architecture) that allow buildings to operate. It includes the electrotechnical, heating, ventilating, air conditioning, refrigeration and plumbing industries. To train as a building services engineer, the academic requirements are GCSEs (A-C) / Standard Grades (1-3) in Maths and Science, which are important in measurements, planning and theory. Employers will often want a degree in a branch of engineering, such as building environment engineering, electrical engineering or mechanical engineering. To become a full member of CIBSE, and so also to be registered by the Engineering Council UK as a chartered engineer, engineers must also attain an Honours Degree and a master's degree in a relevant engineering subject. [citation needed] CIBSE publishes several guides to HVAC design relevant to the UK market, and also the Republic of Ireland, Australia, New Zealand and Hong Kong.
These guides include various recommended design criteria and standards, some of which are cited within the UK building regulations, and therefore form a legislative requirement for major building services works.

Within the construction sector, it is the job of the building services engineer to design and oversee the installation and maintenance of the essential services such as gas, electricity, water, heating and lighting, as well as many others. These all help to make buildings comfortable and healthy places to live and work in. Building services is part of a sector that has over 51,000 businesses and represents 2–3% of GDP.

The Air Conditioning and Mechanical Contractors Association of Australia (AMCA), the Australian Institute of Refrigeration, Air Conditioning and Heating (AIRAH), the Australian Refrigeration Mechanical Association and CIBSE are the responsible bodies in Australia.

Asian architectural temperature-control methods have different priorities than European methods. For example, Asian heating traditionally focuses on maintaining the temperatures of objects such as the floor or furnishings such as kotatsu tables and on directly warming people, as opposed to the modern Western focus on designing air systems.

The Philippine Society of Ventilating, Air Conditioning and Refrigerating Engineers (PSVARE), along with the Philippine Society of Mechanical Engineers (PSME), governs the codes and standards for HVAC / MVAC (MVAC means "mechanical ventilation and air conditioning") in the Philippines.

The Indian Society of Heating, Refrigerating and Air Conditioning Engineers (ISHRAE) was established to promote the HVAC industry in India. ISHRAE is an associate of ASHRAE. ISHRAE was founded at New Delhi [57] in 1981, and a chapter was started in Bangalore in 1989. Between 1989 and 1993, ISHRAE chapters were formed in all major cities in India. [citation needed]
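Returning to the clean air delivery rate (CADR) discussed earlier, the arithmetic is simply the cleaner's airflow multiplied by its single-pass removal efficiency. The sketch below reuses the example figures from the text; the function name is invented for illustration.

def clean_air_delivery_rate(airflow_m3_per_min: float, efficiency: float) -> float:
    """CADR is the air cleaner's airflow scaled by the fraction of particles it removes."""
    return airflow_m3_per_min * efficiency

# Example from the text: 30 m^3/min airflow at 50% efficiency gives a CADR of 15 m^3/min.
print(clean_air_delivery_rate(30.0, 0.5))  # 15.0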
https://en.wikipedia.org/wiki/Heating,_ventilation,_and_air_conditioning
Heating oil is any petroleum product or other oil used for heating; it is a fuel oil . Most commonly, it refers to low viscosity grades of fuel oil used for furnaces or boilers for home heating and in other buildings. Home heating oil is often abbreviated as HHO . [ 1 ] Most heating oil products are chemically very similar to diesel fuel used as motor fuel ; motor fuel is typically subject to higher fuel taxes . Many countries add fuel dyes to heating oil, allowing law enforcement to check if a driver is evading fuel taxes. Since 2002, Solvent Yellow 124 has been added as a "Euromarker" in the European Union ; untaxed diesel is known as "red diesel" in the United Kingdom. Heating oil is commonly delivered by tank truck to residential, commercial, and municipal buildings and stored in above-ground storage tanks ("ASTs") located in the basements, garages, or outside adjacent to the building. It is sometimes stored in underground storage tanks (or "USTs") but less often than ASTs. ASTs are used for smaller installations due to the lower cost factor. Heating oil is less commonly used as an industrial fuel or for power generation. Leaks from tanks and piping are an environmental concern. In the United States , various federal and state regulations are in place regarding the proper transportation, storage and burning of heating oil, which is classified as a hazardous material (HazMat) by federal regulators. Heating oil consists of a mixture of petroleum -derived hydrocarbons in the 14- to 20-carbon atom range that condense between 250 and 350 °C (482 and 662 °F) during oil refining . Heating oil condenses at a lower temperature than petroleum jelly , bitumen , candle wax , and lubricating oil , but at a higher temperature than kerosene , which condenses between 160–250 °C (320–482 °F). The heavy (C20+) hydrocarbons condense between 340–400 °C (644–752 °F). Heating oil produces 137,500 British thermal units per US gallon (38.3 MJ/L) to 138,700 British thermal units per US gallon (38.7 MJ/L) and weighs 8.2 pounds per US gallon (0.95 kg/L). [ 2 ] Number 2 fuel oil has a flash point of 52 °C (126 °F). Historically, the legal difference between diesel and heating oil in the United States has been sulfur allowance. Diesel for machinery and equipment must be below 15 ppm sulfur content while heating oil needed only stay below 500 ppm sulfur. However, most heating oil in the United States is now "ultra-low sulfur heating oil" (ULSHO) and meets the same 15 ppm standard. Heating oil is known in the United States as No. 2 heating oil . In the U.S., it must conform to ASTM standard D396. Diesel and kerosene , while often confused as being similar or identical, must each conform to their respective ASTM standards. [ 3 ] Heating oil is widely used in both the United States and Canada, with U.S. residential use most common in the northeastern states of New York and Pennsylvania and in New England , collectively accounting for 85% of total U.S. residential heating oil use. [ 4 ] In the United States, biodiesel blends of B5 (5% biodiesel) and B20 (20% biodiesel) are available in most markets as a lower CO 2 and cleaner burning heating fuel. The heating oil futures contract trades in units of 1,000 barrels (160 m 3 ) with a minimum fluctuation of $0.0001 per gallon and (for the USA) is based on delivery in New York Harbor . [ 5 ] The Department of Energy tracks the prices homeowners pay for home heating fuel (oil and propane). 
There are also a number of websites that allow homeowners to compare the price per gallon they are paying with the Department of Energy data as well as with other consumers in their area. Likewise, the US Energy Information Administration collects heating oil price statistics and maintains historical price data for all major US markets during each heating season. The US Department of Energy also supports research and development for heating oil technology through the National Oilheat Research Alliance. Additional information about biodiesel heating oil use can also be found at the National Biodiesel Board's site. Heating oil is mostly used in the northeastern and northwestern urban United States, and it has a strong market presence in rural areas as well. Most of the northeast's heating oil comes from Irving Oil's refinery in Saint John, New Brunswick, [6] the largest oil refinery in Canada.

Heating oil is the most common fuel for home heating in Northern Ireland due to the late development of a natural gas network. [7] Common suppliers of heating oil in Ireland are Maxol, Patterson Oil and Emo Oil.

In England, Scotland and Wales, there are two types of heating oil: commercial heating oil – referring to gas oil, i.e. red diesel – and domestic heating oil – meaning kerosene, specifically BS 2869 Class C2 kerosene. [8] Heating oil is used for home heating in England, Scotland and Wales, typically in premises away from mains gas. There are around 1.5 million people in Great Britain using oil for home heating. Great Britain has many suppliers of heating oil, ranging from large companies such as Crown Oil to local and independent heating oil suppliers such as J. R. Rix & Sons. Many villages may use buying groups to order heating oil at the same time, thereby accessing lower costs. Many heating oil suppliers will choose to list their prices on independent heating oil price comparison websites. These sites draw in home heating oil users and compare various local supplier prices in order to provide the lowest price available. In the UK it is possible to search for prices by town name, county and postcode prefix. The Department of Energy and Climate Change (DECC) has referred the UK oil market to the Office of Fair Trading (OFT) for review. The OFT has resolved to look at the structure of the market, with a view to the fairness for consumers and alternative energy options for off-grid consumers such as heat pumps.

Heating oil storage in the United Kingdom is governed by regulations that ensure the safe installation and usage of oil storage tanks. [9] It is a criminal offence to keep a tank that violates these regulations, and the owners are liable for fines, penalties and any costs incurred as a result of cleaning up oil spills. The regulations are designed to minimise the risk of damaging pollution and reduce the likelihood of oil being stored in hazardous environments, such as a building without proper fire safety measures. The regulations that govern oil storage tanks are The Control of Pollution (Oil Storage) England Regulations (2001), [10] The Pollution Prevention Guidelines (PPG 2) [11] and The Building Regulations (Approved Document J). The Oil Storage Regulations (2001) apply to oil tanks used for commercial and industrial purposes, or domestic tanks over 3500 litres in capacity. They state that the storage tank should be of "sufficient strength and structural integrity to ensure that it is unlikely to burst or leak in its ordinary use".
[ 12 ] The tank, along with any filters, gauges, valves or ancillary equipment, must be contained within a secondary unit or bund that has at least 110% of the capacity of the inner tank. If the tank has a fill pipe that is not contained within the secondary unit, a drip tray must be installed. They also require the use of an automatic overfill prevention if it is not "reasonably practical" to monitor the oil levels within the tank. The Building Regulations Approved Document J covers the legal requirements for the installation of the tanks within the premises of a building. The regulations state that any new tank larger than 2,500 litres must be stored within a bunded tank or secondary containment that is a minimum of 110% of the tank's capacity. If a tank is single skinned and smaller than 2,500 litres, it must be given an individual site pollution risk assessment. This highlights any pollution or hazard risks such as the possibility of the oil escaping and reaching a river or stream, or the risk of a collision if the storage tank is located near a road. They further state that all tanks must be installed on a surface strong enough to support a full storage tank. The surface must be flat, even and fire-resistant, and should extend at least 300mm beyond the boundaries of the tank. A paving stone surface must be at least 42mm thick, and a concrete surface must be at least 100mm thick. The document also states that the tank should be situated at least 1800mm away from any potential hazards, such as doors, windows, appliance flue terminals, non-fire rated buildings such as garden fences, and at least 760mm from non-fire rated smaller structures such as wooden fences. [ 13 ] A safe, secure tank that complies with all regulations will look similar to the diagram above. It details the different parts of the tank that need to be checked in order to ensure the tank is legal, including where the ancillary equipment should be located and the presence of an automatic overfill prevention.
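The 110% secondary-containment rule and the clearance distances described above can be expressed as a small checking sketch. This is an illustrative sketch only, not a compliance tool: the function names are invented, and only the figures quoted in the text (110% bund capacity, 1800 mm to fire hazards, 760 mm to non-fire-rated structures) are used.

def bund_capacity_ok(tank_capacity_litres: float, bund_capacity_litres: float) -> bool:
    """The bund (secondary containment) must hold at least 110% of the inner tank's capacity."""
    return bund_capacity_litres >= 1.10 * tank_capacity_litres

def clearances_ok(distance_to_fire_hazard_mm: float, distance_to_non_fire_rated_mm: float) -> bool:
    """At least 1800 mm from hazards such as doors, windows and flue terminals,
    and at least 760 mm from non-fire-rated structures such as wooden fences."""
    return distance_to_fire_hazard_mm >= 1800 and distance_to_non_fire_rated_mm >= 760

# Example: a 2,500-litre tank in a 3,000-litre bund, 2 m from a window, 1 m from a fence.
print(bund_capacity_ok(2500, 3000))   # True (3000 >= 2750)
print(clearances_ok(2000, 1000))      # True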
https://en.wikipedia.org/wiki/Heating_oil
https://en.wikipedia.org/wiki/Heats_of_fusion_of_the_elements_(data_page)
Zhang Y; Evans JRG & Zhang S (2011). "Corrected Values for Boiling Points and Enthalpies of Vaporization of Elements in Handbooks". J. Chem. Eng. Data. 56 (2): 328–337. doi:10.1021/je1011086.
https://en.wikipedia.org/wiki/Heats_of_vaporization_of_the_elements_(data_page)
Heatwork is the combined effect of temperature and time, and it is important to several industries. While the concept of heatwork is taught in materials science courses, it is not a defined measurement or scientific concept. Pyrometric devices can be used to gauge heatwork, as they deform or contract due to heatwork to produce temperature equivalents. Within tolerances, firing can be undertaken at lower temperatures for a longer period to achieve comparable results. When the amount of heatwork of two firings is the same, the pieces may look identical, but there may be differences that are not visible, such as mechanical strength and microstructure.
https://en.wikipedia.org/wiki/Heatwork
A heavy Rydberg system consists of a weakly bound positive and negative ion orbiting their common centre of mass. Such systems share many properties with the conventional Rydberg atom and consequently are sometimes referred to as heavy Rydberg atoms. While such a system is a type of ionically bound molecule, it should not be confused with a molecular Rydberg state, which is simply a molecule with one or more highly excited electrons. The peculiar properties of the Rydberg atom come from the large charge separation and the resulting hydrogenic potential. The extremely large separation between the two components of a heavy Rydberg system results in an almost perfect 1/r hydrogenic potential seen by each ion. The positive ion can be viewed as analogous to the nucleus of a hydrogen atom, with the negative ion playing the role of the electron. [1]

The most commonly studied system to date is the H+/H− system, consisting of a proton bound with an H− ion. The H+/H− system was first observed in 2000 by a group at the University of Waterloo in Canada. The formation of the H− ion can be understood classically; as the single electron in a hydrogen atom cannot fully shield the positively charged nucleus, another electron brought into close proximity will feel an attractive force. While this classical description gives a feel for the interactions involved, it is an oversimplification; many other atoms have a greater electron affinity than hydrogen. In general, the process of forming a negative ion is driven by the filling of atomic electron shells to form a lower energy configuration.

Only a small number of molecules have been used to produce heavy Rydberg systems, although in principle any atom with a positive electron affinity can bind with a positive ion. Species used include O2, H2S and HF. Fluorine and oxygen are particularly favoured due to their high electron affinity, high ionisation energy and consequently high electronegativity. The difficulty in the production of heavy Rydberg systems arises in finding an energetic pathway by which a molecule can be excited with just the right energy to form an ion pair, without sufficient internal energy to cause autodissociation (a process analogous to autoionization in atoms) or rapid dissociation due to collisions or local fields. Currently, production of heavy Rydberg systems relies on complex vacuum ultra-violet (so called because it is strongly absorbed in air and requires the entire system to be enclosed within a vacuum chamber) or multi-photon transitions (relying on absorption of multiple photons almost simultaneously), both of which are rather inefficient and result in systems with high internal energy.

The bond length in a heavy Rydberg system is 10,000 times larger than in a typical diatomic molecule. As well as producing the characteristic hydrogen-like behaviour, this also makes them extremely sensitive to perturbation by external electric and magnetic fields. Heavy Rydberg systems have a relatively large reduced mass, given by μ = m₁m₂ / (m₁ + m₂), where m₁ and m₂ are the masses of the two ions. This leads to a very slow time evolution, which makes them easy to manipulate both spatially and energetically, while their low binding energy makes them relatively simple to detect through field dissociation and detection of the resulting ions, in a process known as threshold ion-pair production spectroscopy.
Kepler's third law states that the square of the period of an orbit is proportional to the cube of the semi-major axis; applied to the Coulomb attraction between the two singly charged ions, this gives

τ² = 4π² μ a³ / (k e²),

where τ is the orbital period, μ is the reduced mass, a is the semi-major axis, e is the elementary charge and k = 1/(4πε₀). Classically we can say that a system with a large reduced mass has a long orbital period. Quantum mechanically, a large reduced mass in a system leads to narrow spacing of the energy levels, and the rate of time evolution of the wavefunction depends on this energy spacing. This slow time evolution makes heavy Rydberg systems ideal for experimentally probing the dynamics of quantum systems.
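As a quick numerical illustration of the formula above, the orbital period can be evaluated for an ion pair. This is a rough sketch under stated assumptions: the reduced mass is taken as roughly half a proton mass for the H+/H− pair, and the semi-major axis of 1 micrometre is an assumed example value (about 10,000 times a typical diatomic bond length, as noted above); it does not reproduce any measurement from the text.

import math

# Physical constants (SI units)
K = 8.9875517923e9          # Coulomb constant, 1/(4*pi*eps0), N m^2 C^-2
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_PROTON = 1.67262192e-27   # proton mass, kg

def orbital_period(reduced_mass_kg: float, semi_major_axis_m: float) -> float:
    """Kepler's third law for a Coulomb-bound pair of charges +e and -e:
    tau^2 = 4 pi^2 mu a^3 / (k e^2)."""
    return 2.0 * math.pi * math.sqrt(reduced_mass_kg * semi_major_axis_m ** 3
                                     / (K * E_CHARGE ** 2))

# Illustrative H+/H- pair: reduced mass ~ m_p/2 (H- is roughly one proton mass),
# with an assumed separation of 1 micrometre.
mu = M_PROTON / 2.0
print(orbital_period(mu, 1e-6))  # ~1.2e-8 s (about 12 ns) for these assumed values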
https://en.wikipedia.org/wiki/Heavy_Rydberg_system
Heavy equipment, heavy machinery, earthmovers, construction vehicles, or construction equipment refers to heavy-duty vehicles specially designed to execute construction tasks, most frequently involving earthwork operations or other large construction tasks. Heavy equipment usually comprises five equipment systems: the implement, traction, structure, power train, and control/information. Heavy equipment has been used since at least the 1st century BC, when the ancient Roman engineer Vitruvius described a crane powered by human or animal labor in De architectura. Heavy equipment functions through the mechanical advantage of a simple machine that multiplies the ratio between input force applied and force exerted, easing and speeding tasks which could otherwise take hundreds of people and many weeks' labor. Some such equipment uses hydraulic drives as a primary source of motion.

The word plant, in this context, has come to mean any type of industrial equipment, including mobile equipment (e.g. in the same sense as powerplant). However, plant originally meant "structure" or "establishment", usually in the sense of factory or warehouse premises; as such, it was used in contradistinction to movable machinery, often in the phrase "plant and equipment".

The use of heavy equipment has a long history; the ancient Roman engineer Vitruvius (1st century BCE) gave descriptions of heavy equipment and cranes in ancient Rome in his treatise De architectura. The pile driver was invented around 1500. The first tunnelling shield was patented by Marc Isambard Brunel in 1818. Until the 19th century and into the early 20th century, heavy machines were drawn under human or animal power. With the advent of portable steam-powered engines, the drawn machine precursors were reconfigured with the new engines, such as the combine harvester. The design of a core tractor evolved around the new steam power source into a new machine core, the traction engine, which can be configured as the steam tractor and the steamroller. During the 20th century, internal-combustion engines became the major power source of heavy equipment. Kerosene and ethanol engines were used, but today diesel engines are dominant. Mechanical transmission was in many cases replaced by hydraulic machinery. The early 20th century also saw new electric-powered machines such as the forklift. Caterpillar Inc. is a present-day brand dating from this era, having started out as the Holt Manufacturing Company. The first mass-produced heavy machine was the Fordson tractor in 1917. The first commercial continuous track vehicle was the 1901 Lombard Steam Log Hauler. The use of tracks became popular for tanks during World War I, and later for civilian machinery like the bulldozer. The largest engineering vehicles and mobile land machines are bucket-wheel excavators, built since the 1920s.

Until almost the twentieth century, one simple tool constituted the primary earthmoving machine: the hand shovel, moved with animal and human power and with sleds, barges, and wagons. This tool was the principal method by which material was either sidecast or elevated to load a conveyance, usually a wheelbarrow, or a cart or wagon drawn by a draft animal. In antiquity, an equivalent of the hand shovel or hoe and head basket, and masses of men, were used to move earth to build civil works.
Builders have long used the inclined plane, levers, and pulleys to place solid building materials, but these labor-saving devices did not lend themselves to earthmoving, which required digging, raising, moving, and placing loose materials. The two elements required for mechanized earthmoving, then as now, were an independent power source and off-road mobility, neither of which could be provided by the technology of that time. [1]

Container cranes were used from the 1950s onwards, and made containerization possible. Such is the importance of this machinery nowadays that some transport companies have developed specific equipment to transport heavy construction equipment to and from sites. Most of the major equipment manufacturers, such as Caterpillar, [2] Volvo, [3] Liebherr, [4] and Bobcat, have released or have been developing fully or partially electric-powered heavy equipment. Commercially available models and R&D models were announced in 2019 and 2020. [5] Robotics and autonomy have been a growing focus for heavy equipment manufacturers, with manufacturers beginning research and technology acquisition. [6] A number of companies are currently developing (Caterpillar and Bobcat) or have launched (Built Robotics) commercial solutions to the market.

These subdivisions, in this order, are the standard heavy equipment categorization: tractor, grader, excavator, backhoe, timber, pipelayer, scraper, mining, articulated, compactor, loader, track loader, skid-steer loader, material handler, paving, underground, hydromatic tool, hydraulic machinery, and highway.

Heavy equipment requires specialized tires for various construction applications. While many types of equipment have continuous tracks applicable to more severe service requirements, tires are used where greater speed or mobility is required. An understanding of what the equipment will be used for during the life of the tires is required for proper selection. Tire selection can have a significant impact on production and unit cost. There are three types of off-the-road tires: transport, for earthmoving machines; work, for slow-moving earthmoving machines; and load-and-carry, for transporting as well as digging. Off-highway tires have six categories of service: C (compactor), E (earthmover), G (grader), L (loader), LS (log-skidder) and ML (mining and logging). Within these service categories are various tread types designed for use on hard-packed surfaces, soft surfaces and rock. Tires are a large expense on any construction project, so careful consideration should be given to preventing excessive wear or damage.

A heavy equipment operator drives and operates heavy equipment used in engineering and construction projects. [7][8] Typically only skilled workers may operate heavy equipment, and there is specialized training for learning to use heavy equipment. Much of the published material about heavy equipment operators focuses on improving safety for such workers. The field of occupational medicine researches and makes recommendations about safety for these and other workers in safety-sensitive positions.

Due to the small profit margins on construction projects, it is important to maintain accurate records concerning equipment utilization, repairs and maintenance. The two main categories of equipment costs are ownership cost and operating cost. [9] To classify as an ownership cost, an expense must have been incurred regardless of whether the equipment is used or not. Depreciation is one such cost; it can be calculated in several ways, the simplest of which is the straight-line method.
The annual depreciation is constant, reducing the equipment value annually. The following are simple equations paraphrased from the Peurifoy & Schexnayder text, with m = some year in the future, N = equipment useful life (years), and D n = annual depreciation amount: under the straight-line method, D n = (purchase price − salvage value) / N, and the book value (BV) in year m is BV m = purchase price − m × D n . Example: N = 5, purchase price = $350,000, m = 3 years from now; assuming no salvage value, D n = $350,000 / 5 = $70,000 per year and BV 3 = $350,000 − 3 × $70,000 = $140,000 (a short computational sketch of this example follows at the end of this section). For an expense to be classified as an operating cost, it must be incurred through use of the equipment. These costs are as follows: [ 10 ] The biggest distinction from a cost standpoint is whether a repair is classified as a major repair or a minor repair . A major repair can change the depreciable equipment value due to an extension in service life , while a minor repair is normal maintenance . How a firm chooses to cost major and minor repairs varies from firm to firm, depending on the costing strategies being used. Some firms will charge only major repairs to the equipment, while minor repairs are costed to a project. Another common costing strategy is to cost all repairs to the equipment, with only frequently replaced wear items excluded from the equipment cost. Many firms keep their costing structure closely guarded [ citation needed ] as it can impact the bidding strategies of their competition. In a company with multiple semi-independent divisions, the equipment department often wants to classify all repairs as "minor" and charge the work to a job, thereby improving its 'profit' from the equipment. Die-cast metal promotional scale models of heavy equipment are often produced for each vehicle to give to prospective customers. These are typically in 1:50 scale . The popular manufacturers of these models are Conrad and NZG in Germany, even for US vehicles. The ten largest heavy equipment manufacturers in 2022, along with other manufacturers, are tabulated in the source article. [ 12 ]
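To make the straight-line example above concrete, here is a minimal Python sketch (function and variable names are illustrative, not taken from the Peurifoy & Schexnayder text) that reproduces the $70,000 annual depreciation and the $140,000 year-3 book value, assuming zero salvage value:

```python
# Minimal sketch of straight-line depreciation as described above.
# Names are illustrative, not from any cited text.

def straight_line_schedule(purchase_price, useful_life_years, salvage_value=0.0):
    """Return (annual depreciation, list of book values for years 0..N)."""
    annual_depreciation = (purchase_price - salvage_value) / useful_life_years
    book_values = [purchase_price - m * annual_depreciation
                   for m in range(useful_life_years + 1)]
    return annual_depreciation, book_values

# Example from the text: N = 5 years, purchase price $350,000, m = 3,
# assuming zero salvage value.
d_n, bv = straight_line_schedule(350_000, 5)
print(f"Annual depreciation: ${d_n:,.0f}")         # $70,000
print(f"Book value after 3 years: ${bv[3]:,.0f}")  # $140,000
```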
https://en.wikipedia.org/wiki/Heavy_equipment
In materials science , heavy fermion materials are a specific type of intermetallic compound , containing elements with 4f or 5f electrons in unfilled electron bands . [ 1 ] Electrons are one type of fermion , and when they are found in such materials, they are sometimes referred to as heavy electrons . [ 2 ] Heavy fermion materials have a low-temperature specific heat whose linear term is up to 1000 times larger than the value expected from the free electron model . The properties of the heavy fermion compounds often derive from the partly filled f-orbitals of rare-earth or actinide ions, which behave like localized magnetic moments . The name "heavy fermion" comes from the fact that the fermion behaves as if it has an effective mass greater than its rest mass. In the case of electrons, below a characteristic temperature (typically 10 K), the conduction electrons in these metallic compounds behave as if they had an effective mass up to 1000 times the free particle mass. This large effective mass is also reflected in a large contribution to the resistivity from electron-electron scattering via the Kadowaki–Woods ratio . Heavy fermion behavior has been found in a broad variety of states including metallic, superconducting , insulating and magnetic states. Characteristic examples are CeCu 6 , CeAl 3 , CeCu 2 Si 2 , YbAl 3 , UBe 13 and UPt 3 . Heavy fermion behavior was discovered by K. Andres, J.E. Graebner and H.R. Ott in 1975, who observed enormous magnitudes of the linear specific heat capacity in CeAl 3 . [ 3 ] While investigations on doped superconductors led to the conclusion that the existence of localized magnetic moments and superconductivity in one material was incompatible, the opposite was shown, when in 1979 Frank Steglich et al. discovered heavy fermion superconductivity in the material CeCu 2 Si 2 . [ 4 ] In 1994, the discovery of a quantum critical point and non-Fermi liquid behavior in the phase diagram of heavy fermion compounds by H. von Löhneysen et al. led to a new rise of interest in the research of these compounds. [ 5 ] Another experimental breakthrough was the demonstration in 1998 (by the group of Gil Lonzarich ) that quantum criticality in heavy fermions can be the reason for unconventional superconductivity. [ 6 ] Heavy fermion materials play an important role in current scientific research, acting as prototypical materials for unconventional superconductivity , non-Fermi liquid behavior and quantum criticality. The actual interaction between localized magnetic moments and conduction electrons in heavy fermion compounds is still not completely understood and a topic of ongoing investigation. [ citation needed ] Heavy fermion materials belong to the group of strongly correlated electron systems . Several members of the group of heavy fermion materials become superconducting below a critical temperature. The superconductivity is unconventional , i.e., not covered by BCS theory . At high temperatures, heavy fermion compounds behave like normal metals and the electrons can be described as a Fermi gas , in which the electrons are assumed to be non-interacting fermions. In this case, the interaction between the f electrons, which present a local magnetic moment, and the conduction electrons can be neglected. The Fermi liquid theory of Lev Landau provides a good model to describe the properties of most heavy fermion materials at low temperatures. 
In this theory, the electrons are described by quasiparticles , which have the same quantum numbers and charge, but the interaction of the electrons is taken into account by introducing an effective mass , which differs from the actual mass of a free electron. In order to obtain the optical properties of heavy fermion systems, these materials have been investigated by optical spectroscopy measurements. [ 7 ] In these experiments the sample is irradiated by electromagnetic waves with tunable wavelength . Measuring the reflected or transmitted light reveals the characteristic energies of the sample. Above the characteristic coherence temperature T c o h {\displaystyle T_{\rm {coh}}} , heavy fermion materials behave like normal metals; i.e. their optical response is described by the Drude model . Compared to a good metal however, heavy fermion compounds at high temperatures have a high scattering rate because of the large density of local magnetic moments (at least one f electron per unit cell), which cause (incoherent) Kondo scattering. Due to the high scattering rate, the conductivity for dc and at low frequencies is rather low. A conductivity roll-off (Drude roll-off) occurs at the frequency that corresponds to the relaxation rate. Below T c o h {\displaystyle T_{\rm {coh}}} , the localized f electrons hybridize with the conduction electrons. This leads to the enhanced effective mass, and a hybridization gap develops. In contrast to Kondo insulators , the chemical potential of heavy fermion compounds lies within the conduction band. These changes lead to two important features in the optical response of heavy fermions. [ 1 ] The frequency-dependent conductivity of heavy-fermion materials can be expressed by σ ( ω ) = n e 2 m ∗ τ ∗ 1 + ω 2 τ ∗ 2 {\displaystyle \sigma (\omega )={\frac {ne^{2}}{m^{*}}}{\frac {\tau ^{*}}{1+\omega ^{2}\tau ^{*2}}}} , containing the effective mass m ∗ {\displaystyle m^{*}} and the renormalized relaxation rate 1 τ ∗ = m m ∗ 1 τ {\displaystyle {\frac {1}{\tau ^{*}}}={\frac {m}{m^{*}}}{\frac {1}{\tau }}} . [ 8 ] Due to the large effective mass, the renormalized relaxation time is also enhanced, leading to a narrow Drude roll-off at very low frequencies compared to normal metals. [ 8 ] [ 9 ] The lowest such Drude relaxation rate observed in heavy fermions so far, in the low GHz range , was found in UPd 2 Al 3 . [ 10 ] The gap-like feature in the optical conductivity represents directly the hybridization gap, which opens due to the interaction of localized f electrons and conduction electrons. Since the conductivity does not vanish completely, the observed gap is actually a pseudogap . [ 11 ] At even higher frequencies we can observe a local maximum in the optical conductivity due to normal interband excitations. [ 1 ] At low temperature and for normal metals, the specific heat C P {\displaystyle C_{P}} consists of the specific heat of the electrons C P , e l {\displaystyle C_{P,{\rm {el}}}} which depends linearly on temperature T {\displaystyle T} and of the specific heat of the crystal lattice vibrations ( phonons ) C P , p h {\displaystyle C_{P,{\rm {ph}}}} which depends cubically on temperature with proportionality constants β {\displaystyle \beta } and γ {\displaystyle \gamma } . In the temperature range mentioned above, the electronic contribution is the major part of the specific heat. 
In the free electron model — a simple model that neglects electron interactions — and in metals well described by it, the electronic specific heat is given by {\displaystyle C_{P,{\rm {el}}}=\gamma T={\frac {\pi ^{2}}{2}}{\frac {nk_{\rm {B}}^{2}T}{\epsilon _{\rm {F}}}}} with the Boltzmann constant {\displaystyle k_{\rm {B}}} , the electron density {\displaystyle n} and the Fermi energy {\displaystyle \epsilon _{\rm {F}}} (the highest single-particle energy of the occupied electronic states). The proportionality constant {\displaystyle \gamma } is called the Sommerfeld coefficient. For electrons with a quadratic dispersion relation (as for the free-electron gas), the Fermi energy is inversely proportional to the particle's mass m: {\displaystyle \epsilon _{\rm {F}}={\frac {\hbar ^{2}k_{\rm {F}}^{2}}{2m}}} , where {\displaystyle k_{\rm {F}}} stands for the Fermi wave number, which depends on the electron density and is the absolute value of the wave number of the highest occupied electron state. Thus, because the Sommerfeld parameter {\displaystyle \gamma } is inversely proportional to {\displaystyle \epsilon _{\rm {F}}} , {\displaystyle \gamma } is proportional to the particle's mass, and a high value of {\displaystyle \gamma } indicates that the metal behaves as a Fermi gas in which the conduction electrons have a high thermal effective mass (a numerical illustration of this enhancement is given at the end of this section). Experimental results for the specific heat of the heavy fermion compound UBe 13 show a peak at a temperature around 0.75 K that drops steeply toward zero as the temperature approaches 0 K. Due to this peak, the {\displaystyle \gamma } factor is much higher than the free-electron value in this temperature range. In contrast, above 6 K, the specific heat of this heavy fermion compound approaches the value expected from free-electron theory. The presence of local moments and delocalized conduction electrons leads to a competition between the Kondo interaction (which favors a non-magnetic ground state ) and the RKKY interaction (which generates magnetically ordered states, typically antiferromagnetic for heavy fermions). By suppressing the Néel temperature of a heavy-fermion antiferromagnet down to zero (e.g. by applying pressure or a magnetic field, or by changing the material composition), a quantum phase transition can be induced. [ 12 ] For several heavy-fermion materials it was shown that such a quantum phase transition can generate very pronounced non-Fermi liquid properties at finite temperatures. Such quantum-critical behavior is also studied in great detail in the context of unconventional superconductivity . Examples of heavy-fermion materials with well-studied quantum-critical properties are CeCu 6−x Au x , [ 13 ] CeIn 3 , [ 6 ] CePd 2 Si 2 , [ 6 ] YbRh 2 Si 2 , and CeCoIn 5 . [ 14 ] [ 15 ]
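As a rough numerical illustration of the specific-heat discussion above, the sketch below evaluates the free-electron Sommerfeld coefficient and shows how it scales with the effective mass. The electron density used is an assumed, typical metallic value, not a figure for any particular compound:

```python
import math

# Illustrative estimate of the Sommerfeld coefficient gamma for a free
# electron gas and of its enhancement when the effective mass is large.
# The electron density below is an assumed, typical metallic value.

hbar = 1.054571817e-34   # J*s
k_B  = 1.380649e-23      # J/K
m_e  = 9.1093837015e-31  # kg

def sommerfeld_gamma(n, mass):
    """gamma = pi^2 * n * k_B^2 / (2 * eps_F), per unit volume (J K^-2 m^-3)."""
    k_F = (3 * math.pi**2 * n) ** (1.0 / 3.0)   # Fermi wave number
    eps_F = hbar**2 * k_F**2 / (2 * mass)       # Fermi energy for the given mass
    return math.pi**2 * n * k_B**2 / (2 * eps_F)

n = 6e28  # electrons per m^3 (assumed)
gamma_free  = sommerfeld_gamma(n, m_e)
gamma_heavy = sommerfeld_gamma(n, 1000 * m_e)   # effective mass ~1000 m_e

print(f"free-electron gamma : {gamma_free:.1f} J K^-2 m^-3")
print(f"heavy-fermion gamma : {gamma_heavy:.1f} J K^-2 m^-3 "
      f"(x{gamma_heavy / gamma_free:.0f})")
```

Because gamma is inversely proportional to the Fermi energy, a thousandfold effective-mass enhancement translates directly into a thousandfold enhancement of the linear specific-heat term, which is the defining heavy-fermion signature described above.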
https://en.wikipedia.org/wiki/Heavy_fermion_material
Heavy fermion superconductors are a type of unconventional superconductor . The first heavy fermion superconductor, CeCu 2 Si 2 , was discovered by Frank Steglich in 1978. [ 1 ] Since then over 30 heavy fermion superconductors have been found (in materials based on Ce or U), with critical temperatures up to 2.3 K (in CeCoIn 5 ). [ 2 ] Heavy fermion materials are intermetallic compounds containing rare earth or actinide elements. The f-electrons of these atoms hybridize with the normal conduction electrons, leading to quasiparticles with an enhanced effective mass. [ citation needed ] From specific heat measurements of {\displaystyle \Delta C/C(T_{\rm {C}})} it is known that the Cooper pairs in the superconducting state are also formed by the heavy quasiparticles. [ 10 ] In contrast to normal superconductors, heavy fermion superconductivity cannot be described by BCS theory . Due to the large effective mass, [ 11 ] the Fermi velocity is reduced and becomes comparable to the inverse Debye frequency . As a result, the usual picture of fast electrons polarizing the lattice to produce an attractive interaction no longer applies. [ citation needed ] Some heavy fermion superconductors are candidate materials for the Fulde–Ferrell–Larkin–Ovchinnikov (FFLO) phase . [ 12 ] In particular, there is evidence that CeCoIn 5 close to the critical field is in an FFLO state. [ 13 ]
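To illustrate the reduced Fermi velocity mentioned above, the following sketch compares the quasiparticle velocity for a free-electron mass with that for an effective mass of roughly 1000 electron masses. The Fermi wave number used is an assumed, order-of-magnitude value, not a measured one for any compound:

```python
import math

# Rough illustration of how a large effective mass suppresses the Fermi
# velocity (v_F = hbar * k_F / m). The Fermi wave number is an assumed,
# typical metallic order of magnitude.

hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837015e-31  # kg

k_F = 1.0e10             # m^-1 (assumed)

v_F_free  = hbar * k_F / m_e            # ordinary metal
v_F_heavy = hbar * k_F / (1000 * m_e)   # effective mass ~1000 m_e

print(f"free-electron Fermi velocity : {v_F_free:.2e} m/s")   # ~1e6 m/s
print(f"heavy-quasiparticle velocity : {v_F_heavy:.2e} m/s")  # ~1e3 m/s, i.e. of
# the order of lattice (sound) velocities, which is why the standard picture
# of fast electrons polarizing a slow lattice breaks down, as noted above.
```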
https://en.wikipedia.org/wiki/Heavy_fermion_superconductor
Heavy fuel oil ( HFO ) is a category of fuel oils of a tar -like consistency. Also known as bunker fuel , or residual fuel oil , HFO is the result or remnant from the distillation and cracking process of petroleum . For this reason, HFO contains several different compounds that include aromatics , sulfur , and nitrogen , making emissions upon combustion more polluting compared to other fuel oils. [ 1 ] HFO is predominantly used as a fuel source for marine vessel propulsion using marine diesel engines due to its relatively low cost compared to cleaner fuel sources such as distillates . [ 2 ] [ 3 ] The use and carriage of HFO on-board vessels presents several environmental concerns, namely the risk of oil spill and the emission of toxic compounds and particulates including black carbon . The use of HFOs is banned as a fuel source for ships travelling in the Antarctic as part of the International Maritime Organization 's (IMO) International Code for Ships Operating in Polar Waters (Polar Code). [ 4 ] For similar reasons, an HFO ban in Arctic waters is currently being considered. [ 5 ] HFO consists of the remnants or residual of petroleum sources once the hydrocarbons of higher quality are extracted via processes such as thermal and catalytic cracking . Thus, HFO is also commonly referred to as residual fuel oil. The chemical composition of HFO is highly variable due to the fact that HFO is often mixed or blended with cleaner fuels; blending streams can include carbon numbers from C 20 to greater than C 50 . HFOs are blended to achieve certain viscosity and flow characteristics for a given use. As a result of the wide compositional spectrum, HFO is defined by processing, physical and final use characteristics. Being the final remnant of the cracking process, HFO also contains mixtures of the following compounds to various degrees: "paraffins, cycloparaffins, aromatics, olefins, and asphaltenes as well as molecules containing sulfur, oxygen, nitrogen and/or organometals". [ 1 ] HFO is characterized by a maximum density of 1010 kg/m 3 at 15°C, and a maximum viscosity of 700 mm 2 /s (cSt) at 50°C according to ISO 8217. [ 6 ] Given HFO's elevated sulfur contamination (maximum of 5% by mass), [ 6 ] the combustion reaction results in the formation of sulfur dioxide SO 2 . Since the middle of the 20th century, [ 7 ] [ 8 ] HFO has been used primarily by the shipping industry due to its low cost compared with all other fuel oils, being up to 30% less expensive, as well as the historically lax regulatory requirements for emissions of nitrogen oxides (NO x ) and sulfur dioxide (SO 2 ) by the IMO. [ 2 ] [ 3 ] For these two reasons, HFO is the single most widely used engine fuel oil on-board ships. Data available until 2007 for global consumption of HFO at the international marine sector reports total fuel oil usages of 200 million tonnes, with HFO consumption accounting for 174 million tonnes. Data available until 2011 for fuel oil sales to the international marine shipping sector reports 207.5 million tonnes total fuel oil sales with HFO accounting for 177.9 million tonnes. [ 9 ] Marine vessels can use a variety of different fuels for the purpose of propulsion, which are divided into two broad categories: residual oils or distillates. In contrast to HFOs, distillates are the petroleum products created through refining crude oil and include diesel, kerosene, naphtha and gas. 
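As a minimal illustration of the ISO 8217 figures quoted above (maximum density, viscosity and sulfur content as cited in the text), the following sketch checks a hypothetical fuel sample against those limits. The class, function and field names are illustrative only and are not part of any standard API:

```python
# Sketch of checking a fuel sample against the residual-fuel limits quoted
# above: density <= 1010 kg/m^3 at 15 degC, kinematic viscosity <= 700 cSt
# at 50 degC, sulfur <= 5% by mass. Names are illustrative, not normative.

from dataclasses import dataclass

@dataclass
class FuelSample:
    density_15c_kg_m3: float      # density at 15 degC
    viscosity_50c_cst: float      # kinematic viscosity at 50 degC
    sulfur_percent_mass: float    # sulfur content, % m/m

def within_hfo_limits(sample: FuelSample) -> bool:
    """Return True if the sample is within the limits quoted in the text."""
    return (sample.density_15c_kg_m3 <= 1010.0
            and sample.viscosity_50c_cst <= 700.0
            and sample.sulfur_percent_mass <= 5.0)

# Example with illustrative numbers for a typical residual blend.
sample = FuelSample(density_15c_kg_m3=991.0,
                    viscosity_50c_cst=380.0,
                    sulfur_percent_mass=3.5)
print(within_hfo_limits(sample))  # True
```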
Residual oils are often combined to various degrees with distillates to achieve desired properties for operational and/or environmental performance. Table 1 lists commonly used categories of marine fuel oil and mixtures; all mixtures, including low sulfur marine fuel oil, are still considered HFO. [ 3 ] The use and carriage of HFO in the Arctic is a commonplace marine industry practice. In 2015, over 200 ships entered Arctic waters carrying a total of 1.1 million tonnes of fuel; 57% of the fuel consumed during Arctic voyages was HFO. [ 10 ] In the same year, the carriage of HFO was reported at 830,000 tonnes, a significant increase from the 400,000 tonnes reported in 2012. A 2017 report by the Norwegian type approval body DNV GL (Det Norske Veritas) calculated that HFO accounted for over 75% of total fuel use by mass in the Arctic, with larger vessels being the main consumers. In light of increased traffic in the area, and given that the Arctic is considered to be a sensitive ecological area with a higher response intensity to climate change, the environmental risks posed by HFO are a concern for environmentalists and governments in the region. [ 11 ] The two main environmental concerns for HFO in the Arctic are the risk of spill or accidental discharge and the emission of black carbon as a result of HFO consumption. [ 10 ] [ 3 ] Due to its very high viscosity and elevated density, HFO released into the environment is a greater threat to flora and fauna than distillate or other residual fuels. In 2009, the Arctic Council identified the spill of oil in the Arctic as the greatest threat to the local marine environment. Being the remnant of the distillation and cracking processes, HFO is characterized by an elevated overall toxicity compared to all other fuels. Its viscosity prevents it from breaking down in the environment, a property exacerbated by cold Arctic temperatures, resulting in the formation of tar lumps and an increase in volume through emulsification. Its density and its tendency to persist and emulsify can result in HFO polluting both the water column and the seabed. [ 10 ] Several HFO-specific spills have occurred since the year 2000; they are catalogued in the source by year and ship name, with the amount released and the spill location. The combustion of HFO in ship engines results in the highest black carbon emissions of all fuels. The choice of marine fuel is the most important determinant of ship engine emission factors for black carbon. The second most important factor is the engine load, with emission factors of black carbon increasing up to six times at low engine loads. [ 13 ] Black carbon is a product of incomplete combustion and a component of soot and fine particulate matter (particles smaller than 2.5 μm). It has a short atmospheric lifetime of a few days to a week and is typically removed by precipitation events. [ 14 ] Although there has been debate concerning the radiative forcing of black carbon, combinations of ground and satellite observations suggest a global solar absorption of 0.9 W·m −2 , making it the second most important climate forcer after CO 2. 
[ 15 ] [ 16 ] Black carbon affects the climate system by: decreasing the snow/ice albedo through dark soot deposits and increasing snowmelt timing, [ 17 ] reducing the planetary albedo through absorption of solar radiation reflected by the cloud systems, earth surface and atmosphere, [ 16 ] as well as directly decreasing cloud albedo with black carbon contamination of water and ice found therein. [ 16 ] [ 14 ] The greatest increase in Arctic surface temperature per unit of black carbon emissions results from the decrease in snow/ice albedo which makes Arctic specific black carbon release more detrimental than emissions elsewhere. [ 18 ] The International Maritime Organization (IMO), a specialized arm of the United Nations , adopted into force on 1 January 2017 the International Code for Ships Operating in Polar Waters or Polar Code. The requirements of the Polar Code are mandatory under both the International Convention for the Prevention of Pollution from Ships (MARPOL) and the International Convention for the Safety of Life at Sea (SOLAS) . The two broad categories covered by the Polar Code include safety and pollution prevention related to navigation in both Arctic and Antarctic polar waters. [ 4 ] The carriage and use of HFO in the Arctic is discouraged by the Polar Code while being banned completely from the Antarctic under MARPOL Annex I regulation 43. [ 19 ] The ban of HFO use and carriage in the Antarctic precedes the adoption of the Polar Code. At its 60th session (26 March 2010), The Marine Environmental Protection Committee (MEPC) adopted Resolution 189(60) which went into effect in 2011 and prohibits fuels of the following characteristics: [ 20 ] IMO's Marine Environmental Protection Committee (MEPC) tasked the Pollution Prevention Response Sub-Committee (PPR) to enact a ban on the use and carriage of heavy fuel in Arctic waters at its 72nd and 73rd sessions. This task is also accompanied by a requirement to properly define HFO taking into account its current definition under MARPOL Annex I regulation 43. [ 19 ] The adoption of the ban is anticipated for 2021, with widespread implementation by 2023. [ 21 ] The Clean Arctic Alliance was the first IMO delegate nonprofit organization to campaign against the use of HFO in Arctic waters. However, the phase-out and ban of HFO in the Arctic was formally proposed to MEPC by eight countries in 2018: Finland, Germany, Iceland, the Netherlands, New Zealand, Norway, Sweden and the United States. [ 10 ] [ 19 ] Although these member states continue to support the initiative, several countries have been vocal about their resistance to an HFO ban on such a short time scale. The Russian Federation has expressed concern for impacts to the maritime shipping industry and trade given the relatively low cost of HFO. Russia instead suggested the development and implementation of mitigation measures for the use and carriage of HFO in Arctic waters. Canada and Marshall Islands have presented similar arguments, highlighting the potential impacts on Arctic communities (namely remote indigenous populations) and economies. [ 5 ] To appease concerns and resistance, at its 6th session in February 2019, the PPR sub-committee working group developed a "draft methodology for analyzing impacts" of HFO to be finalized at PPR's 7th session in 2020. 
The purpose of the methodology is to evaluate the ban according to its economic and social impacts on Arctic indigenous communities and other local communities, to measure the anticipated benefits to local ecosystems, and potentially to consider other factors that could be positively or negatively affected by the ban. [ 22 ]
https://en.wikipedia.org/wiki/Heavy_fuel_oil
A heavy hauler is a very large transporter for moving oversize loads too large for road travel without an escort and special permit. A heavy hauler typically consists of a Ballast Tractor and a hydraulic modular trailer . Some trailers may have independently steerable wheels, and several might be towed by one or more tractor units in a train. Self-propelled modular transporters (SPMT), some featuring a dozen or more self-steering axles with scores of rubber tires to spread out a load, are increasingly being manufactured. Working in coordinated teams, heavy haulers are able to carry loads exceeding 100 tons. In some cases, a heavy hauler is designed and constructed to move a particular load on a one-off or short-term basis. An example is the self-propelled antenna transporter for the ALMA radio telescope project, a 130-tonne (130-long-ton; 140-short-ton) 28-wheeled rigid vehicle designed to carry and place 115-tonne (113-long-ton; 127-short-ton) radio telescope antennas up a mountain to an altitude of 5,000 m (16,400 ft). [ 1 ] Girder bridge ( lowboy ) trailers are another specialist heavy hauler, specifically for the transport of large power transformers. [ citation needed ] Typical loads moved by heavy haulers under escort on highways include giant boilers and pressure vessels used in the chemical industry, industrial plants , prefabricated sections for construction projects, giant power transformers , turbines , and houses (generally made of timber ). The term "heavy hauler" may also be used to refer to off-road dump trucks and ore carriers used in mining and construction with capacities up to 400 tonnes (390 long tons; 440 short tons), or an airplane that has been especially constructed for moving heavy materials. [ 2 ] There are some shipbuilding companies using SPMT for carrying ship parts and constructing ships in China. They have saved millions of dollars formerly spent transporting loads using gantry cranes . [ citation needed ]
https://en.wikipedia.org/wiki/Heavy_hauler
In transportation, heavy lift refers to the handling and installation of heavy items which are indivisible, and of weights generally accepted to be over 100 tons and of widths/heights of more than 100 meters. These oversized items are transported from one place to another (sometimes across country borders), then lifted or installed into place. Characteristic of heavy-lift goods is the absence of standardization, which requires individual transport planning. Typical heavy-lift cargo includes generators, turbines, reactors, boilers, towers, castings, heaters, presses, locomotives, boats, satellites, and military personnel and equipment. In the offshore industry, parts of oil rigs and production platforms are also lifted; some of these are also removed at the end of an installation's working life. Recent notable lifts have included several of more than 2,000 metric tons in the decommissioning of the North West Hutton oil field in the British sector of the North Sea . [ citation needed ] Road transport of heavy and oversized loads is called heavy haulage; specialized equipment, employed only for heavy-duty work, is used to haul these loads. This type of transport requires route planning and escort vehicles. Road transport is carried out from or to manufacturing plants or factories. Air transport of heavy-lift project cargo is done using cargo planes , which are among the largest aircraft because of the size of the loads; these loads are carried to or from airports via road transport. In 2021 the logistics company Gebrüder Weiss chartered an Antonov An-225 Mriya , the world's largest cargo plane, to transport project cargo from China to Poland for a Polish factory; this was done because of time pressure during the COVID-19 pandemic . [ 2 ] Sea transport is the preferred mode for long distances due to its large capacity and low cost compared to air transport, but it requires more planning and the transport itself takes a long time. Loads are carried to the port via road transport; cranes and gantries are then used to place the loads onto or into the vessels. [ 4 ] For land transport, rail is also considered an option for hauling heavy-lift cargo instead of road transport, owing to higher speeds, bulk transportation and lower cost. Companies have managed to haul cargoes of over 250 tons and 30 meters in length, and some have also started moving longer cargoes such as windmill blades . Disadvantages of rail transport are limited accessibility and size restrictions: railroads are not directly connected to airports and ports, so ultimately the cargo has to be transferred to road transport to reach its final destination. [ 5 ]
https://en.wikipedia.org/wiki/Heavy_lift
Heavy meromyosin ( HMM ) is the larger of the two fragments obtained from the muscle protein myosin II following limited proteolysis by trypsin or chymotrypsin . [ 1 ] HMM contains two domains, S-1 and S-2: S-1 is the globular head that can bind to actin , while the S-2 domain projects at an angle from light meromyosin (LMM), connecting the two meromyosin fragments. HMM is used to determine the polarity of actin filaments by decorating them with HMM and then viewing them under the electron microscope . [ 2 ]
https://en.wikipedia.org/wiki/Heavy_meromyosin
Heavy metals is a controversial and ambiguous term [ 2 ] for metallic elements with relatively high densities , atomic weights , or atomic numbers . The criteria used, and whether metalloids are included, vary depending on the author and context and it has been argued that the term "heavy metal" should be avoided. [ 3 ] [ 4 ] A heavy metal may be defined on the basis of density, atomic number or chemical behaviour . More specific definitions have been published, none of which have been widely accepted. The definitions surveyed in this article encompass up to 96 out of the 118 known chemical elements ; only mercury , lead and bismuth meet all of them. Despite this lack of agreement, the term (plural or singular) is widely used in science. A density of more than 5 g/cm 3 is sometimes quoted as a commonly used criterion and is used in the body of this article. The earliest-known metals—common metals such as iron , copper , and tin , and precious metals such as silver , gold , and platinum —are heavy metals. From 1809 onward, light metals , such as magnesium , aluminium , and titanium , were discovered, as well as less well-known heavy metals including gallium , thallium , and hafnium . Some heavy metals are either essential nutrients (typically iron, cobalt , copper and zinc ), or relatively harmless (such as ruthenium , silver and indium ), but can be toxic in larger amounts or certain forms. Other heavy metals, such as arsenic , cadmium , mercury, and lead, are highly poisonous. Potential sources of heavy metal poisoning include mining , tailings , smelting , industrial waste , agricultural runoff , occupational exposure , paints and treated timber . Physical and chemical characterisations of heavy metals need to be treated with caution, as the metals involved are not always consistently defined. As well as being relatively dense, heavy metals tend to be less reactive than lighter metals and have far fewer soluble sulfides and hydroxides . While it is relatively easy to distinguish a heavy metal such as tungsten from a lighter metal such as sodium , a few heavy metals, such as zinc, mercury, and lead, have some of the characteristics of lighter metals; and lighter metals such as beryllium , scandium , and titanium, have some of the characteristics of heavier metals. Heavy metals are relatively rare in the Earth's crust but are present in many aspects of modern life. They are used in, for example, golf clubs , cars , antiseptics , self-cleaning ovens , plastics , solar panels , mobile phones , and particle accelerators . The International Union of Pure and Applied Chemistry (IUPAC), which standardizes nomenclature, says "the term heavy metals is both meaningless and misleading". [ 2 ] The IUPAC report focuses on the legal and toxicological implications of describing "heavy metals" as toxins when there is no scientific evidence to support a connection. The density implied by the adjective "heavy" has almost no biological consequences and pure metals are rarely the biologically active substance. [ 5 ] This characterization has been echoed by numerous reviews. [ 6 ] [ 7 ] [ 8 ] The most widely used toxicology textbook, Casarett and Doull’s Toxicology [ 9 ] uses "toxic metal", not "heavy metal". [ 5 ] Nevertheless, there are scientific and science related articles which continue to use "heavy metal" as a term for toxic substances. [ 10 ] [ 11 ] To be an acceptable term in scientific papers, a strict definition has been encouraged. 
[ 12 ] Even in applications other than toxicity, there is no widely agreed criterion-based definition of a heavy metal. Reviews have recommended that the term not be used. [ 10 ] [ 13 ] Different meanings may be attached to the term, depending on the context. For example, a heavy metal may be defined on the basis of density , [ 14 ] while the distinguishing criterion might instead be atomic number [ 15 ] or chemical behaviour. [ 16 ] Density criteria range from above 3.5 g/cm 3 to above 7 g/cm 3 . [ 17 ] Atomic weight definitions can range from greater than sodium (atomic weight 22.98), [ 17 ] to greater than 40 (excluding s- and f-block metals, hence starting with scandium ), [ 18 ] to more than 200, i.e. from mercury onwards. [ 19 ] Atomic numbers are sometimes capped at 92 ( uranium ). [ 20 ] Definitions based on atomic number have been criticised for including metals with low densities. For example, rubidium in group (column) 1 of the periodic table has an atomic number of 37 but a density of only 1.532 g/cm 3 , which is below the threshold figure used by other authors. [ 21 ] The same problem may occur with definitions based on atomic weight. [ 22 ] Six elements near the end of periods (rows) 4 to 7 that are sometimes considered metalloids are treated here as metals: germanium (Ge), arsenic (As), selenium (Se), antimony (Sb), tellurium (Te), and astatine (At). [ 31 ] [ n 2 ] Oganesson (Og) is treated as a nonmetal. The United States Pharmacopeia includes a test for heavy metals that involves precipitating metallic impurities as their coloured sulfides . [ 23 ] On the basis of this type of chemical test, the group would include the transition metals and post-transition metals . [ 16 ] A different chemistry-based approach advocates replacing the term "heavy metal" with two groups of metals and a gray area. Class A metal ions prefer oxygen donors; class B ions prefer nitrogen or sulfur donors; and borderline or ambivalent ions show either class A or B characteristics, depending on the circumstances. [ 32 ] The distinction between the class A metals and the other two categories is sharp. The class A and class B terminology is analogous to the "hard acid" and "soft base" terminology sometimes used to refer to the behaviour of metal ions in inorganic systems. [ 33 ] The system groups the elements by the index {\displaystyle X_{m}^{2}r} , where {\displaystyle X_{m}} is the metal ion electronegativity and {\displaystyle r} is its ionic radius . This index gauges the importance of covalent interactions versus ionic interactions for a given metal ion. [ 34 ] This scheme has been applied, for example, to analyze biologically active metals in sea water, [ 12 ] but it has not been widely adopted. [ 35 ] The heaviness of naturally occurring metals such as gold , copper , and iron may have been noticed in prehistory and, in light of their malleability , led to the first attempts to craft metal ornaments, tools, and weapons. [ 36 ] In 1817, the German chemist Leopold Gmelin divided the elements into nonmetals, light metals, and heavy metals. [ 37 ] Light metals had densities of 0.860–5.0 g/cm 3 ; heavy metals, 5.308–22.000 g/cm 3 . [ 38 ] The term heavy metal is sometimes used interchangeably with the term heavy element . For example, in discussing the history of nuclear chemistry , Magee [ 39 ] notes that the actinides were once thought to represent a new heavy element transition group, whereas Seaborg and co-workers "favoured ... a heavy metal rare-earth like series ...". 
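As a small illustration of the density-based criterion discussed above (a metal counted as "heavy" if denser than 5 g/cm 3 ), the sketch below classifies a few elements. The density values are approximate handbook figures supplied purely for illustration; rubidium shows why atomic-number definitions have been criticised:

```python
# Sketch of the simple density criterion discussed above: "heavy" if the
# density exceeds 5 g/cm^3. Densities are approximate handbook values
# included only for illustration.

DENSITY_G_PER_CM3 = {
    "aluminium": 2.70,
    "rubidium": 1.532,   # atomic number 37, yet very light (see text)
    "iron": 7.87,
    "lead": 11.34,
    "gold": 19.3,
}

def is_heavy_metal_by_density(element: str, threshold: float = 5.0) -> bool:
    """Apply the density criterion (> threshold g/cm^3)."""
    return DENSITY_G_PER_CM3[element] > threshold

for element in DENSITY_G_PER_CM3:
    print(f"{element:10s} heavy by density: {is_heavy_metal_by_density(element)}")
# Rubidium has a fairly high atomic number but fails the density test,
# which is the criticism of atomic-number-based definitions noted above.
```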
The counterparts to the heavy metals, the light metals , are defined by The Minerals, Metals and Materials Society as including "the traditional ( aluminium , magnesium , beryllium , titanium , lithium , and other reactive metals) and emerging light metals (composites, laminates, etc.)" [ 40 ] Trace amounts of some heavy metals, mostly in period 4, are required for certain biological processes. These are iron and copper ( oxygen and electron transport ); cobalt ( complex syntheses and cell metabolism ); vanadium and manganese ( enzyme regulation or functioning); chromium ( glucose utilisation); nickel ( cell growth ); arsenic (metabolic growth in some animals and possibly in humans) and selenium ( antioxidant functioning and hormone production). [ 46 ] Periods 5 and 6 contain fewer essential heavy metals, consistent with the general pattern that heavier elements tend to be less abundant and that scarcer elements are less likely to be nutritionally essential. [ 47 ] In period 5 , molybdenum is required for the catalysis of redox reactions; cadmium is used by some marine diatoms for the same purpose; and tin may be required for growth in a few species. [ 48 ] In period 6 , tungsten is required by some archaea and bacteria for metabolic processes . [ 49 ] A deficiency of any of these period 4–6 essential heavy metals may increase susceptibility to heavy metal poisoning [ 50 ] (conversely, an excess may also have adverse biological effects ). An average 70 kg human body is about 0.01% heavy metals (~7 g, equivalent to the weight of two dried peas, with iron at 4 g, zinc at 2.5 g, and lead at 0.12 g comprising the three main constituents), 2% light metals (~1.4 kg, the weight of a bottle of wine) and nearly 98% nonmetals (mostly water ). [ 51 ] [ n 8 ] A few non-essential heavy metals have been observed to have biological effects. Gallium , germanium (a metalloid), indium, and most lanthanides can stimulate metabolism, and titanium promotes growth in plants [ 52 ] (though it is not always considered a heavy metal). Heavy metals are often assumed to be highly toxic or damaging to the environment. [ 53 ] Some are, while certain others are toxic only if taken in excess or encountered in certain forms. Inhalation of certain metals, either as fine dust or most commonly as fumes, can also result in a condition called metal fume fever . Chromium, arsenic, cadmium, mercury, and lead have the greatest potential to cause harm on account of their extensive use, the toxicity of some of their combined or elemental forms, and their widespread distribution in the environment. [ 54 ] Hexavalent chromium , for example, is highly toxic [ citation needed ] as are mercury vapour and many mercury compounds. [ 55 ] These five elements have a strong affinity for sulfur; in the human body they usually bind, via thiol groups (–SH), to enzymes responsible for controlling the speed of metabolic reactions. The resulting sulfur-metal bonds inhibit the proper functioning of the enzymes involved; human health deteriorates, sometimes fatally. [ 56 ] Chromium (in its hexavalent form) and arsenic are carcinogens ; cadmium causes a degenerative bone disease ; and mercury and lead damage the central nervous system . [ citation needed ] Lead is the most prevalent heavy metal contaminant. [ 57 ] Levels in the aquatic environments of industrialised societies have been estimated to be two to three times those of pre-industrial levels. 
[ 58 ] As a component of tetraethyl lead , (CH 3 CH 2 ) 4 Pb , it was used extensively in gasoline from the 1930s until the 1970s. [ 59 ] Although the use of leaded gasoline was largely phased out in North America by 1996, soils next to roads built before this time retain high lead concentrations. [ 60 ] Later research demonstrated a statistically significant correlation between the usage rate of leaded gasoline and violent crime in the United States; taking into account a 22-year time lag (for the average age of violent criminals), the violent crime curve virtually tracked the lead exposure curve. [ 61 ] Other heavy metals noted for their potentially hazardous nature, usually as toxic environmental pollutants, include manganese (central nervous system damage); [ 62 ] cobalt and nickel (carcinogens); [ 63 ] copper, [ 64 ] zinc, [ 65 ] selenium [ 66 ] and silver [ 67 ] ( endocrine disruption, congenital disorders , or general toxic effects in fish, plants, birds, or other aquatic organisms); tin, as organotin (central nervous system damage); [ 68 ] antimony (a suspected carcinogen); [ 69 ] and thallium (central nervous system damage). [ 64 ] [ n 9 ] A few other non-essential heavy metals have one or more toxic forms. Kidney failure and fatalities have been recorded arising from the ingestion of germanium dietary supplements (~15 to 300 g in total consumed over a period of two months to three years). [ 64 ] Exposure to osmium tetroxide (OsO 4 ) may cause permanent eye damage and can lead to respiratory failure [ 72 ] and death. [ 73 ] Indium salts are toxic if more than few milligrams are ingested and will affect the kidneys, liver, and heart. [ 74 ] Cisplatin (PtCl 2 (NH 3 ) 2 ), an important drug used to kill cancer cells , is also a kidney and nerve poison. [ 64 ] Bismuth compounds can cause liver damage if taken in excess; insoluble uranium compounds, as well as the dangerous radiation they emit, can cause permanent kidney damage. [ 75 ] Heavy metals can degrade air, water, and soil quality , and subsequently cause health issues in plants, animals, and people, when they become concentrated as a result of industrial activities. [ 76 ] [ 77 ] Common sources of heavy metals in this context include vehicle emissions; [ 78 ] motor oil; [ 79 ] fertilisers; [ 80 ] glassworking; [ 81 ] incinerators; [ 82 ] treated timber ; [ 83 ] aging water supply infrastructure ; [ 84 ] and microplastics floating in the world's oceans. [ 85 ] Recent examples of heavy metal contamination and health risks include the occurrence of Minamata disease , in Japan (1932–1968; lawsuits ongoing as of 2016); [ 86 ] the Bento Rodrigues dam disaster in Brazil, [ 87 ] high levels of lead in drinking water supplied to the residents of Flint , Michigan, in the north-east of the United States [ 88 ] and 2015 Hong Kong heavy metal in drinking water incidents . Heavy metals up to the vicinity of iron (in the periodic table) are largely made via stellar nucleosynthesis . In this process, lighter elements from hydrogen to silicon undergo successive fusion reactions inside stars, releasing light and heat and forming heavier elements with higher atomic numbers. [ 92 ] Heavier heavy metals are not usually formed this way since fusion reactions involving such nuclei would consume rather than release energy. [ 93 ] Rather, they are largely synthesised (from elements with a lower atomic number) by neutron capture , with the two main modes of this repetitive capture being the s-process and the r-process . 
In the s-process ("s" stands for "slow"), singular captures are separated by years or decades, allowing the less stable nuclei to beta decay , [ 94 ] while in the r-process ("rapid"), captures happen faster than nuclei can decay. Therefore, the s-process takes a more or less clear path: for example, stable cadmium-110 nuclei are successively bombarded by free neutrons inside a star until they form cadmium-115 nuclei which are unstable and decay to form indium-115 (which is nearly stable, with a half-life 30,000 times the age of the universe). These nuclei capture neutrons and form indium-116, which is unstable, and decays to form tin-116, and so on. [ 92 ] [ 95 ] [ n 11 ] In contrast, there is no such path in the r-process. The s-process stops at bismuth due to the short half-lives of the next two elements, polonium and astatine, which decay to bismuth or lead. The r-process is so fast it can skip this zone of instability and go on to create heavier elements such as thorium and uranium. [ 97 ] Heavy metals condense in planets as a result of stellar evolution and destruction processes. Stars lose much of their mass when it is ejected late in their lifetimes, and sometimes thereafter as a result of a neutron star merger, [ 98 ] [ n 12 ] thereby increasing the abundance of elements heavier than helium in the interstellar medium . When gravitational attraction causes this matter to coalesce and collapse, new stars and planets are formed . [ 100 ] The Earth's crust is made of approximately 5% of heavy metals by weight, with iron comprising 95% of this quantity. Light metals (~20%) and nonmetals (~75%) make up the other 95% of the crust. [ 89 ] Despite their overall scarcity, heavy metals can become concentrated in economically extractable quantities as a result of mountain building , erosion , or other geological processes . [ 101 ] Heavy metals are found primarily as lithophiles (rock-loving) or chalcophiles (ore-loving). Lithophile heavy metals are mainly f-block elements and the more reactive of the d-block elements. They have a strong affinity for oxygen and mostly exist as relatively low density silicate minerals . [ 102 ] Chalcophile heavy metals are mainly the less reactive d-block elements, and period 4–6 p-block metals and metalloids. They are usually found in (insoluble) sulfide minerals . Being denser than the lithophiles, hence sinking lower into the crust at the time of its solidification, the chalcophiles tend to be less abundant than the lithophiles. [ 103 ] In contrast, gold is a siderophile , or iron-loving element. It does not readily form compounds with either oxygen or sulfur. [ 104 ] At the time of the Earth's formation , and as the most noble (inert) of metals, gold sank into the core due to its tendency to form high-density metallic alloys. Consequently, it is a relatively rare metal. [ 105 ] [ failed verification ] Some other (less) noble heavy metals—molybdenum, rhenium , the platinum group metals ( ruthenium , rhodium, palladium , osmium, iridium , and platinum), germanium, and tin—can be counted as siderophiles but only in terms of their primary occurrence in the Earth (core, mantle and crust), rather the crust. These metals otherwise occur in the crust, in small quantities, chiefly as chalcophiles (less so in their native form ). [ 106 ] [ n 13 ] Concentrations of heavy metals below the crust are generally higher, with most being found in the largely iron-silicon-nickel core. 
Platinum , for example, comprises approximately 1 part per billion of the crust whereas its concentration in the core is thought to be nearly 6,000 times higher. [ 107 ] [ 108 ] Recent speculation suggests that uranium (and thorium) in the core may generate a substantial amount of the heat that drives plate tectonics and (ultimately) sustains the Earth's magnetic field . [ 109 ] [ n 14 ] Broadly speaking, and with some exceptions, lithophile heavy metals can be extracted from their ores by electrical or chemical treatments , while chalcophile heavy metals are obtained by roasting their sulphide ores to yield the corresponding oxides, and then heating these to obtain the raw metals. [ 111 ] [ n 15 ] Radium occurs in quantities too small to be economically mined and is instead obtained from spent nuclear fuels . [ 114 ] The chalcophile platinum group metals (PGM) mainly occur in small (mixed) quantities with other chalcophile ores. The ores involved need to be smelted , roasted, and then leached with sulfuric acid to produce a residue of PGM. This is chemically refined to obtain the individual metals in their pure forms. [ 115 ] Compared to other metals, PGM are expensive due to their scarcity [ 116 ] and high production costs. [ 117 ] Gold, a siderophile, is most commonly recovered by dissolving the ores in which it is found in a cyanide solution . [ 118 ] The gold forms a dicyanoaurate(I), for example: 2 Au + H 2 O +½ O 2 + 4 KCN → 2 K[Au(CN) 2 ] + 2 KOH . Zinc is added to the mix and, being more reactive than gold, displaces the gold: 2 K[Au(CN) 2 ] + Zn → K 2 [Zn(CN) 4 ] + 2 Au. The gold precipitates out of solution as a sludge, and is filtered off and melted. [ 119 ] Some common uses of heavy metals depend on the general characteristics of metals such as electrical conductivity and reflectivity or the general characteristics of heavy metals such as density, strength, and durability. Other uses depend on the characteristics of the specific element, such as their biological role as nutrients or poisons or some other specific atomic properties. Examples of such atomic properties include: partly filled d- or f- orbitals (in many of the transition, lanthanide, and actinide heavy metals) that enable the formation of coloured compounds; [ 120 ] the capacity of heavy metal ions (such as platinum, [ 121 ] cerium [ 122 ] or bismuth [ 123 ] ) to exist in different oxidation states and are used in catalysts; [ 124 ] strong exchange interactions in 3d or 4f orbitals (in iron, cobalt, and nickel, or the lanthanide heavy metals) that give rise to magnetic effects; [ 125 ] and high atomic numbers and electron densities that underpin their nuclear science applications. [ 126 ] Typical uses of heavy metals can be broadly grouped into the following categories. [ 127 ] Some uses of heavy metals, including in sport, mechanical engineering , military ordnance , and nuclear science , take advantage of their relatively high densities. In underwater diving , lead is used as a ballast ; [ 129 ] in handicap horse racing each horse must carry a specified lead weight, based on factors including past performance, so as to equalize the chances of the various competitors. [ 130 ] In golf , tungsten, brass , or copper inserts in fairway clubs and irons lower the centre of gravity of the club making it easier to get the ball into the air; [ 131 ] and golf balls with tungsten cores are claimed to have better flight characteristics. 
[ 132 ] In fly fishing , sinking fly lines have a PVC coating embedded with tungsten powder, so that they sink at the required rate. [ 133 ] In track and field sport, steel balls used in the hammer throw and shot put events are filled with lead in order to attain the minimum weight required under international rules. [ 134 ] Tungsten was used in hammer throw balls at least up to 1980; the minimum size of the ball was increased in 1981 to eliminate the need for what was, at that time, an expensive metal (triple the cost of other hammers) not generally available in all countries. [ 135 ] Tungsten hammers were so dense that they penetrated too deeply into the turf. [ 136 ] The higher the projectile density, the more effectively it can penetrate heavy armor plate ... Os , Ir , Pt , and Re ... are expensive ... U offers an appealing combination of high density, reasonable cost and high fracture toughness. Heavy metals are used for ballast in boats, [ 137 ] aeroplanes, [ 138 ] and motor vehicles; [ 139 ] or in balance weights on wheels and crankshafts , [ 140 ] gyroscopes , and propellers , [ 141 ] and centrifugal clutches , [ 142 ] in situations requiring maximum weight in minimum space (for example in watch movements ). [ 138 ] In military ordnance, tungsten or uranium is used in armour plating [ 143 ] and armour piercing projectiles , [ 144 ] as well as in nuclear weapons to increase efficiency (by reflecting neutrons and momentarily delaying the expansion of reacting materials). [ 145 ] In the 1970s, tantalum was found to be more effective than copper in shaped charge and explosively formed anti-armour weapons on account of its higher density, allowing greater force concentration, and better deformability. [ 146 ] Less- toxic heavy metals , such as copper, tin, tungsten, and bismuth, and probably manganese (as well as boron , a metalloid), have replaced lead and antimony in the green bullets used by some armies and in some recreational shooting munitions. [ 147 ] Doubts have been raised about the safety (or green credentials ) of tungsten. [ 148 ] The biocidal effects of some heavy metals have been known since antiquity. [ 150 ] Platinum, osmium, copper, ruthenium, and other heavy metals, including arsenic, are used in anti-cancer treatments, or have shown potential. [ 151 ] Antimony (anti-protozoal), bismuth ( anti-ulcer ), gold ( anti-arthritic ), and iron ( anti-malarial ) are also important in medicine. [ 152 ] Copper, zinc, silver, gold, or mercury are used in antiseptic formulations; [ 153 ] small amounts of some heavy metals are used to control algal growth in, for example, cooling towers . [ 154 ] Depending on their intended use as fertilisers or biocides, agrochemicals may contain heavy metals such as chromium, cobalt, nickel, copper, zinc, arsenic, cadmium, mercury, or lead. [ 155 ] Selected heavy metals are used as catalysts in fuel processing (rhenium, for example), synthetic rubber and fibre production (bismuth), emission control devices (palladium and platinum), and in self-cleaning ovens (where cerium(IV) oxide in the walls of such ovens helps oxidise carbon -based cooking residues). [ 156 ] In soap chemistry, heavy metals form insoluble soaps that are used in lubricating greases , paint dryers, and fungicides (apart from lithium, the alkali metals and the ammonium ion form soluble soaps). 
[ 157 ] The colours of glass , ceramic glazes , paints , pigments , and plastics are commonly produced by the inclusion of heavy metals (or their compounds) such as chromium, manganese, cobalt, copper, zinc, zirconium , molybdenum, silver, tin, praseodymium , neodymium , erbium , tungsten, iridium, gold, lead, or uranium. [ 159 ] Tattoo inks may contain heavy metals, such as chromium, cobalt, nickel, and copper. [ 160 ] The high reflectivity of some heavy metals is important in the construction of mirrors , including precision astronomical instruments . Headlight reflectors rely on the excellent reflectivity of a thin film of rhodium. [ 161 ] Heavy metals or their compounds can be found in electronic components , electrodes , and wiring and solar panels . Molybdenum powder is used in circuit board inks. [ 162 ] Home electrical systems, for the most part, are wired with copper wire for its good conducting properties. [ 163 ] Silver and gold are used in electrical and electronic devices, particularly in contact switches , as a result of their high electrical conductivity and capacity to resist or minimise the formation of impurities on their surfaces. [ 164 ] Heavy metals have been used in batteries for over 200 years, at least since Volta invented his copper and silver voltaic pile in 1800. [ 165 ] Magnets are often made of heavy metals such as manganese, iron, cobalt, nickel, niobium, bismuth, praseodymium, neodymium, gadolinium, and dysprosium . Neodymium magnets are the strongest type of permanent magnet commercially available. They are key components of, for example, car door locks, starter motors , fuel pumps , and power windows . [ 166 ] Heavy metals are used in lighting , lasers , and light-emitting diodes (LEDs). Fluorescent lighting relies on mercury vapour for its operation. Ruby lasers generate deep red beams by exciting chromium atoms in aluminum oxide ; the lanthanides are also extensively employed in lasers. Copper, iridium, and platinum are used in organic LEDs . [ 167 ] Because denser materials absorb more of certain types of radioactive emissions such as gamma rays than lighter ones, heavy metals are useful for radiation shielding and to focus radiation beams in linear accelerators and radiotherapy applications. Niche uses of heavy metals with high atomic numbers occur in diagnostic imaging , electron microscopy , and nuclear science. In diagnostic imaging, heavy metals such as cobalt or tungsten make up the anode materials found in x-ray tubes . [ 171 ] In electron microscopy, heavy metals such as lead, gold, palladium, platinum, or uranium have been used in the past to make conductive coatings and to introduce electron density into biological specimens by staining , negative staining , or vacuum deposition . [ 172 ] In nuclear science, nuclei of heavy metals such as chromium, iron, or zinc are sometimes fired at other heavy metal targets to produce superheavy elements ; [ 173 ] heavy metals are also employed as spallation targets for the production of neutrons [ 174 ] or isotopes of non-primordial elements such as astatine (using lead, bismuth, thorium, or uranium in the latter case). [ 175 ] Definition and usage Toxicity and biological role Formation Uses
https://en.wikipedia.org/wiki/Heavy_metals
In graph theory , the Heawood conjecture or Ringel–Youngs theorem gives a lower bound for the number of colors that are necessary for graph coloring on a surface of a given genus . For surfaces of genus 0, 1, 2, 3, 4, 5, 6, 7, ..., the required number of colors is 4, 7, 8, 9, 10, 11, 12, 12, ... ( OEIS : A000934 ), the chromatic number or Heawood number . The conjecture was formulated in 1890 by P.J. Heawood and proven in 1968 by Gerhard Ringel and J.W.T. Youngs . One case, the non-orientable Klein bottle , proved an exception to the general formula. An entirely different approach was needed for the much older problem of finding the number of colors needed for the plane or sphere , solved in 1976 as the four color theorem by Haken and Appel . On the sphere the lower bound is easy, whereas for higher genera the upper bound is easy and was proved in Heawood's original short paper that contained the conjecture. In other words, Ringel, Youngs, and others had to construct extreme examples for every genus g = 1, 2, 3, .... If g = 12 s + k , then the genera fall into 12 cases according to k = 0, 1, 2, 3, ..., 11. To simplify, suppose that case k has been established if only a finite number of g s of the form 12 s + k are in doubt. The years in which the twelve cases were settled, and by whom, are tabulated in the source article, as are the last seven sporadic exceptions. Percy John Heawood conjectured in 1890 that for a given genus g > 0, the minimum number of colors necessary to color all graphs drawn on an orientable surface of that genus (or equivalently, to color the regions of any partition of the surface into simply connected regions) is given by {\displaystyle \gamma (g)=\left\lfloor {\frac {7+{\sqrt {1+48g}}}{2}}\right\rfloor ,} where {\displaystyle \left\lfloor x\right\rfloor } is the floor function . Replacing the genus by the Euler characteristic {\displaystyle \chi } , we obtain a formula that covers both the orientable and non-orientable cases, {\displaystyle \gamma (\chi )=\left\lfloor {\frac {7+{\sqrt {49-24\chi }}}{2}}\right\rfloor .} This relation holds, as Ringel and Youngs showed, for all surfaces except for the Klein bottle . Philip Franklin (1930) proved that the Klein bottle requires at most 6 colors, rather than 7 as predicted by the formula. The Franklin graph can be drawn on the Klein bottle in a way that forms six mutually adjacent regions, showing that this bound is tight. The upper bound, proved in Heawood's original short paper, is based on a greedy coloring algorithm. By manipulating the Euler characteristic, one can show that every graph embedded in the given surface must have at least one vertex of degree less than the given bound. If one removes this vertex and colors the rest of the graph, the small number of edges incident to the removed vertex ensures that it can be added back to the graph and colored without increasing the needed number of colors beyond the bound. In the other direction, the proof is more difficult, and involves showing that in each case (except the Klein bottle) a complete graph with a number of vertices equal to the given number of colors can be embedded on the surface. The torus has g = 1, so χ = 0. Therefore, as the formula states, any subdivision of the torus into regions can be colored using at most seven colors. There is a subdivision of the torus in which each of seven regions is adjacent to every other region; this subdivision shows that the bound of seven on the number of colors is tight for this case. The boundary of this subdivision forms an embedding of the Heawood graph onto the torus.
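The Heawood bound above is straightforward to evaluate. The following sketch computes it for orientable surfaces by genus, and in the Euler-characteristic form used for the torus; the Klein bottle remains the lone exception noted above (6 colors rather than the 7 the formula predicts):

```python
import math

# Sketch of the Heawood bound discussed above.

def heawood_number(genus: int) -> int:
    """floor((7 + sqrt(1 + 48*g)) / 2) for an orientable surface of genus g > 0."""
    return (7 + math.isqrt(1 + 48 * genus)) // 2

def heawood_number_chi(chi: int) -> int:
    """floor((7 + sqrt(49 - 24*chi)) / 2); Euler-characteristic form (Klein bottle excepted)."""
    return (7 + math.isqrt(49 - 24 * chi)) // 2

print([heawood_number(g) for g in range(1, 8)])  # [7, 8, 9, 10, 11, 12, 12]
print(heawood_number_chi(0))                     # torus (chi = 0): 7
```

Because the floor of (7 + sqrt(x)) / 2 equals (7 + isqrt(x)) // 2 for integer x, integer square roots suffice and no floating-point arithmetic is needed; the printed sequence matches the values 7, 8, 9, 10, 11, 12, 12 quoted for genus 1 through 7 above.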
https://en.wikipedia.org/wiki/Heawood_conjecture
The Heck reaction (also called the Mizoroki–Heck reaction ) [ 1 ] is the chemical reaction of an unsaturated halide (or triflate ) with an alkene in the presence of a base and a palladium catalyst to form a substituted alkene. It is named after Tsutomu Mizoroki and Richard F. Heck . Heck was awarded the 2010 Nobel Prize in Chemistry , which he shared with Ei-ichi Negishi and Akira Suzuki , for the discovery and development of this reaction. This reaction was the first example of a carbon-carbon bond-forming reaction that followed a Pd(0)/Pd(II) catalytic cycle, the same catalytic cycle that is seen in other Pd(0)-catalyzed cross-coupling reactions . The Heck reaction is a way to substitute alkenes. [ 2 ] [ 3 ] [ 4 ] [ 5 ] The original reaction by Tsutomu Mizoroki (1971) describes the coupling between iodobenzene and styrene in methanol to form stilbene at 120 °C ( autoclave ) with potassium acetate base and palladium chloride catalysis. This work was an extension of earlier work by Fujiwara (1967) on the Pd(II)-mediated coupling of arenes (Ar–H) and alkenes [ 6 ] [ 7 ] and earlier work by Heck (1969) on the coupling of arylmercuric halides (ArHgCl) with alkenes using a stoichiometric amount of a palladium(II) species. [ 8 ] In 1972 Heck acknowledged the Mizoroki publication and detailed independently discovered work. Heck's reaction conditions differ in terms of the catalyst (palladium acetate), catalyst loading (0.01 eq.), base (hindered amine), and absence of solvent. [ 9 ] [ 10 ] In 1974 Heck showed that phosphine ligands facilitated the reaction. [ 11 ] The reaction is catalyzed by palladium complexes. Typical catalysts and precatalysts include tetrakis(triphenylphosphine)palladium(0) , palladium chloride , and palladium(II) acetate . Typical supporting ligands are triphenylphosphine , PHOX , and BINAP . Typical bases are triethylamine , potassium carbonate , and sodium acetate . The aryl electrophile can be a halide (Br, Cl) or a triflate , and benzyl or vinyl halides can also be used. The alkene must contain at least one sp² C–H bond. Electron-withdrawing substituents enhance the reaction; thus acrylates are ideal. [ 12 ] The mechanism of this vinylation involves organopalladium intermediates. The required palladium(0) compound is often generated in situ from a palladium(II) precursor. [ 13 ] [ 14 ] For instance, palladium(II) acetate is reduced by triphenylphosphine to bis(triphenylphosphine)palladium(0) ( 1 ) concomitant with oxidation of triphenylphosphine to triphenylphosphine oxide . Step A is an oxidative addition in which palladium inserts itself into the aryl–bromide bond. The resulting palladium(II) complex then binds the alkene ( 3 ). In step B the alkene inserts into the Pd–C bond in a syn addition step. Step C involves a beta-hydride elimination with the formation of a new palladium–alkene π complex ( 5 ). This complex dissociates in the next step. The Pd(0) complex is regenerated by reductive elimination of the palladium(II) compound by potassium carbonate in the final step, D . In the course of the reaction the carbonate is consumed stoichiometrically, whereas palladium is a true catalyst and is used in catalytic amounts. A similar palladium cycle, though with different intermediates, is observed in the Wacker process . This cycle is not limited to vinyl compounds: in the Sonogashira coupling one of the reactants is an alkyne, in the Suzuki coupling the alkene is replaced by an aryl boronic acid , and in the Stille reaction by an aryl stannane .
The cycle also extends to the other group 10 element nickel, for example in the Negishi coupling between aryl halides and organozinc compounds. Platinum forms strong bonds with carbon and does not show catalytic activity in this type of reaction. This coupling reaction is stereoselective with a propensity for trans coupling, as the palladium halide group and the bulky organic residue move away from each other in the reaction sequence in a rotation step. The Heck reaction is applied industrially in the production of naproxen and the sunscreen component octyl methoxycinnamate . The naproxen synthesis includes a coupling of a brominated naphthalene compound with ethylene : [ 15 ] In the presence of an ionic liquid a Heck reaction proceeds in the absence of a phosphorus ligand. In one modification palladium acetate and the ionic liquid (bmim)PF 6 are immobilized inside the cavities of reversed-phase silica gel . [ 16 ] In this way the reaction proceeds in water and the catalyst is reusable. In the Heck oxyarylation modification the palladium substituent in the syn-addition intermediate is displaced by a hydroxyl group and the reaction product contains a dihydrofuran ring. [ 17 ] In the amino-Heck reaction a nitrogen to carbon bond is formed. In one example, [ 18 ] an oxime with a strongly electron-withdrawing group reacts intramolecularly with the end of a diene to form a pyridine compound. The catalyst is tetrakis(triphenylphosphine)palladium(0) and the base is triethylamine .
https://en.wikipedia.org/wiki/Heck_reaction
In mathematics , the Hecke algebra is the algebra generated by Hecke operators , which are named after Erich Hecke . The algebra is a commutative ring . [ 1 ] [ 2 ] In the classical elliptic modular form theory, the Hecke operators T n with n coprime to the level acting on the space of cusp forms of a given weight are self-adjoint with respect to the Petersson inner product . [ 3 ] Therefore, the spectral theorem implies that there is a basis of modular forms that are eigenfunctions for these Hecke operators. Each of these eigenforms possesses an Euler product . More precisely, its Mellin transform is a Dirichlet series that has an Euler product in which the local factor for each prime p is the reciprocal of the Hecke polynomial , a quadratic polynomial in p − s {\displaystyle p^{-s}} . [ 4 ] [ 5 ] In the case treated by Mordell, the space of cusp forms of weight 12 with respect to the full modular group is one-dimensional. It follows that the Ramanujan form has an Euler product and establishes the multiplicativity of τ ( n ). [ 6 ] The classical Hecke algebra has been generalized to other settings, such as the Hecke algebra of a locally compact group and the spherical Hecke algebra , which arise when modular forms and other automorphic forms are viewed using adelic groups . [ 7 ] These play a central role in the Langlands correspondence . [ 8 ] The derived Hecke algebra is a further generalization of Hecke algebras to derived functors . [ 8 ] [ 9 ] [ 10 ] It was introduced by Peter Schneider in 2015, who, together with Rachel Ollivier , used it to study the p -adic Langlands correspondence. [ 8 ] [ 9 ] [ 10 ] [ 11 ] It is the subject of several conjectures on the cohomology of arithmetic groups by Akshay Venkatesh and his collaborators. [ 8 ] [ 10 ] [ 12 ] [ 13 ] [ 14 ]
https://en.wikipedia.org/wiki/Hecke_algebra
In number theory , a Hecke character is a generalisation of a Dirichlet character , introduced by Erich Hecke to construct a class of L -functions larger than Dirichlet L -functions , and a natural setting for the Dedekind zeta-functions and certain others which have functional equations analogous to that of the Riemann zeta-function . A Hecke character is a character of the idele class group of a number field or global function field . It corresponds uniquely to a character of the idele group which is trivial on principal ideles , via composition with the projection map. This definition depends on the definition of a character, which varies slightly between authors: It may be defined as a homomorphism to the non-zero complex numbers (also called a "quasicharacter"), or as a homomorphism to the unit circle in C {\displaystyle \mathbb {C} } ("unitary"). Any quasicharacter (of the idele class group) can be written uniquely as a unitary character times a real power of the norm, so there is no big difference between the two definitions. The conductor of a Hecke character χ {\displaystyle \chi } is the largest ideal m {\displaystyle {\mathfrak {m}}} such that χ {\displaystyle \chi } is a Hecke character mod m {\displaystyle {\mathfrak {m}}} . Here we say that χ {\displaystyle \chi } is a Hecke character mod m {\displaystyle {\mathfrak {m}}} if χ {\displaystyle \chi } (considered as a character on the idele group) is trivial on the group of finite ideles whose every ν {\displaystyle \nu } -adic component lies in 1 + m O ν {\displaystyle 1+{\mathfrak {m}}O_{\nu }} . A Größencharakter (often written Grössencharakter, Grossencharacter, etc.), origin of a Hecke character, going back to Hecke , is defined in terms of a character on the group of fractional ideals . For a number field K {\displaystyle K} , let m = m f m ∞ {\displaystyle {\mathfrak {m}}={\mathfrak {m}}_{f}{\mathfrak {m}}_{\infty }} be a K {\displaystyle K} - modulus , with m f {\displaystyle {\mathfrak {m}}_{f}} , the "finite part", being an integral ideal of K {\displaystyle K} and m ∞ {\displaystyle {\mathfrak {m}}_{\infty }} , the "infinite part", being a (formal) product of real places of K {\displaystyle K} . Let I m {\displaystyle I_{\mathfrak {m}}} denote the group of fractional ideals of K {\displaystyle K} relatively prime to m f {\displaystyle {\mathfrak {m}}_{f}} and let P m {\displaystyle P_{\mathfrak {m}}} denote the subgroup of principal fractional ideals ( a ) {\displaystyle (a)} where a {\displaystyle a} is near 1 {\displaystyle 1} at each place of m {\displaystyle {\mathfrak {m}}} in accordance with the multiplicities of its factors. That is, for each finite place ν {\displaystyle \nu } in m f {\displaystyle {\mathfrak {m}}_{f}} , the order o r d ν ( a − 1 ) {\displaystyle ord_{\nu }(a-1)} is at least as large as the exponent for ν {\displaystyle \nu } in m f {\displaystyle {\mathfrak {m}}_{f}} , and a {\displaystyle a} is positive under each real embedding in m ∞ {\displaystyle {\mathfrak {m}}_{\infty }} . 
A Größencharakter with modulus m {\displaystyle {\mathfrak {m}}} is a group homomorphism from I m {\displaystyle I_{\mathfrak {m}}} into the nonzero complex numbers such that on ideals ( a ) {\displaystyle (a)} in P m {\displaystyle P_{\mathfrak {m}}} its value is equal to the value at a {\displaystyle a} of a continuous homomorphism to the nonzero complex numbers from the product of the multiplicative groups of all Archimedean completions of K {\displaystyle K} where each local component of the homomorphism has the same real part (in the exponent). (Here we embed a {\displaystyle a} into the product of Archimedean completions of K {\displaystyle K} using embeddings corresponding to the various Archimedean places on K {\displaystyle K} .) Thus a Größencharakter may be defined on the ray class group modulo m {\displaystyle {\mathfrak {m}}} , which is the quotient I m / P m {\displaystyle I_{\mathfrak {m}}/P_{\mathfrak {m}}} . Strictly speaking, Hecke made the stipulation about behavior on principal ideals for those admitting a totally positive generator. So, in terms of the definition given above, he really only worked with moduli where all real places appeared. The role of the infinite part m ∞ is now subsumed under the notion of an infinity-type. A Hecke character and a Größencharakter are essentially the same notion with a one-to-one correspondence [ how? ] . The ideal definition is much more complicated than the idelic one, and Hecke's motivation for his definition was to construct L -functions (sometimes referred to as Hecke L -functions ) [ 1 ] that extend the notion of a Dirichlet L -function from the rationals to other number fields. For a Größencharakter χ, its L -function is defined to be the Dirichlet series carried out over integral ideals relatively prime to the modulus m {\displaystyle {\mathfrak {m}}} of the Größencharakter. Here N ( I ) {\displaystyle N(I)} denotes the ideal norm . The common real part condition governing the behavior of Größencharakter on the subgroups P m {\displaystyle P_{\mathfrak {m}}} implies these Dirichlet series are absolutely convergent in some right half-plane. Hecke proved these L -functions have a meromorphic continuation to the whole complex plane, being analytic except for a simple pole of order 1 at ' s = 1 {\displaystyle s=1} when the character is trivial. For primitive Größencharakter (defined relative to a modulus in a similar manner to primitive Dirichlet characters), Hecke showed these L -functions satisfy a functional equation relating the values of the L -function of a character and the L -function of its complex conjugate character. Consider a character ψ {\displaystyle \psi } of the idele class group, taken to be a map into the unit circle which is 1 on principal ideles and on an exceptional finite set S {\displaystyle S} containing all infinite places. Then ψ {\displaystyle \psi } generates a character χ {\displaystyle \chi } of the ideal group I S {\displaystyle I^{S}} , which is the free abelian group on the prime ideals not in S {\displaystyle S} . [ 2 ] Take a uniformising element π {\displaystyle \pi } for each prime p {\displaystyle {\mathfrak {p}}} not in S {\displaystyle S} and define a map Π {\displaystyle \Pi } from I S {\displaystyle I^{S}} to idele classes by mapping each p {\displaystyle {\mathfrak {p}}} to the class of the idele which is π {\displaystyle \pi } in the p {\displaystyle {\mathfrak {p}}} coordinate and 1 {\displaystyle 1} everywhere else. 
Let χ {\displaystyle \chi } be the composite of Π {\displaystyle \Pi } and ψ {\displaystyle \psi } . Then χ {\displaystyle \chi } is well-defined as a character on the ideal group. [ 3 ] In the opposite direction, given an admissible character χ {\displaystyle \chi } of I S {\displaystyle I^{S}} there corresponds a unique idele class character ψ {\displaystyle \psi } . [ 4 ] Here admissible refers to the existence of a modulus m {\displaystyle {\mathfrak {m}}} based on the set S {\displaystyle S} such that the character χ {\displaystyle \chi } evaluates to 1 {\displaystyle 1} on the ideals which are 1 mod m {\displaystyle {\mathfrak {m}}} . [ 5 ] The characters are 'big' in the sense that the infinity-type when present non-trivially means these characters are not of finite order. The finite-order Hecke characters are all, in a sense, accounted for by class field theory : their L -functions are Artin L -functions , as Artin reciprocity shows. But even a field as simple as the Gaussian field has Hecke characters that go beyond finite order in a serious way (see the example below). Later developments in complex multiplication theory indicated that the proper place of the 'big' characters was to provide the Hasse–Weil L -functions for an important class of algebraic varieties (or even motives ). Hecke's original proof of the functional equation for L ( s ,χ) used an explicit theta-function . John Tate 's 1950 Princeton doctoral dissertation, written under the supervision of Emil Artin , applied Pontryagin duality systematically, to remove the need for any special functions. A similar theory was independently developed by Kenkichi Iwasawa which was the subject of his 1950 ICM talk. A later reformulation in a Bourbaki seminar by Weil 1966 showed that parts of Tate's proof could be expressed by distribution theory : the space of distributions (for Schwartz–Bruhat test functions ) on the adele group of K transforming under the action of the ideles by a given χ has dimension 1. An algebraic Hecke character is a Hecke character taking algebraic values: they were introduced by Weil in 1947 under the name type A 0 . Such characters occur in class field theory and the theory of complex multiplication . [ 6 ] Indeed let E be an elliptic curve defined over a number field F with complex multiplication by the imaginary quadratic field K , and suppose that K is contained in F . Then there is an algebraic Hecke character χ for F , with exceptional set S the set of primes of bad reduction of E together with the infinite places. This character has the property that for a prime ideal p of good reduction , the value χ( p ) is a root of the characteristic polynomial of the Frobenius endomorphism . As a consequence, the Hasse–Weil zeta function for E is a product of two Dirichlet series, for χ and its complex conjugate. [ 7 ]
https://en.wikipedia.org/wiki/Hecke_character
The Heckscher–Ohlin theorem is one of the four critical theorems of the Heckscher–Ohlin model , developed by Swedish economist Eli Heckscher and Bertil Ohlin (his student). In the two-factor case, it states: "A capital-abundant country will export the capital-intensive good, while the labor-abundant country will export the labor-intensive good." The critical assumption of the Heckscher–Ohlin model is that the two countries are identical, except for the difference in resource endowments. This also implies that the aggregate preferences are the same. The relative abundance in capital will cause the capital-abundant country to produce the capital-intensive good more cheaply than the labor -abundant country, and vice versa. Initially, when the countries are not trading, the capital-intensive good is therefore relatively cheap in the capital-abundant country and the labor-intensive good is relatively cheap in the labor-abundant country. Once trade is allowed, profit -seeking firms will move their products to the markets that have (temporarily) higher prices. As a result, the capital-abundant country exports the capital-intensive good and the labor-abundant country exports the labor-intensive good. The Leontief paradox , presented by Wassily Leontief in 1951, [ 1 ] found that the U.S. (the most capital-abundant country in the world by any criterion) exported labor-intensive commodities and imported capital-intensive commodities, in apparent contradiction with the Heckscher–Ohlin theorem. However, if labor is separated into two distinct factors, skilled labor and unskilled labor, the Heckscher–Ohlin theorem is more accurate. The U.S. tends to export skilled-labor-intensive goods, and tends to import unskilled-labor-intensive goods.
https://en.wikipedia.org/wiki/Heckscher–Ohlin_theorem
The Heck–Matsuda (HM) reaction is an organic reaction and a type of palladium-catalysed arylation of olefins that uses arenediazonium salts as an alternative to aryl halides and triflates . [ 1 ] [ 2 ] [ 3 ] The use of arenediazonium salts presents some advantages over traditional aryl halide electrophiles ; for example, phosphine ligands are not required, which removes the need for anaerobic conditions and makes the reaction more practical and easier to handle. Additionally, the reaction can be performed with or without a base and is often faster than traditional Heck protocols . [ 4 ] [ 2 ] [ 3 ] Allylic alcohols , conjugated alkenes , unsaturated heterocycles and unactivated alkenes are capable of being arylated with arenediazonium salts using simple catalysts such as palladium acetate (Pd(OAc) 2 ) or tris(dibenzylideneacetone)dipalladium(0) (Pd 2 dba 3 ) at room temperature in air, and in benign and conventional solvents. [ 1 ] In addition to the intermolecular variant of the HM reaction, intramolecular cyclization processes have also been developed for the construction of a range of oxygen and nitrogen heterocycles. [ 1 ] The catalytic cycle for the Heck–Matsuda arylation reaction has four main steps: oxidative addition , migratory insertion or carbopalladation , syn β-elimination and reductive elimination . The proposed Heck catalytic cycle involving cationic palladium with diazonium salts was reinforced by studies with mass spectrometry (ESI) by Correia and co-workers. [ 1 ] These results also show the complex interactions that occur in the coordination sphere of palladium during the Heck reaction with arenediazonium salts. A related reaction is the Meerwein arylation , which predates the Heck reaction. The Meerwein arylation often uses copper salts, but may in some cases be done without a transition metal.
https://en.wikipedia.org/wiki/Heck–Matsuda_reaction
Hector Martin Cantero (born September 9, 1990), also known as marcan , is a Spanish security hacker and former lead developer on the Asahi Linux project. [ 1 ] He is also known for hacking multiple PlayStation generations, the Wii and other devices. [ 2 ] Martin went to the American School of Bilbao (Spain), where he received his primary and secondary education. [ 3 ] Since 2011, he has been an official staff volunteer at Euskal Encounter, Gipuzkoa Encounter and Araba Encounter LAN parties . He is the coordinator of the Free Software area, where he organizes the "Hack It / Solve It" competition (a cybersecurity challenge known as capture the flag ) and the "AI Contest" competition. [ 4 ] He has been part of fail0verflow (formerly known as Team Twiizers), where he was responsible for reverse engineering and hacking the Wii . [ 5 ] He was the first to write an open-source driver for the Microsoft Kinect [ 6 ] [ 7 ] by reverse engineering, [ 8 ] for which he was widely credited. [ 9 ] [ 10 ] Sony sued him and others for hacking the PlayStation 3; the case was eventually settled out of court. [ 11 ] [ 12 ] In 2016, he ported Linux to the PlayStation 4 and demonstrated this at the 33rd Chaos Communication Congress by running Steam inside Linux. [ 13 ] He wrote the usbmuxd tool for synchronizing data from iPhones to Linux computers. [ 14 ] In 2021, Martin founded the Asahi Linux project, an effort to port Linux to the new Apple silicon -powered Macs . While reverse engineering Apple's hardware, Martin discovered the " M1racles " security vulnerability on the Apple M1 processor. [ 15 ] [ 16 ] On 7 February 2025, Martin stepped down from directly working on the Linux kernel over a dispute regarding Rust for Linux . [ 18 ] On 14 February 2025, he resigned as lead developer of the Asahi Linux project. [ 17 ]
https://en.wikipedia.org/wiki/Hector_Martin_(hacker)
The Hector Medal , formerly known as the Hector Memorial Medal , [ 1 ] is a science award given by the Royal Society Te Apārangi in memory of Sir James Hector to researchers working in New Zealand. It is awarded annually in rotation for different sciences – currently there are three: chemical sciences; physical sciences; mathematical and information sciences. It is given to a researcher who "has undertaken work of great scientific or technological merit and has made an outstanding contribution to the advancement of the particular branch of science." [ 2 ] It was previously rotated through more fields of science – in 1918 they were: botany, chemistry, ethnology, geology, physics (including mathematics and astronomy), zoology (including animal physiology). [ 1 ] For a few years it was awarded biennially – it was not awarded in 2000, 2002 or 2004. [ 3 ] In 1991 it was overtaken by the Rutherford Medal as the highest award given by the Royal Society of New Zealand. [ 4 ] The obverse of the medal bears the head of James Hector and the reverse a Māori snaring a huia . [ 5 ] [ 6 ] The last confirmed sighting of a living huia predates the award of the medal by three years. [ 7 ]
https://en.wikipedia.org/wiki/Hector_Medal
A heddle or heald is an integral part of a loom . Each thread in the warp passes through a heddle, [ 1 ] which is used to separate the warp threads for the passage of the weft . [ 1 ] [ 2 ] The typical heddle is made of cord or wire and is suspended on a shaft of a loom . Each heddle has an eye in the center where the warp is threaded through. [ 3 ] As there is one heddle for each thread of the warp, there can be near a thousand heddles used for fine or wide warps. A handwoven tea-towel will generally have between 300 and 400 warp threads [ 4 ] and thus use that many heddles. In weaving, the warp threads are moved up or down by the shaft. This is achieved because each thread of the warp goes through a heddle on a shaft. When the shaft is raised the heddles are too, and thus the warp threads threaded through the heddles are raised. Heddles can be either equally or unequally distributed on the shafts, depending on the pattern to be woven. [ 1 ] In a plain weave or twill , for example, the heddles are equally distributed. The warp is threaded through heddles on different shafts in order to obtain different weave structures. For a plain weave on a loom with two shafts, for example, the first thread would go through the first heddle on the first shaft, and then the next thread through the first heddle on the second shaft. The third warp thread would be threaded through the second heddle on the first shaft, and so on. In this manner the heddles allow for the grouping of the warp threads into two groups, one group that is threaded through heddles on the first shaft, and the other on the second shaft. While the majority of heddles are as described, this style of heddle has derived from older styles, several of which are still in use. Rigid heddle looms , for example, instead of having one heddle for each thread, have a shaft with the 'heddles' fixed, and all threads go through every shaft. Within wire heddles there is a large variety in quality. Heddles should have a smooth eye, with no sharp edges to either catch or fray (and thus weaken) the warp. The warp must be able to slide through the heddle without impairment. The heddle should also be light and not bulky. There are three common types of metal heddles: wire, inserted eye, and flat steel. The inserted eye are considered to be the best, as they have a smooth eye with no rough ends to catch the warp. Wire heddles are second in quality, followed by the flat steel. Wire heddles look much like the inserted eye heddles, but where in the inserted eye there is a circle of metal for the eye, the wire ones are simply twisted at the top and bottom. The flat metal heddles are considered the poorest in quality as they are heavier and bulkier, as well as not being as smooth. They are a flat piece of steel, with the ends rotated slightly so that the flat side is at an angle of 45 degrees to the shaft. The eye is simply a hole cut in the middle of the piece of metal. Traditional heddles were made of cord. However, cord deteriorates with time and creates friction between the warp and the heddle that can damage the warp. Today, traditional cord heddles are mainly used by historical reenactors . A very simple string heddle can be made with a series of five knots in a doubled length of cord, which creates five loops. Of these loops, the important ones are the two loops on the ends and the loop in the center. 
The loops on the ends are used to stretch the heddle between the top and bottom bars of a shaft and are typically just large enough for the heddle to slide along the shaft. The center loop is the eye through which a warp thread is passed and is placed in the center of the heddle. String heddles can also be crocheted , and come in many different forms. Some modern hand weavers use machine-crocheted polyester heddles. These synthetic heddles minimize some of the problems with traditional knotted string heddles. They are used as an alternative to metal heddles to lessen the weight of the shafts. [ 5 ] Inkle loom heddles are generally made of string and consist of a simple loop. Alternating warp threads pass through a heddle, as in a rigid heddle loom. Tapestry loom heddles are generally made of string. They consist of a loop of string with an eye at one end for the warp thread and a loop at the other for attaching to a heddle bar. See Loom#Heddle-bar . A repair heddle can be used if a heddle breaks, which is rare, or when the loom has been warped incorrectly. If the weaver finds a mistake in the pattern, instead of rethreading all of the threads, a repair heddle can be slipped onto the shaft in the correct location. Thus repair heddles have a method to open the bottom and top loop that holds them onto the shaft. Repair heddles can save a lot of time in fixing a mistake, however they are bulky, in general, and catch on the other heddles. In rigid heddle looms there is typically a single shaft, with the heddles fixed in place in the shaft. The warp threads pass alternately through a heddle and through a space between the heddles, so that raising the shaft will raise half the threads (those passing through the heddles), and lowering the shaft will lower the same threads—the threads passing through the spaces between the heddles remain in place. Rigid heddles are thus very different from the heddle in common use, though the single heddle derived from the rigid heddle. The advantage of non-rigid heddles is that the weaver has more freedom, and can create a wider variety of fabrics. Rigid heddle looms resemble the standard floor loom in appearance. Single and double heddle looms are types of rigid heddle loom, in that the heddles are all together. Heddles are normally suspended above the loom. The weaver operates them by pedals and works while seated. [ 6 ] Among hand woven African textiles , single-heddle looms are in wide use among weaving regions of Africa. Mounting position varies according to local custom. Double-heddle looms are used in West Africa, Ethiopia and in Madagascar for the production of lamba cloth. [ 6 ]
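The threading patterns described earlier in this article (plain weave on two shafts, or a straight draw across more shafts) amount to a simple modular assignment of warp threads to shafts. The Python sketch below is only an illustration of that idea; the function and its parameters are hypothetical and not taken from any weaving software.

```python
def threading(num_threads: int, num_shafts: int = 2) -> list[int]:
    """Assign each warp thread to a shaft in a straight draw:
    thread 0 -> shaft 1, thread 1 -> shaft 2, ..., then repeat from shaft 1."""
    return [(i % num_shafts) + 1 for i in range(num_threads)]

# Plain weave on two shafts: threads alternate between shaft 1 and shaft 2.
print(threading(8))                 # [1, 2, 1, 2, 1, 2, 1, 2]
# Straight draw on four shafts, as might be used for a twill.
print(threading(8, num_shafts=4))   # [1, 2, 3, 4, 1, 2, 3, 4]
```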
https://en.wikipedia.org/wiki/Heddle
A hedonometer or hedonimeter is a device used to gauge happiness or pleasure . Conceived of at least as early as 1880, [ 1 ] the term was used in 1881 by the economist Francis Ysidro Edgeworth to describe "an ideally perfect instrument, a psychophysical machine, continually registering the height of pleasure experienced by an individual." [ 2 ] More recently, it has been used to refer to a tool developed by Peter Dodds and Chris Danforth to gauge the valence of various corpora, including historical State of the Union addresses, song lyrics, and online tweets and blogs . [ 3 ] [ 4 ] [ 5 ] It is operated out of the University of Vermont (UVM), and has been in use since 2008. [ 6 ] A version of the tool is available at hedonometer.org, which they call a sort of " Dow Jones Index of Happiness", [ 7 ] and hope will be used by government officials in conjunction with other metrics as a gauge of the population's well-being. [ 8 ] Computer scientists trained the hedonometer to recognize the emotion behind data such as tweets using sentiment analysis techniques. Danforth preferred a lexicon approach, which measures the weight of each word, because of the energy required by neural nets. [ 9 ] As of 2020, the hedonometer at UVM scrapes about 50 million tweets each day. Using sentiment analysis , the hedonometer takes the emotional temperature of the words published by users of various platforms. [ 6 ]
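The lexicon approach mentioned above can be illustrated in a few lines of code. The sketch below is a minimal, hypothetical example of word-level sentiment averaging; it is not the actual hedonometer code or its word list, and the scores are invented placeholders.

```python
# Hypothetical word weights on a 1-9 happiness scale (placeholder values only).
HAPPINESS_LEXICON = {
    "happy": 8.3, "love": 8.4, "laughter": 8.5,
    "sad": 2.4, "war": 1.8, "terrible": 2.0,
}

def happiness_score(text):
    """Average the lexicon weight of every scored word in the text; None if no word is scored."""
    words = text.lower().split()
    scores = [HAPPINESS_LEXICON[w] for w in words if w in HAPPINESS_LEXICON]
    return sum(scores) / len(scores) if scores else None

print(happiness_score("love and laughter"))   # high average score
print(happiness_score("terrible war"))        # low average score
```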
https://en.wikipedia.org/wiki/Hedonometer
In mathematics , Heegner's lemma is a lemma used by Kurt Heegner in his paper on the class number problem . His lemma states that if y 2 = a 4 x 4 + a 3 x 3 + a 2 x 2 + a 1 x + a 0 {\displaystyle y^{2}=a_{4}x^{4}+a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}} is a curve over a field with a 4 {\displaystyle a_{4}} not a square, then it has a solution if it has a solution in an extension of odd degree.
https://en.wikipedia.org/wiki/Heegner's_lemma
Heerema Marine Contractors (HMC) is a contractor headquartered in the Netherlands, most notable for operating three of the largest crane vessels in the offshore industry. [ 1 ] Heerema Marine Contractors was formed in 1948 by Pieter Schelte Heerema as a small construction company providing oilfield platforms in Venezuela . In the 1960s the company focused on the North Sea offshore developments. The company developed crane vessels to lift large offshore platforms and modules. The ship-shaped crane vessel Challenger was equipped to lift 800 t. [ 2 ] The need for large stable crane vessels to operate in the North Sea environment led the company to develop the first large semi-submersible crane vessels. In 1978, HMC commissioned Mitsui to construct the two sister semi-submersible crane vessels, DCV Balder and SSCV Hermod . These vessels could lift 5,400 tonnes with the twin cranes, and were later upgraded to 8,200 tonnes. [ 2 ] In 1988 HMC formed a joint venture with McDermott called HeereMac. [ 3 ] The SSCV Thialf was added to the HeereMac fleet, and upon the split of the companies in December 1997, Heerema took ownership of the Thialf , the largest deep-water construction vessel, capable of a tandem lift of 14,200 t (15,600 short tons). The DCV Balder was affected by a flooding incident in 2006 and was put out of service for a few months. [ 4 ] Since 2022, the company has been led by CEO Philippe Barril. [ 5 ] Heerema presently owns and operates several crane vessels , plus a number of barges.
https://en.wikipedia.org/wiki/Heerema_Marine_Contractors
The Hegedus indole synthesis is a name reaction in organic chemistry that allows for the generation of indoles through palladium (II)-mediated oxidative cyclization of ortho-alkenyl anilines. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] The reaction can still take place for tosyl -protected amines. [ 4 ] 2-Allylaniline can be converted to 2-methylindole using the Hegedus indole synthesis. [ 1 ]
https://en.wikipedia.org/wiki/Hegedus_indole_synthesis
Hegerfeldt's theorem is a no-go theorem that demonstrates the incompatibility of the existence of spatially localized discrete particles with the combination of the principles of quantum mechanics and special relativity . A crucial requirement is that the states of a single particle have positive energy. It has been used to support the conclusion that reality must be described solely in terms of field -based formulations. [ 1 ] [ 2 ] However, it is possible to construct localization observables in terms of positive-operator valued measures that are compatible with the restrictions imposed by the Hegerfeldt theorem. [ 3 ] Specifically, Hegerfeldt's theorem refers to a free particle whose time evolution is determined by a positive Hamiltonian . If the particle is initially confined in a bounded spatial region, then the spatial region in which the probability of finding the particle does not vanish expands superluminally, thus violating Einstein causality by exceeding the speed of light . [ 4 ] [ 5 ] Boundedness of the initial localization region can be weakened to a suitable exponential decay of the localization probability at the initial time. The localization threshold is provided by twice the Compton length of the particle. As a matter of fact, the theorem rules out the Newton–Wigner localization . The theorem was developed by Gerhard C. Hegerfeldt and first published in 1974. [ 6 ] [ 7 ] [ 8 ]
https://en.wikipedia.org/wiki/Hegerfeldt's_theorem
Ḥeḥ ( ḥḥ , also Huh , Hah , Hauh , Huah , and Hehu [ citation needed ] ) was the personification of infinity or eternity in the Ogdoad in ancient Egyptian religion . [ 1 ] His name originally meant "flood", referring to the watery chaos Nu that the Egyptians believed existed before the creation of the world . [ 2 ] The Egyptians envisioned this chaos as infinite, in contrast with the finite created world, so Heh personified this aspect of the primordial waters. [ 3 ] Heh's female counterpart and consort was known as Hauhet , which is simply the feminine form of his name. [ 1 ] Like the other concepts in the Ogdoad, his male form was often depicted as a frog , or a frog-headed human, and his female form as a snake or snake-headed human. The frog head symbolised fertility, creation, and regeneration, and was also possessed by the other Ogdoad males Kek, Amun, and Nun. [ 4 ] The other common representation depicts him crouching, holding a palm stem in each hand (or just one), [ 5 ] sometimes with a palm stem in his hair, as palm stems represented long life to the Egyptians, the years being represented by notches on it. Depictions of this form also had a shen ring at the base of each palm stem, which represented infinity . Depictions of Heh were also used in hieroglyphs to represent one million , which was essentially considered equivalent to infinity in Ancient Egyptian mathematics . Thus this deity is also known as the "god of millions of years". The primary meaning of the Egyptian word ḥeḥ was "million" or "millions"; a personification of this concept, Ḥeḥ, was adopted as the Egyptian god of infinity. With his female counterpart Ḥauḥet (or Ḥeḥut), Ḥeḥ represented one of the four god-goddess pairs comprising the Ogdoad , a pantheon of eight primeval deities whose worship was centred at Hermopolis Magna . The mythology of the Ogdoad describes its eight members, Heh and Hauhet, Nu and Naunet , Amun and Amaunet , and Kuk and Kauket , coming together in the cataclysmic event that gives rise to the sun (and its deific personification, Atum ). [ 6 ] Heh sometimes helps Shu , a god associated with air, in supporting the sky goddess Nut . [ 7 ] In the Book of the Heavenly Cow , eight Heh gods are depicted together with Shu supporting Nut, who has taken the form of a cow. [ 8 ] The god Ḥeḥ was usually depicted anthropomorphically, as in the hieroglyphic character, as a male figure with divine beard and lappet wig. Normally kneeling (one knee raised), sometimes in a basket—the sign for "all", the god typically holds in each hand a notched palm branch (palm rib). (These were employed in the temples for ceremonial time-keeping, which use explains the use of the palm branch as the hieroglyphic symbol for rnp.t , "year"). [ 9 ] Occasionally, an additional palm branch is worn on the god's head. In Ancient Egyptian Numerology, Gods such as Heh were used to represent numbers in a decimal point system. Particularly, the number 1,000,000 is depicted in the hieroglyph of Heh, who is in his normal seated position. [ 10 ] The personified, somewhat abstract god of eternity Ḥeḥ possessed no known cult centre or sanctuary; rather, his veneration revolved around symbolism and personal belief. The god's image and its iconographic elements reflected the wish for millions of years of life or rule; as such, the figure of Ḥeḥ finds frequent representation in amulets, prestige items and royal iconography from the late Old Kingdom period onwards. Heh became associated with the King and his quest for longevity. 
For instance, he appears on the tomb of King Tutankhamen, in two cartouches, where he is crowned with a winged scarab beetle, symbolizing existence and a sun disk. The placement of Heh in relation to King Tutankhamen's corpse means he will be granting him these "millions of years" into the afterlife. [ 11 ]
https://en.wikipedia.org/wiki/Heh_(god)
HeiQ (German pronunciation: [ˈhaɪkju]) is the HeiQ Group. The parent company of the group, HeiQ Materials AG, is a Swiss specialty chemistry company headquartered in Zurich, Switzerland. It was founded in 2005 as a spin-off of the Swiss Federal Institute of Technology Zurich (ETH) . HeiQ produces and sells textile finishing agents and other auxiliaries . Its core business activity, however, is conducting joint research and development projects on textile finishing with consumer textile brands, such as those that produce and market apparel (e.g. Patagonia , Mammut , Hanes ) and home furnishings (e.g. Bekaert , IKEA ), or with textile producers, to achieve effects that are not currently on the market or not yet optimized for certain products. For the Deepwater Horizon oil spill that began in April 2010 in the Gulf of Mexico , HeiQ, TWE Group and Beyond Surface Technologies jointly developed an oil-absorbing, water-repelling, nonwoven fabric named Oilguard [ 1 ] [ 2 ] for oil relief efforts. [ 3 ] The product was intended for beach protection against oil spills and was applied to the shoreline, allowing the fabric to absorb crude oil while repelling seawater. This innovation was awarded the Swiss Technology Award (2010) [ 4 ] [ 5 ] and the Swiss Award (2013). [ 6 ] [ 7 ] The water-repelling property of Oilguard is achieved with a textile finish that creates the Lotus Effect on the surface of the non-woven fabric. The non-fluorinated finish makes the fabric water-repellent but not oil-repellent; the fabric therefore absorbs crude oil but not seawater. Due to the COVID-19 pandemic , HeiQ announced the launch of an "antiviral and antimicrobial textile treatment that was tested effective against coronavirus". [ 8 ] The HeiQ Group consists of HeiQ Materials AG and its subsidiaries in North Carolina (HeiQ ChemTex), Shanghai, Taiwan, Hong Kong, Portugal and Australia. [ 9 ]
https://en.wikipedia.org/wiki/HeiQ_Materials_AG
A height function is a function that quantifies the complexity of mathematical objects. In Diophantine geometry , height functions quantify the size of solutions to Diophantine equations and are typically functions from a set of points on algebraic varieties (or a set of algebraic varieties) to the real numbers . [ 1 ] For instance, the classical or naive height over the rational numbers is typically defined to be the maximum of the numerators and denominators of the coordinates (e.g. 7 for the coordinates (3/7, 1/2) ), but in a logarithmic scale . Height functions allow mathematicians to count objects, such as rational points , that are otherwise infinite in quantity. For instance, the set of rational numbers of naive height (the maximum of the numerator and denominator when expressed in lowest terms ) below any given constant is finite despite the set of rational numbers being infinite. [ 2 ] In this sense, height functions can be used to prove asymptotic results such as Baker's theorem in transcendental number theory , which was proved by Alan Baker ( 1966 , 1967a , 1967b ). In other cases, height functions can distinguish some objects based on their complexity. For instance, the subspace theorem proved by Wolfgang M. Schmidt ( 1972 ) demonstrates that points of small height (i.e. small complexity) in projective space lie in a finite number of hyperplanes and generalizes Siegel's theorem on integral points and solution of the S-unit equation . [ 3 ] Height functions were crucial to the proofs of the Mordell–Weil theorem and Faltings's theorem by Weil ( 1929 ) and Faltings ( 1983 ) respectively. Several outstanding unsolved problems about the heights of rational points on algebraic varieties, such as the Manin conjecture and Vojta's conjecture , have far-reaching implications for problems in Diophantine approximation , Diophantine equations , arithmetic geometry , and mathematical logic . [ 4 ] [ 5 ] An early form of height function was proposed by Giambattista Benedetti (c. 1563), who argued that the consonance of a musical interval could be measured by the product of its numerator and denominator (in reduced form); see Giambattista Benedetti § Music . Heights in Diophantine geometry were initially developed by André Weil and Douglas Northcott beginning in the 1920s. [ 6 ] Innovations in the 1960s were the Néron–Tate height and the realization that heights were linked to projective representations in much the same way that ample line bundles are in other parts of algebraic geometry . In the 1970s, Suren Arakelov developed Arakelov heights in Arakelov theory . [ 7 ] In 1983, Faltings developed his theory of Faltings heights in his proof of Faltings's theorem. [ 8 ] Classical or naive height is defined in terms of ordinary absolute value on homogeneous coordinates . It is typically a logarithmic scale and therefore can be viewed as being proportional to the "algebraic complexity" or number of bits needed to store a point. [ 2 ] It is typically defined to be the logarithm of the maximum absolute value of the vector of coprime integers obtained by multiplying through by a lowest common denominator . This may be used to define height on a point in projective space over Q , or of a polynomial, regarded as a vector of coefficients, or of an algebraic number, from the height of its minimal polynomial. [ 9 ] The naive height of a rational number x = p / q (in lowest terms) is H ( x ) = max { | p | , | q | } {\displaystyle H(x)=\max\{|p|,|q|\}} , with the logarithmic height being its logarithm. Therefore, the naive multiplicative and logarithmic heights of 4/10 are 5 and log(5) , for example.
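A direct computation makes the naive height concrete. The short Python sketch below (the function name is illustrative only) reduces a rational number to lowest terms and returns its multiplicative naive height, reproducing the 4/10 example above.

```python
from fractions import Fraction
from math import log

def naive_height(p: int, q: int) -> int:
    """Multiplicative naive height of p/q: max(|numerator|, |denominator|) in lowest terms."""
    x = Fraction(p, q)
    return max(abs(x.numerator), abs(x.denominator))

h = naive_height(4, 10)        # 4/10 reduces to 2/5, so the height is 5
print(h, log(h))               # 5 1.6094... (the logarithmic height, log 5)
```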
The naive height H of an elliptic curve E given by y 2 = x 3 + Ax + B is defined to be H(E) = log max(4| A | 3 , 27| B | 2 ) . The Néron–Tate height , or canonical height , is a quadratic form on the Mordell–Weil group of rational points of an abelian variety defined over a global field . It is named after André Néron , who first defined it as a sum of local heights, [ 11 ] and John Tate , who defined it globally in an unpublished work. [ 12 ] Let X be a projective variety over a number field K . Let L be a line bundle on X . One defines the Weil height on X with respect to L as follows. First, suppose that L is very ample . A choice of basis of the space Γ ( X , L ) {\displaystyle \Gamma (X,L)} of global sections defines a morphism ϕ from X to projective space, and for all points p on X , one defines h L ( p ) := h ( ϕ ( p ) ) {\displaystyle h_{L}(p):=h(\phi (p))} , where h is the naive height on projective space. [ 13 ] [ 14 ] For fixed X and L , choosing a different basis of global sections changes h L {\displaystyle h_{L}} , but only by a bounded function of p . Thus h L {\displaystyle h_{L}} is well-defined up to addition of a function that is O(1) . In general, one can write L as the difference of two very ample line bundles L 1 and L 2 on X and define h L := h L 1 − h L 2 , {\displaystyle h_{L}:=h_{L_{1}}-h_{L_{2}},} which again is well-defined up to O(1) . [ 13 ] [ 14 ] The Arakelov height on a projective space over the field of algebraic numbers is a global height function with local contributions coming from Fubini–Study metrics on the Archimedean fields and the usual metric on the non-Archimedean fields . [ 15 ] [ 16 ] It is the usual Weil height equipped with a different metric. [ 17 ] The Faltings height of an abelian variety defined over a number field is a measure of its arithmetic complexity. It is defined in terms of the height of a metrized line bundle . It was introduced by Faltings ( 1983 ) in his proof of the Mordell conjecture . For a polynomial P of degree n given by the height H ( P ) is defined to be the maximum of the magnitudes of its coefficients: [ 18 ] One could similarly define the length L ( P ) as the sum of the magnitudes of the coefficients: The Mahler measure M ( P ) of P is also a measure of the complexity of P . [ 19 ] The three functions H ( P ), L ( P ) and M ( P ) are related by the inequalities where ( n ⌊ n / 2 ⌋ ) {\displaystyle \scriptstyle {\binom {n}{\lfloor n/2\rfloor }}} is the binomial coefficient . One of the conditions in the definition of an automorphic form on the general linear group of an adelic algebraic group is moderate growth , which is an asymptotic condition on the growth of a height function on the general linear group viewed as an affine variety . [ 20 ] The height of an irreducible rational number x = p / q , q > 0 is | p | + q {\displaystyle |p|+q} (this function is used for constructing a bijection between N {\displaystyle \mathbb {N} } and Q {\displaystyle \mathbb {Q} } ). [ 21 ]
https://en.wikipedia.org/wiki/Height_function
Heiheionakeiki is a Polynesian constellation which mariners used to navigate to Tahiti . [ 1 ] It contains the seven main stars of the western constellation Orion . [ 2 ] As all of Hawaiian Airlines ' Airbus A330-200s are named for a constellation or star used by the ancient Polynesians for celestial navigation when making their voyages across the Pacific to Hawaii, the airline has named its seventh Airbus A330-200 (N386HA) aircraft after the constellation. [ 3 ]
https://en.wikipedia.org/wiki/Heiheionakeiki
A heijunka box is a visual scheduling tool used in heijunka , a method originally created by Toyota for achieving a smoother production flow . While heijunka is the smoothing of production, the heijunka box is the name of a specific tool used in achieving the aims of heijunka. The heijunka box is generally a wall schedule which is divided into a grid of boxes or a set of 'pigeon-holes'/rectangular receptacles. Each column of boxes represents a specific period of time; lines are drawn down the schedule/grid to visually break the schedule into columns for individual shifts, days, or weeks. Coloured cards representing individual jobs (referred to as kanban cards) are placed on the heijunka box to provide a visual representation of the upcoming production runs. The heijunka box makes it easy to see what types of jobs are queued for production and when they are scheduled. Workers on the process remove the kanban cards for the current period from the box in order to know what to do. These cards will be passed to another section when they process the related job. The heijunka box allows easy and visual control of a smoothed production schedule. A typical heijunka box has horizontal rows for each product and vertical columns for identical time intervals of production. In a typical example, the time interval is thirty minutes. Production control kanban are placed in the pigeon-holes provided by the box in proportion to the number of items to be built of a given product type during a time interval. In such an example, each time period might build one A and two Bs along with a mix of Cs, Ds and Es. What is clear from the box, from the simple repeating pattern of kanbans in each row, is that production of each of these products is smooth. This ensures that production capacity is kept under constant pressure, thereby eliminating many issues.
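The grid structure described above maps naturally onto a simple data structure: rows keyed by product, columns indexed by time slot, and each pigeon-hole holding the kanban cards for that slot. The Python sketch below is only a toy model of such a box (the class, product names and slot counts are invented for illustration), not any specific scheduling software.

```python
from collections import defaultdict

class HeijunkaBox:
    """Toy heijunka box: one pigeon-hole per (product, time slot) holding kanban card counts."""

    def __init__(self, slots: int):
        self.slots = slots
        self.box = defaultdict(lambda: [0] * slots)  # product -> cards per slot

    def load(self, product: str, cards_per_slot: int) -> None:
        """Level the schedule: place the same number of kanban cards in every slot."""
        self.box[product] = [cards_per_slot] * self.slots

    def pull(self, slot: int) -> dict:
        """Cards a worker removes for the current time interval."""
        return {p: cards[slot] for p, cards in self.box.items() if cards[slot]}

# Each 30-minute slot builds one A and two Bs, mirroring the example in the text.
box = HeijunkaBox(slots=16)
box.load("A", 1)
box.load("B", 2)
print(box.pull(0))   # {'A': 1, 'B': 2}
```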
https://en.wikipedia.org/wiki/Heijunka_box
In mathematical analysis , Heine's identity , named after Heinrich Eduard Heine [ 1 ] is a Fourier expansion of a reciprocal square root which Heine presented as 1 z − cos ⁡ ψ = 2 π ∑ m = − ∞ ∞ Q m − 1 2 ( z ) e i m ψ {\displaystyle {\frac {1}{\sqrt {z-\cos \psi }}}={\frac {\sqrt {2}}{\pi }}\sum _{m=-\infty }^{\infty }Q_{m-{\frac {1}{2}}}(z)e^{im\psi }} where [ 2 ] Q m − 1 2 {\displaystyle Q_{m-{\frac {1}{2}}}} is a Legendre function of the second kind, which has degree, m − 1 ⁄ 2 , a half-integer, and argument, z , real and greater than one. This expression can be generalized [ 3 ] for arbitrary half-integer powers as follows ( z − cos ⁡ ψ ) n − 1 2 = 2 π ( z 2 − 1 ) n 2 Γ ( 1 2 − n ) ∑ m = − ∞ ∞ Γ ( m − n + 1 2 ) Γ ( m + n + 1 2 ) Q m − 1 2 n ( z ) e i m ψ , {\displaystyle (z-\cos \psi )^{n-{\frac {1}{2}}}={\sqrt {\frac {2}{\pi }}}{\frac {(z^{2}-1)^{\frac {n}{2}}}{\Gamma ({\frac {1}{2}}-n)}}\sum _{m=-\infty }^{\infty }{\frac {\Gamma (m-n+{\frac {1}{2}})}{\Gamma (m+n+{\frac {1}{2}})}}Q_{m-{\frac {1}{2}}}^{n}(z)e^{im\psi },} where Γ {\displaystyle \scriptstyle \,\Gamma } is the Gamma function .
https://en.wikipedia.org/wiki/Heine's_identity
In real analysis the Heine–Borel theorem , named after Eduard Heine and Émile Borel , states: For a subset S {\displaystyle S} of Euclidean space R n {\displaystyle \mathbb {R} ^{n}} , the following two statements are equivalent: The history of what today is called the Heine–Borel theorem starts in the 19th century, with the search for solid foundations of real analysis. Central to the theory was the concept of uniform continuity and the theorem stating that every continuous function on a closed and bounded interval is uniformly continuous. Peter Gustav Lejeune Dirichlet was the first to prove this and implicitly he used the existence of a finite subcover of a given open cover of a closed interval in his proof. [ 1 ] He used this proof in his 1852 lectures, which were published only in 1904. [ 1 ] Later Eduard Heine , Karl Weierstrass and Salvatore Pincherle used similar techniques. Émile Borel in 1895 was the first to state and prove a form of what is now called the Heine–Borel theorem. His formulation was restricted to countable covers. Pierre Cousin (1895), Lebesgue (1898) and Schoenflies (1900) generalized it to arbitrary covers. [ 2 ] If a set is compact, then it must be closed. Let S {\displaystyle S} be a subset of R n {\displaystyle \mathbb {R} ^{n}} . Observe first the following: if a {\displaystyle a} is a limit point of S {\displaystyle S} , then any finite collection C {\displaystyle C} of open sets, such that each open set U ∈ C {\displaystyle U\in C} is disjoint from some neighborhood V U {\displaystyle V_{U}} of a {\displaystyle a} , fails to be a cover of S {\displaystyle S} . Indeed, the intersection of the finite family of sets V U {\displaystyle V_{U}} is a neighborhood W {\displaystyle W} of a {\displaystyle a} in R n {\displaystyle \mathbb {R} ^{n}} . Since a {\displaystyle a} is a limit point of S {\displaystyle S} , W {\displaystyle W} must contain a point x {\displaystyle x} in S {\displaystyle S} . This x ∈ S {\displaystyle x\in S} is not covered by the family C {\displaystyle C} , because every U {\displaystyle U} in C {\displaystyle C} is disjoint from V U {\displaystyle V_{U}} and hence disjoint from W {\displaystyle W} , which contains x {\displaystyle x} . If S {\displaystyle S} is compact but not closed, then it has a limit point a ∉ S {\displaystyle a\not \in S} . Consider a collection C ′ {\displaystyle C'} consisting of an open neighborhood N ( x ) {\displaystyle N(x)} for each x ∈ S {\displaystyle x\in S} , chosen small enough to not intersect some neighborhood V x {\displaystyle V_{x}} of a {\displaystyle a} . Then C ′ {\displaystyle C'} is an open cover of S {\displaystyle S} , but any finite subcollection of C ′ {\displaystyle C'} has the form of C {\displaystyle C} discussed previously, and thus cannot be an open subcover of S {\displaystyle S} . This contradicts the compactness of S {\displaystyle S} . Hence, every limit point of S {\displaystyle S} is in S {\displaystyle S} , so S {\displaystyle S} is closed. The proof above applies with almost no change to showing that any compact subset S {\displaystyle S} of a Hausdorff topological space X {\displaystyle X} is closed in X {\displaystyle X} . If a set is compact, then it is bounded. Let S {\displaystyle S} be a compact set in R n {\displaystyle \mathbb {R} ^{n}} , and U x {\displaystyle U_{x}} a ball of radius 1 centered at x ∈ R n {\displaystyle x\in \mathbb {R} ^{n}} . 
Then the set of all such balls centered at x ∈ S {\displaystyle x\in S} is clearly an open cover of S {\displaystyle S} , since ∪ x ∈ S U x {\displaystyle \cup _{x\in S}U_{x}} contains all of S {\displaystyle S} . Since S {\displaystyle S} is compact, take a finite subcover of this cover. This subcover is the finite union of balls of radius 1. Consider all pairs of centers of these (finitely many) balls (of radius 1) and let M {\displaystyle M} be the maximum of the distances between them. Then if C p {\displaystyle C_{p}} and C q {\displaystyle C_{q}} are the centers (respectively) of unit balls containing arbitrary p , q ∈ S {\displaystyle p,q\in S} , the triangle inequality says: d ( p , q ) ≤ d ( p , C p ) + d ( C p , C q ) + d ( C q , q ) ≤ 1 + M + 1 = M + 2. {\displaystyle d(p,q)\leq d(p,C_{p})+d(C_{p},C_{q})+d(C_{q},q)\leq 1+M+1=M+2.} So the diameter of S {\displaystyle S} is bounded by M + 2 {\displaystyle M+2} . Lemma: A closed subset of a compact set is compact. Let K {\displaystyle K} be a closed subset of a compact set T {\displaystyle T} in R n {\displaystyle \mathbb {R} ^{n}} and let C K {\displaystyle C_{K}} be an open cover of K {\displaystyle K} . Then U = R n ∖ K {\displaystyle U=\mathbb {R} ^{n}\setminus K} is an open set and C T = C K ∪ { U } {\displaystyle C_{T}=C_{K}\cup \{U\}} is an open cover of T {\displaystyle T} . Since T {\displaystyle T} is compact, then C T {\displaystyle C_{T}} has a finite subcover C T ′ {\displaystyle C_{T}'} , that also covers the smaller set K {\displaystyle K} . Since U {\displaystyle U} does not contain any point of K {\displaystyle K} , the set K {\displaystyle K} is already covered by C K ′ = C T ′ ∖ { U } {\displaystyle C_{K}'=C_{T}'\setminus \{U\}} , that is a finite subcollection of the original collection C K {\displaystyle C_{K}} . It is thus possible to extract from any open cover C K {\displaystyle C_{K}} of K {\displaystyle K} a finite subcover. If a set is closed and bounded, then it is compact. If a set S {\displaystyle S} in R n {\displaystyle \mathbb {R} ^{n}} is bounded, then it can be enclosed within an n {\displaystyle n} -box T 0 = [ − a , a ] n {\displaystyle T_{0}=[-a,a]^{n}} where a > 0 {\displaystyle a>0} . By the lemma above, it is enough to show that T 0 {\displaystyle T_{0}} is compact. Assume, by way of contradiction, that T 0 {\displaystyle T_{0}} is not compact. Then there exists an infinite open cover C {\displaystyle C} of T 0 {\displaystyle T_{0}} that does not admit any finite subcover. Through bisection of each of the sides of T 0 {\displaystyle T_{0}} , the box T 0 {\displaystyle T_{0}} can be broken up into 2 n {\displaystyle 2^{n}} sub n {\displaystyle n} -boxes, each of which has diameter equal to half the diameter of T 0 {\displaystyle T_{0}} . Then at least one of the 2 n {\displaystyle 2^{n}} sections of T 0 {\displaystyle T_{0}} must require an infinite subcover of C {\displaystyle C} , otherwise C {\displaystyle C} itself would have a finite subcover, by uniting together the finite covers of the sections. Call this section T 1 {\displaystyle T_{1}} . Likewise, the sides of T 1 {\displaystyle T_{1}} can be bisected, yielding 2 n {\displaystyle 2^{n}} sections of T 1 {\displaystyle T_{1}} , at least one of which must require an infinite subcover of C {\displaystyle C} . 
Continuing in like manner yields a decreasing sequence of nested $n$-boxes
$$T_0 \supset T_1 \supset T_2 \supset \ldots \supset T_k \supset \ldots,$$
where the side length of $T_k$ is $(2a)/2^k$, which tends to 0 as $k$ tends to infinity. Let us define a sequence $(x_k)$ such that each $x_k$ is in $T_k$. This sequence is Cauchy, so it must converge to some limit $L$. Since each $T_k$ is closed, and for each $k$ the sequence $(x_k)$ is eventually always inside $T_k$, we see that $L \in T_k$ for each $k$.

Since $C$ covers $T_0$, it has some member $U \in C$ such that $L \in U$. Since $U$ is open, there is an $n$-ball $B(L) \subseteq U$. For large enough $k$, one has $T_k \subseteq B(L) \subseteq U$, but then the infinite number of members of $C$ needed to cover $T_k$ can be replaced by just one, $U$, a contradiction. Thus, $T_0$ is compact. Since $S$ is closed and a subset of the compact set $T_0$, then $S$ is also compact (see the lemma above).

In general metric spaces, we have the following theorem: for a subset $S$ of a metric space $(X, d)$, the following two statements are equivalent: (1) $S$ is compact; (2) $S$ is complete and totally bounded.

The above follows directly from Jean Dieudonné, theorem 3.16.1, [5] which states that for a metric space $(X, d)$ the following three conditions are equivalent: (a) $X$ is compact; (b) every infinite sequence in $X$ has a cluster point in $X$; (c) $X$ is complete and totally bounded.

The Heine–Borel theorem does not hold as stated for general metric and topological vector spaces, and this gives rise to the necessity to consider special classes of spaces where this proposition is true. These spaces are said to have the Heine–Borel property.

A metric space $(X, d)$ is said to have the Heine–Borel property if each closed bounded [7] set in $X$ is compact. Many metric spaces fail to have the Heine–Borel property, such as the metric space of rational numbers (or indeed any incomplete metric space). Complete metric spaces may also fail to have the property; for instance, no infinite-dimensional Banach spaces have the Heine–Borel property (as metric spaces). Even more trivially, if the real line is not endowed with the usual metric, it may fail to have the Heine–Borel property. A metric space $(X, d)$ has a Heine–Borel metric which is Cauchy locally identical to $d$ if and only if it is complete, $\sigma$-compact, and locally compact. [8]

A topological vector space $X$ is said to have the Heine–Borel property [9] (R. E. Edwards uses the term boundedly compact space [10]) if each closed bounded [11] set in $X$ is compact. [12] No infinite-dimensional Banach spaces have the Heine–Borel property (as topological vector spaces).
But some infinite-dimensional Fréchet spaces do have the Heine–Borel property: for instance, the space $C^\infty(\Omega)$ of smooth functions on an open set $\Omega \subset \mathbb{R}^n$ [10] and the space $H(\Omega)$ of holomorphic functions on an open set $\Omega \subset \mathbb{C}^n$. [10] More generally, any quasi-complete nuclear space has the Heine–Borel property. All Montel spaces have the Heine–Borel property as well.
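A standard pair of counterexamples, included here for illustration, shows that neither hypothesis in the Euclidean statement can be dropped. The bounded but non-closed interval $(0, 1)$ has the open cover
$$\left\{ \left( \tfrac{1}{n}, 1 \right) : n = 2, 3, 4, \ldots \right\},$$
which admits no finite subcover: any finite subfamily has a smallest left endpoint $1/N$ and misses every point of $(0, 1/N]$. The closed but unbounded real line $\mathbb{R}$ has the open cover $\{(-n, n) : n \in \mathbb{N}\}$, no finite subfamily of which covers $\mathbb{R}$.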
https://en.wikipedia.org/wiki/Heine–Borel_theorem
In mathematics, the Heine–Cantor theorem states that a continuous function between two metric spaces is uniformly continuous if its domain is compact. The theorem is named after Eduard Heine and Georg Cantor.

Heine–Cantor theorem — If $f \colon M \to N$ is a continuous function between two metric spaces $M$ and $N$, and $M$ is compact, then $f$ is uniformly continuous.

An important special case of the Cantor theorem is that every continuous function from a closed bounded interval to the real numbers is uniformly continuous.

Suppose that $M$ and $N$ are two metric spaces with metrics $d_M$ and $d_N$, respectively. Suppose further that a function $f : M \to N$ is continuous and $M$ is compact. We want to show that $f$ is uniformly continuous, that is, for every positive real number $\varepsilon > 0$ there exists a positive real number $\delta > 0$ such that for all points $x, y$ in the function domain $M$, $d_M(x, y) < \delta$ implies that $d_N(f(x), f(y)) < \varepsilon$.

Consider some positive real number $\varepsilon > 0$. By continuity, for any point $x$ in the domain $M$, there exists some positive real number $\delta_x > 0$ such that $d_N(f(x), f(y)) < \varepsilon/2$ when $d_M(x, y) < \delta_x$, i.e., the fact that $y$ is within $\delta_x$ of $x$ implies that $f(y)$ is within $\varepsilon/2$ of $f(x)$.

Let $U_x$ be the open $\delta_x/2$-neighborhood of $x$, i.e. the set
$$U_x = \{ y \in M \mid d_M(x, y) < \delta_x/2 \}.$$
Since each point $x$ is contained in its own $U_x$, we find that the collection $\{U_x \mid x \in M\}$ is an open cover of $M$. Since $M$ is compact, this cover has a finite subcover $\{U_{x_1}, U_{x_2}, \ldots, U_{x_n}\}$ where $x_1, x_2, \ldots, x_n \in M$. Each of these open sets has an associated radius $\delta_{x_i}/2$. Let us now define $\delta = \min_{1 \le i \le n} \delta_{x_i}/2$, i.e. the minimum radius of these open sets. Since we have a finite number of positive radii, this minimum $\delta$ is well-defined and positive. We now show that this $\delta$ works for the definition of uniform continuity.

Suppose that $d_M(x, y) < \delta$ for any two $x, y$ in $M$. Since the sets $U_{x_i}$ form an open (sub)cover of our space $M$, we know that $x$ must lie within one of them, say $U_{x_i}$. Then we have that $d_M(x, x_i) < \frac{1}{2}\delta_{x_i}$. The triangle inequality then implies that
$$d_M(x_i, y) \le d_M(x_i, x) + d_M(x, y) < \tfrac{1}{2}\delta_{x_i} + \delta \le \tfrac{1}{2}\delta_{x_i} + \tfrac{1}{2}\delta_{x_i} = \delta_{x_i},$$
implying that $x$ and $y$ are both within $\delta_{x_i}$ of $x_i$. By definition of $\delta_{x_i}$, this implies that $d_N(f(x_i), f(x))$ and $d_N(f(x_i), f(y))$ are both less than $\varepsilon/2$. Applying the triangle inequality then yields the desired
$$d_N(f(x), f(y)) \le d_N(f(x), f(x_i)) + d_N(f(x_i), f(y)) < \tfrac{\varepsilon}{2} + \tfrac{\varepsilon}{2} = \varepsilon. \; \blacksquare$$
For an alternative proof in the case of $M = [a, b]$, a closed interval, see the article Non-standard calculus.
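To see, by way of an added illustration, that compactness of the domain cannot be dropped: the function $f(x) = 1/x$ is continuous on the non-compact set $(0, 1]$ but not uniformly continuous there, since for $x = 1/n$ and $y = 1/(2n)$ we have $|x - y| = 1/(2n) \to 0$ while $|f(x) - f(y)| = n \to \infty$. On a compact interval $[a, 1]$ with $0 < a < 1$, the theorem applies and $f$ is uniformly continuous.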
https://en.wikipedia.org/wiki/Heine–Cantor_theorem
In mathematics, the Heine–Stieltjes polynomials or Stieltjes polynomials, introduced by T. J. Stieltjes (1885), are polynomial solutions of a second-order Fuchsian equation, a differential equation all of whose singularities are regular. The Fuchsian equation has the form
$$\frac{d^2 S}{dz^2} + \left( \sum_{j=1}^{N} \frac{\gamma_j}{z - a_j} \right) \frac{dS}{dz} + \frac{V(z)}{\prod_{j=1}^{N} (z - a_j)}\, S = 0$$
for some polynomial $V(z)$ of degree at most $N - 2$. If this equation has a polynomial solution $S$, then $V$ is called a Van Vleck polynomial (after Edward Burr Van Vleck) and $S$ is called a Heine–Stieltjes polynomial. Heun polynomials are the special cases of Stieltjes polynomials when the differential equation has four singular points.
https://en.wikipedia.org/wiki/Heine–Stieltjes_polynomials
Heinrich Anton de Bary (26 January 1831 – 19 January 1888) was a German surgeon, botanist, microbiologist, and mycologist (fungal systematics and physiology). [1] He is considered a founding father of plant pathology (phytopathology) as well as the founder of modern mycology. [2] His extensive and careful studies of the life history of fungi and his contributions to the understanding of algae and higher plants established landmarks in biology. [3] Born in Frankfurt to the physician August Theodor de Bary (1802–1873) and Emilie Meyer de Bary, Anton de Bary was one of ten children. [1] He joined excursions of naturalists who collected local specimens. De Bary's interest was further inspired by George Fresenius, a physician who also taught botany at the Senckenberg Institute. Fresenius was an expert on thallophytes. In 1848, de Bary graduated from a gymnasium at Frankfurt, and began to study medicine at Heidelberg, continuing at Marburg. In 1850, he went to Berlin to continue his study of medicine, while also exploring and developing his interest in plant science. Although he received his degree in medicine, his dissertation at Berlin in 1853 was titled "De plantarum generatione sexuali", a botanical subject. He also published a book on fungi and the causes of rusts and smuts. [3][4] After graduation, de Bary briefly practiced medicine in Frankfurt, but he was drawn back to botany and became Privatdozent in botany at the University of Tübingen, where he worked for a while as an assistant to Hugo von Mohl (1805–1872). In 1855, he succeeded the botanist Karl Wilhelm von Nägeli (1818–1891) at Freiburg, where he established the most advanced botanical laboratory of the time and directed many students. [3] In 1867, de Bary moved to the University of Halle as successor to Professor Diederich Franz Leonhard von Schlechtendal, who, with Hugo von Mohl, had co-founded the pioneering botanical journal Botanische Zeitung. De Bary became its coeditor and later sole editor. As an editor of and contributor to the journal, he exercised great influence upon the development of botany. Following the Franco-Prussian War (1870–1871), de Bary took the position of professor of botany at the University of Strasbourg, [3] where he was the director of the Jardin botanique de l'Université de Strasbourg and founder of its New Garden. [5][6] He was also elected as the inaugural rector of the reorganized university. [3] He conducted much research in the university botanical institute, attracted many international students, and made a large contribution to the development of botany. [1][7] His 1884 book Vergleichende Morphologie und Biologie der Pilze, Mycetozoen und Bakterien was translated into English as Comparative Morphology and Biology of the Fungi, Mycetozoa, and Bacteria (Clarendon Press, 1887). [8] De Bary was devoted to the study of the life history of fungi. At that time, various fungi were still considered to arise via spontaneous generation. [1] He proved that pathogenic fungi were like other plants, and not the products of secretions from sick cells. [3] In de Bary's time, potato late blight had caused sweeping crop devastation and economic loss. The origin of such plant diseases was not known at that time. De Bary studied the pathogen Phytophthora infestans (formerly Peronospora infestans) and elucidated its life cycle. [1] Miles Joseph Berkeley (1803–1889) had insisted in 1841 that the oomycete found in potato blight caused the disease.
Similarly, de Bary asserted that rust and smut fungi caused the pathological changes that affected diseased plants. He concluded that Uredinales and Ustilaginales were parasites. [3] De Bary spent much time studying the morphology of fungi and noticed that certain forms that were classed as separate species were actually successive stages of development of the same organism. De Bary studied the developmental history of Myxomycetes (slime molds), and thought it was necessary to reclassify the lower animals. He first coined the term Mycetozoa to include lower animals and slime molds. In his work on Myxomycetes (1858), he pointed out that at one stage of their life cycle (the plasmodial stage), they were nearly formless, motile masses of a substance that Félix Dujardin (1801–1860) had called sarcode (protoplasm). This is the fundamental basis of the protoplasmic theory of life. [3] De Bary was the first to demonstrate sexuality in fungi. In 1858, he had observed conjugation in the alga Spirogyra, and in 1861, he described sexual reproduction in the fungus Peronospora sp. He saw the importance of observing pathogens throughout their whole life cycle and attempted to follow that practice in his studies of living host plants. [3] De Bary published his first work on potato blight fungi in 1861, and then spent more than 15 years studying the Peronosporeae, particularly Phytophthora infestans (formerly Peronospora infestans) and Cystopus (Albugo), parasites of potato. In his work published in 1863, entitled "Recherches sur le développement de quelques champignons parasites", he reported inoculating healthy potato leaves with spores of P. infestans. He observed that the mycelium penetrated the leaf and affected the tissue, forming conidia and the black spots characteristic of potato blight. He did similar experiments on tubers and potato stalks. He watched conidia in the soil and their infection of the tubers, observing that mycelium could survive the cold winter in the tubers. Based on these studies, he concluded that organisms were not being generated spontaneously. [3] He did a thorough investigation of Puccinia graminis, the pathogen that produces rust in wheat, rye and other grains. He noticed that P. graminis produced reddish summer spores or "urediospores", and darker winter spores or "teleutospores". He inoculated the leaves of barberry (Berberis vulgaris) with sporidia from winter spores of wheat rust. The sporidia germinated, leading to the formation of aecia with yellow spores, the familiar symptoms of infection on the barberry. De Bary then inoculated moisture-retaining slides with aecidiospores and transferred them to the leaves of rye seedlings. In time, he observed the reddish summer spores appearing on the leaves. Sporidia from winter spores germinated only on barberry. De Bary clearly demonstrated that P. graminis lived upon different hosts at different stages of its development. He called this phenomenon "heteroecism", in contrast to "autoecism", in which development takes place in only one host. De Bary's discovery explained why the practice of eradicating barberry plants was important as a control for rust. [3] De Bary also studied the formation of lichens, which are the result of an association between a fungus and an alga. He traced their stages of growth and reproduction and showed how adaptations helped them to survive conditions of drought and winter.
In 1879 he coined the word "symbiosis", meaning "the living together of unlike organisms", in the publication "Die Erscheinung der Symbiose" (Strasbourg, 1879). He carefully studied the morphology of molds, yeasts, and fungi and effectively established mycology as an independent science. [3] De Bary's concepts and methods had a great impact on the fields of bacteriology and botany, making him one of the most influential bioscientists of the 19th century. [1] He published more than 100 research papers. [3] Many of his students later became distinguished botanists and microbiologists, including Sergei Winogradsky (1856–1953), William Gilson Farlow (1844–1919), and Pierre-Marie-Alexis Millardet (1838–1902). [1] De Bary came from a noble family of Huguenots from Wallonia, which had been driven out by the Spanish Habsburgs under Emperor Charles V and had been established in Frankfurt since 1555. [9] Anton's father and his brother Johann Jakob de Bary were respected doctors in Frankfurt. His mother was Caroline Emilie von Meyer (1805–1887), whose family produced two renowned scientists. De Bary married Antonie Einert (21 January 1831, Leipzig – 22 May 1892, Thann, Alsace–Lorraine) in 1861; they raised four children: Wilhelm, August, Marie and Hermann. Antonie was a talented artist and painter, particularly of plants, who contributed to her husband's scientific work. He died on 19 January 1888 in Strasbourg, of a tumor of the jaw, after undergoing extensive surgery. [3]
https://en.wikipedia.org/wiki/Heinrich_Anton_de_Bary
Heinrich Martin Weber (5 March 1842, Heidelberg, Germany – 17 May 1913, Straßburg, Alsace-Lorraine, German Empire, now Strasbourg, France) was a German mathematician. [1] Weber's main work was in algebra, number theory, and analysis. He is best known for his text Lehrbuch der Algebra, published in 1895, much of which is his original research in algebra and number theory. His work Theorie der algebraischen Functionen einer Veränderlichen (with Dedekind) established an algebraic foundation for Riemann surfaces, allowing a purely algebraic formulation of the Riemann–Roch theorem. Weber's research papers were numerous, most of them appearing in Crelle's Journal or Mathematische Annalen. He was the editor of Riemann's collected works. Weber was born in Heidelberg, Baden, and entered the University of Heidelberg in 1860. In 1866 he became a privatdozent, and in 1869 he was appointed extraordinary professor at that school. Weber also taught in Zürich at the Federal Polytechnic Institute (today the ETH Zurich), at the University of Königsberg, and at the Technische Hochschule in Charlottenburg (today Technische Universität Berlin). His final post was at the Kaiser-Wilhelm-Universität Straßburg, Alsace-Lorraine, where he died. In 1893 in Chicago, his paper Zur Theorie der ganzzahligen algebraischen Gleichungen was read (though not by him) at the International Mathematical Congress held in connection with the World's Columbian Exposition. [2] In 1895 and in 1904 he was president of the Deutsche Mathematiker-Vereinigung. His doctoral students include Heinrich Brandt, E. V. Huntington, Louis Karpinski, and Friedrich Levi.
https://en.wikipedia.org/wiki/Heinrich_Martin_Weber
Heinrich Rohrer Medals are a series of awards presented in honor of the late Nobel laureate Heinrich Rohrer for his work in the fields of nanoscience and nanotechnology, and specifically for co-creating the scanning tunneling microscope. Medals are awarded triennially by the Surface Science Society of Japan together with IBM Research – Zurich, the Swiss Embassy in Japan, and Ms. Rohrer. The Grand Medal is for a single researcher who has made "distinguished achievements in the field of nanoscience and nanotechnology based on surface science", but it can be awarded to several individuals. The Rising Medal is presented to up to three researchers of up to 37 years of age, each working on a different topic; it is given for their outstanding efforts, with the expectation that they will continue to work actively in their respective fields. Medals are given with a framed certificate and a cash prize of JPY 1,000,000 for the Grand Medal and JPY 300,000 for the Rising Medal. [1][2] Awards were presented in 2014 [3][4][5] and 2017, [6][7] and were scheduled to be presented again in November 2020 at the 9th International Symposium on Surface Science (ISSS9) in Takamatsu, Japan, where the laureates are requested to give award lectures. [8]
https://en.wikipedia.org/wiki/Heinrich_Rohrer_Medal
Heinz Falk (born April 29, 1939, in Sankt Pölten, Lower Austria) is professor emeritus for organic chemistry at Johannes Kepler University of Linz and editor of "Progress in the Chemistry of Organic Natural Compounds". [1] His research is focused on structural analysis, synthesis, stereochemistry and photochemistry of plant and animal photosensitizing and photosensory pigments, such as hypericin. [2] Heinz Falk was born April 29, 1939, in Sankt Pölten, Austria, went to elementary school in Statzendorf and completed middle school in Krems an der Donau. After moving to Vienna in 1953 he completed a three-year program at the HBLVA for Chemical Industry, Rosensteingasse, and earned his high-school diploma in 1959 through classes at an evening school, where he met his future wife, Rotraud Falk (née Strohbach). Heinz Falk has been married to Rotraud Falk since 1966, and they have one son. Heinz Falk studied chemistry at the University of Vienna starting in 1959 and completed his dissertation under his doctoral advisor, Karl Schlögl, in 1966. In 1971 Falk spent a year abroad to study at ETH Zürich. Upon his return to Vienna in 1972 he attained habilitation for organic chemistry at the University of Vienna. 1966–1979: University of Vienna. Starting in 1966 Falk was an assistant at the Institute of Organic Chemistry at the University of Vienna. In 1975 he was promoted to associate professor of physical organic chemistry at the University of Vienna. In the summer of 1978 Falk was invited to speak at the Gordon Research Conference in Wolfeboro. 1979–present: Johannes Kepler University of Linz. In 1979 Falk received a call to become full professor of organic chemistry at Johannes Kepler University of Linz, where he founded the new Institute of Organic Chemistry. From 1989 through 1991 he served as Dean of the Faculty of Engineering and Natural Sciences (TNF) at Johannes Kepler University of Linz. In 2005 Falk was ranked #3 among the "Top 10" scientists in Upper Austria by the newspaper "OÖ Nachrichten". [3] In 2008 he retired as professor emeritus at the Institute of Organic Chemistry of the JKU. Falk's main research area is the structural analysis, synthesis, stereochemistry and photochemistry of plant and animal photosensitizing and photosensory pigments. The main group of compounds covered in his work are pigments derived from the fundamental phenanthro[1,10,9,8-opqra]perylene-7,14-dione chromophore, with natural pigments like hypericin, stentorin, the fringelites, the gymnochromes, and blepharismin. In addition, he has focused on hemin-analogous corrphycene derivatives (e.g. as potential blood substitutes and heme oxygenase blockers) as well as on other natural compounds such as the natural sun blocker urocanic acid. Furthermore, research on applied problems of industrial relevance, such as oxidation, ozonization, non-natural amino acids, and catalysis, has been pursued.
https://en.wikipedia.org/wiki/Heinz_Falk
An heirloom plant, heirloom variety, heritage fruit (Australia and New Zealand), or heirloom vegetable (especially in Ireland and the UK) is an old cultivar of a plant used for food that is grown and maintained by gardeners and farmers, particularly in isolated communities of the Western world. [1] These were commonly grown during earlier periods in human history, but are not used in modern large-scale agriculture. In some parts of the world, it is illegal to sell seeds of cultivars that are not listed as approved for sale. [2] The Henry Doubleday Research Association, now known as Garden Organic, responded to this legislation by setting up the Heritage Seed Library to preserve seeds of as many of the older cultivars as possible. However, seed banks alone have not been able to provide sufficient insurance against catastrophic loss. [2] In some jurisdictions, like Colombia, laws have been proposed that would make seed saving itself illegal. [3] Many heirloom vegetables have kept their traits through open pollination, while fruit varieties such as apples have been propagated over the centuries through grafts and cuttings. The trend of growing heirloom plants in gardens has been returning in popularity in North America and Europe. Before the industrialization of agriculture, a much wider variety of plant foods was grown for human consumption, largely because farmers and gardeners saved seeds and cuttings for future planting. From the 16th century through the early 20th century, the diversity was huge. Old nursery catalogues were filled with plums, peaches, pears and apples of numerous varieties, and seed catalogs offered legions of vegetable varieties. [4] Valuable and carefully selected seeds were sold and traded using these catalogs, along with useful advice on cultivation. Since World War II, agriculture in the industrialized world has mostly consisted of food crops grown in large, monocultural plots. In order to maximize consistency, few varieties of each type of crop are grown. These varieties are often selected for their productivity and their ability to ripen at the same time while withstanding mechanical picking and cross-country shipping, as well as their tolerance to drought, frost, or pesticides. [5] This form of agriculture has led to a 75% drop in crop genetic diversity. [6][7] While heirloom gardening has maintained a niche community, in recent years it has seen a resurgence in response to the industrial agriculture trend. In the Global South, heirloom plants are still widely grown, for example, in the home gardens of South and Southeast Asia. Before World War II, the majority of produce grown in the United States was heirlooms. [8] In the 21st century, numerous community groups all over the world are working to preserve historic varieties to make a wide variety of fruits, vegetables, herbs, and flowers available again to the home gardener, by renovating old orchards, sourcing historic fruit varieties, engaging in seed swaps, and encouraging community participation. Heirloom varieties are an increasingly popular way for gardeners and small farmers to connect with traditional forms of agriculture and the crops grown in these systems. Growers also cite lower costs associated with purchasing seeds, improved taste, and perceived improved nutritional quality as reasons for growing heirlooms.
[9] In many countries, hundreds or even thousands of heirloom varieties are commercially available for purchase or can be obtained through seed libraries and banks, seed swaps, or community events. [10] Heirloom varieties may also be well suited for market gardening, farmer's market sales, and CSA programs. [11] A primary drawback to growing heirloom varieties is lower disease resistance compared to many commercially available hybrid varieties. Common disease problems, such as verticillium and fusarium wilt, may affect heirlooms more significantly than non-heirloom crops. Heirloom varieties may also be more delicate and perishable. [11][12] In recent years, research has been conducted into improving the disease resistance of heirlooms, particularly tomatoes, by crossing them with resistant hybrid varieties. [13] The term heirloom to describe a seed variety was first used in the 1930s by horticulturist and vegetable grower J.R. Hepler to describe bean varieties handed down through families. [14] However, the current definition and use of the word heirloom to describe plants is fiercely debated. One school of thought places an age or date point on the cultivars: some say the cultivar must be over 100 years old, others 50 years old, and others prefer the date of 1945, which marks the end of World War II and roughly the beginning of widespread hybrid use by growers and seed companies. Many gardeners consider 1951 to be the latest year a plant could have originated and still be called an heirloom, since that year marked the widespread introduction of the first hybrid varieties. It was in the 1970s that hybrid seeds began to proliferate in the commercial seed trade. Some heirloom varieties are much older; some are apparently pre-historic. Another way of defining heirloom cultivars is to use the definition of the word heirloom in its truest sense. Under this interpretation, a true heirloom is a cultivar that has been nurtured, selected, and handed down from one family member to another for many generations. Additionally, there is another category of cultivars that could be classified as "commercial heirlooms": cultivars that were introduced many generations ago and were of such merit that they have been saved, maintained and handed down, even if the seed company has gone out of business or otherwise dropped the line. Additionally, many old commercial releases have actually been family heirlooms that a seed company obtained and introduced. Regardless of a person's specific interpretation, most authorities agree that heirlooms, by definition, must be open-pollinated. They may also require open-pollinated varieties to have been bred and stabilized using classic breeding practices. While there is currently one genetically modified tomato available to home growers, [15] it is generally agreed that no genetically modified organisms can be considered heirloom cultivars. Another important point of discussion is that, without the ongoing growing and storage of heirloom plants, seed companies and the government would control all seed distribution. Most, if not all, hybrid plants, if regrown from saved seed (when the seed is not sterile), will not come true to the original hybrid plant, ensuring dependency on seed distributors for future crops. [5] Writer and author Jennifer A.
Jordan describes the term "heirloom" as a culturally constructed concept that is only relevant due to the relatively recent loss of many crop varieties: "It is only with the rise of industrial agriculture that [the] practice of treating food as a literal heirloom has disappeared in many parts of the world—and that is precisely when the heirloom label emerges. ...[T]he concept of an heirloom becomes possible only in the context of the loss of actual heirloom varieties, of increased urbanization and industrialization as fewer people grow their own food, or at least know the people who grow their food." [16] The heritage fruit trees that exist today are clonally descended from trees of antiquity. Heirloom roses are sometimes collected (nondestructively, as small cuttings) from vintage homes and from cemeteries, where they were once planted at gravesites by mourners and left undisturbed in the decades since. Modern production methods and the rise in population have largely supplanted this practice. In the UK and Europe, it is thought that many heritage vegetable varieties (perhaps over 2,000) have been lost since the 1970s, when EEC (now EU) laws were passed making it illegal to sell any vegetable cultivar not on the national list of any EEC country. This system was set up to help eliminate seed suppliers selling one seed as another, and to guarantee that the seeds were true to type and germinated consistently. Thus, there were stringent tests to assess varieties, with a view to ensuring they remain the same from one generation to the next. However, unique varieties were lost for posterity. [5] These tests (called DUS) assess "distinctness", "uniformity", and "stability". But since some heritage cultivars are not necessarily uniform from plant to plant, or indeed within a single plant—a single cultivar—this has been a sticking point. "Distinctness" has been a problem, moreover, because many cultivars have several names, perhaps coming from different areas or countries (e.g., the carrot cultivar Long Surrey Red is also known as "Red Intermediate", "St. Valery", and "Chertsey"). [5] However, it has been ascertained that some of these varieties that look similar are in fact different cultivars. On the other hand, two that were known to be different cultivars were found to be almost identical to each other, so one of them would be dropped from the national list in order to clean it up. Another problem has been the fact that it is somewhat expensive to register and then maintain a cultivar on a national list. Therefore, if no seed breeder or supplier thinks it will sell well, no one will maintain it on a list, and so the seed will not be re-bred by commercial seed breeders. In recent years, progress has been made in the UK in setting up allowances and less stringent tests for heritage varieties on a B national list, but this is still under consideration. When heirloom plants are not being sold, however, laws are often more lenient. Because most heirloom plants are at least 50 years old and grown and swapped within a family or community, they fall under the public domain. [17] Another worldwide alternative is to submit heirloom seeds to a seedbank. These public repositories in turn maintain and disperse these genetics to anyone who will use them appropriately. Typically, approved uses are breeding, study, and sometimes, further distribution. There are a variety of intellectual property protections and laws that are applied to heirloom seeds, which can often differ greatly between states.
Plant patents are based on the Plant Patent Act of 1930, which protects plants grown from cuttings and division, while under intellectual property rights, the Plant Variety Protection Act of 1970 (PVPA) shields non-hybrid, seed-propagated plants. However, seed breeders can only shelter their variety for 20 years under the PVPA. There are also a couple of exceptions under the PVPA which allow growers to cultivate, save seeds, and sell the resultant crops, and which give breeders allowances to use PVPA-protected varieties as starter material as long as it constitutes less than half of the breeding material. There are also seed licenses which may place restrictions on the use of seeds, or trademarks that guard against the use of certain plant variety names. [17] In 2014, the Pennsylvania Department of Agriculture caused a seed-lending library to shut down and promised to curtail any similar efforts in the state. [18] The lending library, hosted by a town library, allowed gardeners to "check out" a package of open-pollinated seed and "return" seeds kept from the crop grown from those seeds. The Department of Agriculture said that this activity raised the possibility of "agro-terrorism", and that the Seed Act of 2004 required the library staff to test each seed packet for germination rate and whether the seed was true to type. [18] In 2016 the department reversed this decision, and clarified that seed libraries and non-commercial seed exchanges are not subject to the requirements of the Seed Act. [19] In disputed Palestine, some heirloom growers and seed savers see themselves as contributing a form of resistance against the privatization of agriculture, while also telling stories of their ancestors, defying violence, and encouraging rebellion. [20] The Palestinian Heirloom Seed Library (PHSL), founded by writer and activist Vivien Sansour, breeds and maintains a selection of traditional crops from the region, seeking to "preserve and promote heritage and threatened seed varieties, traditional Palestinian farming practices, and the cultural stories and identities associated with them." [20] Some scholars have additionally framed the increasing control of Israeli agribusiness corporations over Palestinian seed supplies as an attempt to suppress food sovereignty and as a form of subtle ecocide. [21] In January 2012, a conflict over seed access erupted in Latvia when two undercover investigators from the Latvian State Plant Protection Agency charged an independent farm with the illegal sale of unregistered heirloom tomato seeds. [22] The agency suggested that the farm choose a small number of varieties to officially register and abandon the other approximately 800 varieties grown on the farm. This infuriated customers as well as members of the general public, many of whom spoke out against what was seen as an overly strict interpretation of the law. The scandal further escalated with a series of hearings held by agency officials, during which residents called for a reexamination of seed registration laws and demanded greater citizen participation in legal and political matters relating to agriculture. [22] In Peru and Ecuador, genes from heirloom tomato varieties and wild tomato relatives have been the subject of patent claims by the University of Florida. These genes have been investigated for their usefulness in increasing drought and salt tolerance and disease resistance, as well as improving flavor, in commercial tomatoes.
[23] The American genomics development company Evolutionary Genomics identified genes found in Galapagos tomatoes that may increase sweetness by up to 25%, and as of 2023 it has filed an international patent application on the usage of these genes. [24] Native heirloom and landrace crop varieties and their stewards are sometimes subject to theft and biopiracy. [25] Biopiracy may negatively impact communities that grow these heirloom varieties through loss of profits and livelihoods, as well as litigation. One infamous example is the case of the Enola bean patent, in which a Texas corporation collected heirloom Mexican varieties of yellow bean, patented them, and then sued the farmers who had supplied the seeds in the first place to prevent them from exporting their crops to the US. [26] The 'Enola' bean was granted 20-year patent protection in 1999, but subsequently underwent numerous legal challenges on the grounds that the bean was not a novel variety. In 2004, DNA fingerprinting techniques were used to demonstrate that 'Enola' was functionally identical to a yellow bean grown in Mexico known as Azufrado Peruano 87. [27] The case has been widely cited as a prime example of biopiracy and misapplication of patent rights. Native communities in the United States and Mexico have drawn particular attention to the importance of traditional and culturally appropriate seed supplies. The Traditional Native American Farmers Association (TNAFA) is an Indigenous organization aiming to "revitalize traditional agriculture for spiritual and human need" and advocating for traditional methods of growing, preparing, and consuming plants. [28] In concert with other organizations, TNAFA has also drafted a formal Declaration of Seed Sovereignty and worked with legislators to protect Indigenous heritage seeds. Indigenous peoples are also at the forefront of the seed rematriation movement to bring lost seed varieties back to their traditional stewards. [29] Rematriation efforts are frequently directed at institutions such as universities, museums, and seed banks, which may hold Indigenous seeds in their collections that are inaccessible to the communities from which they originate. In 2018, the Seed Savers Exchange, the largest publicly accessible seed bank in the United States, rematriated several heirloom seed varieties back to Indigenous communities. [29] Activism surrounding food justice, farmers' rights, and seed sovereignty frequently overlaps with the promotion and usage of heirloom crop varieties. International peasant farmers' organization La Via Campesina is credited with the first usage of the term "food sovereignty" and campaigns for agrarian reform, seed freedom, and farmers' rights. It currently represents more than 150 social movement organizations in 56 countries. [30] Numerous other organizations and collectives worldwide participate in food sovereignty activism, including the US Food Sovereignty Alliance, Food Secure Canada, and the Latin American Seeds Collective in North and South America; the African Center for Biodiversity (ACB), the Coalition for the Protection of African Genetic Heritage (COPAGEN), and the West African Peasant Seed Committee (COASP) in Africa; and the Alliance for Sustainable and Holistic Agriculture (ASHA), Navdanya, and the Southeast Asia Regional Initiatives for Community Empowerment (SEARICE) in Asia. [31] In a 2022 BBC interview, Indian environmental activist and scholar Vandana Shiva stated that "Seed is the source of life.
Seed is the source of food. To protect food freedom, we must protect seed freedom." [32] Other writers have pushed back against the promotion and proliferation of heirloom crop varieties, connecting their usage to the impacts of colonialism. Quoting American author and educator Martín Prechtel in his article in The Guardian, Chris Smith writes: "To keep seeds alive, clear, strong and open-pollinated, purity as the idea of a single pure race must be understood as the ironic insistence of imperial minds." [33] Writer and journalist Brendan Borrell calls heirloom tomatoes "the tomato equivalent of the pug—that 'purebred' dog with the convoluted nose that snorts and hacks when it tries to catch a breath" and claims that selection for unique size, shape, color, and flavor has hampered disease resistance and hardiness in heirlooms. [34] More attention is being paid to heirloom plants as a way to restore genetic diversity and feed a growing population while safeguarding the food supply of diverse regions. Specific heirloom plants are often selected, saved, and planted again because of their superior performance in a particular locality. Over many crop cycles these plants develop unique adaptive qualities to their environment, which empowers local communities and can be vital to maintaining the genetic resources of the world. [7] Some debate has occurred regarding the perceived improved nutritional qualities of heirloom varieties compared to modern cultivars. [35] Anecdotal reports claim that heirloom vegetables are more nutritious or contain more vitamins and minerals than more recently developed vegetables. [36][37] Current research does not support the claim that heirloom varieties generally contain a greater concentration of nutrients; however, nutrient concentration and composition do appear to vary between different cultivars. [38] Nevertheless, heirloom varieties may still contain the genetic basis for useful traits that can be employed to improve modern crops, including for human nutritional qualities. [35] Heirloom varieties are also critical to promoting global crop diversity, which has generally declined since the middle of the 20th century. [39] Heirloom crops may contain genetic material that is distinct from the varieties typically grown in monocrop systems, many of which are hybrid varieties. Monocrop systems tend to be vulnerable to disease and pest outbreaks, which can decimate whole industries due to the genetic similarity between plants. [40] Some organizations have employed seed banks and vaults to preserve and protect crop genetics against catastrophic loss. One of the most notable of these seed banks is the Svalbard Global Seed Vault located in Svalbard, Norway, which safeguards approximately 1.2 million seed samples, with capacity for up to 4.5 million. [41] Some writers and farmers have criticized the apparent reliance on seed vaults, however, and argue that heirloom and rare varieties are better protected against extinction when actively planted and grown than when stored away with no immediate influence on crop genetic diversity. [42]
https://en.wikipedia.org/wiki/Heirloom_plant
An heirloom tomato (also called heritage tomato in the UK) is an open-pollinated, non-hybrid heirloom cultivar of tomato. Heirloom tomatoes are classified as family heirlooms, commercial heirlooms, mystery heirlooms, or created heirlooms. They usually have a shorter shelf life and are less disease resistant than hybrids. They are grown for various reasons: for food, historical interest, access to wider varieties, and by people who wish to save seeds from year to year, as well as for their taste. [1] Many heirloom tomatoes are sweeter and lack a genetic mutation that gives tomatoes a uniform red color at the cost of the fruit's taste. [2] Varieties bearing that mutation, which have been favored by industry since the 1940s (that is, tomatoes which are not heirlooms), feature fruits with lower levels of carotenoids and a decreased ability to make sugar within the fruit. [3] Heirloom tomato cultivars can be found in a wide variety of colors, shapes, flavors, and sizes. Some heirloom cultivars can be prone to cracking or lack disease resistance. As with most garden plants, cultivars can be acclimated over several gardening seasons to thrive in a geographical location through careful selection and seed saving. Some of the most famous examples include Aunt Ruby's German Green, Banana Legs, Big Rainbow, Black Krim, Brandywine, Cherokee Purple, Chocolate Cherry, Costoluto Genovese, Garden Peach, Gardener's Delight, Green Zebra, Hawaiian Pineapple, Hillbilly, Lollypop, Marglobe, Matt's Wild Cherry, Mortgage Lifter, Mr. Stripey, Neville Tomatoes, Paul Robeson, Pruden's Purple, Red Currant, San Marzano, Silvery Fir Tree, Three Sisters, and Yellow Pear. Heirloom seeds "breed true", unlike the seeds of hybridized plants. Both sides of the DNA in an heirloom variety come from a common stable cultivar. Heirloom tomato varieties are open-pollinated, so cross-pollination can occur. Generally, the tomatoes most likely to cross are those with potato leaves, double flowers (found on beefsteak types), or currant tomatoes. All of these should be kept at least 50 feet (15 m) apart. All other tomatoes should be kept at least 20 feet (6.1 m) apart to reduce the possibility of cross-pollination. Seed should be saved from tomatoes picked from several different plants throughout the growing season that are true to type, to preserve genetic diversity. These seeds should be mixed at the end of the growing season. [4] There are two main ways to save heirloom tomato seeds. The first method is to let the tomato ripen completely, even to the point of beginning to rot, and then remove the seeds with a spoon and spread them on a piece of cloth or paper to dry. Some people spread them out on a paper towel, let them dry, and then plant the paper towel and seeds together in potting or germinating soil. The second method is to save tomato seeds using the fermentation process. The tomatoes are allowed to overripen and then cut to expose the seed cavities. The seeds are then scooped out and put into a container. The mixture needs to be stirred one or more times per day for three or more days until the seed mixture is soupy. As fermentation occurs, some fungal growth will appear on top of the mixture, but that is normal. At the end of the fermentation process, the seed mixture is stirred, and the seeds dislodge from the gel and sink to the bottom of the container.
Water is then poured into the mixture; the pulp and the bad seeds will rise to the top and flow over the side of the container, while the good seeds sink to the bottom. Once the water becomes clear, strain the remaining seeds and spread them out to dry. Once the seeds are dry, mix them together, breaking apart any that are stuck together, and put them in a tightly sealed plastic bag. Seeds should be dated, labeled, and stored at room temperature, away from direct sunlight. Heirloom tomato seeds may be stored for up to ten years. [5]
https://en.wikipedia.org/wiki/Heirloom_tomato
In physics, the Heisenberg picture or Heisenberg representation [1] is a formulation (largely due to Werner Heisenberg in 1925) of quantum mechanics in which observables incorporate a dependency on time, but the states are time-independent. It stands in contrast to the Schrödinger picture in which observables are constant and the states evolve in time. It further serves to define a third, hybrid, picture, the interaction picture.

In the Heisenberg picture of quantum mechanics the state vectors $|\psi\rangle$ do not change with time, while observables $A$ satisfy
$$\frac{d}{dt} A_\text{H}(t) = \frac{i}{\hbar} [H_\text{H}(t), A_\text{H}(t)] + \left(\frac{\partial A_\text{S}}{\partial t}\right)_{\!\text{H}},$$
where "H" and "S" label observables in the Heisenberg and Schrödinger picture respectively, $H$ is the Hamiltonian and $[\cdot,\cdot]$ denotes the commutator of two operators (in this case $H$ and $A$). Taking expectation values automatically yields the Ehrenfest theorem, featured in the correspondence principle. By the Stone–von Neumann theorem, the Heisenberg picture and the Schrödinger picture are unitarily equivalent, just a basis change in Hilbert space. In some sense, the Heisenberg picture is more natural and convenient than the equivalent Schrödinger picture, especially for relativistic theories. Lorentz invariance is manifest in the Heisenberg picture, since the state vectors do not single out the time or space. This approach also has a more direct similarity to classical physics: by simply replacing the commutator over the reduced Planck constant above by the Poisson bracket, the Heisenberg equation reduces to an equation in Hamiltonian mechanics.

For the sake of pedagogy, the Heisenberg picture is introduced here from the subsequent, but more familiar, Schrödinger picture. According to Schrödinger's equation, the quantum state at time $t$ is $|\psi(t)\rangle = U(t)|\psi(0)\rangle$, where $U(t) = T e^{-\frac{i}{\hbar}\int_0^t ds\, H_\text{S}(s)}$ is the time-evolution operator induced by a Hamiltonian $H_\text{S}(t)$ that could depend on time, and $|\psi(0)\rangle$ is the initial state. $T$ refers to time-ordering, $\hbar$ is the reduced Planck constant, and $i$ is the imaginary unit. Given an observable $A_\text{S}(t)$ in the Schrödinger picture, which is a Hermitian linear operator that could also be time-dependent, its expectation value in the state $|\psi(t)\rangle$ is given by
$$\langle A \rangle_t = \langle \psi(t)| A_\text{S}(t) |\psi(t)\rangle.$$
In the Heisenberg picture, the quantum state is assumed to remain constant at its initial value $|\psi(0)\rangle$, whereas operators evolve with time according to the definition
$$A_\text{H}(t) := U^\dagger(t)\, A_\text{S}(t)\, U(t).$$
This readily implies $\langle A \rangle_t = \langle \psi(0)| A_\text{H}(t) |\psi(0)\rangle$, so the same expectation value can be obtained by working in either picture.

The Schrödinger equation for the time-evolution operator is
$$\frac{d}{dt} U(t) = -\frac{i}{\hbar} H_\text{S}(t)\, U(t).$$
It follows that
$$\begin{aligned}
\frac{d}{dt} A_\text{H}(t) &= \left(\frac{d}{dt} U^\dagger(t)\right) A_\text{S}(t)\, U(t) + U^\dagger(t)\, A_\text{S}(t) \left(\frac{d}{dt} U(t)\right) + U^\dagger(t) \left(\frac{\partial A_\text{S}}{\partial t}\right) U(t) \\
&= \frac{i}{\hbar} U^\dagger(t) H_\text{S}(t) A_\text{S}(t) U(t) - \frac{i}{\hbar} U^\dagger(t) A_\text{S}(t) H_\text{S}(t) U(t) + U^\dagger(t) \left(\frac{\partial A_\text{S}}{\partial t}\right) U(t) \\
&= \frac{i}{\hbar} U^\dagger(t) H_\text{S}(t) U(t)\, U^\dagger(t) A_\text{S}(t) U(t) - \frac{i}{\hbar} U^\dagger(t) A_\text{S}(t) U(t)\, U^\dagger(t) H_\text{S}(t) U(t) + \left(\frac{\partial A_\text{S}}{\partial t}\right)_{\!\text{H}} \\
&= \frac{i}{\hbar} [H_\text{H}(t), A_\text{H}(t)] + \left(\frac{\partial A_\text{S}}{\partial t}\right)_{\!\text{H}},
\end{aligned}$$
where differentiation was carried out according to the product rule. This is Heisenberg's equation of motion. Note that the Hamiltonian that appears in the final line above is the Heisenberg Hamiltonian $H_\text{H}(t)$, which may differ from the Schrödinger Hamiltonian $H_\text{S}(t)$.

An important special case of the equation above is obtained if the Hamiltonian $H_\text{S}$ does not vary with time. Then the time-evolution operator can be written as $U(t) = e^{-\frac{i}{\hbar} t H_\text{S}}$, and hence $H_\text{H} \equiv H_\text{S} \equiv H$, since $U(t)$ now commutes with $H$. Therefore,
$$\langle A \rangle_t = \langle \psi(0)|\, e^{\frac{i}{\hbar} t H} A_\text{S}(t)\, e^{-\frac{i}{\hbar} t H} |\psi(0)\rangle,$$
and, following the previous analysis,
$$\frac{d}{dt} A_\text{H}(t) = \frac{i}{\hbar} [H, A_\text{H}(t)] + e^{\frac{i}{\hbar} t H} \left(\frac{\partial A_\text{S}}{\partial t}\right) e^{-\frac{i}{\hbar} t H}.$$
Furthermore, if $A_\text{S} \equiv A$ is also time-independent, then the last term vanishes and
$$\frac{d}{dt} A_\text{H}(t) = \frac{i}{\hbar} [H, A_\text{H}(t)],$$
where $A_\text{H}(t) \equiv A(t) = e^{\frac{i}{\hbar} t H} A\, e^{-\frac{i}{\hbar} t H}$ in this particular case. The equation is solved by use of the standard operator identity
$$e^{B} A e^{-B} = A + [B, A] + \frac{1}{2!}[B, [B, A]] + \frac{1}{3!}[B, [B, [B, A]]] + \cdots,$$
which implies
$$A(t) = A + \frac{it}{\hbar}[H, A] + \frac{1}{2!}\left(\frac{it}{\hbar}\right)^{\!2} [H, [H, A]] + \frac{1}{3!}\left(\frac{it}{\hbar}\right)^{\!3} [H, [H, [H, A]]] + \cdots.$$
A similar relation also holds for classical mechanics, the classical limit of the above, given by the correspondence between Poisson brackets and commutators:
$$[A, H] \longleftrightarrow i\hbar \{A, H\}.$$
In classical mechanics, for an $A$ with no explicit time dependence,
$$\{A, H\} = \frac{dA}{dt},$$
so again the expression for $A(t)$ is the Taylor expansion around $t = 0$.

In effect, the initial state of the quantum system has receded from view, and is only considered at the final step of taking specific expectation values or matrix elements of observables that evolved in time according to the Heisenberg equation of motion. A similar analysis applies if the initial state is mixed. The time evolved state $|\psi(t)\rangle$ in the Schrödinger picture is sometimes written as $|\psi_\text{S}(t)\rangle$ to differentiate it from the evolved state $|\psi_\text{I}(t)\rangle$ that appears in the different interaction picture.

Commutator relations may look different than in the Schrödinger picture, because of the time dependence of operators. For example, consider the operators $x(t_1)$, $x(t_2)$, $p(t_1)$ and $p(t_2)$. The time evolution of those operators depends on the Hamiltonian of the system. Considering the one-dimensional harmonic oscillator,
$$H = \frac{p^2}{2m} + \frac{m\omega^2 x^2}{2},$$
the evolution of the position and momentum operators is given by:
$$\frac{d}{dt} x(t) = \frac{i}{\hbar}[H, x(t)] = \frac{p}{m}, \qquad \frac{d}{dt} p(t) = \frac{i}{\hbar}[H, p(t)] = -m\omega^2 x.$$
Note that the Hamiltonian is time independent and hence $x(t), p(t)$ are the position and momentum operators in the Heisenberg picture. Differentiating both equations once more and solving for them with proper initial conditions,
$$\dot{p}(0) = -m\omega^2 x_0, \qquad \dot{x}(0) = \frac{p_0}{m},$$
leads to
$$x(t) = x_0 \cos(\omega t) + \frac{p_0}{\omega m}\sin(\omega t), \qquad p(t) = p_0 \cos(\omega t) - m\omega x_0 \sin(\omega t).$$
Direct computation yields the more general commutator relations,
$$[x(t_1), x(t_2)] = \frac{i\hbar}{m\omega}\sin(\omega t_2 - \omega t_1), \qquad [p(t_1), p(t_2)] = i\hbar m\omega \sin(\omega t_2 - \omega t_1), \qquad [x(t_1), p(t_2)] = i\hbar \cos(\omega t_2 - \omega t_1).$$
For $t_1 = t_2$, one simply recovers the standard canonical commutation relations valid in all pictures.

For a time-independent Hamiltonian $H_\text{S}$, where $H_{0,\text{S}}$ is the free Hamiltonian,
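The closed-form oscillator solution above lends itself to a quick numerical check. The following sketch (an illustration added here, not part of the original article) works in a truncated Fock basis with $\hbar = m = \omega = 1$, builds $x_0$ and $p_0$ from ladder operators, conjugates with $U(t) = e^{-iHt/\hbar}$, and compares against $x(t) = x_0\cos(\omega t) + (p_0/m\omega)\sin(\omega t)$:

```python
import numpy as np
from scipy.linalg import expm

N = 30                                        # truncated Fock-space dimension
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator
x0 = (a + a.T) / np.sqrt(2)                   # position operator at t = 0
p0 = 1j * (a.T - a) / np.sqrt(2)              # momentum operator at t = 0
H = np.diag(np.arange(N) + 0.5)               # oscillator Hamiltonian, diag(n + 1/2)

t = 0.73
U = expm(-1j * H * t)                         # time-evolution operator U(t)
x_t = U.conj().T @ x0 @ U                     # Heisenberg-picture x(t) = U† x0 U

x_expected = x0 * np.cos(t) + p0 * np.sin(t)  # x0 cos(wt) + (p0 / m w) sin(wt)
print(np.allclose(x_t, x_expected))           # True
```

Because $H$ is taken diagonal in the Fock basis, the conjugation only rotates the phases of the matrix elements of $x_0$, so the truncation introduces no error in this particular check.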
https://en.wikipedia.org/wiki/Heisenberg_picture
The Heisenberg–Langevin equations (named after Werner Heisenberg and Paul Langevin ) are equations for open quantum systems . They are a specific case of quantum Langevin equations . In the Heisenberg picture , the time evolution of a quantum system is carried by the operators themselves; the solution of the Heisenberg equation of motion determines the subsequent time evolution of the operators. The Heisenberg–Langevin equation is the generalization of this to open quantum systems . [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Heisenberg–Langevin_equations
In thermal engineering , Heisler charts are a graphical analysis tool for the evaluation of heat transfer in transient , one-dimensional conduction . [ 1 ] They are a set of two charts per included geometry introduced in 1947 by M. P. Heisler, [ 2 ] which were supplemented by a third chart per geometry in 1961 by H. Gröber. Heisler charts allow the evaluation of the central temperature for transient heat conduction through an infinitely long plane wall of thickness 2 L , an infinitely long cylinder of radius r o , and a sphere of radius r o . Each aforementioned geometry can be analyzed by three charts which show the midplane temperature, temperature distribution, and heat transfer. [ 1 ]

Although Heisler–Gröber charts are a faster and simpler alternative to the exact solutions of these problems, there are some limitations. First, the body must initially be at a uniform temperature. Second, the Fourier number of the analyzed object should be greater than 0.2. Additionally, the temperature of the surroundings and the convective heat transfer coefficient must remain constant and uniform. Also, there must be no heat generation within the body itself. [ 1 ] [ 3 ] [ 4 ]

These first Heisler–Gröber charts were based upon the first term of the exact Fourier series solution for an infinite plane wall, where T i is the initial uniform temperature of the slab, T ∞ is the constant environmental temperature imposed at the boundary, x is the location in the plane wall, λ is the root of λ tan λ = Bi , and α is the thermal diffusivity . The position x = 0 represents the center of the slab.

The first chart for the plane wall is plotted using three different variables. Plotted along the vertical axis of the chart is the dimensionless temperature at the midplane, {\displaystyle \theta _{o}^{*}={\frac {T(0,t)-T_{\infty }}{T_{i}-T_{\infty }}}.} Plotted along the horizontal axis is the Fourier number , Fo = αt / L 2 . The curves within the graph are a selection of values for the inverse of the Biot number , where Bi = hL / k . Here k is the thermal conductivity of the material and h is the heat transfer coefficient. [ 1 ] [ 5 ]

The second chart is used to determine the variation of temperature within the plane wall at other locations in the x-direction at the same time, for different Biot numbers. [ 1 ] The vertical axis is the ratio of a given temperature to that at the centerline, {\displaystyle {\frac {\theta }{\theta _{o}}}={\frac {T(x,t)-T_{\infty }}{T(0,t)-T_{\infty }}}} , where the x / L curve is the position at which T is taken. The horizontal axis is the value of Bi −1 . [ 5 ]

The third chart in each set was supplemented by Gröber in 1961, and this particular one shows the dimensionless heat transferred from the wall as a function of a dimensionless time variable. The vertical axis is a plot of Q / Q o , the ratio of actual heat transfer to the amount of total possible heat transfer before T = T ∞ . On the horizontal axis is the plot of (Bi 2 )(Fo), a dimensionless time variable. [ 5 ]

For the infinitely long cylinder, the Heisler chart is based on the first term of an exact series solution involving Bessel functions . [ 1 ] Each chart plots similar curves to the previous examples, and on each axis is plotted a similar variable.
[ 5 ] The Heisler chart for a sphere is based on the first term of the exact Fourier series solution. [ 1 ] These charts can be used similarly to the first two sets and are plots of similar variables. [ 5 ] [ 6 ]
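As a worked illustration of what the first two plane-wall charts tabulate, the following sketch evaluates the one-term approximation directly: λ1 is the root of λ tan λ = Bi quoted above, and the coefficient C1 = 4 sin λ1 / (2 λ1 + sin 2λ1) is the standard textbook one-term constant, which is not stated in this article; the Bi and Fo values are illustrative assumptions.

import numpy as np
from scipy.optimize import brentq

def plane_wall_one_term(Bi, Fo, x_over_L=0.0):
    """Dimensionless temperature (T - Tinf)/(Ti - Tinf) for a plane wall, valid for Fo > 0.2."""
    # First eigenvalue: root of lam * tan(lam) = Bi in (0, pi/2)
    lam1 = brentq(lambda lam: lam * np.tan(lam) - Bi, 1e-9, np.pi / 2 - 1e-9)
    C1 = 4 * np.sin(lam1) / (2 * lam1 + np.sin(2 * lam1))   # standard one-term coefficient (assumption: textbook form)
    theta0 = C1 * np.exp(-lam1**2 * Fo)        # midplane value: what the first chart gives
    return theta0 * np.cos(lam1 * x_over_L)    # off-centre correction: what the second chart gives

print(plane_wall_one_term(Bi=1.0, Fo=0.5))                 # midplane, about 0.77
print(plane_wall_one_term(Bi=1.0, Fo=0.5, x_over_L=1.0))   # surface, smaller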
https://en.wikipedia.org/wiki/Heisler_chart
Heisook Lee ( Korean : 이혜숙 , born 1948 [ 1 ] ) is a South Korean mathematician and activist for gender equality in mathematics. She is retired as a professor of mathematics and dean at Ewha Womans University . Her mathematical research has concerned abstract algebra and algebraic coding theory , including work on self-dual codes and bent functions . Lee graduated from Ewha Womans University in 1971. After a master's degree in 1974 from the University of British Columbia in Canada, she completed a Ph.D. in 1978 from Queen's University at Kingston , also in Canada. [ 2 ] Her dissertation, The Brauer Group of an Integral Scheme , was supervised by Morris Orzech. [ 3 ] After postdoctoral research in Germany at the University of Regensburg , she returned to Ewha Womans University in 1980, as a professor of mathematics. [ 4 ] She became dean of natural sciences and dean of research affairs from 1997 to 2001, and dean of graduate studies from 2006 to 2008. [ 2 ] Lee became founding editor of Communications of the Korean Mathematical Society in 1986. From 1994 to 1996 she was editor in chief of the Journal of the Korean Mathematical Society . [ 2 ] She was the second president of the Korea Federation of Women's Science & Technology Associations (KOFWST), serving from 2006 to 2007. [ 5 ] She founded the Center for Women in Science, Engineering, and Technology (WISET), now the Korean Foundation for Women in Science, Engineering, and Technology, and became its first president, serving from 2013 to 2016. [ 6 ] She is a professor emeritus at Ewha Womans University, [ 2 ] and president of the Korea Center for Gendered Innovations (GISTeR). [ 7 ]
https://en.wikipedia.org/wiki/Heisook_Lee
Hele-Shaw flow is defined as flow taking place between two parallel flat plates separated by a narrow gap satisfying certain conditions, named after Henry Selby Hele-Shaw , who studied the problem in 1898. [ 1 ] [ 2 ] Various problems in fluid mechanics can be approximated to Hele-Shaw flows and thus the research of these flows is of importance. Approximation to Hele-Shaw flow is specifically important to micro-flows. This is due to manufacturing techniques, which create shallow planar configurations, and the typically low Reynolds numbers of micro-flows.

The conditions that need to be satisfied are expressed in terms of h , the gap width between the plates, U , the characteristic velocity scale, l , the characteristic length scale in directions parallel to the plates, and ν , the kinematic viscosity. Specifically, the Reynolds number {\displaystyle \mathrm {Re} =Uh/\nu } need not always be small, but can be order unity or greater as long as it satisfies the condition {\displaystyle \mathrm {Re} (h/l)\ll 1.} In terms of the Reynolds number {\displaystyle \mathrm {Re} _{l}=Ul/\nu } based on l , the condition becomes {\displaystyle \mathrm {Re} _{l}(h/l)^{2}\ll 1.}

The governing equation of Hele-Shaw flows is identical to that of the inviscid potential flow and to the flow of fluid through a porous medium ( Darcy's law ). It thus permits visualization of this kind of flow in two dimensions. [ 3 ] [ 4 ] [ 5 ]

Let x , y be the directions parallel to the flat plates, and z the perpendicular direction, with h being the gap between the plates (at z = 0, h ) and l the relevant characteristic length scale in the xy -directions. Under the limits mentioned above, the incompressible Navier–Stokes equations , in the first approximation, become [ 6 ]

{\displaystyle {\begin{aligned}{\frac {\partial p}{\partial x}}=\mu {\frac {\partial ^{2}v_{x}}{\partial z^{2}}},\quad {\frac {\partial p}{\partial y}}&=\mu {\frac {\partial ^{2}v_{y}}{\partial z^{2}}},\quad {\frac {\partial p}{\partial z}}=0,\\{\frac {\partial v_{x}}{\partial x}}+{\frac {\partial v_{y}}{\partial y}}+{\frac {\partial v_{z}}{\partial z}}&=0,\\\end{aligned}}}

where {\displaystyle \mu } is the viscosity . These equations are similar to boundary layer equations, except that there are no non-linear terms. In the first approximation, after imposing the no-slip boundary conditions at z = 0, h , we then have

{\displaystyle v_{x}=-{\frac {1}{2\mu }}{\frac {\partial p}{\partial x}}z(h-z),\qquad v_{y}=-{\frac {1}{2\mu }}{\frac {\partial p}{\partial y}}z(h-z).}

The equation for p is obtained from the continuity equation. Integrating the continuity equation across the channel and imposing no-penetration boundary conditions at the walls, we have

{\displaystyle {\frac {\partial }{\partial x}}\int _{0}^{h}v_{x}\,dz+{\frac {\partial }{\partial y}}\int _{0}^{h}v_{y}\,dz=0,}

which leads to the Laplace equation

{\displaystyle {\frac {\partial ^{2}p}{\partial x^{2}}}+{\frac {\partial ^{2}p}{\partial y^{2}}}=0.}

This equation is supplemented by appropriate boundary conditions. For example, no-penetration boundary conditions on the side walls become {\displaystyle {\mathbf {\nabla } }p\cdot \mathbf {n} =0} , where {\displaystyle \mathbf {n} } is a unit vector perpendicular to the side wall (note that on the side walls, no-slip boundary conditions cannot be imposed).

The boundaries may also be regions exposed to constant pressure, in which case a Dirichlet boundary condition for p is appropriate. Similarly, periodic boundary conditions can also be used. It can also be noted that the vertical velocity component in the first approximation is {\displaystyle v_{z}=0} , which follows from the continuity equation. While the velocity magnitude {\displaystyle {\sqrt {v_{x}^{2}+v_{y}^{2}}}} varies in the z direction, the velocity-vector direction {\displaystyle \tan ^{-1}(v_{y}/v_{x})} is independent of the z direction; that is to say, streamline patterns at each level are similar. The vorticity vector {\displaystyle {\boldsymbol {\omega }}} has components only in the xy -plane. [ 6 ] Since {\displaystyle \omega _{z}=0} , the streamline patterns in the xy -plane thus correspond to potential flow (irrotational flow). Unlike potential flow , here the circulation {\displaystyle \Gamma } around any closed contour {\displaystyle C} (parallel to the xy -plane), whether it encloses a solid object or not, is zero:

{\displaystyle \Gamma =\oint _{C}(v_{x}\,dx+v_{y}\,dy)=-{\frac {z(h-z)}{2\mu }}\oint _{C}\nabla p\cdot d\mathbf {l} =0,}

where the last integral is set to zero because p is a single-valued function and the integration is done over a closed contour.

In a Hele-Shaw channel, one can define the depth-averaged version of any physical quantity, say {\displaystyle \varphi } , by

{\displaystyle \langle \varphi \rangle ={\frac {1}{h}}\int _{0}^{h}\varphi \,dz.}

Then the two-dimensional depth-averaged velocity vector {\displaystyle \mathbf {u} \equiv \langle \mathbf {v} _{xy}\rangle } , where {\displaystyle \mathbf {v} _{xy}=(v_{x},v_{y})} , satisfies Darcy's law ,

{\displaystyle \mathbf {u} =-{\frac {h^{2}}{12\mu }}\nabla p.}

Further, {\displaystyle \langle {\boldsymbol {\omega }}\rangle =0.}

The term Hele-Shaw cell is commonly used for cases in which a fluid is injected into the shallow geometry from above or below the geometry, and when the fluid is bounded by another liquid or gas. [ 7 ] For such flows the boundary conditions are defined by pressures and surface tensions.
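As a rough numerical illustration of the depth-averaged description above, the following sketch solves the Laplace equation for the pressure in a rectangular Hele-Shaw cell with prescribed inlet and outlet pressures and no-penetration side walls, then recovers the depth-averaged velocity from Darcy's law; the grid, gap width, viscosity and pressure values are all illustrative assumptions.

import numpy as np

nx, ny = 60, 30
dx = dy = 1.0e-3            # grid spacing [m] (assumption)
h, mu = 0.5e-3, 1.0e-3      # gap width [m] and viscosity [Pa s] (assumptions)

p = np.zeros((ny, nx))
p[:, 0], p[:, -1] = 100.0, 0.0          # Dirichlet inlet/outlet pressures [Pa]

for _ in range(20000):                  # Jacobi iteration for d2p/dx2 + d2p/dy2 = 0
    p[1:-1, 1:-1] = 0.25 * (p[1:-1, 2:] + p[1:-1, :-2] + p[2:, 1:-1] + p[:-2, 1:-1])
    p[0, 1:-1] = p[1, 1:-1]             # side walls: dp/dn = 0 (no penetration)
    p[-1, 1:-1] = p[-2, 1:-1]

dpdy, dpdx = np.gradient(p, dy, dx)     # pressure gradient on the grid
ux = -(h**2 / (12 * mu)) * dpdx         # Darcy's law for the depth-averaged velocity
uy = -(h**2 / (12 * mu)) * dpdy
print(ux.mean(), uy.mean())             # nearly uniform flow from inlet to outlet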
https://en.wikipedia.org/wiki/Hele-Shaw_flow
Ahmadu Bello University Helen Nosakhare Asemota is a biochemist and agricultural biotechnologist based in Jamaica . She is Professor of Biochemistry and Molecular Biology and Director of the Biotechnology Centre at the University of the West Indies at Mona, Jamaica . Her research develops biotechnology strategies for production and improvement of tropical tuber crops. She is notable for leading large international biotechnology collaborations, as well as for acting as an international biotechnology consultant for the United Nations ( UN ). Asemota was born in Nigeria . [ 1 ] She earned a Bachelor of Science from the University of Benin , a Master of Science from Ahmadu Bello University , and a Doctor of Philosophy from the University of Benin/ Frankfurt University . [ 1 ] [ 2 ] In 1990, Asemota moved to Jamaica to take up a position as Associate Honorary Lecturer at the University of the West Indies . [ 3 ] [ 4 ] She was appointed Lecturer in 1996, and promoted to Senior Lecturer in Biochemistry and Biotechnology in 1998. In 2003, Asemota was promoted to Professor of Biochemistry and Molecular Biology. [ 5 ] She was full professor at the Shaw University , North Carolina from 2005 to 2012. [ 6 ] During this time she was Head of the Nanobiology Division of the Shaw Nanotechnology Initiative at the Nanoscience and Nanotechnology Research Centre (NNRC) from 2005 to 2009, Nature Sciences Biological Sciences' Program Coordinator from 2009 to 2010, and Chairman for the Shaw University Institutional Review Board (IRB) from 2006 to 2009, Senator for the Shaw Faculty Senate between 2007 and 2012, Core Director of the Faculty Research Development at the NIH- Research Infrastructure for Minority Institutions and as IRB Administrator between 2010 and 2012. [ 6 ] In 2013, Asemota was appointed Director of the Biotechnology Centre, a research unit at the University of the West Indies with a focus on biotechnology-based enterprises. [ 3 ] [ 7 ] [ 8 ] At the time of her promotion to Professor in 2003, Asemota was a member of the Caribbean Biotechnology Network, the Biochemical Society of Nigeria, the Third World Organisation for Women in Science, and the Nigerian Association of Women in Science, Technology & Mathematics. She was a Fellow of the American Biographical Institute, a member of the National Geographic Society, the Nigerian Institute of Food Science and Technology, and the New York Academy of Science. [ 5 ] Asemota conducted PhD research at the University of Benin and Frankfurt University, where she studied the molecular genetics and metabolism of the browning of yam tubers in storage. [ 9 ] Upon moving to Jamaica, prompted by ongoing problems with production and storage in the Jamaican yam industry, Asemota continued researching yams, founding the multidisciplinary UWI Yam Biotechnology Project. [ 5 ] [ 7 ] [ 9 ] Initially, Asemota investigated the biochemical effects of removing yam heads at harvest, [ 10 ] a common farming practice in Jamaica. Over the ensuing decades, Asemota's research team has investigated many aspects of yam biochemistry and physiology, from DNA fingerprinting studies of Jamaican yam varieties to the carbohydrate metabolism of yam tubers in storage. [ 9 ] In addition to her work on yam production and storage, Asemota has studied the metabolic effects of yams and yam-derived products on animal models of diseases such as diabetes. 
[ 11 ] More recently, the Yam Biotechnology Project has moved towards a 'farm to finished products' strategy, with the goal of producing yam-based food, [ 12 ] medical, [ 12 ] [ 13 ] and biofuel products to benefit the Jamaican economy. [ 9 ] [ 14 ] She has also applied similar research techniques to other types of tropical crop. [ 9 ] [ 15 ] Asemota has served as Principal Investigator for the National Institute of Health (NIH) and National Science Foundation (NSF) grants. [ 16 ] She has lectured undergraduates, postgraduates and postdoctoral levels worldwide, and has supervised or advised at least 30 postgraduate students in Biochemistry or Biotechnology. [ 6 ] She has over 250 publications, [ 17 ] and owns four patents from her research. [ 16 ] Asemota has undertaken outreach research with Jamaican farmers, experimenting with lab-derived yam planting materials in their fields, and reviving 'threatened' Jamaican yam varieties. [ 4 ] Asemota has a long history of international consultancy in matters of food security and biotechnology. She was an international technical expert for the European Union (1994-1995), and served the United Nations Technical Cooperation among Developing Countries (TCDC) Programmes as International Technical Cooperation Programmes (TCP). [ 5 ] [ 7 ] [ 17 ] She served as an International Biotechnology consultant to the United Nations Food and Agriculture Organisation from 2001. [ 18 ] This included consulting for the International Technical Cooperation for Syria with the Developing Countries Programmes in 2001 and as technical lead on food sufficiency for the National Seed Potato Production Programme in the Republic of Tajikistan between 2003 and 2007. [ 17 ] She periodically serves the UN-FAO Seed Production Programmes as an International Consultant. [ 5 ]
https://en.wikipedia.org/wiki/Helen_Asemota
The Helen B. Warner Prize for Astronomy is awarded annually by the American Astronomical Society to a young astronomer (aged less than 36, or within 8 years of the award of their PhD) for a significant contribution to observational or theoretical astronomy . [ 1 ]
https://en.wikipedia.org/wiki/Helen_B._Warner_Prize_for_Astronomy
Helen Hamilton Gardener (1853–1925), born Alice Chenoweth, was an American author, rationalist public intellectual , political activist , and government functionary. Gardener produced many lectures, articles, and books during the 1880s and 1890s and is remembered today for her role in the freethought and women's suffrage movements and for her place as a pioneering woman in the top echelon of the American civil service . Alice Chenoweth, best remembered by her pen name , Helen Hamilton Gardener, was born near Winchester , Virginia , on January 21, 1853. She was the youngest of six children born to Rev. Alfred Griffith Chenoweth, an Episcopalian minister who had become a Methodist circuit rider , and his wife, the former Katherine A. Peel. [ 1 ] The Chenoweth family traced its American antecedents back to a certain Arthur Chenoweth who had arrived in the fledgling Province of Maryland in 1635 to receive a grant of land for honorable service to Lord Baltimore . [ 2 ] The Chenoweth family subsequently made their way to Virginia , where Alice's father had inherited slaves . [ 2 ] As objectors to the institution of slavery, the Chenoweths manumitted their slaves in 1853 over the existing legal obstacles to that course of action. [ 2 ] The family moved to Washington, D.C. shortly thereafter. [ 2 ] This was followed in 1855 by a move to Greencastle, Indiana . [ 1 ] During the American Civil War , Chenoweth's father served the Federal cause, returning to the enemy state of Virginia to serve as a guide for Union troops there. [ 1 ] Alice Chenoweth received an excellent education and showed an interest in and aptitude for science and sociology . [ 2 ] She associated with older people as a girl and read extensively on serious themes. [ 2 ] She studied with tutors and attended various local schools, moving to Cincinnati, Ohio in her late teen years, where she graduated high school. [ 1 ] After leaving high school Chenoweth enrolled in the Cincinnati Normal School , from which she graduated in June 1873. [ 1 ] Chenoweth worked as a schoolteacher for two years, giving up the profession (as was generally the case in the day) when she married in 1875. [ 1 ] Her first husband, Charles Selden Smart, was nearly two decades her senior and served at the time as Ohio State School Commissioner. [ 1 ] The couple moved to New York City in 1880, where Charles entered the insurance business while Alice attended biology courses at Columbia University , albeit not in pursuit of a degree. [ 1 ] Chenoweth-Smart also lectured on sociology as part of the adult education program at the Brooklyn Institute of Arts and Sciences and tried her hand at writing for local newspapers under a variety of masculine pseudonyms. [ 1 ] During her first years in New York City Chenoweth-Smart made the acquaintance of Robert G. Ingersoll , the leading rationalist orator of the day. [ 1 ] At Ingersoll's persistent request in January 1884 Alice Chenoweth-Smart began herself to deliver a series of public lectures, [ 1 ] talks dealing with such skeptical themes as "Men, Women, and Gods," "Historical Facts and Theological Fictions," "By Divine Right," and "Rome or Reason." [ 2 ] Many of these were collected into her first book, Men, Women, and Gods, and Other Lectures, which was issued in hard covers by the radical freethought publication, The Truth Seeker . 
[ 2 ] Chenoweth-Smart published this book under the pen name "Helen Hamilton Gardener" — a pseudonym which she would use professionally for the rest of her life, eventually adopting this as her own legal name. [ 1 ] A number of short stories and essays by Gardener followed over the second half of the 1880s, pieces which were published in a number of leading magazines of the day. [ 2 ] Throughout the period, Gardener's interest in feminism grew. [ 1 ] Gardener's initial public lectures attempted particularly to demonstrate a linkage between Christianity and the subjugation of women and in 1887 the published views of former Surgeon General of the United States William A. Hammond attesting a neurological basis for female inferiority moved Gardener to even greater concern with the topic. [ 1 ] Gardener began working with neurologist Edward C. Spitzka to refute Hammond's thesis of inherent inferiority of the female brain. [ 1 ] Gardener ultimately produced a paper entitled "Sex in Brain" that was read to the 1888 convention of the International Council of Women in Washington, DC. [ 1 ] In this work, Gardener argued that no connection between brain weight and intellectual capacity had been established and challenged Hammond's methodology of comparing the prized specimen brains of leading men with those of indigent women. [ 1 ] Gardener emerged from the Hammond controversy as a leading public speaker for women's rights. In 1893 she would deliver three more scholarly papers on feminist themes to the Congress of Representative Women held in Chicago in conjunction with the World's Columbian Exposition . [ 1 ] During the early 1890s, Gardener emerged as a novelist. A pair of books were written which together dealt with the theme of the double standard of morality between the sexes — Is This Your Son, My Lord? (1891) and Pray You Sir, Whose Daughter? [ 2 ] Both were published by a leading liberal political magazine of the day, The Arena . Is This Your Son, My Lord? was sharply critical of the low age of consent then in force and the ruin of innocence by the lustful desires of outwardly respectable men. [ 1 ] The book sold an impressive 25,000 copies in its first five months after publication and elicited shocked commentary from critics and readers alike. [ 1 ] In 1894, Gardener published a slightly fictionalized account of her father's life entitled An Unofficial Patriot. [ 1 ] The book's protagonist, patterned after her father, represented a positive male character at variance with those which dominated Gardener's earlier books. [ 1 ] The book was critically well received and served as the basis for an 1899 play by playwright James A. Herne , The Reverend Griffith Davenport. . [ 1 ] In 1907, Gardener returned to Washington, D.C., where she took up the suffrage cause. In 1913 she was appointed a position to the Congressional Committee of the National American Woman Suffrage Association , becoming, six years later, its vice-chairwoman; she was elected as one of NAWSA's vice-presidents as chief liaison under the Woodrow Wilson administration, in 1917. In 1920, Wilson appointed her to the United States Civil Service Commission , the first woman to occupy such a high federal position. Gardener died in July 1925 in Washington, D.C. of chronic myocarditis . [ 1 ] Keeping with her interest in the topic, Gardener's brain was donated for scientific study before her body was cremated and its ashes interred at Arlington National Cemetery beside the grave of her second husband. 
[ 1 ] Gardener's papers are housed at the Schlesinger Library of Harvard University at Cambridge, Massachusetts as part of its Woman's Rights Collection. An online finding aid of this material, which is encompassed in eight archival folders, is available. [ 3 ] This material has been microfilmed by University Publications of America . [ 3 ] Gardener's brain is part of the Wilder Brain Collection at Cornell University . [ 4 ]
https://en.wikipedia.org/wiki/Helen_H._Gardener
The Helferich method may refer to:
https://en.wikipedia.org/wiki/Helferich_method
The heliacal rising ( / h ɪ ˈ l aɪ . ə k əl / hih- LY -ə-kəl ) [ 1 ] [ 2 ] [ 3 ] of a star or a planet occurs annually when it first becomes visible above the eastern horizon at dawn just before sunrise (thus becoming "the morning star "). [ 4 ] A heliacal rising marks the time when a star or planet becomes visible for the first time again in the night sky after having set with the Sun at the western horizon in a previous sunset (its heliacal setting ), having since been in the sky only during daytime , obscured by sunlight. Historically, the most important such rising is that of Sirius , which was an important feature of the Egyptian calendar and astronomical development . The rising of the Pleiades heralded the start of the Ancient Greek sailing season, using celestial navigation , [ 5 ] as well as the farming season (attested by Hesiod in his Works and Days ). Heliacal rising is one of several types of risings and settings; these are mostly grouped into morning and evening risings and settings of objects in the sky. Culminations in the evening and in the morning are set apart by half a year, whereas evening and morning risings and settings are set apart by half a year only at the equator. Relative to the stars, the Sun appears to drift eastward about one degree per day along a path called the ecliptic , because there are 360 degrees in any complete revolution (circle), which takes about 365 days in the case of one revolution of the Earth around the Sun. Any given "distant" star in the belt of the ecliptic will be visible in the night sky for only half of the year. During the other half of the year it is above the horizon only during the day, when it cannot be seen because the sunlight is too bright. The star's heliacal rising will occur when the Earth has moved to a point in its orbit where the star appears on the eastern horizon at dawn. Each day after the heliacal rising, the star will rise slightly earlier and remain visible for longer before the light from the rising sun overwhelms it. Over the following days the star will move further and further westward (about one degree per day) relative to the Sun, until eventually it is no longer visible in the sky at sunrise because it has already set below the western horizon. This is called the acronycal setting . [ 6 ] The same star will reappear in the eastern sky at dawn approximately one year after its previous heliacal rising. For stars near the ecliptic , the small difference between the solar and sidereal years due to axial precession will cause their heliacal rising to recur about one sidereal year (about 365.2564 days) later, though this also depends on the star's proper motion . For stars far from the ecliptic, the period is somewhat different and varies slowly, but in any case the heliacal rising will move all the way through the zodiac in about 26,000 years due to precession of the equinoxes . Because the heliacal rising depends on the observation of the object, its exact timing can be dependent on weather conditions. [ 7 ] Heliacal phenomena and their use throughout history have made them useful points of reference in archeoastronomy . [ 8 ] Some stars, when viewed from latitudes not at the equator , do not rise or set. These are circumpolar stars , which are either always above the horizon or never visible. For example, the North Star (Polaris) is not visible in Australia and the Southern Cross is not seen in Europe, because they always stay below the respective horizons.
The term circumpolar is somewhat localised as between the Tropic of Cancer and the Equator, the Southern polar constellations have a brief spell of annual visibility (thus "heliacal" rising and "cosmic" setting) and the same applies as to the other polar constellations in respect of the reverse tropic. Constellations containing stars that rise and set were incorporated into early calendars or zodiacs . The Sumerians , Babylonians , Egyptians , and Greeks all used the heliacal risings of various stars for the timing of agricultural activities. Because of its position about 40° off the ecliptic, the heliacal risings of the bright star Sirius in Ancient Egypt occurred not over a period of exactly one sidereal year but over a period called the " Sothic year " (from "Sothis", the name for the star Sirius). The Sothic year was about a minute longer than a Julian year of 365.25 days. [ 9 ] Since the development of civilization , this has occurred at Cairo approximately on July 19 on the Julian calendar . [ 10 ] [ a ] Its returns also roughly corresponded to the onset of the annual flooding of the Nile , although the flooding is based on the tropical year and so would occur about three quarters of a day earlier per century in the Julian or Sothic year. (July 19, 1000 BC in the Julian Calendar is July 10 in the proleptic Gregorian Calendar . At that time, the sun would be somewhere near Regulus in Leo , where it is around August 21 in the 2020s.) The ancient Egyptians appear to have constructed their 365-day civil calendar at a time when Wep Renpet , its New Year , corresponded with Sirius's return to the night sky. [ 9 ] Although this calendar's lack of leap years caused the event to shift one day every four years or so, astronomical records of this displacement led to the discovery of the Sothic cycle and, later, the establishment of the more accurate Julian and Alexandrian calendars . The Egyptians also devised a method of telling the time at night based on the heliacal risings of 36 decan stars , one for each 10° segment of the 360° circle of the zodiac and corresponding to the ten-day "weeks" of their civil calendar. To the Māori of New Zealand , the Pleiades are called Matariki , and their heliacal rising signifies the beginning of the new year (around June). The Mapuche of South America called the Pleiades Ngauponi which in the vicinity of the we tripantu (Mapuche new year) will disappear by the west, lafkenmapu or ngulumapu , appearing at dawn to the East, a few days before the birth of new life in nature. Heliacal rising of Ngauponi, i.e. appearance of the Pleiades by the horizon over an hour before the sun approximately 12 days before the winter solstice, announced we tripantu . When a planet has a heliacal rising, there is a conjunction with the sun beforehand. Depending on the type of conjunction, there may be a syzygy , eclipse , transit , or occultation of the sun. The rising of a planet above the eastern horizon at sunset is called its acronycal rising , which for a superior planet signifies an opposition , another type of syzygy . When the Moon has an acronycal rising, it will occur near full moon and thus, two or three times a year, a noticeable lunar eclipse . Cosmic(al) can refer to rising with sunrise or setting at sunset, or the first setting at morning twilight. [ 12 ] Risings and settings are furthermore differentiated between apparent (the above discussed) and actual or true risings or settings. The use of the terms cosmical and acronycal is not consistent. 
[ 13 ] [ 14 ]
https://en.wikipedia.org/wiki/Heliacal_rising
Helicase-dependent amplification (HDA) is a method for in vitro DNA amplification (like the polymerase chain reaction ) that takes place at a constant temperature. The polymerase chain reaction is the most widely used method for in vitro DNA amplification for purposes of molecular biology and biomedical research. [ 1 ] This process, which involves the separation of the double-stranded DNA at high heat into single strands (the denaturation step, typically achieved at 95–97 °C), annealing of the primers to the single-stranded DNA (the annealing step), and copying of the single strands to create new double-stranded DNA (the extension step, which requires the DNA polymerase), requires the reaction to be done in a thermal cycler . These bench-top machines are large, expensive and costly to run and maintain, limiting the potential applications of DNA amplification in situations outside the laboratory (e.g., in the identification of potentially hazardous micro-organisms at the scene of investigation, or at the point of care of a patient). Although PCR is usually associated with thermal cycling, the original patent by Mullis et al. [ 2 ] disclosed the use of a helicase as a means for denaturation of double-stranded DNA, thereby including isothermal nucleic acid amplification. In vivo , DNA is replicated by DNA polymerases with various accessory proteins, including a DNA helicase that acts to separate the DNA by unwinding the DNA double helix. [ 3 ] HDA was developed from this concept, using a helicase (an enzyme ) to denature the DNA. Strands of double-stranded DNA are first separated by a DNA helicase and coated by single-stranded DNA (ssDNA)-binding proteins. In the second step, two sequence-specific primers hybridise to each border of the DNA template. DNA polymerases are then used to extend the primers annealed to the templates to produce a double-stranded DNA, and the two newly synthesized DNA products are then used as substrates by DNA helicases, entering the next round of the reaction. Thus, a simultaneous chain reaction develops, resulting in exponential amplification of the selected target sequence (see Vincent et al., 2004 [ 4 ] for a schematic diagram). Since the publication of its discovery, HDA technology has been used for a "simple, easy to adapt nucleic acid test for the detection of Clostridioides difficile ". [ 5 ] Other applications include the rapid detection of Staphylococcus aureus by the amplification and detection of a short DNA sequence specific to the bacterium. [ 6 ] The advantage of HDA is that it provides a rapid method of nucleic acid amplification of a specific target at a constant (isothermal) temperature that does not require a thermal cycler. However, the optimisation of primers and sometimes buffers is required beforehand by the researcher. Normally primer and buffer optimisation is tested and achieved through PCR, raising the question of the need to spend extra on a separate system to do the actual amplification. Despite the selling point that HDA negates the use of a thermal cycler and therefore allows research to be conducted in the field, much of the work required to detect potentially hazardous microorganisms is carried out in a research/hospital lab setting regardless. At present, mass diagnoses from a great number of samples cannot yet be achieved by HDA, whereas PCR reactions carried out in a thermal cycler that can hold multi-well sample plates allow for the amplification and detection of the intended DNA target from a maximum of 96 samples.
The reagents for HDA are also relatively expensive compared with those for PCR, more so since they come as a ready-made kit.
https://en.wikipedia.org/wiki/Helicase-dependent_amplification
Helicopter dynamics is a field within aerospace engineering concerned with theoretical and practical aspects of helicopter flight. It comprises helicopter aerodynamics, stability, control, structural dynamics, vibration, and aeroelastic and aeromechanical stability. [ 1 ] By studying the forces in helicopter flight, improved helicopter designs can be made, though due to the scale and speed of the dynamics, physical testing is non-trivial and expensive. In 2013, a combination of stereophotogrammetry and rigid-body correction in post processing was shown to be a valid tool for performing these studies, and the dynamics of a Robinson R44 helicopter were measured during hovering flight, to determine blade dynamics (e.g. harmonics) and the deflection profile. [ 2 ]
https://en.wikipedia.org/wiki/Helicopter_dynamics
Helimagnetism is a form of magnetic ordering where spins of neighbouring magnetic moments arrange themselves in a spiral or helical pattern, with a characteristic turn angle of somewhere between 0 and 180 degrees. It results from the competition between ferromagnetic and antiferromagnetic exchange interactions. [ 1 ] It is possible to view ferromagnetism and antiferromagnetism as helimagnetic structures with characteristic turn angles of 0 and 180 degrees respectively. Helimagnetic order breaks spatial inversion symmetry , as it can be either left-handed or right-handed in nature. Strictly speaking, helimagnets have no permanent magnetic moment, and as such are sometimes considered a complicated type of antiferromagnet . This distinguishes helimagnets from conical magnets (e.g. Holmium below 20 K [ 2 ] ), which have a spiral modulation in addition to a permanent magnetic moment. Helimagnets can be characterized by the distance it takes for the spiral to complete one turn. In analogy to the pitch of a screw thread , the period of repetition is known as the "pitch" of the helimagnet. If the spiral's period is some rational multiple of the crystal's unit cell, the structure is commensurate , like the structure originally proposed for MnO 2 . [ 3 ] On the other hand, if the multiple is irrational, the magnetism is incommensurate, like the updated MnO 2 structure. [ 4 ] Helimagnetism was first proposed in 1959, as an explanation of the magnetic structure of manganese dioxide . [ 3 ] Initially applied to neutron diffraction , it has since been observed more directly by Lorentz electron microscopy. [ 5 ] Some helimagnetic structures are reported to be stable up to room temperature. [ 6 ] Just as ordinary ferromagnets have domain walls that separate individual magnetic domains, helimagnets have their own classes of domain walls, which are characterized by topological charge . [ 7 ] Many helimagnets have a chiral cubic structure, such as the FeSi (B20) crystal structure type . In these materials, the combination of ferromagnetic exchange and the Dzyaloshinskii–Moriya interaction leads to helices with relatively long periods. Since the crystal structure is noncentrosymmetric even in the paramagnetic state, the magnetic transition to a helimagnetic state does not break inversion symmetry, and the direction of the spiral is locked to the crystal structure. On the other hand, helimagnetism in other materials can also be based on frustrated magnetism or the RKKY interaction . The result is that centrosymmetric structures like the MnP-type (B31) compounds can also exhibit double-helix helimagnetism in which both left- and right-handed spirals coexist. [ 8 ] For these itinerant helimagnets, the direction of the helicity can be controlled by applied electric currents and magnetic fields. [ 9 ]
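As a toy illustration of the turn angle and pitch described above, the following sketch (not from the article) generates a one-dimensional helical spin texture; the chain length and the 30-degree turn angle are arbitrary choices. A turn angle of 0 degrees reproduces a ferromagnet, 180 degrees an antiferromagnet, and intermediate angles a helix whose pitch, measured in lattice sites, is 360 divided by the turn angle.

import numpy as np

def helical_spins(n_sites, turn_angle_deg):
    """Unit moments spiralling in the xy-plane as one moves along the chain axis."""
    phi = np.deg2rad(turn_angle_deg) * np.arange(n_sites)
    return np.column_stack([np.cos(phi), np.sin(phi), np.zeros(n_sites)])

spins = helical_spins(n_sites=12, turn_angle_deg=30.0)
print(360.0 / 30.0)            # pitch: number of sites per full turn of the spiral
print(spins.mean(axis=0))      # net moment is ~0 over a whole number of turns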
https://en.wikipedia.org/wiki/Helimagnetism
The Heliocentric Julian Date ( HJD ) is the Julian Date (JD) corrected for differences in the Earth 's position with respect to the Sun . When timing events that occur beyond the Solar System , due to the finite speed of light , the time the event is observed depends on the changing position of the observer in the Solar System. Before multiple observations can be combined, they must be reduced to a common, fixed, reference location. This correction also depends on the direction to the object or event being timed. The correction is zero (HJD = JD) for objects at the poles of the ecliptic . Elsewhere, it is approximately an annual sine curve, and the highest amplitude occurs on the ecliptic. The maximum correction corresponds to the time in which light travels the distance from the Sun to the Earth, i.e. ±8.3 min (500 s, 0.0058 days).

JD and HJD are defined independent of the time standard . Rather, JD can be expressed as e.g. UTC, UT1 , TT or TAI . The differences between these time standards are of the order of a minute, so that for minute accuracy of timings the standard used has to be stated. The HJD correction involves the heliocentric position of the Earth, which is expressed in TT. While the practical choice may be UTC, the natural choice is TT.

Since the Sun itself orbits around the barycentre of the Solar System, the HJD correction is not actually to a fixed reference. The difference between correction to the heliocentre and to the barycentre is up to ±4 s. For second accuracy, the Barycentric Julian Date (BJD) should be calculated instead of the HJD.

The common formulation of the HJD correction assumes that the object is at infinite distance, certainly beyond the Solar System. The resulting error for Edgeworth-Kuiper Belt objects would be 5 s, and for objects in the main asteroid belt it would be 100 s. In this calculation, the Moon – which is closer than the Sun – can be wrongly placed on the far side of the Sun, resulting in an error of about 15 min.

In terms of the vector {\displaystyle {\vec {r}}} from the heliocentre to the observer, the unit vector {\displaystyle {\hat {n}}} from the observer toward the object or event, and the speed of light {\displaystyle c} :

{\displaystyle HJD=JD+{\frac {{\vec {r}}\cdot {\hat {n}}}{c}}}

When the scalar product is expressed in terms of the right ascension {\displaystyle \alpha } and declination {\displaystyle \delta } of the Sun (index {\displaystyle \odot } ) and of the extrasolar object, this becomes

{\displaystyle HJD=JD-{\frac {r}{c}}\cdot [\sin(\delta )\cdot \sin(\delta _{\odot })+\cos(\delta )\cdot \cos(\delta _{\odot })\cdot \cos(\alpha -\alpha _{\odot })]}

where {\displaystyle r} is the distance between Sun and observer. The same equation can be used with any astronomical coordinate system . In ecliptic coordinates the Sun is at latitude zero, so that

{\displaystyle HJD=JD-{\frac {r}{c}}\cdot \cos(\beta )\cdot \cos(\lambda -\lambda _{\odot })}
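The correction formula above is straightforward to evaluate. The following minimal sketch assumes the Sun's apparent right ascension, declination and distance from the observer are already known from an ephemeris; the numerical inputs are made up for illustration.

import numpy as np

AU_LIGHT_DAYS = 499.004784 / 86400.0    # light-travel time for one astronomical unit, in days

def hjd(jd, ra, dec, sun_ra, sun_dec, r_au=1.0):
    """HJD = JD - (r/c)[sin(dec)sin(dec_sun) + cos(dec)cos(dec_sun)cos(ra - ra_sun)].
    Angles in degrees, r in astronomical units, result in days."""
    dec, sun_dec, ra, sun_ra = map(np.deg2rad, (dec, sun_dec, ra, sun_ra))
    proj = np.sin(dec) * np.sin(sun_dec) + np.cos(dec) * np.cos(sun_dec) * np.cos(ra - sun_ra)
    return jd - r_au * AU_LIGHT_DAYS * proj

# A target 90 degrees from the Sun gets no correction; an anti-solar target gets about +8.3 min.
print(hjd(2459000.5, ra=0.0, dec=0.0, sun_ra=90.0, sun_dec=0.0) - 2459000.5)    # ~0.0
print(hjd(2459000.5, ra=180.0, dec=0.0, sun_ra=0.0, sun_dec=0.0) - 2459000.5)   # ~ +0.0058 days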
https://en.wikipedia.org/wiki/Heliocentric_Julian_Day
Heliocentrism [ a ] (also known as the heliocentric model ) is a superseded astronomical model in which the Earth and planets orbit around the Sun at the center of the universe . Historically, heliocentrism was opposed to geocentrism , which placed the Earth at the center. The notion that the Earth revolves around the Sun had been proposed as early as the 3rd century BC by Aristarchus of Samos , [ 1 ] who had been influenced by a concept presented by Philolaus of Croton (c. 470 – 385 BC). In the 5th century BC the Greek philosophers Philolaus and Hicetas had the thought on different occasions that the Earth was spherical and revolving around a "mystical" central fire , and that this fire regulated the universe. [ 2 ] In medieval Europe , however, Aristarchus' heliocentrism attracted little attention—possibly because of the loss of scientific works of the Hellenistic period . [ b ] It was not until the 16th century that a mathematical model of a heliocentric system was presented by the Renaissance mathematician, astronomer, and Catholic cleric, Nicolaus Copernicus , leading to the Copernican Revolution . In 1576, Thomas Digges published a modified Copernican system. His modifications are close to modern observations. In the following century, Johannes Kepler introduced elliptical orbits , and Galileo Galilei presented supporting observations made using a telescope . With the observations of William Herschel , Friedrich Bessel , and other astronomers, it was realized that the Sun, while near the barycenter of the Solar System , was not central in the universe. Modern astronomy does not distinguish any center. While the sphericity of the Earth was widely recognized in Greco-Roman astronomy from at least the 4th century BC, [ 4 ] the Earth's daily rotation and yearly orbit around the Sun was never universally accepted until the Copernican Revolution . While a moving Earth was proposed at least from the 4th century BC in Pythagoreanism , and a fully developed heliocentric model was developed by Aristarchus of Samos in the 3rd century BC, these ideas were not successful in replacing the view of a static spherical Earth, and from the 2nd century AD the predominant model, which would be inherited by medieval astronomy, was the geocentric model described in Ptolemy 's Almagest . The Ptolemaic system was a sophisticated astronomical system that managed to calculate the positions for the planets to a fair degree of accuracy. [ 5 ] Ptolemy himself, in his Almagest , says that any model for describing the motions of the planets is merely a mathematical device, and since there is no actual way to know which is true, the simplest model that gets the right numbers should be used. [ 6 ] However, he rejected the idea of a spinning Earth as absurd as he believed it would create huge winds. Within his model the distances of the Moon , Sun , planets and stars could be determined by treating orbits' celestial spheres as contiguous realities, which gave the stars' distance as less than 20 Astronomical Units , [ 7 ] a regression, since Aristarchus of Samos 's heliocentric scheme had centuries earlier necessarily placed the stars at least two orders of magnitude more distant. Problems with Ptolemy's system were well recognized in medieval astronomy , and an increasing effort to criticize and improve it in the late medieval period eventually led to the Copernican heliocentrism developed in Renaissance astronomy . 
The first non-geocentric model of the universe was proposed by the Pythagorean philosopher Philolaus (d. 390 BC), who taught that at the center of the universe was a "central fire", around which the Earth , Sun , Moon and planets revolved in uniform circular motion . This system postulated the existence of a counter-earth collinear with the Earth and central fire, with the same period of revolution around the central fire as the Earth. The Sun revolved around the central fire once a year, and the stars were stationary. The Earth maintained the same hidden face towards the central fire, rendering both it and the "counter-earth" invisible from Earth. The Pythagorean concept of uniform circular motion remained unchallenged for approximately the next 2000 years, and it was to the Pythagoreans that Copernicus referred to show that the notion of a moving Earth was neither new nor revolutionary. [ 8 ] Kepler gave an alternative explanation of the Pythagoreans' "central fire" as the Sun, " as most sects purposely hid[e] their teachings ". [ 9 ] Heraclides of Pontus (4th century BC) said that the rotation of the Earth explained the apparent daily motion of the celestial sphere. It used to be thought that he believed Mercury and Venus to revolve around the Sun, which in turn (along with the other planets) revolves around the Earth. [ 10 ] Macrobius (AD 395—423) later described this as the "Egyptian System," stating that "it did not escape the skill of the Egyptians ," though there is no other evidence it was known in ancient Egypt . [ 11 ] [ 12 ] The first person known to have proposed a heliocentric system was Aristarchus of Samos ( c. 270 BC) . Like his contemporary Eratosthenes , Aristarchus calculated the size of the Earth and measured the sizes and distances of the Sun and Moon . From his estimates, he concluded that the Sun was six to seven times wider than the Earth, and thought that the larger object would have the most attractive force. His writings on the heliocentric system are lost, but some information about them is known from a brief description by his contemporary, Archimedes , and from scattered references by later writers. Archimedes' description of Aristarchus' theory is given in the former's book, The Sand Reckoner . The entire description comprises just three sentences, which Thomas Heath translates as follows: [ 13 ] You [King Gelon] are aware that "universe" is the name given by most astronomers to the sphere, the centre of which is the centre of the earth, while its radius is equal to the straight line between the centre of the sun and the centre of the earth. This is the common account (τά γραφόμενα), as you have heard from astronomers. But Aristarchus brought out a book consisting of certain hypotheses , wherein it appears, as a consequence of the assumptions made, that the universe is many times greater than the "universe" just mentioned. His hypotheses are that the fixed stars and the sun remain unmoved, that the earth revolves about the sun on the circumference of a circle, the sun lying in the middle of the orbit , and that the sphere of the fixed stars, situated about the same centre as the sun, is so great that the circle in which he supposes the earth to revolve bears such a proportion to the distance of the fixed stars as the centre of the sphere bears to its surface. Aristarchus presumably took the stars to be very far away because he was aware that their parallax [ 14 ] would otherwise be observed over the course of a year. 
The stars are in fact so far away that stellar parallax only became detectable when sufficiently powerful telescopes had been developed in the 1830s . No references to Aristarchus' heliocentrism are known in any other writings from before the common era . The earliest of the handful of other ancient references occur in two passages from the writings of Plutarch . These mention one detail not stated explicitly in Archimedes' account [ 15 ] —namely, that Aristarchus' theory had the Earth rotating on an axis. The first of these reference occurs in On the Face in the Orb of the Moon : [ 16 ] Only do not, my good fellow, enter an action against me for impiety in the style of Cleanthes , who thought it was the duty of Greeks to indict Aristarchus of Samos on the charge of impiety for putting in motion the Hearth of the Universe, this being the effect of his attempt to save the phenomena by supposing the heaven to remain at rest and the earth to revolve in an oblique circle, while it rotates, at the same time, about its own axis. Only scattered fragments of Cleanthes' writings have survived in quotations by other writers, but in Lives and Opinions of Eminent Philosophers , Diogenes Laërtius lists A reply to Aristarchus (Πρὸς Ἀρίσταρχον) as one of Cleanthes' works, [ 17 ] and some scholars [ 18 ] have suggested that this might have been where Cleanthes had accused Aristarchus of impiety . The second of the references by Plutarch is in his Platonic Questions : [ 19 ] Did Plato put the earth in motion, as he did the sun, the moon, and the five planets, which he called the instruments of time on account of their turnings, and was it necessary to conceive that the earth "which is globed about the axis stretched from pole to pole through the whole universe" was not represented as being held together and at rest, but as turning and revolving (στρεφομένην καὶ ἀνειλουμένην), as Aristarchus and Seleucus afterwards maintained that it did, the former stating this as only a hypothesis (ὑποτιθέμενος μόνον), the latter as a definite opinion (καὶ ἀποφαινόμενος)? The remaining references to Aristarchus' heliocentrism are extremely brief, and provide no more information beyond what can be gleaned from those already cited. Ones which mention Aristarchus explicitly by name occur in Aëtius ' Opinions of the Philosophers , Sextus Empiricus ' Against the Mathematicians , [ 19 ] and an anonymous scholiast to Aristotle. [ 20 ] Another passage in Aëtius' Opinions of the Philosophers reports that Seleucus the astronomer had affirmed the Earth's motion, but does not mention Aristarchus. [ 19 ] Since Plutarch mentions the "followers of Aristarchus" in passing, it is likely that there were other astronomers in the Classical period who also espoused heliocentrism, but whose work was lost. [ 21 ] The only other astronomer from antiquity known by name who is known to have supported Aristarchus' heliocentric model was Seleucus of Seleucia (b. 190 BC), a Hellenistic astronomer who flourished a century after Aristarchus in the Seleucid Empire . [ 22 ] Seleucus was a proponent of the heliocentric system of Aristarchus. [ 23 ] Seleucus may have proved the heliocentric theory by determining the constants of a geometric model for the heliocentric theory and developing methods to compute planetary positions using this model. He may have used early trigonometric methods that were available in his time, as he was a contemporary of Hipparchus . 
[ 24 ] A fragment of a work by Seleucus has survived in Arabic translation, which was referred to by Rhazes (b. 865). [ 25 ] Alternatively, his explanation may have involved the phenomenon of tides , [ 26 ] which he supposedly theorized to be caused by the attraction to the Moon and by the revolution of the Earth around the Earth and Moon's center of mass . There were occasional speculations about heliocentrism in Europe before Copernicus. In Roman Carthage , the pagan Martianus Capella (5th century AD) expressed the opinion that the planets Venus and Mercury did not go about the Earth but instead circled the Sun. [ 27 ] Capella's model was discussed in the Early Middle Ages by various anonymous 9th-century commentators [ 28 ] and Copernicus mentions him as an influence on his own work. [ 29 ] Also Macrobius (420 CE) described a heliocentric model. [ 30 ] Aryabhata (476–550), in his magnum opus Aryabhatiya (499), propounded a planetary model in which the Earth was taken to be spinning on its axis and the periods of the planets were given with respect to the Sun. [ 31 ] [ 30 ] His immediate commentators, such as Lalla , and other later authors, rejected his innovative view about the turning Earth. [ 32 ] It has been argued that Aryabhatta's calculations were based on an underlying heliocentric model, in which the planets orbit the Sun, [ 33 ] [ 34 ] although this has also been rebutted. [ 35 ] The general consensus is that a synodic anomaly (depending on the position of the Sun) does not imply a physically heliocentric orbit (such corrections being also present in late Babylonian astronomical texts), and that Aryabhata's system was not explicitly heliocentric. [ 36 ] He also made many astronomical calculations, such as the times of the solar and lunar eclipses , and the instantaneous motion of the Moon. [ 37 ] Early followers of Aryabhata's model included Varahamihira , Brahmagupta , and Bhaskara II . For a time, Muslim astronomers accepted the Ptolemaic system and the geocentric model, which were used by al-Battani to show that the distance between the Sun and the Earth varies. [ 38 ] [ 39 ] In the 10th century, al-Sijzi accepted that the Earth rotates around its axis . [ 40 ] [ 41 ] According to later astronomer al-Biruni , al-Sijzi invented an astrolabe called al-zūraqī based on a belief held by some of his contemporaries that the apparent motion of the stars was due to the Earth's movement, and not that of the firmament . [ 41 ] [ 42 ] Islamic astronomers began to criticize the Ptolemaic model, including Ibn al-Haytham in his Al-Shukūk 'alā Baṭalamiyūs ("Doubts Concerning Ptolemy", c. 1028), [ 43 ] [ 44 ] who found contradictions in Ptolemy's model, but al-Haytham remained committed to a geocentric model. [ 45 ] Al-Biruni discussed the possibility of whether the Earth rotated about its own axis and orbited the Sun, but in his Masudic Canon (1031), [ 46 ] he expressed his faith in a geocentric and stationary Earth. [ 47 ] He was aware that if the Earth rotated on its axis, it would be consistent with his astronomical observations, [ 48 ] but considered it a problem of natural philosophy rather than one of mathematics. [ 41 ] [ 49 ] In the 12th century, non-heliocentric alternatives to the Ptolemaic system were developed by some Islamic astronomers, such as Nur ad-Din al-Bitruji , who considered the Ptolemaic model mathematical, and not physical. 
[ 50 ] [ 51 ] His system spread throughout most of Europe in the 13th century, with debates and refutations of his ideas continuing into the 16th century. [ 51 ] The Maragha school of astronomy in Ilkhanid -era Persia further developed "non-Ptolemaic" planetary models involving Earth's rotation . Notable astronomers of this school are Al-Urdi (d. 1266), Al-Katibi (d. 1277), [ 52 ] and Al-Tusi (d. 1274). The arguments and evidence used resemble those used by Copernicus to support the Earth's motion. [ 53 ] [ 54 ] The criticism of Ptolemy as developed by Averroes and by the Maragha school explicitly addresses the Earth's rotation but did not arrive at explicit heliocentrism. [ 55 ] The observations of the Maragha school were further improved at the Timurid-era Samarkand observatory under Qushji (1403–1474). In India , Nilakantha Somayaji (1444–1544), in his Aryabhatiyabhasya , a commentary on Aryabhata's Aryabhatiya , developed a computational system for a geo-heliocentric planetary model, in which the planets orbit the Sun, which in turn orbits the Earth, similar to the system later proposed by Tycho Brahe . In the Tantrasamgraha (1501), Somayaji further revised his planetary system, which was mathematically more accurate at predicting the heliocentric orbits of the interior planets than both the Tychonic and Copernican models , [ 56 ] [ 57 ] but did not propose any specific models of the universe. [ 58 ] Nilakantha's planetary system also incorporated the Earth's rotation on its axis. [ 59 ] Most astronomers of the Kerala school of astronomy and mathematics seem to have accepted his planetary model. [ 60 ] [ 61 ] In medieval Europe, as noted above, Martianus Capella 's model placing Venus and Mercury in orbit about the Sun was discussed by various anonymous 9th-century commentators, [ 62 ] and Copernicus mentions him as an influence on his own work. [ 63 ] John Scotus Eriugena (815-877 CE) proposed a model reminiscent of that of Tycho Brahe. [ 30 ] In the 14th century, bishop Nicole Oresme discussed the possibility that the Earth rotated on its axis, while Cardinal Nicholas of Cusa in his Learned Ignorance asked whether there was any reason to assert that the Sun (or any other point) was the center of the universe. In parallel to a mystical definition of God, Cusa wrote that "Thus the fabric of the world ( machina mundi ) will quasi have its center everywhere and circumference nowhere," [ 64 ] recalling Hermes Trismegistus . [ 65 ] Some historians maintain that the thought of the Maragheh observatory , in particular the mathematical devices known as the Urdi lemma and the Tusi couple , influenced Renaissance-era European astronomy and was thus indirectly received by Copernicus . [ 49 ] [ 66 ] [ 67 ] [ 68 ] [ 69 ] Copernicus used such devices in the same planetary models as found in Arabic sources. [ 70 ] The exact replacement of the equant by two epicycles used by Copernicus in the Commentariolus was found in an earlier work by Ibn al-Shatir (d. c. 1375) of Damascus. [ 71 ] Copernicus' lunar and Mercury models are also identical to Ibn al-Shatir's. [ 72 ] While the influence of the criticism of Ptolemy by Averroes on Renaissance thought is clear and explicit, the claim of direct influence of the Maragha school, postulated by Otto E. Neugebauer in 1957, remains an open question.
[ 55 ] [ 73 ] [ 74 ] Since the Tusi couple was used by Copernicus in his reformulation of mathematical astronomy, there is a growing consensus that he became aware of this idea in some way. One possible route of transmission may have been through Byzantine science, which translated some of al-Tusi's works from Arabic into Byzantine Greek. Several Byzantine Greek manuscripts containing the Tusi couple are still extant in Italy. [ 75 ] The Mathematics Genealogy Project suggests that there is a "genealogy" of Nasir al-Dīn al-Ṭūsī → Shams al‐Dīn al‐Bukhārī → Gregory Chioniades → Manuel Bryennios → Theodore Metochites → Gregory Palamas → Nilos Kabasilas → Demetrios Kydones → Gemistos Plethon → Basilios Bessarion → Johannes Regiomontanus → Domenico Maria Novara da Ferrara → Nicolaus Copernicus (Mikołaj Kopernik). [ 76 ] Leonardo da Vinci (1452–1519) wrote "Il sole non si move." ("The Sun does not move.") [ 77 ] and, according to the Mathematics Genealogy Project, he was a student of a student of Bessarion. [ 76 ] It has been suggested that the idea of the Tusi couple may have arrived in Europe leaving few manuscript traces, since it could have occurred without the translation of any Arabic text into Latin. [ 78 ] [ 49 ] Other scholars have argued that Copernicus could well have developed these ideas independently of the late Islamic tradition. [ 79 ] [ 80 ] [ 81 ] [ 82 ] Copernicus explicitly references several astronomers of the "Islamic Golden Age" (10th to 12th centuries) in De Revolutionibus: Albategnius (Al-Battani), Averroes (Ibn Rushd), Thebit (Thabit Ibn Qurra), Arzachel (Al-Zarqali), and Alpetragius (Al-Bitruji), but he does not show awareness of the existence of any of the later astronomers of the Maragha school. [ 83 ] It has been argued that Copernicus could have independently discovered the Tusi couple or taken the idea from Proclus's Commentary on the First Book of Euclid, [ 84 ] which Copernicus cited. [ 85 ] Another possible source for Copernicus' knowledge of this mathematical device is the Questiones de Spera of Nicole Oresme, who described how a reciprocating linear motion of a celestial body could be produced by a combination of circular motions similar to those proposed by al-Tusi. [ 86 ] The state of knowledge on planetary theory received by Copernicus is summarized in Georg von Peuerbach's Theoricae Novae Planetarum (printed in 1472 by Regiomontanus). By 1470, the accuracy of observations by the Vienna school of astronomy, of which Peuerbach and Regiomontanus were members, was high enough to make the eventual development of heliocentrism inevitable, and indeed it is possible that Regiomontanus did arrive at an explicit theory of heliocentrism before his death in 1476, some 30 years before Copernicus. [ 87 ] Nicolaus Copernicus, in his De revolutionibus orbium coelestium ("On the revolution of heavenly spheres", first printed in 1543 in Nuremberg), presented a discussion of a heliocentric model of the universe in much the same way as Ptolemy in the 2nd century had presented his geocentric model in his Almagest. Copernicus discussed the philosophical implications of his proposed system, elaborated it in geometrical detail, used selected astronomical observations to derive the parameters of his model, and wrote astronomical tables which enabled one to compute the past and future positions of the stars and planets. In doing so, Copernicus moved heliocentrism from philosophical speculation to predictive geometrical astronomy.
In reality, Copernicus' system did not predict the planets' positions any better than the Ptolemaic system. [ 88 ] This theory resolved the issue of planetary retrograde motion by arguing that such motion was only perceived and apparent, rather than real : it was a parallax effect, as an object that one is passing seems to move backwards against the horizon. This issue was also resolved in the geocentric Tychonic system ; the latter, however, while eliminating the major epicycles , retained as a physical reality the irregular back-and-forth motion of the planets, which Kepler characterized as a " pretzel ". [ 89 ] Copernicus cited Aristarchus in an early (unpublished) manuscript of De Revolutionibus (which still survives), stating: " Philolaus believed in the mobility of the earth, and some even say that Aristarchus of Samos was of that opinion. " [ 90 ] However, in the published version he restricts himself to noting that in works by Cicero he had found an account of the theories of Hicetas and that Plutarch had provided him with an account of the Pythagoreans , Heraclides Ponticus , Philolaus , and Ecphantus . These authors had proposed a moving Earth, which did not, however, revolve around a central sun. The first information about the heliocentric views of Nicolaus Copernicus was circulated in manuscript completed some time before May 1, 1514. [ 91 ] In 1533, Johann Albrecht Widmannstetter delivered in Rome a series of lectures outlining Copernicus' theory. The lectures were heard with interest by Pope Clement VII and several Catholic cardinals . [ 92 ] In 1539, Martin Luther purportedly said: " There is talk of a new astrologer who wants to prove that the earth moves and goes around instead of the sky, the sun, the moon, just as if somebody were moving in a carriage or ship might hold that he was sitting still and at rest while the earth and the trees walked and moved. But that is how things are nowadays: when a man wishes to be clever he must … invent something special, and the way he does it must needs be the best! The fool wants to turn the whole art of astronomy upside-down. However, as Holy Scripture tells us, so did Joshua bid the sun to stand still and not the earth. " [ 93 ] This was reported in the context of a conversation at the dinner table and not a formal statement of faith. Melanchthon , however, opposed the doctrine over a period of years. [ 94 ] [ 95 ] Nicolaus Copernicus published the definitive statement of his system in De Revolutionibus in 1543. Copernicus began to write it in 1506 and finished it in 1530, but did not publish it until the year of his death. Although he was in good standing with the Church and had dedicated the book to Pope Paul III , the published form contained an unsigned preface by Osiander defending the system and arguing that it was useful for computation even if its hypotheses were not necessarily true. Possibly because of that preface, the work of Copernicus inspired very little debate on whether it might be heretical during the next 60 years. There was an early suggestion among Dominicans that the teaching of heliocentrism should be banned, but nothing came of it at the time. Some years after the publication of De Revolutionibus John Calvin preached a sermon in which he denounced those who "pervert the order of nature" by saying that "the sun does not move and that it is the earth that revolves and that it turns". 
[ 96 ] [ d ] Prior to the publication of De Revolutionibus, the most widely accepted system had been proposed by Ptolemy, in which the Earth was the center of the universe and all celestial bodies orbited it. Tycho Brahe, arguably the most accomplished astronomer of his time, advocated against Copernicus' heliocentric system and for an alternative to the Ptolemaic geocentric system: a geo-heliocentric system now known as the Tychonic system in which the Sun and Moon orbit the Earth, Mercury and Venus orbit the Sun inside the Sun's orbit of the Earth, and Mars, Jupiter and Saturn orbit the Sun outside the Sun's orbit of the Earth. Tycho appreciated the Copernican system, but objected to the idea of a moving Earth on the basis of physics, astronomy, and religion. The Aristotelian physics of the time (modern Newtonian physics was still a century away) offered no physical explanation for the motion of a massive body like Earth, whereas it could easily explain the motion of heavenly bodies by postulating that they were made of a different sort of substance called aether that moved naturally. So Tycho said that the Copernican system " ...expertly and completely circumvents all that is superfluous or discordant in the system of Ptolemy. On no point does it offend the principle of mathematics. Yet it ascribes to the Earth, that hulking, lazy body, unfit for motion, a motion as quick as that of the aethereal torches, and a triple motion at that. " [ 101 ] Likewise, Tycho took issue with the vast distances to the stars that Aristarchus and Copernicus had assumed in order to explain the lack of any visible parallax. Tycho had measured the apparent sizes of stars (now known to be illusory), and used geometry to calculate that in order to both have those apparent sizes and be as far away as heliocentrism required, stars would have to be huge (much larger than the sun; the size of Earth's orbit or larger). Regarding this Tycho wrote, " Deduce these things geometrically if you like, and you will see how many absurdities (not to mention others) accompany this assumption [of the motion of the earth] by inference. " [ 102 ] He also cited the Copernican system's " opposition to the authority of Sacred Scripture in more than one place " as a reason why one might wish to reject it, and observed that his own geo-heliocentric alternative " offended neither the principles of physics nor Holy Scripture ." [ 103 ] The Jesuit astronomers in Rome were at first unreceptive to Tycho's system; the most prominent, Clavius, commented that Tycho was " confusing all of astronomy, because he wants to have Mars lower than the Sun. " [ 104 ] However, after the advent of the telescope showed problems with some geocentric models (by demonstrating that Venus circles the Sun, for example), the Tychonic system and variations on that system became popular among geocentrists, and the Jesuit astronomer Giovanni Battista Riccioli would continue Tycho's use of physics, stellar astronomy (now with a telescope), and religion to argue against heliocentrism and for Tycho's system well into the seventeenth century. During his lifetime, Giordano Bruno was the only known person to defend Copernicus' heliocentrism. [ 105 ] In 1584, Bruno published two dialogues (La Cena de le Ceneri and De l'infinito universo et mondi) in which he argued against the planetary spheres (Christoph Rothmann did the same in 1586 as did Tycho Brahe in 1587) and affirmed the Copernican principle.
In particular, to support the Copernican view and oppose the objection according to which the motion of the Earth would be perceived by means of the motion of winds, clouds etc., in La Cena de le Ceneri Bruno anticipated some of the arguments of Galilei on the relativity principle. [ 106 ] He also used the example now known as Galileo's ship. [ 107 ] Using measurements made at Tycho's observatory, Johannes Kepler developed his laws of planetary motion between 1609 and 1619. [ 108 ] In Astronomia nova (1609), Kepler made a diagram of the movement of Mars in relation to Earth if Earth were at the center of its orbit, which shows that Mars' orbit would be completely imperfect and never follow along the same path. To solve the apparent deviation of Mars' orbit from a perfect circle, Kepler derived both a mathematical definition and, independently, a matching ellipse around the Sun to explain the motion of the red planet. [ 109 ] Between 1617 and 1621, Kepler developed a heliocentric model of the Solar System in Epitome astronomiae Copernicanae, in which all the planets have elliptical orbits. This provided significantly increased accuracy in predicting the position of the planets. Kepler's ideas were not immediately accepted, and Galileo for example ignored them. In 1621, Epitome astronomiae Copernicanae was placed on the Catholic Church's index of prohibited books despite Kepler being a Protestant. Galileo was able to look at the night sky with the newly invented telescope. He published his observations that Jupiter is orbited by moons and that the Sun rotates in his Sidereus Nuncius (1610) [ 110 ] and Letters on Sunspots (1613), respectively. Around this time, he also announced that Venus exhibits a full range of phases (satisfying an argument that had been made against Copernicus). [ 110 ] As the Jesuit astronomers confirmed Galileo's observations, the Jesuits moved away from the Ptolemaic model and toward Tycho's teachings. [ 111 ] In his 1615 " Letter to the Grand Duchess Christina ", Galileo defended heliocentrism, and claimed it was not contrary to Holy Scripture. He took Augustine's position on Scripture: not to take every passage literally when the scripture in question is in a Bible book of poetry and songs, not a book of instructions or history. The writers of the Scripture wrote from the perspective of the terrestrial world, and from that vantage point the Sun does rise and set. In fact, it is the Earth's rotation which gives the impression of the Sun in motion across the sky. In February 1615, prominent Dominicans including Thomaso Caccini and Niccolò Lorini brought Galileo's writings on heliocentrism to the attention of the Inquisition, because they appeared to violate Holy Scripture and the decrees of the Council of Trent. [ 112 ] [ 113 ] [ 114 ] [ 115 ] Cardinal and Inquisitor Robert Bellarmine was called upon to adjudicate, and wrote in April that treating heliocentrism as a real phenomenon would be "a very dangerous thing," irritating philosophers and theologians, and harming "the Holy Faith by rendering Holy Scripture as false." [ 116 ] In January 1616, Msgr. Francesco Ingoli addressed an essay to Galileo disputing the Copernican system. Galileo later stated that he believed this essay to have been instrumental in the ban against Copernicanism that followed in February.
[ 117 ] According to Maurice Finocchiaro, Ingoli had probably been commissioned by the Inquisition to write an expert opinion on the controversy, and the essay provided the "chief direct basis" for the ban. [ 118 ] The essay focused on eighteen physical and mathematical arguments against heliocentrism. It borrowed primarily from the arguments of Tycho Brahe, and it notably mentioned the problem that heliocentrism requires the stars to be much larger than the Sun. Ingoli wrote that the great distance to the stars in the heliocentric theory " clearly proves ... the fixed stars to be of such size, as they may surpass or equal the size of the orbit circle of the Earth itself. " [ 119 ] Ingoli included four theological arguments in the essay, but suggested to Galileo that he focus on the physical and mathematical arguments. Galileo did not write a response to Ingoli until 1624. [ 120 ] In February 1616, the Inquisition assembled a committee of theologians, known as qualifiers, who delivered their unanimous report condemning heliocentrism as "foolish and absurd in philosophy, and formally heretical since it explicitly contradicts in many places the sense of Holy Scripture." The Inquisition also determined that the Earth's motion "receives the same judgement in philosophy and ... in regard to theological truth it is at least erroneous in faith." [ 121 ] [ 122 ] Bellarmine personally ordered Galileo ...to abstain completely from teaching or defending this doctrine and opinion or from discussing it... to abandon completely... the opinion that the sun stands still at the center of the world and the earth moves, and henceforth not to hold, teach, or defend it in any way whatever, either orally or in writing. In March 1616, after the Inquisition's injunction against Galileo, the papal Master of the Sacred Palace, Congregation of the Index, and the Pope banned all books and letters advocating the Copernican system, which they called "the false Pythagorean doctrine, altogether contrary to Holy Scripture." [ 123 ] [ 124 ] In 1618, the Holy Office recommended that a modified version of Copernicus' De Revolutionibus be allowed for use in calendric calculations, though the original publication remained forbidden until 1758. [ 124 ] Pope Urban VIII encouraged Galileo to publish the pros and cons of heliocentrism. Galileo's response, Dialogue concerning the two chief world systems (1632), clearly advocated heliocentrism, despite his declaration in the preface that, I will endeavour to show that all experiments that can be made upon the Earth are insufficient means to conclude for its mobility but are indifferently applicable to the Earth, movable or immovable... [ 125 ] and his straightforward statement, I might very rationally put it in dispute, whether there be any such centre in nature, or no; being that neither you nor any one else hath ever proved, whether the World be finite and figurate, or else infinite and interminate; yet nevertheless granting you, for the present, that it is finite, and of a terminate Spherical Figure, and that thereupon it hath its centre... [ 125 ] Some ecclesiastics also interpreted the book as characterizing the Pope as a simpleton, since his viewpoint in the dialogue was advocated by the character Simplicio. Urban VIII became hostile to Galileo and he was again summoned to Rome. [ 126 ] Galileo's trial in 1633 involved making fine distinctions between "teaching" and "holding and defending as true".
For advancing heliocentric theory Galileo was forced to recant Copernicanism and was put under house arrest for the last few years of his life. According to J. L. Heilbron, informed contemporaries of Galileo's " appreciated that the reference to heresy in connection with Galileo or Copernicus had no general or theological significance. " [ 127 ] In 1664, Pope Alexander VII published his Index Librorum Prohibitorum Alexandri VII Pontificis Maximi jussu editus (Index of Prohibited Books, published by order of Alexander VII, P.M. ) which included all previous condemnations of heliocentric books. [ 128 ] René Descartes ' first cosmological treatise, written between 1629 and 1633 and titled The World , included a heliocentric model, but Descartes abandoned it in the light of Galileo's treatment. [ 129 ] In his Principles of Philosophy (1644), Descartes introduced a mechanical model in which planets do not move relative to their immediate atmosphere, but are constituted around space-matter vortices in curved space ; these rotate due to centrifugal force and the resulting centripetal pressure . [ 130 ] The Galileo affair did little overall to slow the spread of heliocentrism across Europe, as Kepler's Epitome of Copernican Astronomy became increasingly influential in the coming decades. [ 131 ] By 1686, the model was well enough established that the general public was reading about it in Conversations on the Plurality of Worlds , published in France by Bernard le Bovier de Fontenelle and translated into English and other languages in the coming years. It has been called "one of the first great popularizations of science." [ 129 ] In 1687, Isaac Newton published Philosophiæ Naturalis Principia Mathematica , which provided an explanation for Kepler's laws in terms of universal gravitation and what came to be known as Newton's laws of motion . This placed heliocentrism on a firm theoretical foundation, although Newton's heliocentrism was of a somewhat modern kind. Already in the mid-1680s he recognized the "deviation of the Sun" from the center of gravity of the Solar System. [ 132 ] For Newton it was not precisely the center of the Sun or any other body that could be considered at rest, but "the common centre of gravity of the Earth, the Sun and all the Planets is to be esteem'd the Centre of the World", and this center of gravity "either is at rest or moves uniformly forward in a right line". Newton adopted the "at rest" alternative in view of common consent that the center, wherever it was, was at rest. [ 133 ] Meanwhile, the Catholic Church remained opposed to heliocentrism as a literal description, but this did not by any means imply opposition to all astronomy; indeed, it needed observational data to maintain its calendar. In support of this effort it allowed the cathedrals themselves to be used as solar observatories called meridiane ; i.e., they were turned into "reverse sundials ", or gigantic pinhole cameras , where the Sun's image was projected from a hole in a window in the cathedral's lantern onto a meridian line. [ 134 ] In the mid-18th century the Church's opposition began to fade. An annotated copy of Newton's Principia was published in 1742 by Fathers le Seur and Jacquier of the Franciscan Minims, two Catholic mathematicians, with a preface stating that the author's work assumed heliocentrism and could not be explained without the theory. In 1758 the Catholic Church dropped the general prohibition of books advocating heliocentrism from the Index of Forbidden Books . 
[ 135 ] The Observatory of the Roman College was established by Pope Clement XIV in 1774 (nationalized in 1878, but re-founded by Pope Leo XIII as the Vatican Observatory in 1891). In spite of dropping its active resistance to heliocentrism, the Catholic Church did not lift the prohibition of uncensored versions of Copernicus' De Revolutionibus or Galileo's Dialogue. The affair was revived in 1820, when the Master of the Sacred Palace (the Catholic Church's chief censor), Filippo Anfossi, refused to license a book by a Catholic canon, Giuseppe Settele, because it openly treated heliocentrism as a physical fact. [ 136 ] Settele appealed to Pope Pius VII. After the matter had been reconsidered by the Congregation of the Index and the Holy Office, Anfossi's decision was overturned. [ 137 ] Pius VII approved a decree in 1822 by the Sacred Congregation of the Inquisition to allow the printing of heliocentric books in Rome. Copernicus' De Revolutionibus and Galileo's Dialogue were subsequently omitted from the next edition of the Index when it appeared in 1835. Three apparent proofs of the heliocentric hypothesis were provided in 1727 by James Bradley, in 1838 by Friedrich Wilhelm Bessel, and in 1851 by Léon Foucault. Bradley discovered stellar aberration, proving the relative motion of the Earth. Bessel proved that the parallax of a star was greater than zero by measuring a parallax of 0.314 arcseconds for the star 61 Cygni. In the same year Friedrich Georg Wilhelm Struve and Thomas Henderson measured the parallaxes of other stars, Vega and Alpha Centauri. Experiments like those of Foucault were performed by V. Viviani in 1661 in Florence and by Bartolini in 1833 in Rimini. [ 138 ] Already in the Talmud, Greek philosophy and science under the general name "Greek wisdom" were considered dangerous. They were banned at various periods, both then and later. The first Jewish scholar to describe the Copernican system, albeit without mentioning Copernicus by name, was Maharal of Prague, in his book "Be'er ha-Golah" (1593). Maharal made an argument of radical skepticism, arguing that no scientific theory can be reliable, which he illustrated with the new-fangled theory of heliocentrism upsetting even the most fundamental views on the cosmos. [ 139 ] Copernicus is mentioned in the books of David Gans (1541–1613), who worked with Brahe and Kepler. Gans wrote two books on astronomy in Hebrew: a short one, "Magen David" (1612), and a full one, "Nehmad veNaim" (published only in 1743). He described three systems objectively, those of Ptolemy, Copernicus and Brahe, without taking sides. Joseph Solomon Delmedigo (1591–1655), in his "Elim" (1629), says that the arguments of Copernicus are so strong that only an imbecile will not accept them. [ 140 ] Delmedigo studied at Padua and was acquainted with Galileo. [ 141 ] An actual controversy over the Copernican model within Judaism arose only in the early 18th century. Most authors in this period had accepted Copernican heliocentrism, with opposition from David Nieto and Tobias Cohn, who argued against heliocentrism on the grounds that it contradicted scripture. Nieto merely rejected the new system on those grounds without much passion, whereas Cohn went so far as to call Copernicus "a first-born of Satan", though he also acknowledged that he would have found it difficult to proffer one particular objection based on a passage from the Talmud.
[ 142 ] In the 19th century, two students of the Hatam Sofer wrote books that were given approbations by him even though one supported heliocentrism and the other geocentrism. One, a commentary on Genesis titled Yafe'ah le-Ketz, [ 143 ] written by R. Israel David Schlesinger, resisted a heliocentric model and supported geocentrism. [ 144 ] The other, Mei Menuchot, [ 145 ] written by R. Eliezer Lipmann Neusatz, encouraged acceptance of the heliocentric model and other modern scientific thinking. [ 146 ] Since the 20th century most Jews have not questioned the science of heliocentrism. Exceptions include Shlomo Benizri [ 147 ] and R. M.M. Schneerson of Chabad, who argued that the question of heliocentrism vs. geocentrism is obsolete because of the relativity of motion. [ 148 ] Schneerson's followers in Chabad continue to deny the heliocentric model. [ 149 ] In 1783, amateur astronomer William Herschel attempted to determine the shape of the universe by examining stars through his handmade telescopes. Herschel was the first to propose a model of the universe based on observation and measurement. [ 150 ] At that time, the dominant assumption in cosmology was that the Milky Way was the entire universe, an assumption that has since been proven wrong by observations. [ 151 ] Herschel concluded that it was in the shape of a disk, but assumed that the Sun was in the center of the disk, making the model heliocentric. [ 152 ] [ 153 ] [ 154 ] [ 155 ] Seeing that the stars belonging to the Milky Way appeared to encircle the Earth, Herschel carefully counted stars of given apparent magnitudes, and after finding the numbers were the same in all directions, concluded Earth must be close to the center of the Milky Way. However, there were two flaws in Herschel's methodology: magnitude is not a reliable index of the distance of stars, and some of the areas that he mistook for empty space were actually dark, obscuring nebulae that blocked his view toward the center of the Milky Way. [ 156 ] The Herschel model remained relatively unchallenged for the next hundred years, with minor refinements. Jacobus Kapteyn introduced motion, density, and luminosity to Herschel's star counts, which still implied a near-central location of the Sun. [ 152 ] Already in the 18th century, Thomas Wright and Immanuel Kant speculated that fuzzy patches of light called nebulae were actually distant "island universes" consisting of many stellar systems. [ 157 ] The shape of the Milky Way galaxy was expected to resemble such "island universes." However, "scientific arguments were marshalled against such a possibility," and this view was rejected by almost all scientists until the early 20th century, with Harlow Shapley's work on globular clusters and Edwin Hubble's measurements in 1924. After Shapley and Hubble showed that the Sun is not the center of the universe, cosmology moved on from heliocentrism to galactocentrism, which states that the Milky Way is the center of the universe. [ 151 ] Hubble's observations of redshift in light from distant galaxies indicated that the universe was expanding and acentric. [ 153 ] As a result, soon after galactocentrism was formulated, it was abandoned in favor of the Big Bang model of the acentric expanding universe. Further assumptions, such as the Copernican principle, the cosmological principle, dark energy, and dark matter, eventually led to the current model of cosmology, Lambda-CDM.
The concept of an absolute velocity, including being "at rest" as a particular case, is ruled out by the principle of relativity, which also eliminates any obvious "center" of the universe as a natural origin of coordinates. Even if the discussion is limited to the Solar System, the Sun is not at the geometric center of any planet's orbit, but rather approximately at one focus of the elliptical orbit. Furthermore, to the extent that a planet's mass cannot be neglected in comparison to the Sun's mass, the center of gravity of the Solar System is displaced slightly away from the center of the Sun. [ 133 ] (The masses of the planets, mostly Jupiter, amount to 0.14% of that of the Sun.) Therefore, a hypothetical astronomer on an extrasolar planet would observe a small "wobble" in the Sun's motion. [ 158 ] In modern calculations, the terms "geocentric" and "heliocentric" are often used to refer to reference frames. [ 159 ] In such systems the origin can be placed at the center of mass of the Earth, of the Earth–Moon system, of the Sun, of the Sun plus the major planets, or of the entire Solar System. [ 160 ] Right ascension and declination are examples of geocentric coordinates, used in Earth-based observations, while heliocentric latitude and longitude are used for orbital calculations. This leads to such terms as "heliocentric velocity" and "heliocentric angular momentum". In this heliocentric picture, any planet of the Solar System can be used as a source of mechanical energy because it moves relative to the Sun. A smaller body (either artificial or natural) may gain heliocentric velocity due to a gravity assist – this effect can change the body's mechanical energy in the heliocentric reference frame (although it will not change in the planetary one). However, such a selection of "geocentric" or "heliocentric" frames is merely a matter of computation. It does not have philosophical implications and does not constitute a distinct physical or scientific model. From the point of view of general relativity, inertial reference frames do not exist at all, and any practical reference frame is only an approximation to the actual space-time, which can have higher or lower precision. Some forms of Mach's principle consider the frame at rest with respect to the distant masses in the universe to have special properties. The abstract noun in -ism is more recent, recorded from the late 19th century (e.g. in Constance Naden, Induction and Deduction: A Historical and Critical Sketch of Successive Philosophical Conceptions Respecting the Relations Between Inductive and Deductive Thought and Other Essays), modelled after German Heliocentrismus or Heliozentrismus (c. 1870). All Islamic astronomers from Thabit ibn Qurra in the ninth century to Ibn al-Shatir in the fourteenth, and all natural philosophers from al-Kindi to Averroes and later, are known to have accepted ... the Greek picture of the world as consisting of two spheres of which one, the celestial sphere ... concentrically envelops the other.
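The remark above about gravity assists (heliocentric speed and energy change while the planet-frame speed does not) can be made concrete with a short numerical sketch. The planet and probe velocities and the turning angle below are assumed, illustrative values, not data for any real flyby.

```python
# Minimal sketch of a gravity assist viewed in two reference frames.
# All numbers are assumed, illustrative values (km/s), not mission data.
import numpy as np

v_planet = np.array([13.1, 0.0])   # planet's heliocentric velocity (assumed)
v_probe  = np.array([9.0, 4.0])    # probe's heliocentric velocity before the flyby (assumed)

# Velocity relative to the planet before the encounter
u_in = v_probe - v_planet

# The flyby only rotates the planet-relative velocity (here by an assumed 60 degrees);
# its magnitude is conserved in the planet's frame.
theta = np.radians(-60.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
u_out = rot @ u_in

# Back in the heliocentric frame the speed (hence kinetic energy per unit mass) changes.
v_probe_after = v_planet + u_out
print("planet-frame speed before/after:", np.linalg.norm(u_in), np.linalg.norm(u_out))   # equal
print("heliocentric speed before/after:",
      np.linalg.norm(v_probe), np.linalg.norm(v_probe_after))                            # differ
```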
https://en.wikipedia.org/wiki/Heliocentrism
A heliometer (from Greek ἥλιος hḗlios "sun" and μέτρον métron "measure") is an instrument originally designed for measuring the variation of the Sun's diameter at different seasons of the year, but applied now to the modern form of the instrument which is capable of much wider use. [ 1 ] The basic concept is to introduce a split element into a telescope's optical path so as to produce a double image. If one element is moved using a screw micrometer, precise angle measurements can be made. The simplest arrangement is to split the object lens in half, with one half fixed and the other attached to the micrometer screw and slid along the cut diameter. To measure the diameter of the Sun, for example, the micrometer is first adjusted so that the two images of the solar disk coincide (the "zero" position where the split elements form essentially a single element). The micrometer is then adjusted so that diametrically opposite sides of the two images of the solar disk just touch each other. The difference in the two micrometer readings so obtained is the (angular) diameter of the Sun. Similarly, a precise measurement of the apparent separation between two nearby stars, A and B, is made by first superimposing the two images of the stars and then adjusting the double image so that star A in one image coincides with star B in the other. The difference in the two micrometer readings so obtained is the apparent separation or angular distance between the two stars. The Syrian Arab astronomer Mu'ayyad al-Din al-Urdi, in his book, described a device called "the instrument with the two holes," which he used to measure and observe the apparent diameters of the Sun and the Moon. [ 2 ] The first application of the divided object-glass and the employment of double images in astronomical measures is due to Servington Savery of Shilstone in 1743. Pierre Bouguer, in 1748, originated the true conception of measurement by double image without the auxiliary aid of a filar micrometer, that is, by changing the distance between two object-glasses of equal focus. [ 1 ] John Dollond, in 1754, combined Savery's idea of the divided object-glass with Bouguer's method of measurement, resulting in the construction of the first really practical heliometers. As far as can be ascertained, Joseph von Fraunhofer, some time not long before 1820, constructed the first heliometer with an achromatic divided object-glass, i.e. the first heliometer of the modern type. [ 1 ] The first successful measurements of stellar parallax (to determine the distance to a star) were made by Friedrich Wilhelm Bessel in 1838 for the star 61 Cygni using a Fraunhofer heliometer. [ 3 ] This was the 6.2-inch (157.5 mm) aperture Fraunhofer heliometer at Königsberg Observatory built by Joseph von Fraunhofer's firm, though he did not live to see it delivered to Bessel. [ 4 ] [ 5 ] Although the heliometer was difficult to use, it had certain advantages for Bessel, including a wider field of view compared to other great refractors of the period, and it better tolerated atmospheric turbulence in measurements than a filar micrometer. [ 5 ]
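As a rough illustration of the reduction described above, the sketch below converts the difference between two micrometer readings into an angle. The screw calibration constant and the readings are assumed, made-up values; a real heliometer's scale constant would come from calibrating that particular instrument.

```python
# Sketch of reducing heliometer micrometer readings to an angular separation.
ARCSEC_PER_DIVISION = 1.25      # assumed screw calibration, arcsec per micrometer division

def angular_separation(zero_reading, contact_reading):
    """Angle corresponding to the screw travel between the 'zero' setting
    (the two images coincident) and the 'contact' setting (opposite limbs touching)."""
    return (contact_reading - zero_reading) * ARCSEC_PER_DIVISION

# Example with assumed readings, in screw divisions:
print(angular_separation(102.4, 1640.0))  # 1922.0 arcsec, about 32 arcminutes (roughly the Sun's diameter)
```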
https://en.wikipedia.org/wiki/Heliometer
A helion (symbol h) is the nucleus of a helium atom, a doubly positively charged cation. The term helion is a portmanteau of helium and ion, and in practice refers specifically to the nucleus of the helium-3 isotope, consisting of two protons and one neutron. The nucleus of the other (and far more common) stable isotope of helium, helium-4, consisting of two protons and two neutrons, is called an alpha particle or an alpha for short. This particle is the daughter product in the beta-minus decay of tritium, an isotope of hydrogen: 3 H → 3 He + e − + ν̄ e . CODATA reports the mass of a helion particle as m h = 5.006 412 7862 (16) × 10 −27 kg [ 1 ] = 3.014 932 246 932 (74) Da. [ 2 ] Helions are intermediate products in the proton–proton chain reaction in stellar fusion. An antihelion is the antiparticle of a helion, consisting of two antiprotons and an antineutron.
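The two CODATA figures quoted above are mutually consistent, as a quick unit conversion shows. The sketch below assumes the rounded value 1 Da ≈ 1.66053907 × 10⁻²⁷ kg for the atomic mass constant.

```python
# Consistency check: convert the helion mass from daltons to kilograms.
ATOMIC_MASS_CONSTANT_KG = 1.66053907e-27   # kg per dalton (rounded CODATA value, assumed here)
m_h_Da = 3.014932246932                    # helion mass in daltons, as quoted above
print(m_h_Da * ATOMIC_MASS_CONSTANT_KG)    # ≈ 5.0064e-27 kg, matching the quoted kilogram value
```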
https://en.wikipedia.org/wiki/Helion_(chemistry)
Heliophysics (from the prefix "helio", from Attic Greek hḗlios, meaning Sun, and the noun "physics": the science of matter and energy and their interactions) is the physics of the Sun and its connection with the Solar System. [ 1 ] NASA defines [ 2 ] heliophysics as "(1) the comprehensive new term for the science of the Sun - Solar System Connection, (2) the exploration, discovery, and understanding of Earth's space environment, and (3) the system science that unites all of the linked phenomena in the region of the cosmos influenced by a star like our Sun." Heliophysics is broader than solar physics, which studies the Sun itself, including its interior, atmosphere, and magnetic fields. It concentrates on the Sun's effects on Earth and other bodies within the Solar System, as well as the changing conditions in space. It is primarily concerned with the magnetosphere, ionosphere, thermosphere, mesosphere, and upper atmosphere of the Earth and other planets. Heliophysics combines the science of the Sun, corona, heliosphere and geospace, and encompasses a wide variety of astronomical phenomena, including "cosmic rays and particle acceleration, space weather and radiation, dust and magnetic reconnection, nuclear energy generation and internal solar dynamics, solar activity and stellar magnetic fields, aeronomy and space plasmas, magnetic fields and global change", and the interactions of the Solar System with the Milky Way Galaxy. The term "heliophysics" (Russian: гелиофизика) was widely used in Russian-language scientific literature. The Great Soviet Encyclopedia third edition (1969–1978) defines "Heliophysics" as "[…] a division of astrophysics that studies physics of the Sun". [ 3 ] In 1990, the Higher Attestation Commission, responsible for advanced academic degrees in the Soviet Union and later in Russia and the former Soviet Union, established a new specialty "Heliophysics and physics of solar system". In English-language scientific literature prior to about 2001, the term heliophysics was sporadically used to describe the study of the "physics of the Sun". [ 4 ] As such it was a direct translation from the French "héliophysique" and the Russian "гелиофизика". In 2001, Joseph M. Davila, Nat Gopalswamy and Barbara J. Thompson at NASA's Goddard Space Flight Center adopted the term in their preparations for what became known as the International Heliophysical Year (2007–2008), following 50 years after the International Geophysical Year; in adopting the term for this purpose, they expanded its meaning to encompass the entire domain of influence of the Sun (the heliosphere). As an early advocate of the newly expanded meaning, George Siscoe offered the following characterization: "Heliophysics [encompasses] environmental science, a unique hybrid between meteorology and astrophysics, comprising a body of data and a set of paradigms (general laws—perhaps mostly still undiscovered) specific to magnetized plasmas and neutrals in the heliosphere interacting with themselves and with gravitating bodies and their atmospheres." Around mid-2006, Richard R. Fisher, then Director of the Sun-Earth Connections Division of NASA's Science Mission Directorate, was challenged by the NASA administrator to come up with a concise new name for his division that "had better end on 'physics'". [ 5 ] He proposed "Heliophysics Science Division", which has been in use since then.
The Heliophysics Science Division uses the term "heliophysics" to denote the study of the heliosphere and the objects that interact with it – most notably planetary atmospheres and magnetospheres, the solar corona, and the interstellar medium. Heliophysical research connects directly to a broader web of physical processes that naturally expand its reach beyond NASA's narrower view that limits it to the Solar System: heliophysics reaches from solar physics out to stellar physics in general, and involves several branches of nuclear physics, plasma physics, space physics and magnetospheric physics. The science of heliophysics lies at the foundation of the study of space weather, and is also directly involved in understanding planetary habitability. The Sun is an active star, and Earth is located within its atmosphere, so there is a dynamic interaction. The Sun's light influences all life and processes on Earth; it is an energy provider that allows and sustains life on Earth. However, the Sun also produces streams of high energy particles known as the solar wind, and radiation that can harm life or alter its evolution. Under the protective shield of Earth's magnetic field and its atmosphere, Earth can be seen as an island in the universe where life has developed and flourished. [ 6 ] [ 7 ] The intertwined response of the Earth and heliosphere is studied because the planet is immersed in this unseen environment. Above the protective cocoon of Earth's lower atmosphere is a plasma soup composed of electrified and magnetized matter entwined with penetrating radiation and energetic particles. Modern technologies are susceptible to the extremes of space weather — severe disturbances of the upper atmosphere and of the near-Earth space environment that are driven by the magnetic activity of the Sun. Strong electrical currents driven in the Earth's surface during auroral events can disrupt and damage modern electric power grids and may contribute to the corrosion of oil and gas pipelines. [ 8 ] Methods have been developed to see into the internal workings of the Sun and understand how the Earth's magnetosphere responds to solar activity. Further studies are concerned with exploring the full system of complex interactions that characterize the relationship of the Sun with the Solar System. [ 6 ] [ 7 ] There are three primary objectives that define the multi-decadal studies: [ 6 ] [ 9 ] Plasmas and their embedded magnetic fields affect the formation and evolution of planets and planetary systems. The heliosphere shields the Solar System from galactic cosmic radiation. Earth is shielded by its magnetic field, protecting it from solar and cosmic particle radiation and from erosion of the atmosphere by the solar wind. Planets without a shielding magnetic field, such as Mars and Venus, are exposed to those processes and evolve differently. On Earth, the magnetic field changes strength and configuration during its occasional polarity reversals, altering the shielding of the planet from external radiation sources. [ 10 ] One objective is to determine changes in the Earth's magnetosphere, ionosphere, and upper atmosphere in order to enable specification, prediction, and mitigation of their effects. Heliophysics seeks to develop an understanding of the response of the near-Earth plasma regions to space weather. This complex, highly coupled system protects Earth from the worst solar disturbances while redistributing energy and mass throughout. [ 9 ] [ 10 ]
https://en.wikipedia.org/wiki/Heliophysics
A helioscope is an instrument used in observing the Sun and sunspots. The helioscope was first used by Benedetto Castelli (1578–1643) and refined by Galileo Galilei (1564–1642). The method involves projecting an image of the sun onto a white sheet of paper suspended in a darkened room with the use of a telescope. [ 1 ] [ 2 ] The first machina helioscopica or helioscope was designed by Christoph Scheiner (1575–1650) to assist his sunspot observations. [ 3 ] In the context of modern astroparticle physics, the term helioscope can also refer to an experiment that seeks to observe hypothetical particles (such as the axion) produced inside the sun. Examples of such helioscope experiments searching for axions include the CERN Axion Solar Telescope and International Axion Observatory.
https://en.wikipedia.org/wiki/Helioscope
Heliotropism, a form of tropism, is the diurnal or seasonal motion of plant parts (flowers or leaves) in response to the direction of the Sun. The habit of some plants to move in the direction of the Sun, a form of tropism, was already known to the Ancient Greeks. They named one of those plants after that property Heliotropium, meaning "sun turn". The Greeks assumed it to be a passive effect, presumably the loss of fluid on the illuminated side, that did not need further study. [ 1 ] Aristotle's logic that plants are passive and immobile organisms prevailed. In the 19th century, however, botanists discovered that growth processes in the plant were involved, and conducted increasingly in-depth experiments. In 1832, A. P. de Candolle applied the term heliotropism to this phenomenon in any plant. [ 2 ] It was renamed phototropism in 1892 because it is a response to light rather than to the sun, and because the phototropism of algae in lab studies at that time strongly depended on the brightness (positively phototropic for weak light, and negatively phototropic for bright light, like sunlight). [ 3 ] [ 4 ] A botanist studying this subject in the lab, at the cellular and subcellular level, or using artificial light, is more likely to employ the more abstract word phototropism, a term which includes artificial light as well as natural sunlight. The French scientist Jean-Jacques d'Ortous de Mairan was one of the first to study heliotropism when he experimented with the Mimosa pudica plant. The phenomenon was studied by Charles Darwin and published in his penultimate 1880 book, The Power of Movement in Plants, a work which included other stimuli to plant movement such as gravity, moisture and touch. Heliotropic flowers track the Sun's motion across the sky from east to west. Daisies (Bellis perennis) close their petals at night but open in the morning light and then follow the sun as the day progresses. During the night, the flowers may assume a random orientation, while at dawn they turn again toward the east where the Sun rises. The motion is performed by motor cells in a flexible segment just below the flower, called a pulvinus. The motor cells are specialized in pumping potassium ions into nearby tissues, changing their turgor pressure. The segment flexes because the motor cells on the shaded side elongate due to a rise in turgor. This is considered to be turgor-mediated heliotropism. For plant organs that lack pulvini, heliotropism can occur through irreversible cell expansion producing particular growth patterns. This form of heliotropism is considered to be growth-mediated. [ 5 ] Heliotropism is a response to light from the Sun. Several hypotheses have been proposed for the occurrence of heliotropism in flowers. In general, flower heliotropism could increase reproductive success by increasing pollination, fertilization success, and/or seed development, [ 9 ] especially in spring flowers. Some solar-tracking plants are not purely heliotropic: in those plants the change of orientation is an innate circadian motion triggered by light, which continues for one or more periods if the light cycle is interrupted. Tropical convolvulaceous flowers show a preferred orientation, pointing in the general direction of the sun but not exactly tracking it. They demonstrate no diurnal heliotropism but strong seasonal heliotropism. If solar tracking were exact, the sun's rays would always enter the corolla tube and warm the gynoecium, a process which could be dangerous in a tropical climate.
However, by adopting a certain angle away from the solar angle, this is prevented. The trumpet shape of these flowers thus acts as a parasol, shading the gynoecium at times of maximum solar radiation and not allowing the rays to impinge on the gynoecium. [ 10 ] In the case of sunflowers, a common misconception is that sunflower heads track the Sun across the sky throughout the whole life cycle. The uniform alignment of the flowers does result from heliotropism in an earlier developmental stage, the bud stage, before the appearance of flower heads. The apical bud of the plant will track the Sun during the day from east to west, and then will quickly move back from west to east overnight as a result of the plant's circadian clock. [ 11 ] The buds are heliotropic until the end of the bud stage, and finally face east. Phototropic bending can be induced in the hypocotyls of juvenile sunflower seedlings, while heliotropic bending in the shoot apex does not start occurring until the later developmental stages of the plant, showing a difference between these two processes. [ 11 ] The flower of the sunflower preserves the final orientation of the bud, thus keeping the mature flower facing east. Leaf heliotropism is the solar tracking behavior of plant leaves. Some plant species have leaves that orient themselves perpendicularly to the sun's rays in the morning (diaheliotropism), and others have leaves that orient themselves parallel to these rays at midday (paraheliotropism). [ 12 ] Floral heliotropism is not necessarily exhibited by the same plants that exhibit leaf heliotropism.
https://en.wikipedia.org/wiki/Heliotropism
Helium-3 ( 3 He; [ 1 ] [ 2 ] see also helion ) is a light, stable isotope of helium with two protons and one neutron. (In contrast, the most common isotope, helium-4, has two protons and two neutrons.) Helium-3 and hydrogen-1 are the only stable nuclides with more protons than neutrons. It was discovered in 1939. Helium-3 occurs as a primordial nuclide, escaping from Earth's crust into its atmosphere and into outer space over millions of years. It is also thought to be a natural nucleogenic and cosmogenic nuclide, one produced when lithium is bombarded by natural neutrons, which can be released by spontaneous fission and by nuclear reactions with cosmic rays. Some of the helium-3 found in the terrestrial atmosphere is a remnant of atmospheric and underwater nuclear weapons testing. Nuclear fusion using helium-3 has long been viewed as a desirable future energy source. The fusion of two of its atoms would be aneutronic, that is, it would not release the dangerous radiation of traditional fusion or require the much higher temperatures thereof. [ 3 ] The process may unavoidably create other reactions that themselves would cause the surrounding material to become radioactive. [ 4 ] Helium-3 is thought to be more abundant on the Moon than on Earth, having been deposited in the upper layer of regolith by the solar wind over billions of years, [ 5 ] though its abundance there is still lower than in the Solar System's gas giants. [ 6 ] [ 7 ] The existence of helium-3 was first proposed in 1934 by the Australian nuclear physicist Mark Oliphant while he was working at the University of Cambridge Cavendish Laboratory. Oliphant had performed experiments in which fast deuterons collided with deuteron targets (incidentally, the first demonstration of nuclear fusion). [ 8 ] Isolation of helium-3 was first accomplished by Luis Alvarez and Robert Cornog in 1939. [ 9 ] [ 10 ] Helium-3 was thought to be a radioactive isotope until it was also found in samples of natural helium, which is mostly helium-4, taken both from the terrestrial atmosphere and from natural gas wells. [ 11 ] Due to its low atomic mass of 3.016 u, helium-3 has some physical properties different from those of helium-4, which has a mass of 4.0026 u. On account of the weak, induced dipole–dipole interaction between the helium atoms, their microscopic physical properties are mainly determined by their zero-point energy. Also, the microscopic properties of helium-3 cause it to have a higher zero-point energy than helium-4. This implies that helium-3 can overcome dipole–dipole interactions with less thermal energy than helium-4 can. The quantum mechanical effects on helium-3 and helium-4 are significantly different because with two protons, two neutrons, and two electrons, helium-4 has an overall spin of zero, making it a boson, but with one fewer neutron, helium-3 has an overall spin of one half, making it a fermion. Pure helium-3 gas boils at 3.19 K compared with helium-4 at 4.23 K, and its critical point is also lower at 3.35 K, compared with helium-4 at 5.2 K. Helium-3 has less than half the density of helium-4 when it is at its boiling point: 59 g/L compared to 125 g/L of helium-4 at a pressure of one atmosphere. Its latent heat of vaporization is also considerably lower at 0.026 kJ/mol compared with the 0.0829 kJ/mol of helium-4. [ 12 ] [ 13 ] An important property of helium-3, which distinguishes it from the more common helium-4, is that its nucleus is a fermion since it contains an odd number of spin 1 ⁄ 2 particles.
Helium-4 nuclei are bosons , containing an even number of spin 1 ⁄ 2 particles. This is a direct result of the addition rules for quantized angular momentum. At low temperatures (about 2.17 K), helium-4 undergoes a phase transition : A fraction of it enters a superfluid phase that can be roughly understood as a type of Bose–Einstein condensate . Such a mechanism is not available for helium-3 atoms, which are fermions. Many speculated that helium-3 could also become a superfluid at much lower temperatures, if the atoms formed into pairs analogous to Cooper pairs in the BCS theory of superconductivity . Each Cooper pair, having integer spin, can be thought of as a boson. During the 1970s, David Lee , Douglas Osheroff and Robert Coleman Richardson discovered two phase transitions along the melting curve, which were soon realized to be the two superfluid phases of helium-3. [ 14 ] [ 15 ] The transition to a superfluid occurs at 2.491 millikelvins on the melting curve. They were awarded the 1996 Nobel Prize in Physics for their discovery. Alexei Abrikosov , Vitaly Ginzburg , and Tony Leggett won the 2003 Nobel Prize in Physics for their work on refining understanding of the superfluid phase of helium-3. [ 16 ] In a zero magnetic field, there are two distinct superfluid phases of 3 He, the A-phase and the B-phase. The B-phase is the low-temperature, low-pressure phase which has an isotropic energy gap. The A-phase is the higher temperature, higher pressure phase that is further stabilized by a magnetic field and has two point nodes in its gap. The presence of two phases is a clear indication that 3 He is an unconventional superfluid (superconductor), since the presence of two phases requires an additional symmetry, other than gauge symmetry, to be broken. In fact, it is a p -wave superfluid, with spin one, S =1, and angular momentum one, L =1. The ground state corresponds to total angular momentum zero, J = S + L =0 (vector addition). Excited states are possible with non-zero total angular momentum, J >0, which are excited pair collective modes. These collective modes have been studied with much greater precision than in any other unconventional pairing system, because of the extreme purity of superfluid 3 He. This purity is due to all 4 He phase separating entirely and all other materials solidifying and sinking to the bottom of the liquid, making the A- and B-phases of 3 He the most pure condensed matter state possible. 3 He is a primordial substance in the Earth's mantle , thought to have become entrapped in the Earth during planetary formation. The ratio of 3 He to 4 He within the Earth's crust and mantle is less than that of estimates of solar disk composition as obtained from meteorite and lunar samples, with terrestrial materials generally containing lower 3 He/ 4 He ratios due to production of 4 He from radioactive decay. 3 He has a cosmological ratio of 300 atoms per million atoms of 4 He (at. ppm), [ 17 ] leading to the assumption that the original ratio of these primordial gases in the mantle was around 200-300 ppm when Earth was formed. Over Earth's history alpha-particle decay of uranium, thorium and other radioactive isotopes has generated significant amounts of 4 He, such that only around 7% of the helium now in the mantle is primordial helium, [ 17 ] lowering the total 3 He/ 4 He ratio to around 20 ppm. Ratios of 3 He/ 4 He in excess of atmospheric are indicative of a contribution of 3 He from the mantle. Crustal sources are dominated by the 4 He produced by radioactive decay. 
The ratio of helium-3 to helium-4 in natural Earth-bound sources varies greatly. [ 18 ] [ 19 ] Samples of the lithium ore spodumene from Edison Mine, South Dakota were found to contain 12 parts of helium-3 to a million parts of helium-4. Samples from other mines showed 2 parts per million. [ 18 ] Helium is also present as up to 7% of some natural gas sources, [ 20 ] and large sources have over 0.5% (above 0.2% makes it viable to extract). [ 21 ] The fraction of 3 He in helium separated from natural gas in the U.S. was found to range from 70 to 242 parts per billion. [ 22 ] [ 23 ] Hence the US 2002 stockpile of 1 billion normal m 3 [ 21 ] would have contained about 12 to 43 kilograms (26 to 95 lb) of helium-3. According to American physicist Richard Garwin, about 26 cubic metres (920 cu ft) or almost 5 kilograms (11 lb) of 3 He is available annually for separation from the US natural gas stream. If the process of separating out the 3 He could employ as feedstock the liquefied helium typically used to transport and store bulk quantities, estimates for the incremental energy cost range from $34 to $300 per litre ($150 to $1,360/imp gal) NTP, excluding the cost of infrastructure and equipment. [ 22 ] Algeria's annual gas production is assumed to contain 100 million normal cubic metres [ 21 ] and this would contain between 7 and 24 cubic metres (250 and 850 cu ft) of helium-3 (about 1 to 4 kilograms (2.2 to 8.8 lb)) assuming a similar 3 He fraction. 3 He is also present in the Earth's atmosphere. The natural abundance of 3 He in naturally occurring helium gas is 1.38 × 10 −6 (1.38 parts per million). The partial pressure of helium in the Earth's atmosphere is about 0.52 pascals (7.5 × 10 −5 psi), and thus helium accounts for 5.2 parts per million of the total pressure (101325 Pa) in the Earth's atmosphere, and 3 He thus accounts for 7.2 parts per trillion of the atmosphere. Since the atmosphere of the Earth has a mass of about 5.14 × 10 18 kilograms (1.133 × 10 19 lb), [ 24 ] the mass of 3 He in the Earth's atmosphere is the product of these numbers, or about 37,000 tonnes (36,000 long tons; 41,000 short tons) of 3 He. (In fact the effective figure is ten times smaller, since the above ppm are ppmv and not ppmw. One must multiply by 3 (the molecular mass of helium-3) and divide by 29 (the mean molecular mass of the atmosphere), resulting in 3,828 tonnes (3,768 long tons; 4,220 short tons) of helium-3 in the earth's atmosphere.) 3 He is produced on Earth from three sources: lithium spallation, cosmic rays, and beta decay of tritium ( 3 H). The contribution from cosmic rays is negligible within all except the oldest regolith materials, and lithium spallation reactions are a lesser contributor than the production of 4 He by alpha particle emissions. The total amount of helium-3 in the mantle may be in the range of 0.1–1 megatonne (98,000–984,000 long tons; 110,000–1,100,000 short tons). Most of the mantle is not directly accessible. Some helium-3 leaks up through deep-sourced hotspot volcanoes such as those of the Hawaiian Islands, but only 300 grams (11 oz) per year is emitted to the atmosphere. Mid-ocean ridges emit another 3 kilograms per year (8.2 g/d). Around subduction zones, various sources produce helium-3 in natural gas deposits which possibly contain a thousand tonnes of helium-3 (although there may be 25 thousand tonnes if all ancient subduction zones have such deposits). Wittenberg estimated that United States crustal natural gas sources may have only half a tonne total.
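The atmospheric inventory estimate above is a simple chain of multiplications. The sketch below reproduces it using only the figures quoted in the text (atmospheric mass, helium mole fraction, helium-3 abundance, and the 3/29 volume-to-mass correction); no new data are introduced.

```python
# Back-of-the-envelope estimate of the mass of helium-3 in Earth's atmosphere,
# using the figures quoted in the text.
M_ATMOSPHERE_KG = 5.14e18        # total mass of the atmosphere
HE_MOLE_FRACTION = 5.2e-6        # helium, by volume (ppmv)
HE3_FRACTION_OF_HE = 1.38e-6     # natural abundance of helium-3 in helium
M_HE3 = 3.0                      # molecular mass of helium-3 used in the text
M_AIR = 29.0                     # mean molecular mass of air used in the text

he3_mole_fraction = HE_MOLE_FRACTION * HE3_FRACTION_OF_HE          # ~7.2 parts per trillion by volume
mass_if_ppm_were_by_weight = M_ATMOSPHERE_KG * he3_mole_fraction   # the naive ~37,000 t figure
mass_by_weight = mass_if_ppm_were_by_weight * M_HE3 / M_AIR        # ppmv -> ppmw correction

print(round(mass_if_ppm_were_by_weight / 1000), "t (uncorrected)")  # ≈ 36,900 t
print(round(mass_by_weight / 1000), "t of helium-3")                # ≈ 3,800 t
```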
[ 25 ] Wittenberg cited Anderson's estimate of another 1,200 tonnes (1,200 long tons; 1,300 short tons) in interplanetary dust particles on the ocean floors. [ 26 ] In the 1994 study, extracting helium-3 from these sources was found to consume more energy than its fusion would release. [ 27 ] (See Extraterrestrial mining and Lunar resources .) One early estimate of the primordial ratio of 3 He to 4 He in the solar nebula comes from the measurement of their ratio in the atmosphere of Jupiter by the mass spectrometer of the Galileo atmospheric entry probe. This ratio is about 1:10,000, [ 28 ] or 100 parts of 3 He per million parts of 4 He. This is roughly the same ratio of the isotopes as in lunar regolith , which contains 28 ppm helium-4 and 2.8 ppb helium-3 (which is at the lower end of actual sample measurements, which vary from about 1.4 to 15 ppb). Terrestrial ratios of the isotopes are lower by a factor of 100, mainly due to enrichment of helium-4 stocks in the mantle by billions of years of alpha decay from uranium and thorium , as well as their decay products and extinct radionuclides . Virtually all helium-3 used in industry today is produced from the radioactive decay of tritium , given its very low natural abundance and its very high cost. Production, sales and distribution of helium-3 in the United States are managed by the US Department of Energy (DOE) Isotope Program . [ 29 ] While tritium has several different experimentally determined values of its half-life , NIST lists 4,500 ± 8 d ( 12.32 ± 0.02 years ). [ 30 ] It decays into helium-3 by beta decay as in this nuclear equation: 3 H → 3 He + e − + ν̄ e . Among the total released energy of 18.6 keV , the part taken by the electron 's kinetic energy varies, with an average of 5.7 keV , while the remaining energy is carried off by the nearly undetectable electron antineutrino . Beta particles from tritium can penetrate only about 6.0 millimetres (0.24 in) of air, and they are incapable of passing through the dead outermost layer of human skin. [ 31 ] The unusually low energy released in the tritium beta decay makes the decay (along with that of rhenium-187 ) appropriate for absolute neutrino mass measurements in the laboratory (the most recent experiment being KATRIN ). The low energy of tritium's radiation makes it difficult to detect tritium-labeled compounds except by using liquid scintillation counting . Tritium is a radioactive isotope of hydrogen and is typically produced by bombarding lithium-6 with neutrons in a nuclear reactor. The lithium nucleus absorbs a neutron and splits into helium-4 and tritium. Tritium decays into helium-3 with a half-life of 12.3 years , so helium-3 can be produced by simply storing the tritium until it undergoes radioactive decay. As tritium forms a stable compound with oxygen ( tritiated water ) while helium-3 does not, the storage and collection process could continuously collect the material that outgasses from the stored material. Tritium is a critical component of nuclear weapons and historically it was produced and stockpiled primarily for this application. The decay of tritium into helium-3 reduces the explosive power of the fusion warhead, so periodically the accumulated helium-3 must be removed from warhead reservoirs and tritium in storage. Helium-3 removed during this process is marketed for other applications. For decades this has been, and remains, the principal source of the world's helium-3. [ 32 ] Since the signing of the START I Treaty in 1991, the number of nuclear warheads that are kept ready for use has decreased.
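As a rough illustration of the "store tritium and wait" production route described above, the helium-3 yield from a stored tritium batch follows directly from the exponential decay law. The 10 kg starting inventory below is an arbitrary example value, not a figure from the text.

```python
import math

# Helium-3 produced by letting a batch of tritium decay in storage.
# Every tritium decay yields one 3He atom, and the atomic masses of 3H and 3He are
# both ~3.016 u, so the 3He mass produced ~ the tritium mass lost.

half_life_years = 12.32          # tritium half-life quoted above
tritium_start_kg = 10.0          # hypothetical stored inventory (example value only)

def he3_yield(years: float) -> float:
    """Approximate mass of 3He (kg) accumulated after `years` of storage."""
    remaining = tritium_start_kg * 0.5 ** (years / half_life_years)
    return tritium_start_kg - remaining   # tritium mass lost ~ 3He mass produced

for t in (1, 5, 12.32, 25):
    print(f"after {t:5.2f} y: ~{he3_yield(t):.2f} kg of 3He")
# ~0.55 kg after 1 y, ~2.45 kg after 5 y, 5 kg after one half-life, ~7.6 kg after 25 y
```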
[ 33 ] [ 34 ] This drawdown has reduced the quantity of helium-3 available from this source. Helium-3 stockpiles have been further diminished by increased demand, [ 22 ] primarily for use in neutron radiation detectors and medical diagnostic procedures. US industrial demand for helium-3 reached a peak of 70,000 litres (15,000 imp gal; 18,000 US gal) (approximately 8 kilograms (18 lb)) per year in 2008. Price at auction, historically about $100 per litre ($450/imp gal), reached as high as $2,000 per litre ($9,100/imp gal). [ 35 ] Since then, demand for helium-3 has declined to about 6,000 litres (1,300 imp gal; 1,600 US gal) per year due to the high cost and efforts by the DOE to recycle it and find substitutes. Assuming a density of 114 grams per cubic metre (0.192 lb/cu yd), at $100/l helium-3 would be about a thirtieth as expensive as tritium (roughly $880 per gram ($25,000/oz) vs roughly $30,000 per gram ($850,000/oz)), while at $2000/l helium-3 would be about half as expensive as tritium ($17,540 per gram ($497,000/oz) vs $30,000 per gram ($850,000/oz)). The DOE recognized the developing shortage of both tritium and helium-3, and began producing tritium by lithium irradiation at the Tennessee Valley Authority 's Watts Bar Nuclear Generating Station in 2010. [ 22 ] In this process, tritium-producing burnable absorber rods (TPBARs) containing lithium in a ceramic form are inserted into the reactor in place of the normal boron control rods. [ 36 ] Periodically, the TPBARs are replaced and the tritium extracted. Currently only two commercial nuclear reactors (Watts Bar Nuclear Plant Units 1 and 2) are being used for tritium production, but the process could, if necessary, be vastly scaled up to meet any conceivable demand simply by utilizing more of the nation's power reactors. [ citation needed ] Substantial quantities of tritium and helium-3 could also be extracted from the heavy water moderator in CANDU nuclear reactors. [ 22 ] [ 37 ] India and Canada, the two countries with the largest heavy water reactor fleets, are both known to extract tritium from moderator/coolant heavy water, but those amounts are not nearly enough to satisfy global demand for either tritium or helium-3. As tritium is also produced inadvertently in various processes in light water reactors (see the article on tritium for details), extraction from those sources could be another source of helium-3. If the annual discharge of tritium at the La Hague reprocessing facility (31.2 grams (1.10 oz) per 2018 figures) is taken as a basis, the amounts discharged are not nearly enough to satisfy demand, even if 100% recovery is achieved. Helium-3 can be used in spin echo experiments on surface dynamics , which are underway at the Surface Physics Group at the Cavendish Laboratory in Cambridge and in the Chemistry Department at Swansea University . Helium-3 is an important isotope in instrumentation for neutron detection . It has a high absorption cross section for thermal neutron beams and is used as a converter gas in neutron detectors. The neutron is converted through the nuclear reaction n + 3 He → 3 H + 1 H + 0.764 MeV into charged particles: tritium ions (T, 3 H) and hydrogen ions, or protons (p, 1 H), which are then detected by creating a charge cloud in the stopping gas of a proportional counter or a Geiger–Müller tube . [ 40 ] Furthermore, the absorption process is strongly spin -dependent, which allows a spin-polarized helium-3 volume to transmit neutrons with one spin component while absorbing the other.
This effect is employed in neutron polarization analysis , a technique which probes for magnetic properties of matter. [ 41 ] [ 42 ] [ 43 ] [ 44 ] The United States Department of Homeland Security had hoped to deploy detectors to spot smuggled plutonium in shipping containers by their neutron emissions, but the worldwide shortage of helium-3 following the drawdown in nuclear weapons production since the Cold War has to some extent prevented this. [ 45 ] As of 2012, DHS determined the commercial supply of boron-10 would support converting its neutron detection infrastructure to that technology. [ 46 ] Helium-3 refrigerators are devices used in experimental physics for obtaining temperatures down to about 0.2 kelvin . [ 47 ] By evaporative cooling of helium-4, a 1-K pot liquefies a small amount of helium-3 in a small vessel called a helium-3 pot. Evaporative cooling at low pressure of the liquid helium-3, usually driven by adsorption since due to its high price the helium-3 is usually contained in a closed system to avoid losses, cools the helium-3 pot to a fraction of a kelvin. A dilution refrigerator uses a mixture of helium-3 and helium-4 to reach cryogenic temperatures as low as a few thousandths of a kelvin. [ 48 ] Helium-3 nuclei have an intrinsic nuclear spin of 1 ⁄ 2 , and a relatively high gyromagnetic ratio . Because of this, it is possible to use Nuclear magnetic resonance (NMR) to observe Helium-3. This analytical technique, usually called 3 He-NMR, can be used to identify helium-containing compounds. It is however limited by the low abundance of helium-3 in comparison to helium-4, which is itself not NMR-active. Helium-3 can be hyperpolarized using non-equilibrium means such as spin-exchange optical pumping. [ 49 ] During this process, circularly polarized infrared laser light, tuned to the appropriate wavelength, is used to excite electrons in an alkali metal , such as caesium or rubidium inside a sealed glass vessel. The angular momentum is transferred from the alkali metal electrons to the noble gas nuclei through collisions. In essence, this process effectively aligns the nuclear spins with the magnetic field in order to enhance the NMR signal. The hyperpolarized gas may then be stored at pressures of 10 atm, for up to 100 hours. Following inhalation, gas mixtures containing the hyperpolarized helium-3 gas can be imaged with an MRI scanner to produce anatomical and functional images of lung ventilation. This technique is also able to produce images of the airway tree, locate unventilated defects, measure the alveolar oxygen partial pressure , and measure the ventilation/perfusion ratio . This technique may be critical for the diagnosis and treatment management of chronic respiratory diseases such as chronic obstructive pulmonary disease (COPD) , emphysema , cystic fibrosis , and asthma . [ 50 ] Because a helium atom, or even two helium atoms , can be encased in fullerene -like cages, the NMR spectroscopy of this element can be a sensitive probe for changes of the carbon framework around it. [ 51 ] [ 52 ] Using carbon-13 NMR to analyze fullerenes themselves is complicated by so many subtle differences among the carbons in anything but the simplest, highly-symmetric structures. Both MIT's Alcator C-Mod tokamak and the Joint European Torus (JET) have experimented with adding a little helium-3 to a H–D plasma to increase the absorption of radio-frequency (RF) energy to heat the hydrogen and deuterium ions, a "three-ion" effect. 
[ 53 ] [ 54 ] 3 He can be produced by the low-temperature fusion of deuterium with a proton (D–p fusion): 2 H + 1 p → 3 He + γ + 4.98 MeV. If the fusion temperature is below that required for the helium nuclei to fuse, the reaction produces a high-energy helium-3 nucleus, which quickly acquires an electron, producing a stable light helium ion which can be utilized directly as a source of electricity without producing dangerous neutrons. 3 He can be used in fusion reactions by either of the reactions 2 H + 3 He → 4 He + 1 p + 18.3 MeV , or 3 He + 3 He → 4 He + 2 1 p + 12.86 MeV. The conventional deuterium + tritium (" D–T ") fusion process produces energetic neutrons which render reactor components radioactive with activation products . The appeal of helium-3 fusion stems from the aneutronic nature of its reaction products. Helium-3 itself is non-radioactive. The lone high-energy by-product, the proton , can be contained by means of electric and magnetic fields. The kinetic energy of this proton (created in the fusion process) will interact with the containing electromagnetic field, resulting in direct net electricity generation. [ 60 ] Because of the higher Coulomb barrier , the temperatures required for 2 H + 3 He fusion are much higher than those of conventional D–T fusion . Moreover, since both reactants need to be mixed together to fuse, reactions between nuclei of the same reactant will occur, and the D–D reaction ( 2 H + 2 H ) does produce a neutron . Reaction rates vary with temperature, but the D– 3 He reaction rate is never greater than 3.56 times the D–D reaction rate. Therefore, fusion using D– 3 He fuel at the right temperature and a D-lean fuel mixture can produce a much lower neutron flux than D–T fusion, but is not clean, negating some of its main attraction. The second possibility, fusing 3 He with itself ( 3 He + 3 He ), requires even higher temperatures (since now both reactants have a +2 charge), and thus is even more difficult than the D– 3 He reaction. It offers a theoretical reaction that produces no neutrons; the charged protons produced can be contained in electric and magnetic fields, which in turn directly generates electricity. 3 He + 3 He fusion is feasible, as demonstrated in the laboratory, and has immense advantages, but commercial viability is many years in the future. [ 61 ] The amounts of helium-3 needed as a replacement for conventional fuels are substantial by comparison to amounts currently available. The total amount of energy produced in the 2 D + 3 He reaction is 18.4 MeV , which corresponds to some 493 megawatt-hours (4.93×10 8 W·h) per three grams (one mole ) of 3 He . If the total amount of energy could be converted to electrical power with 100% efficiency (a physical impossibility), it would correspond to about 30 minutes of output of a gigawatt electrical plant per mole of 3 He . Thus, a year's production (at 6 grams for each hour of operation) would require 52.5 kilograms of helium-3. The amount of fuel needed for large-scale applications can also be put in terms of total consumption: electricity consumption by 107 million U.S. households in 2001 [ 62 ] totaled 1,140 billion kW·h (1.14×10 15 W·h). Again assuming 100% conversion efficiency, 6.7 tonnes per year of helium-3 would be required for that segment of the energy demand of the United States, or 15 to 20 tonnes per year given a more realistic end-to-end conversion efficiency. [ citation needed ] A second-generation approach to controlled fusion power involves combining helium-3 and deuterium, 2 D .
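The per-mole energy figures quoted above follow from straightforward unit conversion. The short check below uses only the 18.4 MeV reaction energy and standard physical constants, with the same 100% conversion caveat noted in the text.

```python
# Check the energy bookkeeping for the D + 3He reaction quoted above (18.4 MeV per reaction).

MEV_TO_J = 1.602e-13          # joules per MeV
AVOGADRO = 6.022e23           # reactions per mole of 3He

energy_per_mole_J = 18.4 * MEV_TO_J * AVOGADRO        # ~1.77e12 J per 3 g of 3He
energy_per_mole_MWh = energy_per_mole_J / 3.6e9       # 1 MWh = 3.6e9 J
print(f"{energy_per_mole_MWh:.0f} MWh per mole (3 g) of 3He")       # ~493 MWh

# Minutes of 1 GW output per mole, assuming (unrealistically) 100% conversion:
minutes_at_1GW = energy_per_mole_J / (1e9 * 60)
print(f"{minutes_at_1GW:.0f} minutes of 1 GW output per mole")      # ~30 minutes

# Fuel needed for a year of continuous operation at 6 g of 3He per hour:
print(f"{6 * 24 * 365 / 1000:.1f} kg of 3He per year")              # ~52.6 kg
```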
This reaction produces an alpha particle and a high-energy proton . The most important potential advantage of this fusion reaction for power production as well as other applications lies in its compatibility with the use of electrostatic fields to control fuel ions and the fusion protons. High speed protons, as positively charged particles, can have their kinetic energy converted directly into electricity , through use of solid-state conversion materials as well as other techniques. Potential conversion efficiencies of 70% may be possible, as there is no need to convert proton energy to heat in order to drive a turbine -powered electrical generator . [ citation needed ] There have been many claims about the capabilities of helium-3 power plants. According to proponents, fusion power plants operating on deuterium and helium-3 would offer lower capital and operating costs than their competitors due to less technical complexity, higher conversion efficiency, smaller size, the absence of radioactive fuel, no air or water pollution , and only low-level radioactive waste disposal requirements. Recent estimates suggest that about $6 billion in investment capital will be required to develop and construct the first helium-3 fusion power plant . Financial break even at today's wholesale electricity prices (5 US cents per kilowatt-hour ) would occur after five 1- gigawatt plants were on line, replacing old conventional plants or meeting new demand. [ 63 ] The reality is not so clear-cut. The most advanced fusion programs in the world are inertial confinement fusion (such as National Ignition Facility ) and magnetic confinement fusion (such as ITER and Wendelstein 7-X ). In the case of the former, there is no solid roadmap to power generation. In the case of the latter, commercial power generation is not expected until around 2050. [ 64 ] In both cases, the type of fusion discussed is the simplest: D–T fusion. The reason for this is the very low Coulomb barrier for this reaction; for D+ 3 He, the barrier is much higher, and it is even higher for 3 He– 3 He. The immense cost of reactors like ITER and National Ignition Facility are largely due to their immense size, yet to scale up to higher plasma temperatures would require reactors far larger still. The 14.7 MeV proton and 3.6 MeV alpha particle from D– 3 He fusion, plus the higher conversion efficiency, means that more electricity is obtained per kilogram than with D–T fusion (17.6 MeV), but not that much more. As a further downside, the rates of reaction for helium-3 fusion reactions are not particularly high, requiring a reactor that is larger still or more reactors to produce the same amount of electricity. In 2022, Helion Energy claimed that their 7th fusion prototype (Polaris; fully funded and under construction as of September 2022) will demonstrate "net electricity from fusion", and will demonstrate "helium-3 production through deuterium–deuterium fusion" by means of a "patented high-efficiency closed-fuel cycle". 
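To make the "more electricity per kilogram, but not that much more" point concrete, the sketch below compares the two fuel cycles per unit mass of fuel burned. The 70% direct-conversion figure is the one quoted above; the 40% efficiency assumed for a D–T thermal cycle is an illustrative round number, not a value from the text.

```python
# Rough comparison of electricity per kilogram of fuel for D-T versus D-3He fusion.
# Reaction energies and the 70% direct-conversion figure come from the text above;
# the 40% efficiency assumed for a D-T steam cycle is an illustrative round number.

AMU_PER_KG = 6.022e26
MEV_TO_J = 1.602e-13

def electricity_per_kg(q_mev: float, fuel_mass_amu: float, efficiency: float) -> float:
    """Electrical energy (J) per kg of fuel consumed, at the given conversion efficiency."""
    reactions_per_kg = AMU_PER_KG / fuel_mass_amu
    return reactions_per_kg * q_mev * MEV_TO_J * efficiency

dt   = electricity_per_kg(17.6, 2 + 3, 0.40)   # D + T, thermal-cycle efficiency (assumed)
dhe3 = electricity_per_kg(18.3, 2 + 3, 0.70)   # D + 3He, direct conversion (quoted upper bound)

print(f"D-T  : {dt / 3.6e9:,.0f} MWh(e) per kg of fuel")
print(f"D-3He: {dhe3 / 3.6e9:,.0f} MWh(e) per kg of fuel (~{dhe3 / dt:.1f}x)")
# The fusion energy released per kilogram of fuel is nearly identical (18.3 vs 17.6 MeV per 5 amu);
# most of any per-kilogram advantage comes from the higher assumed conversion efficiency.
```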
[ 65 ] To attempt to work around this problem of massively large power plants that may not even be economical with D–T fusion, let alone the far more challenging D– 3 He fusion, a number of other reactors have been proposed – the Fusor , Polywell , Focus fusion , and many more, though many of these concepts have fundamental problems with achieving a net energy gain, and generally attempt to achieve fusion in thermal disequilibrium, something that could potentially prove impossible, [ 66 ] and consequently, these long-shot programs tend to have trouble garnering funding despite their low budgets. Unlike the "big" and "hot" fusion systems, if such systems worked, they could scale to the higher barrier aneutronic fuels, and so their proponents tend to promote p-B fusion , which requires no exotic fuel such as helium-3. Materials on the Moon 's surface contain helium-3 at concentrations between 1.4 and 15 ppb in sunlit areas, [ 67 ] [ 68 ] and may contain concentrations as much as 50 ppb in permanently shadowed regions. [ 7 ] A number of people, starting with Gerald Kulcinski in 1986, [ 69 ] have proposed to explore the Moon , mine lunar regolith and use the helium-3 for fusion . Because of the low concentrations of helium-3, any mining equipment would need to process extremely large amounts of regolith (over 150 tonnes of regolith to obtain one gram of helium-3). [ 70 ] The primary objective of Indian Space Research Organisation 's first lunar probe called Chandrayaan-1 , launched on October 22, 2008, was reported in some sources to be mapping the Moon's surface for helium-3-containing minerals. [ 71 ] No such objective is mentioned in the project's official list of goals, though many of its scientific payloads have held helium-3-related applications. [ 72 ] [ 73 ] Cosmochemist and geochemist Ouyang Ziyuan from the Chinese Academy of Sciences who is now in charge of the Chinese Lunar Exploration Program has already stated on many occasions that one of the main goals of the program would be the mining of helium-3, from which operation "each year, three space shuttle missions could bring enough fuel for all human beings across the world". [ 74 ] In January 2006, the Russian space company RKK Energiya announced that it considers lunar helium-3 a potential economic resource to be mined by 2020, [ 75 ] if funding can be found. [ 76 ] [ 77 ] Not all writers feel the extraction of lunar helium-3 is feasible, or even that there will be a demand for it for fusion. Dwayne Day , writing in The Space Review in 2015, characterises helium-3 extraction from the Moon for use in fusion as magical thinking about an unproven technology, and questions the feasibility of lunar extraction, as compared to production on Earth. [ 78 ] Mining gas giants for helium-3 has also been proposed. [ 79 ] The British Interplanetary Society 's hypothetical Project Daedalus interstellar probe design was fueled by helium-3 mines in the atmosphere of Jupiter , for example.
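The "over 150 tonnes of regolith per gram" figure quoted above is just the reciprocal of the concentration. A quick check at a few of the concentrations quoted for lunar material:

```python
# Mass of lunar regolith that must be processed per gram of helium-3,
# for the concentration range quoted above (ppb by mass).

for ppb in (1.4, 6.7, 15, 50):
    concentration = ppb * 1e-9                 # grams of 3He per gram of regolith
    tonnes_per_gram = 1.0 / concentration / 1e6
    print(f"{ppb:5.1f} ppb -> ~{tonnes_per_gram:,.0f} tonnes of regolith per gram of 3He")
# At ~6.7 ppb the figure is ~150 tonnes per gram, matching the estimate in the text.
```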
https://en.wikipedia.org/wiki/Helium-3
Helium-3 surface spin echo ( HeSE ) is an inelastic scattering technique in surface science that has been used to measure microscopic dynamics at well-defined surfaces in ultra-high vacuum . The information available from HeSE complements and extends that available from other inelastic scattering techniques such as neutron spin echo and traditional helium-4 atom scattering (HAS). The experimental principles of the HeSE experiment are analogous to those of neutron spin echo, differing in details such as the nature of the probe/sample interactions that give rise to scattering. In outline, a polarized 3 He beam is created by a supersonic expansion followed by a spin-filtering stage (polariser). The helium scatters from the experimental sample and is detected at the end of the beamline after another spin-filtering stage (analyser). Before and after the scattering process, the beam passes through magnetic fields that precess the probe spins in the usual sense of a spin echo experiment. The raw data of the experiment are the spin-resolved scattered helium intensities as a function of the incoming magnetic field integral, outgoing field integral and any other variable parameters relevant to specific experiments, such as surface orientation and temperature. In the most general kind of scattering-with-precession experiment, the data can be used to construct the 2D 'wavelength intensity matrix' [ 1 ] for the surface scattering process, i.e. the probability that a helium atom of a certain incoming wavelength scatters into a state with a certain outgoing wavelength. Conventional 'spin echo' measurements are a common special case of the more general scattering-with-precession measurements, in which the incoming and outgoing magnetic field integrals are constrained to be equal. The polarization of the outgoing beam is measured as a function of the precession field integral by measuring the intensity of the outgoing beam resolved into different spin states. The spin echo case is referred to as a type of 'tilted projection measurement'. [ 2 ] Spin echo measurements are an appropriate tilted projection for quasi-elastic measurements of surface dynamics because the raw data are closely related to the intermediate scattering function (ISF), which in many cases can be interpreted in terms of standard dynamical signatures. [ 3 ] The surface processes that HeSE can measure can be broadly divided into elastic, quasielastic and inelastic processes. Measurements in which the predominant signal is elastically scattered include standard helium diffraction and the measurement of selective adsorption resonances . Quasielastic measurements generally correspond to measurements of microscopic surface diffusion in which the Doppler-like energy gain and loss of the helium atoms is small compared to the beam energy. More strongly inelastic measurements can provide information about energy loss channels on the surface such as surface phonons . HeSE has been used to study the diffusion rates and mechanisms of atoms and molecules ('adsorbates') at surfaces. A non-exhaustive list of the research themes associated with HeSE diffusion measurements include: nuclear quantum effects in the surface diffusion of atomic hydrogen; [ 4 ] [ 5 ] benchmarking the adsorbate/surface free energy landscape; [ 6 ] energy exchange ('friction') between adsorbates and the surface; [ 7 ] pairwise [ 8 ] and many-body [ 9 ] inter-adsorbate interactions. 
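As an illustration of how quasi-elastic HeSE data are interpreted, the sketch below evaluates the intermediate scattering function for the simplest jump-diffusion picture, the Chudley–Elliott model on a one-dimensional Bravais lattice. The jump distance and residence time are arbitrary example values, not results from the work cited above.

```python
import numpy as np

# Minimal sketch: model intermediate scattering function (ISF) for adsorbate jump diffusion,
# the kind of dynamical signature HeSE measurements are compared against.
# Chudley-Elliott model, 1D lattice with equally probable jumps of +/- a and mean residence
# time tau: the ISF decays as exp(-alpha(dK) * t) with alpha(dK) = (1/tau) * (1 - cos(dK * a)).

a = 2.5e-10        # example jump distance (m), hypothetical
tau = 1e-10        # example residence time between jumps (s), hypothetical

def dephasing_rate(dK: float) -> float:
    """Quasi-elastic dephasing rate alpha(dK) for jumps of +/- a along the dK direction."""
    return (1.0 / tau) * (1.0 - np.cos(dK * a))

def isf(dK: float, t: np.ndarray) -> np.ndarray:
    """Model ISF I(dK, t) = exp(-alpha(dK) t), normalised to 1 at t = 0."""
    return np.exp(-dephasing_rate(dK) * t)

t = np.linspace(0, 5e-10, 6)                    # spin-echo times (s)
for dK in (0.5e10, 1.0e10):                     # momentum transfers (1/m)
    print(f"dK = {dK:.1e} 1/m:", np.round(isf(dK, t), 3))
```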
HeSE has been used to construct empirical helium-surface scattering potentials through the measurement of selective adsorption resonances (bound state resonances) on the clean LiF(001) surface [ 10 ] and the hydrogenated Si(111) surface. [ 11 ]
https://en.wikipedia.org/wiki/Helium-3_surface_spin_echo
Helium atom scattering ( HAS ) is a surface analysis technique used in materials science . It provides information about the surface structure and lattice dynamics of a material by measuring the diffracted atoms from a monochromatic helium beam incident on the sample. The first recorded helium diffraction experiment was completed in 1930 by Immanuel Estermann and Otto Stern [ 1 ] on the (100) crystal face of lithium fluoride . This experimentally established the feasibility of atom diffraction when the de Broglie wavelength , λ, of the impinging atoms is on the order of the interatomic spacing of the material. At the time, the major limit to the experimental resolution of this method was due to the large velocity spread of the helium beam. It wasn't until the development of high pressure nozzle sources capable of producing intense and strongly monochromatic beams in the 1970s that HAS gained popularity for probing surface structure. Interest in studying the collision of rarefied gases with solid surfaces was helped by a connection with aeronautics and space problems of the time. Plenty of studies showing the fine structures in the diffraction pattern of materials using helium atom scattering were published in the 1970s. However, it wasn't until a third generation of nozzle beam sources was developed, around 1980, that studies of surface phonons could be made by helium atom scattering. These nozzle beam sources were capable of producing helium atom beams with an energy resolution of less than 1meV, making it possible to explicitly resolve the very small energy changes resulting from the inelastic collision of a helium atom with the vibrational modes of a solid surface, so HAS could now be used to probe lattice dynamics. The first measurement of such a surface phonon dispersion curve was reported in 1981, [ 2 ] leading to a renewed interest in helium atom scattering applications, particularly for the study of surface dynamics. Generally speaking, surface bonding is different from the bonding within the bulk of a material. In order to accurately model and describe the surface characteristics and properties of a material, it is necessary to understand the specific bonding mechanisms at work at the surface. To do this, one must employ a technique that is able to probe only the surface, we call such a technique "surface-sensitive". That is, the 'observing' particle (whether it be an electron, a neutron, or an atom) needs to be able to only 'see' (gather information from) the surface. If the penetration depth of the incident particle is too deep into the sample, the information it carries out of the sample for detection will contain contributions not only from the surface, but also from the bulk material. While there are several techniques that probe only the first few monolayers of a material, such as low-energy electron diffraction (LEED) , helium atom scattering is unique in that it does not penetrate the surface of the sample at all! In fact, the scattering 'turnaround' point of the helium atom is 3-4 angstroms above the surface plane of atoms on the material. Therefore, the information carried out in the scattered helium atom comes solely from the very surface of the sample. A visual comparison of helium scattering and electron scattering is shown below: Helium at thermal energies can be modeled classically as scattering from a hard potential wall, with the location of scattering points representing a constant electron density surface. 
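The feasibility condition mentioned above, a de Broglie wavelength comparable to interatomic spacings, is easy to verify for a thermal helium beam. The beam energies below are representative example values, not figures from a specific experiment.

```python
import math

# de Broglie wavelength of a helium-4 atom at typical thermal beam energies.

H = 6.626e-34            # Planck constant (J s)
M_HE4 = 6.646e-27        # mass of a 4He atom (kg)
EV = 1.602e-19           # joules per electronvolt

def de_broglie_wavelength(energy_meV: float) -> float:
    """Wavelength (m) of a 4He atom with the given kinetic energy in meV."""
    energy_J = energy_meV * 1e-3 * EV
    return H / math.sqrt(2 * M_HE4 * energy_J)

for E in (5, 20, 200):
    print(f"{E:3d} meV -> {de_broglie_wavelength(E) * 1e10:.2f} angstrom")
# ~2.0, ~1.0 and ~0.3 angstrom respectively: comparable to typical interatomic spacings.
```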
Since single scattering dominates the helium-surface interactions, the collected helium signal easily gives information on the surface structure without the complications of considering multiple electron scattering events (such as in LEED). A qualitative sketch of the elastic one-dimensional interaction potential between the incident helium atom and an atom on the surface of the sample shows that the potential can be broken down into an attractive portion due to Van der Waals forces , which dominates at large separation distances, and a steep repulsive force due to electrostatic repulsion of the positive nuclei, which dominates at short distances. To modify the potential for a two-dimensional surface, a function is added to describe the surface atomic corrugations of the sample. The resulting three-dimensional potential can be modeled as a corrugated Morse potential. [ 3 ] Its first term is the laterally averaged surface potential, a potential well with a depth D at the minimum of z = z m and a fitting parameter α ; its second term is the repulsive potential modified by the corrugation function, ξ ( x , y ), which has the same periodicity as the surface and a fitting parameter β . Helium atoms, in general, can be scattered either elastically (with no energy transfer to or from the crystal surface) or inelastically through excitation or deexcitation of the surface vibrational modes (phonon creation or annihilation). Each of these scattering results can be used to study different properties of a material's surface. There are several advantages to using helium atoms as compared with x-rays, neutrons, and electrons to probe a surface and study its structures and phonon dynamics. As mentioned previously, the lightweight helium atoms at thermal energies do not penetrate into the bulk of the material being studied. This means that in addition to being strictly surface-sensitive, they are truly non-destructive to the sample. Their de Broglie wavelength is also on the order of the interatomic spacing of materials, making them ideal probing particles. Since they are neutral, helium atoms are insensitive to surface charges. As a noble gas, the helium atoms are chemically inert. When used at thermal energies, as is the usual scenario, the helium atomic beam is an inert probe (chemically, electrically, magnetically, and mechanically). It is therefore capable of studying the surface structure and dynamics of a wide variety of materials, including those with reactive or metastable surfaces. A helium atom beam can even probe surfaces in the presence of electromagnetic fields and during ultra-high vacuum surface processing without interfering with the ongoing process. Because of this, helium atoms can be useful for measurements of sputtering, annealing, and adsorbate layer depositions. Finally, because the thermal helium atom has no rotational and vibrational degrees of freedom and no available electronic transitions, only the translational kinetic energy of the incident and scattered beam need be analyzed in order to extract information about the surface. A general helium atom scattering experimental setup consists of a nozzle beam source, an ultra-high vacuum scattering chamber with a crystal manipulator, and a detector chamber. Every system can have a different particular arrangement and setup, but most will have this basic structure.
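Returning to the corrugated Morse potential described above, the exact expression used in the cited reference is not reproduced here, but a commonly used parameterization with the same ingredients (well depth D, decay constant α, minimum position z m, and a corrugation ξ(x, y) scaled by β) can be sketched as follows. Treat the specific functional form and all numerical values as assumptions for illustration only.

```python
import numpy as np

# Illustrative (assumed) form of a corrugated Morse potential with the ingredients described above:
# a laterally averaged Morse well of depth D, minimum at z_m, stiffness ALPHA, plus a repulsive
# term whose strength follows the surface corrugation xi(x, y) with amplitude set by BETA.
# The exact expression used in the cited reference may differ in detail.

D     = 8e-3      # well depth (eV), example value
ALPHA = 1.0       # decay constant (1/angstrom), example value
Z_M   = 3.5       # position of the potential minimum (angstrom), example value
BETA  = 0.05      # corrugation strength, example value
A     = 2.8       # surface lattice constant (angstrom), example value

def corrugation(x, y):
    """Periodic corrugation function xi(x, y) with the periodicity of the surface."""
    return np.cos(2 * np.pi * x / A) + np.cos(2 * np.pi * y / A)

def potential(x, y, z):
    """Assumed corrugated Morse potential V(x, y, z) in eV."""
    morse = D * (np.exp(-2 * ALPHA * (z - Z_M)) - 2 * np.exp(-ALPHA * (z - Z_M)))
    repulsive_corrugation = D * BETA * corrugation(x, y) * np.exp(-2 * ALPHA * (z - Z_M))
    return morse + repulsive_corrugation

z = np.linspace(2.0, 8.0, 7)
print(np.round(potential(0.0, 0.0, z) * 1e3, 2))   # potential along z above one site, in meV
```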
The helium atom beam, with a very narrow energy spread of less than 1 meV, is created through free adiabatic expansion of helium at a pressure of ~200 bar into a low-vacuum chamber through a small ~5–10 μm nozzle. [ 4 ] Depending on the system operating temperature range, typical helium atom energies produced can be 5–200 meV. A conical aperture called the skimmer extracts the center portion of the helium beam. At this point, the atoms of the helium beam should be moving with nearly uniform velocity. The same section of the beamline also contains a chopper system, which is responsible for creating the beam pulses needed to generate the time of flight measurements to be discussed later. The scattering chamber generally contains the crystal manipulator and any other analytical instruments that can be used to characterize the crystal surface. Equipment that can be included in the main scattering chamber includes a LEED screen (to make complementary measurements of the surface structure), an Auger analysis system (to determine the contamination level of the surface), a mass spectrometer (to monitor the vacuum quality and residual gas composition), and, for working with metal surfaces, an ion gun (for sputter cleaning of the sample surface). In order to maintain clean surfaces, the pressure in the scattering chamber needs to be in the range of 10 −8 to 10 −9 Pa. This requires the use of turbomolecular or cryogenic vacuum pumps. The crystal manipulator allows for at least three different motions of the sample. The azimuthal rotation allows the crystal to change the direction of the surface atoms, the tilt angle is used to set the normal of the crystal to be in the scattering plane, and the rotation of the manipulator around the z -axis alters the beam incidence angle. The crystal manipulator should also incorporate a system to control the temperature of the crystal. After the beam scatters off the crystal surface, it goes into the detector chamber. The most commonly used detector setup is an electron bombardment ion source followed by a mass filter and an electron multiplier. The beam is directed through a series of differential pumping stages that reduce the noise-to-signal ratio before hitting the detector. A time-of-flight analyzer can follow the detector to take energy loss measurements. Under conditions for which elastic diffractive scattering dominates, the relative angular positions of the diffraction peaks reflect the geometric properties of the surface being examined. That is, the locations of the diffraction peaks reveal the symmetry of the two-dimensional space group that characterizes the observed surface of the crystal. The width of the diffraction peaks reflects the energy spread of the beam. The elastic scattering is governed by two kinematic conditions: conservation of energy, which requires |k G | = |k i |, and conservation of the momentum component parallel to the crystal surface, K G = K i + G . Here G is a reciprocal lattice vector, k G and k i are the final and initial (incident) wave vectors of the helium atom, and K G and K i are their components parallel to the surface. The Ewald sphere construction will determine the diffracted beams to be seen and the scattering angles at which they will appear. A characteristic diffraction pattern will appear, determined by the periodicity of the surface, in a similar manner to that seen for Bragg scattering in electron and x-ray diffraction.
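A short sketch of how the two kinematic conditions above fix the in-plane diffraction angles; the beam energy, incidence angle and surface lattice constant are example values only.

```python
import math

# In-plane elastic diffraction angles from the kinematic conditions above:
# |k_G| = |k_i| (energy conservation) and K_G = K_i + G (parallel momentum conservation).
# For scattering in the plane containing the surface normal, K = k * sin(theta), so
#   sin(theta_G) = sin(theta_i) + G / k_i.

H = 6.626e-34; M_HE4 = 6.646e-27; EV = 1.602e-19

E_meV = 20.0              # example beam energy
theta_i = 45.0            # example incidence angle, degrees from the surface normal
a = 3.0                   # example surface lattice constant, angstrom

k_i = math.sqrt(2 * M_HE4 * E_meV * 1e-3 * EV) / (H / (2 * math.pi)) * 1e-10   # 1/angstrom
for n in (-2, -1, 0, 1, 2):
    G = n * 2 * math.pi / a                       # reciprocal lattice vector (1/angstrom)
    s = math.sin(math.radians(theta_i)) + G / k_i
    if abs(s) <= 1:                               # otherwise this diffraction channel is closed
        print(f"n = {n:+d}: theta_G = {math.degrees(math.asin(s)):6.1f} deg")
```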
Most helium atom scattering studies will scan the detector in a plane defined by the incoming atomic beam direction and the surface normal, reducing the Ewald sphere to a circle of radius R = k 0 intersecting only reciprocal lattice rods that lie in the scattering plane as shown here: The intensities of the diffraction peaks provide information about the static gas-surface interaction potentials. Measuring the diffraction peak intensities under different incident beam conditions can reveal the surface corrugation (the surface electron density) of the outermost atoms on the surface. Note that the detection of the helium atoms is much less efficient than for electrons, so the scattered intensity can only be determined for one point in k-space at a time. For an ideal surface, there should be no elastic scattering intensity between the observed diffraction peaks. If there is intensity seen here, it is due to a surface imperfection, such as steps or adatoms . From the angular position, width and intensity of the peaks, information is gained regarding the surface structure and symmetry, and the ordering of surface features. The inelastic scattering of the helium atom beam reveals the surface phonon dispersion for a material. At scattering angles far away from the specular or diffraction angles, the scattering intensity of the ordered surface is dominated by inelastic collisions. In order to study the inelastic scattering of the helium atom beam due only to single-phonon contributions, an energy analysis needs to be made of the scattered atoms. The most popular way to do this is through the use of time-of-flight (TOF) analysis . The TOF analysis requires the beam to be pulsed through the mechanical chopper, producing collimated beam 'packets' that have a 'time-of-flight' (TOF) to travel from the chopper to the detector. The beams that scatter inelastically will lose some energy in their encounter with the surface and therefore have a different velocity after scattering than they were incident with. The creation or annihilation of surface phonons can be measured, therefore, by the shifts in the energy of the scattered beam. By changing the scattering angles or incident beam energy, it is possible to sample inelastic scattering at different values of energy and momentum transfer, mapping out the dispersion relations for the surface modes. Analyzing the dispersion curves reveals sought-after information about the surface structure and bonding. A TOF analysis plot would show intensity peaks as a function of time. The main peak (with the highest intensity) is that for the unscattered helium beam 'packet'. A peak to the left is that for the annihilation of a phonon. If a phonon creation process occurred, it would appear as a peak to the right: The qualitative sketch above shows what a time-of-flight plot might look like near a diffraction angle. However, as the crystal rotates away from the diffraction angle, the elastic (main) peak drops in intensity. The intensity never shrinks to zero even far from diffraction conditions, however, due to incoherent elastic scattering from surface defects. The intensity of the incoherent elastic peak and its dependence on scattering angle can therefore provide useful information about surface imperfections present on the crystal. The kinematics of the phonon annihilation or creation process are extremely simple - conservation of energy and momentum can be combined to yield an equation for the energy exchange Δ E and momentum exchange q during the collision process. 
This inelastic scattering process is described as a phonon of energy Δ E = ℏ ω and wavevector q . The vibrational modes of the lattice can then be described by the dispersion relations ω ( q ) , which give the possible phonon frequencies ω as a function of the phonon wavevector q . In addition to detecting surface phonons, because of the low energy of the helium beam, low-frequency vibrations of adsorbates can be detected as well, leading to the determination of their potential energy.
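For completeness, the combined energy- and momentum-conservation relation referred to earlier (often called the scan curve) can be written down and evaluated. The derivation uses only E ∝ k² and K = k sin θ for in-plane scattering; the numbers below are illustrative example values.

```python
import math

# "Scan curve" relating the energy exchange dE and the parallel momentum exchange dK in a
# planar (in-plane) inelastic scattering event at fixed incidence and detection angles:
#   (E_i + dE) / E_i = (sin(theta_i) / sin(theta_f))**2 * (1 + dK / K_i)**2
# Observed phonon events lie where this curve crosses the surface dispersion relation.

def scan_curve_dE(E_i_meV: float, theta_i_deg: float, theta_f_deg: float,
                  dK_over_Ki: float) -> float:
    """Energy exchange dE (meV) for a given fractional parallel momentum exchange dK/K_i."""
    ratio = (math.sin(math.radians(theta_i_deg)) / math.sin(math.radians(theta_f_deg))) ** 2
    return E_i_meV * (ratio * (1.0 + dK_over_Ki) ** 2 - 1.0)

# Example: 20 meV beam, incidence 45 deg, detector at 50 deg from the surface normal.
for frac in (-0.2, -0.1, 0.0, 0.1, 0.2):
    dE = scan_curve_dE(20.0, 45.0, 50.0, frac)
    label = "phonon creation (energy loss)" if dE < 0 else "phonon annihilation (energy gain)"
    print(f"dK/K_i = {frac:+.1f} -> dE = {dE:+6.2f} meV  ({label})")
```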
https://en.wikipedia.org/wiki/Helium_atom_scattering
The helium dimer is a van der Waals molecule with formula He 2 consisting of two helium atoms . [ 2 ] This chemical is the largest diatomic molecule —a molecule consisting of two atoms bonded together. The bond that holds this dimer together is so weak that it will break if the molecule rotates, or vibrates too much. It can only exist at very low cryogenic temperatures. Two excited helium atoms can also bond to each other in a form called an excimer . This was discovered from a spectrum of helium that contained bands first seen in 1912. Written as He 2 * with the * meaning an excited state, it is the first known Rydberg molecule . [ 3 ] Several dihelium ions also exist, having net charges of negative one, positive one, and positive two. Two helium atoms can be confined together without bonding in the cage of a fullerene . Based on molecular orbital theory , He 2 should not exist, and a chemical bond cannot form between the atoms. However, the van der Waals force exists between helium atoms as shown by the existence of liquid helium , and at a certain range of distances between atoms the attraction exceeds the repulsion. So a molecule composed of two helium atoms bound by the van der Waals force can exist. [ 4 ] The existence of this molecule was proposed as early as 1937. [ 5 ] He 2 is the largest known molecule of two atoms when in its ground state , due to its extremely long bond length. [ 4 ] The He 2 molecule has a large separation distance between the atoms of about 5,200 picometres (52 Å ). This is the largest for a diatomic molecule without rovibronic excitation. The binding energy is only about 1.3 mK, 10 −7 eV [ 6 ] [ 7 ] [ 8 ] or 1.1×10 −5 kcal/mol. [ 9 ] Both helium atoms in the dimer can be ionized by a single photon with energy 63.86 eV. The proposed mechanism for this double ionization is that the photon ejects an electron from one atom, and then that electron hits the other helium atom and ionizes that as well. [ 10 ] The dimer then explodes as two helium cations repel each other, moving with the same speed but in opposite directions. [ 10 ] A dihelium molecule bound by Van der Waals forces was first proposed by John Clarke Slater in 1928. [ 11 ] The helium dimer can be formed in small amounts when helium gas expands and cools as it passes through a nozzle in a gas beam. [ 2 ] Only the isotope 4 He can form molecules like this; 4 He 3 He and 3 He 3 He do not exist, as they do not have a stable bound state . [ 6 ] The amount of the dimer formed in the gas beam is of the order of one percent. [ 10 ] He 2 + is a related ion bonded by a half covalent bond . It can be formed in a helium electrical discharge. It recombines with electrons to form an electronically excited He 2 ( a 3 Σ + u ) excimer molecule. [ 12 ] Both of these molecules are much smaller with more normally sized interatomic distances. He 2 + reacts with N 2 , Ar , Xe , O 2 , and CO 2 to form cations and neutral helium atoms. [ 13 ] The helium dication dimer He 2 2+ releases a large amount energy when it dissociates, around 835 kJ/mol. [ 14 ] However, an energy barrier of 138.91 kJ/mol prevents immediate decay. This ion was studied theoretically by Linus Pauling in 1933. [ 15 ] This ion is isoelectronic with the hydrogen molecule. [ 16 ] [ 17 ] He 2 2+ is the smallest possible molecule with a double positive charge. It is detectable using mass spectroscopy. [ 14 ] [ 18 ] The negative helium dimer He 2 − is metastable and was discovered by Bae, Coggiola and Peterson in 1984 by passing He 2 + through caesium vapor. 
[ 19 ] Subsequently, H. H. Michels theoretically confirmed its existence and concluded that the 4 Π g state of He 2 − is bound relative to the a 2 Σ + u state of He 2 . [ 20 ] The calculated electron affinity is 0.233 eV compared to 0.077 eV for the He − [ 4 P ∘ ] ion. The He 2 − decays through the long-lived 5/2g component with τ~350 μsec and the much shorter-lived 3/2g, 1/2g components with τ~10 μsec. The 4 Π g state has a 1σ 2 g 1σ u 2σ g 2π u electronic configuration, its electron affinity E is 0.18±0.03 eV, and its lifetime is 135±15 μsec; only the v=0 vibrational state is responsible for this long-lived state. [ 21 ] The molecular helium anion is also found in liquid helium that has been excited by electrons with an energy level higher than 22 eV. This takes place firstly by penetration of liquid He, taking 1.2 eV, followed by excitation of a He atom electron to the 3 P level, which takes 19.8 eV. The electron can then combine with another helium atom and the excited helium atom to form He 2 − . He 2 − repels helium atoms, and so has a void around it. It will tend to migrate to the surface of liquid helium. [ 22 ] In a normal helium atom, two electrons are found in the 1s orbital. However, if sufficient energy is added, one electron can be elevated to a higher energy level. This high energy electron can become a valence electron, and the electron that remains in the 1s orbital is a core electron. Two excited helium atoms can form a covalent bond, creating a molecule called dihelium that lasts for times from the order of a microsecond up to second or so. [ 3 ] (Excited helium atoms in the 2 3 S state can last for up to an hour, and react like alkali metal atoms. [ 23 ] ) The first clues that dihelium exists were noticed in 1900 when W. Heuse observed a band spectrum in a helium discharge. However, no information about the nature of the spectrum was published. Independently E. Goldstein from Germany and W. E. Curtis from London published details of the spectrum in 1913. [ 24 ] [ 25 ] Curtis was called away to military service in World War I, and the study of the spectrum was continued by Alfred Fowler . Fowler recognised that the double headed bands fell into two sequences analogous to principal and diffuse series in line spectra. [ 26 ] The emission band spectrum shows a number of bands that degrade towards the red, meaning that the lines thin out and the spectrum weakens towards the longer wavelengths. Only one band with a green band head at 5732 Å degrades towards the violet. Other strong band heads are at 6400 (red), 4649, 4626, 4546, 4157.8, 3777, 3677, 3665, 3356.5, and 3348.5 Å. There are also some headless bands and extra lines in the spectrum. [ 24 ] Weak bands are found with heads at 5133 and 5108. [ 26 ] If the valence electron is in a 2s 3s, or 3d orbital, a 1 Σ u state results; if it is in 2p 3p or 4p, a 1 Σ g state results. [ 27 ] The ground state is X 1 Σ g + . [ 28 ] The three lowest triplet states of He 2 have designations a 3 Σ u , b 3 Π g and c 3 Σ g . [ 29 ] The a 3 Σ u state with no vibration ( v =0) has a long metastable lifetime of 18 s, much longer than the lifetime for other states or inert gas excimers. [ 3 ] The explanation is that the a 3 Σ u state has no electron orbital angular momentum, as all the electrons are in S orbitals for the helium state. [ 3 ] The lower lying singlet states of He 2 are A 1 Σ u , B 1 Π g and C 1 Σ g . [ 30 ] The excimer molecules are much smaller and more tightly bound than the van der Waals bonded helium dimer. 
For the A 1 Σ u state the binding energy is around 2.5 eV, with a separation of the atoms of 103.9 pm. The C 1 Σ g state has a binding energy 0.643 eV and the separation between atoms is 109.1 pm. [ 27 ] These two states have a repulsive range of distances with a maximum around 300 pm, where if the excited atoms approach, they have to overcome an energy barrier. [ 27 ] The singlet state A 1 Σ + u is very unstable with a lifetime only nanoseconds long. [ 31 ] The spectrum of the He 2 excimer contains bands due to a great number of lines due to transitions between different rotation rates and vibrational states, combined with different electronic transitions. The lines can be grouped into P, Q and R branches. But the even numbered rotational levels do not have Q branch lines, due to both nuclei being spin 0. Numerous electronic states of the molecule have been studied, including Rydberg states with the number of the shell up to 25. [ 32 ] Helium discharge lamps produce vacuum ultraviolet radiation from helium molecules. When high energy protons hit helium gas it also produces UV emission at around 600 Å by the decay of excited highly vibrating molecules of He 2 in the A 1 Σ u state to the ground state. [ 33 ] The UV radiation from excited helium molecules is used in the pulsed discharge ionization detector (PDHID) which is capable of detecting the contents of mixed gases at levels below parts per billion. [ 34 ] The Hopfield continuum (named after J. J. Hopfield ) is a band of ultraviolet light between 600 and 1000 Å in wavelength formed by photodissociation of helium molecules. [ 33 ] One mechanism for formation of the helium molecules is firstly a helium atom becomes excited with one electron in the 2 1 S orbital. This excited atom meets two other non excited helium atoms in a three body association and reacts to form a A 1 Σ u state molecule with maximum vibration and a helium atom. [ 33 ] Helium molecules in the quintet state 5 Σ + g can be formed by the reaction of two spin polarised helium atoms in He(2 3 S 1 ) states. This molecule has a high energy level of 20 eV. The highest vibration level allowed is v=14. [ 35 ] In liquid helium the excimer forms a solvation bubble. In a 3 d state a He * 2 molecule is surrounded by a bubble 12.7 Å in radius at atmospheric pressure . When pressure is increased to 24 atmospheres the bubble radius shrinks to 10.8 Å. This changing bubble size causes a shift in the fluorescence bands. [ 36 ] In very strong magnetic fields, (around 750,000 Tesla) and low enough temperatures, helium atoms attract, and can even form linear chains. This may happen in white dwarfs and neutron stars. [ 37 ] The bond length and dissociation energy both increase as the magnetic field increases. [ 38 ] The dihelium excimer is an important component in the helium discharge lamp. A second use of dihelium ion is in ambient ionization techniques using low temperature plasma. In this helium atoms are excited, and then combine to yield the dihelium ion. The He 2 + goes on to react with N 2 in the air to make N 2 + . These ions react with a sample surface to make positive ions that are used in mass spectroscopy . The plasma containing the helium dimer can be as low as 30 °C in temperature, and this reduces heat damage to samples. [ 39 ] He 2 has been shown to form van der Waals compounds with other atoms forming bigger clusters such as 24 MgHe 2 and 40 CaHe 2 . 
[ 40 ] The helium-4 trimer ( 4 He 3 ), a cluster of three helium atoms, is predicted to have an excited state which is an Efimov state . [ 41 ] [ 42 ] This has been confirmed experimentally in 2015. [ 43 ] Two helium atoms can fit inside larger fullerenes, including C 70 and C 84 . These can be detected by the nuclear magnetic resonance of 3 He having a small shift, and by mass spectrometry. C 84 with enclosed helium can contain 20% He 2 @C 84 , whereas C 78 has 10% and C 76 has 8%. The larger cavities are more likely to hold more atoms. [ 44 ] Even when the two helium atoms are placed closely to each other in a small cage, there is no chemical bond between them. [ 45 ] [ 46 ] The presence of two He atoms in a C 60 fullerene cage is only predicted to have a small effect on the reactivity of the fullerene. [ 47 ] The effect is to have electrons withdrawn from the endohedral helium atoms, giving them a slight positive partial charge to produce He 2 δ+ , which have a stronger bond than uncharged helium atoms. [ 48 ] However, by the Löwdin definition there is a bond present. [ 49 ] The two helium atoms inside the C 60 cage are separated by 1.979 Å and the distance from a helium atom to the carbon cage is 2.507 Å. The charge transfer gives 0.011 electron charge units to each helium atom. There should be at least 10 vibrational levels for the He-He pair. [ 49 ]
https://en.wikipedia.org/wiki/Helium_dimer
A helium flash is a very brief thermal runaway nuclear fusion of large quantities of helium into carbon through the triple-alpha process in the core of low-mass stars (between 0.8 solar masses ( M ☉ ) and 2.0 M ☉ [ 1 ] ) during their red giant phase. The Sun is predicted to experience a flash 1.2 billion years after it leaves the main sequence . A much rarer runaway helium fusion process can also occur on the surface of accreting white dwarf stars. Low-mass stars do not produce enough gravitational pressure to initiate normal helium fusion. As the hydrogen in the core is exhausted, some of the helium left behind is instead compacted into degenerate matter , supported against gravitational collapse by quantum mechanical pressure rather than thermal pressure . Subsequent hydrogen shell fusion further increases the mass of the core until it reaches a temperature of approximately 100 million kelvin , which is hot enough to initiate helium fusion (or "helium burning") in the core. However, a property of degenerate matter is that increases in temperature do not produce an increase in the pressure of the matter until the thermal pressure becomes so high that it exceeds degeneracy pressure. In main sequence stars, thermal expansion regulates the core temperature, but in degenerate cores, this does not occur. Helium fusion increases the temperature, which increases the fusion rate, which further increases the temperature in a runaway reaction which quickly spans the entire core. This produces a flash of very intense helium fusion that lasts only a few minutes, [ 2 ] but during that time produces energy at a rate comparable to that of the entire Milky Way galaxy. [ 2 ] In the case of normal low-mass stars, the vast energy release causes much of the core to come out of degeneracy, allowing it to thermally expand. This consumes most of the total energy released by the helium flash, [ 2 ] and any left-over energy is absorbed into the star's upper layers. Thus the helium flash is mostly undetectable by observation, and is described solely by astrophysical models. After the core's expansion and cooling, the star's surface rapidly cools and contracts in as little as 10,000 years until it is roughly 2% of its former radius and luminosity. It is estimated that the electron-degenerate helium core weighs about 40% of the star's mass and that 6% of the core is converted into carbon. [ 2 ] Subflashes are pulsational instabilities that occur after the main helium flash. They occur in stars that do not have well-defined convective or radiative boundaries. [ 3 ] Subflashes can last several hours to days and can occur for many years, with each subsequent flash generally being weaker. [ 3 ] Subflashes can be detected by applying Fourier transforms to the light curve data. [ 4 ] During the red giant phase of stellar evolution in stars with less than 2.0 M ☉ , the nuclear fusion of hydrogen ceases in the core as it is depleted, leaving a helium-rich core. Although hydrogen fusion continues in the star's shell, causing helium to keep accumulating in the core and making it denser, the temperature cannot reach the level required for helium fusion, as it would in more massive stars. Thus the thermal pressure from fusion is no longer sufficient to counter gravitational collapse and create the hydrostatic equilibrium found in most stars.
This causes the star to start contracting and increasing in temperature until it eventually becomes compressed enough for the helium core to become degenerate matter . This degeneracy pressure is finally sufficient to stop further collapse of the most central material but the rest of the core continues to contract and the temperature continues to rise until it reaches a point ( ≈1 × 10 8 K ) at which the helium can ignite and start to fuse. [ 6 ] [ 7 ] [ 8 ] The explosive nature of the helium flash arises from its taking place in degenerate matter. Once the temperature reaches 100 million–200 million kelvin and helium fusion begins using the triple-alpha process , the temperature rapidly increases, further raising the helium fusion rate and, because degenerate matter is a good conductor of heat , widening the reaction region. However, since degeneracy pressure (which is purely a function of density) is dominating thermal pressure (proportional to the product of density and temperature), the total pressure is only weakly dependent on temperature. Thus, the dramatic increase in temperature only causes a slight increase in pressure, so there is no stabilizing cooling expansion of the core. This runaway reaction quickly climbs to about 100 billion times the star's normal energy production (for a few seconds) until the temperature increases to the point that thermal pressure again becomes dominant, eliminating the degeneracy. The core can then expand and cool down and a stable burning of helium will continue. [ 9 ] A star with mass greater than about 2.25 M ☉ starts to burn helium without its core becoming degenerate, and so does not exhibit this type of helium flash. In a very low-mass star (less than about 0.5 M ☉ ), the core is never hot enough to ignite helium. The degenerate helium core will keep on contracting, and finally becomes a helium white dwarf . The helium flash is not directly observable on the surface by electromagnetic radiation. The flash occurs in the core deep inside the star, and the net effect will be that all released energy is absorbed by the entire core, causing the degenerate state to become nondegenerate. Earlier computations indicated that a nondisruptive mass loss would be possible in some cases, [ 10 ] but later star modeling taking neutrino energy loss into account indicates no such mass loss. [ 11 ] [ 12 ] In a one solar mass star, the helium flash is estimated to release about 5 × 10 41 J , [ 13 ] or about 0.3% of the energy release of a 1.5 × 10 44 J type Ia supernova , [ 14 ] which is triggered by an analogous ignition of carbon fusion in a carbon–oxygen white dwarf. When hydrogen gas is accreted onto a white dwarf from a binary companion star, the hydrogen can fuse to form helium for a narrow range of accretion rates, but most systems develop a layer of hydrogen over the degenerate white dwarf interior. This hydrogen can build up to form a shell near the surface of the star. When the mass of hydrogen becomes sufficiently large, runaway fusion causes a nova . In a few binary systems where the hydrogen fuses on the surface, the mass of helium built up can burn in an unstable helium flash. In certain binary systems the companion star may have lost most of its hydrogen and donate helium-rich material to the compact star. Note that similar flashes occur on neutron stars. [ citation needed ] Helium shell flashes are a somewhat analogous but much less violent, nonrunaway helium ignition event, taking place in the absence of degenerate matter. 
They occur periodically in asymptotic giant branch stars in a shell outside the core. This is late in the life of a star in its giant phase. The star has burnt most of the helium available in the core, which is now composed of carbon and oxygen. Helium fusion continues in a thin shell around this core, but then turns off as helium becomes depleted. This allows hydrogen fusion to start in a layer above the helium layer. After enough additional helium accumulates, helium fusion is reignited, leading to a thermal pulse which eventually causes the star to expand and brighten temporarily (the pulse in luminosity is delayed because it takes a number of years for the energy from restarted helium fusion to reach the surface [ 15 ] ). Such pulses may last a few hundred years, and are thought to occur periodically every 10,000 to 100,000 years. [ 15 ] After the flash, helium fusion continues at an exponentially decaying rate for about 40% of the cycle as the helium shell is consumed. [ 15 ] Thermal pulses may cause a star to shed circumstellar shells of gas and dust. [ citation needed ]
https://en.wikipedia.org/wiki/Helium_flash
A helium mass spectrometer is an instrument commonly used to detect and locate small leaks. It was initially developed in the Manhattan Project during World War II to find extremely small leaks in the gas diffusion process of uranium enrichment plants . [ 1 ] It typically uses a vacuum chamber in which a sealed container filled with helium is placed. Helium leaks out of the container, and the rate of the leak is detected by a mass spectrometer . Helium is used as a tracer because it penetrates small leaks rapidly. Helium also has the properties of being non-toxic, chemically inert, and present in the atmosphere only in minute quantities (5 ppm ). Typically a helium leak detector will be used to measure leaks in the range of 10⁻⁵ to 10⁻¹² Pa·m³·s⁻¹ . A flow of 10⁻⁵ Pa·m³·s⁻¹ is about 0.006 ml per minute at standard conditions for temperature and pressure (STP). A flow of 10⁻¹³ Pa·m³·s⁻¹ is about 0.003 ml per century at STP. Two types of leaks are typically distinguished when helium is used as a tracer for leak detection: residual leaks and virtual leaks. A residual leak is a real leak due to an imperfect seal, a puncture, or some other hole in the system. A virtual leak is the semblance of a leak in a vacuum system caused by outgassing of chemicals trapped or adhered to the interior of a system that is actually sealed. As the gases are released into the chamber, they can create a false positive indication of a residual leak in the system. Helium mass spectrometer leak detectors are used in production line industries such as refrigeration and air conditioning , automotive parts, carbonated beverage containers, food packages, and aerosol packaging, as well as in the manufacture of steam products, gas bottles , fire extinguishers , tire valves, and numerous other products including all vacuum systems. This method requires the part to be tested to be connected to a helium leak detector. The outer surface of the part to be tested is enclosed in some kind of tent in which the helium concentration is raised to 100%. If the part is small, the vacuum system included in the leak testing instrument will be able to reach low enough pressure to allow for mass spectrometer operation. If the size of the part is too large, an additional vacuum pumping system may be required to reach low enough pressure in a reasonable length of time. Once operating pressure has been reached, the mass spectrometer can start its measuring operation. If leakage is encountered, the small and "agile" molecules of helium will migrate through the cracks into the part. The vacuum system will carry any tracer gas molecule into the analyzer cell of the magnetic sector mass spectrometer. A signal will inform the operator of the value of the leakage encountered. This method is a small variation from the one above. It still requires the part to be tested to be connected to a helium leak detector. The outer surface of the part to be tested is sprayed with a localized stream of helium tracer gas. If the part is small, the vacuum system included in the instrument will be able to reach low enough pressure to allow for mass spectrometer operation. If the size of the part is too large, an additional pumping system may be required to reach low enough pressure in a reasonable length of time. Once operating pressure has been reached, the mass spectrometer can start its measuring operation. If leakage is encountered, the small and "agile" molecules of helium will migrate through the cracks into the part.
The vacuum system will carry any tracer gas molecule into the analyzer cell of the magnetic sector mass spectrometer. A signal will inform the operator of the value of the leakage encountered. Thus, correlating the maximum leakage signal with the position of the helium spray head allows the operator to pinpoint the leaky area. In this case the part is pressurized with helium (sometimes this test is combined with a burst test, e.g. at 40 bar) while sitting in a vacuum chamber. The vacuum chamber is connected to a vacuum pumping system and a leak detector. Once the vacuum has reached the mass spectrometer operating pressure, any helium leakage will be measured. This test method applies to many components that will operate under pressure: airbag canisters, evaporators, condensers, and high-voltage SF₆-filled switchgear. In contrast to the helium-charged sniffer test, the ultra sniffer test gas method (UST method), a partial vacuum method, uses the partial vacuum effect, so that the gas tightness of a test sample can be measured at normal pressure with the same sensitivity as the helium-charged vacuum test. The method has a sensitivity of 10⁻¹² Pa·m³·s⁻¹ . Similar to the classical helium-charged sniffer test, the test sample is enclosed in a bag, but in contrast to the classic method, the bag is flushed with a helium-free gas, so that the helium background inside the bag can be reduced, from 5·10⁻⁷ to 10⁻¹² Pa·m³·s⁻¹ . This sensitivity corresponds to a theoretical gas loss of 1 cm³ in 3000 years. [ 2 ] The UST method can be used very economically for the ad hoc testing of test samples. The test system can be set up easily, with normal pneumatic items, such as valves and plastic hoses. For the embedding of the test samples, a simple plastic bag is sufficient. The UST method was also used for the leak testing of components of the fusion experiment Wendelstein 7-X in Germany. This method applies to objects that are supposedly sealed. First the device under test will be exposed for an extended length of time to a high helium pressure in a "bombing" chamber. If the part is leaky, helium will be able to penetrate the device. Later the device will be placed in a vacuum chamber, connected to a vacuum pump and a mass spectrometer. The tiny amount of gas that entered the device under pressure will be released in the vacuum chamber and sent to the mass spectrometer, where the leak rate will be measured. This test method applies to implantable medical devices, crystal oscillators, and SAW filter devices. This method is not able to detect a massive leak, as the tracer gas will be quickly pumped out when the test chamber is pumped down. In this last case the part is pressurized with helium. The mass spectrometer is fitted with a special device, a sniffer probe, that allows it to sample air (and tracer gas when confronted with a leak) at atmospheric pressure and to bring it into the mass spectrometer. This mode of operation is frequently used to locate a leak that has been detected by other methods, in order to allow for parts repair. Modern machines can digitally subtract the helium background to two decades below the ambient level, and thus it is now possible to detect leaks as small as 5·10⁻¹⁰ Pa·m³·s⁻¹ in sniffing mode.
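The leak-rate figures quoted above can be checked with a short calculation. The sketch below is only an illustration, not part of the original text: it assumes ideal-gas behaviour and a standard pressure of 101325 Pa, and converts a throughput given in Pa·m³·s⁻¹ into a leaked volume at STP.

```python
# Convert helium leak rates from Pa·m^3/s to everyday leaked volumes at STP.
# Assumption: ideal-gas behaviour and a standard pressure of 101325 Pa (1 atm).

P_STP = 101325.0                # Pa
ML_PER_M3 = 1e6                 # millilitres per cubic metre
SECONDS_PER_MINUTE = 60.0
SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600

def leaked_volume_ml(q_pa_m3_per_s: float, seconds: float) -> float:
    """Volume (ml at STP) leaked over the given time for a throughput q in Pa·m^3/s."""
    volume_m3 = q_pa_m3_per_s * seconds / P_STP
    return volume_m3 * ML_PER_M3

# The two figures quoted in the text:
print(leaked_volume_ml(1e-5, SECONDS_PER_MINUTE))    # ~0.006 ml per minute
print(leaked_volume_ml(1e-13, SECONDS_PER_CENTURY))  # ~0.003 ml per century
```

Running this reproduces the "0.006 ml per minute" and "0.003 ml per century" figures to within rounding.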
https://en.wikipedia.org/wiki/Helium_mass_spectrometer
The helium trimer (or trihelium ) is a weakly bound molecule consisting of three helium atoms. Van der Waals forces link the atoms together. The combination of three atoms is much more stable than the two-atom helium dimer . The three-atom combination of helium-4 atoms is an Efimov state . [ 1 ] [ 2 ] Helium-3 is predicted to form a trimer, although ground state dimers containing helium-3 are completely unstable. [ 3 ] Helium trimer molecules have been produced by expanding cold helium gas from a nozzle into a vacuum chamber. Such a setup also produces the helium dimer and other helium atom clusters. The existence of the molecule was proven by matter wave diffraction through a diffraction grating . [ 4 ] Properties of the molecules can be discovered by Coulomb explosion imaging. [ 4 ] In this process, a laser ionizes all three atoms simultaneously, which then fly away from each other due to electrostatic repulsion and are detected. The helium trimer is large, measuring more than 100 Å across, even larger than the helium dimer. The atoms are not arranged in an equilateral triangle , but instead form randomly shaped triangles. [ 5 ] Interatomic Coulombic decay can occur when one atom is ionised and excited. It can transfer energy to another atom in the trimer, even though they are separated. However, this is much more likely to occur when the atoms are close together, and so the interatomic distances measured in this way vary, with the distribution's width at half maximum extending from 3.3 to 12 Å. The predicted mean distance for interatomic Coulombic decay in ⁴He₃ is 10.4 Å. For ³He⁴He₂ this distance is even larger, at 20.5 Å. [ 6 ]
https://en.wikipedia.org/wiki/Helium_trimer
Helixmith Co. LTD. is a biotechnology company located in Seoul , South Korea, with a US presence in San Diego . The company has an extensive gene therapy pipeline, including a non-viral plasmid DNA program for neuromuscular and ischemic disease, a CAR-T program targeting several different types of solid tumors, and an AAV vector program targeting neuromuscular diseases . Helixmith’s lead gene therapy is Engensis (VM202), currently in a phase III trial for diabetic peripheral neuropathy (DPN) in the US. Engensis (VM202) is a plasmid DNA designed to simultaneously express two isoforms of hepatocyte growth factor (HGF), HGF 728 and HGF 723. In addition to DPN, Engensis is also being studied in diabetic foot ulcers (DFU), amyotrophic lateral sclerosis (ALS), coronary artery disease (CAD), claudication , and Charcot-Marie-Tooth disease (CMT). Helixmith Co. LTD. (previously ViroMed) was established in 1996 as the first on-campus startup at Seoul National University and was renamed ViroMed in 1999. The company has been listed on the Korean Securities Dealers Automated Quotations (KOSDAQ: 084990) since 2006. In April 2019, the company was renamed Helixmith and moved its headquarters from its previous research facility in Seoul National University to Magok, Seoul. Helixmith’s main business area is gene therapy development. Helixmith’s lead gene therapy product is Engensis (VM202), a non-viral plasmid DNA that encodes the therapeutic gene for hepatocyte growth factor (HGF). Engensis is being developed for diabetic peripheral neuropathy (DPN, phase 3) in the US. The product is also being studied for diabetic foot ulcers (DFU, phase 3), amyotrophic lateral sclerosis (ALS, phase 2), coronary artery disease (CAD, phase 2), claudication (phase 2) and Charcot-Marie-Tooth disease (CMT, phase 1/2a). Helixmith’s pipeline extends to CAR-T cell therapy and AAV gene therapy. In CAR-T cell therapy, the company aims at eradicating various solid tumors. The CAR-T program is in the pre-clinical stage through a separate subsidiary called Cartexell. In AAV gene therapy, the company has a number of early stage products targeting neuromuscular diseases such as ALS and multiple sclerosis . Helixmith also has an antibody pipeline including VM507, an antibody that can detect and activate c-MET, the receptor of hepatocyte growth factor (HGF). Helixmith is also developing phytotherapeutics based on natural plant extracts with therapeutic potential. The company has experience in areas including natural medicine, health functional foods, and cosmetic products using botanical sources. Helixmith’s non-viral plasmid DNA product, Engensis, is designed to express recombinant HGF protein in nerve and Schwann cells to promote nervous system regeneration and induce the formation of microvascular blood vessels. HGF has a short half-life (5 minutes or less) and is quickly removed from the body by the liver, creating an obstacle to effective treatment with previous injectable recombinant HGF protein products. A single injection of Helixmith’s proprietary plasmid DNA product expresses the HGF gene at levels 30-40 times higher than conventional plasmid DNA and provides sustained gene expression in mouse models for 2 weeks, with peak protein expression at Day 7 and a gradual decrease over the next week. To date, more than 500 patients have been treated with Engensis across ten clinical trials in six different diseases and conditions.
Data from previous clinical studies suggest that Engensis is well tolerated and has the potential to provide durable analgesic and/or symptomatic relief in a variety of disease settings. Beyond potentially alleviating pain, Engensis is designed to target the underlying causes of neuropathy through its predicted angiogenic and neuroregenerative properties. The US FDA recognized the potential for Engensis to meet the unmet need for this condition in 2018 by designating it as a Regenerative Medicine Advanced Therapy (RMAT), making it the first RMAT-designated gene therapy for a prevalent disease with over one million patients. This designation grants Engensis all the benefits afforded by the fast track and breakthrough designations, including priority review. Helixmith currently has multiple target indications under the Engensis pipeline: diabetic peripheral neuropathy (DPN, phase 3), diabetic foot ulcers (DFU, phase 3), amyotrophic lateral sclerosis (ALS, phase 2), coronary artery disease (CAD, phase 2), claudication (phase 2) and Charcot-Marie-Tooth disease (CMT, phase 1/2a). The US FDA granted an RMAT (Regenerative Medicine Advanced Therapy) designation to VM202-DPN in 2018. This was the first RMAT designation for a drug product based in Korea, and the first and only RMAT designation worldwide in the pain area. Engensis has attracted considerable attention in painful DPN because of the large market size. The US FDA granted orphan drug and fast track designation for Engensis (VM202-ALS) in 2016. Engensis (VM202) is currently under development as a possible treatment for chronic DFU, with the aim of healing the ulcer by supplying sufficient blood through new blood vessel formation around occluded or narrowed vessels in the lower extremities. VM507, Helixmith’s leading antibody treatment, is an antibody that can detect and activate c-MET (the HGF receptor). An antibody is an immune protein that binds to an antigen to inhibit its activity or stimulate neutralization or activation. Although antibodies are originally generated by the immune system, they can be mass-produced and purified as monoclonal antibodies with selectivity and specificity for particular antigens. VM507 is an antibody that can detect and activate c-MET, the receptor of hepatocyte growth factor (HGF). As a fully human antibody, it is expected to be immunologically safe, can be delivered by intravenous injection or by local injection to various tissues and organs, and its longer half-life may contribute to improved efficacy. The c-MET level is especially high in patients with chronic/acute renal disease . VM507 showed therapeutic effects, such as inhibition of renal fibrosis and improvement of functional indices, by binding the c-MET receptor in renal tissue when injected intravenously in a mouse model of renal disease. According to its website, the company is involved in several ongoing clinical trials. Helixmith currently has two target indications under its phytotherapeutics pipeline: PG201 (osteoarthritis) and HX204 (inflammatory bowel disease). PG201 is a prescription drug for osteoarthritis and, in 2012, became the seventh botanical drug ever approved by the MFDS (Ministry of Food and Drug Safety). It is sold under the brand name “LAYLA Tab” and has been generating a domestic annual revenue of 20 billion KRW since it was licensed out to PMG Pharma.
PG201 showed significant improvement in various animal models of osteoarthritis and rheumatoid arthritis . In addition, it has been found that it can prevent cartilage destruction by regulating the expression of cartilage-degrading enzymes, unlike conventional anti-inflammatory analgesic drugs such as NSAIDs. PG201 demonstrated safety and efficacy in patients with osteoarthritis in phase II and phase III clinical trials. HX204 is currently under pre-clinical development and is expected to enter the clinical phase in 2022. [ citation needed ]
https://en.wikipedia.org/wiki/Helixmith
Helix–coil transition models are formalized techniques in statistical mechanics developed to describe conformations of linear polymers in solution. The models are usually but not exclusively applied to polypeptides as a measure of the relative fraction of the molecule in an alpha helix conformation versus turn or random coil . The main attraction in investigating alpha helix formation is that one encounters many of the features of protein folding but in their simplest version. [ 1 ] [ 2 ] Most of the helix–coil models contain parameters for the likelihood of helix nucleation from a coil region, and helix propagation along the sequence once nucleated; because polypeptides are directional and have distinct N-terminal and C-terminal ends, propagation parameters may differ in each direction. The two states are helix and coil. Common transition models include the Zimm–Bragg model and the Lifson–Roig model , and their extensions and variations. The energy of a host poly-alanine helix in aqueous solution can be expressed as a function of m , the number of residues in the helix. [ 3 ]
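As a concrete illustration of how such two-state models are evaluated in practice, the sketch below computes the mean helix fraction of a homopolymer with the Zimm–Bragg transfer matrix. It is a minimal sketch under assumed, purely illustrative values of the nucleation parameter σ, the propagation parameter s, and the chain length N; none of these numbers come from the article.

```python
import numpy as np

def zimm_bragg_partition(n_residues: int, s: float, sigma: float) -> float:
    """Partition function of an N-residue homopolymer in the Zimm–Bragg model.
    Each residue is helix (h) or coil (c): an h following an h has weight s,
    an h following a c has weight sigma*s (nucleation penalty), and a c has weight 1."""
    M = np.array([[s, 1.0],
                  [sigma * s, 1.0]])     # rows: previous state (h, c); columns: current state
    start = np.array([0.0, 1.0])         # the chain "starts" from a coil-like boundary
    end = np.array([1.0, 1.0])           # sum over the state of the last residue
    return float(start @ np.linalg.matrix_power(M, n_residues) @ end)

def helix_fraction(n_residues: int, s: float, sigma: float, eps: float = 1e-6) -> float:
    """Mean helix fraction theta = (1/N) d ln Z / d ln s, via a central finite difference."""
    lnz_hi = np.log(zimm_bragg_partition(n_residues, s * (1 + eps), sigma))
    lnz_lo = np.log(zimm_bragg_partition(n_residues, s * (1 - eps), sigma))
    return (lnz_hi - lnz_lo) / (2 * eps) / n_residues

# Illustrative values: a small sigma gives a cooperative (sharp) transition around s = 1.
for s in (0.8, 1.0, 1.2):
    print(s, helix_fraction(50, s, sigma=1e-3))
```

The derivative with respect to ln s counts the average number of residues carrying an s-weight, i.e. the average number of helical residues, which is why the finite difference of ln Z gives the helix fraction.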
https://en.wikipedia.org/wiki/Helix–coil_transition_model
The Hellenic Aeronautical Engineers Society (HAES) (Greek: Σύλλογος Ελλήνων Αεροναυπηγών) is the society of professionally licensed aeronautical engineers in Greece . The purpose of HAES is to provide a forum where Greek-licensed aeronautical engineers can fraternize and coordinate scientific and professional efforts to assist the state and to support, develop, and promote aviation and space activities. HAES was first registered in 1975 as a noncommercial society and is a branch organization of the Hellenic Technical (Engineering) Chambers (Τεχνικό Επιμελητήριο Ελλάδας) in Greece . The Society is a member and the national representative of the International Council of the Aeronautical Sciences (ICAS), the Council of European Aerospace Societies (CEAS), and the European Federation of National Engineering Associations (FEANI). The society numbers approximately 250 members, almost all having university degrees in aeronautical engineering from countries outside Greece (mostly the United Kingdom , Italy , Germany , France , and the United States ), because such academic programs were practically non-existent in Greece until recently. The main requirement for membership is to hold a professional license in aeronautical engineering from the Hellenic Technical Chambers in Greece and to be in good standing with the chamber.
https://en.wikipedia.org/wiki/Hellenic_Aeronautical_Engineers_Society
Heller's test is a chemical test that shows that strong acids cause the denaturation of precipitated proteins. Concentrated nitric acid is added to a protein solution from the side of the test tube to form two layers. A white ring appears between the two layers if the test is positive. [ 1 ] Heller's test is commonly used to test for the presence of proteins in urine . [ 2 ] This test was discovered by the Austrian chemist Johann Florian Heller (1813–1871).
https://en.wikipedia.org/wiki/Heller's_test
In quantum mechanics , the Hellmann–Feynman theorem relates the derivative of the total energy with respect to a parameter to the expectation value of the derivative of the Hamiltonian with respect to that same parameter. According to the theorem, once the spatial distribution of the electrons has been determined by solving the Schrödinger equation , all the forces in the system can be calculated using classical electrostatics . The theorem has been proven independently by many authors, including Paul Güttinger (1932), [ 1 ] Wolfgang Pauli (1933), [ 2 ] Hans Hellmann (1937) [ 3 ] and Richard Feynman (1939). [ 4 ] The theorem states that d E λ d λ = ⟨ ψ λ | ∂ H ^ λ ∂ λ | ψ λ ⟩ {\displaystyle {\frac {\mathrm {d} E_{\lambda }}{\mathrm {d} \lambda }}=\left\langle \psi _{\lambda }\right|{\frac {\partial {\hat {H}}_{\lambda }}{\partial \lambda }}\left|\psi _{\lambda }\right\rangle } , where H ^ λ {\displaystyle {\hat {H}}_{\lambda }} is a Hamiltonian operator depending on a continuous parameter λ {\displaystyle \lambda } , ψ λ {\displaystyle \psi _{\lambda }} is a normalized eigenstate of that Hamiltonian, and E λ {\displaystyle E_{\lambda }} is the corresponding energy eigenvalue. Note that there is a breakdown of the Hellmann–Feynman theorem close to quantum critical points in the thermodynamic limit. [ 5 ] This proof of the Hellmann–Feynman theorem requires that the wave function be an eigenfunction of the Hamiltonian under consideration; however, it is also possible to prove more generally that the theorem holds for non-eigenfunction wave functions which are stationary (partial derivative is zero) for all relevant variables (such as orbital rotations). The Hartree–Fock wavefunction is an important example of an approximate eigenfunction that still satisfies the Hellmann–Feynman theorem. A notable example where the Hellmann–Feynman theorem is not applicable is finite-order Møller–Plesset perturbation theory , which is not variational. [ 6 ] The proof also employs an identity of normalized wavefunctions – that derivatives of the overlap of a wave function with itself must be zero. Using Dirac's bra–ket notation these two conditions are written as ⟨ ψ λ | ψ λ ⟩ = 1 {\displaystyle \langle \psi _{\lambda }|\psi _{\lambda }\rangle =1} and ∂ ∂ λ ⟨ ψ λ | ψ λ ⟩ = 0 {\displaystyle {\frac {\partial }{\partial \lambda }}\langle \psi _{\lambda }|\psi _{\lambda }\rangle =0} . The proof then follows through an application of the derivative product rule to the expectation value of the Hamiltonian viewed as a function of λ {\displaystyle \lambda } : d E λ / d λ = ⟨ d ψ λ / d λ | H ^ λ | ψ λ ⟩ + ⟨ ψ λ | H ^ λ | d ψ λ / d λ ⟩ + ⟨ ψ λ | ∂ H ^ λ / ∂ λ | ψ λ ⟩ = E λ ∂ ∂ λ ⟨ ψ λ | ψ λ ⟩ + ⟨ ψ λ | ∂ H ^ λ / ∂ λ | ψ λ ⟩ = ⟨ ψ λ | ∂ H ^ λ / ∂ λ | ψ λ ⟩ . The Hellmann–Feynman theorem is actually a direct, and to some extent trivial, consequence of the variational principle (the Rayleigh–Ritz variational principle ) from which the Schrödinger equation may be derived. This is why the Hellmann–Feynman theorem holds for wave-functions (such as the Hartree–Fock wave-function) that, though not eigenfunctions of the Hamiltonian, do derive from a variational principle. This is also why it holds, e.g., in density functional theory , e.g. in the adiabatic connection fluctuation dissipation theorem , which is not wave-function based and for which the standard derivation does not apply. According to the Rayleigh–Ritz variational principle, the eigenfunctions of the Schrödinger equation are stationary points of the functional (which is nicknamed Schrödinger functional for brevity): The eigenvalues are the values that the Schrödinger functional takes at the stationary points: where ψ λ {\displaystyle \psi _{\lambda }} satisfies the variational condition: By differentiating Eq. (3) using the chain rule , the following equation is obtained: Due to the variational condition, Eq. (4), the second term in Eq. (5) vanishes. In one sentence, the Hellmann–Feynman theorem states that the derivative of the stationary values of a function(al) with respect to a parameter on which it may depend, can be computed from the explicit dependence only, disregarding the implicit one . [ citation needed ] Because the Schrödinger functional can only depend explicitly on an external parameter through the Hamiltonian, Eq. (1) trivially follows. The most common application of the Hellmann–Feynman theorem is the calculation of intramolecular forces in molecules.
This allows for the calculation of equilibrium geometries – the nuclear coordinates where the forces acting upon the nuclei, due to the electrons and other nuclei, vanish. The parameter λ {\displaystyle \lambda } corresponds to the coordinates of the nuclei. For a molecule with 1 ≤ i ≤ N {\displaystyle 1\leq i\leq N} electrons with coordinates { r i } {\displaystyle \{\mathbf {r} _{i}\}} , and 1 ≤ α ≤ M {\displaystyle 1\leq \alpha \leq M} nuclei, each located at a specified point { R α = { X α , Y α , Z α } } {\displaystyle \{\mathbf {R} _{\alpha }=\{X_{\alpha },Y_{\alpha },Z_{\alpha }\}\}} and with nuclear charge Z α {\displaystyle Z_{\alpha }} , the clamped nucleus Hamiltonian is The x {\displaystyle x} -component of the force acting on a given nucleus is equal to the negative of the derivative of the total energy with respect to that coordinate. Employing the Hellmann–Feynman theorem this is equal to Only two components of the Hamiltonian contribute to the required derivative – the electron-nucleus and nucleus-nucleus terms. Differentiating the Hamiltonian yields [ 7 ] Insertion of this in to the Hellmann–Feynman theorem returns the x {\displaystyle x} -component of the force on the given nucleus in terms of the electronic density ρ ( r ) {\displaystyle \rho (\mathbf {r} )} and the atomic coordinates and nuclear charges: A comprehensive survey of similar applications of the Hellmann-Feynman theorem in quantum chemistry is given in B. M. Deb (ed.) The Force Concept in Chemistry , Van Nostrand Rheinhold, 1981. An alternative approach for applying the Hellmann–Feynman theorem is to promote a fixed or discrete parameter which appears in a Hamiltonian to be a continuous variable solely for the mathematical purpose of taking a derivative. Possible parameters are physical constants or discrete quantum numbers. As an example, the radial Schrödinger equation for a hydrogen-like atom is which depends upon the discrete azimuthal quantum number l {\displaystyle l} . Promoting l {\displaystyle l} to be a continuous parameter allows for the derivative of the Hamiltonian to be taken: The Hellmann–Feynman theorem then allows for the determination of the expectation value of 1 r 2 {\displaystyle {\frac {1}{r^{2}}}} for hydrogen-like atoms: [ 8 ] In order to compute the energy derivative, the way n {\displaystyle n} depends on l {\displaystyle l} has to be known. These quantum numbers are usually independent, but here the solutions must be varied so as to keep the number of nodes in the wavefunction fixed. The number of nodes is n − l − 1 {\displaystyle n-l-1} , so ∂ n / ∂ l = 1 {\displaystyle \partial n/\partial l=1} . In the end of Feynman's paper, he states that, " Van der Waals' forces can also be interpreted as arising from charge distributions with higher concentration between the nuclei. The Schrödinger perturbation theory for two interacting atoms at a separation R {\displaystyle R} , large compared to the radii of the atoms, leads to the result that the charge distribution of each is distorted from central symmetry, a dipole moment of order 1 / R 7 {\displaystyle 1/R^{7}} being induced in each atom. The negative charge distribution of each atom has its center of gravity moved slightly toward the other. It is not the interaction of these dipoles which leads to van der Waals's force, but rather the attraction of each nucleus for the distorted charge distribution of its own electrons that gives the attractive 1 / R 7 {\displaystyle 1/R^{7}} force." 
For a general time-dependent wavefunction satisfying the time-dependent Schrödinger equation , the Hellmann–Feynman theorem is not valid. However, the following identity holds: [ 9 ] [ 10 ] ⟨ Ψ λ ( t ) | ∂ H λ ∂ λ | Ψ λ ( t ) ⟩ = i ℏ ∂ ∂ t ⟨ Ψ λ ( t ) | ∂ Ψ λ ( t ) ∂ λ ⟩ , for Ψ λ ( t ) satisfying i ℏ ∂ Ψ λ ( t ) ∂ t = H λ Ψ λ ( t ) . The proof only relies on the Schrödinger equation and the assumption that partial derivatives with respect to λ and t can be interchanged.
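The time-independent theorem is easy to verify numerically on a finite-dimensional model. The sketch below is only an illustration of the statement, not anything taken from the article: it builds a parametrized Hermitian matrix H(λ) = H0 + λV and compares a finite-difference derivative of the ground-state energy with the expectation value ⟨ψ_λ| ∂H/∂λ |ψ_λ⟩ = ⟨ψ_λ| V |ψ_λ⟩.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n: int) -> np.ndarray:
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2

H0 = random_hermitian(6)
V = random_hermitian(6)                  # dH/dlambda is simply V for H(lambda) = H0 + lambda*V

def ground_state(lam: float):
    """Ground-state energy and eigenvector of H(lambda) = H0 + lambda*V."""
    vals, vecs = np.linalg.eigh(H0 + lam * V)
    return vals[0], vecs[:, 0]

lam, h = 0.3, 1e-5
e_plus, _ = ground_state(lam + h)
e_minus, _ = ground_state(lam - h)
dE_numeric = (e_plus - e_minus) / (2 * h)            # finite-difference dE/dlambda

_, psi = ground_state(lam)
dE_hellmann_feynman = np.real(psi.conj() @ V @ psi)  # <psi| dH/dlambda |psi>

print(dE_numeric, dE_hellmann_feynman)               # the two values agree closely
```

Because the eigenvector is stationary, the implicit dependence of the energy on λ through ψ_λ drops out, which is exactly what the comparison demonstrates.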
https://en.wikipedia.org/wiki/Hellmann–Feynman_theorem
Hellmut Friedrich Fischmeister (14 May 1927 – 6 November 2019) was an Austrian metallurgist who was a pioneer in powder metallurgy . [ 1 ] [ 2 ] Fischmeister studied physics, mathematics, and chemistry at the University of Graz from 1945 to 1951 and received his doctorate in physical chemistry with Otto Kratky in 1951. From 1953, he was a research assistant at the Institute of Inorganic Chemistry at Uppsala University . In 1956, he became head of the Physics and Materials groups at the Development Laboratory of LM Ericsson in Stockholm. From 1958, he led the Laboratory of Powder Metallurgy at the Swedish Institute for Metals Research (Institutet för Metallforskning) in Stockholm. In 1961, he qualified as a university lecturer at Uppsala University in the field of general and inorganic chemistry. From 1961, he headed the research department for cemented carbides at the stainless steel works of Stora Kopparbergs Bergslags AB in Söderfors , subsequently leading the entire research, development, and quality assurance of the stainless steel works in Söderfors (today Erasteel Kloster AB and Alleima Söderfors). [ 3 ] In 1965, Fischmeister accepted a call to the chair and head of the Institute of Metallic Materials at Chalmers University of Technology in Gothenburg. In 1975, he was appointed chair and head of the Institute of Metallurgy and Materials Testing at the University of Leoben . In 1981, he became a scientific member of the Max Planck Society and director of the Institute of Materials Sciences at the Max Planck Institute for Metals Research in Stuttgart (now the Max Planck Institute for Intelligent Systems ). In addition to his leadership role at the Max Planck Institute for Metals Research, he was also the founding director of the Max Planck Institute of Microstructure Physics in Halle (Saale) from 1991 to 1993. In 1995, he retired from the Max Planck Institute for Metals Research. [ 4 ] Fischmeister was a member of the Austrian Universities' Board of Trustees (Universitätenkuratorium) from 1993 until 2003 and was a member of the Austrian Science Council (Wissenschaftsrat) from 2004 to 2009. Hellmut Fischmeister was elected as a foreign member of the Royal Swedish Academy of Engineering Sciences in 1975. [ 5 ] In 1981, he was elected a corresponding member of the Austrian Academy of Sciences, and from 1989 he was a member of the Academia Europaea . In 1995, he became a full member of the mathematical-natural sciences class of the Austrian Academy of Sciences. [ 6 ]
https://en.wikipedia.org/wiki/Hellmut_Fischmeister
In mathematics , Helly's selection theorem (also called the Helly selection principle ) states that a uniformly bounded sequence of monotone real functions admits a convergent subsequence . In other words, it is a sequential compactness theorem for the space of uniformly bounded monotone functions. It is named for the Austrian mathematician Eduard Helly . A more general version of the theorem asserts compactness of the space BV loc of functions locally of bounded total variation that are uniformly bounded at a point. The theorem has applications throughout mathematical analysis . In probability theory , the result implies compactness of a tight family of measures . Let ( f n ) n ∈ N be a sequence of increasing functions mapping a real interval I into the real line R , and suppose that it is uniformly bounded: there are a,b ∈ R such that a ≤ f n ≤ b for every n ∈ N . Then the sequence ( f n ) n ∈ N admits a pointwise convergent subsequence. Let A = { x ∈ I : f ( y ) ↛ f ( x ) as y → x } {\displaystyle A=\{x\in I:f(y)\not \rightarrow f(x){\text{ as }}y\rightarrow x\}} , i.e. the set of discontinuities, then since f is increasing, any x in A satisfies f ( x − ) ≤ f ( x ) ≤ f ( x + ) {\displaystyle f(x^{-})\leq f(x)\leq f(x^{+})} , where f ( x − ) = lim y ↑ x f ( y ) {\displaystyle f(x^{-})=\lim \limits _{y\uparrow x}f(y)} , f ( x + ) = lim y ↓ x f ( y ) {\displaystyle f(x^{+})=\lim \limits _{y\downarrow x}f(y)} , hence by discontinuity, f ( x − ) < f ( x + ) {\displaystyle f(x^{-})<f(x^{+})} . Since the set of rational numbers is dense in R, ∏ x ∈ A [ ( f ( x − ) , f ( x + ) ) ∩ Q ] {\displaystyle \prod _{x\in A}[{\bigl (}f(x^{-}),f(x^{+}){\bigr )}\cap \mathrm {Q} ]} is non-empty. Thus the axiom of choice indicates that there is a mapping s from A to Q . It is sufficient to show that s is injective, which implies that A has a non-larger cardinity than Q , which is countable . Suppose x 1 ,x 2 ∈ A , x 1 < x 2 , then f ( x 1 − ) < f ( x 1 + ) ≤ f ( x 2 − ) < f ( x 2 + ) {\displaystyle f(x_{1}^{-})<f(x_{1}^{+})\leq f(x_{2}^{-})<f(x_{2}^{+})} , by the construction of s , we have s(x 1 )<s(x 2 ) . Thus s is injective. Let A n = { x ∈ I ; f n ( y ) ↛ f n ( x ) as y → x } {\displaystyle A_{n}=\{x\in I;f_{n}(y)\not \rightarrow f_{n}(x){\text{ as }}y\to x\}} , i.e. the discontinuities of f n, A = ( ∪ n ∈ N A n ) ∪ ( I ∩ Q ) {\displaystyle A=(\cup _{n\in \mathrm {N} }A_{n})\cup (\mathrm {I} \cap \mathrm {Q} )} , then A is countable, and it can be denoted as { a n : n ∈ N }. By the uniform boundedness of ( f n ) n ∈ N and B-W theorem , there is a subsequence ( f (1) n ) n ∈ N such that ( f (1) n (a 1 ) ) n ∈ N converges. Suppose ( f (k) n ) n ∈ N has been chosen such that ( f (k) n (a i ) ) n ∈ N converges for i =1,...,k, then by uniform boundedness, there is a subsequence ( f (k+1) n ) n ∈ N of ( f (k) n ) n ∈ N , such that ( f (k+1) n (a k+1 ) ) n ∈ N converges, thus ( f (k+1) n ) n ∈ N converges for i =1,...,k+1. Let g k = f k ( k ) {\displaystyle g_{k}=f_{k}^{(k)}} , then g k is a subsequence of f n that converges pointwise in A . Let h k ( x ) = sup a ≤ x , a ∈ A g k ( a ) {\displaystyle h_{k}(x)=\sup _{a\leq x,a\in A}g_{k}(a)} , then , h k (a)=g k (a) for a ∈ A , h k is increasing, let h ( x ) = lim sup k → ∞ h k ( x ) {\displaystyle h(x)=\limsup \limits _{k\rightarrow \infty }h_{k}(x)} , then h is increasing, since supremes and limits of increasing functions are increasing, and h ( a ) = lim k → ∞ g k ( a ) {\displaystyle h(a)=\lim \limits _{k\rightarrow \infty }g_{k}(a)} for a ∈ A by Step 2 . 
By Step 1 , h has at most countably many discontinuities. We will show that g k converges at all continuities of h . Let x be a continuity of h , q,r ∈ A, q<x<r , then g k ( q ) − h ( r ) ≤ g k ( x ) − h ( x ) ≤ g k ( r ) − h ( q ) {\displaystyle g_{k}(q)-h(r)\leq g_{k}(x)-h(x)\leq g_{k}(r)-h(q)} ,hence lim sup k → ∞ ( g k ( x ) − h ( x ) ) ≤ lim sup k → ∞ ( g k ( r ) − h ( q ) ) = h ( r ) − h ( q ) {\displaystyle \limsup \limits _{k\rightarrow \infty }{\bigl (}g_{k}(x)-h(x){\bigr )}\leq \limsup \limits _{k\rightarrow \infty }{\bigl (}g_{k}(r)-h(q){\bigr )}=h(r)-h(q)} h ( q ) − h ( r ) = lim inf k → ∞ ( g k ( q ) − h ( r ) ) ≤ lim inf k → ∞ ( g k ( x ) − h ( x ) ) {\displaystyle h(q)-h(r)=\liminf \limits _{k\rightarrow \infty }{\bigl (}g_{k}(q)-h(r){\bigr )}\leq \liminf \limits _{k\rightarrow \infty }{\bigl (}g_{k}(x)-h(x){\bigr )}} Thus, h ( q ) − h ( r ) ≤ lim inf k → ∞ ( g k ( x ) − h ( x ) ) ≤ lim sup k → ∞ ( g k ( x ) − h ( x ) ) ≤ h ( r ) − h ( q ) {\displaystyle h(q)-h(r)\leq \liminf \limits _{k\rightarrow \infty }{\bigl (}g_{k}(x)-h(x){\bigr )}\leq \limsup \limits _{k\rightarrow \infty }{\bigl (}g_{k}(x)-h(x){\bigr )}\leq h(r)-h(q)} Since h is continuous at x , by taking the limits q ↑ x , r ↓ x {\displaystyle q\uparrow x,r\downarrow x} , we have h ( q ) , h ( r ) → h ( x ) {\displaystyle h(q),h(r)\rightarrow h(x)} , thus lim k → ∞ g k ( x ) = h ( x ) {\displaystyle \lim \limits _{k\rightarrow \infty }g_{k}(x)=h(x)} This can be done with a diagonal process similar to Step 2 . With the above steps we have constructed a subsequence of ( f n ) n ∈ N that converges pointwise in I. Let U be an open subset of the real line and let f n : U → R , n ∈ N , be a sequence of functions. Suppose that ( f n ) has uniformly bounded total variation on any W that is compactly embedded in U . That is, for all sets W ⊆ U with compact closure W̄ ⊆ U , Then, there exists a subsequence f n k , k ∈ N , of f n and a function f : U → R , locally of bounded variation , such that There are many generalizations and refinements of Helly's theorem. The following theorem, for BV functions taking values in Banach spaces , is due to Barbu and Precupanu: Let X be a reflexive , separable Hilbert space and let E be a closed, convex subset of X . Let Δ : X → [0, +∞) be positive-definite and homogeneous of degree one . Suppose that z n is a uniformly bounded sequence in BV([0, T ]; X ) with z n ( t ) ∈ E for all n ∈ N and t ∈ [0, T ]. Then there exists a subsequence z n k and functions δ , z ∈ BV([0, T ]; X ) such that
https://en.wikipedia.org/wiki/Helly's_selection_theorem
Helly's theorem is a basic result in discrete geometry on the intersection of convex sets . It was discovered by Eduard Helly in 1913, [ 1 ] but not published by him until 1923, by which time alternative proofs by Radon (1921) and König (1922) had already appeared. Helly's theorem gave rise to the notion of a Helly family . Let X 1 , ..., X n be a finite collection of convex subsets of R d {\displaystyle \mathbb {R} ^{d}} , with n ≥ d + 1 {\displaystyle n\geq d+1} . If the intersection of every d + 1 {\displaystyle d+1} of these sets is nonempty, then the whole collection has a nonempty intersection; that is, For infinite collections one has to assume compactness: Let { X α } {\displaystyle \{X_{\alpha }\}} be a collection of compact convex subsets of R d {\displaystyle \mathbb {R} ^{d}} , such that every subcollection of cardinality at most d + 1 {\displaystyle d+1} has nonempty intersection. Then the whole collection has nonempty intersection. We prove the finite version, using Radon's theorem as in the proof by Radon (1921) . The infinite version then follows by the finite intersection property characterization of compactness : a collection of closed subsets of a compact space has a non-empty intersection if and only if every finite subcollection has a non-empty intersection (once you fix a single set, the intersection of all others with it are closed subsets of a fixed compact space). The proof is by induction : Base case: Let n = d + 2 . By our assumptions, for every j = 1, ..., n there is a point x j that is in the common intersection of all X i with the possible exception of X j . Now we apply Radon's theorem to the set A = { x 1 , ..., x n }, which furnishes us with disjoint subsets A 1 , A 2 of A such that the convex hull of A 1 intersects the convex hull of A 2 . Suppose that p is a point in the intersection of these two convex hulls. We claim that Indeed, consider any j ∈ {1, ..., n }. We shall prove that p ∈ X j . Note that the only element of A that may not be in X j is x j . If x j ∈ A 1 , then x j ∉ A 2 , and therefore X j ⊃ A 2 . Since X j is convex, it then also contains the convex hull of A 2 and therefore also p ∈ X j . Likewise, if x j ∉ A 1 , then X j ⊃ A 1 , and by the same reasoning p ∈ X j . Since p is in every X j , it must also be in the intersection. Above, we have assumed that the points x 1 , ..., x n are all distinct. If this is not the case, say x i = x k for some i ≠ k , then x i is in every one of the sets X j , and again we conclude that the intersection is nonempty. This completes the proof in the case n = d + 2 . Inductive Step: Suppose n > d + 2 and that the statement is true for n −1 . The argument above shows that any subcollection of d + 2 sets will have nonempty intersection. We may then consider the collection where we replace the two sets X n −1 and X n with the single set X n −1 ∩ X n . In this new collection, every subcollection of d + 1 sets will have nonempty intersection. The inductive hypothesis therefore applies, and shows that this new collection has nonempty intersection. This implies the same for the original collection, and completes the proof. The colorful Helly theorem is an extension of Helly's theorem in which, instead of one collection, there are d +1 collections of convex subsets of R d . If, for every choice of a transversal – one set from every collection – there is a point in common to all the chosen sets, then for at least one of the collections, there is a point in common to all sets in the collection. 
Figuratively, one can consider the d +1 collections to be of d +1 different colors. Then the theorem says that, if every choice of one-set-per-color has a non-empty intersection, then there exists a color such that all sets of that color have a non-empty intersection. [ 2 ] For every a > 0 there is some b > 0 such that, if X 1 , ..., X n are n convex subsets of R d , and at least an a -fraction of ( d +1)-tuples of the sets have a point in common, then a fraction of at least b of the sets have a point in common. [ 2 ]
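A small numerical illustration of the finite theorem in the plane (d = 2), not taken from the article: the convex sets are half-planes (convex, though unbounded, which the finite version allows), and a zero-objective linear program tests whether a given subfamily has a common point. Every triple is checked, and then the whole collection.

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

# Convex sets: half-planes {x in R^2 : a·x <= b}, one per row as (a1, a2, b).
halfplanes = np.array([
    [-1.0,  0.0, 0.0],   # x >= 0
    [ 0.0, -1.0, 0.0],   # y >= 0
    [ 1.0,  1.0, 6.0],   # x + y <= 6
    [ 1.0, -1.0, 4.0],   # x - y <= 4
    [-1.0,  1.0, 4.0],   # y - x <= 4
])

def have_common_point(rows: np.ndarray) -> bool:
    """Feasibility of the intersection of the given half-planes (zero-objective LP)."""
    res = linprog(c=[0.0, 0.0], A_ub=rows[:, :2], b_ub=rows[:, 2],
                  bounds=[(None, None), (None, None)])
    return res.status == 0   # 0 = solved (feasible), 2 = infeasible

# Hypothesis of Helly's theorem for d = 2: every d + 1 = 3 of the sets intersect ...
assert all(have_common_point(halfplanes[list(idx)])
           for idx in combinations(range(len(halfplanes)), 3))
# ... conclusion: the whole collection has a common point.
print(have_common_point(halfplanes))   # True
```

The same feasibility check can of course be used to hunt for counterexamples to the hypothesis, e.g. by replacing one of the half-planes so that some triple becomes empty.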
https://en.wikipedia.org/wiki/Helly's_theorem
In game theory , the Helly metric is used to assess the distance between two strategies . It is named for Eduard Helly . Consider a game Γ = ⟨ X , Y , H ⟩ {\displaystyle \Gamma =\left\langle {\mathfrak {X}},{\mathfrak {Y}},H\right\rangle } , between player I and II. Here, X {\displaystyle {\mathfrak {X}}} and Y {\displaystyle {\mathfrak {Y}}} are the sets of pure strategies for players I and II respectively. The payoff function is denoted by H = H ( ⋅ , ⋅ ) {\displaystyle H=H(\cdot ,\cdot )} . In other words, if player I plays x ∈ X {\displaystyle x\in {\mathfrak {X}}} and player II plays y ∈ Y {\displaystyle y\in {\mathfrak {Y}}} , then player I pays H ( x , y ) {\displaystyle H(x,y)} to player II. The Helly metric ρ ( x 1 , x 2 ) {\displaystyle \rho (x_{1},x_{2})} is defined as ρ ( x 1 , x 2 ) = sup y ∈ Y | H ( x 1 , y ) − H ( x 2 , y ) | {\displaystyle \rho (x_{1},x_{2})=\sup _{y\in {\mathfrak {Y}}}\left|H(x_{1},y)-H(x_{2},y)\right|} . The metric so defined is symmetric, reflexive, and satisfies the triangle inequality . The Helly metric measures distances between strategies, not in terms of the differences between the strategies themselves, but in terms of the consequences of the strategies. Two strategies are distant if their payoffs are different. Note that ρ ( x 1 , x 2 ) = 0 {\displaystyle \rho (x_{1},x_{2})=0} does not imply x 1 = x 2 {\displaystyle x_{1}=x_{2}} but it does imply that the consequences of x 1 {\displaystyle x_{1}} and x 2 {\displaystyle x_{2}} are identical; and indeed this induces an equivalence relation . If one stipulates that ρ ( x 1 , x 2 ) = 0 {\displaystyle \rho (x_{1},x_{2})=0} implies x 1 = x 2 {\displaystyle x_{1}=x_{2}} , then the topology so induced is called the natural topology . The metric on the space of player II's strategies is analogous: ρ ( y 1 , y 2 ) = sup x ∈ X | H ( x , y 1 ) − H ( x , y 2 ) | {\displaystyle \rho (y_{1},y_{2})=\sup _{x\in {\mathfrak {X}}}\left|H(x,y_{1})-H(x,y_{2})\right|} . Note that Γ {\displaystyle \Gamma } thus defines two Helly metrics: one for each player's strategy space. Recall the definition of ϵ {\displaystyle \epsilon } -net: A set X ϵ {\displaystyle X_{\epsilon }} is an ϵ {\displaystyle \epsilon } -net in the space X {\displaystyle X} with metric ρ {\displaystyle \rho } if for any x ∈ X {\displaystyle x\in X} there exists x ϵ ∈ X ϵ {\displaystyle x_{\epsilon }\in X_{\epsilon }} with ρ ( x , x ϵ ) < ϵ {\displaystyle \rho (x,x_{\epsilon })<\epsilon } . A metric space P {\displaystyle P} is conditionally compact (or precompact), if for any ϵ > 0 {\displaystyle \epsilon >0} there exists a finite ϵ {\displaystyle \epsilon } -net in P {\displaystyle P} . Any game that is conditionally compact in the Helly metric has an ϵ {\displaystyle \epsilon } -optimal strategy for any ϵ > 0 {\displaystyle \epsilon >0} . Moreover, if the space of strategies for one player is conditionally compact, then the space of strategies for the other player is conditionally compact (in their Helly metric).
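For a finite game the supremum in the definition becomes a maximum over the opponent's finitely many pure strategies, so the Helly metric between two of player I's strategies is just the largest difference between the corresponding rows of the payoff matrix. The sketch below illustrates this with a made-up payoff matrix; all values are placeholders.

```python
import numpy as np

# Payoff matrix H[x, y]: what player I pays player II when I plays x and II plays y.
# All numbers are placeholders for illustration.
H = np.array([
    [1.0, 4.0, 0.0],
    [1.0, 4.0, 0.0],   # identical consequences to strategy 0, hence distance 0
    [2.0, 1.0, 5.0],
])

def helly_metric(H: np.ndarray, x1: int, x2: int) -> float:
    """rho(x1, x2) = max over y of |H(x1, y) - H(x2, y)| (the sup becomes a max)."""
    return float(np.max(np.abs(H[x1] - H[x2])))

print(helly_metric(H, 0, 1))   # 0.0 -> strategies 0 and 1 are equivalent
print(helly_metric(H, 0, 2))   # 5.0
```

The first output illustrates the point made above: a distance of zero does not mean the strategies are the same, only that their consequences are identical.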
https://en.wikipedia.org/wiki/Helly_metric
In probability theory , the Helly–Bray theorem relates the weak convergence of cumulative distribution functions to the convergence of expectations of certain measurable functions . It is named after Eduard Helly and Hubert Evelyn Bray . Let F and F 1 , F 2 , ... be cumulative distribution functions on the real line . The Helly–Bray theorem states that if F n converges weakly to F , then ∫ R g ( x ) d F n ( x ) → ∫ R g ( x ) d F ( x ) as n → ∞ for each bounded , continuous function g : R → R , where the integrals involved are Riemann–Stieltjes integrals . Note that if X and X 1 , X 2 , ... are random variables corresponding to these distribution functions, then the Helly–Bray theorem does not imply that E( X n ) → E( X ), since g ( x ) = x is not a bounded function. In fact, a stronger and more general theorem holds. Let P and P 1 , P 2 , ... be probability measures on some set S . Then P n converges weakly to P if and only if ∫ S g d P n → ∫ S g d P for all bounded, continuous and real-valued functions g on S . (The integrals in this version of the theorem are Lebesgue–Stieltjes integrals .) The more general theorem above is sometimes taken as defining weak convergence of measures (see Billingsley, 1999, p. 3). This article incorporates material from Helly–Bray theorem on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License .
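A small numerical illustration, not part of the original article: let F_n be the distribution placing mass 1/n on each of the points k/n, which converges weakly to the uniform distribution F on [0, 1]; for a bounded continuous g, the integrals ∫ g dF_n then approach ∫ g dF.

```python
import numpy as np

def g(x):
    """A bounded, continuous test function."""
    return np.cos(3 * x) / (1 + x ** 2)

def integral_wrt_Fn(n: int) -> float:
    """Integral of g with respect to F_n, the law putting mass 1/n at each point k/n."""
    points = np.arange(1, n + 1) / n
    return float(np.mean(g(points)))

# Integral of g with respect to F, the uniform law on [0, 1] (dense trapezoidal rule).
grid = np.linspace(0.0, 1.0, 200_001)
integral_wrt_F = float(np.trapz(g(grid), grid))

for n in (10, 100, 1000, 10_000):
    print(n, abs(integral_wrt_Fn(n) - integral_wrt_F))   # differences shrink toward 0
```

Replacing g with the unbounded function g(x) = x would still converge in this particular example, but, as the article notes, boundedness is what the theorem itself requires in general.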
https://en.wikipedia.org/wiki/Helly–Bray_theorem
The Hell–Volhard–Zelinsky halogenation reaction is a chemical transformation that transforms an alkyl carboxylic acid to the α-bromo derivative. It is a specialized and rare kind of halogenation . [ 1 ] [ 2 ] An example of the Hell–Volhard–Zelinsky reaction can be seen in the preparation of alanine from propionic acid . In the first step, a combination of bromine and phosphorus tribromide ( catalyst ) is used in the Hell–Volhard–Zelinsky reaction to prepare 2-bromopropionic acid, [ 3 ] which in the second step is converted to a racemic mixture of the amino acid product by ammonolysis . [ 4 ] [ 5 ] The reaction is initiated by addition of a catalytic amount of PBr 3 , after which one molar equivalent of Br 2 is added. PBr 3 converts the carboxylic OH to the acyl bromide. The acyl bromide tautomerizes to an enol , which reacts with the Br 2 to brominate at the α position. In neutral to slightly acidic aqueous solution, hydrolysis of the α-bromo acyl bromide occurs spontaneously, yielding the α-bromo carboxylic acid. If an aqueous solution is desirable, a full molar equivalent of PBr 3 must be used as the catalytic chain is disrupted. If little nucleophilic solvent is present, reaction of the α-bromo acyl bromide with the carboxylic acid yields the α-bromo carboxylic acid and regenerates the acyl bromide intermediate. In practice a molar equivalent of PBr 3 is often used anyway to overcome the slow reaction kinetics . The mechanism for the exchange between an alkanoyl bromide and a carboxylic acid is below. The α-bromoalkanoyl bromide has a strongly electrophilic carbonyl carbon because of the electron-withdrawing effects of the two bromides. By quenching the reaction with an alcohol, instead of water, the α-bromo ester can be obtained. The reaction is named after the German chemists Carl Magnus von Hell (1849–1926) and Jacob Volhard (1834–1910) and the Russian chemist Nikolay Zelinsky (1861–1953). [ 6 ] [ 7 ] [ 8 ]
https://en.wikipedia.org/wiki/Hell–Volhard–Zelinsky_halogenation
The Helmert transformation (named after Friedrich Robert Helmert , 1843–1917) is a geometric transformation method within a three-dimensional space . It is frequently used in geodesy to produce transformations between geodetic datums . The Helmert transformation is also called a seven-parameter transformation and is a similarity transformation . It can be expressed as X B = c + μ R X A {\displaystyle X_{B}=c+\mu RX_{A}} , where X A is the position vector of a point in the source datum, X B is the position vector of the same point in the target datum, c = ( c x , c y , c z ) is the translation vector, μ = 1 + s is the scale factor, and R is the rotation matrix. The parameters are therefore three translations, three rotations, and one change of scale. A special case is the two-dimensional Helmert transformation. Here, only four parameters are needed (two translations, one scaling, one rotation). These can be determined from two known points; if more points are available then checks can be made. Sometimes it is sufficient to use the five parameter transformation , composed of three translations, only one rotation about the Z-axis, and one change of scale. The Helmert transformation only uses one scale factor, so it is not suitable for cases in which the scale differs along different directions or where distortions must be modelled; in these cases, a more general affine transformation is preferable. The Helmert transformation is used, among other things, in geodesy to transform the coordinates of a point from one coordinate system into another. Using it, it becomes possible to convert regional surveying points into the WGS84 locations used by GPS . For example, starting with the Gauss–Krüger coordinates, x and y , plus the height, h , the values are converted into 3D geocentric values in steps. The third step consists of the application of a rotation matrix , multiplication with the scale factor μ = 1 + s {\displaystyle \mu =1+s} (with a value near 1) and the addition of the three translations, c x , c y , c z . The coordinates of a reference system B are derived from reference system A by the formula above (position vector transformation convention and very small rotation angles simplification), [ 1 ] or written out for each coordinate separately. For the reverse transformation, each element is multiplied by −1. The seven parameters are determined for each region with three or more "identical points" of both systems. To bring them into agreement, the small inconsistencies (usually only a few cm) are adjusted using the method of least squares – that is, eliminated in a statistically plausible manner. Standard parameter sets are published for the 7-parameter transformation (or datum transformation) between two datums. For a transformation in the opposite direction, inverse transformation parameters should be calculated or the inverse transformation should be applied (as described in the paper "On geodetic transformations" [ 2 ] ). The translations c x , c y , c z are sometimes described as t x , t y , t z , or dx , dy , dz . The rotations r x , r y , and r z are sometimes also described as ω {\displaystyle \omega } , ϕ {\displaystyle \phi } and κ {\displaystyle \kappa } . In the United Kingdom the prime interest is the transformation between the OSGB36 datum used by the Ordnance Survey for grid references on its Landranger and Explorer maps and the WGS84 implementation used by GPS technology. The Gauss–Krüger coordinate system used in Germany normally refers to the Bessel ellipsoid . A further datum of interest was ED50 (European Datum 1950), based on the Hayford ellipsoid . ED50 was part of the fundamentals of the NATO coordinates up to the 1980s, and many national coordinate systems of Gauss–Krüger are defined by ED50. The earth does not have a perfect ellipsoidal shape, but is described as a geoid . The geoid of the earth is instead approximated by many different ellipsoids.
Depending upon the actual location, the "locally best aligned ellipsoid" has been used for surveying and mapping purposes. The standard parameter set gives an accuracy of about 7 m for an OSGB36/WGS84 transformation. This is not precise enough for surveying, and the Ordnance Survey supplements these results by using a lookup table of further translations in order to reach 1 cm accuracy. If the transformation parameters are unknown, they can be calculated with reference points (that is, points whose coordinates are known before and after the transformation). Since a total of seven parameters (three translations, one scale, three rotations) have to be determined, at least two points and one coordinate of a third point (for example, the Z-coordinate) must be known. This gives a system with seven equations and seven unknowns, which can be solved. For transformations between conformal map projections near an arbitrary point, the Helmert transformation parameters can be calculated exactly from the Jacobian matrix of the transformation function. In practice, it is best to use more points. Through this correspondence, more accuracy is obtained, and a statistical assessment of the results becomes possible. In this case, the calculation is adjusted with the Gaussian least squares method. A numerical value for the accuracy of the transformation parameters is obtained by calculating the values at the reference points, and weighting the results relative to the centroid of the points. While the method is mathematically rigorous, it is entirely dependent on the accuracy of the parameters that are used. In practice, these parameters are computed from the inclusion of at least three known points in the networks. However, the accuracy of these points will affect the derived transformation parameters, as the points will contain observation errors. Therefore, a "real-world" transformation will only be a best estimate and should contain a statistical measure of its quality.
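The sketch below implements the position-vector, small-rotation-angle form of the seven-parameter transformation described above. The parameter values and the sample point are placeholders chosen for illustration only; they are not an official datum-transformation parameter set.

```python
import numpy as np

def helmert_transform(xyz, cx, cy, cz, s_ppm, rx_as, ry_as, rz_as):
    """Seven-parameter Helmert transformation, position vector convention with the
    small-angle approximation: X_B = c + (1 + s) * R * X_A.

    xyz        : array of shape (3,) or (..., 3), geocentric coordinates in datum A [m]
    cx, cy, cz : translations [m]
    s_ppm      : scale change in parts per million
    r*_as      : rotations about the X, Y, Z axes in arcseconds
    """
    arcsec = np.pi / (180.0 * 3600.0)          # arcseconds -> radians
    rx, ry, rz = rx_as * arcsec, ry_as * arcsec, rz_as * arcsec
    mu = 1.0 + s_ppm * 1e-6
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])            # small-angle rotation matrix
    c = np.array([cx, cy, cz])
    return c + mu * (np.asarray(xyz) @ R.T)

# Placeholder parameters and point, for illustration only (not an official parameter set):
point_a = np.array([4_000_000.0, 500_000.0, 4_800_000.0])    # metres
point_b = helmert_transform(point_a, cx=100.0, cy=-50.0, cz=25.0,
                            s_ppm=2.5, rx_as=0.10, ry_as=-0.35, rz_as=0.25)
print(point_b)
```

For the approximate reverse transformation one would, as the text notes, negate all seven parameters; an exact inverse requires inverting the full similarity transformation.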
https://en.wikipedia.org/wiki/Helmert_transformation
In fluid mechanics , Helmholtz's theorems , named after Hermann von Helmholtz , describe the three-dimensional motion of fluid in the vicinity of vortex lines. These theorems apply to inviscid flows and to flows where the influence of viscous forces is small and can be ignored. Helmholtz's three theorems are as follows: (1) the strength of a vortex filament is constant along its length; (2) a vortex filament cannot end in a fluid – it must extend to the boundaries of the fluid or form a closed path; (3) a fluid element that is initially irrotational remains irrotational. [ 1 ] Helmholtz's theorems apply to inviscid flows. In observations of vortices in real fluids the strength of the vortices always decays gradually due to the dissipative effect of viscous forces . Alternative expressions of the three theorems exist, and Helmholtz's theorems have application in understanding a range of vortex phenomena in fluids. Helmholtz's theorems are now generally proven with reference to Kelvin's circulation theorem . However, Helmholtz's theorems were published in 1858, [ 3 ] nine years before the 1867 publication of Kelvin's theorem.
https://en.wikipedia.org/wiki/Helmholtz's_theorems
In thermodynamics , the Helmholtz free energy (or Helmholtz energy ) is a thermodynamic potential that measures the useful work obtainable from a closed thermodynamic system at a constant temperature ( isothermal ). The decrease in the Helmholtz energy during a process is equal to the maximum amount of work that the system can perform in a thermodynamic process in which temperature is held constant. At constant temperature, the Helmholtz free energy is minimized at equilibrium. In contrast, the Gibbs free energy or free enthalpy is most commonly used as a measure of thermodynamic potential (especially in chemistry ) when it is convenient for applications that occur at constant pressure . For example, in explosives research Helmholtz free energy is often used, since explosive reactions by their nature induce pressure changes. It is also frequently used to define fundamental equations of state of pure substances. The concept of free energy was developed by Hermann von Helmholtz , a German physicist, and first presented in 1882 in a lecture called "On the thermodynamics of chemical processes". [ 1 ] From the German word Arbeit (work), the International Union of Pure and Applied Chemistry (IUPAC) recommends the symbol A and the name Helmholtz energy . [ 2 ] In physics , the symbol F is also used in reference to free energy or Helmholtz function . The Helmholtz free energy is defined as [ 3 ] A ≡ U − T S , {\displaystyle A\equiv U-TS,} where U is the internal energy of the system, T is the absolute temperature, and S is the entropy. The Helmholtz energy is the Legendre transformation of the internal energy U , in which temperature replaces entropy as the independent variable. The first law of thermodynamics in a closed system provides d U = δ Q + δ W , {\displaystyle \mathrm {d} U=\delta Q\ +\delta W,} where U {\displaystyle U} is the internal energy, δ Q {\displaystyle \delta Q} is the energy added as heat, and δ W {\displaystyle \delta W} is the work done on the system. The second law of thermodynamics for a reversible process yields δ Q = T d S {\displaystyle \delta Q=T\,\mathrm {d} S} . In case of a reversible change, the work done can be expressed as δ W = − p d V {\displaystyle \delta W=-p\,\mathrm {d} V} (ignoring electrical and other non- PV work) and so: d U = T d S − p d V . {\displaystyle \mathrm {d} U=T\,\mathrm {d} S-p\,\mathrm {d} V.} Applying the product rule for differentiation to d ( T S ) = T d S + S d T {\displaystyle \mathrm {d} (TS)=T\mathrm {d} S\,+S\mathrm {d} T} , it follows d U = d ( T S ) − S d T − p d V , {\displaystyle \mathrm {d} U=\mathrm {d} (TS)-S\,\mathrm {d} T-p\,\mathrm {d} V,} and d ( U − T S ) = − S d T − p d V . {\displaystyle \mathrm {d} (U-TS)=-S\,\mathrm {d} T-p\,\mathrm {d} V.} The definition of A = U − T S {\displaystyle A=U-TS} allows us to rewrite this as d A = − S d T − p d V . {\displaystyle \mathrm {d} A=-S\,\mathrm {d} T-p\,\mathrm {d} V.} Because A is a thermodynamic function of state , this relation is also valid for a process (without electrical work or composition change) that is not reversible. The laws of thermodynamics are only directly applicable to systems in thermal equilibrium. If we wish to describe phenomena like chemical reactions, then the best we can do is to consider suitably chosen initial and final states in which the system is in (metastable) thermal equilibrium. If the system is kept at fixed volume and is in contact with a heat bath at some constant temperature, then we can reason as follows.
Since the thermodynamical variables of the system are well defined in the initial state and the final state, the internal energy increase Δ U {\displaystyle \Delta U} , the entropy increase Δ S {\displaystyle \Delta S} , and the total amount of work that can be extracted, performed by the system, W {\displaystyle W} , are well defined quantities. Conservation of energy implies Δ U bath + Δ U + W = 0. {\displaystyle \Delta U_{\text{bath}}+\Delta U+W=0.} The volume of the system is kept constant. This means that the volume of the heat bath does not change either, and we can conclude that the heat bath does not perform any work. This implies that the amount of heat that flows into the heat bath is given by Q bath = Δ U bath = − ( Δ U + W ) . {\displaystyle Q_{\text{bath}}=\Delta U_{\text{bath}}=-(\Delta U+W).} The heat bath remains in thermal equilibrium at temperature T no matter what the system does. Therefore, the entropy change of the heat bath is Δ S bath = Q bath T = − Δ U + W T . {\displaystyle \Delta S_{\text{bath}}={\frac {Q_{\text{bath}}}{T}}=-{\frac {\Delta U+W}{T}}.} The total entropy change is thus given by Δ S bath + Δ S = − Δ U − T Δ S + W T . {\displaystyle \Delta S_{\text{bath}}+\Delta S=-{\frac {\Delta U-T\Delta S+W}{T}}.} Since the system is in thermal equilibrium with the heat bath in the initial and the final states, T is also the temperature of the system in these states. The fact that the system's temperature does not change allows us to express the numerator as the free energy change of the system: Δ S bath + Δ S = − Δ A + W T . {\displaystyle \Delta S_{\text{bath}}+\Delta S=-{\frac {\Delta A+W}{T}}.} Since the total change in entropy must always be larger or equal to zero, we obtain the inequality W ≤ − Δ A . {\displaystyle W\leq -\Delta A.} We see that the total amount of work that can be extracted in an isothermal process is limited by the free-energy decrease, and that increasing the free energy in a reversible process requires work to be done on the system. If no work is extracted from the system, then Δ A ≤ 0 , {\displaystyle \Delta A\leq 0,} and thus for a system kept at constant temperature and volume and not capable of performing electrical or other non- PV work, the total free energy during a spontaneous change can only decrease. This result seems to contradict the equation d A = − S d T − p d V {\displaystyle \mathrm {d} A=-S\,\mathrm {d} T-p\,\mathrm {d} V} , as keeping T and V constant seems to imply d A = 0 {\displaystyle \mathrm {d} A=0} , and hence A = c o n s t . {\displaystyle A=\mathrm {const.} } In reality there is no contradiction: In a simple one-component system, to which the validity of the equation d A = − S d T − p d V {\displaystyle \mathrm {d} A=-S\,\mathrm {d} T-p\,\mathrm {d} V} is restricted, no process can occur at constant T and V , since there is a unique P ( T , V ) {\displaystyle P(T,V)} relation, and thus T , V , and P are all fixed. To allow for spontaneous processes at constant T and V , one needs to enlarge the thermodynamical state space of the system. In case of a chemical reaction, one must allow for changes in the numbers N j of particles of each type j . The differential of the free energy then generalizes to d A = − S d T − P d V + ∑ j μ j d N j , {\displaystyle \mathrm {d} A=-S\,\mathrm {d} T-P\,\mathrm {d} V+\sum _{j}\mu _{j}\,\mathrm {d} N_{j},} where the N j {\displaystyle N_{j}} are the numbers of particles of type j and the μ j {\displaystyle \mu _{j}} are the corresponding chemical potentials . 
This equation is then again valid for both reversible and non-reversible changes. In case of a spontaneous change at constant T and V, the last term will thus be negative. In case there are other external parameters, the above relation further generalizes to d A = − S d T − ∑ i X i d x i + ∑ j μ j d N j . {\displaystyle \mathrm {d} A=-S\,\mathrm {d} T-\sum _{i}X_{i}\,\mathrm {d} x_{i}+\sum _{j}\mu _{j}\,\mathrm {d} N_{j}.} Here the x i {\displaystyle x_{i}} are the external variables, and the X i {\displaystyle X_{i}} the corresponding generalized forces . A system kept at constant volume, temperature, and particle number is described by the canonical ensemble . The probability of finding the system in some energy eigenstate r , for any microstate i , is given by P r = e − β E r Z , {\displaystyle P_{r}={\frac {e^{-\beta E_{r}}}{Z}},} where Z is called the partition function of the system. The fact that the system does not have a unique energy means that the various thermodynamical quantities must be defined as expectation values. In the thermodynamical limit of infinite system size, the relative fluctuations in these averages will go to zero. The average internal energy of the system is the expectation value of the energy and can be expressed in terms of Z as follows: U ≡ ⟨ E ⟩ = ∑ r P r E r = ∑ r e − β E r E r Z = ∑ r − ∂ ∂ β e − β E r Z = − ∂ ∂ β ∑ r e − β E r Z = − ∂ log ⁡ Z ∂ β . {\displaystyle U\equiv \langle E\rangle =\sum _{r}P_{r}E_{r}=\sum _{r}{\frac {e^{-\beta E_{r}}E_{r}}{Z}}=\sum _{r}{\frac {-{\frac {\partial }{\partial \beta }}e^{-\beta E_{r}}}{Z}}={\frac {-{\frac {\partial }{\partial \beta }}\sum _{r}e^{-\beta E_{r}}}{Z}}=-{\frac {\partial \log Z}{\partial \beta }}.} If the system is in state r , then the generalized force corresponding to an external variable x is given by X r = − ∂ E r ∂ x . {\displaystyle X_{r}=-{\frac {\partial E_{r}}{\partial x}}.} The thermal average of this can be written as X = ∑ r P r X r = 1 β ∂ log ⁡ Z ∂ x . {\displaystyle X=\sum _{r}P_{r}X_{r}={\frac {1}{\beta }}{\frac {\partial \log Z}{\partial x}}.} Suppose that the system has one external variable x {\displaystyle x} . Then changing the system's temperature parameter by d β {\displaystyle d\beta } and the external variable by d x {\displaystyle dx} will lead to a change in log ⁡ Z {\displaystyle \log Z} : d ( log ⁡ Z ) = ∂ log ⁡ Z ∂ β d β + ∂ log ⁡ Z ∂ x d x = − U d β + β X d x . {\displaystyle d(\log Z)={\frac {\partial \log Z}{\partial \beta }}\,d\beta +{\frac {\partial \log Z}{\partial x}}\,dx=-U\,d\beta +\beta X\,dx.} If we write U d β {\displaystyle U\,d\beta } as U d β = d ( β U ) − β d U , {\displaystyle U\,d\beta =d(\beta U)-\beta \,dU,} we get d ( log ⁡ Z ) = − d ( β U ) + β d U + β X d x . {\displaystyle d(\log Z)=-d(\beta U)+\beta \,dU+\beta X\,dx.} This means that the change in the internal energy is given by d U = 1 β d ( log ⁡ Z + β U ) − X d x . {\displaystyle dU={\frac {1}{\beta }}\,d(\log Z+\beta U)-X\,dx.} In the thermodynamic limit, the fundamental thermodynamic relation should hold: d U = T d S − X d x . {\displaystyle dU=T\,dS-X\,dx.} This then implies that the entropy of the system is given by S = k log ⁡ Z + U T + c , {\displaystyle S=k\log Z+{\frac {U}{T}}+c,} where c is some constant. The value of c can be determined by considering the limit T → 0. In this limit the entropy becomes S = k log ⁡ Ω 0 {\displaystyle S=k\log \Omega _{0}} , where Ω 0 {\displaystyle \Omega _{0}} is the ground-state degeneracy. 
The partition function in this limit is Ω 0 e − β U 0 {\displaystyle \Omega _{0}e^{-\beta U_{0}}} , where U 0 {\displaystyle U_{0}} is the ground-state energy. Thus, we see that c = 0 {\displaystyle c=0} and that A = − k T log ⁡ Z . {\displaystyle \,A=-kT\log Z.} Combining the definition of Helmholtz free energy A = U − T S {\displaystyle A=U-TS} along with the fundamental thermodynamic relation d A = − S d T − P d V + μ d N , {\displaystyle \mathrm {d} A=-S\,\mathrm {d} T-P\,\mathrm {d} V+\mu \,\mathrm {d} N,} one can find expressions for entropy, pressure and chemical potential: [ 4 ] S = − ( ∂ A ∂ T ) | V , N , P = − ( ∂ A ∂ V ) | T , N , μ = ( ∂ A ∂ N ) | T , V . {\displaystyle S=\left.-\left({\frac {\partial A}{\partial T}}\right)\right|_{V,N},\quad P=\left.-\left({\frac {\partial A}{\partial V}}\right)\right|_{T,N},\quad \mu =\left.\left({\frac {\partial A}{\partial N}}\right)\right|_{T,V}.} These three equations, along with the free energy in terms of the partition function, A = − k T log ⁡ Z , {\displaystyle A=-kT\log Z,} allow an efficient way of calculating thermodynamic variables of interest given the partition function and are often used in density of state calculations. One can also do Legendre transformations for different systems. For example, for a system with a magnetic field or potential, it is true that m = − ( ∂ A ∂ B ) | T , N , V = ( ∂ A ∂ Q ) | N , T . {\displaystyle m=\left.-\left({\frac {\partial A}{\partial B}}\right)\right|_{T,N},\quad V=\left.\left({\frac {\partial A}{\partial Q}}\right)\right|_{N,T}.} Computing the free energy is an intractable problem for all but the simplest models in statistical physics. A powerful approximation method is mean-field theory , which is a variational method based on the Bogoliubov inequality. This inequality can be formulated as follows. Suppose we replace the real Hamiltonian H {\displaystyle H} of the model by a trial Hamiltonian H ~ {\displaystyle {\tilde {H}}} , which has different interactions and may depend on extra parameters that are not present in the original model. If we choose this trial Hamiltonian such that ⟨ H ~ ⟩ = ⟨ H ⟩ , {\displaystyle \left\langle {\tilde {H}}\right\rangle =\langle H\rangle ,} where both averages are taken with respect to the canonical distribution defined by the trial Hamiltonian H ~ {\displaystyle {\tilde {H}}} , then the Bogoliubov inequality states A ≤ A ~ , {\displaystyle A\leq {\tilde {A}},} where A {\displaystyle A} is the free energy of the original Hamiltonian, and A ~ {\displaystyle {\tilde {A}}} is the free energy of the trial Hamiltonian. We will prove this below. By including a large number of parameters in the trial Hamiltonian and minimizing the free energy, we can expect to get a close approximation to the exact free energy. The Bogoliubov inequality is often applied in the following way. If we write the Hamiltonian as H = H 0 + Δ H , {\displaystyle H=H_{0}+\Delta H,} where H 0 {\displaystyle H_{0}} is some exactly solvable Hamiltonian, then we can apply the above inequality by defining H ~ = H 0 + ⟨ Δ H ⟩ 0 . {\displaystyle {\tilde {H}}=H_{0}+\langle \Delta H\rangle _{0}.} Here we have defined ⟨ X ⟩ 0 {\displaystyle \langle X\rangle _{0}} to be the average of X over the canonical ensemble defined by H 0 {\displaystyle H_{0}} . Since H ~ {\displaystyle {\tilde {H}}} defined this way differs from H 0 {\displaystyle H_{0}} by a constant, we have in general ⟨ X ⟩ 0 = ⟨ X ⟩ . 
{\displaystyle \langle X\rangle _{0}=\langle X\rangle .} where ⟨ X ⟩ {\displaystyle \langle X\rangle } is still the average over H ~ {\displaystyle {\tilde {H}}} , as specified above. Therefore, ⟨ H ~ ⟩ = ⟨ H 0 + ⟨ Δ H ⟩ ⟩ = ⟨ H ⟩ , {\displaystyle \left\langle {\tilde {H}}\right\rangle ={\big \langle }H_{0}+\langle \Delta H\rangle {\big \rangle }=\langle H\rangle ,} and thus the inequality A ≤ A ~ {\displaystyle A\leq {\tilde {A}}} holds. The free energy A ~ {\displaystyle {\tilde {A}}} is the free energy of the model defined by H 0 {\displaystyle H_{0}} plus ⟨ Δ H ⟩ {\displaystyle \langle \Delta H\rangle } . This means that A ~ = ⟨ H 0 ⟩ 0 − T S 0 + ⟨ Δ H ⟩ 0 = ⟨ H ⟩ 0 − T S 0 , {\displaystyle {\tilde {A}}=\langle H_{0}\rangle _{0}-TS_{0}+\langle \Delta H\rangle _{0}=\langle H\rangle _{0}-TS_{0},} and thus A ≤ ⟨ H ⟩ 0 − T S 0 . {\displaystyle A\leq \langle H\rangle _{0}-TS_{0}.} For a classical model we can prove the Bogoliubov inequality as follows. We denote the canonical probability distributions for the Hamiltonian and the trial Hamiltonian by P r {\displaystyle P_{r}} and P ~ r {\displaystyle {\tilde {P}}_{r}} , respectively. From Gibbs' inequality we know that: ∑ r P ~ r log ⁡ ( P ~ r ) ≥ ∑ r P ~ r log ⁡ ( P r ) {\displaystyle \sum _{r}{\tilde {P}}_{r}\log \left({\tilde {P}}_{r}\right)\geq \sum _{r}{\tilde {P}}_{r}\log \left(P_{r}\right)\,} holds. To see this, consider the difference between the left hand side and the right hand side. We can write this as: ∑ r P ~ r log ⁡ ( P ~ r P r ) {\displaystyle \sum _{r}{\tilde {P}}_{r}\log \left({\frac {{\tilde {P}}_{r}}{P_{r}}}\right)\,} Since log ⁡ ( x ) ≥ 1 − 1 x {\displaystyle \log \left(x\right)\geq 1-{\frac {1}{x}}\,} it follows that: ∑ r P ~ r log ⁡ ( P ~ r P r ) ≥ ∑ r ( P ~ r − P r ) = 0 {\displaystyle \sum _{r}{\tilde {P}}_{r}\log \left({\frac {{\tilde {P}}_{r}}{P_{r}}}\right)\geq \sum _{r}\left({\tilde {P}}_{r}-P_{r}\right)=0\,} where in the last step we have used that both probability distributions are normalized to 1. We can write the inequality as: ⟨ log ⁡ P ~ r ⟩ ≥ ⟨ log ⁡ P r ⟩ {\displaystyle \left\langle \log {\tilde {P}}_{r}\right\rangle \geq \left\langle \log P_{r}\right\rangle } where the averages are taken with respect to P ~ r {\displaystyle {\tilde {P}}_{r}} . If we now substitute in here the expressions for the probability distributions: P r = exp ⁡ [ − β H ( r ) ] Z {\displaystyle P_{r}={\frac {\exp \left[-\beta H(r)\right]}{Z}}} and P ~ r = exp ⁡ [ − β H ~ ( r ) ] Z ~ {\displaystyle {\tilde {P}}_{r}={\frac {\exp \left[-\beta {\tilde {H}}(r)\right]}{\tilde {Z}}}} we get: ⟨ − β H ~ − log ⁡ Z ~ ⟩ ≥ ⟨ − β H − log ⁡ Z ⟩ {\displaystyle \left\langle -\beta {\tilde {H}}-\log {\tilde {Z}}\right\rangle \geq \left\langle -\beta H-\log Z\right\rangle } Since the averages of H {\displaystyle H} and H ~ {\displaystyle {\tilde {H}}} are, by assumption, identical we have: A ≤ A ~ {\displaystyle A\leq {\tilde {A}}} Here we have used that the partition functions are constants with respect to taking averages and that the free energy is proportional to minus the logarithm of the partition function. We can easily generalize this proof to the case of quantum mechanical models. We denote the eigenstates of H ~ {\displaystyle {\tilde {H}}} by | r ⟩ {\displaystyle \left|r\right\rangle } . 
We denote the diagonal components of the density matrices for the canonical distributions for H {\displaystyle H} and H ~ {\displaystyle {\tilde {H}}} in this basis as: P r = ⟨ r | exp ⁡ [ − β H ] Z | r ⟩ {\displaystyle P_{r}=\left\langle r\left|{\frac {\exp \left[-\beta H\right]}{Z}}\right|r\right\rangle \,} and P ~ r = ⟨ r | exp ⁡ [ − β H ~ ] Z ~ | r ⟩ = exp ⁡ ( − β E ~ r ) Z ~ {\displaystyle {\tilde {P}}_{r}=\left\langle r\left|{\frac {\exp \left[-\beta {\tilde {H}}\right]}{\tilde {Z}}}\right|r\right\rangle ={\frac {\exp \left(-\beta {\tilde {E}}_{r}\right)}{\tilde {Z}}}\,} where the E ~ r {\displaystyle {\tilde {E}}_{r}} are the eigenvalues of H ~ {\displaystyle {\tilde {H}}} We assume again that the averages of H and H ~ {\displaystyle {\tilde {H}}} in the canonical ensemble defined by H ~ {\displaystyle {\tilde {H}}} are the same: ⟨ H ~ ⟩ = ⟨ H ⟩ {\displaystyle \left\langle {\tilde {H}}\right\rangle =\left\langle H\right\rangle \,} where ⟨ H ⟩ = ∑ r P ~ r ⟨ r | H | r ⟩ {\displaystyle \left\langle H\right\rangle =\sum _{r}{\tilde {P}}_{r}\left\langle r\left|H\right|r\right\rangle \,} The inequality ∑ r P ~ r log ⁡ P ~ r ≥ ∑ r P ~ r log ⁡ P r {\displaystyle \sum _{r}{\tilde {P}}_{r}\log {\tilde {P}}_{r}\geq \sum _{r}{\tilde {P}}_{r}\log P_{r}} still holds as both the P r {\displaystyle P_{r}} and the P ~ r {\displaystyle {\tilde {P}}_{r}} sum to 1. On the left-hand side we can replace: log ⁡ P ~ r = − β E ~ r − log ⁡ Z ~ {\displaystyle \log {\tilde {P}}_{r}=-\beta {\tilde {E}}_{r}-\log {\tilde {Z}}} On the right-hand side we can use the inequality ⟨ e X ⟩ r ≥ e ⟨ X ⟩ r {\displaystyle \left\langle e^{X}\right\rangle _{r}\geq e^{{\left\langle X\right\rangle }_{r}}} where we have introduced the notation ⟨ Y ⟩ r ≡ ⟨ r | Y | r ⟩ {\displaystyle \left\langle Y\right\rangle _{r}\equiv \left\langle r\left|Y\right|r\right\rangle \,} for the expectation value of the operator Y in the state r. See here for a proof. Taking the logarithm of this inequality gives: log ⁡ [ ⟨ e X ⟩ r ] ≥ ⟨ X ⟩ r {\displaystyle \log \left[\left\langle e^{X}\right\rangle _{r}\right]\geq \left\langle X\right\rangle _{r}\,} This allows us to write: log ⁡ P r = log ⁡ [ ⟨ exp ⁡ ( − β H − log ⁡ Z ) ⟩ r ] ≥ ⟨ − β H − log ⁡ Z ⟩ r {\displaystyle \log P_{r}=\log \left[\left\langle \exp \left(-\beta H-\log Z\right)\right\rangle _{r}\right]\geq \left\langle -\beta H-\log Z\right\rangle _{r}} The fact that the averages of H and H ~ {\displaystyle {\tilde {H}}} are the same then leads to the same conclusion as in the classical case: A ≤ A ~ {\displaystyle A\leq {\tilde {A}}} In the more general case, the mechanical term p d V {\displaystyle p\mathrm {d} V} must be replaced by the product of volume, stress , and an infinitesimal strain: [ 5 ] d A = V ∑ i j σ i j d ε i j − S d T + ∑ i μ i d N i , {\displaystyle \mathrm {d} A=V\sum _{ij}\sigma _{ij}\,\mathrm {d} \varepsilon _{ij}-S\,\mathrm {d} T+\sum _{i}\mu _{i}\,\mathrm {d} N_{i},} where σ i j {\displaystyle \sigma _{ij}} is the stress tensor, and ε i j {\displaystyle \varepsilon _{ij}} is the strain tensor. In the case of linear elastic materials that obey Hooke's law , the stress is related to the strain by σ i j = C i j k l ε k l , {\displaystyle \sigma _{ij}=C_{ijkl}\varepsilon _{kl},} where we are now using Einstein notation for the tensors, in which repeated indices in a product are summed. 
We may integrate the expression for d A {\displaystyle \mathrm {d} A} to obtain the Helmholtz energy: A = 1 2 V C i j k l ε i j ε k l − S T + ∑ i μ i N i = 1 2 V σ i j ε i j − S T + ∑ i μ i N i . {\displaystyle {\begin{aligned}A&={\frac {1}{2}}VC_{ijkl}\varepsilon _{ij}\varepsilon _{kl}-ST+\sum _{i}\mu _{i}N_{i}\\&={\frac {1}{2}}V\sigma _{ij}\varepsilon _{ij}-ST+\sum _{i}\mu _{i}N_{i}.\end{aligned}}} The Helmholtz free energy function for a pure substance (together with its partial derivatives) can be used to determine all other thermodynamic properties for the substance. See, for example, the equations of state for water , as given by the IAPWS in their IAPWS-95 release. Hinton and Zemel [ 6 ] "derive an objective function for training auto-encoder based on the minimum description length (MDL) principle". "The description length of an input vector using a particular code is the sum of the code cost and reconstruction cost. They define this to be the energy of the code. Given an input vector, they define the energy of a code to be the sum of the code cost and the reconstruction cost." The true expected combined cost is A = ∑ i p i E i − H , {\displaystyle A=\sum _{i}p_{i}E_{i}-H,} "which has exactly the form of Helmholtz free energy".
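The Bogoliubov bound discussed above, A ≤ ⟨H⟩_0 − TS_0, together with A = −kT log Z, can be checked on a small worked example. The Python sketch below uses an assumed toy model, two Ising spins with coupling J and a non-interacting trial Hamiltonian with an adjustable field h; the model, parameter values, and trial form are illustrative choices, not part of the article.

# Variational (Bogoliubov) bound on the Helmholtz free energy for a toy model:
# exact H = -J s1 s2, trial H0 = -h (s1 + s2), spins s = +-1, units with k_B = 1.
import numpy as np

J, T = 1.0, 1.5
beta = 1.0 / T

# Exact free energy from A = -kT ln Z, with Z = 2 exp(beta J) + 2 exp(-beta J)
A_exact = -T * np.log(4.0 * np.cosh(beta * J))

# Mean-field bound as a function of the variational field h
h = np.linspace(-3.0, 3.0, 601)
m = np.tanh(beta * h)                          # <s>_0 for independent spins
A0 = -T * np.log((2.0 * np.cosh(beta * h)) ** 2)
bound = A0 + (-J * m**2) - (-2.0 * h * m)      # A0 + <H - H0>_0

print(f"A_exact               = {A_exact:.4f}")
print(f"best Bogoliubov bound = {bound.min():.4f} at h = {h[np.argmin(bound)]:.2f}")
assert bound.min() >= A_exact - 1e-9           # the bound never dips below the exact value

Minimizing the bound over h is exactly the mean-field strategy described above: the best product-state (non-interacting) approximation to the true free energy is selected variationally.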
https://en.wikipedia.org/wiki/Helmholtz_free_energy
In fluid mechanics , the Helmholtz minimum dissipation theorem (named after Hermann von Helmholtz , who published it in 1868 [1] [2] ) states that the steady Stokes flow motion of an incompressible fluid has the smallest rate of viscous dissipation among all incompressible motions with the same velocity on the boundary . [3] [4] The theorem was also studied by Diederik Korteweg in 1883 [5] and by Lord Rayleigh in 1913. [6] The theorem is, in fact, true for any fluid motion where the nonlinear term of the incompressible Navier–Stokes equations can be neglected or, equivalently, when ∇ × ∇ × ω = 0, where ω is the vorticity vector. For example, the theorem also applies to unidirectional flows such as Couette flow and Hagen–Poiseuille flow , where the nonlinear terms disappear automatically. Let u, p and E = ½(∇u + (∇u)ᵀ) be the velocity, pressure and strain-rate tensor of the Stokes flow, and let u′, p′ and E′ = ½(∇u′ + (∇u′)ᵀ) be the velocity, pressure and strain-rate tensor of any other incompressible motion with u = u′ on the boundary. Let u_i and e_ij be the representation of velocity and strain-rate tensor in index notation , where the index runs from one to three. Let Ω ⊂ R³ be a bounded domain with boundary Γ of class C¹. [7] Consider the integral ∫_Ω 2μ e_ij (e′_ij − e_ij) dV. In this integral only the symmetric part of the velocity gradient of the second field survives, because the contraction of a symmetric and an antisymmetric tensor is identically zero. Integration by parts gives ∫_Γ 2μ e_ij (u′_i − u_i) n_j dΓ − ∫_Ω 2μ (∂e_ij/∂x_j)(u′_i − u_i) dV. The first integral is zero because the velocities of the two fields are equal on the boundary. For the second integral, since u_i satisfies the Stokes flow equation , i.e., μ ∇²u_i = ∂p/∂x_i, we can write 2μ ∂e_ij/∂x_j = μ ∇²u_i = ∂p/∂x_i. Again integrating by parts gives ∫_Γ p (u′_i − u_i) n_i dΓ − ∫_Ω p ∂(u′_i − u_i)/∂x_i dV. The first integral is zero because the velocities are equal on the boundary, and the second integral is zero because both flows are incompressible, i.e., ∇·u = ∇·u′ = 0. Therefore we have the identity ∫_Ω 2μ e_ij e′_ij dV = ∫_Ω 2μ e_ij e_ij dV. The total rate of viscous dissipation over the whole volume Ω of the field u′ is D′ = ∫_Ω 2μ e′_ij e′_ij dV, and after a rearrangement using the above identity we get D′ = D + ∫_Ω 2μ (e′_ij − e_ij)(e′_ij − e_ij) dV, where D is the total rate of viscous dissipation over the whole volume of the field u. The second integral is non-negative and zero only if e_ij = e′_ij, thus proving the theorem (D′ ≥ D). The Poiseuille flow theorem , [8] a consequence of the Helmholtz theorem, states that the steady laminar flow of an incompressible viscous fluid down a straight pipe of arbitrary cross-section is characterized by the property that its energy dissipation is least among all laminar (or spatially periodic) flows down the pipe which have the same total flux.
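The Poiseuille corollary can be checked numerically. The sketch below uses an assumed plane-channel geometry (viscosity, channel half-width and perturbation shape are illustrative choices): among profiles with the same wall values and the same flux, every perturbation of the parabolic profile raises the dissipation.

# Plane channel y in [-h, h]: among profiles with u(+-h) = 0 and the same flux,
# the parabolic (Poiseuille) profile has the least dissipation  D = int mu (du/dy)^2 dy.
import numpy as np

mu, h, u0 = 1.0e-3, 1.0, 1.0
y = np.linspace(-h, h, 2001)
dy = y[1] - y[0]

u_pois = u0 * (1.0 - (y / h) ** 2)        # plane Poiseuille profile
v = np.sin(2.0 * np.pi * y / h)           # perturbation: zero at the walls, zero net flux

def dissipation(u):
    du = np.gradient(u, dy)               # du/dy on the uniform grid
    return np.sum(mu * du**2) * dy        # simple quadrature of mu*(du/dy)^2

D_pois = dissipation(u_pois)
for a in (0.05, 0.2, 0.5):                # perturbations of increasing amplitude
    u = u_pois + a * v                    # same wall values and same flux as u_pois
    print(f"a = {a:4.2f}:  D = {dissipation(u):.6e}   (Poiseuille: {D_pois:.6e})")
# Every perturbed profile dissipates more than the parabolic one.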
https://en.wikipedia.org/wiki/Helmholtz_minimum_dissipation_theorem
The Helmholtz theorem of classical mechanics reads as follows: Let H(x, p; V) = K(p) + φ(x; V) be the Hamiltonian of a one-dimensional system, where K = p²/2m is the kinetic energy and φ(x; V) is a "U-shaped" potential energy profile which depends on a parameter V. Let ⟨·⟩_t denote the time average over the motion at energy E, and define E as the energy of the orbit, T = 2⟨K⟩_t, P = ⟨−∂φ/∂V⟩_t, and S = log ∮ √(2m(E − φ(x; V))) dx. Then dS = (dE + P dV)/T. The thesis of this theorem of classical mechanics reads exactly like the heat theorem of thermodynamics . This fact shows that thermodynamic-like relations exist between certain mechanical quantities. This in turn allows one to define the "thermodynamic state" of a one-dimensional mechanical system. In particular the temperature T is obtained from the time average of the kinetic energy, and the entropy S from the logarithm of the action (i.e., ∮ dx √(2m(E − φ(x, V)))). The importance of this theorem was recognized by Ludwig Boltzmann , who saw how to apply it to macroscopic systems (i.e. multidimensional systems), in order to provide a mechanical foundation of equilibrium thermodynamics . This research activity was strictly related to his formulation of the ergodic hypothesis . A multidimensional version of the Helmholtz theorem, based on the ergodic theorem of George David Birkhoff , is known as the generalized Helmholtz theorem. The generalized Helmholtz theorem is the multi-dimensional generalization of the Helmholtz theorem, and reads as follows. Let (q_1, ..., q_s; p_1, ..., p_s) be the canonical coordinates of an s -dimensional Hamiltonian system , and let H(q, p; V) = K(p) + φ(q; V) be the Hamiltonian function, where K is the kinetic energy and φ is the potential energy, which depends on a parameter V. Let the hyper-surfaces of constant energy in the 2s -dimensional phase space of the system be metrically indecomposable, let ⟨·⟩_t denote the time average, and define the quantities E, P, T and S as the corresponding multidimensional analogues of those above. Then the same relation, dS = (dE + P dV)/T, holds.
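A quick symbolic check of the one-dimensional theorem can be done with sympy for an assumed harmonic potential φ(x; V) = V x²/2, treating the spring constant V as the external parameter. The closed forms for the action integral and the time averages used below are standard results for the oscillator, quoted rather than derived.

# Helmholtz theorem for the 1-D harmonic oscillator, phi(x; V) = V*x^2/2.
import sympy as sp

E, V, m = sp.symbols('E V m', positive=True)

# Closed form of the action integral  oint sqrt(2m(E - V x^2/2)) dx  over one period
Phi = 2 * sp.pi * E * sp.sqrt(m / V)
S = sp.log(Phi)                    # "entropy" = log of the action

# Time averages over one oscillation: <K>_t = E/2 and <x^2>_t = E/V, hence
T = 2 * (E / 2)                    # T = 2<K>_t = E
P = -(E / (2 * V))                 # P = <-d(phi)/dV>_t = -<x^2/2>_t = -E/(2V)

# Verify dS = (dE + P dV)/T component-wise
assert sp.simplify(sp.diff(S, E) - 1 / T) == 0
assert sp.simplify(sp.diff(S, V) - P / T) == 0
print("dS = (dE + P dV)/T holds for the harmonic oscillator")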
https://en.wikipedia.org/wiki/Helmholtz_theorem_(classical_mechanics)
Helminthology is the study of parasitic worms (helminths). The field studies the taxonomy of helminths and their effects on their hosts . The first element of the word derives from the Greek ἕλμινς (helmins), meaning "worm". In the 18th and early 19th century there was a wave of publications on helminthology; this period has been described as the science's "Golden Era". During that period the authors Félix Dujardin , [1] William Blaxland Benham , Peter Simon Pallas , Marcus Elieser Bloch , Otto Friedrich Müller , [2] Johann Goeze , Friedrich Zenker , Charles Wardell Stiles , Carl Asmund Rudolphi , Otto Friedrich Bernhard von Linstow [3] and Johann Gottfried Bremser started systematic scientific studies of the subject. [4] The Japanese parasitologist Satyu Yamaguti was one of the most active helminthologists of the 20th century; he wrote the six-volume Systema Helminthum . [5] [6]
https://en.wikipedia.org/wiki/Helminthology
Helmut Hermann W. Hofer (born February 28, 1956) [1] is a German-American mathematician, one of the founders of the area of symplectic topology . [2] He is a member of the National Academy of Sciences [3] and the recipient of the 1999 Ostrowski Prize [4] and the 2013 Heinz Hopf Prize . Since 2009 he has been a faculty member at the Institute for Advanced Study in Princeton, New Jersey. [2] He works on symplectic geometry , dynamical systems , and partial differential equations . His contributions to the field include Hofer geometry. Hofer was elected to the American Academy of Arts and Sciences in 2020. [5] He was an invited speaker at the International Congress of Mathematicians (ICM) in 1990 in Kyoto [6] and a plenary speaker at the ICM in 1998 in Berlin. [7] He is an editor of the Annals of Mathematics . [8]
https://en.wikipedia.org/wiki/Helmut_Hofer
Helopeltis antonii , also known as the tea mosquito bug , is a heteropteran bug of the family Miridae . It has a relatively large geographical distribution and is a known pest of many agricultural "cash" crops such as cocoa, cashew, and tea. Its feeding therefore weighs on economic growth in the regions it inhabits, which has made the species of great biological interest, with significant environmental implications. Helopeltis antonii is found in the Old World tropics, in places such as India, northern Australia, Guinea, Vietnam, Tanzania, Nigeria, and Indonesia. [1] [2] Within this range it is concentrated in agricultural areas. [2] In India its distribution lies primarily within the "cashew belt" along the western coast and central regions of the country, reflecting its strong preference for cashew. [2] Different nations, however, grow particular crops in different parts of their territory, so the crops H. antonii prefers ultimately determine its specific distribution within a country. H. antonii is often mistaken for and misidentified as other Helopeltis species, [2] which has made establishing its exact geographical range difficult. Recent advances in species identification through DNA barcoding have made this much easier: DNA barcoding is a rapid and relatively inexpensive technique that locates unique genetic markers in the DNA, allowing accurate identification of H. antonii and other species alike. [1] Reproduction in H. antonii occurs in four stages (arousal, mounting, copulation, and termination of copulation) and takes place year-round. [3] [4] Arousal, mounting, and termination of copulation occur within a short time frame; copulation is much longer and more variable in length. [3] Mating typically occurs in shaded, covered areas. [3] Arousal involves both chemical and tactile stimuli. [3] Pheromones play an important role in the chemical attraction of females for mating, [3] but physical cues make up the bulk of mate attraction and arousal. [3] Males are the sole initiators of reproductive encounters. This begins with sexual identification of a female partner, which is possible only at close proximity. [3] Once a female is located, the male makes contact by gently probing her body with his antennae. Receptive females remain passive, permitting the male to proceed; non-receptive females move to escape any further male interaction. [3] Following the initial arousal, mounting ensues. Males mount females on the posterior region of the body, allowing the erect male rostrum to stroke the dorsal side of the female just below the thoracic shield. This stroking behaviour quiets the female and allows for easier insertion of the male aedeagus into the female genital aperture. If insertion is not achieved, the male begins a left-to-right stroking motion to aid insertion. Females can also kick or shake males off to prevent further progression of mating; when this occurs, males are quick to remount and re-attempt insertion of the aedeagus into the female genital aperture. Successful insertion leads to copulation.
[3] Once insertion has been established, the male twists around into an end-to-end position for copulation. In this position both the male and female remain still until copulation is complete, which can take anywhere from 10 minutes to 2 hours. [3] Following copulation they disjoin abruptly, although detachment can be difficult because of the male's twisted position. Once separated, both the male and female feed and clean their own genitals and antennae; this feeding and cleaning typically occurs within a few steps of the site of copulation. [3] Females do not respond to any other mating advances immediately following copulation, [3] but they typically reproduce more than once during their lifetime. [5] Interspecific mating is known to occur between Helopeltis species, specifically between H. antonii and H. theivora ; however, it results in the production of unviable eggs. [6] The production of eggs following interspecific mating between H. antonii and H. bradyi has not been observed. [6] This ability or inability to engage in interspecific mating is due to differences in female genital structure: females of both H. antonii and H. theivora have sclerotized rings that are not fused, whereas females of H. bradyi have fused sclerotized rings in their genitalia. The difference acts like a "lock and key" model for genitalia. [6] Males and females are able to reproduce and lay viable eggs from their first day of sexual maturity. [5] Unmated females are capable of laying eggs, but these are sterile. [5] The sex ratio of males to females does not influence the number of eggs a female can lay, but environments with a high ratio reduce female longevity through mating exhaustion. [5] Females that reproduce more than once lay a larger number of eggs during oviposition . [5] Females probe plant tissues with the tip of the rostrum to find a suitable site for depositing their eggs. [3] The exact basis of site choice is unknown, but once a site is found the female bends her abdomen to establish contact between her ovipositor and the plant tissue. [3] The ovipositor is then inserted into the plant tissue and the eggs are deposited below the epidermis and parenchymatous tissue via abdominal contractions. [3] The eggs are ovo-elongate, silvery-white in colour, and approximately 1.0 x 0.3 mm in size. [6] [3] The number of eggs laid is also weather dependent: higher temperatures and increased sun exposure raise abundance, whereas cooler temperatures, less available sunlight, and increased rain exposure reduce it. [4] H. antonii undergoes incomplete (hemimetabolous) metamorphosis, characterized by its transition from egg to nymph and finally to mature adult. [7] This development takes about 25 days from the time the eggs are laid to adulthood. [7] The eggs take 12–13 days to hatch, followed by 12–13 days of progressive nymphal instars . [7] H. antonii passes through 5 instars in total before reaching adulthood. [7] During the first instar the body appears light orange in colour, deepening to a darker orange in the second instar. [7] During the third instar the body begins to develop wing buds and a scutellar horn. [7] Wing pads become visibly prominent as the fourth instar emerges.
[7] Finally, in the fifth instar, the wing pads cover half of the abdomen (the wings being transparent) and the body is light brown in colour, darkening via sclerotization. [7] Also in the fifth instar, the dorsum of the thorax appears red, the tergum of the abdomen dull white, and the dorsal abdominal segment deep orange, with overlapping hemelytra covering the abdomen whose distal end carries a triangular blackish-brown colouration. [7] The less mature first, second and third instars tend to group close to each other and remain near their hatch site for feeding; the more mature fourth and fifth instars are more dispersed and feed in areas farther from the hatch site. [7] Mature females have a characteristic white patch on the fifth abdominal segment. [7] Although colouration is an important identifying feature of H. antonii , it varies with temperature and sunlight exposure. [4] Red colour morphs peak in abundance during October and reach their minimum during February (males) and June (females). [4] Black colour morphs peak during June for both sexes. [4] A brownish-black colour morph is also seen in the population, but its abundance is low and its frequency remains constant throughout the year. [4] H. antonii is a herbivorous insect known to feed on more than 100 different plant species. [2] Feeding sites on these host plants are not localized; both adults and nymphs feed on various sites ranging from tender shoots, buds and stems to fruiting bodies, in order to obtain sap. [8] H. antonii possesses modified mouthparts that form a long straw-like structure known as a " stylet ", [7] which enables it to suck up sap from deep within plant tissues that would not otherwise be easily accessible. [7] H. antonii feeds on both native plants and agriculturally grown crops, [8] but their availability changes with the seasons, because host plants follow different growth cycles throughout the year. As host plants enter their fruiting or flushing stages they produce sap at higher rates and, as a result, become targets of H. antonii. [8] In native, non-cultivated habitats there appears to be a preference for certain host plants even when many others are present: [8] from January to February Annona is preferred, from March to April neem, from May to August papaya, and from September to December Singapore cherry. [8] In addition to native plant species, agricultural "cash crops" such as black pepper, cashew, cocoa, and tea are often at high risk of consumption and damage because of their large-scale cultivation and ease of access. [2] Feeding on these agricultural crops is, however, more restricted by growing and harvest seasons. As with the seasonal preference among plants, preferences are also seen in the consumption of fruits of different plants: in custard apples the immature fruits are preferred over the mature fruits, whereas in Singapore cherry no feeding preference for immature or mature fruits has been observed. [8] Feeding requires the insertion of the stylet into the plant tissues, which results in the secretion of saliva.
The saliva contains toxic substances that kill plant tissue at the feeding site; however, the biochemistry and function of these toxins within the saliva are poorly understood and remain a subject of current research. [9] Because it is a pest of many agricultural crops, causing severe damage to the plants it feeds on, H. antonii has become a major target of efforts to reduce its prevalence in the agricultural industry. Insecticides and pesticides have long been used in attempts to manage and reduce the damage caused by H. antonii feeding, but the effectiveness of these chemicals depends on the concentration and volume of the particular product used. [10] Some of these pesticides are applied at 500 litres per hectare at concentrations ranging from 50 g/L to 500 g/L. [10] The use of such chemical agents poses a risk not only to the environment but also to humans: as exposure and application levels increase, so does toxicity. [10] Additionally, many countries that import these crops will not accept produce carrying pesticide residues. [11] Natural predators and parasitoids have therefore been investigated as biological control agents that could avoid the use of these harmful chemicals. [10] H. antonii is subject to both predation and parasitism by parasitoids. [10] Parasitoids of both nymphs and adults include Hymenoptera ( Braconidae , Platygastridae ) and Diptera ( Sarcophagidae ). [10] Predators are more diverse and include Hymenoptera ( Formicidae , Vespidae ), Coleoptera , Mantodea , and Odonata . [10] Of particular interest and use are hymenopteran parasitoids, specifically Telenomus cupis , because of their high specificity for and specialization on H. antonii eggs. The employment of these parasitoid specialists has significantly decreased the abundance of H. antonii eggs, effectively reducing the insect's devastating impact on agricultural crops. [11] These hymenopteran parasitoids are also among the few parasitoids that are active year-round. [11] The combined use of pesticides and biological control agents is less effective in reducing H. antonii numbers within agricultural systems, because the pesticides also act against the biological control agents, reducing their effectiveness; indeed, the control agents tend to be more affected by pesticides than H. antonii itself. [10] Predators and parasitoids are more affected because their greater mobility exposes them to larger amounts of the synthetic pesticides present on crops. [10] The extensive and prolonged use of pesticides, and their weaker effect on H. antonii compared with its biological control agents, raises concerns about pesticide resistance ; however, there is as yet no evidence that H. antonii has acquired pesticide resistance. [4] The foraging of H. antonii , especially on commercially produced crops, has devastating impacts on overall crop yields, with reductions of as much as 35–75 percent. [10] As more of the native landscape is converted into agricultural land, the food supply for the insect increases, allowing its population to grow. [11] As the population increases, more plant tissue is damaged and injured.
Thus, injured plants can no longer allocate resources to fruit and seed production as they otherwise would; instead they are forced to divert resources and energy into damage control and repair. [11] This alternative allocation of resources is what causes the observed yield reductions. [11] Poor yields mean poor economic outcomes for producers and adverse consequences for consumers, such as higher prices and an overall reduction in the number and quality of available products. Foraging by H. antonii causes necrotic lesions to develop on plant tissues at feeding sites, which can kill new plant buds. [5] Bud death prevents plants from producing fruit, decreasing yield. [5] Similarly, feeding on immature and mature fruits causes fruit desiccation, reducing size and quality, as seen in cashew. [5] Necrotic lesioning and desiccation are not the only factors that reduce yield: following foraging, fungal pathogens can enter the wounded tissues more readily, causing die-back of shoots, and this is the primary cause of inflorescence blight . [9] Although fungal blight occurs in many plants, the wounds caused by H. antonii exacerbate and accelerate its effects. [9] Die-back from blight also limits the plant's ability to grow and produce, further perpetuating yield loss. [9]
https://en.wikipedia.org/wiki/Helopeltis_antonii
Help Conquer Cancer is a volunteer computing project that runs on the BOINC platform. It is a joint project of the Ontario Cancer Institute and the Hauptman-Woodward Medical Research Institute . It is also the first project under World Community Grid to run with a GPU counterpart. [1] The goal is to enhance the efficiency of protein X-ray crystallography , which will enable researchers to determine the structure of many cancer -related proteins faster. This will lead to improving the understanding of the function of these proteins, and accelerate the development of new pharmaceutical drugs .
https://en.wikipedia.org/wiki/Help_Conquer_Cancer
Help Cure Muscular Dystrophy is a volunteer computing project that runs on the BOINC platform. It is a joint effort of the French muscular dystrophy charity, L'Association française contre les myopathies, [1] and L'Institut de biologie moléculaire et cellulaire (Molecular and Cellular Biology Institute). Help Cure Muscular Dystrophy studies the function of various proteins that are produced by the two hundred genes known to be involved in the production of neuromuscular proteins, by modelling the protein-protein interactions of the forty thousand relevant proteins that are listed in the Protein Data Bank . More specifically, it models how a protein would be affected when another protein or a ligand docks with it. [2]
https://en.wikipedia.org/wiki/Help_Cure_Muscular_Dystrophy
Helping behavior refers to voluntary actions intended to help others, whether or not a reward is expected. It is a type of prosocial behavior (voluntary action intended to help or benefit another individual or group of individuals, [1] such as sharing, comforting, rescuing and helping). Altruism is distinguished from helping behavior in this way: altruism refers to prosocial behaviors that are carried out without expectation of obtaining external reward (concrete reward or social reward) or internal reward (self-reward). An example of altruism would be anonymously donating to charity. [2] Kin selection theory explains altruism from an evolutionary perspective. Since natural selection screens out species without the ability to adapt to a challenging environment, preservation of good traits and superior genes is important for the survival of future generations (i.e. inclusive fitness ). [3] Kin selection refers to an inheritable tendency to perform behaviors that may favor the chance of survival of people with a similar genetic base. [4] W. D. Hamilton proposed a mathematical expression for kin selection: altruism is favored when rB > C, "where B is the benefit to the recipient, C is the cost to the altruist (both measured as the number of offspring gained or lost) and r is the coefficient of relationship (i.e. the probability that they share the same gene by descent)." [5] For example, with r = 0.5 for a full sibling, helping is favored when the benefit to the sibling is more than twice the cost to the helper. An experiment conducted in Britain supported kin selection. [5] The result showed that people were more willing to provide help to people with higher relatedness, an effect found in both genders and in various cultures. The result also showed a gender difference in kin selection: men are more affected than women by cues suggesting a similar genetic base. Reciprocal altruism is the idea that the incentive for an individual to help in the present is based on the expectation of receiving help in the future. [6] Robert Trivers argued that it is advantageous for an organism to pay a cost to benefit another, unrelated organism if the favor is repaid (when the benefit of the sacrifice outweighs the cost). As Peter Singer [7] notes, "reciprocity is found amongst all social mammals with long memories who live in stable communities and recognize each other as individuals." Individuals identify cheaters (those who do not reciprocate help), who then lose the benefit of future help from them, as seen, for example, in blood-sharing by vampire bats . [8] Economic trade and business [9] may be fostered by reciprocal altruism, in which products given and received involve different exchanges. [10] Economic trades follow the "I'll scratch your back if you scratch mine" principle. A pattern of frequent giving and receiving of help among workers boosts both productivity and social standing. The negative-state relief model of helping [11] states that people help because of egoism . Egoistic motives lead a person to help others in bad circumstances in order to reduce the personal distress experienced from knowing their situation. Helping behavior happens only when the personal distress cannot be relieved by other actions; this also explains why people may avoid others in need, since avoidance is an alternative way to reduce their own distress. In one study, guilt feelings were induced in subjects by having them accidentally ruin a student's thesis data or by having them see the data being ruined.
Some subjects experienced positive events afterwards, e.g. being praised. Subjects who experienced negative guilt feelings were more motivated to help than those who had a neutral emotion. However, once the negative mood was relieved by receiving praise, subjects no longer had high motivation to help. [ 12 ] A second study found that people who anticipate positive events (in this case, listening to a comedy tape), show low helping motivation since they are expecting their negative emotions to be lifted up by the upcoming stimulation. [ 11 ] People may initiate helping behavior when they feel empathy for the person they are helping—when they can relate to that person and feel and understand what that person is experiencing. [ 13 ] Daniel Batson's Empathy-altruism hypothesis [ 14 ] asserts that the decision of whether to help or not is primarily influenced by the presence of empathy towards the person in need, and secondarily by factors like the potential costs and rewards (social exchange concerns). The hypothesis was supported by a study that divided participants into a high-empathy group and a low-empathy group. [ 15 ] Both groups listened to Janet, a fellow student, sharing her feelings of loneliness. The results indicated that the high-empathy group (instructed to vividly imagine Janet's emotions) volunteered to spend more time with her , regardless of whether their help remained anonymous [ clarification needed ] . This finding underscores the idea that empathetic individuals are more likely to provide assistance, without being primarily motivated by considerations of costs and rewards, thus lending support to the empathy-altruism hypothesis.. A strong influence on helping is feeling responsible to help, especially when combined with the belief that one can help other people. The feeling of responsibility can result from a situation that focuses responsibility on a person, or it can be a personal characteristic (leading to helping when activated by others' need). Ervin Staub described a "prosocial value orientation" that makes helping more likely when noticing a person in physical distress or psychological distress. Prosocial orientation was also negatively related to aggression in boys, and positively related to "constructive patriotism". The components of this orientation are a positive view of human beings, concern about others' welfare, and a feeling of and belief in one's responsibility for others' welfare. [ 16 ] According to the social-exchange theory , people help because they want to gain goods from the one being helped. [ 17 ] People estimate the rewards and costs of helping others, and aim at maximizing the former and minimizing the latter. Rewards are incentives, which can be material goods, social rewards which can improve one's image and reputation (e.g. praise), or self-reward [ clarification needed ] . [ 18 ] Rewards are either external or internal. External rewards are things that are obtained from others when helping them, for instance, friendship and gratitude. People are more likely to help those who are more attractive or important, whose approval is desired. [ 19 ] Internal reward is generated by oneself when helping. This can be, for example, a sense of goodness and self-satisfaction. When seeing someone in distress, we may empathize with that person and thereby become aroused and distressed. We may choose to help in order to reduce this arousal and distress. 
[20] According to this theory, before helping, people consciously weigh the benefits and costs of helping and of not helping, and they help when the overall benefit to themselves of helping outweighs the cost. [21] A major cultural difference is that between collectivism and individualism . Collectivists attend more to the needs and goals of the group they belong to, while individualists focus on themselves. This might suggest that collectivists would be more likely to help ingroup members and would help strangers less frequently than individualists would. [22] Helping behavior is also influenced by the economic environment : in general, the frequency of helping behavior in a country is inversely related to its economic status. [23] A meta-analytic study found that the extremes of settlement size, urban environments of 300,000 people or more and rural environments of 5,000 people or fewer, are the worst places to look for help. [24] Edgar Henry Schein describes three different roles people may follow when they respond to requests for help: the Expert Resource Role, the Doctor Role, and the Process Consultant Role. [25]: 53–54
https://en.wikipedia.org/wiki/Helping_behavior
Hemagglutination , or haemagglutination , is a specific form of agglutination that involves red blood cells (RBCs). It has two common uses in the laboratory: blood typing and the quantification of virus dilutions in a haemagglutination assay . Blood type can be determined by using antibodies that bind to the A or B blood group antigens in a sample of blood. For example, if antibodies that bind the A blood group are added and agglutination occurs, the blood is either type A or type AB. To determine between type A or type AB, antibodies that bind the B group are added and if agglutination does not occur, the blood is type A. If agglutination does not occur with either antibodies that bind to type A or type B antigens, then neither antigen is present on the blood cells, which means the blood is type O. [ 1 ] [ 2 ] In blood grouping, the patient's serum is tested against RBCs of known blood groups and also the patient's RBCs are tested against known serum types. In this way the patient's blood group is confirmed from both RBCs and serum. A direct Coombs test is also done on the patient's blood sample in case there are any confounding antibodies. Many viruses attach to molecules present on the surface of RBCs. A consequence of this is that at certain concentrations, a viral suspension may bind together (agglutinate) the RBCs, thus preventing them from settling out of suspension. Since agglutination is not linked to infectivity, attenuated viruses can therefore be used in assays while an additional assay such as a plaque assay must be used to determine infectivity. By serially diluting a virus suspension into an assay tray (a series of wells of uniform volume) and adding a standard amount of blood cells, an estimation of the number of virus particles can be made. While less accurate than a plaque assay , it is cheaper and quicker (taking just 30 minutes). [ citation needed ] This assay may be modified to include the addition of an antiserum. By using a standard amount of virus, a standard amount of blood cells, and serially diluting the antiserum , one can identify the concentration of the antiserum (the greatest dilution which inhibits hemagglutination).
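The forward-grouping logic described above reduces to a small decision rule. The following Python sketch encodes it; it covers only the forward test against anti-A and anti-B reagents (not the reverse grouping against known cells or the Coombs test), and the function name is invented for illustration.

# Minimal sketch of ABO forward grouping from anti-A / anti-B agglutination results.
def abo_group(agglutinates_with_anti_A: bool, agglutinates_with_anti_B: bool) -> str:
    if agglutinates_with_anti_A and agglutinates_with_anti_B:
        return "AB"
    if agglutinates_with_anti_A:
        return "A"
    if agglutinates_with_anti_B:
        return "B"
    return "O"   # neither antigen present on the red cells

# Example: agglutination with anti-A only -> type A; with neither -> type O
print(abo_group(True, False))   # A
print(abo_group(False, False))  # O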
https://en.wikipedia.org/wiki/Hemagglutination
The hemagglutination assay or haemagglutination assay ( HA ) and the hemagglutination inhibition assay ( HI or HAI ) were developed in 1941–42 by American virologist George Hirst as methods for quantifying the relative concentration of viruses , bacteria , or antibodies. [ 1 ] HA and HAI apply the process of hemagglutination , in which sialic acid receptors on the surface of red blood cells (RBCs) bind to the hemagglutinin glycoprotein found on the surface of influenza virus (and several other viruses) and create a network, or lattice structure, of interconnected RBCs and virus particles. [ 2 ] The agglutinated lattice maintains the RBCs in a suspended distribution, typically viewed as a diffuse reddish solution. The formation of the lattice depends on the concentrations of the virus and RBCs, and when the relative virus concentration is too low, the RBCs are not constrained by the lattice and settle to the bottom of the well. Hemagglutination is observed in the presence of staphylococci, vibrios, and other bacterial species, similar to the mechanism viruses use to cause agglutination of erythrocytes. [ 3 ] [ 4 ] The RBCs used in HA and HI assays are typically from chickens, turkeys, horses, guinea pigs, or humans depending on the selectivity of the targeted virus or bacterium and the associated surface receptors on the RBC. A general procedure for HA is as follows, a serial dilution of virus is prepared across the rows in a U or V- bottom shaped 96-well microtiter plate. [ 5 ] The most concentrated sample in the first well is often diluted to be 1/5x of the stock, and subsequent wells are typically two-fold dilutions (1/10, 1/20, 1/40, etc.).The final well serves as a negative control with no virus. Each row of the plate typically has a different virus and the same pattern of dilutions. After serial dilutions, a standardized concentration of RBCs is added to each well and mixed gently. The plate is incubated for 30 minutes at room temperature. Following the incubation period, the assay can be analyzed to distinguish between agglutinated and non-agglutinated wells. The images across a row will typically progress from agglutinated wells with high virus concentration and a diffuse reddish appearance to a series of wells with low virus concentrations containing a dark red pellet, or button, in the center of the well. The low concentration wells appear nearly identical to the no-virus negative control well. The button appearance occurs because the RBCs are not held in the agglutinated lattice structure and settle into the low point of the U or V-bottom well. The transition from agglutinated to non-agglutinated wells occurs distinctively, within 1 to 2 wells. The relative concentration, or titer, of the virus sample is based on the well with the last agglutinated appearance, immediately before a pellet is observed. [ 6 ] Relative to the initial viral stock concentration, the virus concentration in this well will be some dilution of the stock, for example, 1/40-fold. The titer value of that sample is the inverse of the dilution, i.e., 40. In some cases, the virus is initially so dilute that agglutinated wells are never observed. In that case, the titer of these samples is commonly assigned as 5, indicating the highest possible concentration, but the accuracy of that value is clearly low. Alternatively, if the relative concentration of the virus is extremely high and the wells never transition to a button appearance. 
The titer value is then commonly assigned to be the highest dilution, such as 5120. HI is closely related to the HA assay, but includes anti-viral antibodies as “inhibitors” to interfere with the virus-RBC interaction. The goal is to characterize the concentration of antibodies in the antiserum or other samples containing antibodies. [ 7 ] The HI assay is generally performed by creating a dilution series of antiserum across the rows of a 96-well microtiter plate. Each row would usually be a different sample. A standardized amount of virus or bacteria is added to each well, and the mixture is allowed to incubate at room temperature for 30 minutes. The last well in each row would be a negative control with no virus added. During the incubation, antibodies bind to the viral particles, and if the concentration and binding affinity of the antibodies are high enough, the viral particles are effectively blocked from causing hemagglutination. [ 8 ] Next, a standardized amount of RBCs is added to each well and allowed to incubate at room temperature for an additional 30 minutes. The resulting HI plate images usually progress from non-agglutinated, “button” wells with high antibody concentration to agglutinated, red diffuse wells with low antibody concentration. The HI titer value is the inverse of the last dilution of serum that completely inhibited hemagglutination. [ 9 ] The preceding descriptions of the HA and HI processes are generalized, and specific details can vary depending on the operator and laboratory. For example, serial dilutions across the rows is described, but some laboratories use an alternate orientation and perform dilutions down the columns instead. Similarly, the starting dilution, serial dilution factor, incubation times, and choice of U or V-bottom plate can depend on the specific laboratory. HA and HI have the advantages that the assays are simple, use relatively inexpensive and available instruments and supplies, and provide results within a few hours. The assays are also well established in many laboratories around the world, allowing some measure of credibility, comparison, and standardization. [ 10 ] [ 11 ] Optimal and reliable results require controlling several variables, such as incubation times, red blood cell concentration, and type of red blood cell. [ 12 ] Non-specific factors in the sample can lead to interference and incorrect titer values. For example, molecules in the sample other than virus-specific antibodies can inhibit agglutination between virus and RBCs, as well as potentially blocking antibody from binding to virus. Receptor-destroying enzymes (RDE) are commonly used to treat samples prior to analysis to prevent non-specific inhibition. [ 13 ] Analysis of the HA or HI results relies on a qualified individual to read the plate and determine the titer values. The manual interpretation method introduces more opportunities for discrepancies in the assay because results can be subjective and the agreement between human readers is inconsistent. [ 14 ] Also, there is no digital record of the plate or titer determinations so the initial interpretation is tedious and commonly done in replicates. The range of potential variables and differences between expert readers can make comparing inter-laboratory results difficult. [ 15 ]
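The titer read-out described above can also be expressed as a short routine. The Python sketch below takes one plate row as a list of dilution factors and agglutination readings (both invented for illustration) and reports the reciprocal of the last fully agglutinated dilution; the edge cases mentioned in the text, rows that never agglutinate or never form a button, would instead be assigned the conventional lowest or highest value.

# Hedged sketch of reading an HA titer from one plate row: the titer is the
# reciprocal of the deepest dilution that still shows full agglutination.
def ha_titer(dilutions, agglutinated):
    """dilutions: e.g. [5, 10, 20, 40, ...]; agglutinated: matching booleans."""
    titer = None
    for d, agg in zip(dilutions, agglutinated):
        if agg:
            titer = d          # keep the deepest dilution that still agglutinates
        else:
            break              # wells transition sharply to the "button" appearance
    return titer

dilutions    = [5, 10, 20, 40, 80, 160, 320, 640]
agglutinated = [True, True, True, True, False, False, False, False]
print(ha_titer(dilutions, agglutinated))   # 40 -> reported titer of 40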
https://en.wikipedia.org/wiki/Hemagglutination_assay
The term hemagglutinin (alternatively spelt haemagglutinin, from the Greek haima, 'blood' + Latin gluten, 'glue') refers to any protein that can cause red blood cells (erythrocytes) to clump together ("agglutinate") in vitro. [ 1 ] They do this by binding to the sugar residues on a red blood cell; when a single hemagglutinin molecule binds sugars from multiple red blood cells, it "glues" these cells together. As a result, they are carbohydrate-binding proteins ( lectins ). The ability to bind red blood cell sugars has independently appeared several times, and as a result hemagglutinins do not all bind using the same mechanism. The ability to bind red blood cell sugars is also not necessarily related to the in vivo function of the protein. [ 2 ] The term hemagglutinin is most commonly applied to plant and viral lectins. Natural proteins that clump together red blood cells have been known since the turn of the 19th century. [ 2 ] Virologist George K. Hirst is also credited with "discovering agglutination and hemagglutinin" in 1941. [ 3 ] Alfred Gottschalk proved in 1957 that hemagglutinins bind a virus to a host cell by attaching to sialic acids on carbohydrate side chains of cell-membrane glycoproteins and glycolipids. [ 4 ]

In the viral families Paramyxoviridae and Orthomyxoviridae, viruses use a homotrimeric glycoprotein hemagglutinin on their protein capsids. [ 5 ] [ 6 ] [ 7 ] Hemagglutinins are responsible for binding to receptors, sialic acid residues, on host cell membranes to initiate virus docking and infection. [ 8 ] [ 9 ] Specifically, they recognize cell-surface glycoconjugates containing sialic acid on the surface of host red blood cells with a low affinity and use them to enter the endosome of host cells. [ 10 ] Hemagglutinins tend to recognize α-2,6-linked sialic acids of the host cells in humans and α-2,3-linked sialic acids in avian species, although there is evidence that hemagglutinin specificity can vary. This correlates with the fact that influenza A typically establishes infections in the upper respiratory tract in humans, where many of these α-2,6-linked sialic acids are present. [ 11 ] There are various subtypes of hemagglutinins, of which H1, H2, and H3 are the subtypes to which humans are known to be susceptible. [ 12 ] It is the variation in hemagglutinin (and neuraminidase ) subtypes that requires health organizations (e.g. the WHO ) to constantly update and surveil the circulating flu viruses in human and animal populations (e.g. H5N1 ). In the endosome, hemagglutinins undergo conformational changes caused by a pH drop to 5–6.5, enabling membrane fusion through a fusion peptide. [ 13 ]

Hemagglutinins are small proteins that extend from the surface of the virus membrane as spikes that are 135 Angstroms (Å) in length and 30–50 Å in diameter. [ 20 ] Each spike is composed of three identical monomer subunits, making the protein a homotrimer. These monomers are formed of two glycopeptides, the membrane-distal HA1 and the smaller membrane-proximal HA2, linked by disulphide bridges. X-ray crystallography, NMR spectroscopy, and cryo-electron microscopy were used to solve the protein's structure, the majority of which is α-helical. [ 21 ] In addition to the homotrimeric core structure, hemagglutinins have four subdomains: the membrane-distal receptor-binding R subdomain; the vestigial domain E, which functions as a receptor-destroying esterase; the fusion domain F; and the membrane anchor subdomain M.
The membrane anchor subdomain forms elastic protein chains linking the hemagglutinin to the ectodomain. [ 22 ] On the viral capsids of influenza types A and B, hemagglutinin is initially inactive. Only when cleaved by host proteins does each monomer polypeptide of the homotrimer transform into a dimer composed of HA1 and HA2 subunits attached by disulfide bridges. [ 23 ] The HA1 subunit is responsible for docking the viral capsid onto the host cell by binding to sialic acid residues present on the surface of host respiratory cells. This binding triggers endocytosis. [ 9 ] The pH in the endosomal compartment then decreases from proton influx, and this causes a conformational change in HA that forces the HA2 subunit to “flip outward.” The HA2 subunit is responsible for membrane fusion. It binds to the endosomal membrane, pulling the viral capsid membrane and the endosomal membrane tightly together, eventually forming a pore through which the viral genome can enter the host cell cytoplasm. [ 7 ] From here, the virus can use host machinery to proliferate. For the plant lectin, see phytohaemagglutinin.
https://en.wikipedia.org/wiki/Hemagglutinin
Hemamala Indivari Karunadasa is an assistant professor of chemistry at Stanford University. [ 1 ] [ 2 ] She works on hybrid organic–inorganic materials, such as perovskites, for clean energy and large-area lighting. Karunadasa grew up in Colombo. [ 3 ] She attended high school in Sri Lanka and was a student at Ladies' College, Colombo. [ 4 ] She thought that she would become a doctor, and eventually decided to apply to university in America. [ 3 ] She attended Princeton University, where she worked with Robert Cava on the geometric magnetic frustration of metal oxides. [ 5 ] Cava's excitement about research inspired Karunadasa to continue her own academic career. [ 3 ] Graduating with a degree in chemistry and a certificate in materials science and engineering, Karunadasa joined the University of California, Berkeley for her doctoral studies. There she worked in the lab of Jeffrey R. Long on heavy-atom building units for magnetic materials and electrocatalysts for water splitting. [ 6 ] Karunadasa continued her work on water-splitting electrocatalysts with Jeffrey R. Long and Christopher Chang as a postdoctoral fellow. The molybdenum-oxo metal complex synthesized by Karunadasa is around seventy times cheaper than platinum, the most commonly used metal catalyst in water splitting. [ 4 ] [ 6 ] [ 7 ] She then moved to the California Institute of Technology, where she worked on catalysts for hydrocarbon oxidation with Harry B. Gray as a BP Postdoctoral Fellow. [ 5 ]

Karunadasa began her independent career at Stanford University in 2012. [ 8 ] Her group synthesizes hybrid perovskite materials that combine small organic molecules with inorganic solids. Three-dimensional lead iodide perovskites are being investigated for solar cells, but they can be both unstable and toxic. For example, their sensitivity to water makes them difficult materials to use in the fabrication of large-scale devices. [ 9 ] Karunadasa is interested in ways to mitigate these shortcomings and in any transient changes that may occur when these materials absorb light. [ 9 ] In particular, Karunadasa has created two-dimensional perovskites, with thin inorganic sheets, that can be tuned to emit every colour of visible light. [ 10 ] [ 11 ] In these systems the organic small molecules are sandwiched between the sheets. [ 10 ] [ 12 ] In the case of thick inorganic sheets, the inorganic materials act as absorbers and enhance the stability of the perovskite materials. The organo-metal-halide perovskites created by Karunadasa and her collaborator Michael D. McGehee can be processed in solution. [ 13 ] She believes that through careful chemical design it is possible to determine the fate of photogenerated charge carriers. Karunadasa has investigated the lifetimes of acoustic phonons in lead iodide perovskites with Michael Toney and Aron Walsh. [ 14 ] Her work was featured in the Journal of the American Chemical Society Young Investigators Issue in 2019. [ 22 ] She serves on the editorial board of Inorganic Chemistry.
https://en.wikipedia.org/wiki/Hemamala_Karunadasa
Hemangada Thakura was the King of Mithila from 1571 AD to 1590 AD. He was also a 16th-century Indian astronomer. [ 1 ] He was famous for his astronomical treatise Grahan Mala, which gives the dates of the eclipses for 1088 years, from 1620 AD to 2708 AD. The dates of lunar and solar eclipses that Hemangada Thakura fixed on the basis of his unique calculations have proven to be accurate to date. [ 2 ]

Hemangada Thakura was born in 1530 AD into a Maithil Brahmin family in the Mithila region of the present Bihar state in India. He was the grandson of Mahamahopadhyay Mahesha Thakura and the son of Gopal Thakur. Mahesha Thakura was also a King of Mithila of the Khandwala Dynasty. After the abdication of his father Gopal Thakur, Hemangada Thakura was handed the throne of Mithila in 1571 AD, but he was not interested in governance. In 1572 AD, he was arrested, taken to Delhi, and imprisoned for not paying taxes on time to the Mughal Empire. It is said that in prison he started writing mathematical calculations on the floor of the jail, and the jailer asked him about the figures drawn there. Hemangada Thakura replied that he was trying to understand the motion of the Moon. The jailer then spread the news that Hemangada Thakura had gone mad. After hearing the news, the Mughal Emperor himself went to see Hemangada Thakura and asked about the mathematical calculations and figures drawn on the floor. Hemangada Thakura replied that he had calculated the dates of the eclipses for the next 500 years. After hearing the reply, the emperor immediately granted him copperplates and a pen for writing down the calculations and told him that if his calculations proved true, he would be released from prison. There in prison, he composed his famous book Grahan Mala, which explains the eclipses for 1088 years. He predicted the date and time of the next lunar eclipse and informed the emperor. The next lunar eclipse occurred on the same date and at the same time as he had calculated. [ 3 ] On the completion of this book, the Mughal emperor not only released him but also returned the tax-free kingdom of Mithila.

Hemangada Thakura calculated the dates of the eclipses for 1088 years, from 1620 AD to 2708 AD, on the basis of his unique calculations, and the eclipse dates have proven to be accurate to date. He composed an astronomical treatise known as Grahan Mala which explains the dates of the eclipses; in making the Panchang, scholars and pandits take the help of this book. The manuscript of the book was preserved in Kameshwar Singh Darbhanga Sanskrit University, but it was stolen a few years ago. The university had, however, published the book in 1983, and copies are present in various libraries. [ 4 ] [ 5 ] [ 6 ] The Indian National Science Academy started a research project through its national commission (2014–2022), carried out by Vanaja V, on the astronomical treatise Grahan Mala. The research project is known as “A Critical Study of Hemangada Thakkura’s Grahaṇamala”. [ 7 ]
https://en.wikipedia.org/wiki/Hemangada_Thakura
Hemangioblasts are the multipotent precursor cells that can differentiate into both hematopoietic and endothelial cells. [ 1 ] [ 2 ] [ 3 ] In the mouse embryo, the emergence of blood islands in the yolk sac at embryonic day 7 marks the onset of hematopoiesis. From these blood islands, the hematopoietic cells and vasculature are formed shortly afterwards. Hemangioblasts are the progenitors that form the blood islands. To date, the hemangioblast has been identified in human, mouse and zebrafish [ 4 ] embryos. Hemangioblasts were first extracted from embryonic cultures and manipulated with cytokines to differentiate along either the hematopoietic or the endothelial route. It has been shown that these pre-endothelial/pre-hematopoietic cells in the embryo arise out of a phenotype CD34 population. It was then found that hemangioblasts are also present in the tissue of post-natal individuals, such as newborn infants and adults. There is now emerging evidence of hemangioblasts that continue to exist in the adult as circulating stem cells in the peripheral blood and that can give rise to both endothelial cells and hematopoietic cells. These cells are thought to express both CD34 and CD133. [ 5 ] They are likely derived from the bone marrow, and may even be derived from hematopoietic stem cells.

The hemangioblast was first hypothesized in 1900 by Wilhelm His. The existence of the hemangioblast was proposed in 1917 by Florence Sabin, who observed the close spatial and temporal proximity of the emergence of blood vessels and red blood cells within the yolk sac in chick embryos. [ 6 ] In 1932, making the same observation as Sabin, Murray coined the term “hemangioblast”. [ 7 ] The hypothesis of a bipotential precursor was further supported by the fact that endothelial cells and hematopoietic cells share many of the same markers, including Flk1, Vegf, CD34, Scl, Gata2, Runx1, and Pecam-1. Furthermore, it was shown that depletion of Flk1 in the developing embryo results in the disappearance of both hematopoietic cells and endothelial cells. [ 8 ]

In 1997, Kennedy from the Keller Lab first isolated the in vitro equivalent of the hemangioblast. These cells were termed blast colony-forming cells (BL-CFC). Using aggregates of differentiating mouse embryonic stem cells called embryoid bodies, the authors plated cells at a point in the differentiation timeline just prior to the emergence of hematopoietic cells. In the presence of the proper cytokines, a subset of these cells was able to differentiate into hematopoietic lineages. [ 9 ] In addition, these same cells can also be differentiated into endothelial cells, as shown by Choi of the Keller Lab. [ 10 ] In 2004, hemangioblasts were isolated in the mouse embryo by Huber of the Keller Lab. They are derived from the posterior primitive streak region of the mesoderm in the gastrulating embryo. By using limiting dilutions, the authors demonstrated that the resulting hematopoietic and endothelial cells were indeed of clonal origin, proving that they had successfully isolated the hemangioblast in the developing embryo. [ 11 ]
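The limiting-dilution argument for clonal origin can be illustrated with the standard single-hit Poisson model: if each colony arises from a single precursor cell, the fraction of wells without a colony declines exponentially with the number of cells plated per well, so the precursor frequency can be estimated from the negative-well fraction. The sketch below is a minimal illustration under that assumption; the function name and the example numbers are hypothetical and are not taken from the cited studies.

```python
import math

def precursor_frequency(cells_per_well, fraction_negative_wells):
    """Single-hit Poisson estimate: F_negative = exp(-f * N), so
    f = -ln(F_negative) / N, where N is the number of cells plated per well."""
    return -math.log(fraction_negative_wells) / cells_per_well

# Example: plating 1,000 cells per well and observing 37% negative wells
# implies a precursor frequency of roughly 1 in 1,000 plated cells.
f = precursor_frequency(1000, 0.37)
print(f"~1 precursor per {round(1 / f)} cells plated")
```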
https://en.wikipedia.org/wiki/Hemangioblast
Hematochrome is a yellow, orange, or (most commonly) red biological pigment present in some green algae, especially when exposed to intense light. It is a name used mainly in older literature. Hematochrome is a mixture of carotenoid pigments and their derivatives. [ 1 ] [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Hematochrome
Heme ( American English ), or haem ( Commonwealth English , both pronounced / hi:m / HEEM ), is a ring-shaped iron-containing molecular component of hemoglobin , which is necessary to bind oxygen in the bloodstream . It is composed of four pyrrole rings with 2 vinyl and 2 propionic acid side chains. [ 1 ] Heme is biosynthesized in both the bone marrow and the liver . [ 2 ] Heme plays a critical role in multiple different redox reactions in mammals, due to its ability to carry the oxygen molecule. Reactions include oxidative metabolism ( cytochrome c oxidase , succinate dehydrogenase ), xenobiotic detoxification via cytochrome P450 pathways (including metabolism of some drugs), gas sensing ( guanyl cyclases , nitric oxide synthase), and microRNA processing (DGCR8). [ 3 ] [ 4 ] Heme is a coordination complex "consisting of an iron ion coordinated to a tetrapyrrole acting as a tetradentate ligand , and to one or two axial ligands". [ 5 ] The definition is loose, and many depictions omit the axial ligands. [ 6 ] Among the metalloporphyrins deployed by metalloproteins as prosthetic groups , heme is one of the most widely used [ 7 ] and defines a family of proteins known as hemoproteins . Hemes are most commonly recognized as components of hemoglobin , the red pigment in blood , but are also found in a number of other biologically important hemoproteins such as myoglobin , cytochromes , catalases , heme peroxidase , and endothelial nitric oxide synthase . [ 8 ] [ 9 ] The word haem is derived from Greek αἷμα haima 'blood'. Hemoproteins have diverse biological functions including the transportation of diatomic gases, chemical catalysis , diatomic gas detection, and electron transfer . The heme iron serves as a source or sink of electrons during electron transfer or redox chemistry. In peroxidase reactions, the porphyrin molecule also serves as an electron source, being able to delocalize radical electrons in the conjugated ring. In the transportation or detection of diatomic gases, the gas binds to the heme iron. During the detection of diatomic gases, the binding of the gas ligand to the heme iron induces conformational changes in the surrounding protein. [ 10 ] In general, diatomic gases only bind to the reduced heme, as ferrous Fe(II) while most peroxidases cycle between Fe(III) and Fe(IV) and hemeproteins involved in mitochondrial redox, oxidation-reduction, cycle between Fe(II) and Fe(III). It has been speculated that the original evolutionary function of hemoproteins was electron transfer in primitive sulfur -based photosynthesis pathways in ancestral cyanobacteria -like organisms before the appearance of molecular oxygen . [ 11 ] Hemoproteins achieve their remarkable functional diversity by modifying the environment of the heme macrocycle within the protein matrix. [ 12 ] For example, the ability of hemoglobin to effectively deliver oxygen to tissues is due to specific amino acid residues located near the heme molecule. [ 13 ] Hemoglobin reversibly binds to oxygen in the lungs when the pH is high, and the carbon dioxide concentration is low. When the situation is reversed (low pH and high carbon dioxide concentrations), hemoglobin will release oxygen into the tissues. This phenomenon, which states that hemoglobin's oxygen binding affinity is inversely proportional to both acidity and concentration of carbon dioxide, is known as the Bohr effect . 
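The inverse relationship described by the Bohr effect can be sketched numerically with the Hill equation and an empirical Bohr coefficient. The snippet below is a rough illustration only, using approximate textbook values (a P50 of about 26.8 mmHg at pH 7.4, a Hill coefficient near 2.8, and a Bohr coefficient of about -0.48 per pH unit); the function names are hypothetical and no clinical accuracy is implied.

```python
def p50(pH, p50_standard=26.8, bohr_coefficient=-0.48, pH_standard=7.4):
    """Approximate hemoglobin P50 (mmHg) at a given pH, shifting the
    standard value according to an empirical Bohr coefficient."""
    return p50_standard * 10 ** (bohr_coefficient * (pH - pH_standard))

def o2_saturation(pO2, pH, hill_n=2.8):
    """Hill-equation estimate of the fractional O2 saturation of hemoglobin."""
    half_sat = p50(pH)
    return pO2 ** hill_n / (half_sat ** hill_n + pO2 ** hill_n)

# At a tissue oxygen tension of about 40 mmHg, lowering the pH (more CO2,
# more acid) lowers the saturation, i.e. hemoglobin releases more oxygen.
print(round(o2_saturation(40, 7.4), 2))  # roughly 0.75
print(round(o2_saturation(40, 7.2), 2))  # noticeably lower
```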
[ 14 ] The molecular mechanism behind this effect is the steric organization of the globin chain; a histidine residue, located adjacent to the heme group, becomes positively charged under acidic conditions (which are caused by dissolved CO 2 in working muscles, etc.), releasing oxygen from the heme group. [ 15 ] There are several biologically important kinds of heme: The most common type is heme B ; other important types include heme A and heme C . Isolated hemes are commonly designated by capital letters while hemes bound to proteins are designated by lower case letters. Cytochrome a refers to the heme A in specific combination with membrane protein forming a portion of cytochrome c oxidase . [ 18 ] The names of cytochromes typically (but not always) reflect the kinds of hemes they contain: cytochrome a contains heme A, cytochrome c contains heme C, etc. This convention may have been first introduced with the publication of the structure of heme A . The practice of designating hemes with upper case letters was formalized in a footnote in a paper by Puustinen and Wikstrom, [ 26 ] which explains under which conditions a capital letter should be used: "we prefer the use of capital letters to describe the heme structure as isolated. Lowercase letters may then be freely used for cytochromes and enzymes, as well as to describe individual protein-bound heme groups (for example, cytochrome bc, and aa3 complexes, cytochrome b 5 , heme c 1 of the bc 1 complex, heme a 3 of the aa 3 complex, etc)." In other words, the chemical compound would be designated with a capital letter, but specific instances in structures with lowercase. Thus cytochrome oxidase, which has two A hemes (heme a and heme a 3 ) in its structure, contains two moles of heme A per mole protein. Cytochrome bc 1 , with hemes b H , b L , and c 1 , contains heme B and heme C in a 2:1 ratio. The practice seems to have originated in a paper by Caughey and York in which the product of a new isolation procedure for the heme of cytochrome aa3 was designated heme A to differentiate it from previous preparations: "Our product is not identical in all respects with the heme a obtained in solution by other workers by the reduction of the hemin a as isolated previously (2). For this reason, we shall designate our product heme A until the apparent differences can be rationalized." [ 27 ] In a later paper, [ 28 ] Caughey's group uses capital letters for isolated heme B and C as well as A. The enzymatic process that produces heme is properly called porphyrin synthesis, as all the intermediates are tetrapyrroles that are chemically classified as porphyrins. The process is highly conserved across biology. In humans, this pathway serves almost exclusively to form heme. In bacteria , it also produces more complex substances such as cofactor F430 and cobalamin ( vitamin B 12 ). [ 29 ] The pathway is initiated by the synthesis of δ-aminolevulinic acid (dALA or δALA) from the amino acid glycine and succinyl-CoA from the citric acid cycle (Krebs cycle). The rate-limiting enzyme responsible for this reaction, ALA synthase , is negatively regulated by glucose and heme concentration. Mechanism of inhibition of ALAs by heme or hemin is by decreasing stability of mRNA synthesis and by decreasing the intake of mRNA in the mitochondria. 
This mechanism is of therapeutic importance: infusion of heme arginate or hematin and glucose can abort attacks of acute intermittent porphyria in patients with an inborn error of metabolism of this process, by reducing transcription of ALA synthase. [ 30 ] The organs mainly involved in heme synthesis are the liver (in which the rate of synthesis is highly variable, depending on the systemic heme pool) and the bone marrow (in which the rate of heme synthesis is relatively constant and depends on the production of globin chains), although every cell requires heme to function properly. However, due to its toxic properties, proteins such as hemopexin (Hx) are required to help maintain physiological stores of iron in order for them to be used in synthesis. [ 31 ] Heme is also seen as an intermediate molecule in the catabolism of hemoglobin during bilirubin metabolism. Defects in the various enzymes of heme synthesis can lead to a group of disorders called porphyrias; these include acute intermittent porphyria, congenital erythropoietic porphyria, porphyria cutanea tarda, hereditary coproporphyria, variegate porphyria, and erythropoietic protoporphyria. [ 32 ]

Impossible Foods, producers of plant-based meat substitutes, use an accelerated heme synthesis process involving soybean root leghemoglobin and yeast, adding the resulting heme to items such as meatless (vegan) Impossible burger patties. The DNA for leghemoglobin production was extracted from the soybean root nodules and expressed in yeast cells to overproduce heme for use in the meatless burgers. [ 33 ] This process is claimed to create a meaty flavor in the resulting products. [ 34 ] [ 35 ]

Degradation begins inside macrophages of the spleen, which remove old and damaged erythrocytes from the circulation. In the first step, heme is converted to biliverdin by the enzyme heme oxygenase (HO). [ 36 ] NADPH is used as the reducing agent, molecular oxygen enters the reaction, carbon monoxide (CO) is produced, and the iron is released from the molecule as the ferrous ion (Fe2+). [ 37 ] CO acts as a cellular messenger and functions in vasodilation. [ 38 ] In addition, heme degradation appears to be an evolutionarily conserved response to oxidative stress. Briefly, when cells are exposed to free radicals, there is a rapid induction of the expression of the stress-responsive heme oxygenase-1 (HMOX1) isoenzyme that catabolizes heme (see below). [ 39 ] The reason why cells must exponentially increase their capability to degrade heme in response to oxidative stress remains unclear, but this appears to be part of a cytoprotective response that avoids the deleterious effects of free heme. When large amounts of free heme accumulate, the heme detoxification/degradation systems become overwhelmed, allowing heme to exert its damaging effects. [ 31 ] In the second reaction, biliverdin is converted to bilirubin by biliverdin reductase (BVR). [ 40 ] Bilirubin is transported into the liver by facilitated diffusion bound to a protein (serum albumin), where it is conjugated with glucuronic acid to become more water-soluble. The reaction is catalyzed by the enzyme UDP-glucuronosyltransferase. [ 41 ] This form of bilirubin is excreted from the liver in bile. Excretion of bilirubin from the liver to the biliary canaliculi is an active, energy-dependent and rate-limiting process. The intestinal bacteria deconjugate bilirubin diglucuronide, releasing free bilirubin, which can either be reabsorbed or reduced to urobilinogen by the bacterial enzyme bilirubin reductase.
[ 42 ] Some urobilinogen is absorbed by intestinal cells, transported into the kidneys, and excreted with the urine as urobilin, the oxidation product of urobilinogen that is responsible for the yellow colour of urine. The remainder travels down the digestive tract and is converted to stercobilinogen. This is oxidized to stercobilin, which is excreted and is responsible for the brown color of feces. [ 43 ]

Under homeostasis, the reactivity of heme is controlled by its insertion into the "heme pockets" of hemoproteins. [ citation needed ] Under oxidative stress, however, some hemoproteins, e.g. hemoglobin, can release their heme prosthetic groups. [ 44 ] [ 45 ] The non-protein-bound (free) heme produced in this manner becomes highly cytotoxic, most probably due to the iron atom contained within its protoporphyrin IX ring, which can act as a Fenton's reagent to catalyze the unfettered production of free radicals. [ 46 ] It catalyzes the oxidation and aggregation of proteins and the formation of cytotoxic lipid peroxides via lipid peroxidation, and it damages DNA through oxidative stress. Due to its lipophilic properties, it impairs lipid bilayers in organelles such as mitochondria and nuclei. [ 47 ] These properties of free heme can sensitize a variety of cell types to undergo programmed cell death in response to pro-inflammatory agonists, a deleterious effect that plays an important role in the pathogenesis of certain inflammatory diseases such as malaria [ 48 ] and sepsis. [ 49 ]

There is an association between high intake of heme iron sourced from meat and increased risk of colorectal cancer via the formation of nitrosamines (N-nitroso compounds) during digestion. [ 50 ] The American Institute for Cancer Research (AICR) and World Cancer Research Fund International (WCRF) concluded in a 2018 report that there is limited but suggestive evidence that foods containing heme iron increase the risk of colorectal cancer. [ 51 ] A 2019 review found that heme iron intake is associated with increased breast cancer risk. [ 52 ]

The following genes are part of the chemical pathway for making heme:
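The eight enzymatic steps of that pathway, together with the human gene symbols commonly associated with each enzyme, can be summarized in the brief sketch below (an illustrative mapping using standard gene names for orientation, not an exhaustive listing of isoforms or regulatory genes):

```python
# Illustrative sketch: the eight canonical enzymatic steps of human heme
# biosynthesis and the gene symbols commonly associated with each enzyme.
# The first step (ALA synthase) is the rate-limiting, heme-regulated step
# described above.
HEME_SYNTHESIS_STEPS = [
    ("glycine + succinyl-CoA -> delta-aminolevulinic acid", "ALA synthase", ["ALAS1", "ALAS2"]),
    ("delta-aminolevulinic acid -> porphobilinogen", "ALA dehydratase", ["ALAD"]),
    ("porphobilinogen -> hydroxymethylbilane", "hydroxymethylbilane synthase", ["HMBS"]),
    ("hydroxymethylbilane -> uroporphyrinogen III", "uroporphyrinogen III synthase", ["UROS"]),
    ("uroporphyrinogen III -> coproporphyrinogen III", "uroporphyrinogen decarboxylase", ["UROD"]),
    ("coproporphyrinogen III -> protoporphyrinogen IX", "coproporphyrinogen oxidase", ["CPOX"]),
    ("protoporphyrinogen IX -> protoporphyrin IX", "protoporphyrinogen oxidase", ["PPOX"]),
    ("protoporphyrin IX + Fe(II) -> heme", "ferrochelatase", ["FECH"]),
]

for reaction, enzyme, genes in HEME_SYNTHESIS_STEPS:
    print(f"{enzyme} ({', '.join(genes)}): {reaction}")
```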
https://en.wikipedia.org/wiki/Heme