Hazardous Material Emergency Alarm Systems (HMEAS) are stand-alone systems for protecting people from exposure to dangerous substances in the workplace.
HMEAS are designed for a wide variety of commercial and industrial environments, including refineries, industrial processes and warehousing, educational and commercial research facilities, as well as hospitals and even restaurants. The systems are designed to both alert and respond – activating warnings and alarms and, depending on the extent of the danger, triggering corrective actions that could include material shutoffs, increased ventilation and the opening or closing of access to the exposed area.
Hazardous Material Emergency Alarm Systems (HMEAS) should be designed to meet the safety requirements embodied in the Building and Fire Codes used throughout North America and Europe. [ 1 ]
| https://en.wikipedia.org/wiki/Hazardous_Material_Emergency_Alarm_System |
The Hazardous Materials Transportation Act (HMTA), enacted in 1975, is the principal federal law in the United States regulating the transportation of hazardous materials. Its purpose is to "protect against the risks to life, property, and the environment that are inherent in the transportation of hazardous material in intrastate, interstate, and foreign commerce" under the authority of the United States Secretary of Transportation. [ 1 ]
The Act was passed as a means to improve the uniformity of existing regulations for transporting hazardous materials and to prevent spills and illegal dumping endangering the public and the environment, a problem exacerbated by uncoordinated and fragmented regulations. [ 2 ] Regulations are enforced through four key provisions encompassing federal standards under Title 49 of the United States Code :
Violation of the HMTA regulations can result in civil or criminal penalties, unless a special permit is granted under the discretion of the Secretary of Transportation. [ 3 ]
In the 1970s, landfills throughout the United States began to refuse the acceptance of hazardous wastes for the protection of property, the environment, and liability from what would later become known as Superfund sites, which dramatically increased the cost of disposal. [ 2 ] The high cost of disposal led to increased dumping of materials that were increasingly being deemed "hazardous" by the public and government. Illegal dumping took place on vacant lots, along highways, or on the actual highways themselves.
At the same time, increased accidents and incidents with hazardous materials during transportation were a growing problem, causing damage to property and the environment, injury, and death. [ 2 ] [ 4 ] At the time, the U.S. Department of Transportation estimated that 75% of all hazardous waste shipments violated existing regulations due to a lack of inspection personnel and poor coordination among the U.S. Coast Guard, the Federal Aviation Administration, the Federal Highway Administration, and the Federal Railroad Administration. [ 2 ] The increasing frequency of illegal "midnight" dumping and spills, along with the already existing inconsistent regulations and fragmented enforcement, led to the passing of the Hazardous Materials Transportation Act. It was signed into law on January 3, 1975 by President Gerald Ford, as a means to strengthen the Hazardous Materials Transportation Control Act of 1970 and unify existing regulations.
Since its passage, the HMTA has had two major amendments: the Hazardous Materials Transportation Uniform Safety Act of 1990 and a further amendment signed in 1994, both discussed below.
It is estimated that the United States alone makes over 500,000 shipments of hazardous materials every day. [ 5 ] More than 90% of these shipments are transported by truck, and anywhere from 5–15% of those trucks are carrying hazardous materials regulated under the HMTA. Approximately 50% of those materials are corrosive or flammable petroleum products, while the remaining shipments represent any of the 2,700 other chemicals considered hazardous in interstate commerce. Accidents that occurred in the transportation of hazardous materials resulted in injury, death, and the destruction of property and the environment. However, the accidents were not limited to the road: railway accidents accounted for the second-highest number of hazardous waste incidents, behind only road accidents. The passage of the HMTA (and its subsequent amendments) has significantly reduced the number and severity of incidents involving hazardous materials in transportation. [ 4 ]
The HMTA is one of the eight laws defining the EPA's Emergency Management Program. The other laws comprising the Emergency Management Program include the Clean Air Act (CAA), the Clean Water Act (CWA), the Oil Pollution Act (OPA), the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), Superfund Amendments and Reauthorization Act (SARA), the Emergency Planning and Community Right-to-Know Act (EPCRA), and the Chemical Safety Information, Site Security and Fuels Regulatory Relief Act (CSISSFRRA). [ 6 ]
The primary objective of the HMTA is to protect "life, property, and the environment" [ 1 ] from the inherent risks of transporting hazardous material, in all major modes of commerce, by improving the regulation and enforcement authority of the Secretary of Transportation. It is in the Secretary's authority to designate material or a group or class of material as hazardous when they meet the definition of hazardous material under the Act.
A hazardous material , as defined by the Secretary, is any particular quantity or form of a material that may pose an unreasonable risk to health and safety or property during transportation in commerce. [ 7 ] This includes materials that are explosive, radioactive, infectious, flammable, toxic, oxidizing, or corrosive.
Hazardous wastes and hazardous substances are designated by the U.S. Environmental Protection Agency (EPA). Hazardous wastes are designated under the EPA's Resource Conservation and Recovery Act , while hazardous substances are designated by the Clean Water Act (CWA) and the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA). [ 5 ] The HMTA regulates all essential modes of transportation due to the dangers hazardous materials can present during shipment by ground, air, sea, or any other mode of transportation, such as through a pipeline. [ 5 ]
Regulations under the Act are categorized into four key provisions , encompassing federal standards under Title 49 of the United States Code that guide the safe transportation of hazardous materials:
The HMTA specifically states that regulations apply to any person who —
Essentially, the regulations apply to all persons involved in preparing hazardous materials for transportation, though the primary burden of liability falls on the shipper of the hazardous materials (the person who offers the shipment). Carriers are only required to ensure that the required information accompanying hazardous materials packages is immediately available to personnel who would respond to an incident or conduct a hazardous materials investigation, per the amendments enforced in the Hazardous Materials Transportation Uniform Safety Act of 1990 . [ 5 ]
Regulations are enforced by use of compliance orders, civil penalties , and injunctive relief , [ 8 ] under the discretion of the Secretary of Transportation.
As the Act stands now (with its latest amendments), the Department of Transportation (DOT) is most concerned with the test conditions of packages, rather than the transportation conditions. Enforcement includes random packaging inspections by DOT inspectors at freight terminals, intermodal transfer facilities, airports, and other facilities to determine compliance with proper marking and labeling of packaging. DOT has also stated its intent to inspect manufacturing facilities, testing facilities, and shippers' facilities where manufacturing operations occur. [ 5 ]
As the current statute stands, the "HMTA (Section 112, 40 U.S.C. 1811) preempts state and local governmental requirements that are inconsistent with the statute, unless that requirement affords an equal or greater level of protection to the public than the HMTA requirement." [ 5 ] [ 8 ]
The Hazardous Materials Transportation Act is implemented through various agencies based on the mode of transportation and the type of hazardous material being transported:
§5117 provides that the Secretary may "issue, modify, or terminate" a special permit authorizing a variance to regulations prescribed under §5103(b), §5104, §5110, or §5112 of the Act [ 3 ] to a person performing the functions under §5103(b) in a way that achieves a safety level that —
Special permits are effective for an initial period of no more than 2 years. Renewal of the special permits is granted under the Secretary's discretion upon application for the permit for successive periods of no more than 4 years each. In the case of a special permit relating to §5112, the additional period following permit renewal must be no more than 2 years each.
To apply for a special permit, the applicant must provide a safety analysis prescribed by the Secretary that justifies the special permit, and submit the application to the Administrator of the Pipeline and Hazardous Materials Safety Administration . [ 9 ] The Secretary then must publish notice of the application in the Federal Register to give the opportunity for public review and comment. [ 3 ]
Upon the applicant's filing of the application, the Secretary must issue, renew, or deny the application within 180 days after the first day of the month following the filing date. If more time is needed, the Secretary must publish a statement to the Federal Register addressing the reason for the delay in the Secretary's decision on the permit, along with an estimate for when the decision will be made. [ 3 ]
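The 180-day clock described above runs from the first day of the month following the filing date, not from the filing date itself, which is easy to misread. The following sketch is a minimal Python illustration of that reading; the function name and example dates are hypothetical.

```python
from datetime import date, timedelta

def special_permit_decision_deadline(filing_date: date) -> date:
    """Illustrative reading of the timeline described above: the Secretary
    must issue, renew, or deny within 180 days after the first day of the
    month following the filing date."""
    # First day of the month following the filing date.
    if filing_date.month == 12:
        clock_start = date(filing_date.year + 1, 1, 1)
    else:
        clock_start = date(filing_date.year, filing_date.month + 1, 1)
    return clock_start + timedelta(days=180)

# An application filed mid-March starts its 180-day clock on 1 April.
print(special_permit_decision_deadline(date(2024, 3, 15)))  # 2024-09-28
```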
The Secretary, after completing a review of the circumstance for the permit, and after providing opportunity for public comment and review, must either institute a new rule incorporating the special permit into the regulations of the Act, or publish in the Federal Register the justification for not including the special permit into the regulations. [ 3 ]
§5103(b) : Regulations for safe transportation prescribed by the Secretary for people who —
§5104 : Representation and tampering regulations for a package, component of a package, or packaging intended for the use of transporting hazardous material. [ 10 ]
§5110 : Shipping papers and disclosure regulations. [ 11 ]
§5112 : Highway routing of hazardous material regulations. [ 12 ]
Under §5123, a person is liable for a civil penalty of up to $75,000 for each violation of a "regulation, order, special permit or approval" of the Act that has been knowingly committed. A separate violation is considered for each day the violation, committed by a person who transports or causes the transportation of a hazardous material, continues. [ 13 ] A person acts knowingly when —
At the Secretary's discretion, the penalty amount may be increased to up to $175,000 if the violation results in death, serious illness, or injury to any person, or in substantial damage to property. Penalties for violations related to training activities must be at least $450.
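As a rough illustration of the dollar figures quoted above (a simplified sketch, not the statutory assessment procedure, and omitting the factors the Secretary must weigh), the maximum exposure for a continuing violation might be tallied as follows.

```python
def max_civil_penalty_exposure(days_of_violation: int,
                               death_injury_or_major_damage: bool = False,
                               training_related: bool = False) -> int:
    """Maximum exposure using the figures quoted above: up to $75,000 per
    violation, each day of a continuing violation counted separately, an
    aggravated ceiling of $175,000, and a $450 floor for training-related
    violations. Illustrative only."""
    per_violation_cap = 175_000 if death_injury_or_major_damage else 75_000
    exposure = per_violation_cap * max(days_of_violation, 1)
    return max(exposure, 450) if training_related else exposure

# A violation that continued for three days, at the ordinary ceiling:
print(max_civil_penalty_exposure(3))  # 225000
```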
In determining the amount of the civil penalty, the Secretary must consider —
An opportunity for a hearing must be granted to the violator, along with a written notice from the Secretary specifying the amount of the penalty.
A person is subject to a criminal penalty under §5124 if that person knowingly tampers with the labels or packages used for transporting hazardous material, or "willfully or recklessly" violates a "regulation, order, special permit, or approval" under the Act and shall be fined under Title 18 of the United States Code , imprisoned for no more than 5 years, or both. A violation under this section that results in the release of hazardous material causing bodily injury or death to any person can render a maximum prison penalty of 10 years.
Under §5104, tampering refers to the alteration, removal, destruction, or otherwise unlawful tampering of —
A person acts knowingly when —
Knowledge of the existence of a statutory regulation required by the Secretary is not considered an element of offense.
A person acts willfully when —
A person acts recklessly when the person displays a deliberate indifference or conscious disregard to the consequences of that person's conduct.
Procedures on proper handling and preparation for handling hazardous materials, as well as finding out information about implementing the Act (permitting procedures, registration procedures, adding a regulation into the Act, etc.), can be found under this provision.
Under §5106, the Secretary of Transportation may prescribe criteria for handling hazardous material, including—
Under §5107, the hazmat employee training requirements and grants are summarized:
(A) Training requirements — The Secretary shall prescribe by regulation requirements for training that a hazmat employer must give hazmat employees of the employer on the safe loading, unloading, handling, storing, and transporting of hazardous material and emergency preparedness for responding to an accident or incident involving the transportation of hazardous material.
(B) Beginning and completing training — A hazmat employer shall begin the training of hazmat employees of the employer no later than 6 months after the Secretary prescribes the regulations under subsection (a) of this section.
(C) Certification of training — After completing the training, each hazmat employer shall certify, with documentation the Secretary may require by regulation, that the hazmat employees of the employer have received training and have been tested on appropriate transportation areas of responsibility, including at least one of the following:
(D) Coordination of training requirements — In consultation with the Administrator of the Environmental Protection Agency and the United States Secretary of Labor , the Secretary shall ensure that the training requirements prescribed under this section do not conflict with or duplicate—
(E) Training grants —
(F) Training of certain employees — The Secretary shall ensure that maintenance-of-way employees and railroad signalmen receive general awareness and familiarization training and safety training. [ 15 ]
Under the Act, transporting hazardous material requires regulations unique to the type of hazardous materials being transported.
The table listing all hazardous materials regulated by the Act for transportation was formerly published at www.phmsa.dot.gov/staticfiles/PHMSA/DownloadableFiles/Files/Hazmat/Alpha_Hazmat_Table.xls.
This table identifies the hazard class of the material to inform specific packaging requirements, [ 16 ] or outlines whether the material is forbidden in transportation. [ 17 ]
Each person who offers transportation of hazardous materials must describe the material on accompanied shipping papers. The papers must include—
Additionally, the hazardous material must be accompanied by an EPA manifest, a sheet that tracks the transportation of the hazardous material. [ 18 ]
Each "package, freight container, and transport vehicle" carrying the hazardous material must have markings that are—
Each non-bulk package, container, or small tank must be labeled with a label code corresponding to the hazard class of the hazardous material being transported, and must follow design and placement requirements. [ 20 ]
Each "bulk packaging, freight container, unit load device, transport vehicle or rail car containing any quantity of a hazardous material" must be placarded corresponding to the hazard class of the hazardous material being transported, and must follow design and placement requirements. [ 21 ]
Regulations providing for immediate emergency response information in an incident, as well as requirements for the development and implementation of security plans, must be adhered to by "any person who offers for transportation in commerce or transports in commerce" hazardous materials regulated under the Act. [ 22 ]
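Taken together, the shipping paper, manifest, marking, labelling, placarding, and emergency response provisions above amount to a checklist keyed to whether the packaging is bulk or non-bulk. The following sketch is a minimal Python illustration; the function name and the simplified bulk/non-bulk split are assumptions, since the real 49 CFR requirements turn on the specific material, quantity, and mode.

```python
def hazard_communication_checklist(is_bulk_packaging: bool) -> list[str]:
    """Simplified checklist of the communication elements described above.
    The real 49 CFR requirements also depend on the specific material,
    quantity, and mode of transport."""
    items = [
        "shipping papers describing the hazardous material",
        "EPA manifest tracking the shipment",
        "required markings on the package, freight container, or vehicle",
        "immediate emergency response information",
        "security plan, where required",
    ]
    if is_bulk_packaging:
        # Bulk packagings, freight containers, transport vehicles and rail
        # cars carry placards for the material's hazard class.
        items.append("placards corresponding to the hazard class")
    else:
        # Non-bulk packages, containers and small tanks carry label codes.
        items.append("label code corresponding to the hazard class")
    return items

for item in hazard_communication_checklist(is_bulk_packaging=True):
    print("-", item)
```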
Packaging requirements under the Act are detailed in Title 49 of the United States Code of Federal Regulations under §173, 178, 179, and 180 . Packaging requirements vary based on the hazardous material being transported. [ 16 ]
Packaging material must fulfill a set of testing requirements before being authorized to store hazardous materials for transportation, demonstrating that it can endure the physical stresses and environmental changes, including phase changes of the packaging contents, that may occur during transportation. [ 23 ]
All packaging provisions under the Act apply to—
Each package must be "designed, constructed, maintained, filled, its contents so limited, and closed" so that during transportation of hazardous contents —
The contents of the package (the hazardous material) and the material of the package itself must be resistant to significant "chemical or galvanic reaction" that can compromise the integrity of the package. Additionally, hazardous materials may not be mixed together with other hazardous or nonhazardous materials creating a reaction causing —
It is up to the shipper of the stored, hazardous material to determine that the compatibility between the hazardous material and the packaging is sufficient for safe transportation. [ 24 ]
49 CFR §173: General packaging requirements. [ 23 ] 49 CFR §178: Specifications for packagings. [ 25 ] 49 CFR §179: Specifications for tank cars. [ 26 ] 49 CFR §180: Continuing qualification and maintenance of packagings. [ 27 ]
The "operational rules" are the final key provision to the HMTA. They are a summary of the above provisions, including procedures and policies, material designations and labeling, and packaging requirements. Operational rules are covered by 49 CFR §171, 173, 174, 175, 176, and 177 and are all subjective to the U.S. Department of Transportation. Operational rules cover the entire transportation process from pick-up to delivery within all known modes of transportation subject to interstate and intrastate commerce.
49 CFR §171: General information, regulations, and North American shipments
49 CFR §173: Shippers' general requirements for shipping and packaging
49 CFR §174: Carriage by rail
49 CFR §175: Carriage by aircraft
49 CFR §176: Carriage by vessel
49 CFR §177: Carriage by public highway [ 28 ]
In 1990, Congress enacted the Hazardous Materials Transportation Uniform Safety Act (HMTUSA) in order to clarify the 1975 Hazardous Materials Transportation Act. This amendment sought to standardize international hazardous material transportation requirements as recommended by the United Nations , [ 16 ] define preemption over local state regulations that differed from the Act's regulations, and to give more authority to the Secretary of Transportation in requiring registration of hazardous materials. Before the HMTUSA was passed, the Secretary's authority to require registration by all shippers of hazardous materials and by all parties involved in the preparation of shipment (manufacture, repair, testing, or sale) was never exercised. [ 5 ]
New provisions under this amendment were designed to "encourage uniformity among different state and local highway routing regulations, to develop criteria for the issuance of federal permits to motor carriers of hazardous materials, and to regulate the transport of radioactive materials." [ 29 ]
The amendment also outlined two types of emergency response information:
Under the HMTUSA, the Secretary continues to enforce regulations for the safe transport of hazardous material in intrastate, interstate, and foreign commerce in the same manner as the HMTA. The Secretary also retains authority to classify hazardous materials, when "they pose unreasonable risks to health, safety, or property." [ 29 ]
Signed by President Bill Clinton on August 26, 1994, the amendment was intended to broaden the "regulatory and enforcement authority of the Secretary of Transportation." [ 30 ] The Secretary is given discretionary power to require anyone who transports hazardous materials by aircraft, rail, ship, or vehicle, and who is not already under a mandatory obligation to do so, to register with the Department of Transportation. Additionally, the amendment restructured the Act, reauthorizing funding for the HMTA and requiring additional safety initiatives to be taken by the Department of Transportation. [ 31 ] The amendment's underlying goal remained the same as that of the Hazardous Materials Transportation Act: to protect against the risks to life, property, and the environment during the transportation of hazardous materials.
After the September 11 attacks, Congress considered new security measures for the Act, including background checks for truck drivers, requiring shipping companies to create alternative security plans, the use of electronic tracking devices to pinpoint exact locations of hazardous materials and their transporters, and creating strict federal penalties for hijacking trucks carrying hazardous materials. [ 31 ] Attempting to monitor every shipment in the country was recognized as a "logistical impossibility" and an exorbitant expense. However, on October 18, 2001, Senator Hatch introduced the Hazardous Material in Transportation Protection Act of 2001, which amended the Act to require stricter regulations for issuing operational licenses for the motor-vehicular transportation of hazardous materials. Specifically, the bill prohibits states from issuing licenses to transporters unless the Secretary clears the transporter through a comprehensive background check. [ 31 ] | https://en.wikipedia.org/wiki/Hazardous_Materials_Transportation_Act |
The Hazardous Substances Data Bank (HSDB) was a toxicology database on the U.S. National Library of Medicine 's (NLM) Toxicology Data Network (TOXNET). [ 2 ] [ 3 ] It focused on the toxicology of potentially hazardous chemicals, and included information on human exposure, industrial hygiene , emergency handling procedures, environmental fate, regulatory requirements, and related areas. All data were referenced and derived from a core set of books, government documents, technical reports, and selected primary journal literature. Prior to 2020, all entries were peer-reviewed by a Scientific Review Panel (SRP), members of which represented a spectrum of professions and interests. The last chairs of the SRP were Marcel J. Cassavant, MD (Toxicology Group), and Roland Everett Langford, PhD (Environmental Fate Group). The SRP was terminated due to budget cuts and realignment of the NLM.
The HSDB was organized into individual chemical records, and contained over 5000 such records. [ 4 ] It was accessible free of charge via TOXNET. Users could search by chemical or other name, chemical name fragment, CAS registry number and/or subject terms. Later additions included radioactive materials and certain mixtures, such as crude oil and oil dispersants, as well as animal toxins. As of November 2014, there were approximately 5,600 chemical-specific HSDB records available. [ 3 ]
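Although TOXNET has been retired, the record-oriented structure described above is straightforward to picture: each record carried a chemical name, a CAS registry number, and topical sections, and could be retrieved by name, name fragment, or CAS number. The sketch below is a hypothetical, in-memory illustration of that search model in Python; it is not an interface to any NLM service, and the record fields and example entries are assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ChemicalRecord:
    # Hypothetical record shape mirroring the topic areas HSDB entries covered.
    name: str
    cas_number: str
    sections: dict[str, str] = field(default_factory=dict)

RECORDS = [
    ChemicalRecord("Benzene", "71-43-2",
                   {"emergency handling": "...", "environmental fate": "..."}),
    ChemicalRecord("Formaldehyde", "50-00-0",
                   {"human exposure": "...", "regulatory requirements": "..."}),
]

def search(query: str) -> list[ChemicalRecord]:
    """Match a query against CAS registry numbers or chemical-name fragments."""
    q = query.strip().lower()
    return [r for r in RECORDS if q == r.cas_number or q in r.name.lower()]

print([r.name for r in search("benz")])     # ['Benzene']
print([r.name for r in search("50-00-0")])  # ['Formaldehyde']
```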
The Toxicology Data Network (TOXNET) was a group of databases hosted on the National Library of Medicine (NLM) website that covered "chemicals and drugs, diseases and the environment, environmental health, occupational safety and health, poisoning, risk assessment and regulations, and toxicology". [ 5 ] TOXNET was managed by the NLM's Toxicology and Environmental Health Information Program (TEHIP) in the Division of Specialized Information Services (SIS). [ 5 ]
The TOXNET databases included: [ 6 ] | https://en.wikipedia.org/wiki/Hazardous_Substances_Data_Bank |
Hazardous Substances and New Organisms (Approvals and Enforcement) Amendment Act 2005
The Hazardous Substances and New Organisms Act (HSNO) is an Act of Parliament passed in New Zealand in 1996. The New Zealand Environmental Protection Authority (EPA) administers the Act.
| https://en.wikipedia.org/wiki/Hazardous_Substances_and_New_Organisms_Act_1996 |
The Hazardous Waste and Substances Sites List , also known as the Cortese List—named for Dominic Cortese —or California Superfund, is a planning document used by the State of California and its various local agencies and developers to comply with the California Environmental Quality Act requirements in providing information about the location of hazardous materials release sites. California Government Code section 65962.5 requires the California Environmental Protection Agency to develop at least an annually updated Cortese List.
The California Department of Toxic Substances Control (DTSC) is responsible for a portion of the information contained in the Cortese List. Other State and local government agencies are required to provide additional hazardous material release information for the Cortese List.
The list is maintained via EnviroStor, the database of DTSC's Brownfields and Environmental Restoration Program (Cleanup Program). The database currently contains 575 sites, including the federal Superfund sites. It also tracks sites that have been corrected or partially corrected, as well as sites listed as certified with operation and maintenance requirements. | https://en.wikipedia.org/wiki/Hazardous_Waste_and_Substances_Sites_List |
In pharmacology, hazardous drugs are drugs that are known to cause harm, which may or may not include genotoxicity (the ability to cause a change or mutation in genetic material). Genotoxicity might involve carcinogenicity, the ability to cause cancer in animal models, humans, or both, and teratogenicity, the ability to cause defects in fetal development or fetal malformation; hazardous drugs are also known to have the potential to cause fertility impairment, which is a major concern for most clinicians. [ 1 ] These drugs can be classified as antineoplastics, cytotoxic agents, biologic agents, antiviral agents and immunosuppressive agents. This is why safe handling of hazardous drugs is crucial.
Safe handling refers to the process in which health care workers adhere to practices set forth by national health and safety organizations that have been designed to eliminate or significantly reduce occupational exposure. Some of these practices include, but are not limited to, donning personal protective equipment such as a disposable gown, gloves, and masks, and using a closed-system drug transfer device. The key to safe handling is to protect the health care worker throughout the three phases of contact with hazardous drugs: drug preparation, administration, and disposal. Some studies have shown that compounding hazardous drugs in a Class II biological safety cabinet (BSC) in conjunction with a closed-system drug transfer device significantly decreases drug contamination inside the cabinet. [ 2 ] This led the Oncology Nursing Society (ONS) to state in 2003 that a closed-system drug transfer device is viewed as one of the safest measures to prevent hazardous drug exposure in a clinician's working environment. [ 3 ] However, a Cochrane review published in 2018 that synthesized all available controlled studies found no evidence that a closed-system drug transfer device offers an additional decrease in contamination or exposure over safe handling practices alone. [ 4 ]
It has been determined that current personal protective equipment (PPE) does not provide adequate protection for workers handling hazardous drugs - NIOSH states that "... measurable concentrations of some hazardous drugs have been documented in the urine of health care workers who prepared or administered them − even after safety precautions had been employed." [ 1 ] Further, NIOSH recommends that institutions should "consider using devices such as closed-system transfer devices. Closed systems limit the potential for generating aerosols and exposing workers". [ 1 ] Other guidelines outline that "As other products become available, they should meet the definition of a closed system drug transfer device established by NIOSH and should be required to demonstrate their effectiveness in independent studies". [ 2 ] | https://en.wikipedia.org/wiki/Hazardous_drugs |
Cultural heritage collections contain many materials known to be hazardous to the environment and to human health. Some hazardous substances may be an integral part of the object (such as a toxic paint pigment or a naturally radioactive mineral sample), applied as a treatment after the object was made (such as a pesticide) or the result of material degradation (such as the exudation of plasticiser from polyvinyl chloride). The toxicity of such objects in heritage collections can also determine their historic and scientific value. Consequently, management of these materials within collecting organisations can be complex in terms of health and safety.
These substances represent a hazard for people working with or using affected collections items as well as acting as a record of the use of these materials over time. Disposal or removal of hazardous substances from cultural collections can be expensive and logistically challenging.
Many of the hazardous substances found in cultural heritage collections may also be classified as Dangerous Goods or Scheduled Poisons and subject to strict regulations concerning their sale, storage, labelling, handling, transport, display and disposal.
Asbestos was used widely as a fire-proof or fire-suppressing agent, in scientific, industrial and domestic appliances, clothing, and tools. Asbestos can also be found mixed with cements and resins and woven into fabrics. Asbestos-containing mineral samples may be present in natural history collections. The safe management of asbestos is highly regulated in most countries, [ 1 ] [ 2 ] e.g. the UK Control of Asbestos Regulations 2012 . [ 3 ]
Acids and alkalis can be found in industrial chemicals (e.g. photographic developing agents), as the preservative used for fluid-preserved natural history specimens (formalin) and in batteries.
Lead is a soft, malleable metal that has been used for a variety of purposes throughout history: as food additives, paint pigments, or solder, and to make pewter drinking vessels and lead toys.
Mercury can be found in scientific equipment such as thermometers, and as a residue on animal skins, furs, and hats where it was used in the preparation process. [ 4 ] Mercuric chloride was also used as a pesticide or biocide.
Arsenic and mercury are common hazardous substances found in historic dress and textile collections from the 18th and 19th centuries, as they were used in textile dyes (e.g. Scheele's Green, a yellow-green pigment), in textile manufacture, and in hat making.
Mould and micro-organisms (e.g. bacteria) may be present on the surface of collection objects, particularly those that have been stored in warm and damp conditions.
Many toxic pigments and other paint ingredients have been used, many since antiquity. Toxic pigments include lead, mercury, cadmium, cobalt, antimony and arsenic.
Museum collections can contain samples of actual pesticides and herbicides (such as mercuric chloride , paradichlorobenzene and DDT ) as well as artefacts that were treated with pesticides by museum personnel and field collectors to prevent infestations, especially from the 18th century to the end of the 20th century, as "[…] such treatments were traditionally thought to be part of general collections maintenance." [ 5 ]
The latter can prevent access to collection items unless the chemical residues can be removed or safely managed, as there are also human health implications associated with most pesticides. [ 6 ] [ 7 ]
Once a commonplace treatment for objects made of organic materials (e.g. animal and insect specimens, woollen clothing, objects containing plant fibres, fur and feathers), use of pesticides has substantially diminished with the development of integrated pest management as a collection management strategy.
Naphthalene is one of the most commonly encountered pesticide residues found on museum collections. As a volatile substance, it can sublimate and recrystallise on surfaces nearby. [ 8 ]
Mercury-based pesticides (such as mercuric chloride) can release mercury vapour, which can contaminate other collection objects and surfaces nearby. Monitoring vapour levels has shown that venting closed storage cabinets before use lowers airborne concentrations to safe levels. Other mitigation strategies include enclosing affected collection objects inside enclosures made from gas vapour barriers and using vented cabinets instead of sealed cabinets for storage. [ 9 ]
Some deteriorating plastics may generate acidic byproducts (such as acetic acid from cellulose acetate film or nitric acid from cellulose nitrate film), which pose a risk to those handling affected objects. Others leach plasticisers, such as the phthalates released from polyvinyl chloride, or bisphenol A (BPA). [ 10 ]
Many museums contain collections of old medicines and poisons which - though once intended to heal - may contain substances hazardous to humans and to the environment. [ 11 ] For these reasons, pharmacy and prescription-only medicines in museum collections may be subject to local regulations for storage and display.
Radioactive minerals may be found in mineralogy, palaeontology, and maritime collections; in radioactive paints on watch faces and aircraft dials; and in medical and analytical equipment. Radiation in museum collections must usually be strictly controlled in accordance with local regulations. [ 12 ] [ 13 ] [ 14 ]
A variety of chemicals can be found in cultural heritage collections, including oxidising agents, flammable and combustible liquids, and other solvents with known toxic, carcinogenic or other health effects. Ethanol and formalin are used to preserve specimens in natural history collections. Petroleum products may be found in industrial heritage collections. Organic solvents may also be found within cosmetics, medicines, and photographic processing chemicals.
Zoonotic diseases (those transmitted from animals to humans) may be present in natural history specimens or museum objects made with unprocessed animal products. | https://en.wikipedia.org/wiki/Hazardous_substances_in_cultural_heritage_collections |
Hazardous waste is waste that must be handled properly to avoid damaging human health or the environment. Waste can be hazardous because it is toxic , reacts violently with other chemicals, or is corrosive , among other traits. [ 1 ] As of 2022, humanity produces 300-500 million metric tons of hazardous waste annually. [ 2 ] Some common examples are electronics, batteries, and paints. An important aspect of managing hazardous waste is safe disposal. Hazardous waste can be stored in hazardous waste landfills, burned, or recycled into something new. Managing hazardous waste is important to achieve worldwide sustainability . [ 3 ] Hazardous waste is regulated on national scale by national governments as well as on an international scale by the United Nations (UN) and international treaties.
Universal wastes are a special category of hazardous wastes that (in the U.S.) generally pose a lower threat relative to other hazardous wastes, are ubiquitous and produced in very large quantities by a large number of generators. Some of the most common "universal wastes" are: fluorescent light bulbs , some specialty batteries (e.g. lithium or lead containing batteries), cathode-ray tubes , and mercury-containing devices.
Universal wastes are subject to somewhat less stringent regulatory requirements. Small quantity generators of universal wastes may be classified as "conditionally exempt small quantity generators" (CESQGs) which release them from some of the regulatory requirements for the handling and storage hazardous wastes. Universal wastes must still be disposed of properly.
Household Hazardous Waste (HHW), also referred to as domestic hazardous waste or home generated special materials, is a waste that is generated from residential households. HHW only applies to waste coming from the use of materials that are labeled for and sold for "home use". Waste generated by a company or at an industrial setting is not HHW.
The following list includes categories often applied to HHW. It is important to note that many of these categories overlap and that many household wastes can fall into multiple categories:
Historically, some hazardous wastes were disposed of in regular landfills . Hazardous wastes must often be stabilized and solidified in order to enter a landfill and must undergo different treatments in order to stabilize and dispose of them. Most flammable materials can be recycled into industrial fuel. Some materials with hazardous constituents can be recycled, such as lead acid batteries. Many landfills require countermeasures against groundwater contamination. For example, a barrier has to be installed along the foundation of the landfill to contain the hazardous substances that may remain in the disposed waste. [ 5 ]
Some hazardous wastes can be recycled into new products. [ 6 ] Examples may include lead–acid batteries or electronic circuit boards. When heavy metals in incineration ashes undergo proper treatment, they can bind to other pollutants and be converted into solids that are easier to dispose of, or they can be used as pavement filling. Such treatments reduce the level of threat from harmful residues, like fly and bottom ash, [ 7 ] while also recycling the safe product.
Incinerators burn hazardous waste at high temperatures (1600°-2500°F, 870°-1400°C), greatly reducing its amount by decomposing it into ash and gases. [ 8 ] Incineration works with many types of hazardous waste, including contaminated soil , sludge , liquids, and gases. An incinerator can be built directly at a hazardous waste site, or more commonly, waste can be transported from a site to a permanent incineration facility. [ 8 ]
The ash and gases leftover from incineration can also be hazardous. Metals are not destroyed, and can either remain in the furnace or convert to gas and join the gas emissions. The ash needs to be stored in a hazardous waste landfill, although it takes less space than the original waste. [ 8 ] Incineration releases gases such as carbon dioxide , nitrogen oxides, ammonia, and volatile organic compounds. [ 9 ] Reactions in the furnace can also form hydrochloric acid gas and sulfur dioxide . To avoid releasing hazardous gases and solid waste suspended in those gases, modern incinerators are designed with systems to capture these emissions. [ 10 ]
Hazardous waste may be sequestered in a hazardous waste landfill or permanent disposal facility. "In terms of hazardous waste, a landfill is defined as a disposal facility or part of a facility where hazardous waste is placed in or on land and which is not a pile, a land treatment facility, a surface impoundment, an underground injection well , a salt dome formation, a salt bed formation, an underground mine, a cave, or a corrective action management unit (40 CFR 260.10)." [ 11 ] [ 12 ]
Some hazardous waste types may be eliminated using pyrolysis at high temperature, not necessarily through an electrical arc, but starved of oxygen to avoid combustion. However, when an electrical arc is used to generate the required ultra-high heat (in excess of 3,000 °C), all materials (waste) introduced into the process melt into a molten slag, and this technology is termed plasma, not pyrolysis. Plasma technology produces inert material which, when cooled, solidifies into rock-like material. These treatment methods are very expensive but may be preferable to high-temperature incineration in some circumstances, such as in the destruction of concentrated organic waste types, including PCBs, pesticides and other persistent organic pollutants . [ 13 ] [ 14 ]
Hazardous waste management and disposal comes with consequences if not done properly. If disposed of improperly, hazardous gaseous substances can be released into the air resulting in higher morbidity and mortality. [ 15 ] These gaseous substances can include hydrogen chloride, carbon monoxide, nitrogen oxides, sulfur dioxide, and some may also include heavy metals. [ 15 ] With the prospect of gaseous material being released into the atmosphere, identification schemes were developed under several statutes (RCRA, TSCA, HSWA, CERCLA) in which hazardous materials and wastes are categorized in order to quickly identify and mitigate potential leaks. F-list materials are wastes from non-specific industrial practices; K-list materials are wastes generated from specific industrial processes, such as the pesticide, petroleum, and explosives industries; and the P- and U-lists cover discarded commercial chemical products, including shelf-stable pesticides. [ 15 ] Not only can mismanagement of hazardous wastes cause adverse direct health consequences through air pollution, but mismanaged waste can also contaminate groundwater and soil. [ 15 ] In an Austrian study, people who live near industrial sites are "more often unemployed, have lower education levels, and are twice as likely to be immigrants." [ 16 ] This creates disproportionately larger issues for those who depend heavily on the land for harvests and streams for drinking water; this includes Native American populations. Though all lower-class and/or social minorities are at a higher risk of toxic exposure, Native Americans are at a multiplied risk due to the facts stated above (Brook, 1998). Improper disposal of hazardous waste has resulted in many extreme health complications within certain tribes. Members of the Mohawk Nation at Akwesasne have suffered elevated levels of PCBs (polychlorinated biphenyls) in their bloodstreams, leading to higher rates of cancer. [ 17 ]
The UN has a mandate on hazardous substances and wastes with recommendations to countries for dealing with hazardous waste. [ 18 ] 199 countries signed the 1992 Basel Convention , seeking to stop the flow of hazardous waste from developed countries to developing countries with less stringent environmental regulations. [ 19 ]
The international community has defined the responsible management of hazardous waste and chemicals as an important part of sustainable development by including it in Sustainable Development Goal 12 . [ 20 ] Target 12.4 of this goal is to "achieve the environmentally sound management of chemicals and all wastes throughout their life cycle". One of the indicators for this target is: "hazardous waste generated per capita; and proportion of hazardous waste treated, by type of treatment". [ 3 ]
Hazardous wastes are wastes with properties that make them dangerous or potentially harmful to human health or the environment. Hazardous wastes can be liquids, solids, contained gases, or sludges. They can be by-products of manufacturing processes or simply discarded commercial products, like cleaning fluids or pesticides. In regulatory terms, RCRA hazardous wastes are wastes that appear on one of the four hazardous wastes lists (F-list, K-list, P-list, or U-list), or exhibit at least one of the following four characteristics: ignitability, corrosivity, reactivity, or toxicity. In the US, hazardous wastes are regulated under the Resource Conservation and Recovery Act (RCRA), Subtitle C. [ 21 ]
By definition, EPA determined that some specific wastes are hazardous. These wastes are incorporated into lists published by the Agency. These lists are organized into three categories: F-list (non-specific source wastes) found in the regulations at 40 CFR 261.31, K-list (source-specific wastes) found in the regulations at 40 CFR 261.32, and P-list and the U-list (discarded commercial chemical products) found in the regulations at 40 CFR 261.33.
RCRA's record keeping system helps to track the life cycle of hazardous waste and reduces the amount of hazardous waste illegally disposed.
The Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) was enacted in 1980. The primary contribution of CERCLA was to create a " Superfund " and provide for the clean-up and remediation of closed and abandoned hazardous waste sites. CERCLA addresses historic releases of hazardous materials, but does not specifically manage hazardous wastes.
In 1984, a deadly methyl isocyanate gas leak known as the Bhopal disaster raised environmental awareness in India. [ 22 ] In response, the Indian government produced the Environment (Protection) Act in 1986, followed by the Hazardous Waste Rules in 1989. [ 23 ] With these rules, companies are only permitted by the state to produce hazardous waste if they are able to dispose of it safely. [ 24 ] However, state governments did not make these rules effective. There was around a decade of delay between when hazardous waste landfills were requested and when they were built. During this time, companies disposed of hazardous waste in various "temporary" hazardous waste locations, such as along roads and in canal pits, with no immediate plan to move it to proper facilities. [ 23 ]
The Supreme Court stepped in to prevent damage from hazardous waste in order to protect the right to life . A 1995 petition by the Research Foundation for Science, Technology, and Natural Resource Policy [ 25 ] spurred the Supreme Court to create the High Powered Committee (HPC) on Hazardous Waste, since data from pre-existing government boards was not usable. [ 23 ] This committee found studies linking pollution and improper waste treatment with higher amounts of hexavalent chromium, lead, and other heavy metals. Industries and regulators were effectively ignoring these studies. [ 23 ] In addition, the state was also not acting in accordance with the Basel Convention , an international treaty on the transport of hazardous waste. The Supreme Court modified the Hazardous Waste Rules and established the Supreme Court Monitoring Committee to follow up on its decisions. With this committee, the Court has been able to force companies that improperly handle hazardous wastes to close.
In the United States, the treatment, storage, and disposal of hazardous waste are regulated under the Resource Conservation and Recovery Act (RCRA). Hazardous wastes are defined under RCRA in 40 CFR 261 and divided into two major categories: characteristic and listed. [ 26 ]
The requirements of the RCRA apply to all the companies that generate hazardous waste and those that store or dispose of hazardous waste in the United States. Many types of businesses generate hazardous waste. Dry cleaners , automobile repair shops, hospitals, exterminators , and photo processing centers may all generate hazardous waste. Some hazardous waste generators are larger companies such as chemical manufacturers , electroplating companies, and oil refineries .
A U.S. facility that treats, stores, or disposes of hazardous waste must obtain a permit under the RCRA. Generators and transporters of hazardous waste must meet specific requirements for handling, managing, and tracking waste. Through the RCRA, Congress directed the United States Environmental Protection Agency (EPA) to create regulations to manage hazardous waste. Under this mandate, the EPA has developed strict requirements for all aspects of hazardous waste management, including treating, storing, and disposing of hazardous waste. In addition to these federal requirements, states may develop more stringent requirements that are broader in scope than the federal regulations. Furthermore, RCRA allows states to develop regulatory programs that are at least as stringent as RCRA, and after review by EPA, the states may take over responsibility for implementing the requirements under RCRA. Most states take advantage of this authority, implementing their own hazardous waste programs that are at least as stringent and, in some cases, stricter than the federal program.
The U.S. government provides several tools for mapping hazardous wastes to particular locations. These tools also allow the user to view additional information. | https://en.wikipedia.org/wiki/Hazardous_waste |
Under United States environmental policy , hazardous waste is a waste (usually a solid waste) that has the potential to:
Under the 1976 Resource Conservation and Recovery Act (RCRA), a facility that treats, stores or disposes of hazardous waste must obtain a permit for doing so. Generators of and transporters of hazardous waste must meet specific requirements for handling, managing, and tracking waste. Through RCRA, Congress directed EPA to issue regulations for the management of hazardous waste. EPA developed strict requirements for all aspects of hazardous waste management including the treatment, storage, and disposal of hazardous waste. In addition to these federal requirements, states may develop more stringent requirements or requirements that are broader in scope than the federal regulations.
EPA authorizes states to implement the RCRA hazardous waste program. Authorized states must maintain standards that are equivalent to and at least as stringent as the federal program. Implementation of the authorized program usually includes activities such as permitting, corrective action, inspections, monitoring and enforcement.
Modern hazardous waste regulations in the U.S. began with RCRA, which was enacted in 1976. [ 1 ] The primary contribution of RCRA was to create a "cradle to grave" system of record keeping for hazardous wastes. Hazardous wastes must be tracked from the time they are generated until their final disposition. [ 2 ]
RCRA's recordkeeping system helps to track the life cycle of hazardous material and reduces the amount of hazardous waste illegally disposed. Regulators can monitor hazardous waste by following the "trail" of the waste as it is transferred from one entity to another, from the time it is generated until it is disposed of.
Amendments to RCRA specified requirements for incinerators and small quantity generators of hazardous waste and required substandard landfills to be closed. [ 3 ] Congress also exempted coal combustion residuals and mining waste from the strict hazardous waste permitting requirements. [ 4 ]
The Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), was enacted in 1980. [ 5 ] The primary contribution of CERCLA was to create a financial " Superfund " and provide for the clean-up and remediation of closed and abandoned hazardous waste sites. [ 6 ]
The United States is not a party to the Basel Convention , a 1992 treaty which prohibits the export of hazardous waste from developed countries to developing countries. [ 7 ] [ 8 ] Research by the Guardian and Quinto Elemento Lab shows that US companies ship more than a million tons of hazardous waste to other countries each year. [ 9 ]
Under EPA regulations, "characteristic hazardous wastes" are defined as wastes that exhibit the following characteristics: ignitability, corrosivity, reactivity, or toxicity. [ 10 ]
Ignitable wastes can create fires under certain conditions, are spontaneously combustible, or are liquids with a flash point less than 60 °C (140 °F). Examples include waste oils and used solvents. For more details, see 40 CFR §261.21. Test methods that may be used to determine ignitability include the Pensky-Martens Closed-Cup Method for Determining Ignitability , the Setaflash Closed-Cup Method for Determining Ignitability , and the Ignitability of Solids.
Corrosive wastes are acids or bases (pH less than or equal to 2, or greater than or equal to 12.5) that are capable of corroding metal containers, such as storage tanks, drums, and barrels. Battery acid is an example. For more details, see 40 CFR §261.22. The test method that may be used to determine corrosivity is the Corrosivity Towards Steel (Method 1110A) (PDF).
Reactive wastes are unstable under "normal" conditions. They can cause explosions, toxic fumes, radioactive particles, gases, or vapors when heated, compressed, or mixed with water. Examples include lithium-sulfur batteries and explosives. For more details, see 40 CFR §261.23. There are currently no test methods available.
Toxic wastes are those containing concentrations of certain substances in excess of regulatory thresholds, which are expected to cause injury or illness to human health or the environment. For more details, see 40 CFR §261.24.
Toxicity of a hazardous waste is defined through a laboratory procedure called the toxicity characteristic leaching procedure (TCLP). The TCLP helps identify wastes likely to leach concentrations of contaminants into the environment that may be harmful to human health or the environment.
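The four characteristics described above lend themselves to a simple screening check. The following sketch is an illustrative simplification in Python, with hypothetical field names; it applies the thresholds quoted above (flash point below 60 °C, pH at or below 2 or at or above 12.5, an instability flag, and TCLP leachate concentrations above their limits) but is not a substitute for the test methods in 40 CFR Part 261.

```python
def characteristic_hazards(flash_point_c=None, ph=None,
                           unstable_or_water_reactive=False,
                           tclp_results_mg_l=None, tclp_limits_mg_l=None):
    """Screen a waste against the four characteristics described above.
    Field names and structure are illustrative, not regulatory."""
    hazards = []
    if flash_point_c is not None and flash_point_c < 60:    # ignitability (D001)
        hazards.append("ignitable")
    if ph is not None and (ph <= 2 or ph >= 12.5):          # corrosivity (D002)
        hazards.append("corrosive")
    if unstable_or_water_reactive:                          # reactivity (D003)
        hazards.append("reactive")
    for analyte, conc in (tclp_results_mg_l or {}).items():
        limit = (tclp_limits_mg_l or {}).get(analyte)
        if limit is not None and conc > limit:              # toxicity (D004 onward)
            hazards.append(f"toxic ({analyte})")
    return hazards

# An acidic solvent waste whose TCLP leachate exceeds a lead limit:
print(characteristic_hazards(flash_point_c=40, ph=1.5,
                             tclp_results_mg_l={"lead": 7.2},
                             tclp_limits_mg_l={"lead": 5.0}))
# ['ignitable', 'corrosive', 'toxic (lead)']
```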
Listed hazardous wastes are generated by specific industries and processes and are automatically considered hazardous waste based solely on the process that generates them and irrespective of whether a test of the waste shows any of the "characteristics" of hazardous waste. [ 10 ] Examples of listed wastes include:
Hazardous wastes are incorporated into lists published by the Environmental Protection Agency. These lists are organized into three categories:
Additionally, states may have specific waste codes. For example, the California Department of Toxic Substances Control distinguishes discarded mercury-containing products and waste oil as separate groups of hazardous waste.
This list includes certain wastes known to contain mercury, such as fluorescent lamps, mercury switches and the products that house these switches, and mercury-containing novelties. [ 11 ]
In California, waste oil and materials that contain or are contaminated with waste oil are usually regulated as hazardous wastes if they meet the definition of "Used Oil" even if they do not exhibit any of the characteristics of hazardous waste. The term "used oil" is a legal term which means any oil that has been refined from crude oil, or any synthetic oil that has been used and, as a result of use, is contaminated with physical or chemical impurities. Other materials that contain or are contaminated with used oil may also be subject to regulation as "used oil" under Part 279 of Title 40 of the Code of Federal Regulations, Standards for the Management of Used Oil.
Universal wastes are hazardous wastes that:
Some of the most common "universal wastes" are: fluorescent light bulbs, batteries , cathode ray tubes , and mercury-containing devices.
Universal wastes are subject to somewhat less stringent regulatory requirements and small quantity generators of universal wastes may be classified as "conditionally-exempt small quantity generators" (CESQGs) which releases them from some of the regulatory requirements for the handling and storage of hazardous wastes.
Universal wastes must still be disposed of properly. (For more information, see Fact Sheet: Conditionally Exempt Small Quantity Generator )
EPA has other ways of regulating hazardous waste. These regulations include:
EPA regulations automatically exempt certain solid wastes from being regulated as "hazardous wastes". This does not necessarily mean the wastes are not hazardous nor that they are not regulated. An exempted hazardous waste simply means that the waste is not regulated by the primary hazardous waste regulations. Many of these wastes may be regulated by different statutes and/or regulations and/or by different regulatory agencies. For example, many hazardous mining wastes are regulated via mining statutes and regulations. "Exempted" hazardous wastes include:
Household hazardous waste (HHW), also referred to as "domestic hazardous waste," is waste that is generated from residential households. HHW only applies to wastes that are the result of the use of materials that are labeled for and sold for "home use" and that are purchased by homeowners or tenants for use in a residential household.
The following list includes categories often applied to HHW. It is important to note that many of these categories overlap and that many household wastes can fall into multiple categories:
Because of the expense associated with the disposal of HHW, it is still legal for most homeowners in the U.S. to dispose of most types of household hazardous wastes as municipal solid waste (MSW) and these wastes can be put in your trash. Laws vary by state and municipality and they are changing every day. Be sure to check with your local environmental regulatory agency, solid waste authority, or health department to find out how HHW is managed in your area.
Modern landfills are designed to handle normal amounts of HHW and minimize the environmental impacts. However, there are still going to be some impacts and there are many ways that homeowners can keep these wastes out of landfills. [ 15 ]
Laws regulating HHW in the U.S. are gradually becoming more strict. As of 2007, radioactive smoke detectors are the only HHW managed nationally. While it is still legal in most places in the United States to dispose of smoke detectors in household trash, manufacturers of smoke detectors must accept returned units for disposal, as mandated by Nuclear Regulatory Commission regulation 10 CFR 32.27. Detectors returned to a manufacturer are disposed of in a nuclear waste facility.
States regulate HHW waste disposal in MSW landfills with various requirements, on a state-by-state basis. Some commonly regulated wastes in some (but not all) states include restrictions on the disposal of:
(Note: Yard waste or "green waste" (particularly "source-separated" yard waste such as from a city leaf collection program) is not hazardous but may be a regulated household waste)
Local solid waste authorities and health departments may also have specific bans on wastes that apply to their service area.
One " catch-22 " that residents often encounter is that while it may be legal to dispose of some HHW in their regular trash, the waste hauler that collects the trash can choose not to haul it. It is not uncommon for a waste hauler to refuse to pick up municipal solid waste that contains items such as paint and fluorescent light bulbs. Residents often have little recourse in these cases and may have to make their own arrangements to dispose of the waste by taking it directly to a landfill or solid waste transfer station .
Hazardous wastes (HWs) are typically dealt with in five different ways:
Many HWs can be recycled into new products.
Some HW can be processed so that the hazardous component of the waste is eliminated making it a non-hazardous waste.
A HW may be "destroyed" for example by incinerating it at a high temperature.
A HW may be sequestered in a HW landfill or permanent disposal facility. "In terms of hazardous waste, a landfill is defined as a disposal facility or part of a facility where hazardous waste is placed in or on land and which is not a pile, a land treatment facility, a surface impoundment, an underground injection well, a salt dome formation, a salt bed formation, an underground mine, a cave, or a corrective action management unit (40 CFR 260.10)." [ 16 ] | https://en.wikipedia.org/wiki/Hazardous_waste_in_the_United_States |
Hazchem ( / ˈ h æ z k ɛ m / ; from hazardous chemicals ) [ 1 ] is a warning plate system used in Australia , Hong Kong , Malaysia , New Zealand , India and the United Kingdom for vehicles transporting hazardous substances , and on storage facilities. The top-left section of the plate gives the Emergency Action Code (EAC) telling the fire brigade what actions to take if there is an accident or fire. The middle-left section containing a 4 digit number gives the UN Substance Identification Number describing the material. The lower-left section gives the telephone number that should be called if special advice is needed. The warning symbol in the top right indicates the general hazard class of the material. The bottom-right of the plate carries a company logo or name.
There is also a standard null Hazchem plate to indicate the transport of non-hazardous substances. The null plate does not include an EAC or substance identification.
The National Chemical Emergency Centre (NCEC) in the United Kingdom provides a Free Online Hazchem Guide. [ 2 ]
The Emergency Action Code (EAC) is a three character code displayed on all dangerous goods classed carriers, and provides a quick assessment to first responders and emergency responders (i.e. fire fighters and police ) of what actions to take should the carrier carrying such goods become involved in an incident (traffic collision, for example). EACs are characterised by a single number (1 to 4) and either one or two letters (depending on the hazard).
NCEC was commissioned by the Department for Communities and Local Government (CLG) to edit the EAC List 2013 publication, outlining the application of Hazchem Emergency Action Codes (EACs) in Britain for 2013. The Dangerous Goods Emergency Action Code (EAC) List is reviewed every two years and is an essential compliance document for all emergency services, local government and for those who may control the planning for, and prevention of, emergencies involving dangerous goods. The current EAC List is the 2013 edition. NCEC has been at the heart of the UK EAC system since its inception in the early 1970s, publishing the list on behalf of the UK Government until 1996 and later resuming its management.
The printed version of the book can be purchased from TSO directly ( ISBN 9780117541184 ) or downloaded as a PDF file from NCEC's website.
The number leading the EAC indicates the type of fire-suppressing agent that should be used to prevent or extinguish a fire caused by the chemical.
* These indicators are used only in product documentation and are displayed on vehicle plates as 2 and 3 respectively.
The system ranks suppression media in order of their suitability, so that a fire may be fought with a suppression medium of equal or higher EAC number. [ 4 ] For example, a chemical with EAC number 2 - indicating water fog - may be fought additionally with media 3 (foam) or 4 (dry agent), but not with 1 (coarse spray). [ 4 ] This is especially important for chemicals requiring medium 4 (dry agent), as these chemicals react violently with water, and so using lower-numbered media would be actively dangerous.
Each EAC contains at least one letter, which determines which category the chemical falls under, and which also highlights the violence of the chemical (i.e. likelihood to spontaneously combust, explode etc.), what personal protective equipment to use while working around the chemical and what action to take when disposing of the chemical.
Each category is assigned a letter to determine what actions are required when handling, containing and disposing of the chemical in question. Eight 'major categories' exist which are commonly denoted by a black letter on a white background. Four subcategories exist which specifically deal with what type of personal protective equipment responders must wear when handling the emergency, denoted by a white letter on a black background. In Australia with the update of the Australian Dangerous Goods Code volume 7 as of 2010, the white letter on a black background has been removed, making BA (breathing apparatus) a requirement at all large incidents regardless of whether the substance is involved in a fire.
If a category is classed as violent, this means that the chemical can be violently or explosively reactive, [ 3 ] either with the atmosphere or water, or both (which could be marked by the Dangerous when Wet symbol).
Protection is divided up into three categories of personal protective equipment, Full , BA and BA for fire only . Full denotes that full personal protective equipment provisions must be used around and in contact with the chemical, which will usually include a portable breathing apparatus and a watertight, chemical-proof suit. BA (an acronym for breathing apparatus) specifies that a portable breathing apparatus must be used at all times in and around the chemical, and BA for fire only specifies that a breathing apparatus is not necessary for short exposure periods to the chemical but is required if the chemical is alight. BA for fire only is denoted within the emergency action code as a white letter on a black background, while a black letter on a white background denotes breathing apparatus at all times. When changing the background colour is not possible (such as with handwriting), the use of brackets means the same as a black background: "3[Y]E" means the same as "3 Y E" (a white letter on a black background).
Substance control specifies what to do with the chemical in the event of a spill, either dilute or contain . Dilute means that the chemical may be washed down the drain with large quantities of water. Contain requires that the spillage must not come in contact with drains or water courses.
In the event of a chemical incident, the EAC may specify that an evacuation may be necessary as the chemical poses a public hazard which may extend beyond the immediate vicinity. If evacuation is not possible, advice to stay in doors and secure all points of ventilation may be necessary. This condition is denoted by an E at the end of any emergency action code. It is an optional letter, depending on the nature of the chemical.
A very commonly displayed example is 3YE on petrol tankers . This means that a fire must be fought using foam or dry agent (if a small fire); that the substance can react violently and is explosive; that fire fighters must wear a portable breathing apparatus at all times (or, where the Y is shown white on black, only if the substance is involved in a fire); and that any run-off needs to be contained. It also indicates to the incident controller that evacuation of the surrounding area may be necessary.
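How such a code is read can be illustrated with a short Python sketch. The letter table below encodes the standard Hazchem category meanings summarised above (P/R/S/T dilute, W/X/Y/Z contain; P/R/W/X require full protective clothing; P/S/W/Y are violent); the function name and output wording are illustrative only, and the white-on-black "fire only" variant is not modelled.

MEDIA = {"1": "coarse spray", "2": "fine spray (water fog)", "3": "foam", "4": "dry agent"}

# Letter -> (spill control, full protective clothing required?, violently reactive?)
CATEGORIES = {
    "P": ("dilute", True, True),   "R": ("dilute", True, False),
    "S": ("dilute", False, True),  "T": ("dilute", False, False),
    "W": ("contain", True, True),  "X": ("contain", True, False),
    "Y": ("contain", False, True), "Z": ("contain", False, False),
}

def decode_eac(code):
    """Decode an Emergency Action Code such as "3YE" (illustrative only)."""
    control, full_ppe, violent = CATEGORIES[code[1].upper()]
    return {
        "fire-fighting medium": MEDIA[code[0]],
        "protection": "full protective clothing" if full_ppe else "breathing apparatus",
        "violently reactive": violent,
        "spill control": control,
        "consider evacuation": code.upper().endswith("E"),
    }

# decode_eac("3YE") -> foam; breathing apparatus; violently reactive; contain; consider evacuation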
Example: [ 5 ] [ 6 ]
There are three substances to be carried as a multi-load, having emergency action codes of 3Y, •2S and 4WE.
1st Character (Number): The first character of the EAC for each of the three substances is 3, 2 and 4. The highest number must be taken as the first character of the code for the multi-load and therefore the first character will be 4. The bullet in •2S is not assigned to the mixed load because other EACs do not include a bullet.
2nd Character (Letter): The second character for the EAC for each of the three substances is Y, S and W. The correct character to use may be determined with the chart on the right. Taking the Y along the top row of the chart, and the S along the left hand column, the intersection is at Y and therefore the character for the first two substances would be Y. This resultant character (Y) is then taken along the top row and the character for the third substance (W) is taken along the left hand column. The intersection point is now W. The second character of the code for the three substances is therefore W.
The second character can also be determined using the table below. When assigning a new character to a multi-load EAC, three things must be taken into consideration: substance control - if any one of the hazardous chemicals requires containment, the entire load must be contained; protection - if any one of the hazardous chemicals requires the use of full PPE, the entire load requires the use of full PPE; and violence - if any one of the hazardous chemicals is violent, the entire load must be considered violent.
Working from right to left with the table below, the new second character for a multi-load can be determined. The following examples act as a guideline for the method.
Example 1:
A multi-load consisting of category P and category T hazardous chemicals. First, compare the substance control method of both categories: in this example both categories should be diluted, so the resulting character will align with "Dilute" in the table. Second, compare the protection required by the two categories: in this example category P requires full PPE and category T requires the use of breathing apparatus, so the resulting character will align with "Full" in the table. Third, compare the violence of the two categories: in this example category P is considered violent and category T is not, so the resulting character will align with "V" in the table. Combining the three requirements, the resultant category is P, which is violent, requires full PPE and should be diluted.
Example 2:
A multi-load consisting of category R and category Z hazardous chemicals. First, compare the substance control method of both categories: in this example category R should be diluted and category Z should be contained, so the resulting character will align with "Contain" in the table. Second, compare the protection required by the two categories: in this example category R requires full PPE and category Z requires the use of breathing apparatus, so the resulting character will align with "Full" in the table. Third, compare the violence of the two categories: in this example both categories are considered non-violent, so the resulting character will align with a blank space in the table. Combining the three requirements, the resultant category is X, which is non-violent, requires full PPE and should be contained.
Example 3:
A multi-load consisting of category T and category Z hazardous chemicals. First, compare the substance control method of both categories: in this example category T should be diluted and category Z should be contained, so the resulting character will align with "Contain" in the table. Second, compare the protection required by the two categories: in this example both categories require the use of breathing apparatus, so the resulting character will align with "BA" in the table. Third, compare the violence of the two categories: in this example both categories are considered non-violent, so the resulting character will align with a blank space in the table. Combining the three requirements, the resultant category is Z, which is non-violent, requires the use of breathing apparatus and should be contained.
Letter 'E': The third substance has an 'E' as a third character and therefore the multi-load must also have an 'E'.
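Putting the three rules together (the highest suppression number wins; containment, full protection and violence each dominate; an 'E' on any individual code carries over), the combination can be sketched as follows. The table and names are illustrative, built from the standard letter meanings described above, and are not a substitute for the official list.

def combine_eacs(codes):
    """Combine individual EACs, e.g. ["3Y", "2S", "4WE"], into a multi-load code
    (codes are given here without the optional bullet marker)."""
    # Letter -> (spill control, full protective clothing?, violently reactive?)
    table = {"P": ("dilute", True, True),   "R": ("dilute", True, False),
             "S": ("dilute", False, True),  "T": ("dilute", False, False),
             "W": ("contain", True, True),  "X": ("contain", True, False),
             "Y": ("contain", False, True), "Z": ("contain", False, False)}
    number = max(code[0] for code in codes)                    # highest number wins
    props = [table[code[1].upper()] for code in codes]
    control = "contain" if any(p[0] == "contain" for p in props) else "dilute"
    full_ppe = any(p[1] for p in props)                        # full PPE dominates BA
    violent = any(p[2] for p in props)                         # violence dominates
    letter = next(k for k, v in table.items() if v == (control, full_ppe, violent))
    evac = "E" if any(code.upper().endswith("E") for code in codes) else ""
    return number + letter + evac

# combine_eacs(["3Y", "2S", "4WE"]) returns "4WE", matching the worked example.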
The resultant Hazchem Code for the three substances carried as a multi-load will therefore be 4WE. | https://en.wikipedia.org/wiki/Hazchem |
The Hazen–Williams equation is an empirical relationship that relates the flow of water in a pipe with the physical properties of the pipe and the pressure drop caused by friction. It is used in the design of water pipe systems [ 1 ] such as fire sprinkler systems , [ 2 ] water supply networks , and irrigation systems. It is named after Allen Hazen and Gardner Stewart Williams.
The Hazen–Williams equation has the advantage that the coefficient C is not a function of the Reynolds number , but it has the disadvantage that it is only valid for water . Also, it does not account for the temperature or viscosity of the water, [ 3 ] and therefore is only valid at room temperature and conventional velocities. [ 4 ]
In the early 18th century, Henri Pitot discovered that the velocity of a fluid was proportional to the square root of its head. It takes energy to push a fluid through a pipe, and Antoine de Chézy discovered that the hydraulic head loss was proportional to the velocity squared. [ 5 ] Consequently, the Chézy formula relates hydraulic slope S (head loss per unit length) to the fluid velocity V and hydraulic radius R :
The variable C expresses the proportionality, but the value of C is not a constant. In 1838 and 1839, Gotthilf Hagen and Jean Léonard Marie Poiseuille independently determined a head loss equation for laminar flow , the Hagen–Poiseuille equation . Around 1845, Julius Weisbach and Henry Darcy developed the Darcy–Weisbach equation . [ 6 ]
The Darcy-Weisbach equation was difficult to use because the friction factor was difficult to estimate. [ 7 ] In 1906, Hazen and Williams provided an empirical formula that was easy to use. The general form of the equation relates the mean velocity of water in a pipe with the geometric properties of the pipe and the slope of the energy line.
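In the form usually quoted, the relationship is V = k C R^0.63 S^0.54,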
where V is the mean velocity of the water, k is a conversion factor that depends on the system of units (about 1.318 for US customary units and 0.849 for SI units), C is the Hazen–Williams roughness coefficient, R is the hydraulic radius, and S is the slope of the energy line (head loss per unit length of pipe).
The equation is similar to the Chézy formula but the exponents have been adjusted to better fit data from typical engineering situations. A result of adjusting the exponents is that the value of C appears more like a constant over a wide range of the other parameters. [ 8 ]
The conversion factor k was chosen so that the values for C were the same as in the Chézy formula for the typical hydraulic slope of S = 0.001. [ 9 ] The value of k is 0.001^−0.04, approximately 1.318. [ 10 ]
Typical C factors used in design, which take into account some increase in roughness as pipe ages are as follows: [ 11 ]
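Values commonly published for design are roughly 150 for smooth plastic (PVC) pipe, 130–140 for copper and for new cast iron, about 120 for steel, and around 100 for cast iron that has been in service for several decades; these figures are given here only for illustration of the typical range.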
The general form can be specialized for full pipe flows. Taking the general form
and exponentiating each side by 1/0.54 gives (rounding exponents to 3–4 decimals)
Rearranging gives
The flow rate Q = V A , so
The hydraulic radius R (which is different from the geometric radius r ) for a full pipe of geometric diameter d is d/4; the pipe's cross-sectional area A is πd²/4, so
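S = 4^3.02 Q^1.852 / ( π^1.852 k^1.852 C^1.852 d^4.87 ) (the commonly quoted rounded form).

(For reference, the intermediate steps sketched above, with exponents rounded to 3–4 decimals, are: starting from the general form V = k C R^0.63 S^0.54, raising both sides to the power 1/0.54 ≈ 1.852 gives V^1.852 = k^1.852 C^1.852 R^1.167 S; rearranging gives S = V^1.852 / ( k^1.852 C^1.852 R^1.167 ); and substituting V = Q/A with R = d/4 and A = πd²/4 yields the expression above.)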
When used to calculate the pressure drop using the US customary units system, the equation is: [ 12 ]
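In the form usually quoted, P_d = 4.52 Q^1.85 / ( C^1.85 d^4.87 ),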
where P_d is the pressure loss in pounds per square inch (psi) per foot of pipe, Q is the flow rate in US gallons per minute, C is the roughness coefficient, and d is the inside diameter of the pipe in inches.
When used to calculate the head loss with the International System of Units , the equation will then become
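In the commonly quoted SI form, h_f = 10.67 L Q^1.852 / ( C^1.852 d^4.87 ),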
where h_f is the head loss in metres over the pipe length, L is the length of pipe in metres, Q is the volumetric flow rate in cubic metres per second, C is the roughness coefficient, and d is the inside diameter of the pipe in metres. | https://en.wikipedia.org/wiki/Hazen–Williams_equation
The Hazy-Sighted Link State Routing Protocol ( HSLS ) is a wireless mesh network routing protocol being developed by the CUWiN Foundation. This is an algorithm allowing computers communicating via digital radio in a mesh network to forward messages to computers that are out of reach of direct radio contact. Its network overhead is theoretically optimal, [ 1 ] utilizing both proactive and reactive link-state routing to limit network updates in space and time. Its inventors believe it would also be a more efficient protocol for routing wired networks. HSLS was invented by researchers at BBN Technologies .
HSLS was made to scale well to networks of over a thousand nodes, and on larger networks begins to exceed the efficiencies of the other routing algorithms. This is accomplished by using a carefully designed balance of update frequency, and update extent in order to propagate link state information optimally. Unlike traditional methods, HSLS does not flood the network with link-state information to attempt to cope with moving nodes that change connections with the rest of the network. Further, HSLS does not require each node to have the same view of the network.
Link-state algorithms are theoretically attractive because they find optimal routes, reducing waste of transmission capacity. The inventors of HSLS claim [ citation needed ] that routing protocols fall into three basically different schemes: proactive (such as OLSR ), reactive (such as AODV ), and algorithms that accept sub-optimal routings. When their overhead is graphed, protocols become less efficient the more purely they follow any single strategy, and the differences grow as the network gets larger. The best algorithms seem to be in a sweet spot in the middle.
The routing information is called a "link state update." The distance that a link-state is copied is the " time to live " and is a count of the number of times it may be copied from one node to the next.
HSLS is said to optimally balance the features of proactive, reactive, and suboptimal routing approaches. These strategies are blended by limiting link state updates in time and space. By limiting the time to live the amount of transmission capacity is reduced. By limiting the times when a proactive routing update is transmitted, several updates can be collected and transmitted at once, also saving transmission capacity.
The designers started the tuning of these items by defining a measure of global network waste. This includes waste from transmitting route updates, and also waste from inefficient transmission paths. Their exact definition is "The total overhead is defined as the amount of bandwidth used in excess of the minimum amount of bandwidth required to forward packets over the shortest distance (in number of hops) by assuming that the nodes had instantaneous full-topology information."
They then made some reasonable assumptions and used a mathematical optimization to find the times to transmit link state updates, and also the breadth of nodes that the link state updates should cover.
Basically, both should grow as powers of two as time increases. The theoretically optimal number is very near two, with an error of only 0.7%. This is substantially smaller than the likely errors from the assumptions, so two is a perfectly reasonable number.
A local routing update is forced whenever a connection is lost. This is the reactive part of the algorithm. A local routing update behaves just the same as the expiration of a timer.
Otherwise, each time that the delay since the last update doubles, the node transmits routing information that doubles in the number of network-hops it considers. This continues up to some upper limit. An upper limit gives the network a global size and assures a fixed maximum response time for a network without any moving nodes.
The algorithm has a few special features to cope with cases that are common in radio networks, such as unidirectional links, and looped-transmission caused by out-of-date routing tables . In particular, it reroutes all transmissions to nearby nodes whenever it loses a link to an adjacent node. It also retransmits its adjacency when this occurs. This is useful precisely because the most valuable, long-distance links are also the least reliable in a radio network.
The network establishes pretty good routes in real time, and substantially reduces the number and size of messages sent to keep the network connected, compared to many other protocols. Many of the simpler mesh routing protocols just flood the whole network with routing information whenever a link changes.
The actual algorithm is quite simple.
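A minimal sketch of the periodic part of that schedule, in Python, is given below. It is illustrative only: tick n stands for time n × t_e (t_e being the base update interval), r_max is an assumed cap corresponding to a network-wide update, and the triggered local updates on link loss described above are omitted.

def scope_for_tick(n, r_max):
    """Return the hop scope (TTL) of the link-state update sent at the n-th
    base interval (n = 1, 2, 3, ...): odd ticks send 2-hop updates, every
    2nd tick sends 4-hop updates, every 4th tick 8-hop updates, and so on,
    capped at r_max (a network-wide update)."""
    ttl = 2
    while n % 2 == 0 and ttl < r_max:
        ttl *= 2
        n //= 2
    return min(ttl, r_max)

# Example: with r_max = 32, ticks 1..8 give scopes 2, 4, 2, 8, 2, 4, 2, 16;
# wide-scope updates are therefore sent exponentially less often than local ones.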
The routing information and the data transfer are decentralized, and should therefore have good reliability and performance with no local hot spots.
The system requires capable nodes with large amounts of memory to maintain routing tables. Fortunately, these are becoming less expensive all the time.
The system gives a very quick, relatively accurate guess about whether a node is in the network, because complete, though out-of-date routing information is present in every node. However, this is not the same as knowing whether a node is in the network. This guess may be adequate for most tariff network use, like telephony, but it may not be adequate for safety-related military or avionics .
HSLS has good scalability properties. The asymptotic scalability of its total overhead is O(N^1.5), compared to standard link state, which scales as O(N^2), where N is the number of nodes in the network.
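For example, in a network of 1,000 nodes this is the difference between roughly 3.2 × 10^4 and 10^6 in the same units of overhead.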
Because HSLS sends distant updates infrequently, nodes do not have recent information about whether a distant node is still present. This issue is present to some extent in all link state protocols, because the link state database may still contain an announcement from a failed node. However, protocols like OSPF will propagate a link state update from the failed node's neighbors, and thus all nodes will learn quickly of the failed node's demise (or disconnection). With HSLS, one can't disambiguate between a node that is still present 10 hops away and a failed node until former neighbors send long-distance announcements. Thus, HSLS may fail in some circumstances requiring high assurance.
While the papers describing HSLS do not focus on security, techniques such as digital signatures on routing updates can be used with HSLS (similar to OSPF with Digital Signatures ), and BBN has implemented HSLS with digital signatures on neighbor discovery messages and link state updates. Such schemes are challenging in practice because in the ad hoc environment reachability of public key infrastructure servers cannot be assured. Like almost all routing protocols, HSLS does not include mechanisms to protect data traffic. (See IPsec and TLS .) | https://en.wikipedia.org/wiki/Hazy_Sighted_Link_State_Routing_Protocol |
The helium hydride ion , hydridohelium(1+) ion , or helonium is a cation ( positively charged ion ) with chemical formula HeH + . It consists of a helium atom bonded to a hydrogen atom, with one electron removed. It can also be viewed as protonated helium. It is the lightest heteronuclear ion, and is believed to be the first compound formed in the Universe after the Big Bang . [ 3 ]
The ion was first produced in a laboratory in 1925. It is stable in isolation, but extremely reactive, and cannot be prepared in bulk, because it would react with any other molecule with which it came into contact. Noted as the strongest known acid —stronger than even fluoroantimonic acid —its occurrence in the interstellar medium had been conjectured since the 1970s, [ 4 ] and it was finally detected in April 2019 using the airborne SOFIA telescope . [ 5 ] [ 6 ]
The helium hydride ion is isoelectronic with molecular hydrogen ( H 2 ). [ 7 ]
Unlike the dihydrogen ion H + 2 , the helium hydride ion has a permanent dipole moment , which makes its spectroscopic characterization easier. [ 8 ] The calculated dipole moment of HeH + is 2.26 or 2.84 D . [ 9 ] The electron density in the ion is higher around the helium nucleus than the hydrogen. 80% of the electron charge is closer to the helium nucleus than to the hydrogen nucleus. [ 10 ]
Spectroscopic detection is hampered, because one of its most prominent spectral lines, at 149.14 μm , coincides with a doublet of spectral lines belonging to the methylidyne radical ⫶ CH. [ 3 ]
The length of the covalent bond in the ion is 0.772 Å [ 11 ] or 77.2 pm .
The helium hydride ion has six relatively stable isotopologues , that differ in the isotopes of the two elements, and hence in the total atomic mass number ( A ) and the total number of neutrons ( N ) in the two nuclei:
They all have three protons and two electrons. The first three are generated by radioactive decay of tritium in the molecules HT = 1 H 3 H , DT = 2 H 3 H , and T 2 = 3 H 2 , respectively. The last three can be generated by ionizing the appropriate isotopologue of H 2 in the presence of helium-4. [ 7 ]
The following isotopologues of the helium hydride ion, of the dihydrogen ion H + 2 , and of the trihydrogen ion H + 3 have the same total atomic mass number A :
The masses in each row above are not equal, though, because the binding energies in the nuclei are different. [ 16 ]
Unlike the helium hydride ion, the neutral helium hydride molecule HeH is not stable in the ground state. However, it does exist in an excited state as an excimer (HeH*), and its spectrum was first observed in the mid-1980s. [ 19 ] [ 20 ] [ 21 ]
The neutral molecule is the first entry in the Gmelin database . [ 4 ]
Since HeH + reacts with every substance, it cannot be stored in any container. As a result, its chemistry must be studied by creating it in situ .
Reactions with organic substances can be studied by substituting hydrogen in the desired organic compound with tritium . The decay of tritium to 3 He + followed by its extraction of a hydrogen atom from the compound yields 3 HeH + , which is then surrounded by the organic material and will in turn react. [ 22 ] [ 23 ]
HeH + cannot be prepared in a condensed phase , as it would donate a proton to any anion , molecule or atom that it came in contact with. It has been shown to protonate O 2 , NH 3 , SO 2 , H 2 O , and CO 2 , giving HO + 2 , NH + 4 , HSO + 2 , H 3 O + , and HCO + 2 respectively. [ 22 ] Other molecules such as nitric oxide , nitrogen dioxide , nitrous oxide , hydrogen sulfide , methane , acetylene , ethylene , ethane , methanol and acetonitrile react but break up due to the large amount of energy produced. [ 22 ]
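For example, the proton transfer to water can be written HeH + + H 2 O → H 3 O + + He.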
In fact, HeH + is the strongest known acid , with a proton affinity of 177.8 kJ/mol, or a p K a of −63. [ 24 ]
Additional helium atoms can attach to HeH + to form larger clusters such as He 2 H + , He 3 H + , He 4 H + , He 5 H + and He 6 H + . [ 22 ]
The dihelium hydride cation, He 2 H + , is formed by the reaction of dihelium cation with molecular hydrogen:
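He + 2 + H 2 → He 2 H + + H (as the reaction is usually written).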
It is a linear ion with hydrogen in the centre. [ 22 ]
The hexahelium hydride ion, He 6 H + , is particularly stable. [ 22 ]
Other helium hydride ions are known or have been studied theoretically. Helium dihydride ion, or dihydridohelium(1+) , HeH + 2 , has been observed using microwave spectroscopy. [ 25 ] It has a calculated binding energy of 25.1 kJ/mol, while trihydridohelium(1+) , HeH + 3 , has a calculated binding energy of 0.42 kJ/mol. [ 26 ]
Hydridohelium(1+), specifically [ 4 He 1 H] + , was first detected indirectly in 1925 by T. R. Hogness and E. G. Lunn. They were injecting protons of known energy into a rarefied mixture of hydrogen and helium, in order to study the formation of hydrogen ions like H + , H + 2 and H + 3 . They observed that H + 3 appeared at the same beam energy (16 eV ) as H + 2 , and its concentration increased with pressure much more than that of the other two ions. From these data, they concluded that the H + 2 ions were transferring a proton to molecules that they collided with, including helium. [ 7 ]
In 1933, K. Bainbridge used mass spectrometry to compare the masses of the ions [ 4 He 1 H] + (helium hydride ion) and [ 2 H 2 1 H] + (twice-deuterated trihydrogen ion) in order to obtain an accurate measurement of the atomic mass of deuterium relative to that of helium. Both ions have 3 protons, 2 neutrons, and 2 electrons. He also compared [ 4 He 2 H] + (helium deuteride ion) with [ 2 H 3 ] + ( trideuterium ion), both with 3 protons and 3 neutrons. [ 16 ]
The first attempt to compute the structure of the HeH + ion (specifically, [ 4 He 1 H] + ) by quantum mechanical theory was made by J. Beach in 1936. [ 27 ] Improved computations were sporadically published over the next decades. [ 28 ] [ 29 ]
H. Schwartz observed in 1955 that the decay of the tritium molecule T 2 = 3 H 2 should generate the helium hydride ion [ 3 HeT] + with high probability.
In 1963, F. Cacace at the Sapienza University of Rome conceived the decay technique for preparing and studying organic radicals and carbenium ions. [ 30 ] In a variant of that technique, exotic species like methanium are produced by reacting organic compounds with the [ 3 HeT] + that is produced by the decay of T 2 that is mixed with the desired reagents. Much of what we know about the chemistry of [HeH] + came through this technique. [ 31 ]
In 1980, V. Lubimov (Lyubimov) at the ITEP laboratory in Moscow claimed to have detected a mildly significant rest mass (30 ± 16) eV for the neutrino , by analyzing the energy spectrum of the β decay of tritium. [ 32 ] The claim was disputed, and several other groups set out to check it by studying the decay of molecular tritium T 2 . It was known that some of the energy released by that decay would be diverted to the excitation of the decay products, including [ 3 HeT] + ; and this phenomenon could be a significant source of error in that experiment. This observation motivated numerous efforts to precisely compute the expected energy states of that ion in order to reduce the uncertainty of those measurements. [ citation needed ] Many have improved the computations since then, and now there is quite good agreement between computed and experimental properties; including for the isotopologues [ 4 He 2 H] + , [ 3 He 1 H] + , and [ 3 He 2 H] + . [ 18 ] [ 13 ]
In 1956, M. Cantwell predicted theoretically that the spectrum of vibrations of that ion should be observable in the infrared; and that the spectra of the deuterium and common hydrogen isotopologues ( [ 3 HeD] + and [ 3 He 1 H] + ) should lie closer to visible light and hence be easier to observe. [ 12 ] The first detection of the spectrum of [ 4 He 1 H] + was made by D. Tolliver and others in 1979, at wavenumbers between 1,700 and 1,900 cm −1 . [ 33 ] In 1982, P. Bernath and T. Amano detected nine infrared lines between 2,164 and 3,158 cm −1 . [ 17 ]
HeH + has been conjectured since the 1970s to exist in the interstellar medium . [ 34 ] Its first detection, in the nebula NGC 7027 , was reported in an article published in the journal Nature in April 2019. [ 5 ]
The helium hydride ion is formed during the decay of tritium in the molecule HT or tritium molecule T 2 . Although excited by the recoil from the beta decay, the molecule remains bound together. [ 35 ]
It is believed to be the first compound to have formed in the universe, [ 3 ] and is of fundamental importance in understanding the chemistry of the early universe. [ 36 ] This is because hydrogen and helium were almost the only types of atoms formed in Big Bang nucleosynthesis . Stars formed from the primordial material should contain HeH + , which could influence their formation and subsequent evolution. In particular, its strong dipole moment makes it relevant to the opacity of zero-metallicity stars . [ 3 ] HeH + is also thought to be an important constituent of the atmospheres of helium-rich white dwarfs, where it increases the opacity of the gas and causes the star to cool more slowly. [ 37 ]
HeH + could be formed in the cooling gas behind dissociative shocks in dense interstellar clouds, such as the shocks caused by stellar winds , supernovae and outflowing material from young stars. If the speed of the shock is greater than about 90 kilometres per second (56 mi/s), quantities large enough to detect might be formed. If detected, the emissions from HeH + would then be useful tracers of the shock. [ 38 ]
Several locations had been suggested as possible places HeH + might be detected. These included cool helium stars , [ 3 ] H II regions , [ 39 ] and dense planetary nebulae , [ 39 ] like NGC 7027 , [ 36 ] where, in April 2019, HeH + was reported to have been detected. [ 5 ] | https://en.wikipedia.org/wiki/He+-H |
The helium dimer is a van der Waals molecule with formula He 2 consisting of two helium atoms . [ 2 ] This chemical is the largest known diatomic molecule (a molecule consisting of two atoms bonded together). The bond that holds this dimer together is so weak that it will break if the molecule rotates or vibrates too much. It can only exist at very low cryogenic temperatures.
Two excited helium atoms can also bond to each other in a form called an excimer . This was discovered from a spectrum of helium that contained bands first seen in 1912. Written as He 2 * with the * meaning an excited state, it is the first known Rydberg molecule . [ 3 ]
Several dihelium ions also exist, having net charges of negative one, positive one, and positive two. Two helium atoms can be confined together without bonding in the cage of a fullerene .
Based on molecular orbital theory , He 2 should not exist, and a chemical bond cannot form between the atoms. However, the van der Waals force exists between helium atoms as shown by the existence of liquid helium , and at a certain range of distances between atoms the attraction exceeds the repulsion. So a molecule composed of two helium atoms bound by the van der Waals force can exist. [ 4 ] The existence of this molecule was proposed as early as 1937. [ 5 ]
He 2 is the largest known molecule of two atoms when in its ground state , due to its extremely long bond length. [ 4 ] The He 2 molecule has a large separation distance between the atoms of about 5,200 picometres (52 Å ). This is the largest for a diatomic molecule without rovibronic excitation. The binding energy is only about 1.3 mK, 10^−7 eV [ 6 ] [ 7 ] [ 8 ] or 1.1×10^−5 kcal/mol. [ 9 ]
Both helium atoms in the dimer can be ionized by a single photon with energy 63.86 eV. The proposed mechanism for this double ionization is that the photon ejects an electron from one atom, and then that electron hits the other helium atom and ionizes that as well. [ 10 ] The dimer then explodes as two helium cations repel each other, moving with the same speed but in opposite directions. [ 10 ]
A dihelium molecule bound by Van der Waals forces was first proposed by John Clarke Slater in 1928. [ 11 ]
The helium dimer can be formed in small amounts when helium gas expands and cools as it passes through a nozzle in a gas beam. [ 2 ] Only the isotope 4 He can form molecules like this; 4 He 3 He and 3 He 3 He do not exist, as they do not have a stable bound state . [ 6 ] The amount of the dimer formed in the gas beam is of the order of one percent. [ 10 ]
He 2 + is a related ion bonded by a half covalent bond . It can be formed in a helium electrical discharge. It recombines with electrons to form an electronically excited He 2 ( a 3 Σ + u ) excimer molecule. [ 12 ] Both of these molecules are much smaller with more normally sized interatomic distances. He 2 + reacts with N 2 , Ar , Xe , O 2 , and CO 2 to form cations and neutral helium atoms. [ 13 ]
The helium dication dimer He 2 2+ releases a large amount of energy when it dissociates, around 835 kJ/mol. [ 14 ] However, an energy barrier of 138.91 kJ/mol prevents immediate decay. This ion was studied theoretically by Linus Pauling in 1933. [ 15 ] It is isoelectronic with the hydrogen molecule. [ 16 ] [ 17 ] He 2 2+ is the smallest possible molecule with a double positive charge. It is detectable using mass spectrometry. [ 14 ] [ 18 ]
The negative helium dimer He 2 − is metastable and was discovered by Bae, Coggiola and Peterson in 1984 by passing He 2 + through caesium vapor. [ 19 ] Subsequently, H. H. Michels theoretically confirmed its existence and concluded that the 4 Π g state of He 2 − is bound relative to the a 2 Σ + u state of He 2 . [ 20 ] The calculated electron affinity is 0.233 eV compared to 0.077 eV for the He − [ 4 P ∘ ] ion. The He 2 − decays through the long-lived 5/2g component with τ~350 μsec and the much shorter-lived 3/2g, 1/2g components with τ~10 μsec. The 4 Π g state has a 1σ 2 g 1σ u 2σ g 2π u electronic configuration, its electron affinity E is 0.18±0.03 eV, and its lifetime is 135±15 μsec; only the v=0 vibrational state is responsible for this long-lived state. [ 21 ]
The molecular helium anion is also found in liquid helium that has been excited by electrons with an energy level higher than 22 eV. This takes place firstly by penetration of liquid He, taking 1.2 eV, followed by excitation of a He atom electron to the 3 P level, which takes 19.8 eV. The electron can then combine with another helium atom and the excited helium atom to form He 2 − . He 2 − repels helium atoms, and so has a void around it. It will tend to migrate to the surface of liquid helium. [ 22 ]
In a normal helium atom, two electrons are found in the 1s orbital. However, if sufficient energy is added, one electron can be elevated to a higher energy level. This high-energy electron can become a valence electron, and the electron that remains in the 1s orbital is a core electron. Two excited helium atoms can form a covalent bond, creating a molecule called dihelium that lasts for times from the order of a microsecond up to a second or so. [ 3 ] (Excited helium atoms in the 2 3 S state can last for up to an hour, and react like alkali metal atoms. [ 23 ] )
The first clues that dihelium exists were noticed in 1900 when W. Heuse observed a band spectrum in a helium discharge. However, no information about the nature of the spectrum was published. Independently E. Goldstein from Germany and W. E. Curtis from London published details of the spectrum in 1913. [ 24 ] [ 25 ] Curtis was called away to military service in World War I, and the study of the spectrum was continued by Alfred Fowler . Fowler recognised that the double headed bands fell into two sequences analogous to principal and diffuse series in line spectra. [ 26 ]
The emission band spectrum shows a number of bands that degrade towards the red, meaning that the lines thin out and the spectrum weakens towards the longer wavelengths. Only one band with a green band head at 5732 Å degrades towards the violet. Other strong band heads are at 6400 (red), 4649, 4626, 4546, 4157.8, 3777, 3677, 3665, 3356.5, and 3348.5 Å. There are also some headless bands and extra lines in the spectrum. [ 24 ] Weak bands are found with heads at 5133 and 5108. [ 26 ]
If the valence electron is in a 2s, 3s, or 3d orbital, a 1 Σ u state results; if it is in a 2p, 3p, or 4p orbital, a 1 Σ g state results. [ 27 ] The ground state is X 1 Σ g + . [ 28 ]
The three lowest triplet states of He 2 have designations a 3 Σ u , b 3 Π g and c 3 Σ g . [ 29 ] The a 3 Σ u state with no vibration ( v =0) has a long metastable lifetime of 18 s, much longer than the lifetime for other states or inert gas excimers. [ 3 ] The explanation is that the a 3 Σ u state has no electron orbital angular momentum, as all the electrons are in S orbitals for the helium state. [ 3 ]
The lower lying singlet states of He 2 are A 1 Σ u , B 1 Π g and C 1 Σ g . [ 30 ] The excimer molecules are much smaller and more tightly bound than the van der Waals bonded helium dimer. For the A 1 Σ u state the binding energy is around 2.5 eV, with a separation of the atoms of 103.9 pm. The C 1 Σ g state has a binding energy 0.643 eV and the separation between atoms is 109.1 pm. [ 27 ] These two states have a repulsive range of distances with a maximum around 300 pm, where if the excited atoms approach, they have to overcome an energy barrier. [ 27 ] The singlet state A 1 Σ + u is very unstable with a lifetime only nanoseconds long. [ 31 ]
The spectrum of the He 2 excimer contains bands made up of a great number of lines arising from transitions between different rotational and vibrational states, combined with different electronic transitions. The lines can be grouped into P, Q and R branches. However, the even-numbered rotational levels do not have Q-branch lines, because both nuclei have spin 0. Numerous electronic states of the molecule have been studied, including Rydberg states with the number of the shell up to 25. [ 32 ]
Helium discharge lamps produce vacuum ultraviolet radiation from helium molecules. When high-energy protons hit helium gas, UV emission at around 600 Å is also produced by the decay of excited, highly vibrating molecules of He 2 in the A 1 Σ u state to the ground state. [ 33 ] The UV radiation from excited helium molecules is used in the pulsed discharge ionization detector (PDHID), which is capable of detecting the contents of mixed gases at levels below parts per billion. [ 34 ]
The Hopfield continuum (named after J. J. Hopfield ) is a band of ultraviolet light between 600 and 1000 Å in wavelength formed by photodissociation of helium molecules. [ 33 ]
One mechanism for formation of the helium molecules is firstly a helium atom becomes excited with one electron in the 2 1 S orbital. This excited atom meets two other non excited helium atoms in a three body association and reacts to form a A 1 Σ u state molecule with maximum vibration and a helium atom. [ 33 ]
Helium molecules in the quintet state 5 Σ + g can be formed by the reaction of two spin polarised helium atoms in He(2 3 S 1 ) states. This molecule has a high energy level of 20 eV. The highest vibration level allowed is v=14. [ 35 ]
In liquid helium the excimer forms a solvation bubble. In a 3 d state a He * 2 molecule is surrounded by a bubble 12.7 Å in radius at atmospheric pressure . When pressure is increased to 24 atmospheres the bubble radius shrinks to 10.8 Å. This changing bubble size causes a shift in the fluorescence bands. [ 36 ]
In very strong magnetic fields, (around 750,000 Tesla) and low enough temperatures, helium atoms attract, and can even form linear chains. This may happen in white dwarfs and neutron stars. [ 37 ] The bond length and dissociation energy both increase as the magnetic field increases. [ 38 ]
The dihelium excimer is an important component in the helium discharge lamp.
A second use of the dihelium ion is in ambient ionization techniques using low-temperature plasma. In this technique, helium atoms are excited and then combine to yield the dihelium ion. The He 2 + goes on to react with N 2 in the air to make N 2 + . These ions react with a sample surface to make positive ions that are used in mass spectrometry . The plasma containing the helium dimer can be as low as 30 °C in temperature, and this reduces heat damage to samples. [ 39 ]
He 2 has been shown to form van der Waals compounds with other atoms forming bigger clusters such as 24 MgHe 2 and 40 CaHe 2 . [ 40 ]
The helium-4 trimer ( 4 He 3 ), a cluster of three helium atoms, is predicted to have an excited state which is an Efimov state . [ 41 ] [ 42 ] This was confirmed experimentally in 2015. [ 43 ]
Two helium atoms can fit inside larger fullerenes, including C 70 and C 84 . These can be detected by a small shift in the nuclear magnetic resonance of 3 He, and by mass spectrometry. C 84 with enclosed helium can contain 20% He 2 @C 84 , whereas C 78 has 10% and C 76 has 8%. The larger cavities are more likely to hold more atoms. [ 44 ] Even when the two helium atoms are placed close to each other in a small cage, there is no chemical bond between them. [ 45 ] [ 46 ] The presence of two He atoms in a C 60 fullerene cage is only predicted to have a small effect on the reactivity of the fullerene. [ 47 ] The effect is to have electrons withdrawn from the endohedral helium atoms, giving them a slight positive partial charge to produce He 2 δ+ , which has a stronger bond than uncharged helium atoms. [ 48 ] However, by the Löwdin definition there is a bond present. [ 49 ]
The two helium atoms inside the C 60 cage are separated by 1.979 Å and the distance from a helium atom to the carbon cage is 2.507 Å. The charge transfer gives 0.011 electron charge units to each helium atom. There should be at least 10 vibrational levels for the He-He pair. [ 49 ] | https://en.wikipedia.org/wiki/He2 |
The helium hydride ion , hydridohelium(1+) ion , or helonium is a cation ( positively charged ion ) with chemical formula HeH + . It consists of a helium atom bonded to a hydrogen atom, with one electron removed. It can also be viewed as protonated helium. It is the lightest heteronuclear ion, and is believed to be the first compound formed in the Universe after the Big Bang . [ 3 ]
The ion was first produced in a laboratory in 1925. It is stable in isolation, but extremely reactive, and cannot be prepared in bulk, because it would react with any other molecule with which it came into contact. Noted as the strongest known acid —stronger than even fluoroantimonic acid —its occurrence in the interstellar medium had been conjectured since the 1970s, [ 4 ] and it was finally detected in April 2019 using the airborne SOFIA telescope . [ 5 ] [ 6 ]
The helium hydrogen ion is isoelectronic with molecular hydrogen ( H 2 ). [ 7 ]
Unlike the dihydrogen ion H + 2 , the helium hydride ion has a permanent dipole moment , which makes its spectroscopic characterization easier. [ 8 ] The calculated dipole moment of HeH + is 2.26 or 2.84 D . [ 9 ] The electron density in the ion is higher around the helium nucleus than the hydrogen. 80% of the electron charge is closer to the helium nucleus than to the hydrogen nucleus. [ 10 ]
Spectroscopic detection is hampered, because one of its most prominent spectral lines, at 149.14 μm , coincides with a doublet of spectral lines belonging to the methylidyne radical ⫶ CH. [ 3 ]
The length of the covalent bond in the ion is 0.772 Å [ 11 ] or 77.2 pm .
The helium hydride ion has six relatively stable isotopologues , that differ in the isotopes of the two elements, and hence in the total atomic mass number ( A ) and the total number of neutrons ( N ) in the two nuclei:
They all have three protons and two electrons. The first three are generated by radioactive decay of tritium in the molecules HT = 1 H 3 H , DT = 2 H 3 H , and T 2 = 3 H 2 , respectively. The last three can be generated by ionizing the appropriate isotopologue of H 2 in the presence of helium-4. [ 7 ]
The following isotopologues of the helium hydride ion, of the dihydrogen ion H + 2 , and of the trihydrogen ion H + 3 have the same total atomic mass number A :
The masses in each row above are not equal, though, because the binding energies in the nuclei are different. [ 16 ]
Unlike the helium hydride ion, the neutral helium hydride molecule HeH is not stable in the ground state. However, it does exist in an excited state as an excimer (HeH*), and its spectrum was first observed in the mid-1980s. [ 19 ] [ 20 ] [ 21 ]
The neutral molecule is the first entry in the Gmelin database . [ 4 ]
Since HeH + reacts with every substance, it cannot be stored in any container. As a result, its chemistry must be studied by creating it in situ .
Reactions with organic substances can be studied by substituting hydrogen in the desired organic compound with tritium . The decay of tritium to 3 He + followed by its extraction of a hydrogen atom from the compound yields 3 HeH + , which is then surrounded by the organic material and will in turn react. [ 22 ] [ 23 ]
HeH + cannot be prepared in a condensed phase , as it would donate a proton to any anion , molecule or atom that it came in contact with. It has been shown to protonate O 2 , NH 3 , SO 2 , H 2 O , and CO 2 , giving HO + 2 , NH + 4 , HSO + 2 , H 3 O + , and HCO + 2 respectively. [ 22 ] Other molecules such as nitric oxide , nitrogen dioxide , nitrous oxide , hydrogen sulfide , methane , acetylene , ethylene , ethane , methanol and acetonitrile react but break up due to the large amount of energy produced. [ 22 ]
In fact, HeH + is the strongest known acid , with a proton affinity of 177.8 kJ/mol, or a p K a of −63. [ 24 ]
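For example, the proton transfer to water mentioned above can be written out explicitly (a representative reaction consistent with the products already listed): HeH + + H 2 O → H 3 O + + He. The neutral helium left behind binds a proton only very weakly, which is why the ion acts as such a strong proton donor.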
Additional helium atoms can attach to HeH + to form larger clusters such as He 2 H + , He 3 H + , He 4 H + , He 5 H + and He 6 H + . [ 22 ]
The dihelium hydride cation, He 2 H + , is formed by the reaction of the dihelium cation with molecular hydrogen: He + 2 + H 2 → He 2 H + + H.
It is a linear ion with hydrogen in the centre. [ 22 ]
The hexahelium hydride ion, He 6 H + , is particularly stable. [ 22 ]
Other helium hydride ions are known or have been studied theoretically. Helium dihydride ion, or dihydridohelium(1+) , HeH + 2 , has been observed using microwave spectroscopy. [ 25 ] It has a calculated binding energy of 25.1 kJ/mol, while trihydridohelium(1+) , HeH + 3 , has a calculated binding energy of 0.42 kJ/mol. [ 26 ]
Hydridohelium(1+), specifically [ 4 He 1 H] + , was first detected indirectly in 1925 by T. R. Hogness and E. G. Lunn. They were injecting protons of known energy into a rarefied mixture of hydrogen and helium, in order to study the formation of hydrogen ions like H + , H + 2 and H + 3 . They observed that H + 3 appeared at the same beam energy (16 eV ) as H + 2 , and its concentration increased with pressure much more than that of the other two ions. From these data, they concluded that the H + 2 ions were transferring a proton to molecules that they collided with, including helium. [ 7 ]
In 1933, K. Bainbridge used mass spectrometry to compare the masses of the ions [ 4 He 1 H] + (helium hydride ion) and [ 2 H 2 1 H] + (twice-deuterated trihydrogen ion) in order to obtain an accurate measurement of the atomic mass of deuterium relative to that of helium. Both ions have 3 protons, 2 neutrons, and 2 electrons. He also compared [ 4 He 2 H] + (helium deuteride ion) with [ 2 H 3 ] + ( trideuterium ion), both with 3 protons and 3 neutrons. [ 16 ]
The first attempt to compute the structure of the HeH + ion (specifically, [ 4 He 1 H] + ) by quantum mechanical theory was made by J. Beach in 1936. [ 27 ] Improved computations were sporadically published over the next decades. [ 28 ] [ 29 ]
H. Schwartz observed in 1955 that the decay of the tritium molecule T 2 = 3 H 2 should generate the helium hydride ion [ 3 HeT] + with high probability.
In 1963, F. Cacace at the Sapienza University of Rome conceived the decay technique for preparing and studying organic radicals and carbenium ions. [ 30 ] In a variant of that technique, exotic species like methanium are produced by reacting organic compounds with the [ 3 HeT] + that is produced by the decay of T 2 that is mixed with the desired reagents. Much of what we know about the chemistry of [HeH] + came through this technique. [ 31 ]
In 1980, V. Lubimov (Lyubimov) at the ITEP laboratory in Moscow claimed to have detected a mildly significant rest mass (30 ± 16) eV for the neutrino , by analyzing the energy spectrum of the β decay of tritium. [ 32 ] The claim was disputed, and several other groups set out to check it by studying the decay of molecular tritium T 2 . It was known that some of the energy released by that decay would be diverted to the excitation of the decay products, including [ 3 HeT] + ; and this phenomenon could be a significant source of error in that experiment. This observation motivated numerous efforts to precisely compute the expected energy states of that ion in order to reduce the uncertainty of those measurements. [ citation needed ] Many have improved the computations since then, and now there is quite good agreement between computed and experimental properties; including for the isotopologues [ 4 He 2 H] + , [ 3 He 1 H] + , and [ 3 He 2 H] + . [ 18 ] [ 13 ]
In 1956, M. Cantwell predicted theoretically that the spectrum of vibrations of that ion should be observable in the infrared, and that the spectra of the deuterium and common hydrogen isotopologues ( [ 3 HeD] + and [ 3 He 1 H] + ) should lie closer to visible light and hence be easier to observe. [ 12 ] The first detection of the spectrum of [ 4 He 1 H] + was made by D. Tolliver and others in 1979, at wavenumbers between 1,700 and 1,900 cm −1 . [ 33 ] In 1982, P. Bernath and T. Amano detected nine infrared lines between 2,164 and 3,158 cm −1 . [ 17 ]
HeH + has been conjectured since the 1970s to exist in the interstellar medium . [ 34 ] Its first detection, in the nebula NGC 7027 , was reported in an article published in the journal Nature in April 2019. [ 5 ]
The helium hydride ion is formed during the decay of tritium in the molecule HT or tritium molecule T 2 . Although excited by the recoil from the beta decay, the molecule remains bound together. [ 35 ]
It is believed to be the first compound to have formed in the universe, [ 3 ] and is of fundamental importance in understanding the chemistry of the early universe. [ 36 ] This is because hydrogen and helium were almost the only types of atoms formed in Big Bang nucleosynthesis . Stars formed from the primordial material should contain HeH + , which could influence their formation and subsequent evolution. In particular, its strong dipole moment makes it relevant to the opacity of zero-metallicity stars . [ 3 ] HeH + is also thought to be an important constituent of the atmospheres of helium-rich white dwarfs, where it increases the opacity of the gas and causes the star to cool more slowly. [ 37 ]
HeH + could be formed in the cooling gas behind dissociative shocks in dense interstellar clouds, such as the shocks caused by stellar winds , supernovae and outflowing material from young stars. If the speed of the shock is greater than about 90 kilometres per second (56 mi/s), quantities large enough to detect might be formed. If detected, the emissions from HeH + would then be useful tracers of the shock. [ 38 ]
Several locations had been suggested as possible places HeH + might be detected. These included cool helium stars , [ 3 ] H II regions , [ 39 ] and dense planetary nebulae , [ 39 ] like NGC 7027 , [ 36 ] where, in April 2019, HeH + was reported to have been detected. [ 5 ] | https://en.wikipedia.org/wiki/HeH2+ |
The He Jiankui genome editing incident is a scientific and bioethical controversy concerning the use of genome editing following its first use on humans by Chinese scientist He Jiankui , who edited the genomes of human embryos in 2018. [ 1 ] [ 2 ] He became widely known on 26 November 2018 [ 3 ] after he announced that he had created the first human genetically edited babies. He was listed in Time magazine's 100 most influential people of 2019. [ 4 ] The affair led to ethical and legal controversies, resulting in the indictment of He and two of his collaborators, Zhang Renli and Qin Jinzhou. He eventually received widespread international condemnation.
He Jiankui, working at the Southern University of Science and Technology (SUSTech) in Shenzhen , China, started a project to help people with HIV-related fertility problems , specifically involving HIV-positive fathers and HIV-negative mothers. The subjects were offered standard in vitro fertilisation services and, in addition, use of CRISPR gene editing ( CRISPR/Cas9 ), a technology for modifying DNA . The embryos' genomes were edited to remove the CCR5 gene in an attempt to confer genetic resistance to HIV . [ 5 ] The clinical project was conducted secretly until 25 November 2018, when MIT Technology Review broke the story of the human experiment based on information from the Chinese clinical trials registry. Compelled by the situation, he immediately announced the birth of genome-edited babies in a series of five YouTube videos the same day. [ 6 ] [ 7 ] The first babies, known by their pseudonyms Lulu ( Chinese : 露露 ) and Nana ( 娜娜 ), are twin girls born in October 2018, and a third baby was born in a second birth in 2019, [ 8 ] [ 9 ] later named Amy. [ 10 ] He reported that the babies were born healthy. [ 11 ]
His actions received widespread criticism, [ 12 ] [ 13 ] and included concern for the girls' well-being. [ 5 ] [ 14 ] [ 15 ] After his presentation on the research at the Second International Summit on Human Genome Editing at the University of Hong Kong on 28 November 2018, Chinese authorities suspended his research activities the following day. [ 16 ] On 30 December 2019, a Chinese district court found He Jiankui guilty of illegal practice of medicine, sentencing him to three years in prison with a fine of 3 million yuan. [ 17 ] [ 18 ] Zhang Renli and Qin Jinzhou received an 18-month prison sentence and a 500,000-yuan fine, and were banned from working in assisted reproductive technology for life. [ 19 ]
He Jiankui has been widely described as a mad scientist . [ failed verification ] [ 20 ] [ 21 ] [ 22 ] The impact of human gene editing on resistance to HIV infection and other body functions in the experimental infants remains controversial. The World Health Organization has issued three reports on guidelines for human genome editing since 2019, [ 23 ] and the Chinese government has prepared regulations since May 2019. [ 24 ] In 2020, the National People's Congress of China passed the Civil Code and an amendment to the Criminal Law that prohibit human gene editing and cloning with no exceptions; according to the Criminal Law, violators will be held criminally liable, with a maximum sentence of seven years in prison in serious cases. [ 25 ] [ 26 ]
Since 2016, He Jiankui, then an associate professor at the Southern University of Science and Technology (SUSTech) in Shenzhen, together with Zhang Renli and Qin Jinzhou, had been using human embryos in gene-editing work aimed at assisted reproductive medicine. [ 27 ] On 10 June 2017, a Chinese couple, an HIV-positive father and HIV-negative mother, pseudonymously called Mark and Grace, [ 28 ] attended a conference held by He at SUSTech. They were offered in vitro fertilisation (IVF) along with gene-editing of their embryos so as to develop innate resistance to HIV infection in their offspring. They agreed to volunteer through informed consent and the experiment was carried out in secrecy. Six other couples with similar fertility problems were subsequently recruited. [ 20 ] The couples were recruited through a Beijing-based AIDS advocacy group called Baihualin China League. [ 29 ] When later examined, the consent forms were noted as incomplete and inadequate. [ 28 ] The couple were reported to have agreed to this experiment because, by Chinese rules, HIV-positive fathers were normally not allowed to have children using IVF. [ 30 ]
When the place of the clinical experiment was investigated, SUSTech declared that the university was not involved and that He had been on unpaid leave since February 2018, and his department attested that they were unaware of the research project. [ 31 ] [ 32 ]
He Jiankui , the researcher, took sperm and eggs from the couples, performed in vitro fertilisation with the eggs and sperm, and then edited the genomes of the embryos using CRISPR/Cas9 . [ 29 ] The editing targeted a gene, CCR5 , that codes for a protein that HIV uses to enter cells. [ 33 ] [ 34 ] He was trying to reproduce the phenotype of a specific mutation in the gene, CCR5-Δ32 , that few people naturally have and that possibly confers innate resistance to HIV , [ 33 ] as seen in the case of the Berlin Patient . [ 35 ] However, rather than introducing the known CCR5-Δ32 mutation, he introduced a frameshift mutation intended to make the CCR5 protein entirely nonfunctional. [ 36 ] According to He, Lulu and Nana carried both functional and mutant copies of CCR5 given mosaicism inherent in the present state of the art in germ-line editing. [ 37 ] There are forms of HIV that use a different receptor instead of CCR5; therefore, the work of He did not theoretically protect Lulu and Nana from those forms of HIV. [ 33 ] He used a preimplantation genetic diagnosis process on the embryos that were edited, where three to five single cells were removed, and fully sequenced them to identify chimerism and off-target errors. He says that during the pregnancy, cell-free fetal DNA was fully sequenced to check for off-target errors, and an amniocentesis was offered to check for problems with the pregnancy, but the mother declined. [ 33 ] Lulu and Nana were born in secrecy in October 2018. [ 7 ] They were reported by He to be normal and healthy. [ 11 ]
He Jiankui was planning to reveal his experiments and the birth of Lulu and Nana at the Second International Summit on Human Genome Editing, which was to be organized at the University of Hong Kong during 27–29 November 2018. [ 38 ] However, on 25 November 2018, Antonio Regalado, senior editor for biomedicine of MIT Technology Review , posted on the journal's website about the experiment based on He Jiankui's application for conducting a clinical trial, which had been posted earlier on the Chinese clinical trials registry . At the time, He refused to comment on the conditions of the pregnancy. [ 6 ] [ 39 ] Prompted by the publicity, He immediately posted about his experiment and the successful birth of the twins on YouTube in five videos the same day. [ 40 ] [ 41 ] The next day, the Associated Press published the first formal news report, most likely an account prepared before the publicity. [ 1 ] His experiment had received no independent confirmation, and had not been peer reviewed or published in a scientific journal . [ 42 ] [ 43 ] [ 44 ] Soon after He's revelation, the university at which He was previously employed, the Southern University of Science and Technology, stated that He's research was conducted outside of their campus. [ 45 ] China's National Health Commission also ordered provincial health officials to investigate his case soon after the experiment was revealed. [ 42 ]
Amidst the furore, He was allowed to present his research at the Hong Kong meeting on 28 November under the title " CCR5 gene editing in mouse, monkey, and human embryos using CRISPR–Cas9". During the discussion session, He asserted, "Do you see your friends or relatives who may have a disease? They need help," and continued, "For millions of families with inherited disease or infectious disease, if we have this technology we can help them." [ 46 ] In his speech, He also mentioned a second pregnancy under the same experiment. [ 11 ] No reports were disclosed, but the birth was around August 2019, [ 47 ] and it was affirmed on 30 December when the Chinese court returned a verdict mentioning that there were "three genetically-edited babies". [ 48 ] The baby was later revealed in 2022 as Amy. [ 10 ]
On the news of Lulu and Nana having been born, the People's Daily announced the experimental result as "a historical breakthrough in the application of gene editing technology for disease prevention." [ 49 ] But scientists at the Second International Summit on Human Genome Editing immediately developed serious concerns. Robin Lovell-Badge , head of the Laboratory of Stem Cell Biology and Developmental Genetics at the Francis Crick Institute , who moderated the session on 28 November, recalled that He Jiankui did not mention human embryos in the draft summary of the presentation. [ 50 ] Lovell-Badge received an urgent message on 25 November through Jennifer Doudna of the University of California, Berkeley , a pioneer of the CRISPR/Cas9 technology, to whom He Jiankui had confided the news earlier that morning. [ 20 ] As the news had already broken before the day of the presentation, He Jiankui had to be brought in from his hotel by University of Hong Kong security. Nobel laureate David Baltimore , the chair of the organizing committee of the summit, was the first to react after He's speech, and declared his horror and dismay at He's work. [ 50 ]
He Jiankui did not disclose the parents' names (other than their pseudonyms Mark and Grace) and they did not make themselves available to be interviewed, so their reaction to this experiment and the ensuing controversy is unknown. [ 29 ] There was widespread criticism in the media and scientific community over the conduct of the clinical project and its secrecy, [ 51 ] [ 52 ] and concerns raised for the long term well-being of Lulu and Nana. [ 11 ] [ 44 ] Bioethicist Henry T. Greely of Stanford Law School declared, "I unequivocally condemn the experiment," [ 53 ] and later, "He Jiankui’s experiment was, amazingly, even worse than I first thought." [ 54 ] Kiran Musunuru , one of the experts called on to review He's manuscript and who later wrote a book on the scandal, called it a "historic ethical fiasco, a deeply flawed experiment". [ 55 ]
On the night of 26 November, 122 Chinese scientists issued a statement criticizing his research. They declared that the experiment was unethical, "crazy" and "a huge blow to the global reputation and development of Chinese science". [ 49 ] The Chinese Academy of Medical Sciences made a condemnation statement on 5 January 2019 saying that:
We are opposed to any clinical operation of human embryo genome editing for reproductive purposes in violation of laws, regulations, and ethical norms in the absence of full scientific evaluation. In the rapidly developing area of genome editing technology, our scientific community should uphold the highest standards of bioethics in undertaking responsible biomedical research and applications and uphold our scientific reputation, the basic dignity of human life, and the collective integrity of our scientific community.
The Chinese Government prohibits the genetic manipulation of human gametes, zygotes, and embryos for reproductive purposes ... Jiankui He's operations violated these regulations. [ 56 ]
A series of investigations was opened by He's university, local authorities, and the Chinese government. On 26 November 2018, SUSTech released a public notification on its website condemning He's conduct, mentioning the key points as:
On 28 November 2018, the organising committee of the Second International Summit on Human Genome Editing, led by Baltimore, issued a statement, declaring:
At this summit we heard an unexpected and deeply disturbing claim that human embryos had been edited and implanted, resulting in a pregnancy and the birth of twins. We recommend an independent assessment to verify this claim and to ascertain whether the claimed DNA modifications have occurred. Even if the modifications are verified, the procedure was irresponsible and failed to conform with international norms. Its flaws include an inadequate medical indication, a poorly designed study protocol, a failure to meet ethical standards for protecting the welfare of research subjects, and a lack of transparency in the development, review, and conduct of the clinical procedures. [ 58 ]
On 29 January 2019, it was learned that U.S. Nobel laureate Craig Mello had interviewed He about his experiment with gene-edited babies. [ 59 ] In February 2019, He's claims were reported to have been confirmed by Chinese investigators, according to NPR News . [ 60 ] Around that time, news reports indicated that the Chinese government may have helped fund the CRISPR babies experiment, at least in part, based on newly uncovered documents. [ 61 ] [ 62 ] [ 63 ]
On 29 November 2018, Chinese authorities suspended all of He's research activities, saying his work was "extremely abominable in nature" and a violation of Chinese law. [ 64 ] He was sequestered in a university apartment under some sort of surveillance. [ 65 ] [ 66 ] [ 67 ] On 21 January 2019, He was fired from his job at SUSTech and his teaching and research work at the university was terminated. [ 68 ] The same day, the Guangdong Province administration investigated the "gene editing baby incident", which is explicitly prohibited by the state. [ 69 ] On 30 December 2019, the Shenzhen Nanshan District People's Court found He Jiankui guilty of illegal practice of medicine, sentencing him to three years in prison with a fine of 3 million yuan. [ 70 ] [ 17 ]
Among the collaborators, only two were indicted – Zhang Renli of the Guangdong Academy of Medical Sciences and Guangdong General Hospital, received a two-year prison sentence and a 1-million RMB (about US$145,000) fine, and Qin Jinzhou of the Southern University of Science and Technology received an 18-month prison sentence and a 500,000 RMB (about US$72,000) fine. [ 71 ] The three were found guilty of having "forged ethical review documents and misled doctors into unknowingly implanting gene-edited embryos into two women." [ 72 ] Zhang and Qin were officially banned from working in assisted reproductive technology for life. [ 19 ] In April 2022, He was released from prison. [ 30 ] [ 73 ]
On 26 November 2018, The CRISPR Journal published ahead of print an article by He, Ryan Ferrell, Chen Yuanlin, Qin Jinzhou, and Chen Yangran in which the authors justified the ethical use of CRISPR gene editing in humans. [ 74 ] As the news of CRISPR babies broke out, the editors reexamined the paper and retracted it on 28 December, announcing:
[It] has since been widely reported that Dr. He conducted clinical studies involving germline editing of human embryos, resulting in several pregnancies and two alleged live births. This was most likely in violation of accepted bioethical international norms and local regulations. This work was directly relevant to the opinions laid out in the Perspective; the authors' failure to disclose this clinical work manifestly impacted editorial consideration of the manuscript. [ 75 ]
Michael W. Deem , an American bioengineering professor at Rice University and He's doctoral advisor, was involved in the research and was present when people involved in the study gave consent. [ 29 ] Deem was the only non-Chinese author out of the 10 listed in the manuscript submitted to Nature . [ 36 ] Deem came under investigation by Rice University after news of the work was made public. [ 76 ] [ 77 ] As of 2022, the university has not yet issued any information on his conduct. Deem resigned from the university in 2020, [ 30 ] and went into business, creating a bioengineering and energy consulting company called Certus LLC. [ 78 ]
Stanford University also investigated faculty members who were He's confidants, including William Hurlbut , Matthew Porteus , and Stephen Quake , He's main mentor in gene editing. The university's review committee concluded that the accused "were not participants in [He Jiankui’s] research regarding genome editing of human embryos for intended implantation and birth and that they had no research, financial or organizational ties to this research." [ 79 ]
In response to He's work, the World Health Organization formed a committee comprising "a global, multi-disciplinary expert panel" called the Expert Advisory Committee on Developing Global Standards for Governance and Oversight of Human Genome Editing "to examine the scientific, ethical, social and legal challenges associated with human genome editing (both somatic and germline)" in December 2018. [ 80 ] [ 81 ] In 2019, it issued a call to halt all work on human genome editing, and launched a global registry to track research in the field. [ 82 ] [ 83 ] [ 84 ] Since 2019, it has issued three reports recommending guidelines on human genome editing. [ 23 ] As of 2021, the committee maintained that while somatic gene therapies have become useful for several diseases, germline and heritable human genome editing still carries risks, [ 85 ] and should be banned. [ 86 ]
In May 2019, the Chinese government prepared gene-editing regulations stressing that anyone found manipulating the human genome by genome-editing techniques would be held responsible for any related adverse consequences. [ 24 ] The Civil Code of the People's Republic of China was amended in 2020, adding Article 1009, which states: "any medical research activity associated with human gene and human embryo must comply with the relevant laws, administrative regulations and national regulation, must not harm individuals and violate ethical morality and public interest." [ 25 ] It was enacted on 1 January 2021. [ 87 ] A 2020 draft of the 11th Amendment to the Criminal Law of the People's Republic of China incorporated three types of crime: illegal practice of human gene editing, human embryo cloning, and severely endangering the security of human genetic resources, with penalties of imprisonment of up to 7 years and a fine. [ 25 ]
As of December 2021, Vivien Marx reported in a Nature Biotechnology article that both children were healthy. [ 23 ]
Genome manipulations can be done at two levels: somatic (the non-reproductive cells of the body) and germline (sex cells and embryos for reproduction). The development of CRISPR gene editing enabled both somatic and germline editing (such as in assisted reproductive technology ). [ 88 ] There is no prohibition on somatic gene editing since the practice is generally covered by the available regulations. [ 89 ] Prior to He's affair, there was already concern that it was possible to make genetically modified babies, that such experiments would raise ethical issues because their safety and success had not been established by any study, [ 90 ] [ 91 ] and that genetic enhancement of individuals would become possible. [ 92 ] Pioneer gene-editing scientists had warned in 2015 that "genome editing in human embryos using current technologies could have unpredictable effects on future generations. This makes it dangerous and ethically unacceptable. Such research could be exploited for non-therapeutic modifications." [ 93 ] As Janet Rossant of the University of Toronto noted in 2018: "It has also raised ethical concerns, particularly with regard to the possibility of generating heritable changes in the human genome – so-called germline gene editing." [ 94 ] In 2017, the National Academies of Sciences, Engineering, and Medicine published a report " Human Genome Editing: Science, Ethics and Governance " that endorsed germline gene editing in "the absence of reasonable alternatives" for disease management and to "improve IVF procedures and embryo implantation rates and reduce rates of miscarriage." [ 95 ] However, the Declaration of Helsinki had stated that early embryo genome-editing for fertility purposes is unethical. [ 96 ]
The American Society of Human Genetics had declared in 2017 that the basic research on in vitro human genome editing on embryos and gametes should be promoted but that "At this time, given the nature and number of unanswered scientific, ethical, and policy questions, it is inappropriate to perform germline gene editing that culminates in human pregnancy." [ 88 ] In July 2018, the Nuffield Council on Bioethics published a policy document titled Genome Editing and Human Reproduction: Social and Ethical Issues in which it advocated human germline editing saying that it "is not 'morally unacceptable in itself' and could be ethically permissible in certain circumstances" when there are sufficient safety measures. [ 97 ] The moral justification created critical debates. [ 98 ] [ 99 ] [ 100 ] The United States National Institutes of Health Somatic Cell Genome Editing Consortium held that it "strictly focused on somatic editing; germline editing is not only excluded as a goal but is also considered to be an unacceptable outcome that should be carefully prevented." [ 101 ]
The Chinese law Measures on Administration of Assisted Human Reproduction Technology (2001) prohibits any genetic manipulation of human embryos for reproductive purposes and allows assisted reproductive technology to be performed only by authorized personnel. [ 102 ] On 7 March 2017, He Jiankui applied for ethics approval from Shenzhen HarMoniCare Women and Children's Hospital. In the application, He claimed that the genetically edited babies would be immune to HIV infection, in addition to smallpox and cholera, commenting: "This is going to be a great science and medicine achievement ever since the IVF technology which was awarded the Nobel Prize in 2010, and will also bring hope to numerous genetic disease patients." It was approved and signed by Lin Zhitong, the hospital administrator and one-time Director of Direct Genomics, a company established by He. [ 20 ] Upon an inquiry, the hospital denied such approval. The hospital's spokesperson declared that there were no records of such ethical approval, saying, "[The] gene editing process did not take place at our hospital. The babies were not born here either." [ 103 ] It was later confirmed that the approval certificate was forged. [ 104 ] [ 105 ]
Sheldon Krimsky of Tufts University reported that "[He Jiankui] is not a medical doctor, but rather received his doctorate in biophysics and did postdoctoral studies in gene sequencing; he lacks training in bioethics." [ 106 ] However, He was aware of the ethical issues. On 5 November 2018, He and his collaborators submitted a manuscript on ethical guidelines for reproductive genome editing titled "Draft Ethical Principles for Therapeutic Assisted Reproductive Technologies" to The CRISPR Journal . [ 107 ] It was published on 26 November, soon after news of the human experiment broke out. The journal made an inquiry concerning conflicts of interests , which was not disclosed by He. With no justification from He, the journal retracted the paper with a comment that it "was most likely in violation of accepted bioethical international norms and local regulations." [ 108 ]
Although there were no specific laws in China on gene editing in humans, He Jiankui violated the available guideline on handling human embryos. [ 109 ] According to the Guidelines for Ethical Principles in Human Embryonic Stem Cell Research (2003) of the Ministry of Science and Technology and the National Health Commission of China:
Research in human embryonic stem cells shall be in compliance with the following behavioral norms:
He Jiankui also attended an important meeting on "The ethics and societal aspects of gene editing" in January 2017 organized by Jennifer Doudna and William Hurlbut of Stanford University. [ 110 ] Upon invitation from Doudna, He presented a topic on "Safety of Human Gene Embryo Editing" and later recalled that "There were very many thorny questions, triggering heated debates, and the smell of gunpowder was in the air." [ 20 ]
The experiment's consent form, titled "Informed Consent", also contains dubious statements. The aim of the study was presented as "an AIDS vaccine development project", even though the study was not about vaccines. It contains technical jargon that would be incomprehensible to a layperson. [ 49 ] [ 28 ] One of the more peculiar statements is that if the participants decide to abort the experiment "in the first cycle of IVF until 28 days post-birth of the baby", they would have to "pay back all the costs that the project team has paid for you. If the payment is not received within 10 calendar days from the issuance of the notification of violation by the project team, another 100,000 RMB (about US$14,000) of fine will be charged." [ 111 ] This violates the voluntary nature of the participation. [ 49 ]
CRISPR gene editing technology in humans has the potential to cause profound social impacts, [ 112 ] such as the long-term prevention of diseases in humans. [ 113 ] However, He's human experiments raised ethical concerns because their effects on future generations are unknown. [ 112 ] Ethical concerns have been raised relative to the four ethical criteria of autonomy, justice, beneficence, and non-maleficence , [ 114 ] [ 113 ] first postulated by Tom Beauchamp and James Childress in Principles of Biomedical Ethics . [ 115 ]
The ethical principle of autonomy requires that individuals have the ability and comprehensive information to make their own decisions based on their values and beliefs. [ 114 ] He violated this by failing to inform patients of potential risks, including off-target mutations that might be a threat to the twins' lives. [ 116 ]
Since He had forged the hospital's ethical approval certificate, the procedure was likely "unlawful", [ 116 ] which is against the principle of non-maleficence. [ 114 ] Off-target mutations can arise at unintended sites, causing cell death or cell transformation . [ 117 ] Sonia Ouagrham-Gormley, an associate professor in the Schar School of Policy and Government at George Mason University , and Kathleen Vogel, a professor in the School for the Future of Innovation in Society at Arizona State University , stated that the procedure was "unnecessary" and "risks the safety of the patients". The researchers criticized He's unethical action by pointing out that the prevention of HIV transmission from parents to newborn babies can be safely achieved with existing standard methods, such as sperm washing and caesarian section delivery. [ 116 ]
The principle of justice argues that individuals should have the right to receive the same amount of care from medical providers regardless of their social and economic background. [ 114 ] Beneficence requires healthcare providers to maximize benefits and put the benefit of the patients first. [ 114 ] He's intervention in the twins' genes cannot be justified, and the risk-benefit balance is unacceptable. [ 118 ] He paid the couple $40,000 to ensure that they would keep his operation confidential. This action can be viewed as an inducement, and the project violated China's regulations prohibiting genetic manipulation of human gametes, zygotes , and embryos for reproductive purposes, barring HIV carriers from using assisted reproductive technologies, and permitting manipulation of a human embryo for research only within 14 days. [ 118 ]
Thus, while genome editing in humans has potential as an effective and cost-efficient method for manipulating genes within living cells, it requires more research and transparent procedures to be ethically justified. [ 113 ]
It is an established fact that C-C chemokine receptor type 5 (CCR5) is a protein essential for HIV infection of white blood cells, acting as a co-receptor for the virus. A mutation in the gene CCR5 (called CCR5Δ32 because the mutation is specifically a deletion of 32 base pairs in human chromosome 3 ) confers resistance to HIV. [ 119 ] [ 120 ] Resistance is higher when the mutation is present in both copies of the gene ( homozygous alleles ); with only one mutated copy (heterozygous alleles) the protection is weak and slow. Not all homozygous individuals are completely resistant. [ 121 ] In natural populations, CCR5Δ32 homozygotes are rarer than heterozygotes. [ 122 ] In 2007, Timothy Ray Brown (dubbed the Berlin patient) became the first person to be completely cured of HIV infection following a stem cell transplant from a CCR5Δ32 homozygous donor. [ 123 ]
He Jiankui overlooked these facts. Two days after Lulu and Nana were born, their DNA was collected from blood samples of their umbilical cord and placenta . Whole genome sequencing confirmed the mutations. [ 124 ] However, available sources indicate that Lulu and Nana carry incomplete CCR5 mutations. Lulu carries a mutant CCR5 that has a 15-bp in-frame deletion in only one chromosome 3 (heterozygous allele) while the other chromosome 3 is normal; Nana carries a homozygous mutant gene with a 4-bp deletion and a single base insertion . [ 125 ] He therefore failed to achieve the complete 32-bp deletion. [ 109 ] Moreover, Lulu has only a heterozygous modification, which is not known to prevent HIV infection. [ 120 ] Because the babies' mutations are different from the typical CCR5Δ32 mutation, it is not clear whether or not they are prone to infection. [ 125 ] There are also concerns about an adverse effect called off-target mutation in CRISPR/Cas9 editing, and about mosaicism, a condition in which many different genetic lines develop in the same embryo. [ 124 ] Off-target mutation may cause health hazards, while mosaicism may create HIV-susceptible cells. Fyodor Urnov , Director at the Altius Institute for Biomedical Sciences in Washington, asserted that "This [off-target mutation] is a key problem for the entirety of the embryo-editing field, one that the authors sweep under the rug here," and continued, "They [He's team] should have worked and worked and worked until they reduced mosaicism to as close to zero as possible. This failed completely. They forged ahead anyway." [ 36 ]
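The in-frame versus frameshift distinction mentioned above reduces to whether the number of deleted bases is a multiple of three (the codon length). The short Python sketch below is purely illustrative: the labels refer only to the deletion lengths discussed in the text, and it ignores cases where an insertion and a deletion occur on the same allele.

def classify_deletion(deleted_bases: int) -> str:
    # A deletion removes whole codons only if its length is divisible by 3;
    # otherwise every downstream codon is shifted (a frameshift).
    return "in-frame" if deleted_bases % 3 == 0 else "frameshift"

for label, length in [("natural CCR5-delta-32 deletion", 32),
                      ("15-bp deletion reported for Lulu", 15)]:
    print(f"{label}: {length} bp -> {classify_deletion(length)}")
# Expected output:
#   natural CCR5-delta-32 deletion: 32 bp -> frameshift
#   15-bp deletion reported for Lulu: 15 bp -> in-frame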
He's data on Lulu and Nana's mutation alignment (in a Sanger chromatogram ) showed three modifications, whereas only two were expected. In Lulu particularly, the mutation is much more complex than He reported. There were three different combinations of alleles: two normal copies of CCR5 , one normal copy and one with a 15-bp deletion, and one normal copy and an unknown large insertion. [ 109 ] But George Church of Harvard University, in an interview with Science , explained that off-target mutations may not be dangerous, and that there is no need to reduce mosaicism excessively, saying, "There's no evidence of off-target causing problems in animals or cells. We have pigs that have dozens of CRISPR mutations and a mouse strain that has 40 CRISPR sites going off constantly and there are off-target effects in these animals, but we have no evidence of negative consequences." As to mosaicism, he said, "It may never be zero. We don’t wait for radiation to be zero before we do positron emission tomography scans or x-rays." [ 126 ]
In February 2019, scientists reported that Lulu and Nana may have inadvertently (or perhaps, intentionally [ 63 ] ) had their brains altered, [ 127 ] since CCR5 deletion is linked to improved memory function in mice, [ 128 ] as well as enhanced recovery from strokes in humans. [ 129 ] Although He Jiankui stated during the Second International Summit on Human Genome Editing, that he was against using genome editing for enhancement, [ 130 ] he also acknowledged that he was aware of the studies linking CCR5 to enhanced memory function. [ 37 ]
In June 2019, researchers incorrectly suggested that the purportedly genetically edited humans may have been mutated in a way that shortens life expectancy . [ 131 ] [ 132 ] Rasmus Nielsen and Wei Xinzhu, both at the University of California, Berkeley, reported in Nature Medicine an analysis of the longevity of 409,693 individuals in the UK Biobank database, concluding that people with two copies of the CCR5Δ32 mutation (homozygotes) were about 20% more likely than the rest of the population to die before they were 76 years of age. [ 133 ] The research finding was widely publicized in the popular and scientific media. [ 134 ] [ 135 ] However, the article overlooked sampling bias in UK Biobank's data, resulting in an erroneous interpretation, and was retracted four months later. [ 136 ] [ 137 ]
Scientific work is normally published in peer-reviewed journals, but He did not publish his work on the birth of the gene-edited babies in this way. This was one of the grounds on which He was criticized. [ 32 ] [ 138 ] It was later reported that He did submit two manuscripts to Nature and the Journal of the American Medical Association , which were both rejected, mainly on ethical issues. [ 139 ] He's first manuscript, titled "Birth of Twins After Genome Editing for HIV Resistance", was submitted to Nature on 19 November. He shared copies of the manuscript with the Associated Press, which he also allowed to document his work. [ 140 ] In an interview, Hurlbut opined that the condemnation of He's work would have been less harsh if the study had been published, and said, "If it had been published, the publishing process itself would have brought a level of credibility because of the normal scrutiny involved; the data analysis would have been vetted." [ 141 ]
The scientific manuscripts of He were revealed when an anonymous source sent them to the MIT Technology Review , which reported them on 3 December 2019. [ 36 ] [ 142 ]
The first successful gene-editing of CCR5 in humans was reported in 2014. A team of researchers at the University of Pennsylvania , Philadelphia, Albert Einstein College of Medicine , New York, and Sangamo BioSciences , California, reported that they had modified CCR5 in blood cells (CD4 T cells) using a zinc-finger nuclease , and introduced ( infused ) the modified cells into 12 individuals with HIV. [ 143 ] After complete treatment, the patients showed decreased viral load, and in one patient, HIV disappeared. [ 144 ] The result was published in The New England Journal of Medicine . [ 145 ]
Chinese scientists have successfully used CRISPR editing to create mutant mice and rats since 2013. [ 146 ] [ 147 ] The next year they reported a successful experiment in monkeys involving the removal of two key genes ( PPAR-γ and RAG1 ) that play roles in cell growth and cancer development. [ 148 ] One of the leading researchers, Yuyu Niu, later collaborated with He Jiankui in 2017 to test the CRISPR editing of CCR5 in monkeys, but the outcome was not fully assessed or published. Niu later commented that they "had no idea he was going to do this in a human being." [ 149 ] In 2018, his team reported inducing mutations that produce muscular dystrophy in monkeys, [ 150 ] while another independent Chinese team simultaneously reported inducing growth retardation using CRISPR editing. [ 151 ] In February 2018, scientists at the Chinese Academy of Sciences reported the creation of five identical cloned gene-edited monkeys, using the same cloning technique that had been used to create the first cloned primates, Zhong Zhong and Hua Hua , in 2018 [ 152 ] and, earlier, Dolly the sheep . [ 153 ] The mutant monkeys and clones were made for understanding several medical diseases and not for disease resistance. [ 154 ] [ 155 ]
The first clinical trial of CRISPR-Cas9 for the treatment of genetic blood disorders was started in August 2018. The study was jointly conducted by CRISPR Therapeutics , a Swiss-based company, and Vertex Pharmaceuticals , headquartered in Boston. [ 156 ] The results were first announced on 19 November 2019, stating that the first two patients, one with β-thalassemia and the other with sickle cell disease , had been treated successfully. [ 157 ] Under the same project, a parallel study on 6 individuals with sickle cell disease was also conducted at Harvard Medical School , Boston. In both studies, the gene involved in blood cell formation, BCL11A , was modified in bone marrow extracted from the individuals. [ 158 ] Both studies were published simultaneously in The New England Journal of Medicine on 21 January 2021 in two papers. [ 159 ] [ 160 ] The treated individuals no longer experienced the symptoms or needed the blood transfusions normally required in such diseases, but the method is arduous and poses a high risk of infection of the bone marrow, about which David Rees at King's College Hospital commented, "Scientifically, these studies are quite exciting. But it’s hard to see this being a mainstream treatment in the long term." [ 158 ]
In June 2019, Denis Rebrikov at the Kulakov National Medical Research Center for Obstetrics, Gynecology and Perinatology in Moscow announced through Nature that he was planning to repeat He's experiment once he got official approval from the Russian Ministry of Health and other authorities. Rebrikov asserted that he would use a safer and better method than He's, saying, "I think I'm crazy enough to do it." [ 161 ] In a subsequent report on 17 October, Rebrikov said that he had been approached by a deaf couple for help and had already started in vitro experiments to repair a gene that causes deafness, GJB2 , using CRISPR. [ 162 ]
In 2019, the Abramson Cancer Center of the University of Pennsylvania in the US announced the use of CRISPR technology to edit cancer-related genes in humans, [ 163 ] and the results of the phase I clinical trial in 2020. [ 164 ] The study started in 2018 with an official registration in the US clinical trials registry. [ 165 ] The report in the journal Science indicates that three individuals in their 60s with advanced refractory cancer , two of them with a blood cancer ( multiple myeloma ) and one with a tissue cancer ( sarcoma ), were treated with their own gene-edited immune cells. [ 166 ] The experiment was based on CAR T-cell therapy : T cells obtained from the individuals had three genes involved in cancer removed, and a gene, CTAG1B , which produces the antigen NY-ESO-1, added. When the edited cells were introduced back into the individuals, they attack the cancer cells. [ 167 ] Although the results were acclaimed as the first "success of gene editing and cell function" [ 164 ] in cancer research and "an important milestone in the development and clinical application of gene-edited effector cell therapy," [ 168 ] the treatment was far from curing the diseases. One patient died after the clinical trial, and the other two had recurrent cancer. [ 169 ]
A similar clinical trial was reported by a team of Chinese scientists at Sichuan University and their collaborators in 2020 in Nature Medicine . [ 170 ] Here they removed only one gene ( PDCD1 , which produces the protein PD-1 ) in T cells from 12 individuals with late-stage lung cancer. The treatment was found to be safe and effective. [ 171 ] However, the edited T cells were not fully efficient and disappeared in most individuals, indicating that the treatments were not completely successful. [ 172 ] | https://en.wikipedia.org/wiki/He_Jiankui_affair
In the assembly of integrated circuit packages to printed circuit boards , a head-in-pillow defect ( HIP or HNP ), also called ball-and-socket , [ 1 ] is a failure of the soldering process. For example, in the case of a ball grid array (BGA) package, the pre-deposited solder ball on the package and the solder paste applied to the circuit board may both melt, but the melted solder does not join. A cross-section through the failed joint shows a distinct boundary between the solder ball on the part and the solder paste on the circuit board, rather like a section through a head resting on a pillow. [ 2 ]
The defect can be caused by surface oxidation or poor wetting of the solder, or by distortion of the integrated circuit package or circuit board by the heat of the soldering process. This is particularly a concern when using lead-free solder , which requires a higher processing temperature.
The defect can be attributed to a chain of events during soldering. Initially, the ball is in contact with solder paste. During heating, the board and components undergo thermal expansion, can flex, and some of the balls can be lifted off the paste. Oxidation occurs rapidly at elevated temperature, and when the surfaces come in contact again, the residual flux activity may not be sufficient to disrupt the oxide layer. The solder paste composition (e.g. a flux with a higher activation temperature) and the wetting characteristics of the solder ball are the most significant mitigation factors. [ 1 ]
Since the warping of the circuit board or integrated circuit may disappear when the board cools, an intermittent fault may be created. Diagnosis of head-in-pillow defects may require use of X-rays or EOTPR ( Electro Optical Terahertz Pulse Reflectometry ), since the solder joints are hidden between the integrated circuit package and the printed circuit board.
This electronics-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Head-in-pillow_defect |
A head-on collision is a traffic collision where the front ends of two vehicles such as cars, trains, ships or planes hit each other when travelling in opposite directions, as opposed to a side collision or rear-end collision .
With railways, a head-on collision occurs most often on a single line railway. This usually means that at least one of the trains has passed a signal at danger , or that a signalman has made a major error. Head-on collisions may also occur at junctions, for similar reasons. In the early days of railroading in the United States, such collisions were quite common and gave rise to the term "cornfield meet". [ 3 ] As time progressed and signalling became more standardized, such collisions became less frequent. Even so, the term still sees some usage in the industry. The origins of the term are not well known, but it is attributed to crashes happening in rural America where farming and cornfields were common. The first known usage of the term was in the mid-19th century.
The distance required for a train to stop is usually greater than the distance that can be seen before the next blind curve, which is why signals and safeworking systems are so important.
Note: if the collision occurs at a station or junction, or the trains are travelling in the same direction, then the collision is not a pure head-on collision.
With shipping, there are two main factors influencing the chance of a head-on collision. Firstly, even with radar and radio, it is difficult to tell what course the opposing ships are following. Secondly, big ships have so much momentum that it is very hard to change course at the last moment.
Head-on collisions are an often fatal type of road traffic collision. The NHTSA defines a head-on collision as follows:
Refers to a collision where the front end of one vehicle collides with the front end of another vehicle while the two vehicles are traveling in opposite directions. [ 7 ]
In Canada, in 2017, 6,293 vehicles and 8,891 persons were involved in head-on collisions, injuring 5,222 persons and killing 377 others. [ 8 ]
U.S. statistics show that in 2005, head-on crashes were only two per cent of all crashes, yet accounted for ten per cent of U.S. fatal crashes. A common misconception is that this over-representation is because the relative velocity of vehicles travelling in opposite directions is high. While it is true (via Galilean relativity ) that a head-on crash between two vehicles traveling at 50 mph is equivalent to a moving vehicle running into a stationary one at 100 mph, it is clear from basic Newtonian physics that if the stationary vehicle is replaced with a solid wall or other stationary near-immovable object such as a bridge abutment, then the equivalent collision is one in which the moving vehicle is only traveling at 50 mph, [ 9 ] except for the case of a lighter car colliding with a heavier one. The television show MythBusters performed a demonstration of this effect in a 2010 show. [ 10 ]
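The distinction is easy to check with a momentum calculation. The sketch below is a minimal illustration, assuming perfectly inelastic collisions between idealized point masses; the masses and speeds are example values, not data from the cited studies.

```python
# Minimal sketch: change in velocity (delta-v) in idealized, perfectly
# inelastic collisions. Assumes point masses and no crumple-zone effects.

def inelastic_final_velocity(m1, v1, m2, v2):
    """Common final velocity when two bodies lock together (momentum conservation)."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

def delta_v(m1, v1, m2, v2):
    """Change in speed experienced by body 1 and body 2."""
    vf = inelastic_final_velocity(m1, v1, m2, v2)
    return abs(vf - v1), abs(vf - v2)

m = 1500.0  # kg, illustrative equal-mass cars

# Two cars at 50 mph head-on: each experiences a 50 mph change in velocity.
print(delta_v(m, +50, m, -50))            # (50.0, 50.0)

# One car at 100 mph into a stationary equal-mass car: again 50 mph each.
print(delta_v(m, +100, m, 0))             # (50.0, 50.0)

# One car at 50 mph into a near-immovable wall (huge mass): ~50 mph for the car.
print(delta_v(m, +50, 1e12, 0))           # (~50.0, ~0.0)

# Exception noted in the text: a lighter car hitting a heavier one fares worse.
print(delta_v(1000.0, +50, 2000.0, -50))  # lighter car ~66.7, heavier ~33.3
```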
In France, in the years 2017 and 2018, 2,563 and 2,556 head-on collisions ( collisions frontales ) outside built-up areas and excluding motorways killed 536 and 545 people respectively. [ 11 ] They represent about 16% of all fatalities, including those on motorways and within built-up areas.
In Quebec, head-on collisions are involved in eight per cent of work-related issues, but this figure rises to 23 per cent when the vehicles involved are in a rural zone where the maximum speed is greater than 70 km/h (43 mph). [ 12 ]
Head-on collisions, sideswipes, and run-off-road crashes all belong to a category of crashes called lane-departure or road-departure crashes. This is because they have similar causes, if different consequences. The driver of a vehicle fails to stay centered in their lane, and either leaves the roadway, or crosses the centerline, possibly resulting in a head-on or sideswipe collision, or, if the vehicle avoids oncoming traffic, a run-off-road crash on the far side of the road. [ 14 ]
Preventive measures include traffic signs and road surface markings to help guide drivers through curves, as well as separating opposing lanes of traffic with a wide central reservation (or median ) and median barriers to prevent crossover incidents. Median barriers are physical barriers between the lanes of traffic, such as concrete barriers or cable barriers . These are actually roadside hazards in their own right, but on high-speed roads, the severity of a collision with a median barrier is usually lower than the severity of a head-on crash.
The European Road Assessment Programme 's Road Protection Score ( RPS ) is based on a schedule of detailed road design elements that correspond to each of the four main crash types, including head-on collisions. The Head-on Crash element of the RPS measures how well traffic lanes are separated. Motorways generally have crash protection features in harmony with the high speeds allowed. The Star Rating results show that motorways generally score well with a typical 4-star rating even though their permitted speeds are the highest on the network. But results from Star Rating research in Britain, Germany, the Netherlands and Sweden have shown that there is a pressing need to find better median (central reservation), run-off and junction protection at reasonable cost on single carriageway roads.
Another form of head-on crash is the wrong-way entry crash , where a driver on a surface road turns onto an off-ramp from a motorway or freeway , instead of the on-ramp. They can also happen on divided arterials if a driver turns into the wrong side of the road. Considerable importance is placed on designing ramp terminals and intersections to prevent these incidents. This often takes the form of special signage at freeway off-ramps to discourage drivers from going the wrong way. Section 2B.41 of the Manual on Uniform Traffic Control Devices describes how such signs should be placed on American highways.
Neither vehicle in a head-on collision need be a "car"; the Puisseguin road crash was between a truck and a coach.
In road transport, head-on collisions can result from various factors including negligent driving, hazardous road conditions, and malfunctioning vehicle parts. Common causes of negligence include speeding, distracted or impaired driving, and failure to obey traffic laws. Poorly maintained roads and missing signage can also contribute to such accidents, placing responsibility on government agencies or contractors. In some cases, defective vehicle components like brakes or steering systems may be to blame. Legal claims following head-on collisions often rely on collecting strong evidence and proving liability in civil court. [ 15 ]
Sideswipe collisions are where the sides of two vehicles travelling in the same or opposite directions touch. They differ from head-on collisions only in that one vehicle impacts the side of the other vehicle rather than the front. Severity is usually lower than a head-on collision, since it tends to be a glancing blow rather than a direct impact. However, loss of control of either vehicle can have unpredictable effects and secondary crashes can dramatically increase the expected crash severity. | https://en.wikipedia.org/wiki/Head-on_collision |
A head-up display , or heads-up display , [ 1 ] also known as a HUD ( / h ʌ d / ) or head-up guidance system ( HGS ), is any transparent display that presents data without requiring users to look away from their usual viewpoints. The origin of the name stems from a pilot being able to view information with the head positioned "up" and looking forward, instead of angled down looking at lower instruments. A HUD also has the advantage that the pilot's eyes do not need to refocus to view the outside after looking at the optically nearer instruments.
Although they were initially developed for military aviation, HUDs are now used in commercial aircraft, automobiles, and other (mostly professional) applications.
Head-up displays were a precursor technology to augmented reality (AR), incorporating a subset of the features needed for the full AR experience, but lacking the necessary registration and tracking between the virtual content and the user's real-world environment. [ 2 ]
A typical HUD contains three primary components: a projector unit , a combiner , and a video generation computer . [ 3 ]
The projection unit in a typical HUD is an optical collimator setup: a convex lens or concave mirror with a cathode-ray tube , light emitting diode display , or liquid crystal display at its focus. This setup (a design that has been around since the invention of the reflector sight in 1900) produces an image where the light is collimated , i.e. the focal point is perceived to be at infinity.
The combiner is typically an angled flat piece of glass (a beam splitter ) located directly in front of the viewer that redirects the projected image from the projector in such a way that the viewer sees the field of view and the projected image at infinity at the same time. Combiners may have special coatings that reflect the monochromatic light projected onto them from the projector unit while allowing all other wavelengths of light to pass through. In some optical layouts combiners may also have a curved surface to refocus the image from the projector.
The computer provides the interface between the HUD (i.e. the projection unit) and the systems/data to be displayed and generates the imagery and symbology to be displayed by the projection unit.
Other than fixed-mounted HUDs, there are also head-mounted displays (HMDs), including helmet-mounted displays (both abbreviated HMD): forms of HUD that feature a display element that moves with the orientation of the user's head.
Many modern fighters (such as the F/A-18 , F-16 , and Eurofighter ) use both a HUD and HMD concurrently. The F-35 Lightning II was designed without a HUD, relying solely on the HMD, making it the first modern military fighter not to have a fixed HUD.
HUDs are split into four generations reflecting the technology used to generate the images.
Newer micro-display imaging technologies are being introduced, including liquid crystal display (LCD), liquid crystal on silicon (LCoS), digital micro-mirrors (DMD), and organic light-emitting diode (OLED).
HUDs evolved from the reflector sight , a pre-World War II parallax -free optical sight technology for military fighter aircraft . [ 4 ] The gyro gunsight added a reticle that moved based on the speed and turn rate to solve for the amount of lead needed to hit a target while maneuvering.
During the early 1940s, the Telecommunications Research Establishment (TRE), in charge of UK radar development, found that Royal Air Force (RAF) night fighter pilots were having a hard time reacting to the verbal instruction of the radar operator as they approached their targets. They experimented with the addition of a second radar display for the pilot, but found they had trouble looking up from the lit screen into the dark sky in order to find the target. In October 1942 they had successfully combined the image from the radar tube with a projection from their standard GGS Mk. II gyro gunsight on a flat area of the windscreen, and later in the gunsight itself. [ 5 ] A key upgrade was the move from the original AI Mk. IV radar to the microwave-frequency AI Mk. VIII radar found on the de Havilland Mosquito night fighter . This set produced an artificial horizon that further eased head-up flying. [ citation needed ]
In 1955 the US Navy 's Office of Naval Research and Development did some research with a mockup HUD concept unit along with a sidestick controller in an attempt to ease the pilot's burden flying modern jet aircraft and make the instrumentation less complicated during flight. While their research was never incorporated in any aircraft of that time, the crude HUD mockup they built had all the features of today's modern HUD units. [ 6 ]
HUD technology was next advanced by the Royal Navy in the Buccaneer , the prototype of which first flew on 30 April 1958. The aircraft was designed to fly at very low altitudes at very high speeds and drop bombs in engagements lasting seconds. As such, there was no time for the pilot to look up from the instruments to a bombsight. This led to the concept of a "Strike Sight" that would combine altitude, airspeed and the gun/bombsight into a single gunsight-like display. There was fierce competition between supporters of the new HUD design and supporters of the old electro-mechanical gunsight, with the HUD being described as a radical, even foolhardy option.
The Air Arm branch of the UK Ministry of Defence sponsored the development of a Strike Sight. The Royal Aircraft Establishment (RAE) designed the equipment and the earliest usage of the term "head-up-display" can be traced to this time. [ 7 ] Production units were built by Rank Cintel , and the system was first integrated in 1958. The Cintel HUD business was taken over by Elliott Flight Automation and the Buccaneer HUD was manufactured and further developed, continuing up to a Mark III version with a total of 375 systems made; it was given a 'fit and forget' title by the Royal Navy and it was still in service nearly 25 years later. BAE Systems , as the successor to Elliotts via GEC-Marconi Avionics, thus has a claim to the world's first head-up display in operational service. [ 8 ] A similar version that replaced the bombing modes with missile-attack modes was part of the AIRPASS HUD fitted to the English Electric Lightning from 1959.
In the United Kingdom, it was soon noted that pilots flying with the new gunsights were becoming better at piloting their aircraft. [ citation needed ] At this point, the HUD expanded its purpose beyond weapon aiming to general piloting. In the 1960s, French test-pilot Gilbert Klopfstein created the first modern HUD and a standardized system of HUD symbols so that pilots would only have to learn one system and could more easily transition between aircraft. The modern HUD used in instrument flight rules approaches to landing was developed in 1975. [ 9 ] Klopfstein pioneered HUD technology in military fighter jets and helicopters , aiming to centralize critical flight data within the pilot's field of vision. This approach sought to increase the pilot's scan efficiency and reduce "task saturation" and information overload .
Use of HUDs then expanded beyond military aircraft. In the 1970s, the HUD was introduced to commercial aviation, and in 1988, the Oldsmobile Cutlass Supreme became the first production car with a head-up display.
Until a few years ago, the Embraer 190, Saab 2000, Boeing 727, and Boeing 737 Classic (737-300/400/500) and Next Generation aircraft (737-600/700/800/900 series) were the only commercial passenger aircraft available with HUDs. However, the technology is becoming more common with aircraft such as the Canadair RJ , Airbus A318 and several business jets featuring the displays. HUDs have become standard equipment on the Boeing 787 . [ 10 ] Furthermore, the Airbus A320, A330, A340 and A380 families are currently undergoing the certification process for a HUD. [ 11 ] HUDs were also added to the Space Shuttle orbiter.
There are several factors that interplay in the design of a HUD:
On aircraft avionics systems, HUDs typically operate from dual independent redundant computer systems. They receive input directly from the sensors ( pitot-static , gyroscopic , navigation, etc.) aboard the aircraft and perform their own computations rather than receiving previously computed data from the flight computers. On other aircraft (the Boeing 787, for example) the HUD guidance computation for Low Visibility Take-off (LVTO) and low visibility approach comes from the same flight guidance computer that drives the autopilot. Computers are integrated with the aircraft's systems and allow connectivity onto several different data buses such as the ARINC 429 , ARINC 629, and MIL-STD-1553 . [ 9 ]
Typical aircraft HUDs display airspeed , altitude , a horizon line, heading , turn/bank and slip/skid indicators. These instruments are the minimum required by 14 CFR Part 91. [ 13 ]
Other symbols and data are also available in some HUDs:
Since being introduced on HUDs, both the FPV and acceleration symbols are becoming standard on head-down displays (HDDs). The actual form of the FPV symbol on an HDD is not standardized, but is usually a simple aircraft drawing, such as a circle with two short angled lines (180 ± 30 degrees) and "wings" on the ends of the descending line. Keeping the FPV on the horizon allows the pilot to fly level turns at various angles of bank.
In addition to the generic information described above, military applications include weapons system and sensor data such as:
During the 1980s, the United States military tested the use of HUDs in vertical take off and landing (VTOL) and short take off and landing (STOL) aircraft. A HUD format was developed at NASA Ames Research Center to provide pilots of VTOL and STOL aircraft with complete flight guidance and control information for Category III C terminal-area flight operations. This includes a large variety of flight operations, from STOL flights on land-based runways to VTOL operations on aircraft carriers . The principal features of this display format are the integration of the flightpath and pursuit guidance information into a narrow field of view, easily assimilated by the pilot with a single glance, and the superposition of vertical and horizontal situation information. The display is a derivative of a successful design developed for conventional transport aircraft. [ 14 ]
The use of head-up displays allows commercial aircraft substantial flexibility in their operations. Systems have been approved which allow reduced-visibility takeoffs and landings, as well as full manual Category III A landings and roll-outs. [ 15 ] [ 16 ] [ 17 ] Initially expensive and physically large, these systems were only installed on larger aircraft able to support them. These tended to be the same aircraft that supported autoland as standard (with the exception of certain turbo-prop types [ clarification needed ] that had HUD as an option), making the head-up display unnecessary for Cat III landings. This delayed the adoption of HUD in commercial aircraft. At the same time, studies have shown that the use of a HUD during landings decreases the lateral deviation from centerline in all landing conditions, although the touchdown point along the centerline is not changed. [ 18 ]
For general aviation , MyGoFlight expects to receive an STC and to retail its SkyDisplay HUD for $25,000 without installation for single piston-engine aircraft such as the Cirrus SR22 , and more for Cessna Caravan or Pilatus PC-12 single-engine turboprops: 5 to 10% of a traditional HUD's cost, albeit non- conformal , not matching the outside terrain exactly. [ 19 ] Flight data from a tablet computer can be projected on the $1,800 Epic Optix Eagle 1 HUD. [ 20 ]
In more advanced systems, such as the US Federal Aviation Administration (FAA)-labeled 'Enhanced Flight Vision System', [ 21 ] a real-world visual image can be overlaid onto the combiner. Typically an infrared camera (either single or multi-band) is installed in the nose of the aircraft to display a conformed image to the pilot. "EVS Enhanced Vision System" is an industry-accepted term which the FAA decided not to use because "the FAA believes [it] could be confused with the system definition and operational concept found in 91.175(l) and (m)". [ 21 ] In one EVS installation, the camera is actually installed at the top of the vertical stabilizer rather than "as close as practical to the pilots eye position". When used with a HUD, however, the camera must be mounted as close as possible to the pilot's eye point, as the image is expected to "overlay" the real world as the pilot looks through the combiner.
"Registration", or the accurate overlay of the EVS image with the real world image, is one feature closely examined by authorities prior to approval of a HUD based EVS. This is because of the importance of the HUD matching the real world and therefore being able to provide accurate data rather than misleading information.
While the EVS display can greatly help, the FAA has only relaxed operating regulations [ 22 ] so an aircraft with EVS can perform a CATEGORY I approach to CATEGORY II minimums . In all other cases the flight crew must comply with all "unaided" visual restrictions. (For example, if the runway visibility is restricted because of fog, even though EVS may provide a clear visual image it is not appropriate (or legal) to maneuver the aircraft using only the EVS below 100 feet above ground level.)
HUD systems are also being designed to display a synthetic vision system (SVS) graphic image, which uses high precision navigation, attitude, altitude and terrain databases to create realistic and intuitive views of the outside world. [ 23 ] [ 24 ] [ 25 ]
In the first SVS head-down image shown on the right, immediately visible indicators include the airspeed tape on the left, altitude tape on the right, and turn/bank/slip/skid displays at the top center. The boresight symbol (-v-) is in the center, and directly below that is the flight path vector (FPV) symbol (the circle with short wings and a vertical stabilizer). The horizon line is visible running across the display with a break at the center, and directly to the left are numbers at ±10 degrees with a short line at ±5 degrees (the +5 degree line is easier to see) which, along with the horizon line, show the pitch of the aircraft. Unlike this color depiction of SVS on a head-down primary flight display, the SVS displayed on a HUD is monochrome – that is, typically, in shades of green.
The image indicates a wings-level aircraft (i.e. the flight path vector symbol is flat relative to the horizon line and there is zero roll on the turn/bank indicator). Airspeed is 140 knots, altitude is 9,450 feet, and heading is 343 degrees (the number below the turn/bank indicator). Close inspection of the image shows a small purple circle which is displaced from the flight path vector slightly to the lower right. This is the guidance cue coming from the Flight Guidance System. When stabilized on the approach, this purple symbol should be centered within the FPV.
The terrain is entirely computer generated from a high resolution terrain database.
In some systems, the SVS will calculate the aircraft's current flight path, or possible flight path (based on an aircraft performance model, the aircraft's current energy, and surrounding terrain) and then turn any obstructions red to alert the flight crew. Such a system might have helped prevent the crash of American Airlines Flight 965 into a mountain in December 1995. [ citation needed ]
On the left side of the display is an SVS-unique symbol, with the appearance of a purple, diminishing sideways ladder, and which continues on the right of the display. The two lines define a "tunnel in the sky". This symbol defines the desired trajectory of the aircraft in three dimensions. For example, if the pilot had selected an airport to the left, then this symbol would curve off to the left and down. If the pilot keeps the flight path vector alongside the trajectory symbol, the craft will fly the optimum path. This path would be based on information stored in the Flight Management System's database and would show the FAA-approved approach for that airport.
The tunnel in the sky can also greatly assist the pilot when more precise four-dimensional flying is required, such as the decreased vertical or horizontal clearance requirements of Required Navigation Performance (RNP). Under such conditions the pilot is given a graphical depiction of where the aircraft should be and where it should be going, rather than having to mentally integrate altitude, airspeed, heading, energy and longitude and latitude to correctly fly the aircraft. [ 26 ]
In mid-2017, the Israel Defense Forces were scheduled to begin trials of Elbit 's Iron Vision, the world's first helmet-mounted head-up display for tanks. Israel's Elbit, which developed the helmet-mounted display system for the F-35 , plans for Iron Vision to use a number of externally mounted cameras to project the 360° view of a tank's surroundings onto the helmet-mounted visors of its crew members. This allows the crew members to stay inside the tank, without having to open the hatches to see outside. [ 27 ]
A program announced in 2025 by a collaboration of Patria Technologies and Distance Technologies aims to place the head-up display on the windshield of vehicles, so as not to require a helmet. The program also intends to use AI to aid in data display and processing. [ 28 ]
These displays are becoming increasingly available in production cars, and usually offer speedometer , tachometer , and navigation system displays. Night vision information is also displayed via HUD on certain automobiles. In contrast to most HUDs found in aircraft, automotive head-up displays are not parallax-free. The display may not be visible to a driver wearing sunglasses with polarised lenses.
Add-on HUD systems also exist, projecting the display onto a glass combiner mounted above or below the windshield, or using the windshield itself as the combiner.
An in-car HUD displaying navigation information in front of the driver's line of sight was developed by General Motors Corporation in 1999. Around 2010, AR technology was introduced and combined with the existing in-car HUD; based on this technology, navigation guidance began to be displayed on the windshield of the vehicle. [ 29 ]
In 2012, Pioneer Corporation introduced a HUD navigation system that replaces the driver-side sun visor and visually overlays animations of conditions ahead, a form of augmented reality (AR). [ 30 ] [ 31 ] Developed by Pioneer Corporation, AR-HUD became the first aftermarket automotive head-up display to use a direct-to-eye laser beam scanning method, also known as virtual retinal display (VRD). AR-HUD's core technology involves a miniature laser beam scanning display developed by MicroVision, Inc. [ 32 ]
Motorcycle helmet HUDs are also commercially available. [ 33 ]
In recent years, it has been argued that conventional HUDs will be replaced by holographic AR technologies, such as the ones developed by WayRay that use holographic optical elements (HOE). The HOE allows for a wider field of view while reducing the size of the device and making the solution customizable for any car model. [ 34 ] [ 35 ] Mercedes-Benz introduced an augmented reality-based head-up display, [ 36 ] while Faurecia invested in an eye-gaze and finger-controlled head-up display. [ 37 ]
HUDs have been proposed or are being experimentally developed for a number of other applications. In military settings, a HUD can be used to overlay tactical information such as the output of a laser rangefinder or squadmate locations to infantrymen . A prototype HUD has also been developed that displays information on the inside of a swimmer's goggles or of a scuba diver's mask . [ 38 ] HUD systems that project information directly onto the wearer's retina with a low-powered laser ( virtual retinal display ) are also being tested. [ 39 ] [ 40 ]
A HUD product developed in 2012 could perform real-time language translation. [ 41 ] In an implementation of an Optical head-mounted display , the EyeTap product allows superimposed computer-generated graphic files to be displayed on a lens. The Google Glass was another early product. | https://en.wikipedia.org/wiki/Head-up_display |
In hydrology , the head is the point on a watercourse up to which it has been artificially broadened and/or raised by an impoundment . Above the head of the reservoir, natural conditions prevail; below it, the water level above the riverbed has been raised by the impoundment and its flow rate reduced, unless and until banks, barrages, weir sluices or dams are overcome (overtopped), in which case the water follows a less frictional course than the natural one (mid-level and surface currents rather than bed and bank currents), resulting in flash flooding below.
In principle, a distinction must be drawn between the head of a reservoir impounded by a dam , and the head of a works resulting from a barrage or canal locks .
A head's location varies with the height of the water level against the dam. Since the flow within the reservoir is extremely low, there is no water level gradient, and the head can be clearly seen: it lies where the farthest watercourse discharges into the reservoir.
Upstream of the actual reservoir there is likely to be a pre-dam , which typically has a constant water level, so the head there is fixed.
The term does not apply to embankment (storage/settling) reservoirs, to which water is pumped from below.
On large rivers in all but arid climates, the head of a works is rarely fixed rigidly, as a significant flow rate and water gradient are sometimes seen within the impounded reach. The head can only be found by calculation or defined by observations with and without impoundment. Depending on the flow rate and the control of the barrage, locks or weir, its position will vary greatly and will not necessarily be where the so-called headworks are.
Many rivers (such as the Moselle ) are barraged many times to make them navigable and/or to avoid uncontrolled flooding. In such a case only the higher stretches of river are uninfluenced by impoundment. Along the other stretches the river has long "level" pounds but few or no natural heads, instead having artificial structures up to the top head. Ideal management of the higher heads will allow headroom to keep back some flood meadow water so as not to compound heavy precipitation and the resultant run-off downstream; corollary channels with spare capacity are a further mitigation where land is at a premium (such as the Jubilee River ). Ideal management of the lowest head will allow daily timed openings, at least in flood events, to coincide with an outgoing ( ebb ), rather than flood, tide. | https://en.wikipedia.org/wiki/Head_(hydrology) |
A head is one of the end caps on a cylindrically shaped pressure vessel .
Vessel dished ends are mostly used in storage or pressure vessels in industry. These ends, which in upright vessels are the bottom and the top, use less space than a hemisphere (which is the ideal form for pressure containments) while requiring only a slightly thicker wall.
The manufacturing of such an end is easier than that of a hemisphere. The starting material is first pressed to a radius r 1 and then curled at the edge creating the second radius r 2 . Vessel dished ends can also be welded together from smaller pieces.
The shape of the heads used can vary. The most common [ 1 ] [ 2 ] head shapes are:
A sphere is the ideal shape for a head, because the stresses are distributed evenly through the material of the head. The radius (r) of the head equals the radius of the cylindrical part of the vessel.
This is also called an elliptical head. The shape of this head is more economical, because the height of the head is just a fraction of the diameter. Its radius varies between the major and minor axis; usually the ratio is 2:1.
2:1 Semi-Ellipsoidal dished heads are deeper and stronger than the more popular torispherical dished heads.
The greater depth results in the head being more difficult to form, and this makes them more expensive to manufacture. However, the cost is offset by a potential reduction in the specified thickness due to the dished head having greater overall strength and resistance to pressure.
These heads have a dish with a fixed radius (r1), the size of which depends on the type of torispherical head. [ 3 ] The transition between the cylinder and the dish is called the knuckle . The knuckle has a toroidal shape. The most common types of torispherical heads are:
Commonly used for ASME pressure vessels, these torispherical heads have a crown radius equal to the outside diameter of the head (r1 = Do), and a knuckle radius equal to 6% of the outside diameter (r2 = 0.06 × Do). The ASME design code does not allow the knuckle radius to be any less than 6% of the outside diameter. [ 4 ]
This is a torispherical head. The dish has a radius that equals the diameter of the cylinder it is attached to (r1 = Do). The knuckle has a radius that equals a tenth of the diameter of the cylinder (r2 = 0.1 × Do), hence its alternative designation "decimal head".
This is a torispherical head, also named semi-ellipsoidal head (according to DIN 28013). The radius of the dish is 80% of the diameter of the cylinder (r1 = 0.8 × Do). The radius of the knuckle is r2 = 0.154 × Do.
These heads have a crown radius of 80% of outside diameter, and a knuckle radius of 10% of outside diameter.
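The ratios quoted for the different torispherical types can be collected into a small helper for comparison. The sketch below simply restates the crown and knuckle ratios given above; the dictionary labels and function name are invented for the example, and real design work would follow the applicable code rather than this illustration.

```python
# Minimal sketch: crown (r1) and knuckle (r2) radii for the torispherical head
# types described above, expressed as fractions of the outside diameter Do.
# The dictionary keys are informal labels chosen for this example.

TORISPHERICAL_RATIOS = {
    "ASME F&D":      (1.00, 0.06),   # r1 = Do,     r2 = 0.06*Do
    "decimal (10%)": (1.00, 0.10),   # r1 = Do,     r2 = 0.10*Do
    "DIN 28013":     (0.80, 0.154),  # r1 = 0.8*Do, r2 = 0.154*Do
    "80-10":         (0.80, 0.10),   # r1 = 0.8*Do, r2 = 0.10*Do
}

def head_radii(head_type: str, outside_diameter: float) -> tuple[float, float]:
    """Return (crown radius r1, knuckle radius r2) for a given outside diameter."""
    k1, k2 = TORISPHERICAL_RATIOS[head_type]
    return k1 * outside_diameter, k2 * outside_diameter

# Example: a 2000 mm diameter flanged-and-dished head.
print(head_radii("ASME F&D", 2000.0))  # (2000.0, 120.0)
```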
This is a head consisting of a toroidal knuckle connecting to a flat plate. This type of head is typically used for the bottom of cookware .
This type of head is often found on the bottom of aerosol spray cans. It is an inverted torispherical head.
This is a cone -shaped head.
Heat treatment may be required after cold forming, but not for heads formed by hot forming. [ 7 ] | https://en.wikipedia.org/wiki/Head_(vessel) |
Coin flipping , coin tossing , or heads or tails is the practice of throwing a coin into the air with the thumb so that it spins, and checking which side is showing once it has come down onto a surface, in order to randomly choose between two alternatives. It is a form of sortition which inherently has two possible outcomes.
Coin flipping was known to the Romans as navia aut caput ("ship or head"), as some coins had a ship on one side and the head of the emperor on the other. [ 1 ] In England, this was referred to as cross and pile . [ 1 ] [ 2 ]
During a coin toss, the coin is thrown into the air such that it rotates edge-over-edge an unpredictable number of times. Either beforehand or when the coin is in the air, an interested party declares "heads" or "tails", indicating which side of the coin that party is choosing. The other party is assigned the opposite side. Depending on custom, the coin may be caught; caught and inverted; or allowed to land on the ground. When the coin comes to rest, the toss is complete and the party who called correctly or was assigned the upper side is declared the winner.
It is possible for a coin to land on its side, usually by landing up against an object (such as a shoe) or by getting stuck in the ground. However, even on a flat surface it is possible for a coin to land on its edge. A computational model suggests that the chance of a coin landing on its edge and staying there is about 1 in 6,000 for an American nickel. [ 3 ]
The coin may be any type as long as it has two distinct sides. Larger coins tend to be more popular than smaller ones. Some high-profile coin tosses, such as those in the Cricket World Cup and the Super Bowl, use custom-made ceremonial medallions. [ 4 ] [ 5 ]
Three-way coin flips are also possible, by a different process—this can be done either to choose one or two out of three. To choose two out of three, three coins are flipped, and if two coins come up the same and one different, the different one loses (is out), leaving two players. To choose one out of three, the previous result is either reversed (the odd coin out is the winner ) or a regular two-way coin flip between the two remaining players can decide. The three-way flip is 75% likely to work each time it is tried (if all coins are heads or all are tails, each of which occurs 1/8 of the time since the chances are 0.5 × 0.5 × 0.5, the flip is repeated until the results differ), and does not require that "heads" or "tails" be called.
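The 75% per-round figure is easy to confirm by simulation. The following sketch is illustrative only; it assumes fair coins and uses Python's random module.

```python
import random

def round_resolves() -> bool:
    """One round of the three-way flip: usable unless all three coins match."""
    flips = [random.choice("HT") for _ in range(3)]
    return len(set(flips)) > 1  # at least one coin differs

trials = 100_000
resolved = sum(round_resolves() for _ in range(trials))
print(resolved / trials)  # ~0.75: all-heads and all-tails each occur 1/8 of the time
```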
A well-known example of such a three-way coin flip (choose two out of three) is dramatized in Friday Night Lights (originally a book , subsequently film and TV series ), wherein three Texas high school football teams use a three-way coin flip. [ 6 ] [ 7 ] A legacy of that particular 1988 coin flip was to reduce the use of coin flips to break ties in Texas sports, instead using point systems to reduce the frequency of ties.
"Heads and Tails" or "Heads or Tails" is an informal game of chance using repeated coin tosses, suitable for a roomful of seated people, typically a social or children's event. Initially all players stand. Before each coin toss, all still standing put their hands on either their head to indicate "heads" or their hips or buttocks to indicate "tails"; once the toss result is announced, those who guessed incorrectly sit down. The process repeats until the last player standing wins; often the last few players remaining are called to the announcer's table for the climax. A variant with faster elimination is played with two coins and players placing each hand separately. [ 8 ]
Coin tossing is a simple and unbiased way of settling a dispute or deciding between two or more arbitrary options. In a game theoretic analysis it provides even odds to both sides involved, requiring little effort and preventing the dispute from escalating into a struggle. It is used widely in sports and other games to decide arbitrary factors such as which side of the field a team will play from, or which side will attack or defend initially; these decisions may tend to favor one side, or may be neutral. Factors such as wind direction, the position of the sun, and other conditions may affect the decision. In team sports it is often the captain who makes the call, while the umpire or referee usually oversees such proceedings. A competitive method may be used instead of a toss in some situations, for example in basketball the jump ball is employed, while the face-off plays a similar role in ice hockey.
Coin flipping is used to decide which end of the field the teams will play to and/or which team gets first use of the ball, or similar questions in football matches, American football games, Australian rules football , volleyball , and other sports requiring such decisions. In the U.S. a specially minted coin is flipped in National Football League games; the coin is then sent to the Pro Football Hall of Fame , and other coins of the special series minted at the same time are sold to collectors. The original XFL , a short-lived American football league, attempted to avoid coin tosses by implementing a face-off style "opening scramble," in which one player from each team tried to recover a loose football; the team whose player recovered the ball got first choice. Because of the high rate of injury in these events, it has not achieved mainstream popularity in any football league (a modified version was adopted by X-League Indoor Football , in which each player pursued his own ball), and coin tossing remains the method of choice in American football. (The revived XFL , which launched in 2020 , removed the coin toss altogether and allowed that decision to be made as part of a team's home field advantage .)
In an association football match, the team winning the coin toss chooses which goal to attack in the first half; the opposing team kicks off for the first half. For the second half, the teams switch ends, and the team that won the coin toss kicks off. Coin tosses are also used to decide which team has the pick of going first or second in a penalty shoot-out . Before the early-1970s introduction of the penalty shootout , coin tosses were occasionally needed to decide the outcome of drawn matches where a replay was not possible. The most famous instance of this was the semifinal game of the 1968 European Championship between Italy and the Soviet Union , which finished 0–0 after extra time. Italy won, and went on to become European champions. [ 9 ]
In cricket the toss is often significant, as the decision whether to bat or bowl first can influence the outcome of the game. The coin toss in cricket is more important than in other games because in many situations it can lead to a team winning or losing the game. Factors such as pitch conditions, weather and the time of day are considered by the team captain who wins the toss. There are now websites that simulate a coin toss online, which some domestic sports teams use. [ 10 ]
Similarly, in tennis a coin toss is used in professional matches to determine which player serves first. The player who wins the toss decides whether to serve first or return, while the loser of the toss decides which end of the court each player plays on first.
In duels a coin toss was sometimes used to determine which combatant had the sun at his back. [ 11 ] In some other sports, the result of the toss is less crucial and merely a way to fairly choose between two more or less equal options.
The National Football League also has a coin toss for tie-breaking among teams for playoff berths and seeding, but the rules make the need for a coin toss, which is random rather than competitive, very unlikely. A similar procedure breaks ties for the purposes of seeding in the NFL draft ; these coin tosses are more common, since the tie-breaking procedure for the draft is much less elaborate than the one used for playoff seeding.
Major League Baseball once conducted a series of coin flips as a contingency on the last month of its regular season to determine home teams for any potential one-game playoff games that might need to be added to the regular season. Most of these cases did not occur. From the 2009 season , the method to determine home-field advantage was changed. [ 12 ]
Fédération Internationale d'Escrime rules use a coin toss to determine the winner of some fencing matches that remain tied at the end of a " sudden death " extra minute of competition, although in most international matches this is now done electronically by the scoring apparatus. [ citation needed ]
In the United States Asa Lovejoy and Francis W. Pettygrove , who each owned the claim to the land that would later become Portland, Oregon , wanted to name the new town after their respective hometowns of Boston, Massachusetts and Portland, Maine ; Pettygrove won with the flip of a coin which has been preserved as the Portland Penny . [ 13 ]
Scientists sometimes use coin flipping to determine the order in which they appear on the list of authors of scholarly papers . [ 14 ]
In addition to its practical applications in sports, coin tossing is symbolic of the democratic principle of equal opportunity. When two parties face an impasse, the act of flipping a coin signifies a commitment to impartiality and a willingness to accept the outcome, no matter how arbitrary it may seem. This shared acceptance of chance as the ultimate arbiter can foster cooperation and conflict resolution in various aspects of life beyond sports, including business negotiations and interpersonal conflicts.
The party who calls the side that is facing up when the coin lands wins.
The need to win what is nominally a random, two-outcome flip has motivated methods of cheating that improve or ensure a win in an apparently random event. [ 15 ]
In December 2006, Australian television networks Seven and Ten , which shared the broadcasting of the 2007 AFL Season , decided who would broadcast the Grand Final with the toss of a coin. Network Ten won. [ 16 ]
In some jurisdictions, a coin is flipped to decide between two candidates who poll equal number of votes in an election , or two companies tendering equal prices for a project. For example, a coin toss decided a City of Toronto tender in 2003 for painting lines on 1,605 km of city streets: the bids were $161,110.00 ($100.3800623 per km), $146,584.65 ($91.33 per km, exactly), and two equal bids of $111,242.55 ($69.31 per km, exactly).
" Drawing of lots " is one of the methods to break ties to determine a winner in an election; the coin flip is considered an acceptable variant. Each candidate will be given five chances to flip a coin; the candidate with the most "heads" wins. The 2013 mayoral election in San Teodoro, Oriental Mindoro was decided on a coin flip, with a winner being proclaimed after the second round when both candidates remained tied in the first round. [ 17 ]
In the United Kingdom, if a local or national election has resulted in a tie where candidates receive exactly the same number of votes, then the winner can be decided either by drawing straws/lots, coin flip, or drawing a high card in pack of cards. [ 18 ] [ 19 ]
In the United States , when a new state is added to the Union, a coin toss determines the class of the senators (i.e., the election cycle in which the term each of the new state's senators will expire) in the US Senate . [ 20 ] Also, a number of states provide for "drawing lots" in the event an election ends in a tie, and this is usually resolved by a coin toss or picking names from a hat. [ citation needed ] A 2017 election to the 94th District of the Virginia House of Delegates resulted in a tie between Republican incumbent David Yancey and Democratic challenger Shelly Simmonds, with exactly 11,608 votes each. Under state law, the election was to be decided by drawing a name from a bowl, although a coin toss would also have been an acceptable option. The chair of the Board of Elections drew the film canister with Yancey's name, and he was declared the winner. [ 21 ] Additionally, the outcome of the draw determined control of the entire House, as Republicans won 50 of the other 99 seats and Democrats 49. A Yancey win extended the Republican advantage to 51–49, whereas a Simmonds win would have resulted in a 50–50 tie. As there is no provision for breaking ties in the House as a whole, this would have forced a power sharing agreement between the two parties. [ 22 ]
The outcome of coin flipping has been studied by the mathematician and former magician Persi Diaconis and his collaborators. They have demonstrated that a mechanical coin flipper which imparts the same initial conditions for every toss has a highly predictable outcome – the phase space is fairly regular. Further, in actual flipping, people exhibit slight bias – "coin tossing is fair to two decimals but not to three. That is, typical flips show biases such as 0.495 or 0.503." [ 23 ]
In studying coin flipping, to observe the rotation speed of coin flips, Diaconis first used a strobe light and a coin with one side painted black, the other white, so that when the speed of the strobe flash equaled the rotation rate of the coin, it would appear to always show the same side. This proved difficult to use, and rotation rate was more accurately computed by attaching floss to a coin, such that it would wind around the coin – after a flip, one could count rotations by unwinding the floss, and then compute rotation rate as flips over air time. [ 23 ]
Moreover, their theoretical analysis of the physics of coin tosses predicts a slight bias for a caught coin to be caught the same way up as it was thrown, with a probability of around 0.51, [ 24 ] though a 2009 attempt to verify this experimentally at Berkeley with 40,000 tosses gave ambiguous results. [ 25 ] A much larger 2023 University of Amsterdam study (which won a 2024 Ig Nobel Prize [ 26 ] ) performed 350,757 tosses, finding an average same-side bias of 50.8%, but which varied from person to person. [ 27 ] [ 28 ]
Since the images on the two sides of actual coins are made of raised metal, the toss is likely to slightly favor one face or the other if the coin is allowed to roll on one edge upon landing. Coin spinning is much more likely to be biased than flipping. [ citation needed ]
Stage magicians and gamblers, with practice, are able to greatly increase the same-side-up bias, whilst still making throws which are visually indistinguishable from normal throws. [ 23 ] Conjurers trim the edges of coins so that when spun on a surface, they usually land on a particular face. [ citation needed ]
Human intuition about conditional probability is often very poor and can give rise to some seemingly surprising observations. For example, if the successive tosses of a coin are recorded as a string of "H" and "T", then for any trial of tosses, it is twice as likely that the triplet TTH will occur before THT than after it. It is three times as likely that "THH will precede HHT" than that "THH will follow HHT"; [ 30 ] see also Penney's game .
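These odds can be checked empirically. The sketch below is an illustrative simulation (not part of the cited analysis) that flips a virtual fair coin until one of two chosen triplets appears and estimates how often each one comes first.

```python
import random

def first_to_appear(a: str, b: str) -> str:
    """Flip a fair coin until triplet a or b appears; return the one seen first."""
    window = ""
    while True:
        window = (window + random.choice("HT"))[-3:]  # keep the last three flips
        if window == a:
            return a
        if window == b:
            return b

def estimate(a: str, b: str, trials: int = 100_000) -> float:
    """Estimated probability that triplet a appears before triplet b."""
    return sum(first_to_appear(a, b) == a for _ in range(trials)) / trials

print(estimate("TTH", "THT"))  # ~0.67: TTH precedes THT about twice as often as not
print(estimate("THH", "HHT"))  # ~0.75: THH precedes HHT about three times as often as not
```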
The mathematical abstraction of the statistics of coin flipping is described by means of the Bernoulli process ; a single flip of a coin is a Bernoulli trial . In the study of statistics, coin-flipping plays the role of being an introductory example of the complexities of statistics. A commonly treated textbook topic is that of checking if a coin is fair .
There is no reliable way to use a true coin flip to settle a dispute between two parties if they cannot both see the coin—for example, over the phone. The flipping party could easily lie about the outcome of the toss. In telecommunications and cryptography , the following algorithm can be used:
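A minimal sketch of such an exchange, assuming a hash-based commitment along the lines described in the next paragraph, might look like the following; the names, message format and choice of SHA-256 are illustrative assumptions rather than a specification.

```python
# Minimal sketch of a remote coin flip using a hash commitment.
# All names and message formats are invented for illustration; this is not a
# production protocol.
import hashlib
import secrets

# 1. Both parties pick random words; Bob sends his to Alice.
alice_word = secrets.token_hex(16)
bob_word = secrets.token_hex(16)

# 2. Alice picks her call and commits to it together with both random words.
alice_call = "heads"
commitment = hashlib.sha256(f"{alice_call}|{alice_word}|{bob_word}".encode()).hexdigest()
# Alice sends `commitment` to Bob. Bob cannot reverse the hash to learn her call,
# and Alice cannot have precomputed commitments because bob_word is fresh.

# 3. Bob flips an actual coin and announces the result.
bob_result = secrets.choice(["heads", "tails"])

# 4. Alice reveals her call and random word; Bob verifies the commitment.
assert commitment == hashlib.sha256(f"{alice_call}|{alice_word}|{bob_word}".encode()).hexdigest()
print("Alice wins" if alice_call == bob_result else "Bob wins")
```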
Bob, by providing his own random word, guarantees that Alice is not able to precompute an image pair of "tail/random string" or "head/random string", for two different random words. Bob is also unable to reverse Alice's hash to see what her chosen outcome was before flipping the coin, and to lie effectively about its outcome, because he does not know Alice's random word at that point in the process.
The New Zealand lottery game Big Wednesday uses a coin toss. If a player matches all six of their numbers, the coin toss will decide whether they win a cash jackpot (minimum of NZ$25,000) or a bigger jackpot with luxury prizes (minimum of NZ$2 million cash, plus value of luxury prizes). The coin toss is also used in determining the Second Chance winner's prize.
A technique attributed to Sigmund Freud to help in making difficult decisions is to toss a coin not actually to determine the decision, but to clarify the decision-maker's feelings. He explained: "I did not say you should follow blindly what the coin tells you. What I want you to do is to note what the coin indicates. Then look into your own reactions. Ask yourself: Am I pleased? Am I disappointed? That will help you to recognize how you really feel about the matter, deep down inside. With that as a basis, you'll then be ready to make up your mind and come to the right decision." [ 31 ]
Danish poet Piet Hein 's 1966 book Grooks includes a poem, "A Psychological Tip", on a similar theme:
Whenever you're called on to make up your mind,
And you're hampered by not having any,
The best way to solve the dilemma, you'll find,
Is simply by spinning a penny.
No—not so that chance shall decide the affair
While you're passively standing there moping;
But the moment the penny is up in the air,
You suddenly know what you're hoping. | https://en.wikipedia.org/wiki/Head_and_Tail |
A head crash is a hard-disk failure that occurs when a read–write head of a hard disk drive makes contact with its rotating platter , scratching its surface and permanently damaging its magnetic media. It is difficult to recover data from a drive that has suffered a head crash. A head crash is most often caused by a sudden severe motion of the disk, for example the jolt caused by dropping a laptop to the ground while it is operating or physically shocking a computer. Laptop 2.5-inch drives are significantly more likely to suffer a head crash due to their mobile nature, despite having higher shock resistance than 3.5-inch desktop drives. Desktop drives, being larger, are more prone to damage if dropped, but because they usually stay in one place, such as inside a desktop computer or server, they are overall less likely to suffer a head crash.
A head normally rides on a thin film of moving air entrapped at the surface of its platter (some drives manufactured by Conner Peripherals in the mid-1990s used a thin liquid layer instead of air; Conner was the only manufacturer ever to do this [ 1 ] ). The distance between the head and platter is called the flying height . The head flies so close to the platter that the gap cannot be seen with the naked eye; a microscope is needed to see that it is not touching the platter. The topmost layer of the platter is made of a Teflon -like material that acts like a lubricant. Underneath is a layer of sputtered carbon. These two layers protect the magnetic layer (data storage area) from most accidental touches of the read-write head. [ 2 ]
The disk read-and-write head is made using thin-film techniques that include materials hard enough to scratch through the protective layers. Heads must not touch the platters except on a landing zone ; modern hard drives instead use load ramps (a notable exception being some cheaper Seagate drives that still use landing zones), so the heads normally never touch the platters at all. A head crash can be initiated by a force that puts enough pressure on the platters from the heads to scratch through to the magnetic storage layer. A tiny particle of dirt or other detritus, or excessive shock or vibration (such as accidentally dropping a running drive), can cause a head to bounce against its disk, destroying the thin magnetic coating on the area the heads come in contact with, and often damaging the heads in the process. After this initial crash, countless fine particles from the damaged area can land on other areas and cause more head crashes when the heads move over those particles, quickly causing significant damage and data loss , and rendering the drive useless. Some modern hard disks incorporate free-fall sensors (common in 2.5-inch drives but very rare in 3.5-inch drives) to offer protection against head crashes caused by accidentally dropping the drive.
Since most modern drives spin at rates between 5,400 and 15,000 RPM , the damage caused to the magnetic coating can be extensive. At 7,200 RPM, the edge of a 3.5-inch platter is traveling at over 120 kilometres per hour (75 mph), and as the crashed head drags over the platter surface, the read-write head generally overheats, making the drive, or at least parts of it, unusable until the head cools down.
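The quoted edge speed follows directly from the platter geometry. The sketch below assumes a platter diameter of roughly 95 mm, a typical value for 3.5-inch drives, and is illustrative only.

```python
import math

def edge_speed_kmh(rpm: float, platter_diameter_m: float) -> float:
    """Linear speed of the platter edge in km/h."""
    revs_per_second = rpm / 60.0
    metres_per_second = revs_per_second * math.pi * platter_diameter_m
    return metres_per_second * 3.6

# A 3.5-inch drive platter is roughly 95 mm in diameter (assumed typical value).
print(edge_speed_kmh(7200, 0.095))   # ~129 km/h (about 80 mph)
print(edge_speed_kmh(15000, 0.095))  # ~269 km/h
```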
Older drives typically rotated far more slowly and had larger heads flying higher above the surface of the medium. However, since in many cases, the medium was housed in a removable cartridge or pack and since air filtration was comparatively crude, crashes were fairly frequent and invariably expensive.
Head crashes have been a frequent problem on laptop computers since they first incorporated hard drives, since a laptop computer is more liable to be dropped or jostled than a stationary machine. This has led to the development of protective technologies that "park" the head at a safe distance from the disk when sudden motion, such as that of a dropped computer, is detected. Active Hard Disk Protection software and sensors began appearing in laptops in 2003 with IBM introducing it in their ThinkPad line of laptops, [ 3 ] becoming common around 2009 with the introduction of Windows 7 . [ 4 ] These drives are also designed to "self-park" during sudden power loss, which has reduced the incidence of head crashes. With the popularization of solid-state drives , which have no head or disk, the head-crash issue has been nearly eliminated in modern laptops. | https://en.wikipedia.org/wiki/Head_crash |
A head on a spike (also described as a head on a pike , a head on a stake , or a head on a spear ) is a severed head that has been vertically impaled for display. This has been a custom in a number of cultures, typically either as part of a criminal penalty following execution or as a war trophy following a violent conflict. The symbolic value may change over time. It may give a warning to spectators. The head may be a human head or an animal head.
The earliest known archeological evidence for mounting heads on stakes has been identified in Sweden , at a Mesolithic site in Kanaljorden , in the floor of a dried lake, dating to 8,000 years ago. [ 1 ] There, archeologists recovered human crania with the remnants of wooden stakes still in place within the two crania. The crania exhibited evidence of blunt force trauma that looked to have resulted from a violent confrontation. Archeologists interpreted the wooden stakes as evidence that the heads had been mounted for display by members of the Swedish Mesolithic hunter-gatherer culture. [ 1 ]
In England , the heads of criminals, especially those convicted of treason , were mounted for display on London Bridge from about 1300 until about 1660. [ 2 ] [ 3 ] Heads were usually dipped in tar to slow down the decomposition process. Criminal punishment was sometimes posthumous, as the body of Oliver Cromwell was exhumed so that it could be hanged, drawn, and quartered , and his head was mounted on a spike and displayed for 30 years. [ 4 ] | https://en.wikipedia.org/wiki/Head_on_a_spike |
A header check sequence ( HCS ) is an error checking feature for various header data structures, such as in the media access control (MAC) header of Ethernet . It may consist of a cyclic redundancy check (CRC), obtained as the remainder of the modulo-2 division of the content of the header, excluding the HCS field, by the generator polynomial.
The HCS can be one octet long, as in WiMAX , [ 1 ] or a 16-bit value for cable modems . [ 2 ]
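As an illustration of the modulo-2 division described above, the sketch below computes a one-octet HCS bit by bit. The generator polynomial x^8 + x^2 + x + 1 (0x07) and the all-ones initial register value are assumptions chosen for illustration; the exact parameters are defined by the standard in question.

```python
def hcs8(header: bytes, poly: int = 0x07, init: int = 0xFF) -> int:
    """Return the 8-bit remainder of dividing the header bits (MSB first)
    by the generator polynomial, modulo 2 -- used as the header check sequence."""
    crc = init
    for byte in header:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

# The HCS is computed over the header fields that precede the HCS field itself.
header_without_hcs = bytes([0x40, 0x12, 0x34, 0x56, 0x00])
print(hex(hcs8(header_without_hcs)))
```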
This article related to telecommunications is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Header_check_sequence |
The heading indicator ( HI ), also known as a directional gyro [ 1 ] ( DG ) or direction indicator ( DI ), [ 2 ] [ 3 ] [ 4 ] [ 5 ] is a flight instrument used in an aircraft to inform the pilot of the aircraft's heading .
The primary means of establishing the heading in most small aircraft is the magnetic compass , which, however, suffers from several types of errors, including that created by the "dip" or downward slope of the Earth's magnetic field. Dip error causes the magnetic compass to read incorrectly whenever the aircraft is in a bank, or during acceleration or deceleration, making it difficult to use in any flight condition other than unaccelerated, perfectly straight and level. To remedy this, the pilot will typically maneuver the airplane with reference to the heading indicator, as the gyroscopic heading indicator is unaffected by dip and acceleration errors. The pilot will periodically reset the heading indicator to the heading shown on the magnetic compass. [ 4 ] [ 6 ] [ 7 ] [ 8 ]
The heading indicator works using a gyroscope , tied by an erection mechanism to the aircraft yawing plane, i.e. the plane defined by the longitudinal and the horizontal axis of the aircraft. As such, any configuration of the aircraft yawing plane that does not match the local Earth horizontal results in an indication error. The heading indicator is arranged such that the gyro axis is used to drive the display, which consists of a circular compass card calibrated in degrees. The gyroscope is spun either electrically, or using filtered air flow from a suction pump (sometimes a pressure pump in high-altitude aircraft) driven from the aircraft's engine . Because the Earth rotates (ω, 15° per hour, apparent drift), and because of small accumulated errors caused by imperfect balancing of the gyro, the heading indicator will drift over time (real drift), and must be reset using a magnetic compass periodically. [ 4 ] [ a ] The apparent drift is predicted by ω sin(latitude) and is thus greatest over the poles. To counter the effect of Earth-rate drift, a latitude nut can be set (on the ground only) which induces an (ideally equal and opposite) real wander in the gyroscope. Otherwise it would be necessary to manually realign the direction indicator once every ten to fifteen minutes during routine in-flight checks. Failure to do this is a common source of navigation errors among new pilots. Another sort of apparent drift exists in the form of transport wander, caused by the aircraft's movement and the convergence of the meridian lines towards the poles. It equals the course change along a great circle (orthodrome) flight path. [ 9 ]
Some more expensive heading indicators are "slaved" to a magnetic sensor, called a flux gate . The flux gate continuously senses the Earth's magnetic field, and a servo mechanism constantly corrects the heading indicator. [ 4 ] These "slaved gyros" reduce pilot workload by eliminating the need for manual realignment every ten to fifteen minutes.
The prediction of drift, in degrees per hour, is as follows:
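The prediction uses the relationship already given above, apparent drift = ω sin(latitude) with ω = 15° per hour. Below is a minimal sketch of the calculation; the sample latitudes are chosen only for illustration.

```python
import math

def apparent_drift_deg_per_hr(latitude_deg: float, earth_rate_deg_per_hr: float = 15.0) -> float:
    """Earth-rate (apparent) drift of an uncorrected directional gyro."""
    return earth_rate_deg_per_hr * math.sin(math.radians(latitude_deg))

for lat in (0, 30, 45, 60, 90):
    print(f"{lat:2d} deg latitude: {apparent_drift_deg_per_hr(lat):4.1f} deg/hr")
# 0.0, 7.5, 10.6, 13.0 and 15.0 degrees per hour respectively
```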
Although it is possible to predict the drift, there will be minor variations from this basic model, accounted for by gimbal error (operating the aircraft away from the local horizontal), among others. A common source of error here is the improper setting of the latitude nut (to the opposite hemisphere for example). The table however allows one to gauge whether an indicator is behaving as expected, and as such, is compared with the realignment corrections made with reference to the magnetic compass. Transport wander is an undesirable consequence of apparent drift. | https://en.wikipedia.org/wiki/Heading_indicator |
A headless engine or fixed-head engine [ 1 ] is an engine where the end of the cylinder is cast as one piece with the cylinder and crankcase. [ 2 ] The best-known headless engines are the Fairbanks-Morse Z and the Witte Headless hit-and-miss engine . [ 3 ]
This article about a mechanical engineering topic is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Headless_engine |
In digital and analog audio , headroom refers to the amount by which the signal-handling capabilities of an audio system can exceed a designated nominal level . [ 1 ] Headroom can be thought of as a safety zone allowing transient audio peaks to exceed the nominal level without damaging the system or the audio signal, e.g., via clipping . Standards bodies differ in their recommendations for nominal level and headroom.
In digital audio, headroom is defined as the amount by which digital full scale (FS) exceeds the nominal level in decibels (dB). The European Broadcasting Union (EBU) specifies several nominal levels and resulting headroom for different applications. [ citation needed ]
In analog audio, headroom can mean low-level signal capabilities as well as the amount of extra power reserve available within the amplifiers that drive the loudspeakers.
Alignment level is an anchor point 9 dB below the nominal level, [ citation needed ] a reference level that exists throughout the system or broadcast chain, though it may imply different voltage levels at different points in the analog chain. Typically, nominal (not alignment) level is 0 dB, corresponding to an analog sine wave with a voltage of 1.23 volts RMS (+4 dBu, or 3.47 volts peak to peak ). In the digital realm, alignment level is −18 dBFS. | https://en.wikipedia.org/wiki/Headroom_(audio_signal_processing) |
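The headroom and alignment-level figures quoted above are related by ordinary decibel arithmetic. The sketch below reproduces them; the 0.775 V reference for dBu is the usual convention, and the example nominal level of −9 dBFS is inferred from the stated 9 dB offset above alignment level, so both should be read as illustrative assumptions rather than figures from this article.

```python
import math

DBU_REF_VOLTS = 0.775  # RMS voltage corresponding to 0 dBu

def volts_to_dbu(v_rms: float) -> float:
    return 20 * math.log10(v_rms / DBU_REF_VOLTS)

def headroom_db(full_scale_dbfs: float, nominal_dbfs: float) -> float:
    """Headroom is the gap between digital full scale and the nominal level."""
    return full_scale_dbfs - nominal_dbfs

print(round(volts_to_dbu(1.23), 1))    # ~ +4.0 dBu for a 1.23 V RMS sine wave
alignment_dbfs = -18.0
nominal_dbfs = alignment_dbfs + 9.0    # nominal level sits 9 dB above alignment level
print(headroom_db(0.0, nominal_dbfs))  # 9.0 dB of digital headroom in this example
```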
A headshell is a head piece designed to be attached to the end of a turntable 's or record player 's tonearm, which holds the cartridge . [ 1 ] Standard cartridges are secured to the headshell by a pair of 2.5 mm bolts spaced 1/2 inch apart. Older, non-metric cartridges used #2 (3/32 inch) bolts. [ 2 ]
Some headshells are designed to allow variable weights to be attached. For example, the H4-S Stanton headshell comes with 2 g and 4 g screw-in weights. Extra weight can be useful to prevent skipping if the DJ is scratching the record. The pin diameter of most, if not all, headshells is exactly 1.0 mm.
Most headshells use a standard H-4 bayonet mount, which will fit all S-shaped tonearms. The bayonet has a standard barrel measuring 8 mm in diameter and 12 mm in length, with its four pins connected to the four colour-coded headshell lead wires.
The colour standards for the contact connections are as follows: [ 3 ]
This sound technology article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Headshell |
Headstarting is a conservation technique for endangered species in which young animals are raised artificially and subsequently released into the wild. The technique allows a greater proportion of the young to reach independence, without predation or loss to other natural causes. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
For endangered birds and reptiles , eggs collected from the wild are hatched using an incubator . [ 1 ] [ 2 ] For mammals such as Hawaiian monk seals , the young are removed from their mothers after weaning . [ 5 ]
The technique was trialled on land-based mammals for the first time in Australia. In the three years prior to May 2021, young bridled nail-tail wallabies were placed in a fenced-off 10-hectare (25-acre) area within Avocet Nature Refuge in Queensland . The population, safe from its main predator, feral cats , more than doubled over this period. [ 6 ] | https://en.wikipedia.org/wiki/Headstarting |
Headward erosion is erosion at the origin of a stream channel , which causes the origin to move back, away from the direction of the stream flow, lengthening the stream channel. [ 1 ] It can also refer to the widening of a canyon by erosion along its very top edge, when sheets of water first enter the canyon from a more roughly planar surface above it, such as at Canyonlands National Park in Utah . When sheets of water on a roughly planar surface first enter a depression in it, they erode the top edge of the depression. The stream is forced to grow longer at its very top, which moves its origin back, or causes the canyon formed by the stream to grow wider as the process repeats. Widening of the canyon by erosion inside the canyon, below the top edge of the canyon side or the origin of the stream, such as erosion caused by the streamflow inside it, is not called headward erosion.
Headward erosion is a fluvial process of erosion that lengthens a stream , a valley or a gully at its head and also enlarges its drainage basin . The stream erodes away at the rock and soil at its headwaters in the opposite direction that it flows. Once a stream has begun to cut back, the erosion is sped up by the steep gradient the water is flowing down. As water erodes a path from its headwaters to its mouth at a standing body of water, it tries to cut an ever-shallower path. This leads to increased erosion at the steepest parts, which is headward erosion. If this continues long enough, it can cause a stream to break through into a neighboring watershed and capture drainage that previously flowed to another stream.
For example, headward erosion by the Shenandoah River , a tributary of the Potomac River in the U.S. state of Virginia , permitted the Shenandoah to capture successively the original upstream segments of Beaverdam Creek , Gap Run and Goose Creek , three smaller tributaries of the Potomac. As each capture added to the Shenandoah's effluent , or discharge, it accelerated the process of headward erosion until the Shenandoah captured all drainage to the Potomac west of the Blue Ridge Mountains . [ 2 ]
Three kinds of streams are formed by headward erosion: insequent streams , subsequent streams , and obsequent and resequent streams ( See Fluvial landforms of streams .) Insequent streams form by random headward erosion, usually from sheetflow of water over the landform surface. The water collects in channels where the velocity and erosional power increase, cutting into and extending the heads of gullies. Subsequent streams form by selective headward erosion by cutting away at less resistive rocks in the terrain. Obsequent and resequent streams form after time in an area of insequent or subsequent streams. Obsequent streams are insequent streams that now flow in an opposite direction of the original drainage pattern. Resequent streams are subsequent streams that have also changed direction from their original drainage patterns. [ 3 ]
Headward erosion creates three major kinds of drainage patterns: dendritic patterns , trellis patterns , and rectangular and angular patterns .
Four minor kinds of drainage patterns also can be created: radial patterns , annular patterns , centripetal patterns and parallel patterns . | https://en.wikipedia.org/wiki/Headward_erosion |
Headworks is a civil engineering term for any structure at the head or diversion point of a waterway. It is smaller than a barrage and is used to divert water from a river into a canal or from a large canal into a smaller canal. [ 1 ]
An example is the Horseshoe Falls at the start of the Llangollen Canal .
Historically the phrase "headworks" derives from the traditional approach of diverting water at the start of an irrigation network and the location of these processes at the "head of the works".
This article about a civil engineering topic is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Headworks |
The Heaf test , a diagnostic skin test, was long performed to determine whether or not children had been exposed to tuberculosis infection. The test was named after F. R. G. Heaf . Also known as the Sterneedle test , [ 1 ] it was administered by a Heaf gun (trademarked "Sterneedle"), [ 2 ] a spring-loaded instrument with six needles arranged in a circular formation which was inserted in the wrist [ 3 ] or shoulder.
The Heaf test was discontinued in 2005 because the manufacturer deemed its production to be financially unsustainable after manufacturers could not be found for tuberculin or Heaf guns. Until 2005, the test was used in the United Kingdom to determine if the BCG vaccine was needed; the Mantoux test is now used instead. The Heaf test was preferred in the UK, because it was thought to be easier to interpret, with less variability between observers, and less training was required to administer and read the test.
Patients who exhibited a negative reaction to the test were considered for BCG vaccination.
The Heaf test was used to test for tuberculosis in adolescents aged around 13–14. [ 4 ]
A Heaf gun was used to inject multiple samples of testing serum under the skin at once. The needle points were dipped in tuberculin purified protein derivative (PPD) and pricked into the skin. [ 5 ] A Heaf gun with disposable single-use heads was recommended.
The gun injected PPD equivalent to 100,000 units per ml into the skin over the flexor surface of the left forearm in a circular pattern of six. The test was read between two and seven days later. The injection could not be made into sites containing superficial veins.
The reading of the Heaf test was defined by a scale: [ 6 ]
Grades 1 and 2 could result from previous BCG or avian tuberculosis, rather than human TB infection.
Children who were found to have a grade 3 or 4 reaction were referred for X-ray and follow-up.
For interpretation of the test, see Tuberculosis diagnosis .
The equivalent Mantoux test positive levels done with 10 TU (0.1 mL 100 TU/mL, 1:1000) are
The Mantoux test is preferred in the United States for the diagnosis of tuberculosis; multiple puncture tests, such as the Heaf test and Tine test , are not recommended. | https://en.wikipedia.org/wiki/Heaf_test |
With physical trauma or disease suffered by an organism, healing involves the repairing of damaged tissue(s) , organs and the biological system as a whole and resumption of (normal) functioning. In medicine, healing includes the process by which the cells in the body regenerate and repair to reduce the size of a damaged or necrotic area and replace it with new living tissue. The replacement can happen in two ways: by regeneration , in which the necrotic cells are replaced by new cells that form "like" tissue as was originally there; or by repair , in which injured tissue is replaced with scar tissue . Most organs will heal using a mixture of both mechanisms. [ 1 ]
Within surgery , healing is more often referred to as recovery, and postoperative recovery has historically been viewed simply as restitution of function and readiness for discharge. More recently, it has been described as an energy‐requiring process to decrease physical symptoms, reach a level of emotional well‐being, regain functions, and re‐establish activities [ 2 ]
Healing is also referred to in the context of the grieving process. [ 3 ]
In psychiatry and psychology , healing is the process by which neuroses and psychoses are resolved to the degree that the client is able to lead a normal or fulfilling existence without being overwhelmed by psychopathological phenomena. This process may involve psychotherapy , pharmaceutical treatment or alternative approaches such as traditional spiritual healing . [ citation needed ]
In order for an injury to be healed by regeneration, the cell type that was destroyed must be able to replicate. Cells also need a collagen framework along which to grow. Alongside most cells there is either a basement membrane or a collagenous network made by fibroblasts that will guide the cells' growth. Since ischaemia and most toxins do not destroy collagen, it will continue to exist even when the cells around it are dead. [ citation needed ]
Acute tubular necrosis (ATN) in the kidney is a case in which cells heal completely by regeneration. ATN occurs when the epithelial cells that line the kidney are destroyed by either a lack of oxygen (such as in hypovolemic shock , when blood supply to the kidneys is dramatically reduced), or by toxins (such as some antibiotics , heavy metals or carbon tetrachloride ). [ citation needed ]
Although many of these epithelial cells are dead, there is typically patchy necrosis, meaning that there are patches of epithelial cells still alive. In addition, the collagen framework of the tubules remains completely intact. [ citation needed ]
The existing epithelial cells can replicate, and, using the basement membrane as a guide, eventually bring the kidney back to normal. After regeneration is complete, the damage is undetectable, even microscopically . [ citation needed ]
Healing must happen by repair in the case of injury to cells that are unable to regenerate (e.g. neurons). Also, damage to the collagen network (e.g. by enzymes or physical destruction), or its total collapse (as can happen in an infarct ) cause healing to take place by repair. [ citation needed ]
Many genes play a role in healing. [ 4 ] For instance, in wound healing, P21 has been found to allow mammals to heal spontaneously. It even allows some mammals (like mice) to heal wounds without scars. [ 5 ] [ 6 ] The LIN28 gene also plays a role in wound healing. It is dormant in most mammals. [ 7 ] Also, the proteins MG53 and TGF beta 1 play important roles in wound healing. [ 8 ]
In response to an incision or wound, a wound healing cascade is unleashed. This cascade takes place in four phases: clot formation, inflammation, proliferation, and maturation.
Healing of a wound begins with clot formation to stop bleeding and to reduce infection by bacteria, viruses and fungi . Clotting is followed by neutrophil invasion three to 24 hours after the wound has been incurred, with mitoses beginning in epithelial cells after 24 to 48 hours. [ citation needed ]
In the inflammatory phase, macrophages and other phagocytic cells kill bacteria, debride damaged tissue and release chemical factors such as growth hormones that encourage fibroblasts, epithelial cells and endothelial cells which make new capillaries to migrate to the area and divide. [ citation needed ]
In the proliferative phase, immature granulation tissue containing plump, active fibroblasts forms. Fibroblasts quickly produce abundant type III collagen , which fills the defect left by an open wound. Granulation tissue moves, as a wave, from the border of the injury towards the center. [ citation needed ]
As granulation tissue matures, the fibroblasts produce less collagen and become more spindly in appearance. They begin to produce the much stronger type I collagen. Some of the fibroblasts mature into myofibroblasts which contain the same type of actin found in smooth muscle , which enables them to contract and reduce the size of the wound. [ citation needed ]
During the maturation phase of wound healing, unnecessary vessels formed in granulation tissue are removed by apoptosis , and type III collagen is largely replaced by type I. Collagen which was originally disorganized is cross-linked and aligned along tension lines. This phase can last a year or longer. Ultimately a scar made of collagen, containing a small number of fibroblasts is left. [ citation needed ]
After inflammation has damaged tissue (when combatting bacterial infection for example) and pro-inflammatory eicosanoids have completed their function, healing proceeds in 4 phases. [ 9 ]
In the recall phase the adrenal glands increase production of cortisol which shuts down eicosanoid production and inflammation. [ citation needed ]
In the Resolution phase, pathogens and damaged tissue are removed by macrophages (white blood cells). Red blood cells are also removed from the damaged tissue by macrophages. Failure to remove all of the damaged cells and pathogens may retrigger inflammation. The two subsets of macrophages, M1 and M2, play a crucial role in this phase: M1 macrophages are pro-inflammatory, while M2 macrophages are regenerative, and the plasticity between the two subsets determines whether the tissue becomes inflamed or is repaired. [ citation needed ]
In the Regeneration phase, blood vessels are repaired and new cells form in the damaged site similar to the cells that were damaged and removed. Some cells such as neurons and muscle cells (especially in the heart) are slow to recover. [ citation needed ]
In the Repair phase, new tissue is generated which requires a balance of anti-inflammatory and pro-inflammatory eicosanoids. Anti-inflammatory eicosanoids include lipoxins , epi-lipoxins , and resolvins , which cause release of growth hormones. [ citation needed ] | https://en.wikipedia.org/wiki/Healing |
HealthCap is a specialized provider of venture capital within life sciences . HealthCap invests in innovative companies with focus on therapeutics. [ 1 ] As of 2023, HealthCap has invested in over 125 companies since inception and completed initial public offerings of more than 45 companies. [ 2 ] HealthCap has offices in Stockholm and Lausanne .
The firm was founded in 1996 by Björn Odlander [ 3 ] and Peder Fredrikson, [ 4 ] and the first fund was started the same year. [ 5 ] As of 2023, HealthCap has established eight funds and financed more than 125 companies, where more than 45 have been taken public on nine different markets. The most recent fund, HealthCap VIII, was established in 2019.
HealthCap has approximately 25 employees, of whom thirteen are partners. The team combines venture capital investing experience with competences and work experience from small as well as large companies across the healthcare industry, spanning the disciplines of scientific research, drug development, clinical practice, investment banking, and industry management. [ citation needed ]
HealthCap has raised eight main funds. Investors in HealthCap funds include, among others, European Investment Fund , Skandia Life Insurance, the 4th and 6th Swedish National Pension Funds, The Kresge Foundation , Mayo Clinic , Northwestern University , University of Michigan , Vanderbilt University and Washington University . HealthCap has committed capital exceeding EUR 1 billion. [ 6 ] [ 7 ]
HealthCap invests in companies developing disruptive technologies that hold the potential to change clinical practice. Over the years HealthCap has invested in more than 125 companies. The portfolio companies have brought more than 20 pharmaceutical products and over 40 med-tech products to the market. [ 8 ] Many of these products, such as Firazyr ®, Xofigo ® and Tracleer ®, are breakthrough therapies addressing life-threatening diseases. Examples of technologies financed by HealthCap are: | https://en.wikipedia.org/wiki/HealthCap |
The Health Products and Food Branch ( HPFB ) of Health Canada manages the health-related risks and benefits of health products and food by minimizing risk factors while maximizing the safety provided by the regulatory system and providing information to Canadians so they can make healthy, informed decisions about their health.
HPFB has ten operational Directorates with direct regulatory responsibilities:
Extraordinary Use New Drugs (EUNDs) is a regulatory programme under which, in times of emergency, drugs can be granted regulatory approval under the Food and Drug Act and its regulations. [ 1 ] [ 2 ] [ 3 ] [ 4 ] An EUND approved through this pathway can only be sold to federal, provincial, territorial and municipal governments. [ 5 ] The text of the EUNDs regulations is available. [ 6 ]
On 25 March 2011 [ 5 ] and after the pH1N1 pandemic , [ 3 ] amendments were made to the Food and Drug Regulations (FDR) to include a specific regulatory pathway for EUNDs. Typically, clinical trials in human subjects are conducted and the results are provided as part of the clinical information package of a New Drug Submission (NDS) to Health Canada, the federal authority that reviews the safety and efficacy of human drugs. [ 2 ]
Health Canada recognizes that there are circumstances in which sponsors cannot reasonably provide substantial evidence demonstrating the safety and efficacy of a therapeutic product for NDS as there are logistical or ethical challenges in conducting the appropriate human clinical trials. The EUND pathway was developed to allow a mechanism for authorization of these drugs based on non-clinical and limited clinical information. A manufacturer of a new drug may file an extraordinary use new drug submission for the new drug if, under paragraph C.08.002.01(1): [ 2 ]
(a) the new drug is intended for
(b) the requirements set out in paragraphs C.08.002(2)(g) and (h) cannot be met because
The HPFB signed an electronic data interchange agreement with the US Food and Drug Administration in November 2003, and again in April 2004 with the Therapeutic Goods Administration of Australia . Dr Joel Lexchin found the secrecy arrangements in the memorandum of understanding to be troublesome and said that "that just makes it more difficult for the medical community to know how drugs were approved, how the data were assessed, and even what data were assessed." [ 7 ]
In 2016, the HPFB signed an agreement with the European Directorate for the Quality of Medicines "for the exchange of information generated by the EDQM through its certification procedure and by HBFB during the course of applicable product assessments." [ 8 ] | https://en.wikipedia.org/wiki/Health_Products_and_Food_Branch |
The Health Valley covers the Western Switzerland region, where the life sciences sector extends from Geneva to Bern, including the seven cantons of Bern , Fribourg , Geneva , Jura , Neuchâtel , Valais and Vaud . The cluster comprises a critical mass of 1,000 companies, research centers and innovation support structures, today representing more than 25,000 employees. The Health Valley initiative aims to animate the region's life sciences ecosystem by building bridges between its members.
The name of the Health Valley is inspired by that of Silicon Valley in California, United States (where the focus is on information technology ). According to Swiss newspaper Le Temps , there were close to 1,000 biotech and medtech companies in the Health Valley in 2017, employing 25,000 people. [ 1 ]
The Health Valley network is led by BioAlps, an association funded by the 7 cantons of Western Switzerland and 12 academic members such as EPFL , UNIL , UNINE , UNIGE , HES-SO , HEIG-VD , HEPIA, UNIFR , CHUV , HUG , CSEM , SIB . Its mission is to represent the whole ecosystem and to foster synergies between all the actors.
A digital interactive map, project led in 2016 by the Fondation Inartis , of the regional actors is located at healthvalley.ch .
According to Swiss journalists, the idea of a Health Valley was actively supported by Patrick Aebischer during his tenure as president of the Swiss Federal Institute of Technology in Lausanne/EPFL (2000–2016). Notably, Aebischer promoted teaching and research in the life sciences while deepening cooperation with Lausanne University Hospital/CHUV. [ 2 ] [ 3 ]
Biotech expert Jürg Zürcher argues that Switzerland as a whole constitutes a cluster, with the Basel BioValley employing 50,000 people and the Zurich region employing 21,000. "Together, these three regions form the densest network of biotech firms anywhere in the world", Swiss Info notes, with over 40% of the world's pharmaceutical companies in the Basel region alone. Foreign competing clusters include the Oxford - Cambridge -London cluster in the United Kingdom, the Boston, San Francisco and San Diego clusters in the United States, as well as emerging ones in India (Hyderabad, Bangalore , New Delhi) and China (Shanghai, Shenzhen ). | https://en.wikipedia.org/wiki/Health_Valley |
The Health and Safety (Safety Signs and Signals) Regulations 1996 specify the safety signs within Great Britain; [ 1 ] Northern Ireland has a similar law, the Health and Safety (Safety Signs and Signals) Regulations (Northern Ireland) 1996. [ 2 ] It was issued as a transposition of the European directive 92/58/EEC and replaced the Safety Signs Regulations 1980. [ 1 ] They consist of "traditional safety signs", such as prohibitory and warning signs, along with hand signals, spoken and acoustic signals, and hazard marking. [ 3 ]
Notable limitations of the previous legislation, the Safety Signs Regulations 1980 , were that it excluded coal mining and tips, and did not include signage related to fire-fighting equipment, rescue/first-aid equipment or emergency exits. [ 4 ] The law also simply stated that signage required under the Health and Safety at Work etc. Act 1974 shall comply with BS 5378:Part 1: 1980, providing no further information on where signs should be posted, the incorporation of text, or the sizing of signage. [ 4 ] The standard also lacked this information, as well as guidance on situations not effectively handled by a standard safety sign, such as blocking off hazardous areas, marking of traffic routes, or use of acoustic or light signals for a safety hazard. [ 5 ]
The regulations apply to occupational health and safety within the territorial borders of Great Britain, as well as on offshore installations. [ 6 ] [ 7 ] [ 8 ] They do not apply to the marking of dangerous goods and substances themselves, only to their storage or pipes, nor to the regulation of road, rail, inland waterway, sea or air traffic, nor to signs used aboard sea-going ships. [ 1 ] For internal road traffic, traffic signs prescribed by the TSRGD should be used. [ 6 ] [ 9 ]
The Regulations do not require the usage of safety signs and signals for non-employees, such as customers, visitors or the general public. [ 6 ] However, section 3 of the Health and Safety at Work etc. Act 1974 requires employers to take reasonable efforts to protect the health and safety of non-employees from hazards posed by their work. [ 10 ] The Regulations note that signs provided may be used for this purpose. [ 11 ] [ 6 ]
The Regulations require safety signage to be uniform and, as far as appropriate, without words, in order to be easily and quickly understandable without knowing the language. [ 11 ] Minor differences between the prescribed signs and the installed signs are allowed, as long as they convey the same message. [ 1 ] [ 6 ] The Regulations also allow for designing a custom symbol when a suitable symbol does not exist in the regulation. [ 6 ] The symbol should follow BS ISO 3864-1:2011 and BS ISO 3864-4:2011 to ensure compliance with basic design principles. [ 6 ] The Health and Safety Executive specifically allows the usage of BS EN ISO 7010 safety signs. [ 6 ]
Safety signs should only be used, if other measures of avoiding hazards failed. [ 3 ] [ 7 ] Also, if there is no risk, no safety signage should be used. [ 9 ] Employees should regularly be instructed about the meaning of safety signs and signals. [ 12 ] [ 13 ] Employers are obligated to maintain the safety signage. [ 13 ]
The Health and Safety (Safety Signs and Signals) Regulations 1996 consists of 8 articles and 3 schedules. [ 1 ]
As required in Annex I of the European directive 92/58/EEC , Schedule 1, Part I of the Regulations lays down a basic safety colour concept: [ 1 ] [ 8 ]
Schedule 1, Part II defines five types of signboards, as shown below. [ 1 ] They are also covered by BS 5378, Part 1 and 3 from 1980 and 1982, [ 7 ] [ 14 ] [ 15 ] which have been superseded by BS EN ISO 7010 . [ 16 ] [ 17 ] [ 18 ] Safety signs must contain only symbols, not text. [ 7 ] [ 13 ] However, supplementary text plates may be used. [ 13 ] For fire exits , the running man symbol should be used. [ 19 ] Fire safety signs in use before the Regulations were in place could be used until 24 December 1998. [ 12 ]
This part, Minimum requirements governing signs on containers and pipes , defines the marking for the transport or storage of dangerous material by pipes and in containers, originally within the scope of the European directives 67/548/EEC and 1999/45/EC , which are both replaced by Regulation (EC) No 1272/2008 , the CLP Regulation. [ 1 ] For marking, the warning signs of Part II should be used. [ 7 ]
Storage areas and rooms for dangerous substances are also required to be marked by suitable signage, either specifying the specific hazard, if all substances in the area have a common hazard, such as flammability, or using the 'general danger' symbol if different substances have various hazards. The regulation also explains differences and overlap with Dangerous Substances (Notification and Marking of Sites) Regulations 1990, which relates to requirements to mark facilities and sites where dangerous substances are stored in large quantities for firefighter safety. [ 1 ]
The Minimum requirements for the identification and location of fire-fighting equipment specify that, in addition to the fire-fighting signs of Part II, fire-fighting equipment and its location have to be marked red. [ 1 ]
This part, the Minimum requirements governing signs used for obstacles and dangerous locations, and for marking traffic routes , requires hazardous places to be marked with either black-and-yellow or red-and-white markings. It also states that routes used for traffic have to be marked with either white or yellow stripes. [ 1 ]
The Health and Safety Executive expects employers to establish and mark traffic routes when necessary to ensure workplace safety, such as where powered industrial trucks are in use, especially in proximity to workers on foot. Employers are also expected to take steps to ensure safe flow of traffic, such as providing a banksman to guide reversing vehicles near people on foot or near hazardous conditions such as a drop off. [ 1 ]
Part VI, Minimum requirements for illuminated signs , requires illuminated signs to be single-coloured or to contain a symbol. If the latter is the case, it should comply with Part II. If a flashing light and a sound are used together, they have to be synchronized. [ 1 ]
This part, the Minimum requirements for acoustic signals , requires acoustic signals to be understandable and not harmful. If the acoustic signal is a fire alarm, it has to be continuous. [ 1 ]
The Minimum requirements for verbal communication defines the use of language for safety purposes. It also defines coded words: [ 1 ]
Hand signals should only be given by one instructor . [ 8 ] Other hand signals are also allowed, as specified in Schedule 2. [ 1 ]
The prescribed hand signals include: attention / start of command, interruption, end of movement, and emergency stop.
Since its introduction in 1996, the Regulations have undergone some changes. The 'Harmful or irritant material' warning sign was completely removed from the Regulations on 6 January 2015. [ 20 ] This was in response to the CLP Regulation amending Directive 92/58/EEC to remove the sign as part of harmonisation with the Globally Harmonized System of Classification and Labelling of Chemicals , which discontinued the use of an "X" to identify harmful and irritating substances. [ 21 ]
In 2015, the third edition of Safety Signs and Signals , published by the Health and Safety Executive, was released. The guidance reinforced the existing position that "small differences from the pictograms or symbols shown in Schedule 1 of the Regulations are acceptable" by directly stating that EN ISO 7010 symbols were considered acceptable for use instead of the designs provided in the Regulations, as they conformed to the 'intrinsic features' specified in the law. [ 6 ]
The Health and Safety (Safety Signs and Signals) Regulations 1996 brought about a significant move towards uniformity in the appearance of safety sign design in the United Kingdom. British Rail's successor Railtrack started to phase out usage of the railway's 'red warning flash' signage, in use for over 35 years, in August 1997. [ 22 ] [ 23 ] The railway's signs for warning staff of areas of limited clearance, marked by a simple chequered board design dating back to 1952, were also to be phased out in favour of a standard warning design. However, the chequered design has persisted, still being part of the relevant safety sign standard as of 2025, nearly 30 years after the 1996 legislation's enactment, owing to staff familiarity with the previous non-standard design. [ 24 ]
This article incorporates text published under the British Open Government Licence v3.0: | https://en.wikipedia.org/wiki/Health_and_Safety_(Safety_Signs_and_Signals)_Regulations_1996 |
The health and safety hazards of nanomaterials include the potential toxicity of various types of nanomaterials , as well as fire and dust explosion hazards. Because nanotechnology is a recent development, the health and safety effects of exposures to nanomaterials, and what levels of exposure may be acceptable, are subjects of ongoing research. Of the possible hazards, inhalation exposure appears to present the most concern, with animal studies showing pulmonary effects such as inflammation , fibrosis , and carcinogenicity for some nanomaterials. Skin contact and ingestion exposure, and dust explosion hazards, are also a concern.
Guidance has been developed for hazard controls that are effective in reducing exposures to safe levels, including substitution with safer forms of a nanomaterial, engineering controls such as proper ventilation, and personal protective equipment as a last resort. For some materials, occupational exposure limits have been developed to determine a maximum safe airborne concentration of nanomaterials, and exposure assessment is possible using standard industrial hygiene sampling methods. An ongoing occupational health surveillance program can also help to protect workers. Microplastics and nanoparticles from plastic containers are an increasing concern. [ 1 ] [ 2 ]
Nanotechnology is the manipulation of matter at the atomic scale to create materials, devices, or systems with new properties or functions, with potential applications in energy , healthcare , industry , communications, agriculture, consumer products, and other sectors. Nanomaterials have at least one primary dimension of less than 100 nanometers , and often have properties different from those of their bulk components that are technologically useful. The classes of materials of which nanoparticles are typically composed include elemental carbon, metals or metal oxides, and ceramics. According to the Woodrow Wilson Center , the number of consumer products or product lines that incorporate nanomaterials increased from 212 to 1317 from 2006 to 2011. Worldwide investment in nanotechnology increased from $432 million in 1997 to about $4.1 billion in 2005. [ 3 ] : 1–3
Because nanotechnology is a recent development, the health and safety effects of exposures to nanomaterials, and what levels of exposure may be acceptable, is not yet fully understood. Research concerning the handling of nanomaterials is underway, and guidance for some nanomaterials has been developed. [ 3 ] : 1–3 As with any new technology, the earliest exposures are expected to occur among workers conducting research in laboratories and pilot plants, making it important that they work in a manner that is protective of their safety and health. [ 4 ] : 1
A risk management system is composed of three parts. Hazard identification involves determining what health and safety concerns are present for both the nanomaterial and its corresponding bulk material, based on a review of safety data sheets , peer-reviewed literature, and guidance documents on the material. For nanomaterials, toxicity hazards are the most important, but dust explosion hazards may also be relevant. Exposure assessment involves determining actual routes of exposure in a specific workplace, including a review of which areas and tasks are most likely to cause exposure. Exposure control involves putting procedures in places to minimize or eliminate exposures according to the hierarchy of hazard controls . [ 4 ] : 2–6 [ 5 ] : 3–5 Ongoing verification of hazard controls can occur through monitoring of airborne nanomaterial concentrations using standard industrial hygiene sampling methods, and an occupational health surveillance program may be instituted. [ 5 ] : 14–16
A recently adopted risk management method is the Safe by Design (SbD) approach. It aims to eliminate or reduce the risks of new technologies, including nanotechnology, at the design stage of a product or production process. Anticipating risks is challenging because some risks may emerge only after a technology is implemented (at later stages in the innovation process); in such cases, other risk management strategies, based on non-design principles, need to be applied. The SbD approach considers the purposes of, and constraints on, implementing SbD in the industrial innovation process and, on that basis, establishes optimal workflows, called Safe by Design strategies, to identify risks and propose solutions to reduce or mitigate them as early as possible in the innovation process. [ 6 ]
Inhalation exposure is the most common route of exposure to airborne particles in the workplace. The deposition of nanoparticles in the respiratory tract is determined by the shape and size of particles or their agglomerates, and they are deposited in the alveolar compartment to a greater extent than larger respirable particles. [ 7 ] Based on animal studies , nanoparticles may enter the bloodstream from the lungs and translocate to other organs, including the brain. [ 8 ] : 11–12 The inhalation risk is affected by the dustiness of the material, the tendency of particles to become airborne in response to a stimulus. Dust generation is affected by the particle shape, size, bulk density, and inherent electrostatic forces, and whether the nanomaterial is a dry powder or incorporated into a slurry or liquid suspension . [ 4 ] : 5–6
Animal studies indicate that carbon nanotubes and carbon nanofibers can cause pulmonary effects including inflammation , granulomas , and pulmonary fibrosis , which were of similar or greater potency when compared with other known fibrogenic materials such as silica , asbestos , and ultrafine carbon black . Some studies in cells or animals have shown genotoxic or carcinogenic effects, or systemic cardiovascular effects from pulmonary exposure. Although the extent to which animal data may predict clinically significant lung effects in workers is not known, the toxicity seen in the short-term animal studies indicate a need for protective action for workers exposed to these nanomaterials. As of 2013, further research was needed in long-term animal studies and epidemiologic studies in workers. No reports of actual adverse health effects in workers using or producing these nanomaterials were known as of 2013. [ 9 ] : v–ix, 33–35 Titanium dioxide (TiO 2 ) dust is considered a lung tumor risk, with ultrafine (nanoscale) particles having an increased mass-based potency relative to fine TiO 2 , through a secondary genotoxicity mechanism that is not specific to TiO 2 but primarily related to particle size and surface area. [ 10 ] : v–vii, 73–78
Some studies suggest that nanomaterials could potentially enter the body through intact skin during occupational exposure. Studies have shown that particles smaller than 1 μm in diameter may penetrate into mechanically flexed skin samples, and that nanoparticles with varying physicochemical properties were able to penetrate the intact skin of pigs. Factors such as size, shape, water solubility, and surface coating directly affect a nanoparticle's potential to penetrate the skin. At this time, it is not fully known whether skin penetration of nanoparticles would result in adverse effects in animal models, although topical application of raw SWCNT to nude mice has been shown to cause dermal irritation, and in vitro studies using primary or cultured human skin cells have shown that carbon nanotubes can enter cells and cause release of pro-inflammatory cytokines , oxidative stress , and decreased viability. It remains unclear, however, how these findings may be extrapolated to a potential occupational risk. [ 8 ] : 12 [ 9 ] : 63–64 In addition, nanoparticles may enter the body through wounds, with particles migrating into the blood and lymph nodes. [ 11 ]
Ingestion can occur from unintentional hand-to-mouth transfer of materials; this has been found to happen with traditional materials, and it is scientifically reasonable to assume that it also could happen during handling of nanomaterials. Ingestion may also accompany inhalation exposure because particles that are cleared from the respiratory tract via the mucociliary escalator may be swallowed. [ 8 ] : 12
There is concern that engineered carbon nanoparticles, when manufactured on an industrial scale, could pose a dust explosion hazard, especially for processes such as mixing, grinding, drilling, sanding, and cleaning. Knowledge remains limited about the potential explosivity of materials when subdivided down to the nanoscale. [ 12 ] The explosion characteristics of nanoparticles are highly dependent on the manufacturer and the humidity . [ 5 ] : 17–18
For microscale particles, as particle size decreases and the specific surface area increases, the explosion severity increases. However, for dusts of organic materials such as coal , flour , methylcellulose , and polyethylene , severity ceases to increase as the particle size is reduced below ~50 μm. This is because decreasing particle size primarily increases the volatilization rate, which becomes rapid enough that gas-phase combustion becomes the rate-limiting step , and further decreases in particle size will not increase the overall combustion rate. [ 12 ] While the minimum explosion concentration does not vary significantly with nanoparticle size, the minimum ignition energy and temperature have been found to decrease with particle size. [ 13 ]
Metal-based nanoparticles exhibit more severe explosions than do carbon nanomaterials, and their chemical reaction pathway is qualitatively different. [ 12 ] Studies on aluminum nanoparticles and titanium nanoparticles indicate that they are explosion hazards. [ 5 ] : 17–18 One study found that the likelihood of an explosion but not its severity increases significantly for nanoscale metal particles, and they can spontaneously ignite under certain conditions during laboratory testing and handling. [ 14 ]
High- resistivity powders can accumulate electric charge, causing a spark hazard, and low-resistivity powders can build up in electronics, causing a short circuit hazard, both of which can provide an ignition source. In general, powders of nanomaterials have higher resistivity than the equivalent micron-scale powders, and humidity decreases their resistivity. One study found powders of metal-based nanoparticles to be mid- to high-resistivity depending on humidity, while carbon-based nanoparticles were found to be low-resistivity regardless of humidity. Powders of nanomaterials are unlikely to present an unusual fire hazard as compared to their cardboard or plastic packaging, as they are usually produced in small quantities, with the exception of carbon black . [ 15 ] However, the catalytic properties of nanoparticles and nanostructured porous materials may cause unintended catalytic reactions that, based on their chemical composition, would not otherwise be anticipated. [ 8 ] : 21
Engineered radioactive nanoparticles have applications in medical diagnostics , medical imaging , toxicokinetics , and environmental health , and are being investigated for applications in nuclear medicine . Radioactive nanoparticles present special challenges in operational health physics and internal dosimetry that are not present for vapors or larger particles, as the nanoparticles' toxicokinetics depend on their physical and chemical properties including size , shape , and surface chemistry . In some cases, the inherent physicochemical toxicity of the nanoparticle itself may lead to lower exposure limits than those associated with the radioactivity alone, which is not the case with most radioactive materials. In general, however, most elements of a standard radiation protection program are applicable to radioactive nanomaterials, and many hazard controls for nanomaterials will be effective with the radioactive versions. [ 11 ]
Controlling exposures to hazards is the fundamental method of protecting workers. The hierarchy of hazard control is a framework that encompasses a succession of control methods to reduce the risk of illness or injury. In decreasing order of effectiveness, these are elimination of the hazard, substitution with another material or process that is a lesser hazard, engineering controls that isolate workers from the hazard, administrative controls that change workers' behavior to limit the quantity or duration of exposure, and personal protective equipment worn on the workers' body. [ 3 ] : 9
Prevention through design is the concept of applying control methods to minimize hazards early in the design process, with an emphasis on optimizing employee health and safety throughout the life cycle of materials and processes. It increases the cost-effectiveness of occupational safety and health because hazard control methods are integrated early into the process, rather than needing to disrupt existing procedures to include them later. In this context, adopting hazard controls earlier in the design process and higher on the hierarchy of controls leads to faster time to market, improved operational efficiency, and higher product quality. [ 5 ] : 6–8
Elimination and substitution are the most desirable approaches to hazard control, and are most effective early in the design process. Nanomaterials themselves often cannot be eliminated or substituted with conventional materials because their unique properties are necessary to the desired product or process. [ 3 ] : 9–10 However, it may be possible to choose properties of the nanoparticle such as size , shape , functionalization , surface charge , solubility , agglomeration , and aggregation state to improve their toxicological properties while retaining the desired functionality. Other materials used incidentally in the process, such as solvents , are also amenable to substitution. [ 5 ] : 8
In addition to the materials themselves, procedures used to handle them can be improved. For example, using a nanomaterial slurry or suspension in a liquid solvent instead of a dry powder will reduce dust exposure. Reducing or eliminating steps that involve transfer of powder or opening packages containing nanomaterials also reduces aerosolization and thus the potential hazard to the worker. [ 3 ] : 9–10 Reducing agitation procedures such as sonication , and reducing the temperature of reactors to minimize release of nanomaterials in exhaust, also reduce hazards to workers. [ 4 ] : 10–12
Engineering controls are physical changes to the workplace that isolate workers from hazards by containing them in an enclosure, or removing contaminated air from the workplace through ventilation and filtering . They are used when hazardous substances and processes cannot be eliminated or replaced with less hazardous substitutes. Well-designed engineering controls are typically passive, in the sense of being independent of worker interactions, which reduces the potential for worker behavior to impact exposure levels. The initial cost of engineering controls can be higher than administrative controls or personal protective equipment, but the long-term operating costs are frequently lower and can sometimes provide cost savings in other areas of the process. [ 3 ] : 10–11 The type of engineering control optimal for each situation is influenced by the quantity and dustiness of the material as well as the duration of the task. [ 5 ] : 9–11
Ventilation systems can be local or general. General exhaust ventilation operates on an entire room through a building's HVAC system . It is inefficient and costly as compared to local exhaust ventilation, and is not suitable by itself for controlling exposure, although it can provide negative room pressure to prevent contaminants from exiting the room. Local exhaust ventilation operates at or near the source of contamination, often in conjunction with an enclosure. [ 3 ] : 11–12 Examples of local exhaust systems include fume hoods , gloveboxes , biosafety cabinets , and vented balance enclosures . Exhaust hoods lacking an enclosure are less preferable, and laminar flow hoods are not recommended because they direct air outwards towards the worker. [ 4 ] : 18–28 Several control verification techniques can be used with ventilation systems, including pitot tubes , hot-wire anemometers , smoke generators , tracer-gas leak testing , and standardized testing and certification procedures . [ 3 ] : 50–52, 59–60 [ 5 ] : 14–15
Examples of non-ventilation engineering controls include placing equipment that may release nanomaterials in a separate room, and placing walk-off sticky mats at room exits. [ 5 ] : 9–11 Antistatic devices can be used when handling nanomaterials to reduce their electrostatic charge, making them less likely to disperse or adhere to clothing. [ 4 ] : 28 Standard dust control methods such as enclosures for conveyor systems , using a sealed system for bag filling, and water spray application are effective at reducing respirable dust concentrations. [ 3 ] : 16–17
Administrative controls are changes to workers' behavior to mitigate a hazard. They include training on best practices for safe handling, storage, and disposal of nanomaterials, proper awareness of hazards through labeling and warning signage, and encouraging a general safety culture . Administrative controls can complement engineering controls should they fail, or when they are not feasible or do not reduce exposures to an acceptable level. Some examples of good work practices include cleaning work spaces with wet-wiping methods or a HEPA-filtered vacuum cleaner instead of dry sweeping with a broom , avoiding handling nanomaterials in a free particle state, and storing nanomaterials in containers with tightly closed lids. Normal safety procedures such as hand washing , not storing or consuming food in the laboratory, and proper disposal of hazardous waste are also administrative controls. [ 3 ] : 17–18 Other examples are limiting the time workers are handling a material or in a hazardous area, and exposure monitoring for the presence of nanomaterials. [ 4 ] : 14–15
Personal protective equipment (PPE) must be worn on the worker's body and is the least desirable option for controlling hazards. It is used when other controls are not effective, have not been evaluated, or while doing maintenance or in emergency situations such as spill response. PPE normally used for typical chemicals are also appropriate for nanomaterials, including wearing long pants, long-sleeve shirts, and closed-toed shoes, and the use of safety gloves , goggles , and impervious laboratory coats . Nitrile gloves are preferred because latex gloves do not provide protection from most chemical solvents and may present an allergy hazard. Face shields are not an acceptable replacement for goggles because they do not protect against unbound dry materials. Woven cotton lab coats are not recommended for nanomaterials, as they can become contaminated with nanomaterials and release them later. Donning and removing PPE in a changing room prevents contamination of outside areas. [ 5 ] : 12–14
Respirators are another form of PPE. Respirator filters with a NIOSH air filtration rating of N95 or P100 have been shown to be effective at capturing nanoparticles, although leakage between the respirator seal and the skin may be more significant, especially with half-mask respirators. Surgical masks are not effective against nanomaterials. [ 5 ] : 12–14 Smaller nanoparticles of size 4–20 nm are captured more efficiently by filters than larger ones of size 30–100 nm, because Brownian motion results in the smaller particles being more likely to contact a filter fiber. [ 17 ] In the United States, the Occupational Safety and Health Administration requires fit testing and medical clearance for use of respirators, [ 18 ] and the Environmental Protection Agency requires the use of full face respirators with N100 filters for multi-walled carbon nanotubes not embedded in a solid matrix, if exposure is not otherwise controlled. [ 19 ]
An occupational exposure limit (OEL) is an upper limit on the acceptable concentration of a hazardous substance in workplace air. As of 2016, quantitative OELs have not been determined for most nanomaterials. Agencies and organizations from several countries, including the British Standards Institute [ 20 ] and the Institute for Occupational Safety and Health in Germany, [ 21 ] have established OELs for some nanomaterials, and some companies have supplied OELs for their products. [ 3 ] : 7 As of 2021, the U.S. National Institute for Occupational Safety and Health has determined non-regulatory recommended exposure limits (RELs) for three classes of nanomaterials: ultrafine titanium dioxide, carbon nanotubes and carbon nanofibers, and silver nanomaterials. [ 22 ]
A properly tested half-face particulate respirator will provide protection at exposure concentrations 10 times the REL, while an elastomeric full facepiece respirator with P100 filters will provide protection at 50 times the REL. [ 4 ] : 18 In the absence of OELs, a control banding scheme may be used. Control banding is a qualitative strategy that uses a rubric to place hazards into one of four categories, or "bands", each of which has a recommended level of hazard controls. Organizations including GoodNanoGuide, [ 24 ] Lawrence Livermore National Laboratory , [ 25 ] and Safe Work Australia [ 26 ] have developed control banding tools that are specific for nanomaterials. [ 4 ] : 31–33 The GoodNanoGuide control banding scheme is based only on exposure duration, whether the material is bound, and the extent of knowledge of the hazards. [ 24 ] The LLNL scheme assigns points for 15 different hazard parameters and 5 exposure potential factors. [ 27 ] Alternatively, the " As Low As Reasonably Achievable " concept may be used. [ 3 ] : 7–8
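The arithmetic behind protection factors and control bands is simple enough to sketch in code. The following Python snippet is an illustrative sketch only: the example REL value, the band cutoffs, and the band labels are hypothetical stand-ins for demonstration, not values taken from NIOSH or from any published control banding tool.

```python
# Illustrative sketch; REL value, cutoffs, and band labels are hypothetical.

EXAMPLE_REL_UG_M3 = 1.0  # hypothetical REL in ug/m^3 (8-hour TWA)

# Assigned protection factors described in the text: a properly tested
# half-face particulate respirator protects up to 10x the REL, and an
# elastomeric full-facepiece respirator with P100 filters up to 50x.
PROTECTION_FACTORS = {"half-face": 10, "full-facepiece P100": 50}

def max_use_concentration(rel_ug_m3: float, respirator: str) -> float:
    """Highest airborne concentration at which the respirator is considered
    adequate: the exposure limit multiplied by the assigned protection factor."""
    return rel_ug_m3 * PROTECTION_FACTORS[respirator]

def control_band(hazard_score: int) -> str:
    """Toy control-banding rubric: map a summed hazard/exposure score onto
    one of four bands, each with a recommended level of engineering controls."""
    bands = [
        (25, "Band 1: general ventilation"),
        (50, "Band 2: fume hood or local exhaust ventilation"),
        (75, "Band 3: containment or enclosure"),
    ]
    for cutoff, recommendation in bands:
        if hazard_score <= cutoff:
            return recommendation
    return "Band 4: seek specialist advice"

if __name__ == "__main__":
    print(max_use_concentration(EXAMPLE_REL_UG_M3, "full-facepiece P100"))  # 50.0
    print(control_band(62))  # Band 3: containment or enclosure
```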
Exposure assessment is a set of methods used to monitor contaminant release and exposures to workers. These methods include personal sampling, where samplers are located in the personal breathing zone of the worker, often attached to a shirt collar to be as close to the nose and mouth as possible; and area/background sampling, where they are placed at static locations. Assessments generally use both particle counters , which monitor the real-time quantity of nanomaterials and other background particles; and filter-based samples, which can be used to identify the nanomaterial, usually using electron microscopy and elemental analysis . [ 5 ] : 14–15 [ 28 ]
Not all instruments used to detect aerosols are suitable for monitoring occupational nanomaterial emissions because they may not be able to detect smaller particles, or may be too large or difficult to ship to a workplace. [ 3 ] : 57 [ 8 ] : 23–33 Suitable particle counters can detect a wide range of particle sizes, as nanomaterials may aggregate in the air. It is recommended to simultaneously test adjacent work areas to establish a background concentration, as direct reading instruments cannot distinguish the target nanomaterial from incidental background nanoparticles from motor or pump exhaust or heating vessels. [ 3 ] : 47–49 [ 28 ]
While mass-based metrics are traditionally used to characterize toxicological effects of exposure to air contaminants, as of 2013 it was unclear which metrics are most important with regard to engineered nanomaterials. Animal and cell-culture studies have shown that size and shape are the two major factors in their toxicological effects. [ 3 ] : 57–58 Surface area and surface chemistry also appeared to be more important than mass concentration. [ 8 ] : 23
The NIOSH Nanomaterial Exposure Assessment Technique (NEAT 2.0) is a sampling strategy to determine exposure potential for engineered nanomaterials. It includes filter-based and area samples, as well as a comprehensive assessment of emissions at processes and job tasks to better understand peak emission periods. Evaluation of worker practices, ventilation efficacy, and other engineering exposure control systems and risk management strategies serve to allow for a comprehensive exposure assessment. [ 28 ] The NIOSH Manual of Analytical Methods includes guidance on electron microscopy of filter samples of carbon nanotubes and nanofibers, [ 29 ] and additionally some NIOSH methods developed for other chemicals can be used for off-line analysis of nanomaterials, including their morphology and geometry, elemental carbon content (relevant for carbon-based nanomaterials), and elemental makeup. [ 3 ] : 57–58 Efforts to create reference materials are ongoing. [ 8 ] : 23
Occupational health surveillance involves the ongoing systematic collection, analysis, and dissemination of exposure and health data on groups of workers, for the purpose of preventing disease and evaluating the effectiveness of intervention programs. It encompasses both medical surveillance and hazard surveillance. A basic medical surveillance program contains a baseline medical evaluation and periodic follow-up examinations, post-incident evaluations, worker training, and identification of trends or patterns from medical screening data. [ 4 ] : 34–35
The related topic of medical screening focuses on the early detection of adverse health effects for individual workers, to provide an opportunity for intervention before disease processes occur. Screening may involve obtaining and reviewing an occupational history, medical examination, and medical testing. As of 2016, there were no specific screening tests or health evaluations to identify health effects in people that are caused solely by exposure to engineered nanomaterials. [ 5 ] : 15–16 However, any medical screening recommendations for the bulk material that a nanoparticle is made of still apply, [ 30 ] and in 2013 NIOSH concluded that the toxicologic evidence on carbon nanotubes and carbon nanofibers had advanced enough to make specific recommendations for the medical surveillance and screening of exposed workers. [ 9 ] : vii, 65–69 Medical screening and resulting interventions represent secondary prevention and do not replace primary prevention efforts based on direct hazard controls to minimize employee exposures to nanomaterials. [ 4 ] : 34–35
It is recommended that a nanomaterial spill kit be assembled prior to an emergency and include barricade tape , nitrile or other chemically impervious gloves, an elastomeric full-facepiece respirator with P100 or N100 filters (fitted appropriately to the responder), adsorbent materials such as spill mats, disposable wipes, sealable plastic bags, walk-off sticky mats , a spray bottle with deionized water or another appropriate liquid to wet dry powders, and a HEPA -filtered vacuum. It is considered unsafe to use compressed air, dry sweeping, and vacuums without a HEPA filter to clear dust. [ 5 ] : 16–17
The Food and Drug Administration regulates nanomaterials under the Federal Food, Drug, and Cosmetic Act when used as food additives, drugs, or cosmetics. [ 31 ] The Consumer Product Safety Commission requires testing and certification of many consumer products for compliance with consumer product safety requirements, and cautionary labeling of hazardous substances under the Federal Hazardous Substances Act . [ 5 ] : 20–22
The General Duty Clause of the Occupational Safety and Health Act requires all employers to keep their workplace free of serious recognized hazards. The Occupational Safety and Health Administration also has recording and reporting requirements for occupational injuries and illness under 29 CFR 1904 for businesses with more than 10 employees, and protection and communication regulations under 29 CFR 1910 . Companies producing new products containing nanomaterials must use the Hazard Communication Standard to create safety data sheets containing 16 sections for downstream users such as customers, workers, disposal services, and others. This may require toxicological or other testing, and all data or information provided must be vetted by properly controlled testing. The ISO/TR 13329 standard [ 32 ] provides guidance specifically on the preparation of safety data sheets for nanomaterials. The National Institute for Occupational Safety and Health does not issue regulations, but conducts research and makes recommendations to prevent worker injury and illness. State and local governments may have additional regulations. [ 5 ] : 18–22
The Environmental Protection Agency (EPA) regulates nanomaterials under the Toxic Substances Control Act , and has permitted limited manufacture of new chemical nanomaterials through the use of consent orders or Significant New Use Rules (SNURs). In 2011 EPA issued a SNUR on multi-walled carbon nanotubes , codified as 40 CFR 721.10155 . Other statutes falling in the EPA's jurisdiction may apply, such as the Federal Insecticide, Fungicide, and Rodenticide Act (if antibacterial claims are made), the Clean Air Act , or the Clean Water Act . [ 5 ] : 13, 20–22 EPA regulates nanomaterials under the same provisions as other hazardous chemical substances. [ 31 ]
In the European Union , nanomaterials classified by the European Commission as hazardous chemical substances are regulated under the European Chemical Agency 's Registration, Evaluation, Authorisation, and Restriction of Chemicals (REACH) regulation, as well as the Classification, Labeling, and Packaging (CLP) regulations. [ 31 ] Under the REACH regulation, companies have the responsibility of collecting information on the properties and uses of substances that they manufacture or import at or above quantities of 1 ton per year, including nanomaterials. [ 5 ] : 22 There are special provisions for cosmetics that contain nanomaterials, and for biocidal materials under the Biocidal Products Regulation (BPR) when at least 50% of their primary particles are nanoparticles. [ 31 ]
In the United Kingdom, powders of nanomaterials may fall under the Chemicals (Hazard Information and Packaging for Supply) Regulations 2002 , as well as the Dangerous Substances and Explosive Atmosphere Regulations 2002 if they are capable of fueling a dust explosion . [ 15 ] | https://en.wikipedia.org/wiki/Health_and_safety_hazards_of_nanomaterials |
Health data is any data "related to health conditions, reproductive outcomes, causes of death , and quality of life " [ 1 ] for an individual or population. Health data includes clinical metrics along with environmental, socioeconomic, and behavioral information pertinent to health and wellness. A plurality of health data are collected and used when individuals interact with health care systems . This data, collected by health care providers , typically includes a record of services received, conditions of those services, and clinical outcomes or information concerning those services. [ 2 ] Historically, most health data has been sourced from this framework. The advent of eHealth and advances in health information technology , however, have expanded the collection and use of health data—but have also engendered new security, privacy, and ethical concerns. [ 3 ] The increasing collection and use of health data by patients is a major component of digital health .
Health data are classified as either structured or unstructured. Structured health data is standardized and easily transferable between health information systems. [ 4 ] For example, a patient's name, date of birth, or a blood-test result can be recorded in a structured data format. Unstructured health data, unlike structured data, is not standardized. [ 4 ] Emails, audio recordings, or physician notes about a patient are examples of unstructured health data. While advances in health information technology have expanded collection and use, the complexity of health data has hindered standardization in the health care industry. [ 2 ] As of 2013, it was estimated that approximately 60% of health data in the United States were unstructured. [ 4 ]
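To make the distinction concrete, the following Python sketch contrasts a structured record with an unstructured note. The field names and values are hypothetical examples chosen for illustration; they do not represent any particular health-data standard.

```python
# Structured health data: a fixed schema of named fields with typed values,
# which another health information system can read directly.
structured_record = {
    "patient_name": "Jane Doe",
    "date_of_birth": "1980-04-12",
    "lab_results": [
        {"test": "hemoglobin", "value": 13.5, "unit": "g/dL"},
    ],
}

# Unstructured health data: free text with no fixed schema, such as a
# physician's note, which requires parsing before it can be exchanged.
unstructured_note = (
    "Patient reports mild fatigue over the past two weeks; "
    "no fever. Recommend repeat CBC in one month."
)

# A structured field can be retrieved directly by key...
print(structured_record["lab_results"][0]["value"])  # 13.5

# ...whereas extracting the same kind of fact from free text requires
# additional processing, e.g. natural language processing.
print("fatigue" in unstructured_note.lower())  # True
```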
Health informatics , a field of health data management, superseded medical informatics in the 1970s. [ 5 ] Health informatics, which is broadly defined as the collection, storage, distribution, and use of health data, differs from medical informatics in its use of information technology . [ 5 ]
Individuals are the origin of all health data, yet the most direct, if often overlooked, source is the informal personal collection of data. Examples include an individual checking off on a personal calendar that they have taken their medication, or an individual tallying the amount of sleep they have gotten over the last week.
Prior to recent technological advances, most health data were collected within health care systems. As individuals move through health care systems, they interact with health care providers and this interaction produces health information. These touch points include clinics/physician offices, pharmacies, payers/insurance companies, hospitals, laboratories, and senior homes. Information is also collected through participation in clinical trials, health agency surveys, medical devices, and genomic testing. This information, once recorded, becomes health data. This data typically includes a record of services received, conditions of those services, and clinical outcomes consequent of those services. [ 2 ] For example, a blood draw may be a service received, a white blood cell count may be a condition of that service, and a reported measurement of white blood cells may be an outcome of that service. Information also frequently collected and found in medical records includes administrative and billing data, patient demographic information, progress notes, vital signs, medications, diagnoses, immunization dates, allergies, and lab results. [ 6 ]
Recent advances in health information technology have expanded the scope of health data. Advances in health information technology have fostered the eHealth paradigm, which has expanded the collection, use, and philosophy of health data. EHealth, a term coined in the health information technology industry, [ 7 ] has been described in academia as
an emerging field [at] the intersection of medical informatics, public health and business, referring to health services and information delivered or enhanced through the Internet and related technologies. In a broader sense, the term characterizes not only a technical development, but also a state-of-mind, a way of thinking, an attitude, and a commitment for networked, global thinking, to improve health care ... using information and communication technology. [ 7 ]
From the confluence of eHealth and mobile technology emerged mHealth , which is considered a subsector of eHealth. [ 8 ] mHealth has been defined as
medical and public health practice supported by mobile devices ... . mHealth involves the use and capitalization on a mobile phone's core utility of voice and short messaging service (SMS) as well as more complex functionalities and applications including general packet radio service (GPRS), third and fourth generation mobile telecommunications (3G and 4G systems), global positioning system (GPS), and Bluetooth technology. [ 8 ]
The emergence of eHealth and mHealth have expanded the definition of health data by creating new opportunities for patient-generated health data (PGHD). [ 9 ] PGHD has been defined as "health-related data—including health history, symptoms, biometric data, treatment history, lifestyle choices, and other information—created, recorded, gathered, or inferred by or from patients or their designees ... to help address a health concern." [ 9 ] MHealth allows patients to monitor and report PGHD outside of a clinical setting. For example, a patient could use a blood monitor interfaced with their smartphone to track and distribute PGHD.
PGHD, mHealth, eHealth, and other technological development such as telemedicine, constitute a new digital health paradigm. Digital health describes a patient-centric health care system in which patients manage their own health and wellness with new technologies that will gather and assess their data. [ 10 ]
Data has become increasingly valuable in the 21st century and new economies have been shaped by who controls it [ 11 ] —health data and the health care industry are unlikely to be an exception. An increase in PGHD has led some experts to envision a future in which patients have greater influence over the health care system. [ 12 ] Patients may use their leverage as data producers to demand more transparency, open science , clearer data use consent, more patient engagement in research, development, and delivery, and greater access to research outcomes. [ 12 ] [ 13 ] Put another way, it is foreseeable that "health care will be owned, operated, and driven by consumers." [ 12 ] Moreover, some large technology companies have entered the PGHD space. One example is Apple's ResearchKit. These companies may use their newfound PGHD leverage to enter and disrupt the health care market. [ 12 ]
Health data can be used to benefit individuals, public health, and medical research and development. [ 14 ] The uses of health data are classified as either primary or secondary. Primary use is when health data is used to deliver health care to the individual from whom it was collected. [ 15 ] Secondary use is when health data is used outside of health care delivery for that individual. [ 15 ]
Digitization and health information technology have expanded the primary and secondary uses of health data. Over the last decade the U.S. health care system widely adopted electronic health records (EHRs)—an inevitable shift given EHR benefits over paper systems. [ 16 ] [ 17 ] EHRs have expanded the secondary uses of health data for quality assurance , clinical research , medical research and development, public health , and big data health analytics , among other fields. [ 18 ] [ 19 ] [ 20 ] [ 14 ] Personal health records (PHRs), while less popular than EHRs, [ 21 ] have expanded the primary uses of health data. PHRs can incorporate both patient- and provider-reported health data, but are managed by patients. [ 21 ] While a PHR system can be standalone, integrated EHR-PHR systems are considered the most beneficial. [ 21 ] Integrated EHR-PHR systems expand the primary uses of health data by giving individuals greater access to their health data—which can help them monitor, evaluate, and improve their own health. [ 21 ] This is an important aspect of the digital health paradigm.
In the United States, prior to the Health Insurance Portability and Accountability Act (HIPAA) of 1996, there were no comprehensive federal policies that regulated the security or privacy of health data. [ 22 ] HIPAA regulates the use and disclosure of protected health information (PHI) by specified entities, including health providers, health care clearinghouses, and health plans. [ 22 ] HIPAA implementation, delayed by federal-level negotiations, became broadly effective in 2003. [ 22 ]
While HIPAA established health data security and privacy in the U.S., gaps in protection persisted. The emergence of new health information technologies exacerbated these gaps. [ 22 ] [ 23 ] In 2009, the Health Information Technology for Economic and Clinical Health (HITECH) Act was passed. The legislation aimed to close the existing gaps in HIPAA by expanding HIPAA regulations to more entities, including business associates and subcontractors that store health data. [ 22 ] In 2013, an Omnibus Rule implementing final provisions of HITECH was issued by the U.S. Department of Health and Human Services . [ 22 ]
Despite these legislative amendments, security and privacy concerns continue to persist as healthcare technologies advance and grow in popularity. [ 24 ] In 2018, Social Indicators Research published evidence that 173,398,820 (over 173 million) individuals in the United States had been affected by breaches of health data between October 2008, when the data were first collected, and September 2017, when the statistical analysis took place. [ 25 ]
There are important ethical considerations for the collection and secondary use of health data. While discussions on the ethical collection and use of health data typically focus on research, it is important not to overlook potential data misuse by non-research organizations. [ 26 ] It has been argued that the collection and use of health data for any non-clinical purpose, "is ethically sound only if there is (or could reasonably arise) a question to be answered; the methodology (design, data collected, etc) will answer the question; and the costs, including both communal health care resources and any risks and burden imposed on the participants, justify the benefits to society." [ 26 ]
Many public health experts have argued that large-scale collection of health data may be the best way to analyse health information. [ 27 ] However, the data-driven approach has also raised concerns among privacy advocates, who worry about how the collected information will be used. Privacy advocates have long argued for increased protection of personal health information, fearing that marketers, data bundlers or even hackers could sell or divulge the information, possibly affecting people's jobs and credit or leading to identity theft . There are many issues to consider, including questions about preemption, enforcement mechanisms, regulatory structure, civil rights implications, law enforcement access and algorithmic accountability.
There are important and growing opportunities to use health data for improving healthcare quality, surveillance, health system management and research. It is essential to leverage such potential while managing possible risks related to the misuse of personal data. In order to achieve that, appropriate governance frameworks are needed.
At the global level, a strategy on digital health was drafted at the 71st World Health Assembly, in May 2018, in close consultation with Member States and with inputs from stakeholders. [ 28 ] The document identifies four priority strategic objectives, emphasising the importance of the transfer of knowledge amongst member states. The framework for action also proposes the creation of an international convening mechanism for the validation of artificial intelligence and digital health solutions. This mechanism will enshrine the value of health data and associated digital health products as a global public health good and call for action to safeguard the anonymity of health data providers, mitigate challenges and ensure universal access to digital health products and technology. [ 28 ]
In Europe, a multi-stakeholder collaboration has started, aiming to harmonise clinical data and develop a 21st-century ecosystem for real-world health research in the region. [ 29 ] The European Health Data & Evidence Network (EHDEN) is building a data network to perform fast, scalable and highly reproducible research. According to their website, the goal is to standardise 100 million patient records across Europe from different geographic areas and data source types, such as hospital data, registries and population databases. [ 30 ] | https://en.wikipedia.org/wiki/Health_data |
Health ecology (also known as eco-health ) is an emerging field that studies the impact of ecosystems on human health . It examines alterations in the biological , physical , social , and economic environments to understand how these changes affect mental and physical human health. Health ecology focuses on a transdisciplinary approach to understanding all the factors which influence an individual's physiological , social, and emotional well-being.
Eco-health studies often involve environmental pollution. Some examples include an increase in asthma rates due to air pollution, or PCB contamination of game fish in the Great Lakes of the United States. However, health ecology is not necessarily tied to environmental pollution. For example, research has shown that habitat fragmentation is the main factor that contributes to increased rates of Lyme disease in human populations.
Ecosystem approaches to public health emerged as a defined field of inquiry and application in the 1990s, primarily through global research supported by the International Development Research Centre (IDRC) in Ottawa, Canada (Lebel, 2003). However, this was a resurrection of an approach to health and ecology traced back to Hippocrates in Western societies. It can also be traced back to earlier eras in Eastern societies. The approach remained popular among scientists in earlier centuries. However, it fell out of common practice in the twentieth century, when technical professionalism and expertise were assumed sufficient to manage health and disease. In this relatively brief era, evaluating the adverse impacts of environmental change (both the natural and artificial environment) on human health was assigned to medicine and environmental health. [ citation needed ]
Integrated approaches to health and ecology re-emerged later in the 20th century. These movements were built on a foundation laid by earlier scholars, including Hippocrates, Rudolf Virchow , and Louis Pasteur . In the 20th century, Calvin Schwabe coined the term "one medicine," recognizing that human and veterinary medicine share similar biological principles and are interrelated. This one medicine approach, which had fairly clinical and individualistic connotations, was rebranded as "One Health" to reflect its goals of global human and animal health. [ 1 ] Other integrated health approaches include ecological resilience , ecological integrity, and healthy communities. [ citation needed ]
Eco-health approaches, as currently practiced, are participatory, systems-based approaches to understanding and promoting public health and well-being in the context of social and ecological interactions. These approaches are differentiated from previous public health approaches by a firm grounding in complexity theory and post-normal science (Waltner-Toews, 2004; Waltner-Toews et al., 2008).
After a decade of international conferences in North America and Australia under the more contentious umbrella of " ecosystem health ," the first "ecosystem approach to human health" (eco-health) forum was held in Montreal in 2003, followed by conferences and forums in Wisconsin, U.S ., and Mérida , Mexico, all with major support from the IDRC. Since then, the International Association for Ecology and Health, and the journal Eco Health , have established the field as a legitimate scholarly and development activity. [ citation needed ]
Eco-health studies differ from traditional, single-discipline studies, which focus on one aspect of a complex issue. A traditional epidemiological study may show increasing rates of malaria in a region but not address the reasons for the increase; an environmental health study may recommend the application of a pesticide in specific amounts in certain areas to reduce spread; an economic analysis may calculate the cost and effectiveness of such a program. Alternatively, an eco-health study combines multiple disciplines and familiarizes the specialists with the affected community. Through pre-study meetings, the group shares its knowledge and develops a common understanding. These pre-study meetings often lead to creative and novel approaches and can lead to a more "socially robust" solution. Eco-health practitioners term this synergy " transdisciplinary " and differentiate it from multidisciplinary studies. Eco-health studies also value the participation of all active groups, including stakeholders and decision-makers. They believe issues of equity (between genders, socioeconomic classes, ages, and even species) are essential to completely understand and solve the problem. Lebel (2003) identified transdisciplinarity, participation, and equity as the three pillars of eco-health. The IDRC now defines six principles instead of three pillars: transdisciplinarity, participation, gender and social equity, systems thinking, sustainability, and research-to-action (Charron, 2011). [ 2 ]
A practical example of health ecology is the management of malaria in Mexico. A multidisciplinary approach ended the use of harmful DDT while reducing malaria cases. This study reveals the complex nature of these problems, and the extent to which a successful solution must cross research disciplines. The solution involved creative thinking on the part of many individuals and produced a win-win situation for researchers, businesses, and, most importantly, the community. Although many of the dramatic effects of ecosystem change, and much of the research, are focused on developing countries, the ecosystem of the artificial environment in urban areas of the developed world is also a significant determinant of human health. Obesity, diabetes , asthma, and heart disease are all directly tied to environmental factors. In addition, urban design and planning determine automobile use, available food choices, air pollution levels, and the safety and walkability of the neighborhoods in which people live. [ citation needed ] | https://en.wikipedia.org/wiki/Health_ecology |
Bisphenol A controversy centers on concerns and debates about the biomedical significance of bisphenol A (BPA), which is a precursor to polymers that are used in some consumer products, including some food containers. The concerns began with the hypothesis that BPA is an endocrine disruptor , i.e. it mimics endocrine hormones and thus has the unintended and possibly far-reaching effects on people in physical contact with the chemical.
Since 2008, several governments have investigated its safety, which prompted some retailers to withdraw polycarbonate products. The U.S. Food and Drug Administration (FDA) ended its authorization of the use of BPA in baby bottles and infant formula packaging, based on market abandonment, not safety. [ 1 ] The European Union and Canada have banned BPA use in baby bottles.
The U.S. FDA states "BPA is safe at the current levels occurring in foods" based on extensive research, including two more studies issued by the agency in early 2014. [ 2 ] The European Food Safety Authority (EFSA) reviewed new scientific information on BPA in 2008, 2009, 2010, 2011 and 2015: EFSA's experts concluded on each occasion that they could not identify any new evidence which would lead them to revise their opinion that the known level of exposure to BPA is safe; however, the EFSA does recognize some uncertainties, and will continue to investigate them. [ 3 ]
In February 2016, France announced that it intends to propose BPA as a REACH Regulation candidate substance of very high concern (SVHC). [ 4 ] The European Chemicals Agency agreed to the proposal in June 2017. [ 5 ]
The BPA controversy has gained momentum because of the quantity of BPA produced by the chemical industry. World production capacity of BPA was 1 million tons in the 1980s, [ 6 ] and more than 2.2 million tons in 2009. [ 7 ] It is a high production volume chemical . In 2003, U.S. consumption was 856,000 tons, 72% of which was used to make polycarbonate plastic and 21% of which went into epoxy resins. [ 8 ] In the U.S., less than 5% of the BPA produced is used in food contact applications, [ 9 ] but it remains in use in the canned food industry and in printing applications such as sales receipts. [ 10 ] [ 11 ] On 20 February 2018, Packaging Digest reported that "At least 90%" of food cans no longer contained BPA. [ 12 ]
BPA is rarely encountered as the free compound in industrial products: it is almost invariably bound in a polymeric structure. Concerns about exposure therefore focus on the degradation, mainly by hydrolysis, of these polymers and of the plastic objects derived from them.
Polycarbonate plastic, which is formed from BPA, is used to make a variety of common products including baby and water bottles, sports equipment, medical and dental devices, dental fillings sealants, CDs and DVDs, household electronics, eyeglass lenses, [ 6 ] foundry castings , and the lining of water pipes. [ 9 ]
BPA is also used in the synthesis of polysulfones and polyether ketones , as an antioxidant in some plasticizers , and as a polymerization inhibitor in PVC . Epoxy resins derived from bisphenol A are used as coatings on the inside of almost all food and beverage cans ; [ 13 ] however, due to BPA health concerns, in Japan epoxy coating was mostly replaced by PET film . [ 14 ]
Bisphenol A is a preferred color developer in carbonless copy paper and thermal point-of-sale receipt paper. [ 15 ] [ 16 ] When used in thermal paper, BPA is present as "free" (i.e., discrete, non-polymerized) BPA, which is likely to be more available for exposure than BPA polymerized into a resin or plastic. Upon handling, BPA in thermal paper can be transferred to skin, and there is some concern that residues on hands could be ingested through incidental hand-to-mouth contact. Furthermore, some studies suggest that dermal absorption may contribute a small fraction to the overall human exposure. European data indicate that the use of BPA in paper may also contribute to the presence of BPA in the stream of recycled paper and in landfills. Although there are no estimates for the amount of BPA used in thermal paper in the United States, in Western Europe the volume of BPA reported to be used in thermal paper in 2005/2006 was 1,890 tonnes per year, while total production was estimated at 1,150,000 tonnes per year. (Figures taken from a 2012 EPA draft paper.) [ 17 ] [ 18 ] Studies document potential spreading and accumulation of BPA in paper recycling, suggesting its presence in the paper recycling loop for decades even after a hypothetical ban. [ 19 ] Epoxy resin may or may not contain BPA, and is employed to bind gutta percha in some root canal procedures. [ 20 ]
In the early 1930s, the British biochemist Edward Charles Dodds tested BPA as an artificial estrogen, but found it to be 37,000 times less effective than estradiol. [ 21 ] [ 22 ] [ 23 ] Dodds eventually developed a structurally similar compound, [ citation needed ] diethylstilbestrol (DES), which was used as a synthetic estrogen drug in women and animals until it was banned due to its risk of causing cancer; the ban on use of DES in humans came in 1971 and in animals in 1979. [ 21 ] BPA was never used as a drug. [ 21 ] BPA's ability to mimic the effects of natural estrogen derives from the similarity of phenol groups on both BPA and estradiol , which enable this synthetic molecule to trigger estrogenic pathways in the body. [ 24 ] Phenol-containing molecules similar to BPA typically exert weak estrogenic activity; thus BPA is also considered an endocrine disruptor (ED) and an estrogenic chemical. [ 25 ] BPA is additionally classified as a xenoestrogen because of its ability to disrupt the signaling network that controls reproductive development in humans and animals. [ 26 ]
In 1997, adverse effects of low-dose BPA exposure in laboratory animals were first proposed. [ 13 ] Modern studies began finding possible connections to health issues caused by exposure to BPA during pregnancy and during development. See Public health regulatory history in the United States and Chemical manufacturers' reactions to bans . As of 2014, research and debates are ongoing as to whether BPA should be banned or not. [ citation needed ]
A 2007 study investigated the interaction between bisphenol A and estrogen-related receptor γ (ERR-γ). This orphan receptor (endogenous ligand unknown) behaves as a constitutive activator of transcription. BPA seems to bind strongly to ERR-γ ( dissociation constant = 5.5 nM), but only weakly to the ER. [ 28 ] BPA binding to ERR-γ preserves its basal constitutive activity. [ 28 ] It can also protect it from deactivation by the SERM 4-hydroxytamoxifen (afimoxifene). [ 28 ] This may be the mechanism by which BPA acts as a xenoestrogen . [ 28 ] Different expression of ERR-γ in different parts of the body may account for variations in bisphenol A effects. For instance, ERR-γ has been found in high concentration in the placenta , explaining reports of high bisphenol accumulation in this tissue. [ 27 ] BPA has also been found to act as an agonist of the GPER (GPR30). [ 29 ]
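As a rough illustration of what a dissociation constant of 5.5 nM implies, the standard one-site binding relation gives the fraction of receptors occupied at a given free ligand concentration. The relation and the worked value below are a general sketch for illustration, not results from the cited study.

```latex
% Fractional occupancy for simple one-site binding; K_d is the reported
% dissociation constant for BPA at ERR-\gamma.
\theta \;=\; \frac{[\mathrm{BPA}]}{[\mathrm{BPA}] + K_d},
\qquad K_d = 5.5\ \mathrm{nM}
% Example: when the free BPA concentration equals K_d,
% half of the binding sites are occupied:
% [\mathrm{BPA}] = 5.5\ \mathrm{nM} \;\Rightarrow\; \theta = 0.5
```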
In 2017 the European Chemicals Agency concluded that BPA should be listed as a substance of very high concern due to its properties as an endocrine disruptor . [ 30 ] In 2023, the European Food Safety Authority re-evaluated the safety of BPA and significantly reduced the tolerable daily intake (TDI) to 0.2 nanograms (0.2 billionths of a gram) per kilogram of body weight per day, 20,000 times lower than the previous TDI. The European Food Safety Authority concluded that consumers with both average and high exposure to BPA in all age groups exceeded the new TDI, indicating health concerns. [ 3 ]
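The factor of 20,000 can be checked against the earlier EFSA temporary TDI of 4 micrograms per kilogram of body weight per day (set in 2015); the unit conversion below assumes that prior value.

```latex
\frac{4\ \mu\mathrm{g\,kg^{-1}\,day^{-1}}}{0.2\ \mathrm{ng\,kg^{-1}\,day^{-1}}}
\;=\; \frac{4000\ \mathrm{ng}}{0.2\ \mathrm{ng}}
\;=\; 20\,000
```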
In 2012, the United States' Food and Drug Administration (FDA) banned the use of BPA in baby bottles intended for children under 12 months. [ 31 ] The Natural Resources Defense Council called the move inadequate, saying the FDA needed to ban BPA from all food packaging. [ 32 ] The FDA maintains that the agency continues to support the safety of BPA for use in products that hold food. [ 31 ]
In 2011, Andrew Wadge, the chief scientist of the United Kingdom's Food Standards Agency , commented on a 2011 U.S. study on dietary exposure of adult humans to BPA, [ 33 ] saying, "This corroborates other independent studies and adds to the evidence that BPA is rapidly absorbed, detoxified, and eliminated from humans – therefore is not a health concern." [ 34 ]
The Endocrine Society said in 2015 that the results of ongoing laboratory research gave grounds for concern about the potential hazards of endocrine-disrupting chemicals – including BPA – in the environment, and that on the basis of the precautionary principle these substances should continue to be assessed and tightly regulated. [ 35 ] A 2016 review of the literature said that the potential harms caused by BPA were a topic of scientific debate and that further investigation was a priority because of the association between BPA exposure and adverse human health effects including reproductive and developmental effects and metabolic disease. [ 36 ]
In 2007, the U.S. federal government invited experts to Chapel Hill, North Carolina to perform a scientific assessment of literature on BPA. [ 37 ] Thirty-eight experts in fields involved with bisphenol A gathered in Chapel Hill, North Carolina to review several hundred studies on BPA, many conducted by members of the group. At the end of the meeting, the group issued the Chapel Hill Consensus Statement, [ 38 ] which stated "BPA at concentrations found in the human body is associated with organizational changes in the prostate, breast, testis, mammary glands, body size, brain structure and chemistry, and behavior of laboratory animals." [ 21 ]
The Chapel Hill Consensus Statement stated that average BPA levels in people were above those that cause harm to many animals in laboratory experiments. It noted that while BPA is not persistent in the environment or in humans, biomonitoring surveys indicate that exposure is continuous. This is problematic because acute animal exposure studies are used to estimate daily human exposure to BPA, and no studies that had examined BPA pharmacokinetics in animal models had followed continuous low-level exposures.
The authors added that measurement of BPA levels in serum and other body fluids suggests the possibilities that BPA intake is much higher than accounted for or that BPA can bioaccumulate in some conditions (such as pregnancy). [ 38 ] Following the Chapel Hill Statement, the US National Toxicology Program – Center for the Evaluation of Risks to Human Reproduction (NTP – CERHR), sponsored another literature assessment. The report, released in 2008, noted that "the possibility that bisphenol A may alter human development cannot be dismissed". [ 21 ]
Despite this report, the US Food and Drug Administration (FDA) BPA Task Force (formed in April 2008), concluded that products containing BPA were safe. [ 39 ] In 2009, the FDA Science Board Subcommittee on Bisphenol A, an external committee assigned to review the FDA's report "concluded that the FDA failed to conduct a rigorous or extensive exposure assessment", leading the US Environmental Protection Agency (EPA) to conduct their own assessment. [ 21 ]
The United States Federal Interagency Working Group (FIW) included a goal to reduce BPA exposure in the 2 December 2010 release of their 2020 Healthy People national objectives for improving the health of all Americans. [ 40 ]
Numerous animal studies have demonstrated an association between endocrine disrupting chemicals (including BPA) and obesity. [ 41 ] [ 42 ] However, the relationship between bisphenol A exposure and obesity in humans is unclear. [ 43 ] Cohort studies have shown an association between prenatal BPA exposure and increased body fat percentage at age 7 and increased BMI by age 9. [ 44 ] Not all studies have shown a positive relationship between BPA exposure and obesity; further studies on the effects of BPA on metabolic diseases need to take diet into consideration to remove any influence it might have on the outcome. [ 44 ] Proposed mechanisms for BPA exposure to increase the risk of obesity include BPA-induced thyroid dysfunction, activation of the PPAR-gamma receptor, and disruption of neural circuits that regulate feeding behavior. [ 43 ] [ 45 ] BPA works by imitating the natural hormone 17β-estradiol . In the past BPA was considered a weak mimicker of estrogen, but newer evidence indicates that it is a potent mimicker. [ 46 ] When it binds to estrogen receptors it triggers alternative estrogenic effects that begin outside of the nucleus. This different path induced by BPA has been shown to alter glucose and lipid metabolism in animal studies. [ 47 ]
There are different effects of BPA exposure during different stages of development. During adulthood, BPA exposure modifies insulin sensitivity and insulin release without affecting weight. [ 48 ]
A 2007 review concluded that bisphenol-A has been shown to bind to the thyroid hormone receptor and may have selective effects on its functions. [ 49 ]
A 2009 review about environmental chemicals and thyroid function raised concerns about BPA effects on triiodothyronine and concluded that "available evidence suggests that governing agencies need to regulate the use of thyroid-disrupting chemicals, particularly as such uses relate exposures of pregnant women, neonates and small children to the agents". [ 50 ]
A 2009 review summarized BPA adverse effects on thyroid hormone action. [ 51 ]
A 2016 case control study found a significant association between urinary BPA levels and increased levels of TSH (thyroid-stimulating hormone) in a group of adult women. [ 52 ]
Limited epidemiological evidence suggests that exposure to BPA in the uterus and during childhood is associated with poor behavioral outcomes in humans. Exposure may be associated with higher levels of anxiety, depression, hyperactivity, and aggression in children. [ 53 ] A panel convened by the National Toxicology Program (NTP) of the U.S. National Institutes of Health determined that there was "some concern" about BPA's effects on fetal and infant brain development and behavior. [ 8 ] [ 54 ] In January 2010, based on the NTP report, the FDA expressed the same level of concern. [ 55 ] [ 56 ]
A 2007 literature review concluded that BPA, like other chemicals that mimic estrogen (xenoestrogens), should be considered as a player within the nervous system that can regulate or alter its functions through multiple pathways. [ 57 ] A 2008 review of animal research found that low-dose BPA maternal exposure can cause long-term consequences for the neurobehavioral development in mice. [ 58 ]
A 2009 review raised concerns about a BPA effect on the anteroventral periventricular nucleus . [ 60 ]
A 2008 review of human participants has concluded that BPA mimics estrogenic activity and affects various dopaminergic processes to enhance mesolimbic dopamine activity resulting in hyperactivity, attention deficits, and a heightened sensitivity to drugs of abuse. [ 61 ]
According to the WHO's INFOSAN, carcinogenicity studies conducted under the U.S. National Toxicology Program have shown increases in leukemia and testicular interstitial cell tumors in male rats. However, according to the note, "these studies have not been considered as convincing evidence of a potential cancer risk because of the doubtful statistical significance of the small differences in incidences from controls." [ 62 ]
A 2010 review concluded that bisphenol A may increase cancer risk. [ 63 ] Several studies provide evidence that prostate cancer incidence in men is positively associated with BPA exposure. Male subjects diagnosed with prostate cancer were found to have higher urinary concentrations of BPA than those of the control group. This correlation may be due to BPA's ability to induce proliferation of prostate cancer cells. [ 64 ] [ 65 ]
Higher susceptibility to breast cancer has been found in many studies of rodents and primates exposed to BPA. [ 67 ] However, the impact of BPA on breast cancer development in humans is unclear, as it is difficult to quantify an individual's BPA exposure over their lifetime. [ 67 ] BPA, which contains a phenolic structure, has shown agonist and antagonist activity at endocrine receptors, activity that has been associated with endocrine disorders such as breast and prostate cancer. Other associated endocrine disorders include infertility, polycystic ovary syndrome, and precocious puberty. [ 68 ] [ 69 ]
Several in vitro studies have found that oxidative stress in breast cancer cells increases in proportion to BPA exposure. [ 65 ] Additionally, studies of work-related exposure to BPA and of postmenopausal women have suggested an increase in breast cancer incidence. [ 70 ] [ 71 ]
BPA is an endocrine disruptor, meaning that it has a structure similar to oestrogen (the natural ligand) and can bind to and activate the oestrogen receptors ERα and ERβ. [ 72 ]
Oestrogen is hydrophobic and is able to diffuse through the plasma membrane and into the target cell. Oestradiol binding to the oestrogen receptor releases the heat shock protein from the ligand binding domain of the receptor causing dimerization. [ 73 ] The nuclear localisation signal targets the ligand-receptor complex to the nucleus where it can bind oestrogen response elements within the promoter of target genes on DNA. Subsequently, various cofactors are recruited allowing transcription of genes including those involved in cell proliferation. [ 74 ]
When BPA is exposed to high temperatures or changes in pH, the ester bonds linking BPA monomers are hydrolysed. Free BPA then competes with oestrogen for ERα and ERβ binding sites. When BPA successfully binds the receptor, it interacts with oestrogen response elements and increases expression of target genes such as WNT-4 and RANKL, two key players in stem cell proliferation and carcinogenesis. BPA has also been shown to inactivate p53, a protein that prevents tumour formation by triggering apoptosis. [ citation needed ]
As of 2022, evidence shows a possible association between BPA levels and lower sperm quality, decreased motility, and increased sperm immaturity. [ 67 ] [ 75 ] [ 76 ] There is tentative evidence to support the idea that BPA exposure has negative effects on human fertility. [ 67 ] Few studies have investigated whether recurrent miscarriage is associated with BPA levels. [ 77 ] [ 67 ] Exposure to BPA does not appear to be linked with higher rates of endometrial hyperplasia . [ 62 ] [ 67 ] A 2009 cohort study of women undergoing IVF egg retrieval found an inverse correlation between urinary BPA concentration and oocyte release; the study also found that for each unit increase in day 3 FSH (IU/L), there was an average decrease of 9% in the number of oocytes retrieved. [ 77 ] The positive correlations found in animal studies [ 78 ] warrant continued research on BPA and couple fecundity.
BPA is ubiquitous in the environment through consumer products such as reusable plastics, food and beverage container liners, baby bottles, and water-resistant clothing. It has been identified as an endocrine-disrupting chemical and has been found in urine, blood, amniotic fluid, breast milk, and cord blood. A comparison of blood BPA and phthalate levels between fertile and infertile women aged 20–40, using gas chromatography–mass spectrometry to analyze the amounts of BPA, phthalates, and their metabolites in peripheral venous blood, showed significantly elevated serum BPA levels in infertile women, as well as in women with PCOS (polycystic ovarian syndrome) and women with endometriosis. [ 79 ]
BPA has been shown to have transgenerational effects by targeting ovarian function through changes in the structural integrity of the microtubules that constitute meiotic spindles. BPA contaminants passing through amniotic fluid can alter steroidogenesis during fetal development, which can result in oocyte maturation failure and impaired fertility. [ 80 ] This in turn can produce transgenerational effects that reach the third generation of offspring. [ 81 ]
Higher BPA exposure has been associated with increased self-reporting of decreased male sexual function but few studies examining this relationship have been conducted. [ 67 ]
Studies in mice have found a link between BPA exposure and asthma; a 2010 study on mice concluded that perinatal exposure to 10 μg/mL of BPA in drinking water enhances allergic sensitization and bronchial inflammation and responsiveness in an animal model of asthma . [ 82 ] [ 83 ] A study published in JAMA Pediatrics found that prenatal exposure to BPA is also linked to lower lung capacity in some young children. The study followed 398 mother-infant pairs and analyzed their urine samples to detect concentrations of BPA. It found that every 10-fold increase in BPA was tied to a 55% increase in the odds of wheezing. Higher concentrations of BPA during pregnancy were linked to decreased lung capacity in children under four years old, but the link disappeared at age 5. An associate professor of pediatrics at the University of Maryland School of Medicine said, "Exposure during pregnancy, not after, appears to be the critical time for BPA, possibly because it's affecting important pathways that help the lung develop." [ 84 ]
In 2013, research from scientists at the Columbia Center for Children's Environmental Health also found a link between the compound and an increased risk for asthma. The research team reported that children with higher levels of BPA at ages 3, 5 and 7 had increased odds of developing asthma when they were between the ages of 5 and 12. The children in this study had about the same concentration of BPA exposure as the average U.S. child. Dr. Kathleen Donohue, an instructor at Columbia University Medical Center, said, "they saw an increased risk of asthma at fairly routine, low doses of BPA." [ 85 ] Kim Harley, who studies environmental chemicals and children's health, commented in Scientific American that while the study does not show that BPA causes asthma or wheezing, "it's an important study because we don't know a lot right now about how BPA affects immune response and asthma...They measured BPA at different ages, measured asthma and wheeze at multiple points, and still found consistent associations." [ 82 ]
The first evidence of the estrogenicity of bisphenol A came from experiments on rats conducted in the 1930s, [ 22 ] [ 23 ] but it was not until 1997 that adverse effects of low-dose exposure on laboratory animals were first reported. [ 13 ] Bisphenol A is an endocrine disruptor that can mimic estrogen and has been shown to cause negative health effects in animal studies. Bisphenol A closely mimics the structure and function of the hormone estradiol by binding to and activating the same estrogen receptor as the natural hormone. [ 72 ] [ 86 ] [ 87 ] [ 88 ] [ 89 ] Early developmental stages appear to be the period of greatest sensitivity to its effects. [ 90 ]
A study from 2008 concluded that blood levels of bisphenol A in neonatal mice are the same whether it is injected or ingested. [ 91 ] The current U.S. human exposure limit set by the EPA is 50 μg/kg/day. [ 92 ] In a 2010 commentary, a group of scientists criticized a study of low-dose BPA exposure published in Toxicological Sciences [ 93 ] and a later editorial in the same journal, [ 94 ] claiming that the rats used in the study were insensitive to estrogen and that the study had other problems, such as the use of BPA-containing polycarbonate cages, [ 95 ] while the authors disagreed. [ 96 ]
In 2010, the U.S. Environmental Protection Agency reported that over one million pounds of BPA are released into the environment annually. [ 97 ] BPA can be released into the environment by both pre-consumer and post-consumer leaching. Common pre-consumer routes of introduction into the environment are directly from chemical, plastics, coating, and staining manufacturers, foundries that use BPA in casting sand, or transport of BPA and BPA-containing products . [ 98 ] [ 99 ] Post-consumer BPA waste comes from effluent discharge from municipal wastewater treatment plants, irrigation pipes used in agriculture, ocean-borne plastic trash, indirect leaching from plastic, paper, and metal waste in landfills, and paper or material recycling companies. [ 98 ] [ 99 ] [ 100 ] Despite a rapid soil and water half-life of 4.5 days, and an air half-life of less than one day, BPA's ubiquity makes it an important pollutant . BPA has a low rate of evaporation from water and soil, which presents issues despite its biodegradability and low concern for bio-accumulation. BPA has low volatility in the atmosphere and a low vapor pressure between 5.00 and 5.32 Pascals. BPA has a high water solubility of about 120 mg/L, and most of its reactions in the environment are aqueous . BPA dust is flammable if ignited, but it has a minimal explosive concentration in air. [ 101 ] Also, in aqueous solutions, BPA has shown absorption of wavelengths greater than 250 nm. [ 102 ]
The ubiquitous nature of BPA makes the compound an important pollutant to study, as it has been shown to interfere with nitrogen fixation at the roots of leguminous plants associated with the bacterial symbiont Sinorhizobium meliloti . [ 103 ] A 2013 study also observed changes in plant health due to BPA exposure. The study exposed soybean seedlings to various concentrations of BPA and saw changes in root growth, nitrate production, ammonium production, and the activities of nitrate reductase and nitrite reductase . At low doses of BPA, root growth improved, the amount of nitrate in roots increased, the amount of ammonium in roots decreased, and the nitrate and nitrite reductase activities remained unchanged. However, at considerably higher concentrations of BPA, the opposite effects were seen for all but an increase in nitrate concentration and a decrease in nitrite and nitrate reductase activities. [ 104 ] Nitrogen is both a plant nutrient and the basis of growth and development in plants. Changing concentrations of BPA can therefore be harmful to the ecology of an ecosystem, as well as to humans if the plants are grown for consumption.
The amount of BPA adsorbed on sediment was also seen to decrease with increases in temperature, as demonstrated by a 2006 study with various plants from the XiangJiang River in Central-South China. In general, as temperature increases, the water solubility of a compound increases. Therefore, the amount of sorbate that enters the solid phase will be lower at the equilibrium point . It was also observed that the adsorption process of BPA on sediment is exothermic: the standard enthalpy change, ΔH° , was negative, the standard free energy change, ΔG° , was negative, and the standard entropy change, ΔS° , was positive. This indicates that the adsorption of BPA is driven by enthalpy. The adsorption of BPA has also been observed to decrease with increasing pH . [ 105 ]
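The reported signs of these quantities are consistent with the standard Gibbs relation; the identity below is a general thermodynamic relation offered for clarity, not study-specific data.

```latex
\Delta G^{\circ} \;=\; \Delta H^{\circ} \;-\; T\,\Delta S^{\circ}
% With \Delta H^{\circ} < 0 (exothermic) and \Delta S^{\circ} > 0,
% both terms make \Delta G^{\circ} negative, so adsorption is spontaneous;
% describing it as enthalpy-driven typically indicates that
% |\Delta H^{\circ}| exceeds the T\,\Delta S^{\circ} contribution.
```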
A 2005 study conducted in the United States found that 91–98% of BPA may be removed from water during treatment at municipal water treatment plants. [ 106 ] A more detailed explanation of the aqueous reactions of BPA is given in the Degradation of BPA section below. Nevertheless, a 2009 meta-analysis of BPA in surface waters showed BPA present in surface water and sediment in the United States and Europe. [ 107 ] According to Environment Canada in 2011, "BPA can currently be found in municipal wastewater. […] initial assessment shows that at low levels, bisphenol A can harm fish and organisms over time." [ 108 ]
BPA affects growth, reproduction, and development in aquatic organisms. Among freshwater organisms, fish appear to be the most sensitive species. Evidence of endocrine-related effects in fish, aquatic invertebrates, amphibians, and reptiles has been reported at environmentally relevant exposure levels lower than those required for acute toxicity. There is wide variation in reported values for endocrine-related effects, but many fall in the range of 1 μg/L to 1 mg/L. [ 9 ]
A 2009 review of the biological impacts of plasticizers on wildlife published by the Royal Society with a focus on aquatic and terrestrial annelids , molluscs , crustaceans , insects, fish and amphibians concluded that BPA affects reproduction in all studied animal groups, impairs development in crustaceans and amphibians and induces genetic aberrations. [ 109 ]
BPA is known as an endocrine-disrupting compound (EDC) and has major neurological effects on vertebrates. [ 110 ] Depending on the vertebrate species studied, the documented effects of ingestion of and exposure to BPA may differ. In species such as zebrafish, BPA affects the lateral line , which is crucial for sensory perception, [ 111 ] and may affect the expression of genes that control heart and skeletal muscle metabolism, as well as insulin secretion. [ 112 ] Aquatic vertebrates are especially affected by BPA in their reproduction. [ 113 ] In the broad-snouted caiman, Caiman latirostris, sex is normally determined by the temperature at which the egg is incubated. [ 112 ] In one study, caiman eggs were exposed to BPA: at about 1,000 μg/egg, all of the offspring were female, [ 112 ] whereas at a lower dose of about 90 μg/egg, the offspring produced were male. [ 112 ] These male offspring exhibited disrupted seminiferous tubules. [ 112 ] In mice, maternal diet has been studied and found to have a major effect on offspring exposed to BPA during certain developmental stages. [ 114 ] [ 115 ] There are no direct studies in humans; however, studies in other vertebrates suggest the potential for harm.
Bisphenol A (BPA) is an environmental contaminant that disrupts ecosystems, with the most profound effects observed in vertebrates. [ 116 ] BPA enters the environment largely through runoff from landfills, so it is mostly found in water. [ 116 ] Aquatic vertebrates are thus the most affected by this form of pollution. [ 112 ] After aquatic vertebrates take up BPA through their gills or skin, they are mainly affected at the cellular level, with changes to their estrogen levels. [ 112 ] BPA binds to estrogen receptors and has an antagonistic effect, [ 116 ] which means that it decreases the amount of estrogen produced. [ 112 ] Reproductive functions are regulated by gonadotropin-releasing hormone (GnRH), which helps with the maturation of the sex organs in both males and females. Another study found that immature barbels ( Barbus sp.) in a river containing traces of BPA expressed intersex characteristics. [ 112 ] They had gonads with oogonia, spermatogonia and spermatocytes. Researchers concluded that BPA did not induce, but did contribute to, these intersex morphological expressions. [ 112 ] After being exposed to 1 μg/L BPA, brown trout ( Salmo trutta ) had reduced sperm density and motility. [ 112 ] In fathead minnows ( Pimephales promelas ), sperm production was reduced. Exposure of both species to 2 μg/L and 5 μg/L of BPA resulted in delayed ovulation or no ovulation. [ 112 ]
A study exposed adult female rare minnows ( Gobiocypris rarus ) to 5 μg/L, 15 μg/L and 50 μg/L of BPA for 14 days and 35 days. [ 117 ] The group exposed to the highest concentration of BPA (50 μg/L) for 35 days showed suppressed oocyte development. [ 117 ] BPA also had a stimulatory effect on hepatic vitellogenin (VTG) transcription in all groups. [ 117 ] VTG is an indicator that the vertebrate has been exposed to environmental estrogens. The groups exposed to lower concentrations of BPA (5 μg/L and 15 μg/L) showed increased expression of ovarian steroidogenic genes, [ 117 ] whereas the group exposed to the highest concentration (50 μg/L) showed decreased expression of these genes. [ 117 ]
Although aquatic vertebrates are most commonly affected by BPA exposure in natural settings, researchers often learn how BPA affects other vertebrates using experimental mouse models. In a study conducted twenty years ago, an accidental BPA exposure in laboratory mice resulted in an increase in chromosomally abnormal eggs. [ 118 ] This led researchers to question what other effects BPA has on mammals. It showed that BPA leads to meiotic abnormalities, with effects on fertility and the maturation of sex organs. [ 118 ] Scientists began to realize that this type of exposure could lead to mutations and affect multiple generations. [ 118 ] Because of this, "BPA-free" products started to be made, but often by substituting BPS. [ 118 ] A study showed that exposure to BPS likewise increased mutations before zygotic development, indicating that it may be just as harmful as BPA. [ 118 ]
BPA has major effects on the behavior of vertebrates, especially in their sensory processing systems. In zebrafish, BPA can disrupt signaling in the endocrine system and affect auditory development and function. [ 111 ] Analogous to the human ear, zebrafish have a sensory organ called the lateral line that detects different forms of vibration. [ 111 ] The hair cells within the lateral line are very sensitive to the toxic effects of BPA and are often killed by it; fish are able to regrow hair cells, but BPA reduces their ability to regenerate them efficiently. [ 111 ] Fish without a fully functioning lateral line show behavioral changes such as a higher risk of predation, reduced prey detection, and possibly impaired reproduction. Unlike fish, mammals cannot regenerate these hair cells and therefore risk deafness if exposed directly. [ 111 ]
In addition to being an endocrine-disrupting compound (EDC), BPA has been found to inhibit nerve conduction. [ 110 ] In the sciatic nerve of the frog Rana tigrina, BPA inhibits the fast-conducting compound action potential (CAP). [ 110 ] Estrogen receptors in the plasma membrane of the sciatic nerve are affected by BPA and contribute to the inhibition of the CAP. [ 110 ] However, estrogen receptors are not the only mechanism of inhibition: BPA is also able to inhibit nerve function without acting through estrogen. [ 110 ]
A study in mice shows that BPA, as an EDC, acts as an agonist/antagonist with respect to behavioral effects. [ 115 ] BPA caused a decrease in exploratory and spatial behaviors in male mice that were exposed during development. [ 115 ] To expose the males, pregnant females were fed BPA in their food; these mice were compared with males whose mothers were fed a phytoestrogen-free control (CTL) diet. [ 115 ] Males exposed to BPA during development were less likely to be territorial when other male mice were present. [ 115 ] BPA exposure altered sex- and species-dependent behaviors. [ 115 ] These findings support the idea that BPA can affect sexually selected traits. Furthermore, maternal diet and developmental exposure to BPA may cause harm and lead to sexually dimorphic responses. [ 115 ]
In November 2009, the WHO announced that it would organize an expert consultation in 2010 to assess the health effects of low-dose BPA exposure, focusing on the nervous system, behavior, and exposure in young children. [ 62 ] The 2010 WHO expert panel recommended no new regulations limiting or banning the use of bisphenol A, stating that "initiation of public health measures would be premature." [ 119 ] [ 120 ]
In 2013, the FDA posted on its web site: "Is BPA safe? Yes. Based on FDA's ongoing safety review of scientific evidence, the available information continues to support the safety of BPA for the approved uses in food containers and packaging. People are exposed to low levels of BPA because, like many packaging components, very small amounts of BPA may migrate from the food packaging into foods or beverages." [ 121 ] In 2014, on the basis of three previous reviews by a group of assembled Agency experts, the FDA stated in its "Final report for the review of literature and data on BPA" that, in part, "The results of these new toxicity data and studies do not affect the dose-effect level and the existing NOAEL (5 mg/kg bw/day; oral exposure)." [ 122 ]
In 2009 the Australia and New Zealand food safety authority, Food Standards Australia New Zealand , saw no health risk from bisphenol A baby bottles provided the manufacturer's instructions were followed, as levels of exposure were very low. It added that "the move by overseas manufacturers to stop using BPA in baby bottles is a voluntary action and not the result of a specific action by regulators." [ 123 ] In 2008 it had suggested the use of glass baby bottles if parents had concerns. [ 124 ]
In 2012 the Australian Government introduced a voluntary phase out of BPA use in polycarbonate baby bottles. [ 125 ]
In April 2008, Health Canada concluded that, while adverse health effects were not expected, the margin of safety was too small for formula-fed infants [ 126 ] and proposed classifying the chemical as "'toxic' to human health and the environment." [ 127 ] The Canadian Minister of Health announced Canada's intent to ban the import, sale, and advertisement of polycarbonate baby bottles containing bisphenol A due to safety concerns, and investigate ways to reduce BPA contamination of baby formula packaged in metal cans. [ 90 ] Subsequent news reports from April 2008 showed many retailers removing polycarbonate drinking products from their shelves. [ 128 ] On 18 October 2008, Health Canada noted that "bisphenol A exposure to newborns and infants is below levels that cause effects" and that the "general public need not be concerned". [ 129 ]
In 2010, Canada's department of the environment declared BPA to be a "toxic substance" and added it to schedule 1 of the Canadian Environmental Protection Act, 1999 . [ 130 ]
The 2008 European Union Risk Assessment Report on bisphenol A , published by the European Commission and European Food Safety Authority (EFSA), concluded that bisphenol A -based products, such as polycarbonate plastic and epoxy resins, are safe for consumers and the environment when used as intended. [ 131 ] By October 2008, after the Lang Study was published, the EFSA issued a statement concluding that the study provided no grounds to revise the current Tolerable Daily Intake (TDI) level for BPA of 0.05 mg/kg bodyweight. [ 132 ]
On 22 December 2009, the EU Environment ministers released a statement expressing concerns over recent studies showing adverse effects of exposure to endocrine disruptors . [ 133 ]
In September 2010, the European Food Safety Authority (EFSA) concluded after a "comprehensive evaluation of recent toxicity data […] that no new study could be identified, which would call for a revision of the current TDI". [ 134 ] The Panel noted that some studies conducted on developing animals have suggested BPA-related effects of possible toxicological relevance, in particular biochemical changes in brain, immune-modulatory effects and enhanced susceptibility to breast tumours but considered that those studies had several shortcomings so the relevance of these findings for human health could not be assessed. [ 134 ]
On 25 November 2010, the European Union's executive commission said it planned to ban the manufacture of polycarbonate baby bottles containing the organic compound bisphenol A by 1 March 2011, and their marketing and placement on the market by 1 June 2011, according to John Dalli , commissioner in charge of health and consumer policy. This was backed by a majority of EU governments. [ 135 ] [ 136 ] The ban was called an over-reaction by Richard Sharpe of the Medical Research Council 's Human Reproductive Sciences Unit, who said he was unaware of any convincing evidence justifying the measure and criticized it as being taken on political, rather than scientific, grounds. [ 137 ]
In January 2011, the use of bisphenol A in baby bottles was banned in all EU countries. [ 138 ]
After reviewing more recent research, in 2012 EFSA decided to re-evaluate the human risks associated with exposure to BPA. It completed a draft assessment of consumer exposure to BPA in July 2013 and at that time asked for public input from all stakeholders to assist in forming a final report, expected to be completed in 2014. [ 139 ]
In January 2014, EFSA presented the second part of the draft opinion, which discussed the human health risks posed by BPA. The draft opinion was accompanied by an eight-week public consultation and also covered adverse effects on the liver and kidney related to BPA. On this basis it was recommended that the current TDI be revised. [ 140 ] In January 2015, EFSA indicated that the TDI had been reduced from 50 to 4 μg/kg body weight per day – a recommendation only, as national legislatures make the laws. [ 138 ]
The EU Commission issued a new regulation regarding the use of bisphenol A in thermal paper on 12 December 2016. According to this new regulation, thermal paper containing bisphenol A cannot be placed on the EU market after 2 January 2020. This regulation came into effect on 2 January 2017 but there is a transition period of three years. [ 141 ]
On 12 January 2017, BPA was added to the candidate list of substances of very high concern (SVHC). [ 142 ] Candidate SVHC listing is a first step towards restricting the importing and use of a chemical in the EU. If the European Chemicals Agency assigns SVHC status, the presence of BPA in a product at a concentration above 0.1% must be disclosed to a purchaser (with different rules for consumer and business purchasers). In February 2016, France had announced that it intended to propose BPA as a candidate SVHC by 8 August 2016. [ 4 ]
In May 2009, the Danish parliament passed a resolution to ban the use of BPA in baby bottles, which had not been enacted by April 2010. In March 2010, a temporary ban was declared by the Health Minister. [ 143 ]
In March 2010, senator Philippe Mahoux proposed legislation to ban BPA in food contact plastics . [ 144 ] In May 2011, senators Dominique Tilmans and Jacques Brotchi proposed legislation to ban BPA from thermal paper. [ 145 ]
On 5 February 2010, the French Food Safety Agency (AFSSA) questioned the previous assessments of the health risks of BPA, especially in regard to behavioral effects observed in rat pups following exposure in utero and during the first months of life. [ 146 ] [ 147 ] In April 2010, the AFSSA suggested the adoption of better labels for food products containing BPA. [ 148 ]
On 24 March 2010, the French Senate unanimously approved a proposed law to ban BPA from baby bottles. [ 149 ] The National Assembly (lower house) approved the text on 23 June 2010, and it has been applicable law since 2 July 2010. [ 150 ] On 12 October 2011, the French National Assembly voted for a law forbidding the use of bisphenol A in products intended for children under three years of age from 2013, and in all food containers from 2014. [ 151 ]
On 9 October 2012, the French Senate unanimously adopted a bill to suspend the manufacture, import, export and marketing of all food containers containing bisphenol A from 2015. The 2013 ban on bisphenol A in food products designed for children under three years of age was maintained. [ 152 ]
On 19 September 2008, the German Federal Institute for Risk Assessment (Bundesinstitut für Risikobewertung, BfR) stated that there was no reason to change the current risk assessment for bisphenol A on the basis of the Lang Study. [ 153 ]
In October 2009, the German environmental organization Bund für Umwelt und Naturschutz Deutschland requested a ban on BPA for children's products, especially pacifiers , [ 154 ] and products that make contact with food. [ 155 ] In response, some manufacturers voluntarily removed the problematic pacifiers from the market. [ 156 ]
On 3 March 2016, the Netherlands Food and Consumer Product Safety Authority [ nl ] (NVWA) issued cautionary recommendations to the Minister of Health, Welfare, and Sport and the Secretary for Economic Affairs , on the public intake of BPA, especially for vulnerable groups such as women who are pregnant or breastfeeding, and those with developing immune systems such as children below the age of 10. This was done in response to recent published research, and conclusions reached by the European Food Safety Authority . It also called for the concentration of BPA in drinking water to be lowered below 0.2 μg/L, in line with the maximum tolerable intake they recommend. [ 157 ]
In February 2009, the Swiss Federal Office for Public Health, based on reports of other health agencies, stated that the intake of bisphenol A from food represents no risk to the consumer, including newborns and infants. However, in the same statement, it advised for proper use of polycarbonate baby bottles and listed alternatives. [ 158 ]
By 26 May 1995, the Swedish Chemicals Agency asked for a BPA ban in baby bottles, but the Swedish Food Safety Authority preferred to await the European Food Safety Authority's expected updated review. The Minister of Environment said to wait for the EFSA review, but not for too long. [ 159 ] [ 160 ] [ failed verification ] From March 2011 it has been prohibited to manufacture baby bottles containing bisphenol A, and from July 2011 they could no longer be bought in stores. On 12 April 2012, the Swedish government announced that Sweden would ban BPA in cans containing food for children under the age of three. [ 161 ]
Since 2 January 2020, BPA has been banned in thermal receipts as a consequence of the EU-wide ban. [ 162 ]
Since September 1, 2016, it is prohibited to use BPA when relining water pipes with CIPP . [ 162 ]
In December 2009, responding to a letter from a group of seven scientists that urged the UK Government to "adopt a standpoint consistent with the approach taken by other Governments who have ended the use of BPA in food contact products marketed at children", [ 163 ] the UK Food Standards Agency reaffirmed, in January 2009, its view that "exposure of UK consumers to BPA from all sources, including food contact materials , was well below levels considered harmful". [ 164 ]
On 10 June 2011, Turkey banned the use of BPA in baby bottles and other polycarbonate (PC) items produced for babies. [ 165 ]
Between 1998 and 2003, the canning industry voluntarily replaced its BPA-containing epoxy resin can liners with BPA-free polyethylene terephthalate (PET) in many of its products. For other products, it switched to a different epoxy lining that yielded much less migration of BPA into food than the previously used resin. [ clarification needed ] In addition, polycarbonate tableware for school lunches was replaced by BPA-free plastics.
The major human exposure route to BPA is diet, including ingestion of contaminated food and water. [ 166 ]
BPA is especially likely to leach from plastics when they are cleaned with harsh detergents or when they contain acidic or high-temperature liquids. BPA is used to form the epoxy resin coating of water pipes; in older buildings, such resin coatings are used to avoid replacing deteriorating pipes. [ 167 ] In the workplace, while handling and manufacturing products which contain BPA, inhalation and dermal exposure are the most probable routes. [ 168 ] There are many uses of BPA for which the related potential exposures have not been fully assessed, including digital media, electrical and electronic equipment, automobiles, sports safety equipment, electrical laminates for printed circuit boards, composites, paints, and adhesives. [ 169 ] In addition to being present in many products that people use on a daily basis, BPA has the ability to bioaccumulate, especially in water bodies. One review found that although BPA is biodegradable, it is still detected after wastewater treatment in many waterways at concentrations of approximately 1 μg/L. This review also looked at other pathways through which BPA could potentially bioaccumulate and found "low-moderate potential...in microorganisms, algae, invertebrates, and fish in the environment", suggesting that some environmental exposures are less likely. [ 168 ]
In November 2009, the Consumer Reports magazine published an analysis of BPA content in some canned foods and beverages, where in specific cases the content of a single can of food could exceed the FDA "Cumulative Exposure Daily Intake" limit. [ 10 ] [ 170 ]
The CDC had found bisphenol A in the urine of 95% of adults sampled in 1988–1994 [ 171 ] and in 93% of children and adults tested in 2003–04. [ 172 ] The U.S. EPA reference dose (RfD) for BPA is 50 μg/kg/day, which is not enforceable but is the recommended safe level of exposure. The most sensitive animal studies show effects at much lower doses, [ 92 ] [ 173 ] and several studies of children, who tend to have the highest levels, have found levels over the EPA's suggested safe limit. [ 174 ]
A 2009 Health Canada study found that the majority of canned soft drinks it tested had low, but measurable levels of bisphenol A. [ 175 ] A study conducted by the University of Texas School of Public Health in 2010 found BPA in 63 of 105 samples of fresh and canned foods, including fresh turkey sold in plastic packaging and canned infant formula. [ 176 ] A 2011 study published in Environmental Health Perspectives , "Food Packaging and Bisphenol A and Bis(2-Ethyhexyl) Phthalate Exposure: Findings from a Dietary Intervention," selected 20 participants based on their self-reported use of canned and packaged foods to study BPA. Participants ate their usual diets, followed by three days of consuming foods that were not canned or packaged. The study's findings include: 1) evidence of BPA in participants' urine decreased by 50% to 70% during the period of eating fresh foods; and 2) participants' reports of their food practices suggested that consumption of canned foods and beverages and restaurant meals were the most likely sources of exposure to BPA in their usual diets. The researchers note that, even beyond these 20 participants, BPA exposure is widespread, with detectable levels in urine samples in more than an estimated 90% of the U.S. population. [ 177 ] Another U.S. study found that consumption of soda, school lunches, and meals prepared outside the home were statistically significantly associated with higher urinary BPA. [ 174 ]
A 2011 experiment by researchers at the Harvard School of Public Health indicated that BPA used in the lining of food cans is absorbed by the food and then ingested by consumers. Of 75 participants, half ate a lunch of canned vegetable soup for five days, followed by five days of fresh soup, while the other half did the same experiment in reverse order. "The analysis revealed that when participants ate the canned soup, they experienced more than a 1,000 percent increase in their urinary concentrations of BPA, compared to when they dined on fresh soup." [ 178 ] A 2009 study found that drinking from polycarbonate bottles increased urinary bisphenol A levels by two-thirds, from 1.2 μg/g creatinine to 2 μg/g creatinine. [ 179 ] Consumer groups recommend that people wishing to lower their exposure to bisphenol A avoid canned food and polycarbonate plastic containers (which shares resin identification code 7 with many other plastics) unless the packaging indicates the plastic is bisphenol A-free. [ 180 ] To avoid the possibility of BPA leaching into food or drink, the National Toxicology Panel recommends avoiding microwaving food in plastic containers, putting plastics in the dishwasher , or using harsh detergents. [ 181 ]
Besides diet, exposure can also occur through air and through skin absorption. [ 182 ] Free BPA is found in high concentration in thermal paper and carbonless copy paper , which would be expected to be more available for exposure than BPA bound into resin or plastic. [ 183 ] [ 184 ] [ 185 ] [ 186 ] Popular uses of thermal paper include receipts, event and cinema tickets, labels, and airline tickets. [ 186 ] A Swiss study found that 11 of 13 thermal printing papers contained 8 – 17 g/kg bisphenol A (BPA). Upon dry finger contact with a thermal paper receipt, roughly 1 μg BPA ( 0.2 – 6 μg ) was transferred to the forefinger and the middle finger. For wet or greasy fingers approximately 10 times more was transferred. Extraction of BPA from the fingers was possible up to 2 hours after exposure. [ 187 ] Further, it has been demonstrated that thermal receipts placed in contact with paper currency in a wallet for 24 hours cause a dramatic increase in the concentration of BPA in paper currency, making paper money a secondary source of exposure. [ 188 ] Another study has identified BPA in all of the waste paper samples analysed (newspapers, magazines, office paper, etc.), indicating direct results of contamination through paper recycling. [ 189 ] Free BPA can readily be transferred to skin, and residues on hands can be ingested. [ 9 ] Bodily intake through dermal absorption (99% of which comes from handling receipts) has been shown for the general population to be 0.219 ng/kg bw/day (occupationally exposed persons absorb higher amounts at 16.3 ng/kg bw/day) [ 190 ] whereas aggregate intake (food/beverage/environment) for adults is estimated at 0.36–0.43 μg/kg bw/day (estimated intake for occupationally exposed adults is 0.043–100 μg/kg bw/day). [ 8 ]
A study from 2011 found that Americans of all age groups had twice as much BPA in their bodies as Canadians; the reasons for the disparity were unknown, as there was no evidence to suggest higher amounts of BPA in U.S. foods, or that consumer products available in the U.S. containing BPA were BPA-free in Canada. According to another study it may have been due to differences in how and when the surveys were done, [ 191 ] because "although comparisons of measured concentrations can be made across populations, this must be done with caution owing to differences in sampling, in the analytical methods used and in the sensitivity of the assays." [ 192 ]
Comparing data from the National Health and Nutrition Examination Surveys (NHANES) from four time periods between 2003 and 2012, urinary BPA data indicate that the median daily intake for the overall population is approximately 25 ng/kg/day, below current health-based guidelines. Additionally, daily intake of BPA in the United States has decreased significantly compared to the intakes measured in 2003–2004. [ 193 ] Public attention and governmental action during this time period may have decreased exposure to BPA somewhat, but these studies did not include children under the age of six. According to the Endocrine Society, age at exposure is an important factor in determining the extent to which endocrine-disrupting chemicals will have an effect, and the effects on developing fetuses or infants are quite different than in adults. [ 194 ]
A 2009 study found higher urinary concentrations in young children than in adults under typical exposure scenarios. [ 195 ] [ 196 ] In adults, BPA is eliminated from the body through a detoxification process in the liver. In infants and children, this pathway is not fully developed, so they have a decreased ability to clear BPA from their systems. Several recent studies of children have found levels that exceed the EPA's suggested safe limit. [ 174 ]
Infants fed with liquid formula are among the most exposed, and those fed formula from polycarbonate bottles can consume up to 13 micrograms of bisphenol A per kg of body weight per day (μg/kg/day; see the estimates below). [ 197 ] In the U.S. and Canada, BPA has been found in liquid infant formula in concentrations varying from 0.48 to 11 ng/g. [ 198 ] [ 199 ] BPA has rarely been found in powdered infant formula (only 1 of 14 samples). [ 198 ] The U.S. Department of Health & Human Services (HHS) states that "the benefit of a stable source of good nutrition from infant formula and food outweighs the potential risk of BPA exposure". [ 200 ] BPA is present in human breast milk, having been found by several studies in 62–75% of breast milk samples. [ 201 ] [ 202 ] This is presumably due to the mothers' own exposure to BPA, since it is not naturally produced by the body.
Children may be more susceptible to BPA exposure than adults (see health effects). [ citation needed ] A 2010 study of people in Austria, Switzerland, and Germany suggested polycarbonate (PC) baby bottles as the most prominent route of exposure for infants, and canned food for adults and teenagers. [ 203 ] In the United States, the growing concern over BPA exposure in infants in recent years has led the manufacturers of plastic baby bottles to stop using BPA in their bottles. The FDA banned the use of BPA in baby bottles and sippy cups in July 2012, as well as the use of BPA-based epoxy resins in infant formula packaging. [ 204 ] However, babies may still be exposed if they are fed with old or hand-me-down bottles bought before the companies stopped using BPA.
One often overlooked source of exposure occurs when a pregnant woman is exposed, thereby exposing the fetus. Animal studies have shown that BPA can be found in both the placenta and the amniotic fluid of pregnant mice. [ 205 ] Since BPA was also "detected in the urine and serum of pregnant women and the serum, plasma, and placenta of newborn infants" a study to examine the externalizing behaviors associated with prenatal exposure to BPA was performed which suggests that exposures earlier in development have more of an effect on the behavior outcomes and that female children (2-years-old) are impacted more than males. [ 206 ] A study of 244 mothers indicated that exposure to BPA before birth could affect the behavior of girls at age 3. Girls whose mother's urine contained high levels of BPA during pregnancy scored worse on tests of anxiety and hyperactivity. Although these girls still scored within a normal range, for every 10-fold increase in the BPA of the mother, the girls scored at least six points lower on the tests. Boys did not seem to be affected by their mother's BPA levels during pregnancy. [ 207 ] After the baby is born, maternal exposure can continue to affect the infant through transfer of BPA to the infant via breast milk. [ 208 ] [ 209 ] Because of these exposures that can occur both during and after pregnancy, mothers wishing to limit their child's exposure to BPA should attempt to limit their own exposures during that time period.
While the majority of exposures have been shown to come through the diet, accidental ingestion can also be considered a source of exposure. One study conducted in Japan tested plastic baby books to look for possible leaching into saliva when babies chew on them. [ 210 ] While the results of this study have yet to be replicated, it gives reason to question whether exposure can also occur in infants through ingestion by chewing on certain books or toys.
Estimated infant exposures (see text above): for formula-fed infants, the lower figure assumes a weight of 4.5 kg and an intake of 700 ml/day at the maximum BPA concentration detected in U.S. canned formula, while the higher figure assumes a weight of 6.1 kg and an intake of 1060 ml/day of powdered formula from cans with epoxy linings, prepared in polycarbonate bottles. For breast-fed infants, the lower figure assumes a weight of 6.1 kg and an intake of 1060 ml/day at the maximum BPA concentration detected in Japanese breast milk samples, while the higher figure assumes a weight of 4.5 kg and an intake of 700 ml/day at the maximum concentration of free BPA detected in U.S. breast milk samples.
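The per-kilogram estimates above follow from a simple mass balance (concentration × daily intake ÷ body weight). A minimal sketch using the lower-bound assumptions quoted above is shown below; the 11 ng/g maximum for U.S. canned liquid formula comes from the preceding paragraphs, and treating 1 g of liquid formula as roughly 1 mL is an assumption made only for this illustration.

```python
# Minimal sketch: estimated daily BPA intake for a formula-fed infant,
# using the lower-bound assumptions quoted above.
# Assumption (for illustration only): 1 g of liquid formula ~ 1 mL.
concentration_ng_per_ml = 11   # max BPA found in U.S. canned liquid formula (ng/g)
intake_ml_per_day = 700        # assumed daily formula intake
body_weight_kg = 4.5           # assumed infant body weight

intake_ug_per_kg_day = (concentration_ng_per_ml * intake_ml_per_day / 1000) / body_weight_kg
print(f"~{intake_ug_per_kg_day:.1f} ug/kg/day")   # ~1.7 ug/kg/day
```

This covers only BPA already present in the formula itself; the 13 μg/kg/day upper figure quoted earlier corresponds to the powdered-formula scenario with epoxy-lined cans and polycarbonate bottles, where leaching adds to the total.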
Charles Schumer introduced a 'BPA-Free Kids Act of 2008' in the U.S. Senate seeking to ban BPA in any product designed for use by children and to require the Centers for Disease Control and Prevention to conduct a study of the health effects of BPA exposure. [ 211 ] It was reintroduced in 2009 in both the Senate and the House, but died in committee each time. [ 97 ]
In 2008, the FDA reassured consumers that current limits were safe, but convened an outside panel of experts to review the issue. The Lang study was released, and co-author David Melzer presented its results before the FDA panel. [ 212 ] An editorial accompanying the Lang study's publication criticized the FDA's assessment of bisphenol A: "A fundamental problem is that the current ADI [acceptable daily intake] for BPA is based on experiments conducted in the early 1980s using outdated methods (only very high doses were tested) and insensitive assays. More recent findings from independent scientists were rejected by the FDA, apparently because those investigators did not follow the outdated testing guidelines for environmental chemicals, whereas studies using the outdated, insensitive assays (predominantly involving studies funded by the chemical industry) are given more weight in arriving at the conclusion that BPA is not harmful at current exposure levels." [ 89 ] The FDA was criticized for "basing its conclusion on two studies while downplaying the results of hundreds of other studies." [ 212 ] Diana Zuckerman , president of the National Research Center for Women and Families , criticized the FDA in her testimony at the FDA's public meeting on the draft assessment of bisphenol A for use in food contact applications, saying that "At the very least, the FDA should require a prominent warning on products made with BPA". [ 213 ] [ 214 ]
In March 2009, Suffolk County, New York became the first county to pass legislation banning baby beverage containers made with bisphenol A. [ 215 ] By March 2009, legislation to ban bisphenol A had been proposed in both the House and the Senate. [ 216 ] In the same month, Rochelle Tyl, author of two studies used by the FDA to assert BPA safety in August 2008, said those studies did not claim that BPA is safe, because they were not designed to cover all aspects of the chemical's effects. [ 217 ] In May 2009, Minnesota and Chicago became the first U.S. jurisdictions to pass regulations limiting or banning BPA. [ 218 ] [ 219 ] In June 2009, the FDA announced its decision to reconsider the BPA safety levels. [ 220 ] Grassroots political action led Connecticut to become the first U.S. state to ban bisphenol A not only from infant formula and baby food containers, but also from any reusable food or beverage container. [ 221 ] In July 2009, the California Environmental Protection Agency's Developmental and Reproductive Toxicity Identification Committee in the California Office of Environmental Health Hazard Assessment unanimously voted against placing bisphenol A on the state's list of chemicals believed to cause reproductive harm. The panel, though concerned about the growing scientific evidence of BPA's reproductive harm in animals, found that there were insufficient data on its effects in humans. [ 222 ] Critics pointed out that the same panel had failed to add second-hand smoke to the list until 2006, and that only one chemical had been added to the list in the previous three years. [ 223 ] In September, the U.S. Environmental Protection Agency announced that it was evaluating BPA for development of an action plan. [ 224 ] In October, the NIH announced $30,000,000 in stimulus grants to study the health effects of BPA, funding that was expected to result in many peer-reviewed publications. [ 225 ]
On 15 January 2010, the FDA expressed "some concern", the middle level in its scale of concerns, about the potential effects of BPA on the brain, behavior, and prostate gland in fetuses, infants, and young children, and announced that it was taking reasonable steps to reduce human exposure to BPA in the food supply. However, the FDA was not recommending that families change the use of infant formula or foods, as it saw the benefit of a stable source of good nutrition as outweighing the potential risk from BPA exposure. [ 226 ] On the same date, the Department of Health and Human Services released information to help parents to reduce children's BPA exposure. [ 227 ] As of 2010 many U.S. states were considering some sort of BPA ban. [ 228 ]
In June 2010 the 2008–2009 Annual Report of the President's Cancer Panel was released and recommended: "Because of the long latency period of many cancers, the available evidence argues for a precautionary approach to these diverse chemicals, which include (…) bisphenol A". [ 229 ] In August 2010, the Maine Board of Environmental Protection voted unanimously to ban the sale of baby bottles and other reusable food and beverage containers made with bisphenol A as of January 2012. [ 230 ] In February 2011, the newly elected governor of Maine, Paul LePage , gained national attention when he spoke on a local TV news show saying he hoped to repeal the ban because, "There hasn't been any science that identifies that there is a problem" and added: "The only thing that I've heard is if you take a plastic bottle and put it in the microwave and you heat it up, it gives off a chemical similar to estrogen. So the worst case is some women may have little beards." [ 231 ] [ 232 ] In April 2011, the Maine legislature passed a bill to ban the use of BPA in baby bottles, sippy cups, and other reusable food and beverage containers, effective 1 January 2012. Governor LePage refused to sign the bill. [ 233 ]
In October 2011, California banned BPA from baby bottles and toddlers' drinking cups, effective 1 July 2013. [ 234 ] By 2011, 26 states had proposed legislation that would ban certain uses of BPA. Many bills died in committee. [ 235 ] In July 2011, the American Medical Association (AMA) declared feeding products for babies and infants that contain BPA should be banned. It recommended better federal oversight of BPA and clear labeling of products containing it. It stressed the importance of the FDA to "actively incorporate current science into the regulation of food and beverage BPA-containing products." [ 236 ]
In 2012, the FDA concluded an assessment of scientific research on the effects of BPA and stated in the March 2012 Consumer Update that "the scientific evidence at this time does not suggest that the very low levels of human exposure to BPA through the diet are unsafe" although recognizing "potential uncertainties in the overall interpretation of these studies including route of exposure used in the studies and the relevance of animal models to human health. The FDA is continuing to pursue additional research to resolve these uncertainties." [ 237 ] Yet on 17 July 2012, the FDA banned BPA from baby bottles and sippy cups. A FDA spokesman said the agency's action was not based on safety concerns and that "the agency continues to support the safety of BPA for use in products that hold food." [ 238 ] Since manufacturers had already stopped using the chemical in baby bottles and sippy cups, the decision was a response to a request by the American Chemistry Council , the chemical industry's main trade association, who believed that a ban would boost consumer confidence. [ 239 ] The ban was criticized as "purely cosmetic" by the Environmental Working Group , which stated that "If the agency truly wants to prevent people from being exposed to this toxic chemical associated with a variety of serious and chronic conditions it should ban its use in cans of infant formula, food and beverages." The Natural Resources Defense Council called the move inadequate saying, the FDA needs to ban BPA from all food packaging. [ 32 ]
As of 2014, 12 states have banned BPA from children's bottles and feeding containers. [ 240 ]
On 30 December 2009 EPA released a so-called action plan for four chemicals, including BPA, which would have added it to the list of "chemicals of concern" regulated under the Toxic Substances Control Act . In February 2010, after lobbyists for the chemical industry had met with administration officials, the EPA delayed BPA regulation by not including the chemical. [ 241 ] [ 242 ]
On 29 March 2010, EPA published a revised action plan for BPA as a "chemical of concern". [ 243 ] In October 2010, EPA prepared an advance Notice of Proposed Rulemaking for BPA testing; it was published in the Federal Register in July 2011. [ 244 ] The draft proposal then sat for more than three years at the Office of Information and Regulatory Affairs (OIRA), part of the Office of Management and Budget (OMB), even though OIRA is supposed to review draft proposals within three months.
In September 2013, EPA withdrew its 2010 draft BPA rule, [ 245 ] saying the rule was "no longer necessary" because EPA was taking a different approach to evaluating chemicals, a so-called "Work Plan" of more than 80 chemicals for risk assessment and risk reduction. Another proposed rule that EPA withdrew would have limited industry's claims of confidential business information (CBI) for the health and safety studies required when new chemicals are submitted under TSCA for review. The EPA said it continued "to try to reduce unwarranted claims of confidentiality and has taken a number of significant steps that have had dramatic results... tightening policies for CBI claims and declassifying unwarranted confidentiality claims, challenging companies to review existing CBI claims to ensure that they are still valid and providing easier and enhanced access to a wider array of information."
The chemical industry group American Chemistry Council commended EPA for "choosing a course of action that will ultimately strengthen the performance of the nation's primary chemical management law." Richard Denison, senior scientist with the Environmental Defense Fund , commented "both rules were subject to intense opposition and lobbying from the chemical industry" and "Faced presumably with the reality that [the Office of Information and Regulatory Affairs] was never going to let EPA even propose the rules for public comment, EPA decided to withdraw them." [ 246 ]
On 29 January 2014 EPA released a final alternatives assessment for BPA in thermal paper as part of its Design for the Environment program. [ 247 ]
In March 2009, the six largest U.S. producers of baby bottles decided to stop using bisphenol A in their products. [ 248 ] The same month, Sunoco , a producer of gasoline and chemicals, refused to sell BPA to companies for use in food and water containers for children younger than 3, saying it could not be certain of the compound's safety. [ 249 ] In May 2009, Lyndsey Layton of the Washington Post accused manufacturers of food and beverage containers, and some of their biggest customers, of pursuing a public relations and lobbying strategy to block government BPA bans. She noted that, "Despite more than 100 published studies by government scientists and university laboratories that have raised health concerns about the chemical, the Food and Drug Administration has deemed it safe largely because of two studies, both funded by a chemical industry trade group". [ 250 ] In August 2009, the Milwaukee Journal Sentinel 's investigative series into BPA and its effects revealed the Society of the Plastics Industry 's plans for a major public relations blitz to promote BPA, including plans to attack and discredit those who report or comment negatively on BPA and its effects. [ 251 ] [ 252 ]
The chemical industry over time responded to criticism of BPA by promoting "BPA-free" products. For example, in 2010, General Mills announced it had found a "BPA-free alternative" can liner that works with tomatoes. It said it would begin using the BPA-free alternative in tomato products sold by its organic foods subsidiary Muir Glen with that year's tomato harvest. [ 253 ] As of 2014, General Mills has refused to state which alternative chemical it uses, and whether it uses it on any of its other canned products. [ 254 ]
A minority of companies have stated what alternative compound(s) they use. Following an inquiry by Representative Edward Markey (D-Mass) seventeen companies replied saying they were going BPA-free, including Campbell Soup Company and General Mills Inc. [ 254 ] None of the companies said they are or were going to use Bisphenol S ; only four stated the alternative to BPA that they will be using. ConAgra stated in 2013 "alternate liners for tomatoes are vinyl ...New aerosol cans are lined with polyester resin ". Eden Foods stated that only their "beans are canned with a liner of an oleoresinous c-enamel that does not contain the endocrine disruptor BPA. Oleoresin is a mixture of oil and resin extracted from plants such as pine or balsam fir". Hain Celestial Group will use "modified polyester and/ or acrylic … by June 2014 for our canned soups, beans, and vegetables". Heinz stated in 2011 it "intend[s] to replace epoxy linings in all our food containers…. We have prioritized baby foods", and in 2012 "no BPA in any plastic containers we use". [ 254 ]
Some "BPA free" plastics are made from epoxy containing a compound called bisphenol S (BPS). BPS shares a similar structure and versatility to BPA and has been used in numerous products from currency to thermal receipt paper. Widespread human exposure to BPS was confirmed in an analysis of urine samples taken in the U.S., Japan, China, and five other Asian countries. [ 255 ] Researchers found BPS in all the receipt paper, 87 percent of the paper currency and 52 percent of recycled paper they tested. The study found that people may be absorbing 19 times more BPS through their skin than the amount of BPA they absorbed, when it was more widely used. [ 256 ] In a 2011 study researchers looked at 455 common plastic products and found that 70% tested positive for estrogenic activity. After the products had been washed or microwaved the proportion rose to 95%. The study concluded: "Almost all commercially available plastic products we sampled, independent of the type of resin, product, or retail source, leached chemicals having reliably-detectable EA [endocrine activity], including those advertised as BPA-free. In some cases, BPA-free products released chemicals having more EA than BPA-containing products." [ 256 ] A systematic review published in 2015 found that "based on the current literature, BPS and BPF are as hormonally active as BPA, and have endocrine disrupting effects." [ 257 ]
Among potential substitutes for BPA, phenol-based chemicals closely related to BPA have been identified. The non-exhaustive list includes bisphenol E (BPE), bisphenol B (BPB), 4-cumylphenol (HPP) and bisphenol F (BPF), with only BPS currently used as the main substitute in thermal paper. [ 189 ]
The enzyme 4-hydroxyacetophenone monooxygenase , found in Pseudomonas fluorescens , uses 1-(4-hydroxyphenyl)ethan-1-one (4-hydroxyacetophenone), NADPH, H + and O 2 to produce 4-hydroxyphenyl acetate , NADP + , and H 2 O. [ 258 ]
The fungus Cunninghamella elegans is also able to degrade synthetic phenolic compounds like bisphenol A. [ 259 ]
Portulaca oleracea efficiently removes bisphenol A from a hydroponic solution. How this happens is unclear. [ 260 ]
Photodegradation is BPA's main route of natural weathering in the environment, via the photo-Fries rearrangement . Experimentally, BPA has been shown to photodegrade in reactions catalyzed by zinc oxide, titanium dioxide, and tin dioxide, which have been investigated as water decontamination methods. [ 102 ] The photo-Fries degradation is a complex rearrangement of the aromatic carbonate backbone of BPA-based polycarbonate into phenyl salicylate and dihydroxybenzophenone derivatives, before the energized ring releases carbon dioxide. In aqueous solution, BPA absorbs UV light at wavelengths between 250 nm and 360 nm, and the photo-Fries degradation occurs at wavelengths below 300 nm. [ 102 ] The reaction begins with an alpha cleavage between the carbonyl carbon and the oxygen of the carbonate linkage, followed by the photo-Fries rearrangement of the resulting fragments. [ 261 ]
Hydroxyl radicals are powerful oxidants that transform BPA into various phenolic compounds. Advanced photocatalytic oxidation of BPA, using compounds such as sodium hypochlorite (NaOCl) as the oxidizing agent, can accelerate degradation by releasing oxygen into the water. This decomposition occurs when BPA is exposed to UV irradiation. [ 102 ] The released oxygen, another strong oxidant, also breaks down BPA in aqueous conditions to produce carbon dioxide and water. The dissolved carbon dioxide forms carbonic acid , acidifying the water. [ 102 ]
During water treatment, BPA can be removed through ozonation . A 2008 study identified the degradation products of the reaction between BPA and ozone through the use of liquid chromatography and mass spectrometry . [ 262 ]
Solutions of BPA in water decreased in pH after the ozonation process was completed, with drops from 6.5 to 4.5 pH units observed, most likely because of the formation of carboxylic acids . These products were formed at a solution temperature of 20 ± 2 °C and have high molecular weights. Because ozone is electrophilic , it reacts with the aromatic rings of BPA by electrophilic substitution . [ 262 ]
In 1991, the rate of BPA degradation through ozonation was first described quantitatively. [ 263 ]
The resulting expression relates the BPA concentration to time through an apparent rate constant and the concentrations of BPA and ozone.
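The cited expression is not reproduced in this text. A generic second-order rate law of the kind commonly used for ozonation kinetics, given here purely as an illustrative assumption rather than as the 1991 study's exact equation, would take the form:

```latex
-\frac{d[\mathrm{BPA}]}{dt} = k_{\mathrm{app}}\,[\mathrm{O_3}]\,[\mathrm{BPA}]
\qquad\Longrightarrow\qquad
[\mathrm{BPA}]_t = [\mathrm{BPA}]_0\,e^{-k_{\mathrm{app}}[\mathrm{O_3}]\,t}
\quad\text{(for approximately constant ozone concentration)}
```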
The health effects of radon are harmful, and include an increased chance of lung cancer . Radon is a radioactive , colorless, odorless, tasteless noble gas , which has been studied by a number of scientific and medical bodies for its effects on health. A naturally occurring gas formed as a decay product of radium , radon is one of the densest substances that remain gases under normal conditions, and it is considered a health hazard due to its radioactivity. Its most stable isotope , radon-222 , has a half-life of 3.8 days. Because of its high radioactivity, it has been less well studied by chemists, but a few compounds are known.
Radon-222 is formed as part of the uranium series , i.e., the normal radioactive decay chain of uranium-238 , which terminates in lead-206 . Uranium has been present since the Earth was formed, and its most common isotope has a very long half-life (4.5 billion years), the time required for one-half of the uranium to decay. Thus, uranium and radon will continue to occur for millions of years at about the same concentrations as they do now. [ 1 ]
Radon is responsible for the majority of public exposure to ionizing radiation . It is often the single largest contributor to an individual's background radiation dose, and is the most variable from location to location. Radon gas from natural sources can accumulate in buildings, especially in confined areas such as attics and basements. It can also be found in some spring waters and hot springs. [ 2 ]
According to a 2003 report, EPA's Assessment of Risks from Radon in Homes , from the United States Environmental Protection Agency , epidemiological evidence shows a clear link between lung cancer and high concentrations of radon, with 21,000 radon-induced U.S. lung cancer deaths per year, second only to cigarette smoking. [ 3 ] Thus, in geographic areas where radon is present in heightened concentrations, radon is considered a significant indoor air contaminant.
Radon concentration in the atmosphere is usually measured in becquerels per cubic meter (Bq/m 3 ), which is an SI derived unit . As a frame of reference, typical domestic exposures are about 100 Bq/m 3 indoors and 10–20 Bq/m 3 outdoors. In the US, radon concentrations are often measured in picocuries per liter (pCi/L), with 1 pCi/L = 37 Bq/m 3 . [ 5 ]
The mining industry traditionally measures exposure using the working level (WL) index, and the cumulative exposure in working level months (WLM): 1 WL equals any combination of short-lived 222 Rn progeny ( 218 Po , 214 Pb , 214 Bi , and 214 Po ) in 1 liter of air that releases 1.3 × 10 5 MeV of potential alpha energy; [ 5 ] one WL is equivalent to 2.08 × 10 −5 joules per cubic meter of air (J/m 3 ). [ 1 ] The SI unit of cumulative exposure is expressed in joule-hours per cubic meter (J·h/m 3 ). One WLM is equivalent to 3.6 × 10 −3 J·h/m 3 . An exposure to 1 WL for 1 working month (170 hours) equals 1 WLM cumulative exposure.
A cumulative exposure of 1 WLM is roughly equivalent to living one year in an atmosphere with a radon concentration of 230 Bq/m 3 . [ 6 ]
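The unit relationships above reduce to a few constant factors; the sketch below uses only the conversion figures quoted in this section.

```python
# Minimal sketch: radon exposure unit conversions using the factors quoted above.
BQ_M3_PER_PCI_L = 37.0          # 1 pCi/L = 37 Bq/m^3
J_M3_PER_WL = 2.08e-5           # 1 WL = 2.08e-5 J/m^3 of potential alpha energy
JH_M3_PER_WLM = 3.6e-3          # 1 WLM = 3.6e-3 J.h/m^3
HOURS_PER_WORKING_MONTH = 170   # 1 WLM = exposure to 1 WL for 170 working hours

def pci_l_to_bq_m3(pci_l):
    return pci_l * BQ_M3_PER_PCI_L

def wl_to_wlm(wl, hours):
    """Cumulative exposure in WLM from a concentration in WL held for `hours`."""
    return wl * hours / HOURS_PER_WORKING_MONTH

print(pci_l_to_bq_m3(2.7))                    # ~100 Bq/m^3, the typical indoor level quoted above
print(wl_to_wlm(1.0, 170))                    # 1 WL for one working month -> 1.0 WLM
print(J_M3_PER_WL * HOURS_PER_WORKING_MONTH)  # ~3.5e-3 J.h/m^3, consistent with 1 WLM
```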
The radon ( 222 Rn ) released into the air decays to 210 Pb and other radioisotopes. The levels of 210 Pb can be measured. The rate of deposition of this radioisotope is dependent on the weather. [ citation needed ]
Radon concentrations found in natural environments are much too low to be detected by chemical means: for example, a 1,000 Bq/m 3 (relatively high) concentration corresponds to 0.17 picogram per cubic meter. The average concentration of radon in the atmosphere is about 6 × 10 −18 atoms of radon for each molecule in the air, or about 150 atoms in each mL of air. [ 7 ] The entire radon activity of the Earth's atmosphere at any one time is due to only some tens of grams of radon, constantly being replaced by decay of larger amounts of radium and uranium. [ 8 ] Its concentration can vary greatly from place to place. In the open air, it ranges from 1 to 100 Bq/m 3 , and is even lower (0.1 Bq/m 3 ) above the ocean. In caves, aerated mines, or poorly ventilated dwellings, its concentration can climb to 20–2,000 Bq/m 3 . [ 9 ]
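The quoted mass figure follows directly from the decay relation A = λN; the sketch below reproduces the 0.17 pg/m³ value for a 1,000 Bq/m³ concentration using the half-life given earlier.

```python
# Minimal sketch: mass of radon-222 corresponding to 1000 Bq/m^3, via A = lambda * N.
import math

HALF_LIFE_S = 3.8 * 86400      # radon-222 half-life (~3.8 days) in seconds
AVOGADRO = 6.022e23            # atoms per mole
MOLAR_MASS_G = 222.0           # g/mol for Rn-222

activity_bq_m3 = 1000.0
decay_constant = math.log(2) / HALF_LIFE_S        # per second
atoms_per_m3 = activity_bq_m3 / decay_constant    # N = A / lambda
mass_pg_per_m3 = atoms_per_m3 * MOLAR_MASS_G / AVOGADRO * 1e12

print(f"{mass_pg_per_m3:.2f} pg/m^3")   # ~0.17 pg/m^3, matching the figure above
```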
In mining contexts, radon concentrations can be much higher. Ventilation regulations try to maintain concentrations in uranium mines under the "working level", and under 3 WL (546 pCi 222 Rn per liter of air; 20.2 kBq/m 3 measured from 1976 to 1985) 95 percent of the time. [ 1 ] The concentration in the air at the (unventilated) Gastein Healing Gallery averages 43 kBq/m 3 (about 1.2 nCi/L) with maximal value of 160 kBq/m 3 (about 4.3 nCi/L). [ 10 ]
Radon emanates naturally from the ground and from some building materials all over the world, wherever there are traces of uranium or thorium , and particularly in regions with soils containing granite or shale , which have a higher concentration of uranium. In every square mile of surface soil, the first 6 inches (150 mm) of depth contain about 0.035 oz (about 1 g) of radium (0.4 g per km 2 ), which releases radon in small amounts to the atmosphere. [ 1 ] Sand used in making concrete is the major source of radon in buildings. [ 11 ]
On a global scale, it is estimated that 2,400 million curies (about 9 × 10 19 Bq) of radon are released from soil annually. Not all granitic regions are prone to high emissions of radon. Being an unreactive noble gas, radon usually migrates freely through faults and fragmented soils, and may accumulate in caves or water. Due to its short half-life (about four days for 222 Rn ), its concentration decreases rapidly as the distance from the production area increases. [ citation needed ]
Its atmospheric concentration varies greatly depending on the season and conditions. For instance, it has been shown to accumulate in the air if there is a meteorological inversion and little wind. [ 12 ]
Because atmospheric radon concentrations are very low, radon-rich water exposed to air continually loses radon by volatilization . Hence, ground water generally has higher concentrations of 222 Rn than surface water , because the radon is continuously replenished by radioactive decay of 226 Ra present in rocks. Likewise, the saturated zone of a soil frequently has a higher radon content than the unsaturated zone because of diffusional losses to the atmosphere. [ 13 ] [ 14 ] As a below-ground source of water, some springs —including hot springs —contain significant amounts of radon. [ 15 ] The towns of Boulder, Montana ; Misasa ; and Bad Kreuznach , as well as other locations in Japan, have radium-rich springs which emit radon. To be classified as a radon mineral water, the radon concentration must be above a minimum of 2 nCi/L (74 Bq/L). [ 16 ] The activity of radon mineral water reaches 2,000 Bq/L in Merano and 4,000 Bq/L in the village of Lurisia ( Ligurian Alps , Italy). [ 10 ]
Radon is also found in some petroleum. Because radon has a pressure and temperature behaviour similar to that of propane, and oil refineries separate petrochemicals based on their boiling points, the piping carrying freshly separated propane in oil refineries can become partially radioactive due to radon decay products. Residues from the oil and gas industry often contain radium and its daughters. The sulfate scale from an oil well can be radium rich, while the water, oil, and gas from a well often contain radon. The radon decays to form solid radioisotopes which form coatings on the inside of pipework, so the area of an oil processing plant where propane is handled is often one of its more contaminated areas. [ 17 ]
Typical domestic exposures are of around 100 Bq/m 3 indoors, but specifics of construction and ventilation strongly affect levels of accumulation; a further complication for risk assessment is that concentrations in a single location may differ by a factor of two over an hour, and concentrations can vary greatly even between two adjoining rooms in the same structure. [ 1 ]
The distribution of radon concentrations is highly skewed : the larger concentrations have a disproportionately greater weight. Indoor radon concentration is usually assumed to follow a lognormal distribution on a given territory. [ 18 ] Thus, the geometric mean is generally used to estimate the "average" radon concentration in an area. [ 19 ]
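For a set of n measured concentrations x_1, …, x_n, the geometric mean used in such surveys is the standard statistical quantity (not specific to any particular radon study):

\mathrm{GM} = \left(\prod_{i=1}^{n} x_i\right)^{1/n} = \exp\!\left(\frac{1}{n}\sum_{i=1}^{n}\ln x_i\right)

For a lognormal distribution the geometric mean coincides with the median, which is why it is less sensitive than the arithmetic mean to the small number of very high readings in a skewed radon survey.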
The mean concentration ranges from less than 10 Bq/m 3 to over 100 Bq/m 3 in some European countries. [ 20 ] Typical geometric standard deviations found in studies range between 2 and 3, meaning (given the 68–95–99.7 rule ) that in roughly 2 to 3% of cases the radon concentration is expected to exceed the geometric mean by more than two geometric standard deviations, that is, by a factor of about four to nine.
The so-called "Watras incident" in 1984 is named for American construction engineer Stanley Watras, an employee at the Limerick nuclear power plant in the United States, who triggered radiation monitors while leaving work over several days—even though the plant had not yet been fueled, and despite Watras being decontaminated and sent home "clean" each evening. This pointed to a source of contamination outside the power plant, which turned out to be radon levels of 100,000 Bq /m 3 (2.7 nCi /L) in the basement of his home. He was told that living in the home was the equivalent of smoking 135 packs of cigarettes a day, and he and his family had increased their risk of developing lung cancer by 13 or 14 percent. [ 21 ] The incident dramatized the fact that radon levels in particular dwellings can occasionally be orders of magnitude higher than typical. [ 22 ] Radon soon became a standard homeowner concern, [ 23 ] though typical domestic exposures are two to three orders of magnitude lower (100 Bq/m 3 , or 2.5 pCi/L), [ 24 ] making individual testing essential to assessment of radon risk in any particular dwelling.
Radon exists in every U.S. state, and about 6% of American houses have elevated levels [ citation needed ] . The highest average radon concentrations in the United States are found in Iowa and in the Appalachian Mountain areas in southeastern Pennsylvania. [ 25 ] Some of the highest readings have been recorded in Mallow, County Cork , Ireland. Iowa has the highest average radon concentrations in the United States because extensive glaciation ground granitic rocks from the Canadian Shield and deposited them as the soils that make up the rich Iowa farmland. [ 26 ] Many cities within the state, such as Iowa City , have passed requirements for radon-resistant construction in new homes. In a few locations, uranium tailings have been used for landfills and were subsequently built on, resulting in possible increased exposure to radon. [ 1 ]
In the early 20th century, gold contaminated with 210 Pb (recovered from gold seeds that had held 222 Rn for radiotherapy) was melted down and made into a small number of jewelry pieces, such as rings, in the U.S. [ 27 ] [ 28 ]
Wearing such a contaminated ring could lead to a skin exposure of 10 to 100 millirad/day (0.004 to 0.04 mSv/h). [ 29 ]
The health effects of high exposure to radon in mines, where exposures reaching 1,000,000 Bq /m 3 can be found, can be recognized in Paracelsus ' 1530 description of a wasting disease of miners, the mala metallorum. Though at the time radon itself was not understood to be the cause (indeed, neither it nor radiation had even been discovered), mineralogist Georg Agricola recommended ventilation of mines to avoid this mountain sickness ( Bergsucht ). [ 30 ] [ 31 ] In 1879, the "wasting" was identified as lung cancer by Herting and Hesse in their investigation of miners from Schneeberg, Saxony , Germany. Given that the type locality of the important uranium ore pitchblende is in the Ore Mountains , and that this region was the most important German-speaking mining area at the time, it is likely that these radon-induced lung cancers were associated with uranium. [ citation needed ]
Beyond mining in general, radon is a particular problem in the mining of uranium ; significant excess lung cancer deaths have been identified in epidemiological studies of uranium miners and other hard-rock miners employed in the 1940s and 1950s. [ 32 ] [ 33 ] [ 34 ] Residues from processing of uranium ore can also be a source of radon. Radon resulting from the high radium content in uncovered dumps and tailing ponds can be easily released into the atmosphere. [ 35 ] Modern mining techniques, including better ventilation for underground mines, routine radiation monitoring, and technologies such as in-situ leaching , have helped decrease the incidence of radon exposure among miners in subsequent decades. [ citation needed ]
The first major studies with radon and health occurred in the context of uranium mining, first in the Joachimsthal region of Bohemia and then in the Southwestern United States during the early Cold War . Because radon is a product of the radioactive decay of uranium, underground uranium mines may have high concentrations of radon. Many uranium miners in the Four Corners region contracted lung cancer and other pathologies as a result of high levels of exposure to radon in the mid-1950s. The increased incidence of lung cancer was particularly pronounced among Native American and Mormon miners, because those groups normally have low rates of lung cancer. [ 36 ] Safety standards requiring expensive ventilation were not widely implemented or policed during this period. [ 37 ]
In studies of uranium miners, workers exposed to radon levels of 50 to 150 picocuries of radon per liter of air (2000–6000 Bq/m 3 ) for about 10 years have shown an increased frequency of lung cancer. [ 1 ] Statistically significant excesses in lung cancer deaths were present after cumulative exposures of less than 50 WLM. [ 1 ] There is, however, unexplained heterogeneity in these results (whose confidence intervals do not always overlap). [ 5 ] The size of the radon-related increase in lung cancer risk varied by more than an order of magnitude between the different studies. [ 38 ]
Heterogeneities are possibly due to systematic errors in exposure ascertainment, unaccounted-for differences in the study populations (genetic, lifestyle, etc.), or confounding mine exposures. [ 5 ] There are a number of confounding factors to consider, including exposure to other agents, ethnicity, smoking history, and work experience. The cases reported in these miners cannot be attributed solely to radon or radon daughters but may be due to exposure to silica, to other mine pollutants, to smoking, or to other causes. [ 1 ] [ 39 ] The majority of miners in the studies were smokers, and all inhaled dust and other pollutants in mines. Because radon and cigarette smoke both cause lung cancer, and because the effect of smoking is far greater than that of radon, it is difficult to disentangle the effects of the two kinds of exposure; misestimating smoking habits by even a few percent can obscure the radon effect. [ 40 ]
Since that time, ventilation and other measures have been used to reduce radon levels in most affected mines that continue to operate. In recent years, the average annual exposure of uranium miners has fallen to levels similar to the concentrations inhaled in some homes. This has reduced the risk of occupationally induced cancer from radon, although it still remains an issue both for those who are currently employed in affected mines and for those who have been employed in the past. [ 38 ] The power to detect any excess risks in miners nowadays is likely to be small, exposures being much smaller than in the early years of mining. [ 41 ]
A confounding factor with mines is that both radon concentration and carcinogenic dust (such as quartz dust) depend on the amount of ventilation. [ 42 ] This makes it very difficult to state that radon causes cancer in miners; the lung cancers could be partially or wholly caused by high dust concentrations from poor ventilation. [ 42 ]
Radon-222 has been classified by the International Agency for Research on Cancer as carcinogenic to humans. [ 43 ] In September 2009, the World Health Organization released a comprehensive global initiative on radon that recommended a reference level of 100 Bq/m 3 for radon, urging establishment or strengthening of radon measurement and mitigation programs as well as development of building codes requiring radon prevention measures in homes under construction. [ 44 ] Elevated lung cancer rates have been reported from a number of cohort and case-control studies of underground miners exposed to radon and its decay products, but the main confounding factors in all miners' studies are smoking and dust. According to most regulatory bodies, there is sufficient evidence for the carcinogenicity of radon and its decay products in humans for such exposures. [ 45 ] However, contrary results continue to be debated; [ 46 ] [ 47 ] in particular, a recent retrospective case-control study of lung cancer risk showed a substantial reduction in cancer rates at 50 to 123 Bq per cubic meter relative to a group at zero to 25 Bq per cubic meter. [ 48 ] Additionally, a meta-analysis of many radon studies, which individually show an increased radon risk, gives no confirmation of that conclusion: the pooled data show a log-normal distribution whose maximum corresponds to zero excess lung cancer risk below 800 Bq per cubic meter. [ 49 ]
The primary route of exposure to radon and its progeny is inhalation. Radiation exposure from radon is indirect. The health hazard from radon does not come primarily from radon itself, but rather from the radioactive products formed in the decay of radon. [ 1 ] The general effects of radon to the human body are caused by its radioactivity and consequent risk of radiation-induced cancer . Lung cancer is the only observed consequence of high concentration radon exposures; both human and animal studies indicate that the lung and respiratory system are the primary targets of radon daughter-induced toxicity. [ 1 ]
Radon has a short half-life (3.8 days) and decays into other solid particulate radium-series radioactive nuclides.
Two of these decay products, polonium-218 and polonium-214, present a significant radiologic hazard. [ 50 ] If the gas is inhaled, the radon atoms decay in the airways or the lungs, resulting in radioactive polonium and ultimately lead atoms attaching to the nearest tissue. If dust or aerosol that already carries radon decay products is inhaled, the deposition pattern of the decay products in the respiratory tract depends on the behaviour of the particles in the lungs. Smaller-diameter particles diffuse further into the respiratory system, whereas larger particles (tens to hundreds of microns in size) often deposit higher in the airways and are cleared by the body's mucociliary escalator. Deposited radioactive atoms or dust or aerosol particles continue to decay, causing continued exposure by emitting energetic alpha radiation, with some associated gamma radiation, that can damage vital molecules in lung cells, [ 51 ] by either creating free radicals or causing DNA breaks or damage, [ 50 ] perhaps causing mutations that sometimes turn cancerous. In addition, radon that crosses the lung membrane can be carried by the blood, and radon ingested with food or water can likewise deliver radioactive progeny to other parts of the body. [ citation needed ]
The risk of lung cancer caused by smoking is much higher than the risk of lung cancer caused by indoor radon. Radiation from radon has also been linked to increased lung cancer among smokers. It is generally believed that exposure to radon and cigarette smoking are synergistic; that is, the combined effect exceeds the sum of their independent effects. This is because the daughters of radon often become attached to smoke and dust particles, and are then able to lodge in the lungs. [ 52 ]
It is unknown whether radon causes other types of cancer, but recent studies suggest a need for further studies to assess the relationship between radon and leukemia . [ 53 ] [ 54 ]
The effects of radon, if found in food or drinking water, are unknown. Following ingestion of radon dissolved in water, the biological half-life for removal of radon from the body ranges from 30 to 70 minutes. More than 90% of the absorbed radon is eliminated by exhalation within 100 minutes; by 600 minutes, only 1% of the absorbed amount remains in the body. [ 1 ]
While radon presents the aforementioned risks in adults, exposure in children leads to a unique set of health hazards that are still being researched. The physical composition of children leads to faster rates of exposure through inhalation given that their respiratory rate is higher than that of adults, resulting in more gas exchange and more potential opportunities for radon to be inhaled. [ 55 ]
The resulting health effects in children are similar to those in adults, predominantly lung cancer and respiratory illnesses such as asthma, bronchitis, and pneumonia. [ 55 ] While there have been numerous studies assessing the link between radon exposure and childhood leukemia, the results vary widely. Many ecological studies show a positive association between radon exposure and childhood leukemia; however, most case-control studies have produced only a weak correlation. [ 56 ] Genotoxicity has been noted in children exposed to high levels of radon: a significant increase in the frequency of aberrant cells was observed, as well as an "increase in the frequencies of single and double fragments, chromosome interchanges, [and] number of aberrations chromatid and chromosome type". [ 57 ]
Because radon is generally associated with diseases that are not detected until many years after elevated exposure, the public may not consider the amount of radon that children are currently being exposed to. Aside from exposure in the home, one of the major contributors to radon exposure in children is the schools they attend almost every day. A survey conducted in schools across the United States to detect radon levels estimated that about one in five schools has at least one room (more than 70,000 schoolrooms) with short-term levels above 4 pCi/L. [ 58 ]
Many states have active radon testing and mitigation programs in place, which require testing in buildings such as public schools. However, these are not standardized nationwide, and rules and regulations on reducing high radon levels are even less common. The School Health Policies and Practices Study (SHPPS), conducted by the CDC in 2012, found that of schools located in counties with high predicted indoor radon levels, only 42.4% had radon testing policies, and a mere 37.5% had a policy for radon-resistant new construction practices. [ 59 ] Only about 20% of all schools nationwide have done testing, even though the EPA recommends that every school be tested. [ 58 ] These numbers are arguably not high enough to ensure protection of the majority of children from elevated radon exposures. For exposure standards to be effective, they should be set for those most susceptible. [ citation needed ]
UNSCEAR recommends [ 60 ] a reference value of 9 nSv (Bq·h/m 3 ) −1 .
For example, a person spending 7000 hours per year indoors at a concentration of 40 Bq/m 3 receives an effective dose of about 1 mSv/year.
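This figure can be reproduced approximately from the UNSCEAR coefficient. The sketch below additionally assumes an equilibrium factor of about 0.4 between indoor radon and its short-lived progeny, a commonly used value that is not stated explicitly in the text above:

E \approx 40\ \mathrm{Bq/m^3}\times 0.4 \times 7000\ \mathrm{h/year}\times 9\ \mathrm{nSv\,(Bq{\cdot}h/m^3)^{-1}} \approx 1.0\times 10^{6}\ \mathrm{nSv/year} = 1\ \mathrm{mSv/year}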
Studies of miners exposed to radon and its decay products provide a direct basis for assessing their lung cancer risk. The BEIR VI report, entitled Health Effects of Exposure to Radon , [ 40 ] reported an excess relative risk from exposure to radon that was equivalent to 1.8% per megabecquerel hours per cubic meter (MBq·h/m 3 ) (95% confidence interval: 0.3, 35) for miners with cumulative exposures below 30 MBq·h/m 3 . [ 41 ] Estimates of risk per unit exposure are 5.38×10 −4 per WLM; 9.68×10 −4 /WLM for ever smokers; and 1.67×10 −4 per WLM for never smokers. [ 5 ]
According to the UNSCEAR modeling, based on these miners' studies, the excess relative risk from long-term residential exposure to radon at 100 Bq/m 3 is considered to be about 0.16 (after correction for uncertainties in exposure assessment), with about a threefold factor of uncertainty higher or lower than that value. [ 41 ] In other words, the absence of ill effects (or even positive hormesis effects) at 100 Bq/m 3 is compatible with the known data. [ citation needed ]
The ICRP 65 model [ 61 ] follows the same approach, and estimates the relative lifelong risk of radon-induced cancer death at 1.23 × 10 −6 per Bq/(m 3 ·year). [ 62 ] This relative risk is a global indicator; the risk estimation is independent of sex, age, or smoking habit. Thus, if a smoker's chances of dying of lung cancer are 10 times those of a nonsmoker, the relative risks for a given radon exposure will be the same according to that model, meaning that the absolute risk of a radon-generated cancer for a smoker is (implicitly) tenfold that of a nonsmoker.
The risk estimates correspond to a unit risk of approximately 3–6 × 10 −5 per Bq/m 3 , assuming a lifetime risk of lung cancer of 3%. This means that a person living in an average European dwelling with 50 Bq/m 3 has a lifetime excess lung cancer risk of 1.5–3 × 10 −3 . Similarly, a person living in a dwelling with a high radon concentration of 1000 Bq/m 3 has a lifetime excess lung cancer risk of 3–6%, implying a doubling of background lung cancer risk. [ 63 ]
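These lifetime excess risks follow directly from the quoted unit risk by multiplication, for example:

50\ \mathrm{Bq/m^3}\times (3\text{–}6)\times 10^{-5}\ \mathrm{(Bq/m^3)^{-1}} \approx 1.5\text{–}3\times 10^{-3}, \qquad 1000\ \mathrm{Bq/m^3}\times (3\text{–}6)\times 10^{-5}\ \mathrm{(Bq/m^3)^{-1}} \approx 3\text{–}6\times 10^{-2}

Against the assumed baseline lifetime lung cancer risk of about 3%, an excess of 3–6% corresponds to the approximate doubling mentioned above.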
The BEIR VI model proposed by the National Academy of Sciences of the USA [ 40 ] is more complex. It is a multiplicative model that estimates an excess risk per exposure unit. It takes into account age, elapsed time since exposure, and duration and length of exposure, and its parameters allow for taking smoking habits into account. [ 62 ] In the absence of other causes of death, the absolute risks of lung cancer by age 75 at usual radon concentrations of 0, 100, and 400 Bq/m 3 would be about 0.4%, 0.5%, and 0.7%, respectively, for lifelong nonsmokers, and about 25 times greater (10%, 12%, and 16%) for cigarette smokers. [ 64 ]
There is great uncertainty in applying risk estimates derived from studies in miners to the effects of residential radon, and direct estimates of the risks of residential radon are needed. [ 38 ]
As with the miner data, the same confounding factor of other carcinogens such as dust applies. [ 42 ]
The largest natural contributor to public radiation dose is radon, a naturally occurring, radioactive gas found in soil and rock, [ 65 ] which comprises approximately 55% of the annual background dose.
Radon gas levels vary by locality and the composition of the underlying soil and rocks.
Radon (at concentrations encountered in mines) was recognized as carcinogenic in the 1980s, in view of the lung cancer statistics for miners' cohorts. [ 66 ] Although radon may present significant risks, thousands of persons annually go to radon-contaminated mines for deliberate exposure to help with the symptoms of arthritis without any serious health effects. [ 67 ] [ 68 ]
Radon as a terrestrial source of background radiation is of particular concern because, although overall very rare, where it does occur it often does so in high concentrations. Some of these areas, including parts of Cornwall and Aberdeenshire have high enough natural radiation levels that nuclear licensed sites cannot be built there—the sites would already exceed legal limits before they opened, and the natural topsoil and rock would all have to be disposed of as low-level nuclear waste . [ 69 ] [ clarification needed ] People in affected localities can receive up to 10 mSv per year background radiation. [ 69 ]
This [ clarification needed ] led to a health policy problem: what is the health impact of exposure to radon concentrations (100 Bq/m 3 ) typically found in some buildings? [ clarification needed ]
When exposure to a carcinogenic substance is suspected, the cause/effect relationship in any given case can never be ascertained. Lung cancer occurs spontaneously, and there is no difference between a "natural" cancer and one caused by radon (or smoking). Furthermore, it takes years for a cancer to develop, so determining the past exposure of a case is usually very approximate. The health effect of radon can only be demonstrated through theory and statistical observation. [ citation needed ]
The study design for epidemiological methods may be of three kinds: ecological (geographical) studies, cohort studies, and case-control studies.
Furthermore, theory and observation must confirm each other for a relationship to be accepted as fully proven. Even when a statistical link between factor and effect appears significant, it must be backed by a theoretical explanation; and a theory is not accepted as factual unless confirmed by observations. [ citation needed ]
Cohort studies are impractical for the study of domestic radon exposure. With the expected effect of small exposures being very small, the direct observation of this effect would require huge cohorts: the populations of whole countries. [ citation needed ]
Several ecological studies have been performed to assess possible relationships between selected cancers and estimated radon levels within particular geographic regions where environmental radon levels appear to be higher than other geographic regions. [ 73 ] Results of such ecological studies are mixed; both positive and negative associations, as well as no significant associations, have been suggested. [ 74 ]
The most direct way to assess the risks posed by radon in homes is through case-control studies.
The studies have not produced a definitive answer, primarily because the risk is likely to be very small at the low exposure encountered from most homes and because it is difficult to estimate radon exposures that people have received over their lifetimes. In addition, it is clear that far more lung cancers are caused by smoking than are caused by radon. [ 40 ]
Epidemiologic radon studies have found trends toward increased lung cancer risk from radon with no evidence of a threshold, and evidence against a threshold as high as 150 Bq/m 3 (almost exactly the EPA's action level of 4 pCi/L). [ 64 ] Another study similarly found no evidence of a threshold but lacked the statistical power to clearly identify a threshold at this low level. [ 75 ] Notably, the latter departure from zero risk at low levels convinced the World Health Organization that, "The dose-response relation seems to be linear without evidence of a threshold, meaning that the lung cancer risk increases proportionally with increasing radon exposure." [ 76 ]
The most elaborate case-control epidemiologic radon study, performed by R. William Field and colleagues, identified a 50% increased lung cancer risk with prolonged radon exposure at the EPA's action level of 4 pCi/L. [ 77 ] Iowa has the highest average radon concentrations in the United States and a very stable population, which added to the strength of the study. In that study, the odds ratio was found to be significantly elevated (at the 95% CI level) for cumulative radon exposures above 17 WLM (6.2 pCi/L = 230 Bq/m 3 and above). [ citation needed ]
The results of a methodical ten-year-long, case-control study of residential radon exposure in Worcester County, Massachusetts, found an apparent 60% reduction in lung cancer risk amongst people exposed to low levels (0–150 Bq/m 3 ) of radon gas, levels typically encountered in 90% of American homes, an apparent support for the idea of radiation hormesis . [ 78 ] In that study, a significant result (95% CI) was obtained for the 75–150 Bq/m 3 category.
The study paid close attention to the cohort's levels of smoking, occupational exposure to carcinogens, and educational attainment. However, unlike the majority of the residential radon studies, the study was not population-based. Errors in retrospective exposure assessment could not be ruled out in the findings at low levels. Other studies into the effects of domestic radon exposure have not reported a hormetic effect, including for example the respected "Iowa Radon Lung Cancer Study" of Field et al. (2000), which also used sophisticated radon exposure dosimetry . [ 77 ]
"Radon therapy" is an intentional exposure to radon via inhalation or ingestion. Nevertheless, epidemiological evidence shows a clear link between breathing high concentrations of radon and incidence of lung cancer. [ 79 ] [ failed verification ]
In the late 20th century and early 21st century, some "health mines" were established in Basin, Montana , which attracted people seeking relief from health problems such as arthritis through limited exposure to radioactive mine water and radon. [ 80 ] The practice is controversial because of the well-documented ill effects of high-dose radiation on the body. [ 81 ] Pseudoscientific doctors claim beneficial long-term effects, [ 68 ] [ dubious – discuss ] although proper clinical trials have not been performed. The claims of one such study are of concern because the authors excluded results from patients requiring cortisone injections as a result of exacerbation of their arthritis during the course of treatment, assumed that 60 patients were representative of all patients, and did not record whether any patients had taken any NSAIDs. The study also claims that the therapeutic benefit comes from the "integration of radon into the skin". [ 68 ]
Radioactive water baths have been applied since 1906 in Jáchymov , Czech Republic, but even before the discovery of radon they were used in Bad Gastein , Austria. Radium-rich springs are also used in traditional Japanese onsen in Misasa , Tottori Prefecture . Drinking therapy is applied in Bad Brambach , Germany. Inhalation therapy is carried out in the Gasteiner Heilstollen, Austria, in Kowary , Poland, and in Boulder, Montana , United States. In the United States and Europe there are several "radon spas ", where people sit for minutes or hours in a high-radon atmosphere in the belief that low doses of radiation will invigorate or energize them. [ 82 ]
Radon has been produced commercially for use in radiation therapy , but for the most part has been replaced by radionuclides made in particle accelerators and nuclear reactors . Radon has been used in implantable seeds, made of gold or glass, primarily used to treat cancers.
The gold seeds were produced by filling a long tube with radon pumped from a radium source, the tube then being divided into short sections by crimping and cutting. The gold layer keeps the radon within and filters out the alpha and beta radiation, while allowing the gamma rays, which kill the diseased tissue, to escape. The activities might range from 2 to 200 MBq per seed. [ 83 ] The gamma rays are produced by radon and the first short-lived elements of its decay chain ( 218 Po, 214 Pb, 214 Bi, 214 Po). [ citation needed ]
Because radon and its first decay products are very short-lived, the seed is left in place. After 11 half-lives (42 days), radon radioactivity is at about 1/2,000 of its original level. At this stage, the predominant residual activity is due to the radon decay product 210 Pb, whose half-life (22.3 years) is about 2,000 times that of radon, and to its descendants 210 Bi and 210 Po , totalling 0.03% of the initial seed activity. [ citation needed ]
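The 1/2,000 figure is simply the result of exponential decay over eleven half-lives:

\left(\tfrac{1}{2}\right)^{11} = \frac{1}{2^{11}} = \frac{1}{2048} \approx \frac{1}{2000}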
The Federal Radon Action Plan, also known as FRAP, was created in 2010 and launched in 2011. [ 84 ] It was piloted by the U.S. Environmental Protection Agency in conjunction with the U.S. Departments of Health and Human Services, Agriculture, Defense, Energy, Housing and Urban Development, the Interior, Veterans Affairs, and the General Services Administration. The goal set forth by FRAP was to eliminate preventable radon-induced cancer by expanding radon testing, mitigating high levels of radon exposure, developing radon-resistant construction, and meeting Healthy People 2020 radon objectives. [ 84 ] The plan identified barriers to change, such as limited public knowledge of the dangers of radon exposure, the perceived high cost of mitigation, and the limited availability of radon testing. It also identified major ways to create change: demonstrating the importance of testing and the ease of mitigation, providing incentives for testing and mitigation, and building the radon services industry. [ 84 ] To meet these goals, representatives from each organization and department established specific commitments and timelines and continued to meet periodically. FRAP concluded in 2016, when the National Radon Action Plan took over. In the final report on commitments, it was found that FRAP completed 88% of its commitments. [ 85 ] It reported achieving the highest rates of radon mitigation and new-construction mitigation in the United States as of 2014. [ 85 ] FRAP concluded that, because of its efforts, at least 1.6 million homes, schools, and childcare facilities received direct and immediate positive effects. [ 85 ]
The National Radon Action Plan, also known as NRAP, was created in 2014 and launched in 2015. [ 86 ] It is led by the American Lung Association with collaborative efforts from the American Association of Radon Scientists and Technologists, American Society of Home Inspectors, Cancer Survivors Against Radon, Children's Environmental Health Network, Citizens for Radioactive Radon Reduction, Conference of Radiation Control Program Directors, Environmental Law Institute, National Center for Healthy Housing, U.S. Environmental Protection Agency, U.S. Department of Health and Human Services, and U.S. Department of Housing and Urban Development. The goals of NRAP are to continue the efforts set forth by FRAP to eliminate preventable radon-induced cancer by expanding radon testing, mitigating high levels of radon exposure, and developing radon-resistant construction. [ 87 ] NRAP also aims to reduce radon risk in 5 million homes and save 3,200 lives by 2020. [ 87 ] To meet these goals, representatives from each organization have established the following action plans: embed radon risk reduction as a standard practice across housing sectors, provide incentives and support to test and mitigate radon, promote the use of certified radon services and build the industry, and increase public attention to radon risk and the importance of reduction. [ 87 ] The NRAP is currently in action, implementing programs, identifying approaches, and collaborating across organizations to achieve these goals.
The only dose-effect relationships available are those from cohorts of miners exposed to radon (at much higher exposures). Studies of Hiroshima and Nagasaki survivors are less informative, because the exposure to radon is chronic and localized and the ionizing radiation involved consists of alpha particles.
Although low-exposed miners experienced exposures comparable to long-term residence in high-radon dwellings, the mean cumulative exposure among miners is approximately 30-fold higher than that associated with long-term residency in a typical home. Moreover, smoking is a significant confounding factor in all miners' studies. It can be concluded from miner studies that when radon exposure in dwellings is comparable to that in mines (above 1000 Bq/m 3 ), radon is a proven health hazard; but in the 1980s very little was known about the dose-effect relationship, both theoretically and statistically. [ citation needed ]
Studies have been conducted since the 1980s, both in epidemiology and in radiobiology.
In radiobiology and carcinogenesis studies, progress has been made in understanding the first steps of cancer development, but not to the point of validating a reference dose-effect model. The only certainty gained is that the process is very complex, and that the resulting dose-effect response is most probably not linear.
Biologically based models have also been proposed that could project substantially reduced carcinogenicity at low doses. [ 5 ] [ 88 ] [ 89 ] In the epidemiological field, no definite conclusion has been reached. However, from the evidence now available, a threshold exposure, that is, a level of exposure below which there is no effect of radon, cannot be excluded. [ 40 ]
Given the radon distribution observed in dwellings, and the dose-effect relationship proposed by a given model, a theoretical number of victims can be calculated, and serve as a basis for public health policies. [ citation needed ]
With the BEIR VI model, the main health effect (nearly 75% of the death toll) is to be found at low radon concentration exposures, because most of the population (about 90%) lives in the 0–200 Bq/m 3 range. [ 90 ] Under this modeling, the best policy is obviously to reduce the radon levels of all homes where the radon level is above average, because this leads to a significant decrease of radon exposure for a significant fraction of the population; but this effect is predicted in the 0–200 Bq/m 3 range, where the linear model has its maximum uncertainty. From the statistical evidence available, a threshold exposure cannot be excluded; if such a threshold exists, the real radon health effect would in fact be limited to those homes where the radon concentration reaches the levels observed in mines (at most a few percent of homes). If a radiation hormesis effect exists after all, the situation would be worse still: under that hypothesis, suppressing the natural low exposure to radon (in the 0–200 Bq/m 3 range) would actually lead to an increase in cancer incidence, due to the suppression of this (hypothetical) protective effect. As the low-dose response is unclear, the choice of a model is very controversial.
With no conclusive statistics available for the levels of exposure usually found in homes, the risks posed by domestic exposures are usually estimated on the basis of observed lung cancer deaths caused by higher exposures in mines, under the assumption that the risk of developing lung cancer increases linearly as the exposure increases. [ 40 ] This was the basis for the model proposed by BEIR IV in the 1980s. The linear no-threshold model has since been retained, as a conservative approach, by the UNSCEAR [ 41 ] report and the BEIR VI and BEIR VII [ 91 ] publications, essentially for lack of a better choice:
Until the [...] uncertainties on low-dose response are resolved, the Committee believes that [ the linear no-threshold model ] is consistent with developing knowledge and that it remains, accordingly, the most scientifically defensible approximation of low-dose response. However, a strictly linear dose response should not be expected in all circumstances.
The BEIR VI committee adopted the linear no-threshold assumption based on its understanding of the mechanisms of radon-induced lung cancer, but recognized that this understanding is incomplete and that therefore the evidence for this assumption is not conclusive. [ 5 ]
In discussing these figures, it should be kept in mind that both the radon distribution in dwellings and its effect at low exposures are not precisely known, and the radon health effect has to be computed (deaths caused by radon domestic exposure cannot be observed as such). These estimations are strongly dependent on the model retained.
According to these models, radon exposure is thought to be the second major cause of lung cancer after smoking. [ 66 ] Iowa has the highest average radon concentration in the United States; studies performed there have demonstrated a 50% increased lung cancer risk with prolonged radon exposure above the EPA's action level of 4 pCi/L. [ 77 ] [ 92 ]
Based on studies carried out by the National Academy of Sciences in the United States, radon would thus be the second leading cause of lung cancer after smoking , and accounts for 15,000 to 22,000 cancer deaths per year in the US alone. [ 93 ] The United States Environmental Protection Agency (EPA) says that radon is the number one cause of lung cancer among non-smokers. [ 94 ] The general population is exposed to small amounts of polonium as a radon daughter in indoor air; the isotopes 214 Po and 218 Po are thought to cause the majority [ 95 ] of the estimated 15,000–22,000 lung cancer deaths in the US every year that have been attributed to indoor radon. [ 96 ] The Surgeon General of the United States has reported that over 20,000 Americans die each year of radon-related lung cancer. [ 97 ]
In the United Kingdom, residential radon would be, after cigarette smoking, the second most frequent cause of lung cancer deaths: according to models, 83.9% of deaths are attributed to smoking only, 1.0% to radon only, and 5.5% to a combination of radon and smoking. [ 38 ]
The World Health Organization has recommended a radon reference concentration of 100 Bq/m 3 (2.7 pCi/L). [ 98 ] The European Union recommends that action should be taken starting from concentrations of 400 Bq/m 3 (11 pCi/L) for older dwellings and 200 Bq/m 3 (5 pCi/L) for newer ones. [ 99 ] After publication of the North American and European Pooling Studies, Health Canada proposed a new guideline that lowers their action level from 800 to 200 Bq/m 3 (22 to 5 pCi/L). [ 100 ] The United States Environmental Protection Agency (EPA) strongly recommends action for any dwelling with a concentration higher than 148 Bq/m 3 (4 pCi/L), [ 51 ] and encourages action starting at 74 Bq/m 3 (2 pCi/L).
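The two units used in these guidelines are related by the definition of the curie (1 Ci = 3.7 × 10 10 Bq), so that 1 pCi/L = 37 Bq/m 3 ; for example:

4\ \mathrm{pCi/L}\times 37\ \frac{\mathrm{Bq/m^3}}{\mathrm{pCi/L}} = 148\ \mathrm{Bq/m^3}, \qquad 100\ \mathrm{Bq/m^3} \div 37\ \frac{\mathrm{Bq/m^3}}{\mathrm{pCi/L}} \approx 2.7\ \mathrm{pCi/L}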
EPA recommends that all homes be monitored for radon. If testing shows levels less than 4 picocuries radon per liter of air (148 Bq/m 3 ), then no action is necessary. For levels of 20 picocuries radon per liter of air (740 Bq/m 3 ) or higher, the home owner should consider some type of procedure to decrease indoor radon levels. [ 1 ] For instance, as radon has a half-life of about four days, opening the windows once a day can cut the mean radon concentration to roughly one fourth of its level.
The United States Environmental Protection Agency (EPA) recommends homes be fixed if an occupant's long-term exposure will average 4 picocuries per liter (pCi/L), that is, 148 Bq/m 3 . [ 101 ] EPA estimates that one in 15 homes in the United States has radon levels above the recommended guideline of 4 pCi/L. [ 51 ] EPA radon risk level tables, including comparisons to other risks encountered in life, are available in its citizen's guide. [ 102 ] The EPA estimates that nationally, 8% to 12% of all dwellings are above its maximum "safe level" of four picocuries per liter (equivalent to roughly 200 chest x-rays). The United States Surgeon General and the EPA both recommend that all homes be tested for radon.
The limits retained do not correspond to a known threshold in the biological effect, but are determined by a cost-efficiency analysis. The EPA believes that a 150 Bq/m 3 level (4 pCi/L) is achievable in the majority of homes for a reasonable cost; the average cost per life saved by using this action level is about $700,000. [ 103 ]
For radon in drinking water, the World Health Organization issued guidelines in 1988 stating that remedial action should be considered when the radon activity in a building exceeds 100 kBq/m 3 , and that remedial action should be taken without long delay when it exceeds 400 kBq/m 3 . [ 1 ]
There are relatively simple tests for radon gas. Radon test kits are commercially available. The short-term radon test kits used for screening purposes are inexpensive, in many cases free. In the United States, discounted test kits can be purchased online through The National Radon Program Services at Kansas State University or through state radon offices. [ citation needed ] Information about local radon zones and specific state contact information can be accessed through the Environmental Protection Agency (EPA) Map. [ 104 ] The kit includes a collector that the user hangs in the lowest livable floor of the dwelling for 2 to 7 days. [ 105 ] Charcoal canisters are another type of short-term radon test, and are designed to be used for 2 to 4 days. [ 105 ] The user then sends the collector to a laboratory for analysis. Both devices are passive, meaning that they do not need power to function. [ 105 ]
The accuracy of a residential radon test depends upon the lack of ventilation in the house when the sample is being obtained. Thus, the occupants will be instructed not to open windows, etc., for ventilation during the pendency of the test, usually two days or more. [ citation needed ]
Long-term kits, taking collections for 3 months up to one year, are also available. [ 105 ] An open-land test kit can test radon emissions from the land before construction begins. A Lucas cell is one type of long-term device. A Lucas cell is also an active device, or one that requires power to function. Active devices provide continuous monitoring, and some can report on the variation of radon and interference within the testing period. These tests usually require operation by trained testers and are often more expensive than passive testing. [ 105 ] The National Radon Proficiency Program (NRPP) provides a list of radon measurement professionals. [ 106 ]
Radon levels fluctuate naturally. An initial test might not be an accurate assessment of a home's average radon level. Transient weather can affect short term measurements. [95] Therefore, a high result (over 4 pCi/L) justifies repeating the test before undertaking more expensive abatement projects. Measurements between 4 and 10 pCi/L warrant a long-term radon test. Measurements over 10 pCi/L warrant only another short-term test so that abatement measures are not unduly delayed. Purchasers of real estate are advised to delay or decline a purchase if the seller has not successfully abated radon to 4 pCi/L or less. [95]
Since radon concentrations vary substantially from day to day, single grab-type measurements are generally not very useful, except as a means of identifying a potential problem area, and indicating a need for more sophisticated testing. [ 107 ] The EPA recommends that an initial short-term test be performed in a closed building. An initial short-term test of 2 to 90 days allows residents to be informed quickly in case a home contains high levels of radon. Long-term tests provide a better estimate of the average annual radon level. [ 108 ]
Transport of radon in indoor air is almost entirely controlled by the ventilation rate in the enclosure. Since air pressure is usually lower inside houses than it is outside, the home acts like a vacuum, drawing radon gas in through cracks in the foundation or other openings such as ventilation systems. [ 109 ] Generally, the indoor radon concentrations increase as ventilation rates decrease. [ 107 ] In a well ventilated place, the radon concentration tends to align with outdoor values (typically 10 Bq/m 3 , ranging from 1 to 100 Bq/m 3 ).
Radon levels in indoor air can be lowered in several ways, from sealing cracks in floors and walls to increasing the ventilation rate of the building. Some of the accepted ways of reducing the amount of radon accumulating in a dwelling are described below. [ 110 ]
The half-life for radon is 3.8 days, indicating that once the source is removed, the hazard will be greatly reduced within approximately one month (seven half-lives).
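As a rough check on the one-month figure:

7 \times 3.8\ \mathrm{days} \approx 27\ \mathrm{days}, \qquad \left(\tfrac{1}{2}\right)^{7} = \frac{1}{128} \approx 0.8\%

so less than 1% of the radon present when the source is removed remains after about a month.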
Positive-pressure ventilation systems can be combined with a heat exchanger to recover energy in the process of exchanging air with the outside, and simply exhausting basement air to the outside is not necessarily a viable solution as this can draw radon gas into a dwelling. Homes built on a crawl space may benefit from a radon collector installed under a "radon barrier, or membrane" (a sheet of plastic or laminated polyethylene film that covers the crawl space floor).
ASTM E-2121 is a standard for reducing radon in homes as far as practicable below 4 picocuries per liter (pCi/L) in indoor air. [ 111 ] [ 112 ]
In the US, approximately 14 states have state radon programs that train and license radon mitigation contractors and radon measurement professionals; state health departments can advise whether a given state licenses radon professionals. The National Environmental Health Association and the National Radon Safety Board administer voluntary National Radon Proficiency Programs for radon professionals, consisting of individuals and companies wanting to take training courses and examinations to demonstrate their competency. [ 113 ] Without the proper equipment or technical knowledge, attempts at mitigation can actually increase radon levels or create other potential hazards and additional costs. [ 114 ] A list of certified mitigation service providers is available through state radon offices, which are listed on the EPA website. [ 115 ] [ 114 ] Indoor radon can be mitigated by sealing basement foundations, by water drainage, or by sub-slab or sub-membrane depressurization. In many cases, mitigators can use PVC piping and specialized radon suction fans to exhaust sub-slab or sub-membrane radon and other soil gases to the outside atmosphere. Most of these solutions for radon mitigation require maintenance, and it is important to replace fans or filters as needed to ensure continued proper functioning. [ 109 ]
Since radon gas is found in most soil and rocks, it is not only able to move into the air, but also into underground water sources. [ 116 ] Radon may be present in well water and can be released into the air in homes when water is used for showering and other household uses. [ 109 ] If it is suspected that a private well or drinking water may be affected by radon, the National Radon Program Services Hotline at 1-800-SOS-RADON can be contacted for information regarding state radon office phone numbers. State radon offices can provide additional resources, such as local laboratories that can test water for radon. [ 109 ]
If it is determined that radon is present in a private well, installing either a point-of-use or point-of-entry solution may be necessary. [ 109 ] Point-of-use treatments are installed at the tap and are only helpful in removing radon from drinking water. To address the more common problem of breathing in radon released from water used during showers and other household activities, a point-of-entry solution may be more reliable. [ 109 ] Point-of-entry systems usually involve a granular activated carbon filter or an aeration system; both methods can help to remove radon before it enters the home's water distribution system. [ 109 ] Aeration systems and granular activated carbon filters both have advantages and disadvantages, so it is recommended to contact state radon departments or a water treatment professional for specific recommendations. [ 109 ]
The high cost of radon remediation in the 1980s led to detractors arguing that the issue is a financial boondoggle reminiscent of the swine flu scare of 1976 . [ 117 ] They further argued that the results of mitigation are inconsistent with lowered cancer risk, especially when indoor radon levels are in the lower range of the actionable exposure level. [ 117 ] | https://en.wikipedia.org/wiki/Health_effects_of_radon |
A number of possible health hazards of air travel have been investigated.
On an airplane, people sit in a confined space for extended periods of time, which increases the risk of transmission of airborne infections. [ 1 ] [ 2 ] For this reason, airlines place restrictions on the travel of passengers with known airborne contagious diseases (e.g. tuberculosis ). During the severe acute respiratory syndrome (SARS) epidemic of 2003, awareness of the possibility of acquisition of infection on a commercial aircraft reached its zenith when on one flight from Hong Kong to Beijing , 16 of 120 people on the flight developed proven SARS from a single index case . [ 3 ]
Very limited research has been done on contagious diseases on aircraft. The two most common respiratory pathogens to which air passengers are exposed are parainfluenza and influenza . [ 4 ] In one study, the flight ban imposed following the attacks of September 11, 2001 was found to have restricted the global spread of seasonal influenza, resulting in a much milder influenza season that year, [ 5 ] and the ability of influenza to spread on aircraft has been well documented. [ 1 ] There are no data on the relative contributions of large droplets, small particles, close contact, and surface contamination, or on the relative importance of these methods of transmission for specific diseases, and therefore very little information on how to control the risk of infection. There is no standardisation of air handling by aircraft, of the installation of HEPA filters , or of hand washing by air crew, and no published information on the relative efficacy of any of these interventions in reducing the spread of infection. [ 6 ]
Air travel, like other forms of travel, radically increases the speed at which infections spread around the world, as viruses rapidly spread to large numbers of people living across the world. Human and cargo traffic greatly facilitates the spread of pathogens across the world, [ 7 ] [ 8 ] for example during the COVID-19 pandemic .
Deep vein thrombosis (DVT) is the third most common vascular disease, after stroke and heart attack. It is estimated that DVT affects one in 5,000 travellers on long flights. [ 9 ] [ 10 ] Risk increases with exposure to more flights within a short time frame and with increasing duration of flights. [ 10 ] According to a health expert in Canada, even though the risk of a blood clot is low, given the number of people who fly it amounts to a public health risk. [ 9 ] It was reported in 2016 that the average distance between seat rows had declined to 79 centimetres (31 in), from over 89 centimetres (35 in), while the average seat size had shrunk to 43 centimetres (17 in) from 46 centimetres (18 in) over the previous two decades. [ 9 ]
Flying 12 km (39,000 ft) high, passengers and crews of jet airliners are exposed to at least 10 times the cosmic ray dose that people at sea level receive. Every few years, a geomagnetic storm permits a solar particle event to penetrate down to jetliner altitudes. Aircraft flying polar routes near the geomagnetic poles are at particular risk. [ 11 ] [ 12 ] [ 13 ] There is also increased radiation from space . [ 14 ]
Other possible hazards of air travel that have been investigated include airsickness and chemical contamination of cabin air .
In low-risk pregnancies, most health care providers approve flying until about 36 weeks of gestational age. [ 15 ] Most airlines allow pregnant women to fly short distances at less than 36 weeks, and long distances at less than 32 weeks. [ 16 ] Many airlines require a doctor's note approving flying, especially beyond 28 weeks. [ 16 ]
Health information-seeking behaviour (HISB), also known as health information seeking , health seeking behaviour or health information behaviour , refers to how people look for information about health and illness. [ 1 ] HISB is a key strategy for many people to understand their health problems and to cope with illness. [ 2 ] With the development of information technologies and networks, people increasingly seek health information on the Internet, and online HISB is particularly common in certain scenarios. [ 3 ]
Health information-seeking behaviour refers to the various ways people look for information about health and illness. [ 4 ] HISB can take different forms, for example actively looking for health information or passively receiving it while doing something else. [ 5 ]
Health information seeking not only affects knowledge but can also change how people behave before, during and after their illness. [ 6 ]
Among people with inflammatory bowel disease ( ulcerative colitis or Crohn's disease ), the information most commonly sought usually concerns treatments and medication for the condition. Further information needs typically concern basic information about inflammatory bowel disease, managing the condition and daily life, and its effects on sexuality and reproductive health . [ 7 ]
Health information technology ( HIT ) is health technology , particularly information technology , applied to health and health care . It supports health information management across computerized systems and the secure exchange of health information between consumers , providers , payers , and quality monitors. [ 1 ] Based on a 2008 report on a small series of studies conducted at four sites that provide ambulatory care – three U.S. medical centers and one in the Netherlands, the use of electronic health records (EHRs) was viewed as the most promising tool for improving the overall quality, safety and efficiency of the health delivery system. [ 2 ]
On September 4, 2013, the Health IT Policy Committee (HITPC) accepted and approved recommendations from the Food and Drug Administration Safety and Innovation Act (FDASIA) working group for a risk-based regulatory framework for health information technology. [ 3 ] The Food and Drug Administration (FDA), the Office of the National Coordinator for Health IT (ONC), and Federal Communications Commission (FCC) kicked off the FDASIA workgroup of the HITPC to provide stakeholder input into a report on a risk-based regulatory framework that promotes safety and innovation and reduces regulatory duplication, consistent with section 618 of FDASIA. This provision permitted the Secretary of Health and Human Services (HHS) to form a workgroup in order to obtain broad stakeholder input from across the health care, IT, patients and innovation spectrum. The FDA, ONC, and FCC actively participated in these discussions with stakeholders from across the health care, IT, patients and innovation spectrum.
HIMSS Good Informatics Practices (GIP) is aligned with the FDA's risk-based regulatory framework for health information technology. [ 4 ] GIP development began in 2004 with the creation of risk-based IT technical guidance. [ 5 ] Today, the peer-reviewed and published GIP modules are widely used as a tool for educating health IT professionals.
Interoperable HIT will improve individual patient care, and it is also expected to bring many public health benefits.
According to an article published in the International Journal of Medical Informatics , health information sharing between patients and providers helps to improve diagnosis, promotes self-care, and ensures that patients know more about their health. The use of electronic medical records (EMRs) is still scarce but is increasing in Canadian, American and British primary care. Healthcare information in EMRs is an important source for clinical, research, and policy questions. Health information privacy (HIP) and security have been a major concern for patients and providers. Studies in Europe evaluating electronic health information have identified threats to electronic medical records and to the exchange of personal information. [ 6 ] Moreover, software traceability features allow hospitals to collect detailed information about the preparations dispensed, creating a database of every treatment that can be used for research purposes. [ 7 ]
Health information technology (HIT) is "the application of information processing involving both computer hardware and software that deals with the storage, retrieval, sharing, and use of health care information, health data , and knowledge for communication and decision making". [ 8 ] Technology is a broad concept that deals with a species' usage and knowledge of tools and crafts, and how it affects a species' ability to control and adapt to its environment. However, a strict definition is elusive; "technology" can refer to material objects of use to humanity, such as machines, hardware, or utensils, but can also encompass broader themes, including systems, methods of organization, and techniques. For HIT, technology represents computers and communications attributes that can be networked to build systems for moving health information. Informatics is yet another integral aspect of HIT .
Informatics refers to the science of information , the practice of information processing , and the engineering of information systems . Informatics underlies the academic investigation and practitioner application of computing and communications technology to healthcare, health education, and biomedical research. Health informatics refers to the intersection of information science, computer science, and health care. Health informatics describes the use and sharing of information within the healthcare industry with contributions from computer science, mathematics, and psychology. It deals with the resources, devices, and methods required for optimizing the acquisition, storage, retrieval, and use of information in health and biomedicine. Health informatics tools include not only computers but also clinical guidelines, formal medical terminologies, and information and communication systems. Medical informatics, nursing informatics, public health informatics , pharmacy informatics, and translational bioinformatics are subdisciplines that inform health informatics from different disciplinary perspectives. [ 9 ] The processes and people of concern or study are the main variables.
The Institute of Medicine's (2001) call for the use of electronic prescribing systems in all healthcare organizations by 2010 heightened the urgency to accelerate United States hospitals' adoption of CPOE systems. In 2004, President Bush signed an Executive Order titled the President's Health Information Technology Plan, which established a ten-year plan to develop and implement electronic medical record systems across the US to improve the efficiency and safety of care. According to a study by RAND Health , the US healthcare system could save more than $81 billion annually, reduce adverse healthcare events and improve the quality of care if it were to widely adopt health information technology . [ 10 ]
The American Recovery and Reinvestment Act , signed into law in 2009 under the Obama administration, has provided approximately $19 billion in incentives for hospitals to shift from paper to electronic medical records . Meaningful Use, part of the 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act, was the incentive program that dedicated over $20 billion to the implementation of HIT alone, and provided further indication of the growing consensus regarding the potential salutary effect of HIT. The American Recovery and Reinvestment Act set aside $2 billion for programs developed by the National Coordinator and the Secretary to help healthcare providers implement HIT and to provide technical assistance through various regional centers. The other $17 billion in incentives comes from Medicare and Medicaid funding for those who adopt HIT before 2015. Healthcare providers who implement electronic records can receive up to $44,000 over four years in Medicare funding and $63,750 over six years in Medicaid funding. The sooner healthcare providers adopt the system, the more funding they receive. Those who do not adopt electronic health record systems before 2015 do not receive any federal funding. [ 11 ]
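As a rough illustration of the incentive figures cited above, the sketch below averages the stated Medicare and Medicaid caps over their payout periods. The averaging is purely illustrative; the actual programs used front-loaded, year-by-year schedules that this sketch does not model.

```python
# Back-of-the-envelope comparison of the incentive caps cited in the text.
# The $44,000 / 4-year Medicare and $63,750 / 6-year Medicaid figures come from
# the paragraph above; the per-year averages are illustrative only.

MEDICARE_CAP, MEDICARE_YEARS = 44_000, 4
MEDICAID_CAP, MEDICAID_YEARS = 63_750, 6

def average_per_year(cap: int, years: int) -> float:
    """Return the simple annual average of a multi-year incentive cap."""
    return cap / years

if __name__ == "__main__":
    print(f"Medicare: ${average_per_year(MEDICARE_CAP, MEDICARE_YEARS):,.0f} per year on average")
    print(f"Medicaid: ${average_per_year(MEDICAID_CAP, MEDICAID_YEARS):,.0f} per year on average")
    print(f"Difference in total caps: ${MEDICAID_CAP - MEDICARE_CAP:,}")
```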
While electronic health records have potentially many advantages in terms of providing efficient and safe care, recent reports have brought to light some challenges with implementing electronic health records. The most immediate barriers for widespread adoption of this technology have been the high initial cost of implementing the new technology and the time required for doctors to train and adapt to the new system. There have also been suspected cases of fraudulent billing , where hospitals inflate their billings to Medicare. Given that healthcare providers have not reached the deadline (2015) for adopting electronic health records, it is unclear what effects this policy will have long term. [ 12 ]
One approach to reducing the costs and promoting wider use is to develop open standards related to EHRs. In 2014 there was widespread interest in a new HL7 draft standard, Fast Healthcare Interoperability Resources (FHIR), which is designed to be open, extensible, and easier to implement, benefiting from modern web technologies. [ 13 ]
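As an illustration of why FHIR is considered easier to implement than earlier standards, the sketch below reads a Patient resource over FHIR's REST/JSON interface using ordinary web tooling. The base URL and patient ID are placeholders, not real endpoints, and error handling is minimal.

```python
# A minimal sketch of reading a Patient resource from a FHIR R4 REST endpoint.
# The base URL and patient ID are hypothetical; substitute a real server and ID.
import requests

FHIR_BASE = "https://example.org/fhir"   # hypothetical FHIR R4 endpoint
PATIENT_ID = "12345"                     # hypothetical resource id

def fetch_patient(base_url: str, patient_id: str) -> dict:
    """GET a FHIR Patient resource and return it as parsed JSON."""
    response = requests.get(
        f"{base_url}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    patient = fetch_patient(FHIR_BASE, PATIENT_ID)
    # FHIR Patient resources carry demographics in standard fields.
    name = patient.get("name", [{}])[0]
    print(patient.get("resourceType"), patient.get("id"))
    print("Family name:", name.get("family"), "Birth date:", patient.get("birthDate"))
```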
In a 2008 study about the adoption of technology in the United States, Furukawa and colleagues classified applications for prescribing to include electronic medical records (EMR), clinical decision support (CDS), and computerized physician order entry (CPOE). [ 14 ] They further defined applications for dispensing to include bar-coding at medication dispensing (BarD), robot for medication dispensing (ROBOT), and automated dispensing machines (ADM). They defined applications for administration to include electronic medication administration records (eMAR) and bar-coding at medication administration (BarA or BCMA). Other types include Health information exchange .
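The classification above maps naturally onto a simple lookup structure; the sketch below restates it as a dictionary keyed by stage of the medication-use process, purely as an illustration of the taxonomy described by Furukawa and colleagues.

```python
# Medication-related HIT applications grouped by stage of the medication-use
# process, mirroring the classification in the text. The dictionary itself is
# only an illustrative data structure.

MEDICATION_HIT_APPLICATIONS = {
    "prescribing": [
        "electronic medical records (EMR)",
        "clinical decision support (CDS)",
        "computerized physician order entry (CPOE)",
    ],
    "dispensing": [
        "bar-coding at medication dispensing (BarD)",
        "robot for medication dispensing (ROBOT)",
        "automated dispensing machines (ADM)",
    ],
    "administration": [
        "electronic medication administration records (eMAR)",
        "bar-coding at medication administration (BarA/BCMA)",
    ],
}

for stage, applications in MEDICATION_HIT_APPLICATIONS.items():
    print(f"{stage}: {', '.join(applications)}")
```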
Although the electronic health record (EHR), previously known as the electronic medical record (EMR), is frequently cited in the literature, there is no consensus about the definition. [ 15 ] However, there is consensus that EMRs can reduce several types of errors, including those related to prescription drugs, to preventive care, and to tests and procedures. [ 16 ] Recurring alerts remind clinicians of intervals for preventive care and track referrals and test results. Clinical guidelines for disease management have a demonstrated benefit when accessible within the electronic record during the process of treating the patient. [ 17 ] Advances in health informatics and widespread adoption of interoperable electronic health records promise access to a patient's records at any health care site. A 2005 report noted that medical practices in the United States were encountering barriers to adopting an EHR system, such as training, costs and complexity, but the adoption rate continues to rise. [ 18 ] Since 2002, the National Health Service of the United Kingdom has placed emphasis on introducing computers into healthcare. As of 2005, one of the largest projects for a national EHR was that of the National Health Service (NHS) in the United Kingdom . The goal of the NHS was to have 60,000,000 patients with a centralized electronic health record by 2010. The plan involved a gradual roll-out commencing May 2006, providing general practices in England access to the National Programme for IT (NPfIT), the NHS component of which is known as the "Connecting for Health Programme". [ 19 ] However, recent surveys have shown physicians' deficiencies in understanding the patient safety features of the NPfIT-approved software. [ 20 ]
A central problem in HIT adoption is perceived by physicians, who are important stakeholders in the EHR process. Thorn et al. found that emergency physicians felt health information exchange disrupted workflow and was less desirable to use, even though the main goal of EHR is improving coordination of care. The problem was that exchanges did not address the needs of end users, e.g., simplicity, a user-friendly interface, and speed of systems. [ 21 ] The same finding was reported in an earlier article by Bhattacherjee et al. focusing on CPOE and physician resistance to its use. [ 22 ]
One opportunity for EHRs is to use natural language processing for searches. One systematic review of the literature found that notes and free text that would otherwise be inaccessible for review could be searched and analyzed, provided there is increasing collaboration between software developers and end-users of natural language processing tools within EHRs. [ 23 ]
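To make concrete what "searching and analyzing notes" means in practice, the sketch below scans free-text notes for a clinical term. It is a toy illustration under simplifying assumptions: production clinical NLP additionally handles negation, abbreviations, spelling variants, and terminologies such as SNOMED CT.

```python
# A minimal sketch of free-text search inside an EHR: flagging notes that mention
# a concept of interest. The notes are invented examples.
import re

NOTES = [
    "Patient denies chest pain. Mild shortness of breath on exertion.",
    "Follow-up for hypertension; blood pressure improved on current regimen.",
    "Reports intermittent chest pain radiating to the left arm.",
]

def notes_mentioning(term: str, notes: list[str]) -> list[int]:
    """Return indices of notes containing the term as a whole phrase (case-insensitive)."""
    pattern = re.compile(rf"\b{re.escape(term)}\b", re.IGNORECASE)
    return [i for i, note in enumerate(notes) if pattern.search(note)]

# [0, 2] -- note that index 0 is a negated mention ("denies chest pain"),
# which is exactly the kind of case real clinical NLP must handle.
print(notes_mentioning("chest pain", NOTES))
```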
Prescribing errors are the largest identified source of preventable errors in hospitals. A 2006 report by the Institute of Medicine estimated that a hospitalized patient is exposed to a medication error each day of his or her stay. [ 24 ] Computerized provider order entry (CPOE), also called computerized physician order entry, can reduce total medication error rates by 80%, and adverse (serious with harm to patient) errors by 55%. [ 25 ] A 2004 survey found that 16% of US clinics, hospitals and medical practices were expected to be utilizing CPOE within 2 years. [ 26 ] In addition to electronic prescribing, a standardized bar code system for dispensing drugs could prevent a quarter of drug errors. [ 24 ] Consumer information about the risks of the drugs and improved drug packaging (clear labels, avoiding similar drug names and dosage reminders) are other error-proofing measures. Despite ample evidence of the potential to reduce medication errors, competing systems of barcoding and electronic prescribing have slowed adoption of this technology by doctors and hospitals in the United States, due to concern with interoperability and compliance with future national standards. [ 27 ] Such concerns are not inconsequential; standards for electronic prescribing for Medicare Part D conflict with regulations in many US states. [ 24 ] And, aside from regulatory concerns, for the small-practice physician, utilizing CPOE requires a major change in practice work flow and an additional investment of time. Many physicians are not full-time hospital staff; entering orders for their hospitalized patients means taking time away from scheduled patients. [ 28 ]
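One reason CPOE reduces medication errors is that orders can be validated automatically at entry. The sketch below shows a single such check, a dose-range comparison, with entirely hypothetical drugs and limits; it is not clinical guidance, and real systems layer many further checks (interactions, allergies, renal dosing) on top.

```python
# A minimal sketch of one class of check a CPOE system can run at order entry:
# comparing an ordered dose against a reference range before the order is accepted.
from dataclasses import dataclass

DOSE_LIMITS_MG = {           # hypothetical single-dose limits, for illustration only
    "acetaminophen": (325, 1000),
    "warfarin": (1, 10),
}

@dataclass
class Order:
    drug: str
    dose_mg: float

def check_order(order: Order) -> list[str]:
    """Return a list of warnings for the order; an empty list means no issues found."""
    warnings = []
    limits = DOSE_LIMITS_MG.get(order.drug.lower())
    if limits is None:
        warnings.append(f"No reference range on file for {order.drug}; manual review needed.")
    else:
        low, high = limits
        if not (low <= order.dose_mg <= high):
            warnings.append(
                f"Dose {order.dose_mg} mg of {order.drug} is outside the reference range {low}-{high} mg."
            )
    return warnings

print(check_order(Order("warfarin", 25)))   # flags an out-of-range dose
```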
Handwritten reports or notes, manual order entry, non-standard abbreviations, and poor legibility lead to substantial errors and injuries, according to the Institute of Medicine (2000) report. The follow-up IOM (2001) report, Crossing the Quality Chasm: A New Health System for the 21st Century , advised rapid adoption of electronic patient records and electronic medication ordering, with computer- and internet-based information systems to support clinical decisions. [ 29 ] However, many system implementations have experienced costly failures. [ 30 ] Furthermore, there is evidence that CPOE may actually contribute to some types of adverse events and other medical errors. [ 31 ] For example, the period immediately following CPOE implementation resulted in significant increases in reported adverse drug events in at least one study, [ 32 ] and evidence of other errors has been reported. [ 25 ] [ 33 ] [ 34 ] Collectively, these reported adverse events describe phenomena related to the disruption of the complex adaptive system resulting from poorly implemented or inadequately planned technological innovation.
Technology may introduce new sources of error. [ 35 ] [ 36 ] Technologically induced errors are significant and increasingly evident in care delivery systems. Terms to describe this new area of error production include the label technological iatrogenesis [ 37 ] for the process and e-iatrogenic [ 38 ] for the individual error. The sources for these errors include:
Healthcare information technology can also result in iatrogenesis if design and engineering are substandard, as illustrated in a 14-part detailed analysis done at the University of Sydney . [ 40 ] Numerous examples of bias introduced by artificial intelligence (AI) have been cited as the use of AI-assisted healthcare increases. See Algorithmic bias .
The HIMSS Revenue Cycle Improvement Task Force was formed to prepare for the IT changes in the U.S. (e.g. the American Recovery and Reinvestment Act of 2009 (HITECH), the Affordable Care Act , 5010 (electronic exchanges), and ICD-10). An important change to the revenue cycle is the move from version 9 to version 10 of the International Classification of Diseases (ICD) codes. ICD-9 codes use three to five alphanumeric characters and represent about 4,000 different types of procedures, while ICD-10 codes use three to seven alphanumeric characters, increasing the number of procedural codes to about 70,000. ICD-9 was outdated because there were more procedures than codes available; to document procedures without an ICD-9 code, unspecified codes were used, which did not fully capture the procedures or the work involved, in turn affecting reimbursement. Hence, ICD-10 was introduced to reduce reliance on unspecified codes and to bring the standards closer to world standards (ICD-11). One of the main parts of revenue cycle HIT is charge capture, which uses codes to capture costs for reimbursement from different payers, such as CMS. [ 41 ]
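As a concrete illustration of charge capture, the sketch below maps documented services to billing codes and charges. Every code and amount is a made-up placeholder; a real charge master would hold payer-specific fee schedules and actual ICD-10/CPT codes.

```python
# A minimal sketch of charge capture: documented services are translated into
# billing codes and charges that drive reimbursement claims. Codes and fees
# below are invented placeholders, not a real fee schedule.

CHARGE_MASTER = {
    # hypothetical service -> (hypothetical billing code, charge in USD)
    "chest x-ray, 2 views": ("IMG-001", 120.00),
    "office visit, established patient": ("VIS-002", 95.00),
}

def capture_charges(documented_services: list[str]) -> list[tuple[str, str, float]]:
    """Look up each documented service and return (service, code, charge) triples."""
    claims = []
    for service in documented_services:
        entry = CHARGE_MASTER.get(service)
        if entry is None:
            # In practice, unmapped services would be routed to coders for review
            # rather than billed with unspecified codes.
            continue
        code, charge = entry
        claims.append((service, code, charge))
    return claims

print(capture_charges(["chest x-ray, 2 views", "office visit, established patient"]))
```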
International health system performance comparisons are important for understanding health system complexities and finding better opportunities, which can be done through health information technology. It gives policy makers the chance to compare and contrast the systems through established indicators from health information technology, as inaccurate comparisons can lead to adverse policies. [ 42 ] | https://en.wikipedia.org/wiki/Health_information_technology |
The health management system (HMS) is an evolutionary medicine regulative process proposed by Nicholas Humphrey [ 1 ] [ 2 ] in which actuarial assessment of fitness and economic-type cost–benefit analysis determine the body's regulation of its physiology and health . The incorporation of cost–benefit calculations into body regulation provides a science-grounded approach to mind–body phenomena such as placebos , which are otherwise not explainable by low level, noneconomic, and purely feedback-based homeostatic or allostatic theories.
Placebos are explained as the result of false information about the availability of external treatment and support that mislead the health management system [ 3 ] into not deploying evolved self-treatments. This results in the placebo suppression of medical symptoms.
Since Hippocrates , it has been recognized that the body has self-healing powers ( vis medicatrix naturae ). Modern evolutionary medicine identifies them with physiologically based self-treatments that provide the body with prophylactic , healing, or restorative capabilities against injuries, infections and physiological disruption. Examples include:
These evolved self-treatments deployed by the body are experienced by humans as unpleasant and unwanted illness symptoms .
Such self-treatments according to evolutionary medicine are deployed to increase an individual's biological fitness .
Two factors affect their deployment.
First, it is usually advantageous to deploy them on a precautionary basis . [ 4 ] As a result, it will often turn out that they have been deployed apparently unnecessarily, though this has in fact been advantageous, since in probabilistic terms they have provided insurance against a potentially costly outcome. As Nesse notes: "Vomiting, for example, may cost only a few hundred calories and a few minutes, whereas not vomiting may result in a 5% chance of death" (page 77). [ 4 ]
Second, self-treatments are costly both in using energy, and also in their risk of damaging the body.
One factor in deployment is low level physiological control by proinflammatory cytokines such as IL-1 triggered by bacterial lipopolysaccharides (LPS).
Another is higher level control in which the brain takes into account what it learns about circumstances and how they make it well and ill. Conditioning shows the existence of such learnt control: if saccharin is paired in a drink with a drug that creates immunosuppression , then later giving saccharin alone will produce immunosuppression. [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] Such conditioning happens both in experimental rodents and in humans. [ 14 ]
Evolution, according to Nicholas Humphrey , has selected an internal health management system that uses cost benefit analysis upon whether the deployment of a self-treatment aids biological fitness , and so should be activated.
a specially designed procedure for "economic resource management" that is, I believe, one of the key features of the "natural health-care service" which has evolved in ourselves and other animals to help us deal throughout our lives with repeated bouts of sickness, injury, and other threats to our well-being. [ 1 ]
An analogy is explicitly made with the health economics consideration used in management decisions involving external medical treatment.
Now, if you wonder about this choice of managerial terminology for talking about biological healing systems, I should say that it is quite deliberate (and so is the pun on NHS.) With the phrase "natural health-care service" I do intend to evoke, at a biological level, all the economic connotations that are so much a part of modern health-care in society. [ 1 ]
External medications will affect the cost–benefit advantages of deploying an evolved self-treatment. Some animals use external medications . [ 15 ] Wild animals , including apes , do so in the form of ingested detoxifying clays , [ 16 ] rough leaves that clear gut parasites, [ 17 ] and pharmacologically active plants . [ 18 ] [ 19 ] Complementary to this, research finds that animals have the ability to select and prefer substances that aid their recuperation from illness. [ 20 ]
The welfare of social animals (including humans) depends upon other individuals ( social buffering ). [ 21 ] The actuarial assessments of the costs and benefits of deploying a self-treatment therefore will depend upon the presence, or not, of other individuals. The presence of helpful others will affect, for example, the risk from predators when incapacitated, and, in those cases in which animals do this (such as humans), the provision of food and care during sickness.
The health management system factors in the presence of such external treatment and social support as one aspect of the circumstances needed to determine whether it is advantageous to deploy or not an evolved self-treatment.
All human societies use external medications, and some individuals are considered to have special healing knowledge about illnesses and their treatments. Humans are also usually supportive of those in their group. The availability of these resources will affect the cost–benefit balance of the body deploying its own biological treatments. This could, in turn, lead the health management system (given its information about treatments and support) to deploy the body's own treatments, to withhold them, or to deploy them differently.
Nicholas Humphrey describes how the health management system explains placebos – an external treatment without direct physiological effects – as follows:
Suppose, for example, a doctor gives someone who is suffering an infection a pill that she rightly believes to contain an antibiotic: because her hopes will be raised she will no doubt make appropriate adjustments to her health-management strategy – lowering her precautionary defences in anticipation of the sickness not lasting long. [ 1 ]
The health management system, in other words, when faced with an infection is tricked into making a mistaken cost benefit analysis using false information. The effect of that false information is that the benefits of the self-treatment cease to outweigh its costs. As a result, it is not deployed, and an individual does not experience unwanted medical symptoms.
Failure to deploy an evolved self-treatment need not put an individual at risk since evolution has advantaged their deployment on a precautionary basis. [ 4 ] As Nicholas Humphrey notes:
many of the health-care measures we've been discussing are precautionary measures designed to protect from dangers that lie ahead in an uncertain future. Pain is a way of making sure you give your body rest just in case you need it. Rationing the use of the immune system is a way of making sure you have the resources to cope with renewed attacks just in case they happen. Your healing systems are basically tending to be cautious, and sometimes over-cautious, as if working on the principle of better safe than sorry. [ 1 ]
Therefore, not deploying an evolved self-treatment, and so not having a medical symptom due to placebo false information might be without consequence.
The health management system's notion of top-down neural control of the body is also found in the idea that a central governor regulates muscle fatigue to protect the body from the harmful effects (such as anoxia and hyperglycemia ) of overly prolonged exercise.
The idea of a fatigue governor was first proposed in 1924 by the 1922 Nobel Prize winner Archibald Hill , [ 22 ] and more recently, on the basis of modern research, by Tim Noakes . [ 23 ] [ 24 ] [ 25 ] [ 26 ] [ 27 ]
As with the health management system, the central governor embodies the idea that much of what is attributed to low level feedback homeostatic regulation is, in fact, due to top down control by the brain. The advantage of this top down management is that the brain can enhance such regulation by allowing it to be modified by information. For example, in endurance running, a cost–benefit trade-off exists between the advantages of continuing to run and the risk that, if running is too prolonged, it might harm the body. Being able to regulate fatigue in terms of information about the benefits and costs of continued exercise would enhance biological fitness.
Low-level theories suggest that fatigue is due to mechanical failure of the exercising muscles ("peripheral fatigue"). [ 28 ] However, such low-level theories do not explain why running muscle fatigue is affected by information relevant to cost–benefit trade-offs. For example, marathon runners can carry on running longer if told they are near the finishing line than if told it is far away. The existence of a central governor can explain this effect. | https://en.wikipedia.org/wiki/Health_management_system |
Health systems engineering or health engineering (often known as health care systems engineering (HCSE)) is an academic and a pragmatic discipline that approaches the health care industry, and other industries connected with health care delivery, as complex adaptive systems , and identifies and applies engineering design and analysis principles in such areas. This can overlap with biomedical engineering (BME) which focuses on design and development of various medical products; industrial engineering (IE) and operations management which involve improving organizational operations; and various health care practice fields like medicine , pharmacy , dentistry , nursing , etc. Other fields participating in this interdisciplinary area include public health , information technology , management studies, and regulatory law .
People whose work implicates this field in some capacity can include members of all the above-noted fields, many of which have sub-fields targeted toward health care matters even if health or health care is not a principal focus of the overall field (e.g. management, law). Areas of biomedical engineering in this area often include clinical engineering (sometimes also called "hospital engineering") as well as those BMEs developing medical devices and pharmaceutical drugs . The industrial engineering principles employed tend to include optimization, decision analysis , human factors engineering , quality engineering , and value engineering . [ 1 ]
The field came to be in the 1950s and 1960s as an outgrowth of industrial engineering as applied to hospitals. [ 2 ] | https://en.wikipedia.org/wiki/Health_systems_engineering |
Health technology is defined by the World Health Organization as the "application of organized knowledge and skills in the form of devices, medicines, vaccines, procedures, and systems developed to solve a health problem and improve quality of lives". [ 1 ] This includes pharmaceuticals, devices, procedures, and organizational systems used in the healthcare industry, [ 2 ] as well as computer-supported information systems . In the United States, these technologies involve standardized physical objects, as well as traditional and designed social means and methods to treat or care for patients. [ 3 ]
During the pre-digital era, patients suffered from inefficient and faulty clinical systems, processes, and conditions. [ 4 ] Many medical errors happened in the past due to undeveloped health technologies. [ citation needed ] Some examples of these medical errors included adverse drug events and alarm fatigue . When many alarms are repeatedly triggered or activated, especially for unimportant events, workers may become desensitized to the alarms. Healthcare professionals who have alarm fatigue may ignore an alarm believing it to be insignificant, which could lead to death and dangerous situations. With technological development, an intelligent program of integration and physiologic sense-making was developed and helped reduce the number of false alarms. [ 4 ]
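The "integration and physiologic sense-making" mentioned above essentially means combining signals and context before sounding an alarm. The sketch below illustrates that idea with made-up thresholds; it is not a clinical algorithm or any vendor's actual logic.

```python
# A minimal sketch of alarm integration: instead of alarming on any single
# out-of-range reading, combine several physiologic signals and require sustained
# or corroborated abnormality. Thresholds and the rule are illustrative assumptions.

def should_alarm(heart_rate: int, spo2: int, consecutive_abnormal: int) -> bool:
    """Alarm only if two signals are abnormal together or one stays abnormal over time."""
    hr_abnormal = heart_rate < 40 or heart_rate > 140
    spo2_abnormal = spo2 < 88
    both_abnormal = hr_abnormal and spo2_abnormal
    persistent = (hr_abnormal or spo2_abnormal) and consecutive_abnormal >= 3
    return both_abnormal or persistent

# A single momentary blip does not alarm; corroborated or persistent readings do.
print(should_alarm(heart_rate=145, spo2=97, consecutive_abnormal=1))  # False
print(should_alarm(heart_rate=145, spo2=85, consecutive_abnormal=1))  # True
```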
Also, with greater investment in health technologies, fewer medical errors happened. [ citation needed ] Outdated paper records were replaced in many healthcare organizations by electronic health records (EHR). [ citation needed ] According to studies, this has brought many changes to healthcare. [ 5 ] Drug administration has improved, healthcare providers can now access medical information easier, provide better treatments and faster results, and save more costs. [ 5 ]
To help promote and expand the adoption of health information technology , Congress passed the HITECH Act as part of the American Recovery and Reinvestment Act of 2009 . HITECH stands for the Health Information Technology for Economic and Clinical Health Act. It gave the Department of Health and Human Services the authority to improve healthcare quality and efficiency through the promotion of health IT. [ 6 ] The act provided financial incentives or penalties to motivate healthcare providers to improve healthcare. The purpose of the act was to improve quality, safety, and efficiency, and ultimately to reduce health disparities. [ 7 ]
One of the main parts of the HITECH act was setting the meaningful use requirement, which required EHRs to allow for the electronic exchange of health information and to submit clinical information. The purpose of HITECH is to ensure the sharing of electronic information with patients and other clinicians are secure. HITECH also aimed to help healthcare providers have more efficient operations and reduce medical errors. The program consisted of three phases. Phase one aimed to improve healthcare quality, safety and efficiency. [ 7 ] Phase two expanded on phase one and focused on clinical processes and ensuring the meaningful use of EHRs. [ 7 ] Lastly, phase three focused on using Certified Electronic Health Record Technology (CEHRT) to improve health outcomes. [ 7 ]
By 2014, the implementation of electronic records in US hospitals had risen from a low of about 10% to a high of about 70%. [ 4 ]
At the beginning of 2018, healthcare providers who participated in the Medicare Promoting Interoperability Program needed to report on Quality Payment Program requirements. The program focused more on interoperability and aimed to improve patient access to health information. [ 7 ]
Phones that can track one's whereabouts, steps, and more can serve as medical devices, and medical devices raise much the same privacy concerns as these phones. According to one study, people were willing to share personal data for scientific advancements, although they still expressed uncertainty about who would have access to their data. [ 8 ] People are naturally cautious about giving out sensitive personal information. [ 8 ] Phones add an extra level of threat. [ 9 ] Mobile devices continue to increase in popularity each year. The use of mobile devices as medical devices increases the chances for an attacker to gain unauthorized information. [ 9 ]
In 2015 the Medicare Access and CHIP Reauthorization Act (MACRA) was passed, pushing further towards electronic health records. In the article "Health Information Technology: Integration, Patient Empowerment, and Security", K. Marvin presented multiple polls on people's views of different types of technology entering the medical field; most respondents answered "somewhat likely", and very few completely disagreed with the technology being used in medicine. Marvin also discusses the maintenance required to protect medical data and technology against cyber attacks, as well as providing a proper data backup system for the information. [ 10 ]
With the Patient Protection and Affordable Care Act (ACA), also known as Obamacare, and health information technology, health care is entering the digital era. With this development, however, the data involved need to be protected. Both health information and financial information, now made digital within the health industry, may become a larger target for cyber-crime. Even with multiple types of safeguards, hackers still find their way in, so the security in place needs to be constantly updated to prevent breaches. [ 11 ]
With the increased use of IT systems, privacy violations have increased rapidly due to easier access and poor management. As such, privacy has become an important topic in healthcare. Privacy breaches happen when organizations do not protect the privacy of people's data. There are four types of privacy breaches: unintended disclosure by authorized personnel, intended disclosure by authorized personnel, loss or theft of private data, and hacking. It has become more important to protect the privacy and security of patients' data because of the high negative impact on both individuals and organizations. Stolen personal information can be used to open credit card accounts or for other fraudulent purposes. Individuals may also have to spend a large amount of money to rectify the issue. The exposure of sensitive health information can also have negative impacts on individuals' relationships, jobs, or other personal areas. For the organization, a privacy breach can cause loss of trust, loss of customers, legal actions, and monetary fines. [ 12 ]
HIPAA stands for the Health Insurance Portability and Accountability Act of 1996 . It is a U.S. healthcare law that directs how patient data is used and includes two major rules covering privacy and security of data. The privacy rule protects people's rights to privacy, and the security rule determines how to protect people's privacy. [ 13 ]
According to the HIPAA Security Rule, protected health information must have three characteristics: confidentiality, availability, and integrity. Confidentiality means keeping the data confidential to prevent data loss or access by individuals who are unauthorized to see that protected health information. Availability allows people who are authorized to access the systems and networks when and where that information is needed, such as during natural disasters. In cases like this, protected health information is mostly backed up on a separate server or printed out in paper copies, so people can access it. Lastly, integrity ensures that inaccurate or improperly modified data, arising from a badly designed system or process, is not used, protecting the permanence of the patient data. Inaccurate or improperly modified data could be useless or even dangerous. [ 13 ]
HIPAA also establishes administrative safeguards, physical safeguards, and technical safeguards to help protect the privacy of patients. Administrative safeguards typically include the security management process, security personnel, information access management, workforce training and management, and evaluation of security policies and procedures. The security management process is one important example of an administrative safeguard. It is essential for reducing the risks and vulnerabilities of the system. The processes are mostly standard operating procedures written out as training manuals. The purpose is to educate people on how to handle protected health information properly. [ 14 ]
Physical safeguards include lock and key, card swipe, positioning of screens, confidential envelopes, and shredding of paper copies. Lock and key are a common example of a physical safeguard. They can limit physical access to facilities. Lock and key are simple, but they can prevent individuals from stealing medical records, since individuals must have an actual key to open the lock. [ 14 ]
Lastly, technical safeguards include access control, audit controls, integrity controls, and transmission security. The access control mechanism is a common example of a technical safeguard. It allows access only to authorized personnel. The technology includes authentication and authorization. Authentication is the proof of identity, handled through confidential information like a username and password, while authorization is the act of determining whether a particular user is allowed to access certain data and perform activities in a system, such as adding and deleting records. [ 14 ]
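The access control mechanism described above can be summarized as two separate questions: who is this user, and what may they do? The sketch below separates those steps with hypothetical users and roles. It is illustrative only; real deployments would use proper password hashing (e.g., bcrypt), single sign-on, audit logging, and encrypted transport.

```python
# A minimal sketch of technical safeguards: authentication (proving identity)
# followed by authorization (checking what the identity may do). Users, roles,
# and permissions are hypothetical placeholders.
import hashlib

USERS = {  # username -> (salted-hash placeholder, role)
    "nurse_jane": (hashlib.sha256(b"salt" + b"correct-horse").hexdigest(), "nurse"),
}

ROLE_PERMISSIONS = {
    "nurse": {"read_record", "add_note"},
    "physician": {"read_record", "add_note", "modify_orders"},
    "billing": {"read_billing"},
}

def authenticate(username: str, password: str) -> str | None:
    """Return the user's role if credentials match, else None."""
    entry = USERS.get(username)
    if entry is None:
        return None
    stored_hash, role = entry
    supplied_hash = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    return role if supplied_hash == stored_hash else None

def authorize(role: str, action: str) -> bool:
    """Check whether the role is allowed to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

role = authenticate("nurse_jane", "correct-horse")
print(role, authorize(role, "add_note"), authorize(role, "modify_orders"))  # nurse True False
```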
The concept of health technology assessment (HTA) was first coined in 1967 by the U.S. Congress in response to the increasing need to address the unintended and potential consequences of health technology, along with its prominent role in society. [ 15 ] It was further institutionalized with the establishment of the congressional Office of Technology Assessment (OTA) in 1972–1973. HTA is defined as a comprehensive form of policy research that examines short- and long-term consequences of the application of technology, including benefits, costs, and risks. [ 16 ] Due to the broad scope of technology assessment, it requires the participation of individuals besides scientists and health care practitioners such as managers and even the consumers. [ 16 ]
Several American organizations provide health technology assessments and these include the Centers for Medicare and Medicaid Services (CMS) and the Veterans Administration through its VA Technology Assessment Program (VATAP). The models adopted by these institutions vary, although they focus on whether a medical technology being offered is therapeutically relevant. [ 17 ] A study conducted in 2007 noted that the assessments still did not use formal economic analyses. [ 17 ]
Aside from its development, however, assessment in the health technology industry has been viewed as sporadic and fragmented. [ 18 ] Issues such as the determination of products that need to be developed, cost, and access, among others, have also emerged. These, some argue, need to be included in the assessment since health technology is never purely a matter of science but also of beliefs, values, and ideologies. [ 18 ] One of the mechanisms being suggested either as an element of or as an alternative to the current TAs is bioethics , which is also referred to as the "fourth-generation" evaluation framework. [ 18 ] [ 19 ] There are at least two dimensions to an ethical HTA. The first involves the incorporation of ethics in the methodological standards employed to assess technologies, while the second is concerned with the use of an ethical framework in research and judgment on the part of the researchers who produce information used in the industry. [ 20 ]
The practice of medicine in the United States is currently in a major transition. This transition is due to many factors, but primarily because of the implementation and integration of health technologies into healthcare. In recent years, the widespread adoption of electronic health records (EHR) has greatly impacted healthcare. In his book The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine's Computer Age , Robert Wachter aims to inform readers about this transition. Wachter states that there will be fewer hospitals in the future, and due to the advancement of technologies, people will be more likely to go to hospitals for major surgeries or critical illness. In the future, nurse call buttons will not be needed in hospitals. Instead, robots will deliver medication, take care of patients, and administer the system. In addition, the electronic health record will look different. Healthcare providers will be able to enter the notes via speech-to-text transcriptions in real-time. [ 4 ]
Wachter stated that information will be edited collaboratively across the patient-care team to improve the quality. Also, natural language processing will be more developed to help parse out keywords. In the future, patient data will reside in the cloud, and patients as well as authorized providers and individuals will be able to access their data from any device or location. Big data analysis will constantly be improving. Artificial intelligence and machine learning will be constantly improving and developing as it receives new data. Alerts will also be more intelligent and efficient than the current systems. [ 4 ]
Medical technology, or "medtech", encompasses a wide range of healthcare products and is used to treat diseases and medical conditions affecting humans. Such technologies are intended to improve the quality of healthcare delivered through earlier diagnosis , less invasive treatment options and reduction in hospital stays and rehabilitation times. [ 21 ] Recent advances in medical technology have also focused on cost reduction. [ 22 ] Medical technology may broadly include medical devices , information technology , biotech , and healthcare services. [ citation needed ]
The impacts of medical technology involve social and ethical issues. For example, physicians can seek objective information from technology rather than read subjective patient reports. [ 23 ]
A major driver of the sector's growth is the consumerization of medtech. Supported by the widespread availability of smartphones and tablets, providers can reach a large audience at low cost, a trend that stands to be consolidated as wearable technologies spread throughout the market. [ 24 ]
Between 2010 and 2015, venture funding grew 200%, allowing US$11.7 billion to flow into health tech businesses from over 30,000 investors in the space. [ 25 ]
Medical technology has evolved into smaller portable devices, for instance, smartphones, touchscreens, tablets, laptops, digital ink , voice and face recognition and more. With this technology, innovations like electronic health records (EHR), health information exchange (HIE) , the Nationwide Health Information Network (NwHIN) , personal health records (PHRs) , patient portals , nanomedicine , genome-based personalized medicine, the Global Positioning System (GPS) , radio frequency identification (RFID), telemedicine , clinical decision support (CDS), mobile home health care and cloud computing came to exist. [ 26 ]
Medical imaging and magnetic resonance imaging (MRI) are long-used and proven medical technologies for medical research, patient review, and treatment analysis. With the advancement of imaging technologies, including faster processing of more data, higher-resolution images, and specialist automation software, the capabilities of medical imaging technology are growing and yielding better results. [ 27 ] As imaging hardware and software evolve, patients will need less contrast agent and will also spend less time and money. [ 28 ]
A further advancement in healthcare is electromagnetic (EM) guidance systems, used in medical procedures to allow real-time visualization and navigation for the placement of medical devices inside the human body. Examples include a neuro-navigated catheter inserted into the brain, or placement of a feeding tube in the stomach or small intestine, as demonstrated by the ENvue System. ENvue is an advanced electromagnetic navigation system for enteral feeding tube placement. The system uses a field generator and several EM sensors, enabling proper scaling of the display to the patient's body contour and a real-time view of the feeding tube tip location and direction, which helps the medical staff ensure correct placement and avoid placement of the tube in the lungs. [ 29 ]
3D printing is another major development in healthcare. It can be used to produce specialized splints , prostheses , parts for medical devices and inert implants. The end goal of 3D printing is being able to print out customized replaceable body parts. [ 30 ] The following section explains more about 3D printing in healthcare. New types of technologies also include artificial intelligence and robots. [ 31 ]
3D printing is the use of specialized machines, software programs and materials to automate the process of building certain objects. It is growing rapidly in prosthetics , medical implants, novel drug formulations and the bioprinting of human tissues and organs. [ 30 ]
Companies such as Surgical Theater provide new technology that is capable of capturing 3D virtual images of patients' brains to use as practice for operations. 3D printing allows medical companies to produce prototypes to practice before an operation created with artificial tissue. [ 30 ]
3D printing technologies are well suited to biomedicine because the materials used allow fabrication with fine control over many design features. 3D printing also offers affordable customization, more efficient designs, and time savings. [ 30 ] 3D printing can precisely design pills that house several drugs with different release times. The technology allows the pills to deliver drugs to the targeted area and to degrade safely in the body. As such, pills can be designed more efficiently and conveniently. In the future, doctors might provide a digital file of printing instructions instead of a prescription. [ 30 ]
In addition, 3D printing will become more useful for medical implants. One example is a surgical team that designed a 3D-printed tracheal splint to improve a patient's respiration. This example shows the potential of 3D printing, which allows physicians to develop new implant and instrument designs easily. [ 30 ]
Overall, in the future of medicine, 3D printing will be crucial as it can be used in surgical planning, artificial and prosthetic devices, drugs, and medical implants.
The scale and capabilities of artificial intelligence (AI) systems are growing rapidly, notably due to advances in big data . In healthcare, it is expected to provide easier accessibility of information, and to improve treatments while reducing cost. The integration of AI in healthcare tends to improve the quality and efficiency of complex tasks. [ 32 ] [ 33 ]
Risks related to AI include the potential lack of accuracy, and privacy concerns related to the collected data. [ 34 ] Delegating decisions to AI systems may also undermine accountability . [ 35 ] Moreover, AI systems sometimes learn undesired behaviors from their training data. For example, an AI trained to detect skin diseases was found to have a strong tendency to classify images containing a ruler as cancerous, since pictures of malignancies typically include a ruler to show the scale. [ 36 ]
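A simple audit for the ruler artifact described above is to stratify model predictions by whether the artifact is present. The sketch below shows that comparison in outline; the dataset and the `predict` function are placeholders, and a real audit would also control for the true prevalence of malignancy in each group.

```python
# A minimal sketch of probing for a spurious shortcut (e.g., a model keying on
# rulers rather than lesions): compare the model's positive prediction rate on
# images with and without the artifact. Data and model are stand-ins.
from collections import defaultdict

def positive_rate_by_artifact(examples, predict):
    """examples: iterable of (image, has_ruler) pairs; predict: image -> 0/1."""
    counts = defaultdict(lambda: [0, 0])          # has_ruler -> [positives, total]
    for image, has_ruler in examples:
        counts[has_ruler][0] += predict(image)
        counts[has_ruler][1] += 1
    return {flag: pos / total for flag, (pos, total) in counts.items() if total}

# Tiny demonstration with made-up data and a deliberately biased "model".
examples = [("img_ruler_1", True), ("img_ruler_2", True),
            ("img_plain_1", False), ("img_plain_2", False)]
biased_predict = lambda image: 1 if "ruler" in image else 0
print(positive_rate_by_artifact(examples, biased_predict))  # {True: 1.0, False: 0.0}
# With a faithful model, the two rates should be similar once true prevalence is
# controlled for; a large gap suggests the artifact is driving predictions.
```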
AI brings many benefits to the healthcare industry. AI helps to detect diseases, manage chronic conditions, deliver health services, and discover drugs. Furthermore, AI has the potential to address important health challenges. In healthcare organizations, AI is able to plan and allocate resources. [ 37 ] AI is able to match patients with healthcare providers that meet their needs. AI also helps improve the healthcare experience, for example by using an app to identify patients' anxieties. In medical research, AI helps to analyze and evaluate patterns and complex data. For instance, AI is important in drug discovery because it can search relevant studies and analyze different kinds of data. In clinical care, AI helps to detect diseases and analyze clinical data, publications, and guidelines. As such, AI aids in finding the best treatments for patients. Other uses of AI in clinical care include medical imaging , echocardiography , screening , and surgery . [ 37 ] The ability of AlphaFold to predict how proteins fold has also significantly accelerated medical research. [ 38 ]
Medical virtual reality provides doctors multiple surgical scenarios that could happen and allows them to practice and prepare themselves for these situations. It also permits medical students a hands-on experience of different procedures without the consequences of making potential mistakes. [ 39 ] ORamaVR is one of the leading companies that employ such medical virtual reality technologies to transform medical education (knowledge) and training (skills) to improve patient outcomes, reduce surgical errors and training time and democratize medical education and training.
Modern robotics have made huge progress and contributions to healthcare. Robots can help doctors perform a variety of tasks, and robotics adoption is increasing tremendously in hospitals. The following are different ways to improve healthcare by using robots: [ 40 ]
Surgical robots are one such robotic system, allowing a surgeon to bend and rotate tissues in a more flexible and efficient way. The system is equipped with a 3D magnification vision system that can translate the hand movements of the surgeon precisely in order to perform surgery with minimal incisions. Other robotic systems include the ability to diagnose and treat cancers. Many scientists have begun working on creating a next-generation robot system to assist surgeons in performing knee and other bone replacement surgeries. [ 40 ]
Assistant robots will also be important in helping reduce the workload for regular medical staff. They can help nurses with simple and time-consuming tasks like carrying multiple racks of medicines, lab specimens or other sensitive materials. [ 40 ]
In the near future, robotic pills are expected to reduce the number of surgeries. [ 40 ] They can be moved inside a patient and delivered to the desired area. In addition, they can conduct biopsies, film the area and clear clogged arteries.
Overall, medical robots are extremely useful in assisting physicians; however, it may take time for clinicians to be professionally trained to work with medical robots and for the robots to respond to a clinician's instructions. As such, many researchers and startups are working constantly to provide solutions to these challenges. [ 40 ]
Assistive technologies are products designed to provide accessibility to individuals who have physical or cognitive problems or disabilities, with the aim of improving their quality of life. The range of assistive technologies is broad, ranging from low-tech solutions to physical hardware and technical devices. There are four areas of assistive technologies: visual impairment, hearing impairment, physical limitations, and cognitive limitations. Assistive technologies have many benefits. They enable individuals to care for themselves, work, study, access information easily, improve independence and communication, and participate fully in community life. [ 41 ]
As part of an ongoing trend towards consumer-driven healthcare , websites and apps that provide more information on health care quality and price to help patients choose their providers have grown. [ 42 ] As of 2017, the sites with the largest number of reviews, in descending order, included Healthgrades , Vitals.com, and RateMDs.com . [ 43 ] Yelp, Google, and Facebook also host reviews with a large amount of traffic, although as of 2017 they had fewer medical reviews per doctor. [ 44 ] Disputes around online reviews can lead to lawsuits by health professionals alleging defamation. [ 45 ] In 2018 Vitals.com was purchased by WebMD which is owned by Internet Brands . [ 46 ]
Patient safety organizations and government programs which have historically assessed quality have made their data more accessible over the internet; notable examples include the HospitalCompare by CMS [ 47 ] and the LeapFrog Group's hospitalsafetygrade.org. [ 48 ]
Patient-oriented software may also help in other ways, including general education and appointments. [ 49 ]
Disclosure of legal disputes including medical license complaints or malpractice lawsuits has also been made easier. Every state discloses license status and at least some disciplinary action to the public, but as of 2018, this was not accessible via the internet for a few states. [ 50 ] : 78 Consumers can look up medical licenses in a national database, DocInfo.org, maintained by the medical licensing organizations, [ 50 ] which contains limited details. [ 51 ] Other tools include DocFinder at docfinder.docboard.org [ 51 ] and certificationmatters.org from the American Board of Medical Specialties . In some cases more information is available from a mailed or walk-in request than from the internet; for example, the Medical Board of California removes dismissed accusations from website profiles, but these are still available from a written or walk-in request, or a lookup in a separate database. [ 52 ] The trend toward disclosure is controversial and generates significant public debate, [ 53 ] particularly about opening up the National Practitioner Data Bank . [ 54 ] In 1996, Massachusetts became the first state to require detailed disclosure of malpractice claims. [ 54 ]
Smartphones, tablets , and wearable computers have allowed people to monitor their own health. These devices run numerous applications designed to provide simple health services and to monitor one's health, identifying critical health problems as early as possible. An example of this is Fitbit , a fitness tracker that is worn on the user's wrist. This wearable technology allows people to track their steps, heart rate, floors climbed, miles walked, active minutes, and even sleep patterns. The data collected and analyzed allow users not just to keep track of their health but also to help manage it, particularly through the capability to identify health risk factors. [ 55 ]
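As an illustration of how tracker data can surface a risk factor, the sketch below flags a rising resting-heart-rate trend from daily readings. The numbers and threshold are made up, the logic is far simpler than what commercial devices actually do, and it is not medical advice.

```python
# A minimal sketch of turning wearable data into a simple health-risk flag:
# computing a resting-heart-rate trend from daily readings. Values and thresholds
# are illustrative assumptions only.

daily_resting_hr = [62, 63, 61, 64, 66, 69, 72, 74]  # hypothetical beats per minute

def rising_trend(values, window=3, threshold=5):
    """Flag if the average of the last `window` readings exceeds the average of the
    earlier readings by more than `threshold`."""
    if len(values) <= window:
        return False
    recent = sum(values[-window:]) / window
    baseline = sum(values[:-window]) / (len(values) - window)
    return recent - baseline > threshold

print(rising_trend(daily_resting_hr))  # True: recent readings run well above baseline
```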
There is also the case of the Internet, which serves as a repository of information and expert content that can be used to "self-diagnose" instead of going to their doctor. For instance, one need only enumerate symptoms as search parameters at Google and the search engine could identify the illness from the list of contents uploaded to the World Wide Web, particularly those provided by expert/medical sources. These advances may eventually have some effect on doctor visits from patients [ 56 ] and change the role of the health professionals from "gatekeeper to secondary care to facilitator of information interpretation and decision-making." [ 57 ] Apart from basic services provided by Google in Search , there are also companies such as WebMD that already offer dedicated symptom-checking apps. [ 58 ]
All medical equipment introduced commercially must meet both United States and international regulations. The devices are tested on their material, effects on the human body, all components including devices that have other devices included with them, and the mechanical aspects. [ 59 ]
The Medical Device User Fee and Modernization Act of 2002 was created to speed up the FDA's approval process of medical technology by introducing sponsor user fees for a faster review time with predetermined performance targets for review time. [ 60 ] In addition, 36 devices and apps were approved by the FDA in 2016. [ 61 ]
There are numerous careers in health technology in the US. Listed below are some job titles and average salaries.
The term medical technology may also refer to the duties performed by clinical laboratory professionals or medical technologists in various settings within the public and private sectors. The work of these professionals encompasses clinical applications of chemistry , genetics , hematology , immunohematology ( blood banking ), immunology , microbiology , serology , urinalysis , and miscellaneous body fluid analysis. Depending on location, educational level, and certifying body, these professionals may be referred to as biomedical scientists , medical laboratory scientists (MLS), medical technologists (MT), medical laboratory technologists and medical laboratory technicians. [ 64 ] | https://en.wikipedia.org/wiki/Health_technology |
Health technology assessment ( HTA ) is a multidisciplinary process that uses systematic and explicit methods to evaluate the properties and effects of a health technology . [ 1 ] Health technology is conceived as any intervention ( test , device , medicine , vaccine , procedure , program ) at any point in its lifecycle ( pre-market , regulatory approval, post-market, disinvestment ). [ 2 ] The purpose of HTA is to inform "decision-making in order to promote an equitable, efficient, and high-quality health system ". [ 3 ] It has other definitions including "a method of evidence synthesis that considers evidence regarding clinical effectiveness, safety, cost-effectiveness and, when broadly applied, includes social, ethical, and legal aspects of the use of health technologies. The precise balance of these inputs depends on the purpose of each individual HTA. A major use of HTAs is in informing reimbursement and coverage decisions by insurers and national health systems, in which case HTAs should include benefit-harm assessment and economic evaluation." [ 4 ] And "a multidisciplinary process that summarises information about the medical, social, economic and ethical issues related to the use of a health technology in a systematic, transparent, unbiased, robust manner. Its aim is to inform the formulation of safe, effective, health policies that are patient focused and seek to achieve best value. Despite its policy goals, HTA must always be firmly rooted in research and the scientific method ". [ 5 ]
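One standard tool in the economic evaluation that HTA draws on is the incremental cost-effectiveness ratio (ICER), defined as the incremental cost of the new technology divided by its incremental effect relative to a comparator, with effects often measured in quality-adjusted life years (QALYs). The sketch below computes it for made-up numbers; actual HTAs embed this calculation in decision-analytic models with uncertainty analysis.

```python
# A minimal sketch of the ICER: (cost_new - cost_old) / (effect_new - effect_old).
# All figures below are invented for illustration.

def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
    """Incremental cost per additional QALY gained."""
    delta_effect = qaly_new - qaly_old
    if delta_effect == 0:
        raise ValueError("No incremental effect; the ICER is undefined.")
    return (cost_new - cost_old) / delta_effect

# Hypothetical example: the new technology costs 12,000 more and yields 0.4 extra QALYs.
ratio = icer(cost_new=30_000, cost_old=18_000, qaly_new=5.4, qaly_old=5.0)
print(f"ICER: {ratio:,.0f} per QALY gained")  # 30,000 per QALY
```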
Health technology assessment is intended to provide a bridge between the world of research and the world of decision-making. [ 6 ] HTA is an active field internationally and has seen continued growth fostered by the need to support management, clinical, and policy decisions. It has also been advanced by the evolution of evaluative methods in the social and applied sciences, including clinical epidemiology and health economics . Health policy decisions are becoming increasingly important as the opportunity costs from making wrong decisions continue to grow. [ 7 ] HTA is now also used in assessment of innovative medical technologies like telemedicine e.g. by use of the Model for assessment of telemedicine (MAST).
Health technology can be defined broadly as:
Any intervention that may be used to promote health, to prevent, diagnose or treat disease or for rehabilitation or long-term care. This includes the pharmaceuticals, devices, procedures and organizational systems used in health care. [ 8 ]
The discipline of HTA was first developed in the U.S. Office of Technology Assessment, which published its first report in 1976. [ 9 ] The growth of HTA internationally can be seen in the expanding membership of the International Network of Agencies for Health Technology Assessment (INAHTA), a non-profit umbrella organization established in 1993. [ 10 ] Organizations and individuals involved in the production of HTA publications may also be affiliated with international societies such as Health Technology Assessment International (HTAi) [ 11 ] and the International Society for Pharmacoeconomics and Outcomes Research (ISPOR). [ 12 ] Academic courses, typically in Masters programs, are also offered in health technology assessment and management. [ 13 ] [ 14 ]
The World Health Organization provides an overview of countries and their corresponding HTA agencies. [ 15 ]
The United Kingdom's National Institute for Health and Care Research (NIHR) runs several research programmes that may be viewed as falling into the realm of Health Technology Assessment. Of particular note is the NIHR Health Technology Assessment programme, its longest running, which undertakes both conventional HTA in the form of Evidence Synthesis and modelling, and evidence generation with a large portfolio of pragmatic RCTs and cohort studies . [ 16 ] The programme's research is regularly published in NIHR's journal Health Technology Assessment . [ 17 ]
Also in the UK, the Multidisciplinary Assessment of Technology Centre for Healthcare carries out HTA in collaboration with the health service, the NHS and various industrial partners. MATCH is organised into four themes addressing key HTA topics including Health Economics, Tools for Industry, User Needs and Procurement and Supply chain.
Canada also has a health technology assessment body called Canada's Drug Agency, [ 18 ] formerly called the Canadian Agency for Drugs and Technologies in Health (CADTH). [ 19 ]
As of today, 11 Italian regions have issued specific regional laws or regulations to manage HTA activities and processes at regional level: Abruzzo, Basilicata, Emilia-Romagna, Lazio, Liguria, Lombardia, Piemonte, Puglia, Sicilia, Toscana, and Veneto. In another four regions (Calabria, Marche, Umbria, and Valle D'Aosta) and in the two autonomous provinces of Bolzano and Trento, HTA is performed at different levels, even if no legislation has yet been produced. [ 20 ]
A recent study [ 21 ] explored the implementation of HTA in three middle-income countries (MICs) and its influence on health system objectives. The study investigated the impact of HTA globally through a systematic literature review. The study also surveyed stakeholders from the middle-income countries.
The results indicated that the benefits of HTA implementation in these countries largely outweigh the drawbacks. The major advantages identified include enhanced transparency and accountability in healthcare decisions, leading to more informed and equitable healthcare policies.
The study showed that HTA has a positive impact on several aspects of healthcare systems.
It was also noted that HTA's influence extends to the broader health system goals, such as health gain, equity in health, and responsiveness to patient needs. However, the impact on direct health gains and financial protection of households is less pronounced.
The study emphasizes the gradual adoption of HTA in MICs and the necessity for continuous assessment of its impact. | https://en.wikipedia.org/wiki/Health_technology_assessment |
Healthcare Environmental Services ( HES ) Limited (company number SC173861 ) was a company based in Shotts . It claimed to be the largest independent medical waste management solutions company in the UK. On 30 April 2019, HES was placed into liquidation.
HES was a private limited company, incorporated on 26 March 1997. Its registered office address was Hassockrigg Ecopark, Shotts Road, Shotts, Lanarkshire, ML7 5TQ. Companies House lists two stated " Persons with significant control ", Mr Garry Pettigrew (Director and Secretary) and Mrs Alison Pettigrew (Director).
As of 1 May 2019, the company's status on Companies House is listed as "In Liquidation".
What relationship, if any, exists between HES Ltd, One Waste Solution Limited , HEG Sustainable Solutions Limited , Healthcare Sharp Systems Limited , Healthcare Environmental (Group) Limited , and Healthcare Washroom Services Limited is unclear, other than the fact that Mr Garry Pettigrew is a named officer of each.
HES Ltd acquired GW Butler Ltd in 2014 which led to the Competition and Markets Authority (CMA) issuing a formal enforcement order in November 2014. On 18 March 2018 the CMA cleared the completed acquisition. [ 1 ]
All 400 staff at HES were given redundancy notices on 27 December 2018 as a result of the scandal outlined below, [ 2 ] and the company went into liquidation in April 2019. [ 3 ]
The company operated waste management facilities in England.
The company operated two (possibly three) waste management facilities in Scotland.
In 2009 HES Ltd won a 10-year contract to dispose of NHS Scotland's clinical waste, including waste from all Scottish hospitals, GP surgeries, dental practices and pharmacies, taking over from the incumbent service provider Stericycle. [ 4 ]
Bidding for the next 10-year NHS Scotland contract ( NP805/19 Healthcare Waste Services across NHS Scotland ), worth an estimated £140 million, opened on 3 June 2018 and closed on 12 September 2018.
In April 2017 HES Limited won a contract to provide clinical waste services to GPs and pharmacies in Cumbria and north-east England, after putting in a substantially cheaper offer of £310,000 than the incumbent contract holder, Stericycle , which bid £479,999. Stericycle launched a legal challenge against NHS England 's decision, which was dismissed in a judgment issued by the Honourable Mr Justice Fraser on 27 July 2018. The judgment severely criticised the behaviour of SRCL (Stericycle). [ 5 ] [ 6 ] [ 7 ]
A scandal erupted in October 2018 when it emerged that HES Ltd, which had contracts for managing clinical waste produced by the NHS in Scotland and England, was in breach of its environmental permits at four of its six sites in England [ citation needed ] for having more waste on site than its permits allowed, and for storing waste inappropriately.
HES Ltd has repeatedly stated that the backlog of clinical and healthcare waste, which led to the compliance issues at its sites, is a direct consequence of reduced available capacity at incinerators (particularly high-temperature incinerators), as well as the reclassification of waste previously classed as "offensive", which has meant that more of the waste produced by NHS trusts now needs to be incinerated. Waste classed as "offensive" could simply have been sent to landfill; however, the NHS is now classifying this same waste stream as hazardous waste, meaning it must be incinerated. Incinerating more waste helps the NHS comply with the Government's, and its own, aspiration of reducing waste to landfill. There is some support within the industry sector for the view that the reclassification of "offensive" waste is putting pressure on the incinerators.
As a result of the backlog of waste at HES Ltd sites, and the ongoing enforcement action by the Environment Agency, 17 NHS trusts in Yorkshire terminated their contracts. [ citation needed ] HES Ltd was reported as saying it was going to sue the relevant trusts for compensation [ 8 ] of "upwards of £15 million", although it is unclear whether proceedings have been started. [ 9 ] HES Ltd still has contracts with 30 other trusts in England, and a waste disposal contract with NHS England for primary care and pharmacy. [ 10 ]
The backlog of clinical waste and issues with HES Ltd, including the possibility of service disruption and the knock-on impact this could have on the NHS, led to the issue being discussed at a meeting of the Cobra national incident committee. [ 11 ] As part of an emergency plan the Government handed the contracts to Mitie to ensure a level of continuity in service.
Concern has been raised that the Government is employing double standards after concerns about the way waste was being stored at the sites now under the control of Mitie came to light. [ 12 ] In particular, HES Ltd states that the storage conditions at the Mitie sites are worse than those at its own sites, which were subject to the enforcement action that led to the loss of contracts in the first place. However, the Government has said that Mitie has had no issue finding the necessary incineration capacity to dispose of the waste, something which HES Ltd said was the principal reason for the backlog developing. HES Ltd has countered this by saying that this is because Mitie is paying an excessive fee for that capacity and that "a number of clinical waste contractors have sought to profit from the current situation by increasing their standard tonnage rates". [ 13 ] Had HES Ltd had that money at its disposal, it too could have paid the premium price for the capacity; instead, it had to work within the rates agreed in its contracts.
In November 2018 the company released minutes of a meeting at which Fiona Daly, an NHS Improvement official, "acknowledged there appeared to be national market capacity issue". The Environment Agency denies that there is a shortage of suitable incinerator capacity. In addition to enforcement activity to clear the sites, the agency has launched a criminal investigation. It had issued 13 warning notices and two compliance notices in the last year. The Scottish Environment Protection Agency had also issued enforcement notices in respect of the sites in Dundee and Shotts. [ 14 ]
The company transferred 23 workers from its Normanton site to Mitie, under the Transfer of Undertakings (Protection of Employment) Regulations 2006 , but Mitie denied that they had been transferred. [ 15 ] It emerged that Mitie was charging the 18 trusts in Yorkshire and Humber £10.4 million per year where HES had been charging them £3.3 million. [ 16 ]
On 6 December HES stopped collecting clinical waste from many NHS trusts after the company was informed it would lose the Scottish NHS contract, and its banking facilities were cut off. [ 17 ] The NHS trusts have had to set up temporary storage measures under the NHS Emergency Preparedness Resilience and Response procedures after having been let down by the company. [ 18 ]
A report, based on Environment Agency documents, published in January 2019 said that anatomical waste from NHS hospitals was not stored in fridges as it should have been. The refrigeration unit at Normanton was seen not to be working in July 2018, at which time the site had more than 356 tonnes of waste stored, five times more than was authorised. In February 2018 there were 14 carts full of anatomical waste stored outside the refrigeration unit in Newcastle. [ 19 ]
In March 2019 it emerged that there was indeed a shortage of clinical waste incineration capacity and hospitals were asked by NHS England to stockpile clinical waste. The contract with Mitie was estimated to be triple the price previously charged by HES. [ 20 ]
A new contract for waste disposal in Scotland was set up with Tradebe Healthcare National, a Spanish company, to start in August 2019. [ 21 ]
The Normanton site was taken over by Sharpsmart in 2019. They discovered 400 tonnes of clinical waste, including some marked radioactive in a bin dated June 2017. [ 22 ]
In order to operate its waste management facilities, the company had to obtain and comply with the relevant environmental authorizations from the relevant regulator: in England this is the Environment Agency (EA), and in Scotland the Scottish Environment Protection Agency (SEPA).
Inspection reports, and reports submitted in compliance with the conditions of the authorizations, are held on a public register.
Notes on the compliance ratings: operations at the Calderhead Road site ceased in early 2016 and moved to the custom-built site at Hassockrigg, Shotts Road, so that site's rating of "Excellent" is misleading, as the site had ceased operating as early as February 2016. The number of inspections on which each compliance rating is based is also highly variable.
| https://en.wikipedia.org/wiki/Healthcare_Environmental_Services
In its succinct definition, healthcare engineering is "engineering involved in all aspects of healthcare". [ 1 ] The term engineering in this definition covers all engineering disciplines such as biomedical, chemical, civil, computer, electrical, environmental, hospital architecture, industrial, information, materials, mechanical, software, and systems engineering.
Based on the definition of healthcare , a more elaborated definition is: "Healthcare engineering is engineering involved in all aspects of the prevention , diagnosis , treatment , and management of illness, as well as the preservation and improvement of physical and mental health and well-being, through the services offered to humans by the medical and allied health professions ". [ 1 ]
Almost all engineering disciplines (e.g., biomedical, chemical, civil, computer, electrical, environmental, industrial, information, materials, mechanical, software, and systems engineering) have made significant contributions and brought about advances in healthcare. Contributions have also been made by healthcare professionals (e.g., physicians , dentists , nurses , pharmacists , allied health professionals , and health scientists) who are engaged in supporting, improving, and/or advancing healthcare through engineering approaches.
Healthcare engineering is expected to play a role of growing importance as healthcare continues to be one of the world's largest and fastest-growing industries. [ 2 ] [ 3 ] Engineering is a major factor of advancement in healthcare, creating, developing, and implementing cutting-edge devices, systems, and procedures attributed to breakthroughs in electronics, information technology , miniaturization , material science , optics , and other fields. These advances address challenges such as the continued rise in healthcare costs , the quality and safety of healthcare, care of the aging population, management of common diseases, the impact of high technology, increasing demands for regulatory compliance , risk management , and reducing litigation risk. As the demand for engineers in healthcare continues to increase, healthcare engineering will be recognized as the most important profession where engineers make major contributions directly benefiting human health.
The American Society of Healthcare Engineering (ASHE), established in 1962, [ 4 ] was one of the first to publicize the term healthcare engineering . ASHE, as well as its many local affiliate societies, is devoted to the health care physical environment, including design, building, maintenance, and operation of hospitals and other health care facilities, which represents only one sector of engineers' activities in healthcare. The term healthcare engineers first appeared in the scientific literature in 1989, where the critical role of engineers in the healthcare delivery system was discussed. [ 5 ] A number of academic programs have adopted the name healthcare engineering (e.g., Indiana University , [ 6 ] Northwestern University , [ 7 ] Purdue University , [ 8 ] Texas Tech University , [ 9 ] University of Illinois , [ 10 ] University of Michigan , [ 11 ] University of North Carolina , [ 12 ] University of Southern California , [ 13 ] University of Toronto [ 14 ] [ 15 ] ), although the description or definition of the term by these programs varies, as each institution has designed its program based on its own distinctive interest, strength, and focus. The first scholarly journal dedicated to healthcare engineering, Journal of Healthcare Engineering , [ 16 ] [ 17 ] was launched in 2010 by Dr. Ming-Chien Chyu , focusing on engineering involved in all aspects of healthcare delivery processes and systems. In the meantime, a number of companies with various foci have adopted healthcare engineering in their names.
Healthcare engineering was first defined in a white paper [ 1 ] published in 2015 by Chyu and 40 co-authors who are active members of and contributors to the healthcare engineering community around the world. The white paper was reviewed by more than 280 reviewers, including members of the US National Academy of Engineering , engineering deans of the world's top universities, administrators and faculty members of healthcare engineering academic programs, leaders of healthcare/medical and engineering professional societies and associations, leaders of healthcare industry and government, and healthcare engineering professionals from around the world. The white paper documents a clear, rigorous definition of healthcare engineering as an academic discipline, an area of research, a field of specialty, and a profession. It is expected to raise the status and visibility of the field; help students choose healthcare engineering-related fields as majors; help engineers and healthcare professionals choose healthcare engineering as a profession; define healthcare engineering as a specialty area for the research community, funding agencies, and conference or event organizers; help job-search databases properly categorize healthcare engineering jobs; help healthcare employers recruit from the right pool of expertise; bring academic administrators' attention to healthcare engineering when considering new program initiations; help governments and institutions at different levels put healthcare engineering into perspective for policy making, budgeting, and other purposes; and help publishers and librarians categorize literature related to healthcare engineering. Based on this white paper, a global, non-profit professional organization, the Healthcare Engineering Alliance Society (HEALS), was founded by Chyu in 2015, focusing on improving and advancing all aspects of healthcare through engineering approaches. The white paper has been cited in numerous scientific papers. [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ]
The purpose of healthcare engineering is to improve human health and well-being through engineering approaches.
Healthcare engineering covers the following two major fields: [ 1 ]
Updated ramifications and lists of topics within individual subjects are available from authoritative sources such as the leading societies/associations of individual subjects and government organizations. [ 1 ]
(I) Engineering for healthcare intervention
Fundamentals
Engineering for disease prevention, diagnosis, treatment, and management
Engineering for patient care
Engineering for medical specialties
Engineering for dental specialties
Engineering for allied health specialties
Engineering for nursing – including nursing in all related areas
Engineering for pharmacy
(II) Engineering for healthcare systems
Healthcare system management, improvement and reform
Healthcare information systems
Healthcare facilities
Healthcare policy
(III) Others
Healthcare engineering education and training
Future of healthcare
Healthcare engineering features a synergy among the healthcare and medical sectors of all engineering disciplines and the engineering and technology sectors of the health sciences . [ 1 ]
Healthcare engineering professionals are mainly (a) engineers from all engineering disciplines such as biomedical, chemical, civil, computer, electrical, environmental, industrial, information, materials, mechanical, software, and systems engineering, and (b) healthcare professionals such as physicians, dentists, nurses, pharmacists, allied health professionals, and health scientists, who are engaged in supporting, improving, and/or advancing any aspect of healthcare through engineering approaches, in accordance with the above definition of healthcare engineering. [ 1 ] Since some healthcare professionals engaged in healthcare engineering may not be considered to be "engineers", "healthcare engineering professional" is a more appropriate term than "Healthcare Engineer".
Healthcare engineering professionals generally perform their jobs in, with, or for the healthcare industry . The major sectors and subsectors of the healthcare industry, along with healthcare engineering professionals' contributions to them, are summarized in the defining white paper. [ 1 ]
Engineers from almost all engineering disciplines (such as biomedical, chemical, civil, computer, electrical, environmental, industrial, information, materials, mechanical, software, and systems engineering) are always in demand in healthcare. It is a common misconception that only engineers with a background in biomedical engineering, clinical engineering, or related areas may work in healthcare. There is, however, a need for courses and certificate-type programs that prepare non-biomedical engineering students and practicing engineers for service in healthcare. Healthcare professionals (physicians, dentists, nurses, pharmacists, allied health professionals, etc.) may likewise benefit from training in applying engineering to their practice, problem solving, and the advancement of healthcare. Due to the rapid advance of technology, continuing education plays a crucial role in ensuring healthcare engineering professionals' continued competence. | https://en.wikipedia.org/wiki/Healthcare_engineering
In computer security , heap feng shui (also known as heap grooming [ 1 ] ) is a technique used in exploits to facilitate arbitrary code execution . [ 2 ] The technique attempts to manipulate the layout of the heap by making heap allocations of carefully selected sizes. It is named after feng shui , an ancient Chinese system of aesthetics that involves the selection of precise alignments in space.
The term is general and can be used to describe a variety of techniques for bypassing heap protection strategies . The paper often credited with naming the technique, "Heap Feng Shui in JavaScript", [ 3 ] used it to refer to an exploit in which a dangling pointer was aligned with a portion of an attacker-controlled chunk. However, it has also found usage in capture the flag events to describe attacks that exploit characteristics of heap layout, such as the spacing between chunks. [ 4 ]
This computer security article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Heap_feng_shui |
Heap leaching is an industrial mining process used to extract precious metals , copper , uranium , and other compounds from ore using a series of chemical reactions that absorb specific minerals and re-separate them after their division from other earth materials. Heap leach mining is similar to in situ mining , but differs in that it places ore on a liner and then adds the chemicals via drip systems to the ore, whereas in situ mining lacks these liners and pulls pregnant solution up to obtain the minerals. Heap leaching is widely used in modern large-scale mining operations as it produces the desired concentrates at a lower cost compared to conventional processing methods such as flotation, agitation, and vat leaching. [ 1 ]
Additionally, dump leaching is an essential part of most copper mining operations and, along with other factors, determines the quality grade of the produced material.
Because dump leaching can contribute substantially to the economic viability of the mining process, it is advantageous to include the results of the leaching operation in the overall economic evaluation of the project. [ 2 ]
The process has ancient origins; one of the classical methods for the manufacture of copperas (iron sulfate) was to heap up iron pyrite and collect the leachate from the heap, which was then boiled with iron to produce iron(II) sulfate . [ 3 ]
The mined ore is usually crushed into small chunks and heaped on an impermeable plastic or clay lined leach pad where it can be irrigated with a leach solution to dissolve the valuable metals. While sprinklers are occasionally used for irrigation, more often operations use drip irrigation to minimize evaporation , provide more uniform distribution of the leach solution, and avoid damaging the exposed mineral. The solution then percolates through the heap and leaches both the target and other minerals. This process, called the "leach cycle," generally takes from one or two months for simple oxide ores (e.g. most gold ores) to two years for nickel laterite ores. The leach solution containing the dissolved minerals is then collected, treated in a process plant to recover the target mineral and in some cases precipitate other minerals, and recycled to the heap after reagent levels are adjusted. Ultimate recovery of the target mineral can range from about 30% of the contained metal for run-of-mine dump leaching of sulfide copper ores to over 90% for the ores that are easiest to leach, such as some oxide gold ores.
A number of essential questions must be addressed during the heap leaching process. [ 4 ]
In recent years, the addition of an agglomeration drum has improved on the heap leaching process by allowing for a more efficient leach. The rotary drum agglomerator works by taking the crushed ore fines and agglomerating them into more uniform particles. This makes it much easier for the leaching solution to percolate through the pile, making its way through the channels between particles.
The addition of an agglomeration drum also has the added benefit of being able to pre-mix the leaching solution with the ore fines, achieving a more concentrated, homogeneous mixture and allowing the leach to begin before the ore is stacked on the heap. [ 5 ]
Although heap leach design has made significant progress over the last few years through the use of new materials and improved analytical tools, industrial experience shows that there are significant benefits from extending the design process beyond the liner and into the rock pile itself. Characterization of the physical and hydraulic (hydrodynamic) properties of ore-for-leach focuses on the direct measurement of the key properties of the ore.
Theoretical and numerical analysis, together with operational data, show that these fundamental mechanisms are controlled by scale, dimensionality, and heterogeneity, all of which adversely affect the scalability of metallurgical and hydrodynamic properties from the lab to the field. Neglecting these mechanisms can result in a number of practical and financial problems that persist throughout the life of the heap, impacting the financial return of the operation. Through procedures that go beyond the commonly employed metallurgical testing, and the integration of data gleaned through real-time 3D monitoring, a more complete, representative characterization of the physicochemical properties of the heap environment is obtained. This improved understanding results in a significantly higher degree of accuracy in terms of creating a truly representative sample of the environment within the heap. [ 6 ]
By adhering to the characterization identified above, a more comprehensive view of heap leach environments can be realized, allowing the industry to move away from the de facto black-box approach to a physicochemically inclusive industrial reactor model.
The crushed ore is irrigated with a dilute alkaline cyanide solution. The solution containing the dissolved precious metals in a pregnant solution continues percolating through the crushed ore until it reaches the liner at the bottom of the heap where it drains into a storage (pregnant solution) pond. After separating the precious metals from the pregnant solution, the dilute cyanide solution (now called "barren solution") is normally re-used in the heap-leach-process or occasionally sent to an industrial water treatment facility where the residual cyanide is treated and residual metals are removed. In very high rainfall areas, such as the tropics, in some cases there is surplus water that is then discharged to the environment, after treatment, posing possible water pollution if treatment is not properly carried out. [ citation needed ]
The production of one gold ring through this method can generate 20 tons of waste material. [ 7 ]
During the extraction phase, the gold ions form complex ions with the cyanide:
Recuperation of the gold is readily achieved with a redox -reaction:
The most common methods to remove the gold from solution are either using activated carbon to selectively absorb it, or the Merrill-Crowe process where zinc powder is added to cause a precipitation of gold and zinc. The fine product can be either doré (gold-silver bars) or zinc-gold sludge that is then refined elsewhere.
The method is similar to the cyanide method above, except sulfuric acid is used to dissolve copper from its ores. The acid is recycled from the solvent extraction circuit (see solvent extraction-electrowinning , SX/EW) and reused on the leach pad. Leaching pyrite produces byproducts such as iron(II) sulfate and jarosite , and sometimes even the same sulfuric acid that is needed for the process. Both oxide and sulfide ores can be leached, though the leach cycles are much different and sulfide leaching requires a bacterial, or bio-leach, component.
In 2011 leaching, both heap leaching and in-situ leaching , produced 3.4 million metric tons of copper, 22 percent of world production. [ 8 ] The largest copper heap leach operations are in Chile, Peru, and the southwestern United States.
Although heap leaching is a low-cost process, it normally has recovery rates of 60-70%. It is normally most profitable with low-grade ores. Higher-grade ores are usually put through more complex milling processes where higher recoveries justify the extra cost. The process chosen depends on the properties of the ore.
The final product is cathode copper.
Nickel heap leaching is an acid heap leach method like that used for copper, in that it utilises sulfuric acid instead of cyanide solution to dissolve the target minerals from crushed ore. The amount of sulfuric acid required is much higher than for copper ores, as high as 1,000 kg of acid per tonne of ore, but 500 kg is more common. The method was originally patented by the Australian miner BHP and is being commercialized by Cerro Matoso in Colombia, a wholly owned subsidiary of BHP; Vale in Brazil; and European Nickel for the rock laterite deposits of Turkey, the Talvivaara mine in Finland, the Balkans, and the Philippines. There are currently no operating commercial-scale nickel laterite heap leach operations, but there is a sulphide heap leach operation in Finland.
Nickel recovery from the leach solutions is much more complex than for copper and requires various stages of iron and magnesium removal, and the process produces both leached ore residue ("ripios") and chemical precipitates from the recovery plant (principally iron oxide residues, magnesium sulfate and calcium sulfate ) in roughly equal proportions. Thus, a unique feature of nickel heap leaching is the need for a tailings disposal area.
The final product can be nickel hydroxide precipitates (NHP) or mixed metal hydroxide precipitates (MHP), which are then subject to conventional smelting to produce metallic nickel.
Uranium heap leaching is similar to copper oxide heap leaching, also using dilute sulfuric acid. Rio Tinto is commercializing this technology in Namibia and Australia ; the French nuclear fuel company Orano does so in Niger, with two mines, and in Namibia; and several other companies are studying its feasibility.
The final product is yellowcake and requires significant further processing to produce fuel-grade feed.
While most mining companies have shifted from the previously accepted sprinkler method to slowly dripping the chosen chemicals, including cyanide or sulfuric acid, closer to the actual ore bed, [ 9 ] heap leach pads have not changed much over the years. There are still four main categories of pads: conventional, dump leach, valley fills, and on/off pads. [ 10 ] Typically, each pad has only a single geomembrane liner, with a minimum thickness of 1.5 mm and usually thicker.
Conventional pads, the simplest in design, are used for mostly flat or gently sloping areas and hold thinner layers of crushed ore. Dump leach pads hold more ore and can usually handle a less flat terrain. Valley fills are pads situated at valley bottoms or levels that can hold everything falling into them. On/off pads involve putting significantly larger loads on the pads and removing and reloading the ore after every cycle.
Many of these mines, which previously had digging depths of about 15 meters, are digging deeper than ever before, to approximately 50 meters and sometimes more. This means that, in order to accommodate all of the ground being displaced, pads have to hold higher weights as more crushed ore is contained in a smaller area (Lupo 2010). [ 11 ] With that increase in build-up comes the potential for a decrease in yield or ore quality, as well as the potential for weak spots in the lining or areas of increased pressure build-up. This build-up still has the potential to lead to punctures in the liner. As of 2004, cushion fabrics, which could reduce potential punctures and their leaking, were still being debated due to their tendency to increase risks if too much weight on too large a surface was placed on the cushioning (Thiel and Smith 2004). [ 12 ] In addition, some liners, depending on their composition, may react with salts in the soil as well as acid from the chemical leaching, affecting the performance of the liner. This can be amplified over time. [ citation needed ]
Heap leach mining works well for large volumes of low-grade ores, as reduced metallurgical treatment (comminution) of the ore is required in order to extract an equivalent amount of minerals when compared to milling. The significantly reduced processing costs are offset by the reduced yield of usually approximately 60-70%. The amount of overall environmental impact caused by heap leaching is often lower than for more traditional techniques. [ citation needed ] The method also requires less energy, which many consider to make it an environmentally preferable alternative.
In the United States, the General Mining Law of 1872 gave rights to explore and mine on public domain land; the original law did not require post-mining reclamation (Woody et al. 2011). Mined land reclamation requirements on federal land depended on state requirements until the passage of the Federal Land Policy and Management Act in 1976. Currently, mining on federal land must have a government-approved mining and reclamation plan before mining can start. Reclamation bonds are required. [ 13 ] Mining on either federal, state, or private land is subject to the requirements of the Clean Air Act and the Clean Water Act .
One solution proposed to reclamation problems is the privatization of the land to be mined (Woody et al. 2011).
With the rise of the environmentalist movement has also come an increased appreciation for social justice, and mining has shown similar trends lately. Societies located near potential mining sites are at increased risk of being subjected to injustices as their environment is affected by the changes made to mined lands (either public or private) that could eventually lead to problems in social structure, identity, and physical health (Franks 2009). [ 14 ] Many [ who? ] have argued that by cycling mine power through local citizens, this disagreement can be alleviated, since both interest groups would have a shared and equal voice and understanding of future goals. However, it is often difficult to match corporate mining interests with local social interests, and money is often a deciding factor in the success of any disagreements. If communities feel that they have a valid understanding of and power in issues concerning their local environment and society, they are more likely to tolerate and encourage the positive benefits that come with mining, as well as more effectively promote alternative methods to heap leach mining using their intimate knowledge of the local geography (Franks 2009). | https://en.wikipedia.org/wiki/Heap_leaching
Hearing conservation programs [ 1 ] are programs designed to reduce the risk of hearing loss due to hazardous noise exposure, provided they are implemented correctly and with high quality. Hearing conservation programs require knowledge about risk factors such as noise and ototoxicity , hearing, hearing loss , protective measures to prevent hearing loss at home, in school, at work, in the military, and at social/recreational events, and legislative requirements. [ 2 ] Regarding occupational exposures to noise, a hearing conservation program is required by the Occupational Safety and Health Administration (OSHA) "whenever employee noise exposures equal or exceed an 8-hour time-weighted average sound level (TWA) of 85 decibels (dB) measured on the A scale (slow response) or, equivalently, a dose of fifty percent." [ 3 ] This 8-hour time-weighted average is known as an exposure action value . While the Mine Safety and Health Administration (MSHA) also requires a hearing conservation program, MSHA does not require a written hearing conservation program. MSHA's hearing conservation program requirement can be found in 30 CFR § 62.150, [ 4 ] and is very similar to the OSHA hearing conservation program requirements. Therefore, only the OSHA standard 29 CFR 1910.95 will be discussed in detail.
According to Alice Sater, employers are not implementing these programs effectively, personal protective equipment does not protect workers well, and the risk of hearing loss is not reduced. [ 5 ]
The OSHA standard contains a series of program requirements.
A sound survey is often completed to determine areas of potential high noise exposure. A noise screening is completed initially to determine which areas are higher than 80 dBA. For these areas, an official sound survey will take place. [ 6 ] This type of survey is normally completed using a sound level meter (SLM). A sound level meter takes a measurement of the sound present in the environment at that moment. There are three types of sound level meters. Type 0 is a precision instrument normally used in laboratories. Type 1 is for precision measurements taken in the field. Type 2 sound level meters are less precise than type 1 and are often used to take all-purpose sound level measurements. There are also noise dosimeters that are worn on the body and measure the amount of noise exposure an individual receives over a given time period. OSHA guidelines state that either an SLM or a noise dosimeter may be used for sound monitoring. [ 7 ]
Surveys must be repeated when there are significant changes in machinery and/or processes that would affect the noise level. [ 7 ]
Engineering controls and administrative controls are ranked as the most effective protection from noise in the hierarchy of controls. [ 7 ] Engineering controls are measures taken to reduce the intensity of noise at the source or between the source and a person exposed to the noise. [ 8 ] This can be done by choosing tools that make less noise, installing a barrier between the worker and the noise, enclosing the machinery all together, or making sure the machinery is maintained properly (lubricating equipment). [ 8 ] Administrative controls are limitations around noise sources that limit length of noise exposure. [ 8 ] Some known methods are running loud equipment when less workers are present, controlling the amount of time a worker is allowed around the noise source, constructing areas that allow employees a chance to escape from the noise (a sound proof room to give recovery time), or increasing the distance between the worker and the excessive noise source. [ 8 ]
If engineering controls fail to maintain an 8-hour time-weighted average below 85 dBA, then a hearing protection device (HPD) is required. [ 9 ] There are two general types of HPDs: earplugs and earmuffs. Each one has its own benefits and drawbacks. The selection of the proper HPD to be worn is commonly done by an industrial hygienist so that the proper amount of noise protection is worn. OSHA requires that HPD be given free of charge. [ 10 ]
There are four general classes of earplugs . These include: pre-molded, formable, custom molded and semi-insert.
Earmuffs are another type of HPD. The main difference between earmuffs and earplugs, is that earmuffs are not inserted inside the ear canal. Instead the muffs create a seal around the outside of the ear to prevent noise from reaching the inner ear. Earmuffs are easy to wear and often provide a more consistent fit than an earplug. There are earmuffs available that use the principle of active noise control to help reduce noise exposures. However, the protection earmuffs offer may be mitigated by large sideburns or glasses as the seal of the earmuffs may be broken by these objects. [ 11 ]
The United States Environmental Protection Agency (EPA) requires that all hearing protection devices be labeled with their associated noise reduction rating (NRR). [ 12 ] The NRR provides the estimated attenuation of the hearing protection device. The NRR obtained in the lab is often higher than the attenuation provided in the field. [ 13 ] [ 14 ] To determine the amount of noise reduction afforded by a hearing protection device for A-weighted measurements, OSHA recommends that 7 dB be subtracted from the NRR. This derated NRR should then be subtracted from the individual's time-weighted average (TWA) noise exposure. It must then be determined whether the attenuation is appropriate for the level of noise the individual is exposed to. [ 15 ]
There are several fit testing devices on the market that will measure the attenuation an individual receives when wearing their HPD. These systems typically use one of two methods to verify fit. The individual wears their HPD and a microphone is placed inside the ear canal and another microphone is placed outside of the ear. A sound is played and the difference between the microphones is the attenuation for that individual, known as the personal attenuation rating (PAR). In the second method, a series of sounds are played for the individual, and the lowest level that they can detect the sound is recorded. The individual then wears the HPD and the same sounds are played. The amount that the sound has to be increased so that the individual can hear it is the PAR. [ 16 ]
Audiometric testing is used to determine hearing sensitivity and is part of a hearing conservation program, where it is used to identify significant hearing loss. Audiometric testing can identify those who have permanent hearing loss, called noise-induced permanent threshold shift (NIPTS). [ 17 ]
Completing baseline audiograms and periodically monitoring threshold levels is one way to track any changes in hearing and identify if there is a need to make improvements to the hearing conservation program. OSHA, which monitors workplaces in the United States to ensure safe and healthful working conditions, specifies that employees should have a baseline audiogram established within 6 months of their first exposure to 85 dBA time-weighted average (TWA). If a worker is unable to obtain a baseline audiogram within 6 months of employment, HPD is required to be worn if the worker is exposed to 85 dBA or above TWA. HPD must be worn until a baseline audiogram is obtained. [ 18 ] Under the MSHA, which monitors compliance to standards within the mining industry, an existing audiogram that meets specific standards can be used for the employee's baseline. Before establishing baseline, it is important that the employee limit excessive noise exposure that could potentially cause a temporary threshold shift and affect results of testing. OSHA stipulates that an employee be noise-free for at least 14 hours prior to testing. [ 18 ]
Periodic audiometric monitoring, typically completed annually as recommended by OSHA, can identify changes in hearing. There are specific criteria that the change must meet in order to require action. The criterion most commonly used is the standard threshold shift (STS), defined by a change of 10 dB or greater averaged at 2000, 3000, and 4000 Hz. [ 18 ] Age correction factors can be applied to the change in order to compensate for hearing loss that is age-related rather than work-related. If an STS is found, OSHA requires that the employee be notified of this change within 21 days. [ 18 ] Furthermore, any employee that is not currently wearing HPD is now required to wear protection. If the employee is already wearing protection, they should be refit with a new device and retrained on appropriate use. [ 18 ]
Another determination that is made includes whether an STS is "recordable" under OSHA standards, meaning the workplace must report the change to OSHA. In order to be recordable the employee's new thresholds at 2000, 3000, and 4000 Hz must exceed an average of 25 dB HL. [ 18 ] MSHA standard differs slightly in terms of calculation and terminology. MSHA considers whether an STS is "reportable" by determining if the average amount of change that occurs exceeds 25 dB HL. [ 18 ] The various measures that are used in occupational audiometric testing allow consistency in standards within workplaces. Completing baseline and follow-up audiograms allows workplaces to detect hearing loss as early as possible and determine whether changes need to be made to provide a safe working environment for their employees.
Proper training and education of those exposed to noise is the key to preventing noise-induced hearing loss . If employees are properly trained on how to follow a hearing conservation program, then the risk of noise-induced hearing loss is reduced. By providing information on the physiological effects of noise exposure, the importance of obtaining baseline and annual audiograms, and use of appropriate hearing protection, the program will provide a thorough knowledge base for employees involved. Providing a refresher training when appropriate will support retention of this information. [ 17 ] OSHA requires this training to be completed on an annual basis. Proper training is imperative since "even with a very modest amount of instruction attenuation performance can be significantly improved." [ 19 ] [ 20 ]
To carry out a hearing conservation training, the program may use a variety of materials to relay the necessary information. An assortment of written, video, audio, and hands on experience may make the training more interactive and meaningful to employees. It is recommended that materials also be translated into languages other than English so all employees can attend and benefit from the training. Pre- and post-assessments, a safe and secure learning environment, access to training media and equipment, informational handouts/pamphlets, and examples of hearing protection devices are all resources that can contribute to successful HLPP trainings.
The initial training for employees should cover the effects of noise on hearing; the purpose of hearing protectors, including the advantages, disadvantages, and attenuation of various types, and instructions on their selection, fitting, use, and care; and the purpose of audiometric testing and an explanation of the test procedures.
It is not enough to provide the employees with the information about occupational hearing loss and hearing conservation. There are many factors that may contribute to the employee's lack of compliance with training. These factors fall under three main categories: individual perceptions or beliefs, individual personality, and influencing variables.
Every worker has perceptions about their work environment, how noise and ototoxins affect them, and hearing conservation programs. Some workers may believe that they are invulnerable to hearing loss. [ 17 ] These workers may perceive that the noise is not loud enough to cause hearing loss. [ 17 ] Others know that 29% of workers may have noise-induced hearing loss, meaning 71% are not likely to develop a hearing loss due to noise. Because of these statistics, some workers may believe that they will fall into the 71%. Others may believe that they are too young to suffer from hearing loss. Still others have the incorrect belief that loud noise will make the ears tougher. A portion of workers may not realize the implications of a hearing loss and that hearing aids will be able to fix their hearing. [ 17 ] If there is not a perceived benefit to using hearing protection devices, it is less likely that individuals will participate. If workers perceive that there are barriers to taking action to prevent hearing loss, they are also less likely to participate in the program. These barriers may include hearing protection affecting their ability to perform their job well, their company being shut down due to the noise levels, hearing protector comfort, and chronic irritation and infection of the outer and middle ear. [ 17 ] [ 21 ]
A small number of individuals may see the use of HPDs as a sign of weakness or as unmanly. This may arise from peer pressure. [ 17 ]
Workers who have experienced or are currently experiencing tinnitus are more likely to use HPDs consistently. Similarly, workers who have suffered a temporary hearing threshold shift following loud noise exposure may be motivated by that experience to take preventive action and use HPDs. The use of HPDs is also more common in companies with more complete hearing conservation programs.
Motivational techniques can be implemented to promote hearing conservation program compliance and the use of hearing protection. One suggestion is continued education at the workers' audiometric screening. [ 21 ] They should be asked to bring along their current hearing protection device to the screening. If the results are normal and the inspection of the hearing protection device is good, praise can be given for following protocol. If there is a shift in their hearing, instruction can be given again about the proper use of hearing protection and the importance of wearing them. Audiograms can be very useful in showing workers how noise can affect their hearing. One specific way to do this is to perform two hearing tests on an employee on two different days. [ 21 ] One day the hearing test will be after wearing hearing protection all day and the other will be after not wearing hearing protection for the day. The difference can then be discussed with the worker and he/she has a tangible way to see how noise affects hearing. Another technique is using "internal triggers" to motivate employees to comply with the hearing conservation program. [ 22 ] If the individual already suffers from tinnitus and/or hearing loss they are probably more likely to use hearing protection because he/she does not want that problem to progress with noise exposure. Finally, the hearing protection offered should be comfortable so the worker will wear it. It is suggested that workers have a variety of hearing protection devices available to them, including at least one type of earmuff and two different forms of earplugs, to fit the individual needs and wants of the workers. [ 21 ]
OSHA requires that records of exposure measurements and audiometric tests be maintained. Audiometric test records are also required to include the employee's name and job classification, the date of the audiogram, the examiner's name, the date of the last acoustic or exhaustive calibration of the audiometer, and the employee's most recent noise exposure assessment.
Noise exposure measurement records must be maintained for at least 2 years. Audiometric test records must be retained for the duration of the affected employee's employment. Additionally, employees, former employees, representatives designated by the individual employee and the Assistant Secretary all must have access to these records. [ 23 ]
Proper program evaluation is important in maintaining the health of hearing conservation program. The National Institute for Occupational Safety and Health (NIOSH) has created a checklist to help evaluate the effectiveness of a hearing conservation program. It can be found on their website. [ 24 ] NIOSH recommends that fewer than 5% of exposed employees should have a 15 dB significant threshold shift in the same ear and same frequency. It also suggests using the term hearing loss prevention program rather than a hearing conservation program. While this change may seem superfluous, it is important to note the advancement. "Conservation" implies a response by the workplace caused by initial signs of employee hearing loss, whereas "prevention" promotes policies (such as "buy quiet") and procedures (such as hearing protection training and education) to decrease the possibility of occupational hearing loss from happening in the first place.
Simply having a hearing conservation program (even a program that complies completely with relevant government regulations) is not necessarily sufficient to prevent occupational hearing loss. [ 25 ] A 2017 Cochrane review found low-quality evidence that stricter legislation might reduce noise levels. Giving workers information on their noise exposure levels by itself was not shown to decrease exposure to noise. Moderate-quality evidence indicated that training in the correct fitting of ear protection has the potential to reduce noise to safer levels in the short term, but long-term evidence on prevention of hearing loss is lacking. External solutions such as proper maintenance of equipment can lead to noise reduction, but further study of this issue under real-life conditions is needed. Other possible solutions include improved enforcement of existing legislation and better implementation of well-designed prevention programs, which have not yet been proven conclusively to be effective. Lack of evidence does not imply lack of effect; further research on the impact of generally-accepted hearing conservation practices is much needed. [ 26 ]
In the meantime, many hearing conservation organizations advocate for "best practices" that go beyond mere compliance in order to more successfully prevent occupational hearing loss. [ 27 ] [ 25 ] Some of these are discussed below.
The Buy Quiet policy is an easy way to progress towards a safer work environment. Many traditionally noisy tools and machines are now being redesigned to run more quietly, so a "buy quiet" purchase policy should not require new engineering solutions in most cases. [ 25 ] As part of the "buy quiet" campaign, the New York City Department of Environmental Protection released a products and vendor guidance sheet to assist contractors in achieving compliance with the New York City Noise Regulations.
In order to make these plans effective, employees and administration need to be educated in occupational noise-induced hearing loss prevention. It is also necessary to identify and examine sources of noise first before being able to control the damage it may cause to hearing. For example, the National Institute for Occupational Safety and Health has conducted a study and created a database on handheld power tools for the sound power levels they expose their operators to. This Power Tools Database allows contractors in a trade-skill profession to monitor their exposure limits and allow them preparation to prevent permanent hearing damage.
Hazardous noise exposures exist outside the workplace [ 28 ] as well as on-the-job. [ 29 ] Furthermore, noise can interact with other health issues, potentially impacting the severity of the noise effects and/or the related health conditions. For these reasons, taking a Total Hearing Health approach that integrates occupational hearing loss prevention activities with overall health promotion activities can reduce the adverse effects of noise both on- and off-the-job. [ 30 ]
Total Hearing Health is based on the Total Worker Health ® concept. Total Worker Health is defined as "policies, programs, and practices that integrate protection from work-related safety and health hazards with promotion of injury and illness-prevention efforts to advance worker well-being." [ 31 ] Employers are required to protect workers from harmful working conditions; however, the Total Worker Health approach encourages organizations to address worker health and safety more broadly by establishing a comprehensive strategy to address both workplace and personal health risks as a more effective way of promoting worker health and safety. [ 31 ]
Regardless of the kind of work people do, nearly everyone will be exposed to hazardous noise levels at some point in their lives. Reducing the public health burden of hearing loss requires addressing noise risks in a holistic manner that includes strong occupational hearing loss prevention practices as well as consideration of risk factors beyond the workplace. A comprehensive approach to hearing health can mitigate interactions between noise and other health concerns. For example, noise-induced hearing loss may impact quality of life as a person ages, while age-related hearing loss may create workplace safety issues. [ 30 ] Ideas for expanding hearing conservation towards a Total Hearing Health approach include integrating hearing loss prevention with broader health promotion activities and addressing noise exposure both on and off the job.
Companies that have implemented this concept at their worksites have successfully increased employee engagement in hearing loss prevention and reduced noise levels. Winners of the Safe-in-Sound awards, whose experiences are documented as success stories, include Domtar's Kingsport Mill in TN , 3M in Hutchinson, MN, and Northrop Grumman in Linthicum, MD .
There are currently no standards or regulations for workers that already have a hearing loss. [ 32 ] OSHA provides recommendations only for addressing the needs of these employees who are exposed to high noise levels. Communication and the use of hearing protection devices with hearing aids are some of the issues that these workers face.
Hearing protection is required to protect the residual hearing of workers, even if there is a diagnosis of severe to profound deafness. [ 32 ] Specialized hearing protectors are available for such workers.
Appropriate hearing protection should be determined by the worker with the hearing-impairment, as well as the professional running the conservation program. [ 32 ] Hearing aids that are turned off are not acceptable forms of hearing protection. [ 33 ]
Not only do hearing aids amplify helpful sounds, but they also amplify the background noise of the environment the worker is in. These employees may want to continue to wear their amplification because of communication needs, or localization, but amplifying the noise may exceed the OSHA 8-hour permissible exposure limit (PEL) of 90 dBA. [ 32 ] Professionals in charge of the hearing conservation program may allow workers to wear hearing aids under earmuffs on a case-by-case basis. However, when in hazardous noise, hearing aids should not be worn. [ 32 ]
Hearing aids must be removed and audiometric testing requirements must be followed (see above). Employers should consider using manual techniques to obtain thresholds instead of a microprocessor audiometer. [ 32 ] This is dependent on the severity of the hearing loss. Hearing aids can be worn during the testing instructions, but then should be removed immediately afterwards. [ 32 ]
There are no regulations to protect children from excessive noise exposure, but it is estimated that 5.2 million children have noise-induced hearing loss (NIHL). [ 34 ] Due to increased worry among both parents and experts regarding NIHL in children, it has been suggested that hearing conservation programs be implemented in schools as part of their studies regarding health and wellness. The necessity for these programs is supported by the following reasons: 1. Children are not sheltered from loud noises in their daily lives, and 2. Promoting healthy behaviors at a young age is critical to future application. [ 35 ] The creation of a hearing conservation program for children will differ strongly from those created for the occupational settings discussed above. While children may not be exposed to factory or industrial noise on a daily basis, they may be exposed to noise sources such as firearms, music, power tools, sports, and noisy toys. All of these encounters with noise cumulatively increase their risk of developing Noise-induced hearing loss . With NIHL being a fully preventable condition, providing children with this type of education has the potential to reduce its future incidence. There are multiple organizations in existence that provide educators with the appropriate material to teach this topic; teachers simply need to be proactive about accessing them. [ 36 ] Below are examples of hearing conservation programs that have been designed specifically for children.
The primary goal of most hearing conservation programs at the elementary, middle, and high school levels is to spread knowledge about hearing loss and noise exposure. When an educational program is being created or adapted for use with children, behavior change theories are often employed to increase effectiveness. Behavior theory identifies possible obstacles to change while also highlighting factors that may encourage students to change. [ 21 ] The following are elements that are also considered during the implementation of a new program for children:
1. Adaptation of the program for the specific population (age, demographic, etc.)
2. Use of interactive games, lessons, and role-playing
3. Time to apply the skills that are taught
4. Recurring lessons on the same topic area [ 21 ]
Dangerous Decibels is a program designed to teach concepts related to the prevention of noise-induced hearing loss. Shown to be effective for children in 4th through 7th grade, the 50-minute presentation engages children in hands-on activities. The class learns about what sound is, how their ears hear and detect it, and how they can protect their hearing from dangerous decibels. Throughout the program, the class focuses on three strategies: Turn it Down, Walk Away, and Protect your Ears. [ 37 ]
Created by the American Speech-Language-Hearing Association , this campaign aims to teach children and their parents about practicing safe listening routines when listening to music through personal devices, such as an iPod. With the help of sponsors, ASHA hosts an educational concert series to promote safe music listening. [ 38 ]
Run by the Ear Science Institute of Australia, this school program was created to educate elementary-age children on the risks of high listening levels and the effects of hearing loss. The program has a mascot named Charlie and uses sound level meters, computer games, apps, and take-home packets to teach the concepts. Teachers also receive additional activities and worksheets for continued learning opportunities. [ 39 ]
Organized by the United States National Institutes of Health , this is a campaign created with the aim to increase parental awareness of both the causes and effects of noise induced hearing loss. By targeting parents instead of children, the goal is for adults to influence the behaviors of their children before bad habits are even created. Resources provided include web-based games and puzzles, downloadable graphics, and tips for school and home environments. [ 40 ]
Created by The Hearing Foundation of Canada, the Sound Sense classroom program teaches children how hearing works, how it can stop working, and offers ideas for safe listening. The classroom presentation satisfies the requirements for the science unit on sound taught in either grade 3 or 4, as well as the healthy living curriculum in grades 5 and 6. In addition, the webpage provides resources & games for children, parents, and teachers. [ 41 ]
An Australian program initiated by the HEARing Cooperative Research Centre and the National Acoustic Laboratories (NAL), HEARsmart aims to improve the hearing health of all Australians, particularly those at greatest risk of noise-related tinnitus and hearing loss. The program has a particular focus on promoting healthy hearing habits in musicians, live music venues and patrons. Resources include: Know Your Noise - an online risk calculator and speech-in-noise test, a short video that aims to raise awareness of tinnitus in musicians, and a comprehensive website with detailed information. [ 42 ]
Just as program evaluation is necessary in workplace settings, it is also an important component of educational hearing conservation programs to determine if any changes need to be made. This evaluation may consist of two main parts: assessment of students' knowledge and assessment of their skills and behaviors. To examine the level of knowledge acquired by the students, a questionnaire is often given with the expectation of an 85% competency level among students. If proficiency is too low, changes should be implemented. If the knowledge level is adequate, assessing behaviors is then necessary to see if the children are using their newfound knowledge. This evaluation can be done through classroom observation of both the students and teachers in noisy classroom environments such as music, gym, technology, etc. [ 43 ]
The Mine Safety and Health Administration (MSHA) requires that all feasible engineering and administrative controls be employed to reduce miners' exposure levels to 90 dBA TWA. The action level for enrollment in a hearing conservation program is 85 dBA 8-hour TWA , integrating all sound levels between 80 dBA and at least 130 dBA. MSHA uses a 5-dB exchange rate (the change in sound level, in decibels, that results in a halving of the allowable exposure time for an increase in sound level, or a doubling for a decrease, while maintaining the same noise dose). At and above exposure levels of 90 dBA TWA, the miner must wear hearing protection. At and above exposure levels of 105 dBA TWA, the miner must wear dual hearing protection. Miners may not be exposed to sounds exceeding 115 dBA with or without hearing protection devices. MSHA defines an STS as an average decrease in auditory sensitivity of 10 dB HL at the frequencies 2000, 3000, and 4000 Hz. (30 CFR Part 62 [ 44 ] ).
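The 5-dB exchange rate lends itself to a short worked illustration. The sketch below shows the usual arithmetic behind it (the function names, the 90 dBA criterion level and the 8-hour reference are illustrative choices, not language from 30 CFR Part 62):

```python
def allowable_exposure_hours(level_dba, criterion_db=90.0, exchange_rate_db=5.0):
    """Allowable exposure time (hours) under a 5-dB exchange rate.

    Each increase of `exchange_rate_db` above the criterion level halves the
    allowable time; each decrease doubles it.
    """
    return 8.0 / (2.0 ** ((level_dba - criterion_db) / exchange_rate_db))

def noise_dose_percent(exposures):
    """Combine (level_dBA, duration_hours) pairs into a percent noise dose.

    A dose of 100% corresponds to the full allowable exposure (here, 90 dBA
    TWA for 8 hours).
    """
    return 100.0 * sum(hours / allowable_exposure_hours(level)
                       for level, hours in exposures)

# Example: 4 hours at 95 dBA plus 4 hours at 85 dBA.
print(allowable_exposure_hours(95))            # 4.0 hours allowed at 95 dBA
print(noise_dose_percent([(95, 4), (85, 4)]))  # 125.0 -> exceeds the allowable dose
```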
The Federal Railroad Administration (FRA) encourages, but does not require, railroads to use administrative controls that reduce noise exposure duration when the worker exceeds 90 dBA TWA. The FRA defines the action level for employee enrollment in a hearing conservation program as an 8-hour TWA of 85 dBA on certain railroads, integrating all sound levels between 80 dBA and 140 dBA. FRA uses a 5-dB exchange rate. Those employees who are always at or above 90 dBA TWA are required to wear hearing protection such that sound levels are attenuated below 90 dBA TWA. (49 CFR Part 229 [ 45 ] ).
The U.S. Department of Defense (DOD) specifies that engineering controls are preferential when reducing the noise levels at the source. The use of hearing protective devices is considered an "interim protective measure" while engineering controls are developed. The goal of these controls is to reduce ambient steady-state noise levels to 85 dBA regardless of TWA exposure and to reduce impulse noise levels to below 140 dBP. The DOD requires that personnel be entered into a hearing conservation program when continuous and intermittent noise levels are greater than or equal to 85 dBA TWA, when impulse SPLs are at or in excess of 140 dBP, or when personnel are exposed to ultrasonic frequencies. The DOD integrates all sound levels between 80 dBA and a minimum of 130 dBA when determining an individual or representative noise dose. When used, hearing protectors must be capable of attenuating worker noise exposure below 85 dBA TWA. Hearing protection is required to be carried by personnel who work in designated noise areas, such as those exposed to gunfire or ordnance tests and Service musicians. The DOD defines a significant threshold shift as a 10 dB average decrease in hearing thresholds at 2000, 3000, and 4000 Hz in either ear, with no age corrections. It is further specified that a shift of 15 dB at 1000, 2000, 3000, or 4000 Hz is an early warning sign for an STS; follow-up retraining is required in this case. (DOD Instruction 6055.12 [ 46 ] ).
The European Union (EU) requires that a hearing conservation program be implemented when worker exposure levels exceed 80 dBA TWA. Note that this is stricter than hearing conservation regulations in the United States. The EU specifies several different exposure action values: a "lower" value of 80 dBA at which the employer must make hearing protection devices available to the employee; an "upper" value of 85 dBA at which the employee is required to wear hearing protection; and an "exposure limit" value of 87 dBA, under which the individual's noise exposure shall be limited to preserve hearing. The directive also defines a weekly noise exposure level which is applied to individuals working in circumstances of inconstant noise exposure. Finally, the EU also recommends a variety of noise reduction methods, including administrative controls to reduce worker exposure duration, the provision of quieter equipment, and adequate maintenance of machinery and other noise sources (European Parliament and Council Directive 2003/10/EC [ 47 ] ). | https://en.wikipedia.org/wiki/Hearing_conservation_program |
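As a rough illustration of how these tiers relate to a measured exposure level, the following sketch maps an 8-hour exposure level to the action values described above (the function name and return strings are invented for illustration; the directive itself also accounts for hearing-protector attenuation at the exposure limit value):

```python
def eu_noise_actions(lex_8h_dba):
    """Map an 8-hour exposure level (dBA) to the tiers of Directive 2003/10/EC.

    Thresholds: lower action value 80 dBA, upper action value 85 dBA,
    exposure limit value 87 dBA (simplified; the limit value normally takes
    hearing-protector attenuation into account).
    """
    if lex_8h_dba > 87:
        return "exposure limit value exceeded: exposure must be reduced"
    if lex_8h_dba >= 85:
        return "upper action value: hearing protection must be worn"
    if lex_8h_dba >= 80:
        return "lower action value: employer must make hearing protection available"
    return "below lower action value: no specific action required"

print(eu_noise_actions(83))  # lower action value tier
```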
Hearing range describes the frequency range that can be heard by humans or other animals, though it can also refer to the range of levels . The human range is commonly given as 20 to 20,000 Hz, [ 3 ] [ 4 ] [ note 1 ] although there is considerable variation between individuals, especially at high frequencies, and a gradual loss of sensitivity to higher frequencies with age is considered normal. Sensitivity also varies with frequency, as shown by equal-loudness contours . Routine investigation for hearing loss usually involves an audiogram which shows threshold levels relative to a normal.
Several animal species can hear frequencies well beyond the human hearing range. Some dolphins and bats , for example, can hear frequencies over 100 kHz. Elephants can hear sounds at 16 Hz–12 kHz, while some whales can hear infrasonic sounds as low as 7 Hz.
The 'hairs' in hair cells in the inner ear , stereocilia , range in height from 1 μm, for auditory detection of very high frequencies, to 50 μm or more in some vestibular systems . [ 5 ]
A basic measure of hearing is afforded by an audiogram, a graph of the absolute threshold of hearing (minimum discernible sound level) at various frequencies throughout an organism's nominal hearing range. [ 6 ]
Behavioural hearing tests or physiological tests can be used to find the hearing thresholds of humans and other animals. For humans, the test involves tones being presented at specific frequencies ( pitch ) and intensities ( loudness ). When the subject hears the sound, they indicate this by raising a hand or pressing a button. The lowest intensity they can hear is recorded. The test varies for children; their response to the sound can be indicated by a turn of the head or by using a toy. The child learns what to do upon hearing the sound, such as placing a toy man in a boat. A similar technique can be used when testing animals, where food is used as a reward for responding to the sound. The information on different mammals' hearing was obtained primarily by behavioural hearing tests.
Physiological tests do not need the patient to respond consciously. [ 7 ]
In humans, sound waves funnel into the ear via the external ear canal and reach the eardrum (tympanic membrane). The compression and rarefaction of these waves set this thin membrane in motion, causing sympathetic vibration through the middle ear bones (the ossicles : malleus, incus, and stapes), the basilar fluid in the cochlea, and the hairs within it, called stereocilia . These hairs line the cochlea from base to apex, and the part stimulated and the intensity of stimulation gives an indication of the nature of the sound. Information gathered from the hair cells is sent via the auditory nerve for processing in the brain.
The commonly stated range of human hearing is 20 to 20,000 Hz. [ 3 ] [ 4 ] [ note 1 ] Under ideal laboratory conditions, humans can hear sound as low as 12 Hz [ 8 ] and as high as 28 kHz, though the threshold increases sharply at 15 kHz in adults, corresponding to the last auditory channel of the cochlea . [ 9 ] The human auditory system is most sensitive to frequencies between 2,000 and 5,000 Hz. [ 10 ] Individual hearing range varies according to the general condition of a human's ears and nervous system. The range shrinks during life, [ 11 ] usually beginning at around the age of eight with the upper frequency limit being reduced. Women lose their hearing somewhat less often than men, which is attributed to various social and external factors; for example, men spend more time in noisy places, associated not only with work but also with hobbies and other activities. Women experience a sharper hearing loss after menopause. In women, hearing decrease is worse at low and partially medium frequencies, while men are more likely to suffer from hearing loss at high frequencies. [ 12 ] [ 13 ] [ 14 ]
Audiograms of human hearing are produced using an audiometer , which presents different frequencies to the subject, usually over calibrated headphones, at specified levels. The levels are weighted with frequency relative to a standard graph known as the minimum audibility curve , which is intended to represent "normal" hearing. The threshold of hearing is set at around 0 phon on the equal-loudness contours (i.e. 20 micropascals , approximately the quietest sound a young healthy human can detect), [ 15 ] but is standardised in an ANSI standard to 1 kHz. [ 16 ] Standards using different reference levels give rise to differences in audiograms. The ASA-1951 standard, for example, used a level of 16.5 dB SPL (sound pressure level) at 1 kHz, whereas the later ANSI-1969/ISO-1963 standard uses 6.5 dB SPL , with a 10 dB correction applied for older people.
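The relationship between the 20-micropascal reference pressure and a level in dB SPL follows the standard definition SPL = 20 * log10(p / p0). A minimal sketch of that conversion (a generic formula, not tied to any particular audiometric standard) is:

```python
import math

P_REF = 20e-6  # reference pressure in pascals, roughly the threshold of hearing at 1 kHz

def pascals_to_db_spl(pressure_pa):
    """Convert sound pressure in pascals to dB SPL relative to 20 micropascals."""
    return 20.0 * math.log10(pressure_pa / P_REF)

def db_spl_to_pascals(level_db):
    """Convert a level in dB SPL back to sound pressure in pascals."""
    return P_REF * 10.0 ** (level_db / 20.0)

print(pascals_to_db_spl(20e-6))           # 0.0 dB SPL at the reference pressure
print(round(db_spl_to_pascals(16.5), 7))  # pressure corresponding to the old 16.5 dB SPL reference
```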
Several primates , especially small ones, can hear frequencies far into the ultrasonic range. Measured with a 60 dB SPL signal, the hearing range for the Senegal bushbaby is 92 Hz–65 kHz, and 67 Hz–58 kHz for the ring-tailed lemur . Of 19 primates tested, the Japanese macaque had the widest range, 28 Hz–34.5 kHz, compared with 31 Hz–17.6 kHz for humans. [ 17 ]
Cats have excellent hearing and can detect an extremely broad range of frequencies. They can hear higher-pitched sounds than humans or most dogs, detecting frequencies from 55 Hz up to 79 kHz . [ 17 ] [ 18 ] Cats do not use this ability to hear ultrasound for communication but it is probably important in hunting, [ 19 ] since many species of rodents make ultrasonic calls. [ 20 ] Cat hearing is also extremely sensitive and is among the best of any mammal, [ 17 ] being most acute in the range of 500 Hz to 32 kHz. [ 21 ] This sensitivity is further enhanced by the cat's large movable outer ears (their pinnae ), which both amplify sounds and help a cat sense the direction from which a noise is coming. [ 19 ]
The hearing ability of a dog is dependent on breed and age, though the range of hearing is usually around 67 Hz to 45 kHz. [ 22 ] [ 23 ] As with humans, some dog breeds' hearing ranges narrow with age, [ 24 ] such as the German shepherd and miniature poodle. When dogs hear a sound, they will move their ears towards it in order to maximize reception. In order to achieve this, the ears of a dog are controlled by at least 18 muscles, which allow the ears to tilt and rotate. The ear's shape also allows the sound to be heard more accurately. Many breeds often have upright and curved ears, which direct and amplify sounds.
As dogs hear higher frequency sounds than humans, they have a different acoustic perception of the world. [ 24 ] Sounds that seem loud to humans often emit high-frequency tones that can scare away dogs. Whistles which emit ultrasonic sound, called dog whistles , are used in dog training, as a dog will respond much better to such levels. In the wild, dogs use their hearing capabilities to hunt and locate food. Domestic breeds are often used to guard property due to their increased hearing ability. [ 23 ] So-called "Nelson" dog whistles generate sounds at frequencies higher than those audible to humans but well within the range of a dog's hearing.
Bats have evolved very sensitive hearing to cope with their nocturnal activity. Their hearing range varies by species; at the lowest it can be 1 kHz for some species and for other species the highest reaches up to 200 kHz. Bats that can detect 200 kHz cannot hear very well below 10 kHz. [ 25 ] In any case, the most sensitive range of bat hearing is narrower: about 15 kHz to 90 kHz. [ 25 ]
Bats navigate around objects and locate their prey using echolocation . A bat will produce a very loud, short sound and assess the echo when it bounces back. Bats hunt flying insects; these insects return a faint echo of the bat's call. The type of insect, its size, and its distance can be determined from the quality of the echo and the time it takes for the echo to rebound. There are two types of call: constant frequency (CF), and frequency modulated (FM) calls that descend in pitch. [ 26 ] Each type reveals different information; CF is used to detect an object, and FM is used to assess its distance. The pulses of sound produced by the bat last only a few thousandths of a second; silences between the calls give time to listen for the information coming back in the form of an echo. Evidence suggests that bats use the change in pitch of sound produced via the Doppler effect to assess their flight speed in relation to objects around them. [ 27 ] The information regarding size, shape and texture is built up to form a picture of their surroundings and the location of their prey. Using these factors a bat can successfully track changes in movement and therefore hunt down its prey.
Mice have large ears in comparison to their bodies. They hear higher frequencies than humans; their frequency range is 1 kHz to 70 kHz. They do not hear the lower frequencies that humans can; they communicate using high-frequency noises, some of which are inaudible to humans. The distress call of a young mouse can be produced at 40 kHz. Mice use their ability to produce sounds outside of predators' frequency ranges to alert other mice of danger without exposing themselves, though notably, cats' hearing range encompasses the mouse's entire vocal range. The squeaks that humans can hear are lower in frequency and are used by the mouse to make longer distance calls, as low-frequency sounds can travel farther than high-frequency sounds. [ 28 ]
Hearing is birds' second most important sense and their ears are funnel-shaped to focus sound. The ears are located slightly behind and below the eyes, and they are covered with soft feathers – the auriculars – for protection. The shape of a bird's head can also affect its hearing, such as owls, whose facial discs help direct sound toward their ears.
The hearing range of birds is most sensitive between 1 kHz and 4 kHz, but their full range is roughly similar to human hearing, with higher or lower limits depending on the bird species. No kind of bird has been observed to react to ultrasonic sounds, but certain kinds of birds can hear infrasonic sounds. [ 29 ] "Birds are especially sensitive to pitch, tone and rhythm changes and use those variations to recognize other individual birds, even in a noisy flock. Birds also use different sounds, songs and calls in different situations, and recognizing the different noises is essential to determine if a call is warning of a predator, advertising a territorial claim or offering to share food." [ 30 ]
"Some birds, most notably oilbirds , also use echolocation, just as bats do. These birds live in caves and use their rapid chirps and clicks to navigate through dark caves where even sensitive vision may not be useful enough." [ 30 ]
Pigeons can hear infrasound. With the average pigeon being able to hear sounds as low as 0.5 Hz, they can detect distant storms, earthquakes and even volcanoes. [ 31 ] [ 32 ] This also helps them to navigate.
Greater wax moths (Galleria mellonella) have the highest sound frequency sensitivity recorded so far; they can hear frequencies up to 300 kHz. This is likely to help them evade bats. [ 31 ] [ 32 ]
Fish have a narrow hearing range compared to most mammals. Goldfish and catfish do possess a Weberian apparatus and have a wider hearing range than the tuna . [ 1 ]
As aquatic environments have very different physical properties than land environments, there are differences in how marine mammals hear compared with land mammals. The differences in auditory systems have led to extensive research on aquatic mammals, specifically on dolphins.
Researchers customarily divide marine mammals into five hearing groups based on their range of best underwater hearing. (Ketten, 1998): Low-frequency baleen whales like blue whales (7 Hz to 35 kHz); Mid-frequency toothed whales like most dolphins and sperm whales (150 Hz to 160 kHz) ; High-frequency toothed whales like some dolphins and porpoises (275 Hz to 160 kHz); seals (50 Hz to 86 kHz); fur seals and sea lions (60 Hz to 39 kHz). [ 33 ]
The auditory system of a land mammal typically works via the transfer of sound waves through the ear canals. Ear canals in seals , sea lions , and walruses are similar to those of land mammals and may function the same way. In whales and dolphins, it is not entirely clear how sound is propagated to the ear, but some studies strongly suggest that sound is channelled to the ear by tissues in the area of the lower jaw. One group of whales, the Odontocetes (toothed whales), use echolocation to determine the position of objects such as prey. The toothed whales are also unusual in that the ears are separated from the skull and placed well apart, which assists them with localizing sounds, an important element for echolocation.
Studies [ 34 ] have found there to be two different types of cochlea in the dolphin population. Type I has been found in the Amazon river dolphin and harbour porpoises . These types of dolphin use extremely high frequency signals for echolocation. Harbour porpoises emit sounds at two bands, one at 2 kHz and one above 110 kHz. The cochlea in these dolphins is specialised to accommodate extreme high frequency sounds and is extremely narrow at the base.
Type II cochlea are found primarily in offshore and open water species of whales, such as the bottlenose dolphin . The sounds produced by bottlenose dolphins are lower in frequency and range typically between 75 and 150,000 Hz. The higher frequencies in this range are also used for echolocation and the lower frequencies are commonly associated with social interaction as the signals travel much farther distances.
Marine mammals use vocalisations in many different ways. Dolphins communicate via clicks and whistles, and whales use low-frequency moans or pulse signals. Each signal varies in terms of frequency and different signals are used to communicate different aspects. In dolphins, echolocation is used in order to detect and characterize objects and whistles are used in sociable herds as identification and communication devices. | https://en.wikipedia.org/wiki/Hearing_range |
Heart nanotechnology is the application to the heart of nanotechnology, the "engineering of functional systems at the molecular scale" ("Nanotechnology Research"). [ 1 ]
Nanotechnology deals with structures and materials that are approximately one to one hundred nanometers in length. At this scale, quantum-mechanical effects become significant, producing behaviors that seem quite strange compared to what humans observe in everyday, bulk matter. Nanotechnology is used in a wide variety of fields, ranging from energy to electronics to medicine . In medicine, nanotechnology is still relatively new and has not yet been widely adopted by the field. It is possible that nanotechnology could be a major breakthrough in medicine and may eventually provide treatments or cures for many human health problems, from illnesses such as the common cold to diseases such as cancer . It is already starting to be used as a treatment for some serious health issues; more specifically, it is being used to treat the heart and cancer. [ citation needed ]
Nanotechnology in the field of medicine is more commonly referred to as nanomedicine . Nanomedicine applied to the heart is growing in popularity faster than most other areas of nanomedicine. There is promising evidence that nanotechnology will be effective in treating several forms of heart disease in the near future.
It may be able to treat defective heart valves and to detect and treat arterial plaque in the heart ("Nanotechnology Made Clear"). Nanomedicine should be able to help heal the hearts of people who have already been victims of heart disease and heart attacks . It will also play a key role in identifying people at high risk of heart disease and help prevent heart attacks from happening in the first place. Nanotechnology of the heart is far less invasive than surgery because everything occurs at a minuscule level in the body, compared with the relatively large tissues dealt with in surgery. With today's technology, heart surgeries are performed to treat the damaged heart tissue that results from a heart attack. This is a major surgery that usually takes a couple of months to recover from ("WebMD - Better Information. Better Health"). During this period, patients are extremely limited in the activities that they can do. This long recovery process is an inconvenience to patients, and with the growth of medicine it most likely will not be long before a more efficient method for treating heart attack patients is developed and used. [ citation needed ] The method that is the frontrunner to replace major heart surgery is the use of nanotechnology. There are a couple of alternatives to heart surgery that nanotechnology will potentially be able to offer in the future.
In people who have heart disease or who have suffered a heart attack, the heart is often damaged and weakened. The more minor forms of heart failure do not require surgery and are often treated with medications ("WebMD - Better Information. Better Health"). The use of nanotechnology to treat damaged hearts is not intended to replace treatment for these milder heart problems, but rather for the more serious heart problems that currently require surgery or sometimes even heart transplants .
A group of engineers, doctors and materials scientists at MIT and Children's Hospital Boston have teamed up to find a way to use nanotechnology to strengthen weakened heart tissue ("MIT - Massachusetts Institute of Technology"). The first method uses nanotechnology combined with tissue engineering : gold nanowires are placed and woven into the damaged parts of the heart, essentially replacing the non-functioning or dead tissues. [ 2 ]
The other approach would potentially use minuscule nanoparticles that would travel through the body and find dying heart tissue. The nanoparticles would be carrying objects such as " stem cells , growth factors , drugs and other therapeutic compounds,". [ 2 ] Then the nanoparticles would release the compounds and inject them into the damaged heart tissue. This would theoretically lead to the regeneration of the tissue. [ 2 ]
Repairing cardiac tissue that has been damaged by a heart attack or heart disease is not simple, and it is one of the major challenges today in the field of tissue engineering (" Popular Science "). This is because heart cells are not easy to create in a lab. It takes an enormous amount of special care and work to develop the cells so that they beat in sync with one another ("Popular Science"). Even after the heart cells have been made, it is also a large task to insert the cells into the inoperable parts of the heart and to get them working in unison with the tissues that are still working properly ("Popular Science").
There have been several successful examples of this with the use of a "stem-cell-based heart patch developed by Duke University researchers," (" Popular Science "). The biomaterials that make up the patch are usually made of either biological polymers like alginate or synthetic polymers such as polylactic acid ("Nature Nanotechnology"). These materials are good at organizing the cells into functioning tissues; however, they act as insulators and are poor conductors of electricity, which is a major problem, especially in the heart ("Nature Nanotechnology"). Since the electrical signals carried by calcium ions control when the cardiomyocytes of the heart contract, which makes the heart beat, the stem-cell heart patch is not very efficient and not as effective as doctors would like it to be ("Popular Science"). The result of the patch not being very conductive is that the cells are not able to attain a smooth, continuous beat throughout the entire tissue containing the stem cells. This results in the heart not functioning properly, which in turn could mean that more heart problems might arise due to the implanting of the stem cells.
Recently [ when? ] there have been some new developments in the field of nanotechnology that will be more efficient than the poorly conducting stem-cell-based patch ("Nature Nanotechnology"). Scientists and researchers found a way for these stem cell patches (also known as tissue scaffolds) to be conductive and therefore become exponentially [ citation needed ] more effective ("Nature Nanotechnology"). They found that by growing gold nanowires into and through the patches, they were able to greatly increase the electrical conductivity . [ 2 ] The nanowires are thicker than the original scaffold and the cells are better organized as well. [ 2 ] There is also an increase in production of the proteins needed for muscle calcium binding and contraction. [ 2 ] The gold nanowires poke through the stem cell's scaffolding material, which strengthens the electrical communication between surrounding heart cells. [ 2 ] Without the nanowires, the stem cell patches produced a minute current and the cells would only beat in small clusters at the stimulation origin. [ 2 ] With the nanowires, the cells seem to contract together even when they are clustered far away from the source of stimulation. [ 2 ] The use of gold nanowires with the stem cell heart patches is still a relatively new concept and it will probably be a while before they are used in humans. It is hoped that the nanowires will be tested in live animals in the near future. [ 2 ]
Another way that nanotechnology will potentially be used to help fix damaged heart tissues is through the use of guided nanoparticle "missiles". [ 2 ] These nanoparticles can cling to and attach to artery walls and secrete medicine at a slow rate ("MIT-Massachusetts Institute of Technology"). The particles are known as nanoburrs because they are coated with small protein fragments that stick to and target certain proteins. The nanoburrs can be made to release the drug that is attached to them over the course of several days ("MIT-Massachusetts Institute of Technology"). They are unique compared to regular drugs because they can find the particular damaged tissue, attach to it, and release the drug payload that is attached to them ("MIT-Massachusetts Institute of Technology"). The nanoburrs are targeted to a certain structure, known as the basement membrane ; this membrane lines the arterial walls and is only exposed if the area is damaged. Nanoburrs could carry drugs that are effective in treating the heart, and also potentially carry stem cells to help regenerate the damaged heart tissue ("MIT-Massachusetts Institute of Technology").
The particles are made up of three different layers and are sixty nanometers in diameter ("MIT-Massachusetts Institute of Technology"). The outer layer is a coating of polymer called PEG, and its job is to protect the drug from disintegrating while it is traveling through the body. The middle layer consists of a fatty substance and the inner core contains the actual drug along with a polymer chain, which controls the amount of time it will take before the drug is released ("MIT-Massachusetts Institute of Technology").
In a study done on rats, the nanoparticles were injected directly into the rat's tail and were still able to reach the desired target (the left carotid artery ) at twice the rate of non-targeted nanoparticles ("MIT-Massachusetts Institute of Technology"). Because the particles can deliver drugs over a long period of time and can be injected intravenously, patients would not need multiple repeated injections or invasive surgeries on the heart, which would be much more convenient. By contrast, existing delivery approaches are invasive, requiring either a direct injection into the heart, catheter procedures, or surgical implants. [ 2 ] There is no question, however, that the future of heart repairs and heart disease/attack prevention will definitely involve the use of nanotechnology in some way. [ citation needed ]
Polyketal nanoparticles are pH-sensitive, hydrophobic nanoparticles formulated from poly(1-4-phenyleneacetone dimethylene ketal). [ 3 ] They are an acid-sensitive vehicle of drug delivery, specifically designed for targeting the environments of tumors, phagosomes, and inflammatory tissue. [ 3 ] In such acidic environments, these nanoparticles undergo accelerated hydrolysis into low molecular weight hydrophilic compounds, consequently releasing their therapeutic contents at a faster rate. [ 3 ] Unlike polyester-based nanoparticles, polyketal nanoparticles do not generate acidic degradation products following hydrolysis [ 3 ] [ 4 ]
Post- myocardial infarction , inflammatory leukocytes invade the myocardium . Leukocytes contain high amounts of Nicotinamide adenine dinucleotide phosphate (NADPH) and Nox2. [ 5 ] [ 6 ] Nox2 and NADPH oxidase combine to act as a major source of cardiac superoxide production, which in excess can lead to myocyte hypertrophy, apoptosis, fibrosis, and increased matrix metalloproteinase -2 expression. [ 5 ] In a mouse-model study by Somasuntharam et al. 2013, polyketal nanoparticles were used as a delivery vehicle for siRNA to target and inhibit Nox2 in the infarcted heart. [ 7 ] Following intramyocardial injection in vivo, Nox2-siRNA nanoparticles prevented upregulation of Nox2-NADPH oxidase , and improved fractional shortening . [ 7 ] When taken up by macrophages in the myocardium following a MI, the nanoparticles degraded in the acidic environment of the endosomes / phagosomes , releasing Nox2-specific siRNA into the cytoplasm . [ 7 ]
Polyketal nanoparticles have also been used in the infarcted mouse heart to prevent ischemia - reperfusion injury caused by reactive oxygen species (ROS). [ 8 ] Levels of the antioxidant Cu/Zn-superoxide dismutase (SOD1), which scavenges harmful ROS, decrease following MI. [ 9 ] SOD1-enacapsulated polyketal nanoparticles are able to scavenge reperfusion-injury induced ROS. [ 8 ] Furthermore, this treatment improved fractional shortening, suggesting the benefit of targeted delivery by polyketals. One of the key advantages of polyketal use is that they do not exacerbate the inflammatory response, even when administered at concentrations exceeding therapeutic limits. [ 10 ] In contrast to commonly used poly(lactic-co-glycolic acid) (PLGA) nanoparticles, polyketal nanoparticle administration in mice instigates little recruitment of inflammatory cells. [ 10 ] Additionally, intramuscular injection of polyketals into the leg of rats shows no significant increases in inflammatory cytokines such as IL-6 , IL-1ß , TNF-α and IL-12 . [ 10 ] | https://en.wikipedia.org/wiki/Heart_nanotechnology |
The Heart of Europe Bio-Crystallography Meeting (HEC-Meeting for short) is an annual academic conference on structural biology , in particular protein crystallography . Researchers from universities, other research institutions and industry from Austria , Czech Republic , Germany and Poland meet to present and discuss current topics of their research. The talks are predominantly given by PhD students ( doctoral students ). An exception is the invited HEC lecture, which is held by a renowned scientist of the research field. The format of the HEC meeting has been adopted from the Rhine-Knee Regional Meeting on Structural Biology, which is eleven years older.
The HEC-Meeting dates back to a 1998 initiative of Manfred Weiss and Rolf Hilgenfeld, who were researchers at the Institute for Molecular Biotechnology (IMB) in Jena and intended to establish a meeting format similar to the Rhine-Knee Regional Meeting on Structural Biology in the New Länder . [ 1 ] Both conferences are regional meetings of German scientists together with scientific research groups of the neighbouring countries. Nine groups from Germany (the new states and West Berlin), Poland and the Czech Republic participated in the first HEC-Meeting from 8 to 10 October 1998. Later, groups from Austria and the Old Federal States also participated. Due to the COVID-19 pandemic, no meeting was organized in 2020, and HEC-23 took place as an online meeting.
Former HEC-Meetings: | https://en.wikipedia.org/wiki/Heart_of_Europe_Bio-Crystallography_Meeting |
The heart symbol is an ideograph used to express the idea of the " heart " in its metaphorical or symbolic sense. Represented by an anatomically inaccurate shape, the heart symbol is often used to represent the center of emotion , including affection and love , especially romantic love . While ancient antecedents may exist, this shape for the heart became fixed in Europe in the middle ages. It is sometimes accompanied or superseded by a "wounded heart" symbol, depicted as a heart symbol pierced with an arrow , indicating lovesickness , or as a "broken" heart symbol in two or more pieces, indicating heartbreak .
Peepal leaves were used in artistic depictions by the Indus Valley civilisation : a heart-shaped pendant originating from there has been discovered and is now exhibited in the National Museum of India. [ 1 ] In the 5th–6th century BC, the heart shape was used in the Roman world to represent the seeds of the plant silphium , [ 2 ] a plant possibly used as a contraceptive and an aphrodisiac . [ 3 ] [ 4 ] Silver coins from Cyrene of the 5th–6th century BC bear a similar design, sometimes accompanied by a silphium plant and is understood to represent its seed or fruit. [ 5 ]
Since ancient times in Japan , the heart symbol has been called Inome (猪目), meaning the eye of a wild boar , and it has the meaning of warding off evil spirits. The decorations are used to decorate Shinto shrines , Buddhist temples , castles , and weapons. [ 6 ] [ 7 ] The oldest examples of this pattern are seen in some of the Japanese original tsuba (sword guard) of the style called toran gata tsuba (lit., inverted egg shaped tsuba ) that were attached to swords from the sixth to seventh centuries, and part of the tsuba was hollowed out in the shape of a heart symbol. [ 8 ] [ 9 ]
The combination of the heart shape and its use within the heart metaphor was developed in the end of the Middle Ages , although the shape has been used in many ancient epigraphy monuments and texts. With possible early examples or direct predecessors in the 13th to 14th century, the familiar symbol of the heart representing love developed in the 15th century, and became popular in Europe during the 16th. [ 10 ]
Before the 14th century, the heart shape was not associated with the meaning of the heart metaphor. The geometric shape itself is found in much earlier sources, but in such instances does not depict a heart, but typically foliage: in examples from antiquity fig leaves, and in medieval iconography and heraldry, typically the leaves of ivy and of the water-lily .
The first known depiction of a heart as a symbol of romantic love dates to the 1250s. It occurs in a miniature decorating a capital 'S' in a manuscript of the French Roman de la poire . [ 11 ] In the miniature, a kneeling lover (or more precisely, an allegory of the lover's "sweet gaze" or doux regard ) offers his heart to a damsel. The heart here resembles a pine cone (held "upside down", the point facing upward), in accord with medieval anatomical descriptions. However, in this miniature, what suggests a heart shape is only the result of a lover's finger superimposed on an object; the full shape outline of the object is partly hidden, and, therefore unknown. Moreover, the French title of the manuscript that features the miniature translates into "Novel of the pear" in English. Thus the heart-shaped object would be a pear; the conclusion that a pear represents a heart is dubious. Opinions, therefore, differ over this being the first depiction of a heart as a symbol of romantic love. [ 12 ]
Giotto in his 1305 painting in the Scrovegni Chapel ( Padua ) shows an allegory of charity (caritas) handing her heart to Jesus Christ . This heart is also depicted in the pine cone shape based on anatomical descriptions of the day (still held "upside down"). Giotto's painting exerted considerable influence on later painters, and the motive of Caritas offering a heart is shown by Taddeo Gaddi in Santa Croce , by Andrea Pisano on the bronze door of the south porch of the Florence Baptistery ( c. 1337 ), by Ambrogio Lorenzetti in the Palazzo Publico in Siena ( c. 1340 ) and by Andrea da Firenze in Santa Maria Novella in Florence ( c. 1365 ). The convention of showing the heart point upward switches in the late 14th century and becomes rare in the first half of the 15th century. [ 12 ]
The "scalloped" shape of the now-familiar heart symbol, with a dent in its base, arises in the early 14th century, at first only lightly dented, as in the miniatures in Francesco da Barberino 's Documenti d'amore (before 1320). A slightly later example with a more pronounced dent is found in a manuscript from the Cistercian monastery in Brussels. [ 13 ] The convention of showing a dent at the base of the heart thus spread at about the same time as the convention of showing the heart with its point downward. [ 14 ] The modern indented red heart has been used on playing cards since the late 15th century. [ 15 ]
Various hypotheses attempted to connect the "heart shape" as it evolved in the Late Middle Ages with instances of the geometric shape in antiquity. [ 16 ] Such theories are modern, proposed from the 1960s onward, and they remain speculative, as no continuity between the supposed ancient predecessors and the late medieval tradition can be shown. Specific suggestions include: the shape of the seed of the silphium plant, used in ancient times as an herbal contraceptive , [ 16 ] [ 17 ] and stylized depictions of features of the human female body, such as the female's breasts , buttocks , pubic mound , or spread vulva . [ 18 ]
Heart shapes can be seen on various stucco reliefs and wall panels excavated from the ruins of Ctesiphon , the Persian capital ( c. 90 BC – 637 AD ). [ 23 ] [ 24 ] [ 25 ]
The Luther rose was the seal that was designed for Martin Luther at the behest of Prince John Frederick , in 1530, while Luther was staying at the Coburg Fortress during the Diet of Augsburg . Luther wrote an explanation of the symbol to Lazarus Spengler : "a black cross in a heart, which retains its natural color, so that I myself would be reminded that faith in the Crucified saves us. 'For one who believes from the heart will be justified' ( Romans 10:10)." [ 26 ] [ unreliable source? ]
The aorta remains visible, as a protrusion at the top centered between the two "chambers" indicated in the symbol, in some depictions of the Sacred Heart well into the 18th century, and is partly still shown today (although mostly obscured by elements such as a crown, flames, rays, or a cross), but the "hearts" suit has not included this element since the 15th century.
Since the 19th century, the symbol has often been used on Valentine's Day cards , candy boxes, and similar popular culture artifacts as a symbol of romantic love .
The use of the heart symbol as a logograph for the English verb "to love" derives from the use in " I ♥ NY ," introduced in 1977. [ 29 ]
Outdoor toilets in Scandinavia traditionally had a heart shaped peephole. In homes a heart symbol made from red painted plywood, or a stuffed fabric one, is often used to assist visitors in finding the modern facility. For image see: Hjerte (symbol)
Heart symbols are frequently used to symbolize "health" or "lives" in video games . The Legend of Zelda (1986) had a "life bar" composed of heart shapes, and many other games continued this convention (the Castlevania franchise being a notable exception, where the hearts are ammunition for the secondary weapons instead of representing health). Since the 1990s, the heart symbol has also been used as an ideogram indicating health outside of the video gaming context, e.g., its use by restaurants to indicate heart-healthy nutrient content claim (e.g., "low in cholesterol "). A copyrighted "heart-check" symbol to indicate heart-healthy food was introduced by the American Heart Association in 1995. [ 30 ]
The earliest heart-shaped charges in heraldry appear in the 12th century; the hearts in the coat of arms of Denmark go back to the royal banner of the kings of Denmark , in turn based on a seal used as early as the 1190s. However, while the charges are clearly heart-shaped, they did not depict hearts in origin, or symbolize any idea related to love. Instead, they are assumed to have depicted the leaves of the water-lily . [ citation needed ] Early heraldic heart-shaped charges depicting the leaves of water-lilies are found in various other designs related to territories close to rivers or a coastline ( e.g. Flags of Frisia ).
Inverted heart symbols have been used in heraldry as stylized testicles ( coglioni in Italian) as in the canting arms of the Colleoni family of Milan. [ 32 ]
A seal attributed to William, Lord of Douglas (of 1333) shows a heart shape, identified as the heart of Robert the Bruce . The authenticity of this seal is "very questionable", [ 33 ] i.e. it could possibly date to the late 14th or even the 15th century. [ 34 ]
Heraldic charges actually representing hearts became more common in the early modern period , with the Sacred Heart depicted in ecclesiastical heraldry , and hearts representing love appearing in bourgeois coats of arms. Hearts also later became popular elements in municipal coats of arms.
There has been some conjecture regarding the link between the traditional heart symbol and images of the fruit of silphium, a (probably) extinct plant known to classical antiquity and belonging to the genus Ferula , used as a condiment and medicine, (the medicinal properties including contraceptive and abortifacient activity, linking the plant to sexuality and love). [ 35 ] Silver coins from the ancient Libya of the 6th to 5th centuries BC bear images strongly reminiscent of the heart symbol, sometimes accompanied by images of the silphium plant. [ 36 ] [ 37 ] The related Ferula species asafoetida – which was actually used as an inferior substitute for silphium – is regarded as an aphrodisiac in Tibet and India , suggesting yet a third amatory association relating to silphium. [ 38 ]
A number of parametrisations of approximately heart-shaped curves have been described.
The best-known of these is the cardioid , which is an epicycloid with one cusp ; [ 39 ] though as the cardioid lacks the point, it may be seen as a stylized water-lily leaf, a so-called seeblatt , rather than a heart. Other curves, such as the implicit curve (x² + y² − 1)³ − x²y³ = 0, may produce better approximations of the heart shape. [ 40 ] | https://en.wikipedia.org/wiki/Heart_symbol |
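As a quick illustration of these parametrisations, the sketch below samples a standard cardioid and checks a few points against the implicit equation quoted above (the function names are illustrative only):

```python
import numpy as np

def implicit_heart(x, y):
    """Left-hand side of (x^2 + y^2 - 1)^3 - x^2 * y^3; zero on the curve."""
    return (x**2 + y**2 - 1) ** 3 - x**2 * y**3

def cardioid(a=1.0, n=200):
    """Sample the cardioid r = a(1 - cos(theta)), an epicycloid with one cusp."""
    theta = np.linspace(0, 2 * np.pi, n)
    r = a * (1 - np.cos(theta))
    return r * np.cos(theta), r * np.sin(theta)

# Points such as (0, 1) and (1, 1) lie exactly on the implicit heart curve.
print(implicit_heart(0.0, 1.0), implicit_heart(1.0, 1.0))  # 0.0 0.0

# The cardioid samples could be plotted, e.g. with matplotlib, to compare the two shapes.
x, y = cardioid()
print(len(x))  # 200 sample points
```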
In computer science , a heartbeat is a periodic signal generated by hardware or software to indicate normal operation or to synchronize other parts of a computer system . [ 1 ] [ 2 ] The heartbeat mechanism is one of the common techniques in mission critical systems for providing high availability and fault tolerance of network services: it detects network or system failures of nodes or daemons which belong to a network cluster —administered by a master server —so that the system can automatically adapt and rebalance, using the remaining redundant nodes on the cluster to take over the load of failed nodes and provide constant service. [ 3 ] [ 1 ] Usually a heartbeat message is sent between machines at a regular interval on the order of seconds. [ 4 ] If the endpoint does not receive a heartbeat for a time—usually a few heartbeat intervals—the machine that should have sent the heartbeat is assumed to have failed. [ 5 ] Heartbeat messages are typically sent non-stop on a periodic or recurring basis from the originator's start-up until the originator's shutdown. When the destination identifies a lack of heartbeat messages during an anticipated arrival period, the destination may determine that the originator has failed, shut down, or is generally no longer available.
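The failure-detection rule just described, declaring a peer dead after a few missed heartbeat intervals, can be sketched as follows (the names and parameters are hypothetical; real cluster managers add jitter tolerance, retransmission and quorum logic):

```python
import time

class HeartbeatMonitor:
    """Track last-seen heartbeat times and flag peers that miss too many intervals."""

    def __init__(self, interval_s=1.0, missed_allowed=3):
        self.interval_s = interval_s          # expected heartbeat period
        self.missed_allowed = missed_allowed  # intervals to miss before declaring failure
        self.last_seen = {}                   # peer id -> timestamp of last heartbeat

    def record_heartbeat(self, peer, now=None):
        self.last_seen[peer] = time.monotonic() if now is None else now

    def failed_peers(self, now=None):
        """Return peers whose last heartbeat is older than the allowed window."""
        now = time.monotonic() if now is None else now
        deadline = self.interval_s * self.missed_allowed
        return [p for p, t in self.last_seen.items() if now - t > deadline]

# Usage sketch: record heartbeats as they arrive, then periodically poll failed_peers().
monitor = HeartbeatMonitor(interval_s=1.0, missed_allowed=3)
monitor.record_heartbeat("node-a", now=0.0)
monitor.record_heartbeat("node-b", now=2.5)
print(monitor.failed_peers(now=4.0))  # ['node-a'] -- silent for more than 3 intervals
```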
A heartbeat protocol is generally used to negotiate and monitor the availability of a resource, such as a floating IP address , and the procedure involves sending network packets to all the nodes in the cluster to verify its reachability . [ 3 ] Typically when a heartbeat starts on a machine, it will perform an election process with other machines on the heartbeat network to determine which machine, if any, owns the resource. On heartbeat networks of more than two machines, it is important to take into account partitioning, where two halves of the network could be functioning but not able to communicate with each other. In a situation such as this, it is important that the resource is only owned by one machine, not one machine in each partition.
As a heartbeat is intended to be used to indicate the health of a machine, it is important that the heartbeat protocol and the transport that it runs on are as reliable as possible. Causing a failover because of a false alarm may, depending on the resource, be highly undesirable. It is also important to react quickly to an actual failure, further signifying the reliability of the heartbeat messages. For this reason, it is often desirable to have a heartbeat running over more than one transport; for instance, an Ethernet segment using UDP / IP , and a serial link.
A "cluster membership" of a node is a property of network reachability : if the master can communicate with the node x {\displaystyle x} , it's considered a member of the cluster and "dead" otherwise. [ 6 ] A heartbeat program as a whole consist of various subsystems : [ 7 ]
Heartbeat messages are sent in a periodic manner through techniques such as broadcast or multicast in larger clusters. [ 6 ] Since CMs have transactions across the cluster, the most common pattern is to send heartbeat messages to all the nodes and " await " responses in a non-blocking fashion. [ 8 ] Since heartbeat or keepalive messages make up the overwhelming majority of non-application-related cluster control messages—which also go to all the members of the cluster—major critical systems also include non- IP protocols like serial ports to deliver heartbeats. [ 9 ]
Every CM on the master server maintains a finite-state machine with three states for each node it administers: Down, Init, and Alive. [ 10 ] Whenever a new node joins, the CM changes the state of the node from Down to Init and broadcasts a "boot-up message", which the node receives and then executes a set of start-up procedures. The node then responds with an acknowledgment message; the CM then includes the node as a member of the cluster and transitions the state of the node from Init to Alive. Every node in the Alive state receives a periodic broadcast heartbeat message from the HS subsystem and is expected to send an acknowledgment message back within a timeout range . If the CM does not receive an acknowledgment heartbeat message back, the node is considered unavailable , and the CM transitions the state of that node from Alive to Down. [ 11 ] The procedures or scripts to run, and actions to take between each state transition, are an implementation detail of the system.
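A minimal sketch of the Down / Init / Alive state machine described above might look like the following (the class and method names are assumptions for illustration; a real cluster manager would also run the per-transition procedures or scripts mentioned above):

```python
from enum import Enum

class NodeState(Enum):
    DOWN = "down"
    INIT = "init"
    ALIVE = "alive"

class ClusterManager:
    """Track per-node state transitions driven by boot-up acks and heartbeat timeouts."""

    def __init__(self):
        self.states = {}  # node id -> NodeState

    def node_joins(self, node):
        # Down -> Init: send the boot-up message and wait for the node's acknowledgment.
        self.states[node] = NodeState.INIT

    def bootup_acknowledged(self, node):
        # Init -> Alive: the node ran its start-up procedures and acknowledged.
        if self.states.get(node) is NodeState.INIT:
            self.states[node] = NodeState.ALIVE

    def heartbeat_timeout(self, node):
        # Alive -> Down: no acknowledgment received within the timeout range.
        if self.states.get(node) is NodeState.ALIVE:
            self.states[node] = NodeState.DOWN

cm = ClusterManager()
cm.node_joins("node-1")
cm.bootup_acknowledged("node-1")
cm.heartbeat_timeout("node-1")
print(cm.states["node-1"])  # NodeState.DOWN
```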
Heartbeat network is a private network which is shared only by the nodes in the cluster, and is not accessible from outside the cluster. It is used by cluster nodes in order to monitor each node's status and communicate with each other messages necessary for maintaining the operation of the cluster. The heartbeat method uses the FIFO nature of the signals sent across the network. By making sure that all messages have been received, the system ensures that events can be properly ordered. [ 12 ]
In this communications protocol every node sends back a message in a given interval, say delta , in effect confirming that it is alive and has a heartbeat. These messages are viewed as control messages that help determine that the network includes no delayed messages. A receiver node, called a "sync", maintains an ordered list of the received messages. Once a message with a timestamp later than the given marked time is received from every node, the system determines that all messages have been received, since the FIFO property ensures that the messages are ordered. [ 13 ]
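A rough sketch of the sync receiver's bookkeeping under these assumptions (the structure and names are hypothetical; the logic relies on the FIFO delivery guarantee described above):

```python
class SyncReceiver:
    """Collect timestamped messages per node and decide when a marked time is settled."""

    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.messages = {n: [] for n in nodes}  # node -> list of (timestamp, payload)

    def receive(self, node, timestamp, payload=None):
        # FIFO delivery means each node's list stays in timestamp order.
        self.messages[node].append((timestamp, payload))

    def settled(self, marked_time):
        """True once every node has sent a message later than marked_time,
        so no message at or before marked_time can still be in flight."""
        return all(msgs and msgs[-1][0] > marked_time
                   for msgs in self.messages.values())

sync = SyncReceiver(["a", "b"])
sync.receive("a", 1.0)
sync.receive("b", 0.4)
print(sync.settled(0.5))  # False: node "b" has not yet reported past 0.5
sync.receive("b", 1.1)
print(sync.settled(0.5))  # True: both nodes reported later than 0.5
```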
In general, it is difficult to select a delta that is optimal for all applications. If delta is too small, it requires too much overhead and if it is large it results in performance degradation as everything waits for the next heartbeat signal. [ 14 ] | https://en.wikipedia.org/wiki/Heartbeat_(computing) |
In computer science , a heartbeat is a periodic signal generated by hardware or software to indicate normal operation or to synchronize other parts of a computer system . [ 1 ] [ 2 ] Heartbeat mechanism is one of the common techniques in mission critical systems for providing high availability and fault tolerance of network services by detecting the network or systems failures of nodes or daemons which belongs to a network cluster —administered by a master server —for the purpose of automatic adaptation and rebalancing of the system by using the remaining redundant nodes on the cluster to take over the load of failed nodes for providing constant services. [ 3 ] [ 1 ] Usually a heartbeat is sent between machines at a regular interval in the order of seconds; a heartbeat message . [ 4 ] If the endpoint does not receive a heartbeat for a time—usually a few heartbeat intervals—the machine that should have sent the heartbeat is assumed to have failed. [ 5 ] Heartbeat messages are typically sent non-stop on a periodic or recurring basis from the originator's start-up until the originator's shutdown. When the destination identifies a lack of heartbeat messages during an anticipated arrival period, the destination may determine that the originator has failed, shutdown, or is generally no longer available.
A heartbeat protocol is generally used to negotiate and monitor the availability of a resource, such as a floating IP address , and the procedure involves sending network packets to all the nodes in the cluster to verify its reachability . [ 3 ] Typically when a heartbeat starts on a machine, it will perform an election process with other machines on the heartbeat network to determine which machine, if any, owns the resource. On heartbeat networks of more than two machines, it is important to take into account partitioning, where two halves of the network could be functioning but not able to communicate with each other. In a situation such as this, it is important that the resource is only owned by one machine, not one machine in each partition.
As a heartbeat is intended to be used to indicate the health of a machine, it is important that the heartbeat protocol and the transport that it runs on are as reliable as possible. Causing a failover because of a false alarm may, depending on the resource, be highly undesirable. It is also important to react quickly to an actual failure, further signifying the reliability of the heartbeat messages. For this reason, it is often desirable to have a heartbeat running over more than one transport; for instance, an Ethernet segment using UDP / IP , and a serial link.
A "cluster membership" of a node is a property of network reachability : if the master can communicate with the node x {\displaystyle x} , it's considered a member of the cluster and "dead" otherwise. [ 6 ] A heartbeat program as a whole consist of various subsystems : [ 7 ]
Heartbeat messages are sent in a periodic manner through techniques such as broadcast or multicasts in larger clusters. [ 6 ] Since CMs have transactions across the cluster, the most common pattern is to send heartbeat messages to all the nodes and " await " responses in non-blocking fashion. [ 8 ] Since the heartbeat or keepalive messages are the overwhelming majority of non-application related cluster control messages—which also goes to all the members of the cluster—major critical systems also include non- IP protocols like serial ports to deliver heartbeats. [ 9 ]
Every CM on the master server maintains a finite-state machine with three states for each node it administers: Down, Init, and Alive. [ 10 ] Whenever a new node joins, the CM changes the state of the node from Down to Init and broadcasts a "boot-up message", which the node receives before executing its set of start-up procedures. The node then responds with an acknowledgment message; the CM then includes the node as a member of the cluster and transitions its state from Init to Alive. Every node in the Alive state receives a periodic broadcast heartbeat message from the HS subsystem and is expected to send an acknowledgment message back within a timeout. If the CM does not receive an acknowledgment heartbeat message back, the node is considered unavailable , and the CM transitions that node's state from Alive to Down. [ 11 ] The procedures or scripts to run, and the actions to take, at each state transition are an implementation detail of the system.
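A minimal sketch of this three-state membership logic in Python may help; the class and method names, the timeout value, and the node identifiers are invented for illustration and do not come from any particular heartbeat implementation:

```python
import time
from enum import Enum

class NodeState(Enum):
    DOWN = "down"
    INIT = "init"
    ALIVE = "alive"

class ClusterMembership:
    """Toy cluster-membership tracker with Down -> Init -> Alive transitions."""

    def __init__(self, ack_timeout=3.0):
        self.ack_timeout = ack_timeout   # seconds to wait for an acknowledgment
        self.nodes = {}                  # node_id -> (state, last_ack_time)

    def node_joins(self, node_id):
        # Down -> Init: the "boot-up message" would be broadcast here.
        self.nodes[node_id] = (NodeState.INIT, time.monotonic())

    def ack_received(self, node_id):
        # Init -> Alive (membership granted), or refresh an Alive node's liveness.
        self.nodes[node_id] = (NodeState.ALIVE, time.monotonic())

    def check_timeouts(self):
        # Alive -> Down for any node whose acknowledgment is overdue.
        now = time.monotonic()
        for node_id, (state, last_ack) in list(self.nodes.items()):
            if state is NodeState.ALIVE and now - last_ack > self.ack_timeout:
                self.nodes[node_id] = (NodeState.DOWN, last_ack)

cm = ClusterMembership()
cm.node_joins("node-1")
cm.ack_received("node-1")   # node-1 is now Alive
cm.check_timeouts()         # would mark node-1 Down if its ack were overdue
```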
A heartbeat network is a private network shared only by the nodes in the cluster and not accessible from outside the cluster. It is used by the cluster nodes to monitor each node's status and to exchange the messages necessary for maintaining the operation of the cluster. The heartbeat method relies on the FIFO nature of the signals sent across the network: by making sure that all messages have been received, the system ensures that events can be properly ordered. [ 12 ]
In this communications protocol, every node sends back a message in a given interval, say delta, in effect confirming that it is alive and has a heartbeat. These messages are viewed as control messages that help determine that the network includes no delayed messages. A receiver node, called a "sync", maintains an ordered list of the received messages. Once a message with a timestamp later than a given marked time has been received from every node, the system determines that all messages up to that time have been received, since the FIFO property ensures that the messages are ordered. [ 13 ]
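As an illustration only, a toy "sync" node could be sketched as follows; the node names, timestamps, and delta handling are hypothetical and rely on the simplifying assumption that every channel really is FIFO:

```python
class SyncNode:
    """Collects timestamped heartbeat/control messages from every node and
    decides the point up to which all messages are known to have arrived."""

    def __init__(self, node_ids):
        self.latest = {node_id: None for node_id in node_ids}   # newest timestamp per node

    def receive(self, node_id, timestamp):
        # FIFO channels mean timestamps from a given node arrive in increasing order.
        self.latest[node_id] = timestamp

    def all_received_up_to(self, marked_time):
        # True once every node has reported something later than marked_time:
        # FIFO ordering then guarantees nothing earlier is still in flight.
        return all(t is not None and t > marked_time for t in self.latest.values())

sync = SyncNode(["a", "b", "c"])
sync.receive("a", 10.2)
sync.receive("b", 10.5)
sync.receive("c", 10.1)
print(sync.all_received_up_to(10.0))   # True: every node has reported past t = 10.0
```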
In general, it is difficult to select a delta that is optimal for all applications. If delta is too small, the heartbeat messages impose too much overhead; if it is too large, performance degrades as everything waits for the next heartbeat signal. [ 14 ] | https://en.wikipedia.org/wiki/Heartbeat_message |
Heartbleed is a security bug in some outdated versions of the OpenSSL cryptography library, which is a widely used implementation of the Transport Layer Security (TLS) protocol. It was introduced into the software in 2012 and publicly disclosed in April 2014. Heartbleed could be exploited regardless of whether the vulnerable OpenSSL instance is running as a TLS server or client. It resulted from improper input validation (due to a missing bounds check ) in the implementation of the TLS heartbeat extension. [ 5 ] Thus, the bug's name derived from heartbeat . [ 6 ] The vulnerability was classified as a buffer over-read , [ 7 ] a situation where more data can be read than should be allowed. [ 8 ]
Heartbleed was registered in the Common Vulnerabilities and Exposures database as CVE - 2014-0160 . [ 7 ] The federal Canadian Cyber Incident Response Centre issued a security bulletin advising system administrators about the bug. [ 9 ] A fixed version of OpenSSL was released on 7 April 2014, on the same day Heartbleed was publicly disclosed. [ 10 ]
TLS implementations other than OpenSSL, such as GnuTLS , Mozilla 's Network Security Services , and the Windows platform implementation of TLS , were not affected because the defect existed in the OpenSSL's implementation of TLS rather than in the protocol itself. [ 11 ]
System administrators were frequently slow to patch their systems. As of 20 May 2014 [update] , 1.5% of the 800,000 most popular TLS-enabled websites were still vulnerable to the bug, [ 12 ] and by 21 June 2014 [update] , 309,197 public web servers remained vulnerable. [ 13 ] According to a 23 January 2017 [update] report [ 14 ] from Shodan , nearly 180,000 internet-connected devices were still vulnerable to the bug, [ 15 ] [ 16 ] but by 6 July 2017 [update] , the number had dropped to 144,000 according to a search performed on shodan.io for the vulnerability. [ 17 ] Around two years later, 11 July 2019 [update] , Shodan reported [ 18 ] that 91,063 devices were vulnerable. The U.S. had the most vulnerable devices, with 21,258 (23%), and the 10 countries with the most vulnerable devices had a total of 56,537 vulnerable devices (62%). The remaining countries totaled 34,526 devices (38%). The report also broke the devices down by 10 other categories such as organization (the top 3 were wireless companies), product ( Apache httpd , Nginx ), and service ( HTTPS , 81%).
The Heartbeat Extension for the Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS) protocols was proposed as a standard in February 2012 by RFC 6520 . [ 19 ] It provides a way to test and keep alive secure communication links without the need to renegotiate the connection each time. In 2011, one of the RFC's authors, Robin Seggelmann, then a Ph.D. student at the Fachhochschule Münster , implemented the Heartbeat Extension for OpenSSL. Following Seggelmann's request to put the result of his work into OpenSSL, [ 20 ] [ 21 ] [ 22 ] his change was reviewed by Stephen N. Henson, one of OpenSSL's four core developers. Henson failed to notice a bug in Seggelmann's implementation, and introduced the flawed code into OpenSSL's source code repository on 31 December 2011. The defect spread with the release of OpenSSL version 1.0.1 on 14 March 2012. Heartbeat support was enabled by default, causing affected versions to be vulnerable. [ 3 ] [ 23 ]
According to Mark J. Cox of OpenSSL, Neel Mehta of Google's security team privately reported Heartbleed to the OpenSSL team on 1 April 2014 11:09 UTC. [ 24 ]
The bug was named by an engineer at Synopsys Software Integrity Group (formerly Codenomicon), a Finnish cyber security company that also created the bleeding heart logo, [ 25 ] designed by Finnish graphic designer Leena Kurjenniska, and launched an informational website, heartbleed.com. [ 26 ] While Google's security team reported Heartbleed to OpenSSL first, both Google and Codenomicon discovered it independently at approximately the same time. [ 27 ] [ 28 ] Codenomicon reports 3 April 2014 as their date of discovery and their date of notification of NCSC-FI [ fi ] for vulnerability coordination. [ 29 ]
At the time of disclosure, some 17% (around half a million) of the Internet's secure web servers certified by trusted authorities were believed to be vulnerable to the attack, allowing theft of the servers' private keys and users' session cookies and passwords. [ 30 ] [ 31 ] [ 32 ] [ 33 ] [ 34 ] The Electronic Frontier Foundation , [ 35 ] Ars Technica , [ 36 ] and Bruce Schneier [ 37 ] all deemed the Heartbleed bug "catastrophic". Forbes cybersecurity columnist Joseph Steinberg wrote:
Some might argue that Heartbleed is the worst vulnerability found (at least in terms of its potential impact) since commercial traffic began to flow on the Internet. [ 38 ]
An unidentified UK Cabinet Office spokesman recommended that:
People should take advice on changing passwords from the websites they use.
Most websites have corrected the bug and are best placed to advise what action, if any, people need to take. [ 39 ]
On the day of disclosure, The Tor Project advised:
If you need strong anonymity or privacy on the Internet, you might want to stay away from the Internet entirely for the next few days while things settle. [ 40 ]
The Sydney Morning Herald published a timeline of the discovery on 15 April 2014, showing that some organizations had been able to patch the bug before its public disclosure. In some cases, it is not clear how they found out. [ 41 ]
Bodo Möller and Adam Langley of Google prepared the fix for Heartbleed. The resulting patch was added to Red Hat 's issue tracker on 21 March 2014. [ 42 ] Stephen N. Henson applied the fix to OpenSSL's version control system on 7 April. [ 43 ] The first fixed version, 1.0.1g, was released on the same day. As of 21 June 2014 [update] , 309,197 public web servers remained vulnerable. [ 13 ] As of 23 January 2017 [update] , according to a report [ 14 ] from Shodan, nearly 180,000 internet-connected devices were still vulnerable. [ 15 ] [ 16 ] The number had dropped to 144,000 as of 6 July 2017 [update] , according to a search on shodan.io for "vuln:cve-2014-0160". [ 17 ]
According to Netcraft , about 30,000 of the 500,000+ X.509 certificates which could have been compromised due to Heartbleed had been reissued by 11 April 2014, although fewer had been revoked. [ 44 ]
By 9 May 2014, only 43% of affected web sites had reissued their security certificates. In addition, 7% of the reissued security certificates used the potentially compromised keys. Netcraft stated:
By reusing the same private key, a site that was affected by the Heartbleed bug still faces exactly the same risks as those that have not yet replaced their SSL certificates . [ 45 ]
eWeek said, "[Heartbleed is] likely to remain a risk for months, if not years, to come." [ 46 ]
Cloudflare revoked all TLS certificates and estimated that publishing its certificate revocation list would cost the issuer, GlobalSign , $400,000 per month that year. [ 47 ]
The Canada Revenue Agency reported a theft of social insurance numbers belonging to 900 taxpayers, and said that they were accessed through an exploit of the bug during a 6-hour period on 8 April 2014. [ 48 ] After the discovery of the attack, the agency shut down its website and extended the taxpayer filing deadline from 30 April to 5 May. [ 49 ] The agency said it would provide credit protection services at no cost to anyone affected. On 16 April, the RCMP announced they had charged a computer science student in relation to the theft with unauthorized use of a computer and mischief in relation to data . [ 50 ] [ 51 ]
The UK parenting site Mumsnet had several user accounts hijacked, and its CEO was impersonated. [ 52 ] The site later published an explanation of the incident saying it was due to Heartbleed and the technical staff patched it promptly. [ 53 ]
Anti-malware researchers also exploited Heartbleed to their own advantage in order to access secret forums used by cybercriminals. [ 54 ] Studies were also conducted by deliberately setting up vulnerable machines. For example, on 12 April 2014, at least two independent researchers were able to steal private keys from an experimental server intentionally set up for that purpose by CloudFlare . [ 55 ] [ 56 ] Also, on 15 April 2014, J. Alex Halderman , a professor at University of Michigan , reported that his honeypot server, an intentionally vulnerable server designed to attract attacks in order to study them, had received numerous attacks originating from China. Halderman concluded that because it was a fairly obscure server, these attacks were probably sweeping attacks affecting large areas of the Internet. [ 57 ]
In August 2014, it was made public that the Heartbleed vulnerability enabled hackers to steal security keys from Community Health Systems , the second-biggest for-profit U.S. hospital chain in the United States, compromising the confidentiality of 4.5 million patient records. The breach happened a week after Heartbleed was first made public. [ 58 ]
Many major web sites patched the bug or disabled the Heartbeat Extension within days of its announcement, [ 59 ] but it is unclear whether potential attackers were aware of it earlier and to what extent it was exploited. [ citation needed ]
Based on examinations of audit logs by researchers, it has been reported that some attackers may have exploited the flaw for at least five months before discovery and announcement. [ 60 ] [ 61 ] Errata Security pointed out that a widely used non-malicious program called Masscan , introduced six months before Heartbleed's disclosure, abruptly terminates the connection in the middle of handshaking in the same way as Heartbleed, generating the same server log messages, adding "Two new things producing the same error messages might seem like the two are correlated, but of course, they aren't. [ 62 ] "
According to Bloomberg News , two unnamed insider sources informed it that the United States' National Security Agency had been aware of the flaw since shortly after its appearance but—instead of reporting it—kept it secret among other unreported zero-day vulnerabilities in order to exploit it for the NSA's own purposes. [ 63 ] [ 64 ] [ 65 ] The NSA has denied this claim, [ 66 ] as has Richard A. Clarke , a member of the National Intelligence Review Group on Intelligence and Communications Technologies that reviewed the United States' electronic surveillance policy; he told Reuters on 11 April 2014 that the NSA had not known of Heartbleed. [ 67 ] The allegation prompted the American government to make, for the first time, a public statement on its zero-day vulnerabilities policy, accepting the recommendation of the review group's 2013 report that had asserted "in almost all instances, for widely used code, it is in the national interest to eliminate software vulnerabilities rather than to use them for US intelligence collection", and saying that the decision to withhold should move from the NSA to the White House. [ 68 ]
The RFC 6520 Heartbeat Extension tests TLS/DTLS secure communication links by allowing a computer at one end of a connection to send a Heartbeat Request message, consisting of a payload, typically a text string, along with the payload's length as a 16-bit integer. The receiving computer then must send exactly the same payload back to the sender. [ citation needed ]
The affected versions of OpenSSL allocate a memory buffer for the message to be returned based on the length field in the requesting message, without regard to the actual size of that message's payload. Because of this failure to do proper bounds checking , the message returned consists of the payload, possibly followed by whatever else happened to be in the allocated memory buffer. [ citation needed ]
Heartbleed is therefore exploited by sending a malformed heartbeat request with a small payload and large length field to the vulnerable party (usually a server) in order to elicit the victim's response, permitting attackers to read up to 64 kilobytes of the victim's memory that was likely to have been used previously by OpenSSL. [ 69 ] Where a Heartbeat Request might ask a party to "send back the four-letter word 'bird'", resulting in a response of "bird", a "Heartbleed Request" (a malicious heartbeat request) of "send back the 500-letter word 'bird'" would cause the victim to return "bird" followed by whatever 496 subsequent characters the victim happened to have in active memory. Attackers in this way could receive sensitive data, compromising the confidentiality of the victim's communications. Although an attacker has some control over the disclosed memory block's size, it has no control over its location, and therefore cannot choose what content is revealed. [ citation needed ]
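The following Python fragment is a simplified model of the flaw, not OpenSSL's actual C code: it builds a heartbeat record and then echoes back as many bytes as the attacker-supplied length field claims, rather than as many bytes as the payload actually contains. The "adjacent memory" is simulated by concatenating a byte string of pretend secrets.

```python
import struct

def build_heartbeat(payload, claimed_length):
    # TLS heartbeat body: type (1 byte), claimed payload length (16-bit), payload, padding.
    return struct.pack("!BH", 1, claimed_length) + payload + b"\x00" * 16

def vulnerable_respond(record, adjacent_memory):
    hb_type, claimed_length = struct.unpack("!BH", record[:3])
    # BUG: trusts claimed_length instead of checking it against the record actually
    # received, so the echo can run past the payload into adjacent "memory".
    buffer = record[3:] + adjacent_memory
    return buffer[:claimed_length]

secrets = b"session=SECRET123; -----BEGIN RSA PRIVATE KEY----- ..."
malicious = build_heartbeat(b"bird", claimed_length=500)
print(vulnerable_respond(malicious, secrets))
# b'bird' followed by padding bytes and whatever happened to sit next to it in memory
```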
The affected versions of OpenSSL are OpenSSL 1.0.1 through 1.0.1f (inclusive). Subsequent versions (1.0.1g [ 70 ] and later) and previous versions (1.0.0 branch and older) are not vulnerable. [ 71 ] Installations of the affected versions are vulnerable unless OpenSSL was compiled with -DOPENSSL_NO_HEARTBEATS . [ 72 ] [ 73 ]
The vulnerable program source files are t1_lib.c and d1_both.c and the vulnerable functions are tls1_process_heartbeat() and dtls1_process_heartbeat(). [ 74 ] [ 75 ]
The problem can be fixed by ignoring Heartbeat Request messages that ask for more data than their payload actually contains, as required by the RFC.
Version 1.0.1g of OpenSSL adds some bounds checks to prevent the buffer over-read. The test listed below was one introduced to determine whether a heartbeat request would trigger Heartbleed; it silently discards malicious requests.
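In the same spirit, a sketch of the essential bounds check (again a Python model, not the actual C patch shipped in 1.0.1g) compares the claimed payload length against the size of the record that was actually received and silently drops requests that do not fit:

```python
import struct

def patched_respond(record):
    hb_type, claimed_length = struct.unpack("!BH", record[:3])
    # 1 type byte + 2 length bytes + claimed payload + 16 bytes of padding
    # must fit inside the record that was actually received.
    if 1 + 2 + claimed_length + 16 > len(record):
        return None                      # silently discard, as RFC 6520 requires
    return record[3:3 + claimed_length]  # echo back only the genuine payload

honest = struct.pack("!BH", 1, 4) + b"bird" + b"\x00" * 16
oversized = struct.pack("!BH", 1, 500) + b"bird" + b"\x00" * 16
print(patched_respond(honest))      # b'bird'
print(patched_respond(oversized))   # None: the malformed request is dropped
```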
The data obtained by a Heartbleed attack may include unencrypted exchanges between TLS parties likely to be confidential, including any form post data in users' requests. Moreover, the confidential data exposed could include authentication secrets such as session cookies and passwords, which might allow attackers to impersonate a user of the service. [ 76 ]
An attack may also reveal private keys of compromised parties, [ 3 ] [ 77 ] which would enable attackers to decrypt communications (future or past stored traffic captured via passive eavesdropping, unless perfect forward secrecy is used, in which case only future traffic can be decrypted if intercepted via man-in-the-middle attacks ). [ citation needed ]
An attacker having gained authentication material may impersonate the material's owner after the victim has patched Heartbleed, as long as the material is accepted (for example, until the password is changed or the private key revoked). Heartbleed therefore constitutes a critical threat to confidentiality. However, an attacker impersonating a victim may also alter data. Indirectly, Heartbleed's consequences may thus go far beyond a confidentiality breach for many systems. [ 78 ]
A survey of American adults conducted in April 2014 showed that 60 percent had heard about Heartbleed. Among those using the Internet, 39 percent had protected their online accounts, for example by changing passwords or canceling accounts; 29 percent believed their personal information was put at risk because of the Heartbleed bug; and 6 percent believed their personal information had been stolen. [ 79 ]
Although the bug received more attention due to the threat it represents for servers, [ 80 ] TLS clients using affected OpenSSL instances are also vulnerable. In what The Guardian therefore dubbed Reverse Heartbleed , malicious servers are able to exploit Heartbleed to read data from a vulnerable client's memory. [ 81 ] Security researcher Steve Gibson said of Heartbleed that:
It's not just a server-side vulnerability, it's also a client-side vulnerability because the server, or whomever you connect to, is as able to ask you for a heartbeat back as you are to ask them. [ 82 ]
The stolen data could contain usernames and passwords. [ 83 ] Reverse Heartbleed affected millions of application instances. [ 81 ] Some of the vulnerable applications are listed in the "Software applications" section below . [ citation needed ]
Cisco Systems has identified 78 of its products as vulnerable, including IP phone systems and telepresence (video conferencing) systems. [ 84 ]
An analysis posted on GitHub of the most visited websites on 8 April 2014 revealed vulnerabilities in sites including Yahoo! , Imgur , Stack Overflow , Slate , and DuckDuckGo . [ 85 ] [ 86 ] The following sites have services affected or made announcements recommending that users update passwords in response to the bug:
The Canadian federal government temporarily shut online services of the Canada Revenue Agency (CRA) and several government departments over Heartbleed bug security concerns. [ 111 ] [ 112 ] Before the CRA online services were shut down, a hacker obtained approximately 900 social insurance numbers . [ 113 ] [ 114 ] Another Canadian Government agency, Statistics Canada , had its servers compromised due to the bug and also temporarily took its services offline. [ 115 ]
Platform maintainers like the Wikimedia Foundation advised their users to change passwords. [ 108 ]
The servers of LastPass were vulnerable, [ 116 ] but due to additional encryption and forward secrecy, potential attacks were not able to exploit this bug. However, LastPass recommended that its users change passwords for vulnerable websites. [ 117 ]
The Tor Project recommended that Tor relay operators and hidden service operators revoke and generate fresh keys after patching OpenSSL, but noted that Tor relays use two sets of keys and that Tor's multi-hop design minimizes the impact of exploiting a single relay. [ 40 ] 586 relays later found to be susceptible to the Heartbleed bug were taken off-line as a precautionary measure. [ 118 ] [ 119 ] [ 120 ] [ 121 ]
Game-related services including Steam , Minecraft , Wargaming , League of Legends , GOG.com , Origin , Sony Online Entertainment , Humble Bundle , and Path of Exile were affected and subsequently fixed. [ 122 ]
Vulnerable software applications include:
Several other Oracle Corporation applications were affected. [ 129 ]
Several Linux distributions were affected, including Debian [ 132 ] (and derivatives such as Linux Mint and Ubuntu [ 133 ] ) and Red Hat Enterprise Linux [ 134 ] (and derivatives such as CentOS , [ 135 ] Oracle Linux 6 [ 129 ] and Amazon Linux [ 136 ] ), as well as the following operating systems and firmware implementations:
Several services have been made available to test whether Heartbleed affects a given site. However, many services have been claimed to be ineffective for detecting the bug. [ 148 ] The available tools include:
Other security tools have added support for finding this bug. For example, Tenable Network Security wrote a plugin for its Nessus vulnerability scanner that can scan for this fault. [ 172 ] The Nmap security scanner includes a Heartbleed detection script from version 6.45. [ 173 ]
Sourcefire has released Snort rules to detect Heartbleed attack traffic and possible Heartbleed response traffic. [ 174 ] Open source packet analysis software such as Wireshark and tcpdump can identify Heartbleed packets using specific BPF packet filters that can be used on stored packet captures or live traffic. [ 175 ]
Vulnerability to Heartbleed is resolved by updating OpenSSL to a patched version (1.0.1g or later). OpenSSL can be used either as a standalone program, a dynamic shared object , or a statically-linked library ; therefore, the updating process can require restarting processes loaded with a vulnerable version of OpenSSL as well as re-linking programs and libraries that linked it statically. In practice this means updating packages that link OpenSSL statically, and restarting running programs to remove the in-memory copy of the old, vulnerable OpenSSL code. [ citation needed ]
After the vulnerability is patched, server administrators must address the potential breach of confidentiality. Because Heartbleed allowed attackers to disclose private keys , they must be treated as compromised; key pairs must be regenerated, and certificates that use them must be reissued; the old certificates must be revoked . Heartbleed also had the potential to allow disclosure of other in-memory secrets; therefore, other authentication material (such as passwords ) should also be regenerated. It is rarely possible to confirm that a system which was affected has not been compromised, or to determine whether a specific piece of information was leaked. [ 176 ]
Since it is difficult or impossible to determine when a credential might have been compromised and how it might have been used by an attacker, certain systems may warrant additional remediation work even after patching the vulnerability and replacing credentials. For example, signatures made by keys that were in use with a vulnerable OpenSSL version might well have been made by an attacker; this raises the possibility integrity has been violated, and opens signatures to repudiation . Validation of signatures and the legitimacy of other authentications made with a potentially compromised key (such as client certificate use) must be done with regard to the specific system involved. [ citation needed ]
Since Heartbleed threatened the privacy of private keys, users of a website which was compromised could continue to suffer from Heartbleed's effects until their browser is made aware of the certificate revocation or the compromised certificate expires. [ 177 ] For this reason, remediation also depends on users making use of browsers that have up-to-date certificate revocation lists (or OCSP support) and honour certificate revocations. [ 178 ]
Although evaluating the total cost of Heartbleed is difficult, eWeek estimated US$500 million as a starting point. [ 179 ]
David A. Wheeler's paper How to Prevent the next Heartbleed analyzes why Heartbleed wasn't discovered earlier, and suggests several techniques which could have led to a faster identification, as well as techniques which could have reduced its impact. According to Wheeler, the most efficient technique which could have prevented Heartbleed is a test suite thoroughly performing robustness testing , i.e. testing that invalid inputs cause failures rather than successes. Wheeler highlights that a single general-purpose test suite could serve as a base for all TLS implementations. [ 180 ]
According to an article on The Conversation written by Robert Merkel, Heartbleed revealed a massive failure of risk analysis . Merkel thinks OpenSSL gives more importance to performance than to security, which no longer makes sense in his opinion. But Merkel considers that OpenSSL should not be blamed as much as OpenSSL users, who chose to use OpenSSL, without funding better auditing and testing. Merkel explains that two aspects determine the risk that more similar bugs will cause vulnerabilities. One, the library's source code influences the risk of writing bugs with such an impact. Secondly, OpenSSL's processes affect the chances of catching bugs quickly. On the first aspect, Merkel mentions the use of the C programming language as one risk factor which favored Heartbleed's appearance, echoing Wheeler's analysis. [ 180 ] [ 181 ]
On the same aspect, Theo de Raadt , founder and leader of the OpenBSD and OpenSSH projects, has criticized the OpenSSL developers for writing their own memory management routines and thereby, he claims, circumventing OpenBSD C standard library exploit countermeasures, saying "OpenSSL is not developed by a responsible team." [ 182 ] [ 183 ] Following Heartbleed's disclosure, members of the OpenBSD project forked OpenSSL into LibreSSL . [ 184 ] LibreSSL made a big code cleanup, removing more than 90,000 lines of C code just in its first week. [ 185 ]
The author of the change which introduced Heartbleed, Robin Seggelmann, [ 186 ] stated that he missed validating a variable containing a length and denied any intention to submit a flawed implementation. [ 20 ] Following Heartbleed's disclosure, Seggelmann suggested focusing on the second aspect, stating that OpenSSL is not reviewed by enough people. [ 187 ] Although Seggelmann's work was reviewed by an OpenSSL core developer, the review was also intended to verify functional improvements, a situation making vulnerabilities much easier to miss. [ 180 ]
OpenSSL core developer Ben Laurie claimed that a security audit of OpenSSL would have caught Heartbleed. [ 188 ] Software engineer John Walsh commented:
Think about it, OpenSSL only has two [fulltime] people to write, maintain, test, and review 500,000 lines of business critical code. [ 189 ]
The OpenSSL foundation's president, Steve Marquess, said "The mystery is not that a few overworked volunteers missed this bug; the mystery is why it hasn't happened more often." [ 189 ] David A. Wheeler described audits as an excellent way to find vulnerabilities in typical cases, but noted that "OpenSSL uses unnecessarily complex structures, which makes it harder for both humans and machines to review." He wrote:
There should be a continuous effort to simplify the code, because otherwise just adding capabilities will slowly increase the software complexity. The code should be refactored over time to make it simple and clear, not just constantly add new features. The goal should be code that is "obviously right", as opposed to code that is so complicated that "I can't see any problems". [ 180 ]
According to security researcher Dan Kaminsky , Heartbleed is a sign of an economic problem which needs to be fixed. Seeing the time taken to catch this simple error in a simple feature from a "critical" dependency, Kaminsky fears numerous future vulnerabilities if nothing is done. When Heartbleed was discovered, OpenSSL was maintained by a handful of volunteers, only one of whom worked full time. [ 190 ] Yearly donations to the OpenSSL project were about US$2,000. [ 191 ] The Heartbleed website from Codenomicon advised money donations to the OpenSSL project. [ 3 ] After learning about donations for the 2 or 3 days following Heartbleed's disclosure totaling US$841, Kaminsky commented "We are building the most important technologies for the global economy on shockingly underfunded infrastructure." [ 192 ] Core developer Ben Laurie has qualified the project as "completely unfunded". [ 191 ] Although the OpenSSL Software Foundation has no bug bounty program , the Internet Bug Bounty initiative awarded US$15,000 to Google's Neel Mehta, who discovered Heartbleed, for his responsible disclosure. [ 191 ] Mehta later donated his reward to a Freedom of the Press Foundation fundraiser. [ 193 ]
Paul Chiusano suggested Heartbleed may have resulted from failed software economics. [ 194 ]
The industry's collective response to the crisis was the Core Infrastructure Initiative , a multimillion-dollar project announced by the Linux Foundation on 24 April 2014 to provide funds to critical elements of the global information infrastructure. [ 195 ] The initiative intends to allow lead developers to work full time on their projects and to pay for security audits, hardware and software infrastructure, travel, and other expenses. [ 196 ] OpenSSL is a candidate to become the first recipient of the initiative's funding. [ 195 ]
After the discovery, Google established Project Zero , which is tasked with finding zero-day vulnerabilities to help secure the Web and society. [ 197 ] [ 198 ] | https://en.wikipedia.org/wiki/Heartbleed |
Inter Pipeline 's Heartland Petrochemical Complex is a $3.5-billion project in Fort Saskatchewan , Alberta which will produce recyclable plastics from the province's propane . [ 1 ] With its anticipated completion in 2021, Inter Pipeline 's complex would be Canada 's "first integrated propane dehydrogenation and polypropylene facility." [ 2 ] The Complex is expected to create 2,300 jobs in construction and facility operations. [ 1 ]
Inter Pipeline's [ Notes 1 ] [ 3 ] Complex Project is supported with up to $200-million in future royalty credits under the Alberta government's Petrochemicals Diversification Program. [ 4 ] In March 2019, Navdeep Bains , Minister of Innovation, Science and Economic Development said that ISED would be investing $49 million towards the Complex as part of their "$1.6-billion plan to support jobs and workers in Canada’s oil and gas sector." [ 5 ]
Central to the project is the 97-metre-high propylene-propane splitter, which weighs over 800 tonnes. [ 4 ]
Environmentalists opposed the proposed plant, along with Pembina Pipeline Corp.’s proposed $4.5-billion petrochemical "integrated propane dehydrogenation plant and polypropylene upgrading facility" in Sturgeon, a joint venture with Kuwait's Petrochemical Industries Co. They say that "very little plastic is recycled in Canada — almost 90 per cent winds up as litter or in landfills". Notley said that "upgrading hydrocarbons at home instead of shipping raw product into the United States allows the province to ensure it has among the lowest emitting petrochemical producers in the world." [ 6 ] | https://en.wikipedia.org/wiki/Heartland_Petrochemical_Complex |
A heat-shrinkable sleeve (or commonly "shrink sleeve") is a corrosion-protective coating for pipelines in the form of a wraparound or tubular sleeve that is field-applied.
The first heat-shrinkable sleeves were introduced [ when? ] as polyethylene pipeline coatings started to replace bituminous or tape coatings in the oil and gas industry. At the time, the processing for polyethylene to make the sleeve backing was new technology and the adhesives used in sleeves were much the same as those used on pipeline coating.
The technology used to make sleeves has advanced significantly since then, with new methods of cross-linking the polyolefin backings and new-generation adhesives that are formulated to provide performance under more-demanding pipeline conditions. [ 1 ]
Heat-shrinkable means just that: heat them up and they shrink, or more correctly, they recover in length. A heat-shrinkable sleeve starts out with a thick extruded polyolefin sheet (polyethylene or polypropylene ) that is formulated to be cross-linkable. After extruding the thick sheet, it is taken to the "beam", where it is passed under a unit that subjects the sheet to electron irradiation. [ 2 ] The irradiation process cross-links the polyolefin. This improves the molecular structure such that the polyolefin will work as part of a heat-shrinkable sleeve and provide the required level of mechanical protection while in service. It makes the polyolefin perform more like a tough, heat-resistant, elastic material or rubber, [ 3 ] rather than like a plastic material.
After cross-linking, the sheet is stretched by feeding it into a machine that heats it up, stretches it and cools it down. Because the sheet has been cross-linked, after stretching, it will want to recover to its original length when re-heated.
In recent years, many manufacturers have developed their own technologies for extruding and expanding the polyolefin backing. Traditionally, the backing was produced by extruding, cross-linking and then expanding. To increase production efficiency, however, some manufacturers now expand the backing during extrusion and then send it to the electron beam for cross-linking.
An adhesive is then applied to the sheet and various manufacturers use proprietary techniques depending on the type, viscosity and melting temperature of the adhesive. The adhesive is the key to ultimate performance of the installed system, which is why different adhesive types will be specified depending on the pipeline operating conditions.
The adhesive has many functions; it adheres the installed sleeve to the steel at the coating cutback and mainline coating, it resists shear forces imparted by soil pressure after the pipeline is buried and provides long term corrosion protection to the steel. The choice of which adhesive to use is based on the pipeline design and operating conditions. As an example, for small diameter flow lines operating at ambient temperatures, a soft mastic-based adhesive may be chosen, while on large diameter pipelines operating at higher temperatures, a hard, semi-crystalline hot-melt adhesive is used. The adhesive needs to be chosen based on its corrosion protection properties, adhesion strength, and resistance to shear forces imparted by pipe movement and the effects of soil pressures.
The coated sheet is then cut into individual sleeves suitable for application on a pipeline. [ 4 ] As mentioned before, the sheet is stretched and wants to recover when heated, so a sealing strip or "closure" is applied during sleeve installation so that the sleeve will stay in place during and after recovery.
A final component is an optional epoxy primer. Primers for heat-shrinkable sleeves work in the same manner as an FBE primer does when it is specified on 3-layer polyolefin pipeline coatings, and are typically applied between 150 μm and 300 μm thick. Usually, the primer for a heat-shrinkable sleeve is a two-component, solvent-free epoxy: one component is the primer base and the other is the curing agent.
When steel pipelines are built, they commonly consist of 10 to 12 m long sections of steel pipe that have had a corrosion-protective coating applied in a factory. The factory will leave an uncoated area at each end of the pipe called a "cutback" so that the coating is not damaged when the pipe sections are welded together. Heat-shrinkable sleeves are applied onto the cutback at the field weld or "field joint" during the construction of a pipeline.
As described above, the heat-shrinkable sleeves have an adhesive that sticks the sleeve to the cutback and the factory applied mainline coating and also acts as a corrosion protective layer. The backing provides mechanical protection against abrasion and soil stress forces after the pipeline is buried.
Heat wrap tape may be used in addition for pipe bends, or as an alternative method for wrapping the whole pipe. | https://en.wikipedia.org/wiki/Heat-shrinkable_sleeve |
In fluid thermodynamics , a heat transfer fluid ( HTF ) is a gas or liquid that takes part in heat transfer by serving as an intermediary in cooling on one side of a process , transporting and storing thermal energy , and heating on another side of a process. Heat transfer fluids are used in countless applications and industrial processes requiring heating or cooling, typically in a closed circuit and in continuous cycles . Cooling water, for instance, cools an engine, while heating water in a hydronic heating system heats the radiator in a room.
Water is the most common heat transfer fluid because of its economy, high heat capacity and favorable transport properties. However, the useful temperature range is restricted by freezing below 0 °C and boiling at elevated temperatures depending on the system pressure . Antifreeze additives can alleviate the freezing problem to some extent. However, many other heat transfer fluids have been developed and used in a huge variety of applications. For higher temperatures, oil or synthetic hydrocarbon - or silicone -based fluids offer lower vapor pressure . Molten salts and molten metals can be used for transferring and storing heat at temperatures above 300 to 400 °C where organic fluids start to decompose. Gases such as water vapor , nitrogen , argon , helium and hydrogen have been used as heat transfer fluids where liquids are not suitable. For gases the pressure typically needs to be elevated to facilitate higher flow rates with low pumping power.
To prevent overheating, the fluid flows through a system or device and carries the heat away from that particular device or system.
They generally have a high boiling point and a high heat capacity . High boiling point prevents the heat transfer liquids from vaporising at high temperatures. High heat capacity enables a small amount of the refrigerant to transfer a large amount of heat very efficiently.
Heat transfer liquids should not have a low boiling point, because a liquid with a low boiling point will vaporise at relatively low temperatures when it exchanges heat with hot substances, producing vapor inside the equipment in which it is used.
Heat transfer fluids should also have a high heat capacity. The heat capacity indicates how much heat the fluid can absorb per unit rise in its temperature; for a liquid, it therefore also governs how much heat the liquid can take up before its temperature reaches the boiling point and it vaporises.
If the fluid has a low heat capacity, a large amount of it is required to exchange a relatively small amount of heat, which increases the cost of using heat transfer fluids and reduces the efficiency of the process.
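A rough sketch of that trade-off, using round illustrative property values rather than data from any reference table, follows directly from the relation Q = m·c_p·ΔT:

```python
def required_mass_flow(heat_duty_w, cp_j_per_kg_k, delta_t_k):
    """Mass flow rate (kg/s) needed to carry heat_duty_w with a temperature rise delta_t_k."""
    return heat_duty_w / (cp_j_per_kg_k * delta_t_k)

duty = 500_000.0        # 500 kW of heat to move
delta_t = 20.0          # allowed fluid temperature rise, K

# Approximate specific heats in J/(kg*K); illustrative values only.
fluids = {"water": 4180.0, "typical thermal oil": 2000.0}

for name, cp in fluids.items():
    print(f"{name}: {required_mass_flow(duty, cp, delta_t):.1f} kg/s")
# Water's higher heat capacity roughly halves the mass flow needed for the same duty.
```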
For liquid heat transfer fluids, using too small a quantity can cause the liquid to vaporise, which is dangerous for the equipment in which it is used. The equipment is designed for liquids, but vaporisation introduces vapor into the flow channel, and gases occupy a larger volume than liquids at the same pressure. The resulting vapor raises the pressure on the walls of the pipe or channel through which it flows, which may cause the flow channel to rupture.
Heat transfer fluids have distinct thermal and chemical properties which determine their suitability for various industrial applications. Key characteristics include:
Heat transfer fluids are integral to various industrial applications, enabling precise temperature control in manufacturing processes. In the food industry, they are vital for processing meats and snacks. Chemical processes often rely on them for batch reactors and continuous operations. The plastics, rubber, and composites sectors use heat transfer fluids in molding and extrusion processes. They are also critical in petrochemical synthesis and distillation, oil and gas refining, and for converting materials in presses and laminating operations. [ 3 ]
In solar power plants, heat transfer fluids are used in concentrators like linear Fresnel and parabolic trough systems for efficient energy generation and thermal storage. Molten salts and synthetic heat transfer fluids are utilized based on their ability to function at various temperature ranges, contributing to the generation of electricity and the manufacturing of polysilicon for photovoltaic cells. These fluids assist in the purification and cooling steps of polysilicon production, essential for creating high-purity silicon for solar and electronic applications. [ 4 ] Technico-economic analyses are usually performed to select the appropriate heat transfer fluid. [ 5 ] Regarding the selection of a low-cost or cost-effective thermal oil, it is important to consider not only the acquisition or purchase cost, but also the operating and replacement costs. [ 5 ] An oil that is initially more expensive may prove to be more cost-effective in the long run if it offers higher thermal stability, thereby reducing the frequency of replacement. [ 5 ]
The choice of a heat transfer fluid is critical for system efficiency and longevity. Here are some commonly used fluids: | https://en.wikipedia.org/wiki/Heat-transfer_fluid |
A thermal reservoir , also thermal energy reservoir or thermal bath , is a thermodynamic system with a heat capacity so large that the temperature of the reservoir changes relatively little when a significant amount of heat is added or extracted. [ 1 ] As a conceptual simplification, it effectively functions as an infinite pool of thermal energy at a given, constant temperature. Since it can act as an inertial source and sink of heat, it is often also referred to as a heat reservoir or heat bath .
Lakes, oceans and rivers often serve as thermal reservoirs in geophysical processes, such as the weather. In atmospheric science , large air masses in the atmosphere often function as thermal reservoirs.
Since the temperature of a thermal reservoir T does not change during the heat transfer, the change of entropy in the reservoir is

$$dS_{\text{Res}} = \frac{\delta Q}{T}.$$
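For example, a reservoir held at $T = 300\ \text{K}$ that absorbs $Q = 1500\ \text{J}$ of heat gains entropy

$$\Delta S_{\text{Res}} = \frac{Q}{T} = \frac{1500\ \text{J}}{300\ \text{K}} = 5\ \text{J/K},$$

regardless of how the heat is delivered, precisely because the reservoir's temperature is taken to be unchanged.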
The microcanonical partition sum $Z(E)$ of a heat bath of temperature T has the property $Z(E+\Delta E) = Z(E)\,e^{\Delta E/k_{\text{B}}T}$, where $k_{\text{B}}$ is the Boltzmann constant . It thus changes by the same factor when a given amount of energy is added. The exponential factor in this expression can be identified with the reciprocal of the Boltzmann factor .
For an engineering application, see geothermal heat pump .
This thermodynamics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Heat_bath |
As quoted in an online version of:
As quoted at http://www.webelements.com/ from these sources:
As quoted from various sources in: | https://en.wikipedia.org/wiki/Heat_capacities_of_the_elements_(data_page) |
Heat capacity or thermal capacity is a physical property of matter , defined as the amount of heat to be supplied to an object to produce a unit change in its temperature . [ 1 ] The SI unit of heat capacity is joule per kelvin (J/K).
Heat capacity is an extensive property . The corresponding intensive property is the specific heat capacity , found by dividing the heat capacity of an object by its mass. Dividing the heat capacity by the amount of substance in moles yields its molar heat capacity . The volumetric heat capacity measures the heat capacity per volume . In architecture and civil engineering , the heat capacity of a building is often referred to as its thermal mass .
The heat capacity of an object, denoted by $C$, is the limit

$$C = \lim_{\Delta T \to 0} \frac{\Delta Q}{\Delta T},$$

where $\Delta Q$ is the amount of heat that must be added to the object (of mass M ) in order to raise its temperature by $\Delta T$.

The value of this parameter usually varies considerably depending on the starting temperature $T$ of the object and the pressure $p$ applied to it. In particular, it typically varies dramatically with phase transitions such as melting or vaporization (see enthalpy of fusion and enthalpy of vaporization ). Therefore, it should be considered a function $C(p,T)$ of those two variables.
The variation can be ignored in contexts when working with objects in narrow ranges of temperature and pressure. For example, the heat capacity of a block of iron weighing one pound is about 204 J/K when measured from a starting temperature T = 25 °C and P = 1 atm of pressure. That approximate value is adequate for temperatures between 15 °C and 35 °C, and surrounding pressures from 0 to 10 atmospheres, because the exact value varies very little in those ranges. One can trust that the same heat input of 204 J will raise the temperature of the block from 15 °C to 16 °C, or from 34 °C to 35 °C, with negligible error.
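Within such a narrow range, the heat needed for a temperature change is simply $C\,\Delta T$. A small check in Python, reusing the article's approximate figure for the iron block (the function name is invented for illustration):

```python
def heat_required(heat_capacity_j_per_k, t_start_c, t_end_c):
    """Heat (J) to take an object of constant heat capacity from t_start_c to t_end_c."""
    return heat_capacity_j_per_k * (t_end_c - t_start_c)

C_iron_block = 204.0   # J/K, one-pound iron block near room conditions
print(heat_required(C_iron_block, 15.0, 16.0))   # ~204 J for a 1 K rise
print(heat_required(C_iron_block, 25.0, 35.0))   # ~2040 J for a 10 K rise
```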
At constant pressure, heat supplied to the system contributes to both the work done and the change in internal energy , according to the first law of thermodynamics . The heat capacity is called $C_p$ and defined as:

$$C_p = \left.\frac{\delta Q}{dT}\right|_{p=\text{const}}$$

From the first law of thermodynamics follows $\delta Q = dU + p\,dV$, and expressing the internal energy as a function of $p$ and $T$ gives:

$$\delta Q = \left(\frac{\partial U}{\partial T}\right)_p dT + \left(\frac{\partial U}{\partial p}\right)_T dp + p\left[\left(\frac{\partial V}{\partial T}\right)_p dT + \left(\frac{\partial V}{\partial p}\right)_T dp\right]$$

For constant pressure ($dp = 0$) the equation simplifies to:

$$C_p = \left.\frac{\delta Q}{dT}\right|_{p=\text{const}} = \left(\frac{\partial U}{\partial T}\right)_p + p\left(\frac{\partial V}{\partial T}\right)_p = \left(\frac{\partial H}{\partial T}\right)_p$$
where the final equality follows from the appropriate Maxwell relations , and is commonly used as the definition of the isobaric heat capacity.
A system undergoing a process at constant volume implies that no expansion work is done, so the heat supplied contributes only to the change in internal energy. The heat capacity obtained this way is denoted $C_V$. The value of $C_V$ is always less than the value of $C_p$ ($C_V < C_p$).

Expressing the internal energy as a function of the variables $T$ and $V$ gives:

$$\delta Q = \left(\frac{\partial U}{\partial T}\right)_V dT + \left(\frac{\partial U}{\partial V}\right)_T dV + p\,dV$$

For constant volume ($dV = 0$) the heat capacity reads:

$$C_V = \left.\frac{\delta Q}{dT}\right|_{V=\text{const}} = \left(\frac{\partial U}{\partial T}\right)_V$$

The relation between $C_V$ and $C_p$ is then:

$$C_p = C_V + \left(\left(\frac{\partial U}{\partial V}\right)_T + p\right)\left(\frac{\partial V}{\partial T}\right)_p$$
Mayer's relation and the definition of the heat capacity ratio read:

$$C_p - C_V = nR, \qquad C_p / C_V = \gamma,$$

where $n$ is the amount of substance (in moles), $R$ is the gas constant, and $\gamma$ is the heat capacity ratio.
Using the above two relations, the specific heats can be deduced as follows:
$$C_V = \frac{nR}{\gamma - 1}, \qquad C_p = \gamma\,\frac{nR}{\gamma - 1}.$$

Following from the equipartition of energy , it is deduced that an ideal gas has the isochoric heat capacity

$$C_V = nR\,\frac{N_f}{2} = nR\,\frac{3 + N_i}{2},$$

where $N_f$ is the number of degrees of freedom of each individual particle in the gas, and $N_i = N_f - 3$ is the number of internal degrees of freedom , where the number 3 comes from the three translational degrees of freedom (for a gas in 3D space). This means that a monoatomic ideal gas (with zero internal degrees of freedom) will have isochoric heat capacity $C_V = \frac{3nR}{2}$.
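A short Python sketch of these relations (taking the molar gas constant as R ≈ 8.314 J/(mol·K); the diatomic case assumes only the two rotational internal degrees of freedom are active):

```python
R = 8.314  # J/(mol*K)

def ideal_gas_heat_capacities(n_moles, internal_dof):
    """Isochoric and isobaric heat capacities of an ideal gas from equipartition."""
    dof = 3 + internal_dof                  # 3 translational + internal degrees of freedom
    c_v = n_moles * R * dof / 2
    c_p = c_v + n_moles * R                 # Mayer's relation
    return c_v, c_p, c_p / c_v

for label, internal in [("monoatomic", 0), ("diatomic (rigid rotor)", 2)]:
    c_v, c_p, gamma = ideal_gas_heat_capacities(1.0, internal)
    print(f"{label}: C_V = {c_v:.2f} J/K, C_p = {c_p:.2f} J/K, gamma = {gamma:.3f}")
# monoatomic: gamma = 5/3 ~ 1.667; diatomic: gamma = 7/5 = 1.4
```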
In an isothermal process the temperature of the system is constant, so (for an ideal gas) the internal energy does not change and the supplied heat goes entirely into work done by the system; since even a large heat input produces no temperature rise, the heat capacity of the system is infinite, or undefined.

The heat capacity of a system undergoing a phase transition is likewise infinite , because the heat is used to change the state of the material rather than to raise the overall temperature.
The heat capacity may be well-defined even for heterogeneous objects, with separate parts made of different materials; such as an electric motor , a crucible with some metal, or a whole building. In many cases, the (isobaric) heat capacity of such objects can be computed by simply adding together the (isobaric) heat capacities of the individual parts.
However, this computation is valid only when all parts of the object are at the same external pressure before and after the measurement. That may not be possible in some cases. For example, when heating an amount of gas in an elastic container, its volume and pressure will both increase, even if the atmospheric pressure outside the container is kept constant. Therefore, the effective heat capacity of the gas, in that situation, will have a value intermediate between its isobaric and isochoric capacities $C_p$ and $C_V$.
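When the simple additive picture does apply (all parts at the same pressure), the computation is a plain weighted sum, as in this sketch with made-up masses and approximate specific heats:

```python
# (part name, mass in kg, approximate specific heat in J/(kg*K))
parts = [
    ("copper windings",  5.0, 385.0),
    ("steel housing",   12.0, 466.0),
    ("aluminium frame",  3.0, 897.0),
]

total_C = sum(mass * c for _, mass, c in parts)   # heat capacity of the whole object
print(f"Total heat capacity ~ {total_C:.0f} J/K")
```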
For complex thermodynamic systems with several interacting parts and state variables , or for measurement conditions that are neither constant pressure nor constant volume, or for situations where the temperature is significantly non-uniform, the simple definitions of heat capacity above are not useful or even meaningful. The heat energy that is supplied may end up as kinetic energy (energy of motion) and potential energy (energy stored in force fields), both at macroscopic and atomic scales. Then the change in temperature will depend on the particular path that the system followed through its phase space between the initial and final states. Namely, one must somehow specify how the positions, velocities, pressures, volumes, etc. changed between the initial and final states; and use the general tools of thermodynamics to predict the system's reaction to a small energy input. The "constant volume" and "constant pressure" heating modes are just two among infinitely many paths that a simple homogeneous system can follow.
The heat capacity can usually be measured by the method implied by its definition: start with the object at a known uniform temperature, add a known amount of heat energy to it, wait for its temperature to become uniform, and measure the change in its temperature. This method can give moderately accurate values for many solids; however, it cannot provide very precise measurements, especially for gases.
The SI unit for heat capacity of an object is joule per kelvin (J/K or J⋅K⁻¹). Since an increment of temperature of one degree Celsius is the same as an increment of one kelvin, that is the same unit as J/°C.

The heat capacity of an object is an amount of energy divided by a temperature change, which has the dimension L²⋅M⋅T⁻²⋅Θ⁻¹. Therefore, the SI unit J/K is equivalent to kilogram meter squared per second squared per kelvin (kg⋅m²⋅s⁻²⋅K⁻¹).
Professionals in construction , civil engineering , chemical engineering , and other technical disciplines, especially in the United States , may use the so-called English Engineering units , that include the pound (lb = 0.45359237 kg) as the unit of mass, the degree Fahrenheit or Rankine ( 5 / 9 K, about 0.55556 K) as the unit of temperature increment, and the British thermal unit (BTU ≈ 1055.06 J), [ 3 ] [ 4 ] as the unit of heat. In those contexts, the unit of heat capacity is 1 BTU/°R ≈ 1900 J/K. [ 5 ] The BTU was in fact defined so that the average heat capacity of one pound of water would be 1 BTU/°F. In this regard, with respect to mass, note conversion of 1 Btu/lb⋅°R ≈ 4,187 J/kg⋅K [ 6 ] and the calorie (below).
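A quick arithmetic check of those engineering-unit figures, using the conversion factors quoted above:

```python
BTU_IN_J = 1055.06
RANKINE_IN_K = 5.0 / 9.0
LB_IN_KG = 0.45359237

btu_per_rankine_in_j_per_k = BTU_IN_J / RANKINE_IN_K
print(f"1 BTU/°R ~ {btu_per_rankine_in_j_per_k:.0f} J/K")           # ~1899 J/K

btu_per_lb_rankine = BTU_IN_J / (LB_IN_KG * RANKINE_IN_K)
print(f"1 BTU/(lb·°R) ~ {btu_per_lb_rankine:.0f} J/(kg·K)")         # ~4187 J/(kg·K)
```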
In chemistry, heat amounts are often measured in calories . Confusingly, two units with that name, denoted "cal" or "Cal", have been commonly used to measure amounts of heat: the "small calorie" (or gram-calorie, cal), about 4.184 J, and the "grand calorie" (also kilocalorie or Cal), equal to 1000 small calories, about 4184 J.

With these units of heat energy, the units of heat capacity are 1 cal/°C ≈ 4.184 J/K and 1 kcal/°C (= 1 Cal/°C) ≈ 4184 J/K.
Most physical systems exhibit a positive heat capacity; constant-volume and constant-pressure heat capacities, rigorously defined as partial derivatives, are always positive for homogeneous bodies. [ 7 ] However, even though it can seem paradoxical at first, [ 8 ] [ 9 ] there are some systems for which the heat capacity $Q/\Delta T$ is negative . Examples include a reversibly and nearly adiabatically expanding ideal gas, which cools ($\Delta T < 0$) while a small amount of heat ($Q > 0$) is put in, or combusting methane with increasing temperature ($\Delta T > 0$) while giving off heat ($Q < 0$). Others are inhomogeneous systems that do not meet the strict definition of thermodynamic equilibrium. They include gravitating objects such as stars and galaxies, and also some nano-scale clusters of a few tens of atoms close to a phase transition. [ 10 ] A negative heat capacity can result in a negative temperature .
According to the virial theorem , for a self-gravitating body like a star or an interstellar gas cloud, the average potential energy $U_{\text{pot}}$ and the average kinetic energy $U_{\text{kin}}$ are locked together in the relation

$$U_{\text{pot}} = -2U_{\text{kin}}.$$

The total energy $U$ ($= U_{\text{pot}} + U_{\text{kin}}$) therefore obeys

$$U = -U_{\text{kin}}.$$
If the system loses energy, for example, by radiating energy into space, the average kinetic energy actually increases. If a temperature is defined by the average kinetic energy, then the system therefore can be said to have a negative heat capacity. [ 11 ]
A more extreme version of this occurs with black holes . According to black-hole thermodynamics , the more mass and energy a black hole absorbs, the colder it becomes. In contrast, if it is a net emitter of energy, through Hawking radiation , it will become hotter and hotter until it boils away.
According to the second law of thermodynamics , when two systems with different temperatures interact via a purely thermal connection, heat will flow from the hotter system to the cooler one (this can also be understood from a statistical point of view ). Therefore, if such systems have equal temperatures, they are at thermal equilibrium . However, this equilibrium is stable only if the systems have positive heat capacities. For such systems, when heat flows from a higher-temperature system to a lower-temperature one, the temperature of the first decreases and that of the latter increases, so that both approach equilibrium. In contrast, for systems with negative heat capacities, the temperature of the hotter system will further increase as it loses heat, and that of the colder will further decrease, so that they will move farther from equilibrium. This means that the equilibrium is unstable .
For example, according to theory, the smaller (less massive) a black hole is, the smaller its Schwarzschild radius will be, and therefore the greater the curvature of its event horizon will be, as well as its temperature. Thus, the smaller the black hole, the more thermal radiation it will emit and the more quickly it will evaporate by Hawking radiation . | https://en.wikipedia.org/wiki/Heat_capacity |
The heat capacity rate is heat transfer terminology used in thermodynamics and different forms of engineering denoting the quantity of heat a flowing fluid of a certain mass flow rate is able to absorb or release per unit temperature change per unit time. [ 1 ] [ 2 ] [ 3 ] It is typically denoted as $C$, tabulated from experimentally determined data in various reference works, and is typically stated as a comparison between a hot and a cold fluid, $C_h$ and $C_c$, either graphically or as a linearized equation . It is an important quantity in heat exchanger technology, common to both heating and cooling systems, and to the solution of many real-world problems such as the design of items as different as a microprocessor and an internal combustion engine .
A hot fluid's heat capacity rate can be much greater than, equal to, or much less than the heat capacity rate of the same fluid when cold. In practice, it is most important in specifying heat-exchanger systems, in which one fluid, usually of a dissimilar nature, is used to cool another, such as the hot gases or steam cooled in a power plant by water drawn from a natural source (a case of dissimilar fluids), and in specifying the minimal cooling needs for heat transfer across boundaries, such as in air cooling.
Because a fluid's ability to resist a change in temperature itself changes as heat transfer occurs (changing its net average instantaneous temperature), the heat capacity rate is a quantity of interest in designs that must compensate for the fact that it varies continuously in a dynamic system. [ 4 ] [ 5 ] This variation must be taken into account when designing a system for overall behavior under stimuli or likely environmental conditions , and in particular for the worst-case conditions encountered under the high stresses imposed near the limits of operability, for example an air-cooled engine in a desert climate on a very hot day.
If the hot fluid had a much larger heat capacity rate, then when hot and cold fluids went through a heat exchanger, the hot fluid would show a very small change in temperature while the cold fluid would heat up a significant amount. If the cool fluid has a much lower heat capacity rate, that is desirable. If the two rates were equal, both fluids would change temperature by roughly the same amount, assuming equal mass flow per unit time through the heat exchanger. In practice, a cooling fluid which has both a higher specific heat capacity and a lower heat capacity rate is desirable, accounting for the pervasiveness of water cooling solutions in technology: the polar nature of the water molecule creates distinct molecular-scale behaviors (notably hydrogen bonding) that are favorable in practice.
{\displaystyle C=c_{p}{\frac {dm}{dt}}}
where C = heat capacity rate of the fluid of interest in W⋅K⁻¹ , dm/dt = mass flow rate of the fluid of interest, and c p = specific heat of the fluid of interest. | https://en.wikipedia.org/wiki/Heat_capacity_rate
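As a hedged illustration of the defining formula above, the following sketch evaluates C = c p · dm/dt for a water stream; the flow rate, temperature change, and specific heat are assumed round numbers, not values taken from the article:

```python
# Illustrative heat capacity rate for a flowing water stream (assumed values).

c_p = 4186.0        # specific heat of liquid water, J/(kg*K), approximate
mass_flow = 0.50    # assumed mass flow rate dm/dt, kg/s

C = c_p * mass_flow            # heat capacity rate, W/K
delta_T = 20.0                 # assumed temperature change of the stream, K
duty = C * delta_T             # heat absorbed or released, W

print(f"C = {C:.0f} W/K, duty for a {delta_T:.0f} K change = {duty/1e3:.1f} kW")
```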
In thermal physics and thermodynamics , the heat capacity ratio , also known as the adiabatic index , the ratio of specific heats , or Laplace's coefficient , is the ratio of the heat capacity at constant pressure ( C P ) to heat capacity at constant volume ( C V ). It is sometimes also known as the isentropic expansion factor and is denoted by γ ( gamma ) for an ideal gas [ note 1 ] or κ ( kappa ), the isentropic exponent for a real gas. The symbol γ is used by aerospace and chemical engineers. {\displaystyle \gamma ={\frac {C_{P}}{C_{V}}}={\frac {{\bar {C}}_{P}}{{\bar {C}}_{V}}}={\frac {c_{P}}{c_{V}}},} where C is the heat capacity, C̄ the molar heat capacity (heat capacity per mole), and c the specific heat capacity (heat capacity per unit mass) of a gas. The suffixes P and V refer to constant-pressure and constant-volume conditions respectively.
The heat capacity ratio is important for its applications in thermodynamical reversible processes , especially involving ideal gases ; the speed of sound depends on this factor.
To understand this relation, consider the following thought experiment . A closed pneumatic cylinder contains air. The piston is locked. The pressure inside is equal to atmospheric pressure. This cylinder is heated to a certain target temperature. Since the piston cannot move, the volume is constant. The temperature and pressure will rise. When the target temperature is reached, the heating is stopped. The amount of energy added equals C V Δ T , with Δ T representing the change in temperature.
The piston is now freed and moves outwards, stopping as the pressure inside the chamber reaches atmospheric pressure. We assume the expansion occurs without exchange of heat ( adiabatic expansion ). Doing this work , air inside the cylinder will cool to below the target temperature.
To return to the target temperature (still with a free piston), the air must be heated, but is no longer under constant volume, since the piston is free to move as the gas is reheated. This extra heat amounts to about 40% more than the previous amount added. In this example, the amount of heat added with a locked piston is proportional to C V , whereas the total amount of heat added is proportional to C P . Therefore, the heat capacity ratio in this example is 1.4.
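The roughly 40% figure follows directly from treating the air as a diatomic ideal gas; the sketch below makes that explicit with an assumed amount of gas and temperature rise (Mayer's relation C P = C V + R is used here, anticipating the discussion of ideal gases below):

```python
# Locked piston vs. free piston: heat needed for the same temperature rise
# (assumed: 1 mol of a diatomic ideal gas such as air, 50 K rise).

R = 8.314            # gas constant, J/(mol*K)
n = 1.0              # assumed amount of gas, mol
delta_T = 50.0       # assumed temperature rise, K

C_V = 2.5 * R        # molar heat capacity at constant volume (diatomic, rigid rotor)
C_P = C_V + R        # Mayer's relation

Q_locked = n * C_V * delta_T   # piston locked: constant volume
Q_free = n * C_P * delta_T     # piston free: constant pressure

print(f"Q at constant volume:   {Q_locked:.0f} J")
print(f"Q at constant pressure: {Q_free:.0f} J")
print(f"Ratio Q_P/Q_V = {Q_free / Q_locked:.2f}")   # ~1.40, i.e. about 40% more heat
```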
Another way of understanding the difference between C P and C V is that C P applies if work is done to the system, which causes a change in volume (such as by moving a piston so as to compress the contents of a cylinder), or if work is done by the system, which changes its temperature (such as heating the gas in a cylinder to cause a piston to move). C V applies only if P d V = 0 {\displaystyle P\,\mathrm {d} V=0} , that is, no work is done. Consider the difference between adding heat to the gas with a locked piston and adding heat with a piston free to move, so that pressure remains constant.
In the second case, the gas will both heat and expand, causing the piston to do mechanical work on the atmosphere. The heat that is added to the gas goes only partly into heating the gas, while the rest is transformed into the mechanical work performed by the piston.
In the first, constant-volume case (locked piston), there is no external motion, and thus no mechanical work is done on the atmosphere; C V is used. In the second case, additional work is done as the volume changes, so the amount of heat required to raise the gas temperature (the specific heat capacity) is higher for this constant-pressure case.
For an ideal gas, the molar heat capacity is at most a function of temperature, since the internal energy is solely a function of temperature for a closed system , i.e., U = U ( n , T ) {\displaystyle U=U(n,T)} , where n is the amount of substance in moles. In thermodynamic terms, this is a consequence of the fact that the internal pressure of an ideal gas vanishes.
Mayer's relation allows us to deduce the value of C V from the more easily measured (and more commonly tabulated) value of C P : {\displaystyle C_{V}=C_{P}-nR.}
This relation may be used to show that the heat capacities may be expressed in terms of the heat capacity ratio ( γ ) and the gas constant ( R ): {\displaystyle C_{P}={\frac {\gamma nR}{\gamma -1}}\quad {\text{and}}\quad C_{V}={\frac {nR}{\gamma -1}}.}
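A short sketch, assuming a diatomic ideal gas (γ = 1.4) and one mole of substance, recovers C P and C V from these relations and checks Mayer's relation:

```python
# Recover C_P and C_V from gamma and nR, then verify Mayer's relation.

R = 8.314       # gas constant, J/(mol*K)
n = 1.0         # assumed amount of substance, mol
gamma = 1.4     # assumed heat capacity ratio (diatomic ideal gas)

C_P = gamma * n * R / (gamma - 1)   # about 29.1 J/K
C_V = n * R / (gamma - 1)           # about 20.8 J/K

assert abs((C_P - C_V) - n * R) < 1e-9   # Mayer's relation C_P - C_V = nR
print(f"C_P = {C_P:.2f} J/K, C_V = {C_V:.2f} J/K")
```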
The classical equipartition theorem predicts that the heat capacity ratio ( γ ) for an ideal gas can be related to the thermally accessible degrees of freedom ( f ) of a molecule by {\displaystyle \gamma =1+{\frac {2}{f}},\quad {\text{or}}\quad f={\frac {2}{\gamma -1}}.}
Thus we observe that for a monatomic gas, with 3 translational degrees of freedom per atom: {\displaystyle \gamma ={\frac {5}{3}}=1.6666\ldots .}
As an example of this behavior, at 273 K (0 °C) the noble gases He, Ne, and Ar all have nearly the same value of γ , equal to 1.664.
For a diatomic gas, often 5 degrees of freedom are assumed to contribute at room temperature since each molecule has 3 translational and 2 rotational degrees of freedom , and the single vibrational degree of freedom is often not included since vibrations are often not thermally active except at high temperatures, as predicted by quantum statistical mechanics . Thus we have {\displaystyle \gamma ={\frac {7}{5}}=1.4.}
For example, terrestrial air is primarily made up of diatomic gases (around 78% nitrogen , N 2 , and 21% oxygen , O 2 ), and at standard conditions it can be considered to be an ideal gas. The above value of 1.4 is highly consistent with the measured adiabatic indices for dry air within a temperature range of 0–200 °C, exhibiting a deviation of only 0.2% (see tabulation above).
For a linear triatomic molecule such as CO 2 , there are only 5 degrees of freedom (3 translations and 2 rotations), assuming vibrational modes are not excited. However, as mass increases and the frequency of vibrational modes decreases, vibrational degrees of freedom start to enter into the equation at far lower temperatures than is typically the case for diatomic molecules. For example, it requires a far larger temperature to excite the single vibrational mode for H 2 , for which one quantum of vibration is a fairly large amount of energy, than for the bending or stretching vibrations of CO 2 .
For a non-linear triatomic gas, such as water vapor, which has 3 translational and 3 rotational degrees of freedom, this model predicts {\displaystyle \gamma ={\frac {8}{6}}=1.3333\ldots .}
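The three cases above follow from the same equipartition formula; a short sketch evaluating γ = 1 + 2/f for f = 3, 5, and 6 reproduces the quoted values:

```python
# Equipartition estimate of the heat capacity ratio for several molecule types.

def gamma_from_dof(f):
    """Heat capacity ratio gamma = 1 + 2/f predicted by classical equipartition."""
    return 1.0 + 2.0 / f

for label, f in [("monatomic (f = 3)", 3),
                 ("diatomic, rigid rotor (f = 5)", 5),
                 ("non-linear triatomic (f = 6)", 6)]:
    print(f"{label}: gamma = {gamma_from_dof(f):.4f}")
# Expected output: 1.6667, 1.4000, 1.3333
```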
As noted above, as temperature increases, higher-energy vibrational states become accessible to molecular gases, thus increasing the number of degrees of freedom and lowering γ . Conversely, as the temperature is lowered, rotational degrees of freedom may become unequally partitioned as well. As a result, both C P and C V increase with increasing temperature.
Despite this, if the density is fairly low and intermolecular forces are negligible, the two heat capacities may still continue to differ from each other by a fixed constant (as above, C P = C V + nR ), which reflects the relatively constant PV difference in work done during expansion for constant pressure vs. constant volume conditions. Thus, the ratio of the two values, γ , decreases with increasing temperature.
However, when the gas density is sufficiently high and intermolecular forces are important, thermodynamic expressions may sometimes be used to accurately describe the relationship between the two heat capacities, as explained below. Unfortunately the situation can become considerably more complex if the temperature is sufficiently high for molecules to dissociate or carry out other chemical reactions , in which case thermodynamic expressions arising from simple equations of state may not be adequate.
Values based on approximations (particularly C P − C V = nR ) are in many cases not sufficiently accurate for practical engineering calculations, such as flow rates through pipes and valves at moderate to high pressures. An experimental value should be used rather than one based on this approximation, where possible. A rigorous value for the ratio C P / C V can also be calculated by determining C V from the residual properties expressed as {\displaystyle C_{P}-C_{V}=-T{\frac {\left({\frac {\partial V}{\partial T}}\right)_{P}^{2}}{\left({\frac {\partial V}{\partial P}}\right)_{T}}}=-T{\frac {\left({\frac {\partial P}{\partial T}}\right)_{V}^{2}}{\left({\frac {\partial P}{\partial V}}\right)_{T}}}.}
Values for C P are readily available and recorded, but values for C V need to be determined via relations such as these. See relations between specific heats for the derivation of the thermodynamic relations between the heat capacities.
The above definition is the approach used to develop rigorous expressions from equations of state (such as Peng–Robinson ), which match experimental values so closely that there is little need to develop a database of ratios or C V values. Values can also be determined through finite-difference approximation .
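As a hedged illustration of the finite-difference route, the sketch below evaluates the residual-property expression for C P − C V using the ideal-gas equation of state purely as a stand-in for a real cubic equation of state such as Peng–Robinson; with that substitution the result should simply return nR:

```python
# Finite-difference evaluation of C_P - C_V = -T (dV/dT)_P**2 / (dV/dP)_T,
# using the ideal-gas V(T, P) = nRT/P as an assumed, illustrative equation of state.

R = 8.314
n = 1.0

def volume(T, P):
    return n * R * T / P        # swap in a real equation of state here

T, P = 300.0, 1.0e5             # evaluation point: 300 K, 1 bar
dT, dP = 1e-3, 1.0              # central-difference step sizes

dVdT_P = (volume(T + dT, P) - volume(T - dT, P)) / (2 * dT)
dVdP_T = (volume(T, P + dP) - volume(T, P - dP)) / (2 * dP)

cp_minus_cv = -T * dVdT_P**2 / dVdP_T
print(f"C_P - C_V = {cp_minus_cv:.4f} J/K (ideal-gas expectation: {n*R:.4f})")
```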
This ratio gives the important relation for an isentropic ( quasistatic , reversible , adiabatic process ) process of a simple compressible calorically-perfect ideal gas : {\displaystyle PV^{\gamma }={\text{constant}}.}
Using the ideal gas law, {\displaystyle PV=nRT} , this is equivalent to {\displaystyle TV^{\gamma -1}={\text{constant}}\quad {\text{and}}\quad P^{1-\gamma }T^{\gamma }={\text{constant}},}
where P is the pressure of the gas, V is the volume, and T is the thermodynamic temperature .
In gas dynamics we are interested in the local relations between pressure, density and temperature, rather than considering a fixed quantity of gas. By considering the density ρ = M / V {\displaystyle \rho =M/V} as the inverse of the volume for a unit mass, we can take ρ = 1 / V {\displaystyle \rho =1/V} in these relations.
Since for constant entropy, S , we have {\displaystyle P\propto \rho ^{\gamma }} , or {\displaystyle \ln P=\gamma \ln \rho +\mathrm {constant} } , it follows that {\displaystyle \gamma =\left.{\frac {\partial \ln P}{\partial \ln \rho }}\right|_{S}.}
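Two consequences of this relation can be illustrated numerically; the sketch below assumes dry air with γ ≈ 1.4 and round-number initial conditions, and uses the standard ideal-gas expression for the speed of sound, a = √(γ R T):

```python
# Isentropic compression of air (P proportional to rho**gamma) and the speed of sound.

gamma = 1.4            # assumed heat capacity ratio of dry air
R_specific = 287.05    # specific gas constant of dry air, J/(kg*K)

# Assumed initial state: 1 bar, 1.2 kg/m^3; compress isentropically to twice the density.
P1, rho1 = 1.0e5, 1.2
rho2 = 2.0 * rho1
P2 = P1 * (rho2 / rho1) ** gamma
print(f"Pressure after doubling the density: {P2 / 1e5:.2f} bar")   # ~2.64 bar

T = 293.15                                  # assumed temperature, K
a = (gamma * R_specific * T) ** 0.5
print(f"Speed of sound at {T:.0f} K: {a:.0f} m/s")                  # ~343 m/s
```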
For an imperfect or non-ideal gas, Chandrasekhar [ 3 ] defined three different adiabatic indices so that the adiabatic relations can be written in the same form as above; these are used in the theory of stellar structure : {\displaystyle {\begin{aligned}\Gamma _{1}&=\left.{\frac {\partial \ln P}{\partial \ln \rho }}\right|_{S},\\[2pt]{\frac {\Gamma _{2}-1}{\Gamma _{2}}}&=\left.{\frac {\partial \ln T}{\partial \ln P}}\right|_{S},\\[2pt]\Gamma _{3}-1&=\left.{\frac {\partial \ln T}{\partial \ln \rho }}\right|_{S}.\end{aligned}}}
All of these are equal to γ {\displaystyle \gamma } in the case of an ideal gas. | https://en.wikipedia.org/wiki/Heat_capacity_ratio |
Heat cramps , a type of heat illness , are muscle spasms that result from the loss of large amounts of salt and water through exercise. Heat cramps are associated with cramping in the abdomen , arms and calves . This can be caused by inadequate consumption of fluids or electrolytes . [ 1 ] Heavy sweating causes heat cramps, especially when the water is replaced without also replacing salt or potassium . [ 2 ]
Although heat cramps can be quite painful, they usually don't result in permanent damage, though they can be a symptom of heat stroke or heat exhaustion . Heat cramps can indicate a more severe problem in someone with heart disease or if they last for longer than an hour. [ 2 ]
In order to prevent them, one may drink electrolyte solutions such as sports drinks during exercise or strenuous work or eat potassium-rich foods like bananas and apples . When heat cramps occur, the affected person should avoid strenuous work and exercise for several hours to allow for recovery. [ 2 ]
This medical diagnostic article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Heat_cramps |
The heat death paradox , also known as thermodynamic paradox , Clausius' paradox, and Kelvin's paradox , [ 1 ] is a reductio ad absurdum argument that uses thermodynamics to show the impossibility of an infinitely old universe. It was formulated in February 1862 by Lord Kelvin and expanded upon by Hermann von Helmholtz and William John Macquorn Rankine . [ 2 ] [ 3 ]
Assuming that the universe is eternal, a question arises: How is it that thermodynamic equilibrium has not already been achieved? [ 4 ]
This theoretical paradox is directed at the then-mainstream belief in a classical view of a sempiternal universe, in which matter is postulated to be everlasting and the universe to have always been recognisably the universe. The heat death paradox arises from a paradigm built on these fundamental ideas about the cosmos, and it is necessary to change the paradigm to resolve the paradox.
The paradox was based upon the rigid mechanical point of view of the second law of thermodynamics postulated by Rudolf Clausius and Lord Kelvin , according to which heat can only be transferred from a warmer to a colder object. It notes: if the universe were eternal, as claimed classically, it should already be cold and isotropic (its objects should have the same temperature, and the distribution of matter or radiation should be even). [ 4 ] Kelvin compared the universe to a clock that runs slower and slower, constantly dissipating energy as impalpable heat , although he was unsure whether it would eventually stop forever (reach thermodynamic equilibrium). According to this model, the existence of usable energy, which can be used to perform work and produce entropy, means that the clock has not stopped, since a conversion of heat into mechanical energy (which Kelvin called a rejuvenating universe scenario) is not contemplated. [ 5 ] [ 2 ]
According to the laws of thermodynamics, any hot object transfers heat to its cooler surroundings, until everything is at the same temperature . For two objects at the same temperature, as much heat flows from one body as flows from the other, and the net effect is no change. If the universe were infinitely old, there must have been enough time for the stars to cool and warm their surroundings. Everywhere should therefore be at the same temperature and there should either be no stars, or everything should be as hot as stars. The universe should thus achieve, or asymptotically tend to, thermodynamic equilibrium, which corresponds to a state where no thermodynamic free energy is left, and therefore no further work is possible: this is the heat death of the universe, as predicted by Lord Kelvin in 1852. The average temperature of the cosmos should also asymptotically approach absolute zero , and it is possible that a maximum entropy state will be reached. [ 6 ]
In February 1862, Lord Kelvin used the existence of the Sun and the stars as an empirical proof that the universe has not achieved thermodynamic equilibrium , as entropy production and free work are still possible, and there are temperature differences between objects. Helmholtz and Rankine expanded Kelvin's work soon after. [ 2 ] Since there are stars and colder objects, the universe is not in thermodynamic equilibrium, so it cannot be infinitely old.
The paradox does not arise in the Big Bang or its successful Lambda-CDM refinement, which posit that the universe began roughly 13.8 billion years ago, not long enough ago for the universe to have approached thermodynamic equilibrium. Some proposed further refinements, termed eternal inflation , restore Kelvin's idea of unending time in the more complicated form of an eternal, exponentially-expanding multiverse in which mutually-inaccessible baby universes, some of which resemble the universe we inhabit, are continually being born.
Olbers' paradox is another paradox which aims to disprove an infinitely old static universe, but it applies only to a static universe scenario. Also, unlike Kelvin's paradox, it relies on cosmology rather than thermodynamics. The Boltzmann brain paradox can also be related to Kelvin's, as it focuses on the spontaneous generation of a brain (filled with false memories) from entropy fluctuations , in a universe which has been lying in a heat death state for an indefinite amount of time. [ 7 ] | https://en.wikipedia.org/wiki/Heat_death_paradox
Heat deflection temperature or heat distortion temperature ( DTUL , HDT , or HDTUL ) is the temperature at which a polymer or plastic sample deforms under a specified load. [ 1 ] The HDT of a given plastic material is applied in many aspects to the design, engineering, and manufacturing of products which use thermoplastic components.
The heat distortion temperature is determined by the following test procedure outlined in ASTM D648. The test specimen is loaded in three-point bending in the edgewise direction. The outer fiber stress used for testing is either 0.455 MPa or 1.82 MPa, and the temperature is increased at 2 °C/min until the specimen deflects 0.25 mm. This is similar to the test procedure defined in the ISO 75 standard.
A limitation associated with the determination of the HDT is that the sample is not thermally isotropic and, in thick samples in particular, will contain a temperature gradient. The HDT of a particular material can also be very sensitive to the stress experienced by the component, which depends on the component's dimensions. The specified deflection of 0.25 mm (corresponding to 0.2% additional strain) is arbitrary and has no particular physical significance.
An injection molded plastic part is considered "safe" to remove from its mold once it is near or below the HDT. This means that part deformation will be held within acceptable limits after removal. The molding of plastics by necessity occurs at high temperatures (routinely 200 °C or higher), since the melt must reach a sufficiently low viscosity to flow (this issue can be addressed to some extent by the addition of plasticizers to the melt, which is a secondary function of a plasticizer). Once plastic is in the mold, it must be cooled to a temperature at which little or no dimensional change will occur after removal. In general, plastics do not conduct heat well and so will take quite a while to cool to room temperature. One way to mitigate this is to use a cold mold (thereby increasing heat loss from the part). Even so, the cooling of the part to room temperature can limit the mass production of parts.
Choosing a resin with a higher heat deflection temperature (and therefore closer to melting temperature) can allow manufacturers to achieve a much faster molding process than they would otherwise while maintaining dimensional changes within certain limits. | https://en.wikipedia.org/wiki/Heat_deflection_temperature |
A heat engine is a system that transfers thermal energy to do mechanical or electrical work . [ 1 ] [ 2 ] While originally conceived in the context of mechanical energy, the concept of the heat engine has been applied to various other kinds of energy, particularly electrical , since at least the late 19th century. [ 3 ] [ 4 ] The heat engine does this by bringing a working substance from a higher temperature state to a lower temperature state. A heat source generates thermal energy that brings the working substance to the higher temperature state. The working substance generates work in the working body of the engine while transferring heat to the colder sink until it reaches a lower temperature state. During this process some of the thermal energy is converted into work by exploiting the properties of the working substance. The working substance can be any system with a non-zero heat capacity , but it usually is a gas or liquid. During this process, some heat is normally lost to the surroundings and is not converted to work. Also, some energy is unusable because of friction and drag .
In general, an engine is any machine that converts energy to mechanical work . Heat engines distinguish themselves from other types of engines by the fact that their efficiency is fundamentally limited by Carnot's theorem of thermodynamics . [ 5 ] Although this efficiency limitation can be a drawback, an advantage of heat engines is that most forms of energy can be easily converted to heat by processes like exothermic reactions (such as combustion), nuclear fission , absorption of light or energetic particles, friction , dissipation and resistance . Since the heat source that supplies thermal energy to the engine can thus be powered by virtually any kind of energy, heat engines cover a wide range of applications.
Heat engines are often confused with the cycles they attempt to implement. Typically, the term "engine" is used for a physical device and "cycle" for the models.
In thermodynamics , heat engines are often modeled using a standard engineering model such as the Otto cycle . The theoretical model can be refined and augmented with actual data from an operating engine, using tools such as an indicator diagram . Since very few actual implementations of heat engines exactly match their underlying thermodynamic cycles, one could say that a thermodynamic cycle is an ideal case of a mechanical engine. In any case, fully understanding an engine and its efficiency requires a good understanding of the (possibly simplified or idealised) theoretical model, the practical nuances of an actual mechanical engine and the discrepancies between the two.
In general terms, the larger the difference in temperature between the hot source and the cold sink, the larger is the potential thermal efficiency of the cycle. On Earth, the cold side of any heat engine is limited to being close to the ambient temperature of the environment, or not much lower than 300 kelvin , so most efforts to improve the thermodynamic efficiencies of various heat engines focus on increasing the temperature of the source, within material limits. The maximum theoretical efficiency of a heat engine (which no engine ever attains) is equal to the temperature difference between the hot and cold ends divided by the temperature at the hot end, each expressed in absolute temperature .
The efficiency of various heat engines proposed or used today has a large range:
The efficiency of these processes is roughly proportional to the temperature drop across them. Significant energy may be consumed by auxiliary equipment, such as pumps, which effectively reduces efficiency.
Although some cycles have a typical combustion location (internal or external), they can often be implemented with the other. For example, John Ericsson [ 9 ] developed an externally heated engine running on a cycle very much like the earlier Diesel cycle . In addition, externally heated engines can often be implemented in open or closed cycles. In a closed cycle the working fluid is retained within the engine at the completion of the cycle, whereas in an open cycle the working fluid is either exchanged with the environment together with the products of combustion, in the case of the internal combustion engine, or simply vented to the environment, in the case of external combustion engines like steam engines and turbines .
Everyday examples of heat engines include the thermal power station , internal combustion engine , firearms , refrigerators and heat pumps . Power stations are examples of heat engines run in a forward direction in which heat flows from a hot reservoir and flows into a cool reservoir to produce work as the desired product. Refrigerators, air conditioners and heat pumps are examples of heat engines that are run in reverse, i.e. they use work to take heat energy at a low temperature and raise its temperature in a more efficient way than the simple conversion of work into heat (either through friction or electrical resistance). Refrigerators remove heat from within a thermally sealed chamber at low temperature and vent waste heat at a higher temperature to the environment and heat pumps take heat from the low temperature environment and 'vent' it into a thermally sealed chamber (a house) at higher temperature.
In general heat engines exploit the thermal properties associated with the expansion and compression of gases according to the gas laws or the properties associated with phase changes between gas and liquid states.
Earth's atmosphere and hydrosphere —Earth's heat engine—are coupled processes that constantly even out solar heating imbalances through evaporation of surface water, convection, rainfall, winds and ocean circulation, when distributing heat around the globe. [ 10 ]
A Hadley cell is an example of a heat engine. It involves the rising of warm and moist air in the earth's equatorial region and the descent of colder air in the subtropics creating a thermally driven direct circulation, with consequent net production of kinetic energy. [ 11 ]
In phase change cycles and engines, the working fluids are gases and liquids. The engine converts the working fluid from a gas to a liquid, from liquid to gas, or both, generating work from the fluid expansion or compression.
In these cycles and engines the working fluid is always a gas (i.e., there is no phase change):
In these cycles and engines the working fluid is always a liquid:
A domestic refrigerator is an example of a heat pump : a heat engine in reverse. Work is used to create a heat differential. Many cycles can run in reverse to move heat from the cold side to the hot side, making the cold side cooler and the hot side hotter. Internal combustion engine versions of these cycles are, by their nature, not reversible.
Refrigeration cycles include:
The Barton evaporation engine is a heat engine based on a cycle producing power and cooled moist air from the evaporation of water into hot dry air.
Mesoscopic heat engines are nanoscale devices that may serve the goal of processing heat fluxes and performing useful work at small scales. Potential applications include, for example, electric cooling devices. In such mesoscopic heat engines, the work per cycle of operation fluctuates due to thermal noise. There is an exact equality that relates the average of exponentials of the work performed by any heat engine to the heat transfer from the hotter heat bath. [ 13 ] This relation transforms Carnot's inequality into an exact equality, one that also holds for the Carnot cycle.
The efficiency of a heat engine relates how much useful work is output for a given amount of heat energy input.
From the laws of thermodynamics , after a completed cycle the net work output equals the net heat taken in: [ 14 ] {\displaystyle W=Q_{h}+Q_{c},} where Q h > 0 is the heat absorbed from the high-temperature source and Q c < 0 is the waste heat given off to the cold sink.
In other words, a heat engine absorbs heat energy from the high temperature heat source, converting part of it to useful work and giving off the rest as waste heat to the cold temperature heat sink.
In general, the efficiency of a given heat transfer process is defined by the ratio of "what is taken out" to "what is put in". (For a refrigerator or heat pump, which can be considered as a heat engine run in reverse, this is the coefficient of performance and it is ≥ 1.) In the case of an engine, one desires to extract work and has to put in heat Q h {\displaystyle Q_{h}} , for instance from combustion of a fuel, so the engine efficiency is reasonably defined as {\displaystyle \eta ={\frac {W}{Q_{h}}}={\frac {Q_{h}+Q_{c}}{Q_{h}}}=1+{\frac {Q_{c}}{Q_{h}}}.}
The efficiency is less than 100% because of the waste heat Q c < 0 {\displaystyle Q_{c}<0} unavoidably lost to the cold sink (and corresponding compression work put in) during the required recompression at the cold temperature before the power stroke of the engine can occur again.
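With the sign convention used here (Q h > 0 absorbed, Q c < 0 rejected), the definition can be evaluated directly; the heat values in the sketch below are assumed, illustrative numbers rather than data from the article:

```python
# Engine efficiency from the heats exchanged per cycle (assumed values).

Q_h = 1000.0    # heat absorbed from the hot source per cycle, J
Q_c = -600.0    # waste heat given to the cold sink per cycle, J (negative by convention)

W = Q_h + Q_c                  # net work output per cycle
eta = W / Q_h                  # efficiency, equivalently 1 + Q_c/Q_h
print(f"Work per cycle: {W:.0f} J, efficiency: {eta:.0%}")   # 400 J, 40%
```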
The theoretical maximum efficiency of any heat engine depends only on the temperatures it operates between. This efficiency is usually derived using an ideal imaginary heat engine such as the Carnot heat engine , although other engines using different cycles can also attain maximum efficiency. Mathematically, after a full cycle, the overall change of entropy is zero:
{\displaystyle \Delta S_{h}+\Delta S_{c}=\Delta _{cycle}S=0}
Note that Δ S h {\displaystyle \Delta S_{h}} is positive because isothermal expansion in the power stroke increases the multiplicity of the working fluid while Δ S c {\displaystyle \Delta S_{c}} is negative since recompression decreases the multiplicity. If the engine is ideal and runs reversibly , Q h = T h Δ S h {\displaystyle Q_{h}=T_{h}\Delta S_{h}} and Q c = T c Δ S c {\displaystyle Q_{c}=T_{c}\Delta S_{c}} , and thus [ 15 ] [ 14 ]
{\displaystyle Q_{h}/T_{h}+Q_{c}/T_{c}=0,}
which gives {\displaystyle Q_{c}/Q_{h}=-T_{c}/T_{h}} and thus the Carnot limit for heat-engine efficiency, {\displaystyle \eta _{\text{max}}=1-{\frac {T_{c}}{T_{h}}},}
where T h {\displaystyle T_{h}} is the absolute temperature of the hot source and T c {\displaystyle T_{c}} that of the cold sink, usually measured in kelvins .
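A short sketch comparing an assumed measured efficiency against the Carnot limit for assumed reservoir temperatures (both numbers are illustrative, not data from the article):

```python
# Carnot limit versus an assumed actual efficiency.

T_h = 800.0    # assumed hot-source temperature, K
T_c = 300.0    # assumed cold-sink temperature, K

eta_carnot = 1.0 - T_c / T_h    # 0.625
eta_actual = 0.40               # assumed measured efficiency of a real engine

print(f"Carnot limit:          {eta_carnot:.1%}")
print(f"Assumed actual:        {eta_actual:.1%}")
print(f"Fraction of the limit: {eta_actual / eta_carnot:.1%}")
```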
The reasoning behind this being the maximal efficiency goes as follows. It is first assumed that if a more efficient heat engine than a Carnot engine is possible, then it could be driven in reverse as a heat pump. Mathematical analysis can be used to show that this assumed combination would result in a net decrease in entropy . Since, by the second law of thermodynamics , this is statistically improbable to the point of exclusion, the Carnot efficiency is a theoretical upper bound on the reliable efficiency of any thermodynamic cycle.
Empirically, no heat engine has ever been shown to run at a greater efficiency than a Carnot cycle heat engine.
Figure 2 and Figure 3 show variations on Carnot cycle efficiency with temperature. Figure 2 indicates how efficiency changes with an increase in the heat addition temperature for a constant compressor inlet temperature. Figure 3 indicates how the efficiency changes with an increase in the heat rejection temperature for a constant turbine inlet temperature.
By its nature, any maximally efficient Carnot cycle must operate at an infinitesimal temperature gradient; this is because any transfer of heat between two bodies of differing temperatures is irreversible, therefore the Carnot efficiency expression applies only to the infinitesimal limit. The major problem is that the objective of most heat-engines is to output power, and infinitesimal power is seldom desired.
A different measure of ideal heat-engine efficiency is given by considerations of endoreversible thermodynamics , where the system is broken into reversible subsystems, but with non reversible interactions between them. A classical example is the Curzon–Ahlborn engine, [ 16 ] very similar to a Carnot engine, but where the thermal reservoirs at temperature T h {\displaystyle T_{h}} and T c {\displaystyle T_{c}} are allowed to be different from the temperatures of the substance going through the reversible Carnot cycle: T h ′ {\displaystyle T'_{h}} and T c ′ {\displaystyle T'_{c}} . The heat transfers between the reservoirs and the substance are considered as conductive (and irreversible) in the form d Q h , c / d t = α ( T h , c − T h , c ′ ) {\displaystyle dQ_{h,c}/dt=\alpha (T_{h,c}-T'_{h,c})} . In this case, a tradeoff has to be made between power output and efficiency. If the engine is operated very slowly, the heat flux is low, T ≈ T ′ {\displaystyle T\approx T'} and the classical Carnot result is found
but at the price of a vanishing power output. If instead one chooses to operate the engine at its maximum output power, the efficiency becomes {\displaystyle \eta =1-{\sqrt {\frac {T_{c}}{T_{h}}}}.}
This model does a better job of predicting how well real-world heat-engines can do (Callen 1985, see also endoreversible thermodynamics ):
As shown, the Curzon–Ahlborn efficiency much more closely models that observed.
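The contrast between the two bounds can be seen with a quick sketch; the reservoir temperatures below are illustrative assumptions rather than the plant data tabulated by Callen:

```python
# Carnot limit versus the Curzon-Ahlborn efficiency at maximum power.

from math import sqrt

cases = [("illustrative steam plant", 300.0, 840.0),    # (label, T_c, T_h) in K, assumed
         ("illustrative gas turbine", 300.0, 1400.0)]

for label, T_c, T_h in cases:
    eta_carnot = 1.0 - T_c / T_h
    eta_ca = 1.0 - sqrt(T_c / T_h)          # Curzon-Ahlborn efficiency
    print(f"{label}: Carnot {eta_carnot:.1%}, Curzon-Ahlborn {eta_ca:.1%}")
```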
Heat engines have been known since antiquity but were only made into useful devices at the time of the industrial revolution in the 18th century. They continue to be developed today.
Engineers have studied the various heat-engine cycles to improve the amount of usable work they could extract from a given power source. The Carnot cycle limit cannot be reached with any gas-based cycle, but engineers have found at least two ways to bypass that limit and one way to get better efficiency without bending any rules:
Each process is one of the following: isothermal (at constant temperature), isobaric (at constant pressure), isochoric (at constant volume), adiabatic (no heat is added to or removed from the system), or isentropic (reversible adiabatic). | https://en.wikipedia.org/wiki/Heat_engine
A heat exchanger is a system used to transfer heat between a source and a working fluid . Heat exchangers are used in both cooling and heating processes. [ 1 ] The fluids may be separated by a solid wall to prevent mixing or they may be in direct contact. [ 2 ] They are widely used in space heating , refrigeration , air conditioning , power stations , chemical plants , petrochemical plants , petroleum refineries , natural-gas processing , and sewage treatment . The classic example of a heat exchanger is found in an internal combustion engine in which a circulating fluid known as engine coolant flows through radiator coils and air flows past the coils, which cools the coolant and heats the incoming air . Another example is the heat sink , which is a passive heat exchanger that transfers the heat generated by an electronic or a mechanical device to a fluid medium, often air or a liquid coolant. [ 3 ]
There are three primary classifications of heat exchangers according to their flow arrangement . In parallel-flow heat exchangers, the two fluids enter the exchanger at the same end, and travel in parallel to one another to the other side. In counter-flow heat exchangers the fluids enter the exchanger from opposite ends. The counter-current design is the most efficient, in that it can transfer the most heat from the heat (transfer) medium per unit mass because the average temperature difference along any unit length is higher . See countercurrent exchange . In a cross-flow heat exchanger, the fluids travel roughly perpendicular to one another through the exchanger.
For efficiency, heat exchangers are designed to maximize the surface area of the wall between the two fluids, while minimizing resistance to fluid flow through the exchanger. The exchanger's performance can also be affected by the addition of fins or corrugations in one or both directions, which increase surface area and may channel fluid flow or induce turbulence.
The driving temperature across the heat transfer surface varies with position, but an appropriate mean temperature can be defined. In most simple systems this is the " log mean temperature difference " (LMTD). Sometimes direct knowledge of the LMTD is not available and the NTU method is used.
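A hedged sketch of the LMTD calculation for a counter-flow exchanger, with assumed terminal temperatures and an assumed overall heat transfer coefficient and area (the duty then follows from Q = U·A·LMTD):

```python
# Log mean temperature difference for a counter-flow exchanger (assumed temperatures).

from math import log

# Assumed terminal temperatures (deg C): hot stream 80 -> 50, cold stream 20 -> 40.
dT1 = 80.0 - 40.0    # hot inlet minus cold outlet
dT2 = 50.0 - 20.0    # hot outlet minus cold inlet

lmtd = (dT1 - dT2) / log(dT1 / dT2)

U = 500.0   # assumed overall heat transfer coefficient, W/(m^2*K)
A = 10.0    # assumed heat transfer area, m^2
print(f"LMTD = {lmtd:.1f} K, duty Q = U*A*LMTD = {U * A * lmtd / 1e3:.0f} kW")
```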
By maximum operating temperature, heat exchangers can be divided into low-temperature and high-temperature ones. The former work up to 500–650°C depending on the industry and generally don't require special design and material considerations. The latter work up to 1000 or even 1400°C. [ 4 ] [ 5 ] [ 6 ]
Double pipe heat exchangers are the simplest exchangers used in industries. On one hand, these heat exchangers are cheap for both design and maintenance, making them a good choice for small industries. On the other hand, their low efficiency, coupled with the large amount of space they occupy at large scales, has led modern industries to use more efficient heat exchangers like shell and tube or plate. However, since double pipe heat exchangers are simple, they are used to teach heat exchanger design basics to students, as the fundamental rules for all heat exchangers are the same.
1. Double-pipe heat exchanger
When one fluid flows through the smaller pipe, the other flows through the annular gap between the two pipes. These flows may be parallel or counter-flows in a double pipe heat exchanger.
(a) Parallel flow, where both hot and cold liquids enter the heat exchanger from the same side, flow in the same direction and exit at the same end. This configuration is preferable when the two fluids are intended to reach exactly the same temperature, as it reduces thermal stress and produces a more uniform rate of heat transfer.
(b) Counter-flow, where hot and cold fluids enter opposite sides of the heat exchanger, flow in opposite directions, and exit at opposite ends. This configuration is preferable when the objective is to maximize heat transfer between the fluids, as it creates a larger temperature differential when used under otherwise similar conditions. [ citation needed ]
The figure above illustrates the parallel and counter-flow flow directions of the fluid exchanger.
2. Shell-and-tube heat exchanger
In a shell-and-tube heat exchanger, two fluids at different temperatures flow through the heat exchanger. One of the fluids flows through the tube side and the other fluid flows outside the tubes, but inside the shell (shell side).
Baffles are used to support the tubes, direct the fluid flow across the tubes in an approximately perpendicular manner, and maximize the turbulence of the shell fluid. There are many kinds of baffles, and the choice of baffle form, spacing, and geometry depends on the allowable shell-side pressure drop, the need for tube support, and flow-induced vibrations. There are several variations of shell-and-tube exchangers available; the differences lie in the arrangement of flow configurations and details of construction.
In application to cool air with shell-and-tube technology (such as intercooler / charge air cooler for combustion engines ), fins can be added on the tubes to increase heat transfer area on air side and create a tubes & fins configuration.
3. Plate Heat Exchanger
A plate heat exchanger contains a number of thin, shaped heat transfer plates bundled together. The gasket arrangement of each pair of plates provides two separate channel systems. Each pair of plates forms a channel through which the fluid can flow. The pairs are attached by welding and bolting methods. The following shows the components in the heat exchanger.
In single channels the configuration of the gaskets enables flow through, allowing the main and secondary media to flow in counter-current directions. A gasketed plate heat exchanger has a heat transfer region formed from corrugated plates. The gaskets function as seals between the plates and are located between the frame and pressure plates. Fluid flows in a counter-current direction throughout the heat exchanger, producing efficient thermal performance. Plates are produced in different depths, sizes and corrugated shapes. Different types are available, including plate-and-frame, plate-and-shell and spiral plate heat exchangers. The distribution area guarantees the flow of fluid to the whole heat transfer surface, which helps to prevent stagnant areas that can cause accumulation of unwanted material on solid surfaces. High flow turbulence between the plates results in a greater transfer of heat and a decrease in pressure.
4. Condensers and Boilers
Heat exchangers using a two-phase heat transfer system are condensers, boilers and evaporators. Condensers are instruments that take hot gas or vapor and cool it to the point of condensation, transforming the gas into a liquid. The point at which a liquid transforms to a gas is called vaporization, and the reverse is called condensation. The surface condenser is the most common type of condenser, and it includes a water supply device. Figure 5 below displays a two-pass surface condenser.
The pressure of the steam at the turbine outlet is low, the steam density is very low, and the flow rate is very high. To prevent a decrease in pressure in the movement of steam from the turbine to the condenser, the condenser unit is placed underneath and connected to the turbine. Inside the tubes the cooling water runs in a parallel way, while the steam moves vertically downward from the wide opening at the top and travels across the tubes.
Furthermore, boilers are one of the earliest applications of heat exchangers. The term steam generator is regularly used to describe a boiler unit in which a hot liquid stream is the source of heat rather than the combustion products. Boilers are manufactured in a range of dimensions and configurations. Some units are only able to produce hot fluid, while others are manufactured for steam production.
Shell and tube heat exchangers consist of a series of tubes which contain fluid that must be either heated or cooled. A second fluid runs over the tubes that are being heated or cooled so that it can either provide the heat or absorb the heat required. A set of tubes is called the tube bundle and can be made up of several types of tubes: plain, longitudinally finned, etc. Shell and tube heat exchangers are typically used for high-pressure applications (with pressures greater than 30 bar and temperatures greater than 260 °C). [ 7 ] This is because the shell and tube heat exchangers are robust due to their shape. Several thermal design features must be considered when designing the tubes in the shell and tube heat exchangers:
There can be many variations on the shell and tube design. Typically, the ends of each tube are connected to plenums (sometimes called water boxes) through holes in tubesheets. The tubes may be straight or bent in the shape of a U, called U-tubes.
Fixed tube liquid-cooled heat exchangers especially suitable for marine and harsh applications can be assembled with brass shells, copper tubes, brass baffles, and forged brass integral end hubs. [ citation needed ] (See: Copper in heat exchangers ).
Another type of heat exchanger is the plate heat exchanger . These exchangers are composed of many thin, slightly separated plates that have very large surface areas and small fluid flow passages for heat transfer. Advances in gasket and brazing technology have made the plate-type heat exchanger increasingly practical. In HVAC applications, large heat exchangers of this type are called plate-and-frame ; when used in open loops, these heat exchangers are normally of the gasket type to allow periodic disassembly, cleaning, and inspection. There are many types of permanently bonded plate heat exchangers, such as dip-brazed, vacuum-brazed, and welded plate varieties, and they are often specified for closed-loop applications such as refrigeration . Plate heat exchangers also differ in the types of plates that are used, and in the configurations of those plates. Some plates may be stamped with "chevron", dimpled, or other patterns, where others may have machined fins and/or grooves.
When compared to shell and tube exchangers, the stacked-plate arrangement typically has lower volume and cost. Another difference between the two is that plate exchangers typically serve low to medium pressure fluids, compared to medium and high pressures of shell and tube. A third and important difference is that plate exchangers employ more countercurrent flow rather than cross current flow, which allows lower approach temperature differences, high temperature changes, and increased efficiencies.
A third type of heat exchanger is a plate and shell heat exchanger, which combines plate heat exchanger with shell and tube heat exchanger technologies. The heart of the heat exchanger contains a fully welded circular plate pack made by pressing and cutting round plates and welding them together. Nozzles carry flow in and out of the platepack (the 'Plate side' flowpath). The fully welded platepack is assembled into an outer shell that creates a second flowpath (the 'Shell side'). Plate and shell technology offers high heat transfer, high pressure, high operating temperature , compact size, low fouling and close approach temperature. In particular, it does completely without gaskets, which provides security against leakage at high pressures and temperatures.
A fourth type of heat exchanger uses an intermediate fluid or solid store to hold heat, which is then moved to the other side of the heat exchanger to be released. Two examples of this are adiabatic wheels, which consist of a large wheel with fine threads rotating through the hot and cold fluids, and fluid heat exchangers.
This type of heat exchanger uses "sandwiched" passages containing fins to increase the effectiveness of the unit. The designs include crossflow and counterflow coupled with various fin configurations such as straight fins, offset fins and wavy fins.
Plate and fin heat exchangers are usually made of aluminum alloys, which provide high heat transfer efficiency. The material enables the system to operate at a lower temperature difference and reduce the weight of the equipment. Plate and fin heat exchangers are mostly used for low temperature services such as natural gas, helium and oxygen liquefaction plants, air separation plants and transport industries such as motor and aircraft engines .
Advantages of plate and fin heat exchangers:
Disadvantages of plate and fin heat exchangers:
The usage of fins in a tube-based heat exchanger is common when one of the working fluids is a low-pressure gas, and is typical for heat exchangers that operate using ambient air, such as automotive radiators and HVAC air condensers . Fins dramatically increase the surface area with which heat can be exchanged, which improves the efficiency of conducting heat to a fluid with very low thermal conductivity , such as air. The fins are typically made from aluminium or copper since they must conduct heat from the tube along the length of the fins, which are usually very thin.
The main construction types of finned tube exchangers are:
Stacked-fin or spiral-wound construction can be used for the tubes inside shell-and-tube heat exchangers when high efficiency thermal transfer to a gas is required.
In electronics cooling, heat sinks , particularly those using heat pipes , can have a stacked-fin construction.
A pillow plate heat exchanger is commonly used in the dairy industry for cooling milk in large direct-expansion stainless steel bulk tanks . Nearly the entire surface area of a tank can be integrated with this heat exchanger, without gaps that would occur between pipes welded to the exterior of the tank. Pillow plates can also be constructed as flat plates that are stacked inside a tank. The relatively flat surface of the plates allows easy cleaning, especially in sterile applications.
The pillow plate can be constructed using either a thin sheet of metal welded to the thicker surface of a tank or vessel, or two thin sheets welded together. The surface of the plate is welded with a regular pattern of dots or a serpentine pattern of weld lines. After welding the enclosed space is pressurised with sufficient force to cause the thin metal to bulge out around the welds, providing a space for heat exchanger liquids to flow, and creating a characteristic appearance of a swelled pillow formed out of metal.
A waste heat recovery unit (WHRU) is a heat exchanger that recovers heat from a hot gas stream while transferring it to a working medium, typically water or oils. The hot gas stream can be the exhaust gas from a gas turbine or a diesel engine or a waste gas from industry or refinery.
Large systems with high volume and temperature gas streams, typical in industry, can benefit from steam Rankine cycle (SRC) in a waste heat recovery unit, but these cycles are too expensive for small systems. The recovery of heat from low temperature systems requires different working fluids than steam.
An organic Rankine cycle (ORC) waste heat recovery unit can be more efficient at low temperature range using refrigerants that boil at lower temperatures than water. Typical organic refrigerants are ammonia , pentafluoropropane (R-245fa and R-245ca), and toluene .
The refrigerant is boiled by the heat source in the evaporator to produce super-heated vapor. This fluid is expanded in the turbine to convert thermal energy to kinetic energy, that is converted to electricity in the electrical generator. This energy transfer process decreases the temperature of the refrigerant that, in turn, condenses. The cycle is closed and completed using a pump to send the fluid back to the evaporator.
Another type of heat exchanger is called " (dynamic) scraped surface heat exchanger ". This is mainly used for heating or cooling with high- viscosity products, crystallization processes, evaporation and high- fouling applications. Long running times are achieved due to the continuous scraping of the surface, thus avoiding fouling and achieving a sustainable heat transfer rate during the process.
In addition to heating up or cooling down fluids in just a single phase , heat exchangers can be used either to heat a liquid to evaporate (or boil) it or used as condensers to cool a vapor and condense it to a liquid. In chemical plants and refineries , reboilers used to heat incoming feed for distillation towers are often heat exchangers. [ 9 ] [ 10 ]
Distillation set-ups typically use condensers to condense distillate vapors back into liquid.
Power plants that use steam -driven turbines commonly use heat exchangers to boil water into steam . Heat exchangers or similar units for producing steam from water are often called boilers or steam generators.
In the nuclear power plants called pressurized water reactors , special large heat exchangers pass heat from the primary (reactor plant) system to the secondary (steam plant) system, producing steam from water in the process. These are called steam generators . All fossil-fueled and nuclear power plants using steam-driven turbines have surface condensers to convert the exhaust steam from the turbines into condensate (water) for re-use. [ 11 ] [ 12 ]
To conserve energy and cooling capacity in chemical and other plants, regenerative heat exchangers can transfer heat from a stream that must be cooled to another stream that must be heated, such as distillate cooling and reboiler feed pre-heating.
This term can also refer to heat exchangers that contain a material within their structure that has a change of phase. This is usually a solid to liquid phase due to the small volume difference between these states. This change of phase effectively acts as a buffer because it occurs at a constant temperature but still allows for the heat exchanger to accept additional heat. One example where this has been investigated is for use in high power aircraft electronics.
Heat exchangers functioning in multiphase flow regimes may be subject to the Ledinegg instability .
Direct contact heat exchangers involve heat transfer between hot and cold streams of two phases in the absence of a separating wall. [ 13 ] Thus such heat exchangers can be classified as:
Most direct contact heat exchangers fall under the Gas – Liquid category, where heat is transferred between a gas and liquid in the form of drops, films or sprays. [ 7 ]
Such types of heat exchangers are used predominantly in air conditioning , humidification , industrial hot water heating , water cooling and condensing plants. [ 14 ]
Microchannel heat exchangers are multi-pass parallel flow heat exchangers consisting of three main elements: manifolds (inlet and outlet), multi-port tubes with hydraulic diameters smaller than 1 mm, and fins. All of the elements are usually brazed together using a controlled-atmosphere brazing process. Microchannel heat exchangers are characterized by high heat transfer ratios, low refrigerant charges, compact size, and lower airside pressure drops compared to finned tube heat exchangers. [ citation needed ] Microchannel heat exchangers are widely used in the automotive industry as car radiators, and as condenser, evaporator, and cooling/heating coils in the HVAC industry.
Micro heat exchangers , Micro-scale heat exchangers , or microstructured heat exchangers are heat exchangers in which (at least one) fluid flows in lateral confinements with typical dimensions below 1 mm. The most typical such confinement are microchannels , which are channels with a hydraulic diameter below 1 mm. Microchannel heat exchangers can be made from metal or ceramics. [ 16 ] Microchannel heat exchangers can be used for many applications including:
One of the widest uses of heat exchangers is for refrigeration and air conditioning . This class of heat exchangers is commonly called air coils , or just coils due to their often-serpentine internal tubing, or condensers in the case of refrigeration , and are typically of the finned tube type. Liquid-to-air, or air-to-liquid HVAC coils are typically of modified crossflow arrangement. In vehicles, heat coils are often called heater cores .
On the liquid side of these heat exchangers, the common fluids are water, a water-glycol solution, steam, or a refrigerant . For heating coils , hot water and steam are the most common, and this heated fluid is supplied by boilers , for example. For cooling coils , chilled water and refrigerant are most common. Chilled water is supplied from a chiller that is potentially located very far away, but refrigerant must come from a nearby condensing unit. When a refrigerant is used, the cooling coil is the evaporator , and the heating coil is the condenser in the vapor-compression refrigeration cycle. HVAC coils that use this direct-expansion of refrigerants are commonly called DX coils . Some DX coils are "microchannel" type. [ 8 ]
On the air side of HVAC coils a significant difference exists between those used for heating, and those for cooling. Due to psychrometrics , air that is cooled often has moisture condensing out of it, except with extremely dry air flows. Heating some air increases that airflow's capacity to hold water. So heating coils need not consider moisture condensation on their air-side, but cooling coils must be adequately designed and selected to handle their particular latent (moisture) as well as the sensible (cooling) loads. The water that is removed is called condensate .
For many climates, water or steam HVAC coils can be exposed to freezing conditions. Because water expands upon freezing, these somewhat expensive and difficult to replace thin-walled heat exchangers can easily be damaged or destroyed by just one freeze. As such, freeze protection of coils is a major concern of HVAC designers, installers, and operators.
The introduction of indentations placed within the heat exchange fins controlled condensation, allowing water molecules to remain in the cooled air. [ 21 ]
The heat exchangers in direct-combustion furnaces , typical in many residences, are not 'coils'. They are, instead, gas-to-air heat exchangers that are typically made of stamped steel sheet metal. The combustion products pass on one side of these heat exchangers, and air to heat on the other. A cracked heat exchanger is therefore a dangerous situation that requires immediate attention because combustion products may enter living space.
Although double-pipe heat exchangers are the simplest to design, the better choice in the following cases would be the helical-coil heat exchanger (HCHE):
These have been used in the nuclear industry as a method for exchanging heat in a sodium system for large liquid metal fast breeder reactors since the early 1970s, using an HCHE device invented by Charles E. Boardman and John H. Germer . [ 24 ] There are several simple methods for designing HCHE for all types of manufacturing industries, such as using the Ramachandra K. Patil (et al.) method from India and the Scott S. Haraburda method from the United States . [ 22 ] [ 23 ]
However, these methods are based on assumptions: an estimated inside heat transfer coefficient, a predicted flow pattern around the outside of the coil, and a constant heat flux. [ 25 ]
A modification to the perpendicular flow of the typical HCHE involves replacing the shell with another coiled tube, allowing the two fluids to flow parallel to one another; this arrangement requires different design calculations. [ 26 ] These are the spiral heat exchangers (SHE). The name may refer to a helical (coiled) tube configuration, but more generally the term refers to a pair of flat surfaces that are coiled to form the two channels in a counter-flow arrangement. Each of the two channels has one long curved path. A pair of fluid ports are connected tangentially to the outer arms of the spiral; axial ports are common, but optional. [ 27 ]
The main advantage of the SHE is its highly efficient use of space. This attribute is often leveraged and partially reallocated to gain other improvements in performance, according to well known tradeoffs in heat exchanger design. (A notable tradeoff is capital cost vs operating cost.) A compact SHE may be used to have a smaller footprint and thus lower all-around capital costs, or an oversized SHE may be used to have less pressure drop, less pumping energy , higher thermal efficiency , and lower energy costs.
The distance between the sheets in the spiral channels is maintained by using spacer studs that were welded prior to rolling. Once the main spiral pack has been rolled, alternate top and bottom edges are welded and each end closed by a gasketed flat or conical cover bolted to the body. This ensures no mixing of the two fluids occurs. Any leakage is from the periphery cover to the atmosphere, or to a passage that contains the same fluid. [ 28 ]
Spiral heat exchangers are often used in the heating of fluids that contain solids and thus tend to foul the inside of the heat exchanger. The low pressure drop lets the SHE handle fouling more easily. The SHE uses a “self cleaning” mechanism, whereby fouled surfaces cause a localized increase in fluid velocity, thus increasing the drag (or fluid friction ) on the fouled surface, thus helping to dislodge the blockage and keep the heat exchanger clean. "The internal walls that make up the heat transfer surface are often rather thick, which makes the SHE very robust, and able to last a long time in demanding environments." [ citation needed ] They are also easily cleaned, opening out like an oven where any buildup of foulant can be removed by pressure washing .
Self-cleaning water filters are used to keep the system clean and running without the need to shut down or replace cartridges and bags.
There are three main types of flows in a spiral heat exchanger:
The spiral heat exchanger is well suited to applications such as pasteurization, digester heating, heat recovery, pre-heating (see: recuperator ), and effluent cooling. For sludge treatment, SHEs are generally smaller than other types of heat exchangers. [ citation needed ]
Due to the many variables involved, selecting optimal heat exchangers is challenging. Hand calculations are possible, but many iterations are typically needed. As such, heat exchangers are most often selected via computer programs, either by system designers, who are typically engineers , or by equipment vendors.
To select an appropriate heat exchanger, the system designers (or equipment vendors) would firstly consider the design limitations for each heat exchanger type.
Though cost is often the primary criterion, several other selection criteria are important:
Small-diameter coil technologies are becoming more popular in modern air conditioning and refrigeration systems because they offer better rates of heat transfer than conventionally sized condenser and evaporator coils with round copper tubes and aluminum or copper fins, which have been the standard in the HVAC industry. Small-diameter coils can withstand the higher pressures required by the new generation of environmentally friendlier refrigerants. Two small-diameter coil technologies are currently available for air conditioning and refrigeration products: copper microgroove [ 31 ] and brazed aluminum microchannel. [ citation needed ]
Choosing the right heat exchanger (HX) requires some knowledge of the different heat exchanger types, as well as the environment where the unit must operate. Typically in the manufacturing industry, several differing types of heat exchangers are used for just one process or system to derive the final product. For example, a kettle HX for pre-heating, a double pipe HX for the 'carrier' fluid and a plate and frame HX for final cooling. With sufficient knowledge of heat exchanger types and operating requirements, an appropriate selection can be made to optimise the process. [ 32 ]
Online monitoring of commercial heat exchangers is done by tracking the overall heat transfer coefficient. The overall heat transfer coefficient tends to decline over time due to fouling.
By periodically calculating the overall heat transfer coefficient from exchanger flow rates and temperatures, the owner of the heat exchanger can estimate when cleaning the heat exchanger is economically attractive.
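The sketch below illustrates how such a periodic check might be carried out in practice: the exchanger duty is taken from an energy balance on one stream and divided by the heat transfer area and the log-mean temperature difference. It is a minimal example assuming a counter-flow arrangement; the function name, fluid properties, area and temperatures are illustrative, not taken from this article.

```python
# Hedged sketch: estimating the overall heat transfer coefficient U of a
# counter-flow exchanger from routine plant measurements. Variable names,
# fluid properties, and the exchanger area are illustrative assumptions.

import math

def overall_u(m_hot, cp_hot, t_hot_in, t_hot_out,
              t_cold_in, t_cold_out, area):
    """Return U [W/(m^2.K)] from an energy balance and the log-mean
    temperature difference (counter-flow arrangement assumed)."""
    q = m_hot * cp_hot * (t_hot_in - t_hot_out)   # duty from the hot side [W]
    dt1 = t_hot_in - t_cold_out                   # terminal temperature differences
    dt2 = t_hot_out - t_cold_in
    lmtd = (dt1 - dt2) / math.log(dt1 / dt2) if dt1 != dt2 else dt1
    return q / (area * lmtd)

# Example: trend this value over time; a steady decline suggests fouling.
print(overall_u(m_hot=12.0, cp_hot=4180, t_hot_in=95, t_hot_out=70,
                t_cold_in=25, t_cold_out=45, area=40.0))
```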
The integrity of plate and tubular heat exchangers can be tested in situ by the conductivity or helium gas methods. These methods confirm the integrity of the plates or tubes, preventing cross contamination, and verify the condition of the gaskets.
Mechanical integrity monitoring of heat exchanger tubes may be conducted through Nondestructive methods such as eddy current testing.
Fouling occurs when impurities deposit on the heat exchange surface.
Deposition of these impurities can significantly decrease heat transfer effectiveness over time. Such deposits are caused by:
The rate of heat exchanger fouling is determined by the rate of particle deposition less re-entrainment/suppression. This model was originally proposed in 1959 by Kern and Seaton.
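A minimal sketch of the asymptotic behaviour such a deposition-minus-removal balance implies is shown below; the assumption that removal grows in proportion to the deposit already present, and the asymptotic fouling resistance and time constant used, are illustrative and not values from Kern and Seaton.

```python
# Hedged sketch of asymptotic fouling growth from a deposition-minus-removal
# balance. Rf_inf (asymptotic fouling resistance) and tau (time constant) are
# illustrative assumptions, not measured or published data.

import math

def fouling_resistance(t_hours, rf_inf=0.0004, tau=500.0):
    """Fouling resistance [m^2.K/W] after t_hours of operation."""
    return rf_inf * (1.0 - math.exp(-t_hours / tau))

for t in (0, 250, 500, 1000, 2000):
    print(t, round(fouling_resistance(t), 6))
```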
Crude Oil Exchanger Fouling . In commercial crude oil refining, crude oil is heated from 21 °C (70 °F) to 343 °C (649 °F) prior to entering the distillation column. A series of shell and tube heat exchangers typically exchange heat between crude oil and other oil streams to heat the crude to 260 °C (500 °F) prior to heating in a furnace. Fouling occurs on the crude side of these exchangers due to asphaltene insolubility. The nature of asphaltene solubility in crude oil was successfully modeled by Wiehe and Kennedy. [ 33 ] The precipitation of insoluble asphaltenes in crude preheat trains has been successfully modeled as a first order reaction by Ebert and Panchal [ 34 ] who expanded on the work of Kern and Seaton.
Cooling Water Fouling .
Cooling water systems are susceptible to fouling. Cooling water typically has a high total dissolved solids content and suspended colloidal solids. Localized precipitation of dissolved solids occurs at the heat exchange surface because wall temperatures there are higher than the bulk fluid temperature. Low fluid velocities (less than about 0.9 m/s, or 3 ft/s) allow suspended solids to settle on the heat exchange surface. Cooling water is typically on the tube side of a shell and tube exchanger because the tube side is easier to clean. To prevent fouling, designers typically ensure that the cooling water velocity is greater than 0.9 m/s and that the bulk fluid temperature is maintained below 60 °C (140 °F). Other approaches to fouling control combine the "blind" application of biocides and anti-scale chemicals with periodic lab testing.
Plate and frame heat exchangers can be disassembled and cleaned periodically. Tubular heat exchangers can be cleaned by such methods as acid cleaning, sandblasting , high-pressure water jet , bullet cleaning, or drill rods.
In large-scale cooling water systems for heat exchangers, water treatment such as purification, addition of chemicals , and testing, is used to minimize fouling of the heat exchange equipment. Other water treatment is also used in steam systems for power plants, etc. to minimize fouling and corrosion of the heat exchange and other equipment.
A variety of companies have started using water borne oscillations technology to prevent biofouling . Without the use of chemicals, this type of technology has helped in providing a low-pressure drop in heat exchangers.
The design and manufacturing of heat exchangers has numerous regulations, which vary according to the region in which they will be used.
Design and manufacturing codes include: ASME Boiler and Pressure Vessel Code (US); PD 5500 (UK); BS 1566 (UK); [ 35 ] EN 13445 (EU); CODAP (French); Pressure Equipment Safety Regulations 2016 (PER) (UK); Pressure Equipment Directive (EU); NORSOK (Norwegian); TEMA ; [ 36 ] API 12; and API 560. [ citation needed ]
The human nasal passages serve as a heat exchanger, with cool air being inhaled and warm air being exhaled. Its effectiveness can be demonstrated by putting the hand in front of the face and exhaling, first through the nose and then through the mouth. Air exhaled through the nose is substantially cooler. [ 37 ] [ 38 ] This effect can be enhanced with clothing, by, for example, wearing a scarf over the face while breathing in cold weather.
In species that have external testes (such as humans), the artery to the testis is surrounded by a mesh of veins called the pampiniform plexus . This cools the blood heading to the testes, while reheating the returning blood.
" Countercurrent " heat exchangers occur naturally in the circulatory systems of fish , whales and other marine mammals . Arteries to the skin carrying warm blood are intertwined with veins from the skin carrying cold blood, causing the warm arterial blood to exchange heat with the cold venous blood. This reduces the overall heat loss in cold water. Heat exchangers are also present in the tongues of baleen whales as large volumes of water flow through their mouths. [ 39 ] [ 40 ] Wading birds use a similar system to limit heat losses from their body through their legs into the water.
The carotid rete is a counter-current heat exchanging organ in some ungulates . Blood ascending the carotid arteries on its way to the brain flows through a network of vessels where heat is discharged to the cooler venous blood descending from the nasal passages. The carotid rete allows Thomson's gazelle to maintain its brain almost 3 °C (5.4 °F) cooler than the rest of the body, and therefore aids in tolerating bursts in metabolic heat production, such as those associated with outrunning cheetahs (during which the body temperature exceeds the maximum temperature at which the brain could function). [ 41 ] Humans, like other primates, lack a carotid rete. [ 42 ]
Heat exchangers are widely used in industry both for cooling and heating large scale industrial processes. The type and size of heat exchanger used can be tailored to suit a process depending on the type of fluid, its phase, temperature, density, viscosity, pressures, chemical composition and various other thermodynamic properties.
In many industrial processes energy is wasted or a heat stream is exhausted; heat exchangers can be used to recover this heat and put it to use by heating a different stream in the process. This practice saves a great deal of money in industry, as the heat supplied to other streams from the heat exchangers would otherwise come from an external source that is more expensive and more harmful to the environment.
Heat exchangers are used in many industries, including:
In waste water treatment, heat exchangers play a vital role in maintaining optimal temperatures within anaerobic digesters to promote the growth of microbes that remove pollutants. Common types of heat exchangers used in this application are the double pipe heat exchanger as well as the plate and frame heat exchanger.
In commercial aircraft heat exchangers are used to take heat from the engine's oil system to heat cold fuel. [ 43 ] This improves fuel efficiency, as well as reduces the possibility of water entrapped in the fuel freezing in components. [ 44 ]
Estimated at US$17.5 billion in 2021, the global demand of heat exchangers is expected to experience robust growth of about 5% annually over the next years. The market value is expected to reach US$27 billion by 2030. With an expanding desire for environmentally friendly options and increased development of offices, retail sectors, and public buildings, market expansion is due to grow. [ 45 ]
A simple heat exchanger [ 46 ] [ 47 ] might be thought of as two straight pipes with fluid flow, which are thermally connected. Let the pipes be of equal length L , carrying fluids with heat capacity C_i (energy per unit mass per unit change in temperature) and let the mass flow rate of the fluids through the pipes, both in the same direction, be j_i (mass per unit time), where the subscript i applies to pipe 1 or pipe 2.
Temperature profiles for the pipes are T_1(x) and T_2(x) , where x is the distance along the pipe. Assume a steady state, so that the temperature profiles are not functions of time. Assume also that the only transfer of heat from a small volume of fluid in one pipe is to the fluid element in the other pipe at the same position, i.e., there is no transfer of heat along a pipe due to temperature differences in that pipe. By Newton's law of cooling the rate of change in energy of a small volume of fluid is proportional to the difference in temperatures between it and the corresponding element in the other pipe:
(This is for parallel flow in the same direction, with opposite temperature gradients; for counter-flow ( countercurrent exchange ) the sign in front of γ(T_1 − T_2) in the second equation is opposite.) Here u_i(x) is the thermal energy per unit length and γ is the thermal connection constant per unit length between the two pipes. This change in internal energy results in a change in the temperature of the fluid element. The time rate of change for the fluid element being carried along by the flow is:
where J_i = C_i j_i is the "thermal mass flow rate". The differential equations governing the heat exchanger may now be written as:
Since the system is in a steady state, there are no partial derivatives of temperature with respect to time, and since there is no heat transfer along the pipe, there are no second derivatives in x as is found in the heat equation . These two coupled first-order differential equations may be solved to yield:
where k_1 = γ/J_1 and k_2 = γ/J_2 ,
(This is for parallel flow; for counter-flow the sign in front of k_2 is negative, so that if k_2 = k_1 , that is, the same "thermal mass flow rate" in the two opposite directions, the temperature gradient is constant and the temperatures are linear in position x , with a constant difference (T_2 − T_1) along the exchanger. This explains why the counter-current design ( countercurrent exchange ) is the most efficient.)
and A and B are two as yet undetermined constants of integration. Let T 10 {\displaystyle T_{10}} and T 20 {\displaystyle T_{20}} be the temperatures at x=0 and let T 1 L {\displaystyle T_{1L}} and T 2 L {\displaystyle T_{2L}} be the temperatures at the end of the pipe at x=L. Define the average temperatures in each pipe as:
Using the solutions above, these temperatures are:
Choosing any two of the temperatures above eliminates the constants of integration, letting us find the other four temperatures. We find the total energy transferred by integrating the expressions for the time rate of change of internal energy per unit length:
By the conservation of energy, the sum of the two energies is zero. The quantity T̄_2 − T̄_1 is known as the log mean temperature difference , and is a measure of the effectiveness of the heat exchanger in transferring heat energy. | https://en.wikipedia.org/wiki/Heat_exchanger |
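As a rough numerical illustration of the two-pipe model above, the sketch below marches the coupled equations along the exchanger for the parallel-flow case; the thermal mass flow rates, coupling constant, length and inlet temperatures are illustrative assumptions rather than values from the article.

```python
# Hedged numerical sketch of the two-pipe model: parallel flow, both streams
# marched from x = 0 with the coupling
#   J1 dT1/dx = gamma*(T2 - T1),   J2 dT2/dx = gamma*(T1 - T2).
# J_i, gamma, length and the inlet temperatures are illustrative assumptions.

def parallel_flow_profiles(t1_in, t2_in, j1, j2, gamma, length, n=1000):
    dx = length / n
    t1, t2 = t1_in, t2_in
    profile = [(0.0, t1, t2)]
    for i in range(1, n + 1):
        dt1 = gamma * (t2 - t1) / j1 * dx
        dt2 = gamma * (t1 - t2) / j2 * dx
        t1, t2 = t1 + dt1, t2 + dt2
        profile.append((i * dx, t1, t2))
    return profile

profile = parallel_flow_profiles(t1_in=20.0, t2_in=90.0,
                                 j1=500.0, j2=800.0, gamma=40.0, length=10.0)
x, t1_out, t2_out = profile[-1]
print(f"outlet temperatures: T1 = {t1_out:.1f} C, T2 = {t2_out:.1f} C")
# For counter-flow, stream 2 enters at x = L, so the same equations are solved
# with the opposite sign on the second equation plus an iteration (shooting)
# step to match the known inlet temperature at x = L.
```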
In physics and engineering , heat flux or thermal flux , sometimes also referred to as heat flux density [ 1 ] , heat-flow density or heat-flow rate intensity , is a flow of energy per unit area per unit time . Its SI units are watts per square metre (W/m 2 ). It has both a direction and a magnitude, and so it is a vector quantity. To define the heat flux at a certain point in space, one takes the limiting case where the size of the surface becomes infinitesimally small.
Heat flux is often denoted ϕ → q {\displaystyle {\vec {\phi }}_{\mathrm {q} }} , the subscript q specifying heat flux, as opposed to mass or momentum flux . Fourier's law is an important application of these concepts.
For most solids in usual conditions, heat is transported mainly by conduction and the heat flux is adequately described by Fourier's law.
ϕ_q = −k dT(x)/dx
where k {\displaystyle k} is the thermal conductivity . The negative sign shows that heat flux moves from higher temperature regions to lower temperature regions.
The multi-dimensional case is similar, the heat flux goes "down" and hence the temperature gradient has the negative sign:
In vector form, ϕ_q = −k ∇T , where ∇ is the gradient operator .
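A minimal sketch of the one-dimensional form of Fourier's law is given below; the conductivity, wall thickness and surface temperatures are illustrative assumptions.

```python
# Hedged sketch of Fourier's law in one dimension: conductive heat flux through
# a flat wall with a linear temperature profile. All values are illustrative.

def conductive_heat_flux(k, t_hot, t_cold, thickness):
    """Heat flux [W/m^2] through a plane wall, phi_q = -k dT/dx."""
    return -k * (t_cold - t_hot) / thickness

# Example: 0.04 W/(m.K) insulation, 0.1 m thick, 20 C inside, -5 C outside.
print(conductive_heat_flux(k=0.04, t_hot=20.0, t_cold=-5.0, thickness=0.10))  # 10 W/m^2
```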
The measurement of heat flux can be performed in a few different manners.
A commonly known, but often impractical, method is performed by measuring a temperature difference over a piece of material with a well-known thermal conductivity . This method is analogous to a standard way to measure an electric current, where one measures the voltage drop over a known resistor . Usually this method is difficult to perform since the thermal resistance of the material being tested is often not known. Accurate values for the material's thickness and thermal conductivity would be required in order to determine thermal resistance. Using the thermal resistance, along with temperature measurements on either side of the material, heat flux can then be indirectly calculated.
A second method of measuring heat flux is by using a heat flux sensor , or heat flux transducer, to directly measure the amount of heat being transferred to/from the surface that the heat flux sensor is mounted to. The most common type of heat flux sensor is a differential temperature thermopile, which operates on essentially the same principle as the first measurement method mentioned above, except that it has the advantage that the thermal resistance/conductivity does not need to be a known parameter. These parameters do not have to be known because the heat flux sensor enables an in-situ measurement of the existing heat flux by using the Seebeck effect . However, differential thermopile heat flux sensors have to be calibrated in order to relate their output signals [μV] to heat flux values [W/m 2 ]. Once the heat flux sensor is calibrated, it can then be used to directly measure heat flux without requiring the rarely known value of thermal resistance or thermal conductivity.
One of the tools in a scientist's or engineer's toolbox is the energy balance . Such a balance can be set up for any physical system, from chemical reactors to living organisms, and generally takes the following form
where the three ∂E/∂t terms stand for the time rate of change of, respectively, the total amount of incoming energy, the total amount of outgoing energy, and the total amount of accumulated energy.
Now, if the only way the system exchanges energy with its surroundings is through heat transfer, the heat rate can be used to calculate the energy balance, since
where we have integrated the heat flux ϕ → q {\displaystyle {\vec {\phi }}_{\mathrm {q} }} over the surface S {\displaystyle S} of the system.
In real-world applications one cannot know the exact heat flux at every point on the surface, but approximation schemes can be used to calculate the integral, for example Monte Carlo integration . | https://en.wikipedia.org/wiki/Heat_flux |
Heat flux measurements of thermal insulation are applied in laboratory and industrial environments to obtain reference or in-situ measurements of the thermal properties of an insulation material. Thermal insulation is tested using nondestructive testing techniques relying on heat flux sensors . Procedures and requirements for in-situ measurements are standardized in ASTM C1041 standard: "Standard Practice for In-Situ Measurements of Heat Flux in Industrial Thermal Insulation Using Heat Flux Transducers". [ 1 ]
On-site heat flux measurements are often focused on testing the thermal transport properties of, for example, pipes, tanks, ovens and boilers, by calculating the heat flux q or the apparent thermal conductivity λ. The real-time energy gain or loss is measured under pseudo-steady-state conditions with minimal disturbance by a heat flux transducer (HFT). This on-site method is for flat surfaces (non-pipes) only.
After these preparations have been applied, connect the sensor to a datalogger or integrating voltmeter and wait until a pseudo-steady state is achieved. It is advisable to average the readings over a short time period once the steady state is achieved. This voltage measurement is the final measurement, but for good measure these steps should be repeated at multiple relevant locations on the insulation.
The heat flux q {\displaystyle q} can be calculated from the voltage by:
The apparent thermal conductivity can be calculated from:
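The exact expressions are not reproduced above, so the sketch below assumes the usual practice: the heat flux is the transducer output voltage divided by its calibration sensitivity, and the apparent conductivity follows from that flux, the insulation thickness and the temperature drop across it. All numerical values are illustrative.

```python
# Hedged sketch of the calculations implied above; the relations and all
# numbers are assumptions for illustration, not taken from the standard text.

def heat_flux_from_voltage(v_out_uV, sensitivity_uV_per_Wm2):
    """q [W/m^2] from the HFT output voltage and its calibration sensitivity."""
    return v_out_uV / sensitivity_uV_per_Wm2

def apparent_conductivity(q, thickness_m, t_hot, t_cold):
    """Apparent thermal conductivity [W/(m.K)] of the insulation layer."""
    return q * thickness_m / (t_hot - t_cold)

q = heat_flux_from_voltage(v_out_uV=350.0, sensitivity_uV_per_Wm2=7.0)          # 50 W/m^2
print(q, apparent_conductivity(q, thickness_m=0.05, t_hot=80.0, t_cold=30.0))   # 0.05 W/(m.K)
```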
The interpretation and precision of the results depends on the section of measurement, the choice of HFT and external conditions. The correct heat flux sensor and measurement test section are of importance for a good in-situ measurement and should be based on manufacturer recommendations, past experience and careful consideration of the testing area.
ASTM C1041: Standard Practice for In-Situ Measurements of Heat Flux in Industrial Thermal Insulation Using Heat Flux Transducers
Sweden, 1979. (Draft Translation, March 1982, U.S. Army Corps of Engineers) | https://en.wikipedia.org/wiki/Heat_flux_measurements_of_thermal_insulation |
A heat gun is a hand-held device used to emit a stream of hot air, usually at temperatures between 100 and 550 °C (373 and 823 K; 212 and 1,022 °F), with some hotter models running around 760 °C (1,030 K; 1,400 °F). Heat guns usually have the form of an elongated body pointing at what is to be heated, with a handle fixed to it at right angles and a pistol-grip trigger, in the same pistol form factor as many other power tools .
Though it shares similarities with a hair dryer , a heat gun is not meant as a substitute for one: a hair dryer safely spreads the heat out across its nozzle to prevent scalp burning and has a limited temperature range, while a heat gun has a concentrated element and nozzle, along with higher temperatures, which can easily scald the scalp or set hair on fire.
A heat gun comprises a source of heat, usually an electrically heated element or a propane / liquefied petroleum gas burner; a mechanism to move the hot air, such as an electric fan , unless gas pressure is sufficient; a nozzle to direct the air, which may be a simple tube pointing in one direction, or specially shaped for purposes such as concentrating the heat on a small area or thawing a pipe but not the wall behind it; a housing to contain the components and keep the operator safe; a mechanism to switch it on and off and control the temperature, such as a trigger; a handle; and a built-in or external stand if the gun is to be used hands-free. Gas-powered soldering irons sometimes have interchangeable hot air blower tips to produce a very narrow stream of hot air suitable for working with surface-mount devices and shrinking heat-shrink tubing .
Focused infrared heaters are also used for localised heating.
Heat guns are used in physics , materials science , chemistry , engineering , and other laboratory and workshop settings. Different types of heat gun operating at different temperatures and with different airflow can be used to strip paint , [ 1 ] shrink heat shrink tubing , shrink film , and shrink wrap packaging , dry out damp wood , bend and weld plastic , soften adhesives , and thaw frozen pipes. [ 2 ]
Heat guns, often called hot air guns or hot air stations for this application, are used in electronics to desolder and rework surface-mounted circuit board components .
Heat guns are also used for functional testing of overheat protection devices, in order to safely simulate an overheat condition.
Household use of heat guns is common. Heat guns and lighter-weight hair dryers are sometimes used to remove paint splashes and wallpaper. Heat guns are also used to make plastics such as PVC piping pliable for bending, to soften wax and adhesives such as those used in electronics, and to thaw out frozen copper pipes. There are also heat gun form factors suited to food preparation, such as melting hard candies, searing meats, or starting a charcoal fire or grill. Heat guns are sometimes used to upholster furniture and repair leather and vinyl goods.
For removing lead paint, temperatures below 590 °C (863 K; 1,094 °F) are used to minimize vaporization. [ 3 ] | https://en.wikipedia.org/wiki/Heat_gun |
A heat kernel signature (HKS) is a feature descriptor for use in deformable shape analysis and belongs to the group of spectral shape analysis methods. For each point in the shape, HKS defines its feature vector representing the point's local and global geometric properties. Applications include segmentation, classification, structure discovery, shape matching and shape retrieval.
HKS was introduced in 2009 by Jian Sun, Maks Ovsjanikov and Leonidas Guibas . [ 1 ] It is based on the heat kernel , which is a fundamental solution to the heat equation . HKS is one of the many recently introduced shape descriptors which are based on the Laplace–Beltrami operator associated with the shape. [ 2 ]
Shape analysis is the field of automatic digital analysis of shapes, e.g., 3D objects. For many shape analysis tasks (such as shape matching/retrieval), feature vectors for certain key points are used instead of the complete 3D model of the shape. An important requirement of such feature descriptors is that they be invariant under certain transformations. For rigid transformations , commonly used feature descriptors include shape context , spin images, integral volume descriptors and multiscale local features, among others. [ 2 ] The HKS is invariant under isometric transformations, which generalize rigid transformations.
HKS is based on the concept of heat diffusion over a surface. Given an initial heat distribution u 0 ( x ) {\displaystyle u_{0}(x)} over the surface, the heat kernel h t ( x , y ) {\displaystyle h_{t}(x,y)} relates the amount of heat transferred from x {\displaystyle x} to y {\displaystyle y} after time t {\displaystyle t} . The heat kernel is invariant under isometric transformations and stable under small perturbations to the isometry. [ 1 ] In addition, the heat kernel fully characterizes shapes up to an isometry and represents increasingly global properties of the shape with increasing time. [ 3 ] Since h t ( x , y ) {\displaystyle h_{t}(x,y)} is defined for a pair of points over a temporal domain, using heat kernels directly as features would lead to a high complexity. HKS instead restricts itself to just the temporal domain by considering only h t ( x , x ) {\displaystyle h_{t}(x,x)} . HKS inherits most of the properties of heat kernels under certain conditions. [ 1 ]
The heat diffusion equation over a compact Riemannian manifold M {\displaystyle M} (possibly with a boundary) is given by
where Δ {\displaystyle \Delta } is the Laplace–Beltrami operator and u ( x , t ) {\displaystyle u(x,t)} is the heat distribution at a point x {\displaystyle x} at time t {\displaystyle t} . The solution to this equation can be expressed as [ 1 ]
The eigen decomposition of the heat kernel is expressed as
where λ_i and ϕ_i are the i-th eigenvalue and i-th eigenfunction of Δ, respectively. The heat kernel fully characterizes a surface up to an isometry: for any surjective map T : M → N between two Riemannian manifolds M and N , if h_t(x, y) = h_t(T(x), T(y)) then T is an isometry, and vice versa. [ 1 ] For a concise feature descriptor, HKS restricts the heat kernel only to the temporal domain
HKS, similar to the heat kernel, characterizes surfaces under the condition that the eigenvalues of Δ {\displaystyle \Delta } for M {\displaystyle M} and N {\displaystyle N} are non-repeating. The terms exp ( − λ i t ) {\displaystyle \exp(-\lambda _{i}t)} can be intuited as a bank of low-pass filters, with λ i {\displaystyle \lambda _{i}} determining the cutoff frequencies. [ 2 ]
Since h t ( x , x ) {\displaystyle h_{t}(x,x)} is, in general, a non-parametric continuous function, HKS is in practice represented as a discrete sequence of { h t 1 ( x , x ) , … , h t n ( x , x ) } {\displaystyle \{h_{t_{1}}(x,x),\ldots ,h_{t_{n}}(x,x)\}} values sampled at times t 1 , … , t n {\displaystyle t_{1},\ldots ,t_{n}} .
In most applications, the underlying manifold for an object is not known. The HKS can be computed if a mesh representation of the manifold is available, by using a discrete approximation to Δ {\displaystyle \Delta } and using the discrete analogue of the heat equation. In the discrete case, the Laplace–Beltrami operator is a sparse matrix and can be written as, [ 1 ]
where A {\displaystyle A} is a positive diagonal matrix with entries A ( i , i ) {\displaystyle A(i,i)} corresponding to the area of the triangles in the mesh sharing the vertex i {\displaystyle i} , and W {\displaystyle W} is a symmetric semi-definite weighting matrix. L {\displaystyle L} can be decomposed into L = Φ Λ Φ T A {\displaystyle L=\Phi \Lambda \Phi ^{T}A} , where Λ {\displaystyle \Lambda } is a diagonal matrix of the eigenvalues of L {\displaystyle L} arranged in the ascending order, and Φ {\displaystyle \Phi } is the matrix with the corresponding orthonormal eigenvectors. The discrete heat kernel is the matrix given by,
The elements k_t(i, j) represent the heat diffusion between vertices i and j after time t . The HKS is then given by the diagonal entries of this matrix, sampled at discrete time intervals. Similar to the continuous case, the discrete HKS is robust to noise. [ 1 ]
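A minimal sketch of this discrete computation is given below. It assumes the weight matrix W and the vertex areas (the diagonal of A) have already been assembled for the mesh, solves the dense generalized eigenproblem directly (large meshes would use a sparse solver and only the first hundred or so eigenpairs), and evaluates the diagonal heat-kernel entries at the requested times. Function and variable names are illustrative.

```python
# Hedged sketch: discrete HKS from a mesh Laplacian eigendecomposition.
# W (weights) and A_diag (vertex areas) are assumed to be precomputed.

import numpy as np
from scipy.linalg import eigh

def heat_kernel_signature(W, A_diag, times, n_eig=100):
    """Return an (n_vertices, n_times) array of HKS values.

    W      : (n, n) symmetric weight matrix
    A_diag : (n,) vertex areas (diagonal of A)
    times  : sequence of diffusion times t
    """
    n_eig = min(n_eig, len(A_diag))
    # Generalized eigenproblem  W phi = lambda A phi  (eigenvalues ascending);
    # eigenvectors come out A-orthonormal, matching Phi^T A Phi = I above.
    eigvals, eigvecs = eigh(W, np.diag(A_diag))
    eigvals, eigvecs = eigvals[:n_eig], eigvecs[:, :n_eig]
    hks = np.zeros((len(A_diag), len(times)))
    for j, t in enumerate(times):
        # HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2
        hks[:, j] = (eigvecs ** 2) @ np.exp(-eigvals * t)
    return hks
```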
The main property that characterizes surfaces using HKS up to an isometry holds only when the eigenvalues of the surfaces are non-repeating. There are certain surfaces (especially those with symmetry) where this condition is violated. A sphere is a simple example of such a surface.
The time parameter in the HKS is closely related to the scale of global information. However, there is no direct way to choose the time discretization. The existing method chooses time samples logarithmically, which is a heuristic with no guarantees. [ 4 ]
The discrete heat kernel requires eigendecomposition of a matrix of size n × n {\displaystyle n\times n} , where n {\displaystyle n} is the number of vertices in the mesh representation of the manifold. Computing the eigendecomposition is an expensive operation, especially as n {\displaystyle n} increases.
Note, however, that because of the inverse exponential dependence on the eigenvalue, typically only a small number of eigenvectors (fewer than 100) is sufficient to obtain a good approximation of the HKS.
The performance guarantees for HKS only hold for truly isometric transformations. However, deformations for real shapes are often not isometric. A simple example of such transformation is closing of the fist by a person, where the geodesic distances between two fingers changes.
Source: [ 2 ]
The (continuous) HKS at a point x {\displaystyle x} , h t ( x , x ) {\displaystyle h_{t}(x,x)} on the Riemannian manifold is related to the scalar curvature s ( x ) {\displaystyle s(x)} by,
Hence, the HKS can also be interpreted as the curvature at x at scale t .
The WKS [ 4 ] follows a similar idea to the HKS, replacing the heat equation with the Schrödinger wave equation ,
where ψ ( x , t ) {\displaystyle \psi (x,t)} is the complex wave function. The average probability of measuring the particle at a point x {\displaystyle x} is given by,
where f {\displaystyle f} is the initial energy distribution. By fixing a family of these energy distributions f i ( x ) {\displaystyle f_{i}(x)} , the WKS can be obtained as a discrete sequence { p f 1 ( x ) , … , p f n ( x ) } {\displaystyle \{p_{f_{1}}(x),\ldots ,p_{f_{n}}(x)\}} . Unlike HKS, the WKS can be intuited as a set of band-pass filters leading to better feature localization. However, the WKS does not represent large-scale features well (as they are filtered out) yielding poor performance at shape matching applications.
Similar to the HKS, the GPS [ 5 ] is based on the Laplace-Beltrami operator. GPS at a point x {\displaystyle x} is a vector of scaled eigenfunctions of the Laplace–Beltrami operator computed at x {\displaystyle x} . The GPS is a global feature whereas the scale of the HKS can be varied by varying the time parameter for heat diffusion. Hence, the HKS can be used in partial shape matching applications whereas the GPS cannot.
SGWS [ 6 ] provides a general form for spectral descriptors , where one can obtain HKS by specifying the filter function. SGWS is a multiresolution local descriptor that is not only isometric invariant, but also compact, easy to compute and combines the advantages of both band-pass and low-pass filters.
Even though the HKS represents the shape at multiple scales, it is not inherently scale invariant. For example, the HKS for a shape and its scaled version are not the same without pre-normalization. A simple way to ensure scale invariance is by pre-scaling each shape to have the same surface area (e.g. 1). Using the notation above, this means:
s = ∑_j A_j ,  A ← A/s ,  λ_i ← s λ_i for each i ,  ϕ_i ← √s ϕ_i for each i
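A minimal sketch of this pre-normalization, reusing the eigendecomposition quantities from the discrete HKS sketch earlier, could look as follows; it simply rescales the spectrum so that the shape has unit surface area.

```python
# Hedged sketch: rescale the spectrum to a unit-area shape before evaluating
# the HKS. Inputs are assumed to come from the earlier eigendecomposition.

import numpy as np

def normalize_to_unit_area(eigvals, eigvecs, vertex_areas):
    s = vertex_areas.sum()                       # total surface area
    return eigvals * s, eigvecs * np.sqrt(s), vertex_areas / s
```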
Alternatively, a scale-invariant version of the HKS can be constructed by generating a Scale space representation . [ 7 ] In scale space, the HKS of a scaled shape corresponds to a time translation, up to a multiplicative factor. Taking the Fourier transform of the HKS turns this translation into a phase factor in the complex plane, and the dependency on translation can be eliminated by considering the modulus of the transform. Demo of Scale-invariant HKS on YouTube .
An alternative scale invariant HKS can be established by working out its construction through a scale invariant metric, as defined in. [ 8 ]
The HKS is defined for a boundary surface of a 3D shape, represented as a 2D Riemannian manifold. Instead of considering only the boundary, the entire volume of the 3D shape can be considered to define the volumetric version of the HKS. [ 9 ] The Volumetric HKS is defined analogous to the normal HKS by considering the heat equation over the entire volume (as a 3-submanifold) and defining a Neumann boundary condition over the 2-manifold boundary of the shape. Volumetric HKS characterizes transformations up to a volume isometry, which represent the transformation for real 3D objects more faithfully than boundary isometry. [ 9 ]
The scale-invariant HKS features can be used in the bag-of-features model for shape retrieval applications. [ 10 ] The features are used to construct geometric words by taking into account their spatial relations, from which shapes can be constructed (analogous to using features as words and shapes as sentences). Shapes themselves are represented using compact binary codes to form an indexed collection. Given a query shape, similar shapes in the index with possibly isometric transformations can be retrieved by using the Hamming distance of the code as the nearness-measure. | https://en.wikipedia.org/wiki/Heat_kernel_signature |
The heat loss due to linear thermal bridging ( H T B {\displaystyle H_{TB}} ) is a physical quantity used when calculating the energy performance of buildings. It appears in both United Kingdom [ 1 ] and Irish [ 2 ] methodologies.
The calculation of the heat loss due to linear thermal bridging is relatively simple, given by the formula below: [ 3 ]
In the formula, y = 0.08 if Accredited Construction Details are used, and y = 0.15 otherwise; ∑ A_exp is the sum of all the exposed areas of the building envelope .
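The explicit formula is not reproduced above; the sketch below assumes the relation implied by these definitions, H_TB = y × ∑ A_exp , with the exposed-element areas being illustrative values only.

```python
# Hedged sketch, assuming H_TB = y * sum(A_exp) as implied by the definitions
# above (the explicit formula is not reproduced in this text). Areas are
# illustrative.

def heat_loss_linear_thermal_bridging(exposed_areas, accredited_details=True):
    """H_TB [W/K] from the y-value and the total exposed envelope area [m^2]."""
    y = 0.08 if accredited_details else 0.15
    return y * sum(exposed_areas)

print(heat_loss_linear_thermal_bridging([85.0, 120.0, 60.0]))  # 0.08 * 265 = 21.2
```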
This article about energy economics is a stub . You can help Wikipedia by expanding it .
This thermodynamics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Heat_loss_due_to_linear_thermal_bridging |
A heat map (or heatmap ) is a 2-dimensional data visualization technique that represents the magnitude of individual values within a dataset as a color. The variation in color may be by hue or intensity .
In some applications such as crime analytics or website click-tracking, color is used to represent the density of data points rather than a value associated with each point.
"Heat map" is a relatively new term, but the practice of shading matrices has existed for over a century. [ 1 ]
Heat maps originated in 2D displays of the values in a data matrix. Larger values were represented by small dark gray or black squares (pixels) and smaller values by lighter squares. The earliest known example dates to 1873, when Toussaint Loua used a hand-drawn and colored shaded matrix to visualize social statistics across the districts of Paris . [ 1 ] The idea of reordering rows and columns to reveal structure in a data matrix, known as seriation , was introduced by Flinders Petrie in 1899. In 1950, Louis Guttman developed the Scalogram, a method for ordering binary matrices to expose a one-dimensional scale structure. In 1957, Peter Sneath displayed the results of a cluster analysis by permuting the rows and the columns of a matrix to place similar values near each other according to the clustering. This idea was implemented by Robert Ling in 1973 with a computer program called SHADE. Ling used overstruck printer characters to represent different shades of gray, one character-width per pixel. [ 1 ] Leland Wilkinson developed the first computer program in 1994 ( SYSTAT ) to produce cluster heat maps with high-resolution color graphics. The Eisen et al. display shown in the figure is a replication of the earlier SYSTAT design. [ 1 ]
Software designer Cormac Kinney trademarked the term 'heat map' in 1991 to describe computer software used to display real-time financial market information. [ 2 ] In 1998 the trademark was acquired by SS&C Technologies, Inc. , but the company did not extend the license, so it was annulled in 2006. [ 3 ]
There are two primary categories of heat maps: spatial and grid. Additionally, more than ten distinct types of heat maps exist.
A spatial heat map displays the magnitude of a spatial phenomenon as color, usually cast over a map. In the image labeled "Spatial Heat Map Example," temperature is displayed by color range across a map of the world. Color ranges from blue (cold) to red (hot).
A grid heat map displays magnitude as color in a two-dimensional matrix, with each dimension representing a category of trait and the color representing the magnitude of some measurement on the combined traits from each of the two categories. For example, one dimension might represent year, and the other dimension might represent month, and the value measured might be temperature. This heat map would show how temperature changed over the years in each month. Grid heat maps are further categorized into two different types of matrices: clustered, and correlogram. [ citation needed ]
In a grid heat map, colors are presented in a grid of a fixed size, with every cell in the grid also being an equal size and shape. The goal is to detect clustering, or suggest the presence of clusters.
A spatial heat map is often used on maps or satellite imagery (see GIS ), where there is no concept of cells, and instead the colours vary continuously.
Heat maps have a wide range of possibilities amongst applications due to their ability to simplify data and make for visually appealing to read data analysis. Many applications using different types of heat maps are listed below.
Business Analysis : Heat maps are used in business analytics to give a visual representation about a company's current functioning, performance, and the need for improvements. Heat maps are a way to analyze a company's existing data and update it to reflect growth and other specific efforts. Heat maps visually appeal to team members and clients of the business or company.
Websites: There are many different ways heat maps are used within websites to track a visiting user's actions. Typically, multiple heat maps are used together to gain insight into which elements on a page perform best and worst. Some specific heat maps used for website analysis are listed below.
Exploratory Data Analysis : Working with small and large data sets, data scientists and data analysts identify essential relationships and characteristics among the points in a data set, as well as features of those data points. Because they collaborate with teams of people in other professions, heat maps provide a visually simple way to summarize findings and main components. There are other ways to represent data, but heat maps can visualize data points and their relationships in a high-dimensional space without becoming too compact and visually unappealing. In data analysis, heat maps allow specific variables to be placed on the row and/or column axes, and even on the diagonal.
Biology : In the biological field, heat maps are used to visually represent large and small sets of data. The focus is towards patterns and similarities in DNA , RNA , gene expression , etc. Working with these sets of data, data scientists in bioinformatics , focus on different concepts, some of which being community detection, association and correlation, and the concept of centrality, where heat maps are a compelling way to visually summarize results and to share amongst other professions not in the field of biology or bioinformatics . The two heat maps to the right, labeled "Data Analysis Heat Map Example," show different ways in which one may present genomic data over a specific region ( Hist1 region) to someone outside the field of biology so they have a better understanding of the general concept a biologist or data scientist are trying to present. [ 5 ]
Financial Analysis : The values of different product and assets fluctuate both rapidly and/or gradually over time. The need to log changes to the daily markets is imperative. It allows for the ability to draw predictions from patterns while being able to revisit past numerical data. Heat maps are able to remove the tedious process and enable the user to visualize data points and compare amongst the different performers. [ 6 ]
Geographical Visualization : Heat maps are used to visualize and display a geographic distribution of data. Heat maps represent different densities of data points on a geographical map to help users see the intensities of certain phenomena and to show items of most or least importance. Heat maps used in geographical visualization are sometimes confused with Choropleth maps , but the difference comes with how certain data is presented which differentiate the two. [ 7 ]
Sports: Heat maps can be used in many sports and can influence managers' and/or coaches' decisions based on the high and low densities of data displayed. Users can identify patterns within the game and the strategies of opponents and of their own team, make more informed decisions benefitting the player, team, and business, and can enhance performance by identifying areas where improvement is needed. Heat maps also visualize comparisons and relationships among different teams in the same sport or between different sports altogether. [ 8 ]
Cybersecurity : In intrusion detection systems and log analysis, heat maps are used to highlight unusual access patterns, port scanning attempts, and malicious IP clustering. They help SOC (Security Operations Center) analysts quickly spot anomalies in large datasets. [ 9 ]
Urban Planning : Heat maps are used in urban planning to visualize traffic congestion, pedestrian flow , and environmental conditions for data-driven infrastructure development (Batty et al., 2012). Environmental heat maps track air quality and urban heat islands, guiding green space planning (EEA, 2021). Noise pollution heat maps aid zoning and mitigation near residential areas (EEA, 2020). Commercial planners use foot traffic heat maps to optimize retail layouts (SmartSantander, 2014). Integrated in smart city systems, these maps enhance livability, sustainability, and safety (Batty et al., 2012). [ 10 ] [ 11 ] [ 12 ] [ 13 ]
Many different color schemes can be used to illustrate the heat map, with perceptual advantages and disadvantages for each. Choosing a good color scheme is integral to accurately and effectively displaying data, whereas a poor color scheme can lead viewers to inaccurate conclusions or exclude those with color deficiencies from proper analysis of said data. [ 14 ]
Rainbow color maps, while a common choice, suffer from both accessibility and data continuity concerns. [ 15 ] Rainbow maps pose a challenge for users with Color vision deficiencies , particularly those with difficulty distinguishing red and green, a condition affecting a significant portion of the population. [ 15 ] In addition to accessibility issues, rainbow heat map colors are not perceptually uniform; equal increments in data values do not correspond to equal changes in color. [ 16 ] The lack of uniformity can create misleading visual effects, like an artificial boundary or gradient. [ 16 ] These effects can compromise the accuracy and effectiveness of the visualization. For example, a plot of amplitude with colors showing the phase angle can be hard to interpret when rendered with the full rainbow of colors: the rainbow color scheme may cloud interpretation for those with color vision deficiencies or create confusion through the hard color boundaries across the diagram. [ 16 ] To address these challenges, perceptually uniform color sets have been created to accommodate visual impairments and maintain consistent color differences proportional to differences in data. [ 17 ]
Perceptually uniform color schemes are carefully designed to maintain consistent perceptual differences and offer a better viewing experience for viewers with color vision deficiencies. When implementing these color schemes into a heat map, designers must consider the data context and intended emphasis. These schemes follow three main patterns: sequential gradients (varying intensity of a single hue), diverging palettes (two contrasting hues with a neutral midpoint), and qualitative sets for categorical data. [ 18 ] Scientific visualization has produced several perceptually uniform color sets (like Viridis, Magma, and Cividis) that address both uniformity and accessibility concerns. [ 19 ]
Device limitations can also significantly reduce heat map visualization effectiveness. When displayed on low-resolution screens, highly detailed color gradients may appear pixelated or banded, reducing the quality of the visualization. This is known as Color quantization , which can obscure or wrongly emphasize pieces of data. To mitigate these effects, designers should consider all devices that will display their heat map, and those devices' color limitations. Comprehensive testing and using a scheme with few colors is the safest approach when creating a heat map that will be viewed across multiple device types.
Grey-scale compatibility is essential for heat map accessibility, especially when considering print media, black-and-white displays, or monochromatic vision. When converted to grey-scale, many color schemes lose their distinctive data mappings, allowing different values to appear identical in luminance. Grey-scale-friendly color schemes are designed to preserve contrast between data points even when color is removed, such as the "viridis" family. [ 20 ]
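As a small illustration of these recommendations, the sketch below renders a grid heat map with the perceptually uniform, grey-scale-friendly viridis colormap; the data are random illustrative values.

```python
# Hedged sketch: a grid heat map drawn with a perceptually uniform colormap
# (viridis) using matplotlib. The data are random illustrative values.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.random((12, 12))               # 12x12 grid of values in [0, 1)

fig, ax = plt.subplots()
im = ax.imshow(data, cmap="viridis")      # sequential, perceptually uniform
fig.colorbar(im, ax=ax, label="magnitude")
ax.set_title("Grid heat map")
plt.show()
```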
Several heat map software implementations are freely available:
Choropleth maps and heat maps are often used in place of one another incorrectly when referring to data visualized geographically. [ 45 ] Both techniques show the proportion of a variable of interest, but the two differ in how the boundaries for the variable's data aggregations are constructed. If the data were collected and aggregated using irregular boundaries, such as administrative units, then a heat map displaying that data will be the same as a choropleth map, encouraging confusion about how the two differ.
Choropleth maps show data grouped by geographic boundaries like countries, states, provinces or even floodplains. Each region has a singular value, visualized by color intensity, shading or pattern. The figure on the right displaying a choropleth map showing the United States' population density by state may be used as an example. The figure illustrates a singular value (population) denoted by blue color intensity proportionate to the state's value relative to all other states' values, bounded by each state's border.
Similarly, heat maps may also visualize data over a geographic region. However, unlike choropleth maps, heat maps show the proportion of a variable over an arbitrary, but usually small grid size, independent of geographic boundaries. [ 46 ] [ 47 ] The figure on the right displaying a heat map of world population is an example. The figure illustrates a single value (population) bounded in an arbitrary grid (square kilometers) with each cell in the grid represented by a color intensity proportionate to the value of the cell relative to all other cells. Some heat maps that are created using approximated regional data may show familiar geographic borders in the visualization where none really exist. The illusion of geographic borders is due to the existence of patterns within the dataset rather than the visualization technique. The figure on the right displaying a heat map of world population also contains this occurrence. Areas in rural parts of the United States and South America may closely resemble familiar geographic borders in those regions.
Another example of a heatmap over a geographic area is a visualization of lake effect snow around Buffalo, New York, in mid-October 2006. This figure shows another usage of heat maps with geographic areas, and how useful they can be in showcasing the effects of weather on specific areas as opposed to countries or states. | https://en.wikipedia.org/wiki/Heat_map |
A heat number is a unique identification coupon number that is stamped on a material plate after it is removed from the ladle and rolled at a steel mill . It serves as a traceable identifier that links the metal product to its specific batch or "heat," allowing access to detailed records about the material's composition, manufacturing process, and quality assurance. [ 1 ]
Industry quality standards require materials to be tested at the manufacturer and the results of these tests be submitted through a report, also called a mill sheet, mill certificate or mill test certificate (MTC). The only way to trace a steel plate back to its mill sheet is the heat number. A heat number is similar to a lot number , which is used to identify production runs of any other product for quality control purposes.
Usually, but not universally, the numbers indicate: | https://en.wikipedia.org/wiki/Heat_number |
The heating value (or energy value or calorific value ) of a substance , usually a fuel or food (see food energy ), is the amount of heat released during the combustion of a specified amount of it.
The calorific value is the total energy released as heat when a substance undergoes complete combustion with oxygen under standard conditions . The chemical reaction is typically a hydrocarbon or other organic molecule reacting with oxygen to form carbon dioxide and water and release heat. It may be expressed with the quantities:
There are two kinds of enthalpy of combustion, called high(er) and low(er) heat(ing) value, depending on how much the products are allowed to cool and whether compounds like H 2 O are allowed to condense.
The high heat values are conventionally measured with a bomb calorimeter . Low heat values are calculated from high heat value test data. They may also be calculated as the difference between the heat of formation Δ H ⦵ f of the products and reactants (though this approach is somewhat artificial since most heats of formation are typically calculated from measured heats of combustion). [ 1 ]
For a fuel of composition C c H h O o N n , the (higher) heat of combustion is 419 kJ/mol × ( c + 0.3 h − 0.5 o ) usually to a good approximation (±3%), [ 2 ] [ 3 ] though it gives poor results for some compounds such as (gaseous) formaldehyde and carbon monoxide , and can be significantly off if o + n > c , such as for glycerine dinitrate, C 3 H 6 O 7 N 2 . [ 4 ]
By convention, the (higher) heat of combustion is defined to be the heat released for the complete combustion of a compound in its standard state to form stable products in their standard states: hydrogen is converted to water (in its liquid state), carbon is converted to carbon dioxide gas, and nitrogen is converted to nitrogen gas. That is, the heat of combustion, Δ H ° comb , is the heat of reaction of the following process:
Chlorine and sulfur are not quite standardized; they are usually assumed to convert to hydrogen chloride gas and SO 2 or SO 3 gas, respectively, or to dilute aqueous hydrochloric and sulfuric acids , respectively, when the combustion is conducted in a bomb calorimeter containing some quantity of water. [ 5 ] [ 6 ]
Zwolinski and Wilhoit defined, in 1972, "gross" and "net" values for heats of combustion. In the gross definition the products are the most stable compounds, e.g. H 2 O (l), Br 2 (l), I 2 (s) and H 2 SO 4 (l). In the net definition the products are the gases produced when the compound is burned in an open flame, e.g. H 2 O (g), Br 2 (g), I 2 (g) and SO 2 (g). In both definitions the products for C, F, Cl and N are CO 2 (g), HF (g), Cl 2 (g) and N 2 (g), respectively. [ 7 ]
The heating value of a fuel can be calculated with the results of ultimate analysis of fuel. From analysis, percentages of the combustibles in the fuel ( carbon , hydrogen , sulfur ) are known. Since the heat of combustion of these elements is known, the heating value can be calculated using Dulong's Formula:
HHV [kJ/g] = 33.87 m_C + 122.3 (m_H − m_O / 8) + 9.4 m_S
where m_C , m_H , m_O , and m_S are the contents (mass fractions) of carbon, hydrogen, oxygen, and sulfur on any (wet, dry or ash-free) basis, respectively. [ 8 ]
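A minimal sketch of this Dulong-type estimate is shown below; the sample composition is illustrative and a dry, ash-free basis is assumed.

```python
# Hedged sketch of the Dulong-type estimate above, with mass fractions from an
# ultimate analysis. The sample composition is illustrative only.

def hhv_dulong(m_c, m_h, m_o, m_s):
    """Higher heating value [kJ/g] from carbon, hydrogen, oxygen and sulfur mass fractions."""
    return 33.87 * m_c + 122.3 * (m_h - m_o / 8.0) + 9.4 * m_s

# Example: a bituminous-coal-like composition
print(hhv_dulong(m_c=0.75, m_h=0.05, m_o=0.08, m_s=0.01))  # ~30.4 kJ/g
```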
The higher heating value (HHV; gross energy , upper heating value , gross calorific value GCV , or higher calorific value ; HCV ) indicates the upper limit of the available thermal energy produced by a complete combustion of fuel. It is measured as a unit of energy per unit mass or volume of substance. The HHV is determined by bringing all the products of combustion back to the original pre-combustion temperature, including condensing any vapor produced. Such measurements often use a standard temperature of 25 °C (77 °F; 298 K) [ citation needed ] . This is the same as the thermodynamic heat of combustion since the enthalpy change for the reaction assumes a common temperature of the compounds before and after combustion, in which case the water produced by combustion is condensed to a liquid. The higher heating value takes into account the latent heat of vaporization of water in the combustion products, and is useful in calculating heating values for fuels where condensation of the reaction products is practical (e.g., in a gas-fired boiler used for space heat). In other words, HHV assumes all the water component is in liquid state at the end of combustion (in product of combustion) and that heat delivered at temperatures below 150 °C (302 °F) can be put to use.
The lower heating value (LHV; net calorific value ; NCV , or lower calorific value ; LCV ) is another measure of available thermal energy produced by a combustion of fuel, measured as a unit of energy per unit mass or volume of substance. In contrast to the HHV, the LHV considers energy losses such as the energy used to vaporize water – although its exact definition is not uniformly agreed upon. One definition is simply to subtract the heat of vaporization of the water from the higher heating value. This treats any H 2 O formed as a vapor that is released as a waste. The energy required to vaporize the water is therefore lost.
LHV calculations assume that the water component of a combustion process is in vapor state at the end of combustion, as opposed to the higher heating value (HHV) (a.k.a. gross calorific value or gross CV ) which assumes that all of the water in a combustion process is in a liquid state after a combustion process.
Another definition of the LHV is the amount of heat released when the products are cooled to 150 °C (302 °F). This means that the latent heat of vaporization of water and other reaction products is not recovered. It is useful in comparing fuels where condensation of the combustion products is impractical, or heat at a temperature below 150 °C (302 °F) cannot be put to use.
One definition of lower heating value, adopted by the American Petroleum Institute (API), uses a reference temperature of 60 °F (15.56 °C).
Another definition, used by Gas Processors Suppliers Association (GPSA) and originally used by API (data collected for API research project 44), is the enthalpy of all combustion products minus the enthalpy of the fuel at the reference temperature (API research project 44 used 25 °C. GPSA currently uses 60 °F), minus the enthalpy of the stoichiometric oxygen (O 2 ) at the reference temperature, minus the heat of vaporization of the vapor content of the combustion products.
The definition in which the combustion products are all returned to the reference temperature is more easily calculated from the higher heating value than when using other definitions and will in fact give a slightly different answer.
Gross heating value accounts for water in the exhaust leaving as vapor, as does LHV, but gross heating value also includes liquid water in the fuel prior to combustion. This value is important for fuels like wood or coal , which will usually contain some amount of water prior to burning.
The higher heating value is experimentally determined in a bomb calorimeter . The combustion of a stoichiometric mixture of fuel and oxidizer (e.g. two moles of hydrogen and one mole of oxygen) in a steel container at 25 °C (77 °F) is initiated by an ignition device and the reactions allowed to complete. When hydrogen and oxygen react during combustion, water vapor is produced. The vessel and its contents are then cooled to the original 25 °C and the higher heating value is determined as the heat released between identical initial and final temperatures.
When the lower heating value (LHV) is determined, cooling is stopped at 150 °C and the reaction heat is only partially recovered. The limit of 150 °C is based on acid gas dew-point.
Note: Higher heating value (HHV) is calculated with the product of water being in liquid form while lower heating value (LHV) is calculated with the product of water being in vapor form.
The difference between the two heating values depends on the chemical composition of the fuel. In the case of pure carbon or carbon monoxide, the two heating values are almost identical, the difference being the sensible heat content of carbon dioxide between 150 °C and 25 °C ( sensible heat exchange causes a change of temperature, while latent heat is added or subtracted for phase transitions at constant temperature. Examples: heat of vaporization or heat of fusion ). For hydrogen, the difference is much more significant as it includes the sensible heat of water vapor between 150 °C and 100 °C, the latent heat of condensation at 100 °C, and the sensible heat of the condensed water between 100 °C and 25 °C. In all, the higher heating value of hydrogen is 18.2% above its lower heating value (142 MJ/kg vs. 120 MJ/kg). For hydrocarbons, the difference depends on the hydrogen content of the fuel. For gasoline and diesel the higher heating value exceeds the lower heating value by about 10% and 7%, respectively, and for natural gas about 11%.
A common method of relating HHV to LHV is:
HHV = LHV + H v · ( n H 2 O ,out / n fuel,in )
where H v is the heat of vaporization of water at the datum temperature (typically 25 °C), n H 2 O ,out is the number of moles of water vaporized and n fuel,in is the number of moles of fuel combusted. [ 9 ]
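The sketch below applies this relation to methane; the heat of vaporization of water (≈44 kJ/mol at 25 °C) and the HHV of methane (≈890 kJ/mol) are standard reference values rather than figures quoted in this article, and the function name is illustrative.

```python
def lhv_from_hhv(hhv, h_vap, n_h2o_out, n_fuel_in):
    """Lower heating value from the higher heating value, per the relation above.
    Units must be consistent, e.g. kJ/mol throughout."""
    return hhv - h_vap * (n_h2o_out / n_fuel_in)

# Methane: CH4 + 2 O2 -> CO2 + 2 H2O, so 2 mol of water per mol of fuel
hhv_ch4 = 890.0   # kJ/mol, approximate HHV of methane
h_vap_h2o = 44.0  # kJ/mol, heat of vaporization of water near 25 degC
print(lhv_from_hhv(hhv_ch4, h_vap_h2o, n_h2o_out=2, n_fuel_in=1))  # ~802 kJ/mol
```

The resulting gap of roughly 11% between HHV and LHV is consistent with the natural-gas figure quoted above.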
Engine manufacturers typically rate their engines' fuel consumption by the lower heating value, since the exhaust is never condensed in the engine, and doing so allows them to publish more attractive numbers than those used in conventional power plant terms. The conventional power industry had used HHV (high heat value) exclusively for decades, even though virtually all of these plants did not condense exhaust either. American consumers should be aware that the corresponding fuel-consumption figure based on the higher heating value will be somewhat higher.
The difference between HHV and LHV definitions causes endless confusion when sources do not state the convention being used, [ 10 ] since there is typically a 10% difference between the two methods for a power plant burning natural gas. For simply benchmarking part of a reaction the LHV may be appropriate, but HHV should be used for overall energy efficiency calculations, if only to avoid confusion; in any case, the value or convention should be clearly stated.
Both HHV and LHV can be expressed on an AR basis (as received, all moisture counted), an MF basis (moisture-free) or an MAF basis (moisture- and ash-free, in which only the water formed by combustion of hydrogen is counted). AR, MF, and MAF are commonly used for indicating the heating values of coal:
The International Energy Agency reports the following typical higher heating values per Standard cubic metre of gas: [ 14 ]
The lower heating value of natural gas is normally about 90% of its higher heating value. This table is in Standard cubic metres (1 atm , 15 °C); to convert to values per Normal cubic metre (1 atm, 0 °C), multiply the above values by 1.0549. | https://en.wikipedia.org/wiki/Heat_of_combustion |
Heat of formation group additivity methods in thermochemistry enable the calculation and prediction of heat of formation of organic compounds based on additivity . This method was pioneered by S. W. Benson. [ 1 ]
Starting with simple linear and branched alkanes and alkenes, the method works by collecting a large number of experimental heat of formation data (see: Heat of Formation table ) and then dividing each molecule up into distinct groups, each consisting of a central atom with multiple ligands:
To each group is then assigned an empirical incremental value which is independent of its position inside the molecule and independent of the nature of its neighbors:
The following example illustrates how these values can be derived.
The experimental heat of formation of ethane is −20.03 kcal/mol, and ethane consists of 2 P groups. Likewise propane (−25.02 kcal/mol) can be written as 2P + S, isobutane (−32.07 kcal/mol) as 3P + T and neopentane (−40.18 kcal/mol) as 4P + Q. Solving these four equations in four unknowns gives the estimates P = −10.01 kcal/mol, S = −4.99 kcal/mol, T = −2.03 kcal/mol and Q = −0.12 kcal/mol. The accuracy of the group values increases as the dataset grows.
The data allow the calculation of the heat of formation for isomers. For example, the pentanes:
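As a sketch of how the group values above can be obtained and reused (using numpy; the pentane group counts are worked out here rather than taken from the cited table), the four equations can be solved as a small linear system and the pentane isomers then predicted:

```python
import numpy as np

# Rows: ethane (2P), propane (2P+S), isobutane (3P+T), neopentane (4P+Q)
# Columns: counts of P, S, T, Q groups in each molecule
A = np.array([[2, 0, 0, 0],
              [2, 1, 0, 0],
              [3, 0, 1, 0],
              [4, 0, 0, 1]], dtype=float)
dHf_exp = np.array([-20.03, -25.02, -32.07, -40.18])  # kcal/mol

P, S, T, Q = np.linalg.solve(A, dHf_exp)
print(P, S, T, Q)  # ~ -10.01, -4.99, -2.03, -0.12 kcal/mol

# Predicted heats of formation for the pentane isomers (kcal/mol):
print("n-pentane  (2P+3S):", 2 * P + 3 * S)    # ~ -35.0
print("isopentane (3P+S+T):", 3 * P + S + T)   # ~ -37.1
print("neopentane (4P+Q):", 4 * P + Q)         # -40.18 by construction
```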
The group additivities for alkenes are:
In alkenes the cis isomer is always less stable than the trans isomer by 1.10 kcal/mol.
More group additivity tables exist for a wide range of functional groups.
An alternative model has been developed by S. Gronert, based not on breaking molecules into fragments but on 1,2 and 1,3 interactions. [ 2 ] [ 3 ]
The Gronert equation reads:
Δ H f = −146.0 n C−C − 124.2 n C−H − 66.2 n C=C + 10.2 n C−C−C + 9.3 n C−C−H + 6.6 n H−C−H + f ( C , H )
with
f ( C , H ) = 231.3 n C + 52.1 n H
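As an illustration, the formula can be evaluated directly; the bond and geminal (1,3) interaction counts below are worked out here for ethane and are not taken from Gronert's papers, and the function name is arbitrary.

```python
def gronert_dHf(n_cc, n_ch, n_cdc, n_ccc, n_cch, n_hch, n_c, n_h):
    """Evaluate the Gronert equation with the coefficients quoted above."""
    return (-146.0 * n_cc - 124.2 * n_ch - 66.2 * n_cdc
            + 10.2 * n_ccc + 9.3 * n_cch + 6.6 * n_hch
            + 231.3 * n_c + 52.1 * n_h)

# Ethane, C2H6: 1 C-C bond, 6 C-H bonds, no C=C;
# each carbon has 3 C-C-H and 3 H-C-H geminal pairs, so 6 of each in total.
print(gronert_dHf(n_cc=1, n_ch=6, n_cdc=0, n_ccc=0, n_cch=6, n_hch=6, n_c=2, n_h=6))
# ~ -20.6, close to the experimental heat of formation of ethane (-20.03 kcal/mol)
```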
The pentanes are now calculated as:
Key to this treatment is the introduction of repulsive, destabilizing 1,3 interactions; the molecular geometry of simple alkanes suggests that this type of steric hindrance should indeed exist. In methane the distance between the hydrogen atoms is 1.8 angstrom, but the combined van der Waals radii of two hydrogens are 2.4 angstrom, implying steric strain. Likewise, in propane the methyl-to-methyl distance is 2.5 angstrom, whereas the combined van der Waals radii are much larger (4 angstrom).
In the Gronert model these repulsive 1,3 interactions account for trends in bond dissociation energies which for example decrease going from methane to ethane to isopropane to neopentane. In this model the homolysis of a C-H bond releases strain energy in the alkane. In traditional bonding models the driving force is the ability of alkyl groups to donate electrons to the newly formed free radical carbon. | https://en.wikipedia.org/wiki/Heat_of_formation_group_additivity |
A heat pump is a device that uses electricity to transfer heat from a colder place to a warmer place. Specifically, the heat pump transfers thermal energy using a heat pump and refrigeration cycle , cooling the cool space and warming the warm space. [ 1 ] In winter a heat pump can move heat from the cool outdoors to warm a house; the pump may also be designed to move heat from the house to the warmer outdoors in summer. As they transfer heat rather than generating heat, they are more energy-efficient than heating by gas boiler . [ 2 ]
A gaseous refrigerant is compressed so its pressure and temperature rise. When operating as a heater in cold weather, the warmed gas flows to a heat exchanger in the indoor space where some of its thermal energy is transferred to that indoor space, causing the gas to condense into a liquid. The liquified refrigerant flows to a heat exchanger in the outdoor space where the pressure falls, the liquid evaporates and the temperature of the gas falls. It is now colder than the temperature of the outdoor space being used as a heat source. It can again take up energy from the heat source, be compressed and repeat the cycle.
Air source heat pumps are the most common models, while other types include ground source heat pumps , water source heat pumps and exhaust air heat pumps . [ 3 ] Large-scale heat pumps are also used in district heating systems. [ 4 ]
Because of their high efficiency and the increasing share of fossil-free sources in electrical grids, heat pumps are playing a role in climate change mitigation . [ 5 ] [ 6 ] Consuming 1 kWh of electricity, they can transfer 1 [ 7 ] to 4.5 kWh of thermal energy into a building. The carbon footprint of heat pumps depends on how electricity is generated , but they usually reduce emissions. [ 8 ] Heat pumps could satisfy over 80% of global space and water heating needs with a lower carbon footprint than gas-fired condensing boilers : however, in 2021 they only met 10%. [ 4 ]
Heat flows spontaneously from a region of higher temperature to a region of lower temperature. Heat does not flow spontaneously from lower temperature to higher, but it can be made to flow in this direction if work is performed. The work required to transfer a given amount of heat is usually much less than the amount of heat; this is the motivation for using heat pumps in applications such as the heating of water and the interior of buildings. [ 9 ]
The amount of work required to drive an amount of heat Q from a lower-temperature reservoir such as ambient air to a higher-temperature reservoir such as the interior of a building is: W = Q / COP , where
The coefficient of performance of a heat pump is greater than one so the work required is less than the heat transferred, making a heat pump a more efficient form of heating than electrical resistance heating. As the temperature of the higher-temperature reservoir increases in response to the heat flowing into it, the coefficient of performance decreases, causing an increasing amount of work to be required for each unit of heat being transferred. [ 9 ]
The coefficient of performance and the work required by a heat pump can be calculated easily by considering an ideal heat pump operating on the reversed Carnot cycle : its heating coefficient of performance is T hot / ( T hot − T cold ), with both temperatures expressed in kelvins.
This is the theoretical amount of heat pumped but in practice it will be less for various reasons, for example if the outside unit has been installed where there is not enough airflow. More data sharing with owners and academics—perhaps from heat meters —could improve efficiency in the long run. [ 11 ]
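A small sketch of these two relations follows; the 0 °C outdoor and 35 °C indoor flow temperatures and the 10 kWh of delivered heat are illustrative assumptions rather than values from the article.

```python
def carnot_cop_heating(t_hot_c, t_cold_c):
    """Ideal (reversed-Carnot) heating COP from hot- and cold-side temperatures in degC."""
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return t_hot / (t_hot - t_cold)

cop_max = carnot_cop_heating(t_hot_c=35.0, t_cold_c=0.0)
print(cop_max)        # ~8.8, a theoretical upper bound; real units achieve much less
print(10.0 / cop_max) # W = Q / COP: ~1.1 kWh of work to deliver 10 kWh of heat
```

Real installations fall well short of this Carnot limit, as noted above.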
An air source heat pump (ASHP) is a heat pump that can absorb heat from air outside a building and release it inside; it uses the same vapor-compression refrigeration process and much the same equipment as an air conditioner , but in the opposite direction. ASHPs are the most common type of heat pump and, usually being smaller, tend to be used to heat individual houses or flats rather than blocks, districts or industrial processes. [ 20 ]
Air-to-air heat pumps provide hot or cold air directly to rooms, but do not usually provide hot water. Air-to-water heat pumps use radiators or underfloor heating to heat a whole house and are often also used to provide domestic hot water .
An ASHP can typically gain 4 kWh thermal energy from 1 kWh electric energy. They are optimized for flow temperatures between 30 and 40 °C (86 and 104 °F), suitable for buildings with heat emitters sized for low flow temperatures. With losses in efficiency, an ASHP can even provide full central heating with a flow temperature up to 80 °C (176 °F). [ 21 ]
As of 2023 [update] about 10% of building heating worldwide is from ASHPs. They are the main way to phase out gas boilers (also known as "furnaces") from houses, to avoid their greenhouse gas emissions . [ 22 ]
Air-source heat pumps move heat between two heat exchangers. One is outside the building and is fitted with fins through which air is forced using a fan; the other either heats the air inside the building directly or heats water which is then circulated around the building through radiators or underfloor heating, releasing the heat to the building. These devices can also operate in a cooling mode, in which they extract heat via the internal heat exchanger and eject it into the ambient air using the external heat exchanger. Some can also heat water for washing, which is stored in a domestic hot water tank. [ 23 ]
Air-source heat pumps are relatively easy and inexpensive to install, so are the most widely used type. In mild weather, coefficient of performance (COP) may be between 2 and 5, while at temperatures below around −8 °C (18 °F) an air-source heat pump may still achieve a COP of 1 to 4. [ 24 ]
A ground source heat pump (also geothermal heat pump) is a heating/cooling system for buildings that uses a type of heat pump to transfer heat to or from the ground, taking advantage of the relative constancy of ground temperatures through the seasons. Ground-source heat pumps (GSHPs)—or geothermal heat pumps (GHP), as they are commonly termed in North America—are among the most energy-efficient technologies for providing HVAC and water heating , using less energy than resistive electric heaters .
Exhaust air heat pumps extract heat from the exhaust air of a building and require mechanical ventilation . Two classes exist:
A solar-assisted heat pump (SAHP) is a system that combines a heat pump and thermal solar panels and/or PV solar panels in a single integrated system. [ 27 ] Heat pumps require a low temperature heat source which can be provided by solar energy. Typically, these two technologies are used separately (or only placed in parallel) to produce warm air or hot water . [ 28 ] In this system the solar thermal panel performs the function of the low temperature heat source and the heat produced is used to feed the heat pump's evaporator. [ 29 ] The goal of this system is to achieve a high coefficient of performance ( COP ) and thus deliver heat more efficiently and less expensively. Air source heat pumps which are preheated by solar air collectors have an additional benefit of lower maintenance as the outside fan unit can be protected from the harsh winter environment.
Solar PV energy can power the heat pump electrically to enable electrification of heating buildings [ 30 ] and greenhouses . [ 31 ] These systems enable electrification [ 32 ] of heating/cooling and are normally driven by economics [ 33 ] and decarbonization goals. [ 34 ] Such systems have been shown to be economic in the Middle East, [ 35 ] North America, [ 36 ] Asia [ 37 ] and Europe. [ 38 ]
A water-source heat pump works in a similar manner to a ground-source heat pump, except that it takes heat from a body of water rather than the ground. The body of water does, however, need to be large enough to be able to withstand the cooling effect of the unit without freezing or creating an adverse effect for wildlife. [ 39 ] The largest water-source heat pump was installed in the Danish town of Esbjerg in 2023. [ 40 ] [ 41 ]
A thermoacoustic heat pump operates as a thermoacoustic heat engine without refrigerant but instead uses a standing wave in a sealed chamber driven by a loudspeaker to achieve a temperature difference across the chamber. [ 42 ]
Electrocaloric heat pumps are solid state. [ 43 ]
The International Energy Agency estimated that, as of 2021, heat pumps installed in buildings have a combined capacity of more than 1000 GW. [ 4 ] They are used for heating, ventilation, and air conditioning (HVAC) and may also provide domestic hot water and tumble clothes drying. [ 44 ] The purchase costs are supported in various countries by consumer rebates. [ 45 ]
In HVAC applications, a heat pump is typically a vapor-compression refrigeration device that includes a reversing valve and optimized heat exchangers so that the direction of heat flow (thermal energy movement) may be reversed. The reversing valve switches the direction of refrigerant through the cycle and therefore the heat pump may deliver either heating or cooling to a building.
Because the two heat exchangers, the condenser and evaporator, must swap functions, they are optimized to perform adequately in both modes. Therefore, the Seasonal Energy Efficiency Rating (SEER in the US) or European seasonal energy efficiency ratio of a reversible heat pump is typically slightly less than those of two separately optimized machines. For equipment to receive the US Energy Star rating, it must have a rating of at least 14 SEER. Pumps with ratings of 18 SEER or above are considered highly efficient. The highest efficiency heat pumps manufactured are up to 24 SEER. [ 46 ]
Heating seasonal performance factor (in the US) or Seasonal Performance Factor (in Europe) are ratings of heating performance. The SPF is the total heat output per annum divided by the total electricity consumed per annum, in other words the average heating COP over the year. [ 47 ]
Window mounted heat pumps run on standard 120 V AC outlets and provide heating, cooling, and humidity control. They are more efficient, and offer lower noise levels, condensation management, and a smaller footprint, compared with window mounted air conditioners that provide cooling only. [ 48 ]
In water heating applications, heat pumps may be used to heat or preheat water for swimming pools, homes or industry. Usually heat is extracted from outdoor air and transferred to an indoor water tank. [ 49 ] [ 50 ]
Large (megawatt-scale) heat pumps are used for district heating . [ 51 ] However as of 2022 [update] about 90% of district heat is from fossil fuels . [ 52 ] In Europe, heat pumps account for a mere 1% of heat supply in district heating networks but several countries have targets to decarbonise their networks between 2030 and 2040. [ 4 ] Possible sources of heat for such applications are sewage water, ambient water (e.g. sea, lake and river water), industrial waste heat , geothermal energy , flue gas , waste heat from district cooling and heat from solar seasonal thermal energy storage . [ 53 ] Large-scale heat pumps for district heating combined with thermal energy storage offer high flexibility for the integration of variable renewable energy . Therefore, they are regarded as a key technology for limiting climate change by phasing out fossil fuels . [ 53 ] [ 54 ] They are also a crucial element of systems which can both heat and cool districts . [ 55 ]
There is great potential to reduce the energy consumption and related greenhouse gas emissions in industry by application of industrial heat pumps, for example for process heat . [ 56 ] [ 57 ] Short payback periods of less than 2 years are possible, while achieving a high reduction of CO 2 emissions (in some cases more than 50%). [ 58 ] [ 59 ] Industrial heat pumps can heat up to 200 °C, and can meet the heating demands of many light industries . [ 60 ] [ 61 ] In Europe alone, 15 GW of heat pumps could be installed in 3,000 facilities in the paper, food and chemicals industries. [ 4 ]
The performance of a heat pump is determined by the ability of the pump to extract heat from a low temperature environment (the source ) and deliver it to a higher temperature environment (the sink ). [ 62 ] Performance varies, depending on installation details, temperature differences, site elevation, location on site, pipe runs, flow rates, and maintenance.
In general, heat pumps work most efficiently (that is, the heat output produced for a given energy input) when the difference between the heat source and the heat sink is small. When using a heat pump for space or water heating, therefore, the heat pump will be most efficient in mild conditions, and decline in efficiency on very cold days. Performance metrics supplied to consumers attempt to take this variation into account.
Common performance metrics are the SEER (in cooling mode) and seasonal coefficient of performance (SCOP) (commonly used just for heating), although SCOP can be used for both modes of operation. [ 62 ] Larger values of either metric indicate better performance. [ 62 ] When comparing the performance of heat pumps, the term performance is preferred to efficiency , with coefficient of performance (COP) being used to describe the ratio of useful heat movement per work input. [ 62 ] An electrical resistance heater has a COP of 1.0, which is considerably lower than a well-designed heat pump which will typically have a COP of 3 to 5 with an external temperature of 10 °C and an internal temperature of 20 °C. Because the ground is a constant temperature source, a ground-source heat pump is not subjected to large temperature fluctuations, and therefore is the most energy-efficient type of heat pump. [ 62 ]
The "seasonal coefficient of performance" (SCOP) is a measure of the aggregate energy efficiency measure over a period of one year which is dependent on regional climate. [ 62 ] One framework for this calculation is given by the Commission Regulation (EU) No. 813/2013. [ 63 ]
A heat pump's operating performance in cooling mode is characterized in the US by either its energy efficiency ratio (EER) or seasonal energy efficiency ratio (SEER), both of which have units of BTU/(h·W) (note that 1 BTU/(h·W) = 0.293 W/W) and larger values indicate better performance.
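A tiny sketch of the unit conversion mentioned above; the conversion factor and the 14, 18 and 24 SEER figures come from the text, while the function name is arbitrary.

```python
BTU_PER_HOUR_PER_WATT = 0.293  # 1 BTU/(h*W) expressed in W/W, as noted above

def seer_to_cop(seer_btu_per_h_w):
    """Convert a SEER/EER value in BTU/(h*W) to a dimensionless W/W ratio."""
    return seer_btu_per_h_w * BTU_PER_HOUR_PER_WATT

for seer in (14, 18, 24):  # Energy Star minimum, "highly efficient", top of market
    print(seer, "->", round(seer_to_cop(seer), 2), "W/W")
```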
The carbon footprint of heat pumps depends on their individual efficiency and how electricity is produced. An increasing share of low-carbon energy sources such as wind and solar will lower the impact on the climate.
In most settings, heat pumps will reduce CO 2 emissions compared to heating systems powered by fossil fuels . [ 70 ] In regions accounting for 70% of world energy consumption , the emissions savings of heat pumps compared with a high-efficiency gas boiler are on average above 45% and reach 80% in countries with cleaner electricity mixes. [ 4 ] These values can be improved by 10 percentage points, respectively, with alternative refrigerants. In the United States, 70% of houses could reduce emissions by installing a heat pump. [ 71 ] [ 4 ] The rising share of renewable electricity generation in many countries is set to increase the emissions savings from heat pumps over time. [ 4 ]
Heating systems powered by green hydrogen are also low-carbon and may become competitors, but are much less efficient due to the energy loss associated with hydrogen conversion, transport and use. In addition, not enough green hydrogen is expected to be available before the 2030s or 2040s. [ 72 ] [ 73 ]
Vapor-compression uses a circulating refrigerant as the medium which absorbs heat from one space, is compressed so that its temperature rises, and then releases that heat in another space. The main components of the system are a compressor , a reservoir, a reversing valve which selects between heating and cooling mode, two thermal expansion valves (one used in heating mode and the other in cooling mode) and two heat exchangers, one associated with the external heat source/sink and the other with the interior. In heating mode the external heat exchanger is the evaporator and the internal one is the condenser; in cooling mode the roles are reversed.
Circulating refrigerant enters the compressor in the thermodynamic state known as a saturated vapor [ 74 ] and is compressed to a higher pressure, resulting in a higher temperature as well. The hot, compressed vapor is then in the thermodynamic state known as a superheated vapor and it is at a temperature and pressure at which it can be condensed with either cooling water or cooling air flowing across the coil or tubes. In heating mode this heat is used to heat the building using the internal heat exchanger, and in cooling mode this heat is rejected via the external heat exchanger.
The condensed, liquid refrigerant, in the thermodynamic state known as a saturated liquid , is next routed through an expansion valve where it undergoes an abrupt reduction in pressure. That pressure reduction results in the adiabatic flash evaporation of a part of the liquid refrigerant. The auto-refrigeration effect of the adiabatic flash evaporation lowers the temperature of the liquid-and-vapor refrigerant mixture to where it is colder than the temperature of the enclosed space to be refrigerated.
The cold mixture is then routed through the coil or tubes in the evaporator. A fan circulates the warm air in the enclosed space across the coil or tubes carrying the cold refrigerant liquid and vapor mixture. That warm air evaporates the liquid part of the cold refrigerant mixture. At the same time, the circulating air is cooled and thus lowers the temperature of the enclosed space to the desired temperature. The evaporator is where the circulating refrigerant absorbs and removes heat which is subsequently rejected in the condenser and transferred elsewhere by the water or air used in the condenser.
To complete the refrigeration cycle , the refrigerant vapor from the evaporator is again a saturated vapor and is routed back into the compressor.
Over time, the evaporator may collect ice or water from ambient humidity . The ice is melted through a defrosting cycle. An internal heat exchanger is used either to heat or cool the interior air directly or to heat water that is then circulated through radiators or an underfloor heating circuit to heat or cool the building.
Heat input can be improved if the refrigerant enters the evaporator with a lower vapor content. This can be achieved by cooling the liquid refrigerant after condensation. The gaseous refrigerant condenses on the heat exchange surface of the condenser. To achieve a heat flow from the gaseous flow center to the wall of the condenser, the temperature of the liquid refrigerant must be lower than the condensation temperature.
Additional subcooling can be achieved by heat exchange between the relatively warm liquid refrigerant leaving the condenser and the cooler refrigerant vapor emerging from the evaporator. The enthalpy difference required for the subcooling leads to the superheating of the vapor drawn into the compressor. When the increase in cooling achieved by subcooling is greater than the compressor drive input required to overcome the additional pressure losses, such a heat exchange improves the coefficient of performance. [ 75 ]
One disadvantage of the subcooling of liquids is that the difference between the condensing temperature and the heat-sink temperature must be larger. This leads to a moderately high pressure difference between condensing and evaporating pressure, whereby the compressor energy increases. [ citation needed ]
Pure refrigerants can be divided into organic substances ( hydrocarbons (HCs), chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs), hydrofluorocarbons (HFCs), hydrofluoroolefins (HFOs), and HCFOs), and inorganic substances ( ammonia ( NH 3 ), carbon dioxide ( CO 2 ), and water ( H 2 O ) [ 76 ] ). [ 77 ] Their boiling points are usually below −25 °C. [ 78 ]
In the past 200 years, the standards and requirements for new refrigerants have changed. Nowadays low global warming potential (GWP) is required, in addition to all the previous requirements for safety, practicality, material compatibility, appropriate atmospheric life, [ clarification needed ] and compatibility with high-efficiency products. By 2022, devices using refrigerants with a very low GWP still have a small market share but are expected to play an increasing role due to enforced regulations, [ 79 ] as most countries have now ratified the Kigali Amendment to ban HFCs. [ 80 ] Isobutane (R600A) and propane (R290) are far less harmful to the environment than conventional hydrofluorocarbons (HFC) and are already being used in air-source heat pumps . [ 81 ] Propane may be the most suitable for high temperature heat pumps. [ 82 ] Ammonia (R717) and carbon dioxide ( R-744 ) also have a low GWP. As of 2023 [update] smaller CO 2 heat pumps are not widely available and research and development of them continues. [ 83 ] A 2024 report said that refrigerants with GWP are vulnerable to further international restrictions. [ 84 ]
Until the 1990s, heat pumps, along with fridges and other related products used chlorofluorocarbons (CFCs) as refrigerants, which caused major damage to the ozone layer when released into the atmosphere . Use of these chemicals was banned or severely restricted by the Montreal Protocol of August 1987. [ 85 ]
Replacements, including R-134a and R-410A , are hydrofluorocarbons (HFC) with similar thermodynamic properties with insignificant ozone depletion potential (ODP) but had problematic GWP. [ 86 ] HFCs are powerful greenhouse gases which contribute to climate change. [ 87 ] [ 88 ] Dimethyl ether (DME) also gained in popularity as a refrigerant in combination with R404a. [ 89 ] More recent refrigerants include difluoromethane (R32) with a lower GWP, but still over 600.
Devices with R-290 refrigerant (propane) are expected to play a key role in the future. [ 82 ] [ 93 ] The 100-year GWP of propane, at 0.02, is extremely low and is approximately 7000 times less than R-32. However, the flammability of propane requires additional safety measures: the maximum safe charges have been set significantly lower than for lower flammability refrigerants (only allowing approximately 13.5 times less refrigerant in the system than R-32). [ 94 ] [ 95 ] [ 96 ] This means that R-290 is not suitable for all situations or locations. Nonetheless, by 2022, an increasing number of devices with R-290 were offered for domestic use, especially in Europe. [ citation needed ]
At the same time, [ when? ] HFC refrigerants still dominate the market. Recent government mandates have seen the phase-out of R-22 refrigerant. Replacements such as R-32 and R-410A are being promoted as environmentally friendly but still have a high GWP. [ 97 ] A heat pump typically uses 3 kg of refrigerant. With R-32 this amount still has a 20-year impact equivalent to 7 tons of CO 2 , which corresponds to two years of natural gas heating in an average household. Refrigerants with a high ODP have already been phased out. [ citation needed ]
Financial incentives aim to protect consumers from high fossil gas costs and to reduce greenhouse gas emissions , [ 98 ] and are currently available in more than 30 countries around the world, covering more than 70% of global heating demand in 2021. [ 4 ]
Food processors, brewers, petfood producers and other industrial energy users are exploring whether it is feasible to use renewable energy to produce industrial-grade heat. Process heating accounts for the largest share of onsite energy use in Australian manufacturing, with lower-temperature operations like food production particularly well-suited to transition to renewables.
To help producers understand how they could benefit from making the switch, the Australian Renewable Energy Agency (ARENA) provided funding to the Australian Alliance for Energy Productivity (A2EP) to undertake pre-feasibility studies at a range of sites around Australia, with the most promising locations advancing to full feasibility studies. [ 99 ]
In an effort to incentivize energy efficiency and reduce environmental impact, the Australian states of Victoria, New South Wales, and Queensland have implemented rebate programs targeting the upgrade of existing hot water systems. These programs specifically encourage the transition from traditional gas or electric systems to heat pump based systems. [ 100 ] [ 101 ] [ 102 ] [ 103 ] [ 104 ]
In 2022, the Canada Greener Homes Grant [ 105 ] provides up to $5000 for upgrades (including certain heat pumps), and $600 for energy efficiency evaluations.
Purchase subsidies in rural areas in the 2010s reduced burning coal for heating, which had been causing ill health. [ 106 ]
In the 2024 report by the International Energy Agency (IEA) titled "The Future of Heat Pumps in China," it is highlighted that China, as the world's largest market for heat pumps in buildings, plays a critical role in the global industry. The country accounts for over one-quarter of global sales, with a 12% increase in 2023 alone, despite a global sales dip of 3% the same year. [ 107 ]
Heat pumps are now used in approximately 8% of all heating equipment sales for buildings in China as of 2022, and they are increasingly becoming the norm in central and southern regions for both heating and cooling. Despite their higher upfront costs and relatively low awareness, heat pumps are favored for their energy efficiency, consuming three to five times less energy than electric heaters or fossil fuel-based solutions. Currently, decentralized heat pumps installed in Chinese buildings represent a quarter of the global installed capacity, with a total capacity exceeding 250 GW, which covers around 4% of the heating needs in buildings. [ 107 ]
Under the Announced Pledges Scenario (APS), which aligns with China's carbon neutrality goals, the capacity is expected to reach 1,400 GW by 2050, meeting 25% of heating needs. This scenario would require an installation of about 100 GW of heat pumps annually until 2050. Furthermore, the heat pump sector in China employs over 300,000 people, with employment numbers expected to double by 2050, underscoring the importance of vocational training for industry growth. This robust development in the heat pump market is set to play a significant role in reducing direct emissions in buildings by 30% and cutting PM2.5 emissions from residential heating by nearly 80% by 2030. [ 107 ] [ 108 ]
To speed up the deployment rate of heat pumps, the European Commission launched the Heat Pump Accelerator Platform in November 2024. [ 109 ] It will encourage industry experts, policymakers, and stakeholders to collaborate, share best practices and ideas, and jointly discuss measures that promote sustainable heating solutions. [ 110 ]
Until 2027 fixed heat pumps have no Value Added Tax (VAT). [ 111 ] As of 2022 [update] the installation cost of a heat pump is more than that of a gas boiler, but with the "Boiler Upgrade Scheme" [ 112 ] government grant and assuming electricity/gas costs remain similar their lifetime costs would be similar on average. [ 113 ] However lifetime cost relative to a gas boiler varies considerably depending on several factors, such as the quality of the heat pump installation and the tariff used. [ 114 ] In 2024 England was criticised for still allowing new homes to be built with gas boilers, unlike some other countries where this is banned. [ 115 ]
The High-efficiency Electric Home Rebate Program was created in 2022 to award grants to State energy offices and Indian Tribes in order to establish state-wide high-efficiency electric-home rebates. Effective immediately, American households are eligible for a tax credit to cover the costs of buying and installing a heat pump, up to $2,000. Starting in 2023, low- and moderate-level income households will be eligible for a heat-pump rebate of up to $8,000. [ 116 ]
In 2022, more heat pumps were sold in the United States than natural gas furnaces. [ 117 ]
In November 2023 Biden's administration allocated 169 million dollars from the Inflation Reduction Act to speed up production of heat pumps. It used the Defense Production Act to do so, in a stated bid to advance national security. [ 118 ] | https://en.wikipedia.org/wiki/Heat_pump |
In combustion , the heat release parameter (or gas expansion parameter ) is a dimensionless parameter which measures the amount of heat released by an adiabatic combustion process. [ 1 ] [ 2 ] It is defined as
q = ( T b − T u ) / T u
where T u is the temperature of the unburnt gas and T b is the temperature of the burnt gas (the adiabatic flame temperature).
In a typical combustion process, q ≈ 2–7. For isobaric combustion, using the ideal gas law , the parameter can be expressed in terms of density , [ 3 ] i.e., q = ( ρ u − ρ b ) / ρ b , where ρ u and ρ b are the unburnt and burnt gas densities.
The ratio of burnt gas to unburnt gas temperature is T b / T u = 1 + q .
The gas expansion ratio is simply defined by r = ρ u / ρ b ,
which is related to q by r = 1 + q .
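A minimal sketch of these relations; the temperatures used are illustrative values, not figures from the cited sources.

```python
def heat_release_parameter(t_unburnt, t_burnt):
    """q = (T_b - T_u) / T_u for an adiabatic, isobaric flame (temperatures in kelvin)."""
    return (t_burnt - t_unburnt) / t_unburnt

q = heat_release_parameter(t_unburnt=300.0, t_burnt=1800.0)
print(q)        # 5.0, within the typical range q ~ 2-7
print(1.0 + q)  # gas expansion ratio r = rho_u / rho_b = T_b / T_u = 6.0
```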
This combustion article is a stub . You can help Wikipedia by expanding it .
This fluid dynamics –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Heat_release_parameter |
A heat sealer is a machine used to seal products , packaging , and other thermoplastic materials using heat . This can be with uniform thermoplastic monolayers or with materials having several layers, at least one being thermoplastic. Heat sealing can join two similar materials together or can join dissimilar materials, one of which has a thermoplastic layer.
Heat sealing is the process of sealing one thermoplastic to another similar thermoplastic using heat and pressure. [ 1 ] The direct contact method of heat sealing utilizes a constantly heated die or sealing bar to apply heat to a specific contact area or path to seal or weld the thermoplastics together. Heat sealing is used for many applications, including heat seal connectors, thermally activated adhesives, film media, plastic ports or foil sealing .
Heat seal connectors are used to join LCDs to PCBs in many consumer electronics , as well as in medical and telecommunication devices.
Heat sealing of products with thermal adhesives is used to hold clear display screens onto consumer electronic products and for other sealed thermo-plastic assemblies or devices where heat staking or ultrasonic welding are not an option due to part design requirements or other assembly considerations.
Heat sealing is also used in the manufacture of blood test film and of filter media for blood, virus and many other test strip devices used in the medical field today. Laminate foils and films are often heat sealed over the top of thermoplastic medical trays, Microtiter (microwell) plates, bottles and containers to seal them and/or prevent contamination of medical test devices, sample collection trays and containers used for food products.
Plastic bags and other packaging are often formed and sealed by heat sealers. Medical and fluid bags are used in the medical, bioengineering and food industries. Fluid bags are made from a multitude of materials such as foils, filter media, thermoplastics and laminates. [ citation needed ]
A type of heat sealer is also used to piece together plastic side panels for light-weight agricultural buildings such as greenhouses and sheds . This version is guided along the floor by four wheels.
Good seals are a result of time , temperature and pressure for the correct clean material. [ 6 ] [ 7 ] [ 8 ] Several standard test methods are available to measure the strength of heat seals. In addition, package testing is used to determine the ability of completed packages to withstand specified pressure or vacuum. Several methods are available to determine the ability of a sealed package to retain its integrity, barrier characteristics, and sterility.
Heat sealing processes can be controlled by a variety of quality management systems such as HACCP , statistical process control , ISO 9000 , etc. Verification and validation protocols are used to ensure that specifications are met and final materials/packages are suited for end-use. [ 9 ]
The efficacy of heat seals is often detailed in governing specifications , contracts , and regulations . Quality management systems sometimes ask for periodic subjective evaluations: For example, some seals can be evaluated by a simple pull to determine the existence of a bond and the mechanism of failure. With some plastic films, observation can be enhanced by using polarized light which highlights the birefringence of the heat seal. Some seals for sensitive products require thorough verification and validation protocols that use quantitative testing. Test methods might include:
Seal strength testing, also known as peel testing, measures the strength of seals within flexible barrier materials. This measurement can then be used to determine consistency within the seal, as well as evaluation of the opening force of the package system. Seal strength is a quantitative measure for use in process validation, process control and capability. Seal strength is not only relevant to opening force and package integrity, but to measuring the packaging processes’ ability to produce consistent seals.
The burst test is used to determine the strength of the package. The burst test is performed by pressurizing the package until it bursts. The results of the burst test include the burst pressure data and a description of where the seal failure occurred. This test method covers the burst test as defined in ASTM F1140. The creep test determines the package's ability to hold pressure for an extended period. The creep test is performed by setting the pressure at about 80% of the minimum burst pressure from a previous burst test. The time to seal failure, or a pre-set time, is measured.
Determination of package integrity . The package is submerged in a transparent container filled with a mixture of water and dye. Vacuum is created inside the container and maintained for a specific length of time. When the vacuum is released, any punctured packages will draw in dye revealing the imperfect seal. | https://en.wikipedia.org/wiki/Heat_sealer |
The heat shock response ( HSR ) is a cell stress response that increases the number of molecular chaperones to combat the negative effects on proteins caused by stressors such as increased temperatures , oxidative stress , and heavy metals . [ 1 ] In a normal cell, proteostasis (protein homeostasis) must be maintained because proteins are the main functional units of the cell. [ 2 ] Many proteins take on a defined configuration in a process known as protein folding in order to perform their biological functions. If these structures are altered, critical processes could be affected, leading to cell damage or death. [ 3 ] The heat shock response can be employed under stress to induce the expression of heat shock proteins (HSP), many of which are molecular chaperones, that help prevent or reverse protein misfolding and provide an environment for proper folding. [ 4 ]
Protein folding is already challenging due to the crowded intracellular space where aberrant interactions can arise; it becomes more difficult when environmental stressors can denature proteins and cause even more non-native folding to occur. [ 5 ] If the work by molecular chaperones is not enough to prevent incorrect folding, the protein may be degraded by the proteasome or autophagy to remove any potentially toxic aggregates. [ 6 ] Misfolded proteins, if left unchecked, can lead to aggregation that prevents the protein from moving into its proper conformation and eventually leads to plaque formation, which may be seen in various diseases. [ 7 ] Heat shock proteins induced by the HSR can help prevent protein aggregation that is associated with common neurodegenerative diseases such as Alzheimer's , Huntington's , or Parkinson's disease . [ 8 ]
With the introduction of environmental stressors, the cell must be able to maintain proteostasis. Acute or chronic subjection to these harmful conditions elicits a cytoprotective response to promote stability to the proteome. [ 9 ] HSPs (e.g. HSP70 , HSP90 , HSP60 , etc.) are present under normal conditions but under heat stress, they are upregulated by the transcription factor heat shock factor 1 ( HSF1 ). [ 10 ] [ 11 ] There are four different transcription factors found in vertebrates (HSF 1–4) where the main regulator of HSPs is HSF1, while σ 32 is the heat shock transcription factor in E. coli. [ 12 ] [ 13 ] When not bound to DNA, HSF1 is in a monomeric state where it is inactive and negatively regulated by chaperones. [ 14 ] When a stress occurs, these chaperones are released due to the presence of denatured proteins and various conformational changes to HSF1 cause it to undergo nuclear localization where it becomes active through trimerization. [ 15 ] [ 14 ] Newly trimerized HSF1 will bind to heat shock elements (HSE) located in promoter regions of different HSPs to activate transcription of HSP mRNA. The mRNA will eventually be translated into the upregulated HSPs that can alleviate the stress at hand and restore proteostasis. [ 12 ] HSF1 will also regulate expression of HSPs through epigenetic modifications. The HSR will eventually attenuate as HSF1 returns to its monomeric form, negatively regulated through association with HSP70 and HSP90 along with additional post-translational modifications. [ 16 ] The HSR is not only involved with increasing transcription levels of HSPs; other facets include stress-induced mRNA stability preventing errors in mRNA and enhanced control during translation to thwart misfolding. [ 17 ]
Molecular chaperones are typically referred to as proteins that associate with and help other proteins reach a native conformation while not being present in the end state. [ 18 ] Chaperones bind to their substrate (i.e. a misfolded protein) in an ATP-dependent manner to perform a specific function. [ 19 ] Exposed hydrophobic residues are a major problem with regards to protein aggregation because they can interact with one another and form hydrophobic interactions. [ 20 ] It is the job of chaperones to prevent this aggregation by binding to the residues or providing proteins a "safe" environment to fold properly. [ 21 ] Heat shock proteins are also believed to play a role in the presentation of pieces of proteins (or peptides ) on the cell surface to help the immune system recognize diseased cells. [ 22 ] The major HSPs involved in the HSR include HSP70, HSP90, and HSP60. [ 5 ] Chaperones include the HSP70s and HSP90s while HSP60s are considered to be chaperonins. [ 17 ]
The HSP70 chaperone family is the main HSP system within cells, playing a key role in translation, post-translation, prevention of aggregates and refolding of aggregated proteins. [ 23 ] When a nascent protein is being translated, HSP70 is able to associate with the hydrophobic regions of the protein to prevent faulty interactions until translation is complete. [ 24 ] Post-translational protein folding occurs in a cycle where the protein becomes bound/released from the chaperone allowing burying hydrophobic groups and aiding in overcoming the energy needed to fold in a timely fashion. [ 25 ] HSP70 plays a part in de-aggregating proteins using the aforementioned mechanism; the chaperone will bind to exposed hydrophobic residues and either partially or fully disassemble the protein, allowing HSP70 to assist in the proper refolding. [ 26 ] When proteins are beyond the point of refolding, HSP70s can help direct these potentially toxic aggregates to be degraded by the proteasome or through autophagy. [ 27 ] HSP90s are parallel to HSP70s with respect to the refolding of proteins and their use in protein clearance. [ 4 ] One difference between the two HSPs is HSP90s ability to keep proteins in an unfolded yet stable configuration until a signal causes the protein to translocate and complete its folding. [ 24 ]
Sometimes, HSP70 is unable to effectively aid a protein in reaching its final 3-D structure; the main reason is that the thermodynamic barriers for folding are too high for the chaperone to meet. [ 23 ] Because the intracellular space is very crowded, sometimes proteins need an isolated space to prevent aberrant interactions with other proteins, which is provided by chaperonins or HSP60s . [ 7 ] HSP60s are barrel shaped and suited to bind to the hydrophobic residues of proteins. [ 28 ] Once a cap binds to the chaperonin, the protein is free within the barrel to undergo hydrophobic collapse and reach a stable conformation. [ 29 ] Once the cap is removed, the protein can either be correctly folded and move on to perform its function or return to a HSP if it is still not folded accurately. [ 30 ] These chaperones function to remove aggregation and significantly speed up protein folding. [ 20 ]
Discovery of the heat shock response is attributed to Italian geneticist Ferruccio Ritossa , who observed changes called chromosomal "puffs" in response to heat exposure while working with the polytene chromosomes of Drosophila . [ 31 ] [ 32 ] By his own account, the discovery was the serendipitous result of unintentional elevated temperature in a laboratory incubator. [ 33 ] Ritossa's observations, reported in 1962, [ 34 ] were later described as "the first known environmental stress acting directly on gene activity" [ 31 ] but were not initially widely cited. [ 31 ] [ 35 ] The significance of these observations became clearer in the 1970s, as a distinct class of heat shock proteins were discovered in the laboratory of Herschel K. Mitchell , [ 36 ] and as heat shock responses were reported in other organisms and came to be recognized as universal. [ 31 ] [ 35 ] [ 37 ] | https://en.wikipedia.org/wiki/Heat_shock_response |
A heat sink (also commonly spelled heatsink ) is a passive heat exchanger that transfers the heat generated by an electronic or a mechanical device to a fluid medium, often air or a liquid coolant, where it is dissipated away from the device, thereby allowing regulation of the device's temperature. In computers, heat sinks are used to cool CPUs , GPUs , and some chipsets and RAM modules. Heat sinks are used with other high-power semiconductor devices such as power transistors and optoelectronics such as lasers and light-emitting diodes (LEDs), where the heat dissipation ability of the component itself is insufficient to moderate its temperature.
A heat sink is designed to maximize its surface area in contact with the cooling medium surrounding it, such as the air. Air velocity, choice of material, protrusion design and surface treatment are factors that affect the performance of a heat sink. Heat sink attachment methods and thermal interface materials also affect the die temperature of the integrated circuit. Thermal adhesive or thermal paste improve the heat sink's performance by filling air gaps between the heat sink and the heat spreader on the device. A heat sink is usually made out of a material with a high thermal conductivity , such as aluminium or copper.
A heat sink transfers thermal energy from a higher-temperature device to a lower-temperature fluid medium. The fluid medium is frequently air, but can also be water, refrigerants, or even oil. If the fluid medium is water, the heat sink is frequently called a cold plate. In thermodynamics a heat sink is a heat reservoir that can absorb an arbitrary amount of heat without significantly changing temperature. Practical heat sinks for electronic devices must have a temperature higher than the surroundings to transfer heat by convection, radiation, and conduction. The power supplies of electronics are not absolutely efficient, so extra heat is produced that may be detrimental to the function of the device. As such, a heat sink is included in the design to disperse heat.
Fourier's law of heat conduction shows that when there is a temperature gradient in a body, heat will be transferred from the higher-temperature region to the lower-temperature region. The rate at which heat is transferred by conduction, q k , is proportional to the product of the temperature gradient and the cross-sectional area through which heat is transferred. When it is simplified to a one-dimensional form in the x direction, it can be expressed as: q k = − k A ( dT / dx ), where k is the thermal conductivity of the material and A is the cross-sectional area.
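A small sketch of the one-dimensional form follows; the plate dimensions, conductivity and temperature drop are made-up illustrative values.

```python
def conduction_rate(k, area, dT, dx):
    """Steady 1-D conduction: heat flow in watts through a slab of conductivity k
    (W/(m*K)) and cross-section area (m^2), with temperature drop dT (K) over thickness dx (m)."""
    return k * area * dT / dx

# 40 mm x 40 mm aluminium heat-sink base, 5 mm thick, 2 K drop across it
print(conduction_rate(k=200.0, area=0.04 * 0.04, dT=2.0, dx=0.005))  # ~128 W
```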
For a heat sink in a duct, where air flows through the duct, the heat-sink base will usually be hotter than the air flowing through the duct. Applying the conservation of energy, for steady-state conditions, and Newton's law of cooling to the temperature nodes shown in the diagram gives the following set of equations:
where
Using the mean air temperature is an assumption that is valid for relatively short heat sinks. When compact heat exchangers are calculated, the logarithmic mean air temperature is used.
The above equations show that:
Natural convection requires free flow of air over the heat sink. If fins are not aligned vertically, or if fins are too close together to allow sufficient air flow between them, the efficiency of the heat sink will decline.
For semiconductor devices used in a variety of consumer and industrial electronics, the idea of thermal resistance simplifies the selection of heat sinks. The heat flow between the semiconductor die and ambient air is modeled as a series of resistances to heat flow; there is a resistance from the die to the device case, from the case to the heat sink, and from the heat sink to the ambient air. The sum of these resistances is the total thermal resistance from the die to the ambient air. Thermal resistance is defined as temperature rise per unit of power, analogous to electrical resistance, and is expressed in units of degrees Celsius per watt (°C/W). If the device dissipation in watts is known, and the total thermal resistance is calculated, the temperature rise of the die over the ambient air can be calculated.
The idea of thermal resistance of a semiconductor heat sink is an approximation. It does not take into account non-uniform distribution of heat over a device or heat sink. It only models a system in thermal equilibrium and does not take into account the change in temperatures with time. Nor does it reflect the non-linearity of radiation and convection with respect to temperature rise. However, manufacturers tabulate typical values of thermal resistance for heat sinks and semiconductor devices, which allows selection of commercially manufactured heat sinks to be simplified. [ 2 ]
Commercial extruded aluminium heat sinks have a thermal resistance (heat sink to ambient air) ranging from 0.4 °C/W for a large sink meant for TO-3 devices, up to as high as 85 °C/W for a clip-on heat sink for a TO-92 small plastic case. [ 2 ] The popular 2N3055 power transistor in a TO-3 case has an internal thermal resistance from junction to case of 1.52 °C/W . [ 3 ] The contact between the device case and heat sink may have a thermal resistance between 0.5 and 1.7 °C/W , depending on the case size and use of grease or insulating mica washer. [ 2 ]
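The sketch below strings together the example resistances quoted above for a TO-3 device on a large extruded heat sink; the 20 W dissipation, 25 °C ambient and 1.0 °C/W mounting interface are assumptions added for illustration, not figures from the cited sources.

```python
def junction_temperature(t_ambient, power, r_jc, r_cs, r_sa):
    """Junction temperature (degC) from dissipated power (W) and the series of
    junction-to-case, case-to-sink and sink-to-ambient resistances (degC/W)."""
    return t_ambient + power * (r_jc + r_cs + r_sa)

# 2N3055 in a TO-3 package: Rjc = 1.52 degC/W (figure quoted above),
# mounting interface assumed 1.0 degC/W, large extruded heat sink ~0.4 degC/W
print(junction_temperature(t_ambient=25.0, power=20.0, r_jc=1.52, r_cs=1.0, r_sa=0.4))
# ~83 degC for this assumed 20 W of dissipation
```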
The materials for heat sink applications should have high heat capacity and thermal conductivity in order to absorb more heat energy without shifting towards a very high temperature and transmit it to the environment for efficient cooling. [ 4 ] The most common heat sink materials are aluminium alloys . [ 5 ] Aluminium alloy 1050 has one of the higher thermal conductivity values at 229 W/(m·K) and heat capacity of 922 J/(kg·K), [ 6 ] but is mechanically soft. Aluminium alloys 6060 (low-stress), 6061 , and 6063 are commonly used, with thermal conductivity values of 166 and 201 W/(m·K) respectively. The values depend on the temper of the alloy. One-piece aluminium heat sinks can be made by extrusion , casting , skiving or milling .
Copper has excellent heat-sink properties in terms of its thermal conductivity, corrosion resistance, resistance to biofouling, and antimicrobial properties (see also Copper in heat exchangers ). Copper has around twice the thermal conductivity of aluminium, around 400 W/(m·K) for pure copper. Its main applications are in industrial facilities, power plants, solar thermal water systems, HVAC systems, gas water heaters, forced air heating and cooling systems, geothermal heating and cooling, and electronic systems.
Copper is about three times as dense [ 5 ] and more expensive than aluminium, and is less ductile than aluminium. [ 5 ] One-piece copper heat sinks can be made by skiving or milling . Sheet-metal fins can be soldered onto a rectangular copper body. [ 7 ] [ 8 ]
Fin efficiency is one of the parameters that makes a higher-thermal-conductivity material important. A fin of a heat sink may be considered to be a flat plate with heat flowing in at one end and being dissipated into the surrounding fluid as it travels to the other. [ 9 ] As heat flows through the fin, the thermal resistance of the heat sink impedes the flow while heat is lost to convection, so the temperature of the fin, and therefore the heat transfer to the fluid, decreases from the base to the tip of the fin. Fin efficiency is defined as the actual heat transferred by the fin divided by the heat that would be transferred if the fin were isothermal (hypothetically, a fin with infinite thermal conductivity). These equations are applicable for straight fins: [ 10 ]
where
Fin efficiency is increased by decreasing the fin aspect ratio (making them thicker or shorter), or by using a more conductive material (copper instead of aluminium, for example).
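The fin-efficiency relations themselves are not reproduced above, so the sketch below uses the standard straight rectangular fin result η = tanh(mL_c)/(mL_c) with m = √(2h/(k·t)); this closed form is an assumption standing in for the cited equations, and the geometry and heat transfer coefficient are illustrative. It also shows numerically why a more conductive material raises fin efficiency.

```python
import math

# Standard straight rectangular fin efficiency (assumed form, not necessarily
# the exact relations cited in the text):
#   m   = sqrt(2*h / (k*t))   for a thin fin of thickness t
#   L_c = L + t/2             corrected length (adiabatic tip correction)
#   eta = tanh(m*L_c) / (m*L_c)

h = 25.0    # convective heat transfer coefficient, W/(m^2*K) (assumed)
t = 0.0015  # fin thickness, m (assumed)
L = 0.025   # fin height from base to tip, m (assumed)

def fin_efficiency(k):
    """Efficiency of a straight fin of conductivity k, W/(m*K)."""
    m = math.sqrt(2.0 * h / (k * t))
    L_c = L + t / 2.0
    return math.tanh(m * L_c) / (m * L_c)

for name, k in [("aluminium, k = 200 W/(m*K)", 200.0),
                ("copper,    k = 400 W/(m*K)", 400.0)]:
    print(f"{name}: fin efficiency = {fin_efficiency(k):.3f}")
```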
Another parameter that concerns the thermal conductivity of the heat-sink material is spreading resistance. Spreading resistance occurs when thermal energy is transferred from a small area to a larger area in a substance with finite thermal conductivity. In a heat sink, this means that heat does not distribute uniformly through the heat-sink base. The spreading resistance phenomenon is shown by how the heat travels from the heat source location and causes a large temperature gradient between the heat source and the edges of the heat sink. This means that some fins are at a lower temperature than if the heat source were uniform across the base of the heat sink. This nonuniformity increases the heat sink's effective thermal resistance.
To decrease the spreading resistance in the base of a heat sink:
A pin fin heat sink is a heat sink that has pins that extend from its base. The pins can be cylindrical, elliptical, or square. A second type of heat sink fin arrangement is the straight fin. A variation on the straight fin heat sink is a cross-cut heat sink. A third type of heat sink is the flared fin heat sink, where the fins are not parallel to one another. Flaring the fins decreases flow resistance and makes more air go through the heat-sink fin channel; otherwise, more air would bypass the fins. Slanting them keeps the overall dimensions the same, but offers longer fins. Examples of the three types are shown in the image on the right.
Forghan, et al. [ 11 ] have published data on tests conducted on pin fin, straight fin, and flared fin heat sinks. They found that for low air approach velocities, typically around 1 m/s, flared fin heat sinks perform at least 20% better than straight fin heat sinks. Lasance and Eggink [ 12 ] also found that, for the bypass configurations they tested, the flared heat sink performed better than the other heat sinks tested.
Generally, the more surface area a heat sink has, the better its performance. [ 1 ] Real-world performance depends on the design and application. The concept of a pin fin heat sink is to pack as much surface area into a given volume as possible, while working in any orientation of fluid flow. [ 1 ] Kordyban [ 1 ] has compared the performance of a pin-fin and a straight-fin heat sink of similar dimensions. Although the pin-fin has 194 cm² of surface area while the straight-fin has 58 cm², the temperature difference between the heat-sink base and the ambient air for the pin fin is 50 °C , but for the straight-fin it was 44 °C, or 6 °C better than the pin fin. Pin fin heat sink performance is significantly better than straight fins when used in their optimal application, where the fluid flows axially along the pins rather than only tangentially across the pins.
Cavities (inverted fins) embedded in a heat source are the regions formed between adjacent fins that act as the essential promoters of nucleate boiling or condensation. These cavities are usually used to extract heat from a variety of heat-generating bodies to a heat sink. [ 13 ] [ 14 ]
Placing a conductive thick plate as a heat-transfer interface between a heat source and a cold flowing fluid (or any other heat sink) may improve the cooling performance. In this arrangement, the heat source is cooled under the thick plate instead of being in direct contact with the cooling fluid. It is shown [ citation needed ] that the thick plate can significantly improve the heat transfer between the heat source and the cooling fluid by conducting the heat current in an optimal manner. The two most attractive advantages of this method are that it requires no additional pumping power and no extra heat-transfer surface area, which distinguishes it from fins (extended surfaces).
The heat transfer from the heat sink occurs by convection of the surrounding air, conduction through the air, and radiation .
Heat transfer by radiation is a function of both the heat-sink temperature and the temperature of the surroundings that the heat sink is optically coupled with. When both of these temperatures are on the order of 0 °C to 100 °C, the contribution of radiation compared to convection is generally small, and this factor is often neglected. In this case, finned heat sinks operating in either natural-convection or forced-flow will not be affected significantly by surface emissivity .
In situations where convection is low, such as a flat non-finned panel with low airflow, radiative cooling can be a significant factor. Here the surface properties may be an important design factor. Matte-black surfaces radiate much more efficiently than shiny bare metal. [ 15 ] [ 16 ] A shiny metal surface has low emissivity. The emissivity of a material is strongly frequency-dependent and is related to absorptivity (of which shiny metal surfaces have very little). For most materials, the emissivity in the visible spectrum is similar to the emissivity in the infrared spectrum; [ citation needed ] however, there are exceptions – notably, certain metal oxides that are used as " selective surfaces ".
In vacuum or outer space , there is no convective heat transfer, thus in these environments, radiation is the only factor governing heat flow between the heat sink and the environment. For a satellite in space, a 100 °C (373 K) surface facing the Sun will absorb a lot of radiant heat, because the Sun 's surface temperature is nearly 6000 K, whereas the same surface facing deep space will radiate a lot of heat, since deep space has an effective temperature of only several Kelvin.
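To make the radiation-only case concrete, the sketch below estimates the equilibrium temperature of a thin plate in vacuum that absorbs sunlight on one face and radiates from both faces to deep space (taken as 0 K for simplicity); the solar irradiance is the commonly used near-Earth value, and the absorptivity and emissivity are assumed.

```python
# Radiation-only energy balance for a thin plate in space (no convection):
#   absorbed = alpha * G_sun * A          (sunlit face)
#   emitted  = eps * sigma * T^4 * 2 * A  (both faces radiate to deep space)
# Setting absorbed = emitted and solving for T. Surface properties are
# illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)
G_SUN = 1361.0    # solar irradiance near Earth, W/m^2
alpha = 0.9       # solar absorptivity of the surface (assumed)
eps = 0.9         # infrared emissivity of the surface (assumed)

t_equilibrium = (alpha * G_SUN / (2.0 * eps * SIGMA)) ** 0.25
print(f"Equilibrium plate temperature: {t_equilibrium:.0f} K "
      f"({t_equilibrium - 273.15:.0f} degC)")
```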
Heat dissipation is an unavoidable by-product of electronic devices and circuits. [ 9 ] In general, the temperature of the device or component will depend on the thermal resistance from the component to the environment, and the heat dissipated by the component. To ensure that the component does not overheat , a thermal engineer seeks to find an efficient heat transfer path from the device to the environment. The heat transfer path may be from the component to a printed circuit board (PCB), to a heat sink, to air flow provided by a fan, but in all instances, eventually to the environment.
Two additional design factors also influence the thermal/mechanical performance of the thermal design:
As power dissipation of components increases and component package size decreases, thermal engineers must innovate to ensure components won't overheat . Devices that run cooler last longer. A heat sink design must fulfill both its thermal as well as its mechanical requirements. Concerning the latter, the component must remain in thermal contact with its heat sink with reasonable shock and vibration. The heat sink could be the copper foil of a circuit board, or a separate heat sink mounted onto the component or circuit board. Attachment methods include thermally conductive tape or epoxy, wire-form z clips , flat spring clips, standoff spacers, and push pins with ends that expand after installing.
Thermally conductive tape is one of the most cost-effective heat sink attachment materials. [ 17 ] It is suitable for low-mass heat sinks and for components with low power dissipation. It consists of a thermally conductive carrier material with a pressure-sensitive adhesive on each side.
This tape is applied to the base of the heat sink, which is then attached to the component. Following are factors that influence the performance of thermal tape: [ 17 ]
Epoxy is more expensive than tape, but provides a greater mechanical bond between the heat sink and component, as well as improved thermal conductivity. [ 17 ] The epoxy chosen must be formulated for this purpose. Most epoxies are two-part liquid formulations that must be thoroughly mixed before being applied to the heat sink, and before the heat sink is placed on the component. The epoxy is then cured for a specified time, which can vary from 2 hours to 48 hours. Faster cure time can be achieved at higher temperatures. The surfaces to which the epoxy is applied must be clean and free of any residue.
The epoxy bond between the heat sink and component is semi-permanent/permanent. [ 17 ] This makes re-work very difficult and at times impossible. The most typical damage caused by rework is the separation of the component die heat spreader from its package.
More expensive than tape and epoxy, wire form z-clips attach heat sinks mechanically. To use the z-clips, the printed circuit board must have anchors. Anchors can be either soldered onto the board, or pushed through. Either type requires holes to be designed into the board. The use of RoHS solder must be allowed for because such solder is mechanically weaker than traditional Pb/Sn solder.
To assemble with a z-clip , attach one side of it to one of the anchors. Deflect the spring until the other side of the clip can be placed in the other anchor. The deflection develops a spring load on the component, which maintains very good contact. In addition to the mechanical attachment that the z-clip provides, it also permits using higher-performance thermal interface materials, such as phase change types. [ 17 ]
Available for processors and ball grid array (BGA) components, clips allow the attachment of a BGA heat sink directly to the component. The clips make use of the gap created by the ball grid array (BGA) between the component underside and PCB top surface. The clips therefore require no holes in the PCB. They also allow for easy rework of components.
For larger heat sinks and higher preloads, push pins with compression springs are very effective. [ 17 ] The push pins, typically made of brass or plastic, have a flexible barb at the end that engages with a hole in the PCB; once installed, the barb retains the pin. The compression spring holds the assembly together and maintains contact between the heat sink and component. Care is needed in selection of push pin size. Too great an insertion force can result in the die cracking and consequent component failure.
For very large heat sinks, there is no substitute for the threaded standoff and compression spring attachment method. [ 17 ] A threaded standoff is essentially a hollow metal tube with internal threads. One end is secured with a screw through a hole in the PCB. The other end accepts a screw which compresses the spring, completing the assembly. A typical heat sink assembly uses two to four standoffs, which tends to make this the most costly heat sink attachment design. Another disadvantage is the need for holes in the PCB.
Thermal contact resistance occurs due to the voids created by surface roughness effects, defects and misalignment of the interface. The voids present in the interface are filled with air. Heat transfer is therefore due to conduction across the actual contact area and to conduction (or natural convection) and radiation across the gaps. [ 10 ] If the contact area is small, as it is for rough surfaces, the major contribution to the resistance is made by the gaps. [ 10 ] To decrease the thermal contact resistance, the surface roughness can be decreased while the interface pressure is increased. However, these improving methods are not always practical or possible for electronic equipment. Thermal interface materials (TIM) are a common way to overcome these limitations.
Properly applied thermal interface materials displace the air that is present in the gaps between the two objects with a material that has a much-higher thermal conductivity. Air has a thermal conductivity of 0.022 W/(m·K) [ 18 ] while TIMs have conductivities of 0.3 W/(m·K) [ 19 ] and higher.
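Using the conductivities quoted above, the sketch below compares the conduction resistance of a thin interface gap filled with air against the same gap filled with a typical TIM; the gap thickness and contact area are assumptions, and real interfaces also involve direct metal-to-metal contact, which this simple model ignores.

```python
# Interface resistance of a gap modeled as pure conduction: R = t / (k * A).
# Conductivities are those quoted in the text; gap thickness and contact area
# are illustrative assumptions.

gap_thickness = 50e-6        # 50 micrometre interface gap, m (assumed)
contact_area = 0.03 * 0.03   # 30 mm x 30 mm contact patch, m^2 (assumed)

k_air = 0.022  # W/(m*K), thermal conductivity of air
k_tim = 0.3    # W/(m*K), lower end of typical TIM conductivity

def gap_resistance(k):
    """Thermal resistance of the interface gap in K/W, conduction only."""
    return gap_thickness / (k * contact_area)

print(f"Air-filled gap: {gap_resistance(k_air):.2f} K/W")
print(f"TIM-filled gap: {gap_resistance(k_tim):.2f} K/W")
```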
When selecting a TIM, care must be taken with the values supplied by the manufacturer. Most manufacturers give a value for the thermal conductivity of a material. However, the thermal conductivity does not take into account the interface resistances. Therefore, if a TIM has a high thermal conductivity, it does not necessarily mean that the interface resistance will be low.
Selection of a TIM is based on three parameters: the interface gap which the TIM must fill, the contact pressure, and the electrical resistivity of the TIM. The contact pressure is the pressure applied to the interface between the two materials. The selection does not include the cost of the material. Electrical resistivity may be important depending upon electrical design details.
Light-emitting diode (LED) performance and lifetime are strong functions of their temperature. [ 20 ] Effective cooling is therefore essential. A case study of an LED-based downlighter shows an example of the calculations needed to determine the heat sink required for effective cooling of the lighting system. [ 21 ] The article also shows that, to gain confidence in the results, multiple independent solutions are required that give similar results. Specifically, results of the experimental, numerical and theoretical methods should all be within 10% of each other to give high confidence in the results.
Temporary heat sinks are sometimes used while soldering circuit boards, preventing excessive heat from damaging sensitive nearby electronics. In the simplest case, this means partially gripping a component using a heavy metal crocodile clip, hemostat , or similar clamp. Modern semiconductor devices, which are designed to be assembled by reflow soldering, can usually tolerate soldering temperatures without damage. On the other hand, electrical components such as magnetic reed switches can malfunction if exposed to hotter soldering irons, so this practice is still very much in use. [ 22 ]
In general, a heat sink's performance is a function of material thermal conductivity, dimensions, fin type, heat transfer coefficient , air flow rate, and duct size. To determine the thermal performance of a heat sink, a theoretical model can be made. Alternatively, the thermal performance can be measured experimentally. Due to the complex nature of the highly 3D flow in present applications, numerical methods or computational fluid dynamics (CFD) can also be used. This section will discuss the aforementioned methods for the determination of the heat sink thermal performance.
One of the methods to determine the performance of a heat sink is to use heat transfer and fluid dynamics theory. One such method has been published by Jeggels, et al., [ 23 ] though this work is limited to ducted flow. Ducted flow is where the air is forced to flow through a channel which fits tightly over the heat sink. This makes sure that all the air goes through the channels formed by the fins of the heat sink. When the air flow is not ducted, a certain percentage of air flow will bypass the heat sink. Flow bypass was found to increase with increasing fin density and clearance, while remaining relatively insensitive to inlet duct velocity. [ 24 ]
The heat sink thermal resistance model consists of two resistances, namely the resistance in the heat sink base, R_b, and the resistance in the fins, R_f. The heat sink base thermal resistance, R_b, can be written as follows if the heat source is uniformly applied to the heat sink base; if it is not, then the base resistance is primarily spreading resistance:
where t_b is the heat sink base thickness, k is the heat sink material thermal conductivity and A_b is the area of the heat sink base.
The thermal resistance from the base of the fins to the air, R_f, can be calculated by the following formulas:
The flow rate can be determined by the intersection of the heat sink system curve and the fan curve. The heat sink system curve can be calculated by the flow resistance of the channels and inlet and outlet losses as done in standard fluid mechanics text books, such as Potter, et al. [ 26 ] and White. [ 27 ]
Once the heat sink base and fin resistances are known, then the heat sink thermal resistance, R_hs, can be calculated as the sum of the two: R_hs = R_b + R_f.
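A minimal sketch of this two-resistance model follows. The base term uses R_b = t_b/(k·A_b) as defined above; the fin-to-air term is approximated by a simple convective expression R_f ≈ 1/(h·η·A_fin), which is an assumption standing in for the source's own fin formulas. All dimensions, the heat transfer coefficient h and the fin efficiency η are illustrative.

```python
# Two-resistance heat sink model: R_hs = R_b + R_f.
#   R_b = t_b / (k * A_b)         base conduction resistance (as in the text)
#   R_f ~ 1 / (h * eta * A_fin)   assumed simple convective fin resistance;
#                                 the source's own fin formulas are not
#                                 reproduced here.
# Geometry, h and eta are illustrative assumptions.

k = 200.0           # heat sink conductivity, W/(m*K) (aluminium, assumed)
t_b = 0.005         # base thickness, m
A_b = 0.05 * 0.05   # base footprint area, m^2

h = 30.0            # convective coefficient, W/(m^2*K) (assumed, forced air)
eta = 0.95          # fin efficiency (assumed)
n_fins = 10
fin_area = 2 * 0.05 * 0.03      # both sides of a 50 mm x 30 mm fin, m^2
A_fin_total = n_fins * fin_area

R_b = t_b / (k * A_b)
R_f = 1.0 / (h * eta * A_fin_total)
R_hs = R_b + R_f
print(f"R_b = {R_b:.4f} K/W, R_f = {R_f:.3f} K/W, R_hs = {R_hs:.3f} K/W")
```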
Using equations 5 to 13 and the dimensional data in [ 23 ], the thermal resistance for the fins was calculated for various air flow rates. The data for the thermal resistance and heat transfer coefficient are shown in the diagram, which shows that for an increasing air flow rate, the thermal resistance of the heat sink decreases.
Experimental tests are one of the more popular ways to determine the heat sink thermal performance. In order to determine the heat sink thermal resistance, the flow rate, input power, inlet air temperature and heat sink base temperature need to be known. Vendor-supplied data is commonly provided for ducted test results. [ 28 ] However, the results are optimistic and can give misleading data when heat sinks are used in an unducted application. More details on heat sink testing methods and common oversights can be found in Azar, et al. [ 28 ]
In industry, thermal analyses are often ignored in the design process or performed too late, when design changes are limited and become too costly. [ 9 ] Of the three methods mentioned in this article, theoretical and numerical methods can be used to determine an estimate of the heat sink or component temperatures of products before a physical model has been made. A theoretical model is normally used as a first-order estimate. Online heat sink calculators [ 29 ] can provide a reasonable estimate of forced and natural convection heat sink performance based on a combination of theoretical and empirically derived correlations. Numerical methods or computational fluid dynamics (CFD) provide a qualitative (and sometimes even quantitative) prediction of fluid flows. [ 30 ] [ 31 ] This means that a simulation will give a visual or post-processed result, like the images in figures 16 and 17 and the CFD animations in figures 18 and 19, but the quantitative or absolute accuracy of the result is sensitive to the inclusion and accuracy of the appropriate parameters.
CFD can give an insight into flow patterns that are difficult, expensive or impossible to study using experimental methods. [ 30 ] Experiments can give a quantitative description of flow phenomena using measurements for one quantity at a time, at a limited number of points and time instances. If a full-scale model is not available or not practical, scale models or dummy models can be used. The experiments can have a limited range of problems and operating conditions. Simulations can give a prediction of flow phenomena using CFD software for all desired quantities, with high resolution in space and time and virtually any problem and realistic operating conditions. However, if critical, the results may need to be validated. [ 1 ] | https://en.wikipedia.org/wiki/Heat_sink |
Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy ( heat ) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction , thermal convection , thermal radiation , and transfer of energy by phase changes . Engineers also consider the transfer of mass of differing chemical species (mass transfer in the form of advection ), either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system.
Heat conduction, also called diffusion, is the direct microscopic exchanges of kinetic energy of particles (such as molecules) or quasiparticles (such as lattice waves) through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium . Such spontaneous heat transfer always occurs from a region of high temperature to another region of lower temperature, as described in the second law of thermodynamics .
Heat convection occurs when the bulk flow of a fluid (gas or liquid) carries its heat through the fluid. All convective processes also move heat partly by diffusion, as well. The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". The former process is often called "forced convection." In this case, the fluid is forced to flow by use of a pump, fan, or other mechanical means.
Thermal radiation occurs through a vacuum or any transparent medium ( solid or fluid or gas ). It is the transfer of energy by means of photons or electromagnetic waves governed by the same laws. [ 1 ]
Heat transfer is the energy exchanged between materials (solid/liquid/gas) as a result of a temperature difference. The thermodynamic free energy is the amount of work that a thermodynamic system can perform. Enthalpy is a thermodynamic potential , designated by the letter "H", that is the sum of the internal energy of the system (U) plus the product of pressure (P) and volume (V). Joule is a unit to quantify energy , work, or the amount of heat. [ 2 ]
Heat transfer is a process function (or path function), as opposed to functions of state ; therefore, the amount of heat transferred in a thermodynamic process that changes the state of a system depends on how that process occurs, not only the net difference between the initial and final states of the process.
Thermodynamic and mechanical heat transfer is calculated with the heat transfer coefficient , the proportionality between the heat flux and the thermodynamic driving force for the flow of heat. Heat flux is a quantitative, vectorial representation of heat flow through a surface. [ 3 ]
In engineering contexts, the term heat is taken as synonymous with thermal energy. This usage has its origin in the historical interpretation of heat as a fluid ( caloric ) that can be transferred by various causes, [ 4 ] and that is also common in the language of laymen and everyday life.
The transport equations for thermal energy ( Fourier's law ), mechanical momentum ( Newton's law for fluids ), and mass transfer ( Fick's laws of diffusion ) are similar, [ 5 ] [ 6 ] and analogies among these three transport processes have been developed to facilitate the prediction of conversion from any one to the others. [ 6 ]
Thermal engineering concerns the generation, use, conversion, storage, and exchange of heat transfer. As such, heat transfer is involved in almost every sector of the economy. [ 7 ] Heat transfer is classified into various mechanisms, such as thermal conduction , thermal convection , thermal radiation , and transfer of energy by phase changes .
The fundamental modes of heat transfer are:
By transferring matter, energy—including thermal energy—is moved by the physical transfer of a hot or cold object from one place to another. This can be as simple as placing hot water in a bottle to heat a bed, or the movement of an iceberg in changing ocean currents. A practical example is thermal hydraulics . This can be described by the formula ϕ_q = v ρ c_p ΔT, where ϕ_q is the heat flux, v is the fluid velocity, ρ is the density, c_p is the specific heat capacity at constant pressure, and ΔT is the temperature difference carried by the stream.
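The sketch below evaluates this advective flux for a stream of water; the fluid properties are those of water near room temperature, while the velocity and temperature difference are assumed.

```python
# Advective heat flux: phi_q = v * rho * c_p * dT.
# Water properties are approximate room-temperature values; velocity and
# temperature difference are illustrative assumptions.

v = 0.5       # bulk flow velocity, m/s (assumed)
rho = 998.0   # density of water, kg/m^3
c_p = 4186.0  # specific heat of water, J/(kg*K)
dT = 20.0     # temperature difference carried by the stream, K (assumed)

phi_q = v * rho * c_p * dT  # heat flux per unit flow cross-section, W/m^2
print(f"Advective heat flux: {phi_q / 1e6:.1f} MW/m^2")
```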
On a microscopic scale, heat conduction occurs as hot, rapidly moving or vibrating atoms and molecules interact with neighboring atoms and molecules, transferring some of their energy (heat) to these neighboring particles. In other words, heat is transferred by conduction when adjacent atoms vibrate against one another, or as electrons move from one atom to another. Conduction is the most significant means of heat transfer within a solid or between solid objects in thermal contact . Fluids—especially gases—are less conductive. Thermal contact conductance is the study of heat conduction between solid bodies in contact. [ 8 ] The process of heat transfer from one place to another place without the movement of particles is called conduction, such as when placing a hand on a cold glass of water—heat is conducted from the warm skin to the cold glass, but if the hand is held a few inches from the glass, little conduction would occur since air is a poor conductor of heat.
Steady-state conduction is an idealized model of conduction that happens when the temperature difference driving the conduction is constant so that, after a time, the spatial distribution of temperatures in the conducting object does not change any further (see Fourier's law ). [ 9 ] In steady-state conduction, the amount of heat entering a section is equal to the amount of heat coming out, since the temperature change (a measure of heat energy) is zero. [ 8 ] An example of steady-state conduction is the heat flow through the walls of a warm house on a cold day—inside, the house is maintained at a high temperature while the outside temperature stays low, so the transfer of heat per unit time stays near a constant rate determined by the insulation in the wall, and the spatial distribution of temperature in the walls is approximately constant over time.
Transient conduction (see Heat equation ) occurs when the temperature within an object changes as a function of time. Analysis of transient systems is more complex, and analytic solutions of the heat equation are only valid for idealized model systems. Practical applications are generally investigated using numerical methods, approximation techniques, or empirical study. [ 8 ]
The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". All convective processes also move heat partly by diffusion, as well. Another form of convection is forced convection. In this case, the fluid is forced to flow by using a pump, fan, or other mechanical means.
Convective heat transfer , or simply, convection, is the transfer of heat from one place to another by the movement of fluids , a process that is essentially the transfer of heat via mass transfer . The bulk motion of fluid enhances heat transfer in many physical situations, such as between a solid surface and the fluid. [ 10 ] Convection is usually the dominant form of heat transfer in liquids and gases. Although sometimes discussed as a third method of heat transfer, convection is usually used to describe the combined effects of heat conduction within the fluid (diffusion) and heat transference by bulk fluid flow streaming. [ 11 ] The process of transport by fluid streaming is known as advection, but pure advection is a term that is generally associated only with mass transport in fluids, such as advection of pebbles in a river. In the case of heat transfer in fluids, where transport by advection in a fluid is always also accompanied by transport via heat diffusion (also known as heat conduction) the process of heat convection is understood to refer to the sum of heat transport by advection and diffusion/conduction.
Free, or natural, convection occurs when bulk fluid motions (streams and currents) are caused by buoyancy forces that result from density variations due to variations of temperature in the fluid. Forced convection is a term used when the streams and currents in the fluid are induced by external means—such as fans, stirrers, and pumps—creating an artificially induced convection current. [ 12 ]
Convective cooling is sometimes described as Newton's law of cooling :
The rate of heat loss of a body is proportional to the temperature difference between the body and its surroundings .
However, by definition, the validity of Newton's law of cooling requires that the rate of heat loss from convection be a linear function of ("proportional to") the temperature difference that drives heat transfer, and in convective cooling this is sometimes not the case. In general, convection is not linearly dependent on temperature gradients , and in some cases is strongly nonlinear. In these cases, Newton's law does not apply.
In a body of fluid that is heated from underneath its container, conduction and convection can be considered to compete for dominance. If heat conduction is too great, fluid moving down by convection is heated by conduction so fast that its downward movement will be stopped due to its buoyancy , while fluid moving up by convection is cooled by conduction so fast that its driving buoyancy will diminish. On the other hand, if heat conduction is very low, a large temperature gradient may be formed and convection might be very strong.
The Rayleigh number ( R a {\displaystyle \mathrm {Ra} } ) is the product of the Grashof ( G r {\displaystyle \mathrm {Gr} } ) and Prandtl ( P r {\displaystyle \mathrm {Pr} } ) numbers. It is a measure that determines the relative strength of conduction and convection. [ 13 ]
Ra = Gr · Pr = gΔρL³/(μα) = gβΔT·L³/(να), where g is the gravitational acceleration, Δρ is the density difference across the fluid layer, L is the characteristic length, μ is the dynamic viscosity, α is the thermal diffusivity, β is the volumetric thermal expansion coefficient, ΔT is the driving temperature difference, and ν is the kinematic viscosity.
The Rayleigh number can be understood as the ratio between the rate of heat transfer by convection to the rate of heat transfer by conduction; or, equivalently, the ratio between the corresponding timescales (i.e. conduction timescale divided by convection timescale), up to a numerical factor. This can be seen as follows, where all calculations are up to numerical factors depending on the geometry of the system.
The buoyancy force driving the convection is roughly gΔρL³, so the corresponding pressure is roughly gΔρL. In steady state , this is canceled by the shear stress due to viscosity, and therefore roughly equals μV/L = μ/T_conv, where V is the typical fluid velocity due to convection and T_conv the order of its timescale. [ 14 ] The conduction timescale, on the other hand, is of the order of T_cond = L²/α.
Convection occurs when the Rayleigh number is above 1,000–2,000.
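The sketch below evaluates the Rayleigh number for a horizontal air layer heated from below and checks it against the onset criterion just stated; the air properties are approximate room-temperature values, and the layer thickness and temperature difference are assumptions.

```python
# Rayleigh number for a horizontal air layer heated from below:
#   Ra = g * beta * dT * L^3 / (nu * alpha)
# Air properties are approximate room-temperature values; layer thickness and
# temperature difference are illustrative assumptions.

g = 9.81           # gravitational acceleration, m/s^2
beta = 1.0 / 300   # thermal expansion coefficient of air ~ 1/T, 1/K
nu = 1.6e-5        # kinematic viscosity of air, m^2/s
alpha = 2.2e-5     # thermal diffusivity of air, m^2/s

dT = 10.0  # temperature difference across the layer, K (assumed)
L = 0.02   # layer thickness, m (assumed)

Ra = g * beta * dT * L**3 / (nu * alpha)
print(f"Ra = {Ra:.0f}")
print("Convection expected" if Ra > 2000 else "Conduction-dominated")
```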
Radiative heat transfer is the transfer of energy via thermal radiation , i.e., electromagnetic waves . [ 1 ] It occurs across vacuum or any transparent medium ( solid or fluid or gas ). [ 15 ] Thermal radiation is emitted by all objects at temperatures above absolute zero , due to random movements of atoms and molecules in matter. Since these atoms and molecules are composed of charged particles ( protons and electrons ), their movement results in the emission of electromagnetic radiation which carries away energy. Radiation is typically only important in engineering applications for very hot objects, or for objects with a large temperature difference.
When the objects and the distances separating them are large compared to the wavelength of thermal radiation, the rate of transfer of radiant energy is best described by the Stefan-Boltzmann equation . For an object in vacuum, the equation is: ϕ_q = εσT⁴.
For radiative transfer between two objects, the equation is: ϕ_q = εσF(T_a⁴ − T_b⁴), where ϕ_q is the heat flux, ε is the emissivity, σ is the Stefan-Boltzmann constant, F is the view factor between the two surfaces, and T_a and T_b are the absolute temperatures of the two objects.
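The sketch below evaluates this two-object form for a hot surface exchanging radiation with cooler surroundings; the emissivity, view factor and temperatures are illustrative assumptions.

```python
# Radiative exchange between two surfaces:
#   phi_q = eps * sigma * F * (T_a^4 - T_b^4)
# Emissivity, view factor and temperatures are illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)
eps = 0.9         # emissivity of the radiating surface (assumed)
F = 1.0           # view factor from surface a to surface b (assumed)
T_a = 373.15      # hot surface temperature, K (100 degC)
T_b = 293.15      # surrounding surface temperature, K (20 degC)

phi_q = eps * SIGMA * F * (T_a**4 - T_b**4)
print(f"Net radiative flux: {phi_q:.0f} W/m^2")
```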
The blackbody limit established by the Stefan-Boltzmann equation can be exceeded when the objects exchanging thermal radiation or the distances separating them are comparable in scale or smaller than the dominant thermal wavelength . The study of these cases is called near-field radiative heat transfer .
Radiation from the sun, or solar radiation, can be harvested for heat and power. [ 17 ] Unlike conductive and convective forms of heat transfer, thermal radiation – arriving within a narrow-angle i.e. coming from a source much smaller than its distance – can be concentrated in a small spot by using reflecting mirrors, which is exploited in concentrating solar power generation or a burning glass . [ 18 ] For example, the sunlight reflected from mirrors heats the PS10 solar power tower and during the day it can heat water to 285 °C (545 °F). [ 19 ]
The temperature reachable at the target is limited by the temperature of the hot radiation source (by the T⁴ law, the reverse flow of radiation back toward the source increases as the target heats up). The surface of the Sun, at several thousand kelvins, allows a small probe placed at the focal spot of the large concave concentrating mirror of the Mont-Louis Solar Furnace in France to reach roughly 3,000 °C (about 3,273 K). [ 20 ]
Phase transition, or phase change, takes place in a thermodynamic system from one phase or state of matter to another by heat transfer. Examples of phase change are the melting of ice and the boiling of water.
The Mason equation explains the growth of a water droplet based on the effects of heat transport on evaporation and condensation.
Phase transitions involve the four fundamental states of matter :
The boiling point of a substance is the temperature at which the vapor pressure of the liquid equals the pressure surrounding the liquid [ 22 ] [ 23 ] and the liquid evaporates resulting in an abrupt change in vapor volume.
In a closed system , saturation temperature and boiling point mean the same thing. The saturation temperature is the temperature for a corresponding saturation pressure at which a liquid boils into its vapor phase. The liquid can be said to be saturated with thermal energy. Any addition of thermal energy results in a phase transition.
At standard atmospheric pressure and low temperatures , no boiling occurs and the heat transfer rate is controlled by the usual single-phase mechanisms. As the surface temperature is increased, local boiling occurs and vapor bubbles nucleate, grow into the surrounding cooler fluid, and collapse. This is sub-cooled nucleate boiling , and is a very efficient heat transfer mechanism. At high bubble generation rates, the bubbles begin to interfere and the heat flux no longer increases rapidly with surface temperature (this is the departure from nucleate boiling , or DNB).
At similar standard atmospheric pressure and high temperatures , the hydrodynamically quieter regime of film boiling is reached. Heat fluxes across the stable vapor layers are low but rise slowly with temperature. Any contact between the fluid and the surface that may be seen probably leads to the extremely rapid nucleation of a fresh vapor layer ("spontaneous nucleation "). At higher temperatures still, a maximum in the heat flux is reached (the critical heat flux , or CHF).
The Leidenfrost effect demonstrates how the vapor formed at the heater's surface can slow heat transfer: gas-phase thermal conductivity is much lower than liquid-phase thermal conductivity, so the vapor layer acts as a kind of "gas thermal barrier ".
Condensation occurs when a vapor is cooled and changes its phase to a liquid. During condensation, the latent heat of vaporization must be released. The amount of heat is the same as that absorbed during vaporization at the same fluid pressure. [ 24 ]
There are several types of condensation:
Melting is a thermal process that results in the phase transition of a substance from a solid to a liquid . The internal energy of a substance is increased, typically through heat or pressure, resulting in a rise of its temperature to the melting point , at which the ordering of ionic or molecular entities in the solid breaks down to a less ordered state and the solid liquefies. Molten substances generally have reduced viscosity with elevated temperature; an exception to this maxim is the element sulfur , whose viscosity increases to a point due to polymerization and then decreases with higher temperatures in its molten state. [ 25 ]
Heat transfer can be modeled in various ways.
The heat equation is an important partial differential equation that describes the distribution of heat (or temperature variation) in a given region over time. In some cases, exact solutions of the equation are available; [ 26 ] in other cases the equation must be solved numerically using computational methods such as DEM-based models for thermal/reacting particulate systems (as critically reviewed by Peng et al. [ 27 ] ).
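As a minimal numerical illustration of the kind of computation mentioned above, the sketch below integrates the one-dimensional heat equation ∂T/∂t = α ∂²T/∂x² with an explicit finite-difference scheme; the rod length, material diffusivity, boundary temperatures and run time are assumed.

```python
# Explicit finite-difference solution of the 1-D heat equation
#   dT/dt = alpha * d^2T/dx^2
# for a rod held at fixed temperatures at both ends. All parameters are
# illustrative assumptions; the scheme is stable because alpha*dt/dx^2 <= 0.5.

alpha = 9.7e-5            # thermal diffusivity of aluminium, m^2/s
length = 0.1              # rod length, m (assumed)
n = 51                    # number of grid points
dx = length / (n - 1)
dt = 0.4 * dx**2 / alpha  # time step chosen for stability

T = [20.0] * n                 # initial temperature, degC (assumed)
T[0], T[-1] = 100.0, 20.0      # boundary conditions: hot end, cold end

for _ in range(2000):          # march forward in time
    T_new = T[:]
    for i in range(1, n - 1):
        T_new[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
    T = T_new

print(f"Mid-rod temperature after {2000*dt:.1f} s: {T[n//2]:.1f} degC")
```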
Lumped system analysis often reduces the complexity of the equations to one first-order linear differential equation, in which case heating and cooling are described by a simple exponential solution, often referred to as Newton's law of cooling .
System analysis by the lumped capacitance model is a common approximation in transient conduction that may be used whenever heat conduction within an object is much faster than heat conduction across the boundary of the object. This is a method of approximation that reduces one aspect of the transient conduction system—that within the object—to an equivalent steady-state system. That is, the method assumes that the temperature within the object is completely uniform, although its value may change over time.
In this method, the ratio of the conductive heat resistance within the object to the convective heat transfer resistance across the object's boundary, known as the Biot number , is calculated. For small Biot numbers, the approximation of spatially uniform temperature within the object can be used: it can be presumed that heat transferred into the object has time to uniformly distribute itself, due to the lower resistance to doing so, as compared with the resistance to heat entering the object. [ 28 ]
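The sketch below first checks the Biot number for a small solid part cooled by air and then, since Bi is much less than 1, applies the lumped-capacitance exponential cooling law (Newton's law of cooling); all dimensions and properties are illustrative assumptions for a small aluminium cube.

```python
import math

# Lumped-capacitance check and cooling curve.
#   Bi = h * L_c / k, with L_c = V / A_s
# If Bi << 1 the whole body can be treated as being at one temperature, and
#   T(t) = T_inf + (T_0 - T_inf) * exp(-t / tau), tau = rho*V*c_p / (h*A_s)
# All numbers below are illustrative assumptions (a small aluminium cube in air).

h = 25.0      # convective coefficient, W/(m^2*K) (assumed)
k = 200.0     # conductivity of the solid, W/(m*K)
rho = 2700.0  # density, kg/m^3
c_p = 900.0   # specific heat, J/(kg*K)

side = 0.01            # 10 mm cube (assumed)
V = side**3
A_s = 6 * side**2
L_c = V / A_s

Bi = h * L_c / k
print(f"Biot number = {Bi:.4f} (lumped model valid if Bi << 1)")

T_0, T_inf = 100.0, 25.0           # initial and ambient temperatures, degC
tau = rho * V * c_p / (h * A_s)    # time constant, s
for t in (0, 60, 300):
    T = T_inf + (T_0 - T_inf) * math.exp(-t / tau)
    print(f"t = {t:4d} s: T = {T:.1f} degC")
```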
Climate models study the radiant heat transfer by using quantitative methods to simulate the interactions of the atmosphere, oceans, land surface, and ice. [ 29 ]
Heat transfer has broad application to the functioning of numerous devices and systems. Heat-transfer principles may be used to preserve, increase, or decrease temperature in a wide variety of circumstances. [ 30 ] Heat transfer methods are used in numerous disciplines, such as automotive engineering , thermal management of electronic devices and systems , climate control , insulation , materials processing , chemical engineering and power station engineering.
Thermal insulators are materials specifically designed to reduce the flow of heat by limiting conduction, convection, or both. Thermal resistance is a heat property that measures how much an object or material resists heat flow (heat per unit time) for a given temperature difference.
Radiance , or spectral radiance, is a measure of the quantity of radiation that passes through or is emitted. Radiant barriers are materials that reflect radiation, and therefore reduce the flow of heat from radiation sources. Good insulators are not necessarily good radiant barriers, and vice versa. Metal, for instance, is an excellent reflector and a poor insulator.
The effectiveness of a radiant barrier is indicated by its reflectivity , which is the fraction of radiation reflected. A material with a high reflectivity (at a given wavelength) has a low emissivity (at that same wavelength), and vice versa. At any specific wavelength, reflectivity = 1 − emissivity. An ideal radiant barrier would have a reflectivity of 1, and would therefore reflect 100 percent of incoming radiation. Vacuum flasks , or Dewars, are silvered to approach this ideal. In the vacuum of space, satellites use multi-layer insulation , which consists of many layers of aluminized (shiny) Mylar to greatly reduce radiation heat transfer and control satellite temperature. [ 31 ]
A heat engine is a system that performs the conversion of a flow of thermal energy (heat) to mechanical energy to perform mechanical work . [ 32 ] [ 33 ]
A thermocouple is a temperature-measuring device and a widely used type of temperature sensor for measurement and control, and can also be used to convert heat into electric power.
A thermoelectric cooler is a solid-state electronic device that pumps (transfers) heat from one side of the device to the other when an electric current is passed through it. It is based on the Peltier effect .
A thermal diode or thermal rectifier is a device that causes heat to flow preferentially in one direction.
A heat exchanger is used for more efficient heat transfer or to dissipate heat. Heat exchangers are widely used in refrigeration , air conditioning , space heating , power generation , and chemical processing. One common example of a heat exchanger is a car's radiator, in which the hot coolant fluid is cooled by the flow of air over the radiator's surface. [ 34 ] [ 35 ]
Common types of heat exchanger flows include parallel flow, counter flow, and cross flow. In parallel flow, both fluids move in the same direction while transferring heat; in counter flow, the fluids move in opposite directions; and in cross flow, the fluids move at right angles to each other. Common types of heat exchangers include shell and tube , double pipe , extruded finned pipe, spiral fin pipe, u-tube, and stacked plate. Each type has certain advantages and disadvantages over other types. [ further explanation needed ]
A heat sink is a component that transfers heat generated within a solid material to a fluid medium, such as air or a liquid. Examples of heat sinks are the heat exchangers used in refrigeration and air conditioning systems or the radiator in a car. A heat pipe is another heat-transfer device that combines thermal conductivity and phase transition to efficiently transfer heat between two solid interfaces.
Efficient energy use is the goal of reducing the amount of energy required for heating or cooling. In architecture, condensation and air currents can cause cosmetic or structural damage. An energy audit can help to assess the implementation of recommended corrective procedures, such as insulation improvements, air sealing of structural leaks, or the addition of energy-efficient windows and doors. [ 36 ]
Climate engineering consists of carbon dioxide removal and solar radiation management . Since the amount of carbon dioxide determines the radiative balance of Earth's atmosphere, carbon dioxide removal techniques can be applied to reduce the radiative forcing . Solar radiation management is the attempt to absorb less solar radiation to offset the effects of greenhouse gases .
An alternative method is passive daytime radiative cooling , which enhances terrestrial heat flow to outer space through the infrared window (8–13 μm). [ 37 ] [ 38 ] Rather than merely blocking solar radiation, this method increases outgoing longwave infrared (LWIR) thermal radiation heat transfer with the extremely cold temperature of outer space (~2.7 K ) to lower ambient temperatures while requiring zero energy input. [ 39 ] [ 40 ]
The greenhouse effect is a process by which thermal radiation from a planetary surface is absorbed by atmospheric greenhouse gases and clouds, and is re-radiated in all directions, resulting in a reduction in the amount of thermal radiation reaching space relative to what would reach space in the absence of absorbing materials. This reduction in outgoing radiation leads to a rise in the temperature of the surface and troposphere until the rate of outgoing radiation again equals the rate at which heat arrives from the Sun. [ 42 ]
The principles of heat transfer in engineering systems can be applied to the human body to determine how the body transfers heat. Heat is produced in the body by the continuous metabolism of nutrients which provides energy for the systems of the body. [ 43 ] The human body must maintain a consistent internal temperature to maintain healthy bodily functions. Therefore, excess heat must be dissipated from the body to keep it from overheating. When a person engages in elevated levels of physical activity, the body requires additional fuel which increases the metabolic rate and the rate of heat production. The body must then use additional methods to remove the additional heat produced to keep the internal temperature at a healthy level.
Heat transfer by convection is driven by the movement of fluids over the surface of the body. This convective fluid can be either a liquid or a gas. For heat transfer from the outer surface of the body, the convection mechanism is dependent on the surface area of the body, the velocity of the air, and the temperature gradient between the surface of the skin and the ambient air. [ 44 ] The normal temperature of the body is approximately 37 °C. Heat transfer occurs more readily when the temperature of the surroundings is significantly less than the normal body temperature. This concept explains why a person feels cold when not enough covering is worn when exposed to a cold environment. Clothing can be considered an insulator which provides thermal resistance to heat flow over the covered portion of the body. [ 45 ] This thermal resistance causes the temperature on the surface of the clothing to be less than the temperature on the surface of the skin. This smaller temperature gradient between the surface temperature and the ambient temperature will cause a lower rate of heat transfer than if the skin were not covered.
To ensure that one portion of the body is not significantly hotter than another portion, heat must be distributed evenly through the bodily tissues. Blood flowing through blood vessels acts as a convective fluid and helps to prevent any buildup of excess heat inside the tissues of the body. This flow of blood through the vessels can be modeled as pipe flow in an engineering system. The heat carried by the blood is determined by the temperature of the surrounding tissue, the diameter of the blood vessel, the viscosity ("thickness") of the fluid , the velocity of the flow, and the heat transfer coefficient of the blood. The velocity, blood vessel diameter, and fluid viscosity can all be related to the Reynolds number , a dimensionless number used in fluid mechanics to characterize the flow of fluids.
Latent heat loss, also known as evaporative heat loss, accounts for a large fraction of heat loss from the body. When the core temperature of the body increases, the body triggers sweat glands in the skin to bring additional moisture to the surface of the skin. The liquid is then transformed into vapor, which removes heat from the surface of the body. [ 46 ] The rate of evaporative heat loss is directly related to the vapor pressure at the skin surface and the amount of moisture present on the skin. [ 44 ] Therefore, the maximum heat transfer occurs when the skin is completely wet. The body continuously loses water by evaporation, but the most significant amount of heat loss occurs during periods of increased physical activity.
Evaporative cooling happens when water vapor is added to the surrounding air. The energy needed to evaporate the water is taken from the air in the form of sensible heat and converted into latent heat, while the air remains at a constant enthalpy . Latent heat describes the amount of heat that is needed to evaporate the liquid; this heat comes from the liquid itself and the surrounding gas and surfaces. The greater the difference between the two temperatures, the greater the evaporative cooling effect. When the temperatures are the same, no net evaporation of water in the air occurs; thus, there is no cooling effect.
In quantum physics , laser cooling is used to achieve temperatures near absolute zero (−273.15 °C, −459.67 °F) in atomic and molecular samples, in order to observe unique quantum effects that can occur only at such low temperatures.
Magnetic evaporative cooling is a process for lowering the temperature of a group of atoms after they have been pre-cooled by methods such as laser cooling. Magnetic refrigeration cools below 0.3 K by making use of the magnetocaloric effect .
Radiative cooling is the process by which a body loses heat by radiation. Outgoing energy is an important effect in the Earth's energy budget . In the case of the Earth-atmosphere system, it refers to the process by which long-wave (infrared) radiation is emitted to balance the absorption of short-wave (visible) energy from the Sun. The thermosphere (top of atmosphere) cools to space primarily by infrared energy radiated by carbon dioxide (CO 2 ) at 15 μm and by nitric oxide (NO) at 5.3 μm. [ 48 ] Convective transport of heat and evaporative transport of latent heat both remove heat from the surface and redistribute it in the atmosphere.
Thermal energy storage includes technologies for collecting and storing energy for later use. It may be employed to balance energy demand between day and nighttime. The thermal reservoir may be maintained at a temperature above or below that of the ambient environment. Applications include space heating, domestic or process hot water systems, or generating electricity.
In 1701, Isaac Newton anonymously published an article in Philosophical Transactions noting (in modern terms) that the rate of temperature change of a body is proportional to the difference in temperatures ( graduum caloris , "degrees of heat") between the body and its surroundings. [ 49 ] The phrase "temperature change" was later replaced with "heat loss", and the relationship was named Newton's law of cooling. In general, the law is valid only if the temperature difference is small and the heat transfer mechanism remains the same.
In heat conduction, the law is valid only if the thermal conductivity of the warmer body is independent of temperature. The thermal conductivity of most materials is only weakly dependent on temperature, so in general the law holds true.
In convective heat transfer, the law is valid for forced air or pumped fluid cooling, where the properties of the fluid do not vary strongly with temperature, but it is only approximately true for buoyancy-driven convection, where the velocity of the flow increases with temperature difference.
In the case of heat transfer by thermal radiation, Newton's law of cooling holds only for very small temperature differences.
In a 1780 letter to Benjamin Franklin , Dutch-born British scientist Jan Ingenhousz relates an experiment which enabled him to rank seven different metals according to their thermal conductivities: [ 50 ]
You remembre you gave me a wire of five metals all drawn thro the same hole Viz. one, of gould, one of silver, copper steel and iron. I supplyed here the two others Viz. the one of tin the other of lead. I fixed these seven wires into a wooden frame at an equal distance of one an other ... I dipt the seven wires into this melted wax as deep as the wooden frame ... By taking them out they were covred with a coat of wax ... When I found that this crust was there about of an equal thikness upon all the wires, I placed them all in a glased earthen vessel full of olive oil heated to some degrees under boiling, taking care that each wire was dipt just as far in the oil as the other ... Now, as they had been all dipt alike at the same time in the same oil, it must follow, that the wire, upon which the wax had been melted the highest, had been the best conductor of heat. ... Silver conducted heat far the best of all other metals, next to this was copper, then gold, tin, iron, steel, Lead.
During the years 1784 – 1798, the British physicist Benjamin Thompson (Count Rumford) lived in Bavaria , reorganizing the Bavarian army for the Prince-elector Charles Theodore among other official and charitable duties. The Elector gave Thompson access to the facilities of the Electoral Academy of Sciences in Mannheim . During his years in Mannheim and later in Munich , Thompson made a large number of discoveries and inventions related to heat.
In 1785 Thompson performed a series of thermal conductivity experiments, which he describes in great detail in the Philosophical Transactions article "New Experiments upon Heat" from 1786. [ 51 ] [ 52 ] The fact that good electrical conductors are often also good heat conductors and vice versa must have been well known at the time, for Thompson mentions it in passing. [ 53 ] He intended to measure the relative conductivities of mercury, water, moist air, "common air" (dry air at normal atmospheric pressure), dry air of various rarefication, and a " Torricellian vacuum ".
From the striking analogy between the electric fluid and heat respecting their conductors and non-conductors (having found that bodies, in general, which are conductors of the electric fluid, are likewise good conductors of heat, and, on the contrary, that electric bodies, or such as are bad conductors of the electric fluid, are likewise bad conductors of heat), I was led to imagine that the Torricellian vacuum, which is known to afford so ready a passage to the electric fluid, would also have afforded a ready passage to heat.
For these experiments, Thompson employed a thermometer inside a large, closed glass tube. Under the circumstances described, heat may—unbeknownst to Thompson—have been transferred more by radiation than by conduction . [ 54 ] These were his results.
After the experiments, Thompson was surprised to observe that a vacuum was a significantly poorer heat conductor than air "which of itself is reckoned among the worst", [ 55 ] but only a very small difference between common air and rarefied air. [ 56 ] He also noted the great difference between dry air and moist air, [ 57 ] and the great benefit this affords. [ 58 ]
I cannot help observing, with what infinite wisdom and goodness Divine Providence appears to have guarded us against the evil effects of excessive heat and cold in the atmosphere; for if it were possible for the air to be equally damp during the severe cold of the winter ... as it sometimes is in summer, its conducing power, and consequently its apparent coldness ... would become quite intolerable; but, happily for us, its power to hold water in solution is diminished, and with it its power to rob us of our animal heat.
Every body knows how very disagreeable a very moderate degree of cold is when the air is very damp; and from hence it appears, why the thermometer is not always a just measure of the apparent or sensible heat of the atmosphere. If colds ... are occasioned by our bodies being robbed of our animal heat, the reason is plain why those disorders prevail most during the cold autumnal rains, and upon the breaking up of the frost in the spring. It is likewise plain [why] ... inhabiting damp houses, is so very dangerous; and why the evening air is so pernicious in summer ... and why it is not so during the hard frosts of winter.
Thompson concluded with some comments on the important difference between temperature and sensible heat .
The ... sensation of hot or cold depends not intirely upon the temperature of the body exciting in us those sensations ... but upon the quantity of heat it is capable of communicating to us, or receiving from us ... and this depends in a great measure upon the conducing powers of the bodies in question. The sensation of hot is the entrance of heat into our bodies; that of cold is its exit ... This is another proof that the thermometer cannot be a just measure of sensible heat ... or rather, that the touch does not afford us a just indication of ... real temperatures.
In the 1830s, in The Bridgewater Treatises , the term convection is attested in a scientific sense. In treatise VIII by William Prout , in the book on chemistry , it says: [ 59 ]
This motion of heat takes place in three ways, which a common fire-place very well illustrates. If, for instance, we place a thermometer directly before a fire, it soon begins to rise, indicating an increase of temperature. In this case the heat has made its way through the space between the fire and the thermometer, by the process termed radiation . If we place a second thermometer in contact with any part of the grate, and away from the direct influence of the fire, we shall find that this thermometer also denotes an increase of temperature; but here the heat must have travelled through the metal of the grate, by what is termed conduction . Lastly, a third thermometer placed in the chimney, away from the direct influence of the fire, will also indicate a considerable increase of temperature; in this case a portion of the air, passing through and near the fire, has become heated, and has carried up the chimney the temperature acquired from the fire. There is at present no single term in our language employed to denote this third mode of the propagation of heat; but we venture to propose for that purpose, the term convection , [in footnote: [Latin] Convectio , a carrying or conveying] which not only expresses the leading fact, but also accords very well with the two other terms.
Later, in the same treatise VIII, in the book on meteorology , the concept of convection is also applied to "the process by which heat is communicated through water". | https://en.wikipedia.org/wiki/Heat_transfer |
Heat transfer physics describes the kinetics of energy storage , transport, and energy transformation by principal energy carriers : phonons (lattice vibration waves), electrons , fluid particles , and photons . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] Heat is thermal energy stored in temperature-dependent motion of particles including electrons, atomic nuclei, individual atoms, and molecules. Heat is transferred to and from matter by the principal energy carriers. The state of energy stored within matter, or transported by the carriers, is described by a combination of classical and quantum statistical mechanics . The energy is also transformed (converted) among the various carriers.
The heat transfer processes (or kinetics) are governed by the rates at which various related physical phenomena occur, such as the rate of particle collisions in classical mechanics . These various states and kinetics determine the heat transfer, i.e., the net rate of energy storage or transport. Governing these processes from the atomic level (atom or molecule length scale) to the macroscale are the laws of thermodynamics , including conservation of energy .
Heat is thermal energy associated with temperature-dependent motion of particles. The macroscopic energy equation for infinitesimal volume used in heat transfer analysis is [ 6 ] ∇ ⋅ q = − ρ c p ∂ T ∂ t + ∑ i , j s ˙ i − j , {\displaystyle \nabla \cdot \mathbf {q} =-\rho c_{p}{\frac {\partial T}{\partial t}}+\sum _{i,j}{\dot {s}}_{i-j},} where q is heat flux vector, − ρc p ( ∂T / ∂t ) is temporal change of internal energy ( ρ is density, c p is specific heat capacity at constant pressure, T is temperature and t is time), and s ˙ {\displaystyle {\dot {s}}} is the energy conversion to and from thermal energy ( i and j are for principal energy carriers). So, the terms represent energy transport, storage and transformation. Heat flux vector q is composed of three macroscopic fundamental modes, which are conduction ( q k = − k ∇ T , k : thermal conductivity ), convection ( q u = ρc p u T , u : velocity), and radiation ( q r = 2 π ∫ 0 ∞ ∫ 0 π s I p h , ω sin ( θ ) d θ d ω {\textstyle \mathbf {q} _{r}=2\pi \int _{0}^{\infty }\int _{0}^{\pi }\mathbf {s} I_{ph,\omega }\sin(\theta )d\theta \,d\omega } , ω : angular frequency, θ : polar angle, I ph,ω : spectral, directional radiation intensity, s : unit vector), i.e., q = q k + q u + q r .
Once states and kinetics of the energy conversion and thermophysical properties are known, the fate of heat transfer is described by the above equation. These atomic-level mechanisms and kinetics are addressed in heat transfer physics. The microscopic thermal energy is stored, transported, and transformed by the principal energy carriers: phonons ( p ), electrons ( e ), fluid particles ( f ), and photons ( ph ). [ 7 ]
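To make the roles of the storage and transport terms concrete, the following is a minimal numerical sketch of the macroscopic energy equation above for the special case of one-dimensional conduction only (q = q_k = −k ∂T/∂x, with no convection, radiation or energy conversion), solved by explicit finite differences. The material properties, grid, time step and boundary temperatures are illustrative assumptions, not values from the text.

```python
# Minimal sketch: energy equation with conduction only,
# rho*c_p*dT/dt = -d(q_k)/dx = k*d2T/dx2, explicit finite differences.
# Material properties and grid below are assumed, illustrative values.
import numpy as np

k, rho, cp = 50.0, 8000.0, 500.0      # W/m-K, kg/m^3, J/kg-K (assumed)
alpha = k / (rho * cp)                # thermal diffusivity, m^2/s
L, nx = 0.1, 51
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha              # satisfies the explicit stability limit

T = np.full(nx, 300.0)                # initial temperature, K
T[0], T[-1] = 400.0, 300.0            # fixed boundary temperatures

for _ in range(2000):
    # interior nodes: storage term balances the divergence of conduction flux
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print(T[::10])                        # approach to the linear steady profile
```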
Thermophysical properties of matter and the kinetics of interaction and energy exchange among the principal carriers are based on the atomic-level configuration and interaction. [ 1 ] Transport properties such as thermal conductivity are calculated from these atomic-level properties using classical and quantum physics . [ 5 ] [ 8 ] Quantum states of the principal carriers (e.g., momentum, energy) are derived from the Schrödinger equation (called first principles or ab initio ), and the interaction rates (for kinetics) are calculated using the quantum states and the quantum perturbation theory (formulated as the Fermi golden rule ). [ 9 ] A variety of ab initio (Latin for "from the beginning") solvers (software) exists (e.g., ABINIT , CASTEP , Gaussian , Q-Chem , Quantum ESPRESSO , SIESTA , VASP , WIEN2k ). Electrons in the inner shells (core) are not involved in heat transfer, and calculations are greatly reduced by proper approximations for the inner-shell electrons. [ 10 ]
The quantum treatments, including equilibrium and nonequilibrium ab initio molecular dynamics (MD), involving larger lengths and times are limited by the computational resources, so various alternate treatments with simplifying assumptions have been used to obtain the states and kinetics. [ 11 ] In classical (Newtonian) MD, the motion of atoms or molecules (particles) is based on empirical or effective interaction potentials, which in turn can be based on curve fits of ab initio calculations or curve fits to thermophysical properties. From the ensembles of simulated particles, static or dynamic thermal properties or scattering rates are derived. [ 12 ] [ 13 ]
At yet larger length scales (mesoscale, involving many mean free paths), the Boltzmann transport equation (BTE), which is based on classical Hamiltonian statistical mechanics, is applied. The BTE considers particle states in terms of position and momentum vectors ( x , p ), represented as a state occupation probability. The occupation has equilibrium distributions (the known boson, fermion, and Maxwell–Boltzmann particles), and transport of energy (heat) is due to nonequilibrium (caused by a driving force or potential). Central to the transport is the role of scattering, which turns the distribution back toward equilibrium. The scattering is represented by the relaxation time or the mean free path. The relaxation time (or its inverse, the interaction rate) is found from other calculations ( ab initio or MD) or empirically. The BTE can be solved numerically with the Monte Carlo method , etc. [ 14 ]
Depending on the length and time scales, the proper level of treatment ( ab initio , MD, or BTE) is selected. Heat transfer physics analyses may involve multiple scales (e.g., BTE using interaction rates from ab initio or classical MD) with states and kinetics related to thermal energy storage, transport and transformation.
So, heat transfer physics covers the four principal energy carriers and their kinetics from classical and quantum mechanical perspectives. This enables multiscale ( ab initio , MD, BTE and macroscale) analyses, including low-dimensionality and size effects. [ 2 ]
The phonon (quantized lattice vibration wave) is a central thermal energy carrier, contributing to heat capacity (sensible heat storage) and conductive heat transfer in the condensed phase, and playing a very important role in thermal energy conversion. Its transport properties are represented by the phonon conductivity tensor K p (W/m-K, from the Fourier law q k,p = - K p ⋅∇ T ) for bulk materials, and the phonon boundary resistance AR p,b [K/(W/m 2 )] for solid interfaces, where A is the interface area. The phonon specific heat capacity c v,p (J/kg-K) includes the quantum effect. The thermal energy conversion rate involving phonons is included in s ˙ i - j {\displaystyle {\dot {s}}_{i{\mbox{-}}j}} . Heat transfer physics describes and predicts c v,p , K p , R p,b (or conductance G p,b ) and s ˙ i - j {\displaystyle {\dot {s}}_{i{\mbox{-}}j}} , based on atomic-level properties.
For an equilibrium potential ⟨ φ ⟩ o of a system with N atoms, the total potential ⟨ φ ⟩ is found by a Taylor series expansion at the equilibrium and this can be approximated by the second derivatives (the harmonic approximation) as ⟨ φ ⟩ = ⟨ φ ⟩ o + ∑ i ∑ α ∂ ⟨ φ ⟩ ∂ d i α | o d i α + 1 2 ∑ i , j ∑ α , β ∂ 2 ⟨ φ ⟩ ∂ d i α ∂ d j β | o d i α d j β + 1 6 ∑ i , j , k ∑ α , β , γ ∂ 3 ⟨ φ ⟩ ∂ d i α ∂ d j β ∂ d k γ | o d i α d j β d k γ + ⋯ ≈ ⟨ φ ⟩ o + 1 2 ∑ i , j ∑ α , β Γ α β d i α d j β , {\displaystyle {\begin{aligned}\langle \varphi \rangle &=\langle \varphi \rangle _{\mathrm {o} }+\left.\sum _{i}\sum _{\alpha }{\frac {\partial \langle \varphi \rangle }{\partial d_{i\alpha }}}\right|_{\mathrm {o} }d_{i\alpha }+\left.{\frac {1}{2}}\sum _{i,j}\sum _{\alpha ,\beta }{\frac {\partial ^{2}\langle \varphi \rangle }{\partial d_{i\alpha }\partial d_{j\beta }}}\right|_{\mathrm {o} }d_{i\alpha }d_{j\beta }+\left.{\frac {1}{6}}\sum _{i,j,k}\sum _{\alpha ,\beta ,\gamma }{\frac {\partial ^{3}\langle \varphi \rangle }{\partial d_{i\alpha }\partial d_{j\beta }\partial d_{k\gamma }}}\right|_{\mathrm {o} }d_{i\alpha }d_{j\beta }d_{k\gamma }+\cdots \\&\approx \langle \varphi \rangle _{\mathrm {o} }+{\frac {1}{2}}\sum _{i,j}\sum _{\alpha ,\beta }\Gamma _{\alpha \beta }d_{i\alpha }d_{j\beta },\end{aligned}}}
where d i is the displacement vector of atom i , and Γ is the spring (or force) constant as the second-order derivatives of the potential. The equation of motion for the lattice vibration in terms of the displacement of atoms [ d ( jl , t ): displacement vector of the j -th atom in the l -th unit cell at time t ] is m j d 2 d ( j l , t ) d t 2 = − ∑ j ′ l ′ Γ ( j j ′ l l ′ ) ⋅ d ( j ′ l ′ , T ) , {\displaystyle m_{j}{\frac {d^{2}\mathbf {d} (jl,t)}{dt^{2}}}=-\sum _{j'l'}{\boldsymbol {\Gamma }}{\binom {j\ j^{\prime }}{l\ l'}}\cdot \mathbf {d} (j'l',T),} where m is the atomic mass and Γ is the force constant tensor. The atomic displacement is the summation over the normal modes [ s α : unit vector of mode α , ω p : angular frequency of wave, and κ p : wave vector]. Using this plane-wave displacement, the equation of motion becomes the eigenvalue equation [ 15 ] [ 16 ] M ω p 2 ( κ p , α ) s α ( κ p ) = D ( κ p ) s α ( κ p ) , {\displaystyle \mathbf {M} \omega _{p}^{2}({\boldsymbol {\kappa }}_{p},\alpha )\mathbf {s} _{\alpha }({\boldsymbol {\kappa }}_{p})=\mathbf {D} ({\boldsymbol {\kappa }}_{p})\mathbf {s} _{\alpha }({\boldsymbol {\kappa }}_{p}),} where M is the diagonal mass matrix and D is the harmonic dynamical matrix. Solving this eigenvalue equation gives the relation between the angular frequency ω p and the wave vector κ p , and this relation is called the phonon dispersion relation . Thus, the phonon dispersion relation is determined by matrices M and D , which depend on the atomic structure and the strength of interaction among constituent atoms (the stronger the interaction and the lighter the atoms, the higher is the phonon frequency and the larger is the slope dω p / d κ p ). The Hamiltonian of phonon system with the harmonic approximation is [ 15 ] [ 17 ] [ 18 ] H p = ∑ x 1 2 m p 2 ( x ) + 1 2 ∑ x , x ′ d i ( x ) D i j ( x − x ′ ) d j ( x ′ ) , {\displaystyle \mathrm {H} _{p}=\sum _{x}{\frac {1}{2m}}\mathbf {p} ^{2}(\mathbf {x} )+{\frac {1}{2}}\sum _{\mathbf {x} ,\mathbf {x} '}\mathbf {d} _{i}(\mathbf {x} )D_{ij}(\mathbf {x} -\mathbf {x} ')\mathbf {d} _{j}(\mathbf {x} '),} where D ij is the dynamical matrix element between atoms i and j , and d i ( d j ) is the displacement of i ( j ) atom, and p is momentum. From this and the solution to dispersion relation, the phonon annihilation operator for the quantum treatment is defined as b κ , α = 1 N 1 / 2 ∑ κ p , α e − i ( κ p ⋅ x ) s α ( κ p ) ⋅ [ ( m ω p , α 2 ℏ ) 1 / 2 d ( x ) + i ( 1 2 ℏ m ω p , α ) 1 / 2 p ( x ) ] , {\displaystyle b_{\kappa ,\alpha }={\frac {1}{N^{1/2}}}\sum _{\kappa _{p},\alpha }e^{-i({\boldsymbol {\kappa }}_{p}\cdot \mathbf {x} )}\mathbf {s} _{\alpha }({\boldsymbol {\kappa }}_{p})\cdot \left[\left({\frac {m\omega _{p,\alpha }}{2\hbar }}\right)^{1/2}\mathbf {d} (\mathbf {x} )+i\left({\frac {1}{2\hbar m\omega _{p,\alpha }}}\right)^{1/2}\mathbf {p} (\mathbf {x} )\right],} where N is the number of normal modes divided by α and ħ is the reduced Planck constant . The creation operator is the adjoint of the annihilation operator, b κ , α † = 1 N 1 / 2 ∑ κ p , α e i ( κ p ⋅ x ) s α ( κ p ) ⋅ [ ( m ω p , α 2 ℏ ) 1 / 2 d ( x ) − i ( 1 2 ℏ m ω p , α ) 1 / 2 p ( x ) ] . 
{\displaystyle b_{\kappa ,\alpha }^{\dagger }={\frac {1}{N^{1/2}}}\sum _{\kappa _{p},\alpha }e^{i({\boldsymbol {\kappa }}_{p}\cdot \mathbf {x} )}\mathbf {s} _{\alpha }({\boldsymbol {\kappa }}_{p})\cdot \left[\left({\frac {m\omega _{p,\alpha }}{2\hbar }}\right)^{1/2}\mathbf {d} (\mathbf {x} )-i\left({\frac {1}{2\hbar m\omega _{p,\alpha }}}\right)^{1/2}\mathbf {p} (\mathbf {x} )\right].} The Hamiltonian in terms of b κ,α † and b κ,α is H p = Σ κ , α ħω p,α [ b κ,α † b κ,α + 1/2] and b κ,α † b κ,α is the phonon number operator . The energy of quantum-harmonic oscillator is E p = Σ κ , α [ f p ( κ , α ) + 1/2] ħω p,α ( κ p ), and thus the quantum of phonon energy ħω p .
The phonon dispersion relation gives all possible phonon modes within the Brillouin zone (zone within the primitive cell in reciprocal space ), and the phonon density of states D p (the number density of possible phonon modes). The phonon group velocity u p,g is the slope of the dispersion curve, dω p / d κ p . Since the phonon is a boson, its occupancy follows the Bose–Einstein distribution { f p o = [exp( ħω p / k B T )-1] −1 , k B : Boltzmann constant }. Using the phonon density of states and this occupancy distribution, the phonon energy is E p ( T ) = ∫ D p ( ω p ) f p ( ω p ,T ) ħω p dω p , and the phonon density is n p ( T ) = ∫ D p ( ω p ) f p ( ω p ,T ) dω p . The phonon heat capacity c v,p (in solids c v,p = c p,p , c v,p : constant-volume heat capacity, c p,p : constant-pressure heat capacity) is the temperature derivative of the phonon energy; for the Debye model (linear dispersion model), it is [ 19 ] c v , p = d E p d T | v = 9 k B m ( T T D ) 3 n ∫ 0 T D / T x 4 e x ( e x − 1 ) 2 d x ( x = ℏ ω k B T ) , {\displaystyle c_{v,p}=\left.{\frac {dE_{p}}{dT}}\right|_{v}={\frac {9k_{\mathrm {B} }}{m}}\left({\frac {T}{T_{D}}}\right)^{3}n\int _{0}^{T_{D}/T}{\frac {x^{4}e^{x}}{\left(e^{x}-1\right)^{2}}}dx\qquad (x={\frac {\hbar \omega }{k_{\mathrm {B} }T}}),} where T D is the Debye temperature , m is atomic mass, and n is the atomic number density (number density of phonon modes for the crystal 3 n ). This gives the Debye T 3 law at low temperature and the Dulong–Petit law at high temperatures.
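As a brief numerical illustration of the Debye expression above, the sketch below evaluates the integral for the heat capacity per atom (in units of k_B), recovering the T³ behavior at low temperature and the Dulong–Petit limit of 3 k_B per atom at high temperature. The Debye temperature used is an assumed, illustrative value.

```python
# Sketch of the Debye phonon heat capacity per atom (in units of k_B):
# c_v/k_B = 9 (T/T_D)^3 * Integral_0^{T_D/T} x^4 e^x / (e^x - 1)^2 dx.
# The Debye temperature below is an assumed, illustrative value.
import numpy as np
from scipy.integrate import quad

def debye_cv_per_atom(T, T_D):
    """Heat capacity per atom in units of k_B (Debye model)."""
    integrand = lambda x: x**4 * np.exp(x) / np.expm1(x)**2
    val, _ = quad(integrand, 0.0, T_D / T)
    return 9.0 * (T / T_D)**3 * val

T_D = 400.0  # K, assumed Debye temperature
for T in (10.0, 50.0, 100.0, 400.0, 1000.0):
    print(f"T = {T:6.1f} K  c_v = {debye_cv_per_atom(T, T_D):5.3f} k_B per atom")
# Low T follows the Debye T^3 law; high T approaches the
# Dulong-Petit limit of 3 k_B per atom.
```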
From the kinetic theory of gases, [ 20 ] the thermal conductivity of principal carrier i ( p , e , f and ph ) is k i = 1 3 n i c v , i u i λ i , {\displaystyle k_{i}={\frac {1}{3}}n_{i}c_{v,i}u_{i}\lambda _{i},} where n i is the carrier density and the heat capacity is per carrier, u i is the carrier speed and λ i is the mean free path (distance traveled by a carrier before a scattering event). Thus, the larger the carrier density, heat capacity and speed, and the less significant the scattering, the higher is the conductivity. For phonons, λ p represents the interaction (scattering) kinetics of phonons and is related to the scattering relaxation time τ p or rate (= 1/ τ p ) through λ p = u p τ p . Phonons interact with other phonons, and with electrons, boundaries, impurities, etc., and λ p combines these interaction mechanisms through the Matthiessen rule . At low temperatures, scattering by boundaries is dominant; with increasing temperature the interaction rates with impurities, electrons and other phonons become important, and finally phonon-phonon scattering dominates for T > 0.2 T D . The interaction rates are reviewed in [ 21 ] and include quantum perturbation theory and MD.
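The kinetic-theory expression and the Matthiessen rule can be combined in a few lines; the sketch below uses assumed, illustrative values for the volumetric heat capacity, phonon speed and the individual relaxation times, so the resulting numbers are representative only.

```python
# Sketch of the kinetic-theory conductivity k = (1/3) n c_v u lambda with the
# mean free path from the Matthiessen rule, 1/tau = sum_i 1/tau_i.
# Carrier properties and scattering times are assumed, illustrative values.

def matthiessen_tau(taus):
    """Combine scattering relaxation times (s) via the Matthiessen rule."""
    return 1.0 / sum(1.0 / t for t in taus)

n_cv = 1.7e6      # n * c_v, volumetric heat capacity, J/m^3-K (assumed)
u_p = 5000.0      # phonon (sound) speed, m/s (assumed)

# assumed relaxation times: boundary, impurity and phonon-phonon scattering
tau = matthiessen_tau([2e-10, 5e-11, 1e-11])
lam = u_p * tau                        # mean free path, m
k_p = (1.0 / 3.0) * n_cv * u_p * lam   # W/m-K

print(f"tau = {tau:.2e} s, lambda = {lam * 1e9:.1f} nm, k_p = {k_p:.1f} W/m-K")
```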
A number of conductivity models are available with approximations regarding the dispersion and λ p . [ 17 ] [ 19 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] Using the single-mode relaxation time approximation (∂ f p ′ /∂ t | s = − f p ′ / τ p ) and the gas kinetic theory, the Callaway phonon (lattice) conductivity model is given as [ 21 ] [ 26 ] k p , s = 1 8 π 3 ∑ α ∫ c v , p τ p ( u p , g ⋅ s ) 2 d κ for component along s , {\displaystyle k_{p,\mathbf {s} }={\frac {1}{8\pi ^{3}}}\sum _{\alpha }\int c_{v,p}\tau _{p}(\mathbf {u} _{p,g}\cdot \mathbf {s} )^{2}d\kappa \ \ \ \ \ {\text{ for component along }}\mathbf {s} ,} k p = 1 6 π 3 ∑ α ∫ c v , p τ p u p , g 2 κ 2 d κ for isotropic conductivity . {\displaystyle k_{p}={\frac {1}{6\pi ^{3}}}\sum _{\alpha }\int c_{v,p}\tau _{p}{u}_{p,g}^{2}\kappa ^{2}d\kappa \ \ \ \ \ \ \ \ {\text{for isotropic conductivity}}.}
With the Debye model (a single group velocity u p,g , and a specific heat capacity calculated above), this becomes k p = ( 48 π 2 ) 1 / 3 k B 3 T 3 a h P 2 T D ∫ 0 T / T D τ p x 4 e x ( e x − 1 ) 2 d x , {\displaystyle k_{p}=\left(48\pi ^{2}\right)^{1/3}{\frac {k_{\mathrm {B} }^{3}T^{3}}{ah_{\mathrm {P} }^{2}T_{\mathrm {D} }}}\int _{0}^{T/T_{\mathrm {D} }}\tau _{p}{\frac {x^{4}e^{x}}{\left(e^{x}-1\right)^{2}}}dx,}
where a is the lattice constant a = n −1/3 for a cubic lattice, and n is the atomic number density. Slack phonon conductivity model mainly considering acoustic phonon scattering (three-phonon interaction) is given as [ 27 ] [ 28 ] k p = k p , S = 3.1 × 10 12 ⟨ M ⟩ V a 1 / 3 T D , ∞ 3 T ⟨ γ G 2 ⟩ N o 2 / 3 high temperatures ( T > 0.2 T D , phonon-phonon scattering only) , {\displaystyle k_{p}=k_{p,S}={\frac {3.1\times 10^{12}\langle M\rangle V_{a}^{1/3}T_{D,\infty }^{3}}{T\langle \gamma _{G}^{2}\rangle N_{o}^{2/3}}}\qquad {\text{ high temperatures }}(T>0.2T_{D},{\text{ phonon-phonon scattering only)}},} where ⟨ M ⟩ is the mean atomic weight of the atoms in the primitive cell, V a =1/ n is the average volume per atom, T D,∞ is the high-temperature Debye temperature, T is the temperature, N o is the number of atoms in the primitive cell, and ⟨γ 2 G ⟩ is the mode-averaged square of the Grüneisen constant or parameter at high temperatures. This model is widely tested with pure nonmetallic crystals, and the overall agreement is good, even for complex crystals.
Based on these kinetics and atomic-structure considerations, a material of high crystallinity with strong interatomic interactions, composed of light atoms (such as diamond or graphene), is expected to have a large phonon conductivity. Solids with more than one atom in the smallest unit cell representing the lattice have two types of phonons, i.e., acoustic and optical. (Acoustic phonons are in-phase movements of atoms about their equilibrium positions, while optical phonons are out-of-phase movements of adjacent atoms in the lattice.) Optical phonons have higher energies (frequencies), but make a smaller contribution to conduction heat transfer, because of their smaller group velocity and occupancy.
Phonon transport across heterostructure boundaries (represented by the phonon boundary resistance R p,b ) is modeled with boundary-scattering approximations, namely the acoustic mismatch and diffuse mismatch models. [ 29 ] Larger phonon transmission (small R p,b ) occurs at boundaries where the material pair has similar phonon properties ( u p , D p , etc.); in contrast, a large R p,b occurs when one material is softer (lower cut-off phonon frequency) than the other.
Quantum energy states of the electron are found using the electron quantum Hamiltonian, which is generally composed of kinetic (- ħ 2 ∇ 2 /2 m e ) and potential energy terms ( φ e ). The atomic orbital , a mathematical function describing the wave-like behavior of either an electron or a pair of electrons in an atom , can be found from the Schrödinger equation with this electron Hamiltonian. Hydrogen-like atoms (a nucleus and a single electron) allow a closed-form solution of the Schrödinger equation with the electrostatic potential (the Coulomb law ). The Schrödinger equation of atoms or atomic ions with more than one electron has not been solved analytically, because of the Coulomb interactions among electrons. Thus, numerical techniques are used, and an electron configuration is approximated as a product of simpler hydrogen-like atomic orbitals (isolated-electron orbitals). Molecules with multiple atoms (nuclei and their electrons) have molecular orbitals (MO, mathematical functions for the wave-like behavior of an electron in a molecule), which are obtained from simplified solution techniques such as the linear combination of atomic orbitals (LCAO). The molecular orbitals are used to predict chemical and physical properties, and the difference between the highest occupied molecular orbital ( HOMO ) and the lowest unoccupied molecular orbital ( LUMO ) is a measure of the excitability of the molecule.
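For the hydrogen-like atom mentioned above, the closed-form bound-state energies are the familiar E_n = −Z² E_Ry/n² (E_Ry ≈ 13.6 eV); a small sketch:

```python
# Sketch: closed-form electron energy levels of a hydrogen-like atom
# (a nucleus of charge Z*e_c and a single electron), E_n = -Z^2 E_Ry / n^2,
# following from the Schrodinger equation with the Coulomb potential.
E_Ry = 13.6057  # Rydberg unit of energy, eV

def hydrogenic_level(n, Z=1):
    """Bound-state energy (eV) for principal quantum number n and nuclear charge Z."""
    return -Z**2 * E_Ry / n**2

for n in (1, 2, 3):
    print(f"n = {n}: E = {hydrogenic_level(n):8.3f} eV")

# photon energy for the n = 2 -> 1 transition (about 10.2 eV)
print(f"2 -> 1 transition: {hydrogenic_level(2) - hydrogenic_level(1):.2f} eV")
```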
In a crystal structure of metallic solids, the free electron model (zero potential, φ e = 0) for the behavior of valence electrons is used. However, in a periodic lattice (crystal) , there is a periodic crystal potential, so the electron Hamiltonian becomes [ 19 ] H e = − ℏ 2 2 m e ∇ 2 + φ c ( x ) , {\displaystyle \mathrm {H} _{e}=-{\frac {\hbar ^{2}}{2m_{e}}}\nabla ^{2}+\varphi _{c}(\mathbf {x} ),} where m e is the electron mass, and the periodic potential is expressed as φ c ( x ) = Σ g φ g exp[ i ( g ∙ x )] ( g : reciprocal lattice vector). The time-independent Schrödinger equation with this Hamiltonian is given as (the eigenvalue equation) H e ψ e , x ( x ) = E e ( κ e ) ψ e , x ( x ) , {\displaystyle \mathrm {H} _{e}\psi _{e,\mathbf {x} }(\mathbf {x} )=E_{e}({\boldsymbol {\kappa }}_{e})\psi _{e,\mathbf {x} }(\mathbf {x} ),} where the eigenfunction ψ e,κ is the electron wave function, and the eigenvalue E e ( κ e ) is the electron energy ( κ e : electron wavevector). The relation between the wavevector κ e and the energy E e provides the electronic band structure . In practice, a lattice is a many-body system including interactions between electrons and nuclei in the potential, and such a calculation can be too intricate to solve directly. Thus, many approximate techniques have been suggested, and one of them is density functional theory (DFT), which uses functionals of the spatially dependent electron density instead of the full interactions. DFT is widely used in ab initio software ( ABINIT , CASTEP , Quantum ESPRESSO , SIESTA , VASP , WIEN2k , etc.). The electron specific heat is based on the energy states and the occupancy distribution (the Fermi–Dirac statistics ). In general, the heat capacity of electrons is small except at very high temperature, when they are in thermal equilibrium with phonons (the lattice). Electrons contribute to heat conduction (in addition to charge carrying) in solids, especially in metals. The thermal conductivity tensor of a solid is the sum of the electronic and phonon thermal conductivity tensors, K = K e + K p .
Electrons are affected by two thermodynamic forces [from the charge, ∇( E F / e c ) where E F is the Fermi level and e c is the electron charge and temperature gradient, ∇(1/ T )] because they carry both charge and thermal energy, and thus electric current j e and heat flow q are described with the thermoelectric tensors ( A ee , A et , A te , and A tt ) from the Onsager reciprocal relations [ 30 ] as j e = A e e ⋅ ∇ E F e c + A e t ⋅ ∇ 1 T , and {\displaystyle \mathbf {j} _{e}=\mathbf {A} _{ee}\cdot \nabla {\frac {E_{\mathrm {F} }}{e_{c}}}+\mathbf {A} _{et}\cdot \nabla {\frac {1}{T}},\ \ {\text{and}}} q = A t e ⋅ ∇ E F e c + A t t ⋅ ∇ 1 T . {\displaystyle \mathbf {q} =\mathbf {A} _{te}\cdot \nabla {\frac {E_{\mathrm {F} }}{e_{c}}}+\mathbf {A} _{tt}\cdot \nabla {\frac {1}{T}}.}
Converting these equations to have j e equation in terms of electric field e e and ∇ T and q equation with j e and ∇ T , (using scalar coefficients for isotropic transport, α ee , α et , α te , and α tt instead of A ee , A et , A te , and A tt ) j e = α e e e e − α e t T 2 ∇ T ( e e = α e e − 1 j e + α e e − 1 α e t T 2 ∇ T ) , {\displaystyle \mathbf {j} _{e}=\alpha _{ee}\mathbf {e} _{e}-{\frac {\alpha _{et}}{T^{2}}}\nabla T\qquad (\mathbf {e} _{e}=\alpha _{ee}^{-1}\mathbf {j} _{e}+{\frac {\alpha _{ee}^{-1}\alpha _{et}}{T^{2}}}\nabla T),} q = α t e α e e − 1 j e − α t t − α t e α e e − 1 α e t T 2 ∇ T . {\displaystyle \mathbf {q} =\alpha _{te}\alpha _{ee}^{-1}\mathbf {j} _{e}-{\frac {\alpha _{tt}-\alpha _{te}\alpha _{ee}^{-1}\alpha _{et}}{T^{2}}}\nabla T.}
Electrical conductivity/resistivity σ e (Ω −1 m −1 )/ ρ e (Ω-m), electric thermal conductivity k e (W/m-K) and the Seebeck/Peltier coefficients α S (V/K)/ α P (V) are defined as, σ e = 1 ρ e = α e e , k e = α t t − α t e α e e − 1 α e t T 2 , a n d α S = α e t α e e − 1 T 2 ( α S = α P T ) . {\displaystyle \sigma _{e}={\frac {1}{\rho _{e}}}=\alpha _{ee},\ \ k_{e}={\frac {\alpha _{tt}-\alpha _{te}\alpha _{ee}^{-1}\alpha _{et}}{T^{2}}},\mathrm {and} \ \alpha _{\mathrm {S} }={\frac {\alpha _{et}\alpha _{ee}^{-1}}{T^{2}}}\ \ (\alpha _{\mathrm {S} }=\alpha _{\mathrm {P} }T).}
Various carriers (electrons, magnons , phonons, and polarons ) and their interactions substantially affect the Seebeck coefficient. [ 31 ] [ 32 ] The Seebeck coefficient can be decomposed into two contributions, α S = α S,pres + α S,trans , where α S,pres is the sum of contributions to the carrier-induced entropy change, i.e., α S,pres = α S,mix + α S,spin + α S,vib ( α S,mix : entropy-of-mixing, α S,spin : spin entropy, and α S,vib : vibrational entropy). The other contribution α S,trans is the net energy transferred in moving a carrier divided by qT ( q : carrier charge). The electron's contributions to the Seebeck coefficient are mostly in α S,pres . The α S,mix is usually dominant in lightly doped semiconductors. The change of the entropy-of-mixing upon adding an electron to a system gives the so-called Heikes formula α S , m i x = 1 q ∂ S m i x ∂ N = k B q ln ( 1 − f e o f e o ) , {\displaystyle \alpha _{\mathrm {S,mix} }={\frac {1}{q}}{\frac {\partial S_{\mathrm {mix} }}{\partial N}}={\frac {k_{\mathrm {B} }}{q}}\ln \left({\frac {1-f_{e}^{\mathrm {o} }}{f_{e}^{\mathrm {o} }}}\right),} where f e o = N / N a is the ratio of electrons to sites (carrier concentration). Using the chemical potential ( μ ), the thermal energy ( k B T ) and the Fermi function, the above equation can be expressed in an alternative form, α S,mix = ( k B / q )[( E e − μ )/( k B T )].
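A short sketch of the Heikes formula above, evaluated for a few assumed carrier-to-site ratios f (electrons, so q = −e_c); the fillings chosen are illustrative:

```python
# Sketch of the Heikes formula, alpha_S,mix = (k_B/q) ln[(1 - f)/f],
# with f the carrier-to-site ratio.  Electrons are assumed (q = -e_c);
# the fillings f below are illustrative.
import math

k_B = 1.380649e-23   # J/K
e_c = 1.602177e-19   # C

def heikes_seebeck(f, q=-e_c):
    """Entropy-of-mixing Seebeck coefficient (V/K) for carrier-to-site ratio f."""
    return (k_B / q) * math.log((1.0 - f) / f)

for f in (0.01, 0.1, 0.5, 0.9):
    print(f"f = {f:4.2f}: alpha_S,mix = {heikes_seebeck(f) * 1e6:8.1f} uV/K")
# f = 0.5 gives zero; dilute filling (f << 1) gives a large negative
# coefficient for electrons (q < 0).
```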
Extending the Seebeck effect to spins, a ferromagnetic alloy can be a good example. The contribution to the Seebeck coefficient that results from electrons' presence altering the system's spin entropy is given by α S,spin = Δ S spin / q = ( k B / q )ln[(2 s + 1)/(2 s 0 +1)], where s 0 and s are net spins of the magnetic site in the absence and presence of the carrier, respectively. Many vibrational effects involving electrons also contribute to the Seebeck coefficient; one example is the softening of the vibrational frequencies, which produces a change of the vibrational entropy. The vibrational entropy is the negative derivative of the free energy, i.e., S v i b = − ∂ F m i x ∂ T = 3 N k B T ∫ 0 ω { ℏ ω 2 k B T coth ( ℏ ω 2 k B T ) − ln [ 2 sinh ( ℏ ω 2 k B T ) ] } D p ( ω ) d ω , {\displaystyle S_{\mathrm {vib} }=-{\frac {\partial F_{\mathrm {mix} }}{\partial T}}=3Nk_{\mathrm {B} }T\int _{0}^{\omega }\left\{{\frac {\hbar \omega }{2k_{\mathrm {B} }T}}\coth \left({\frac {\hbar \omega }{2k_{\mathrm {B} }T}}\right)-\ln \left[2\sinh \left({\frac {\hbar \omega }{2k_{\mathrm {B} }T}}\right)\right]\right\}D_{p}(\omega )d\omega ,} where D p ( ω ) is the phonon density-of-states for the structure. For the high-temperature limit and series expansions of the hyperbolic functions, the above is simplified as α S,vib = (Δ S vib / q ) = ( k B / q )Σ i (-Δ ω i / ω i ).
The Seebeck coefficient derived in the above Onsager formulation is the mixing component α S,mix , which dominates in most semiconductors. The vibrational component in high-band gap materials such as B 13 C 2 is very important. Considering the microscopic transport (transport is a result of nonequilibrium), j e = − e c ℏ 3 ∑ p u e f e ′ = − e c ℏ 3 k B T ∑ p u e τ e ( − ∂ f e o ∂ E e ) ( u e ⋅ F t e ) , {\displaystyle \mathbf {j} _{e}=-{\frac {e_{c}}{\hbar ^{3}}}\sum _{p}\mathbf {u} _{e}f_{e}^{\prime }=-{\frac {e_{c}}{\hbar ^{3}k_{\mathrm {B} }T}}\sum _{p}\mathbf {u} _{e}\tau _{e}\left(-{\frac {\partial f_{e}^{\mathrm {o} }}{\partial E_{e}}}\right)(\mathbf {u} _{e}\cdot \mathbf {F} _{te}),} q = 1 ℏ 3 ∑ p ( E e − E F ) u e f e ′ = 1 ℏ 3 k B T ∑ p u e τ e ( − ∂ f e o ∂ E e ) ( E e − E F ) ( u e ⋅ F t e ) , {\displaystyle \mathbf {q} ={\frac {1}{\hbar ^{3}}}\sum _{p}(E_{e}-E_{\mathrm {F} })\mathbf {u} _{e}f_{e}^{\prime }={\frac {1}{\hbar ^{3}k_{\mathrm {B} }T}}\sum _{p}\mathbf {u} _{e}\tau _{e}\left(-{\frac {\partial f_{e}^{\mathrm {o} }}{\partial E_{e}}}\right)(E_{e}-E_{\mathrm {F} })(\mathbf {u} _{e}\cdot \mathbf {F} _{te}),}
where u e is the electron velocity vector, f e ( f e o ) is the electron nonequilibrium (equilibrium) distribution, τ e is the electron scattering time, E e is the electron energy, and F te is the electric and thermal forces from ∇( E F / e c ) and ∇(1/ T ).
Relating the thermoelectric coefficients to the microscopic transport equations for j e and q, the thermal, electric, and thermoelectric properties are calculated. Thus, k e increases with the electrical conductivity σ e and temperature T , as the Wiedemann–Franz law presents [ k e /( σ e T ) = (1/3)( πk B / e c ) 2 = 2.44 × 10 −8 W-Ω/K 2 ]. Electron transport (represented as σ e ) is a function of carrier density n e,c and electron mobility μ e ( σ e = e c n e,c μ e ). μ e is determined by electron scattering rates γ ˙ e {\displaystyle {\dot {\gamma }}_{e}} (or relaxation time, τ e = 1 / γ ˙ e {\displaystyle \tau _{e}=1/{\dot {\gamma }}_{e}} ) in various interaction mechanisms including interactions with other electrons, phonons, impurities and boundaries.
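As a quick illustration of the Wiedemann–Franz relation quoted above, the sketch below estimates k_e from an assumed electrical conductivity (roughly that of copper) at room temperature:

```python
# Sketch of the Wiedemann-Franz estimate of the electronic thermal
# conductivity, k_e = L * sigma_e * T, with the Lorenz number
# L = (1/3)(pi k_B / e_c)^2 = 2.44e-8 W-Ohm/K^2.  The electrical
# conductivity below is an assumed input (roughly that of copper).
import math

k_B = 1.380649e-23    # J/K
e_c = 1.602177e-19    # C
L = (math.pi * k_B / e_c)**2 / 3.0   # Lorenz number, W-Ohm/K^2

sigma_e = 5.9e7       # electrical conductivity, 1/(Ohm-m) (assumed)
T = 300.0             # K

k_e = L * sigma_e * T
print(f"L = {L:.3e} W-Ohm/K^2, k_e = {k_e:.0f} W/m-K")
```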
Electrons interact with the other principal energy carriers. Electrons accelerated by an electric field relax through energy conversion to phonons (in semiconductors, mostly optical phonons), which is called Joule heating . Energy conversion between electric potential and phonon energy is considered in thermoelectrics such as Peltier cooling and the thermoelectric generator. Also, the study of interactions with photons is central in optoelectronic applications (i.e., the light-emitting diode , solar photovoltaic cells , etc.). Interaction rates or energy conversion rates can be evaluated using the Fermi golden rule (from perturbation theory) with an ab initio approach.
A fluid particle is the smallest unit (an atom or molecule) of the fluid phase (gas, liquid or plasma) that does not break any chemical bond. The energy of a fluid particle is divided into potential, electronic, translational, vibrational, and rotational energies. The heat (thermal) energy storage in a fluid particle is through the temperature-dependent particle motion (translational, vibrational, and rotational energies). The electronic energy is included only if the temperature is high enough to ionize or dissociate the fluid particles or to include other electronic transitions. These quantum energy states of the fluid particles are found using their respective quantum Hamiltonians. These are H f , t = −( ħ 2 /2 m )∇ 2 , H f,v = −( ħ 2 /2 m )∇ 2 + Γ x 2 /2 and H f , r = −( ħ 2 /2 I f )∇ 2 for the translational, vibrational and rotational modes (Γ: spring constant , I f : the moment of inertia of the molecule). From the Hamiltonian, the quantized fluid particle energy states E f and partition functions Z f [with the Maxwell–Boltzmann (MB) occupancy distribution ] are found as [ 33 ]
Here, g f is the degeneracy, n , l , and j are the translational, vibrational and rotational quantum numbers, T f,v is the characteristic temperature for vibration (= ħω f,v / k B , ω f,v : vibration frequency), and T f,r is the rotational temperature [= ħ 2 /(2 I f k B )]. The average specific internal energy is related to the partition function through Z f , e f = ( k B T 2 / m ) ( ∂ l n Z f / ∂ T ) | N , V . {\displaystyle e_{f}=(k_{\mathrm {B} }T^{2}/m)(\partial \mathrm {ln} Z_{f}/\partial T)|_{N,V}.}
With the energy states and the partition function, the fluid particle specific heat capacity c v,f is the summation of the contributions from the various kinetic energies (for a non-ideal gas the potential energy is also added). Because the total number of degrees of freedom in a molecule is determined by its atomic configuration, c v,f has different formulas depending on the configuration, [ 33 ]
where R g is the gas constant (= N A k B , N A : the Avogadro constant) and M is the molecular mass (kg/kmol). (For the polyatomic ideal gas, N o is the number of atoms in a molecule.) In a gas, the constant-pressure specific heat capacity c p,f has a larger value, and the difference depends on the temperature T , the volumetric thermal expansion coefficient β and the isothermal compressibility κ [ c p,f – c v,f = Tβ 2 /( ρ f κ ), ρ f : the fluid density]. For dense fluids, the interactions between the particles (the van der Waals interaction) should be included, and c v,f and c p,f change accordingly.
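As an illustration of how the active degrees of freedom set c_v,f, the sketch below evaluates the standard ideal-gas results c_v = (dof/2)R_g/M, with three translational degrees of freedom for a monatomic gas and three translational plus two rotational for a diatomic gas at moderate temperature (vibration assumed frozen out); the gases chosen are illustrative. For an ideal gas the difference c_p − c_v reduces to R_g/M.

```python
# Sketch: ideal-gas specific heats from active degrees of freedom,
# c_v = (dof/2) R_g / M (dof = 3 monatomic, 5 diatomic at moderate T,
# vibration assumed frozen out); the gases chosen are illustrative.
R_g = 8314.46  # universal gas constant, J/(kmol-K)

def cv_ideal(dof, M):
    """Constant-volume specific heat, J/(kg-K), for 'dof' active degrees of freedom."""
    return 0.5 * dof * R_g / M

for name, dof, M in (("Ar (monatomic)", 3, 39.95), ("N2 (diatomic)", 5, 28.01)):
    cv = cv_ideal(dof, M)
    cp = cv + R_g / M          # ideal gas: c_p - c_v = R_g/M
    print(f"{name:15s} c_v = {cv:6.0f}  c_p = {cp:6.0f} J/kg-K")
```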
The net motion of particles (under gravity or external pressure) gives rise to the convection heat flux q u = ρ f c p,f u f T . Conduction heat flux q k for ideal gas is derived with the gas kinetic theory or the Boltzmann transport equations, and the thermal conductivity is k f = 1 3 n f c p , f ⟨ u f 2 ⟩ τ f - f , {\displaystyle k_{f}={\tfrac {1}{3}}n_{f}c_{p,f}\langle u_{f}^{2}\rangle \tau _{f{\mbox{-}}f},} where ⟨ u f 2 ⟩ 1/2 is the RMS ( root mean square ) thermal velocity (3 k B T / m from the MB distribution function, m : atomic mass) and τ f-f is the relaxation time (or intercollision time period) [(2 1/2 π d 2 n f ⟨ u f ⟩) −1 from the gas kinetic theory, ⟨ u f ⟩: average thermal speed (8 k B T / πm ) 1/2 , d : the collision diameter of fluid particle (atom or molecule), n f : fluid number density].
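The kinetic-theory expressions above can also be used for a rough estimate of the conductivity of a dilute monatomic gas. The sketch below uses the elementary variant k ≈ (1/3) n c_v ⟨u⟩ λ, with the heat capacity taken per particle (3k_B/2 for a monatomic gas), the mean thermal speed, and λ = 1/(√2 π d² n); the argon collision diameter is an assumed value, so the result is an order-of-magnitude estimate only.

```python
# Sketch: order-of-magnitude thermal conductivity of a dilute monatomic gas
# from elementary kinetic theory, k = (1/3) n c_v <u> lambda, with
# lambda = 1/(sqrt(2) pi d^2 n).  The collision diameter is an assumed value.
import math

k_B = 1.380649e-23          # J/K
T, P = 300.0, 101325.0      # K, Pa
m = 39.95 * 1.6605e-27      # argon atomic mass, kg
d = 3.4e-10                 # assumed collision diameter, m

n_f = P / (k_B * T)                                   # number density, 1/m^3
u_mean = math.sqrt(8.0 * k_B * T / (math.pi * m))     # mean thermal speed, m/s
lam = 1.0 / (math.sqrt(2.0) * math.pi * d**2 * n_f)   # mean free path, m
cv_particle = 1.5 * k_B                               # monatomic, J/K per particle

k_f = (1.0 / 3.0) * n_f * cv_particle * u_mean * lam
print(f"lambda = {lam * 1e9:.0f} nm, k_f = {k_f:.4f} W/m-K")
# Elementary kinetic theory gives the right order of magnitude
# (measured argon is about 0.018 W/m-K at 300 K).
```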
k f is also calculated using molecular dynamics (MD), which simulates physical movements of the fluid particles with the Newton equations of motion (classical) and force field (from ab initio or empirical properties). For calculation of k f , the equilibrium MD with Green–Kubo relations , which express the transport coefficients in terms of integrals of time correlation functions (considering fluctuation), or nonequilibrium MD (prescribing heat flux or temperature difference in simulated system) are generally employed.
Fluid particles can interact with the other principal energy carriers. Vibrational or rotational modes, which have relatively high energy, are excited or decay through interaction with photons. Gas lasers employ the interaction kinetics between fluid particles and photons, and laser cooling has also been considered for the CO 2 gas laser. [ 34 ] [ 35 ] Also, fluid particles can be adsorbed on solid surfaces ( physisorption and chemisorption ), and the frustrated vibrational modes in adsorbates (fluid particles) decay by creating e − - h + pairs or phonons. These interaction rates are also calculated through ab initio calculations on the fluid particle and the Fermi golden rule. [ 36 ]
The photon is the quantum of electromagnetic (EM) radiation and the energy carrier for radiation heat transfer . The EM wave is governed by the classical Maxwell equations , and the quantization of the EM wave is used for phenomena such as the blackbody radiation (in particular to explain the ultraviolet catastrophe ). The quantum of EM wave (photon) energy at angular frequency ω ph is E ph = ħω ph , and it follows the Bose–Einstein distribution function ( f ph ). The photon Hamiltonian for the quantized radiation field ( second quantization ) is [ 37 ] [ 38 ] H p h = 1 2 ∫ ( ε o e e 2 + μ o − 1 b e 2 ) d V = ∑ α ℏ ω p h , α ( c α † c α + 1 2 ) , {\displaystyle \mathrm {H} _{ph}={\frac {1}{2}}\int \left(\varepsilon _{\mathrm {o} }\mathbf {e} _{e}^{2}+\mu _{\mathrm {o} }^{-1}\mathbf {b} _{e}^{2}\right)dV=\sum _{\alpha }\hbar \omega _{ph,\alpha }\left(c_{\alpha }^{\dagger }c_{\alpha }+{\frac {1}{2}}\right),} where e e and b e are the electric and magnetic fields of the EM radiation, ε o and μ o are the free-space permittivity and permeability, V is the interaction volume, ω ph,α is the photon angular frequency for the α mode and c α † and c α are the photon creation and annihilation operators. The vector potential a e of EM fields ( e e = −∂ a e /∂ t and b e = ∇× a e ) is a e ( x , t ) = ∑ α ( ℏ 2 ε o ω p h , α V ) 1 / 2 s p h , α ( c α e i κ α ⋅ x + c α † e − i κ α ⋅ x ) , {\displaystyle \mathbf {a} _{e}(\mathbf {x} ,t)=\sum _{\alpha }\left({\frac {\hbar }{2\varepsilon _{\mathrm {o} }\omega _{ph,\alpha }V}}\right)^{1/2}\mathbf {s} _{ph,\alpha }\left(c_{\alpha }e^{i{\boldsymbol {\kappa }}_{\alpha }\cdot \mathbf {x} }+c_{\alpha }^{\dagger }e^{-i{\boldsymbol {\kappa }}_{\alpha }\cdot \mathbf {x} }\right),} where s ph,α is the unit polarization vector, κ α is the wave vector.
Blackbody radiation among various types of photon emission employs the photon gas model with thermalized energy distribution without interphoton interaction. From the linear dispersion relation (i.e., dispersionless), phase and group speeds are equal ( u ph = d ω ph / dκ = ω ph / κ , u ph : photon speed) and the Debye (used for dispersionless photon) density of states is D ph,b,ω dω = ω ph 2 dω ph / π 2 u ph 3 . With D ph,b,ω and equilibrium distribution f ph , photon energy spectral distribution dI b,ω or dI b,λ ( λ ph : wavelength) and total emissive power E b are derived as d I b , ω = D p h , b , ω f p h u p h d ω p h 4 π = ℏ ω p h 3 4 π 3 u p h 2 1 e ℏ ω p h / k B T − 1 d ω p h or d I b , λ = 4 π ℏ u p h 2 d λ p h λ p h 5 ( e 2 π ℏ u p h / λ p h k B T − 1 ) {\displaystyle dI_{b,\omega }={\frac {D_{ph,b,\omega }f_{ph}u_{ph}d\omega _{ph}}{4\pi }}={\frac {\hbar \omega _{ph}^{3}}{4\pi ^{3}u_{ph}^{2}}}{\frac {1}{e^{\hbar \omega _{ph}/k_{\mathrm {B} }T}-1}}d\omega _{ph}\ {\text{or}}\ dI_{b,\lambda }={\frac {4\pi \hbar u_{ph}^{2}d\lambda _{ph}}{\lambda _{ph}^{5}(e^{2\pi \hbar u_{ph}/\lambda _{ph}k_{\mathrm {B} }T}-1)}}} ( Planck law ), E b = ∫ 0 ∞ d E b , λ = σ S B T 4 , where σ S B = π 2 k B 4 60 ℏ 3 u p h 2 {\displaystyle E_{b}=\int _{0}^{\infty }dE_{b,\lambda }=\sigma _{\mathrm {SB} }T^{4}\ {\text{, where}}\ \sigma _{\mathrm {SB} }={\frac {\pi ^{2}k_{\mathrm {B} }^{4}}{60\hbar ^{3}u_{ph}^{2}}}} ( Stefan–Boltzmann law ).
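A minimal numerical check of the two results above: integrating the Planck spectral emissive power over wavelength (here written per unit wavelength, E_b,λ = 2πhc²/{λ⁵[exp(hc/λk_BT) − 1]}) recovers σ_SB T⁴. The temperature and integration grid are illustrative choices.

```python
# Sketch: numerical check that integrating the Planck spectral emissive power
# over wavelength recovers the Stefan-Boltzmann law E_b = sigma_SB * T^4.
# Temperature and wavelength grid are illustrative choices.
import numpy as np

h, c, k_B = 6.62607e-34, 2.99792458e8, 1.380649e-23
sigma_SB = 5.670374e-8          # W/m^2-K^4

def E_b_lam(lam, T):
    """Hemispherical spectral emissive power of a blackbody, W/m^2 per m of wavelength."""
    return 2.0 * np.pi * h * c**2 / (lam**5 * np.expm1(h * c / (lam * k_B * T)))

T = 1000.0                                   # K
lam = np.logspace(-7, -3, 4000)              # 0.1 um to 1 mm
y = E_b_lam(lam, T)
total = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(lam))   # trapezoidal rule
print(f"integral = {total:.0f} W/m^2,  sigma_SB*T^4 = {sigma_SB * T**4:.0f} W/m^2")
```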
Compared to blackbody radiation, laser emission has high directionality (small solid angle ΔΩ) and spectral purity (narrow bands Δ ω ). Lasers range from the far-infrared to the X-ray/γ-ray regimes, based on the resonant transition ( stimulated emission ) between electronic energy states. [ 39 ]
Near-field radiation from thermally excited dipoles and other electric/magnetic transitions is very effective within a short distance (order of wavelength) from emission sites. [ 40 ] [ 41 ] [ 42 ]
The BTE for photon particle momentum p ph = ħω ph s / u ph along direction s experiencing absorption/emission s ˙ f , p h − e {\displaystyle \textstyle {\dot {s}}_{f,ph-e}\ } (= u ph σ ph,ω [ f ph ( ω ph , T ) - f ph ( s )], σ ph,ω : spectral absorption coefficient ), and generation/removal s ˙ f , p h , i {\displaystyle \textstyle {\dot {s}}_{f,ph,i}} , is [ 43 ] [ 44 ] ∂ f p h ∂ t + u p h s ⋅ ∇ f p h = ∂ f p h ∂ t | s + u p h σ p h , ω [ f p h ( ω p h , T ) − f p h ( s ) ] + s ˙ f , p h , i . {\displaystyle {\frac {\partial f_{ph}}{\partial t}}+u_{ph}\mathbf {s} \cdot \nabla f_{ph}=\left.{\frac {\partial f_{ph}}{\partial t}}\right|_{s}+u_{ph}\sigma _{ph,\omega }[f_{ph}(\omega _{ph},T)-f_{ph}(\mathbf {s} )]+{\dot {s}}_{f,ph,i}.}
In terms of radiation intensity ( I ph,ω = u ph f ph ħω ph D ph,ω /4 π , D ph,ω : photon density of states), this is called the equation of radiative transfer (ERT) [ 44 ] ∂ I p h , ω ( ω p h , s ) u p h ∂ t + s ⋅ ∇ I p h , ω ( ω p h , s ) = ∂ I p h , ω ( ω p h , s ) u p h ∂ t | s + σ p h , ω [ I p h , ω ( ω p h , T ) − I p h ( ω p h , s ) ] + s ˙ p h , i . {\displaystyle {\frac {\partial I_{ph,\omega }(\omega _{ph},\mathbf {s} )}{u_{ph}\partial t}}+\mathbf {s} \cdot \nabla I_{ph,\omega }(\omega _{ph},\mathbf {s} )=\left.{\frac {\partial I_{ph,\omega }(\omega _{ph},\mathbf {s} )}{u_{ph}\partial t}}\right|_{s}+\sigma _{ph,\omega }[I_{ph,\omega }(\omega _{ph},T)-I_{ph}(\omega _{ph},\mathbf {s} )]+{\dot {s}}_{ph,i}.} The net radiative heat flux vector is q r = q p h = ∫ 0 ∞ ∫ 4 π s I p h , ω d Ω d ω . {\textstyle \mathbf {q} _{r}=\mathbf {q} _{ph}=\int _{0}^{\infty }\int _{4\pi }\mathbf {s} I_{ph,\omega }d\Omega d\omega .}
From the Einstein population rate equation , spectral absorption coefficient σ ph,ω in ERT is, [ 45 ] σ p h , ω = ℏ ω γ ˙ p h , a n e u p h , {\displaystyle \sigma _{ph,\omega }={\frac {\hbar \omega {\dot {\gamma }}_{ph,a}n_{e}}{u_{ph}}},} where γ ˙ p h , a {\displaystyle {\dot {\gamma }}_{ph,a}} is the interaction probability (absorption) rate or the Einstein coefficient B 12 (J −1 m 3 s −1 ), which gives the probability per unit time per unit spectral energy density of the radiation field (1: ground state, 2: excited state), and n e is electron density (in ground state). This can be obtained using the transition dipole moment μ e with the FGR and relationship between Einstein coefficients. Averaging σ ph,ω over ω gives the average photon absorption coefficient σ ph .
For the case of an optically thick medium of length L , i.e., σ ph L >> 1, and using the gas kinetic theory, the photon conductivity k ph is 16 σ SB T 3 /3 σ ph ( σ SB : Stefan–Boltzmann constant , σ ph : average photon absorption coefficient), and the photon heat capacity n ph c v,ph is 16 σ SB T 3 / u ph .
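A short sketch of the optically thick photon conductivity above, k_ph = 16σ_SB T³/(3σ_ph), evaluated for an assumed average absorption coefficient:

```python
# Sketch of the optically thick photon conductivity,
# k_ph = 16 sigma_SB T^3 / (3 sigma_ph), for an assumed absorption coefficient.
sigma_SB = 5.670374e-8     # W/m^2-K^4

def photon_conductivity(T, sigma_ph):
    """Photon conductivity (W/m-K) in an optically thick medium."""
    return 16.0 * sigma_SB * T**3 / (3.0 * sigma_ph)

sigma_ph = 100.0   # 1/m, assumed average absorption coefficient
for T in (500.0, 1000.0, 2000.0):
    print(f"T = {T:6.0f} K: k_ph = {photon_conductivity(T, sigma_ph):7.3f} W/m-K")
```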
Photons have the largest range of energy and are central in a variety of energy conversions. Photons interact with electric and magnetic entities, for example, electric dipoles, which in turn are excited by optical phonons or fluid particle vibration, or the transition dipole moments of electronic transitions. In heat transfer physics, the interaction kinetics of the photon is treated using the perturbation theory (the Fermi golden rule) and the interaction Hamiltonian. The photon-electron interaction is [ 46 ] H p h − e = − e c m e ( a + a † ) a e ⋅ p e = − ( ℏ ω p h , α 2 ε o V ) 1 / 2 ( s p h , α ⋅ e c x e ) ( a + a † ) ( c e i κ ⋅ x + c † e − i κ ⋅ x ) , {\displaystyle \mathrm {H} _{ph-e}=-{\frac {e_{c}}{m_{e}}}\left(a+a^{\dagger }\right)\mathbf {a} _{e}\cdot \mathbf {p} _{e}=-\left({\frac {\hbar \omega _{ph,\alpha }}{2\varepsilon _{o}V}}\right)^{1/2}(\mathbf {s} _{ph,\alpha }\cdot e_{c}\mathbf {x} _{e})\left(a+a^{\dagger }\right)\left(ce^{i\mathrm {\kappa } \cdot \mathrm {x} }+c^{\dagger }e^{-i\mathrm {\kappa } \cdot \mathrm {x} }\right),} where p e is the dipole moment vector and a † and a are the creation and annihilation operators of the internal motion of the electron. Photons also participate in ternary interactions, e.g., phonon-assisted photon absorption/emission (transition of electron energy level). [ 47 ] [ 48 ] Vibrational modes in fluid particles can decay or become excited by emitting or absorbing photons. Examples are solid-state and molecular-gas laser cooling. [ 49 ] [ 50 ] [ 51 ]
Using ab initio calculations based on the first principles along with EM theory, various radiative properties such as dielectric function ( electrical permittivity , ε e,ω ), spectral absorption coefficient ( σ ph,ω ), and the complex refraction index ( m ω ), are calculated for various interactions between photons and electric/magnetic entities in matter. [ 52 ] [ 53 ] For example, the imaginary part ( ε e,c,ω ) of complex dielectric function ( ε e,ω = ε e,r,ω + i ε e,c,ω ) for electronic transition across a bandgap is [ 3 ] ε e , c , ω = 4 π 2 ω 2 V ∑ i ∈ V B , j ∈ C B ∑ κ w κ | p i j | 2 δ ( E κ , j − E κ , i − ℏ ω ) , {\displaystyle \varepsilon _{e,c,\omega }={\frac {4\pi ^{2}}{\omega ^{2}V}}\sum _{i\in \mathrm {VB} ,j\in \mathrm {CB} }\sum _{\kappa }w_{\kappa }|p_{ij}|^{2}\delta (E_{\kappa ,j}-E_{\kappa ,i}-\hbar \omega ),} where V is the unit-cell volume, VB and CB denote the valence and conduction bands, w κ is the weight associated with a κ -point, and p ij is the transition momentum matrix element.
The real part is ε e,r,ω is obtained from ε e,c,ω using the Kramers-Kronig relation [ 54 ] ε e , r , ω = 1 + 4 π P ∫ 0 ∞ d ω ′ ω ′ ε e , c , ω ′ ω ′ 2 − ω 2 . {\displaystyle \varepsilon _{e,r,\omega }=1+{\frac {4}{\pi }}\mathbb {P} \int _{0}^{\infty }\mathrm {d} \omega '{\frac {\omega '\varepsilon _{e,c,\omega '}}{\omega '^{2}-\omega ^{2}}}.} Here, P {\displaystyle \mathbb {P} } denotes the principal value of the integral .
In another example, for the far-IR regions where the optical phonons are involved, the dielectric function ( ε e,ω ) is calculated as ε e , ω ε e , ∞ = 1 + ∑ j ω L O , j 2 − ω T O , j 2 ω T O , j 2 − ω 2 − i γ ω , {\displaystyle {\frac {\varepsilon _{e,\omega }}{\varepsilon _{e,\infty }}}=1+\sum _{j}{\frac {\omega _{\mathrm {LO} ,j}^{2}-\omega _{\mathrm {TO} ,j}^{2}}{\omega _{\mathrm {TO} ,j}^{2}-\omega ^{2}-i\gamma \omega }},} where LO and TO denote the longitudinal and transverse optical phonon modes, j runs over all the IR-active modes, and γ is the temperature-dependent damping term in the oscillator model. ε e,∞ is the high-frequency dielectric permittivity, which can be calculated by a DFT calculation when the ions are treated as an external potential.
From these dielectric function ( ε e,ω ) calculations (e.g., Abinit , VASP , etc.), the complex refractive index m ω (= n ω + i κ ω , n ω : refraction index and κ ω : extinction index) is found, i.e., m ω 2 = ε e,ω = ε e,r,ω + i ε e,c,ω . The surface reflectance R of an ideal surface with normal incidence from vacuum or air is given as [ 55 ] R = [( n ω - 1) 2 + κ ω 2 ]/[( n ω + 1) 2 + κ ω 2 ]. The spectral absorption coefficient is then found from σ ph,ω = 2 ω κ ω / u ph . The spectral absorption coefficients for various electric entities are listed in the table below. [ 56 ] | https://en.wikipedia.org/wiki/Heat_transfer_physics |
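As a brief numerical illustration of the two relations just above (normal-incidence reflectance and spectral absorption coefficient from m_ω = n_ω + iκ_ω), the sketch below uses an assumed set of optical constants at 500 nm; the values are illustrative, not material data from the text.

```python
# Sketch: normal-incidence reflectance and spectral absorption coefficient
# from an assumed complex refractive index m = n + i*kappa,
# R = [(n-1)^2 + kappa^2] / [(n+1)^2 + kappa^2], sigma_ph = 2*omega*kappa/u_ph.
import math

u_ph = 2.998e8     # photon (light) speed in vacuum, m/s

def reflectance(n, kappa):
    return ((n - 1.0)**2 + kappa**2) / ((n + 1.0)**2 + kappa**2)

def absorption_coeff(omega, kappa):
    """Spectral absorption coefficient, 1/m."""
    return 2.0 * omega * kappa / u_ph

# assumed optical constants at a wavelength of 500 nm
n, kappa, lam = 3.0, 0.5, 500e-9
omega = 2.0 * math.pi * u_ph / lam
print(f"R = {reflectance(n, kappa):.3f}, sigma_ph = {absorption_coeff(omega, kappa):.2e} 1/m")
```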
Fins are extensions on the exterior surfaces of objects that increase the rate of heat transfer to or from the object by increasing convection . This is achieved by increasing the surface area of the body, which in turn increases the heat transfer rate. This is an efficient way of increasing the rate, since the alternatives are to increase either the heat transfer coefficient (which depends on the nature of the materials being used and the conditions of use) or the temperature gradient (which depends on the conditions of use); changing the shape of the body is usually more convenient. Fins are therefore a very popular solution for increasing the heat transfer from surfaces and are widely used in a number of objects. The fin material should preferably have a high thermal conductivity .
In most applications the fin is surrounded by a fluid in motion, [ 1 ] which heats or cools it quickly due to the large surface area, and subsequently the heat gets transferred to or from the body quickly due to the high thermal conductivity of the fin.
In order to design a fin for optimal heat transfer performance with minimal cost, the dimensions and shape of the fin have to be calculated for specific applications. A common way of doing so is by creating a model of the fin and then simulating it under required service conditions.
Consider a body with fins on its outer surface, with air flowing around it.
The heat transfer rate depends on
Modelling of the fins in this case involves experimenting on this physical model and optimizing the number of fins and the fin pitch for maximum performance . [ 2 ]
One of the experimentally obtained equations for heat transfer coefficient for the fin surface for low wind velocities is:
k = 2.11 v 0.71 θ 0.44 a − 0.14 {\displaystyle k=2.11v^{0.71}\theta ^{0.44}a^{-0.14}}
where
k= Fin surface heat transfer coefficient [W/m 2 K ]
a=fin length [mm]
v=wind velocity [km/h]
θ=fin pitch [mm]
Another equation for high fluid velocities, obtained from experiments conducted by Gibson, is
k = 241.7 [ 0.0247 − 0.00148 ( a 0.8 / θ 0.4 ) ] v 0.73 {\displaystyle k=241.7[0.0247-0.00148(a^{0.8}/\theta ^{0.4})]v^{0.73}}
where
k=Fin surface heat transfer coefficient[W/m 2 K ]
a=Fin length[mm]
θ=Fin pitch[mm]
v=Wind velocity[km/h]
A more accurate equation for fin surface heat transfer coefficient is:
k a v g = ( 2.47 − 2.55 / θ 0.4 ) v 0.9 0.0872 θ + 4.31 {\displaystyle k_{avg}=(2.47-2.55/\theta ^{0.4})v^{0.9}0.0872\theta +4.31}
where
k (avg)= Fin surface heat transfer coefficient[W/m 2 K ]
θ=Fin pitch[mm]
v=Wind velocity[km/h]
All these equations can be used to evaluate average heat transfer coefficient for various fin designs.
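The low- and high-velocity correlations above are straightforward to evaluate; the sketch below implements them with the units stated in the text (fin length a and pitch θ in mm, wind velocity v in km/h, coefficient in W/m²K). The sample fin geometry and velocities are illustrative assumptions.

```python
# Sketch implementing the two empirical fin-surface heat-transfer-coefficient
# correlations quoted above.  Inputs follow the units stated in the text:
# fin length a and pitch theta in mm, wind velocity v in km/h; output in W/m^2-K.
# The sample geometry and velocities below are illustrative assumptions.

def h_low_velocity(v, theta, a):
    """k = 2.11 v^0.71 theta^0.44 a^-0.14 (low wind velocities)."""
    return 2.11 * v**0.71 * theta**0.44 * a**-0.14

def h_high_velocity(v, theta, a):
    """k = 241.7 [0.0247 - 0.00148 (a^0.8 / theta^0.4)] v^0.73 (Gibson, high velocities)."""
    return 241.7 * (0.0247 - 0.00148 * a**0.8 / theta**0.4) * v**0.73

a, theta = 30.0, 10.0          # fin length and pitch, mm (assumed)
print("low-velocity  (v = 10 km/h):", round(h_low_velocity(10.0, theta, a), 1), "W/m^2-K")
print("high-velocity (v = 60 km/h):", round(h_high_velocity(60.0, theta, a), 1), "W/m^2-K")
```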
The momentum conservation equation for this case is given as follows:
∂ ( ρ v ) ∂ t + v ∇ . ( ρ v ) = − ∇ P + ∇ . τ + F + ρ g {\displaystyle {\partial (\rho v) \over \partial t}+v\nabla .(\rho v)=-\nabla P+\nabla .\tau +F+\rho g}
This is used in combination with the continuity equation.
The energy equation is also needed, which is:
∂ ( ρ E ) ∂ t + ∇ . [ v ( ρ E + p ) ] = ∇ . [ k e f f ∇ T − Σ j h j J j + ( τ . v ) ] {\displaystyle {\partial (\rho E) \over \partial t}+\nabla .[v(\rho E+p)]=\nabla .[k_{eff}\nabla T-\Sigma _{j}h_{j}J_{j}+(\tau .v)]} .
The above equation, on solving, gives the temperature profile for the fluid region.
When solved as a scalar equation, it can be used to calculate the temperatures at the fin and cylinder surfaces, by reducing to:
∇ 2 T + q . k = 1 α ∂ T ∂ t {\displaystyle \nabla ^{2}T+{{\overset {.}{q}} \over k}={1 \over \alpha }{\partial T \over \partial t}}
Where:
q = internal heat generation = 0 (in this case).
Also, ∂T/∂t = 0 due to the steady-state assumption.
These flow and energy equations can be set up and solved in any simulation software , e.g. Fluent . In order to do so, all parameters of the flow and thermal conditions, such as the fluid velocity and the body temperature, have to be specified according to the requirement. Also, the boundary conditions and any assumptions must be specified.
This results in velocity profiles and temperature profiles for various surfaces and this knowledge can be used to design the fin. | https://en.wikipedia.org/wiki/Heat_transfer_through_fins |
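For a straight fin of uniform cross-section, the steady temperature profile along the fin can also be sketched directly from the standard one-dimensional fin equation d²θ/dx² = m²θ, with m² = hP/(kA_c) and θ = T − T_∞ (standard fin theory, not derived in the text above). The finite-difference sketch below uses an assumed geometry, base temperature and heat transfer coefficient, and compares the adiabatic-tip result with the analytic solution θ/θ_b = cosh[m(L − x)]/cosh(mL).

```python
# Sketch: steady 1D temperature profile of a straight fin of uniform
# cross-section, d^2(theta)/dx^2 = m^2 theta, m^2 = h P / (k A_c),
# theta = T - T_inf (standard fin theory), solved by finite differences.
# Geometry, base temperature and h below are assumed, illustrative values.
import numpy as np

k, h = 200.0, 25.0            # fin conductivity W/m-K, surface coefficient W/m^2-K
L, t, w = 0.05, 0.002, 0.05   # fin length, thickness, width, m
A_c, P = t * w, 2.0 * (t + w) # cross-section area and perimeter
m2 = h * P / (k * A_c)

T_base, T_inf = 380.0, 300.0
nx = 101
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]

# linear system: (theta[i-1] - 2 theta[i] + theta[i+1])/dx^2 - m2*theta[i] = 0
A = np.zeros((nx, nx)); b = np.zeros(nx)
A[0, 0] = 1.0; b[0] = T_base - T_inf            # fixed base temperature
A[-1, -1], A[-1, -2] = 1.0, -1.0; b[-1] = 0.0   # adiabatic (insulated) tip
for i in range(1, nx - 1):
    A[i, i - 1] = A[i, i + 1] = 1.0 / dx**2
    A[i, i] = -2.0 / dx**2 - m2

theta = np.linalg.solve(A, b)
tip_fd = T_inf + theta[-1]
tip_an = T_inf + (T_base - T_inf) / np.cosh(np.sqrt(m2) * L)
print(f"tip temperature: {tip_fd:.1f} K (analytic, adiabatic tip: {tip_an:.1f} K)")
```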
Heat treating (or heat treatment ) is a group of industrial , thermal and metalworking processes used to alter the physical , and sometimes chemical , properties of a material. The most common application is metallurgical . Heat treatments are also used in the manufacture of many other materials, such as glass . Heat treatment involves the use of heating or chilling, normally to extreme temperatures, to achieve the desired result such as hardening or softening of a material. Heat treatment techniques include annealing , case hardening , precipitation strengthening , tempering , carburizing , normalizing and quenching . Although the term heat treatment applies only to processes where the heating and cooling are done for the specific purpose of altering properties intentionally, heating and cooling often occur incidentally during other manufacturing processes such as hot forming or welding.
Metallic materials consist of a microstructure of small crystals called "grains" or crystallites . The nature of the grains (i.e. grain size and composition) is one of the most effective factors that can determine the overall mechanical behavior of the metal. Heat treatment provides an efficient way to manipulate the properties of the metal by controlling the rate of diffusion and the rate of cooling within the microstructure. Heat treating is often used to alter the mechanical properties of a metallic alloy , manipulating properties such as the hardness , strength , toughness , ductility , and elasticity . [ 1 ]
There are two mechanisms that may change an alloy's properties during heat treatment: the formation of martensite causes the crystals to deform intrinsically, and the diffusion mechanism causes changes in the homogeneity of the alloy. [ 2 ]
The crystal structure consists of atoms that are grouped in a very specific arrangement, called a lattice. In most elements, this order will rearrange itself, depending on conditions like temperature and pressure. This rearrangement, called allotropy or polymorphism , may occur several times, at many different temperatures, for a particular metal. In alloys, this rearrangement may cause an element that will not normally dissolve into the base metal to suddenly become soluble , while a reversal of the allotropy will make the elements either partially or completely insoluble. [ 3 ]
When in the soluble state, the process of diffusion causes the atoms of the dissolved element to spread out, attempting to form a homogenous distribution within the crystals of the base metal. If the alloy is cooled to an insoluble state, the atoms of the dissolved constituents (solutes) may migrate out of the solution. This type of diffusion, called precipitation , leads to nucleation , where the migrating atoms group together at the grain-boundaries. This forms a microstructure generally consisting of two or more distinct phases . [ 4 ] For instance, steel that has been heated above the austenizing temperature (red to orange-hot, or around 1,500 °F (820 °C) to 1,600 °F (870 °C) depending on carbon content), and then cooled slowly, forms a laminated structure composed of alternating layers of ferrite and cementite , becoming soft pearlite . [ 5 ] After heating the steel to the austenite phase and then quenching it in water, the microstructure will be in the martensitic phase. This is due to the fact that the steel will change from the austenite phase to the martensite phase after quenching. Some pearlite or ferrite may be present if the quench did not rapidly cool off all the steel. [ 4 ]
Unlike iron-based alloys, most heat-treatable alloys do not experience a ferrite transformation. In these alloys, the nucleation at the grain-boundaries often reinforces the structure of the crystal matrix. These metals harden by precipitation. Typically a slow process, depending on temperature, this is often referred to as "age hardening". [ 6 ]
Many metals and non-metals exhibit a martensite transformation when cooled quickly (with external media like oil, polymer, water, etc.). When a metal is cooled very quickly, the insoluble atoms may not be able to migrate out of the solution in time. This is called a " diffusionless transformation ." When the crystal matrix changes to its low-temperature arrangement, the atoms of the solute become trapped within the lattice. The trapped atoms prevent the crystal matrix from completely changing into its low-temperature allotrope, creating shearing stresses within the lattice. When some alloys are cooled quickly, such as steel, the martensite transformation hardens the metal, while in others, like aluminum, the alloy becomes softer. [ 7 ] [ 8 ]
The specific composition of an alloy system will usually have a great effect on the results of heat treating. If the percentage of each constituent is just right, the alloy will form a single, continuous microstructure upon cooling. Such a mixture is said to be eutectoid . However, if the percentage of the solutes varies from the eutectoid mixture, two or more different microstructures will usually form simultaneously. A hypoeutectoid solution contains less of the solute than the eutectoid mix, while a hypereutectoid solution contains more. [ 9 ]
A eutectoid ( eutectic -like) alloy is similar in behavior to a eutectic alloy . A eutectic alloy is characterized by having a single melting point . This melting point is lower than that of any of the constituents, and no change in the mixture will lower the melting point any further. When a molten eutectic alloy is cooled, all of the constituents will crystallize into their respective phases at the same temperature.
A eutectoid alloy is similar, but the phase change occurs, not from a liquid, but from a solid solution . Upon cooling a eutectoid alloy from the solution temperature, the constituents will separate into different crystal phases , forming a single microstructure . A eutectoid steel, for example, contains 0.77% carbon . Upon cooling slowly, the solution of iron and carbon (a single phase called austenite ) will separate into platelets of the phases ferrite and cementite . This forms a layered microstructure called pearlite .
Since pearlite is harder than iron, the degree of softness achievable is typically limited to that produced by the pearlite. Similarly, the hardenability is limited by the continuous martensitic microstructure formed when cooled very fast. [ 10 ]
A hypoeutectic alloy has two separate melting points. Both are above the eutectic melting point for the system but are below the melting points of any constituent forming the system. Between these two melting points, the alloy will exist as part solid and part liquid. The constituent with the higher melting point will solidify first. When completely solidified, a hypoeutectic alloy will often be in a solid solution.
Similarly, a hypoeutectoid alloy has two critical temperatures, called "arrests". Between these two temperatures, the alloy will exist partly as the solution and partly as a separate crystallizing phase, called the "proeutectoid phase". These two temperatures are called the upper (A3) and lower (A1) transformation temperatures. As the solution cools from the upper transformation temperature toward an insoluble state, the excess base metal will often be forced to "crystallize out", becoming the proeutectoid. This will occur until the remaining concentration of solutes reaches the eutectoid level, which will then crystallize as a separate microstructure.
For example, a hypoeutectoid steel contains less than 0.77% carbon. Upon cooling a hypoeutectoid steel from the austenite transformation temperature, small islands of proeutectoid-ferrite will form. These will continue to grow and the carbon will recede until the eutectoid concentration in the rest of the steel is reached. This eutectoid mixture will then crystallize as a microstructure of pearlite. Since ferrite is softer than pearlite, the two microstructures combine to increase the ductility of the alloy. Consequently, the hardenability of the alloy is lowered. [ 11 ]
A hypereutectic alloy also has different melting points. However, between these points, it is the constituent with the higher melting point that will be solid. Similarly, a hypereutectoid alloy has two critical temperatures. When cooling a hypereutectoid alloy from the upper transformation temperature, it will usually be the excess solutes that crystallize out first, forming the proeutectoid. This continues until the concentration in the remaining alloy becomes eutectoid, which then crystallizes into a separate microstructure.
A hypereutectoid steel contains more than 0.77% carbon. When slowly cooling hypereutectoid steel, the cementite will begin to crystallize first. When the remaining steel becomes eutectoid in composition, it will crystallize into pearlite. Since cementite is much harder than pearlite, the alloy has greater hardenability at a cost in ductility. [ 9 ] [ 11 ]
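As a quick illustration of this classification, the sketch below (Python) labels a plain-carbon steel as hypoeutectoid, eutectoid or hypereutectoid from its carbon content, using the 0.77% eutectoid figure given above; it ignores other alloying elements, which in practice shift the eutectoid point.

```python
# Minimal sketch: classify a plain-carbon steel relative to the eutectoid
# composition (~0.77% C) described in the text. Other alloying elements,
# which shift the eutectoid point in real steels, are ignored here.

EUTECTOID_CARBON = 0.77  # weight percent carbon

def classify_steel(carbon_pct: float) -> str:
    if carbon_pct < EUTECTOID_CARBON:
        return "hypoeutectoid: proeutectoid ferrite + pearlite on slow cooling"
    if carbon_pct > EUTECTOID_CARBON:
        return "hypereutectoid: proeutectoid cementite + pearlite on slow cooling"
    return "eutectoid: fully pearlitic on slow cooling"

print(classify_steel(0.40))  # a medium-carbon steel -> hypoeutectoid
print(classify_steel(0.95))  # a high-carbon steel -> hypereutectoid
```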
Proper heat treating requires precise control over temperature, time held at a certain temperature and cooling rate. [ 12 ]
With the exception of stress-relieving, tempering, and aging, most heat treatments begin by heating an alloy beyond a certain transformation, or arrest (A), temperature. This temperature is referred to as an "arrest" because at the A temperature the metal experiences a period of hysteresis . At this point, all of the heat energy is used to cause the crystal change, so the temperature stops rising for a short time (arrests) and then continues climbing once the change is complete. [ 13 ] Therefore, the alloy must be heated above the critical temperature for a transformation to occur. The alloy will usually be held at this temperature long enough for the heat to completely penetrate the alloy, thereby bringing it into a complete solid solution. Iron, for example, has four critical temperatures, depending on carbon content. Pure iron in its alpha (room temperature) state changes to nonmagnetic gamma-iron at its A2 temperature, and weldable delta-iron at its A4 temperature. However, as carbon is added, becoming steel, the A2 temperature splits into the A3 temperature, also called the austenitizing temperature (all phases become austenite, a solution of gamma iron and carbon), and the A1 temperature (austenite changes into pearlite upon cooling). Between these upper and lower temperatures the proeutectoid phase forms upon cooling.
Because a smaller grain size usually enhances mechanical properties, such as toughness , shear strength and tensile strength , these metals are often heated to a temperature that is just above the upper critical temperature, in order to prevent the grains of solution from growing too large. For instance, when steel is heated above the upper critical-temperature, small grains of austenite form. These grow larger as the temperature is increased. When cooled very quickly, during a martensite transformation, the austenite grain-size directly affects the martensitic grain-size. Larger grains have large grain-boundaries, which serve as weak spots in the structure. The grain size is usually controlled to reduce the probability of breakage. [ 14 ]
The diffusion transformation is very time-dependent. Cooling a metal quickly will usually suppress precipitation to a much lower temperature. Austenite, for example, usually only exists above the upper critical temperature. However, if the austenite is cooled quickly enough, the transformation may be suppressed for hundreds of degrees below the lower critical temperature. Such austenite is highly unstable and, if given enough time, will precipitate into various microstructures of ferrite and cementite. The cooling rate can be used to control the rate of grain growth or can even be used to produce partially martensitic microstructures. [ 15 ] However, the martensite transformation is time-independent. If the alloy is cooled to the martensite transformation (Ms) temperature before other microstructures can fully form, the transformation will usually occur at just under the speed of sound. [ 16 ]
When austenite is cooled but kept above the martensite start (Ms) temperature, so that a martensite transformation does not occur, the austenite grain size will have an effect on the rate of nucleation, but it is generally temperature and the rate of cooling that control the grain size and microstructure. When austenite is cooled extremely slowly, it will form large ferrite crystals filled with spherical inclusions of cementite. This microstructure is referred to as "spheroidite". If cooled a little faster, coarse pearlite will form; faster still, and fine pearlite will form. If cooled even faster, bainite will form, with a more complete bainite transformation occurring depending on the time held above Ms. Similarly, these microstructures will also form if the steel is cooled to a specific temperature and then held there for a certain time. [ 17 ]
Most non-ferrous alloys are also heated in order to form a solution. Most often, these are then cooled very quickly to produce a martensite transformation, putting the solution into a supersaturated state. The alloy, being in a much softer state, may then be cold worked . This causes work hardening that increases the strength and hardness of the alloy. Moreover, the defects caused by plastic deformation tend to speed up precipitation, increasing the hardness beyond what is normal for the alloy. Even if not cold worked, the solutes in these alloys will usually precipitate, although the process may take much longer. Sometimes these metals are then heated to a temperature that is below the lower critical (A1) temperature, preventing recrystallization, in order to speed up the precipitation. [ 18 ] [ 19 ] [ 20 ]
Complex heat treating schedules, or "cycles", are often devised by metallurgists to optimize an alloy's mechanical properties. In the aerospace industry, a superalloy may undergo five or more different heat treating operations to develop the desired properties. [ citation needed ] This can lead to quality problems depending on the accuracy of the furnace's temperature controls and timer. These operations can usually be divided into several basic techniques.
Annealing consists of heating a metal to a specific temperature and then cooling at a rate that will produce a refined microstructure , either fully or partially separating the constituents. The rate of cooling is generally slow. Annealing is most often used to soften a metal for cold working, to improve machinability, or to enhance properties like electrical conductivity .
In ferrous alloys, annealing is usually accomplished by heating the metal beyond the upper critical temperature and then cooling very slowly, resulting in the formation of pearlite . In both pure metals and many alloys that cannot be heat treated, annealing is used to remove the hardness caused by cold working. The metal is heated to a temperature where recrystallization can occur, thereby repairing the defects caused by plastic deformation. In these metals, the rate of cooling will usually have little effect. Most non-ferrous alloys that are heat-treatable are also annealed to relieve the hardness of cold working. These may be slowly cooled to allow full precipitation of the constituents and produce a refined microstructure.
Ferrous alloys are usually either "full annealed" or "process annealed". Full annealing requires very slow cooling rates, in order to form coarse pearlite. In process annealing, the cooling rate may be faster, up to and including normalizing. The main goal of process annealing is to produce a uniform microstructure. Non-ferrous alloys are often subjected to a variety of annealing techniques, including "recrystallization annealing", "partial annealing", "full annealing", and "final annealing". Not all annealing techniques involve recrystallization, such as stress relieving. [ 21 ]
Normalizing is a technique used to provide uniformity in grain size and composition ( equiaxed crystals ) throughout an alloy. The term is often used for ferrous alloys that have been austenitized and then cooled in the open air. [ 21 ] Normalizing produces not only pearlite but also martensite and sometimes bainite , which gives a harder and stronger steel, but with less ductility, than full annealing of the same composition.
In the normalizing process the steel is heated to about 40 degrees Celsius above its upper critical temperature limit, held at this temperature for some time, and then cooled in air.
Stress-relieving is a technique to remove or reduce the internal stresses created in metal. These stresses may be caused in a number of ways, ranging from cold working to non-uniform cooling. Stress-relieving is usually accomplished by heating a metal below the lower critical temperature and then cooling uniformly. [ 21 ] Stress relieving is commonly used on items like air tanks, boilers and other pressure vessels , to remove a portion of the stresses created during the welding process. [ 22 ]
Some metals are classified as precipitation hardening metals . When a precipitation hardening alloy is quenched, its alloying elements will be trapped in solution, resulting in a soft metal. Aging a "solutionized" metal will allow the alloying elements to diffuse through the microstructure and form intermetallic particles. These intermetallic particles will nucleate and fall out of the solution and act as a reinforcing phase, thereby increasing the strength of the alloy. Alloys may age "naturally", meaning that the precipitates form at room temperature, or they may age "artificially", when precipitates only form at elevated temperatures. In some applications, naturally aging alloys may be stored in a freezer to prevent hardening until after further operations; assembly of rivets, for example, may be easier with a softer part.
Examples of precipitation hardening alloys include 2000 series, 6000 series, and 7000 series aluminium alloys , as well as some superalloys and some stainless steels . Steels that harden by aging are typically referred to as maraging steels , from a combination of the terms "martensite" and "aging". [ 21 ]
Quenching is a process of cooling a metal at a rapid rate. This is most often done to produce a martensite transformation. In ferrous alloys, this will often produce a harder metal, while non-ferrous alloys will usually become softer than normal.
To harden by quenching, a metal (usually steel or cast iron) must be heated above the upper critical temperature (for steel, typically above 815–900 degrees Celsius [ 23 ] ) and then quickly cooled. Depending on the alloy and other considerations (such as concern for maximum hardness vs. cracking and distortion), cooling may be done with forced air or other gases (such as nitrogen ). Liquids may be used, due to their better thermal conductivity , such as oil , water, a polymer dissolved in water, or a brine . Upon being rapidly cooled, a portion of austenite (dependent on alloy composition) will transform to martensite , a hard, brittle crystalline structure. The quenched hardness of a metal depends on its chemical composition and quenching method. Cooling speeds, from fastest to slowest, are: brine, polymer (i.e. mixtures of water + glycol polymers), fresh water, oil, and forced air. However, quenching certain steels too quickly can result in cracking, which is why high-tensile steels such as AISI 4140 should be quenched in oil, tool steels such as ISO 1.2767 or H13 hot work tool steel should be quenched in forced air, and low alloy or medium-tensile steels such as XK1320 or AISI 1040 should be quenched in brine.
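The grade-to-quenchant pairings mentioned above can be written as a simple lookup, as in the sketch below. It is illustrative only: real quenchant selection also depends on section size, target hardness and distortion limits, none of which are modelled here.

```python
# Illustrative lookup mirroring the quench-media guidance in the text.
# Not a substitute for the alloy's CCT/TTT data or the heat treater's judgement.

QUENCH_MEDIA = {
    "AISI 4140": "oil",          # high-tensile steel
    "ISO 1.2767": "forced air",  # tool steel
    "H13": "forced air",         # hot-work tool steel
    "XK1320": "brine",           # medium-tensile steel
    "AISI 1040": "brine",        # low-alloy / medium-tensile steel
}

def suggest_quenchant(grade: str) -> str:
    return QUENCH_MEDIA.get(grade, "consult the steel's transformation data")

print(suggest_quenchant("AISI 4140"))  # -> oil
```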
Some Beta titanium based alloys have also shown similar trends of increased strength through rapid cooling. [ 24 ] However, most non-ferrous metals, like alloys of copper , aluminum , or nickel , and some high alloy steels such as austenitic stainless steel (304, 316), produce an opposite effect when these are quenched: they soften. Austenitic stainless steels must be quenched to become fully corrosion resistant, as they work-harden significantly. [ 21 ]
Untempered martensitic steel, while very hard, is too brittle to be useful for most applications. A method for alleviating this problem is called tempering. Most applications require that quenched parts be tempered. Tempering consists of heating steel below the lower critical temperature, (often from 400˚F to 1105˚F or 205˚C to 595˚C, depending on the desired results), to impart some toughness . Higher tempering temperatures (maybe up to 1,300˚F or 700˚C, depending on the alloy and application) are sometimes used to impart further ductility, although some yield strength is lost.
Tempering may also be performed on normalized steels. Other methods of tempering consist of quenching to a specific temperature, which is above the martensite start temperature, and then holding it there until pure bainite can form or internal stresses can be relieved. These include austempering and martempering . [ 21 ]
Steel that has been freshly ground or polished will form oxide layers when heated. At a very specific temperature, the iron oxide will form a layer with a very specific thickness, causing thin-film interference . This causes colors to appear on the surface of the steel. As the temperature is increased, the iron oxide layer grows in thickness, changing the color. [ 25 ] These colors, called tempering colors, have been used for centuries to gauge the temperature of the metal. [ 26 ]
The tempering colors can be used to judge the final properties of the tempered steel. Very hard tools are often tempered in the light to the dark straw range, whereas springs are often tempered to the blue. However, the final hardness of the tempered steel will vary, depending on the composition of the steel. Higher-carbon tool steel will remain much harder after tempering than spring steel (of slightly less carbon) when tempered at the same temperature. The oxide film will also increase in thickness over time. Therefore, steel that has been held at 400˚F for a very long time may turn brown or purple, even though the temperature never exceeded that needed to produce a light straw color. Other factors affecting the final outcome are oil films on the surface and the type of heat source used. [ 26 ]
Many heat treating methods have been developed to alter the properties of only a portion of an object. These tend to consist of cooling different areas of an alloy at different rates, quickly heating a localized area and then quenching, thermochemical diffusion, or tempering different areas of an object at different temperatures, such as in differential tempering . [ citation needed ]
Some techniques allow different areas of a single object to receive different heat treatments. This is called differential hardening . It is common in high quality knives and swords . The Chinese jian is one of the earliest known examples of this, and the Japanese katana may be the most widely known. The Nepalese Khukuri is another example. This technique uses an insulating layer, like layers of clay, to cover the areas that are to remain soft. The areas to be hardened are left exposed, allowing only certain parts of the steel to fully harden when quenched. [ citation needed ]
Flame hardening is used to harden only a portion of the metal. Unlike differential hardening, where the entire piece is heated and then cooled at different rates, in flame hardening, only a portion of the metal is heated before quenching. This is usually easier than differential hardening, but often produces an extremely brittle zone between the heated metal and the unheated metal, as cooling at the edge of this heat-affected zone is extremely rapid. [ citation needed ]
Induction hardening is a surface hardening technique in which the surface of the metal is heated very quickly, using a non-contact method of induction heating . The alloy is then quenched, producing a martensite transformation at the surface while leaving the underlying metal unchanged. This creates a very hard, wear-resistant surface while maintaining the proper toughness in the majority of the object. Crankshaft journals are a good example of an induction hardened surface. [ 27 ]
Case hardening is a thermochemical diffusion process in which an alloying element, most commonly carbon or nitrogen, diffuses into the surface of a monolithic metal. The resulting interstitial solid solution is harder than the base material, which improves wear resistance without sacrificing toughness. [ 21 ]
Laser surface engineering is a surface treatment with high versatility, selectivity and novel properties. Since the cooling rate is very high in laser treatment, metastable phases, and even metallic glass , can be obtained by this method.
Although quenching steel causes the austenite to transform into martensite, all of the austenite usually does not transform. Some austenite crystals will remain unchanged even after quenching below the martensite finish (Mf) temperature. Further transformation of the austenite into martensite can be induced by slowly cooling the metal to extremely low temperatures. Cold treating generally consists of cooling the steel to around -115˚F (-81˚C), but does not eliminate all of the austenite. Cryogenic treating usually consists of cooling to much lower temperatures, often in the range of -315˚F (-192˚C), to transform most of the austenite into martensite.
Cold and cryogenic treatments are typically done immediately after quenching, before any tempering, and will increase the hardness and wear resistance and reduce the internal stresses in the metal. Because the treatment is really an extension of the quenching process, however, it may increase the chance of cracking during the procedure. The process is often used for tools, bearings, or other items that require good wear resistance. However, it is usually only effective in high-carbon or high-alloy steels in which more than 10% austenite is retained after quenching. [ 28 ] [ 29 ]
The heating of steel is sometimes used as a method to alter the carbon content. When steel is heated in an oxidizing environment, the oxygen combines with the iron to form an iron-oxide layer, which protects the steel from decarburization. When the steel turns to austenite, however, the oxygen combines with iron to form a slag, which provides no protection from decarburization. The formation of slag and scale actually increases decarburization, because the iron oxide keeps oxygen in contact with the decarburization zone even after the steel is moved into an oxygen-free environment, such as the coals of a forge. Thus, the carbon atoms begin combining with the surrounding scale and slag to form both carbon monoxide and carbon dioxide , which are released into the air.
Steel contains a relatively small percentage of carbon, which can migrate freely within the gamma iron. When austenitized steel is exposed to air for long periods of time, the carbon content in the steel can be lowered. This is the opposite of what happens when steel is heated in a reducing environment , in which carbon slowly diffuses further into the metal. In an oxidizing environment, the carbon can readily diffuse outwardly, so austenitized steel is very susceptible to decarburization. This is often used for cast steel, where a high carbon-content is needed for casting, but a lower carbon-content is desired in the finished product. It is often used on cast-irons to produce malleable cast iron , in a process called "white tempering". This tendency to decarburize is often a problem in other operations, such as blacksmithing, where it becomes more desirable to austenitize the steel for the shortest amount of time possible to prevent too much decarburization. [ 30 ]
Usually the end condition is specified instead of the process used in heat treatment. [ 31 ]
Case hardening is specified by "hardness" and "case depth". The case depth can be specified in two ways: total case depth or effective case depth. The total case depth is the true depth of the case. For most alloys, the effective case depth is the depth of the case that has a hardness equivalent of HRC50; however, some alloys specify a different hardness (40-60 HRC) at effective case depth; this is checked on a Tukon microhardness tester. This value can be roughly approximated as 65% of the total case depth; however, the chemical composition and hardenability can affect this approximation. If neither type of case depth is specified the total case depth is assumed. [ 31 ]
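The 65% rule of thumb above amounts to a one-line calculation, sketched below with a hypothetical total case depth; as the text notes, composition and hardenability can shift the true effective depth.

```python
# Rough sketch of the ~65% approximation for effective case depth
# (depth to a hardness of about HRC 50) described in the text.

def effective_case_depth(total_case_depth_in: float) -> float:
    return 0.65 * total_case_depth_in

total = 0.040  # inches, hypothetical total case depth
print(f"~{effective_case_depth(total):.3f} in effective case depth")  # ~0.026 in
```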
For case hardened parts the specification should have a tolerance of at least ±0.005 in (0.13 mm). If the part is to be ground after heat treatment, the case depth is assumed to be after grinding. [ 31 ]
The Rockwell hardness scale used for the specification depends on the total case depth. Usually, hardness is measured on the Rockwell "C" scale, but the load used on the scale will penetrate through the case if the case is less than 0.030 in (0.76 mm). Using Rockwell "C" for a thinner case will result in a false reading. [ 31 ]
For cases that are less than 0.015 in (0.38 mm) thick a Rockwell scale cannot reliably be used, so file hard is specified instead. [ 31 ] File hard is approximately equivalent to 58 HRC. [ 32 ]
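Taken together, the thresholds above suggest a simple rule for choosing the hardness test. The sketch below uses only the 0.015 in and 0.030 in limits from the text; the exact superficial Rockwell scale for intermediate depths would come from the specification table, which is not reproduced here, so that branch is a placeholder.

```python
# Test-selection sketch based on the case-depth thresholds given in the text.

def hardness_test_for_case(case_depth_in: float) -> str:
    if case_depth_in < 0.015:
        return "file hard (about 58 HRC equivalent); Rockwell unreliable"
    if case_depth_in < 0.030:
        return "superficial Rockwell scale per the specification table"
    return "Rockwell C (HRC)"

for depth in (0.010, 0.020, 0.045):  # inches
    print(depth, "->", hardness_test_for_case(depth))
```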
When specifying the hardness, either a range should be given or the minimum hardness specified. If a range is specified, it should span at least five points. [ 31 ]
Only hardness is listed for through hardening. It is usually in the form of HRC with at least a five-point range. [ 31 ]
The hardness for an annealing process is usually listed on the HRB scale as a maximum value. [ 31 ] Annealing may also be specified to refine grain size, improve strength, remove residual stress, or affect the electromagnetic properties of the part.
Furnaces used for heat treatment can be split into two broad categories: batch furnaces and continuous furnaces. Batch furnaces are usually manually loaded and unloaded, whereas continuous furnaces have an automatic conveying system to provide a constant load into the furnace chamber. [ 33 ]
Batch systems usually consist of an insulated chamber with a steel shell, a heating system , and an access door to the chamber. [ 33 ]
Many basic box-type furnaces have been upgraded to a semi-continuous batch furnace with the addition of integrated quench tanks and slow-cool chambers. These upgraded furnaces are a very commonly used piece of equipment for heat-treating. [ 33 ]
Also known as a " bogie hearth", the car furnace is an extremely large batch furnace. The floor is constructed as an insulated movable car that is moved in and out of the furnace for loading and unloading. The car is usually sealed using sand seals or solid seals when in position. Due to the difficulty in getting a sufficient seal, car furnaces are usually used for non-atmosphere processes. [ citation needed ]
Similar in type to the car furnace, except that the car and hearth are rolled into position beneath the furnace and raised by means of a motor-driven mechanism, elevator furnaces can handle large heavy loads and often eliminate the need for any external cranes and transfer mechanisms. [ 33 ]
Bell furnaces have removable covers called bells , which are lowered over the load and hearth by crane. An inner bell is placed over the hearth and sealed to supply a protective atmosphere. An outer bell is lowered to provide the heat supply. [ 33 ]
Furnaces that are constructed in a pit and extend to floor level or slightly above are called pit furnaces. Workpieces can be suspended from fixtures, held in baskets, or placed on bases in the furnace. Pit furnaces are suited to heating long tubes, shafts, and rods by holding them in a vertical position. This manner of loading provides minimal distortion. [ 33 ]
Salt baths are used in a wide variety of heat treatment processes including neutral hardening, liquid carburising, liquid nitriding , austempering , martempering and tempering .
Parts are loaded into a pot of molten salt where they are heated by conduction , giving a very readily available source of heat. The core of a part rises in temperature at approximately the same rate as its surface in a salt bath. [ 33 ]
Salt baths utilize a variety of salts for heat treatment, with cyanide salts being the most extensively used. Concerns about associated occupational health and safety, and the expensive waste management and disposal required due to their environmental effects, have made the use of salt baths less attractive in recent years. Consequently, many salt baths are being replaced by more environmentally friendly fluidized bed furnaces. [ 34 ]
A fluidised bed consists of a cylindrical retort made from high-temperature alloy, filled with sand-like aluminum oxide particulate. Gas (air or nitrogen) is bubbled through the oxide and the sand moves in such a way that it exhibits fluid-like behavior, hence the term fluidized . The solid-solid contact of the oxide gives very high thermal conductivity and excellent temperature uniformity throughout the furnace, comparable to those seen in a salt bath. [ 33 ] | https://en.wikipedia.org/wiki/Heat_treating |
Heated glass is a resistance heater created when a transparent, electrically conductive coating is applied to float glass and then subjected to an electric current . The electric current in the coating creates heat energy , which warms the glass until the glass radiates heat.
The manufacturing process begins with the application of a microscopic tin dioxide coating to a pane of float glass. This coating is transparent and conducts electricity. Then, two busbars are applied to the glass as follows: the busbars must be parallel and applied to opposing edges on the same side of the glass pane. [ 1 ] The surface of the glass between the busbars must be flat.
An electric current flows across the tin dioxide coating from one busbar to the other. The electrical resistance of the coating produces heat energy, which radiates from the glass. The busbars are connected to a power control unit that regulates the flow of electricity and thus the temperature of the glass.
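Since the coating behaves essentially as a resistor between the busbars, the dissipated power can be estimated from Joule's law. The voltage and coating resistance in the sketch below are assumed, illustrative values rather than figures for any particular product.

```python
# Joule-heating estimate for a resistive glass coating, P = V^2 / R.
# Both input values are assumptions for illustration only.

voltage = 230.0            # volts applied across the busbars (assumed)
coating_resistance = 60.0  # ohms between the busbars (assumed)

power_w = voltage ** 2 / coating_resistance
print(f"Dissipated power: {power_w:.0f} W")  # ~882 W for these assumed values
```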
In modern architectural projects the heated glass is completely translucent. This technology uses a special metallic coating, invisible to the naked human eye, on the surface of the glass. [ 2 ]
A pane of heated glass can achieve temperatures up to 350 degrees Fahrenheit (177 degrees Celsius). The standard desirable temperature range in buildings is between 104 and 113 degrees Fahrenheit (40 to 45 degrees Celsius). For industrial purposes higher temperatures may be warranted.
The first heated glass was created in 1931 by Protes Glass Company, offered for cars. Their product was not a success. [ 3 ]
Heated glass was first used on a wide scale in World War II, in naval ships and later in aircraft windshields, to keep them from frosting over in cold weather. It is still used in both applications for this purpose.
Heated glass has been used in architectural applications for the past 30 years to prevent condensation [ 4 ] and provide radiant heat. Condensation in buildings can have serious consequences to health and property values. Heated or radiant glass is generally an enhanced standard two pane insulated glass window using various bus bar technologies to convey the electric current to heat the glass. Some technologies are patented and permit larger glass areas to be heated than others.
One university study shows that this heated glass technology is more efficient than other electric heating and can be more efficient than natural gas heating. [ 5 ] Some environmentalists dispute the idea that this is an efficient heating system because even high e-value windows are poor insulators compared to insulated walls, and they believe heated windows eject much of the radiant heat outside. Another criticism is that this type of heating may encourage the use of larger windows in a house, making the house less energy efficient. [ 6 ] The technology has evolved since the late 1950s, when it was first used for melting snow on glass roofs; it was then effectively inverted and used as a heat source inside the building.
A common commercial use of heated glass is to prevent frost from forming on the glass doors of supermarket freezers. In addition, display cases (such as in convenience stores and delis) use heated glass shelves to keep cooked food items from cooling. [ 7 ] | https://en.wikipedia.org/wiki/Heated_glass |
A heater core is a radiator -like device used in heating the cabin of a vehicle . Hot coolant from the vehicle's engine is passed through a winding tube of the core, a heat exchanger between coolant and cabin air. Fins attached to the core tubes serve to increase surface area for heat transfer to air that is forced past them by a fan, thereby heating the passenger compartment.
The internal combustion engine in most cars and trucks is cooled by a water and antifreeze mixture that is circulated through the engine and radiator by a water pump to enable the radiator to give off engine heat to the atmosphere. Some of that coolant can be diverted through the heater core to give some engine heat to the cabin, or adjust the temperature of the conditioned air.
A heater core is a small radiator located under the dashboard of the vehicle, and it consists of conductive aluminium or brass tubing with cooling fins to increase surface area. Hot coolant passing through the heater core gives off heat before returning to the engine cooling circuit.
The squirrel cage fan of the vehicle's ventilation system forces air through the heater core to transfer heat from the coolant to the cabin air, which is directed into the vehicle through vents at various points.
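A rough sense of the heat delivered to the cabin follows from a sensible-heat balance on the air stream passing through the core. The mass flow and temperature rise in the sketch below are assumed, illustrative values, not measurements for any particular vehicle.

```python
# Sensible-heat estimate for air passing through a heater core: Q = m_dot * cp * dT.
# All inputs are assumed values for illustration.

cp_air = 1005.0    # J/(kg*K), specific heat of air near room temperature
mass_flow = 0.05   # kg/s of air moved by the blower (assumed)
delta_t = 40.0     # K temperature rise across the core (assumed)

q_watts = mass_flow * cp_air * delta_t
print(f"Heat delivered to cabin air: ~{q_watts:.0f} W")  # ~2010 W
```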
Once the engine has warmed up, the coolant is kept at a more or less constant temperature by the thermostat . The temperature of the air entering the vehicle's interior can be controlled by using a valve limiting the amount of coolant that goes through the heater core. Another method is blocking off the heater core with a door, directing part (or all) of the incoming air around the heater core completely, so it does not get heated (or re-heated if the air conditioning compressor is active). Some cars use a combination of these systems.
Simpler systems allow the driver to control the valve or door directly (usually by means of a rotary knob, or a lever). More complicated systems use a combination of electromechanical actuators and thermistors to control the valve or doors to deliver air at a precise temperature value selected by the user.
Cars with dual climate function (allowing driver and passenger to each set a different temperature) may use a heater core split in two, where different amounts of coolant flow through the heater core on either side to obtain the desired heating.
In a car equipped with air conditioning , outside air, or cabin air if the recirculation flap has been set to close the external air passages, is first forced, often after being filtered by a cabin air filter , through the air conditioner's evaporator coil. This can be thought of as a heater core filled with very cold liquid that is undergoing a phase change to gas (the evaporation), a process which cools rather than heats the incoming air. In order to obtain the desired temperature, incoming air may first be cooled by the air conditioning and then heated again by the heater core. In a vehicle fitted with manual controls for the heater and air conditioning compressor, using both systems together will dehumidify the air in the cabin, as the evaporator coil removes moisture from the air due to condensation. This can result in increased air comfort levels inside the vehicle. Automatic temperature control systems can take the best course of action in regulating the compressor operation, amount of reheating and blower speed depending upon the external air temperature, the internal one and the cabin air temperature value or a rapid defrost effect requested by the user.
Because the heater core cools the heated coolant from the engine by transferring its heat to the cabin air, it can also act as an auxiliary radiator for the engine. If the radiator is working improperly, the operator may turn the heat on (together with the cabin blower fan placed on full speed, and with the windows opened) in the passenger cabin, resulting in a certain cooling effect on the overheated engine coolant. This idea only works to a certain degree, as the heater core is not large enough nor does it have enough cold air going through it to cool large amounts of coolant significantly.
The heater core is made up of small piping that has numerous bends. Clogging of the piping may occur if the coolant system is not flushed or if the coolant is not changed regularly. If clogging occurs the heater core will not work properly. If coolant flow is restricted, heating capacity will be reduced or even lost altogether if the heater core becomes blocked. Control valves may also clog or get stuck. Where a blend door is used instead of a control valve as a method of controlling the air's heating amount, the door itself or its control mechanism can become stuck due to thermal expansion . If the climate control unit is automatic, actuators can also fail.
Another possible problem is a leak in one of the connections to the heater core. This may first be noticeable by smell ( ethylene glycol is widely used as coolant and has a sweet smell); it may also cause (somewhat greasy) fogging of the windshield above the windshield heater vent. Glycol may also leak directly into the car, causing wet upholstery or carpeting.
Electrolysis can cause excessive corrosion leading to the heater core rupturing. Coolant will spray directly into the passenger compartment, followed by white-coloured smoke, creating a significant driving hazard.
Because the heater core is usually located under the dashboard inside of the vehicle and is enclosed in the ventilation system's ducting, servicing it often requires disassembling a large part of the dashboard, which can be labour-intensive and therefore expensive.
Since the heater core relies on the coolant's heat to warm the cabin air up, it will not begin working until the engine's coolant warms up enough. This problem can be resolved by equipping the vehicle with an auxiliary heating system , which can either use electricity or burn the vehicle's fuel in order to rapidly bring the engine's coolant to operating temperatures.
Engines that do not have a water cooling system cannot heat the cabin via a heater core; one alternative is to guide air around the (very hot) engine exhaust manifold and then into the vehicle's interior. Temperature control is achieved by mixing with unheated outside air. Air-cooled Volkswagen engines use this method. Another example is the air-cooled Briggs & Stratton Vanguard , used in the ultralight and microlight amateur aircraft construction scene. This method of cockpit heating is a simple option for the Spacek SD-1 Minisport and other homebuilt sportplanes. However, depending on the design, this can cause a safety issue where a leak in the exhaust system will begin to fill the passenger cabin with deadly fumes.
Car heater cores are also used for do-it-yourself projects, such as for cooling homemade liquid cooling systems for computers . [ 1 ] | https://en.wikipedia.org/wiki/Heater_core
The Heath-Brown–Moroz constant C , named for Roger Heath-Brown and Boris Moroz , is defined as
C = \prod_{p} \left(1 - \frac{1}{p}\right)^{7} \left(1 + \frac{7p+1}{p^{2}}\right),
where p runs over the primes . [ 1 ] [ 2 ]
This constant is part of an asymptotic estimate for the distribution of rational points of bounded height on the cubic surface X_0^3 = X_1 X_2 X_3 . Let H be a positive real number and N(H) the number of solutions to the equation X_0^3 = X_1 X_2 X_3 with all the X_i non-negative integers less than or equal to H and their greatest common divisor equal to 1. Then
N(H) \sim \frac{C}{4 \cdot 6!} \, H (\log H)^{6} \quad \text{as } H \to \infty .
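Assuming the Euler-product form given above, the constant can be approximated numerically by truncating the product at a finite prime bound, as in the sketch below (Python, using SymPy's prime generator).

```python
# Truncated approximation of the Heath-Brown–Moroz constant from the
# Euler product above; primes beyond the bound are ignored, so the
# result is only an estimate of C.

from sympy import primerange

def heath_brown_moroz(prime_bound: int) -> float:
    c = 1.0
    for p in primerange(2, prime_bound):
        c *= (1 - 1 / p) ** 7 * (1 + (7 * p + 1) / p ** 2)
    return c

print(heath_brown_moroz(10_000))  # roughly 0.0013 with this truncation
```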
This number theory -related article is a stub . You can help Wikipedia by expanding it .
This article about a number is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Heath-Brown–Moroz_constant |
Heather Dewey-Hagborg (born June 4, 1982, Philadelphia, Pennsylvania ) is an information artist and bio-hacker . [ 1 ] She is best known for her project Stranger Visions , a series of portraits created from DNA she recovered from discarded items, such as hair, cigarettes and chewing gum while living in Brooklyn, New York . [ 2 ] From the extracted DNA, she determined gender, ethnicity and other factors and then used face-generating software and a 3D printer to create a speculative, algorithmically determined 3D portrait. While critical of technology and surveillance, her work has also been noted as provocative in its lack of legal precedent. [ 3 ] [ 4 ]
Dr. Dewey-Hagborg is an information and bio artist whose works explore the intersection between art and science. [ 5 ] As a student in the Information Arts program at Bennington College , [ 6 ] she participated in computer science classes, which laid the groundwork for the science-based artwork she would later envision using algorithms , electronics , and computer programming . [ 5 ] She earned a Bachelor of Arts (B.A.) degree in 2003.
Dewey-Hagborg continued refining her work as an artist and computer programmer, studying artificial intelligence , [ 5 ] while obtaining a Master of Professional Studies (M.P.S.) in Interactive Telecommunications from New York University (NYU) [ 7 ] in 2007. It was here she curated a robotic performance art show called Robots on the March! in March 2005, and exhibited a piece called Lighter than Air: an experiment in constructing an autonomous flying robot . [ 8 ] [ 9 ]
As a final project at NYU, Dewey-Hagborg explored the question "Can computers be creative?" in an exhibit she called Spurious Memories . She developed an autonomous face categorizing and generating software program which recognized facial components, made comparisons and adjustments, and produced unique representations of the human face through mass exposure to facial images. [ 10 ] Dewey-Hagborg continued her education at Rensselaer Polytechnic Institute [ 11 ] [ 6 ] [ 12 ] [ 13 ] [ 14 ] and graduated with a PhD in electronic arts in 2016.
As an educator her areas of interest include art and technology, multimedia, digital photography, research-based art and programming, and computer science. [ 5 ] [ 15 ] Dewey-Hagborg worked as a teaching assistant at Rensselaer Polytechnic Institute, [ 15 ] an adjunct professor at NYU's Interactive Telecommunications Program, [ 16 ] an adjunct professor at NYU's Courant Institute of Mathematical Sciences, and taught art and technology studies at the School of the Art Institute of Chicago . [ 17 ] [ 15 ] [ 18 ]
As of August 2019, Dewey-Hagborg lives and works in Abu Dhabi, and is a Visiting Assistant Professor of Interactive Media at NYU Abu Dhabi. Her courses include Communication and Technology, Understanding Interactive Media, and Bioart Practices. [ 19 ]
Dewey-Hagborg's Totem (2010) was a site-specific multimedia sculpture characterizing her earlier work. Totem , an idol, was designed to explore the implications of language and artificial intelligence using machine learning technology. [ 20 ] Exploiting audio surveillance techniques to eavesdrop on and record conversations at the installation site, Dewey-Hagborg wrote algorithms to then isolate word sequences and grammatical structures into commonly used units. Influenced by Hebbian theory , she programmed the sculpture's computer to generate speech based on the most frequently occurring language structures in any given recording period. Over time, the least frequently elicited words or units would fade or be dropped from the sculpture's spoken vocabulary. The remaining units, stored in the sculpture's memory, were then spoken at random intervals. [ 21 ]
Martha Schwendener, of The New York Times , wrote that Totem showed promise, but, because of audio difficulties and its fragmented, randomly generated speech, the piece "failed to connect human speech, meaning, and technology in a profound fashion." [ 22 ]
Stranger Visions (2012–2013) is a science-based, artistic exploration using DNA as a starting point for lifelike, computer generated 3-D portraits. [ 14 ] [ 23 ] [ 24 ] [ 25 ]
She began this project questioning how much information could be understood about a person using genetic detritus left behind by strangers in New York City . [ 11 ] [ 1 ] [ 26 ] [ 5 ] [ 27 ] "I was really struck by this idea that the very things that make us human – hair, skin, saliva, and fingernails – become a real liability for us as we constantly shed them in public. Anyone could come along and mine them for information." [ 26 ] She hoped, by producing realistic sculptures of anonymous people using clues from their DNA, to spark a debate about the potential use or misuse of DNA profiling, privacy, and genetic surveillance. [ 11 ] [ 28 ] [ 29 ] [ 30 ]
As part of her research for Stranger Visions , she took a three-week crash-course [ 26 ] in biotechnology at the Genspace laboratory in New York [ 26 ] [ 28 ] where she learned about the significant amount of personal information that an amateur biologist could learn about someone through biotech processes. [ 1 ] [ 26 ] [ 27 ]
She began the process of extracting DNA from the samples she collected. The extraction involves treating a hair sample, for example, with a gel that dissolves the hair, and a primer specifically developed to help locate characteristics like eye color or gender along the genome . [ 12 ] [ 32 ] She might repeat this process up to 40 times, [ 33 ] looking for genetic variants influencing traits like eye color, hair color, and racial ancestry, in order to complete a portrait. [ 3 ]
Once the DNA strands are extracted from the samples, she then amplifies, or copies, specific regions of the genome, using a technique called Polymerase Chain Reaction , or PCR, a process advanced by Kary Mullis , a winner of the Nobel Prize in Chemistry (1993). [ 11 ] [ 26 ] These amplified regions of the genome make it possible to identify single nucleotide polymorphisms , or SNPs (pronounced "snips"), [ 30 ] which contain variations in the base pairs that give clues to a person's individual genetic make-up (e.g., whether a person's eyes might be blue, brown or green). These results are then sent for analysis to a company for sequencing. She used 23andMe , [ 5 ] [ 33 ] a DNA analysis service, for Stranger Visions .
The genetic blueprint [ 12 ] [ 32 ] she receives in return is a text file full of coded information identifying the unique positioning of the 4 nucleobases adenine , thymine , cytosine , and guanine , or A, T, C and G, that make up the sections of the genome she is interested in. [ 12 ] This data is then entered into a customized computer program she wrote. [ 6 ] The program interprets the code and provides her with a list of traits, including propensity for obesity, eye color, hair color, hair curl, skin tone, freckles, and gender. [ 26 ] [ 27 ] She then takes these traits, as many as 50, and enters them into a face-generating program to configure the 3-D portraits. [ 1 ] [ 26 ] Her previous experience with facial recognition algorithms gave her the ability to repurpose an existing facial recognition program , from Basel , Switzerland. [ 36 ] She reworked the program to generate faces instead of just recognizing facial features. [ 5 ] The resulting model changes facial dimensions (e.g., width of the nose and mouth) and characteristics with the genetic information it receives. Before making the final 3-D print, [ 27 ] she generates several different versions of the face, choosing the one she finds most aesthetically pleasing. [ 6 ] [ 33 ]
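The genotype-to-trait step she describes can be pictured as a lookup from SNP genotypes to coarse trait calls. The sketch below is not Dewey-Hagborg's program (a public version of her code is on GitHub, as noted below); the SNP names, genotypes and trait labels are invented placeholders that only illustrate the structure of such an interpreter.

```python
# Hypothetical illustration only: the SNP identifiers, genotypes and trait
# calls below are invented placeholders, not real markers or real predictions,
# and this is not the code used for Stranger Visions.

TRAIT_RULES = {
    "EYE_COLOR_SNP": {"GG": "likely lighter eye color", "AG": "likely darker eye color",
                      "AA": "likely darker eye color"},
    "HAIR_CURL_SNP": {"TT": "straighter hair", "CT": "wavy hair", "CC": "curlier hair"},
}

def interpret(genotypes: dict) -> list:
    """Map observed genotypes to coarse trait predictions, one call per SNP."""
    calls = []
    for snp, genotype in genotypes.items():
        calls.append(TRAIT_RULES.get(snp, {}).get(genotype, f"{snp}: no call"))
    return calls

print(interpret({"EYE_COLOR_SNP": "GG", "HAIR_CURL_SNP": "CT"}))
```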
Critics of Dewey-Hagborg's Stranger Visions question whether or not the work crosses ethical and legal boundaries. [ 2 ] [ 37 ] They make a distinction between an artist's right to express societal concerns through artwork and the act of collecting personal, genetic information without informed consent. [ 32 ] The fact that DNA samples are regularly "left behind" or abandoned does not mean those people have relinquished their right to decide how that information is used. [ 28 ] [ 3 ] [ 30 ]
Some laws, like the Human Tissue Act of 2004 in the United Kingdom, prohibit private individuals from collecting biological samples for DNA analysis. [ 27 ] The laws that do exist to regulate the collection and use of DNA samples in the United States are not consistent among the states and rarely address the private sector. [ 28 ] Only some states, like New York, outlaw most DNA testing without written consent. [ 2 ] Others worry about the misuse of the information, fearing discrimination based on existing medical or mental health issues or a predisposition for disease-related illnesses, or "unreasonable" searches of DNA evidence by law enforcement. [ 28 ] [ 3 ] [ 27 ] [ 30 ] [ 33 ] [ 38 ] One scientist and one gallery, according to Dewey-Hagborg, turned down her proposal fearing the project would "cause a fright" among people. [ 11 ] [ 26 ]
Other critics focus on the growing do-it-yourself or biohacking movement. Supporters like Genspace's Ellen Jorgensen claim projects like Stranger Visions engage the public and make the new technology more accessible. [ 24 ] Detractors fear unintended or unexpected consequences from unregulated experiments conducted by D.I.Y. amateur biologists developed in non-traditional laboratory settings. [ 1 ] [ 3 ] [ 27 ]
Still others, including Daniel MacArthur, an assistant professor at Harvard Medical School , John D. Hawks , an anthropologist at the University of Wisconsin-Madison , Michelle N. Meyer, an academic fellow at the Petrie-Flom Center for Health Law Policy, Biotechnology & Bioethics at Harvard Medical School, and Arthur Caplan , PhD, Director of the Division of Medical Ethics, N.Y.U., report that the technological capability to construct an accurate likeness of a human face based on DNA evidence is not currently available. [ 14 ] [ 33 ]
Although it is possible to identify certain genetic markers linked to facial structures, scientists have yet to isolate all the genes and their variations needed to produce an accurate likeness with a computer simulation. [ 30 ] Meyer, who analyzed the data from Dewey-Hagborg's website concludes:
So far as I can tell, she's working with sex; ancestral groups that are usually very broad, and in any event only reflects half of the individual's DNA (from which she presumably guesses hair color and texture and bone structure); and a decent guess at eye color. There are hundreds of thousands (at least) of people who would fit these descriptions even if each of her phenotype predictions were accurate, and in many cases, one or more of the predictions are probably going to be wrong. [ 14 ] [ 6 ] [ 12 ] [ 30 ]
The environment, the probabilistic nature of interpreting the DNA data collected, and limitations of computer technology all influence the outcome. [ 28 ] [ 5 ] She likens her work to that of a sketch artist. [ 12 ] At most, her portraits bear only a vague, family resemblance to the people whose genetic information was used as a foundation for the portraits. [ 6 ] [ 33 ]
Stranger Visions was on view in the exhibition Mutations-Créations / Imprimer le Monde and is in the permanent collection of Centre Pompidou in Paris, France. [ 39 ] A public version of the genetic profiling code is available on github . [ 40 ]
In 2013, Dewey-Hagborg was contacted by an assistant medical examiner in Delaware, [ 11 ] as a result of her work with Stranger Visions . The project involved developing a portrait of an unidentified woman whose case has remained unsolved for 20 years. She agreed to be an adviser to assist with the case. [ 5 ] Though the resulting portrait based on the unidentified woman's DNA could only be as accurate as existing technology allowed, leaving room for speculation, Dewey-Hagborg viewed working on the case as the only potential use for this type of face-generating technology. [ 12 ] [ 14 ] "If you can add anything at all to her description, if you can increase the possibility her loved ones may find her even one little bit I think it's worth it." [ 14 ] Critics of Dewey-Hagborg's involvement in the Delaware case express concern for what they call "D.I.Y. forensic science" and question the role of civilians in state investigations. [ 27 ]
Dewey-Hagborg's work with Stranger Visions and interest in issues surrounding genetic surveillance lead to the development of two products whose purpose is to eliminate DNA traces. The first, Erase , is a bleaching spray that cleans surfaces (e.g., cups, silverware) of DNA evidence. The second, Replace , is a spray consisting of a blend of genes designed to introduce foreign DNA evidence to the surface, therefore masking any of the original DNA remaining in that area. [ 1 ] [ 37 ] Dewey-Hagborg views these as a "citizens' defense against the looming DNA surveillance state." [ 41 ]
In the summer of 2017 Dewey-Hagborg's collaborative exhibition with transgender activist Chelsea E. Manning, A Becoming Resemblance , opened at Fridman Gallery in New York City, curated by Roddy Schrock. [ 42 ] [ 43 ] For the exhibition, Dewey-Hagborg created 3-D printed portraits of Manning, based on cheek swabs and hair clippings that Manning sent her while incarcerated for leaking classified information to WikiLeaks . Dewey-Hagborg created Probably Chelsea , 30 portraits based on Manning's maternal DNA, whose variances in skin color and features present the malleability of DNA data, and Radical Love , two portraits out of many that Manning selected because they best conveyed her appearance at the time of her gender transition within maximum security prison, which did not allow photography. [ 42 ] The installation demonstrated how much the human genome is up for interpretation once condensed and subjectively interpreted. [ 44 ] Probably Chelsea has since traveled to numerous institutions for exhibition, including Transmediale 2018: Face Value , January–April 2018 in Berlin, [ 45 ] MU Art Space, Genomic Intimacy , May–July 2018 in Eindhoven, Netherlands [ 46 ] and Perth Institute of Contemporary Art, Hyperprometheus , October–December, 2018. [ 47 ]
Probably Chelsea is in the permanent museum collection of the Exploratorium in San Francisco, California. [ 48 ] Radical Love is on view in the permanent collections of the New York Historical Society [ 49 ] and the Victoria and Albert Museum in London. [ 50 ]
Dewey-Hagborg’s Xeno in Vivo (2024) was a live multimedia opera performance. Xeno in Vivo premiered at the Exploratorium in San Francisco on March 7 and 8, 2024. The opera delves into the topic of xenotransplantation , the transplantation of living cells, tissues, or organs from one species to another. It examines whether CRISPR gene editing is a radical new technology or merely an extension of the ancient Western practice of selective breeding. Along with audio and video media featuring laboratories of scientists and non-human animals, Dewey-Hagborg narrates the multimedia performance, recounting conversations she had with scientists about the ethical implications of xenotransplantation. The live production integrates projections, sculpture, and live beating heart cells on stage with four opera singers, accompanied by original music composed by Bethany Barrett. [ 51 ]
Dewey-Hagborg's work has been exhibited at The Monitor Digital Festival in Guadalajara , Mexico, [ 29 ] PS1 MoMA, Long Island City, New York , [ 5 ] [ 29 ] the New York Public Library in New York City, the Science Gallery at Trinity College Dublin, Ireland , [ 5 ] [ 52 ] the UTS Gallery in Sydney, Australia , [ 5 ] Wei-Ling Gallery [ 53 ] in Kuala Lumpur , Malaysia , the Jaaga Art and Technology Center in Bangalore, India , the Museum Boijmans in Rotterdam, Netherlands , and the Ars Electronica Center in Linz, Austria . [ 5 ]
Dewey-Hagborg has also produced the following selected works: | https://en.wikipedia.org/wiki/Heather_Dewey-Hagborg |
Heather D. Willauer (born 1974) is an American analytical chemist and inventor working in Washington, D.C. , at the United States Naval Research Laboratory (NRL). Leading a research team, Willauer has patented a method for removing dissolved carbon dioxide (CO 2 ) from seawater , in parallel with hydrogen (H 2 ) recovered by conventional water electrolysis . Willauer is also seeking to improve the catalysts required to enable a continuous Fischer–Tropsch process to recombine carbon monoxide (CO) and hydrogen gases into complex hydrocarbon liquids to synthesize jet fuel for Navy aircraft.
Especially significant for the Navy is the possibility of maintaining naval air operations in remote areas without depending too much on long-distance transport of jet fuel across oceans. The Navy is also studying the feasibility of constructing on-shore facilities capable of synthesizing kerosene from hydrogen and CO 2 , both extracted from seawater constituents. Because of the very high electrical power required by water electrolysis to produce considerable amounts of hydrogen, nuclear power plants or ocean thermal energy conversion (OTEC) are necessary to fuel the industrial installations built on-shore on remote islands close to the sea in strategic locations.
Willauer attended Berry College in Georgia , graduating with a bachelor's degree in chemistry in 1996. [ 1 ] In mid-1999 she participated in the 11th International Conference on Partitioning in Aqueous Two-Phase Systems, held in Gulf Shores, Alabama . [ 2 ] In 2002, she earned a doctorate in analytical chemistry from the University of Alabama , writing her thesis on "Fundamentals of phase behavior and solute partitioning in ABS and applications to the paper industry," the "ABS" an abbreviation for " aqueous biphasic systems ". [ 3 ] She began working with the NRL as an associate, then in 2004 she advanced to the position of research chemist. [ 1 ]
Willauer started researching biphasic systems and phase transitions after graduating from Berry College. In 1998 she studied aqueous biphasic systems (ABS) for the potential of recapturing valuable dyes from textile manufacturing effluent. She investigated ions and catalysts . [ 4 ]
In the 2000s, Willauer began studying methods for extracting CO 2 and H 2 from seawater, for the purpose of reacting these molecules into hydrocarbons by using the Fischer–Tropsch process . [ 5 ] She also investigated modified iron (Fe) catalysts and studied zeolite (nanoporous aluminosilicate ) catalyst supports for recombining these molecules into jet fuel.
Previous studies had concluded that CO 2 , in the form of the bicarbonate anion (HCO 3 – ) that dominates the inorganic carbon species in seawater (96% mole fraction ), could not be economically removed from seawater. [ 6 ] However, by acidifying seawater by means of an adapted electrolysis cell with cation permeable membranes (dubbed a three-chambered electrochemical acidification cell), [ 7 ] it is possible to economically convert HCO 3 – into CO 2 at a pH lower than 6 and to increase the extraction yield. In January 2011, the NRL installed a prototype of the seawater electrolysis cell at Naval Air Station Key West in Florida. [ 8 ]
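The effect of acidification on dissolved inorganic carbon can be illustrated with a simple two-species Henderson–Hasselbalch estimate. The pKa in the sketch below is an assumed freshwater value for carbonic acid; the effective constant in seawater differs somewhat, so the percentages are indicative only.

```python
# Two-species speciation sketch (dissolved CO2 vs. bicarbonate) showing why
# acidifying below pH ~6 shifts inorganic carbon toward CO2.
# PKA1 is an assumed freshwater value; seawater chemistry differs somewhat.

PKA1 = 6.35  # first dissociation constant of carbonic acid (assumed)

def fraction_as_co2(ph: float) -> float:
    bicarb_to_co2 = 10 ** (ph - PKA1)  # [HCO3-]/[CO2] from Henderson-Hasselbalch
    return 1.0 / (1.0 + bicarb_to_co2)

for ph in (8.1, 6.0, 5.0):
    print(f"pH {ph}: ~{fraction_as_co2(ph):.0%} of the pair present as CO2")
# roughly 2% at seawater pH, ~69% at pH 6, ~96% at pH 5
```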
In 2017, Willauer et al. were granted a patent for a CO 2 extraction device from seawater, in the form of an electrolytic- cation exchange module (E-CEM). The E-CEM is seen as a "key step" in the production of synthetic fuel from seawater. Other researchers named in the patent are Felice DiMascio, Dennis R. Hardy, Jeffrey Baldwin, Matthew Bradley, James Morris, Ramagopal Ananth and Frederick W. Williams. [ 9 ]
Willauer et al. (2012) estimated that jet fuel could be synthesized from seawater in quantities up to 100,000 US gal (380,000 L) per day, at a cost of three to six U.S. dollars per gallon. [ 10 ] [ 11 ] [ 7 ] Willauer et al. (2014) showed that the Fischer-Tropsch catalyst could be modified to synthesize various fuels such as methanol and natural gas , as well as the olefins that can be used as the building blocks for jet fuel.
Willauer et al. calculated that about 23,000 US gal (87,000 L) of seawater must be driven through the process to obtain the quantities of hydrogen and CO 2 necessary to synthesize one gallon of jet fuel.
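Scaling the two figures quoted above gives the implied seawater throughput for a full-scale plant; the sketch below is pure arithmetic and says nothing about the process itself.

```python
# Throughput arithmetic from the figures in the text: ~23,000 US gal of
# seawater per gallon of fuel, at a notional output of 100,000 US gal/day.

seawater_per_fuel_gal = 23_000   # US gal seawater per US gal of jet fuel
fuel_per_day = 100_000           # US gal of jet fuel per day

seawater_per_day = seawater_per_fuel_gal * fuel_per_day
print(f"{seawater_per_day:,} US gal of seawater per day")              # 2,300,000,000
print(f"~{seawater_per_day * 3.785 / 1e9:.1f} billion litres per day")  # ~8.7
```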
Seawater was chosen because it contains 140 times more CO 2 by volume than the atmosphere, and conventional water electrolysis also yields H 2 . The equipment for processing seawater is much smaller than that for processing air. Willauer considered that seawater was the "best option" for a source of synthetic jet fuel. [ 12 ] [ 13 ] By April 2014, Willauer's team had not yet made fuel to the quality standard required for military jets, [ 14 ] [ 15 ] but in September 2013 they were able to use the fuel to fly a radio-controlled model airplane powered by a common two-stroke internal combustion engine. [ 8 ]
Because the process requires a considerable input of electrical energy [ 11 ] (~ 250 MW electricity, mainly for the H 2 production by water electrolysis and also, to a lesser extent, for the CO 2 recovery from seawater), [ 11 ] it cannot be performed on a large ship, even on a nuclear aircraft carrier. The installations processing seawater to obtain H 2 and CO 2 (in fact CO), the two essential ingredients necessary for the Fischer–Tropsch process , must be constructed on-shore, close to the sea, on islands in strategic remote locations ( e.g. , Hawaii , Guam , Diego Garcia ) and powered by a nuclear reactor, or by ocean thermal energy conversion (OTEC). | https://en.wikipedia.org/wiki/Heather_Willauer