A hospital information system (HIS) is an element of health informatics that focuses mainly on the administrative needs of hospitals. In many implementations, a HIS is a comprehensive, integrated information system designed to manage all aspects of a hospital's operation, such as medical, administrative, financial, and legal issues and the corresponding processing of services. A hospital information system is also known as hospital management software or a hospital management system (HMS). More generally, an HIS is a form of medical information system (MIS). [ 1 ]
Hospital information systems provide a common source of information about a patient's health history and about physicians' schedules. The system has to keep data in a secure place and control who can access the data in certain circumstances. These systems enhance the ability of health care professionals to coordinate care by providing a patient's health information and visit history at the place and time that it is needed. A patient's laboratory test information may also include visual results such as X-rays, which professionals can access. HIS provide internal and external communication among health care providers. Portable devices such as smartphones and tablet computers may be used at the bedside.
Hospital information systems are often composed of one or several software components with specialty-specific extensions, as well as a large variety of sub-systems in medical specialties from a multi-vendor market. Specialized implementations include, for example, the laboratory information system (LIS), policy and procedure management system, radiology information system (RIS), and picture archiving and communication system (PACS). [ citation needed ]
Potential benefits of hospital information systems include: | https://en.wikipedia.org/wiki/Hospital_information_system |
The term host cell reactivation (HCR) was first used to describe the survival of UV-irradiated bacteriophages that were transfected into UV-pretreated cells. [ 1 ] This phenomenon was first thought to be the result of homologous recombination between bacterium and phage, but was later recognized as enzymatic repair. [ 2 ] [ 3 ] [ 4 ] Modifications of the assay were later developed, using transiently expressed plasmid DNA vectors in immortalized fibroblasts, [ 5 ] and more recently in human lymphocytes. [ 6 ]
The HCR assay, also known as the plasmid reactivation assay, indirectly monitors the cellular transcription-coupled repair system, which is activated by transcription-blocking damage inflicted on the plasmid by UV radiation. Because UV-induced DNA damage is used as the mutagen, the cell repairs it through the nucleotide excision repair (NER) pathway, which is activated by distortion of the DNA helix. [ 1 ]
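How HCR results are quantified varies between laboratories; a common convention is to express repair capacity as the percent reactivation of the damaged reporter relative to an undamaged one. The sketch below is a minimal, hypothetical illustration of that calculation, assuming a luciferase-type reporter readout with a co-transfected transfection-efficiency control; all numbers are invented.

```python
# Hypothetical sketch of HCR quantification, assuming the common dual-readout
# design: reporter signal from a UV-damaged plasmid is normalized to the
# signal from an undamaged plasmid, each corrected by a co-transfected
# transfection-efficiency control. All values below are invented.

def repair_capacity(damaged_signal, damaged_control,
                    undamaged_signal, undamaged_control):
    """Percent reactivation of the damaged reporter relative to undamaged."""
    damaged = damaged_signal / damaged_control      # efficiency-corrected
    undamaged = undamaged_signal / undamaged_control
    return 100.0 * damaged / undamaged

# Example readings (arbitrary luminescence units):
print(repair_capacity(1.2e4, 0.9e5, 6.0e4, 1.0e5))  # ~22% reactivation
```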
The host-cell reactivation assay (HCR) is a technique used to measure the DNA repair capacity of a cell for a particular DNA alteration. In the HCR assay, the ability of an intact cell to repair exogenous DNA is measured. [ 7 ] The host cell is transfected with a damaged plasmid containing a reporter gene, usually luciferase, which has been deactivated by the damage. The ability of the cell to repair the damage in the plasmid, after it has been introduced into the cell, allows the reporter gene to be reactivated. Earlier versions of this assay were based on the chloramphenicol acetyltransferase (CAT) gene, [ 5 ] but the version of the assay using luciferase as the reporter gene is as much as 100-fold more sensitive. [ 1 ] | https://en.wikipedia.org/wiki/Host-cell_reactivation |
Host-directed therapeutics, also called host-targeted therapeutics, act via a host-mediated response to pathogens rather than acting directly on the pathogen, as traditional antibiotics do. They can change the local environment in which the pathogen exists to make it less favorable for the pathogen to live and/or grow. With these therapies, pathogen killing, e.g. bactericidal effects, will likely only occur when they are co-delivered with a traditional agent that acts directly on the pathogen, such as an antibiotic, antifungal, or antiparasitic agent. [ 1 ] [ 2 ] [ 3 ] Several antiviral agents are host-directed therapeutics and simply slow the progression of the virus rather than kill it. Host-directed therapeutics may limit pathogen proliferation, e.g., have bacteriostatic effects. Certain agents can also reduce bacterial load by enhancing host cell responses even in the absence of traditional antimicrobial agents. [ 4 ] [ 5 ] [ 6 ]
Intracellular pathogens often reside in immune cells such as macrophages. These pathogens can be obligate or facultative intracellular pathogens. Changing the innate immune response of these host cells can alter the pathogen's ability to live inside the cell. Many of these immunomodulatory host-directed therapies are adjuvants or pathogen-associated molecular patterns. Their targets can include Toll-like receptors (TLRs), NOD-like receptors (NLRs), C-type lectin receptors (CLRs), the mannose receptor (MR), dendritic cell-specific intracellular adhesion molecule 3 (ICAM3)-grabbing nonintegrin (DC-SIGN), complement receptors, Fc receptors, and DNA sensors (e.g., STING). Epithelial cells also host pathogens, such as Salmonella enterica. These immunomodulatory agents can also alter epithelial cell environments, since epithelial cells also have a role in innate signalling. [ citation needed ]
Autophagy modulators are one method of enhancing host cell functions. Pathogens like Mycobacterium tuberculosis (MTB) will be degraded in the autophagosome during an effective host response that clears the bacteria. Because bacteria and other pathogens like MTB can subvert cellular responses such as autophagy, they can increase their survival in the body. By reactivating effective autophagy processes, the pathogen can be cleared. Examples of this have been shown with MTB [ 1 ] and Listeria monocytogenes. [ 1 ] OSU-03012 is thought to modulate autophagy in its effect on Salmonella enterica [ 7 ] [ 8 ] and Francisella tularensis. [ 9 ] [ 10 ]
Modifying lung and macrophage pathology has also been shown to have a role in host-directed therapies for MTB. [ 1 ] | https://en.wikipedia.org/wiki/Host-directed_therapeutics |
HostLink [ 1 ] is a communication protocol for use with or between PLCs made by Omron. It is an ASCII-based protocol generally used for communication over RS-232 or RS-422. The protocol enables communication between various pieces of equipment in an industrial environment for programming or controlling that equipment.
The maximum allowed message size is 30 words per message. Larger messages can be sent through a 'fragmentation' process, in which the same slave returns a series of messages that build up the entire response. Host computers can transfer programs, monitor PLC data areas, and control the PLC using the HostLink protocol.
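To make the frame structure concrete, below is a minimal sketch of composing a HostLink (C-mode) command in Python. The '@' start character, two-digit unit number, two-character header code, FCS checksum (XOR of the frame characters, sent as two ASCII hex digits), and the '*' plus carriage-return terminator follow the commonly documented C-mode frame layout; the 'RR' read command and its field widths are assumptions for illustration, so the exact commands supported by a given PLC model should be taken from its manual.

```python
# Minimal sketch of building an Omron HostLink (C-mode) command frame.
# Assumed layout: '@' + 2-digit unit number + 2-char header code +
# parameter text + 2-hex-digit FCS + '*' + CR. 'RR' (read data area)
# and the 4-digit address/count fields are illustrative.

def fcs(frame: str) -> str:
    """XOR all characters of the frame text; return two uppercase hex digits."""
    check = 0
    for ch in frame:
        check ^= ord(ch)
    return f"{check:02X}"

def build_read_command(unit: int, start_word: int, word_count: int) -> bytes:
    body = f"@{unit:02d}RR{start_word:04d}{word_count:04d}"
    return (body + fcs(body) + "*\r").encode("ascii")

# Example: ask unit 0 for 2 words starting at word 100.
print(build_read_command(0, 100, 2))
```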
| https://en.wikipedia.org/wiki/HostLink_Protocol |
When considering pathogens, host adaptation can have varying descriptions. For example, in the case of Salmonella, host adaptation is used to describe the "ability of a pathogen to circulate and cause disease in a particular host population." [ 1 ] Another usage of host adaptation, still considering the case of Salmonella, refers to the evolution of a pathogen such that it can infect, cause disease, and circulate in another host species. [ 2 ]
While there might be pathogens that can infect other hosts and cause disease, the inability to spread throughout the infected host species indicates that the pathogen is not adapted to that host species. In this case, the ability, or lack thereof, of a pathogen to adapt to its host environment is an indicator of the pathogen's fitness or virulence. If a pathogen has high fitness in the host environment, or is virulent, it will be able to grow and spread quickly within its host. Conversely, if the pathogen is not well adapted to its host environment, then it will not spread or infect the way a well-adapted pathogen would.
Pathogens like Salmonella, a food-borne pathogen, are able to adapt to the host environment and maintain virulence via several pathways. In a paper by Baumler et al. 1998, [ 3 ] characteristics of Salmonella, such as its ability to cause intestinal infection, were attributed to virulence factors like its ability to invade intestinal epithelial cells, induce neutrophil recruitment, and interfere with the secretion of intestinal fluid. Phylogenetic analysis also revealed that many strains or lineages of Salmonella exist, which is advantageous for the pathogen because its genetic diversity can act as fodder for natural selection to tinker with. For instance, if a particular Salmonella strain is more fit in the host stomach environment than other Salmonella strains, then the former will be positively selected for and increase in prevalence. Eventually this strain will colonize and infect the stomach. The other, less fit strains will be selected against and will thus not persist. Another major host adaptation on the part of Salmonella was its adaptation to host blood temperatures. Because Salmonella can thrive at the human host temperature, 98.6 degrees F, it is fit for the host environment and hence survives well in it. Adaptations like these are simple yet very effective ways of infecting hosts because they use the host's body, and important features of that body, as a stepping stone in the infection process.
Another intestinal pathogen, in the genus Cryptosporidium, was not always a human pathogen and only "recently" adapted to the human host environment. Numerous phylogenetic analyses in a paper by Xiao et al. 2002 [ 4 ] indicated that the Cryptosporidium parvum bovine genotype and Cryptosporidium meleagridis were originally parasites of rodents and mammals, respectively. However, this parasite 'recently' expanded into humans. As was previously mentioned, the ability to survive in different host species is an adaptation that is highly advantageous to pathogens because it increases their chances for survival and circulation. Some pathogens can evolve to become resistant to the body's natural immune defenses and/or to outside interventions like drugs. For instance, Clostridioides difficile is the most frequent cause of nosocomial diarrhea worldwide, and reports in the early 2000s indicated the advent of a hypervirulent strain in North America and Europe. In a study by Stabler et al. 2006, [ 5 ] comparative phylogenomics (whole-genome comparisons using DNA microarrays combined with Bayesian phylogenies) was used to model the phylogeny of C. difficile. Phylogenetic analysis identified four distinct, statistically significant clusters: a hypervirulent clade, a toxin A− B+ clade, and two clades with human and animal isolates. Genetic differences between the four groups revealed significant findings related to virulence. The authors observed that hypervirulent strains had undergone various types of niche adaptation, such as antibiotic resistance, motility, adhesion, and enteric metabolism.
Some commensal organisms, organisms that occur in the body naturally and benefit from living in the host without causing it harm or conferring any significant benefit, also have the potential to become pathogens. This specific type of commensal/pathogen hybrid is called an opportunistic pathogen. Not all commensals are opportunistic pathogens, but opportunistic pathogens are commensals by nature. They are not harmful to the body when the body's immune system is functioning normally, but if the host immune system becomes compromised, or loses its ability to function at its full or near-full potential, opportunistic pathogens switch from being commensal organisms to pathogens. This is where the name opportunistic pathogen comes from: they are only pathogens when the opportunity to infect the host is there.
An example of an opportunistic pathogen is Candida albicans, a fungus/yeast found in the intestines and mucous membranes (such as the vagina and throat) of healthy humans. It is also found on the skin of healthy humans. In healthy humans, meaning humans with functioning immune systems, Candida will not cause infections; it will simply co-exist with the host. However, if a person receives chemotherapy or has HIV/AIDS, which weakens (thus compromising) the immune system, Candida albicans will cause infections. [ 6 ] It can cause infections as innocuous as yeast infections or thrush, and infections as serious as systemic candidiasis, which is fatal in about 50% of cases. [ 7 ]
Though the mechanisms Candida albicans uses to switch from commensal to pathogen are largely unknown, the reasons for its strength as a pathogen are broadly known. Candida has considerable phenotypic and genotypic plasticity, which means it generates change quickly. As a result of constant diversification, Candida has many opportunities to make advantageous mutations. Additionally, Candida can change morphology: it can convert from the yeast form to the filamentous form and vice versa, depending on which stage of infection it is in. In the beginning stages of infection, Candida is more likely to be in the filamentous form because this allows it to adhere to and infect cells more efficiently. Other adaptations of this commensal pathogen include the ability to grow at host temperature, create biofilms, resist reactive oxygen species (ROS) created as part of the human immune response, adapt to different pHs [ 8 ] (relevant for being carried in the blood to different parts of the body), and adapt to low-nutrient or low-glucose environments like the liver. [ 9 ] Because Candida albicans is very good at adapting to the fluctuating environments of the human body (i.e. its changing temperature, pH, oxygen reactivity and more), it is a successful pathogen.
Host adaptation can also be used in reference to the host. Hosts have the ability to adapt to protect themselves against pathogens. For instance, the innate and acquired immune responses are adaptations of the human body that exist for the sole purpose of warding off disease. Additionally, as was previously mentioned with the case of reactive oxygen species, the body has various other ways of warding off threats. Sexual reproduction is also a feature that humans and other sexually reproducing organisms have to protect themselves against pathogens. For instance, under what is called the red queen hypothesis, hosts are constantly shifting genetically via sexual reproduction so that pathogens have less of a chance to be well adjusted to the host. If the host keeps changing via gene shuffling in the form of sexual reproduction, then pathogens will have to continuously evolve to keep up with its changes. This sets up a moving target for co-evolving pathogens. | https://en.wikipedia.org/wiki/Host_adaptation |
Host cell proteins (HCPs) are process-related protein impurities that are produced by the host organism during biotherapeutic manufacturing and production. During the purification process, the majority of produced HCPs are removed from the final product (>99% of impurities removed), but residual HCPs remain in the final distributed pharmaceutical drug. Examples of products in which residual HCPs may remain include monoclonal antibodies (mAbs), antibody-drug conjugates (ADCs), therapeutic proteins, vaccines, and other protein-based biopharmaceuticals. [ 1 ] [ 2 ] [ 3 ]
HCPs may cause immunogenicity in individuals or reduce the potency, stability or overall effectiveness of a drug. National regulatory organisations, such as the FDA and EMA, provide guidelines on acceptable levels of HCPs that may remain in pharmaceutical products before they are made available to the public. The accepted level of HCPs in a final product is evaluated on a case-by-case basis, and depends on multiple factors including dose, frequency of drug administration, type of drug and severity of disease.
The acceptable range of HCPs in a final pharmaceutical product is large due to limitations in the detection and analytical methods that currently exist. [ 4 ] Analysis of HCPs is complex, as the HCP mixture consists of a large variety of protein species, all of which are unique to the specific host organism and unrelated to the intended recombinant protein. [ 5 ] Analysing this large variety of protein species at very minute concentrations is difficult and requires extremely sensitive equipment that has not yet been fully developed. HCP levels need to be monitored because of the uncertain effects they have on the body. At trace amounts, the effects of HCPs on patients are unknown; specific HCPs may affect protein stability and drug effectiveness, or cause immunogenicity in patients. [ 6 ] [ 7 ] If the stability of the drug is affected, the durability of the active substance in the pharmaceutical product could decrease. The effects that the drug is intended to have on patients could also be increased or decreased, leading to health complications. The degree of immunogenicity on a long-term basis is difficult, almost impossible, to determine, and consequences can include severe threats to the patient’s health. [ 5 ]
HCPs in biopharmaceutical products pose a potential safety risk to humans by introducing foreign proteins and biomolecules to the human immune system. Since the common host cells used to produce biopharmaceutical drugs are E. coli, [ 8 ] yeast, [ 9 ] the mouse myeloma cell line (NS0) [ 10 ] and Chinese hamster ovary (CHO) cells, [ 11 ] the resultant HCPs are genetically different from what the human body [ 12 ] recognizes. As a consequence, the presence of HCPs in humans can activate an immune response, which can lead to possibly severe health concerns.
There is a correlation between the amount of foreign antigens (HCPs) in the body and the level of immune response the body produces: the more HCPs present in a drug, the higher the immune response that will be activated. Several studies have linked a reduction in HCPs to a decline in specific inflammatory cytokines. [ 5 ] Other HCPs may be very similar to a human protein and may induce an immune response with cross-reactivity against the human protein or the drug substance protein. The exact consequences of HCPs for an individual patient are uncertain and difficult to determine with the current analytical methods used in biopharmaceutical production and analysis. [ 5 ]
HCPs are identified during the manufacturing of biopharmaceuticals as part of the quality control process. [ 5 ]
During the production process several factors, including the genes of the host cell, the mode of product expression and the purification steps, influence the final HCP composition and abundance. [ 5 ] Several studies report that HCPs are often co-purified along with the product itself by interacting with the recombinant protein. [ 6 ]
Enzyme-linked immunosorbent assay (ELISA) is the predominant method for HCP analysis in pharmaceutical products due to its high sensitivity to proteins, which allows it to detect the low levels of HCPs in produced drugs. [ 4 ] Even though the developmental process requires an extended period of work and several tests with animal models, analysis of HCP content in the final product can be rapidly performed and interpreted. [ 1 ] Whilst ELISA possesses the sensitivity needed for HCP analysis, several limitations are associated with the procedure. HCP quantification relies mainly on the quantity and affinity of anti-HCP antibodies for detection of the HCP antigens. Anti-HCP antibody pools cannot cover the entire HCP population, and weakly immunogenic proteins are impossible to detect, since equivalent antibodies are not generated in the process. [ 4 ]
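As an illustration of how raw ELISA absorbances are typically turned into HCP levels, the sketch below fits a four-parameter logistic (4PL) standard curve, a model commonly used for immunoassay calibration, and back-calculates a sample concentration; the standard concentrations, absorbance readings, and product concentration are all invented.

```python
# Hedged sketch: quantifying HCP levels from ELISA absorbance readings by
# fitting a four-parameter logistic (4PL) standard curve. All data invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero dose, d: response at infinite dose,
    # c: inflection point (EC50), b: slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([1, 3, 10, 30, 100, 300])            # HCP standard, ng/mL
std_od = np.array([0.05, 0.12, 0.33, 0.78, 1.45, 2.1])   # measured absorbance

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.02, 1.0, 30.0, 2.5], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    # Invert the 4PL to map an absorbance back to concentration (ng/mL).
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

sample_od = 0.6
conc = od_to_conc(sample_od, *params)
# Express as ppm (ng HCP per mg product) for an assumed product concentration.
product_mg_per_ml = 1.0
print(f"HCP: {conc:.1f} ng/mL -> {conc / product_mg_per_ml:.1f} ppm")
```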
In addition, methods such as the combination of mass spectrometry (MS) and liquid chromatography ( LC-MS ) have been developed to allow for more efficient and effective HCP analysis and purification. These methods are able to:
Recently, the MS method has been further improved through SWATH LC-MS. SWATH is a data-independent acquisition (DIA) form of mass spectrometry in which the mass range is partitioned into small mass windows, which are then analysed with tandem MS (MS/MS). The key advantages are reproducibility for individual HCP identification and absolute quantification through the application of internal protein standards. [ 14 ]
Despite the solid improvements this method brings to protein analysis, there are also limitations, chief among them that it requires a high level of expertise and advanced instrumentation to conduct the analysis. [ 13 ] | https://en.wikipedia.org/wiki/Host_cell_protein |
Host Controller Interface or Host controller interface may refer to: | https://en.wikipedia.org/wiki/Host_controller_interface |
Caenorhabditis elegans-microbe interactions are defined as any interactions that encompass the association with microbes that temporarily or permanently live in or on the nematode C. elegans. These microbes can engage in commensal, mutualistic or pathogenic interactions with the host, and include bacterial, viral, unicellular eukaryotic, and fungal interactions. In nature C. elegans harbours a diverse set of microbes. [ 1 ] In contrast, C. elegans strains that are cultivated in laboratories for research purposes have lost the naturally associated microbial communities and are commonly maintained on a single bacterial strain, Escherichia coli OP50.
However, E. coli OP50 does not allow for reverse genetic screens because RNAi libraries have only been generated in strain HT115. This limits the ability to study bacterial effects on host phenotypes. [ 2 ] The host-microbe interactions of C. elegans are closely studied because many of the genes and pathways involved have orthologs in humans. [ 2 ] Therefore, the better the host interactions of C. elegans are understood, the better the corresponding interactions within the human body can be understood.
C. elegans is a well-established model organism in different research fields, yet its ecology is only poorly understood. The worms have a short development cycle, lasting only three days, with a total life span of about two weeks. [ 2 ] C. elegans was previously considered a soil-living nematode, [ 3 ] [ 4 ] [ 5 ] but in the last 10 years it was shown that the natural habitats of C. elegans are microbe-rich, such as compost heaps, rotten plant material, and rotten fruits. [ 3 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] Most studies on C. elegans are based on the N2 strain, which has adapted to laboratory conditions. [ 10 ] [ 11 ] [ 12 ] Only in the last few years has the natural ecology of C. elegans been studied in more detail, [ 13 ] and one current research focus is its interaction with microbes. [ 14 ] As C. elegans feeds on bacteria (microbivory), the intestine of worms isolated from the wild is usually filled with a large number of bacteria. [ 9 ] [ 15 ] [ 16 ] In contrast to the very high diversity of bacteria in the natural habitat of C. elegans, lab strains are fed with only one bacterial strain, the Escherichia coli derivative OP50. [ 17 ] OP50 was not co-isolated with C. elegans from nature, but was rather used because of its high convenience for laboratory maintenance. [ 18 ] Bleaching is a common method in the laboratory to clean C. elegans of contaminations and to synchronize a population of worms. [ 19 ] During bleaching the worms are treated with 5N NaOH and household bleach, leading to the death of all worms and survival of only the nematode eggs. [ 19 ] The larvae hatching from these eggs lack any microbes, as none of the currently known C. elegans-associated microbes can be transferred vertically. Since most laboratory strains are kept under these gnotobiotic conditions, nothing is known about the composition of the C. elegans microbiota. [ 20 ] The ecology of C. elegans can only be fully understood in the light of the multiple interactions with the microorganisms it encounters in the wild. The effect of microbes on C. elegans can vary from beneficial to lethal.
In its natural habitat C. elegans is constantly confronted with a variety of bacteria that can have both negative and positive effects on its fitness. To date, most research on C. elegans-microbe interactions has focused on interactions with pathogens. Only recently have some studies addressed the role of commensal and mutualistic bacteria in C. elegans fitness. In these studies, C. elegans was exposed to various soil bacteria, either isolated in a different context or from C. elegans lab strains transferred to soil. [ 21 ] [ 22 ] These bacteria can affect C. elegans either directly through specific metabolites, or by causing a change in the environmental conditions and thus inducing a physiological response in the host. [ 21 ] Beneficial bacteria can have a positive effect on lifespan, generate certain pathogen resistances, or influence the development of C. elegans.
The lifespan of C. elegans is prolonged when grown on plates with Pseudomonas sp. or Bacillus megaterium compared to individuals living on E. coli. [ 21 ] The lifespan extension mediated by B. megaterium is greater than that caused by Pseudomonas sp. As determined by microarray analysis (a method which allows the identification of C. elegans genes that are differentially expressed in response to different bacteria), 14 immune defence genes were up-regulated when C. elegans was grown on B. megaterium, while only two were up-regulated when fed Pseudomonas sp. In addition to immune defence genes, other upregulated genes are involved in the synthesis of collagen and other cuticle components, indicating that the cuticle might play an important role in the interaction with microbes. Although some of these genes are known to be important for C. elegans lifespan extension, the precise underlying mechanisms remain unclear. [ 21 ]
The microbial communities residing inside the host body are now recognized to be important for effective immune responses, [ 22 ] yet the molecular mechanisms underlying this protection are largely unknown. Bacteria can help the host fight pathogens either by directly stimulating the immune response or by competing with the pathogenic bacteria for available resources. [ 23 ] [ 24 ] In C. elegans, some associated bacteria seem to generate protection against pathogens. For example, when C. elegans is grown on Bacillus megaterium or Pseudomonas mendocina, worms are more resistant to infection with the pathogenic bacterium Pseudomonas aeruginosa, [ 21 ] which is a common bacterium in C. elegans' natural environment and therefore a potential natural pathogen. [ 25 ] This protection is characterized by prolonged survival on P. aeruginosa in combination with a delayed colonization of C. elegans by the pathogen. Due to its comparatively large size, B. megaterium is not an optimal food source for C. elegans, [ 26 ] resulting in delayed development and a reduced reproductive rate. The ability of B. megaterium to enhance resistance against infection with P. aeruginosa seems to be linked to this decrease in reproductive rate. However, the protection against P. aeruginosa infection provided by P. mendocina is reproduction-independent and depends on the p38 mitogen-activated protein kinase pathway. P. mendocina is able to activate the p38 MAPK pathway and thus stimulate the immune response of C. elegans against the pathogen. [ 22 ] A common way for an organism to protect itself against microbes is to increase fecundity, increasing the number of surviving offspring in the face of an attack. This defense against parasites is genetically linked to stress response pathways and dependent on the innate immune system. [ 27 ]
Under natural conditions it might be advantageous for C. elegans to develop as fast as possible in order to reproduce rapidly. The bacterium Comamonas DA1877 accelerates the development of C. elegans. [ 28 ] Neither TOR (target of rapamycin) nor insulin signalling seems to mediate this effect on accelerated development. It is thus possible that secreted metabolites of Comamonas, which might be sensed by C. elegans, lead to faster development. Worms fed with Comamonas DA1877 also showed a reduced number of offspring and a reduced lifespan. [ 28 ] [ 29 ] Another microbe that accelerates C. elegans growth is L. sphaericus, which significantly increased the growth rate of C. elegans compared to the normal diet of E. coli OP50. [ 30 ] C. elegans is mostly grown and observed in a controlled laboratory on a controlled diet; it may therefore show different growth rates with naturally occurring microbes.
In its natural environment C. elegans is confronted with a variety of different potential pathogens. C. elegans has been used intensively as a model organism for studying host-pathogen interactions and the immune system. [ 5 ] [ 31 ] These studies revealed that C. elegans has well-functioning innate immune defenses. The first line of defense is the extremely tough cuticle, which provides an external barrier against pathogen invasion. [ 32 ] In addition, several conserved signaling pathways contribute to defense, including the DAF-2/DAF-16 insulin-like receptor pathway and several MAP kinase pathways, which activate physiological immune responses. [ 33 ] Finally, pathogen avoidance behavior represents another line of C. elegans immune defense. [ 34 ] These defense mechanisms do not work independently, but jointly, to ensure an optimal defense response against pathogens. [ 31 ] Many microorganisms have been found to be pathogenic for C. elegans under laboratory conditions. To identify potential C. elegans pathogens, worms in the L4 larval stage are transferred to a medium that contains the organism of interest, in most cases a bacterium. Pathogenicity of the organism can then be inferred by measuring the lifespan of the worms. Several known human pathogens have a negative effect on C. elegans survival. Pathogenic bacteria can also form biofilms, whose sticky exopolymer matrix can impede C. elegans motility [ 35 ] and cloak bacterial quorum-sensing chemoattractants from predator detection. [ 36 ] Biofilms can secrete iron siderophores, which can be detected by C. elegans. [ 37 ] However, only very few natural C. elegans pathogens are currently known. [ 5 ]
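As a sketch of how such a lifespan (killing) assay might be summarized numerically, the following code compares median lifespans on a candidate pathogen versus the E. coli OP50 control and computes a simple daily survival curve; the death-day data are invented and the summary statistics shown are just one common convention.

```python
# Hypothetical summary of a C. elegans killing assay: worms are scored daily
# as alive or dead, and pathogenicity is inferred by comparing survival on
# the test bacterium with survival on the control food (E. coli OP50).
# Day-of-death counts below are invented.
import numpy as np

def survival_curve(death_days, total_days):
    """Fraction of worms still alive at the end of each day 1..total_days."""
    deaths = np.array(death_days)
    return np.array([np.sum(deaths > day) / len(deaths)
                     for day in range(1, total_days + 1)])

def median_lifespan(death_days):
    return float(np.median(death_days))

control = [12, 13, 14, 14, 15, 16, 16, 17]   # days of death on E. coli OP50
test = [4, 5, 5, 6, 6, 7, 8, 9]              # days of death on candidate pathogen

print("median lifespan, control:", median_lifespan(control), "days")
print("median lifespan, test:   ", median_lifespan(test), "days")
print("survival curve (test):", survival_curve(test, 10))
```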
One of the best-studied natural pathogens of C. elegans is the microsporidium Nematocida parisii, which was directly isolated from wild-caught C. elegans. N. parisii is an intracellular parasite that is exclusively transmitted horizontally from one animal to another. The microsporidian spores are likely to exit the cells by disrupting a conserved cytoskeletal structure in the intestine called the terminal web. None of the known immune pathways of C. elegans seems to be involved in mediating resistance against N. parisii. Microsporidia have been found in several nematodes isolated from different locations, indicating that microsporidia are common natural parasites of C. elegans. The N. parisii-C. elegans system represents a very useful tool to study infection mechanisms of intracellular parasites. [ 5 ] Additionally, a new species of microsporidia was recently found in a wild-caught C. elegans, which genome sequencing places in the same genus Nematocida as the microsporidia previously seen in these nematodes. This new species was named Nematocida displodere, after a phenotype seen in late-infected worms, which explode at the vulva to release infectious spores. N. displodere was shown to infect a broad range of tissues and cell types in C. elegans, including the epidermis, muscle, neurons, intestine, seam cells, and coelomocytes. Strikingly, the majority of intestinal infections fail to grow to later parasite stages, while muscle and epidermal infections thrive. [ 38 ] This is in stark contrast to N. parisii, which infects and completes its entire life cycle in the C. elegans intestine. These related Nematocida species are being used to study the host and pathogen mechanisms responsible for allowing or blocking eukaryotic parasite growth in different tissue niches.
Another eukaryotic pathogen is the fungus Drechmeria coniospora, which has not been directly co-isolated with C. elegans from nature but is still considered a natural pathogen of C. elegans. D. coniospora attaches to the cuticle of the worm at the vulva, mouth, and anus, and its hyphae penetrate the cuticle. In this way D. coniospora infects the worm from the outside, whereas the majority of bacterial pathogens infect the worm from the intestinal lumen. [ 39 ] [ 40 ]
In 2011 the first naturally associated virus of C. elegans was isolated from worms found outside the laboratory. The Orsay virus is an RNA virus closely related to the nodaviruses. The virus is not stably integrated into the host genome and is transmitted horizontally under laboratory conditions. An antiviral RNAi pathway is essential for C. elegans resistance against Orsay virus infection. [ 41 ] To date, no other virus, intracellular pathogen, or multicellular parasite has been found to naturally infect the nematode, which limits the use of C. elegans as an experimental system for such interactions. In 2005, two reports showed that vesicular stomatitis virus (VSV), an arbovirus with a broad invertebrate and vertebrate host range, could replicate in primary cells derived from C. elegans embryos. [ 42 ]
Two bacterial strains of the genus Leucobacter were co-isolated from nature with the two Caenorhabditis species C. briggsae and C. n. spp 11, and named Verde 1 and Verde 2. These two Leucobacter strains showed contrasting pathogenic effects in C. elegans. Worms infected with Verde 2 developed a deformed anal region (the “Dar” phenotype), while infections with Verde 1 resulted in slower growth due to coating of the cuticle with the bacterial strain. In liquid culture, Verde 1-infected worms stuck together by their tails and formed so-called “worm stars”. The trapped worms cannot free themselves and eventually die, after which C. elegans is used as a food source by the bacteria. Only larvae in the L4 stage seem to be able to escape, by autotomy: they split their bodies in half, so that the anterior half can escape. These “half-worms” remain viable for several days. [ 43 ] The Gram-positive bacterium Bacillus thuringiensis is likely associated with C. elegans in nature. B. thuringiensis is a soil bacterium that is often used in infection experiments with C. elegans. [ 44 ] [ 45 ] It produces pore-forming toxins, called crystal (Cry) toxins, which are associated with spores; these are jointly taken up by C. elegans orally. Inside the host, the toxins bind to the surface of intestinal cells and induce the formation of pores in those cells, causing their destruction. The resulting change in milieu in the gut leads to germination of the spores, which subsequently proliferate in the worm body. [ 46 ] [ 47 ] [ 48 ] A notable aspect of the C. elegans-B. thuringiensis system is the high variability in pathogenicity between different strains: [ 45 ] [ 48 ] there are highly pathogenic strains, but also strains that are less pathogenic or even non-pathogenic. [ 45 ] [ 48 ] | https://en.wikipedia.org/wiki/Host_microbe_interactions_in_Caenorhabditis_elegans |
Host signal processing (HSP) is a term used in computing to describe hardware such as a modem or printer which is emulated (to various degrees) in software. Intel refers to the technology as native signal processing (NSP). HSP replaces dedicated DSP or ASIC hardware by using the general-purpose CPU of the host computer.
Modems using HSP are known as winmodems (a term trademarked by 3COM / USRobotics, but genericized) or softmodems. Printers using HSP are known as GDI printers (after the MS Windows GDI software interface), winprinters (named after winmodems) or softprinters.
The Apple II Disk II floppy drive used the host CPU to process drive control signals, instead of a microcontroller . This instance of HSP predates the usage of the terms HSP and NSP.
In the mid- to late-1990s, Intel pursued native signal processing technology to improve multimedia handling. [ 1 ] According to testimony by Intel, Microsoft opposed development of NSP because the technology could reduce the necessity of the Microsoft Windows operating system. [ 1 ] Intel claims to have terminated development of NSP because of threats from Microsoft. [ 1 ]
| https://en.wikipedia.org/wiki/Host_signal_processing |
In parasitology and epidemiology, a host switch (or host shift) is an evolutionary change of the host specificity of a parasite or pathogen. For example, the human immunodeficiency virus used to infect and circulate in non-human primates in West-central Africa, but switched to humans in the early 20th century. [ 1 ] [ 2 ]
All symbiotic species, such as parasites, pathogens and mutualists, exhibit a certain degree of host specificity. This means that pathogens are highly adapted to infect a specific host, in terms of (but not limited to) receptor binding, countermeasures for host restriction factors, and transmission methods. They occur in the body (or on the body surface) of a single host species or, more often, on a limited set of host species. In the latter case, the suitable host species tend to be taxonomically related, sharing similar morphology and physiology. [ 3 ]
Speciation is the creation of a new and distinct species through evolution, and so unique differences exist between all life on Earth. Dogs and birds, for example, are very different classes of animals (for one, dogs have fur coats while birds have feathers and wings), so their fundamental biological makeup is as different as their physical appearance, from their internal cellular mechanisms to their response to infection. Species-specific pathogens must therefore overcome multiple host range barriers before a new host can support their infection.
Recent studies have proposed discriminating between two different types of evolutionary change in host specificity. [ 5 ] [ 6 ]
According to this view, host switch can be a sudden and accidental colonization of a new host species by a few parasite individuals capable of establishing a new and viable population there. After a switch of this type, the new population is more-or-less isolated from the population on the donor host species. The new population does not affect the further fate of the conspecific parasites on the donor host, and may finally lead to parasite speciation. This type of switch is more likely to target an increasing host population that harbours a relatively poor parasite/pathogen fauna, such as the pioneer populations of invasive species. The switch of HIV to the human host is of this type.
Alternatively, in the case of a multi-host parasite, a host shift may occur as a gradual change of the relative role of one host species, which becomes the primary rather than a secondary host. The former primary host slowly becomes a secondary host, or may even, eventually, be totally abandoned. This process is slower and more predictable, and does not increase parasite diversity. It will typically occur in a shrinking host population that harbours a parasite/pathogen fauna which is relatively rich for the host population size.
All diseases have an origin. Some diseases circulate in human populations and are already known to epidemiologists, but evolution of a disease can result in the emergence of a new, stronger strain, for example multi-drug-resistant tuberculosis. In other cases, diseases can be discovered that have not previously been observed or studied. These can emerge through host switch events that allow the pathogen to become human-adapted, and are only discovered when an infection outbreak occurs.
A pathogen that switches host emerges as a new form of the pathogen capable of circulating within a new population. Diseases that emerge in this sense occur more often through human overexposure to wildlife, which can result from urbanisation, deforestation, destruction of wildlife habitats and changes to agricultural practice. The more exposure humans have to the wild, the more spillover infections occur and the more pathogens are exposed to human-specific selection pressures. The pathogen is therefore driven towards species-specific adaptation and is more likely to gain the mutations necessary to jump the species barrier and become human-infective.
The problem with diseases emerging in new species is that the host population will be immunologically naïve. This means that the host has never been previously exposed to the pathogen and has no pre-existing antibodies or protection against the infection. This makes host switching dangerous and can result in more pathogenic infections. The pathogen is not adapted to surviving in the new host, and this imbalance of coevolutionary history may result in aggressive infections. However, this balance must be brought under control for the pathogen to maintain its infection in the new host and not burn through the population.
A pathogen undergoing a host switch is driven by selection pressures to acquire the changes necessary for survival and transmission in the new host species. According to a 2008 Microbiology and Molecular Biology Review, [ 7 ] this process of host switching can be defined by three stages: isolated infections of the new host without onward transmission; localized outbreaks with limited chains of transmission; and sustained, epidemic spread in the new host population.
Exposure to new environments and host species is what allows pathogens to evolve. Early isolated infection events expose the pathogen to the selection pressures of survival in the new species, to which some pathogens will eventually adapt. This gives rise to pathogens with the primary adaptations allowing smaller outbreaks within the potential new host, increasing exposure and driving further evolution. The result is complete host adaptation and the capability for a larger epidemic, at which point the pathogen can survive sustainably in its new host, i.e. a host switch has occurred. Sufficiently adapted pathogens may also reach pandemic status, meaning the disease has spread across a country or around the world.
A zoonosis is a specific kind of cross-species infection in which diseases are transmitted from vertebrate animals to humans. An important feature of a zoonotic disease is that it originates from an animal reservoir, which is essential to the survival of the zoonotic pathogen. [ 8 ] Zoonotic pathogens naturally exist in animal populations asymptomatically, or cause only mild disease, making it challenging to find the natural host (disease reservoir) and impossible to eradicate the pathogen, as it will always continue to live in wild animal species.
Those zoonotic pathogens that permanently make the jump from vertebrate animals to human populations have performed a host switch and can thus continue to survive, as they are adapted to transmission in human populations. However, not all zoonotic infections complete the host switch; some exist only as smaller isolated events known as spillovers. This means that humans can become infected by an animal pathogen without the pathogen taking hold and becoming a human-transmitted disease that circulates in human populations, because the host switch adaptations required to make the pathogen sustainable and transmissible in the new host do not occur.
Some cross-species transmission events are important as they can show that a pathogen is getting closer to epidemic or pandemic potential. Small epidemics show that the pathogen is becoming more adapted to human transmission and gaining the stability needed to persist in the human population. However, some pathogens do not possess the ability to spread between humans at all. This is the case for spillover events such as rabies: humans infected by the bite of a rabid animal do not tend to pass on the disease and so are classed as dead-end hosts. [ 9 ]
An extensive list of zoonotic infections can be found at Zoonosis .
The following pathogens are examples of diseases that have crossed the species barrier into the human population and highlight the complexity of the switch.
Influenza, also known as the flu, is one of the most well-known viruses that continues to pose a huge burden on today's health care systems, and it is the most common cause of human respiratory infections. [ 10 ] Influenza is an example of how a virus can repeatedly jump the species barrier in multiple isolated instances over time, creating different human-infecting strains that circulate in human populations, for example H1N1, H5N1 and H7N9. These host switch events create pandemic strains that eventually transition into the seasonal flu that circulates annually in the human population in the colder months.
Influenza A viruses (IAVs) are classified by two defining surface proteins, which are present in all influenza viral strains; small differences in them allow the differentiation of new strains. These identifiers are hemagglutinin (HA) and neuraminidase (NA).
IAVs naturally exist in wild birds without causing disease or symptoms. These birds, especially waterfowl and shore birds, are the reservoir host of the majority of IAVs bearing these HA and NA protein antigens. [ 11 ] From these animals, the virus spills over into other species (e.g. pigs, humans, dogs [ 10 ]), creating smaller-scale infections until the virus has acquired enough mutations to spread and maintain itself in the other species. The RNA polymerase enzyme of influenza has low accuracy due to the lack of a proofreading mechanism and therefore has a high error rate during genetic replication. [ 10 ] Because of this, influenza can mutate frequently depending on the current selection pressures and has the capability to adapt to survival in different host species.
Comparing IAVs in birds and humans, one of the main barriers to host switching is the type of cell the virus can recognise and bind to (its cell tropism) in order to initiate infection and viral replication. An avian influenza virus is adapted to binding to the gastrointestinal tract of birds. [ 11 ] In bird populations, the virus is shed from the excretory system into the water and ingested by other birds, colonising their guts. This is not the case in humans, where influenza produces a respiratory infection. In humans the virus binds to respiratory tissue and is transmitted through breathing, talking and coughing; the virus therefore has to adapt in order to switch from avian populations to the human host. Additionally, the respiratory tract is mildly acidic, so the virus must also mutate to overcome these conditions in order to successfully colonise mammalian lungs and respiratory tracts. Acidic conditions normally trigger viral uncoating, as they are a sign that the virus has penetrated a cell; premature uncoating, however, exposes the virus to the immune system, leading to its destruction. [ 12 ]
IAVs bind to host cells using the HA protein. These proteins recognise sialic acids residing on the terminal regions of external glycoproteins on host cell membranes. However, HA proteins have different specificities for isomers of sialic acid depending on which species the IAV is adapted to. IAVs adapted to birds recognise α2-3 sialic acid isomers, whereas human-adapted IAV HAs bind to α2-6 isomers. [ 10 ] These are the isomers of sialic acid mostly present in the regions each IAV respectively infects, i.e. the gastrointestinal tract of birds and the respiratory tract of humans. Therefore, in order to complete a host switch, the HA specificity must mutate to match the receptors of the new host.
In the final stages of infection, the HA proteins are cleaved to activate the virus. [ 10 ] Certain hemagglutinin subtypes (H5 and H7) have the capacity to acquire additional mutations at the HA activation cleavage site, which change the HA specificity and broaden the range of protease enzymes that can bind to and activate the virus. This makes the virus more pathogenic and can make IAV infections more aggressive. [ 10 ]
Successfully binding to different host tissue is not the only requirement of a host switch for influenza A. The influenza genome is replicated by the viral RNA-dependent RNA polymerase, but the polymerase must adapt to use host-specific cofactors in order to function. [ 13 ] The polymerase is a heterotrimeric complex consisting of three major subunits: PB1, PB2 and PA. Each plays its own role in replication of the viral genome, but PB2 is an important factor in the host range barrier, as it interacts with host cap proteins. [ 10 ] Specifically, residue 627 of the PB2 subunit plays a defining role in the host switch from avian- to human-adapted influenza strains. In avian-adapted IAVs, the residue at position 627 is glutamic acid (E), whereas in mammal-infecting influenza this residue is mutated to lysine (K). [ 13 ] [ 14 ] Therefore, the virus must undergo an E627K mutation in order to perform a mammalian host switch. The region surrounding residue 627 forms a cluster protruding from the enzyme core. With lysine, this PB2 surface region can form a basic patch enabling host cofactor interaction, whereas the glutamic acid residue found in avian strains disrupts this basic region and the subsequent interactions. [ 13 ]
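As a toy illustration of how this molecular marker can be inspected, the sketch below reports the amino acid at position 627 of a PB2 protein sequence. The sequence is a dummy, and the assumption of 1-based, full-length PB2 numbering without alignment gaps is a simplification; real analyses locate the position via sequence alignment.

```python
# Illustrative sketch (not from the article): checking an influenza PB2
# protein sequence for the avian-vs-mammalian signature at residue 627.
# Assumes 1-based numbering over a full-length, ungapped PB2 sequence.

def pb2_627_signature(pb2_protein: str) -> str:
    """Return a label for the amino acid at PB2 position 627 (1-based)."""
    if len(pb2_protein) < 627:
        raise ValueError("sequence shorter than 627 residues")
    residue = pb2_protein[626]  # index 626 = position 627 in 1-based numbering
    if residue == "E":
        return "627E: avian-like signature"
    if residue == "K":
        return "627K: mammalian-adapted signature"
    return f"627{residue}: atypical residue"

# Toy example: a dummy sequence with lysine placed at position 627.
dummy = "A" * 626 + "K" + "A" * 132
print(pb2_627_signature(dummy))  # -> 627K: mammalian-adapted signature
```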
The cellular protein ANP32A has been shown to account for the contrasting efficiency of the avian influenza polymerase in different host species. [ 13 ] [ 15 ] The key difference is that the avian form of ANP32A contains 33 more amino acids than the mammalian form. [ 15 ] When mammalian cells are infected with avian IAVs, polymerase efficiency is sub-optimal, as the avian virus is not adapted to surviving in mammalian cells. However, when the mammalian cell contains the avian ANP32A protein, viral replication is largely restored, [ 15 ] showing that ANP32A most likely interacts positively with, and optimises, the polymerase. Mutations in PB2 that make the influenza virus mammal-adapted allow the viral polymerase to interact with the mammalian ANP32A protein and are therefore essential for the host switch.
There are many factors that determine a successful influenza host switch from an avian to a mammalian host, including the HA receptor-binding specificity, the stability of the virus in the acidic mammalian respiratory tract, and polymerase adaptations such as PB2 E627K that permit the use of mammalian host cofactors like ANP32A.
Each factor has a role to play, and the virus must acquire them all in order to undergo the host switch. This is a complex process, and the virus requires time to sufficiently adapt and mutate. Once each mutation has been achieved, the virus can infect human populations and has the potential to reach pandemic levels; however, this depends on virulence and rate of transmission, and host switching will change these parameters of viral infection.
HIV, the human immunodeficiency virus, attacks cells of the immune system, depleting the body's defence against incoming pathogens. In particular, HIV infects CD4+ T helper lymphocytes, cells involved in the organisation and coordination of the immune response. This means that the body can recognise incoming pathogens but cannot trigger its defences against them. [ 17 ] When HIV has sufficiently diminished the immune system, it causes a condition known as acquired immunodeficiency syndrome (AIDS), characterised by severe weight loss, fever, swollen lymph nodes and susceptibility to other severe infections. [ 18 ]
HIV is a type of lentivirus, of which two types are known to cause AIDS: HIV-1 and HIV-2, [ 17 ] [ 19 ] both of which jumped into the human population through numerous cross-species transmission events of the equivalent disease in primates, known as simian immunodeficiency virus (SIV). SIVs are found in many different primate species, including chimpanzees and mandrills in sub-Saharan Africa, and for the most part are largely non-pathogenic. [ 19 ] HIV-1 and HIV-2 have similar features but are antigenically different and so are classed as different types of HIV. [ 19 ] Most transmission events were unsuccessful in switching host; however, in the case of HIV-1, four distinct forms emerged, categorised as groups M, N, O and P, of which group M is associated with pandemic HIV-1 and accounts for the majority of global cases. Each type is proposed to have emerged through bush-meat hunting and exposure to the body fluids of infected primates, [ 19 ] including blood.
Host-specific selection pressures would bring about changes in the viral proteome of HIV to suit the new host, and therefore these regions would not be conserved when compared with SIVs. Through such viral proteomic comparisons, the viral matrix protein Gag-30 was identified as having differing amino acids at position 30. This amino acid is conserved as a methionine in SIVs but mutated to an arginine or lysine in HIV-1 groups M, N and O, [ 19 ] [ 20 ] suggesting a strong selection pressure in the new host. This observation was supported by other data, including the fact that the mutation was reversed when HIV-1 was used to infect primates, meaning that the arginine or lysine converted back to the methionine originally observed in SIVs. [ 20 ] This reinforces the idea of strong, opposing host-specific selection pressures between humans and primates. Additionally, it was observed that methionine-containing viruses replicated more efficiently in primates and arginine/lysine-containing viruses in humans. [ 20 ] This points to the reason for the mutation (optimal levels of replication in host CD4+ T lymphocytes), although the exact function and action of the position 30 amino acid is unknown.
Tetherin is a defence protein in the innate immune response whose production is activated by interferon. Tetherin specifically inhibits the infective capabilities of HIV-1 by blocking its release from the cells it infects. [ 21 ] This prevents the virus from leaving to infect more cells and halts the progression of the infection, giving the host defences time to destroy the virus-infected cells. Adapted viruses tend to have countermeasures against tetherin, normally acting through specific regions of the protein. These anti-tetherin techniques differ between SIVs and HIV-1, showing that tetherin interaction is a host range restriction that must be overcome to enable a primate-to-human host switch. SIVs use the Nef protein to remove tetherin from the cell membrane, whereas HIV-1 uses the Vpu protein to degrade the defence protein. [ 19 ]
Tetherin is a conserved viral-defence mechanism across species, but its exact sequence and structure show some differences. The regions making up tetherin include a cytoplasmic region, a transmembrane region, a coiled-coil extracellular domain and a GPI anchor; [ 19 ] however, human tetherin differs from that of other primates by having a deletion in the cytoplasmic region. [ 22 ] This incomplete cytoplasmic domain renders the Nef proteins found in SIVs ineffective as an anti-tetherin response in humans, and so in order to switch from non-human primates to a human host, the SIV must activate the Vpu protein, which instead blocks tetherin through interaction with the conserved transmembrane region. [ 22 ]
The two factors involved in the host range barrier for SIV-to-HIV switching are therefore the Gag matrix protein residue at position 30 and the anti-tetherin countermeasure (Vpu versus Nef).
Only an SIV containing both the mutation of the Gag-30 protein and the acquired Vpu anti-tetherin activity will be able to undergo a host switch from primates to humans and become an HIV. These evolutionary adaptations allow the virus to achieve optimal levels of replication in human infected cells and to prevent its destruction by tetherin. | https://en.wikipedia.org/wiki/Host_switch |
In supramolecular chemistry, [ 1 ] host–guest chemistry describes complexes that are composed of two or more molecules or ions held together in unique structural relationships by forces other than those of full covalent bonds. Host–guest chemistry encompasses the idea of molecular recognition and interactions through non-covalent bonding. Non-covalent bonding is critical in maintaining the 3D structure of large molecules, such as proteins, and is involved in many biological processes in which large molecules bind specifically but transiently to one another.
Although non-covalent interactions can be roughly divided into those with more electrostatic or more dispersive contributions, a few types of non-covalent interaction are commonly mentioned: ionic bonding, hydrogen bonding, van der Waals forces and hydrophobic interactions. [ 2 ]
Host-guest interaction has attracted considerable attention since its discovery. It is an important field because many biological processes require host-guest interactions, and they can be useful in some material designs. There are several typical host molecules, such as cyclodextrins and crown ethers .
"Host molecules" usually have "pore-like" structure that is able to capture a "guest molecules". Although called molecules, hosts and guests are often ions. The driving forces of the interaction might vary, such as hydrophobic effect and van der Waals forces [ 5 ] [ 6 ] [ 7 ] [ 8 ]
Binding between host and guest can be highly selective, in which case the interaction is called molecular recognition . Often, a dynamic equilibrium exists between the unbound and the bound states:

H + G ⇌ HG
The "host" component is often the larger molecule, and it encloses the smaller, "guest", molecule. In biological systems, the analogous terms of host and guest are commonly referred to as enzyme and substrate respectively. [ 9 ]
Closely related to host–guest chemistry are inclusion compounds (also known as inclusion complexes ): chemical complexes in which one chemical compound (the "host") has a cavity into which a "guest" compound can be accommodated. The interaction between the host and guest involves purely van der Waals bonding . The definition of inclusion compounds is very broad, extending to channels formed between molecules in a crystal lattice in which guest molecules can fit.
Inclusion Compound : A complex in which one component (the host) forms a cavity or, in the case of a crystal, a crystal lattice containing spaces in the shape of long tunnels or channels in which molecular entities of a second chemical species (the guest) are located. There is no covalent bonding between guest and host, the attraction being generally due to van der Waals forces. [ 10 ]
Yet another related class of compounds are clathrates , which often consist of a lattice that traps or contains molecules. [ 11 ] The word clathrate is derived from the Latin clathratus ( clatratus ), meaning 'with bars, latticed '. [ 12 ]
Molecular encapsulation concerns the confinement of a guest within a larger host. In some cases true host-guest reversibility is observed; in others, the encapsulated guest cannot escape. [ 13 ]
An important implication of encapsulation (and host-guest chemistry in general) is that the guest behaves differently from the way it would when in solution. Guest molecules that would react by bimolecular pathways are often stabilized because they cannot combine with other reactants. The spectroscopic signatures of trapped guests are of fundamental interest. Compounds normally highly unstable in solution have been isolated at room temperature when molecularly encapsulated. Examples include cyclobutadiene , [ 15 ] arynes or cycloheptatetraene. [ 16 ] [ 17 ] Large metalla-assemblies, known as metallaprisms , contain a conformationally flexible cavity that allows them to host a variety of guest molecules. These assemblies have shown promise as agents of drug delivery to cancer cells.
Encapsulation can control reactivity. For instance, excited-state reactivity of free 1-phenyl-3-tolyl-2-propanone (abbreviated A-CO-B) yields products A-A, B-B, and A-B, which result from decarbonylation followed by random recombination of the radicals A• and B•. The same substrate upon encapsulation, however, reacts to yield the controlled recombination product A-B and rearranged products (isomers of A-CO-B). [ 18 ]
Organic hosts are occasionally called cavitands . The original definition proposed by Cram includes many classes of molecules: cyclodextrins , calixarenes , pillararenes and cucurbiturils . [ 19 ]
Calixarenes and related formaldehyde-arene condensates ( resorcinarenes and pyrogallolarenes ) form a class of hosts that form inclusion compounds. [ 5 ] [ 20 ] A related family of formaldehyde-derived oligomeric rings are the pillararenes (pillared arenes). One famous illustration of the stabilizing effect of host-guest complexation is the stabilization of cyclobutadiene by such an organic host. [ 21 ]
Cyclodextrins (CDs) are tubular molecules composed of several glucose units connected by ether bonds. The three kinds of CDs, α-CD (6 units), β-CD (7 units), and γ-CD (8 units), differ in their cavity sizes: 5, 6, and 8 Å, respectively. α-CD can thread onto one PEG chain, while γ-CD can thread onto two PEG chains; β-CD can bind thiophene-based molecules. [ 5 ] Cyclodextrins are well established hosts for the formation of inclusion compounds. Illustrative is the case of ferrocene , which is inserted into the cyclodextrin at 100 °C under hydrothermal conditions. [ 22 ]
Cucurbiturils are macrocyclic molecules made of glycoluril ( =C 4 H 2 N 4 O 2 = ) monomers linked by methylene bridges ( −CH 2 − ). The oxygen atoms are located along the edges of the band and are tilted inwards, forming a partly enclosed cavity ( cavitand ). Cucurbit[n]urils are similar in size to γ-CD and behave similarly ( e.g. , one cucurbit[n]uril can thread onto 2 PEG chains). [ 5 ]
The structure of cryptophanes contains six phenyl rings, mainly connected in four ways. Due to the phenyl groups and aliphatic chains, the cages inside cryptophanes are highly hydrophobic, suggesting a capability for capturing non-polar molecules. On this basis, cryptophanes can be employed to capture xenon in aqueous solution, which could be helpful in biological studies. [ 5 ]
Crown ethers bind cations. Small crown ethers, e.g. 12-crown-4 , bind well to small ions such as Li + , while large crowns, such as 24-crown-8 , bind better to larger ions. [ 5 ] Beyond binding ionic guests, crown ethers also bind some neutral molecules, e.g. 1,2,3- triazole . Crown ethers can also be threaded with slender linear molecules and/or polymers, giving rise to supramolecular structures called rotaxanes . Given that the crown ethers are not bound to the chains, they can move up and down the threading molecule. [ 8 ] Crown ether complexes of metal cations (and the corresponding complexes of cryptands ) are not considered to be inclusion complexes, since the guest is bound by forces stronger than van der Waals bonding.
Zeolites have open framework structures with cavities in which guest species can reside. Being aluminosilicates in composition, zeolites are rigid. Many structures are known, some of which are considerably useful as catalysts and for separations. [ 11 ]
Silica clathrasils are compounds structurally similar to clathrate hydrates , with a SiO 2 framework, and can be found in a range of marine sediments. [ 23 ]
Clathrate compounds with formula A 8 B 16 X 30 , where A is an alkaline earth metal , B is a group III element, and X is an element from group IV, have been explored for thermoelectric devices. Thermoelectric materials follow a design strategy called the phonon glass electron crystal concept. [ 24 ] [ 25 ] Low thermal conductivity and high electrical conductivity are desired to exploit the Seebeck effect . When the guest and host framework are appropriately tuned, clathrates can exhibit low thermal conductivity, i.e., phonon glass behavior, while electrical conductivity through the host framework is undisturbed, allowing clathrates to exhibit electron crystal behavior.
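The trade-off described above is usually quantified by the dimensionless thermoelectric figure of merit ZT = S²σT/κ; a minimal sketch of that formula follows (the numerical values are illustrative assumptions, not measured clathrate data):

```python
def zt(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_m_k, temp_k):
    """Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return seebeck_v_per_k ** 2 * sigma_s_per_m * temp_k / kappa_w_per_m_k

# Illustrative 'phonon glass, electron crystal' numbers (assumed, not measured):
# S = 200 uV/K, sigma = 1e5 S/m, kappa = 1 W/(m K), T = 600 K
print(zt(200e-6, 1e5, 1.0, 600))   # ~2.4
```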
Hofmann clathrates are coordination polymers with the formula Ni(CN) 4 ·Ni(NH 3 ) 2 (arene). These materials crystallize with small aromatic guests (benzene, certain xylenes), and this selectivity has been exploited commercially for the separation of these hydrocarbons. [ 11 ] Metal organic frameworks (MOFs) form clathrates.
Urea , a small molecule with the formula O=C(NH 2 ) 2 , has the peculiar property of crystallizing in open but rigid networks. The loss of efficient molecular packing is compensated by hydrogen bonding. Ribbons of hydrogen-bonded urea molecules form tunnel-like hosts into which many organic guests bind. Urea clathrates have been well investigated for separations. [ 26 ] Beyond urea, several other organic molecules form clathrates: thiourea , hydroquinone , and Dianin's compound . [ 11 ]
When the host and guest molecules combine to form a single complex, the equilibrium is represented as

H + G ⇌ HG
and the equilibrium constant, K, is defined as

K = [HG] / ([H][G])
where [X] denotes the concentration of a chemical species X (all activity coefficients are assumed to have a numerical value of 1).
The mass-balance equations at any data point,

T_H = [H] + [HG] and T_G = [G] + [HG],

where T_H and T_G represent the total concentrations of host and guest, can be reduced to a single quadratic equation in, say, [G], and so can be solved analytically for any given value of K. The concentrations [H] and [HG] can then be derived.
The next step in the calculation is to calculate the value X_i^calc of a quantity corresponding to the observed quantity X_i^obs. A sum of squares, U, over all np data points can then be defined as

U = Σ_{i=1}^{np} (X_i^obs − X_i^calc)²
and this can be minimized with respect to the stability constant value, K, and a parameter such as the chemical shift of the species HG (NMR data) or its molar absorbance (UV/vis data). This procedure is applicable to 1:1 adducts.
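A minimal numerical sketch of this fitting procedure for a 1:1 adduct, assuming fast-exchange NMR data (all names and values are illustrative, and scipy's general-purpose minimizer stands in for whatever solver a real fitting program would use):

```python
import numpy as np
from scipy.optimize import minimize

def species(K, TH, TG):
    """Free and bound concentrations for 1:1 binding, H + G <=> HG.
    [G] is the positive root of K[G]^2 + (1 + K(TH - TG))[G] - TG = 0."""
    b = 1.0 + K * (TH - TG)
    G = (-b + np.sqrt(b ** 2 + 4.0 * K * TG)) / (2.0 * K)
    HG = TG - G
    H = TH - HG
    return H, G, HG

def U(params, TH, TG, delta_obs):
    """Sum of squared residuals; under fast exchange the observed shift is
    the concentration-weighted average over free G and bound HG."""
    K, d_free, d_bound = abs(params[0]), params[1], params[2]
    _, G, HG = species(K, TH, TG)
    delta_calc = (G * d_free + HG * d_bound) / TG
    return np.sum((delta_obs - delta_calc) ** 2)

# Synthetic titration: fixed total guest, increasing total host
TG = np.full(8, 1.0e-3)
TH = np.linspace(1.0e-6, 8.0e-3, 8)
K_true, d_free, d_bound = 500.0, 3.50, 4.20   # assumed 'true' values
_, G, HG = species(K_true, TH, TG)
delta_obs = (G * d_free + HG * d_bound) / TG

fit = minimize(U, x0=(100.0, 3.4, 4.0), args=(TH, TG, delta_obs),
               method="Nelder-Mead")
print("fitted K ~", abs(fit.x[0]))   # recovers ~500
```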
With nuclear magnetic resonance (NMR) spectra the observed chemical shift value, δ , arising from a given atom contained in a reagent molecule and one or more complexes of that reagent, will be the concentration-weighted average of all shifts of those chemical species. Chemical exchange is assumed to be rapid on the NMR time-scale.
Using UV-vis spectroscopy, the absorbance of each species is proportional to the concentration of that species, according to the Beer–Lambert law :

A_λ = ℓ Σ_{i=1}^{N} ε_{i,λ} c_i

where λ is a wavelength, ℓ is the optical path length of the cuvette which contains the solution of the N compounds ( chromophores ), ε_{i,λ} is the molar absorbance (also known as the extinction coefficient) of the i-th chemical species at the wavelength λ, and c_i is its concentration. When the concentrations have been calculated as above and the absorbance has been measured for samples with various concentrations of host and guest, the Beer–Lambert law provides a set of equations, at a given wavelength, that can be solved by a linear least-squares process for the unknown extinction coefficient values at that wavelength.
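A short sketch of that linear least-squares step (synthetic concentrations and assumed extinction coefficients, for illustration only):

```python
import numpy as np

path_cm = 1.0                       # optical path length of the cuvette
C = np.array([[1.0e-3, 0.0],        # columns: [G], [HG] (mol/L); one row
              [8.0e-4, 2.0e-4],     # per titration sample, taken from the
              [5.0e-4, 5.0e-4],     # equilibrium model above
              [2.0e-4, 8.0e-4]])
eps_true = np.array([120.0, 940.0])  # assumed molar absorbances, L/(mol cm)
A = path_cm * C @ eps_true           # synthetic absorbances at one wavelength

eps_fit, *_ = np.linalg.lstsq(path_cm * C, A, rcond=None)
print(eps_fit)                       # recovers [120., 940.]
```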
Host-guest structures can be probed by their luminescence. A rigid matrix protects emitters from being quenched, extending the phosphorescence lifetime. [ 27 ] In this circumstance, α-CD and CB can be used, [ 28 ] [ 29 ] in which the phosphor serves as a guest interacting with the host. For example, 4-phenylpyridinium derivatives interact with CB and copolymerize with acrylamide . The resulting polymer exhibited a phosphorescence lifetime of roughly 2 s. Additionally, Zhu et al. used a crown ether and potassium ions to modify the polymer and enhance the phosphorescence emission. [ 30 ]
Another technique for evaluating host-guest interactions is calorimetry .
Host-guest complexation is pervasive in biochemistry. Many protein hosts recognize, and hence selectively bind, other biomolecules. When the protein host is an enzyme, the guests are called substrates. While these concepts are well established in biological systems, the applications of synthetic host-guest chemistry remain mostly aspirational. One major exception is zeolites, for which host-guest chemistry is their raison d'être.
A self-healing hydrogel can be constructed from modified cyclodextrin and adamantane . [ 31 ] [ 33 ] Another strategy is to use the interaction between the polymer backbone and a host molecule (the host molecule threading onto the polymer). If the threading process is fast enough, self-healing can also be achieved. [ 32 ]
Cyclodextrin forms inclusion compounds with fragrances which are more stable towards exposure to light and air. When incorporated into textiles the fragrance lasts much longer due to the slow-release action. [ 34 ]
Photolytically sensitive caged compounds have been examined as containers for releasing a drug or reagent . [ 35 ] [ 36 ]
An encryption system based on pillar[5]arene, spiropyran and pentanenitrile (in the free state and grafted to a polymer) was constructed by Wang et al . After UV irradiation, the spiropyran transforms into merocyanine. When visible light is shone on the material, the merocyanine close to the pillar[5]arene-free pentanenitrile complex transforms back into spiropyran faster; by contrast, the merocyanine close to the pillar[5]arene-grafted pentanenitrile complex transforms much more slowly. This spiropyran-merocyanine transformation can be used for message encryption. [ 37 ] Another strategy is based on metallacages and polycyclic aromatic hydrocarbons. [ 38 ] Because of the fluorescence emission differences between the complex and the cages, information can be encrypted.
Although individual host-guest interactions are not strong, increasing their number can improve the mechanical properties of materials. Threading host molecules onto a polymer is one commonly used strategy: it takes time for the host molecules to de-thread from the polymer, which provides a mechanism of energy dissipation. [ 33 ] [ 39 ] [ 40 ] Another method is to use slow-exchange host-guest interactions. Though slow exchange improves the mechanical properties, self-healing properties are simultaneously sacrificed. [ 41 ]
Silicon surfaces functionalized with tetraphosphonate cavitands have been used to singularly detect sarcosine in water and urine solutions. [ 42 ]
Traditionally, chemical sensing has been approached with a system that contains an indicator covalently bound to a receptor through a linker. Once the analyte binds, the indicator changes color or fluoresces. This technique is called the indicator-spacer-receptor approach (ISR). [ 43 ] In contrast to ISR, an indicator-displacement assay (IDA) utilizes a non-covalent interaction between a receptor (the host), an indicator, and an analyte (the guest). Like ISR, IDA can use colorimetric (C-IDA) or fluorescence (F-IDA) indicators. In an IDA assay, a receptor is incubated with the indicator. When the analyte is added to the mixture, the indicator is released into the environment. Once the indicator is released, it either changes color (C-IDA) or fluoresces (F-IDA). [ 44 ]
IDA offers several advantages versus the traditional ISR chemical sensing approach. First, it does not require the indicator to be covalently bound to the receptor. Secondly, since there is no covalent bond, various indicators can be used with the same receptor. Lastly, the media in which the assay may be used is diverse. [ 45 ]
Chemical sensing techniques such as C-IDA have biological implications. For example, protamine is a coagulant routinely administered after cardiopulmonary surgery that counteracts the anti-coagulant activity of heparin. In order to quantify the protamine in plasma samples, a colorimetric displacement assay is used. Azure A dye is blue when unbound, but shows a purple color when bound to heparin. The binding between Azure A and heparin is weak and reversible, which allows protamine to displace Azure A. Once the dye is liberated, it displays its blue color again. The degree to which the dye is displaced is proportional to the amount of protamine in the plasma. [ 46 ]
F-IDA has been used by Kowalczykowski and co-workers to monitor the activities of helicase in E. coli . In this study they used thiazole orange as the indicator. The helicase unwinds dsDNA to make ssDNA. Thiazole orange has a greater affinity for dsDNA than for ssDNA, and its fluorescence intensity is greater when it is bound to dsDNA than when it is unbound. [ 47 ] [ 48 ]
A crystalline solid has traditionally been viewed as a static entity in which the movements of its atomic components are limited to vibrations about equilibrium positions. As seen in the transformation of graphite to diamond, solid-to-solid transformation can occur under physical or chemical pressure. It has been proposed that the transformation from one crystal arrangement to another occurs in a cooperative manner. [ 49 ] [ 50 ] Most of these studies have focused on organic or metal-organic frameworks. [ 51 ] [ 52 ] In addition to studies of macromolecular crystalline transformation, there are also studies of single-crystal molecules that can change their conformation in the presence of organic solvents. An organometallic complex has been shown to morph into various orientations depending on whether or not it is exposed to solvent vapors. [ 53 ]
Host guest systems have been proposed to remove hazardous materials. Certain calix[4]arenes bind cesium-137 ions, which could in principle be applied to clean up radioactive wastes. Some receptors bind carcinogens. [ 54 ] [ 55 ]
According to food chemist Udo Pollmer of the European Institute of Food and Nutrition Sciences in Munich , alcohol can be molecularly encapsulated in cyclodextrins , a sugar derivative. In this way, encapsulated in small capsules, the fluid can be handled as a powder. The cyclodextrins can absorb an estimated 60 percent of their own weight in alcohol. [ 56 ] A US patent was registered for the process as early as 1974. [ 57 ] | https://en.wikipedia.org/wiki/Host–guest_chemistry |
The host-pathogen interaction is defined as how microbes or viruses sustain themselves within host organisms on a molecular, cellular, organismal or population level. This term is most commonly used to refer to disease-causing microorganisms although they may not cause illness in all hosts. [ 1 ] Because of this, the definition has been expanded to how known pathogens survive within their host , whether they cause disease or not.
On the molecular and cellular level, microbes can infect the host and divide rapidly, causing disease through their presence and the resulting homeostatic imbalance in the body, or by secreting toxins which cause symptoms to appear. Viruses can also infect the host with virulent DNA, which can affect normal cell processes ( transcription , translation , etc.) and protein folding, or evade the immune response . [ 2 ]
One of the first pathogens observed by scientists was Vibrio cholerae , described in detail by Filippo Pacini in 1854. His initial findings were just drawings of the bacteria, but he continued to publish papers on the organism until 1880. He described how it causes diarrhea and developed effective treatments against it. Most of these findings went unnoticed until Robert Koch rediscovered the organism in 1884 and linked it to the disease.
Giardia lamblia was discovered by Leeuwenhoek in the 1600s [ 2 ] [ 3 ] but was not found to be pathogenic until the 1970s, when an EPA-sponsored symposium was held following a large outbreak in Oregon involving the parasite. Since then, many other organisms have been identified as pathogens, such as H. pylori and E. coli , which has allowed scientists to develop antibiotics to combat these harmful microorganisms.
Pathogens include bacteria , fungi , protozoa , parasitic worms (helminths), and viruses .
Each of these different types of organisms can be further classified as a pathogen based on its mode of transmission: food-borne, airborne, waterborne, blood-borne, or vector-borne. Many pathogenic bacteria, such as food-borne Staphylococcus aureus and Clostridium botulinum , secrete toxins into the host to cause symptoms. HIV and hepatitis B are viral infections caused by blood-borne pathogens. Aspergillus , the most common pathogenic fungus, secretes aflatoxin , which acts as a carcinogen and contaminates many foods, especially those grown underground (nuts, potatoes, etc.). [ 4 ]
Within the host, pathogens can do a variety of things to cause disease and trigger the immune response.
Microbes and fungi cause symptoms due to their high rate of reproduction and tissue invasion. This causes an immune response, resulting in common symptoms as phagocytes break down the bacteria within the host.
Some bacteria, such as H. pylori , can secrete toxins into the surrounding tissues, resulting in cell death or inhibition of normal tissue function.
Viruses, however, use a completely different mechanism to cause disease. Upon entry into the host, they can do one of two things. Many times, viral pathogens enter the lytic cycle; this is when the virus inserts its DNA or RNA into the host cell, replicates, and eventually causes the cell to lyse, releasing more viruses into the environment. The lysogenic cycle, however, is when the viral DNA is incorporated into the host genome, allowing it to go unnoticed by the immune system. Eventually, it gets reactivated and enters the lytic cycle, giving it an indefinite "shelf life" so to speak. [ 5 ]
There are three types of host-pathogen interactions based on how the pathogen interacts with the host. Commensalism is when the pathogen benefits while the host gains nothing from the interaction. An example of this is Bacteroides thetaiotaomicron, which resides in the human intestinal tract but provides no known benefits. [ 6 ] Mutualism occurs when both the pathogen and the host benefit from the interaction, as seen in the human stomach. Many of the bacteria aid in breaking down nutrients for the host, and in return, our bodies act as their ecosystem. [ 7 ] Parasitism occurs when the pathogen benefits from the relationship while the host is harmed. This can be seen in the unicellular Plasmodium falciparum parasite which causes malaria in humans. [ 8 ]
Although pathogens do have the capability to cause disease, they do not always do so. This is described as context-dependent pathogenicity. Scientists believe that this variability comes from both genetic and environmental factors within the host. One example of this in humans is E. coli . Normally, this bacterium flourishes as part of the normal, healthy microbiota of the intestines. However, if it relocates to a different region of the digestive tract or the body, it can cause intense diarrhea. So, while E. coli is classified as a pathogen, it does not always act as such. [ 9 ] This example can also be applied to S. aureus and other common microbial flora in humans.
Currently, antimicrobials are the primary treatment method for pathogens. These drugs are specifically designed to kill microbes or inhibit further growth within the host environment. Multiple terms can be used to describe antimicrobial drugs. Antibiotics are chemicals made by microbes that can be used against other pathogens, such as penicillin and erythromycin. Semi-synthetics are antimicrobials derived from bacteria but enhanced to have a greater effect. In contrast to both of these, synthetics are made entirely in the laboratory to combat pathogenicity. Each of these three types of antimicrobials can be classified into two further groups: bactericidal and bacteriostatic. Bactericidal substances kill microorganisms, while bacteriostatic substances inhibit microbial growth. [ 10 ]
The main problem with pathogenic drug treatments in the modern world is drug resistance. Many patients do not complete the full course of treatment, leading to the natural selection of resistant bacteria. One example of this is methicillin-resistant Staphylococcus aureus ( MRSA ). Because of antibiotic overuse, only the bacteria which have developed genetic mutations to combat the drug survive. This reduces drug effectiveness and renders many treatments useless. [ 11 ]
Thanks to network analysis of host–pathogen interactions and large-scale analyses of RNA sequencing data from infected host cells, [ 12 ] we know that pathogen proteins causing an extensive rewiring of the host interactome have a higher impact on pathogen fitness during infection. These observations suggest that hubs in the host–pathogen interactome should be explored as promising targets for antimicrobial drug design. [ 13 ] Dual-species proteomics can also be employed to study host-pathogen interactions by simultaneously quantifying proteins newly synthesized by the host and the pathogen. [ 14 ] Currently, many scientists are aiming to understand genetic variability and how it contributes to pathogen interaction and variability within the host. They are also aiming to limit the transmission methods of many pathogens to prevent rapid spread in hosts. As we learn more about the host–pathogen interaction and the amount of variability within hosts, [ 15 ] the definition of the interaction needs to be redefined. Casadevall proposes that pathogenicity should be determined based on the amount of damage caused to the host, classifying pathogens into different categories based on how they function in the host. [ 16 ] However, in order to cope with the changing pathogenic environment, treatment methods need to be revised to deal with drug-resistant microbes. | https://en.wikipedia.org/wiki/Host–pathogen_interaction |
Hot carrier injection ( HCI ) is a phenomenon in solid-state electronic devices where an electron or a “ hole ” gains sufficient kinetic energy to overcome a potential barrier, breaking an interface state. The term "hot" refers to the effective temperature used to model carrier density, not to the overall temperature of the device. Since the charge carriers can become trapped in the gate dielectric of a MOS transistor , the switching characteristics of the transistor can be permanently changed. Hot-carrier injection is one of the mechanisms that adversely affects the reliability of semiconductors in solid-state devices. [ 1 ]
The term “hot carrier injection” usually refers to the effect in MOSFETs , where a carrier is injected from the conducting channel in the silicon substrate to the gate dielectric , which usually is made of silicon dioxide (SiO 2 ).
To become “hot” and enter the conduction band of SiO 2 , an electron must gain a kinetic energy of ~3.2 eV . For holes, the valence band offset in this case dictates they must have a kinetic energy of 4.6 eV. The term "hot electron" comes from the effective temperature term used when modelling carrier density (i.e., with a Fermi-Dirac function) and does not refer to the bulk temperature of the semiconductor (which can be physically cold, although the warmer it is, the higher the population of hot electrons it will contain all else being equal).
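To make the effective-temperature language concrete, dividing these barrier energies by the Boltzmann constant gives the temperature scale a thermal distribution would need for such carriers to be typical (a back-of-the-envelope sketch, not a distribution-based calculation):

```python
k_B = 8.617e-5                        # Boltzmann constant, eV/K
for name, barrier_ev in (("electron, 3.2 eV barrier", 3.2),
                         ("hole, 4.6 eV barrier", 4.6)):
    print(f"{name}: E/k_B ~ {barrier_ev / k_B:,.0f} K")
# ~37,000 K and ~53,000 K: far hotter than any physical device temperature
```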
The term “hot electron” was originally introduced to describe non-equilibrium electrons (or holes) in semiconductors. [ 2 ] More broadly, the term describes electron distributions describable by the Fermi function , but with an elevated effective temperature. This greater energy affects the mobility of charge carriers and as a consequence affects how they travel through a semiconductor device. [ 3 ]
Hot electrons can tunnel out of the semiconductor material, instead of recombining with a hole or being conducted through the material to a collector. Consequent effects include increased leakage current and possible damage to the encasing dielectric material if the hot carrier disrupts the atomic structure of the dielectric.
Hot electrons can be created when a high-energy photon of electromagnetic radiation (such as light) strikes a semiconductor. The energy from the photon can be transferred to an electron, exciting the electron out of the valence band, and forming an electron-hole pair. If the electron receives enough energy to leave the valence band, and to surpass the conduction band, it becomes a hot electron. Such electrons are characterized by high effective temperatures. Because of the high effective temperatures, hot electrons are very mobile, and likely to leave the semiconductor and travel into other surrounding materials.
In some semiconductor devices, the energy dissipated by hot electrons as phonons represents an inefficiency, as energy is lost as heat. For instance, some solar cells rely on the photovoltaic properties of semiconductors to convert light to electricity. In such cells, the hot electron effect is the reason that a portion of the light energy is lost to heat rather than converted to electricity. [ 4 ]
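A rough sketch of this thermalization loss for a single absorbed photon (band gap and photon energy are illustrative; real cells have further loss channels):

```python
photon_ev = 3.1      # ~400 nm photon (assumed)
gap_ev = 1.12        # band gap of silicon
excess = photon_ev - gap_ev          # energy thermalized by the hot carriers
print(f"lost to heat: {excess / photon_ev:.0%} of the photon energy")  # ~64%
```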
Hot electrons arise generically at low temperatures even in degenerate semiconductors or metals. [ 5 ] There are a number of models to describe the hot-electron effect. [ 6 ] The simplest predicts an electron-phonon (e-p) interaction based on a clean three-dimensional free-electron model. [ 7 ] [ 8 ] Hot electron effect models illustrate a correlation between power dissipated, the electron gas temperature and overheating.
In MOSFETs , hot electrons have sufficient energy to tunnel through the thin gate oxide to show up as gate current, or as substrate leakage current. In a MOSFET, when a gate is positive, and the switch is on, the device is designed with the intent that electrons will flow laterally through the conductive channel, from the source to the drain. Hot electrons may jump from the channel region or from the drain, for instance, and enter the gate or the substrate. These hot electrons do not contribute to the amount of current flowing through the channel as intended and instead are a leakage current.
Attempts to correct or compensate for the hot electron effect in a MOSFET may involve locating a diode in reverse bias at the gate terminal or other manipulations of the device (such as lightly doped drains or double-doped drains).
When electrons are accelerated in the channel, they gain energy along the mean free path.
This energy is lost in two different ways: by collision with an atom of the crystal lattice, which can create an electron-hole pair, or by breaking a Si-H bond at the oxide interface. The probability of hitting either an atom or a Si-H bond is random, and the average energy involved in each process is the same in both cases.
This is the reason why the substrate current is monitored during HCI stress.
A high substrate current means a large number of created electron-hole pairs and thus an efficient Si-H bond breakage mechanism.
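Device lifetime under hot-carrier stress is therefore commonly extrapolated from the measured substrate current with an empirical power law (one widely used form is sketched below; the constants are placeholders, not values from the source):

```python
def hci_lifetime(I_sub, C=1e-3, m=3.0):
    """Empirical hot-carrier lifetime model: tau = C * I_sub**(-m).
    C and m are technology-dependent fitting constants (placeholders here);
    they are extracted from accelerated stress at deliberately high I_sub."""
    return C * I_sub ** (-m)

# Extrapolate from stress conditions down to an operating-point current:
for i_sub in (1e-4, 1e-5, 1e-6):       # amps, illustrative
    print(f"I_sub = {i_sub:.0e} A  ->  tau ~ {hci_lifetime(i_sub):.1e} s")
```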
When interface states are created, the threshold voltage is modified and the subthreshold slope is degraded. This leads to lower current and degrades the operating frequency of the integrated circuit.
Advances in semiconductor manufacturing techniques and ever increasing demand for faster and more complex integrated circuits (ICs) have driven the associated Metal–Oxide–Semiconductor field-effect transistor (MOSFET) to scale to smaller dimensions.
However, it has not been possible to scale the supply voltage used to operate these ICs proportionately due to factors such as compatibility with previous generation circuits, noise margin , power and delay requirements, and non-scaling of threshold voltage , subthreshold slope , and parasitic capacitance .
As a result, internal electric fields increase in aggressively scaled MOSFETs, which comes with the additional benefit of increased carrier velocities (up to velocity saturation ), and hence increased switching speed, [ 9 ] but also presents a major reliability problem for the long term operation of these devices, as high fields induce hot carrier injection which affects device reliability.
Large electric fields in MOSFETs imply the presence of high-energy carriers, referred to as “ hot carriers ”. These hot carriers have sufficiently high energies and momenta to allow them to be injected from the semiconductor into the surrounding dielectric films, such as the gate and sidewall oxides as well as the buried oxide in the case of silicon on insulator (SOI) MOSFETs .
The presence of such mobile carriers in the oxides triggers numerous physical damage processes that can drastically change the device characteristics over prolonged periods. The accumulation of damage can eventually cause the circuit to fail as key parameters such as the threshold voltage shift due to such damage. The degradation of device behavior caused by this accumulation of damage is called “ hot carrier degradation ”.
The useful lifetime of circuits and integrated circuits based on such MOS devices is thus affected by the lifetime of the MOS device itself. To assure that integrated circuits manufactured with minimal-geometry devices will not have their useful life impaired, the HCI degradation of the component MOS devices' lifetime must be well understood. Failure to accurately characterize HCI lifetime effects can ultimately affect business costs such as warranty and support costs and impact marketing and sales promises for a foundry or IC manufacturer.
Hot carrier degradation is fundamentally the same as the ionization radiation effect known as the total dose damage to semiconductors, as experienced in space systems due to solar proton , electron, X-ray and gamma ray exposure.
HCI is the basis of operation for a number of non-volatile memory technologies such as EPROM cells. As soon as the potential detrimental influence of HC injection on the circuit reliability was recognized, several fabrication strategies were devised to reduce it without compromising the circuit performance.
NOR flash memory exploits the principle of hot carriers injection by deliberately injecting carriers across the gate oxide to charge the floating gate . This charge alters the MOS transistor threshold voltage to represent a logic '0' state . An uncharged floating gate represents a '1' state. Erasing the NOR Flash memory cell removes stored charge through the process of Fowler–Nordheim tunneling .
Because of the damage to the oxide caused by normal NOR Flash operation, HCI damage is one of the factors that cause the number of write-erase cycles to be limited. Because the ability to hold charge and the formation of damage traps in the oxide affects the ability to have distinct '1' and '0' charge states, HCI damage results in the closing of the non-volatile memory logic margin window over time. The number of write-erase cycles at which '1' and '0' can no longer be distinguished defines the endurance of a non-volatile memory. | https://en.wikipedia.org/wiki/Hot-carrier_injection |
Hot-dip galvanization is a form of galvanization . It is the process of coating iron and steel with zinc , which alloys with the surface of the base metal when immersing the metal in a bath of molten zinc at a temperature of around 450 °C (842 °F). When exposed to the atmosphere, the pure zinc (Zn) reacts with oxygen ( O 2 ) to form zinc oxide ( ZnO ), which further reacts with carbon dioxide ( CO 2 ) to form zinc carbonate ( ZnCO 3 ), a usually dull grey, fairly strong material that protects the steel underneath from further corrosion in many circumstances. Galvanized steel is widely used in applications where corrosion resistance is needed without the cost of stainless steel , and is considered superior in terms of cost and life-cycle. It can be identified by the crystallization patterning on the surface (often called a "spangle"). [ 1 ]
Galvanized steel can be welded; however, one must exercise caution around the resulting toxic zinc fumes. Galvanized fumes are released when the galvanized metal reaches a certain temperature. This temperature varies by the galvanization process used. In long-term, continuous exposure, the recommended maximum temperature for hot-dip galvanized steel is 200 °C (392 °F), according to the American Galvanizers Association. The use of galvanized steel at temperatures above this will result in peeling of the zinc at the inter-metallic layer [ citation needed ] . Electrogalvanized sheet steel is often used in automotive manufacturing to enhance the corrosion performance of exterior body panels; this is, however, a completely different process which tends to achieve lower coating thicknesses of zinc.
Like other corrosion protection systems, galvanizing protects steel by acting as a barrier between steel and the atmosphere. However, zinc is a more electropositive (active) metal in comparison to steel. This is a unique characteristic for galvanizing, which means that when a galvanized coating is damaged and steel is exposed to the atmosphere, zinc can continue to protect steel through galvanic corrosion (often within an annulus of 5 mm, above which electron transfer rate decreases).
The process of hot-dip galvanizing results in a metallurgical bond between zinc and steel, with a series of distinct iron-zinc alloys. The resulting coated steel can be used in much the same way as uncoated steel.
A typical hot-dip galvanizing line operates as follows: [ 2 ] the steel is first degreased in a caustic solution, pickled in dilute acid to remove mill scale and rust, fluxed (typically in a zinc ammonium chloride solution) to prevent re-oxidation, dipped in the molten zinc bath, and finally cooled.
Lead is often added to the molten zinc bath to improve the fluidity of the bath (thus limiting excess zinc on the dipped product by improved drainage properties), help prevent floating dross , make dross recycling easier and protect the kettle from uneven heat distribution from the burners. Environmental regulations in the United States disapprove of lead in the kettle bath. Lead is either added to primary Z1 grade zinc or already contained in used secondary zinc. A third, declining method is to use low Z5 grade zinc. [ 3 ]
Steel strip can be hot-dip galvanized in a continuous line. Hot-dip galvanized steel strip (also sometimes loosely referred to as galvanized iron) is extensively used for applications requiring the strength of steel combined with the resistance to corrosion of zinc, such as roofing and walling , safety barriers, handrails , consumer appliances and automotive body parts. One common use is in metal pails . Galvanised steel is also used in most heating and cooling duct systems in buildings.
Individual metal articles, such as steel girders or wrought iron gates, can be hot-dip galvanized by a process called batch galvanizing. Other modern techniques have largely replaced hot-dip for these sorts of roles. This includes electrogalvanizing , which deposits the layer of zinc from an aqueous electrolyte by electroplating , forming a thinner and much stronger bond.
In some cases, it may be desirable to have designated parts of the metal as non-galvanized. This is often desired when metal will be welded after galvanization. To accomplish this, a galvanizer will typically use a masking compound to coat the areas that will not be galvanized during the hot dip process.
In 1742, French chemist Paul Jacques Malouin described a method of coating iron by dipping it in molten zinc in a presentation to the French Royal Academy.
In 1772, Luigi Galvani , for whom galvanizing was named, discovered the electrochemical process that takes place between metals during an experiment with frog legs.
In 1801, Alessandro Volta furthered the research on galvanizing when he discovered the electro-potential between two metals, creating a corrosion cell.
In 1836, French chemist Stanislas Sorel obtained a patent for a method of coating iron with zinc , after first cleaning it with 9% sulfuric acid ( H 2 SO 4 ) and fluxing it with ammonium chloride ( NH 4 Cl ).
A hot-dip galvanized coating is relatively easier and cheaper to specify than an organic paint coating of equivalent corrosion protection performance. The British, European and International standard for hot-dip galvanizing is BS EN ISO 1461, which specifies a minimum coating thickness to be applied to steel in relation to the steel's section thickness, e.g. a steel fabrication with a section size thicker than 6 mm shall have a minimum galvanized coating thickness of 85 μm .
Further performance and design information for galvanizing can be found in BS EN ISO 14713-1 and BS EN ISO 14713-2.
The durability performance of a galvanized coating depends solely on the corrosion rate of the environment in which it is placed. Corrosion rates for different environments can be found in BS EN ISO 14713-1, where typical corrosion rates are given, along with a description of the environment in which the steel would be used. | https://en.wikipedia.org/wiki/Hot-dip_galvanization |
The hot-wire barretter was a demodulating detector, invented in 1902 by Reginald Fessenden , that found limited use in early radio receivers . In effect, it was a highly sensitive thermoresistor , which could demodulate amplitude-modulated signals, something that the coherer (the standard detector of the time) could not do. [ 1 ]
The first device used to demodulate amplitude modulated signals, it was later superseded by the electrolytic detector , also generally attributed to Fessenden. The barretter principle is still used as a detector for microwave radiation, similar to a bolometer .
Fessenden's 1902 patent describes the construction of the device. A fine platinum wire, about 0.003 inches (0.08 mm) in diameter, is embedded in the middle of a silver tube having a diameter of about 0.1 inches (2.5 mm). This compound wire is then drawn until the silver wire has a diameter of about 0.002 inches (0.05 mm); as the platinum wire within it is reduced in the same ratio, it is drawn down to a final diameter of 0.00006 inches (1.5 μm). The result is called Wollaston wire .
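The stated final diameter follows directly from the drawing ratio, since the platinum core is thinned by the same factor as the silver sheath; a two-line check of the figures:

```python
draw_ratio = 0.002 / 0.1      # silver sheath: 0.1 in drawn down to 0.002 in
print(0.003 * draw_ratio)     # platinum core: 6e-05 in, the stated 0.00006 in
```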
The silver cladding is etched off a short piece of the composite wire, leaving an extremely fine platinum wire; this is supported, on two heavier silver wires, in a loop inside a glass bulb. The leads are taken out through the glass envelope, and the whole device is put under vacuum and then sealed.
The hot-wire barretter depends upon the increase of a metal's resistivity with increasing temperature. The device is biased by a direct current adjusted to heat the wire to its most sensitive temperature. When an oscillating current from the antenna passes through the extremely fine platinum wire loop, the wire heats further as the current increases and cools as the current decreases. As the wire heats and cools, its resistance varies in response to the signals passing through it. Because of the low thermal mass of the wire, it can respond quickly enough to vary its resistance in response to audio signals; however, it cannot vary its resistance fast enough to respond to the much higher radio frequencies. The signal is demodulated because the current supplied by the biasing source varies with the changing wire resistance. Headphones are connected in series with the DC circuit, and the variations in the current are rendered as sound.
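The frequency-selective behaviour described above can be sketched as a first-order thermal low-pass filter: the wire's temperature, and hence its resistance, tracks the audio envelope of an amplitude-modulated current but not the radio-frequency carrier (all parameter values below are assumptions chosen only to illustrate the separation of time scales):

```python
import numpy as np

fs = 2e6                                   # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)             # 10 ms of signal
# AM input: 1 kHz audio modulating a 100 kHz carrier (both assumed)
rf = (1 + 0.5 * np.sin(2 * np.pi * 1e3 * t)) * np.sin(2 * np.pi * 1e5 * t)

tau = 1e-4                                 # assumed thermal time constant, s
alpha = (1 / fs) / tau
T = np.zeros_like(t)                       # wire temperature rise (arb. units)
for i in range(1, len(t)):
    # Joule heating tracks instantaneous power (~ current squared);
    # cooling relaxes toward ambient with time constant tau
    T[i] = T[i - 1] + alpha * (rf[i - 1] ** 2 - T[i - 1])
# T (and hence the wire's resistance) now follows the 1 kHz envelope power
# while the 100 kHz carrier is averaged out, which is the demodulation step.
```
| https://en.wikipedia.org/wiki/Hot-wire_barretter |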
HotHardware is an online publication about computer hardware , consumer electronics and related technologies, mobile computing and PC gaming . [ 1 ] [ 2 ] It regularly features coverage of new products and technologies from vendors including Intel , [ 3 ] Dell , [ 4 ] AMD , [ 5 ] and NVIDIA . [ 6 ] "Daily Hardware Round-ups" also offer reviews and news submitted by other technology-related sites.
Content is organized by category and is searchable through a content management system (CMS), with a blog -style comments section for registered users, and a web forum with integrated comments section, topic tagging/filing and a content rating system . Forum members can also take part in contests to win hardware that has been featured on the site.
| https://en.wikipedia.org/wiki/HotHardware |
A hot air engine [ 1 ] (historically called an air engine or caloric engine [ 2 ] ) is any heat engine that uses the expansion and contraction of air under the influence of a temperature change to convert thermal energy into mechanical work . These engines may be based on a number of thermodynamic cycles encompassing both open cycle devices such as those of Sir George Cayley [ 3 ] and John Ericsson [ 4 ] and the closed cycle engine of Robert Stirling . [ 5 ] Hot air engines are distinct from the better known internal combustion based engine and steam engine .
In a typical implementation, air is repeatedly heated and cooled in a cylinder and the resulting expansion and contraction are used to move a piston and produce useful mechanical work .
The term "hot air engine" specifically excludes any engine performing a thermodynamic cycle in which the working fluid undergoes a phase transition , such as the Rankine cycle . Also excluded are conventional internal combustion engines , in which heat is added to the working fluid by combustion of fuel within the working cylinder. Continuous combustion types, such as George Brayton 's Ready Motor and the related gas turbine , could be seen as borderline cases.
The expansive property of heated air was known to the ancients. Hero of Alexandria 's Pneumatica describes devices that might be used to automatically open temple doors when a fire was lit on a sacrificial altar. Devices called hot air engines, or simply air engines , have been recorded from as early as 1699. In 1699, Guillaume Amontons (1663–1705) presented, to the Royal Academy of Sciences in Paris, a report on his invention: a wheel that was made to turn by heat. [ 6 ] The wheel was mounted vertically. Around the wheel's hub were water-filled chambers. Air-filled chambers on the wheel's rim were heated by a fire under one side of the wheel. The heated air expanded and, via tubes, forced water from one chamber to another, unbalancing the wheel and causing it to turn.
It is likely that Robert Stirling 's air engine of 1818, which incorporated his innovative Economiser (patented in 1816) was the first air engine put to practical work. [ 11 ] The economiser, now known as the regenerator , stored heat from the hot portion of the engine as the air passed to the cold side, and released heat to the cooled air as it returned to the hot side. This innovation improved the efficiency of Stirling's engine and should be present in any air engine that is properly called a Stirling engine .
Stirling patented a second hot air engine, together with his brother James, in 1827. They inverted the design so that the hot ends of the displacers were underneath the machinery and they added a compressed air pump so the air within could be increased in pressure to around 20 atmospheres. It is stated by Chambers to have been unsuccessful, owing to mechanical defects and to “the unforeseen accumulation of heat, not fully extracted by the sieves or small passages in the cool part of the regenerator, of which the external surface was not sufficiently large to throw off the unrecovered heat when the engine was working with highly compressed air.”
Parkinson and Crossley (English patent, 1828) developed their own hot air engine. In this engine the air-chamber is partly exposed to external cold by submergence in cold water, and its upper portion is heated by steam. An internal vessel moves up and down in this chamber, and in so doing displaces the air, alternately exposing it to the hot and cold influences of the cold water and the hot steam, changing its temperature and expansive condition. These fluctuations cause the reciprocation of a piston in a cylinder to whose ends the air-chamber is alternately connected.
In 1829 Arnott patented his air expansion machine, in which a fire is placed on a grate near the bottom of a closed cylinder full of recently admitted fresh air. A loose piston is pulled upwards so that all the air in the cylinder above it is made to pass by a tube through the fire, and receives an increased elasticity tending toward expansion, or increase of volume, which the fire is capable of giving it.
He was followed the next year (1830) by Captain Ericsson, who patented his second hot air engine. The specification describes it more particularly as consisting of a “circular chamber, in which a cone is made to revolve on a shaft or axis by means of leaves or wings, alternately exposed to the pressure of steam; these wings or leaves being made to work through slits or openings of a circular plane, which revolves obliquely to, and is thereby kept in contact with, the side of the cone.”
Ericsson built his third hot air engine (the caloric engine) in 1833 "which excited so much interest a few years ago in England; and which, if it should be brought into practical operation, will prove the most important mechanical invention ever conceived by the human mind, and one that will confer greater benefits on civilized life than any that has ever preceded it. For the object of it is the production of mechanical power by the agency of heat, at an expenditure of fuel so exceedingly small, that man will have an almost unlimited mechanical force at his command, in regions where fuel may now be said hardly to exist".
1838 saw the patent of the Franchot hot air engine, arguably the hot air engine that best followed the Carnot requirements.
Up to this point, all these air engines had been unsuccessful, but the technology was maturing. In 1842, James Stirling, the brother of Robert, built the famous Dundee Stirling engine. This one at least lasted 2–3 years but was then discontinued due to shortcomings in its technical contrivances.
The history of hot air engines is one of trial and error, and it took another 20 years before hot air engines could be used on an industrial scale. The first reliable hot air engines were built by Shaw, Roper, and Ericsson; several thousand of them were built.
Hot air engines found a market in pumping water (mainly to household water tanks), as the water inlet provided the cooling required to maintain the temperature difference, though they did find other commercial uses.
A hot air engine thermodynamic cycle can (ideally) be made out of 3 or more processes (typically 4). The processes can be any of the following: isothermal (constant temperature), isobaric (constant pressure), isochoric (constant volume) or adiabatic (no heat exchange).
Some examples (not all hot air cycles, as defined above) are the Stirling cycle , the Ericsson cycle and the Brayton cycle .
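For the closed-cycle case, the ideal Stirling cycle gives a compact worked example: with a perfect regenerator, net work comes from the two isothermal strokes and the efficiency reaches the Carnot limit (an idealized sketch with assumed temperatures and volumes; real hot air engines fall well short of these figures):

```python
import math

R, n = 8.314, 1.0                  # J/(mol K); moles of working air (assumed)
T_hot, T_cold = 900.0, 300.0       # K, illustrative source/sink temperatures
V_min, V_max = 1.0e-3, 3.0e-3      # m^3, an assumed 3:1 volume ratio

W_expansion = n * R * T_hot * math.log(V_max / V_min)     # heat in, work out
W_compression = n * R * T_cold * math.log(V_max / V_min)  # work spent on cooling
W_net = W_expansion - W_compression
eta = 1 - T_cold / T_hot           # Carnot limit, reached only with an
                                   # ideal regenerator
print(f"net work per cycle: {W_net:.0f} J, ideal efficiency: {eta:.0%}")
```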
Yet another example is the Vuilleumier cycle . [ 17 ] | https://en.wikipedia.org/wiki/Hot_air_engine |
Hot air ovens are electrical devices which use dry heat to sterilize . They were originally developed by Louis Pasteur , [ 1 ] and are essentially the same as fan ovens used for cooking food. Generally, they use a thermostat to control the temperature. Their double-walled insulation keeps the heat in and conserves energy , the inner layer being a poor conductor and the outer layer being metallic. There is also an air-filled space in between to aid insulation . An air-circulating fan helps in uniform distribution of the heat. The ovens are fitted with adjustable wire mesh plated trays or aluminium trays and may have an on/off rocker switch, as well as indicators and controls for temperature and holding time. The capacities of these ovens vary. Power supply needs vary from country to country, depending on the voltage and frequency ( hertz ) used. Temperature-sensitive tapes or biological indicators using bacterial spores can be used as controls, to test for the efficacy of the device during use.
They do not require water and there is not much pressure build up within the oven, unlike an autoclave , making them safer to work with. This also makes them more suitable to be used in a laboratory environment.
They are much smaller than autoclaves but can still be as effective.
They can be more rapid than an autoclave and higher temperatures can be reached compared to other means.
As they use dry heat instead of moist heat , some pathogens, such as prions , may not be killed by them every time; dry heat sterilizes by the principle of thermal inactivation through oxidation. [ citation needed ]
A complete cycle involves heating the oven to the required temperature, maintaining that temperature for the proper time interval for that temperature, turning the machine off and cooling the articles in the closed oven till they reach room temperature. Commonly cited standard settings for a hot air oven are 170 °C (340 °F) for 60 minutes, 160 °C (320 °F) for 120 minutes, or 150 °C (300 °F) for 150 minutes, plus the time required to preheat the chamber before beginning the sterilization cycle. If the door is opened before time, heat escapes and the process becomes incomplete; the cycle must then be properly repeated from the start.
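These time-temperature pairs can be expressed as a simple lookup (the hold times follow the commonly cited settings above and should be confirmed against the relevant standard; the preheat and cooling times are illustrative):

```python
# Commonly cited dry-heat settings (degrees C -> hold time in minutes);
# confirm against the applicable standard before relying on them.
HOLD_MINUTES = {170: 60, 160: 120, 150: 150}

def total_cycle_minutes(temp_c, preheat_min, cooldown_min):
    """Full cycle = preheat + hold at temperature + cooling in the closed oven."""
    return preheat_min + HOLD_MINUTES[temp_c] + cooldown_min

print(total_cycle_minutes(160, preheat_min=20, cooldown_min=40))  # 180 minutes
```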
These are widely used to sterilize articles that can withstand high temperatures and not get burnt, like glassware and powders. Linen gets burnt and surgical sharps lose their sharpness. | https://en.wikipedia.org/wiki/Hot_air_oven |
In aviation , hot and high is a condition of low air density due to high ambient temperature and high airport elevation . Air density decreases with increasing temperature and altitude. The lower air density reduces the power output from an aircraft's engine and also requires a higher true airspeed before the aircraft can become airborne. Aviators gauge air density by calculating the density altitude . [ 1 ]
An airport may be especially hot or high, without the other condition being present. Temperature and pressure altitude can change from one hour to the next. The fact that temperature generally decreases as altitude increases mitigates the "hot and high" effect to a small extent.
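Pilots commonly estimate density altitude with a rule of thumb of roughly 120 ft per °C of deviation from the ISA standard temperature (a sketch of that approximation only; it is no substitute for the aircraft's performance charts):

```python
def density_altitude_ft(pressure_alt_ft: float, oat_c: float) -> float:
    """Rule of thumb: DA = PA + 120 ft per deg C above ISA temperature.
    ISA temperature falls ~2 C per 1000 ft from 15 C at sea level."""
    isa_temp_c = 15.0 - 2.0 * pressure_alt_ft / 1000.0
    return pressure_alt_ft + 120.0 * (oat_c - isa_temp_c)

# Mexico City (~7,300 ft elevation) on a 30 C afternoon:
print(density_altitude_ft(7300, 30))  # ~10,852 ft - a 'hot and high' day
```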
Some ways to increase aircraft performance in hot and high conditions include fitting more powerful engines or larger wings, reducing the takeoff weight, operating from longer runways, and using auxiliary rockets or jet engines for takeoff assistance, as described below.
Auxiliary rockets and/or jet engines can help a fully loaded aircraft to take off within the length of the runway. The rockets are usually one-time units that are jettisoned after takeoff. This practice was common in the 1950s and 60s, when the low thrust of military turbojets was inadequate for takeoff from shorter runways or with very heavy payloads. It is now seldom used.
Auxiliary jets and rockets have rarely been used on civil aircraft due to the risk of aircraft damage and loss of control if something were to go wrong during their use. Boeing did, however, produce a version of its popular Boeing 727 with JATO primarily for "hot and high" operations out of Mexico City Airport ( MMMX ) and La Paz, Bolivia. The boosters were located adjacent to the main landing gear at the wing root on each side of the aircraft and only intended to operate as an emergency fallback in the case of an engine failure during takeoff. [ 2 ]
Several manufacturers of early jet airliners offered variants optimized for hot and high operations. Such aircraft generally offered the largest wings and/or the most powerful engines in the model lineup coupled with a small fuselage to reduce weight. Some such aircraft include:
The marketing failure of most of these airplanes demonstrated that airlines were generally unwilling to accept reduced efficiency at cruise and smaller ultimate load-carrying capacity in return for a slight performance gain at particular airports. Rather than accepting these drawbacks, it was easier for airlines to demand the construction of longer runways, operate with smaller loads as conditions dictated, or simply drop the unprofitable destinations.
Furthermore, as the second generation of jet airliners began to appear in the 1970s, some aircraft were designed to eliminate the need for a special "hot and high" variant – for instance, the Airbus A300 can perform a 15/0 takeoff, in which the leading-edge slats are set to 15 degrees and the flaps kept retracted. This takeoff technique is used only at hot and high airports, for it enables a higher climb-limit weight and improves second-segment climb performance.
Most jetliner manufacturers have dropped the "hot and high" variants from their model lineups.
Notable examples of hot and high airports include: [ citation needed ] | https://en.wikipedia.org/wiki/Hot_and_high |
In physical chemistry , a hot atom is an atom that has a high kinetic or internal energy. [ 1 ]
When molecule AB adsorbs dissociatively on a surface, two cases can be distinguished: (1) the adsorption energy is dissipated to the surface as both atoms chemisorb, or (2) part of it is carried away by one of the atoms. In case 2, B gains a high translational energy from the adsorption energy of A, and a hot atom B is generated. For example, the hydrogen molecule, because of its light mass, gains a high translational energy. Such a hot atom does not fly into the vacuum but is trapped on the surface, where it diffuses with high energy.
Hot atoms are expected to play important roles in catalytic reactions . For example, a reaction of a hydrogen atom with hydrogen atoms on a silicon surface and a reaction of an oxygen atom with oxygen molecules on Pt(111) have been reported. Hot atoms can also be generated by dissociating molecules on a metal surface with UV light. It has been reported that the reactivity of an oxygen atom generated in this way on a platinum surface differs from that of chemisorbed oxygen atoms. Elucidating the role of hot atoms on surfaces will lead to a deeper understanding of reaction mechanisms.
| https://en.wikipedia.org/wiki/Hot_atom |
Hot blast is the preheated air blown into a blast furnace or other metallurgical process. This technology, which considerably reduces the fuel consumed, was one of the most important technologies developed during the Industrial Revolution . [ 1 ] Hot blast also allowed higher furnace temperatures, which increased the capacity of furnaces. [ 2 ] [ 3 ]
As first developed, it worked by alternately storing heat from the furnace flue gas in a firebrick-lined vessel with multiple chambers, then blowing combustion air through the hot chamber. This is known as regenerative heating . Hot blast was invented and patented for iron furnaces by James Beaumont Neilson in 1828 at Wilsontown Ironworks [ citation needed ] in Scotland, but was later applied in other contexts, including late bloomeries . Later the carbon monoxide in the flue gas was burned to provide additional heat.
James Beaumont Neilson , previously foreman at Glasgow gas works, invented the system of preheating the blast for a furnace. He found that by increasing the temperature of the incoming air to 149 °C (300 °F), he could reduce the fuel consumption from 8.06 tons of coal to 5.16 tons of coal per ton of produced iron with further reductions at even higher temperatures. [ 4 ] He, with partners including Charles Macintosh , patented this in 1828. [ 5 ] Initially the heating vessel was made of wrought iron plates, but these oxidized, and he substituted a cast iron vessel. [ 4 ]
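The fuel saving quoted above is easy to check (simple arithmetic on the figures in the text):

```python
# Coal consumption per ton of iron produced, from the figures quoted above.
cold_blast_tons = 8.06
hot_blast_tons = 5.16   # with blast preheated to 149 degC (300 degF)

saving = (cold_blast_tons - hot_blast_tons) / cold_blast_tons
print(f"fuel saving: {saving:.0%}")  # -> 36%
```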
On the basis of a January 1828 patent, Thomas Botfield has a historical claim as the inventor of the hot blast method. Neilson is credited as inventor of hot blast, because he won patent litigation. [ 1 ] Neilson and his partners engaged in substantial litigation to enforce the patent against infringers. [ 5 ] The spread of this technology across Britain was relatively slow. By 1840, 58 ironmasters had taken out licenses, yielding a royalty income of £30,000 per year. By the time the patent expired there were 80 licenses. In 1843, just after it expired, 42 of the 80 furnaces in south Staffordshire were using hot blast, and uptake in south Wales was even slower. [ 6 ]
Other advantages of hot blast were that raw coal could be used instead of coke . In Scotland, the relatively poor "black band" ironstone could be profitably smelted. [ 5 ] Hot blast also increased the daily output of furnaces: at the Calder ironworks, output rose from 5.6 tons per day in 1828 to 8.2 tons in 1833, which helped make Scotland the lowest-cost iron-producing region in Britain in the 1830s. [ 7 ]
Early hot blast stoves were troublesome, as thermal expansion and contraction could cause breakage of pipes. This was somewhat remedied by supporting the pipes on rollers. It was also necessary to devise new methods of connecting the blast pipes to the tuyeres , as leather could no longer be used. [ 8 ]
Ultimately this principle was applied even more efficiently in regenerative heat exchangers , such as the Cowper stove (which preheat incoming blast air with waste heat from flue gas; these are used in modern blast furnaces), and in the open hearth furnace (for making steel) by the Siemens-Martin process. [ 9 ]
Hot blast allowed the use of anthracite in iron smelting. It also allowed the use of lower-quality coal, because less fuel meant proportionately less sulfur and ash. [ 11 ]
At the time the process was invented, good coking coal was only available in sufficient quantities in Great Britain and western Germany, [ 12 ] so iron furnaces in the US were using charcoal . This meant that any given iron furnace required vast tracts of forested land for charcoal production, and generally went out of blast when the nearby woods had been felled. Attempts to use anthracite as a fuel had ended in failure, as the coal resisted ignition under cold blast conditions. In 1831, Dr. Frederick W. Gessenhainer filed for a US patent on the use of hot blast and anthracite to smelt iron. He produced a small quantity of anthracite iron by this method at Valley Furnace near Pottsville, Pennsylvania in 1836, but due to breakdowns and his illness and death in 1838, he was not able to develop the process into large-scale production. [ 10 ]
Independently, George Crane and David Thomas , of the Yniscedwyn Works in Wales , conceived of the same idea, and Crane filed for a British patent in 1836. They began producing iron by the new process on February 5, 1837. Crane subsequently bought Gessenhainer's patent and patented additions to it, controlling the use of the process in both Britain and the US. While Crane remained in Wales, Thomas moved to the US on behalf of the Lehigh Coal & Navigation Company and founded the Lehigh Crane Iron Company to utilize the process. [ 10 ]
Anthracite was displaced by coke in the US after the Civil War. Coke was more porous and able to support the heavier loads in the vastly larger furnaces of the late 19th century. [ 2 ] : 90 [ 13 ] : 139 | https://en.wikipedia.org/wiki/Hot_blast |
Hot isostatic pressing ( HIP ) is a manufacturing process, used to reduce the porosity of metals and increase the density of many ceramic materials. This improves the material's mechanical properties and workability.
The HIP process subjects a component to both elevated temperature and isostatic gas pressure within a high-pressure containment vessel, unlike cold isostatic pressing (CIP), where the component is maintained at room temperature. [ 1 ] The pressurizing gas most widely used is argon . An inert gas is used so that the material does not react chemically. The choice of metal (for example nickel, stainless or mild steel , or other metals) can minimize negative effects of chemical reactions, depending on the desired redox conditions. The chamber is heated, causing the pressure inside the vessel to increase; many systems use associated gas pumping to achieve the necessary pressure level. Pressure is applied to the material from all directions (hence the term "isostatic").
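The pressure rise from heating alone can be sketched with the ideal gas law (an approximation; real argon at these pressures deviates somewhat, and the fill pressure below is an assumed illustrative value):

```python
# Gay-Lussac's law for a sealed, constant-volume vessel: P2 = P1 * T2/T1
# (ideal-gas approximation; fill pressure is an assumed illustrative value).
def heated_pressure_mpa(p_fill_mpa: float, t_fill_c: float, t_work_c: float) -> float:
    return p_fill_mpa * (t_work_c + 273.15) / (t_fill_c + 273.15)

# Fill with argon to 40 MPa at 25 degC, then heat to a 1,320 degC soak:
print(round(heated_pressure_mpa(40.0, 25.0, 1320.0), 1))  # ~213.7 MPa
```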
The process is used for processing castings; metal powders can also be consolidated into compact solids by this method. The inert gas is applied at pressures between 7,350 psi (50.7 MPa) and 45,000 psi (310 MPa), with 15,000 psi (100 MPa) being most common. Process soak temperatures range from 900 °F (482 °C) for aluminium castings to 2,400 °F (1,320 °C) for nickel -based superalloys . When castings are treated with HIP, the simultaneous application of heat and pressure eliminates internal voids and microporosity through a combination of plastic deformation , creep , and diffusion bonding ; this process improves the fatigue resistance of the component. Primary applications are the reduction of microshrinkage , the consolidation of powder metals, ceramic composites and metal cladding . Hot isostatic pressing is thus also used as part of a sintering ( powder metallurgy ) process and for fabrication of metal matrix composites , [ 2 ] often being used for postprocessing in additive manufacturing . [ 3 ]
The process can also be used to produce waste forms for radioactive waste. Calcined radioactive waste (waste with additives) is packed into a thin-walled metal canister. The adsorbed gases are removed with high heat and the remaining material is compressed to full density using argon gas during the heat cycle. This process can shrink steel canisters to minimize space in disposal containers and during transport. It was invented in the 1950s at the Battelle Memorial Institute [ 4 ] and has been used to prepare nuclear fuel for submarines since the 1960s. It is used to prepare inactive ceramics as well, and the Idaho National Laboratory has validated it for the consolidation of radioactive ceramic waste forms. ANSTO (Australian Nuclear Science and Technology Organisation) is using HIP as part of a process to immobilize waste radionuclides from molybdenum-99 production. [ citation needed ] | https://en.wikipedia.org/wiki/Hot_isostatic_pressing
The hot plate test is a test of the pain response in animals, similar to the tail flick test . Both the hot plate and tail-flick methods are generally used for centrally acting analgesics, [ 1 ] while peripherally acting drugs are ineffective in these tests but are detected by the acetic acid-induced writhing test. [ 2 ]
The hot plate test is used in basic pain research and in testing the effectiveness of analgesics by observing the reaction to pain caused by heat. It was proposed by Eddy and Leimbach in 1953. [ 3 ] They used a behavioral model of nociception where behaviors such as jumping and hind paw-licking are elicited following a noxious thermal stimulus. Licking is a rapid response to painful thermal stimuli that is a direct indicator of nociceptive threshold . Jumping represents a more elaborated response, with a latency, and encompasses an emotional component of escaping. [ 4 ]
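Hot plate results are usually reported as response latencies. One common normalization, standard in this kind of testing though not described in this article, is the percent of maximum possible effect (%MPE), sketched below with illustrative values:

```python
def percent_mpe(baseline_s: float, test_s: float, cutoff_s: float) -> float:
    """Percent of maximum possible effect for a latency measurement.
    The cutoff is the time at which the animal is removed from the plate
    to prevent tissue damage."""
    return 100.0 * (test_s - baseline_s) / (cutoff_s - baseline_s)

# e.g. baseline paw-lick latency 8 s, post-drug latency 20 s, 30 s cutoff:
print(round(percent_mpe(8.0, 20.0, 30.0), 1))  # -> 54.5 (%MPE)
```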
Significant differences in pain sensitivity in male and female mice have been observed in laboratory studies. [ citation needed ] The SSRI antidepressant paroxetine did not display a gender difference in antinociceptive effects in mice. [ 8 ]
Voltage-gated ion channels are implicated in pain sensation and transmission signaling mechanisms within both peripheral nociceptors and the spinal cord . Specific ion channel isoforms such as Nav1.7 and Nav1.8 sodium channels and Cav3.2 T-type calcium channels have distinct pro-nociceptive roles. [ 9 ]
Activation of the μ-opioid receptor (MOR) and norepinephrine reuptake inhibition (NRI) are mechanisms for relieving acute and chronic pain. OPRM1 knockout mice were used to determine the relative contribution of MOR activation to tapentadol- and morphine-induced analgesia. Wild-type mice exhibited an antinociceptive effect ten times that of OPRM1 knockouts. However, the OPRM1 knockouts still exhibited a slight analgesic effect with tapentadol but not with morphine. This indicated that the antinociceptive effect of tapentadol is based on a combined mechanism of action involving both MOR and NRI. [ 10 ]
Diazepam is a GABAA receptor benzodiazepine ligand that is an anxiety modulator . Studies using diazepam with the hot plate test showed that diazepam modified the behavioral structure of the pain response not from pain modulation but rather by reducing anxiety levels. [ 11 ]
The Ethical Committee of the International Association for the Study of Pain has developed guidelines for the ethical use of this procedure. [ 12 ] In the United States, such experiments must be approved by an Institutional Animal Care and Use Committee . [ 13 ] | https://en.wikipedia.org/wiki/Hot_plate_test |
Hot potassium carbonate , HPC, is a method used to remove carbon dioxide from gas mixtures, [ 1 ] in some contexts referred to as carbon scrubbing . A solution of the inorganic, basic compound potassium carbonate is contacted with the gas mixture, and the liquid absorbs carbon dioxide through chemical reaction. [ 2 ] The technology is a form of chemical absorption, [ 3 ] and was developed for natural gas sweetening (i.e., the removal of acid gases from raw natural gas). Currently it is also considered, among other options, as a post-combustion capture process in the contexts of carbon capture and storage and carbon capture and utilization . As a post-combustion CO 2 capture process, the technology is planned to be used at full scale at a heat plant in Stockholm from 2025. [ 4 ]
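The underlying chemistry is not spelled out above; the standard absorption reaction for carbonate scrubbing (a well-established equation, not quoted in the source) converts the carbonate to bicarbonate, and is reversed at higher temperature or lower pressure to regenerate the solvent and release the CO2:

$$\mathrm{K_2CO_3 + CO_2 + H_2O \;\rightleftharpoons\; 2\,KHCO_3}$$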
| https://en.wikipedia.org/wiki/Hot_potassium_carbonate
A hot spare or warm spare or hot standby is a component used as a failover mechanism to provide reliability in system configurations . The hot spare is active and connected as part of a working system. When a key component fails, the hot spare is switched into operation. More generally, a hot standby can be used to refer to any device or system that is held in readiness to overcome an otherwise significant start-up delay.
Examples of hot spares are components such as A/V switches , computers , network printers , and hard disks . The equipment is powered on, or considered "hot," but not actively functioning in (i.e. used by) the system.
Electrical generators may be held on hot standby, and a steam train may be held at the shed, fired up (literally hot), ready to replace an engine that fails in service.
In designing a reliable system, it is recognized that there will be failures. At the extreme, a complete system can be duplicated and kept up to date—so in the event of the primary system failing, the secondary system can be switched in with little or no interruption. More often, a hot spare is a single vital component without which the entire system would fail. The spare component is integrated into the system in such a way that in the event of a problem, the system can be altered to use the spare component. This may be done automatically or manually, but in either case it is normal to have some means of error detection. A hot spare does not necessarily give 100% availability or protect against temporary loss of the system during the switching process; it is designed to significantly reduce the time that the system is unavailable.
Hot standby has a slightly different connotation from hot spare: it describes a state (active but not productive) rather than an object. For example, in a national power grid, the supply of power needs to be balanced to demand over the short term. It can take many hours to bring a coal-fired power station up to productive temperatures. To allow for load balancing, generator turbines may be kept running with the generators switched off, so that as peaks of demand occur the generators can rapidly be switched on to balance the load. Being in this state of readiness to run is known as hot standby. Nor is the practice a modern phenomenon: steam train operators would hold a spare engine at a terminus fired up, since starting an engine from cold takes a significant amount of time.
The spare may be a similar component or system, or it may be a system of reduced performance, designed to cope for the duration of the time to repair and recover the original component. In high availability systems, it is common to design so that not only is there a spare that can quickly be switched in, but also that the failed component can be repaired or replaced without stopping the system - this is known as hot swapping . It may be considered that the probability of a second failure is low, and therefore the system is designed simply to allow operation to continue until a suitable maintenance period. The appropriate solution is normally determined by balancing the costs of implementing the availability against the likelihood of a problem and the severity of that problem. There are two types of hot standby:
1. Hot standby master-slave
2. Hot standby in sharing mode
A hot spare disk is a disk or group of disks used to automatically or manually, depending upon the hot spare policy, replace a failing or failed disk in a RAID configuration. The hot spare disk reduces the mean time to recovery (MTTR) for the RAID redundancy group, thus reducing the probability of a second disk failure and the resultant data loss that would occur in any singly redundant RAID (e.g., RAID-1, RAID-5, RAID-10). Typically, a hot spare is available to replace a number of different disks, and systems employing a hot spare normally require a redundant group to allow time for the data to be regenerated onto the spare disk. During this time the system is exposed to data loss due to a subsequent failure, so automatic switching to a spare disk reduces the time of exposure to that risk compared with manual discovery and replacement.
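The MTTR argument can be made concrete with a simple reliability estimate (a sketch assuming independent, exponentially distributed disk failures; all numbers are illustrative):

```python
import math

def p_second_failure(n_disks: int, mtbf_h: float, window_h: float) -> float:
    """Probability that at least one of the surviving disks fails while the
    array is degraded, assuming independent exponential failure times."""
    return 1.0 - math.exp(-n_disks * window_h / mtbf_h)

MTBF_H = 1_000_000.0  # per-disk MTBF in hours (illustrative)
# Hot spare: rebuild begins immediately (say a 10 h exposure window).
# Manual replacement: discovery plus swap (say a 72 h window).
for window_h in (10.0, 72.0):
    print(f"{window_h:>4.0f} h exposure: {p_second_failure(7, MTBF_H, window_h):.4%}")
```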
The concept of hot spares is not limited to hardware, but also software systems can be held in a state of readiness, for example a database server may have a software copy on hot standby, possibly even on the same machine to cope with the various factors that make a database unreliable, such as the impact of disc failure, poorly written queries or database software errors.
At least two units of the same type will be powered up, receiving the same set of inputs, performing identical computations and producing identical outputs in a nearly-synchronous manner. The outputs are typically physical outputs (individual ON/OFF type digital signals, or analog signals), or serial data messages wrapped in suitable protocols depending upon the nature of their intended use. Outputs from only one unit (designated as the master or on-line unit, via application logic) are used to control external devices (such as switches, signals, on-board propulsion/braking control devices, etc.) or simply to provide displays. The other unit is a hot-standby or a hot spare unit, ready to take over if the master unit fails. When the master unit fails, an automatic failover to the hot spare occurs within a very short time and the outputs from the hot spare, now the master unit, are delivered to the controlled devices and displays. The controlled devices and displays may experience a short blip or disturbance during the failover time. However, they can be designed to tolerate/ignore the disturbances so that the overall system operation is not affected.
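A minimal sketch of the master/hot-standby switchover described above (hypothetical names and timings, not from the source): a supervisor watches the master's heartbeat and promotes the standby when the heartbeat goes stale.

```python
import time

HEARTBEAT_TIMEOUT_S = 0.5  # illustrative failover threshold

class Unit:
    """One of the two identical units; both run and compute in parallel."""
    def __init__(self, name: str):
        self.name = name
        self.last_heartbeat = time.monotonic()

def select_master(master: "Unit", standby: "Unit") -> "Unit":
    """Use the master's outputs while it is alive; otherwise promote the
    hot standby. Outputs may show a short blip during the switchover."""
    if time.monotonic() - master.last_heartbeat < HEARTBEAT_TIMEOUT_S:
        return master
    print(f"failover: {master.name} silent, promoting {standby.name}")
    return standby

a, b = Unit("unit-A"), Unit("unit-B")
a.last_heartbeat -= 1.0           # simulate a failed master
print(select_master(a, b).name)   # -> unit-B
```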
In vacuum tube equipment, hot standby means that a device, or section of a device, that may need to be activated instantly is kept with the vacuum tubes (pre-)heated but the anode voltage supply switched off. Holding tubes in this state causes normal cathode coatings to fail prematurely. [ 1 ] | https://en.wikipedia.org/wiki/Hot_spare
Hot spots in subatomic physics are regions of high energy density or temperature in hadronic or nuclear matter.
Hot spots are a manifestation of the finite size of the system: in subatomic physics this refers both to atomic nuclei , which consist of nucleons , and to nucleons themselves, which are made of quarks and gluons . Other manifestations of the finite sizes of these systems are seen in the scattering of electrons on nuclei and nucleons. For nuclei in particular, finite size effects also manifest themselves in the isomeric shift and isotopic shift .
The formation of hot spots assumes the establishment of local equilibrium , which in its turn occurs if the thermal conductivity in the medium is sufficiently small.
The notions of equilibrium and heat are statistical. The use of statistical methods assumes a large number of degrees of freedom. In macroscopic physics this number usually refers to the number of atoms or molecules, while in nuclear and particle physics it refers to the energy level density. [ 1 ]
Local equilibrium is the precursor of global equilibrium and the hot spot effect can be used to determine how fast, if at all, the transition from local to global equilibrium takes place. That this transition does not always happen follows from the fact that the duration of a strong interaction reaction is quite short (of the order of 10 −22 –10 −23 seconds) and the propagation of "heat", i.e. of the excitation, through the finite sized body of the system takes a finite time, which is determined by the thermal conductivity of the matter the system is made of.
Indications of the transition between local and global equilibrium in strong interaction particle physics started to emerge in the 1960s and early 1970s. In high-energy strong interactions equilibrium is usually not complete. In these reactions, with the increase of laboratory energy one observes that the transverse momenta of produced particles have a tail, which deviates from the single exponential Boltzmann spectrum, characteristic for global equilibrium. The slope or the effective temperature of this transverse momentum tail increases with increasing energy. These large transverse momenta were interpreted as being due to particles which "leak" out before equilibrium is reached. Similar observations had been made in nuclear reactions and were also attributed to pre-equilibrium effects. This interpretation suggested that the equilibrium is neither instantaneous, nor global, but rather local in space and time. By predicting a specific asymmetry in peripheral high-energy hadron reactions based on the hot spot effect, Richard M. Weiner [ 2 ] proposed a direct test of this hypothesis as well as of the assumption that the heat conductivity in hadronic matter is relatively small. The theoretical analysis of the hot spot effect in terms of propagation of heat was performed in Ref. [ 3 ]
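The deviation from a single exponential described above can be sketched numerically (the slope parameters below are invented for illustration, not measured values):

```python
import math

T_BULK = 0.15  # GeV: effective temperature of the equilibrated component (illustrative)
T_HOT = 0.30   # GeV: harder slope of the pre-equilibrium "leakage" (illustrative)
F_HOT = 0.05   # fraction of particles emitted before equilibrium (illustrative)

def dn_dpt(pt_gev: float) -> float:
    """Two-component transverse-momentum spectrum: a Boltzmann-like
    exponential plus a harder tail that dominates at high pT."""
    return (1 - F_HOT) * math.exp(-pt_gev / T_BULK) + F_HOT * math.exp(-pt_gev / T_HOT)

for pt in (0.2, 0.5, 1.0, 1.5):
    print(f"pT = {pt:.1f} GeV: dN/dpT ~ {dn_dpt(pt):.3e}")
```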
In high-energy hadron reactions one distinguishes peripheral reactions with low multiplicity and central collisions with high multiplicity. Peripheral reactions are also characterized by the existence of a leading particle which retains a large proportion of the incoming energy. By taking the notion of peripheral literally, Ref. [ 2 ] suggested that in this kind of reaction the surface of the colliding hadrons is locally excited, giving rise to a hot spot, which is de-excited by two processes: 1) emission of particles into the vacuum; 2) propagation of "heat" into the body of the target (projectile), wherefrom it is eventually also emitted through particle production. Particles produced in process 1) will have higher energies than those due to process 2), because in the latter process the excitation energy is in part degraded. This gives rise to an asymmetry with respect to the leading particle, which should be detectable in an experimental event-by-event analysis. This effect was confirmed by Jacques Goldberg [ 4 ] in K−p → K−p π+ π− reactions at 14 GeV/c. This experiment represents the first observation of local equilibrium in hadronic interactions, allowing in principle a quantitative determination of heat conductivity in hadronic matter along the lines of Ref. [ 3 ]. This observation came as a surprise, [ 5 ] because, although the electron-proton scattering experiments had shown beyond any doubt that the nucleon had a finite size, it was a priori not clear whether this size was sufficiently big for the hot spot effect to be observable, i.e. whether heat conductivity in hadronic matter was sufficiently small. The experiment of Ref. [ 4 ] suggests that this is the case.
In atomic nuclei, because of their larger dimensions as compared with nucleons, statistical and thermodynamical concepts were used as early as the 1930s. Hans Bethe [ 6 ] had suggested that propagation of heat in nuclear matter could be studied in central collisions and Sin-Itiro Tomonaga [ 7 ] had calculated the corresponding heat conductivity. The interest in this phenomenon was resurrected in the 1970s by the work of Weiner and Weström [ 8 ] [ 9 ] who established the link between the hot spot model and the pre-equilibrium approach used in low-energy heavy-ion reactions. [ 10 ] [ 11 ] Experimentally the hot spot model in nuclear reactions was confirmed in a series of investigations, [ 12 ] [ 13 ] [ 14 ] [ 15 ] some of which were of a rather sophisticated nature, including polarization measurements of protons [ 16 ] and gamma rays. [ 17 ] Subsequently, on the theoretical side, the link between hot spots and limiting fragmentation [ 18 ] and transparency [ 19 ] in high-energy heavy-ion reactions was analyzed, and "drifting hot spots" for central collisions were studied. [ 20 ] [ 21 ] With the advent of heavy-ion accelerators, experimental studies of hot spots in nuclear matter became a subject of current interest, and a series of special meetings [ 22 ] [ 23 ] [ 24 ] [ 25 ] was dedicated to the topic of local equilibrium in strong interactions. The phenomena of hot spots, heat conduction and pre-equilibrium also play an important part in high-energy heavy-ion reactions and in the search for the phase transition to quark matter. [ 26 ]
Solitary waves ( solitons ) are a possible physical mechanism for the creation of hot spots in nuclear interactions. Solitons are a solution of the hydrodynamic equations characterized by a stable, localized high-density region of small spatial volume. They were predicted [ 27 ] [ 28 ] to appear in low-energy heavy-ion collisions at projectile velocities slightly exceeding the velocity of sound (E/A ~ 10-20 MeV, where E is the incoming energy and A the mass number). Possible evidence [ 29 ] for this phenomenon is provided by the experimental observation [ 30 ] that the linear momentum transfer in 12C-induced heavy-ion reactions is limited. | https://en.wikipedia.org/wiki/Hot_spot_effect_in_subatomic_physics
Taiwan is part of the collision zone between the Yangtze Plate and Philippine Sea Plate . Eastern and southern Taiwan are the northern end of the Philippine Mobile Belt .
Located next to an oceanic trench and volcanic system in a tectonic collision zone, Taiwan has evolved a unique environment that produces high-temperature springs with crystal-clear water, usually both clean and safe to drink. These hot springs are commonly used for spas and resorts.
Soaking in hot springs became popular in Taiwan around 1895 during the 50-year long colonial rule by Japan.
The first mention of Taiwan's hot springs came from a 1697 manuscript, Beihai Jiyou [ zh ] , but they were not developed until 1893, when a German businessman discovered Beitou and later established a small local spa.
Under Japanese rule , the government constantly promoted and further enhanced the natural hot springs. The Japanese brought with them their rich onsen culture of hot spring soaking, which had a great influence on Taiwan. [ 1 ]
In March 1896, Hirado Gengo [ zh ] from Osaka, Japan opened Taiwan's first hot spring hotel, called Tenguan ( 天狗庵 ) . He not only heralded a new era of hot spring bathing in Beitou, but also paved the road for a whole new hot spring culture for Taiwan. In the Japanese onsen culture, hot springs are claimed to offer many health benefits. As well as raising energy levels, the minerals in the water are commonly suggested to help treat chronic fatigue , eczema or arthritis .
During Japanese rule, the four major hot springs in Taiwan were in modern-day Beitou, Yangmingshan , Guanziling and Sichongxi . [ 1 ] However, under Republic of China administration starting from 1945, the hot spring culture in Taiwan gradually lost momentum. It was not until 1999 that the authorities again started large-scale promotion of Taiwan's hot springs, setting off a renewed hot spring fever.
In recent years, hot spring spas and resorts on Taiwan have gained more popularity. [ 1 ] With the support of the government, the hot spring has become not only another industry but also again part of Taiwanese culture .
Taiwan has one of the highest concentrations (more than 100 hot springs) and the greatest variety of thermal springs in the world, ranging from hot springs to cold springs, mud springs, and seabed hot springs. [ 1 ]
Taiwan is located on a faultline where several continental plates meet; the Philippine Sea Plate and the Eurasian Plate intersect in the Circum-Pacific seismic zone . [ 2 ] | https://en.wikipedia.org/wiki/Hot_springs_in_Taiwan |
Hot start PCR is a modified form of conventional polymerase chain reaction (PCR) that reduces the presence of undesired products and primer dimers caused by non-specific DNA amplification at room (or colder) temperatures. [ 1 ] [ 2 ] Many variations and modifications of the PCR procedure have been developed to achieve higher yields; hot start PCR is one of them. [ 3 ] Hot start PCR follows the same principles as conventional PCR, in that it uses a DNA polymerase to synthesise DNA from a single-stranded template. [ 4 ] However, it adds heating and separation steps - such as inactivating Taq polymerase, inhibiting its binding, or adding it late - to increase product yield and to provide higher specificity and sensitivity. [ 5 ] Non-specific binding and priming, and the formation of primer dimers, are minimized by completing the reaction mix only after denaturation. [ 6 ] Ways of completing reaction mixes at high temperature include modifications that block DNA polymerase activity at low temperatures, [ 1 ] [ 7 ] the use of modified deoxyribonucleotide triphosphates (dNTPs), [ 8 ] and the physical addition of one of the essential reagents after denaturation. [ 9 ]
Through these additional methods, hot start PCR is able to decrease the amount of non-specific amplifications which naturally occur during lower temperatures – which remains a problem for conventional PCR. These modifications work overall to ensure that specific enzymes in solution will remain inactive or are inhibited until the optimal annealing temperature is reached. [ 10 ] Inhibiting formation of non-specific PCR products, especially in early cycles, results in a substantial increase in sensitivity of amplification by PCR. This is of utmost importance in diagnostic applications of PCR or RT-PCR.
Polymerase chain reaction (PCR) is a molecular biology technique used to amplify specific DNA segments by several orders of magnitude. Specific segments of DNA are amplified through three repeated steps - denaturation, annealing and extension - in which the DNA strands are first separated by raising the temperature, after which primers bind and the polymerase adds nucleotides along the template strand. PCR uses a DNA polymerase that is slightly active even at low temperatures. [ 1 ] In conventional PCR, the reaction mix is completed at room temperature, and because of this residual polymerase activity, primers may form primer dimers or anneal to DNA non-specifically. During the PCR procedure, the DNA polymerase will extend any piece of DNA with bound primers, generating target products but also non-specific products which lower the yield. In hot start PCR, some of the reagents are kept separate until the mixture is heated to the specific annealing temperature. This reduces annealing time, which in turn reduces the likelihood of non-specific DNA extension and limits the influence of non-specific primer binding prior to denaturation. [ 6 ] [ 5 ]
In conventional PCR, temperatures below the optimal annealing temperature (50-65 °C) result in off-target effects such as non-specific amplification, where primers bind non-specifically to the nucleic acid. [ 5 ] These non-specific primer complexes, formed by primers present in excess in the mixture, are the cause of by-products such as primer dimers and mis-priming. [ 10 ] Mis-priming greatly impedes and reduces the efficiency of PCR amplification by actively competing with the target sequences for amplification. Similarly, primer dimers form complexes which decrease the number of target copies amplified. [ 10 ] This can be controlled by implementing hot start PCR, which allows primer extension to be blocked until the optimal temperatures are met. [ 2 ]
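Since annealing temperatures are chosen relative to primer melting temperatures, a rough Tm estimate is often useful; the Wallace rule below is a standard rule of thumb for short primers (not taken from this article):

```python
def wallace_tm_c(primer: str) -> int:
    """Wallace rule-of-thumb melting temperature for short primers (<~14 nt):
    2 degC per A or T plus 4 degC per G or C."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

print(wallace_tm_c("ACGTACGTACGT"))  # -> 36 (degC)
```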
In hot start PCR, important reagents (such as DNA polymerase and magnesium cofactors ) are prevented from reacting in the PCR mixture until the optimal temperatures are met through physical separation or chemical modifications. [ 5 ] [ 2 ] Hot start PCR can also occur when the Taq polymerase is inhibited/inactivated or its addition is delayed until optimal annealing temperatures, through deoxyribonucleotide triphosphate modifications or by modifying the primers through caging and secondary structure manipulation.
Hot start PCR is often a better approach than traditional PCR in circumstances where there is a low concentration of DNA in the reaction mix, where the DNA template is highly complex, or where there are several pairs of oligonucleotide primers in the PCR. [ 3 ]
Hot start PCR prevents DNA polymerase extension at lower temperatures, minimising yield loss from non-specific binding. It does so by withholding or inactivating key reagents - typically the Taq DNA polymerase itself - until the heating steps of PCR. Non-specific binding often leads to primer dimers and mis-primed/false-primed targets. [ 11 ] These can be avoided through modified methods such as:
Enzyme-linked antibodies / Taq DNA polymerase complexed with anti-Taq DNA polymerase antibodies:
The enzyme-linked antibodies inactivate the Taq DNA polymerase. The antibodies bind to the polymerase, preventing early DNA amplification that could occur at lower temperatures. Once the optimal annealing temperature is reached, the antibodies degrade and dissociate, releasing the Taq DNA polymerase into the reaction and allowing the amplification process to start. [ 2 ] [ 12 ] Platinum Taq DNA polymerase and AccuStart Taq DNA polymerase (both developed by Ayoub Rashtchian, at Life Technologies and Quanta BioSciences, respectively) are examples of commercially available antibody-based hot start Taq DNA polymerases. These Taq DNA polymerases are precomplexed with a mixture of monoclonal antibodies specific to Taq DNA polymerase. [ 13 ]
Wax beads:
Temperature-dependent wax beads create a physical barrier between the Taq DNA polymerase and the remainder of the PCR components. Once the temperature rises above 70 °C during the denaturation step of the first cycle, the wax bead melts, allowing the Taq DNA polymerase past the barrier and into the reaction, starting the amplification process. The wax layer then moves to the top of the reaction mixture during the amplification stage, where it later acts as a vapour barrier. [ 2 ]
Highly specific oligonucleotides:
Oligonucleotides are short nucleic acid polymers that bind readily. Highly specific oligonucleotides, such as aptamers, bind to Taq DNA polymerase at lower temperatures, rendering it inactive in the mixture. Only at higher temperatures do the oligonucleotides dissociate from the Taq, allowing it to react.
These are the most effective methods for hot start PCR; the enzyme-linked antibody and highly specific oligonucleotide methods in particular are best suited to procedures that require a shorter inactivation time. [ 14 ] However, other methods are also known to be used, such as:
Preheating:
The PCR machine is heated in advance while the components are mixed over ice; the mixture is then placed into the PCR machine as soon as it reaches the optimum temperature. This eliminates the warm-up process, reduces non-specific annealing of the primers and ensures that any mispaired primers in the mixture are separated. [ 15 ]
Freezing:
Freezing acts as a form of physical separation much like the wax beads. The reaction mixture containing primers, the template strand, water and deoxyribonucleotide triphosphate (dNTP) is frozen before Taq polymerase and the remaining PCR components are added on top of the frozen mixture. This acts to prevent non-specific binding. [ 15 ]
Later addition of Taq:
The components of PCR in the reaction mix are prepared and heated without the addition of Taq. Taq is only introduced into the mixture once the optimal temperature is reached. However, this method is the least reliable and may lead to contamination of the components. [ 15 ]
Another method is deoxyribonucleotide triphosphate (dNTP)-mediated hot start PCR, which modifies the nucleotide bases with a protecting group.
Hot start dNTPs can be chemically modified to carry a heat-sensitive protecting group at the 3′ terminus. This modification prevents the nucleotides from being incorporated by the Taq polymerase on the template strand until the optimal temperatures are reached; the protecting group is removed during the heat activation step. The hot start dNTPs dA, dT, dC and dG replace the natural nucleotides. [ 16 ] [ 17 ] Using all four of the modified nucleotides is recommended; however, previous research shows that replacing just one or two of the natural nucleotides with the modified dNTPs is enough to ensure that non-specific amplification does not occur. [ 16 ] [ 17 ] Another chemical modification of the nucleic acid is a heat-reversible covalent modification which impedes hybridisation of the primers to the template of interest: the amino group of guanosine reacts with glyoxal to form a modified dG. [ 18 ]
Secondary structure:
Certain secondary structures can impede the function of primers. For example, an oligonucleotide with a hairpin structure cannot act efficiently as a primer. However, after the reaction mix is heated to the annealing temperature, the primer undergoes a conformational change to a linear structure, which enables it to attach to the target segment and begin PCR. [ 19 ] [ 20 ] A more stable configuration of hairpin primers, termed "double-bubble" primers, forms a head-to-tail homodimer and can be utilized both for reverse transcription and for hot start PCR. [ 21 ]
Photochemically removable cages:
A caging group - a protecting group that is photochemically removable, such as a caged thymidine phosphoramidite - is incorporated into an oligonucleotide primer. This allows the function of the primer to be switched on and off by UV irradiation (365 nm), so primers can be activated once the annealing temperature is reached. [ 22 ]
Magnesium is required in PCR and acts as a cofactor, because Taq polymerase is magnesium-dependent. [ 23 ] Increasing the concentrations of magnesium and phosphate relative to the standard buffer reagents creates a magnesium precipitate, providing a hot start because no magnesium is available to the DNA polymerase until the thermal cycling stage. During thermal cycling, the magnesium dissolves back into solution and becomes available to the polymerase, allowing it to function normally. [ 24 ]
Hot start PCR is advantageous in that it requires less handling and reduces the risk of contamination. Hot start PCR can be either chemically modified or antibody-based, and the two approaches offer different advantages. In chemically modified hot start PCR, the reaction can be set up at room temperature, and the formation of primer dimers is significantly decreased because primers are prevented from binding to one another before the PCR process has begun; non-specific priming is likewise limited. Similarly, hot start PCR inhibits the binding of primers to template sequences of low homology, which would otherwise lead to mispriming. It can also improve specificity and sensitivity, due to the stringent conditions, as well as increase the product yield of the targeted fragment. [ 5 ] In antibody-based hot start PCR, the polymerase is activated after the initial denaturation step during the cycling process, decreasing the time required. This also leads to a high specificity. [ 14 ]
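As a sketch of what "activated after the initial denaturation step" means in practice, a hot start cycling program typically adds a one-time activation hold before the cycles begin (all times and temperatures below are illustrative, not taken from the article):

```python
# Minimal sketch of a hot start thermocycler program. The one-time activation
# hold is what releases or activates the blocked polymerase before cycling.
ACTIVATION = ("activation", 95, 600)   # 95 degC for 10 min (illustrative)
CYCLE = [
    ("denaturation", 95, 30),
    ("annealing",    58, 30),          # within the 50-65 degC range noted above
    ("extension",    72, 60),
]
N_CYCLES = 35

def total_runtime_min(activation, cycle, n_cycles) -> float:
    """Programmed run time: activation hold plus n_cycles repetitions."""
    return (activation[2] + n_cycles * sum(step[2] for step in cycle)) / 60

print(total_runtime_min(ACTIVATION, CYCLE, N_CYCLES))  # -> 80.0 minutes
```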
Along with its advantages, hot start PCR also has limitations which must be considered before implementing the method. Hot start PCR requires the addition of heat for longer periods of time than conventional PCR, so the template DNA is more susceptible to damage. The increased heating time also means that the procedure is not compatible with certain protocols, such as the one-tube, single-buffer reverse transcription-PCR method, which requires lower temperatures for the reverse transcription step. [ 12 ] In chemically modified hot start PCR, the amplification of DNA can be negatively affected, firstly by the significant increase in the reactivation time required for the polymerase to activate, and secondly if the target DNA template is too long. [ 14 ] In antibody-based procedures, each enzyme requires a different antibody, so the cost of the procedure is higher. [ 15 ] There is also evidence that many commercial hot start enzymes actually have some level of activity prior to denaturation, and few suppliers provide any information about testing for this residual activity. [ 25 ] This means that the benefits attributed to hot start enzymes may not be realized, or at best will vary between batches or manufacturers. | https://en.wikipedia.org/wiki/Hot_start_PCR
Hot tapping, [ 1 ] [ 2 ] or pressure tapping, is the method of making a connection to existing piping or pressure vessels without interrupting or emptying that section of pipe or vessel. This means that a pipe or tank can continue to be in operation while maintenance or modifications are being done to it. The process is also used to drain off pressurized casing fluids and to add test points or various sensors such as temperature and pressure. Hot taps can range from a ½ inch hole designed for something as simple as quality control testing, up to a 48-inch tap for the installation of a variety of ports, valves, t-sections or other pipes.
Hot Tap Procedures:
A. A hot tap saddle, service saddle or welded threadolet is fitted and a valve installed; the assembly is pressure tested and the hot tapping machine attached.
B. The valve is opened and the hot tap completed, with the coupon (the cut portion) retained by latches on the pilot drill. Pressure is contained within the hot tapping machine.
C. The cutter and coupon are retracted and the valve closed. Fluid is drained and the hot tapping machine removed. The tapped valve is then ready for the contractor's tie-in or IFT's linestop/stopple equipment to be inserted. [ 3 ]
Hot tapping is also the first procedure in line stopping , where a hole saw is used to make an opening in the pipe, so a line plugging head can be inserted.
Situations in which welding operations are prohibited on equipment which contains:
Based on the above, welding on equipment or pipe which contains hazardous substances or conditions as listed below (even in small quantities) shall not be performed unless positive evidence has been obtained that welding/hot tapping can be applied safely.
Substances
Constraints based on general hazard in the event of line puncturing during welding, not the welding process. Conditions:
Note: The above list is not exhaustive, but gives an indication only. | https://en.wikipedia.org/wiki/Hot_tapping |
In metallurgy , hot working refers to processes where metals are plastically deformed above their recrystallization temperature. Being above the recrystallization temperature allows the material to recrystallize during deformation. This is important because recrystallization keeps the materials from strain hardening , which ultimately keeps the yield strength and hardness low and ductility high. [ 1 ] This contrasts with cold working .
Many kinds of working, including rolling , forging , extrusion , and drawing , can be done with hot metal.
The lower limit of the hot working temperature is determined by its recrystallization temperature. As a guideline, the lower limit of the hot working temperature of a material is 60% its melting temperature (on an absolute temperature scale ). The upper limit for hot working is determined by various factors, such as: excessive oxidation, grain growth, or an undesirable phase transformation. In practice materials are usually heated to the upper limit first to keep forming forces as low as possible and to maximize the amount of time available to hot work the workpiece. [ 1 ]
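The 60% guideline above translates directly into a working-temperature floor; a minimal sketch (the melting points below are approximate, for illustration only):

```python
# Lower limit of the hot working range per the 60%-of-melting-point guideline
# quoted above (absolute temperatures). Melting points are approximate.
APPROX_MELT_K = {"aluminium": 933, "copper": 1358, "iron": 1811}

def hot_working_floor_k(melt_k: float) -> float:
    return 0.6 * melt_k

for metal, t_melt_k in APPROX_MELT_K.items():
    t_k = hot_working_floor_k(t_melt_k)
    print(f"{metal}: ~{t_k:.0f} K ({t_k - 273.15:.0f} degC)")
```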
The most important aspect of any hot working process is controlling the temperature of the workpiece. 90% of the energy imparted into the workpiece is converted into heat. Therefore, if the deformation process is quick enough, the temperature of the workpiece should rise; however, this does not usually happen in practice. Most of the heat is lost through the surface of the workpiece into the cooler tooling. This causes temperature gradients in the workpiece, usually due to non-uniform cross-sections where the thinner sections are cooler than the thicker sections. Ultimately, this can lead to cracking in the cooler, less ductile surfaces. One way to minimize the problem is to heat the tooling: the hotter the tooling, the less heat lost to it, but as the tooling temperature rises, tool life decreases. The tooling temperature must therefore be a compromise; commonly, hot working tooling is heated to 500–850 °F (260–454 °C). [ 2 ]
The advantages are: [ 1 ]
Usually the initial workpiece that is hot worked was originally cast . The microstructure of cast items does not provide optimal engineering properties. Hot working improves the engineering properties of the workpiece because it replaces the microstructure with one that has fine, spherically shaped grains . These grains increase the strength, ductility, and toughness of the material. [ 2 ]
The engineering properties can also be improved by reorienting the inclusions (impurities). In the cast state the inclusions are randomly oriented, which, when intersecting the surface, can be a propagation point for cracks. When the material is hot worked the inclusions tend to flow with the contour of the surface, creating stringers . As a whole, the stringers create a flow structure , where the properties are anisotropic (different based on direction). With the stringers oriented parallel to the surface, the workpiece is strengthened, especially with respect to fracturing . The stringers act as "crack-arrestors" because the crack will want to propagate through the stringer and not along it. [ 2 ]
The disadvantages are: [ 1 ] | https://en.wikipedia.org/wiki/Hot_working |
In biology, a hotbed is an area of decaying organic matter that is warmer than its surroundings. The heat is generated by microbial metabolism as microorganisms decompose the organic material within the pile.
A hotbed covered with a small glass cover (also called a hotbox ) is used as a small version of a hothouse (heated greenhouse or cold frame ). Oftentimes, this bed is made of manure from animals such as horses , which pass undigested plant cellulose in their droppings , creating a good environment for microorganisms to break down the cellulose and generate heat. [ 1 ] (The digestive systems of ruminants such as cattle and sheep destroy and use all cellulose in their food, and their droppings remain cold and do not heat up.)
Hotbeds employed in gardens are generally simple in application. [ 2 ] Experimental research by Neugebauer (2018) concluded that other forms of organic waste, such as compost, can be used in place of manure in hotbeds, providing not only a means of promoting plant growth but also an ecologically friendly way to dispose of waste. [ 2 ] The data from this study suggest, however, that the amount of heat released by a hotbed decreases over time. [ 2 ] Additionally, although not experimentally supported, Neugebauer (2018) suggests that carbon dioxide released from the hotbed may be taken up by the plants, further improving the rate at which they grow. [ 2 ]
Some egg-laying animals, such as the brush turkey , make or use hotbeds to incubate their eggs.
By extension, the term hotbed is used metaphorically to describe an environment that is ideal for the growth or development of something, especially of something undesirable.
| https://en.wikipedia.org/wiki/Hotbed
The Hotbit HB-8000 is an MSX home computer developed and sold in the mid-1980s by the Brazilian subsidiary of Sharp Corporation through its Epcom home computer division. [ 1 ] [ 2 ] [ 3 ] MSX machines were very popular in Brazil at the time, [ 4 ] [ 5 ] and they virtually killed off all competing 8-bit microcomputers in the Brazilian market. [ 1 ] [ 6 ]
The Hotbit had three versions: 1.0 and 1.1, with a gray and white case, and 1.2, with a black case and a slightly modified ROM that solved an ASCII table compatibility issue with the other popular Brazilian MSX, the Gradiente Expert .
The HB-8000 had the following technical specifications: [ 7 ]
| https://en.wikipedia.org/wiki/Hotbit
Hotelling's lemma is a result in microeconomics that relates the supply of a good to the maximum profit of the producer. It was first shown by Harold Hotelling , and is widely used in the theory of the firm .
Specifically, it states: the rate of increase in maximized profit with respect to an increase in the output price is equal to the net supply of the good. In other words, if the firm makes its choices to maximize profits, then those choices can be recovered from knowledge of the maximum profit function.
Let $p$ denote a variable price, and $w$ be a constant cost of each input. Let $x:\mathbb{R}^{+}\to X$ be a mapping from the price to a set of feasible input choices $X\subset \mathbb{R}^{+}$. Let $f:\mathbb{R}^{+}\to \mathbb{R}^{+}$ be the production function , and $y(p)\triangleq f(x(p))$ be the net supply.
The maximum profit can be written as $$\pi(p)=\max_{x\in X}\left[p\cdot f(x)-w\cdot x\right].$$
Then the lemma states that if the profit $\pi$ is differentiable at $p$, the maximizing net supply is given by $$y(p)=\frac{\partial \pi(p)}{\partial p}.$$
The lemma is a corollary of the envelope theorem .
Specifically, the maximum profit can be rewritten as $\pi(p,x^{*})=p\cdot f(x^{*}(p))-w\cdot x^{*}(p)$, where $x^{*}$ is the maximizing input corresponding to $y^{*}$. Due to the optimality, the first-order condition gives $$p\cdot \frac{\partial f}{\partial x}(x^{*}(p))-w=0.\qquad (1)$$
By taking the derivative with respect to $p$ at $x^{*}$, $$\frac{d\pi}{dp}=f(x^{*}(p))+\left[p\cdot \frac{\partial f}{\partial x}(x^{*}(p))-w\right]\frac{dx^{*}}{dp}=f(x^{*}(p))=y(p),$$
where the second equality is due to ( 1 ). QED
Consider the following example. [ 1 ] Let output $y$ have price $p$ and inputs $x_{1}$ and $x_{2}$ have prices $w_{1}$ and $w_{2}$. Suppose the production function is $y=x_{1}^{1/3}x_{2}^{1/3}$. The unmaximized profit function is $\pi(p,w_{1},w_{2},x_{1},x_{2})=py-w_{1}x_{1}-w_{2}x_{2}$. From this can be derived the profit-maximizing choices of inputs and the maximized profit function, a function of the input and output prices alone, which is
$$\pi(p,w_{1},w_{2})=\frac{1}{27}\frac{p^{3}}{w_{1}w_{2}}$$
Hotelling's Lemma says that from the maximized profit function we can find the profit-maximizing choices of output and input by taking partial derivatives:
$$\frac{\partial \pi(p,w_{1},w_{2})}{\partial p}=y=\frac{1}{9}\frac{p^{2}}{w_{1}w_{2}}$$
$$\frac{\partial \pi(p,w_{1},w_{2})}{\partial w_{1}}=-x_{1}=-\frac{1}{27}\frac{p^{3}}{w_{1}^{2}w_{2}}$$
$$\frac{\partial \pi(p,w_{1},w_{2})}{\partial w_{2}}=-x_{2}=-\frac{1}{27}\frac{p^{3}}{w_{1}w_{2}^{2}}$$
Note that Hotelling's lemma gives the net supplies, which are positive for outputs and negative for inputs, since profit rises with output prices and falls with input prices.
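The worked example can be checked symbolically; the following sketch (assuming sympy is available) verifies both the first-order conditions and the lemma's derivative identities:

```python
import sympy as sp

p, w1, w2, x1, x2 = sp.symbols("p w1 w2 x1 x2", positive=True)
profit = p * x1**sp.Rational(1, 3) * x2**sp.Rational(1, 3) - w1 * x1 - w2 * x2

# Profit-maximizing inputs quoted in the example above:
x1_star = p**3 / (27 * w1**2 * w2)
x2_star = p**3 / (27 * w1 * w2**2)
opt = {x1: x1_star, x2: x2_star}

# They satisfy the first-order conditions ...
assert sp.simplify(sp.diff(profit, x1).subs(opt)) == 0
assert sp.simplify(sp.diff(profit, x2).subs(opt)) == 0

# ... and Hotelling's lemma holds: d(pi)/dp = y, d(pi)/dw_i = -x_i.
pi_max = sp.simplify(profit.subs(opt))  # -> p**3/(27*w1*w2)
assert sp.simplify(sp.diff(pi_max, p) - p**2 / (9 * w1 * w2)) == 0
assert sp.simplify(sp.diff(pi_max, w1) + x1_star) == 0
assert sp.simplify(sp.diff(pi_max, w2) + x2_star) == 0
print(pi_max)
```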
A number of criticisms have been made with regards to the use and application of Hotelling's lemma in empirical work.
C. Robert Taylor points out that the accuracy of Hotelling's lemma depends on the firm maximizing profits, meaning that it is producing the profit-maximizing output $y^{*}$ and using the cost-minimizing input $x^{*}$. If a firm is not producing at these optima, then Hotelling's lemma does not hold. [ 2 ] | https://en.wikipedia.org/wiki/Hotelling's_lemma
Hotspot Ecosystem Research and Man's Impact On European Seas ( HERMIONE ) is an international multidisciplinary project, started in April 2009, that studies deep-sea ecosystems . [ 1 ] [ 2 ] HERMIONE scientists study the distribution of hotspot ecosystems, how they function and how they interconnect, partially in the context of how these ecosystems are being affected by climate change [ 3 ] and impacted by humans through overfishing , resource extraction , seabed installations ( oil platforms , etc.) and pollution . Major aims of the project are to understand how humans are affecting the deep-sea environment and to provide policy makers with accurate scientific information, enabling effective management strategies to protect deep sea ecosystems. The HERMIONE project is funded by the European Commission 's Seventh Framework Programme , and is the successor to the HERMES project , which concluded in March 2009. [ 4 ]
Europe's deep-ocean margin, from the Arctic to the Iberian Margin , and across the Mediterranean to the Black Sea , spans a distance of over 15,000 km and hosts a number of diverse habitats and ecosystems. Deep water coral reefs, undersea mountains populated by a multitude of organisms, vast submarine canyon systems, and hydrothermal vents are some of the features contained therein. [ 5 ] The traditional view of the deep-sea realm as a hostile and barren place was discredited long ago, and scientists now know that much of Europe's deep sea is rich and diverse. [ 6 ]
However, the deep sea is increasingly threatened by humans: most of this deep-ocean frontier lies within Europe's Exclusive Economic Zone (EEZ) and has significant potential for the exploitation of biological, energy, and mineral resources. Research and exploration over the last two decades has shown clear signs of direct and indirect anthropogenic impacts in the deep sea, resulting from such activities as overfishing , [ 7 ] littering and pollution . This raises concerns because deep-sea processes and ecosystems are not only important for the marine web of life , but also fundamentally contribute to the global biogeochemical cycle . [ citation needed ]
Continuing with the knowledge obtained by the HERMES project (EC FP6), which contributed significantly to our understanding of deep-sea ecosystems, [ 8 ] the HERMIONE project investigates ecosystems at critical sites on Europe's deep-ocean margin, aiming to make major advances in knowledge of their distribution and functioning, and their contribution to ecosystem goods and services. [ clarification needed ] HERMIONE places special emphasis on human impact on the deep sea and on the translation of scientific information into science policy for the sustainable use of marine resources. To design and implement effective governance strategies and management plans that protect the deep sea for the future, it is important to understand the extent, natural dynamics and interconnection of ocean ecosystems, and to integrate socio-economic research with natural science. To achieve this, HERMIONE uses a highly interdisciplinary and integrated approach, engaging experts in biology , ecology , biodiversity , oceanography , geology , sedimentology , geophysics and biogeochemistry , who work alongside socio-economists and policy-makers.
The HERMIONE project focuses on deep-sea "hotspot" ecosystems including submarine canyons , open slopes and deep basins, chemosynthetic environments, deep water coral reefs, and seamounts . Hotspot ecosystems support high species diversity, numbers of individuals, or both, and are therefore important in maintaining margin-wide biodiversity and abundance. [ 9 ] HERMIONE research ranges from investigation of the ecosystems' dimensions, distribution, interconnection and functioning, to understanding the potential impacts of climate change and anthropogenic disturbance. The ultimate objective is to provide stakeholders and policymakers with the scientific knowledge necessary to support deep-sea governance, sustainable management and conservation of these ecosystems.
To obtain the data needed, HERMIONE scientists are spending over 1000 days at sea, using more than 50 research vessels across Europe. Sharing vessels and equipment between partners will bring benefits through shared knowledge, expertise and data, and will also maximise the research effort, increasing efficiency and productivity. State-of-the-art technology will be used, with Remotely Operated Vehicles (ROVs) one of the critical pieces of equipment being used for a wide range of delicate manoeuvres and high-resolution surveys, from precision sampling of methane gas at cold seeps to microbathymetry mapping to examine the structure of the seabed. Large arrays of instrumented moorings , shared by different partner institutions, will be deployed in common experimental areas, allowing HERMIONE to develop experimental strategies beyond any national capacity.
The HERMIONE study sites were selected on the following basis:
The HMMV, PAP, MAR and central Mediterranean sites link to the ESONET long-term monitoring sites and will provide valuable background information.
Deep water coral reefs are found along the northeast Atlantic and central Mediterranean margins, and are important biodiversity hotspots . [ 10 ] [ 11 ] The recent HERMES project lists more than 2000 species associated with cold-water coral reefs worldwide. [ 12 ] As well as flourishing live coral, the dead coral frameworks and rubble that are frequently found close by attract a myriad of fauna from the microscopic to the mega, [ 13 ] and may be fundamental in coral ecosystem replenishment. Coral reefs provide a habitat for fish, [ 14 ] a refuge from predators, a rich food source, a nursery for young fish, and are also potential sources of a wide range of medicines to treat ailments from cancer to cardiovascular disease.
There are several known coral hotspot areas on Europe's deep-ocean margin, including the Scandinavian, Rockall-Porcupine and central Mediterranean margins, and there remain many questions about them, such as how each of the sites are connected to one another, [ 15 ] how they arose, what drives the distribution of the reefs, [ 16 ] [ 17 ] how the larvae disperse and settle, how the corals and associated species reproduce , finding their physiological thresholds, how they will fare with increased ocean warming , [ 18 ] [ 19 ] and whether ocean warming induces a spread of coral reefs further north into the Arctic Ocean. New research will also build on previous work to define the physical environment around cold-water coral reefs such as hydrodynamic and sedimentary regimes, which will help to understand biological responses. [ 20 ] [ 21 ]
HERMIONE scientists use cutting-edge technology to try to answer these questions. [ 2 ] High-resolution mapping of the seafloor will be carried out to determine the location and distribution of cold-water corals, and photographic observations will be made to assess changes in the status of known reefs over time, such as their response to climatic variation or their recovery from destruction by fishing trawlers. To assess biodiversity and its relationship with environmental factors such as climate change, DNA barcoding and other molecular techniques will be used.
Submarine canyons are deep, steep-sided valleys that form on continental margins. Stretching from the shelf to the deep sea, they dissect much of the European margin. They are one of the most complex seascapes known to humans; their rugged topography and challenging environmental conditions mean that they are also one of the least explored. Advances in technology over the last two decades have allowed scientists to uncover some of the mysteries of canyons, which often rival the Grand Canyon , USA, in size. [ 22 ]
One of the most important discoveries is that canyons are major sources and sinks for sediment and organic matter on continental margins. [ 23 ] [ 24 ] They act as fast-track pathways for sediment and organic matter from the shelf to the deep sea, [ 25 ] and can act as temporary depots for sediment and carbon storage. Particle flux through canyons has been found to be between two and four times greater than on the open slope, [ 25 ] though the transfer of particles through canyons is thought to be largely "event-driven", [ 26 ] [ 27 ] [ 28 ] which introduces a highly variable aspect to canyon conditions. Determining what drives sediment transport and deposition within canyons is one of the major challenges for HERMIONE.
The capacity of canyons to focus and concentrate organic matter can promote high abundances and diversity of fauna. However, variability in environmental conditions and topography is very high, both within and between canyons, and this is reflected in the variability of the structure and dynamics of the biological communities. [ 29 ] Our understanding of biological processes in canyons has greatly improved with the use of submersibles and ROVs, but this research has also revealed that the relationships between fauna and canyons are more complex than previously thought. [ 30 ] [ 31 ] The diversity of submarine canyons and their fauna means that it is difficult to make generalisations that can be used to create policies for canyon ecosystem management. It is therefore important to better understand the role of canyons in maintaining biodiversity and how potential anthropogenic impacts may affect it. [ 32 ] [ 33 ] HERMIONE will address this challenge by examining canyon ecosystems from different biogeochemical provinces and topographic settings, in light of the complex interactions among habitat (topography, water masses, currents), mass and energy transfer, and biological communities.
Open slopes and deep basins make up > 90% of the ocean floor and 65% of the Earth's surface, and many of the goods and services provided by the deep sea (e.g., oil, gas, climate regulation and food) are produced and stored by them. They are intricately involved in global biogeochemical and ecological processes, and so are essential for the functioning of our biosphere and human wellbeing.
Recent research in the HERMES (EC-FP6) project gathered a large body of information on local biodiversity at large scales, different latitudes and in different hotspot ecosystems, but the research also highlighted the high degree of complexity of deep-sea habitats. This information is fundamental to our understanding of the factors that control biodiversity at much larger scales, from hundreds to thousands of kilometres. HERMIONE will conduct further studies on the mosaic of habitats found in deep-sea slopes and basins, and will investigate the relationships within and between these habitats, their biodiversity and ecology, and their interconnection with other hotspot ecosystems.
Investigating the impacts of anthropogenic activities and climate change in the deep sea is a theme that runs through all HERMIONE research. To the biological communities on open slopes and in deep basins, seafloor warming through climate change is a major threat. Up to 85% of methane reservoirs along the continental margin could be destabilised, which would not only release climate-warming methane gas into the atmosphere, but would also have unknown and potentially devastating consequences on benthic communities. The role of climatic variation on deep-sea benthos is not well understood, although large-scale changes in the structure of seafloor communities have been observed over the last two decades. The use of long-term, deep-sea observatories, e.g., the Hausgarten deep-sea observatory in the Arctic and the time-series analysis of the Catalan margin and Southern Adriatic Sea, will help HERMIONE scientists to examine recent changes in benthic communities, and to study decadal variability in physical processes, such as the dense shelf water cascading events in submarine canyons. [ 28 ]
HERMIONE aims to provide quantitative estimates of the potential consequences of biodiversity loss on ecosystem functioning, to examine how deep-sea benthos adapt to large-scale changes, and, for the first time, to create conceptual models integrating deep-sea biodiversity and quantitative analyses of ecosystem functioning and processes.
Seamounts are underwater mountains that rise from the depths of the ocean, and whose summits can sometimes be found just a few hundred metres below the sea surface. To be classified as a seamount the summit must be 1000 m higher than the surrounding seafloor, [ 34 ] and under this definition there are an estimated 1000–2800 seamounts in the Atlantic Ocean and around 60 in the Mediterranean Sea. [ 35 ]
Seamounts enhance water flow through localised tides, eddies, and upwelling, and these physical processes may enhance primary production. [ 36 ] Seamounts may therefore be considered as hotspots of marine life; fauna benefit from the enhanced hydrodynamics and phytoplankton supply, and thrive on the slopes and summits. Suspension feeders, such as gorgonian sea fans and cold-water corals like Lophelia pertusa, often dominate the rich benthic (seafloor-dwelling) communities. [ 37 ] The enhanced abundance and diversity of fauna is not limited to benthic species, as fish are known to aggregate over seamounts. [ 38 ] Unfortunately, this knowledge has led to increasing commercial exploitation of seamount fish by the fishing industry, and a number of seamount fish populations have already been depleted. Part of HERMIONE research will assess the threats and impacts of human activities on seamounts, including comparing data from seamounts in different stages of fisheries exploitation to understand more about the impacts of fishing activities, both on target and non-target species, and on their habitats.
Despite our increasing knowledge of seamounts, very little is known about the relationships between their ecosystem functioning and biodiversity, and that of the surrounding areas. This information is vital in order to improve our understanding of connectivity between seamount hotspots and adjacent areas, and HERMIONE research will aim to discover whether seamounts act as centres of speciation (the evolution of new species), or if they play a role as "stepping stones", allowing fauna to colonise and disperse across the oceans.
Chemosynthetic environments - such as hot vents, cold seeps, mud volcanoes and sulphidic brine pools - show the highest biomass and productivity of all deep-sea ecosystems. The chemicals found in the fluids, gases and mud that escape from such systems provide an energy source for chemosynthetic bacteria and archaea , which are the primary producers in these systems. A huge variety of fauna profits from the association with chemosynthetic microbes, supporting large communities that can exist independently of sunlight. Some of these environments, such as methane (cold) seeps, can support up to 50,000 times more biomass than communities that rely on photosynthetic production alone. [ 39 ] Owing to the extreme gradients and diversity in physical and chemical factors, hydrothermal vents also remain incredibly fascinating ecosystems. HERMIONE researchers aim to illustrate the tight coupling between geosphere and biosphere processes, as well as their immense heterogeneity and interconnectivity, by observing and comparing the spatial and temporal variation of chemosynthetic environments in European seas.
Methane cycling and carbonate formation by microorganisms in chemosynthetic environments have implications for the control of greenhouse gases . [ 40 ] [ 41 ] Methane can be trapped and stored under the seabed as a gas hydrate , and under different conditions, can either be controlled by microbial consumption, or can escape into the surrounding seawater, and ultimately the atmosphere. Our understanding of the biological controls of methane seepage and feedback mechanisms for global warming is limited. The distribution and structure of cold seep communities can act as an indicator for changes in methane fluxes in the deep sea, e.g. by seafloor warming. [ 42 ] Using multibeam echosounder data and 3D seismic data with in situ studies at seep sites, and by investigating the life histories of fauna at such ecosystems, HERMIONE scientists aim to understand more about their interconnectivity and resilience, and the implications for climate change.
The great variety of fauna present in chemosynthetic environments is a real challenge to scientists. Only a tiny fraction of microorganisms at vents and seeps has been identified, and a huge amount is still to be discovered. Their identification, their association with fauna, and the relationship between their diversity, function and habitat, are vital areas of research as biological communities act as important filters, controlling up to 100% of vent and seep emissions. [ 42 ] By using DNA barcoding and genome analysis in addition to traditional methods of identification and experimentation, HERMIONE scientists will study the relationship between community structure and ecosystem functioning at a variety of vents, seeps, brine pools and mud volcanoes.
With increasing ocean exploration over the last two decades has come the realisation that humans have had an extensive impact on the world’s oceans, not just close to our shores, but also reaching down into the deep sea. From destructive fishing practices and exploitation of mineral resources to pollution and litter, evidence of human impact can be found in virtually all deep-sea ecosystems. [ 43 ] [ 44 ] In response, the international community has set a series of ambitious goals aimed at protecting the marine environment and its resources for future generations. Three of these initiatives, decided on by world leaders during the 2002 World Summit on Sustainable Development (Johannesburg), are to achieve a significant reduction in biodiversity loss by 2010, to introduce an ecosystems approach to marine resource assessment and management by 2010, and to designate a network of marine protected areas by 2012. A crucial requirement for implementing these is the availability of high-quality scientific data and knowledge, as well as effective science-policy interfaces to ensure the policy relevance of research and to enable the rapid translation of scientific information into science policy.
HERMIONE aims to provide this by filling the knowledge gap about threatened deep-sea ecosystems and their current status with respect to anthropogenic impacts (e.g. litter, chemical contamination). Socio-economists and natural scientists work together in HERMIONE, researching the socio-economics of anthropogenic impacts, mapping human activities that affect the deep sea, assessing the potential for valuing deep-sea ecosystem goods and services, studying governance options and designing and implementing real-time science-policy interfaces.
HERMIONE natural and social science results will provide national, regional (EU), and global policy-makers and other stakeholders with the information needed to establish policies to ensure the sustainable use of the deep ocean and conservation of deep-sea ecosystems. | https://en.wikipedia.org/wiki/Hotspot_Ecosystem_Research_and_Man's_Impact_On_European_Seas |
Hotspot Ecosystems Research on the Margins of European Seas , or HERMES , was an international multidisciplinary project, from April 2005 to March 2009, that studied deep-sea ecosystems along Europe's deep-ocean margin. [ 1 ] [ 2 ] [ 3 ]
The HERMES project was funded by the European Commission 's Sixth Framework Programme , and was the predecessor to the HERMIONE project , which started in April 2009. [ 4 ]
| https://en.wikipedia.org/wiki/Hotspot_Ecosystems_Research_on_the_Margins_of_European_Seas |
Houda-Imane Faraoun , also spelled Feraoun , is an Algerian physicist and materials scientist who was appointed Minister of Post, Information Technology & Communication in the government of Algerian Prime Minister Abdelmalek Sellal on 1 May 2015. [ 2 ] She is also a professor of physics at the University of Tlemcen , a post she has held in various capacities since 2006. [ 3 ] She holds a PhD in physics from the University Djillali Liabès of Sidi Bel Abbès and a PhD in mechanical engineering from the University of Technology of Belfort-Montbéliard . [ 4 ]
In 2021 she was sentenced to prison for embezzlement.
Faraoun was born in Sidi Bel Abbès, Algeria on 16 June 1979. [ 4 ] She received her baccalaureate at the age of 16. In 1999, at the age of 20, she received a degree (DES) in Physics from the University of Sidi Bel Abbès and began her studies for a doctorate in Solid-state Physics at the same institution. [ 3 ] A determined student, she concurrently pursued a doctorate at the University of Technology of Belfort-Montbéliard in Mechanical Engineering. In 2005, she received PhDs from both universities. [ 4 ]
In 2006, Faraoun was named an instructor and researcher at the University of Tlemcen , a city in western Algeria. From 2010 to 2011, she served as the head of the university's Condensed Matter Physics department. [ 4 ] While still teaching, Faraoun served as the director general of Algeria's Agency for Thematic Research, Science & Technology (ATRST) from 2012 to 2015. Her research at the University of Tlemcen focuses on materials science and computational physics . Over the course of her career, she has published more than forty articles in international scientific journals. [ 4 ] Her most recent scientific publication, "Determination of Mechanical Properties of Porous Silicon with Image Analysis and Finite Element," was presented at the 8th International Conference on Material Sciences in December 2014. [ 5 ]
Faraoun was appointed as Minister of Post, Information Technology & Communication by Prime Minister Abdelmalek Sellal on 1 May 2015 during a larger cabinet re-shuffling. [ 6 ] She is the youngest minister in the current Algerian cabinet, and one of the youngest women ministers in the country's history. [ 3 ] Along with Aïcha Tagabou and Mounia Meslem , she is one of only three women in the current Algerian cabinet. [ 7 ] In 2015, Forbes named Faraoun No. 9 on its list of the Ten Most Powerful Arab Women in Government. [ 2 ] On 24 December 2020 she and former Minister of Industry Djamila Tamazirt appeared before the investigating judge at the fourth chamber of the specialized criminal pole at the tribunal of Sidi M'hamed. She was sentenced to three years in prison and a fine of one million dinars for "squandering public funds" and "granting of undue privileges and abuse of office" related to two contracts for optical fiber. The sentence was confirmed on 9 February 2022. [ 1 ] | https://en.wikipedia.org/wiki/Houda-Imane_Faraoun |
In astronomy and celestial navigation , the hour angle is the dihedral angle between the meridian plane (containing Earth's axis and the zenith ) and the hour circle (containing Earth's axis and a given point of interest). [ 1 ]
It may be given in degrees, time, or rotations depending on the application.
The angle may be expressed as negative east of the meridian plane and positive west of the meridian plane, or as positive westward from 0° to 360°. The angle may be measured in degrees or in time, with 24 h = 360° exactly.
In celestial navigation , the convention is to measure in degrees westward from the prime meridian ( Greenwich hour angle , GHA ), from the local meridian ( local hour angle , LHA ) or from the first point of Aries ( sidereal hour angle , SHA ).
The hour angle is paired with the declination to fully specify the location of a point on the celestial sphere in the equatorial coordinate system . [ 2 ]
The local hour angle (LHA) of an object in the observer's sky is $\text{LHA}_{\text{object}} = \text{LST} - \alpha_{\text{object}}$ or $\text{LHA}_{\text{object}} = \text{GST} + \lambda_{\text{observer}} - \alpha_{\text{object}}$, where $\text{LHA}_{\text{object}}$ is the local hour angle of the object, LST is the local sidereal time , $\alpha_{\text{object}}$ is the object's right ascension , GST is Greenwich sidereal time and $\lambda_{\text{observer}}$ is the observer's longitude (positive east from the prime meridian ). [ 3 ] These angles can be measured in time (24 hours to a circle) or in degrees (360 degrees to a circle)—one or the other, not both.
Negative hour angles (−180° < LHA object < 0°) indicate the object is approaching the meridian, positive hour angles (0° < LHA object < 180°) indicate the object is moving away from the meridian; an hour angle of zero means the object is on the meridian.
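A minimal sketch in Python of the calculation above (an illustration, not part of the source article; it assumes the local sidereal time and right ascension are supplied in decimal hours):

# Illustrative sketch: local hour angle from local sidereal time (LST)
# and right ascension, both assumed to be given in decimal hours.
def local_hour_angle(lst_hours: float, ra_hours: float) -> float:
    """Return the LHA in degrees: negative east of the meridian
    (approaching), positive west (receding), per the convention above."""
    lha = (lst_hours - ra_hours) * 15.0  # 24 h = 360 deg, so 1 h = 15 deg
    lha %= 360.0                         # fold into [0, 360)
    if lha > 180.0:                      # re-express in (-180, +180]
        lha -= 360.0
    return lha

# Example: LST = 6 h, right ascension = 8 h  ->  LHA = -30 deg,
# i.e. the object is still east of (approaching) the meridian.
print(local_hour_angle(6.0, 8.0))  # -30.0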
Right ascension is frequently given in sexagesimal hours-minutes-seconds format (HH:MM:SS) in astronomy, though it may also be given in decimal hours, sexagesimal degrees (DDD:MM:SS), or decimal degrees.
Observing the Sun from Earth, the solar hour angle is an expression of time, expressed in angular measurement, usually degrees, from solar noon . At solar noon the hour angle is zero degrees, with the time before solar noon expressed as negative degrees, and the local time after solar noon expressed as positive degrees. For example, at 10:30 AM local apparent time the hour angle is −22.5° (15° per hour times 1.5 hours before noon). [ 4 ]
The cosine of the hour angle (cos( h )) is used to calculate the solar zenith angle . At solar noon, h = 0.000, so cos( h ) = 1 ; before and after solar noon, cos(± h ) takes the same value, so that the Sun is at the same altitude in the sky at, for example, 11:00 AM and 1:00 PM solar time. [ 5 ]
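A short Python sketch of these two relations (again an illustration, not from the source; local apparent time is assumed to be given in decimal hours):

import math

# Illustrative sketch: solar hour angle from local apparent (solar) time,
# and the symmetry of its cosine about solar noon.
def solar_hour_angle(local_solar_time_hours: float) -> float:
    """Degrees: negative before solar noon, zero at noon, positive after."""
    return 15.0 * (local_solar_time_hours - 12.0)

print(solar_hour_angle(10.5))  # -22.5, matching the 10:30 AM example
print(solar_hour_angle(13.0))  # +15.0

# cos(-h) == cos(+h): the Sun stands at the same altitude at 11:00 AM
# and 1:00 PM solar time (hour angles of -15 deg and +15 deg).
h = math.radians(15.0)
print(math.isclose(math.cos(-h), math.cos(h)))  # True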
The sidereal hour angle (SHA) of a body on the celestial sphere is its angular distance west of the March equinox generally measured in degrees. The SHA of a star varies by less than a minute of arc per year, due to precession , while the SHA of a planet varies significantly from night to night. SHA is often used in celestial navigation and navigational astronomy, and values are published in nautical almanacs . [ citation needed ] | https://en.wikipedia.org/wiki/Hour_angle |
In astronomy , the hour circle is the great circle through a given object and the two celestial poles . [ 1 ] Together with declination and distance (from the planet's centre of mass ), it determines the location of any celestial object . As such, it is a higher concept than the meridian as defined in astronomy, which takes account of the terrain and depth to the centre of Earth at a ground observer's location. The hour circles, specifically, are perfect circles perpendicular (at right angles ) to the celestial equator . By contrast, the declination of an object viewed on the celestial sphere is the angle of that object to/from the celestial equator (thus ranging from +90° to −90°).
The location of stars , planets , and other similarly distant objects is usually expressed in the following parameters, one for each of the three spatial dimensions: their declination , right ascension ( epoch -fixed hour angle ), and distance. These are as located at the vernal equinox for the epoch (e.g. J2000 ) stated. [ 2 ]
A meridian on the celestial sphere matches an hour circle at any time. The hour circle is a subtype whereby it is expressed in hours as opposed to degrees, radians , or other units of angle. The hour circles make for easy prediction of the angle (and, because of Earth's fairly regular rotation , the approximately equal time) between observations of two objects at the same, or similar, declination. The hour circles (meridians) are measured in hours (or hours, minutes, and seconds); one rotation (360°) is equivalent to 24 hours; 1 hour is equivalent to 15°.
An astronomical meridian follows the same concept and, almost precisely, the orientation of a meridian (also known as longitude ) on a globe . | https://en.wikipedia.org/wiki/Hour_circle |
The House Energy Rating ( HER ) or House Energy Rating Scheme ( HERS ) are worldwide standard measures of comparison by which one can evaluate the energy efficiency of a new or an existing building . The comparison is generally made for the energy required to heat and cool indoor space. Energy is the main criterion considered by any international building energy rating scheme, but some schemes also consider other important factors such as greenhouse gas emissions, indoor environment quality, cost efficiency and thermal comfort. Basically, the energy rating of a residential building provides detailed information on the energy consumption and the relative energy efficiency of the building. Hence, HERs inform consumers about the relative energy efficiency of homes and encourage them to use this information in making their house purchase decision. [ 1 ]
There are many energy rating tools by which one can calculate the energy performance of a building. Basically, all these tools involve a numerical description or a computer-based model for rating a building against standard occupancy and activity templates. [ 2 ] HERS use computer-simulation-based methods to assess the energy efficiency of buildings under standard conditions and their potential for improvement.
HERS is a standardized scheme for evaluating a home's energy efficiency and expected energy costs. A HERS embodies the guidelines of a house energy rating. Across countries, HERS vary in objectives, assessment methodologies and measurement criteria, but despite this variation the goal of all HERS is approximately the same, and they generate their output in a similar way.
Basically, a HERS generates three types of outputs: [ 3 ]
As per national energy policies, HERS are known by different names in different countries, and the implementation and promotion of HERS in a country depends upon its national energy policy. Nevertheless, the aim of all HERS is almost the same, and they can be classified into three types:
Population growth and economic development are among the factors that have escalated energy demand across the world. In developed countries , energy consumption grows at about 1.1% per year, while in developing countries the growth rate is approximately three times that of developed countries. Besides this high growth in energy consumption, the trend also increases the emission of greenhouse gases into the atmosphere . The increasing demand for energy, the limited reserves of conventional energy sources, rising conventional fuel prices and global warming are important factors that push us to adopt energy-saving techniques and alternative sources of energy. [ 4 ] The building sector consumes one third of the world's resources, currently accounts for approximately 40% of energy use in most countries, and is considered among the largest end-use sectors. According to the International Energy Agency (IEA), world energy consumption and greenhouse gas levels will continue to increase rapidly every year, and the IEA recognizes the building sector as one of the most cost-effective sectors in which energy consumption can be reduced. It is estimated that energy consumption could be reduced by 1,509 million tonnes of oil equivalent (Mtoe) by 2050, which would at the same time reduce greenhouse gas production by up to 12.6 gigatonnes (Gt). [ 5 ] The International Energy Outlook report likewise shows energy consumption increasing every year.
The building sector is thus one of the largest sectors in which energy consumption and greenhouse gas emissions can be reduced effectively by improving the energy efficiency of buildings, and HERS can play a vital role in achieving this.
In Australia , the first house-rating scheme, Five Star, was developed in 1980 by the GMI Council of Australia. The scheme was based on three basic elements of a dwelling : glass, mass and insulation. Due to many limitations it failed to attain popularity, and in the 1990s the Victorian scheme was developed. That scheme attained some popularity, but it too was not suitable for all climates of Australia. In 1993 a more flexible HERS, known as the Graded Five Star rating system, was developed; it was suitable for all climatic conditions of Australia. Presently, different HERS are in use in different Australian states. Some of the HERS used in this country are:
In Brazil , the first national program for energy efficiency in buildings (HERS), PROCEL EDIFICA, was developed in 2003. The use of this rating scheme was extended to the public and commercial sectors in 2007, and since 2012 the operational rating has been mandatory for both residential and commercial buildings. The rating system consists of a scale ranging from A to E, where A represents the most efficient and E the least efficient rating. The scheme considers three aspects of buildings:
The three groups are evaluated individually, and their combined results indicate the level of efficiency of a building.
In Canada , home energy ratings have been in existence since 1997. The two government energy rating programs are:
Both of these programs use HOT2XP and HOT 2000 as their rating tools.
Besides the aforesaid government rating programs in Canada, two standards are available for evaluating buildings:
LEED is used in Canada as one of the home rating schemes. It is an adaptation of the US Green Building Council's LEED, modified for Canadian climate, construction and regulation policies.
In China , the Ministry of Housing and Urban-Rural Development (MOHURD) developed a national building energy rating and labeling HERS in 2008. This HERS is mandatory for government buildings, large commercial complexes, and buildings applying for public retrofit funding or a green label. It consists of a star rating scheme ranging from 1 to 5 stars: the more stars, the more energy-efficient the building. The rating level of a building is determined by three parameters:
In Denmark , energy rating schemes have been in existence since 1981. Denmark was the first country in Europe (EU) to begin issuing Energy Performance Certificates (EPCs), which are mandatory for all types of buildings in Denmark. The rating system in Denmark includes three parts.
In France , "Diagnostic de performance energetique" (DPE [ 7 ] ) is used as HERS. This scheme was developed in November 2006 and in July 2007 its use becomes mandatory for all those buildings whose registration had been filed after 1 July 2007. This rating scheme consist of two types of measurements.
Both measurements comprise seven-label ratings, ranging from A (best) to G (worst), presented with colour coding: green represents the A label and red the G label. In both cases, buildings are evaluated in terms of the resources necessary for heating, hot water production and air conditioning. The DPE of a building remains valid for 10 years. [ 8 ]
In Ireland , building energy ratings have been in existence since 2007, with the Building Energy Rating (BER) used as the EPC. The scheme was initially mandatory for new dwellings; in 2008 its use was extended to non-residential and public buildings, and since 2009 the HERS has covered all types of buildings. BER is a calculation-based HERS. Owing to the transparency of this HERS, there is greater awareness among the public, and it is widely accepted.
In Portugal , the EPC scheme was launched in July 2007 for new buildings, and its use was extended to existing buildings in January 2009. The rating scheme covers mainly the indoor air quality and energy performance of buildings, and it is also a calculation-based HERS. Compliance in the country is high, with the EPC issued only once 90% of building completion is observed and a transaction takes place. A national database holds all EPC registration records and is available to all citizens.
The United Kingdom (UK) is one of the countries where HERS have been developed and strongly implemented for a long time. In the UK, the National Home Energy Rating (NHER) scheme is used widely. The NHER scheme measures the thermal efficiency of dwellings on a scale of 0–10 in terms of energy running cost. Dwellings are rated through computer modeling, using a program based on the Building Research Establishment Domestic Energy Model (BREDEM). Basically, NHER measures the energy efficiency of dwellings as a function of energy cost per square metre. Energy usage is calculated by considering all aspects of the building (location, design, construction, water heating, cooking, ventilation, appliances, lighting, etc.), and the dwelling energy rating uses standard assumptions such as the occupancy scenario, thermostat settings and the times occupants are at home.
In the United States , HERS have existed since the 1980s. Among the various HERS, Energy Rated Homes of America is widely used, in more than 18 US states. This scheme uses a 100-point efficiency scale, which is further divided into 10 categories of star rating ranging from one star to five star plus. In this rating scheme a higher star rating represents higher energy efficiency of the house.
The energy efficiency rating in this HERS represents the predicted energy consumption, in the form of normalized annual energy consumption. The rating scheme includes detailed measures of CFLs, water heater tanks, ceiling, floor and pipe insulation , efficient refrigerators and freezers, high-efficiency space and water heating equipment, air leakage and controls.
The other important rating schemes used in the US are: [ 9 ] | https://en.wikipedia.org/wiki/House_Energy_Rating |
A house mark was originally a mark of property, later also used as a family or clan emblem, incised on the facade of a building, on animals, on signets and the like in the farmer and burgher culture of Germany , the Netherlands and the Nordic countries .
These marks have the appearance of glyphs or runes consisting of a pattern of simple lines, without the application of colour.
The form of house marks is based on function. They should be easy to cut, scratch or engrave with a knife or similar tool. At the same time, they should be distinctive and easy to remember. House marks differ from the more complicated patterns of a coat of arms or flags, which include surfaces and solid colors.
House marks can be made from one or two lines and up to quite a complex pattern of line figures. Based on appearance, house marks resemble line figures in rock carvings and in early writing systems . It is unclear how extensively such ancient line figures were used as marks for people or property ownership.
The basic forms of house marks are often runes , characters and numbers, stylized figures, or international symbols like crosses, stars, and astrological or astronomical characters.
One characteristic of house marks is that they may consist of a basic form with addition or deduction of lines. In this way, related people can have marks that resemble each other, but differ by details. This is equivalent to cadency and adding brisures as a method to change a coat of arms.
Many house marks are placed in shield -shaped frames. We see this in seals, on buildings and on tombstones , for both farmers and city dwellers in Scandinavia and German areas, during the 1700s and 1800s. Some of these house mark shields also had color and approached the heraldic coat of arms.
The use of house marks dates back to long before literacy was common. The purpose of a house mark is to have a recognisable mark that a person, a nuclear family, multiple generations of an extended family or an owner of a property can use to mark objects, cattle , or buildings for recognition of ownership.
Besides farmers, house marks have also been used by merchants , tradesman , artisans , and other town burghers on for example Bryggen in Bergen , on building blocks in the Nidaros Cathedral , and on personal seals in other Norwegian cities. There are also house marks written by hand on documents, for instance house marks of mining workers at Røros .
The Norwegian word bomerke or boemerke probably came from Denmark . There is no Norwegian reference before the 17th century. Today bomerke is mainly written as bumerke in Norwegian . In both Denmark and Sweden , the word bomerke (with multiple spellings) has been in use since the 14th century. In the Icelandic codes of law from the Middle Ages , one finds the word einkunn used to denote owner marks used to tag animals. It is likely that this word has also been used in Norway. [ 1 ]
In Finnish, the word puumerkki ("insignia") means a distinguishing mark or sign used by illiterate persons as a replacement of a written signature in official documents. | https://en.wikipedia.org/wiki/House_mark |
Household hazardous waste (HHW) was a term coined by Dave Galvin from Seattle, Washington in 1982 as part of the fulfillment of a US EPA grant. [ 1 ] This new term was reflective of the recent passage of the Resource Conservation and Recovery Act of 1976 (RCRA 1976) in the US. This act and subsequent regulations strengthened the environmental protection requirements for landfills, in Subpart D, and created a "cradle to grave" management system for hazardous wastes, in Subpart C. From RCRA 1976 the US EPA promulgated rules in 1980 which explicitly excluded any wastes from household origins from regulation as a hazardous waste at the federal level. [ 2 ] [ 3 ] Most US states adopted parallel regulations to RCRA 1976 but were allowed to be more stringent. California took advantage of this allowance and chose not to exempt household-origin wastes from its state hazardous waste laws. [ 4 ] HHW products exhibit many of the same dangerous characteristics as fully regulated hazardous waste, namely their potential for reactivity , ignitability, corrosivity , toxicity , or persistence . Examples include drain cleaners , oil paint , motor oil , antifreeze , fuel , poisons , pesticides , herbicides and rodenticides , fluorescent lamps , lamp ballasts containing PCBs, some smoke detectors , and in some states, consumer electronics (such as televisions, computers , and cell phones ). Except for California, most states exclude HHW from their hazardous waste regulations and regulate the management of HHW largely under their solid waste regulatory schemes.
Certain items such as batteries and fluorescent lamps can be returned to retail stores for disposal. Call2Recycle maintains a list of battery recycling locations, and local environmental organizations typically keep lists of fluorescent lamp recycling locations. The classification "household hazardous waste" has been used for decades and does not accurately reflect the larger group of materials that during the past several years have become known as "household hazardous wastes". These include items such as latex paint and other non-hazardous household products that do not generally exhibit hazardous characteristics but are routinely included in "household hazardous waste" disposal programs. The term "home generated special materials" more accurately identifies the broader range of items that public agencies are targeting as recyclable and/or that should not be disposed of into a landfill.
HHW is not regulated by the EPA . Many states and local solid waste management departments have created and funded Household Hazardous Waste collection programs to offer safe disposal options. These programs may include home collection service, permanent facilities and one day collection events.
Most U.S. states and federal regulations continue to permit homeowner disposal of household hazardous waste into the solid waste stream, although some state and local agencies are increasingly banning certain HHW from solid waste disposal.
The most extensive overview of this topic including history, policy and technical issues is contained in the 2018 book Handbook on Household Hazardous Waste (2nd Ed.) , Amy Cabaniss, Editor. [ 5 ] An additional HHW overview resource is in Chapter 10 of the Handbook of Solid Waste Management, George Tchobanoglous and Frank Kreith , Editors. [ 6 ] A more recent (2022) compilation of 30 articles regarding HHW policy and technical issues in the US is found in the Chronicle of the HHW Corner . [ 7 ]
The professional organization most focused on HHW issues is the North American Hazardous Materials Management Association, NAHMMA. [ 8 ] NAHMMA has chapters in many states, [ 9 ] holds an annual conference, [ 10 ] provides training and offers professional publications. [ 11 ] In collaboration with the Solid Waste Association of North America (SWANA) NAHMMA offers certification to HHW collection program professionals. Occasional or periodic HHW collection events at temporary sites and permanent HHW collection facilities are common in many communities. The HHW Collection Facility Design Guide [ 12 ] has been used by many communities to develop permanent collection facilities in the US.
In Florida [ 13 ] and in other US states, responsibility for proper disposal of HHW falls upon the generator. Some states allow collection of small business hazardous wastes at the same location as household hazardous wastes. However, it is more common for public collection facilities to limit hazardous waste collection to households. In 1992 the US EPA issued a policy that allowed the option to collect and mix household hazardous wastes with conditionally exempt hazardous wastes from small businesses. [ 14 ] This has encouraged a trend of local collection programs evolving from household-hazardous-waste-only collection to also include small business hazardous waste collection.
California has introduced an Electronic Waste Recycling Act . While most states recognize the exemption for home generated hazardous waste in 40 CFR, California has established Section 25218 of the Health and Safety Code to regulate all aspects of home generated special materials (HHW). 25218 details the types of programs e.g. Door-to-Door, Permanent HHW Collection Facilities, Mobile Collection Events, etc. Public agencies must sponsor (as the generator) all HHW programs as their EPA ID number is used. All HHW programs are monitored by DTSC and/or the local CUPA organization. A Permit-by-Rule must be obtained from DTSC or the CUPA before implementing most HHW collection activities.
Pennsylvania has introduced the Covered Device Recycling Act .
Similar regulations, such as the Waste Electrical and Electronic Equipment Directive are being introduced in the countries of the European Union . | https://en.wikipedia.org/wiki/Household_hazardous_waste |
In computer programming , housekeeping can refer either to a standard entry or exit routine appended to a user-written block of code (such as a subroutine or function , sometimes as a function prologue and epilogue ) at its entry and exit, or to any other automated or manual software process whereby a computer is cleaned up after usage (e.g. freeing resources such as virtual memory ). This might include such activities as removing or archiving logs that the system has made as a result of the user's activities, or deleting temporary files which may otherwise simply take up space. Housekeeping can be described as a necessary chore, required to perform a particular computer's normal activity but not necessarily part of the algorithm . [ 1 ] For cleaning up computer disk storage , utility software usually exists for this purpose, such as data compression software (to "shrink" files and release disk space) and defragmentation programs (to improve disk performance). [ 2 ]
Housekeeping could include (but is not limited to) the following activities:
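As a minimal sketch of such a housekeeping routine, covering the log archiving and temporary-file deletion mentioned above (an invented example, not from the source; the directory paths and the seven-day age threshold are assumptions):

import atexit
import glob
import gzip
import os
import shutil
import time

TMP_DIR = "/tmp/myapp"          # hypothetical scratch directory
LOG_DIR = "/var/log/myapp"      # hypothetical log directory
MAX_AGE_SECONDS = 7 * 24 * 3600  # assumed threshold: seven days

def housekeeping() -> None:
    # Delete temporary files older than the age threshold.
    now = time.time()
    for path in glob.glob(os.path.join(TMP_DIR, "*")):
        if os.path.isfile(path) and now - os.path.getmtime(path) > MAX_AGE_SECONDS:
            os.remove(path)
    # Archive (compress) plain-text logs instead of deleting them outright.
    for log in glob.glob(os.path.join(LOG_DIR, "*.log")):
        with open(log, "rb") as src, gzip.open(log + ".gz", "wb") as dst:
            shutil.copyfileobj(src, dst)
        os.remove(log)

# Register the routine so the cleanup runs automatically at normal exit,
# acting as the program's exit "housekeeping" step.
atexit.register(housekeeping)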
| https://en.wikipedia.org/wiki/Housekeeping_(computing) |
In molecular biology , housekeeping genes are typically constitutive genes that are required for the maintenance of basic cellular function, and are expressed in all cells of an organism under normal and patho-physiological conditions. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Although some housekeeping genes are expressed at relatively constant rates in most non-pathological situations, the expression of other housekeeping genes may vary depending on experimental conditions. [ 1 ] [ 5 ]
The origin of the term "housekeeping gene" remains obscure. Literature from 1976 used the term to describe specifically tRNA and rRNA . [ 6 ] For experimental purposes, the expression of one or multiple housekeeping genes is used as a reference point for the analysis of expression levels of other genes. The key criterion for the use of a housekeeping gene in this manner is that the chosen housekeeping gene is uniformly expressed with low variance under both control and experimental conditions. Validation of housekeeping genes should be performed before their use in gene expression experiments such as RT-PCR . Recently a web-based database of human and mouse housekeeping genes and reference genes/transcripts, named the Housekeeping and Reference Transcript Atlas (HRT Atlas), was developed to offer an updated list of housekeeping genes and reliable candidate reference genes/transcripts for RT-qPCR data normalization. [ 1 ] This database can be accessed at http://www.housekeeping.unicamp.br .
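To make the use of a housekeeping gene as a reference point concrete, here is a minimal Python sketch of the widely used 2^(-ΔΔCt) (Livak) normalization for RT-qPCR data; it assumes roughly 100% amplification efficiency, and all Ct values in the example are invented:

# Illustrative sketch: relative quantification of a target gene by the
# 2^(-ddCt) (Livak) method, normalized to a housekeeping/reference gene.
# Assumes ~100% amplification efficiency; all Ct values are invented.
def fold_change(ct_target_sample: float, ct_ref_sample: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalize sample
    d_ct_control = ct_target_control - ct_ref_control   # normalize control
    dd_ct = d_ct_sample - d_ct_control                  # compare to control
    return 2.0 ** (-dd_ct)                              # relative expression

# The target crosses threshold 2 cycles earlier (relative to the reference
# gene) in the treated sample than in the control -> ~4-fold up-regulation.
print(fold_change(24.0, 20.0, 26.0, 20.0))  # 4.0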
Housekeeping genes account for the majority of the active genes in the genome, and their expression is obviously vital to survival. Housekeeping gene expression levels are fine-tuned to meet the metabolic requirements of various tissues. Biochemical studies on transcription initiation at housekeeping gene promoters have been difficult, partly due to the less-characterized promoter motifs and transcription initiation process.
Human housekeeping gene promoters are generally depleted of TATA -box, have high GC content and high incidence of CpG Islands . [ 7 ] In Drosophila, where promoter specific CpG Islands are absent, housekeeping gene promoters contain DNA elements like DRE, E-box or DPE. [ 8 ] Transcription start sites of housekeeping genes can span over a region of around 100 bp whereas transcription start sites of developmentally regulated genes are usually focused in a narrow region. [ 9 ] [ 10 ] [ 11 ] Little is known about how the dispersed transcription initiation of housekeeping gene is established. There are transcription factors that are specifically enriched on and regulate housekeeping gene promoters. [ 12 ] [ 13 ] Furthermore, housekeeping promoters are regulated by housekeeping enhancers but not developmentally regulated enhancers. [ 14 ]
The following is a partial list of "housekeeping genes", including, for example, RPS19BP1. For a more complete and updated list, see the HRT Atlas database compiled by Bidossessi W. Hounkpe et al. [ 1 ] The database was constructed by mining more than 12000 human and mouse RNA-seq datasets. [ 1 ]
There is significant overlap in function with regard to some of these proteins. In particular, the Rho-related genes are important in nuclear trafficking (i.e., mitosis) as well as in mobility along the cytoskeleton in general. These genes are of particular interest in cancer research.
(Note that COX1, COX2, and COX3 are mitochondrially encoded)
Although this page is devoted to genes that should be ubiquitously expressed, this section is for genes whose current name reflects their relative upregulation in testes. | https://en.wikipedia.org/wiki/Housekeeping_gene |
In engineering , a housing or enclosure is a container , a protective exterior (e.g. shell ) or an enclosing structural element (e.g. chassis or exoskeleton ) designed to enable easier handling, provide attachment points for internal mechanisms (e.g. mounting brackets for electrical components , cables and pipings ), maintain cleanliness of the contents by shielding dirt / dust , fouling and other contaminations, or protect interior mechanisms (e.g. delicate integrated electrical fittings) from structural stress and/or potential physical, thermal, chemical, biological or radiational damages from the surrounding environment. Housing may also be the body of a device, vital to its function.
Housing is an exterior case or enclosure used to protect an interior mechanism. The housing prevents the interior mechanism from being fouled by outside debris. It may also have integrated fittings or brackets to keep internal components in place; sometimes a housing is the body of the device, vital to its function. [ 1 ]
Housings are most commonly made of metal or plastic. [ citation needed ] The design of housing is specific to the item and its use. Housing may provide a number of functions.
Housing prevents the interior mechanism from being fouled by outside debris. Housings are sometimes made watertight, especially when the interior mechanisms contain electronics . [ 2 ]
Housings are commonly used to protect gearboxes , [ 3 ] where the housing also is responsible for containing the lubricant . Housings can also play a safety role, by providing a barrier between people and dangerous or fast-moving mechanisms. [ 4 ]
Housing may need to provide a user interface for the internal devices, such as for televisions and video game controllers .
Housing may include decorative elements . When these elements are removable and replaceable panels, they may be known as faceplates. Interchangeable faceplates provide a method to update the cosmetics of the housing without replacing the entire enclosure.
| https://en.wikipedia.org/wiki/Housing_(engineering) |
The Houtermans Award is given annually by the European Association of Geochemistry for outstanding contributions to geochemistry made by scientists under 35 years old or within 6 years of their PhD award. [ 1 ] The award is named after Fritz Houtermans and consists of an engraved medal and an honorarium of 1000 euros. The F.G. Houtermans Award is the only category among the Geochemical Society and European Association of Geochemistry awards with equal gender representation in the last decade. [ 2 ]
| https://en.wikipedia.org/wiki/Houtermans_Award |
Hovlinc RNA is a self-cleaving ribozyme of about 168 nucleotides found in a very long noncoding RNA (vlincRNA) in humans, chimpanzees, and gorillas. [ 1 ] The word "hovlinc" comes from "hominin vlincRNA-located" RNA. Hovlinc is only the fourth known case of a ribozyme in humans. Self-cleavage activity of Hovlinc has been shown in humans, chimpanzees and bonobos, but is absent in gorillas, raising questions about Hovlinc's biological function and evolution .
There are only a few known examples of ribozymes in humans, [ 2 ] including Hovlinc, the mammalian CPEB3 ribozyme , the hammerhead ribozyme (HH9 and HH10 [ 3 ] ) and the B2 SINE ribozyme. [ 4 ] Hovlinc presumably acquired its self-cleaving activity about 10 to 13 million years ago, which coincides with the last common ancestor of humans, gorillas, and chimpanzees. Hovlinc shows catalytic activity in hominids but not in gorillas, where a mutation abolished the self-cleavage activity.
Hovlinc is a highly structured RNA that contains four stem loops joined at a central loop. It also features large pseudoknots that help to bring the second and fourth helices together, giving the RNA the more compact structure that allows catalytic activity. | https://en.wikipedia.org/wiki/Hovlinc |
How Round Is Your Circle? Where Engineering and Mathematics Meet is a book on the mathematics of physical objects, for a popular audience. It was written by chemical engineer John Bryant and mathematics educator Chris Sangwin, and published by the Princeton University Press in 2008.
The book has 13 chapters, [ 1 ] whose topics include:
The book emphasizes the construction of physical models, and includes many plates of the authors' own models, [ 3 ] detailed construction plans, and illustrations. [ 4 ]
Doug Manchester characterizes the topic of the book as "recreational engineering". [ 5 ] It only requires a standard background in mathematics including basic geometry, trigonometry, and a small amount of calculus. [ 3 ] Owen Smith calls it "a great book for engineers and mathematicians, as well as the interested lay person", writing that it is particularly good at laying bare the mathematical foundations of seemingly-simple problems. [ 4 ] Similarly, Ronald Huston recommends it to "mathematicians, engineers, and physicists", as well as interested members of the general public. [ 1 ]
Matthew Killeya writes approvingly of the book's intuitive explanations for its calculations and the motivation it adds to the mathematics it applies. [ 8 ] However, although reviewer Tim Erickson calls the book "exuberant and eclectic", [ 6 ] reviewers Andrew Whelan and William Satzer disagree, both finding fault with the book's lack of focus. [ 2 ] [ 7 ] | https://en.wikipedia.org/wiki/How_Round_Is_Your_Circle? |
How to Clone a Mammoth: The Science of De-Extinction is a 2015 non-fiction book by biologist Beth Shapiro and published by Princeton University Press . The book describes the current state of de-extinction technology and what the processes involved require in order to accomplish the potential resurrection of extinct species.
The book is laid out as a step-by-step guide on how to clone an animal, with each chapter detailing a different topic that needs to be explored and answered before de-extinction of a species will be complete. This also involves a particular focus on resurrection of the mammoth . [ 4 ]
Several chapters deal with the genetic material itself and how to obtain it, along with the difficulties of recovering viable DNA samples from mummified or fossilized remains. Due to the actions of nucleases after cell death, most DNA of extinct species is fragmented into small pieces that have to be reconstructed at least partially if it is to be cloned. This fragmentation means that recovery of a full extinct genome is largely impossible. Thus, only partial genes can be utilized and the most viable method is to use a close evolutionary relative of the extinct species and insert the genes that differ into an embryo of the living species. [ 4 ] For mammoth de-extinction, any trait consideration would involve the Asian elephant , the closest still-living relative. Using genes from extrapolated mammoth DNA, the Asian elephant could be made to survive across a wider range, including cold environments, protecting it against possible extinction. This gene transfer to benefit living species is one of the primary sources of research done with de-extinction technology in addition to the desire to revive lost species. [ 5 ]
The three following chapters discuss current technology available for moving genes and creating modified elephant genomes, including CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) and TALENs (Transcription Activator-like Effector Nucleases). [ 6 ] The final chapters discuss the environmental benefits and potential drawbacks of mammoths or other extinct species being reintroduced. For mammoths specifically, their heavy weight and specific methods of foraging help grasslands grow in colder climates, potentially turning Siberian permafrost into a tundra -like region with numerous plant species. [ 7 ]
Shaoni Bhattacharya in New Scientist noted that while the book "can be a little academic", Shapiro manages to explain "complex molecular biology clearly" and that it "really comes alive, though, when she describes her own expeditions". [ 8 ] Writing for National Geographic , Riley Black describes Shapiro's writing style as "sharp, witty, and impeccably-argued" and says that she writes "finely-honed prose" that "cuts through the hype that has clouded the debate" on whether it is possible to clone extinct animals and also whether such efforts should instead be directed toward assisting species that are currently in danger of extinction. [ 9 ] Caspar Henderson for The Spectator called the book's writing "lively, skeptical and nuanced" and stated that Shapiro covered topics with "great clarity". [ 10 ] In an article for Science , A. Rus Hoelzel characterized the writing as "rich in anecdote and scientifically precise". [ 11 ]
Alec Rodriguez praised the book's writing in a Yale Scientific article, approving of the conciseness and yet approachable technical detail that is included in the book while still remaining smooth in its flow between subjects. Rodriguez concluded that the book also "leaves the reader optimistic" in regards to future scientific advancements and the usage of Pleistocene Park . [ 12 ] Times Higher Education ' s Tiffany Taylor considered the work a "thought-provoking book [that] offers excitement and wonder" and that, through Shapiro's writing and direct discussion, the book manages to "paint a scientifically accurate yet magical world where Pleistocene giants might roam the Arctic tundra once again, and where we have the chance to undo some past mistakes". [ 13 ] A review in Publishers Weekly applauded the book's attempt to state plainly the science involved and determined that readers will "emerge with the ability to think more deeply about the facts of de-extinction and cloning at a time when hyperbolic and emotionally manipulative claims about such scientific breakthroughs are all too common". [ 14 ] Kent H. Redford in the journal Oryx recommended that others read the book, adding that it "will make everyone think, will make some mad, others inspired, and hopefully will educate all conservationists to the extraordinary potential opportunities, good and bad, that de-extinction presents". [ 15 ] In The Quarterly Review of Biology , Derek D. Turner called the writing "careful, accessible, and thoughtful", while also pointing out that the book as a whole "conveys a sense of excitement about the science, but without the uncritical techno-optimism that one sees in many popular articles". [ 16 ] Philip J. Seddon in an article for the journal Trends in Ecology and Evolution described the book as an "important contribution to the ongoing debate" by how it changed the focus on what de-extinction is about to "ecological resurrection, and not species resurrection". [ 17 ] | https://en.wikipedia.org/wiki/How_to_Clone_a_Mammoth |
How to Solve It (1945) is a small volume by mathematician George Pólya , describing methods of problem solving . [ 1 ]
This book has remained in print continually since 1945.
How to Solve It suggests the following steps when solving a mathematical problem : first, understand the problem; second, devise a plan; third, carry out the plan; and fourth, look back on your work and ask how it could be better.
If this technique fails, Pólya advises: [ 6 ] "If you cannot solve the proposed problem, try to solve first some related problem. Could you imagine a more accessible related problem?"
"Understand the problem" is often neglected as being obvious and is not even mentioned in many mathematics classes. Yet students are often stymied in their efforts to solve it, simply because they don't understand it fully, or even in part. In order to remedy this oversight, Pólya taught teachers how to prompt each student with appropriate questions, [ 7 ] depending on the situation, such as:
The teacher is to select a question at the appropriate level of difficulty for each student, to ascertain whether the student understands at their own level, moving up or down the list of prompts until each one can respond with something constructive.
Pólya mentions that there are many reasonable ways to solve problems. [ 3 ] The skill of choosing an appropriate strategy is best learned by solving many problems; with practice, choosing a strategy becomes increasingly easy. A partial list of strategies is included:
Also suggested:
Pólya places great emphasis on teachers' behavior. A teacher should support students in devising their own plan, using a question method that goes from the most general questions to more particular ones, with the goal that the last step toward having a plan is made by the student. He maintains that just showing students a plan, no matter how good it is, does not help them.
This step is usually easier than devising the plan. [ 23 ] In general, all you need is care and patience, given that you have the necessary skills. Persist with the plan that you have chosen. If it continues not to work, discard it and choose another. Don't be misled; this is how mathematics is done, even by professionals. [ 3 ]
Pólya mentions that much can be gained by taking the time to reflect and look back at what you have done, what worked and what did not, and with thinking about other problems where this could be useful. [ 24 ] [ 25 ] Doing this will enable you to predict what strategy to use to solve future problems, if these relate to the original problem.
The book contains a dictionary-style set of heuristics , many of which have to do with generating a more accessible problem. For example: | https://en.wikipedia.org/wiki/How_to_Solve_It |
How to Solve it by Computer is a computer science book by R. G. Dromey , [ 1 ] first published by Prentice-Hall in 1982.
It is occasionally used as a textbook , especially in India. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
It is an introduction to the whys of algorithms and data structures .
Features of the book:
The fundamental algorithms in this book are mostly presented in pseudocode and/or Pascal notation. | https://en.wikipedia.org/wiki/How_to_Solve_it_by_Computer |
Howard Martin Temin (December 10, 1934 – February 9, 1994) was an American geneticist and virologist . He discovered reverse transcriptase in the 1970s [ 1 ] at the University of Wisconsin–Madison , for which he shared the 1975 Nobel Prize in Physiology or Medicine with Renato Dulbecco and David Baltimore . [ 2 ] [ 3 ]
Temin was born in Philadelphia, Pennsylvania , to Jewish parents, Annette (Lehman), an activist, and Henry Temin, an attorney. [ 4 ] As a high school student at Central High School in Philadelphia, he participated in the Jackson Laboratory 's Summer Student Program in Bar Harbor, Maine . The director of the program, C.C. Little , told his parents that Temin was "unquestionably the finest scientist of the fifty-seven students who have attended the program since the beginning...I can't help but feel this boy is destined to become a really great man in the field of science." [ 4 ] Temin said that his experience at the Jackson Laboratory was what originally interested him in science. [ 5 ]
Temin's parents raised their family to have values associated with social justice and independent thinking, which was evident throughout his life. For Temin's bar mitzvah , the family donated money that would have been spent on the party to a local camp for displaced persons. Temin was also the valedictorian of his class and he devoted his speech to relevant issues at the time including the recent hydrogen bomb activity and the news of sending a man to the moon. [ 4 ]
Temin received a bachelor's degree from Swarthmore College in 1955, majoring and minoring in biology in the honors program. He received his doctorate in animal virology from the California Institute of Technology in 1960. [ 5 ]
Temin's first exposure to experimental science was during his time at the California Institute of Technology as a graduate student in the laboratory of Professor Renato Dulbecco . [ 5 ] Temin originally studied embryology at Caltech, but after about a year and a half, he switched to animal virology . He became interested in Dulbecco's lab after a chance run-in with Harry Rubin , a postdoctoral fellow in Dulbecco's lab. In the lab, Temin studied the Rous sarcoma virus , a tumor-causing virus that infects chickens. [ 4 ] During his research on the virus, he observed that mutations in the virus yielded alterations in the structural characteristics of the infected cell – thus, integration into the cell's genome was occurring. As part of his doctoral thesis, Temin stated that the Rous sarcoma virus has "some kind of close relationship with the genome of the infected cell". [ 4 ] After receiving his doctorate, Temin continued to work in Dulbecco's lab as a postdoctoral fellow.
In 1960, the McArdle Laboratory for Cancer Research at the University of Wisconsin–Madison recruited Temin as a virologist , a position that had been hard to fill because, at the time, virology was not considered pertinent to cancer research. Even though Temin knew he would be completely independent in Madison because of the lack of research involving virology and oncology together, he stated that he was "supremely self-confident". [ 5 ] When he first arrived in Madison in 1960, he found an unprepared laboratory in the basement of a rundown building, with an office that could be considered a closet. Until a more suitable laboratory could be prepared, he continued his research on RSV at a friend's laboratory at the University of Illinois . Later that year, he returned to Madison, continued his RSV research in his own lab, and began his position as an assistant professor. [ 5 ]
While studying the Rous sarcoma virus at UW-Madison, Temin began to refer to the genetic material that the virus introduced into cells as the " provirus ". Using the antibiotic actinomycin D , which inhibits the expression of DNA, he determined that the provirus was DNA or was located on the cell's DNA. [ 6 ] [ 7 ] [ 8 ] These results implied that the infecting Rous sarcoma virus was somehow generating complementary double-stranded DNA. Temin's description of how tumor viruses act on the genetic material of the cell through reverse transcription was revolutionary. It upset a popularized version of the "Central Dogma" of molecular biology, widely held at the time, posited by Nobel laureate Francis Crick , one of the co-discoverers of the structure of DNA (along with James Watson and Rosalind Franklin ). Crick had claimed only that sequence information cannot flow out of protein into DNA or RNA, but he was commonly interpreted as saying that information flows exclusively from DNA to RNA to protein . Many highly respected scientists disregarded Temin's work and declared it impossible. Despite the lack of support from the scientific community, Temin continued to search for evidence to support his idea. In 1969, Temin and a postdoctoral fellow, Satoshi Mizutani, began searching for the enzyme responsible for the phenomenon of viral RNA being transcribed into proviral DNA. [ 4 ] Later that year, Temin showed that certain tumor viruses carried the enzymatic ability to reverse the flow of information from RNA back to DNA using reverse transcriptase . Reverse transcriptase was also independently and simultaneously discovered in association with the murine leukemia virus by David Baltimore at the Massachusetts Institute of Technology . [ 9 ] In 1975, Baltimore and Temin shared the Nobel Prize in Physiology or Medicine. [ 10 ] Both scientists completed their initial work with RNA-dependent DNA polymerase using the Rous sarcoma virus .
The discovery of reverse transcriptase is one of the most important discoveries of the modern era of medicine, as reverse transcriptase is the central enzyme in several widespread viral diseases such as AIDS and hepatitis B . Reverse transcriptase is also an important component of several significant techniques in molecular biology, such as the reverse transcription polymerase chain reaction , and of diagnostic medicine.
Temin mentored a number of PhD students, including Edward F. Fritsch , co-author of the most-cited book of all time. [ 11 ]
Temin was a member of the American Academy of Arts and Sciences (1973), [ 12 ] the United States National Academy of Sciences (1974), and the American Philosophical Society (1978). [ 13 ] In 1992 Temin received the National Medal of Science . Temin was elected a Foreign Member of the Royal Society (ForMemRS) in 1988 . [ 14 ] [ 15 ]
After winning the Nobel Prize, Temin focused his research mainly on studying the viral sequences that control the packaging of viral RNA, developing a new vaccine for HIV , and studying the mechanisms of retroviral variation. [ 5 ]
After receiving the Nobel Prize in 1975, Temin went from being a rebel in the scientific community to a highly respected researcher. Temin began receiving international recognition for his work, and used his newly acquired fame to improve the world. An example of this came in October 1976, when Temin helped scientists in the Soviet Union who were targeted by the KGB, the Soviet secret police. The Jewish Soviet scientists had been stripped of their jobs and oppressed after requesting visas to emigrate to Israel. Temin made it his mission to personally visit the scientists and their families. He gave them gifts that could be resold to help them financially, and he gave the scientists copies of scientific journals, which had been banned by the KGB. [ 16 ] On one occasion, Howard Temin gave a lecture to some of the Jewish Soviet scientists in someone's home. The next morning, almost all of the scientists who had attended the lecture were arrested. After they were released, Temin tape-recorded one scientist's account of the event and gave the tape to newspapers in the United States so that the situation Jewish scientists were facing would be publicized. [ 4 ]
Another example of Temin trying to improve the world came at the Nobel Prize reception. After receiving the Nobel Prize from King Carl Gustaf of Sweden, Temin addressed the smokers in the audience, which included the Queen of Denmark , saying he was "outraged that one major measure available to prevent much cancer, namely the cessation of smoking, had not been more widely adopted". He had also insisted that the ashtray located on the laureates' table be removed. [ 4 ]
After winning the Nobel Prize , Temin also became more active in the scientific community outside of research. He was involved with over 14 scientific journals. In 1979, he became an advisory member for the director of the National Institutes of Health (NIH) and a member of the human gene therapy subgroup of the recombinant DNA advisory committee. He was also a member of the National Cancer Advisory Board, and the chairman of the AIDS subcommittee. At the National Institute of Allergy and Infectious Diseases (NIAID), he was the chairman of a genetic variation advisory panel on the development of AIDS, and was a member of the vaccine advisory board. In the National Academy of Sciences (NAS), he was a member of the Waksman Award committee and the report review committee. In 1986, Temin became a member of the Institute of Medicine (IOM)/NAS committee for a national strategy for public policy issues associated with AIDS. The last committee Temin served on was the World Health Organization Advisory Council. [ 5 ]
In 1981, Temin became a founding member of the World Cultural Council . [ 17 ]
Temin taught and conducted research at UW-Madison until he died of lung cancer , on February 9, 1994. [ 14 ] He was survived by his wife Rayla, a geneticist at UW-Madison, two daughters, and two brothers, Peter Temin , also an academic, and Michael Temin, a lawyer. | https://en.wikipedia.org/wiki/Howard_Martin_Temin |
The Howard N. Potts Medal was one of the Franklin Institute Awards for science and engineering presented by the Franklin Institute of Philadelphia, Pennsylvania . It is named for Howard N. Potts . The medal was first awarded in 1911 and was merged in 1991, along with other Franklin Institute historical awards, into the Benjamin Franklin Medal . [ 1 ]
The following people received the Howard N. Potts Medal: [ 2 ] [ 3 ] | https://en.wikipedia.org/wiki/Howard_N._Potts_Medal |
Howard E. Zimmerman (July 5, 1926 – February 12, 2012) was a professor of chemistry at the University of Wisconsin–Madison . [ 3 ] He was elected to the National Academy of Sciences in 1980 [ 4 ] and was the recipient of the 1986 American Institute of Chemists Chemical Pioneer Award . [ 5 ] [ 6 ]
Howard E. Zimmerman was a native of Connecticut . [ 7 ] During World War II , he served in the U.S. Armored Corps in Europe as a tank gunner; his final rank was technical sergeant . [ 8 ] He obtained a B.S. in chemistry in 1950 and a Ph.D. in 1953, both from Yale University . [ 8 ] He was a postdoctoral research fellow with a National Research Council fellowship from 1953 to 1954, working with R. B. Woodward (Harvard). [ 8 ] From 1954 to 1960 he was an assistant professor at Northwestern University . [ 8 ] Beginning in 1960 he was an associate professor and then professor at the University of Wisconsin , [ 8 ] and from 1990 he was Hilldale and A. C. Cope Professor of Chemistry. His publications number over 285 (including 11 chapters).
Zimmerman gave ACS Short Courses on organic quantum chemistry and molecular orbital theory . He authored a 1975 textbook entitled Quantum Mechanics for Organic Chemists . [ 9 ] Zimmerman was the organizer of the 1972 IUPAC Photochemistry Symposium ( Baden-Baden ) and of five Pacifichem Symposia – the last being Pacifichem 2010. | https://en.wikipedia.org/wiki/Howard_Zimmerman |
In fluid dynamics , the Howarth–Dorodnitsyn transformation (or Dorodnitsyn–Howarth transformation ) is a density -weighted coordinate transformation, which reduces variable-density flow conservation equations to simpler form (in most cases, to incompressible form). The transformation was first used by Anatoly Dorodnitsyn in 1942 and later by Leslie Howarth in 1948. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] The transformation of the y {\displaystyle y} coordinate (usually taken as the coordinate normal to the predominant flow direction) to η {\displaystyle \eta } is given by
η = ∫ 0 y ρ ρ ∞ d y ′ , {\displaystyle \eta =\int _{0}^{y}{\frac {\rho }{\rho _{\infty }}}\,dy',}
where ρ {\displaystyle \rho } is the density and ρ ∞ {\displaystyle \rho _{\infty }} is the density at infinity. The transformation is extensively used in boundary layer theory and other gas dynamics problems.
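A minimal numerical sketch of the change of variable (the density profile below is hypothetical, chosen only to illustrate how η is computed from ρ):

import numpy as np
from scipy.integrate import cumulative_trapezoid

rho_inf = 1.0                             # free-stream density
y = np.linspace(0.0, 5.0, 501)            # wall-normal coordinate
rho = rho_inf * (1.0 - 0.5 * np.exp(-y))  # hypothetical heated-wall profile

# Dorodnitsyn-Howarth variable: eta(y) = integral_0^y (rho/rho_inf) dy'
eta = cumulative_trapezoid(rho / rho_inf, y, initial=0.0)

# Near the wall the gas is lighter (rho < rho_inf), so eta grows more
# slowly than y; far from the wall, d(eta)/dy approaches 1.
print(eta[-1], y[-1])

Where the fluid is less dense than the free stream, the transformed coordinate compresses the wall-normal distance, which is exactly what absorbs the density variation in the transformed equations.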
Keith Stewartson and C. R. Illingworth independently introduced, in 1949, [ 6 ] [ 7 ] a transformation that extends the Howarth–Dorodnitsyn transformation to compressible flows. The transformation reads as [ 8 ]
where x {\displaystyle x} is the streamwise coordinate, y {\displaystyle y} is the normal coordinate, c {\displaystyle c} denotes the sound speed and p {\displaystyle p} denotes the pressure. For an ideal gas, the transformation is defined as
where γ {\displaystyle \gamma } is the specific heat ratio . | https://en.wikipedia.org/wiki/Howarth–Dorodnitsyn_transformation |
The Howlett Line was a critical line of Confederate earthworks dug during the Bermuda Hundred Campaign of the American Civil War in May 1864. Specifically, the line stretched across the Bermuda Hundred peninsula from the James River to the Appomattox River . [ 1 ] It was named for Dr. Howlett's house, which overlooked the James River at the north end of the line. [ 2 ] The Howlett Line became famous as the "Cork in the Bottle" by keeping General Butler's 30,000-man Army of the James at bay. [ 2 ]
Following the Battle of Ware Bottom Church (May 20, 1864), the Confederates began digging a decisive set of defensive earthworks that became known as the Howlett Line. [ 1 ] It ran for three miles from river to river and aimed to protect the crucially important railroad and turnpike connecting Richmond and Petersburg, which were at the time lightly defended. [ 3 ] Gen. Daniel H. Hill directed the construction of the Confederate fortifications. [ 4 ]
In his Personal Memoirs , Gen. Ulysses S. Grant described a conversation with his chief engineer, Gen. John G. Barnard , regarding Butler's predicament:
He said that the general occupied a place between the James and Appomattox rivers which was of great strength, and where with an inferior force he could hold it for an indefinite length of time against a superior; but that he could do nothing offensively. I then asked him why Butler could not move out from his lines and push across the Richmond and Petersburg Railroad to the rear and on the south side of Richmond . He replied that it was impracticable, because the enemy had substantially the same line across the neck of land that General Butler had. He then took out his pencil and drew a sketch of the locality, remarking that the position was like a bottle and that Butler's line of intrenchments across the neck represented the cork; that the enemy had built an equally strong line immediately in front of him across the neck; and it was therefore as if Butler was in a bottle. He was perfectly safe against an attack; but, as Barnard expressed it, the enemy had corked the bottle and with a small force could hold the cork in its place.
Battery Dantzler, called Howlett's Battery before it was renamed to honor Col. Olin M. Dantzler of the 22nd South Carolina, and Parker's Virginia Battery anchored the Howlett Line. [ 6 ] During the Second Battle of Petersburg (June 15, 1864) Gen. P. G. T. Beauregard pulled part of the troops from the Howlett Line to reinforce his main defenses. Overall, the construction of Confederate fortifications and trenches known as the Howlett Line held Butler in place until General Robert E. Lee evacuated the position on April 2, 1865. [ 7 ]
The Chesterfield Historical Society of Virginia works to preserve the extant earthworks in Howlett Line Park, which belongs to the Chesterfield County Parks System. [ 8 ] [ 9 ] | https://en.wikipedia.org/wiki/Howlett_Line |
Hox genes, especially HoxA and HoxD genes, play a major role in the ability of some amphibians and reptiles to regenerate lost limbs. [ 1 ]
If the processes involved in forming new tissue can be reverse-engineered into humans, it may be possible to heal injuries of the spinal cord or brain, repair damaged organs and reduce scarring and fibrosis after surgery. [ 2 ] [ 3 ] Despite the strong conservation of the Hox genes through evolution, mammals, including humans, cannot regenerate any of their limbs. This raises the question of why humans, which also possess analogs of these genes, cannot regrow and regenerate limbs. Besides the lack of specific growth factors , studies have shown that something as small as base-pair differences between amphibian and human Hox analogs plays a crucial role in humans' inability to regenerate limbs. [ 4 ] Undifferentiated stem cells and the ability to have polarity in tissues are vital to this process.
Some amphibians and reptiles have the ability to regenerate limbs, eyes, spinal cords, hearts, intestines, and upper and lower jaws. The Japanese fire belly newt can regenerate its eye lens 18 times over a period of 16 years and retain its structural and functional properties. [ 5 ] The cells at the site of the injury have the ability to dedifferentiate , reproduce rapidly, and differentiate again to create a new limb or organ.
Hox genes are a group of related genes that control the body plan of an embryo along the head-tail axis. They are responsible for body segment differentiation and express the arrangement of numerous body components during initial embryonic development. [ 6 ] Primarily, these sets of genes are utilized during the development of body plans by coding for the transcription factors that trigger production of body-segment-specific structures. Additionally, in most animals these genes are laid out along the chromosome in an order similar to that in which they are expressed along the anterior–posterior axis. [ 7 ]
Variants of the Hox genes are found in almost every phylum , with the exception of sponges , which use a different type of developmental gene. [ 8 ] The homology of these genes is of great interest to scientists, as they may hold more answers to the evolution of many species . In fact, these genes demonstrate such a high degree of homology that a human Hox gene variant – HOXB4 – could mimic the function of its homolog in the fruit fly ( Drosophila ). [ 9 ] Studies suggest that differences in the regulation of these genes and in their target genes are actually what cause such great phenotypic differences between species. [ 10 ]
Hox genes contain a DNA sequence known as the homeobox that is involved in the regulation of patterns of anatomical development. This sequence provides instructions for making a string of 60 protein building blocks (amino acids) referred to as the homeodomain . [ 11 ] Most homeodomain-containing proteins function as transcription factors and fundamentally bind and regulate the activity of different genes. The homeodomain is the segment of the protein that binds to precise regulatory regions of the target genes. [ 6 ] Genes within the homeobox family are implicated in a wide variety of significant activities during growth. [ 11 ] These activities include directing the development of limbs and organs along the anterior-posterior axis and regulating the process by which cells mature to carry out specific functions, a process known as cellular differentiation . Certain homeobox genes can act as tumor suppressors , which means they help prevent cells from growing and dividing too rapidly or in an uncontrolled way. [ 6 ]
Because homeobox genes have so many important functions, mutations in these genes are responsible for a wide array of developmental disorders. [ 11 ] Changes in certain homeobox genes often result in eye disorders and abnormal head, face, and tooth development. Additionally, increased or decreased activity of certain homeobox genes has been associated with several forms of cancer later in life. [ 6 ]
Essentially, Hox genes contribute to the specification of the three main components of limb development : the stylopod, zeugopod and autopod. [ 12 ] Certain mutations in Hox genes can potentially lead to proximal and/or distal losses along with other abnormalities. Three different models have been created for outlining the patterning of these regions. [ 12 ] The zone of polarizing activity (ZPA) in the limb bud has pattern-organizing activity through a morphogen gradient of a protein called Sonic hedgehog (Shh). [ 12 ] Sonic hedgehog is turned on in the posterior region via the early expression of HoxD genes, along with the expression of Hoxb8. Shh is maintained in the posterior through a feedback loop between the ZPA and the AER. Shh cleaves the Ci/Gli3 transcriptional repressor complex to convert the transcription factor Gli3 to an activator, which activates the transcription of HoxD genes along the anterior/posterior axis. [ 12 ] It is evident that different Hox genes are critical for proper limb development in different amphibians.
Researchers conducted a study targeting the Hox-9 to Hox-13 genes in different species of frogs and other amphibians. As an ancient tetrapod group with assorted limb types, amphibians are important for understanding the origin and diversification of limbs in land vertebrates. [ 11 ] A PCR ( polymerase chain reaction ) study was conducted in two species of each amphibian order to identify Hox-9 to Hox-13. Fifteen distinct posterior Hox genes and one retro-pseudogene were identified, and the former confirm the existence of four Hox clusters in each amphibian order. [ 11 ] Certain genes expected to occur in all tetrapods, based on the posterior Hox complement of mammals, fishes and coelacanth, were not recovered. HoxD-12 is absent in frogs and possibly other amphibians. By definition, the autopodium is the distal segment of a limb, comprising the hand or foot. Considering Hox-12’s function in autopodium development, the loss of this gene may be related to the absence of the fifth finger in frogs and salamanders. [ 11 ]
As previously mentioned, Hox genes encode transcription factors that regulate embryonic and post-embryonic developmental processes. [ 13 ] [ 14 ] The expression of Hox genes is regulated in part by the tight, spatial arrangement of conserved coding and non-coding DNA regions. [ 13 ] The potential for evolutionary alterations in Hox cluster composition has been viewed as small among vertebrates. On the other hand, recent studies of a small number of non-mammalian taxa suggest greater dissimilarity than initially considered. [ 13 ] Next-generation sequencing of genomic fragments greater than 100 kilobases from the eastern newt ( Notophthalmus viridescens ) was analyzed. It was found that the composition of Hox cluster genes was conserved relative to orthologous regions from other vertebrates, while the lengths of introns and intergenic regions varied. [ 13 ] In particular, the distance between HoxD13 and HoxD11 is longer in the newt than in orthologous regions from vertebrate species with expanded Hox clusters, and is predicted to exceed the length of the entire HoxD clusters (HoxD13–HoxD4) of humans, mice, and frogs. [ 13 ] Many recurring DNA sequences were recognized in newt Hox clusters, including an enrichment of DNA transposon-like sequences similar to non-coding genomic fragments. Researchers found the results to suggest that Hox cluster expansion and transposon accumulation are common features of non-mammalian tetrapod vertebrates. [ 13 ]
After the loss of a limb, cells draw together to form a clump known as a blastema . [ 15 ] This superficially appears undifferentiated, but cells that originated in the skin later develop into new skin, muscle cells into new muscle and cartilage cells into new cartilage. It is only the cells from just beneath the surface of the skin that are pluripotent and able to develop into any type of cell. [ 16 ] Salamander Hox genomic regions show elements of conservation and variety in comparison to other vertebrate species. Whereas the structure and organization of Hox coding genes are conserved, newt Hox clusters show variation in the lengths of introns and intergenic regions , and the HoxD13–11 region exceeds the lengths of orthologous segments even among vertebrate species with expanded Hox clusters. [ 13 ] Researchers have suggested that the HoxD13–11 expansion predated a basal salamander genome-size amplification that occurred approximately 191 million years ago, because it is preserved in all three extant amphibian groups. [ 13 ] Additional evidence supports the proposal that Hox clusters are amenable to structural evolution: variation is present in the lengths of introns and intergenic regions, along with relatively high numbers of repetitive sequences and non-random accumulations of DNA transposons in newts and lizards . [ 13 ] Researchers found that the non-random accumulation of DNA-like transposons could possibly change developmental encoding by generating sequence motifs for transcriptional control.
In conclusion, the available data from several non-mammalian tetrapods suggest that Hox structural flexibility is the rule, not the exception. [ 13 ] It is thought that this elasticity may allow for developmental variation across non-mammalian taxa, both during embryogenesis and during the redeployment of Hox genes in post-embryonic developmental processes such as metamorphosis and regeneration. [ 13 ]
Another phenomenon observed in animal models is the presence of gradient fields in early development, shown in particular in the newt , an aquatic amphibian. These "gradient fields", as they are known in developmental biology , have the ability to form the appropriate tissues they are designed to form when cells from other parts of the embryo are introduced or transplanted into specific fields. The first report of this was in 1934. Originally, the specific mechanism behind this rather bizarre phenomenon was not known, but Hox genes have since been shown to be central to this process. More specifically, a concept now known as polarity has been implicated as one of the mechanisms (though not the only one) driving this development.
Studies done by Oliver and colleagues in 1988 showed that different concentrations of the XIHbox 1 antigen were present along the anterior-posterior mesoderm of various developing animal models. [ 17 ] One conclusion is that this varied concentration of protein expression actually causes differentiation among various tissues and could be one of the culprits behind these so-called "gradient fields". [ 18 ] While the protein products of Hox genes are strongly involved in these fields and in differentiation in amphibians and reptiles, there are other causal factors involved. For example, retinoic acid and other growth factors have been shown to play a role in these gradient fields. [ 19 ] | https://en.wikipedia.org/wiki/Hox_genes_in_amphibians_and_reptiles |
In model theory , a branch of mathematical logic , the Hrushovski construction generalizes the Fraïssé limit by working with a notion of strong substructure ≤ {\displaystyle \leq } rather than ⊆ {\displaystyle \subseteq } . It can be thought of as a kind of "model-theoretic forcing", where a (usually) stable structure is created, called the generic or rich [ 1 ] model. The specifics of ≤ {\displaystyle \leq } determine various properties of the generic, with its geometric properties being of particular interest. It was initially used by Ehud Hrushovski to generate a stable structure with an "exotic" geometry, thereby refuting Zil'ber's Conjecture.
The initial applications of the Hrushovski construction refuted two conjectures and answered a third question in the negative. Specifically, we have:
Let L be a finite relational language. Fix C , a class of finite L -structures closed under isomorphisms and substructures. We want to strengthen the notion of substructure; let ≤ {\displaystyle \leq } be a relation on pairs from C satisfying:
Definition. An embedding f : A ↪ D {\displaystyle f:A\hookrightarrow D} is strong if f ( A ) ≤ D . {\displaystyle f(A)\leq D.}
Definition. The pair ( C , ≤ ) {\displaystyle (\mathbf {C} ,\leq )} has the amalgamation property if, whenever A ≤ B 1 , B 2 , {\displaystyle A\leq B_{1},B_{2},} there is a D ∈ C {\displaystyle D\in \mathbf {C} } such that each B i {\displaystyle B_{i}} embeds strongly into D {\displaystyle D} with the same image for A . {\displaystyle A.}
Definition. For infinite D {\displaystyle D} and A ∈ C , {\displaystyle A\in \mathbf {C} ,} we say A ≤ D {\displaystyle A\leq D} iff A ≤ X {\displaystyle A\leq X} for every X ∈ C {\displaystyle X\in \mathbf {C} } with A ⊆ X ⊆ D . {\displaystyle A\subseteq X\subseteq D.}
Definition. For any A ⊆ D , {\displaystyle A\subseteq D,} the closure of A {\displaystyle A} in D , {\displaystyle D,} denoted by cl D ( A ) , {\displaystyle \operatorname {cl} _{D}(A),} is the smallest superset of A {\displaystyle A} satisfying cl D ( A ) ≤ D . {\displaystyle \operatorname {cl} _{D}(A)\leq D.}
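As a concrete illustration, consider the classical ab initio example (recalled here from the general literature as an assumption, not from the sources above): L has a single ternary relation R , C is the class of finite L -structures all of whose substructures have nonnegative predimension, and strong substructure is defined through that predimension:

\delta(A) = |A| - \#\{\text{triples of } A \text{ satisfying } R\}, \qquad A \leq B \iff \delta(X) \geq \delta(A) \ \text{for all } A \subseteq X \subseteq B.

With free amalgamation this pair has the amalgamation property, and the resulting generic is ω-stable; a further restriction of C (bounding the number of copies of certain minimal extensions) collapses the construction to the strongly minimal structure used against Zil'ber's conjecture.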
Definition. A countable structure G {\displaystyle G} is ( C , ≤ ) {\displaystyle (\mathbf {C} ,\leq )} -generic if:
Theorem. If ( C , ≤ ) {\displaystyle (\mathbf {C} ,\leq )} has the amalgamation property, then there is a unique ( C , ≤ ) {\displaystyle (\mathbf {C} ,\leq )} -generic.
The existence proof proceeds in imitation of the existence proof for Fraïssé limits. The uniqueness proof comes from an easy back-and-forth argument. | https://en.wikipedia.org/wiki/Hrushovski_construction |
In mathematics , Lawson's conjecture states that the Clifford torus is the only minimally embedded torus in the 3-sphere S 3 . [ 1 ] [ 2 ] The conjecture was featured by the Australian Mathematical Society Gazette as part of the Millennium Problems series. [ 3 ]
In March 2012, Simon Brendle gave a proof of this conjecture, based on maximum principle techniques. [ 4 ]
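For reference, the Clifford torus admits a standard explicit description (a well-known parametrization, stated here from general knowledge rather than from the sources above):

\mathbb{T} = \left\{ \tfrac{1}{\sqrt{2}} \left( \cos\theta,\ \sin\theta,\ \cos\varphi,\ \sin\varphi \right) : \theta, \varphi \in [0, 2\pi) \right\} \subset S^{3} \subset \mathbb{R}^{4}.

Every such point has unit norm, so the torus lies in the 3-sphere, which it divides into two congruent solid tori.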
| https://en.wikipedia.org/wiki/Hsiang–Lawson's_conjecture |
In algebra, Hua 's identity, [ 1 ] named after Hua Luogeng , states that for any elements a , b in a division ring , a − ( a − 1 + ( b − 1 − a ) − 1 ) − 1 = a b a {\displaystyle a-\left(a^{-1}+\left(b^{-1}-a\right)^{-1}\right)^{-1}=aba} whenever a b ≠ 0 , 1 {\displaystyle ab\neq 0,1} . Replacing b {\displaystyle b} with − b − 1 {\displaystyle -b^{-1}} gives another equivalent form of the identity: ( a + a b − 1 a ) − 1 + ( a + b ) − 1 = a − 1 . {\displaystyle \left(a+ab^{-1}a\right)^{-1}+(a+b)^{-1}=a^{-1}.}
The identity is used in a proof of Hua's theorem , [ 2 ] which states that if σ {\displaystyle \sigma } is a function between division rings satisfying σ ( a + b ) = σ ( a ) + σ ( b ) , σ ( 1 ) = 1 , σ ( a − 1 ) = σ ( a ) − 1 , {\displaystyle \sigma (a+b)=\sigma (a)+\sigma (b),\quad \sigma (1)=1,\quad \sigma (a^{-1})=\sigma (a)^{-1},} then σ {\displaystyle \sigma } is a homomorphism or an antihomomorphism . This theorem is connected to the fundamental theorem of projective geometry .
One has ( a − a b a ) ( a − 1 + ( b − 1 − a ) − 1 ) = 1 − a b + a b ( b − 1 − a ) ( b − 1 − a ) − 1 = 1. {\displaystyle (a-aba)\left(a^{-1}+\left(b^{-1}-a\right)^{-1}\right)=1-ab+ab\left(b^{-1}-a\right)\left(b^{-1}-a\right)^{-1}=1.}
The proof is valid in any ring as long as a , b , a b − 1 {\displaystyle a,b,ab-1} are units . [ 3 ]
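The identity can be sanity-checked over the rational numbers, a commutative special case of a division ring (the values below are arbitrary choices satisfying ab ≠ 0, 1):

from fractions import Fraction

def hua_lhs(a, b):
    # a - (a^(-1) + (b^(-1) - a)^(-1))^(-1)
    return a - 1 / (1 / a + 1 / (1 / b - a))

# Exact rational arithmetic avoids floating-point noise.
for a, b in [(Fraction(3), Fraction(5)), (Fraction(2, 7), Fraction(-4, 3))]:
    assert hua_lhs(a, b) == a * b * a
print("Hua's identity holds on the sample values")

Of course, the interesting content of the identity is noncommutative; the check above only guards against misremembering the formula.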
| https://en.wikipedia.org/wiki/Hua's_identity |
In mathematics , Hua's lemma , [ 1 ] named for Hua Loo-keng , is an estimate for exponential sums .
It states that if P is an integral-valued polynomial of degree k , ε {\displaystyle \varepsilon } is a positive real number, and f is the function defined by
f ( x ) = ∑ n = 1 N e ( P ( n ) x ) , e ( y ) = e 2 π i y , {\displaystyle f(x)=\sum _{n=1}^{N}e(P(n)x),\qquad e(y)=e^{2\pi iy},}
then
∫ 0 1 | f ( x ) | λ d x ≪ N μ ( λ ) , {\displaystyle \int _{0}^{1}|f(x)|^{\lambda }\,dx\ll N^{\mu (\lambda )},}
where ( λ , μ ( λ ) ) {\displaystyle (\lambda ,\mu (\lambda ))} lies on a polygonal line with vertices
( 2 ν , 2 ν − ν + ε ) , ν = 1 , … , k . {\displaystyle \left(2^{\nu },2^{\nu }-\nu +\varepsilon \right),\quad \nu =1,\ldots ,k.}
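The case P(n) = n² (so k = 2 and λ = 2² = 4) can be made concrete: by orthogonality, the integral of |f(x)|⁴ over [0, 1] equals the number of solutions of n₁² + n₂² = n₃² + n₄² with 1 ≤ nᵢ ≤ N, which the lemma bounds by N^(2+ε). A small illustrative sketch:

from collections import Counter

for N in (50, 100, 200):
    # r(s) = number of representations s = i^2 + j^2 with 1 <= i, j <= N
    counts = Counter(i * i + j * j for i in range(1, N + 1) for j in range(1, N + 1))
    # By Parseval, int_0^1 |f(x)|^4 dx = sum over s of r(s)^2
    fourth_moment = sum(r * r for r in counts.values())
    print(N, fourth_moment, fourth_moment / N**2)  # ratio grows slowly, like N^epsilon

The printed ratio grows only slowly with N, consistent with the bound N^(2+ε).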
| https://en.wikipedia.org/wiki/Hua's_lemma |
Huang's law is the observation in computer science and engineering that advancements in graphics processing units (GPUs) are growing at a rate much faster than with traditional central processing units (CPUs). The observation is in contrast to Moore's law that predicted the number of transistors in a dense integrated circuit (IC) doubles about every two years. [ 1 ] Huang's law states that the performance of GPUs will more than double every two years. [ 2 ] The hypothesis is subject to questions about its validity.
The observation was made by Jensen Huang , the chief executive officer of Nvidia , at its 2018 GPU Technology Conference (GTC) held in San Jose, California . [ 3 ] He observed that Nvidia's GPUs were "25 times faster than five years ago", whereas Moore's law would have expected only a ten-fold increase. [ 2 ] As microchip components became smaller, it became harder for chip advancement to keep pace with Moore's law. [ 4 ]
In 2006, Nvidia's GPU had a 4x performance advantage over comparable CPUs. In 2018 the Nvidia GPU was 20 times faster than a comparable CPU node: the GPUs were 1.7x faster each year. Moore's law would predict a doubling every two years; however, Nvidia's GPU performance more than tripled every two years, fulfilling Huang's law. [ 5 ]
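These quoted rates can be checked with simple arithmetic (an illustrative sketch only; the inputs are the figures cited above, not independent data):

# "25 times faster than five years ago" implies a per-year growth factor of
# 25 ** (1/5), about 1.90 -- faster than Moore's-law doubling every two years.
annual = 25 ** (1 / 5)
print(f"implied annual factor: {annual:.2f}")

# "1.7x faster each year" compounds to about 2.9x every two years,
# roughly a tripling, versus 2x per two years under Moore's law.
print(f"two-year factor at 1.7x/year: {1.7 ** 2:.2f}")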
Huang's law claims that a synergy between hardware , software , and artificial intelligence makes the new 'law' possible. [ A ] "The innovation isn't just about chips," Huang said. "It's about the entire stack." He said that graphics processors especially are important to a new paradigm. [ 3 ] Elimination of bottlenecks can speed up the process and create advantages in getting to the goal. "Nvidia is a one trick pony," Huang has said. [ 7 ] According to Huang: "Accelerated computing is liberating, ... Let's say you have an airplane that has to deliver a package. It takes 12 hours to deliver it. Instead of making the plane go faster, concentrate on how to deliver the package faster, look at 3D printing at the destination." The object "… is to deliver the goal faster." [ 7 ]
For artificial intelligence tasks, Huang said that training the convolutional network AlexNet took six days on two of Nvidia's GTX 580 processors to complete the training process but only 18 minutes on a modern DGX-2 AI server, resulting in a speed-up factor of 500. Compared to Moore's law, which focuses purely on CPU transistors, Huang's law describes a combination of advances in architecture, interconnects, memory technology, and algorithms. [ 2 ] [ 6 ]
Bharath Ramsundar wrote that deep learning is being coupled with "[i]mprovements in custom architecture". For example, machine learning systems have been implemented in the blockchain world, where Bitmain disrupted "many cryptocurrencies by designing custom mining ASICs ( application-specific integrated circuits )", something that had been thought undoable. Ramsundar argued that "Nvidia's grand achievement however is in making the case that these improvement in architectures are not merely isolated victories for specific applications but perhaps broadly applicable to all of computer science", and suggested that broad harnessing of GPUs and the GPU stack ( cf. the CPU stack) can deliver "dramatic growth in deep learning architecture". The "magic" of Huang's law's promise is that, as nascent deep-learning-powered software becomes more widely available, the improvements from GPU scaling, and more generally from architectural improvements, will concretely improve the "performance and behavior of modern software stacks." [ 8 ]
There has been criticism. Journalist Joel Hruska writing in ExtremeTech in 2020 said "there is no such thing as Huang's Law", calling it an "illusion" that rests on the gains made possible by Moore's law; and that it is too soon to determine a law exists. [ 9 ] The research nonprofit Epoch has found that, between 2006 and 2021, GPU price performance (in terms of FLOPS/$) has tended to double approximately every 2.5 years, much slower than predicted by Huang's law. [ 10 ] | https://en.wikipedia.org/wiki/Huang's_law |
Huawei ICT Academy is a global university-enterprise cooperation project led by Huawei . By the end of 2024, Huawei had partnered with more than 3,000 universities to build Huawei ICT Academies, which have collectively trained more than 1.3 million students. [ 2 ] [ 3 ]
The ICT Academy Support Center (IASC) is a partner certified and authorized by Huawei to assist in the development and operation of the Huawei ICT Academy. [ 4 ]
Huawei ICT Academy provides courses, talent development, and certification standards for universities to help cultivate ICT professionals. The program includes the Huawei ICT Competition, which serves as an international competition and communication platform to foster learning and industry engagement. [ 5 ]
Huawei ICT Academy has released over 80 courses in multiple languages, including Chinese, English, French, Arabic, Portuguese, Spanish, German, Russian, Korean, Indonesian, Japanese, and Turkish.
General courses : Introductory ICT topics covering AI, algorithm and program design, computer networks, data management, and analytics.
Professional and certification courses : More advanced, industry-aligned courses covering WLAN, 5G networks, data communication, AI applications, and IoT technologies. | https://en.wikipedia.org/wiki/Huawei_ICT_Academy |
The Huawei Mate 60 (stylized as HUAWEI Mate60 ) is a series of high-end smartphones released in 2023 by the Chinese corporation Huawei as part of its Huawei Mate series . [ 3 ] It has a Kirin 9000s SoC chipset designed by HiSilicon and produced by the SMIC foundry. [ 4 ] The device supports satellite network communications and 5G . [ 5 ]
The Huawei Mate 60 is the first Huawei smartphone to feature a 7nm SoC designed and manufactured in mainland China, despite the imposition of US sanctions on the company. [ 6 ] [ 7 ]
The HiSilicon Kirin 9000S CPU is an SoC reported to consist of four high-performance cores (one at up to 2.62 GHz and three at up to 2.15 GHz) based on HiSilicon's custom TaiShan microarchitecture, and four energy-efficient cores (up to 1.53 GHz) based on ARM's Cortex 510 . [ 8 ] The smartphone also uses the Maleoon 910 graphics processing unit operating at up to 750 MHz. [ 8 ]
According to third-party testing, after the SIM card is plugged in, the phone's network standard indicator does not show a 5G connection, and Huawei does not mention 5G support in the parameter details; actual network speed tests, however, show 5G-level performance. [ 9 ] Reports also conclude that it has the ability to support 5G. [ 5 ] [ 10 ] [ 11 ]
Huawei focuses more on promoting its capabilities as a satellite communication terminal. [ citation needed ] The Mate 60 series smartphone supports satellite call functions through the Tiantong system , [ 12 ] [ 13 ] [ 14 ] [ 15 ] and short message sending and receiving functions through the Beidou system . [ 16 ] [ 17 ]
Mate 60 also supports NearLink , a short-range wireless communication technology that combines features of Bluetooth and Wi-Fi with enhanced performance, and that can in the future be used in Internet of Things and Internet of Vehicles applications. [ 18 ] [ 19 ]
At the end of 2023, the Huawei Mate 60 Pro+ had the best smartphone camera in the world according to DxOMark . [ 20 ] [ 21 ]
The Mate 60 was launched with the operating system HarmonyOS 4. In the second quarter of 2025, an update from HarmonyOS 4 to the operating system HarmonyOS NEXT 5.0.1, which does not support Android apps, was to become available. [ 22 ]
The launch of the Huawei Mate 60 garnered significant attention, and was widely touted as a victory against US government sanctions intended to stop Chinese companies from producing or obtaining advanced chips. [ 23 ] [ 24 ] Huawei's breakthrough raised concerns within the US government that technological restrictions alone were unable to prevent Huawei from obtaining advanced chips: [ 25 ] the U.S. Department of Commerce launched an investigation into the situation at the end of 2023. [ 26 ]
On 5 March 2024, a report by Counterpoint Research claimed that although overall Chinese smartphone sales were 7% lower in the first six weeks of 2024, compared with the same period in 2023, Apple’s recently launched flagship iPhone 15 was selling exceptionally badly, with Apple’s overall smartphone unit sales falling 24% in the relevant period, because buyers were turning towards devices made by Huawei. [ 27 ] According to Counterpoint Research, Huawei saw unit sales rise by 64% in the period. [ 28 ]
Huawei had told its customers that stores in Shenzhen would have only a limited number of phones to sell, which resulted in long lines outside every store. [ 29 ] On August 30, 2023, Huawei Mall launched the Mate 60 pre-order page. [ 30 ] On September 3 of that same year, the Mate 60 Pro went fully on sale. At 18:08, online platforms such as Huawei Mall, Taobao , Tmall , and JD.com sold out of all available colors just one minute after opening sales to the public. There were also lines of people waiting to buy at Huawei stores across China. [ 31 ] On September 8, Huawei Mall launched the Mate 60 Pro+ pre-order page. [ 32 ] | https://en.wikipedia.org/wiki/Huawei_Mate_60_Pro |
| https://en.wikipedia.org/wiki/Huawei_Mate_60_Pro+ |
| https://en.wikipedia.org/wiki/Huawei_Mate_60_RS |
The Huawei Mate X is an Android -based high end foldable smartphone produced by Huawei . It was unveiled at MWC 2019 on 25 February 2019 [ 2 ] and was originally scheduled to launch in June 2019, but the launch was pushed back to allow for extensive testing in light of the failures reported by users of a similar product, the Galaxy Fold from Samsung. [ 3 ] [ 4 ] The Mate X launched in China only in November 2019. [ 5 ] [ 6 ] [ 7 ] [ 8 ] Huawei announced the Mate Xs on 24 February 2020 as a hardware revision of the original Mate X; it was released in "global markets" outside China in March 2020. The device features a more durable display, improved hinge function and a redesigned cooling system, as well as the newer Kirin 990 5G SoC and Android 10 with EMUI 10. [ 9 ]
The Mate X has an 8-inch OLED display that folds outwards, resulting in a 6.6-inch main display and a 6.38-inch rear display. [ 10 ] The screen is covered by plastic and is secured by a push-button latch when closed. It uses the Kirin 980 and has 8 GB of RAM with 512 GB of UFS 2.1 storage, expandable by up to 256 GB via Huawei's proprietary Nano Memory. The device contains two batteries split between the two halves, totaling a 4500 mAh capacity, and can quick charge at 55W.
The Mate X has a bar situated on the right side of the device with four cameras and an LED flash on the back, the power button/fingerprint sensor on the left side and a USB-C port on the bottom. The cameras include a 40 MP main lens, a 16 MP wide-angle lens, an 8 MP telephoto lens and a time-of-flight sensor. The bar is roughly twice as thick as the rest of the device, giving the user a grip to hold it. Unlike the Galaxy Fold, it comes standard with 5G enabled by Huawei's own Balong 5000 modem. [ 11 ] [ 12 ] [ 13 ]
The Mate X runs on Android 9.0 "Pie" with Huawei's EMUI 9.1 skin.
| https://en.wikipedia.org/wiki/Huawei_Mate_X |
The Huawei Mate X2 is an Android -based high end foldable smartphone produced by Huawei . [ 3 ] The phone, unveiled on 22 February 2021, serves as the successor to the Mate X and Mate Xs. The phone was vastly redesigned from the previous generation, adopting a dual-screen design very similar to the Samsung Galaxy Z Fold 2 . [ 4 ]
Unlike the Mate X and Mate Xs, the Mate X2 has dual displays: a foldable 8 inch display that is concealed when folded, and a smaller 6.45 inch display on the outside. The display format is very similar to the Samsung Galaxy Z Fold 2 , which was released the previous year. [ 5 ] The quad-camera array is situated on the back, opposite the second screen, and a selfie camera is present in a cutout in the upper left-hand corner of that smaller display. Unlike the Galaxy Z Fold 2, the Mate X2 lacks a camera on the side of the main screen. The device comes in four colors: Black, White, Light Blue, and Rose Gold. [ 2 ]
| https://en.wikipedia.org/wiki/Huawei_Mate_X2 |
The Huawei Mate XT Ultimate Design is the world's first double-folding, or tri-fold, smartphone, [ 4 ] [ 5 ] and the largest foldable smartphone. [ a ] [ 6 ] [ 7 ] [ 8 ] It was announced on 10 September 2024 and made available for pre-order in China the same day. [ 9 ] [ 10 ] On 18 February 2025, it was announced globally and made available for pre-order in Kuala Lumpur, Malaysia.
RAM: 16GB
Storage: 256 GB, 512 GB, or 1 TB
Diagonal display size: 6.4, 7.9, or 10.2 inches (depends on number of unfolded panels)
Cameras: 50-megapixel main, 12-megapixel periscope telephoto, 12-megapixel ultrawide, and 8-megapixel front
Battery: 5,600 mAh
Out of the box, the Huawei Mate XT Ultimate Design dual-SIM (Nano + Nano) runs HarmonyOS 4.2. Its 10.2-inch (3,184 x 2,232 pixel) flexible LTPO OLED screen takes on two smaller sizes when folded: 7.9 inches (2,048 x 2,232 pixels) when folded once, and 6.4 inches (1,008 x 2,232 pixels) when folded twice. [ 11 ]
Huawei has released details on the processor powering the Huawei Mate XT Ultimate Design, which features 16GB of RAM. It uses the Kirin 9010 , a 7nm processor. [ 12 ] There are three storage configurations available: 256GB, 512GB, and 1TB. [ 13 ] The Huawei Mate XT Ultimate Design base edition costs CNY 19,999 (about Rs. 2,37,000, or $2,800), with 16GB of RAM and 256GB of built-in storage. Furthermore, at CNY 21,999 (approximately Rs. 2,59,500, or about $3,090) and CNY 23,999 (approximately Rs. 2,84,000, or about $3,370), respectively, the phone is available with storage capacities of 512GB and 1TB. [ 14 ] [ 15 ]
The 50-megapixel camera on the outside of the Huawei Mate XT Ultimate Design features optical image stabilization (OIS) and a variable aperture that goes from f/1.2 to f/4.0. It also has a 12-megapixel periscope telephoto camera with 5.5x optical zoom, OIS, and an f/3.4 aperture, as well as a 12-megapixel ultrawide camera with an f/2.2 aperture . The smartphone's display has an 8-megapixel camera for selfies and video chats, located in a hole-punch cutout in the center of the screen. [ 13 ]
The Huawei Mate XT Ultimate Design has a USB 3.1 Type-C port, GPS, NFC, Bluetooth 5.2, Wi-Fi 6, 5G, and 4G LTE connectivity choices. In addition to having a side-mounted fingerprint scanner for biometric verification, the phone is equipped with a powerful 5,600mAh battery that supports both fast wired charging at 66W and wireless charging at 50W. Additionally, it offers reverse charging capabilities, allowing users to charge other devices either with 5W wired reverse charging or 7.5W wireless reverse charging. It weighs 298g and has dimensions of 156.7x73x12.8mm for a single screen, 156.7x143x7.45mm for a dual screen, and 156.7x219x3.6mm for a triple screen. [ 13 ]
The device can be used with a case that has a kickstand, and a foldable keyboard with a built-in trackpad to provide a desktop PC-like experience. [ 16 ] [ 17 ] [ 18 ] [ 19 ]
Repair and maintenance costs are also high. According to GSMArena, Huawei has released the official list of Mate XT repair charges. Replacing the screen costs ¥ 7,999; if the user chooses to keep the old display, the repair costs ¥9,799. If the motherboard malfunctions, a replacement for the 1 TB model can cost as much as ¥10,699. [ 20 ] [ 21 ] | https://en.wikipedia.org/wiki/Huawei_Mate_XT |
The Huawei Watch and latest Huawei Watch 4 series are HarmonyOS -based (formerly Android Wear and LiteOS -based) smartwatches developed by Huawei . The Huawei Watch is the first smartwatch produced by Huawei. [ 1 ] It was announced at the 2015 Mobile World Congress [ 2 ] and released at IFA Berlin on September 2. [ 1 ] The Huawei Watch 3 was introduced in June 2021 [ 3 ] [ 4 ] after the United States Department of Commerce added Huawei to its Entity List in May 2019. [ 5 ]
First generation Huawei Watch's form factor is based on the circular design of traditional watches, with a 42 mm case and a 1.4-inch AMOLED screen. The screen's resolution is 400 x 400 pixels at 285.7 ppi. The case is 316L stainless steel, covered with sapphire crystal glass in front and available in six finishes: Black Leather, Steel Link Bracelet, Stainless Steel Mesh, Black-plated Link Bracelet, Alligator-pressed Brown Leather, and Rose Gold-plated Link Bracelet. [ 6 ] [ 7 ]
The watch uses a 1.2 GHz Qualcomm Snapdragon 400 APQ8026 processor. All versions of the Huawei Watch have 512MB of RAM and 4GB of internal storage, along with a gyroscope, accelerometer, vibration motor, and heart rate sensor. It supports Wi-Fi, Bluetooth 4.1 LE, and GPS positioning. [ 6 ] The watch uses a magnetic charging cradle and offers a day and a half of battery life. [ 8 ]
The first generation Huawei Watch runs on the Android Wear operating system. It works with iOS (8.2 and later) and Android (4.3 and later) devices. It currently supports Google Now voice commands and is compatible with Wear OS . The watch can process calls and receive messages and emails. [ 9 ]
In Tech Advisor ' s review, Chris Martin wrote, "This is a great looking smartwatch, although it is quite large. Specs match other Android Wear smartwatches but we're worried about the small battery." [ 10 ] The President of Huawei U.S., Xu Zhejiang, said, "It embodies Huawei's technology innovation heritage, pursuit of premium design, and integration of useful functionality that we strive to develop in each product." [ 11 ] The Phandroid said, "it is the classiest Android Wear smartwatch available right now". [ 12 ]
In March 2020, Huawei announced the Huawei Watch GT 2e powered by Huawei's proprietary OS. It launched in India in May 2020. The smartwatch features the same 1.39-inch AMOLED touch display with a 454 x 454 pixel resolution. It is powered by Huawei's in-house Kirin A1 chipset, with GPS, Bluetooth audio, and 4GB of onboard storage. The onboard software can track over 100 different sports and exercises. The watch also features oxygen saturation monitoring with an SpO2 sensor that can estimate the wearer's maximum rate of oxygen consumption. [ 13 ]
The Huawei Watch GT 2 Pro, powered by LiteOS, was released in September 2020. [ 14 ]
In June 2021, Huawei announced the Huawei Watch 3 running on HarmonyOS 2.0. [ 15 ]
Huawei launched the Huawei Watch GT 3 SE on October 29, 2022, intended as a more cost-effective iteration of the Watch GT 3 Pro for customers seeking a budget-friendly option.
Initially, the Huawei Watch GT 3 SE was available only in Vietnam and Poland, with plans for a later release in other regions. The model is available in two colors, black and green, and is priced at around €170 in Poland.
As a successor to the Huawei Watch GT 3, Huawei launched the Watch GT 4 and Watch Ultimate Design on 25 September 2023, following the Watch 4 released earlier that year in June. This version runs on HarmonyOS 4 and works with HarmonyOS, Android and iOS smartphones. [ 16 ] [ 17 ] [ 18 ]
The Huawei Watch Fit 3, successor to the Huawei Watch Fit (2020) and Huawei Watch Fit 2 (2022) series, was announced and released on May 7, 2024, with a smaller screen and HarmonyOS 4.2 preinstalled. [ 19 ] | https://en.wikipedia.org/wiki/Huawei_Watch |
A huaico or huayco (from the Quechua wayqu , meaning "depth, valley") is an Andean term for the mudslide and flash flood caused by torrential rains occurring high in the mountains, especially during the weather phenomenon known as El Niño . [ 1 ] [ 2 ]
National forests such as the San Matías–San Carlos Protection Forest were created in Peru to protect vegetation, which reduces runoff, and prevent huaicos. [ 3 ] [ 4 ]
The indigenous Mapuche residents of Lo Barnechea , in present-day Santiago Province, Chile , were called Huaicoches in their Mapudungun language : Huaico ( flash flood ) and che (people). [ 5 ]
" Cabeça d'água " (lit. "Water head") is a term in Brazil describing similar phenomena: During orographic rain , rivers in mountain ranges are often struck by very rapid flooding, which produces a downward wave that can carry large river rocks, vegetation, and people. Several fatalities have been recorded due to water heads, usually involving people not familiar with local conditions. [ 6 ]
| https://en.wikipedia.org/wiki/Huayco |
A huayra furnace or huayrachina (meaning "place through which wind blows" in Imperial Quechua ) is an Andean artisan furnace of Prehispanic design. Huayras were wind-driven and used to smelt copper. In Bolivia they were in use at least until the late 19th century and are known from a colonial-era description of 1640. The Museo Nacional de La Paz in Bolivia hosts a reconstruction of a huayra. [ 1 ] In the Atacama Desert's Tarapacá valley alone, 26 archaeological huayra sites had been identified by 2013. [ 1 ]
| https://en.wikipedia.org/wiki/Huayra_furnace |
HubMed is an alternative, third-party interface to PubMed , the database of biomedical literature produced by the National Library of Medicine . [ 1 ] It transforms data from PubMed and integrates it with data from other sources. [ 1 ] Features include relevance-ranked search results, direct citation export, tagging and graphical display of related articles. [ 2 ] [ 3 ] | https://en.wikipedia.org/wiki/HubMed |
In computer science , hub labels or the hub-labelling algorithm is a speedup technique that consumes far fewer resources than a full lookup table while remaining extremely fast at finding the shortest paths between nodes in a graph , which may represent, for example, road networks. [ 1 ]
This method can compute the shortest path between two vertices of a graph with at most two SELECT statements and the analysis of two strings.
For a directed graph such as a road network, this technique requires the prior computation of two tables from structures constructed using the method of contraction hierarchies .
Each of the two computed tables has as many rows as there are nodes in the graph. For each row (each node), a label is calculated.
A label is a string containing the distance information between the current node (the node of the row) and all the other nodes that can be reached with an upward search on the corresponding multi-level structure. The advantage of these distances is that they all represent shortest paths.
For a query, the search starts from the source in the first table and the destination in the second table, then scans the two labels for common nodes and their associated distance information. The smallest sum of distances is kept as the shortest-path result, as illustrated in the sketch below.
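The following minimal Python sketch illustrates only this query step, under the assumption that the two labels have already been precomputed and loaded as dictionaries; the function and variable names and the example data are illustrative, not taken from any published implementation.

    # A minimal sketch of a hub-label query, assuming precomputed labels
    # stored as dictionaries mapping hub -> shortest known distance.
    def hub_label_distance(forward_label, backward_label):
        """Shortest-path distance implied by a forward and a backward hub label."""
        best = float("inf")
        # Scan the smaller label and look up its hubs in the larger one.
        small, large = sorted((forward_label, backward_label), key=len)
        for hub, d1 in small.items():
            d2 = large.get(hub)
            if d2 is not None:
                best = min(best, d1 + d2)
        return best  # inf means the labels share no hub (no path found)

    # Hypothetical precomputed labels for a source s and a destination t:
    label_from_s = {"h1": 2.0, "h2": 5.0}  # dist(s, hub)
    label_to_t = {"h1": 4.0, "h3": 1.0}    # dist(hub, t)
    print(hub_label_distance(label_from_s, label_to_t))  # 6.0 via hub h1

| https://en.wikipedia.org/wiki/Hub_labels |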
The Hubbard model is an approximate model used to describe the transition between conducting and insulating systems . [ 1 ] It is particularly useful in solid-state physics . The model is named for John Hubbard .
The Hubbard model states that each electron experiences competing forces: one pushes it to tunnel to neighboring atoms, while the other pushes it away from its neighbors. [ 2 ] Its Hamiltonian thus has two terms: a kinetic term allowing for tunneling ("hopping") of particles between lattice sites and a potential term reflecting on-site interaction. The particles can either be fermions , as in Hubbard's original work, or bosons , in which case the model is referred to as the " Bose–Hubbard model ".
The Hubbard model is a useful approximation for particles in a periodic potential at sufficiently low temperatures, where all the particles may be assumed to be in the lowest Bloch band , and long-range interactions between the particles can be ignored. If interactions between particles at different sites of the lattice are included, the model is often referred to as the "extended Hubbard model". In particular, the Hubbard term, most commonly denoted by U , is applied in first-principles simulations using density functional theory (DFT). The inclusion of the Hubbard term in DFT simulations is important because it improves the prediction of electron localisation and thus prevents the incorrect prediction of metallic conduction in insulating systems. [ 3 ]
The Hubbard model introduces short-range interactions between electrons to the tight-binding model , which only includes kinetic energy (a "hopping" term) and interactions with the atoms of the lattice (an "atomic" potential). When the interaction between electrons is strong, the behavior of the Hubbard model can be qualitatively different from a tight-binding model. For example, the Hubbard model correctly predicts the existence of Mott insulators : materials that are insulating due to the strong repulsion between electrons, even though they satisfy the usual criteria for conductors, such as having an odd number of electrons per unit cell.
The model was originally proposed in 1963 to describe electrons in solids. [ 4 ] Hubbard, Martin Gutzwiller and Junjiro Kanamori each independently proposed it. [ 2 ]
Since then, it has been applied to the study of high-temperature superconductivity , quantum magnetism, and charge density waves. [ 5 ]
The Hubbard model is based on the tight-binding approximation from solid-state physics , which describes particles moving in a periodic potential, typically referred to as a lattice . For real materials, each lattice site might correspond with an ionic core, and the particles would be the valence electrons of these ions. In the tight-binding approximation, the Hamiltonian is written in terms of Wannier states , which are localized states centered on each lattice site. Wannier states on neighboring lattice sites are coupled, allowing particles on one site to "hop" to another. Mathematically, the strength of this coupling is given by a "hopping integral", or "transfer integral", between nearby sites. The system is said to be in the tight-binding limit when the strength of the hopping integrals falls off rapidly with distance. This coupling allows states associated with each lattice site to hybridize, and the eigenstates of such a crystalline system are Bloch's functions , with the energy levels divided into separated energy bands . The width of the bands depends upon the value of the hopping integral.
The Hubbard model introduces a contact interaction between particles of opposite spin on each site of the lattice. When the Hubbard model is used to describe electron systems, these interactions are expected to be repulsive, stemming from the screened Coulomb interaction . However, attractive interactions have also been frequently considered. The physics of the Hubbard model is determined by competition between the strength of the hopping integral, which characterizes the system's kinetic energy , and the strength of the interaction term. The Hubbard model can therefore explain the transition from metal to insulator in certain interacting systems. For example, it has been used to describe metal oxides as they are heated, where the corresponding increase in nearest-neighbor spacing reduces the hopping integral to the point where the on-site potential is dominant. Similarly, the Hubbard model can explain the transition from conductor to insulator in systems such as rare-earth pyrochlores as the atomic number of the rare-earth metal increases, because the lattice parameter increases (or the angle between atoms can also change) as the rare-earth element atomic number increases, thus changing the relative importance of the hopping integral compared to the on-site repulsion.
The hydrogen atom has one electron, in the so-called s orbital, which can be either spin up (↑) or spin down (↓). This orbital can be occupied by at most two electrons, one with spin up and one down (see Pauli exclusion principle ).
Under band theory , for a 1D chain of hydrogen atoms, the 1s orbital forms a continuous band, which would be exactly half-full. The 1D chain of hydrogen atoms is thus predicted to be a conductor under conventional band theory. This 1D chain is the only configuration simple enough to be solved directly. [ 2 ]
But in the case where the spacing between the hydrogen atoms is gradually increased, at some point the chain must become an insulator.
Expressed using the Hubbard model, the Hamiltonian is made up of two terms. The first term describes the kinetic energy of the system, parameterized by the hopping integral t . The second term is the on-site interaction of strength U that represents the electron repulsion. Written out in second quantization notation, the Hubbard Hamiltonian then takes the form
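$\hat{H} = -t \sum_{\langle i,j \rangle, \sigma} \left( \hat{c}^{\dagger}_{i\sigma} \hat{c}_{j\sigma} + \hat{c}^{\dagger}_{j\sigma} \hat{c}_{i\sigma} \right) + U \sum_{i} \hat{n}_{i\uparrow} \hat{n}_{i\downarrow}$

(the first sum runs over pairs of nearest-neighbour lattice sites ⟨ i , j ⟩ and spins σ)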
where $\hat{n}_{i\sigma} = \hat{c}^{\dagger}_{i\sigma} \hat{c}_{i\sigma}$ is the spin-density operator for spin σ on the i -th site. The density operator is $\hat{n}_{i} = \hat{n}_{i\uparrow} + \hat{n}_{i\downarrow}$ , and the occupation of the i -th site for the wavefunction Φ is $n_{i} = \langle \Phi | \hat{n}_{i} | \Phi \rangle$ . Typically t is taken to be positive, and U may be either positive or negative, but is assumed to be positive when considering electronic systems.
Without the contribution of the second term, the Hamiltonian resolves to the tight binding formula from regular band theory.
Including the second term yields a realistic model that also predicts a transition from conductor to insulator as the ratio of interaction to hopping, U / t , is varied. This ratio can be modified by, for example, increasing the inter-atomic spacing, which would decrease the magnitude of t without affecting U . In the limit where U / t ≫ 1, the chain simply resolves into a set of isolated magnetic moments . If U / t is not too large, the overlap integral provides for superexchange interactions between neighboring magnetic moments, which may lead to a variety of interesting magnetic correlations, such as ferromagnetic, antiferromagnetic, etc., depending on the model parameters. The one-dimensional Hubbard model was solved by Lieb and Wu using the Bethe ansatz . Essential progress was achieved in the 1990s: a hidden symmetry was discovered, and the scattering matrix , correlation functions , thermodynamics, and quantum entanglement were evaluated. [ 6 ]
Although the Hubbard model is useful in describing systems such as a 1D chain of hydrogen atoms, more complex systems may experience other effects that the Hubbard model does not consider. In general, insulators can be divided into Mott–Hubbard insulators and charge-transfer insulators .
A Mott–Hubbard insulator can be described as
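(Ni²⁺O²⁻)₂ → Ni³⁺O²⁻ + Ni¹⁺O²⁻

(illustrated here for NiO: an electron moves between the d shells of two neighbouring nickel ions)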
This can be seen as analogous to the Hubbard model for hydrogen chains, where conduction between unit cells can be described by a transfer integral.
However, it is possible for the electrons to exhibit another kind of behavior:
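Ni²⁺O²⁻ → Ni¹⁺O¹⁻

(again for NiO: an electron moves from the oxygen ligand to the nickel ion within a single unit cell)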
This is known as charge transfer and results in charge-transfer insulators . Unlike in Mott–Hubbard insulators, electron transfer happens only within a unit cell.
Both of these effects may be present and compete in complex ionic systems.
The fact that the Hubbard model has not been solved analytically in arbitrary dimensions has led to intense research into numerical methods for these strongly correlated electron systems. [ 7 ] [ 8 ] One major goal of this research is to determine the low-temperature phase diagram of this model, particularly in two-dimensions. Approximate numerical treatment of the Hubbard model on finite systems is possible via various methods.
One such method, the Lanczos algorithm , can produce static and dynamic properties of the system. Ground state calculations using this method require the storage of three vectors of the size of the number of states. The number of states scales exponentially with the size of the system, which limits the number of sites in the lattice to about 20 on 21st-century hardware. Projector and finite-temperature auxiliary-field Monte Carlo are two statistical methods that can obtain certain properties of the system. For low temperatures, convergence problems appear that lead to an exponential computational effort with decreasing temperature due to the so-called fermion sign problem .
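As a toy illustration of exact diagonalization, the Python sketch below builds the two-site Hubbard Hamiltonian at half filling in the Sz = 0 sector and diagonalizes it numerically; the basis ordering, sign convention, and function name are choices made here for the example, not taken from the sources above.

    import numpy as np

    # Two-site Hubbard model at half filling, Sz = 0 sector.
    # Basis: |up-down, 0>, |0, up-down>, |up, down>, |down, up>.
    def two_site_hubbard_spectrum(t, U):
        H = np.array([
            [U,   0.0, -t,  -t ],
            [0.0, U,   -t,  -t ],
            [-t,  -t,  0.0, 0.0],
            [-t,  -t,  0.0, 0.0],
        ])
        return np.linalg.eigvalsh(H)  # eigenvalues in ascending order

    t = 1.0
    for U in (0.0, 1.0, 10.0):
        E0 = two_site_hubbard_spectrum(t, U)[0]
        # Analytic ground state: U/2 - sqrt((U/2)**2 + 4*t**2); for U/t >> 1
        # this approaches -4*t**2/U, the superexchange scale noted above.
        print(f"U/t = {U:5.1f}  E0 = {E0:+.4f}")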
The Hubbard model can be studied within dynamical mean-field theory (DMFT). This scheme maps the Hubbard Hamiltonian onto a single-site impurity model , a mapping that is formally exact only in infinite dimensions and in finite dimensions corresponds to the exact treatment of all purely local correlations only. DMFT allows one to compute the local Green's function of the Hubbard model for a given U and a given temperature. Within DMFT, the evolution of the spectral function can be computed and the appearance of the upper and lower Hubbard bands can be observed as correlations increase.
Stacks of heterogeneous 2-dimensional transition metal dichalcogenides (TMD) have been used to simulate geometries in more than one dimension. Tungsten diselenide and tungsten sulfide were stacked. This created a moiré superlattice consisting of hexagonal supercells (repetition units defined by the relationship of the two materials). Each supercell then behaves as though it were a single atom. The distance between supercells is roughly 100x that of the atoms within them. This larger distance drastically reduces electron tunneling across supercells. [ 2 ]
Such moiré superlattices can be used to form Wigner crystals . Electrodes can be attached to regulate an electric field . The electric field controls how many electrons fill each supercell. The number of electrons per supercell effectively determines which "atom" the lattice simulates. One electron/cell behaves like hydrogen, two/cell like helium, etc. As of 2022, supercells with up to eight electrons ( oxygen ) could be simulated. One result of the simulation showed that the difference between metal and insulator is a continuous function of the electric field strength. [ 2 ]
A "backwards" stacking regime allows the creation of a Chern insulator via the anomalous quantum Hall effect (with the edges of the device acting as a conductor while the interior acted as an insulator.) The device functioned at a temperature of 5 Kelvins , far above the temperature at which the effect had first been observed. [ 2 ] | https://en.wikipedia.org/wiki/Hubbard_model |
The Hubble Heritage Project was founded in 1998 by Keith Noll, Howard Bond, Forrest Hamilton, Anne Kinney, and Zoltan Levay at the Space Telescope Science Institute . [ 1 ] Until its end in 2016, [ 1 ] the Hubble Heritage Project released, on an almost monthly basis, pictures of celestial objects like planets , stars , galaxies and galaxy clusters .
The team of astronomers and image processing specialists selected images from the Hubble Space Telescope's public data archive and planned new observations with the goal of producing aesthetically impactful, full color images that preserved the scientific integrity of the data.
The Project was recognized for its contribution to public inspiration. Achievements for the team include the Astronomical Society of the Pacific 2003 Klumpke-Roberts Award for "outstanding contributions to the public understanding and appreciation of astronomy." In 2002, two Heritage images were selected in the Rochester Institute of Technology 's "Images From Science" traveling gallery exhibit. Several images have been selected by the US and UK postal systems. In 2000, a first-class US postage stamp showing the Ring Nebula was one of five Hubble images selected to be part of a commemorative series of stamps honoring astronomer Edwin P. Hubble . [ 2 ]
The website of the project contained information about the NASA / ESA Hubble Space Telescope , and the images are now preserved on the Hubble Space Telescope's outreach website, Hubblesite. | https://en.wikipedia.org/wiki/Hubble_Heritage_Project |
The Hubble Origins Probe ( HOP ) was a proposal for an orbital telescope made in 2005 in response to the first cancellation of the fourth Hubble Space Telescope (HST) servicing mission. [ 1 ] It would have used an Atlas V rocket or similar launch vehicle to launch a much lighter, unaberrated mirror and optical telescope assembly, using the instruments that had already been built for SM4, along with a new wide-field imager. It would have cost between $700 million and $1 billion. [ 2 ]
Funding for the mission was never allocated; in February 2005, Sean O'Keefe , the NASA administrator who had cancelled SM4, resigned. Michael D. Griffin , NASA administrator after O'Keefe, reinstated the servicing missions, [ 3 ] making HOP redundant. | https://en.wikipedia.org/wiki/Hubble_Origins_Probe |
The Hubble sequence is a morphological classification scheme for galaxies published by Edwin Hubble in 1926. [ 1 ] [ 2 ] [ 3 ] [ 4 ] It is often colloquially known as the Hubble tuning-fork diagram because the shape in which it is traditionally represented resembles a tuning fork .
The tuning-fork diagram itself was invented by John Henry Reynolds and Sir James Jeans. [ 5 ]
The tuning fork scheme divided regular galaxies into three broad classes – ellipticals , lenticulars and spirals – based on their visual appearance (originally on photographic plates ). A fourth class contains galaxies with an irregular appearance. The Hubble sequence is the most commonly used system for classifying galaxies, both in professional astronomical research and in amateur astronomy .
On the left (in the sense that the sequence is usually drawn) lie the ellipticals . Elliptical galaxies have relatively smooth, featureless light distributions and appear as ellipses in photographic images. They are denoted by the letter E, followed by an integer n representing their degree of ellipticity in the sky. By convention, n is ten times the ellipticity of the galaxy, rounded to the nearest integer, where the ellipticity is defined as e = 1 − b / a for an ellipse with semi-major axis length a and semi-minor axis length b . [ 6 ] For example, a galaxy with an axis ratio b / a = 0.6 has e = 0.4 and is classified E4. The ellipticity increases from left to right on the Hubble diagram, with near-circular (E0) galaxies situated on the very left of the diagram. Note that the ellipticity of a galaxy on the sky is only indirectly related to the true 3-dimensional shape (for example, a flattened, discus-shaped galaxy can appear almost round if viewed face-on or highly elliptical if viewed edge-on). Observationally, the most flattened "elliptical" galaxies have ellipticities e = 0.7 (denoted E7). However, from studying the light profiles and the ellipticity profiles, rather than just looking at the images, it was realised in the 1960s that the E5–E7 galaxies are probably misclassified lenticular galaxies with large-scale disks seen at various inclinations to our line-of-sight. [ 7 ] [ 8 ] Observations of the kinematics of early-type galaxies further confirmed this. [ 9 ] [ 10 ] [ 11 ]
Examples of elliptical galaxies: M49 , M59 , M60 , M87 , NGC 4125 .
At the centre of the Hubble tuning fork, where the two spiral-galaxy branches and the elliptical branch join, lies an intermediate class of galaxies known as lenticulars and given the symbol S0. These galaxies consist of a bright central bulge , similar in appearance to an elliptical galaxy , surrounded by an extended, disk -like structure. Unlike spiral galaxies , the disks of lenticular galaxies have no visible spiral structure and are not actively forming stars in any significant quantity.
When simply looking at a galaxy's image, lenticular galaxies with relatively face-on disks are difficult to distinguish from ellipticals of type E0–E3, making the classification of many such galaxies uncertain. When viewed edge-on, the disk becomes more apparent and prominent dust-lanes are sometimes visible in absorption at optical wavelengths.
At the time of the initial publication of Hubble's galaxy classification scheme, the existence of lenticular galaxies was purely hypothetical. Hubble believed that they were necessary as an intermediate stage between the highly flattened "ellipticals" and spirals. Later observations (by Hubble himself, among others) showed Hubble's belief to be correct, and the S0 class was included in the definitive exposition of the Hubble sequence by Allan Sandage . [ 12 ] Missing from the Hubble sequence are the early-type galaxies with intermediate-scale disks, in between the E0 and S0 types; Martha Liller denoted them ES galaxies in 1966.
Lenticular and spiral galaxies, taken together, are often referred to as disk galaxies . The bulge-to-disk flux ratio in lenticular galaxies can take on a range of values, just as it does for each of the spiral galaxy morphological types (Sa, Sb, etc.). [ 13 ]
Examples of lenticular galaxies: M85 , M86 , NGC 1316 , NGC 2787 , NGC 5866 , Centaurus A .
On the right of the Hubble sequence diagram are two parallel branches encompassing the spiral galaxies . A spiral galaxy consists of a flattened disk , with stars forming a (usually two-armed) spiral structure, and a central concentration of stars known as the bulge . Roughly half of all spirals are also observed to have a bar-like structure, with the bar extending from the central bulge, and the arms begin at the ends of the bar. In the tuning-fork diagram, the regular spirals occupy the upper branch and are denoted by the letter S, while the lower branch contains the barred spirals, given the symbol SB. Both type of spirals are further subdivided according to the detailed appearance of their spiral structures. Membership of one of these subdivisions is indicated by adding a lower-case letter to the morphological type, as follows:
Hubble originally described three classes of spiral galaxy. This was extended by Gérard de Vaucouleurs [ 14 ] to include a fourth class:
Although strictly part of the de Vaucouleurs system of classification, the Sd class is often included in the Hubble sequence. The basic spiral types can be extended to enable finer distinctions of appearance. For example, spiral galaxies whose appearance is intermediate between two of the above classes are often identified by appending two lower-case letters to the main galaxy type (for example, Sbc for a galaxy that is intermediate between an Sb and an Sc).
Our own Milky Way is generally classed as Sc or SBc; [ 15 ] the SBc classification makes it a barred spiral with well-defined arms.
Examples of regular spiral galaxies: ( visually ) M31 (Andromeda Galaxy), M74 , M81 , M104 (Sombrero Galaxy), M51a (Whirlpool Galaxy), NGC 300 , NGC 772 .
Examples of barred spiral galaxies: M91 , M95 , NGC 1097 , NGC 1300 , NGC1672 , NGC 2536 , NGC 2903 .
Galaxies that do not fit into the Hubble sequence, because they have no regular structure (either disk-like or ellipsoidal), are termed irregular galaxies . Hubble defined two classes of irregular galaxy: [ 16 ]
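Irr I galaxies: irregular galaxies that show some structure, but not enough to place them neatly into the Hubble sequence.
Irr II galaxies: irregular galaxies that show no structure at all.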
In his extension to the Hubble sequence, de Vaucouleurs called the Irr I galaxies 'Magellanic irregulars', after the Magellanic Clouds – two satellites of the Milky Way which Hubble classified as Irr I. The discovery of a faint spiral structure [ 17 ] in the Large Magellanic Cloud led de Vaucouleurs to further divide the irregular galaxies into those that, like the LMC, show some evidence for spiral structure (these are given the symbol Sm) and those that have no obvious structure, such as the Small Magellanic Cloud (denoted Im). In the extended Hubble sequence, the Magellanic irregulars are usually placed at the end of the spiral branch of the Hubble tuning fork.
Examples of irregular galaxies: M82 , NGC 1427A , Large Magellanic Cloud , Small Magellanic Cloud .
Elliptical and lenticular galaxies are commonly referred to together as "early-type" galaxies, while spirals and irregular galaxies are referred to as "late types". This nomenclature is the source of the common, [ 18 ] but erroneous, belief that the Hubble sequence was intended to reflect a supposed evolutionary sequence, from elliptical galaxies through lenticulars to either barred or regular spirals . In fact, Hubble was clear from the beginning that no such interpretation was implied:
The nomenclature, it is emphasized, refers to position in the sequence, and temporal connotations are made at one's peril. The entire classification is purely empirical and without prejudice to theories of evolution... [ 3 ]
The evolutionary picture appears to be lent weight by the fact that the disks of spiral galaxies are observed to be home to many young stars and regions of active star formation , while elliptical galaxies are composed of predominantly old stellar populations. In fact, current evidence suggests the opposite: the early Universe appears to be dominated by spiral and irregular galaxies. In the currently favored picture of galaxy formation , present-day ellipticals formed as a result of mergers between these earlier building blocks; while some lenticular galaxies may have formed this way, others may have accreted their disks around pre-existing spheroids. [ 19 ] Some lenticular galaxies may also be evolved spiral galaxies, whose gas has been stripped away leaving no fuel for continued star formation, [ 20 ] although the galaxy LEDA 2108986 opens the debate on this.
A common criticism of the Hubble scheme is that the criteria for assigning galaxies to classes are subjective, leading to different observers assigning galaxies to different classes (although experienced observers usually agree to within less than a single Hubble type). [ 21 ] [ 22 ] Although not really a shortcoming, since the 1961 Hubble Atlas of Galaxies , [ 23 ] the primary criterion used to assign the morphological type (a, b, c, etc.) has been the nature of the spiral arms, rather than the bulge-to-disk flux ratio, and thus a range of flux ratios exists for each morphological type, [ 23 ] [ 24 ] [ 25 ] as with the lenticular galaxies.
Another criticism of the Hubble classification scheme is that, being based on the appearance of a galaxy in a two-dimensional image, the classes are only indirectly related to the true physical properties of galaxies. In particular, problems arise because of orientation effects. The same galaxy would look very different, if viewed edge-on, as opposed to a face-on or 'broadside' viewpoint. As such, the early-type sequence is poorly represented: The ES galaxies are missing from the Hubble sequence, and the E5–E7 galaxies are actually S0 galaxies. Furthermore, the barred ES and barred S0 galaxies are also absent.
Visual classifications are also less reliable for faint or distant galaxies, and the appearance of galaxies can change depending on the wavelength of light in which they are observed.
Nonetheless, the Hubble sequence is still commonly used in the field of extragalactic astronomy and Hubble types are known to correlate with many physically relevant properties of galaxies, such as luminosities , colours, masses (of stars and gas) and star formation rates. [ 26 ]
In June 2019, citizen scientists in the Galaxy Zoo project argued that the usual Hubble classification, particularly concerning spiral galaxies , may not be supported by evidence. Consequently, the scheme may need revision. [ 27 ] [ 28 ] | https://en.wikipedia.org/wiki/Hubble_sequence |
The Hubble–Reynolds law models the surface brightness of elliptical galaxies as
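$I(R) = \frac{I_0}{\left(1 + R/R_H\right)^{2}}$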
where I ( R ) is the surface brightness at radius R , I_0 is the central brightness, and R_H is the radius at which the surface brightness is diminished by a factor of 1/4. It is asymptotically similar to de Vaucouleurs' law , which is a special case of the Sérsic profile for elliptical galaxies. [ 1 ]
The law is named for the astronomers Edwin Hubble and John Henry Reynolds . It was first formulated by Reynolds in 1913 [ 2 ] from his observations of galaxies (then still known as nebulae). It was later re-derived by Hubble in 1930 [ 3 ] specifically in observations of elliptical galaxies.
| https://en.wikipedia.org/wiki/Hubble–Reynolds_law |
Huber's equation , first derived by the Polish engineer Tytus Maksymilian Huber , is a basic formula in elastic material tension calculations, an equivalent of the equation of state but applying to solids. In its simplest and most commonly used form, it reads: [ 1 ]
$\sigma_{red} = \sqrt{\sigma^{2} + 3\tau^{2}}$
where σ is the tensile stress and τ is the shear stress , measured in newtons per square meter (N/m 2 , also called pascals , Pa), while σ_red (called the reduced tension) is the resultant tension of the material.
The equation finds application in calculating the span width of bridges, their beam cross-sections, etc. [ citation needed ]
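As a quick numerical illustration, a Python sketch evaluating the formula (the stress values are chosen here purely for the example):

    import math

    def reduced_stress(sigma, tau):
        """Huber's reduced (equivalent) stress for combined tension and shear, in Pa."""
        return math.sqrt(sigma**2 + 3 * tau**2)

    # A member carrying 80 MPa of tensile stress and 40 MPa of shear stress:
    print(f"{reduced_stress(80e6, 40e6) / 1e6:.1f} MPa")  # ~105.8 MPa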
| https://en.wikipedia.org/wiki/Huber's_equation |
Hubert Jacob Paul Schoemaker (March 23, 1950 – January 1, 2006) [ 2 ] was a Dutch biotechnologist . He was a co-founder and the president of one of America's first biotechnology companies, Centocor , which was founded in 1979 to commercialise monoclonal antibodies. In 1999 he founded Neuronyx, Inc., for the manufacture of stem cells and the development of stem-cell therapies . [ 3 ]
Schoemaker was born in Deventer , Netherlands . He attended St. Bernardus School in Deventer, and Canisius College, Nijmegen . In 1969 he moved to the United States to attend the University of Notre Dame , where he majored in chemistry, graduating in May 1972. Soon after he married Ann Postorino. [ 4 ]
He then earned a doctorate in biochemistry in 1975 from the Massachusetts Institute of Technology . Supervised by Paul Schimmel, his doctoral research was an investigation of the structure function relationships of transfer RNAs and their complexes. [ 4 ]
After declining postdoctoral research positions with Stanley Cohen and Klaus Weber , Schoemaker chose to work as a research scientist in industry.
His choice was influenced by the severe disabilities suffered by his first daughter, Maureen, who was born with lissencephaly and needed specialised care. This inspired Schoemaker to become involved in commercial biotechnology. [ 1 ]
In 1976 Schoemaker joined Corning Medical, a Boston-based division of Corning Glass Works . At Corning Schoemaker rapidly progressed from being a specialist in immunoassay development for diagnostics to heading research and development. Among his achievements at the company was devising effective diagnostic kit tests for thyroid disorders. [ 1 ]
In 1979 Schoemaker became involved in the founding of Centocor together with a former Corning Medical colleague Ted Allen and the bioentrepreneur Michael Wall with whom he had some dealings while at Corning. [ 1 ] Inspired by the work of Hilary Koprowski , who developed some of the earliest monoclonal antibodies against tumour antigens and influenza viral antigens, the objective of Centocor was to commercialise monoclonal antibodies for diagnostics and therapeutics. [ 5 ] In 1980 Schoemaker joined Centocor and soon after became its first chief executive officer. [ 1 ]
From the start Centocor decided to fill its product pipeline through partnerships with research institutions and marketing alliances. Central to this policy were Schoemaker's ability to network and the company's decision to design diagnostic kits so that they were compatible with existing diagnostic systems. Under Schoemaker's leadership Centocor rapidly grew into a profitable diagnostic business. By 1985 the company had revenues of approximately $50 million. In part this success was built upon the swift approval the company won for two of its tests, one for gastrointestinal cancer and the other for hepatitis B. Between 1983 and 1986 Centocor introduced three other diagnostic tests to the market: one for ovarian cancer (the first diagnostic test available for the disease), one for breast cancer and one for colorectal cancer. [ 6 ]
Despite the company's success on the diagnostic front, Schoemaker was plunged in 1992 into efforts to save the company from bankruptcy when its first therapeutic, Centoxin, a drug designed to treat septic shock, failed to win FDA approval. [ 7 ] In part the crisis had come about as a result of the company's executives trying to go it alone in developing the drug. What saved the company was a return to the policy of collaboration. Learning from its mistakes with Centoxin, in December 1994 Centocor gained marketing approval for ReoPro, a monoclonal antibody drug for cardiovascular disease. The first therapeutic to ever receive simultaneous US and European approvals, and the second monoclonal antibody to ever win approval as a drug, ReoPro marked a milestone for both Centocor and for monoclonal antibodies therapeutics. ReoPro was to be followed in August 1998 by the approval of Centocor's Remicade , a drug to treat auto-immune disorders like Crohn's disease and rheumatoid arthritis . [ 8 ]
After selling Centocor to Johnson and Johnson in 1999, Schoemaker went on to form Neuronyx, Inc., a biotech company focused on developing cellular therapies. After Schoemaker died in 2006 the company was continued by his wife Anne Faulkner Schoemaker. Initial work focused on using stem cells taken from adult bone marrow to help regenerate heart tissue damaged during heart attacks. [ 9 ] Later the company turned direction to looking at the development of a treatment for incision wounds in women following breast cancer reconstruction surgery. The company later changed its name to Garnet BioTherapeutics. [ 10 ] Despite promising clinical results and raising more than $55 million in venture capital funding, the company was unable to continue. [ 11 ]
Schoemaker was diagnosed in 1994 with a form of brain cancer, medulloblastoma . [ 1 ] He died on January 1, 2006, at age 55. [ 2 ] | https://en.wikipedia.org/wiki/Hubert_Schoemaker |
Hubertina Dorothy Clayton Hogan (December 25, 1924 – April 14, 2017) was an American textile chemist, employed for most of her career in the United States Army's Combat Capabilities Development Command laboratories in Natick, Massachusetts .
Hubertina "Tina" Clayton was from Trenton, New Jersey , the daughter of Joseph Aloysius Clayton and Elsie Papendick Clayton (later Dietrich). Her father was a World War I veteran. She was named for her paternal grandmother, Hubertina Brandt Clayton, who lived with her family. [ 1 ] She graduated from Trenton Cathedral High School in Trenton in 1943, [ 2 ] and from Seton Hill University in 1947. [ 3 ] [ 4 ] She pursued further studies in biochemistry at the University of Pennsylvania in Philadelphia, [ 3 ] [ 4 ] and in textile chemistry at Lowell Technological Institute . [ 5 ] Her master's thesis was titled "The relationship between hydrogen ion concentration and a water-oil repellent fluorochemical finish" (1973). [ 6 ]
From 1952 into the 1970s, [ 7 ] Hogan was a research chemist at the Army's Clothing, Equipment, and Materials Engineering Laboratory in Natick, Massachusetts. [ 8 ] Her research involved textiles and their properties of repelling or absorbing environmental hazards in combat settings. [ 9 ] She "developed the analytical method for determining chrome content of feathers and down." [ 10 ] She was a member of the American Leather Chemists Association beginning in 1955. [ 11 ]
Professional publications by Hogan included articles in Textile Research Journal , [ 12 ] Journal of the American Leather Chemists Association , [ 13 ] [ 14 ] and Analytical Chemistry . [ 15 ]
Clayton married John Daniel Hogan in 1967, in Massachusetts; her husband died in 1973. She lived with her mother in Jobstown, New Jersey in the 1980s. She died in Cumming, Georgia , in 2017, aged 92 years. [ 17 ] | https://en.wikipedia.org/wiki/Hubertina_D._Hogan |
A hubometer (from hub , center of a wheel; -ometer , measure of) or hubodometer , is a device mounted on the axle of any land vehicle to measure the distance traveled by a vehicle based on the rotations of the wheel hub.
The whole device rotates with the wheel, except for an eccentrically mounted weight on an internal shaft. The weight remains pointing downwards, and drives the counting mechanism as the body of the hubometer rotates around it.
Hubometers are essential for semi-trailers, serving as the primary method to track the accumulated distance traveled throughout the vehicle's lifespan. They find application in buses, trucks, or trailers, particularly those whose tires are provided to the vehicle operator through an independent company under a "price per thousand kilometers" contract. In this arrangement, the tire company installs the hubometer to obtain accurate measurements of the distance covered.
In New Zealand , hubodometers [ 1 ] are used for the calculation of road user charges [ 2 ] for HGVs powered by a fuel not taxed at source.
At the Veeder Manufacturing Company in Hartford, Connecticut , production of cyclometers, hubodometers, and other scientific tools was underway for contracts with the United States government. Designed by Curtis Veeder [ 3 ] in 1895, the cyclometer measured the distance traveled by bicycles, [ 4 ] as Veeder was a bicycle enthusiast. He later adapted the invention to measure distance traveled by automobiles (hubodometers), [ 5 ] and built hand-turned cyclometers for use by the US Weather Bureau . The Veeder Manufacturing Company produced these tools for the US government during World War One. These devices were placed on the wheel of an automobile to measure the distance traveled by counting the rotations of its wheels.
Curtis Veeder acquired the Root Company of Bristol in 1928, forming the Veeder-Root Corporation, before retiring. [ 5 ] It has remained in operation up to the present day. Utilizing his industrial riches, Veeder constructed an intricate stone mansion on Elizabeth Street in Hartford, which currently houses the Connecticut Museum of Culture and History. [ 6 ] [ 7 ] [ 8 ]
| https://en.wikipedia.org/wiki/Hubometer |
In mathematics , Hudde's rules are two properties of polynomial roots described by Johann Hudde .
1. If r is a double root of the polynomial equation $a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n = 0$ , and if $b_0, b_1, b_2, \dots, b_n$ are numbers in arithmetic progression, then r is also a root of $a_0 b_0 + a_1 b_1 x + a_2 b_2 x^2 + \cdots + a_n b_n x^n = 0$ (a form of the modern theorem that if r is a double root of f ( x ) = 0, then r is a root of f ′( x ) = 0).
2. If for x = a the polynomial $a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$ takes on a relative maximum or minimum value, then a is a root of the equation $a_1 x + 2 a_2 x^2 + 3 a_3 x^3 + \cdots + n a_n x^n = 0$ .
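A small numerical check of the first rule, sketched in Python (the polynomial and the arithmetic progression are chosen here purely for illustration):

    import numpy as np

    # p(x) = (x - 1)^2 (x + 2) = x^3 - 3x + 2 has a double root at x = 1.
    a = [2, -3, 0, 1]        # coefficients a_0..a_3 (lowest order first)
    b = [5, 7, 9, 11]        # any arithmetic progression b_0..b_3
    c = [ai * bi for ai, bi in zip(a, b)]  # term-wise products a_k * b_k

    q = np.polynomial.Polynomial(c)
    print(q(1.0))  # ~0.0: the double root of p survives as a root of q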
Hudde was working with Frans van Schooten on a Latin edition of La Géométrie of René Descartes . In the 1659 edition of the translation, Hudde contributed two letters: "Epistola prima de Redvctione Ǣqvationvm" (pages 406 to 506), and "Epistola secvnda de Maximus et Minimus" (pages 507 to 516). These letters may be read at the Internet Archive.
| https://en.wikipedia.org/wiki/Hudde's_rules |
Hudson's equation , also known as Hudson formula , is an equation used by coastal engineers to calculate the minimum size of riprap ( armourstone ) required to provide satisfactory stability characteristics for rubble structures such as breakwaters under attack from storm wave conditions.
The equation was developed by the United States Army Corps of Engineers , Waterways Experiment Station (WES), following extensive investigations by Hudson (1953, 1959, 1961a, 1961b) [ 1 ] [ 2 ] [ 3 ]
The equation itself is:

$W = \frac{\gamma_r H^3}{K_D (S_r - 1)^3 \cot \theta}$

where:

W is the design weight of an individual armour unit,
γ_r is the specific (unit) weight of the armour material,
H is the design wave height at the toe of the structure,
K_D is a dimensionless stability coefficient depending on the armour type and the accepted damage level,
S_r is the specific gravity of the armour material relative to the water ( S_r = γ_r / γ_w ), and
θ is the angle of the structure slope measured from the horizontal.
This equation was rewritten as follows in the nineties:

$\frac{H_s}{\Delta D_{n50}} = \left(K_D \cot \alpha\right)^{1/3}$

where:

H_s is the significant wave height at the toe of the structure,
Δ = ρ_r / ρ_w − 1 is the relative buoyant density of the armourstone,
D_n50 is the nominal median diameter of the armour blocks, and
K_D and α are the stability coefficient and slope angle as above.
The armourstone may be considered stable if the stability number N_s = H_s / (Δ D_n50 ) < 1.5 to 2, with damage rapidly increasing for N_s > 3. For many years this formula was the US standard for the design of rock structures under wave action. [ 4 ] These equations may be used for preliminary design, but scale model testing (2D in a wave flume, and 3D in a wave basin) is essential before construction is undertaken.
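For illustration, a short Python sketch of preliminary armour-unit sizing with Hudson's formula; the default parameter values (rock unit weight, relative density, K_D, slope) are assumptions chosen for the example, not design guidance:

    def hudson_weight(H, gamma_r=25.9, S_r=2.64, K_D=3.0, cot_alpha=1.5):
        """Median armour-unit weight W (kN) for a design wave height H (m).

        gamma_r: unit weight of rock (kN/m^3), S_r: relative density,
        K_D: stability coefficient, cot_alpha: cotangent of the slope angle.
        """
        return gamma_r * H**3 / (K_D * (S_r - 1) ** 3 * cot_alpha)

    print(f"W = {hudson_weight(H=3.0):.1f} kN")  # roughly 35 kN (~3.6 t) of rock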
The drawback of the Hudson formula is that it is only valid for relatively steep waves (so for waves during storms, and less so for swell waves ). It is also not valid for breakwaters and shore protections with an impermeable core, and it cannot estimate the degree of damage a breakwater suffers during a storm. Therefore, the Van der Meer formula or a variant of it is nowadays used for armourstone. For concrete breakwater elements, a variant of the Hudson formula is often used. [ 5 ] | https://en.wikipedia.org/wiki/Hudson's_equation |
The Huggins Equation is an empirical equation used to relate the reduced viscosity of a dilute polymer solution to the concentration of the polymer in solution. It is named after Maurice L. Huggins . The Huggins equation states:
$\frac{\eta_s}{c} = [\eta] + k_H [\eta]^2 c$
where η_s is the specific viscosity of the solution at a given concentration of polymer, [η] is the intrinsic viscosity of the solution, k_H is the Huggins coefficient, and c is the concentration of the polymer in solution. [ 1 ]
The Huggins equation is valid when [η] c is much smaller than 1, indicating a dilute solution. [ 2 ] The Huggins coefficient used in this equation is an indicator of the strength of a solvent. The coefficient typically ranges from k_H ≈ 0.3 (for strong solvents) to k_H ≈ 0.5 (for poor solvents). [ 3 ]
The Huggins equation is a useful tool because it can be used to determine the intrinsic viscosity [η] from experimental data by plotting η_s / c versus the concentration of the solution, c : the intercept of the fitted line gives [η]. [ 4 ] [ 5 ]
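A minimal sketch of that fitting procedure in Python, using synthetic data generated for the example (the numbers are not measurements):

    import numpy as np

    c = np.array([0.002, 0.004, 0.006, 0.008])       # concentration (g/mL)
    eta_sp = np.array([0.104, 0.216, 0.336, 0.464])  # specific viscosity (synthetic)

    # Huggins: eta_sp / c = [eta] + k_H * [eta]**2 * c, i.e. linear in c.
    slope, intercept = np.polyfit(c, eta_sp / c, 1)
    eta_intrinsic = intercept            # [eta] from the intercept
    k_H = slope / eta_intrinsic**2       # Huggins coefficient from the slope
    print(f"[eta] = {eta_intrinsic:.1f} mL/g, k_H = {k_H:.2f}")  # 50.0 mL/g, 0.40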
| https://en.wikipedia.org/wiki/Huggins_equation |
A Hughes–Ingold symbol describes various details of the reaction mechanism and overall result of a chemical reaction . [ 1 ] For example, an S N 2 reaction is a substitution reaction ("S") by a nucleophilic process ("N") that is bimolecular ("2" molecular entities involved) in its rate-determining step . By contrast, an E2 reaction is an elimination reaction , an S E 2 reaction involves electrophilic substitution , and an S N 1 reaction is unimolecular. The system is named for British chemists Edward D. Hughes and Christopher Kelk Ingold .
| https://en.wikipedia.org/wiki/Hughes–Ingold_symbol |
Hugo Müller (29 July 1833 – 23 May 1915) was an Anglo-German analytical chemist , botanist and industrialist. He was a Fellow of the Royal Society . He is known for being the first person to synthesize hexachlorobenzene .
Hugo Müller was born on 29 July 1833 [ 1 ] in Tirschenreuth , Germany. [ 2 ] He studied chemistry at Leipzig University . He was a student of Friedrich Wöhler at the University of Göttingen , where he earned his PhD. [ 2 ] He was also an assistant to Justus von Liebig at the University of Munich . [ 2 ]
Müller moved to the United Kingdom in 1855 to work with Warren De la Rue , as recommended by Liebig. [ 2 ] De la Rue and Müller developed a chloride of silver battery. [ 2 ]
He earned a Legum Doctor degree from the University of St Andrews and a Doctor of Science degree from the University of Manchester . [ 2 ] He also worked with John Stenhouse . Müller discovered that iodine could be used as a catalyst in chlorination . [ 2 ]
In 1864, Müller discovered how to transform mono-functional carboxylic acids into di-functional ones by introducing an additional carboxyl group. Using this technique Müller managed to synthesize succinic acid from propionic acid . [ 3 ] Around the same time, Hermann Kolbe had independently discovered a similar reaction, transforming acetic acid into malonic acid , a process Müller had also investigated. [ 3 ] During a meeting of the Chemical Society , Müller commented on an earlier publication by Maxwell Simpson on obtaining succinic acid from ethylene , but Müller still claimed priority over the propionic-to-succinic acid process. [ 3 ] After Edward Frankland mediated the dispute through correspondence, Müller and Kolbe agreed to publish their findings side by side in the Journal of the Chemical Society . [ 3 ] The priority of the discovery was also disputed by Hans Hübner , who had partially published his work on the same reaction, and by Friedrich Konrad Beilstein , who accused Müller, Kolbe and others of unethical practices. [ 3 ]
Kolbe and Müller agreed to collaborate, with Kolbe reserving exclusivity on cyanoacetic acids. However, the same year Müller also published the synthesis of trichloroacetic acid with potassium cyanide without notifying Kolbe. [ 3 ] From that point forward, the two ceased all communication. [ 3 ]
The same year, Müller also synthesized the compound hexachlorobenzene by the reaction of benzene and antimony pentachloride . [ 4 ]
Müller became a consultant for the De La Rue company before leaving academic research to pursue an industrial career. [ 2 ] He became a partner of the company and remained there until 1902. [ 2 ] The company manufactured postage stamps and bank notes . [ 5 ] [ 2 ] After retiring, he continued his research at the Davy-Faraday Laboratory . [ 2 ] His work extended to horticulture . [ 1 ] Studying species of Primula , he discovered that their bloom is related to the presence of flavone . [ 1 ]
Müller married Elizabeth Russell in 1878; she survived him and died in 1931. [ 1 ] They had two daughters. [ 2 ] Müller was naturalized as a British citizen after his marriage. [ 2 ]
His work on horticulture led him to develop a vast garden in his home at Camberley , Surrey , England . [ 2 ]
Hugo Müller died on 23 May 1915 at his home. [ 2 ]
Since his school days Müller had collected mineral specimens, which his widow Elizabeth presented to the Oxford University Museum of Natural History in 1915. [ 5 ]
Müller was elected as a Fellow of the Royal Society on 7 June 1866. [ 1 ] He was their treasurer but resigned from the society when World War I started, due to personal convictions. [ 2 ]
He joined the Chemical Society in 1859, becoming its foreign secretary from 1869 to 1885 and president of the society from 1885 to 1887. [ 2 ] He was also a member of the Royal Horticultural Society . [ 2 ] | https://en.wikipedia.org/wiki/Hugo_Müller |
Hugo Van Heuverswyn (born 1948) is a Belgian molecular biologist , biotech pioneer, entrepreneur and businessman. He was the chairman of the VIB , Flanders Institute for Biotechnology , from its inception in 1995 until 2013.
Hugo Van Heuverswyn obtained a chemistry degree at the University of Ghent in 1971 and a PhD in molecular biology in 1978 in the group of Prof. Walter Fiers, a first-tier pioneer in the field of modern biotechnology, [ 1 ] in whose laboratory Van Heuverswyn and his colleagues achieved the first-ever decoding of a complete viral DNA genome (SV40). From 1979 to 1981 he was a visiting professor at the Oswaldo Cruz Foundation in Rio de Janeiro, where he established, together with Dr. Carlos Morel, the first DNA sequencing laboratory in Latin America.
Scientist and Entrepreneur:
After his return to Belgium in 1981, he was invited to set up Biogent , a Belgian subsidiary of Biogen (one of the first biotech companies worldwide), to pursue the molecular cloning of TNF (tumor necrosis factor) and other cytokines, a new class of biomolecules which at that time had just started to be discovered but today are revolutionizing the fields of immunology and anti-cancer therapy. In 1985, Hugo Van Heuverswyn initiated, together with Rudi Mariën, the creation of INNOGENETICS , at a time when venture capital was still non-existent in Belgium. He continued to serve as CEO and board member until 2000, two years after INNX had become the first biotech company to list on EASDAQ (a new European stock exchange for growth companies) in 1998. At that time INNX had grown to over 700 employees, had filed numerous patents, had put more than 50 highly innovative IVD products on the market, and realized a yearly turnover of over 50 million euro. In 2001, Hugo Van Heuverswyn founded, together with two former INNX colleagues, BioMARIC , a new Belgian biotech company active in prevention and control of infectious diseases, where he acts as CEO to this day.
In addition to his private activities, Hugo Van Heuverswyn was also continuously involved in building the biotechnology ecosystem in Flanders. For 18 years, from its inception in 1995 until 2013, he was chairman of the Flemish Institute of Biotechnology (VIB) , during which period he created, together with Rudy DeKeyser, flanders.bio (2003), the association of Flemish biotech companies, where he still serves as a board member. In 2016 he co-founded Flanders Vaccine , a non-profit organization aiming to stimulate the translation of basic research into clinical applications in the field of vaccination and immunotherapy. At present his main interests lie in translational research, One Health and personalized medicine, particularly in the field of microbiota and infection and immunity. | https://en.wikipedia.org/wiki/Hugo_Van_Heuverswyn |
Hui Wu ( Chinese : 吴慧 ; pinyin : Wú Huì ) [ 1 ] is a Chinese materials chemist and engineer. She is a senior scientist at the National Institute of Standards and Technology Center for Neutron Research. Wu researches the synthesis, structure, solid state chemistry, and properties of complex oxides and hydrides. She received the Department of Commerce Bronze Medal for producing an entirely new route to synthesizing hydrogen-storage materials for fuel cells based on the complex chemistry of amines and boranes.
In 1999, Wu completed a dual B.S. in materials science and engineering and environmental science and engineering at Tsinghua University (THU). In 2001, she earned an M.S. in materials science and engineering at THU. She conducted her master's thesis, Structure Characterization and Performance of Porous Chemical Adsorbents for Indoor-Air Purification , under advisor Feiyu Kang . Wu completed a Ph.D. in materials science and engineering at the University of Pennsylvania in 2005. Her dissertation was titled Non-stoichiometric Ordered Perovskites for Microwave Applications . Wu's doctoral advisor was Peter K. Davies . From 2005 to 2007, Wu was a postdoctoral research associate in the National Institute of Standards and Technology Center for Neutron Research. Her postdoctoral advisor was Terrence J. Udovic . She researched the development and processing of novel metal hydride materials for hydrogen storage and studied hydrogen-storage materials using neutron scattering techniques. [ 2 ]
From 2007 to 2015, Wu worked as a scientist in the NIST Center for Neutron Research and for the department of materials science and engineering at University of Maryland, College Park . In 2015, she was promoted to senior scientist at NIST. [ 2 ]
Wu's background is in advanced materials development, including the development of novel materials for energy-related applications (hydrogen storage and fuel cells), synthesis and characterization of new materials for nanoelectronics applications, and the study of materials used for indoor air purification. Wu is also experienced in solid state physics and chemistry, including X-ray and neutron scattering, electron microscopy, and thermal analysis. [ 2 ]
In 2004, Wu won the best poster award at the Materials Research Society Solid State Chemistry Symposium. In 2005, she won the University of Pennsylvania S. J. Stein Prize for superior achievement in the field of new or unique materials or applications for materials in electronics. Wu was honored for outstanding poster presentation at the 14th annual poster competition of the NIST chapter of Sigma Xi. [ 2 ] In 2010, Wu received the Sidhu Award from the Pittsburgh Diffraction Society for her exceptional contribution to the structural investigation of new materials for energy storage applications. In 2017, she received the Department of Commerce Bronze Medal for producing an entirely new route to synthesizing hydrogen-storage materials for fuel cells based on the complex chemistry of amines and boranes. In 2018, she was recognized by Clarivate Analytics as a highly cited researcher in the Cross-Field category. [ 3 ]
This article incorporates public domain material from the National Institute of Standards and Technology | https://en.wikipedia.org/wiki/Hui_Wu |
The Huihui Lifa ( Traditional Chinese : 回回歷法; Simplified Chinese : 回回历法; pinyin : Huíhuí Lìfǎ) was a set of astronomical tables published throughout China from the time of the Ming Dynasty in the late 14th century through the early 18th century. The tables were based on a translation into Chinese of the Zij (Islamic astronomical tables), the title Huihui Lifa literally meaning "Muslim System of Calendar Astronomy".
Around 1384, during the Ming Dynasty , the Hongwu Emperor ordered the Chinese translation and compilation of Islamic astronomical tables , a task that was carried out by the scholars Mashayihei (مشایخی), a Muslim astronomer, and Wu Bozong , a Chinese scholar-official.
These tables came to be known as the Huihui Lifa ( Muslim System of Calendrical Astronomy ), and were published in China a number of times until the early 18th century, [ 1 ] despite the fact that the Qing Dynasty had officially abandoned the tradition of Chinese-Islamic astronomy in 1659. [ 1 ]
In the early Joseon period, the Islamic calendar served as a basis for calendar reform owing to its superior accuracy over the existing Chinese-based calendars. [ 2 ] A Korean translation of the Huihui Lifa was studied in Korea under the Joseon Dynasty during the time of Sejong in the 15th century. [ 1 ] The tradition of Chinese-Islamic astronomy survived in Korea until the early 19th century. [ 1 ] | https://en.wikipedia.org/wiki/Huihui_Lifa |
Hulbert Harrington Warner (1842–1923) was a Rochester, New York businessman and philanthropist who made his fortune from the sales of patent medicine .
He was born near Syracuse, New York , in a small settlement called Warners . Warners had been named for Warner's grandfather, Seth, who had moved there in 1807 from Stockbridge, Massachusetts . In 1865, Warner moved to Michigan to engage in the stove and hardware business. In 1870, Warner moved to Rochester and entered into the first business that would make him a millionaire, selling fire- and burglar-proof safes. The demand for safes had escalated dramatically after the discovery of oil in western Pennsylvania; by decade's end, it is estimated that Warner and his sales agents had sold 60,000 safes worth an estimated $10 million ($326 million in present terms).
Warner was married twice. He married Martha L. Keeney of Skaneateles, New York in 1864. Martha died suddenly in 1871, and is buried at Lakeview Cemetery in Skaneateles.
In 1872, Warner remarried, this time to Emily Olive Stoddard of Michigan. Although the details of his second marriage remain vague, it appears that Warner and Stoddard separated in 1893. It appears that the couple may have had one child, Maud, but there is little information available about her.
Warner later lived with Christina de Martinez of Mexico. Warner and Martinez were never actually married (and it appears that Warner and Stoddard were never divorced), but Martinez took Warner's name as her own and they resided in the same household after Warner moved to Minneapolis.
Based upon the history recounted in Warner's early almanacs, Warner used a portion of the wealth he accumulated from the safe business to purchase the formula for a patent medicine from Dr. Charles Craig of Rochester. Warner developed an unexpectedly severe case of Bright's disease , a kidney disease. While close to death, Warner used a vegetable concoction sold by Craig and was restored to health. Based upon his admiration for Craig's Original Kidney Cure, Warner purchased the formula and the rights to the product and in 1879 introduced Warner's Safe Kidney & Liver Cure.
Although Warner's early publications herald Craig's potion as a revelation, references to Craig soon disappeared from Warner's advertising, and ultimately the two ended up in court when Craig attempted to reenter the patent medicine business with a cure remarkably similar to the one he had sold to Warner.
In addition to his Kidney & Liver Cure, Warner also introduced a Safe Nervine, Safe Diabetes Cure, Safe Tonic, Safe Tonic Bitters, Safe Bitters, Safe Rheumatic Cure, Safe Pills, and later his Tippecanoe Bitters. The Warner's patent medicine products, with the exception of the Safe Pills and Tippecanoe, appeared in a unique bottle, which featured an embossed safe on the front. This drew upon his earlier business and implied to his potential customers that his product posed no risk.
In January, 1884, Warner opened his new Rochester headquarters in a lavish multi-story building on St. Paul Street. The H. H. Warner Building became the centerpiece of his medicine production and turned out an estimated 7,000 US gallons (26,000 L) of Safe Cure per day. It also served as the headquarters for his promotional department, which published an untold number of almanacs and advertising circulars distributed with his medicines to local druggists and grocers. The Warner Building still exists today and houses a variety of businesses. Its granite façade still bears the initial "W".
In 1887, Warner introduced a new product line, which he called his Log Cabin Remedies . Unlike his Safe Cures, these products appeared in amber bottles with three slanted panels with the name of the particular remedy embossed. The bottles were in red, white, blue, and yellow boxes that featured the image of a log cabin viewed from a window. The Log Cabin Remedies did not replace the Safe Cure line; they only supplemented it. Warner realized that the nation was in a head-long race for expansion westward and his marketing pitch appealed to the American desire for self-reliance. Indeed, the entire thrust of Warner's marketing from its inception can best be described as appealing to his customer's desire to "heal thyself".
Based upon his success in marketing his Safe Cure products in the United States, Warner quickly decided to expand his operation internationally. In 1883, he opened offices in Toronto , Ontario , Canada and London , England . The bottles from Toronto have become known as "3-Cities", because they featured the names of all of his offices at that time: Rochester, London, and Toronto.
In 1887, he opened offices in Melbourne, Australia and Frankfurt, Germany . In 1888, he expanded to Pressburg in Hungary; however, this office lasted only two years. In 1891, he opened an office in Dunedin, New Zealand; the bottles from that office have become known as "4-Cities", bearing the names of Rochester, Toronto, London, and Melbourne. The Dunedin office was likely little more than a laboratory and, in fact, bottles from the Melbourne and Dunedin offices were likely produced in either Rochester or London and shipped to the southern-hemisphere offices due to the primitive state of glass production that existed there at the time.
Warner's advertising also boasts offices in Kreuzlingen, Switzerland; Brussels; and Paris. No bottles with these cities embossed have ever appeared, and only one bottle labeled in French is known to exist.
Warner's offices lasted well into the 20th century, with the Rochester office closing around 1944.
Having made millions on his second business in patent medicine, Warner embarked on various philanthropic endeavors, most notably his sponsorship of the Warner Observatory in Rochester.
Prior to opening his patent medicine business, Warner had chanced to meet Dr. Lewis Swift, an astronomer, who was ready to leave Rochester for Colorado when Warner convinced him to stay and operate his new observatory. The observatory was completed in 1883 at the then-staggering cost of $100,000 (in current terms, $3.37 million). It was equipped with a state-of-the-art telescope and was pronounced the best-equipped private observatory in the world.
The Observatory was used as a marketing centerpiece by Warner. His almanacs at the time ran essay contests and featured images of the Observatory. Swift used the Observatory to good effect, reportedly discovering six new comets and 900 nebulae.
At one point, Warner offered a reward of $200 for each new comet discovered. This offer was of great help to the young astronomer Edward Emerson Barnard , who claimed eight such awards and used the proceeds to set himself and his new wife up in a newly built house in Nashville , Tennessee .
Astronomer Swift and his telescope left Rochester in 1894. The Observatory was demolished in December, 1931.
Warner also used his money to construct a lavish mansion for himself on East Avenue in Rochester. The house fell into disuse and was later demolished.
Warner's patent-medicine empire reached its pinnacle in the late 1880s and began its gradual decline. Flush with success, Warner spent money on highly speculative investments in mining, all of which failed. In an effort to generate more capital, he took the company public, which did generate some revenue.
He sold the company to an English investment group in 1889, which incorporated it as H. H. Warner & Co., Ltd . Warner bought up 80 percent of the English stock, and took the position of managing director of the company. However, Warner's speculative investments and his waning interest in the business took their toll. When the Panic of 1893 hit, Warner was unable to generate additional capital through stock sales, forcing him into bankruptcy. The American branch of his company was sold to a group of Rochester investors, who continued to operate it as the Warner's Safe Remedies Company .
After failing in Rochester, Warner lived for a time in New York City , then moved to Philadelphia , where he may have attempted to start a new patent medicine business, although this is unconfirmed. He ultimately landed in Minneapolis , where he promoted the Nuera Manufacturing Co. , also known as Neura Remedy Co., with the help of his common-law wife Christina de Martinez. He also operated the Warner Renowned Remedies Company , which produced some products offered by mail order.
Warner died in January, 1923, and is buried alongside his first wife, Martha, in Lakeview Cemetery in Skaneateles, NY.
His legacy is the patent medicine empire that produced remedies sold around the world, as well as the bottles in which those remedies were contained, which are now prized by collectors. | https://en.wikipedia.org/wiki/Hulbert_Harrington_Warner
The Hulett was an ore unloader that was widely used on the Great Lakes of North America. It was unsuited to tidewater ports because it could not adjust for rising and falling tides, although one was used in New York City.
The Hulett was invented by George Hulett of Conneaut, Ohio , in the late 19th century; he received a patent for his invention in 1898. The first working machine was built the following year at Conneaut Harbor. [ 1 ] It was steam powered and successful, and many more were built along the Great Lakes , especially on the southern shore of Lake Erie , to unload boats full of taconite from the iron mines near Lake Superior . John W. Ahlberg converted the Huletts in Conneaut to electricity in the 1920s. Substantial improvements were later made on the design by Samuel T. Wellman . [ citation needed ]
The Hulett machine revolutionised iron ore shipment on the Great Lakes. Previous methods of unloading lake freighters , involving hoists and buckets and much hand labor, cost approximately 18¢/ton; unloading with Huletts cost only 6¢/ton (in 1901 dollars). Unloading took only 5 to 10 hours, as opposed to days for previous methods. Lake boat designs changed to accommodate the Hulett unloader and became much larger, [ 1 ] doubling in length and quadrupling in capacity. [ citation needed ]
By 1913, 54 Hulett machines were in service. Two were built at Lake Superior (unloading coal) and five at Gary, Indiana , but the vast majority were along the shores of Lake Erie. The additional unloading capacity they brought helped ore traffic more than double between 1900 and 1912. A total of approximately 75 Huletts were built. [ citation needed ] One was installed in New York City to unload garbage. [ 1 ]
The lakes' Huletts were used until about 1992, by which time self-unloading boats had become standard on the American side of the lake. [ 1 ] All have since been dismantled. [ citation needed ] In 1999, only six remained; the group of four at Whiskey Island in Cleveland was the oldest. Another set was used to unload barges of coal in South Chicago until 2002; these were demolished in the spring of 2010. [ citation needed ]
In spite of the Cleveland machines being on the National Register of Historic Places and designated as a Historic Mechanical Engineering Landmark , they were demolished in 2000 by the Cleveland-Cuyahoga County Port Authority to enable development of the underlying land. [ 2 ] The Port Authority disassembled and retained two Huletts, to enable their reconstruction at another site, but the reconstruction never started. [ 1 ] In March 2024 the Port Authority initially chose a demolition contractor that intended to reassemble one unloader in nearby Canton , [ 3 ] but later in the month it chose another contractor, which expects to salvage the arms and buckets. [ 4 ] The last two remaining Huletts were scrapped in June 2024.
The electrically powered Hulett unloader rode on two parallel tracks along the docks , one near the edge and one further back, ordinarily with four railroad tracks in between. Steel towers, riding on wheeled trucks, supported girders that spanned the railroad tracks.
Along these girders ran a carriage which could move toward or away from the dock face. This in turn carried a large walking beam which could be raised or lowered; at the dock end of this was a vertical column with a large scoop bucket on the end. A parallel beam was mounted halfway down this column to keep the column vertical as it was raised or lowered. The machine's operator, stationed in the vertical column just above the bucket for maximum cargo visibility, could rotate it to any angle. The scoop bucket was lowered into the ship's hold, closed to capture a quantity (approximately 10 tons) of ore, raised, and moved back toward the dock. The workmen who operated the Hulett unloaders were known as Ore Hogs . [ 5 ]
To reduce the required motion of the carriage, a moving receiving hopper ran between the main girders. It was moved to the front for the main bucket to discharge its load, and then moved back to dump it into a waiting railroad car, or out onto a cantilever frame at the back to dump the load onto a stockpile.
The Hulett could move along the dock to align with the holds on an ore boat. When the hold was almost empty, the Hulett could not finish the job itself. Workmen entered the hold and shoveled the remaining ore into the Hulett's bucket. In a later development, a wheeled excavator was chained to the Hulett's bucket and lowered into the hold to fill the Hulett. | https://en.wikipedia.org/wiki/Hulett |
A hull is the watertight body of a ship , boat , submarine , or flying boat . The hull may be open at the top (as in a dinghy ), or it may be fully or partially covered with a deck. Atop the deck may be a deckhouse and other superstructures , such as a funnel, derrick, or mast . The line where the hull meets the water surface is called the waterline .
There is a wide variety of hull types that are chosen for suitability for different usages, the hull shape being dependent upon the needs of the design. Shapes range from a nearly perfect box, in the case of scow barges, to a needle-sharp surface of revolution in the case of a racing multihull sailboat. The shape is chosen to strike a balance between cost, hydrostatic considerations (accommodation, load carrying, and stability), hydrodynamics (speed, power requirements, and motion and behavior in a seaway) and special considerations for the ship's role, such as the rounded bow of an icebreaker or the flat bottom of a landing craft .
In a typical modern steel ship, the hull will have watertight decks, and major transverse members called bulkheads . There may also be intermediate members such as girders , stringers and webs , and minor members called ordinary transverse frames, frames, or longitudinals, depending on the structural arrangement . The uppermost continuous deck may be called the "upper deck", "weather deck", "spar deck", " main deck ", or simply "deck". The particular name given depends on the context—the type of ship or boat, the arrangement, or even where it sails.
In a typical wooden sailboat, the hull is constructed of wooden planking, supported by transverse frames (often referred to as ribs) and bulkheads, which are further tied together by longitudinal stringers or ceiling. Often but not always there is a centerline longitudinal member called a keel . In fiberglass or composite hulls, the structure may resemble wooden or steel vessels to some extent, or be of a monocoque arrangement. In many cases, composite hulls are built by sandwiching thin fiber-reinforced skins over a lightweight but reasonably rigid core of foam, balsa wood, impregnated paper honeycomb, or other material.
Perhaps the earliest proper hulls were built by the Ancient Egyptians , who by 3000 BC knew how to assemble wooden planks into a hull. [ 1 ]
Hulls come in many varieties and can have a composite shape (e.g., a fine entry forward and an inverted bell shape aft), but are grouped primarily into chined and smooth-curve hulls.
At present, the most widely used form is the round bilge hull. [ 2 ]
With a small payload, such a craft has less of its hull below the waterline , giving less resistance and more speed. With a greater payload, resistance is greater and speed lower, but the hull's outward bend provides smoother performance in waves. As such, the inverted bell shape is a popular form used with planing hulls. [ citation needed ] [ clarification needed ]
A chined hull does not have a smooth rounded transition between bottom and sides. Instead, its contours are interrupted by sharp angles where predominantly longitudinal panels of the hull meet. The sharper the intersection (the more acute the angle), the "harder" the chine. More than one chine per side is possible.
The Cajun "pirogue" is an example of a craft with hard chines.
Benefits of this type of hull include potentially lower production cost and a (usually) fairly flat bottom, making the boat faster at planing . A hard chined hull resists rolling (in smooth water) more than does a hull with rounded bilges: as the hull moves through the water, the chine creates turbulence and drag that resist the rolling motion, whereas the rounded bilge offers less flow resistance around the turn. In rough seas, this can make the boat roll more, as the motion drags first down, then up, on a chine; round-bilge boats are more seakindly in waves as a result.
Chined hulls may have one of three shapes: flat-bottom, multi-chine, or V/arc-bottom.
Each of these chine hulls has its own unique characteristics and use.

The flat-bottom hull has high initial stability but high drag. To counter the high drag, hull forms are narrow and sometimes severely tapered at bow and stern. [ citation needed ] This leads to poor stability when heeled in a sailboat. [ citation needed ] This is often countered by using heavy interior ballast on sailing versions. They are best suited to sheltered inshore waters. Early racing power boats were fine forward and flat aft. This produced maximum lift and a smooth, fast ride in flat water, but this hull form is easily unsettled in waves.

The multi-chine hull approximates a curved hull form. It has less drag than a flat-bottom boat. Multi-chine hulls are more complex to build but produce a more seaworthy hull form. They are usually displacement hulls.

V or arc-bottom chine boats have a V shape between 6° and 23°, called the deadrise angle. The flatter shape of a 6-degree hull will plane with less wind or a lower-horsepower engine but will pound more in waves. The deep V form (between 18 and 23 degrees) is only suited to high-powered planing boats; these require more powerful engines to lift the boat onto the plane but give a faster, smoother ride in waves. Displacement chined hulls have more wetted surface area, hence more drag, than an equivalent round-hull form, for any given displacement.
Smooth curve hulls are hulls that, like curved hulls, use a centreboard or an attached keel. [ citation needed ]
Semi-round bilge hulls are somewhat less round. The advantage of the semi-round is that it is a compromise between the S-bottom [ clarification needed ] and the chined hull. Typical examples of a semi-round bilge hull can be found in the Centaur and Laser sailing dinghies .
S-bottom hulls are sailing boat hulls with a midships transverse half-section shaped like an s . [ clarification needed ] In the s-bottom, the hull has round bilges and merges smoothly with the keel, and there are no sharp corners on the hull sides between the keel centreline and the sheer line. Boats with this hull form may have a long fixed deep keel, or a long shallow fixed keel with a centreboard swing keel inside. Ballast may be internal, external, or a combination. This hull form was most popular in the late 19th and early to mid 20th centuries. [ citation needed ] Examples of small sailboats that use this s-shape are the Yngling and Randmeer .
Hull forms are defined as follows:
Block measures that define the principal dimensions: the length (overall and on the waterline), the beam, the draft, and the moulded depth.
Form derivatives that are calculated from the shape and the block measures, such as the displaced volume and the wetted surface area.
Coefficients [ 5 ] help compare hull forms as well; the most common are the block coefficient (C_b), the prismatic coefficient (C_p), the midship section coefficient (C_m), and the waterplane area coefficient (C_w).
Note: C_b = C_p · C_m
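The identity follows directly from the definitions of the coefficients. The Python sketch below illustrates it; the waterline length, beam, draft, displaced volume, and midship section area are hypothetical figures chosen for illustration, not data for any real vessel.

```python
# Principal dimensions and form measures (hypothetical cargo-ship values).
LWL, B, T = 120.0, 20.0, 8.0   # waterline length, beam, draft (m)
V = 12480.0                    # displaced volume (m^3)
AM = 152.0                     # immersed midship section area (m^2)

Cb = V / (LWL * B * T)  # block coefficient: fullness vs. the enclosing box
Cm = AM / (B * T)       # midship coefficient: fullness of the midship section
Cp = V / (AM * LWL)     # prismatic coefficient: fullness vs. a prism of area AM

print(f"Cb = {Cb:.3f}, Cm = {Cm:.3f}, Cp = {Cp:.3f}")
print(f"Cp * Cm = {Cp * Cm:.3f}  # equals Cb, as noted above")
```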
Use of computer-aided design has superseded paper-based methods of ship design that relied on manual calculations and lines drawing. Since the early 1990s, a variety of commercial and freeware software packages specialized for naval architecture have been developed that provide 3D drafting capabilities combined with calculation modules for hydrostatics and hydrodynamics. These may be referred to as geometric modeling systems for naval architecture. [ 6 ] | https://en.wikipedia.org/wiki/Hull_(watercraft) |
Hull speed or displacement speed is the speed at which the wavelength of a vessel's bow wave is equal to the waterline length of the vessel. As boat speed increases from rest, the wavelength of the bow wave increases, and usually its crest-to-trough dimension (height) increases as well. When hull speed is exceeded, a vessel in displacement mode will appear to be climbing up the back of its bow wave.
From a technical perspective, at hull speed the bow and stern waves interfere constructively, creating relatively large waves, and thus a relatively large value of wave drag. Ship drag for a displacement hull increases smoothly with speed as hull speed is approached and exceeded, often with no noticeable inflection at hull speed.
The concept of hull speed is not used in modern naval architecture , where considerations of speed/length ratio or Froude number are considered more helpful.
As a ship moves in the water, it creates standing waves that oppose its movement . This effect increases dramatically in full-formed hulls at a Froude number of about 0.35 (which corresponds to a speed/length ratio (see below for definition) of slightly less than 1.20 knot·ft −½ ) because of the rapid increase of resistance from the transverse wave train. When the Froude number grows to ~0.40 (speed/length ratio ~1.35), the wave-making resistance increases further from the divergent wave train. This trend of increase in wave-making resistance continues up to a Froude number of ~0.45 (speed/length ratio ~1.50), and peaks at a Froude number of ~0.50 (speed/length ratio ~1.70).
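The quoted correspondences between Froude number and speed/length ratio follow from unit conversion alone. The short Python sketch below, offered as an illustration rather than a naval-architecture tool, converts one to the other using standard constants.

```python
import math

G = 9.80665             # m/s^2, standard gravity
M_PER_FT = 0.3048       # metres per foot
MS_PER_KNOT = 0.514444  # m/s per knot

def speed_length_ratio(froude: float) -> float:
    """Convert a Froude number Fr = v / sqrt(g * L) into the dimensional
    speed/length ratio v[knots] / sqrt(L[feet])."""
    return froude * math.sqrt(G * M_PER_FT) / MS_PER_KNOT

for fr in (0.35, 0.40, 0.45, 0.50):
    print(f"Fr = {fr:.2f} -> speed/length ratio = {speed_length_ratio(fr):.2f} kn/ft^0.5")
# Prints roughly 1.18, 1.34, 1.51, and 1.68, in line with the
# approximate ratios quoted above.
```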
This very sharp rise in resistance at speed/length ratio around 1.3 to 1.5 probably seemed insurmountable in early sailing ships and so became an apparent barrier. This led to the concept of hull speed.
Hull speed can be calculated by the following formula:
v_hull ≈ 1.34 × √(L_WL)

where v_hull is the hull speed in knots and L_WL is the length of the waterline in feet.
If the length of waterline is given in metres and desired hull speed in knots, the coefficient is 2.43 kn·m −½ . The constant may be given as 1.34 to 1.51 knot·ft −½ in imperial units (depending on the source), or 4.50 to 5.07 km·h −1 ·m −½ in metric units, or 1.25 to 1.41 m·s −1 ·m −½ in SI units.
The ratio of speed to √(L_WL) is often called the "speed/length ratio", even though it is a ratio of speed to the square root of length.
Because the hull speed is related to the length of the boat and the wavelength of the wave it produces as it moves through water, there is another formula that arrives at the same values for hull speed based on the waterline length.
v_hull = √(g · L_WL / 2π)

where g is the acceleration due to gravity and L_WL is the waterline length, in consistent units (for example, metres per second squared and metres, giving the speed in metres per second).
This equation is the same as the equation used to calculate the speed of surface water waves in deep water. It dramatically simplifies the units on the constant before the radical in the empirical equation, while giving a deeper understanding of the principles at play.
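The equivalence can be checked numerically: converting the deep-water wave celerity into knots per square root of a foot (or metre) of waterline recovers the empirical coefficients quoted earlier. The Python sketch below is illustrative only.

```python
import math

G = 9.80665             # m/s^2, standard gravity
M_PER_FT = 0.3048       # metres per foot
MS_PER_KNOT = 0.514444  # m/s per knot

def hull_speed_knots(lwl_m: float) -> float:
    """Hull speed in knots from waterline length in metres, using the
    deep-water wave celerity v = sqrt(g * L / (2 * pi))."""
    return math.sqrt(G * lwl_m / (2 * math.pi)) / MS_PER_KNOT

# Evaluating at a waterline of 1 ft (0.3048 m) and of 1 m recovers the
# empirical coefficients, since the formula scales with sqrt(L):
print(f"{hull_speed_knots(M_PER_FT):.2f} kn/ft^0.5")  # ~1.34
print(f"{hull_speed_knots(1.0):.2f} kn/m^0.5")        # ~2.43

# Example: a 10 m waterline gives a hull speed of about 7.7 knots.
print(f"{hull_speed_knots(10.0):.1f} kn")
```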
Wave-making resistance depends on the proportions and shape of the hull: many modern displacement designs can exceed their hull speed even without planing . These include hulls with very fine ends, long hulls with relatively narrow beam and wave-piercing designs. Such hull forms are commonly used by canoes , competitive rowing boats , catamarans , and fast ferries . For example, racing kayaks can exceed hull speed by more than 100% even though they do not plane.
Heavy boats with hulls designed for planing generally cannot exceed hull speed without planing.
Ultra light displacement boats are designed to plane and thereby circumvent the limitations of hull speed.
Semi-displacement hulls are usually intermediate between these two extremes. | https://en.wikipedia.org/wiki/Hull_speed |
Human-centered design ( HCD , also human-centred design , as used in ISO standards) is an approach to problem-solving commonly used in process, product, service and system design, management, and engineering frameworks that develops solutions to problems by involving the human perspective in all steps of the problem-solving process. Human involvement typically takes place in initially observing the problem within context, brainstorming, conceptualizing, developing concepts and implementing the solution.
Human-centered design is an approach to interactive systems development that aims to make systems usable and useful by focusing on the users, their needs and requirements, and by applying human factors/ergonomics, and usability knowledge and techniques. This approach enhances effectiveness and efficiency, improves human well-being, user satisfaction, accessibility and sustainability; and counteracts possible adverse effects of use on human health, safety and performance.
Human-centered design builds upon participatory action research by moving beyond participants' involvement and producing solutions to problems rather than solely documenting them. Initial stages usually revolve around immersion, observing, and contextual framing, in which innovators immerse themselves in the problem and community. Subsequent stages may then focus on community brainstorming, modeling and prototyping and implementation in community spaces. [ 1 ] Human-centered design can be seen as a philosophy that focuses on analyzing the needs of the user through extensive research. User-oriented design is capable of driving innovation and encourages the practice of iterative design, which can create small improvements in existing products and newer products, thus giving room for the potential to transform markets. [ 2 ]
Human-centered design has its origins at the intersection of numerous fields including engineering, psychology, anthropology and the arts. As an approach to creative problem-solving in technical and business fields its origins are often traced to the founding of the Stanford University design program in 1958 by Professor John E. Arnold who first proposed the idea that engineering design should be human-centered. This work coincided with the rise of creativity techniques and the subsequent design methods movement in the 1960s. Since then, as creative design processes and methods have been increasingly popularized for business purposes, the standardized and defined human-centered design has often been mistakenly equated with the more vaguely outlined " design thinking ".
In Architect or Bee? , Mike Cooley coined the term "human-centered systems" in the context of the transition in his profession from traditional drafting at a drawing board to computer-aided design . [ 2 ] Human-centered systems, [ 3 ] as used in economics, computing and design, aim to preserve or enhance human skills, in both manual and office work, in environments in which technology tends to undermine the skills that people use in their work. [ 4 ] [ 5 ] [ 6 ]
Human centeredness asserts firstly, that we must always put people before machines, however complex or elegant that machine might be, and, secondly, it marvels and delights at the ability and ingenuity of human beings. The Human Centered Systems movement looks sensitively at these forms of science and technology which meet our cultural, historical and societal requirements, and seeks to develop more appropriate forms of technology to meet our long-term aspirations. In the Human Centered System, there exists a symbiotic relation between the human and the machine, in which the human being would handle the qualitative subjective judgements and the machine the quantitative elements. It involves a radical redesign of the interface technologies and at a philosophical level, the objective is to provide tools (in the Heidegger sense) which would support human skill and ingenuity rather than machines which would objectivise that knowledge
The user-oriented framework relies heavily on user participation and user feedback in the planning process. [ 8 ] Users are able to provide new perspective and ideas, which can be considered in a new round of improvements and changes. [ 8 ] It is said that increased user participation in the design process can garner a more comprehensive understanding of the design issues, due to more contextual and emotional transparency between researcher and participant. [ 8 ] A key element of human centered design is applied ethnography, which is a research method adopted from cultural anthropology . [ 8 ] This research method requires researchers to be fully immersed in the observation so that implicit details are also recorded. [ 8 ]
Even after decades of thought on Human Centered Design, management and finance systems still believe that "another's liability is one's asset" could be true of porous human bodies, embedded in nature and inseparable from each other. On the contrary, our biological and ecological interconnections ensure that "another's liability is our liability". Sustainable business systems can only emerge if these biological and ecological interconnections are accepted and accounted for.
Using a human-centered approach to design and development has substantial economic and social benefits for users, employers and suppliers. Highly usable systems and products tend to be more successful both technically and commercially. In some areas, such as consumer products, purchasers will pay a premium for well-designed products and systems. Support and help-desk costs are reduced when users can understand and use products without additional assistance. In most countries, employers and suppliers have legal obligations to protect users from risks to their health and safety, and human-centered methods can reduce these risks (e.g. musculoskeletal risks). Systems designed using human-centered methods also improve quality in a number of ways.
Human-centered design may be utilized in multiple fields, including sociological sciences and technology. It has been noted for its ability to consider human dignity, access, and ability roles when developing solutions. [ 9 ] Because of this, human-centered design may more fully incorporate culturally sound, human-informed, and appropriate solutions to problems in a variety of fields rather than solely product and technology-based fields. Because human-centered design focuses on the human experience, researchers and designers can address "issues of social justice and inclusion and encourage ethical, reflexive design." [ 10 ]
Human-centered design arises from the underlying principles of human factors; the two concepts are closely interconnected. Human factors is about discovering the attributes of human cognition and behavior that are important for making technology work for people. [ 11 ] It is what allows humans as a species to innovate over time. [ dubious – discuss ] Human-centered design was used to discover that the BlackBerry had poorer usability than the iPhone, and that important controls on a panel that look too similar will be easily confused and may increase the risk of human error.
An important distinction between human-centered design and any other form of design is that human-centered design is not just about aesthetics, and is not always designing for interfaces. It could be designing for controls in the world, tasks in the world, hardware, decision-making, or cognition. [ 11 ] For instance, a nurse tired from a long shift might confuse the pumps through which a bag of penicillin is administered to a patient. In this case, the human-centered design would encompass a task redesign, a possible institute policy redesign, and an equipment redesign.
Typically, human-centered design is more focused on "methodologies and techniques for interacting with people in such a manner as to facilitate the detection of meanings, desires and needs, either by verbal or non-verbal means." [ 12 ] In contrast, user-centered design is another approach and framework of processes which considers the human role in product use, but focuses largely on the production of interactive technology designed around the user's physical attributes rather than social problem-solving. [ 13 ]
In the context of health-seeking behaviors , human-centered design can be used to understand why people do or do not seek out health services , even when those services are available and affordable, making it a powerful tool for improving health-seeking behaviors. This understanding can then be used to develop interventions to address the barriers and promote desired behaviors. Demand-related challenges associated with the acceptability, responsiveness, and quality of services can be addressed by working directly with users to understand their needs and perspectives, [ 14 ] so HCD can help in designing interventions that are more likely to be effective. Integrating the principles of human-centered design with anti-racism practices can help address existing health disparities in the healthcare system and center the needs of people who belong to marginalized communities. This type of design can create fair and equitable health outcomes for marginalized communities, who are often left out due to unmet needs. Researchers who apply human-centered design thoughtfully approach the needs of populations that are traditionally excluded, thereby dismantling oppressive systems that have reinforced, and continue to reinforce, structural racism. [ 15 ]
Human-centered design has been both lauded and criticised for its ability to actively solve problems with affected communities. Criticisms include the inability of human-centered design to push the boundaries of available technology by solely tailoring to the demands of present-day solutions, rather than focus on possible future solutions. [ 16 ] In addition, human-centered design often considers context, but does not offer tailored approaches for very specific groups of people. New research on innovative approaches include youth-centered health design, which focuses on youth as the central aspect with particular needs and limitations not always addressed by human-centered design approaches. [ 17 ] Nevertheless, human-centered design that doesn't reflect very specific groups of users and their needs is human-centered design poorly executed, since the principles of human-system interaction require the reflection of those specified needs.
Whilst users are very important for some types of innovation (namely incremental innovation), focusing too much on the user may result in producing an outdated or no-longer-necessary product or service. This is because the insights gained from studying the user today relate to today's users and the environment they live in today. If your solution will be available only two or three years from now, your users may have developed new preferences, wants and needs by then. [ 18 ]
Human-Centered AI (HCAI) is a methodical approach to AI system design that prioritizes human values and requirements. [ 19 ] This method places a strong emphasis on boosting human self-efficacy, encouraging innovation, guaranteeing accountability, and promoting social interaction. By putting these human goals first, HCAI also tackles important concerns like privacy, security, environmental preservation, social justice, and human rights. This represents a dramatic change from an algorithmic approach to a human-centered system design, which has been compared to a second Copernican Revolution .
HCAI introduces a two-dimensional framework that demonstrates the possibility of combining high levels of human control with high levels of automation. [ 19 ] This framework suggests a move away from viewing AI as autonomous teammates, instead positioning AI as powerful tools and tele-operated devices that empower users.
Furthermore, HCAI proposes a three-level governance structure to enhance the reliability and trustworthiness of AI systems. At the first level, software engineering teams are encouraged to develop robust and dependable systems. At the second level, managers are urged to cultivate a safety culture across their organizations. At the third level, industry-wide certification can help establish standards that promote trustworthy HCAI systems.
These concepts are designed to be dynamic, inviting challenge, refinement, and extension to accommodate new technologies. They aim to reframe design discussions for AI products and services, offering an opportunity to restart and reshape these conversations. The ultimate goal is to deliver greater benefits to individuals, families, communities, businesses, and society, ensuring that AI developments align with human values and societal goals.
By joining two people-centered approaches, Human-Centered Design (HCD) and Community-Based Participatory Research (CBPR) offer a fresh way to tackle challenging real-world issues. While CBPR has been used in academic and community partnerships to address health inequities through social action and empowerment, HCD has historically been used in the business sector to guide the creation of products and services. [ 20 ] Although the public sector has just started using HCD concepts to inform public policy, more research is still needed to fully understand its cycle and how it might be strategically applied to health promotion. By combining CBPR's emphasis on community trust and collaboration with HCD's emphasis on user-centric design, this integration provides a complementary approach. The potential of these approaches to improve public health outcomes is demonstrated by CBPR initiatives, such as those that try to lower the spread of STIs and improve handwashing among farmworkers. The combined strategy can result in more lasting and successful health interventions by addressing pertinent concerns, establishing partnerships, and involving community members.
In order to improve quality and safety in healthcare, Human Factors and Ergonomics ( HFE ) are integrated using the Systems Engineering Initiative for Patient Safety (SEIPS) models. These models are based on a human-centered design approach, which gives patients' and healthcare practitioners' wants and experiences top priority when designing systems. By extending the "process" component to handle the intricacies of contemporary healthcare delivery, SEIPS 3.0 builds upon this. [ 21 ]
The idea of the patient journey is introduced by the SEIPS 3.0 model as healthcare becomes more dispersed across different locations and eras. This journey-centric approach emphasizes a comprehensive view of patients' experiences over time by mapping their contacts with various care venues. By emphasizing the patient journey, SEIPS 3.0 emphasizes how crucial it is to create systems that can adapt to patients' changing demands in order to provide seamless, secure, and encouraging care. [ 21 ]
In order to implement human-centered design in SEIPS 3.0, HFE professionals must take into account a variety of viewpoints and encourage sincere involvement from all parties involved, including patients, caregivers, and medical professionals. In order to increase interactions across various healthcare settings and capture the intricacies of patient experiences, this approach calls for creative techniques. By putting people first, SEIPS 3.0 seeks to develop healthcare systems that improve the general happiness and well-being of both patients and caregivers in addition to preventing harm. | https://en.wikipedia.org/wiki/Human-centered_design
Human-guided migration or human-led migration is a method of restoring migratory routes of birds bred by humans for their reintroduction into the wild. [ 1 ] [ 2 ]
It is a technique especially used for endangered species in which the loss of individuals and territories has caused the disappearance of their migratory routes. To prevent their extinction , captive breeding has been needed, so their subsequent release into the wild requires teaching these routes to the juveniles . [ 2 ] [ 3 ]
Hand-reared juveniles have been imprinted on their adoptive parents, whom they follow. After a period of flight training and adaptation to the aircraft and its noise, the juveniles accompany their adoptive parents by flying to their wintering grounds. [ 1 ] [ 2 ]
This technique has been used in birds such as the northern bald ibis and the whooping crane , among other species. [ 3 ] [ 4 ] [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Human-guided_migration |
Human-in-the-loop ( HITL ) is used in multiple contexts. It can be defined as a model requiring human interaction. [ 1 ] [ 2 ] HITL is associated with modeling and simulation (M&S) in the live, virtual, and constructive taxonomy . HITL, along with the related human-on-the-loop , is also used in relation to lethal autonomous weapons . [ 3 ] Further, HITL is used in the context of machine learning . [ 4 ]
In machine learning, HITL is used in the sense of humans aiding the computer in making the correct decisions in building a model. [ 4 ] HITL improves machine learning over random sampling by selecting the most critical data needed to refine the model. [ 5 ]
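A common concrete form of this is active learning with uncertainty sampling: the model is trained on a small labeled set, then asks a human to label the unlabeled examples it is least sure about. The following sketch uses scikit-learn on synthetic placeholder data; the feature dimensions, batch size, and model choice are arbitrary illustrative assumptions, not a prescribed recipe.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder data: a small labeled seed set and a large unlabeled pool.
X_labeled = rng.normal(size=(20, 5))
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_pool = rng.normal(size=(1000, 5))

model = LogisticRegression().fit(X_labeled, y_labeled)

# Uncertainty sampling: flag the pool items whose predicted probability is
# closest to 0.5 -- labeling these refines the model faster than labeling
# a random sample of the pool.
proba = model.predict_proba(X_pool)[:, 1]
uncertainty = np.abs(proba - 0.5)
ask_human = np.argsort(uncertainty)[:10]  # indices to route to a human annotator

print("Pool items to send for human labeling:", ask_human)
```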
In simulation, HITL models may conform to human factors requirements as in the case of a mockup . In this type of simulation a human is always part of the simulation and consequently influences the outcome in such a way that is difficult if not impossible to reproduce exactly. HITL also readily allows for the identification of problems and requirements that may not be easily identified by other means of simulation.
HITL is often referred to as interactive simulation, which is a special kind of physical simulation in which physical simulations include human operators, such as in a flight or a driving simulator .
Human-in-the-loop allows the user to change the outcome of an event or process. The immersion effectively contributes to a positive transfer of acquired skills into the real world. This can be demonstrated by trainees utilizing flight simulators in preparation to become pilots.
HITL also allows for the acquisition of knowledge regarding how a new process may affect a particular event. Utilizing HITL allows participants to interact with realistic models and attempt to perform as they would in an actual scenario. HITL simulations bring to the surface issues that would not otherwise be apparent until after a new process has been deployed. A real-world example of HITL simulation as an evaluation tool is its usage by the Federal Aviation Administration (FAA) to allow air traffic controllers to test new automation procedures by directing the activities of simulated air traffic while monitoring the effect of the newly implemented procedures. [ 6 ]
As with most processes, there is always the possibility of human error , which can only be reproduced using HITL simulation. Although much can be done to automate systems, humans typically still need to take the information provided by a system to determine the next course of action based on their judgment and experience. Intelligent systems can only go so far in certain circumstances to automate a process; only humans in the simulation can accurately judge the final design. Tabletop simulation may be useful in the very early stages of project development for the purpose of collecting data to set broad parameters, but the important decisions require human-in-the-loop simulation. [ 7 ]
Virtual simulations inject HITL in a central role by exercising motor control skills (e.g. flying an airplane), decision making skills (e.g. committing fire control resources to action), or communication skills (e.g. as members of a C4I team).
Although human-in-the-loop simulation can include a computer simulation in the form of a synthetic environment, computer simulation is not necessarily a form of human-in-the-loop simulation, and is often considered human-out-of-the-loop simulation. In this particular case, a computer model's behavior is modified according to a set of initial parameters. The results of the model differ from the results stemming from a true human-in-the-loop simulation because the results can easily be replicated time and time again, by simply providing identical parameters.
Three classifications of the degree of human control of autonomous weapon systems were laid out by Bonnie Docherty in a 2012 Human Rights Watch report. [ 3 ] | https://en.wikipedia.org/wiki/Human-in-the-loop |
Human-rating certification , also known as man-rating or crew-rating , is the certification of a spacecraft or launch vehicle as capable of safely transporting humans. There is no one particular standard for human-rating a spacecraft or launch vehicle, and the various entities that launch or plan to launch such spacecraft specify requirements for their particular systems to be human-rated.
One entity that applies human rating is the US government civilian space agency, NASA . NASA's human-rating requires not just that a system be designed to be tolerant of failure and to protect the crew even if an unrecoverable failure occurs, but also that astronauts aboard a human-rated spacecraft have some control over it. [ 1 ] This set of technical requirements and the associated certification process for crewed space systems are in addition to the standards and requirements for all of NASA's space flight programs. [ 1 ]
The development of the Space Shuttle and the International Space Station pre-dated later NASA human-rating requirements. After the Challenger and Columbia accidents, the criteria used by NASA for human-rating spacecraft were made more stringent. [ 2 ]
The NASA CCP human-rating standards require that the probability of a loss on ascent not exceed 1 in 500, and that the probability of a loss on descent not exceed 1 in 500. The overall mission loss risk, which includes vehicle risk from micrometeorites and orbital debris while in orbit for up to 210 days, is required to be no more than 1 in 270. [ 3 ] Maximum sustained acceleration is limited to 3 g . [ 3 ]
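For a rough sense of how per-phase requirements relate to an overall figure, independent phase risks combine multiplicatively, and for small risks the combined probability is close to the simple sum. The sketch below is a back-of-the-envelope illustration only; treating the phases as independent is an assumption for the example, not NASA's actual risk methodology.

```python
def combined_loss_probability(*phase_risks: float) -> float:
    """Probability of a loss in at least one phase, assuming the
    phase risks are statistically independent."""
    survival = 1.0
    for p in phase_risks:
        survival *= 1.0 - p
    return 1.0 - survival

p_ascent = 1 / 500
p_descent = 1 / 500

# Ascent and descent combined: 1 - (1 - 1/500)^2, roughly 1 in 250.
p_ad = combined_loss_probability(p_ascent, p_descent)
print(f"ascent + descent: 1 in {1 / p_ad:.0f}")

# The separate 1-in-270 overall requirement additionally covers up to
# 210 days of on-orbit exposure to micrometeorites and debris.
print(f"overall requirement: {1 / 270:.4f} (1 in 270)")
```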
The United Launch Alliance (ULA) published a paper submitted to AIAA detailing the modifications to its Delta IV and Atlas V launch vehicles that would be needed to conform to NASA Standard 8705.2B. [ 2 ] ULA has since been awarded $6.7 million under NASA's Commercial Crew Development (CCDev) program for development of an Emergency Detection System , one of the final pieces that would be needed to make these launchers suitable for human spaceflight. [ 4 ]
SpaceX is using Dragon 2 , launched on a Falcon 9 Block 5 rocket, to deliver crew to the ISS. Dragon 2 made its first uncrewed test flight in March 2019 and has been conducting crewed flights since Demo-2 in May 2020. [ 5 ]
Boeing's Starliner spacecraft has also been part of the Commercial Crew Program since the Boeing Crew Flight Test (CFT) in June 2024.
The China Manned Space Agency (CMSA) operates and oversees crewed spaceflight activities launched from China, including the Shenzhou spacecraft and Tiangong space station .
Roscosmos , a Russian state corporation , conducts and oversees human spaceflights launched from Russia. This includes Soyuz spacecraft and the Russian Orbital Segment of the International Space Station.
The space agency of India, ISRO , oversees planned human spaceflights launched from India. [ 6 ]
On 13 February 2024 the CE-20 engine, after a series of ground qualification tests, was certified for crewed Gaganyaan spaceflight missions. [ 7 ] The CE-20 will power the upper stage of the human-rated version of the LVM3 (formerly known as GSLV Mk III) launch vehicle.
Each private spaceflight system builder typically sets up their own specific criteria to be met before carrying humans on a space transport system. | https://en.wikipedia.org/wiki/Human-rating_certification |
The Human-transcriptome DataBase for Alternative Splicing ( H-DBAS ) is a database of alternatively spliced human transcripts based on H-Invitational . [ 1 ] | https://en.wikipedia.org/wiki/Human-transcriptome_DataBase_for_Alternative_Splicing
The DNA Based Technology (Use and Regulation) Bill, 2017 or the Human DNA Profiling Bill is a proposed legislation in India . [ 1 ] The bill will allow the government to establish a National DNA Data Bank and a DNA Profiling Board, and use the data for various specified forensic purposes. The bill has raised concerns of privacy among citizen rights groups. [ 2 ] The bill was expected to be presented in the parliament in the monsoon session of 2015. [ 3 ]
The bill was originally proposed in 2007, and drafting began in 2012. [ 2 ] The draft bill was prepared by the Department of Biotechnology . [ 4 ] The bill proposes to form a National DNA Data Bank and a DNA Profiling Board, and use the data for various specified purposes. [ 2 ]
The proposed DNA Profiling Board will consist of experts in molecular biology, human genetics, population biology, bioethics, social sciences, law and criminal justice. The Board will define standards and controls for DNA profiling. It will also certify labs and handle access to the data by law enforcement agencies. [ 2 ] There will be similar bodies at state level. [ 4 ]
The bill will also create a National DNA Data Bank, which will collect data from offenders, suspects, missing persons, unidentified dead bodies and volunteers. [ 5 ] It will profile and store DNA data in criminal cases like homicide, sexual assault, and other crimes. The data will be restricted and will be available only to the accused or the suspect. A person facing imprisonment or a death sentence can send a request for DNA profiling of related evidence to the court that convicted him. [ 4 ]
The bill has the provision that any misuse of data will carry a punishment of up to three years' imprisonment and a fine. [ 4 ]
The bill has been criticised for not addressing the concerns of privacy. [ 2 ] The Citizens Forum for Civil Liberties has opposed the bill on privacy concerns and sent a complaint to the National Human Rights Commission of India in 2012. [ 4 ]
In October 2012, an expert committee headed by Ajit Prakash Shah presented its report. It said that there should be safeguards to prevent illegal collection and use of DNA data, and safeguards to prevent the proposed body from misusing the data. The report also suggested a mechanism by which citizens can appeal against the retention of data, and a mechanism of appeal under which citizens under trial can request that a second sample be taken. Samples must be taken with consent in the case of victims and suspects; however, samples can also be taken from crime scenes. The committee noted that although the bill allows volunteers to submit samples, there is no proper procedure for obtaining consent and no mechanism by which volunteers can withdraw their data. The committee proposed that before data is given to a third party that is not an authorised agency, the person must be notified and consent must be sought. The report said the purpose for which data is being collected should be stated publicly, and the data should be destroyed after the purpose has been served and the time frame has expired. It also said bodies collecting, analysing and storing DNA data should be made to release an annual report detailing their practices and organisational structure. [ 5 ]
In 2012, a non-government organization (NGO) called Lokniti Foundation filed a public interest litigation against the government in the Supreme Court of India , Writ Petition (Civil) No.491 of 2012, stating that India does not have a national DNA database to address the issue of thousands of unclaimed dead bodies in India that are reported annually. The Supreme Court asked the government to present a roadmap detailing how the bill will implement mandatory DNA profiling of unclaimed bodies. [ 6 ]
The government replied to the concern by stating that the bill will establish a national database which will help identify unclaimed bodies and return rescued children and adults to their families. The database would store DNA profiles from the relatives of missing persons, and also from convicts, the accused and volunteers. [ 7 ]
The government also added in the affidavit that India lacks the trained personnel to implement it. [ 8 ] It also stated that the Centre for DNA Fingerprinting and Diagnostics, Hyderabad, India was in the process of acquiring specialized software from the Federal Bureau of Investigation (FBI), USA for cross-matching of DNA profiling data. India has 30-40 DNA examiners, and a DNA examiner can handle 100 cases per year. However, India records 40,000 unclaimed bodies annually, so the purpose will require at least 400 DNA examiners. Since a single case costs ₹ 20,000, ₹ 80 crore will be required annually for 40,000 cases, excluding salaries of the examiners and support personnel. [ 2 ]
The government also said that the final bill would be presented in March 2015. It has been halted due to privacy concerns raised by some NGOs, and an expert committee set up by the Department of Biotechnology is addressing them. [ 7 ] [ 8 ] | https://en.wikipedia.org/wiki/Human_DNA_Profiling_Bill