id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
39,319,467 | https://en.wikipedia.org/wiki/Human%20viruses%20in%20water | Viruses are a major cause of human waterborne and water-related diseases. Waterborne diseases are caused by water that is contaminated by human and animal urine and feces containing pathogenic microorganisms. A subject can become infected through contact with or consumption of the contaminated water. Viruses affect all living organisms, from single-celled plants, bacteria and animals to the highest forms of plants and animals, including human beings. Within a specific kingdom (Plantae, Animalia, Fungi, etc.), the localization of viruses colonizing the host can vary: some human viruses, for example HIV, colonize only the immune system, while influenza viruses can colonize either the upper or the lower respiratory tract depending on the type (human influenza viruses and avian influenza viruses, respectively). Different viruses can have different routes of transmission; for example, HIV is directly transferred by contaminated body fluids from an infected host into the tissue or bloodstream of a new host, while influenza is airborne and transmitted through inhalation by a new host of contaminated air containing viral particles. Research has also suggested that solid surfaces play a role in the transmission of waterborne viruses. Experiments that used the E. coli phages Qβ, fr, T4, and MS2 confirmed that viruses survive longer on a solid surface than in water. Because of this adaptation to survive longer on solid surfaces, viruses have a prolonged opportunity to infect humans. Enteric viruses primarily infect the intestinal tract through ingestion of food and water contaminated with viruses of fecal origin. Some viruses can be transmitted through all three routes of transmission.
Water virology started about half a century ago when scientists attempted to detect the polio virus in water samples. Since then, other pathogenic viruses responsible for gastroenteritis, hepatitis, and many other diseases have replaced enteroviruses as the main targets for detection in the water environment.
History
Major outbreaks
Water virology was born after a large hepatitis outbreak transmitted through water was confirmed in New Delhi between December 1955 and January 1956.
Viruses can cause massive human mortality. The smallpox virus killed an estimated 10 to 15 million people per year until 1967. Smallpox was finally eliminated in 1977 by extinction of the virus through vaccination, and the impact of viruses such as influenza, poliomyelitis and measles is mainly controlled by vaccination.
Despite advances in vaccination and prevention of viral diseases, the WHO estimated that in the 1980s a child died approximately every six seconds from diarrhea. Hepatitis A and hepatitis E, both enteric viruses, are typically transmitted by food and water. Extreme examples include the 1988 outbreak in Shanghai of 300,000 cases of hepatitis A and 25,000 cases of gastroenteritis, caused by shellfish harvested from a sewage-polluted estuary. In 1991, an outbreak of 79,000 cases of hepatitis E in Kanpur was ascribed to drinking polluted water.
A more recent outbreak of hepatitis E in South Sudan killed 88 people. Médecins Sans Frontières (MSF) said it had treated almost 4,000 patients since the outbreak was identified in South Sudan in July 2012. Hepatitis E causes liver infections, and in this outbreak it was thought to be spread by drinking water contaminated with feces. In 2014, another hepatitis E outbreak occurred in a South Sudanese refugee camp in Ethiopia. That outbreak, which began in April 2014 and ended in January 2015, claimed twenty-one lives.
Sewage-contaminated water contains many viruses; over one hundred species have been reported, and they can lead to diseases that affect human beings. For example, hepatitis, gastroenteritis, meningitis, fever, rash, and conjunctivitis can all be spread through contaminated water. More viruses are being discovered in water because of new detection and characterization methods, although only some of these viruses are human pathogens.
Virus survival in water
Viruses need a suitable environment to survive in. There are many characteristics that control the survival of viruses in water such as temperature, light, pH, salinity, organic matter, suspended solids or sediments, and air–water interfaces.
Temperature
Temperature has the greatest effect on virus survival in water, since lower temperatures are the key to longer virus survival. For instance, an article published in 2018 noted that it takes one year for certain viruses, including poliovirus and echovirus, to decrease by 5 log units at a temperature of 4°C, while it takes only a week to obtain the same result at 37°C (human body temperature). The rates of protein and nucleic acid denaturation and of the chemical reactions that destroy the viral capsid increase at higher temperatures, so viruses survive best at low temperatures. Among enteric viruses, hepatitis A, adenoviruses and parvoviruses have the highest survival rates at low temperatures.
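To make the comparison concrete: assuming first-order (log-linear) inactivation, those figures imply an inactivation rate roughly fifty times higher at body temperature than at 4°C. A minimal sketch of the arithmetic in Python, with the durations taken as the rounded one-year and one-week values quoted above:

```python
import math

def inactivation_rate(log10_reduction: float, days: float) -> float:
    """First-order inactivation rate constant k (per day) needed to
    achieve a given log10 reduction in the given number of days."""
    return log10_reduction * math.log(10) / days

# Figures quoted above: 5 log units in ~1 year at 4 °C,
# versus 5 log units in ~1 week at 37 °C.
k_cold = inactivation_rate(5, 365)  # ~0.032 per day
k_warm = inactivation_rate(5, 7)    # ~1.64 per day

print(f"k at 4 °C:  {k_cold:.3f}/day")
print(f"k at 37 °C: {k_warm:.3f}/day")
print(f"inactivation is ~{k_warm / k_cold:.0f}x faster at 37 °C")
```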
Light
Ultraviolet (UV) light, a component of sunlight, can inactivate viruses by causing cross-linking of the nucleotides in the viral genome. Many viruses in water are destroyed in the presence of sunlight. The combination of higher temperatures and more UV in the summer corresponds to shorter viral survival in summer than in winter. Double-stranded DNA viruses like adenoviruses are more resistant to UV inactivation than enteroviruses because they can use their host cell to repair the damage caused by the UV light.
Visible light can also affect virus survival by a process called photodynamic inactivation but the length and intensity of the light exposure can change the inactivation rate.
pH
The pH of most natural water is between 5 and 9, and enteric viruses are stable under these conditions. On the other hand, many enteric viruses are more stable at pH 3–5 than at pH 9 and 12. Enteroviruses can survive at pH 11–11.5 and pH 1–2, but only for short periods. Adenoviruses and rotaviruses are sensitive to pH 10 or greater, which leads to their inactivation.
Salts and metals
In general, viruses do not survive well in water with a high salt concentration. Thus, viruses can live longer in a freshwater habitat than in water bodies with high salt concentrations. Certain heavy metals are also known to be toxic to viruses.
Interface
Some types of coliphages (a type of bacteriophage) are inactivated at an air–water–solid interface. This is due to the unfolding of the virus's protein capsid (a crucial component for infecting the host). The effect is aggravated when the ionic strength of the solution increases.
Aggregation
Aggregation is one of the best-known mechanisms for virus survival. In a liquid environment, viruses tend to form clumps (aggregates). This aggregation results in a reduced rate of virus inactivation, showing that viral particles that do not aggregate are more easily destroyed. Aggregates may form spontaneously or may result from nucleation on particles in the water.
Virus removal from water
Water that is intended for drinking should go through treatment to reduce pathogenic viral and bacterial concentrations. As the density of the human population has increased, the incidence of sewage contamination of water has increased as well; thus the risk to humans from pathogenic viruses will increase if precautions are not taken.
Scientific studies suggest that the most common viruses found are caliciviruses, astroviruses and enteric viruses. Laboratories are still looking for improved methods to detect these pathogenic viruses. Reducing the amount of viruses in drinking water is accomplished by various treatments that are typically part of drinking water treatment systems in developed countries.
Water purification of surface water (water from lakes, rivers, or reservoirs) typically utilizes four treatment stages: coagulation and flocculation, sedimentation, filtration, and disinfection. The first three stages mainly remove dirt and larger particles; although filtration does reduce the number of viruses and bacteria in the water, the number of pathogens present after filtration is still considered too high for drinking water. Purification of water from underground aquifers, called ground water, may skip some of these steps, as ground water tends to have fewer contaminants than surface water. The last step, disinfection, is primarily responsible for the reduction of pathogenic viruses to safe levels in all drinking water sources. The most common disinfectants used are chlorine and chloramine. Ozone and UV light can also be used to treat large volumes of water to remove pathogens.
In an article published in 2010, it was determined that silver nanoparticles could significantly inactivate some waterborne viruses. When 5.4 ml of the silver nanoparticle preparation was added to a water virus sample, its activity decreased by 4 log units.
Prevention of water viruses
The quality of drinking water is ensured through a framework of water safety plans that ensure the safe disposal of human waste so that drinking water supplies are not contaminated. Improving the water supply, sanitation, hygiene and management of water resources could prevent ten percent of the total global disease burden.
Half of the hospital beds occupied in the world are related to the lack of safe drinking water. Unsafe water leads to 88% of global diarrhea cases and 90% of deaths from diarrheal diseases in children under five years old. Most of these deaths occur in developing countries due to poverty and the high cost of safe water. An article published in 2003 by the CDC concluded that deaths of children under five years of age caused by rotavirus on a global scale range between 352,000 and 592,000.
Approximately 1.1 billion people do not have access to improved water and 2.4 billion people do not have access to sanitation facilities. This situation leads to 2 million preventable deaths each year.
See also
Waterborne diseases
References
Viruses | Human viruses in water | [
"Biology"
] | 2,036 | [
"Viruses",
"Tree of life (biology)",
"Microorganisms"
] |
39,327,092 | https://en.wikipedia.org/wiki/Gene%20therapy%20in%20Parkinson%27s%20disease | Gene therapy in Parkinson's disease consists of creating new cells that produce a specific neurotransmitter (dopamine) or protect the neural system, or of modifying genes that are related to the disease. These cells are then transplanted into a patient with the disease. Different kinds of treatments focus on reducing the symptoms of the disease, but there is currently no cure.
Current treatments
Parkinson's disease (PD) is a progressive neurological disorder resulting from the death of cells in the substantia nigra that contain and produce dopamine. People with PD may develop disturbances in their motor activity, such as tremor or shaking, rigidity and slow movements (bradykinesia). Patients may eventually present certain psychiatric problems like depression and dementia. Current pharmacological intervention consists of the administration of L-dopa, a dopamine precursor. L-dopa therapy increases dopamine production by the remaining nigral neurons. Another therapy is deep brain electrical stimulation, which modulates the overactivity of the subthalamic nucleus caused by the loss of dopamine signaling in the striatum. However, the number of substantia nigra neurons continues to decrease with this treatment, so it becomes less effective over time.
These treatments try to reduce the symptoms of the patient, focusing on increasing the production of dopamine, but they do not cure the disease. New treatments for PD are in clinical trials, and most of them are centered on gene therapy. With this, researchers expect to compensate for the loss of dopamine or to protect the dopamine neurons from degeneration. The pharmacological and surgical therapies for PD focus on compensating for the basal ganglia dysfunction caused by the degeneration of the dopaminergic neurons of the substantia nigra.
Gene therapy background
There are many new PD treatments in clinical trials, and several of them focus on gene therapeutic approaches that compensate for the loss of dopamine or protect dopamine neurons from degeneration. There are several important reasons for focusing on gene therapy as a treatment for PD. First, there is currently no cure for the disease. Second, some genes have been identified that can modulate the neuronal phenotype or act as neuroprotective agents. Also, the brain cannot tolerate repeated injections into the region where the substantia nigra meets the striatum, the nigrostriatum. Gene therapy is therefore appealing as a single treatment, since the viral vectors used in the therapy are diffusible and capable of transducing the striatum.
Gene therapy bases
The main idea of gene therapy is to create new generations of cells that produce a particular neurotransmitter (dopamine) and then transplant these cells into patients with PD. This is because neurons cannot proliferate or be renewed, and replacing lost neurons is a process currently under investigation. Embryonic dopaminergic cells cannot be used because they are difficult to obtain, and modifications can only be made on somatic cells, not the germline. By modifying the transplanted cells, gene expression can be changed or normalized.
Types of gene therapy
There are several types of gene therapy. There are therapies with symptomatic approaches, such as the production of ectopic L-dopa, fully ectopic dopamine synthesis, ectopic L-dopa conversion, or the use of glutamic acid decarboxylase (GAD). There are also disease-modifying therapies, such as delivery of NTN or GDNF (glial cell line-derived neurotrophic factor), and the regulation of α-synuclein and Parkin gene expression. Currently the main studies use AAV2 as a vector platform, making it the standard vector for this disease, although a lentivirus has also been used. In the different types of gene therapy, the investigations encode enzymes that are necessary for dopamine synthesis, such as tyrosine hydroxylase, GTP cyclohydrolase 1 and AADC.
Symptomatic approaches
A symptomatic approach is a treatment focused on the symptoms of the patient. The first approach consists of ectopic L-dopa synthesis, the production of ectopic L-dopa in the striatum. This therapy consists of transferring the TH and GTP cyclohydrolase 1 genes into the MSNs, because endogenous AADC activity is able to convert the L-dopa into dopamine. In a 2005 experiment using tyrosine hydroxylase (TH) and GCH1 together with vectors, researchers were able to provide normal levels of L-dopa to rats. The results showed dyskinesias reduced by 85%, as well as reversal of the abnormal projections in the striatum, using the TH-GCH1 gene transfer.
Dopamine synthesis can also be fully ectopic. In this case, the enzyme AADC is in charge of converting levodopa to dopamine. In Parkinson's disease, the loss of neurons from the nigrostriatum leads to an inability to convert levodopa to dopamine. The goal of AAV2-hAADC is to restore normal levels of AADC in the striatum so that more levodopa can be converted, thereby reducing levodopa-induced dyskinesia. In 2012, a gene therapy experiment was carried out in primates testing a tyrosine hydroxylase (TH) transgene in primate astrocytes. Gene therapy was performed by transferring a full-length TH cDNA using rat TH. The results showed behavioural improvement in the monkeys that received the plasmid, unlike the control monkey.
Another type is ectopic L-dopa conversion, a gene enzyme replacement therapy that can be used to increase the efficacy of pharmacological L-dopa therapy by using AAV vectors. These AAV vectors have been designed to deliver the AADC coding sequence to the MSNs (medium spiny neurons) in the striatum so they can convert administered L-dopa into dopamine.
Another kind of symptomatic gene therapy is the use of glutamic acid decarboxylase (GAD) expression in the subthalamic nucleus. A phase 2 study published in The Lancet Neurology reported that a gene therapy called NLX-P101 dramatically reduced motor impairment. In this study, genetic material encoding glutamic acid decarboxylase (GAD) was introduced into a region of the brain related to motor function. Symptoms, which included tremor, stiffness and difficulty in movement, improved in half of the gene therapy group, while in the control group 14% improved.
Disease modifying
Therapies based on modifying the disease are also in development. The first is neurotrophic factor gene delivery. In this therapy, GDNF or NTN is used to protect the nervous system. GDNF is a factor of the TGFβ superfamily; it is secreted by astrocytes (glial cells that are in charge of the survival of midbrain dopaminergic neurons) and is homologous to NTN, persephin and artemin. Preclinical studies of the nigrostriatal dopaminergic system in relation to Parkinson's disease have shown that GDNF and NTN are potent neuroprotective agents.
Another disease-modifying technique is synuclein silencing. Some cases of PD have been related to polymorphisms in the α-synuclein promoter and to multiplication of the locus carrying the α-synuclein gene. Therefore, down-regulating α-synuclein expression could affect the development of the disease. Several viral vector-based gene delivery systems that interfere with α-synuclein expression have been explored; they depend on RNA interference (destabilizing the α-synuclein mRNA) and/or on blocking protein translation (using short hairpin RNA or microRNA directed against the α-synuclein mRNA sequence).
The discovery of the Parkin gene offers another approach to modifying PD. Mutations in the Parkin gene are responsible for the development of autosomal recessive juvenile parkinsonism, a condition with the typical symptoms and pathology of Parkinson's but with a slower progression.
New projects and investigations
More gene therapy trials have been conducted for PD (with the adeno-associated virus 2 vector). The objectives and strategies of current research are clear: to translate the experience obtained during these trials into the development of new technology for PD gene therapy.
References
Parkinson's disease
Gene therapy | Gene therapy in Parkinson's disease | [
"Engineering",
"Biology"
] | 1,989 | [
"Gene therapy",
"Genetic engineering"
] |
39,328,460 | https://en.wikipedia.org/wiki/F%C3%A9tizon%20oxidation | Fétizon oxidation is the oxidation of primary and secondary alcohols utilizing silver(I) carbonate adsorbed onto the surface of celite, also known as Fétizon's reagent, first employed by Marcel Fétizon in 1968. It is a mild reagent, suitable for both acid- and base-sensitive compounds. Its great reactivity with lactols makes the Fétizon oxidation a useful method for obtaining lactones from a diol. The reaction is inhibited significantly by polar groups within the reaction system as well as by steric hindrance of the α-hydrogen of the alcohol.
Preparation
Fétizon's reagent is typically prepared by adding silver nitrate to an aqueous solution of a carbonate, such as sodium carbonate or potassium bicarbonate, while being vigorously stirred in the presence of purified celite.
Mechanism
A proposed mechanism for the oxidation of an alcohol by Fétizon's reagent involves single electron oxidation of both the alcoholic oxygen and the hydrogen alpha to the alcohol by two atoms of silver(I) within the celite surface. The carbonate ion then proceeds to deprotonate the resulting carbonyl, generating bicarbonate, which is further protonated by the additionally generated hydrogen cation to cause elimination of water and generation of carbon dioxide.
The rate-limiting step of this reaction is proposed to be the initial association of the alcohol with the silver ions. As a result, the presence of even weakly associating ligands for the silver can greatly inhibit the reaction, so even slightly polar solvents, such as ethyl acetate or methyl ethyl ketone, are avoided when using this reagent, as they competitively associate with it. Additional polar functionalities on the reactant should also be avoided whenever possible, as even the presence of an alkene can sometimes reduce the reactivity of a substrate 50-fold. Commonly employed solvents such as benzene and xylene are extremely non-polar, and further acceleration of the reaction can be achieved through the use of the even more non-polar heptane. The solvent is also typically refluxed to drive the reaction with heat and to remove the water generated by the reaction through azeotropic distillation.
Steric hindrance of the hydrogen alpha to the alcohol is a major determinant of the rate of oxidation, as it affects the rate of association. Tertiary alcohols, lacking an alpha hydrogen, are selected against and generally do not oxidize in the presence of Fétizon's reagent.
Increasing the amount of celite used in the reagent accelerates the rate of the reaction by increasing the surface area available to react. However, increasing the amount of celite past 900 grams per mole of silver(I) carbonate begins to slow the reaction due to dilution effects.
Reactivity
Fétizon's reagent is used primarily in the oxidation of primary or secondary alcohols to aldehydes or ketones with a slight selectivity toward secondary alcohols and unsaturated alcohols. The reaction is typically done in a refluxing dry non-polar organic solvent with copious stirring. The reaction time varies with the structure of the alcohol and is typically completed within three hours. A very attractive property of Fétizon's reagent is its ability to be separated from the reaction product by physically filtering it out and washing with benzene.
The inability of Fétizon's reagent to oxidize tertiary alcohols makes it extremely useful in the monooxidation of a [1,2] diol in which one of the alcohols is tertiary while avoiding cleavage of the carbon-carbon bond.
The mildness and structural sensitivity of the reagent also makes this reagent ideal for the monooxidation of a symmetric diol.
Lactols are extremely sensitive to Fétizon's reagent, being oxidized very quickly to a lactone functionality. This allows for the selective oxidation of lactols in the presence of other alcohols. This also allows for a classic use of Fétizon's reagent to form lactones from a primary diol. By oxidizing one of the alcohols to an aldehyde, the second alcohol equilibrates with the aldehyde to form a lactol, which is reacted quickly with more Fétizon's reagent to trap the cyclic intermediate as a lactone. This method allows for the synthesis of seven-membered lactones, which are traditionally more challenging to synthesize.
Phenol functional groups can be oxidized to their respective quinone forms. These quinones can further couple within solution producing numerous dimerizations depending upon their substituents.
Amines have been shown to oxidize in the presence of Fétizon's reagent to enamines and iminium cations, which have been trapped; amines can also be selected against in a compound containing more easily oxidized alcohol functionalities.
Fétizon's reagent can also be used to facilitate the cycloaddition of a 4-hydroxy-2-furoquinolone and an olefin to form dihydrofuroquinolinones.
Protecting groups
Para-methoxybenzyl (PMB) is a commonly used protecting group for alcohols against Fétizon's reagent. As Fétizon's oxidation is a neutral reaction, acid and base sensitive protecting groups are also compatible with the reagent and by products generated.
Sensitive groups
While tertiary alcohols are typically not affected by Fétizon's reagent, tertiary propargylic alcohols have been shown to oxidize under these conditions and results in the fragmentation of the alcohol with an alkyne leaving group.
Halohydrins that possess a trans stereochemistry have been demonstrated to form epoxides and transposed products in the presence of Fétizon's reagent. Halohydrins possessing a cis-stereochemistry seem to perform a typical Fétizon's oxidation to a ketone.
[1,3] diols have a tendency to eliminate water following the monooxidation by Fétizon's reagent to form an enone.
Under differing structural conditions, [1,2] diols can form diketones in the presence of Fétizon's reagent. However, oxidative carbon-carbon bond cleavage may also occur.
Applications
Since its discovery as a useful method of oxidation, Fétizon's reagent has been used in the total synthesis of numerous molecules such as (±)-bukittinggine.
Fétizon's reagent has also been employed extensively in the study of various sugar chemistry, to achieve selective oxidation of tri and tetra methylated aldoses to aldolactones, oxidation of D-xylose and L-arabinose to D-threose and L-erythrose respectively, and oxidation of L-sorbose to afford L-threose among many others.
References
Organic oxidation reactions
Name reactions | Fétizon oxidation | [
"Chemistry"
] | 1,463 | [
"Name reactions",
"Organic oxidation reactions",
"Organic reactions"
] |
39,329,230 | https://en.wikipedia.org/wiki/Bipartite%20half | In graph theory, the bipartite half or half-square of a bipartite graph G = (U, V, E) is a graph whose vertex set is one of the two sides of the bipartition (without loss of generality, U) and in which there is an edge uv for each pair of vertices u and v in U that are at distance two from each other in G. That is, in a more compact notation, the bipartite half is G²[U], where the superscript 2 denotes the square of a graph and the square brackets denote an induced subgraph.
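A minimal sketch of this definition in Python, assuming the networkx library is available; bipartite_half is a hypothetical helper name, and the graph and the side U are supplied by the caller:

```python
import networkx as nx
from itertools import combinations

def bipartite_half(G: nx.Graph, U) -> nx.Graph:
    """Return the half-square G^2[U]: the vertices of U, with an edge
    between each pair of U-vertices at distance two in G, i.e. each
    pair sharing a common neighbor on the other side."""
    H = nx.Graph()
    H.add_nodes_from(U)
    for u, v in combinations(U, 2):
        if any(w in G[v] for w in G[u]):  # common neighbor => distance two
            H.add_edge(u, v)
    return H

# The bipartite half of K_{3,3} on one side is the complete graph K_3.
G = nx.complete_bipartite_graph(3, 3)  # sides {0, 1, 2} and {3, 4, 5}
print(sorted(bipartite_half(G, [0, 1, 2]).edges()))  # [(0, 1), (0, 2), (1, 2)]
```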
Examples
For instance, the bipartite half of the complete bipartite graph K_{n,n} is the complete graph K_n, and the bipartite half of the hypercube graph Q_n is the halved cube graph.
When G is a distance-regular graph, its two bipartite halves are both distance-regular. For instance, the halved Foster graph is one of finitely many degree-6 distance-regular locally linear graphs.
Representation and hardness
Every graph G is the bipartite half of another graph, formed by subdividing the edges of G into two-edge paths. More generally, a representation of G as a bipartite half can be found by taking any clique edge cover of G and replacing each clique by a star, as sketched below. Every representation arises in this way. Since finding the smallest clique edge cover is NP-hard, so is finding the graph with the fewest vertices for which G is the bipartite half.
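A sketch of the subdivision construction under the same assumptions (networkx; the midpoint labels are an arbitrary choice): replacing each edge of G by a two-edge path yields a bipartite graph whose half-square on the original vertices recovers G.

```python
import networkx as nx
from itertools import combinations

def subdivision_representation(G: nx.Graph) -> nx.Graph:
    """Return a bipartite graph whose bipartite half on G's vertices
    is G, formed by replacing each edge with a two-edge path."""
    B = nx.Graph()
    B.add_nodes_from(G.nodes())
    for u, v in G.edges():
        mid = ("sub", u, v)  # one new midpoint vertex per original edge
        B.add_edge(u, mid)
        B.add_edge(mid, v)
    return B

G = nx.cycle_graph(5)
B = subdivision_representation(G)

# Recover G as the half-square on the original vertex side: two original
# vertices are adjacent iff they share a midpoint neighbor in B.
H = nx.Graph(
    (u, v) for u, v in combinations(G.nodes(), 2) if set(B[u]) & set(B[v])
)
print(sorted(H.edges()) == sorted(tuple(sorted(e)) for e in G.edges()))  # True
```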
Special cases
The map graphs, that is, the intersection graphs of interior-disjoint simply-connected regions in the plane, are exactly the bipartite halves of bipartite planar graphs.
See also
Bipartite double cover
References
Graph operations
Bipartite graphs | Bipartite half | [
"Mathematics"
] | 346 | [
"Mathematical relations",
"Graph theory",
"Graph operations"
] |
49,594,749 | https://en.wikipedia.org/wiki/Cyber%20resilience | Cyber resilience refers to an entity's ability to continuously deliver the intended outcome, despite cyber attacks. Resilience to cyber attacks is essential to IT systems, critical infrastructure, business processes, organizations, societies, and nation-states. A related term is cyberworthiness, which is an assessment of the resilience of a system from cyber attacks. It can be applied to a range of software and hardware elements (such as standalone software, code deployed on an internet site, the browser itself, military mission systems, commercial equipment, or IoT devices).
Adverse cyber events are those that negatively impact the availability, integrity, or confidentiality of networked IT systems and associated information and services. These events may be intentional (e.g. cyber attack) or unintentional (e.g. failed software update) and caused by humans, nature, or a combination thereof.
Unlike cyber security, which is designed to protect systems, networks and data from cyber crimes, cyber resilience is designed to prevent systems and networks from being derailed in the event that security is compromised. With cyber resilience, security measures remain effective without compromising the usability of systems, and a robust business continuity plan is in place to resume operations if a cyber attack succeeds.
Cyber resilience helps businesses recognize that hackers have the advantage of innovative tools, the element of surprise, and choice of target, and can succeed in their attempts. The concept helps businesses prepare for, prevent, respond to, and successfully recover from attacks, returning to the intended secure state. It represents a cultural shift, as the organization treats security as a full-time concern and embeds security best practices in day-to-day operations. In comparison to cyber security, cyber resilience requires the business to think differently and be more agile in handling attacks.
The objective of cyber resilience is to maintain the entity's ability to deliver the intended outcome continuously at all times. This means doing so even when regular delivery mechanisms have failed, such as during a crisis or after a security breach. The concept also includes the ability to restore or recover regular delivery mechanisms after such events, as well as the ability to continuously change or modify these delivery mechanisms, if needed in the face of new risks. Backups and disaster recovery operations are part of the process of restoring delivery mechanisms.
Frameworks
Resilience, as defined by Presidential Policy Directive PPD-21, is the ability to prepare for and adapt to changing conditions and withstand and recover rapidly from disruptions. Cyber resilience focuses on the preventative, detective, and reactive controls in an information technology environment to assess gaps and drive enhancements to the overall security posture of the entity. The Cyber Resilience Review (CRR) is one framework for the assessment of an entity's resiliency created by the Department of Homeland Security. Another framework created by Symantec is based on 5 pillars: Prepare/Identify, Protect, Detect, Respond, and Recover.
The National Institute of Standards and Technology's Special Publication 800-160 Volume 2 Rev. 1 offers a framework for engineering secure and reliable systems, treating adverse cyber events as both resiliency and security issues. In particular, 800-160 identifies fourteen techniques that can be used to improve resiliency: adaptive response, analytic monitoring, contextual awareness, coordinated protection, deception, diversity, dynamic positioning, dynamic representation, non-persistence, privilege restriction, realignment, redundancy, segmentation, and substantiated integrity.
See also
Critical infrastructure protection
Decentralization
Internet censorship
Peer-to-peer
Proactive cyber defense
Resilience (organizational)
Operational Collaboration
Airworthiness
Crashworthiness
Roadworthiness
Railworthiness
Seaworthiness
Spaceworthiness
References
Computer security procedures
IT infrastructure
Cyberwarfare
Security
National security
Business continuity
Disaster preparedness
Computing terminology | Cyber resilience | [
"Technology",
"Engineering"
] | 717 | [
"Cybersecurity engineering",
"Computing terminology",
"IT infrastructure",
"Information technology",
"Computer security procedures"
] |
49,596,228 | https://en.wikipedia.org/wiki/Sulfolobus%20tengchongensis%20spindle-shaped%20virus | Sulfolobus tengchongensis spindle-shaped virus 1 (STSV1) is a DNA virus of the family Bicaudaviridae. It infects the hyperthermophilic archaeon Sulfolobus tengchongensis which can be found in the volcanic area of Tengchong, Baoshan City, in western Yunnan province, People's Republic of China.
In 2014, Sulfolobus tengchongensis spindle-shaped virus 2 (STSV2), a relative of STSV1 that also infects S. tengchongensis, was reported. Besides S. tengchongensis, STSV2 infects Sulfolobus islandicus REY15A. STSV2 has been demonstrated to induce unprecedented gigantism of S. islandicus cells by blocking the expression of cell division genes and arresting the cell cycle in the S phase. The diameter of infected cells increases up to 20 times, resulting in an 8,000-fold increase in volume compared to noninfected cells.
References
External links
UCSC Sulfolobus virus STSV1 Genome Browser Gateway
ENA Genomes Pages - Archaealvirus STSV-1, also STSV-2, and Sulfolobus spindle-shaped virus 1, 4, 5, 6, 7 as well
NCBI: Sulfolobus tengchongensis spindle-shaped virus 1 (STSV-1) (unclassified species)
NCBI: Sulfolobus spindle-shaped virus 1 (SSV1) (species, CTV accepted) of genus Alphafusellovirus, family Fuselloviridae — not to be confused (see also UniProt: SSV1)
Bicaudaviridae | Sulfolobus tengchongensis spindle-shaped virus | [
"Biology"
] | 365 | [
"Virus stubs",
"Viruses"
] |
49,597,510 | https://en.wikipedia.org/wiki/Lion%20Mark%20%28toys%29 | The Lion Mark is a British consumer symbol developed in 1988 by British Toy & Hobby Association (BTHA) and used to identify toys denoted as safe and of high quality.
It represents a red and white lion face in a triangle with a yellow background and green borders.
This conformity mark is voluntary and only members of the British Toy & Hobby Association can use it.
It certifies conformity to EN 71 standards and EN 62115 in the case of electrical toys.
The Lion Mark is also used to indicate shops as "Approved Lion Mark Retailers", in accordance with a joint initiative of the British Toy & Hobby Association with the Toy Retailers Association (TRA).
References
Certification marks
Toy safety | Lion Mark (toys) | [
"Mathematics"
] | 142 | [
"Symbols",
"Certification marks"
] |
49,598,417 | https://en.wikipedia.org/wiki/LINC00520 | Long intergenic non-protein coding RNA 520 is a long non-coding RNA that in humans is encoded by the LINC00520 gene.
References
Proteins | LINC00520 | [
"Chemistry"
] | 34 | [
"Proteins",
"Biomolecules by chemical classification",
"Molecular biology"
] |
49,599,350 | https://en.wikipedia.org/wiki/Epoxydocosapentaenoic%20acid | Epoxide docosapentaenoic acids (epoxydocosapentaenoic acids, EDPs, or EpDPEs) are metabolites of the 22-carbon straight-chain omega-3 fatty acid, docosahexaenoic acid (DHA). Cell types that express certain cytochrome P450 (CYP) epoxygenases metabolize polyunsaturated fatty acids (PUFAs) by converting one of their double bonds to an epoxide. In the best known of these metabolic pathways, cellular CYP epoxygenases metabolize the 20-carbon straight-chain omega-6 fatty acid, arachidonic acid, to epoxyeicosatrienoic acids (EETs); another CYP epoxygenase pathway metabolizes the 20-carbon omega-3 fatty acid, eicosapentaenoic acid (EPA), to epoxyeicosatetraenoic acids (EEQs). CYP epoxygenases similarly convert various other PUFAs to epoxides (see Epoxygenase). These epoxide metabolites have a variety of activities. However, essentially all of them are rapidly converted to their corresponding, but in general far less active, vicinal dihydroxy fatty acids by ubiquitous cellular soluble epoxide hydrolase (sEH; also termed epoxide hydrolase 2). Consequently, these epoxides, including EDPs, operate as short-lived signaling agents that regulate the function of their parent or nearby cells. The particular feature of EDPs (and EEQs) distinguishing them from EETs is that they derive from omega-3 fatty acids and are suggested to be responsible for some of the beneficial effects attributed to omega-3 fatty acids and omega-3-rich foods such as fish oil.
Structure
EDPs are epoxide metabolites of DHA. DHA has 6 cis (see Cis–trans isomerism) double bonds, each of which is located between carbons 4-5, 7-8, 10-11, 13-14, 16-17, or 19-20. Cytochrome P450 epoxygenases attack any one of these double bonds to form a respective docosapentaenoic acid (DPA) epoxide regioisomer. A given epoxygenase may therefore convert DHA to 4,5-EDP (i.e. 4,5-epoxy-7Z,10Z,13Z,16Z,19Z-DPA), 7,8-EDP (i.e. 7,8-epoxy-4Z,10Z,13Z,16Z,19Z-DPA), 10,11-EDP (i.e. 10,11-epoxy-4Z,7Z,13Z,16Z,19Z-DPA), 13,14-EDP (i.e. 13,14-epoxy-4Z,7Z,10Z,16Z,19Z-DPA), 16,17-EDP (i.e. 16,17-epoxy-4Z,7Z,10Z,13Z,19Z-DPA), or 19,20-EDP (i.e. 19,20-epoxy-4Z,7Z,10Z,13Z,16Z-DPA). The epoxygenase enzymes generally form both R/S enantiomers at each former double bond position; for example, cytochrome P450 epoxidases attack DHA at the 16,17-double bond position to form two epoxide enantiomers, 16R,17S-EDP and 16S,17R-EDP. The 4,5-EDP metabolite is unstable and generally not detected among the EDPs formed by cells.
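Since the regiochemistry is systematic, the full set of possible EDP names can be enumerated from the six double-bond positions given above; a small illustrative sketch in Python, with the naming pattern following the text:

```python
# Positions of the six cis double bonds in DHA, as carbon-number pairs.
DHA_DOUBLE_BONDS = [(4, 5), (7, 8), (10, 11), (13, 14), (16, 17), (19, 20)]

def edp_isomers(double_bonds):
    """Each double bond, once epoxidized, gives one EDP regioisomer,
    generally formed as a pair of R/S enantiomers."""
    for a, b in double_bonds:
        yield f"{a},{b}-EDP", (f"{a}R,{b}S-EDP", f"{a}S,{b}R-EDP")

for regioisomer, enantiomers in edp_isomers(DHA_DOUBLE_BONDS):
    print(f"{regioisomer}: {' and '.join(enantiomers)}")
```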
Production
Enzymes of the cytochrome P450 (CYP) superfamily that are classified as epoxygenases, based on their ability to metabolize PUFAs, particularly arachidonic acid, to epoxides, include: CYP1A, CYP2B, CYP2C, CYP2E, CYP2J, and, within the CYP3A subfamily, CYP3A4. In humans, the CYP2C8, CYP2C9, CYP2C19, CYP2J2, and possibly CYP2S1 isoforms appear to be the principal epoxygenases responsible for metabolizing arachidonic acid to EETs. In general, these same CYP epoxygenases also metabolize DHA to EDPs (as well as EPA to EEQs; CYP2S1 has not yet been tested for DHA-metabolizing ability), doing so at rates that are often greater than their rates of metabolizing arachidonic acid to EETs; that is, DHA (and EPA) appear to be preferred over arachidonic acid as substrates for many of the CYP epoxygenases. CYP1A1, CYP1A2, CYP2C18, CYP2E1, CYP4A11, CYP4F8, and CYP4F12 also metabolize DHA to EDPs. CYP2C8, CYP2C18, CYP2E1, CYP2J2, CYP4A11, CYP4F8, and CYP4F12 preferentially attack the terminal omega-3 double bond that distinguishes DHA from omega-6 fatty acids and therefore metabolize DHA principally to 19,20-EDP isomers, while CYP2C19 metabolizes DHA to 7,8-EDP, 10,11-EDP, and 19,20-EDP isomers. CYP2J2 metabolizes DHA to EDPs, principally 19,20-EDP, at twice the rate that it metabolizes arachidonic acid to EETs. In addition to the cited CYPs, CYP4A11, CYP4F8, CYP4F12, CYP1A1, CYP1A2, and CYP2E1, which are classified as CYP monooxygenases rather than CYP epoxygenases because they metabolize arachidonic acid to monohydroxy eicosatetraenoic acids (see 20-Hydroxyeicosatetraenoic acid), i.e. 19-hydroxyeicosatetraenoic acid and/or 20-hydroxyeicosatetraenoic acid, take on epoxygenase activity in converting DHA primarily to 19,20-EDP isomers (see Epoxyeicosatrienoic acid). The CYP450 epoxygenases capable of metabolizing DHA to EDPs are widely distributed in organs and tissues such as the liver, kidney, heart, lung, pancreas, intestine, blood vessels, blood leukocytes, and brain. These tissues are known to metabolize arachidonic acid to EETs; it has been shown or is presumed that they also metabolize DHA to EDPs.
The EDPs are commonly made by the stimulation of specific cell types by the same mechanisms which produce EETs (see Epoxyeicosatrienoic acid). That is, cell stimulation causes DHA to be released from the sn-2 position of their membrane-bound cellular phospholipid pools through the action of a phospholipase A2-type enzyme and the subsequent attack of the released DHA by CYP450 epoxidases. It is notable that the consumption of omega-3 fatty acid-rich diets dramatically raises the serum and tissue levels of EDPs and EEQs in animals as well as humans. Indeed, this rise in EDP (and EEQ) levels in humans is by far the most prominent change in the profile of PUFA metabolites caused by dietary omega-3 fatty acids and, it is suggested, may be responsible for at least some of the beneficial effects ascribed to dietary omega-3 fatty acids.
EDP metabolism
Similar to EETs (see Epoxyeicosatrienoic acid), EDPs are rapidly metabolized in cells by a cytosolic soluble epoxide hydrolase (sEH, also termed epoxide hydrolase 2 [EC 3.3.2.10]) to form their corresponding vicinal diols, the dihydroxydocosapentaenoic acids. Thus, sEH converts 19,20-EDP to 19,20-dihydroxydocosapentaenoic acid (dihydroxy-DPA), 16,17-EDP to 16,17-dihydroxy-DPA, 13,14-EDP to 13,14-dihydroxy-DPA, 10,11-EDP to 10,11-dihydroxy-DPA, and 7,8-EDP to 7,8-dihydroxy-DPA; 4,5-EDP is unstable and therefore generally not detected in cells. The dihydroxy-DPA products, like their epoxide precursors, are enantiomer mixtures; for instance, sEH converts 16,17-EDP to a mixture of 16(S),17(R)-dihydroxy-DPA and 16(R),17(S)-dihydroxy-DPA. These dihydroxy-DPAs typically are far less active than their epoxide precursors. The sEH pathway acts rapidly and is by far the predominant pathway of EDP inactivation; its operation causes EDPs to function as short-lived mediators whose actions are limited to their parent and nearby cells, i.e. they are autocrine and paracrine signaling agents, respectively.
In addition to the sEH pathway, EDPs, similar to the EETs, may be acylated into phospholipids in an acylation-like reaction; this pathway may serve to limit the action of EETs or store them for future release. Finally, again similar to the EETs, EDPs are subject to inactivation by being further metabolized by beta oxidation.
Clinical significance
EDPs have not been studied nearly as well as the EETs. This is particularly the case for animal studies of their potential clinical significance. Compared with a selection of the many activities attributed to the EETs (see Epoxyeicosatrienoic acid), animal studies reported to date find that certain EDPs (16,17-EDP and 19,20-EDP have been most often examined) are: 1) more potent than EETs in decreasing hypertension and pain perception; 2) more potent than, or at least equal in potency to, the EETs in suppressing inflammation; and 3) opposite to the EETs in that EDPs inhibit angiogenesis, endothelial cell migration, endothelial cell proliferation, and the growth and metastasis of human breast and prostate cancer cell lines, whereas EETs have stimulatory effects in each of these systems. As indicated in the Production section, consumption of omega-3 fatty acid-rich diets dramatically raises the serum and tissue levels of EDPs and EEQs in animals as well as humans, and in humans this is by far the most prominent change in the profile of PUFA metabolites caused by dietary omega-3 fatty acids. Hence, the metabolism of DHA to EDPs (and EPA to EEQs) may be responsible for at least some of the beneficial effects ascribed to dietary omega-3 fatty acids.
References
Metabolic intermediates
Docosanoids
Fatty acids
Epoxides
Cell biology
Immunology
Inflammations
Blood pressure
Human physiology
Animal physiology | Epoxydocosapentaenoic acid | [
"Chemistry",
"Biology"
] | 2,524 | [
"Animals",
"Cell biology",
"Animal physiology",
"Immunology",
"Metabolic intermediates",
"Biomolecules",
"Metabolism"
] |
49,602,444 | https://en.wikipedia.org/wiki/DU%20spectrophotometer | The DU spectrophotometer or Beckman DU, introduced in 1941, was the first commercially viable scientific instrument for measuring the amount of ultraviolet light absorbed by a substance. This model of spectrophotometer enabled scientists to easily examine and identify a given substance based on its absorption spectrum, the pattern of light absorbed at different wavelengths. Arnold O. Beckman's National Technical Laboratories (later Beckman Instruments) developed three in-house prototype models (A, B, C) and one limited distribution model (D) before moving to full commercial production with the DU. Approximately 30,000 DU spectrophotometers were manufactured and sold between 1941 and 1976.
Sometimes referred to as a UV–Vis spectrophotometer because it measured both the ultraviolet (UV) and visible spectra, the DU spectrophotometer is credited as being a truly revolutionary technology. It yielded more accurate results than previous methods for determining the chemical composition of a complex substance, and substantially reduced the time needed for an accurate analysis from weeks or hours to minutes. The Beckman DU was essential to several critical secret research projects during World War II, including the development of penicillin and synthetic rubber.
Background
Before the development of the DU spectrophotometer, analysis of a test sample to determine its components was a long, costly, and often inaccurate process. A classical wet laboratory contained a wide variety of complicated apparatus. Test samples were run through a series of awkward and time-consuming qualitative processes to separate out and identify their components. Determining quantitative concentrations of those components in the sample involved further steps. Processes could involve techniques for chemical reactions, precipitations, filtrations and dissolutions. Determination of the concentrations of known impurities in a known inorganic substance such as molten iron could be done in under thirty minutes. The determination of complex organic structures such as chlorophyll using wet and dry methods could take decades.
Spectroscopic methods for observing the absorption of electromagnetic radiation in the visible spectrum were known as early as the 1860s.
Scientists had observed that light traveling through a medium would be absorbed at different wavelengths, depending on the matter-composition of the medium involved. A white light source would emit light at multiple wavelengths over a range of frequencies. A prism could be used to separate a light source into specific wavelengths. Shining the light through a sample of a material would cause some wavelengths of light to be absorbed, while others would be unaffected and continue to be transmitted. Wavelengths in the resulting absorption spectrum would differ depending upon the atomic and molecular composition of the material involved.
Spectroscopic methods were predominantly used by physicists and astrophysicists. Spectroscopic techniques were rarely taught in chemistry classes and were unfamiliar to most practicing chemists. Beginning around 1904, Frank Twyman of the London instrument making firm Adam Hilger, Ltd. tried to develop spectroscopic instruments for chemists, but his customer base was consistently made up of physicists rather than chemists.
By the 1930s he had developed a niche market in metallurgy, where his instruments were well adapted to the types of problems that chemists were solving.
By the 1940s, both academic and industrial chemists were becoming increasingly interested in problems involving the composition and detection of biological molecules. Biological molecules, including proteins and nucleic acids, absorb light energy in both the ultraviolet and visible range. The spectrum of visible light was not broad enough to enable scientists to examine substances such as vitamin A. Accurate characterization of complex samples, particularly of biological materials, would require the accurate reading of absorption frequencies in the ultraviolet and infrared (IR) sections of the spectrum in addition to visible light. Existing instruments such as the Cenco "Spectrophotelometer" and the Coleman Model DM Spectrophotometer could not be effectively used to examine wavelengths in the ultraviolet range.
The array of equipment needed to measure light energy reaching beyond the visible spectrum towards the ultraviolet could cost a laboratory as much as $3,000, a huge amount in 1940. Repeated readings of a sample were taken to produce photographic plates showing the absorption spectrum of a material at different wavelengths. An experienced human could compare these to the known images to identify a match. Then information from the plates had to be combined to create a graph showing the spectrum as a whole. Ultimately, the accuracy of such approaches was dependent on accurate, consistent development of the photographic plates, and on human visual acuity and practice in reading the wavelengths.
Development
The DU was developed at National Technical Laboratories (later Beckman Instruments) under the direction of Arnold Orville Beckman, an American chemist and inventor. Beginning in 1940, National Technical Laboratories developed three in-house prototype models (A, B, C) and one limited distribution model (D) before moving to full commercial production with the DU in 1941. Beckman's research team was led by Howard Cary, who went on to co-found Applied Physics Corporation (later Cary Instruments) which became one of Beckman Instruments' strongest competitors. Other scientists included Roland Hawes and Kenyon George.
Coleman Instruments had recently coupled a pH meter with an optical phototube unit to examine the visual spectrum (the Coleman Model DM). Beckman had already developed a successful pH meter for measuring acidity of solutions, his company's breakthrough product. Seeing the potential to build upon their existing expertise, Beckman made it a goal to create an easy-to-use integrated instrument which would both register and report specific wavelengths extending into the ultraviolet range. Rather than depending on development of photographic plates, or a human observer's visual ability to detect wavelengths in the absorption spectrum, phototubes would be used to register and report the specific wavelengths that were detected. This had the potential to increase the instrument's accuracy and reliability as well as its speed and ease of use.
Model A (prototype)
The first prototype Beckman spectrophotometer, the Model A, was created at National Technologies Laboratories in 1940. It used a tungsten light source with a glass Fery prism as a monochromator. Tungsten was used for incandescent light filaments because it was strong, withstood heat, and emitted a steady light. Types of light sources differed in the range of wavelengths of light that they emitted. Tungsten lamps were useful in the visible light range but gave poor coverage in the ultraviolet range. However, they had the advantage of being readily available because they were used as automobile headlamps. An external amplifier from the Beckman pH meter and a vacuum tube photocell were used to detect wavelengths.
Model B (prototype)
It was quickly realized that a glass dispersive prism was not suitable for use in the ultraviolet spectrum. Glass absorbed electromagnetic radiation below 400 millimicrons rather than dispersing it. In the Model B, a quartz prism was substituted for the earlier glass.
A tangent bar mechanism was used to adjust the monochromator. The mechanism was highly sensitive and required a skilled operator. Only two Model B prototypes were made; one was sold in February 1941 to the chemistry department of the University of California, Los Angeles.
The Model B prototype should be distinguished from a later production model of spectrophotometer that was also referred to as the Model "B". The production Model "B" was introduced in 1949 as a less-expensive, simple-to-use alternative to the Beckman DU. It used a glass Fery prism as a chromator and operated in a narrower range, roughly from 320 millimicrons to 950 millimicrons, and 5 to 20 Å.
Model C (prototype)
Three Model C instruments were then built, improving the instrument's wavelength resolution. The Model B's rotary cell compartment was replaced with a linear sample chamber. The tangent bar mechanism was replaced by a scroll drive mechanism, which could be more precisely controlled to reset the quartz prism and select the desired wavelength. With this new mechanism, results could be more easily and reliably obtained, without requiring a highly skilled operator. This set the pattern for all of Beckman's later quartz prism instruments. Although only three Model C prototypes were built, all were sold: one to Caltech and the other two to companies in the food industry.
Model D (limited production)
The A, B, and C prototype models all coupled an external Beckman pH meter to the optical component to obtain readouts. In developing the Model D, Beckman took the direct-coupled amplifier circuit from the pH meter and combined the optical and electronic components in a single housing, making it more economical.
Moving from a prototype to production of the Model D involved challenges.
Beckman originally approached Bausch and Lomb about making quartz prisms for the spectrophotometer. When they turned down the opportunity, National Technical Laboratories designed its own optical system, including both a control mechanism and a quartz prism. Large, high optical quality quartz suitable for creating prisms was difficult to obtain. It came from Brazil, and was in demand for wartime radio oscillators. Beckman had to obtain a wartime priority listing for the spectrophotometer to get access to suitable quartz supplies.
Beckman had previously attempted to find a source of reliable hydrogen lamps, seeking better sensitivity to wavelengths in the ultraviolet range than was possible with tungsten. As described in July 1941, the Beckman spectrophotometer could use a "hot cathode hydrogen discharge tube" or a tungsten light source interchangeably. However, Beckman was still unsatisfied with the available hydrogen lamps. National Technical Laboratories designed its own hydrogen lamp, an anode enclosed in a thin blown-glass window. By December 1941, the in-house design was being used in production of the Model D.
The instrument's design also required a more sensitive phototube than was commercially available at that time. Beckman was able to obtain small batches of an experimental phototube from RCA for the first Model D instruments.
The Model D spectrophotometer, using the experimental RCA phototube, was shown at MIT's Summer Conference on Spectroscopy in July 1941. The paper that Cary and Beckman presented there was published in the Journal of the Optical Society of America. In it, Cary and Beckman compared designs for a modified self-collimating quartz Fery prism, a mirror-collimated quartz Littrow prism, and various gratings. The Littrow prism was a half-prism, which had a mirrored back face, so that the light went through the front face twice. Use of a tungsten light source with the quartz Littrow prism as a monochromator was reported to minimize light scattering within the instrument.
The Model D was the first model to enter actual production. A small number of Model D instruments were sold, beginning in July 1941, before it was superseded by the DU.
Model DU
When RCA could not meet Beckman's demand for experimental phototubes, National Technical Laboratories again had to design its own components in-house. They developed a pair of phototubes, sensitive to the red and blue areas of the spectrum, capable of amplifying the signals they received. With the incorporation of Beckman's UV-sensitive phototubes, the Model D became the Model DU UV–Vis spectrophotometer. Its designation as a "UV–Vis" spectrophotometer indicates its ability to measure light in both the visible and ultraviolet spectra.
The DU was the first commercially viable scientific instrument for measuring the amount of ultraviolet light absorbed by a substance. As he had done with the pH meter, Beckman had replaced an array of complicated equipment with a single, easy-to-use instrument. One of the first fully integrated instruments or "black boxes" used in modern chemical laboratories, it sold for $723 in 1941.
It is generally assumed that the "DU" in the name was a combination of "D" for the Model D on which it was based, and "U" for the ultraviolet spectrum. However, it has been suggested that "DU" may also reference Beckman's fraternity at the University of Illinois, Delta Upsilon, whose members were called "DU"s.
A publication in the scholarly literature compared the optical quality of the DU to the Cary 14 Spectrophotometer, another leading UV–Vis spectrophotometer of the time.
Design
From 1941 until 1976, when it was discontinued, the Model DU spectrophotometer was built upon what was essentially the same design. It was a single beam instrument.
The DU spectrophotometers used a quartz prism to separate light from a lamp into its absorption spectrum and a phototube to electrically measure the light energy across the spectrum. This allowed the user to plot the light absorption spectrum of a substance to obtain a standardized "fingerprint" characteristic of a compound. All modern UV–Vis spectrophotometer are built on the same basic principles as the DU spectrophotometer.
Although the default light source for the instrument was tungsten, a hydrogen or mercury lamp could be substituted depending on the optimal range of measurement for which the instrument was to be used. The tungsten lamp was suitable for transmittance of wavelengths between 320 and 1000 millimicrons; the hydrogen lamp for 220 to 320 millimicrons, and the mercury lamp for checking the calibration of the spectrophotometer.
As advertised in the 1941 News Edition of the American Chemical Society, the Beckman Spectrophotometer used an autocollimating quartz crystal prism for a monochromator, capable of covering a range from the ultraviolet (200 millimicrons) to the infrared (2000 millimicrons), with a nominal bandwidth of 2 millimicrons or less for most of its spectral range. The slit mechanism was continuously adjustable from 0.01 to 2.0 mm, and the instrument was claimed to have less than 0.1% stray light over most of the spectral range. It featured an easy-to-read wavelength scale, simultaneously reporting % Transmission and Density information.
The sample holder held up to 4 cells. Cells could be moved into the light path via an external control, allowing the user to take multiple readings without opening the cell compartment. As described in the DU's manual, absorbance measurements of a sample were made in comparison to a blank, or standard, "a solution identical in composition with the sample except that the absorbing material being measured is absent." The standard could be a cell filled with a solvent such as distilled water or a prepared solvent of a known concentration. At each wavelength two measurements are made: with the sample and with the standard in the light beam. This enables the ratio, transmittance, to be obtained. For quantitative measurements transmittance is converted to absorbance which is proportional to the solute concentration according to Beer's law. This makes possible the quantitative determination of the amount of a substance in solution.
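The two readings per wavelength lend themselves to a short calculation. The sketch below (Python, not period software; the intensity readings, molar absorptivity, and path length are invented for illustration) turns a sample/blank pair into transmittance, absorbance, and a Beer's-law concentration estimate.

```python
import math

def absorbance(i_sample: float, i_blank: float) -> float:
    """Transmittance is the sample/blank intensity ratio; absorbance
    is its negative base-10 logarithm."""
    return -math.log10(i_sample / i_blank)

def concentration(a: float, epsilon: float, path_cm: float = 1.0) -> float:
    """Invert Beer's law, A = epsilon * l * c, for the concentration."""
    return a / (epsilon * path_cm)

# Hypothetical paired readings at one wavelength: the blank transmits
# 100 units of light, the sample 42 units, in a 1 cm cell.
a = absorbance(42.0, 100.0)
c = concentration(a, epsilon=6220.0)  # epsilon is an assumed value
print(f"A = {a:.3f}, c = {c:.2e} mol/L")
```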
The user could also switch between phototubes without removing the sample holder. A 1941 advertisement indicates that three types of phototubes were available, with maximum sensitivity to red, blue and ultraviolet light ranges.
The 1954 DU spectrophotometer differs in that it claims to be useful from 200 to 1000 millimicrons, and does not mention the ultraviolet phototube. The wavelength selector, however, still ranged from 200 to 2000 millimicrons, and an "Ultraviolet accessory set" was available. This shift away from using the DU for infrared measurement is understandable, since by 1954 Beckman Instruments was marketing a separate infrared spectrophotometer. Beckman developed the IR-1 infrared spectrophotometer during World War II, and redesigned it as the IR-4 between 1953 and 1956.
Use
The Beckman spectrophotometer was the first easy-to-use single instrument containing both the optical and electronic components needed for ultraviolet-absorption spectrophotometry within a single housing. The user could insert a cell tray with standard and sample cells, dial up the desired wavelength of light, confirm that the instrument was properly set by measuring the standard, and then measure the amount of absorption of the sample, reading the frequency from a simple meter. A series of readings at different wavelengths could be taken without disturbing the sample. The DU spectrophotometer's manual scanning method was extremely fast, reducing analysis times from weeks or hours to minutes.
Working in both the ultraviolet and the visible regions of the spectrum, the Model DU produced accurate absorption spectra that could be obtained with relative ease and replicated accurately. The National Bureau of Standards ran tests to certify that the DU's results were accurate and repeatable, and recommended its use.
Other advantages included its high resolution and the minimization of stray light in the ultraviolet region. Although it was not cheap, its initial price of $723 made it available to the average laboratory. In comparison, in 1943, the GE Hardy Spectrophotometer cost $6,400. Practical and reliable, the DU rapidly established itself as a standard for laboratory equipment.
Impact
Credited with having "brought about a breakthrough in optical spectroscopy", the Beckman DU has been identified as "an indispensable tool for chemistry" and "the Model T of laboratory instruments". Approximately 30,000 DU spectrophotometers were manufactured and sold between 1941 and 1976.
The DU enabled researchers to perform easier analysis of substances by quickly taking measurements at more than one wavelength to produce an absorption spectrum describing the complete substance. For example, the standard method of analysis of the vitamin A content of shark liver oil, before the introduction of the DU spectrophotometer, involved feeding the oil to rats for 21 days, then cutting off the rats' tails and examining their bone structure. With the DU's UV technology, vitamin A content of shark liver oil could be determined directly in a matter of minutes.
The Scripps Research Institute and the Massachusetts Institute of Technology credit the DU with improving both accuracy and speed of chemical analysis. MIT states: "This device forever simplified and streamlined chemical analysis, by allowing researchers to perform a 99.9% accurate quantitative measurement of a substance within minutes, as opposed to the weeks required previously for results of only 25% accuracy."
Inorganic chemist and philosopher of science Theodore L. Brown states that it "revolutionized the measurement of light signals from samples". Nobel laureate Bruce Merrifield is quoted as calling the DU spectrophotometer "probably the most important instrument ever developed towards the advancement of bioscience." Historian of science Peter J. T. Morris identifies the introduction of the DU and other scientific instruments in the 1940s as the beginning of a Kuhnian revolution.
For the Beckman company, the DU was one of three foundational inventions – the pH meter, the DU spectrophotometer, and the helipot potentiometer – that established the company on a secure financial basis and enabled it to expand.
Vitamins
Development of the spectrophotometer had direct relevance to World War II and the American war effort. The role of vitamins in health was of significant concern, as scientists wanted to identify Vitamin A-rich foods to keep soldiers healthy. Previous methods of assessing Vitamin A levels involved feeding rats a food for several weeks and then performing a biopsy to estimate ingested Vitamin A levels. In contrast, examining a food sample with a DU spectrophotometer yielded better results in a matter of minutes. The DU spectrophotometer could be used to study both vitamin A and its precursor carotenoids, and rapidly became the preferred method of spectrophotometric analysis.
Penicillin
The DU spectrophotometer was also an important tool for scientists studying and producing the new wonder drug penicillin.
The development of penicillin was a secret national mission, involving 17 drug companies, with the goal of providing penicillin to all U.S. Forces engaged in World War II. It was known that penicillin was more effective than sulfa drugs, and that its use reduced mortality, severity of long-term wound trauma, and recovery time. However, its structure was not understood, isolation procedures used to create pure cultures were primitive, and production using known surface culture techniques was slow.
At Northern Regional Research Laboratory in Peoria, Illinois, researchers collected and examined more than 2,000 specimens of molds (as well as other microorganisms). An extensive research team included Robert Coghill, Norman Heatley, Andrew Moyer, Mary Hunt, Frank H. Stodola and Morris E. Friedkin. Friedkin recalls that an early model of the Beckman DU spectrophotometer was used by the penicillin researchers in Peoria. The Peoria lab was successful in isolating and commercially producing superior strains of the mold, which were 200 times more effective than the original forms discovered by Alexander Fleming. By the end of the war, American pharmaceutical companies were producing 650 billion units of penicillin each month. Much of the work done in this area during World War II was kept secret until after the war.
Hydrocarbons
The DU spectrophotometer was also used for critical analysis of hydrocarbons. A number of hydrocarbons were of interest to the war effort. Toluene, a hydrocarbon in crude oil, was used in production of TNT for military use. Benzene and butadienes were used in the production of synthetic rubber. Rubber, used in tires for jeeps, airplanes and tanks, was in critically short supply because the United States was cut off from foreign supplies of natural rubber. The Office of Rubber Reserve organized researchers at universities and in industry to secretly work on the problem. The demand for synthetic rubber caused Beckman Instruments to develop infrared spectrophotometers. Infrared spectrophotometers were better suited than UV–Vis spectrophotometers to the analysis of C4 hydrocarbons, particularly for applications in petroleum refining and gasoline production.
Enzyme assays and DNA research
Gerty Cori and her husband Carl Ferdinand Cori won the Nobel Prize in Physiology or Medicine in 1947 in recognition of their work on enzymes. They made several discoveries critical to understanding carbohydrate metabolism, including the isolation and discovery of the Cori ester, glucose 1-phosphate, and the understanding of the Cori cycle. They determined that the enzyme phosphorylase catalyzes formation of glucose 1-phosphate, which is the beginning and ending step in the conversions of glycogen into glucose and blood glucose to glycogen. Gerty Cori was also the first to show that a defect in an enzyme can be the cause of a human genetic disease. The Beckman DU spectrophotometer was used in the Cori laboratory to calculate enzyme concentrations, including phosphorylase.
Another researcher who spent six months in 1947 at the Cori laboratory, "the most vibrant place in biochemistry" at that time, was Arthur Kornberg. Kornberg was already familiar with the DU spectrophotometer, which he had used at Severo Ochoa's laboratory at New York University. The "new and scarce" Beckman DU, loaned to Ochoa by the American Philosophical Society, was highly prized and in constant use. Kornberg used it to purify aconitase, an enzyme in the citric acid cycle.
Kornberg and Bernard L. Horecker used the Beckman DU spectrophotometer for enzyme assays measuring NADH and NADPH. They determined their extinction coefficients, establishing a basis for quantitative measurements in reactions involving nucleotides. This work became one of the most cited papers in biochemistry. Kornberg went on to study nucleotides in DNA synthesis, isolating the first DNA polymerizing enzyme (DNA polymerase I) in 1956 and receiving the Nobel Prize in Physiology or Medicine with Severo Ochoa in 1959.
The bases of DNA absorb ultraviolet light near 260 nm. Inspired by the work of Oswald Avery on DNA, Erwin Chargaff used a DU spectrophotometer in the 1940s in measuring the relative concentrations of bases in DNA. Based on this research, he formulated Chargaff's rules. In the first complete quantitative analysis of DNA, he reported the near-equal correspondence of pairs of bases in DNA, with the number of guanine units equaling the number of cytosine units, and the number of adenine units equaling the number of thymine units. He further demonstrated that the relative amounts of guanine, cytosine, adenine and thymine varied between species. In 1952, Chargaff met Francis Crick and James D. Watson, discussing his findings with them. Watson and Crick built upon his ideas in their determination of the structure of DNA.
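Chargaff's quantitative finding can be illustrated with a toy calculation. The base counts below are invented; the point is only that within each hypothetical species A ≈ T and G ≈ C, while the (A+T) fraction differs between species, as described above.

```python
# Toy check of Chargaff's rules on invented base counts.
samples = {
    "species 1": {"A": 3020, "T": 2980, "G": 1990, "C": 2010},
    "species 2": {"A": 2210, "T": 2190, "G": 2790, "C": 2810},
}
for name, n in samples.items():
    total = sum(n.values())
    a_t_fraction = (n["A"] + n["T"]) / total
    print(f"{name}: A/T = {n['A'] / n['T']:.2f}, "
          f"G/C = {n['G'] / n['C']:.2f}, (A+T) fraction = {a_t_fraction:.2f}")
```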
Biotechnology
Ultraviolet spectroscopy has wide applicability in molecular biology, particularly the study of photosynthesis. It has been used to study a wide variety of flowering plants and ferns by researchers in departments of biology, plant physiology and agricultural science, as well as molecular genetics.
Particularly useful in detecting conjugated double bonds, the new technology made it possible for researchers like Ralph Holman and George O. Burr to study dietary fats, work that had significant implications for human diet. The DU spectrophotometer was also used in the study of steroids by researchers like Alejandro Zaffaroni, who helped to develop the birth control pill, the nicotine patch, and corticosteroids.
Later models
The Beckman team eventually developed additional models, as well as a number of accessories or attachments which could be used to modify the DU for different types of work. One of the first accessories was a flame attachment with a more powerful photomultiplier, enabling the user to examine the flame emissions of elements such as potassium, sodium and cesium (1947).
In the 1950s, Beckman Instruments developed the DR and the DK, both of which were double-beam ultraviolet spectrophotometers. The DK was named for Wilbur I. Kaye, who developed it by modifying the DU to expand its range into the near-infrared. He did the initial work while at Tennessee Eastman Kodak, and later was hired by Beckman Instruments. The DKs introduced an automatic recording feature. The DK-1 used a non-linear scroll, and the DK-2 used a linear scroll to automatically record the spectra.
The DR incorporated a "robot operator" which would reset the knobs on the DU to complete a sequence of measurements at different wavelengths, just like a human operator would to generate results for a full spectrum. It used a linear shuttle with four positions, and a superstructure to change the knobs. It had a moving chart recorder to plot results, with red, green and black dots. The price of recording spectrophotometers was substantially higher than non-recording machines.
The DK was ten times faster than the DR, but not quite as accurate. It used a photomultiplier, which introduced a source of error. The DK's speed made it preferred over the DR. Kaye eventually developed the DKU, combining infrared and ultraviolet features in one instrument, but it was more expensive than other models.
The last DU spectrophotometer was produced on July 6, 1976. By the 1980s, computers were being incorporated into scientific instruments such as Bausch & Lomb's Spectronic 2000 UV–Vis spectrophotometer, to improve data acquisition and provide instrument control. Specialized spectrophotometers designed for specific tasks now tend to be used rather than general "all-purpose machines" like the DU.
References
External links
Scientific instruments
Spectrometers | DU spectrophotometer | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 5,676 | [
"Spectrum (physical sciences)",
"Scientific instruments",
"Measuring instruments",
"Spectrometers",
"Spectroscopy"
] |
37,872,896 | https://en.wikipedia.org/wiki/Kostant%27s%20convexity%20theorem | In mathematics, Kostant's convexity theorem, introduced by Bertram Kostant, can be used to derive Lie-theoretical extensions of the Golden–Thompson inequality and the Schur–Horn theorem for Hermitian matrices.
Kostant's convexity theorem states that the projection of every coadjoint orbit of a connected compact Lie group into the dual of a Cartan subalgebra is a convex set. It is a special case of a more general result for symmetric spaces. Kostant's theorem is a generalization of a result of Schur and Horn for Hermitian matrices. They proved that the projection onto the diagonal matrices of the space of all n by n complex self-adjoint matrices with given eigenvalues Λ = (λ1, ..., λn) is the convex polytope with vertices all permutations of the coordinates of Λ.
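The Schur–Horn special case described above is easy to probe numerically. The sketch below (Python with NumPy; matrix size, spectrum, and seed are arbitrary choices) conjugates diag(Λ) by a random unitary and checks that the resulting diagonal is majorized by Λ, which is equivalent to membership in the convex hull of the permutations of Λ.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([3.0, 1.0, -2.0])  # the fixed spectrum, Lambda

# Random unitary from the QR decomposition of a complex Gaussian matrix.
z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
q, _ = np.linalg.qr(z)
diag = np.real(np.diag(q @ np.diag(lam) @ q.conj().T))

# Membership in the convex hull of permutations of lam is equivalent to
# majorization: equal totals and dominated partial sums (Rado's theorem).
d_sorted = np.sort(diag)[::-1]
l_sorted = np.sort(lam)[::-1]
inside = np.isclose(diag.sum(), lam.sum()) and all(
    d_sorted[: k + 1].sum() <= l_sorted[: k + 1].sum() + 1e-9 for k in range(3)
)
print(np.round(diag, 4), "inside permutohedron:", inside)
```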
Compact Lie groups
Let K be a connected compact Lie group with maximal torus T and Weyl group W = NK(T)/T. Let their Lie algebras be 𝔨 and 𝔱. Let P be the orthogonal projection of 𝔨 onto 𝔱 for some Ad-invariant inner product on 𝔨. Then for X in 𝔱, P(Ad(K)⋅X) is the convex polytope with vertices w(X) where w runs over the Weyl group.
Symmetric spaces
Let G be a compact Lie group and σ an involution with K a compact subgroup fixed by σ and containing the identity component of the fixed point subgroup of σ. Thus G/K is a symmetric space of compact type. Let 𝔤 and 𝔨 be their Lie algebras and let σ also denote the corresponding involution of 𝔤. Let 𝔭 be the −1 eigenspace of σ and let 𝔞 be a maximal Abelian subspace of 𝔭. Let Q be the orthogonal projection of 𝔭 onto 𝔞 for some Ad(K)-invariant inner product on 𝔭. Then for X in 𝔞, Q(Ad(K)⋅X) is the convex polytope with vertices the w(X) where w runs over the restricted Weyl group (the normalizer of 𝔞 in K modulo its centralizer).
The case of a compact Lie group is the special case where G = K × K, K is embedded diagonally and σ is the automorphism of G interchanging the two factors.
Proof for a compact Lie group
Kostant's proof for symmetric spaces is given in his original paper. There is an elementary proof just for compact Lie groups using similar ideas: it is based on a generalization of the Jacobi eigenvalue algorithm to compact Lie groups.
Let K be a connected compact Lie group with maximal torus T. For each positive root α there is a homomorphism of SU(2) into K. A simple calculation with 2 by 2 matrices shows that if Y is in 𝔨 and k varies in this image of SU(2), then P(Ad(k)⋅Y) traces a straight line between P(Y) and its reflection in the root α. In particular the component in the α root space (its "α off-diagonal coordinate") can be sent to 0. In performing this latter operation, the distance from P(Y) to P(Ad(k)⋅Y) is bounded above by the size of the α off-diagonal coordinate of Y. Let m be the number of positive roots, half the dimension of K/T. Starting from an arbitrary Y1 take the largest off-diagonal coordinate and send it to zero to get Y2. Continue in this way, to get a sequence (Yn). Then

‖P⊥(Yn+1)‖2 ≤ (1 − 1/m)‖P⊥(Yn)‖2,

since zeroing the largest of the m off-diagonal coordinates removes at least a 1/m fraction of ‖P⊥(Yn)‖2. Thus P⊥(Yn) tends to 0 and, by the distance bound above,

‖P(Yn+1) − P(Yn)‖ ≤ ‖P⊥(Yn)‖.

Hence Xn = P(Yn) is a Cauchy sequence, so tends to X in 𝔱. Since Yn = P(Yn) ⊕ P⊥(Yn), Yn tends to X. On the other hand, Xn lies on the line segment joining Xn+1 and its reflection in the root α. Thus Xn lies in the Weyl group polytope defined by Xn+1. These convex polytopes are thus increasing as n increases and hence P(Y) lies in the polytope for X. This can be repeated for each Z in the K-orbit of X. The limit is necessarily in the Weyl group orbit of X and hence P(Ad(K)⋅X) is contained in the convex polytope defined by W(X).
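For real symmetric matrices, the procedure just described reduces to the classical Jacobi eigenvalue algorithm that it generalizes: each step zeroes the largest off-diagonal entry with a plane rotation, the off-diagonal norm decays, and the diagonal converges to a point in the permutation (Weyl) orbit of the eigenvalues. A minimal sketch (Python with NumPy; the test matrix is arbitrary):

```python
import numpy as np

def jacobi_step(a: np.ndarray) -> np.ndarray:
    """One Jacobi rotation annihilating the largest off-diagonal entry."""
    n = a.shape[0]
    off = np.abs(a - np.diag(np.diag(a)))
    p, q = np.unravel_index(np.argmax(off), off.shape)
    if a[p, q] == 0.0:
        return a
    # Choose theta so that the rotated matrix has zero (p, q) entry.
    theta = 0.5 * np.arctan2(2 * a[p, q], a[q, q] - a[p, p])
    g = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    g[p, p], g[q, q], g[p, q], g[q, p] = c, c, s, -s
    return g.T @ a @ g

a = np.array([[4.0, 1.0, 2.0], [1.0, 3.0, 0.5], [2.0, 0.5, 1.0]])
for k in range(20):
    a = jacobi_step(a)
    if k % 5 == 0:
        off_norm = np.linalg.norm(a - np.diag(np.diag(a)))
        print(f"step {k:2d}: off-diagonal norm = {off_norm:.2e}")
print("diagonal ->", np.round(np.diag(a), 6))
```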
To prove the opposite inclusion, take X to be a point in the positive Weyl chamber. Then all the other points Y in the convex hull of W(X) can be obtained by a series of paths in that intersection moving along the negative of a simple root. (This matches a familiar picture from representation theory: if by duality X corresponds to a dominant weight λ, the other weights in the Weyl group polytope defined by λ are those appearing in the irreducible representation of K with highest weight λ. An argument with lowering operators shows that each such weight is linked by a chain to λ obtained by successively subtracting simple roots from λ.) Each part of the path from X to Y can be obtained by the process described above for the copies of SU(2) corresponding to simple roots, so the whole convex polytope lies in P(Ad(K)⋅X).
Other proofs
Other proofs of the convexity theorem for compact Lie groups have also been given. For compact groups, Atiyah and, independently, Guillemin and Sternberg showed that if M is a symplectic manifold with a Hamiltonian action of a torus T with Lie algebra 𝔱, then the image of the moment map

μ : M → 𝔱*

is a convex polytope with vertices in the image of the fixed point set of T (the image is a finite set). Taking for M a coadjoint orbit of K in 𝔨*, the moment map for T is the composition

Ad(K)⋅X ⊂ 𝔨* → 𝔱*.

Using the Ad-invariant inner product to identify 𝔨* and 𝔨, the map becomes the restriction to the orbit of the orthogonal projection P onto 𝔱. Taking X in 𝔱, the fixed points of T in the orbit Ad(K)⋅X are just the orbit under the Weyl group, W(X). So the convexity properties of the moment map imply that the image is the convex polytope with these vertices. A simplified direct version of the proof using moment maps was given later.
It was subsequently shown that a generalization of the convexity properties of the moment map could be used to treat the more general case of symmetric spaces. Let τ be a smooth involution of M which takes the symplectic form ω to −ω and such that t ∘ τ = τ ∘ t−1. Then M and the fixed point set of τ (assumed to be non-empty) have the same image under the moment map. To apply this, let T = exp 𝔞, a torus in G. If X is in 𝔞, as before the moment map yields the projection map onto 𝔞.
Let τ be the map τ(Y) = − σ(Y). The map above has the same image as that of the fixed point set of τ, i.e. Ad(K)⋅X. Its image is the convex polytope with vertices the image of the fixed point set of T on Ad(G)⋅X, i.e. the points w(X) for w in W = NK(T)/CK(T).
Further directions
In Kostant's original paper, the convexity theorem is deduced from a more general convexity theorem concerning the projection onto the component A in the Iwasawa decomposition G = KAN of a real semisimple Lie group G. The result discussed above for compact Lie groups K corresponds to the special case when G is the complexification of K: in this case the Lie algebra of A can be identified with i𝔱. The more general version of Kostant's theorem has also been generalized to semisimple symmetric spaces, and a generalization for infinite-dimensional groups has also been given.
Notes
References
Lie groups
Lie algebras
Homogeneous spaces
Mathematical theorems | Kostant's convexity theorem | [
"Physics",
"Mathematics"
] | 1,617 | [
"Lie groups",
"Mathematical structures",
"Group actions",
"Homogeneous spaces",
"Space (mathematics)",
"Topological spaces",
"Algebraic structures",
"nan",
"Geometry",
"Mathematical problems",
"Mathematical theorems",
"Symmetry"
] |
37,880,011 | https://en.wikipedia.org/wiki/Nanoc | Nanoc is a Ruby-based website compiler that generates static HTML. It supports compiling from various markup languages, including Markdown, Textile, and Haml. It can generate and lay out pages with a consistent look and feel. Nanoc is not a content management system; however, it acts somewhat like one.
Advantages of Nanoc
In comparison to other static site generators, Nanoc has a modular architecture.
Differences from traditional content management systems
Although Nanoc sometimes acts as a content management system (CMS), there are many differences.
Traditional CMSs must assemble the webpage every time a user requests it. Static HTML pages are pre-assembled, and as such do not have to be re-assembled for each request.
CMSs run using a server-side language, which exposes the CMS to all the vulnerabilities of the language. Since Nanoc compiles websites to static HTML, the only vulnerabilities are that of the web server itself.
The content managed by a CMS can usually be changed at any time through a web interface. Since Nanoc must recompile the website at every change, it is more difficult to modify a website.
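The build-once model behind these differences can be illustrated with a toy compiler. The sketch below is a Python stand-in for the idea, not Nanoc's actual Ruby Rules DSL; the layout string and file conventions are invented.

```python
from pathlib import Path

LAYOUT = "<html><body><h1>{title}</h1>{body}</body></html>"

def compile_site(src: Path, out: Path) -> None:
    """Render every source page into static HTML once, at compile time.
    Serving a request afterwards is just reading a file from `out`."""
    out.mkdir(exist_ok=True)
    for page in src.glob("*.txt"):
        title, _, body = page.read_text().partition("\n")
        html = LAYOUT.format(title=title, body=f"<p>{body.strip()}</p>")
        (out / f"{page.stem}.html").write_text(html)

# compile_site(Path("content"), Path("output"))
# Any change to the content requires re-running compile_site -- the
# trade-off noted above versus a CMS that assembles pages per request.
```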
References
External links
Official project website
Command-line software
Compilers
Free static website generators | Nanoc | [
"Technology"
] | 253 | [
"Command-line software",
"Computing commands"
] |
25,181,469 | https://en.wikipedia.org/wiki/Meiotic%20recombination%20checkpoint | The meiotic recombination checkpoint monitors meiotic recombination during meiosis, and blocks the entry into metaphase I if recombination is not efficiently processed.
Generally speaking, the cell cycle regulation of meiosis is similar to that of mitosis. As in the mitotic cycle, these transitions are regulated by combinations of different gene regulatory factors, the cyclin-Cdk complexes and the anaphase-promoting complex (APC). The first major regulatory transition occurs in late G1, when the start of the meiotic cycle is activated by Ime1 instead of Cln3/Cdk1 as in mitosis. The second major transition occurs at the entry into metaphase I. The main purpose of this step is to make sure that DNA replication has been completed without error so that the spindle pole bodies can separate. This event is triggered by the activation of M-Cdk in late prophase I. Then the spindle assembly checkpoint examines the attachment of microtubules at kinetochores, followed by initiation of metaphase I by APCCdc20. The distinctive chromosome separation of meiosis (homologous chromosomes in meiosis I and sister chromatids in meiosis II) requires a specific tension between homologous and non-homologous chromatids so that microtubule attachments can be distinguished, and it relies on programmed DNA double-strand breaks (DSBs) and their repair in prophase I. The meiotic recombination checkpoint can therefore be viewed as a kind of DNA damage response acting at a specific point in the cell cycle. On the other hand, the meiotic recombination checkpoint also makes sure that meiotic recombination does happen in every pair of homologs.
DSB-dependent pathway
The abrupt onset of M-Cdk activity in late prophase I depends on the positive transcriptional feedback loop consisting of Ime2, Ndt80 and the Cdk/cyclin complex. However, the activation of M-Cdk is controlled by the general phosphorylation switch Wee1/Cdc25. Wee1 activity is high in early prophase I, and the accumulation of Cdc25 activates M-Cdk by direct dephosphorylation and by the marking of Wee1 for degradation.
Meiotic recombination may begin with a double-strand break, either induced by Spo11 or by other endogenous or exogenous causes of DNA damage, and these DSBs must be repaired before metaphase I. The cell monitors these DSBs via the ATM pathway, in which Cdc25 is suppressed when a DSB lesion is detected. This pathway is the same as the classical DNA damage response and is the best understood part of the meiotic recombination checkpoint.
DSB-independent pathway
The DSB-independent pathway was proposed when researchers studied spo11 mutant cells in some species and found that these Spo11-deficient cells could not progress to metaphase I even in the absence of DSBs. The direct purpose of these DSBs is to help with the condensation of chromosomes. Even though the initial homolog pairing in early leptotene consists of just random interactions, further progression into presynaptic alignment depends on the formation of double-strand breaks and single-strand transfer complexes. Therefore the unsynapsed chromosomes in Spo11-deficient cells can be a target of the checkpoint. An AAA–adenosine triphosphatase (AAA-ATPase) was found to be essential in this pathway, but the mechanism is not yet clear. Some other studies also drew attention to sex body formation, and the signaling could be either structure-based or transcriptional, such as meiotic sex chromosome inactivation. Under this cascade, failure to synapse maintains gene expression from the sex chromosomes, and some of the products may inhibit cell cycle progression. Meiotic sex chromosome inactivation happens only in males, which may partially explain why only Spo11 mutant spermatocytes, but not oocytes, fail to transition from prophase I to metaphase I. However, asynapsis is not restricted to the sex chromosomes, and this transcriptional regulation was later extended to all chromosomes as meiotic silencing of unsynapsed chromatin, although the effector gene has not yet been found.
Meiotic checkpoint protein kinases CHEK1 and CHEK2
The central role in meiosis of human and mouse CHEK1 and CHEK2 and their orthologs in Saccharomyces cerevisiae, Caenorhabditis elegans, Schizosaccharomyces pombe and Drosophila has been reviewed by MacQueen and Hochwagen, and by Subramanian and Hochwagen. During meiotic recombination in human and mouse, CHEK1 protein kinase is important for integrating DNA damage repair with cell cycle arrest. CHEK1 is expressed in the testes and associates with meiotic synaptonemal complexes during the zygonema and pachynema stages. CHEK1 likely acts as an integrator for ATM and ATR signals and in monitoring meiotic recombination. In mouse oocytes CHEK1 appears to be indispensable for prophase I arrest and to function at the G2/M checkpoint.
CHEK2 regulates cell cycle progression and spindle assembly during mouse oocyte maturation and early embryo development. Although CHEK2 is a downstream effector of the ATM kinase, which responds primarily to double-strand breaks, it can also be activated by the ATR (ataxia-telangiectasia and Rad3 related) kinase, which responds primarily to single-strand breaks. In mouse, CHEK2 is essential for DNA damage surveillance in female meiosis. The response of oocytes to DNA double-strand break damage involves a pathway hierarchy in which ATR kinase signals to CHEK2, which then activates the p53 and p63 proteins.
In the fruitfly Drosophila, irradiation of germ line cells generates double-strand breaks that result in cell cycle arrest and apoptosis. The Drosophila CHEK2 ortholog mnk and the p53 ortholog dp53 are required for much of the cell death observed in early oogenesis when oocyte selection and meiotic recombination occur.
Meiosis-specific transcription factor Ndt80
Ndt80 is a meiosis-specific transcription factor required for successful completion of meiosis and spore formation. The protein recognizes and binds to the middle sporulation element (MSE) 5'-C[AG]CAAA[AT]-3' in the promoter region of stage-specific genes that are required for progression through meiosis and sporulation. The DNA-binding domain of Ndt80 has been isolated, and the structure reveals that this protein is a member of the Ig-fold family of transcription factors. Ndt80 also competes with the repressor SUM1 for binding to promoters containing MSEs.
Transitions in yeast
When a mutation inactivates Ndt80 in budding yeast, meiotic cells display a prolonged delay in late pachytene, the third stage of prophase. The cells display intact synaptonemal complexes but eventually arrest in the diffuse chromatin stage that follows pachytene. This checkpoint-mediated arrest prevents later events from occurring until earlier events have been executed successfully and prevents chromosome missegregation.
Role in cell cycle progression
Ndt80 is crucial for the completion of prophase and entry into meiosis 1, as it stimulates the expression of a large number of middle meiotic genes. Ndt80 is regulated through transcriptional and post-translational mechanisms (i.e. phosphorylation).
Interaction with Clb1
Ndt80 stimulates the expression of the B-type cyclin Clb1, which interacts strongly with Cdk1 during the meiotic divisions. Active complexes of Clb1 with Cdk1 play a large role in triggering the events of the first meiotic division, and their activity is restricted to meiosis 1.
Interaction with Ime2
Ndt80 stimulates expression of itself and expression of the protein kinase Ime2, both of which feed back to further stimulate Ndt80. This increased amount of Ndt80 protein further enhances the transcription of target genes. Early in meiosis 1, Ime2 activity rises and is required for the normal accumulation and activity of Ndt80. However, if Ndt80 is expressed prematurely, it will initially accumulate in an unmodified form. Ime2 can then also act as a meiosis-specific kinase that phosphorylates Ndt80, resulting in fully activated Ndt80.
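The mutual stimulation described above is a positive feedback loop, and such loops can behave like switches. The toy model below is not from the literature: a Hill-type self-activation term stands in for the Ime2/Ndt80 feedback, a first-order term for decay, and all rate constants are invented. Forward Euler integration shows that a small basal input stays low while a larger one flips the system to a high Ndt80 state.

```python
def simulate(basal: float, steps: int = 4000, dt: float = 0.01) -> float:
    """Integrate dn/dt = basal + feedback(n) - decay(n) by forward Euler."""
    n = 0.0  # active Ndt80 level, arbitrary units
    for _ in range(steps):
        production = basal + 2.0 * n**2 / (1.0 + n**2)  # self-activation
        n += dt * (production - 1.0 * n)                # decay rate 1.0
    return n

print("low basal input :", round(simulate(0.05), 3))   # stays low
print("high basal input:", round(simulate(0.50), 3))   # switches high
```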
Expression of Plk
Ndt80 stimulates the expression of the gene that encodes the polo-like kinase, Plk. This protein is activated in late pachytene and is needed for crossover formation and partial loss of cohesion from chromosome arms. Plk is also both necessary and sufficient to trigger exit from pachytene.
Recombination model
The meiotic recombination checkpoint operates in response to defects in meiotic recombination and chromosome synapsis, potentially arresting cells before entry into the meiotic divisions. Because recombination is initiated by double-strand breaks (DSBs) at certain regions of the genome, entry into meiosis 1 must be delayed until the DSBs are repaired. The meiosis-specific kinase Mek1 plays an important role in this and, recently, it has been discovered that Mek1 is able to phosphorylate Ndt80 independently of Ime2. This phosphorylation, however, is inhibitory and prevents Ndt80 from binding to MSEs in the presence of DSBs.
Roles outside of cell cycle progression
Heterokaryon Incompatibility
Heterokaryon incompatibility (HI) has been likened to a fungal immune system; it is a non-self recognition mechanism that is ubiquitous among filamentous members of the Ascomycota phylum of the kingdom Fungi. Vib-1 is an Ndt80 homologue in Neurospora crassa and is required for HI in this species. It has been found that mutations at the vib-1 locus suppress non-self recognition, and VIB-1 is required for the production of downstream effectors associated with HI, such as extracellular proteases.
Female sexual development
Studies have indicated that Ndt80 homologues also play a role in female sexual development in fungi species other than the more commonly studied Saccharomyces cerevisiae. Mutations in vib-1 have been found to affect the timing and development of female reproductive structures prior to fertilization.
Role in Cancer
Although usually characterized in yeast and other fungi, the DNA-binding domain of Ndt80 is homologous to a number of proteins in higher eukaryotes and the residues used for binding are highly conserved. In humans, the Ndt80 homologue C11orf9 is highly expressed in invasive or metastatic tumor cells, suggesting potential usage as a target molecule in cancer treatment. However, not much progress has been made on this front in recent years.
See also
Cell cycle checkpoint
References
DNA repair | Meiotic recombination checkpoint | [
"Biology"
] | 2,346 | [
"Molecular genetics",
"DNA repair",
"Cellular processes"
] |
25,183,424 | https://en.wikipedia.org/wiki/Dibutylamine | Dibutylamine is a colorless fluid with a fishy odor. It is an amine used as a corrosion inhibitor, in the manufacturing of emulsifiers, and as a flotation agent. It is flammable and toxic.
References
Alkylamines
Corrosion inhibitors
Secondary amines
Butyl compounds | Dibutylamine | [
"Chemistry"
] | 66 | [
"Corrosion inhibitors",
"Process chemicals"
] |
25,186,925 | https://en.wikipedia.org/wiki/Hartmann%20number | The Hartmann number (Ha) is the ratio of electromagnetic force to the viscous force, first introduced by Julius Hartmann (1881–1951) of Denmark. It is frequently encountered in fluid flows through magnetic fields. It is defined by:

Ha = B L √(σ/μ)
where
B is the magnetic field intensity
L is the characteristic length scale
σ is the electrical conductivity
μ is the dynamic viscosity
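A quick numerical illustration of the definition above; the property values are rough liquid-metal figures chosen only for illustration.

```python
from math import sqrt

def hartmann(b_tesla: float, length_m: float,
             sigma_s_per_m: float, mu_pa_s: float) -> float:
    """Ha = B * L * sqrt(sigma / mu), per the definition above."""
    return b_tesla * length_m * sqrt(sigma_s_per_m / mu_pa_s)

# e.g. a 0.5 T field across a 0.01 m duct of a conductive liquid
print(round(hartmann(0.5, 0.01, sigma_s_per_m=3.5e6, mu_pa_s=1.5e-3), 1))
```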
See also
Magnetohydrodynamics
References
Further reading
The Hartmann number is sometimes indicated by the letter M, in analogy with the Mach number in aerodynamics.
Dimensionless numbers of fluid mechanics
Fluid dynamics
Magnetohydrodynamics | Hartmann number | [
"Chemistry",
"Engineering"
] | 119 | [
"Piping",
"Magnetohydrodynamics",
"Chemical engineering",
"Fluid dynamics"
] |
25,187,611 | https://en.wikipedia.org/wiki/C12H14CaO12 | The molecular formula C12H14CaO12 (molar mass: 390.310 g/mol, exact mass: 390.0111 u) may refer to:
Calcium ascorbate
Calcium erythorbate
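The quoted molar mass can be checked by summing standard atomic weights over the formula:

```python
# Quick arithmetic check of the molar mass stated above for C12H14CaO12,
# using standard atomic weights in g/mol.
weights = {"C": 12.011, "H": 1.008, "Ca": 40.078, "O": 15.999}
composition = {"C": 12, "H": 14, "Ca": 1, "O": 12}
molar_mass = sum(weights[el] * n for el, n in composition.items())
print(round(molar_mass, 2))  # ~390.31, matching the stated value
```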
Molecular formulas | C12H14CaO12 | [
"Physics",
"Chemistry"
] | 62 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
25,187,803 | https://en.wikipedia.org/wiki/Distortion%20%28mathematics%29 | In mathematics, the distortion is a measure of the amount by which a function from the Euclidean plane to itself distorts circles to ellipses. If the distortion of a function is equal to one, then it is conformal; if the distortion is bounded and the function is a homeomorphism, then it is quasiconformal. The distortion of a function ƒ of the plane is given by

H(z, ƒ) = lim sup(r→0) max(|h|=r) |ƒ(z + h) − ƒ(z)| / min(|h|=r) |ƒ(z + h) − ƒ(z)|,
which is the limiting eccentricity of the ellipse produced by applying ƒ to small circles centered at z. This geometrical definition is often very difficult to work with, and the necessary analytical features can be extrapolated to the following definition. A mapping ƒ : Ω → R2 from an open domain in the plane to the plane has finite distortion at a point x ∈ Ω if ƒ is in the Sobolev space W1,1loc(Ω, R2), the Jacobian determinant J(x,ƒ) is locally integrable and does not change sign in Ω, and there is a measurable function K(x) ≥ 1 such that

|Df(x)|2 ≤ K(x) J(x,ƒ)

almost everywhere. Here Df is the weak derivative of ƒ, and |Df| is the Hilbert–Schmidt norm.
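The geometric definition can be estimated numerically by pushing a small circle through ƒ and comparing the largest and smallest image displacements. For the linear stretch ƒ(x, y) = (2x, y), used below as a test map, the limiting ratio of the image ellipse's axes is 2, so the map is 2-quasiconformal rather than conformal.

```python
import math

def f(x: float, y: float) -> tuple[float, float]:
    return 2.0 * x, y  # a simple quasiconformal test map

def distortion_estimate(x: float, y: float,
                        r: float = 1e-6, n: int = 720) -> float:
    """Ratio of max to min image displacement over a small circle."""
    fx, fy = f(x, y)
    d = []
    for k in range(n):
        t = 2 * math.pi * k / n
        ux, uy = f(x + r * math.cos(t), y + r * math.sin(t))
        d.append(math.hypot(ux - fx, uy - fy))
    return max(d) / min(d)

print(round(distortion_estimate(0.3, -0.7), 4))  # ~2.0
```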
For functions on a higher-dimensional Euclidean space Rn, there are more measures of distortion because there are more than two principal axes of a symmetric tensor. The pointwise information is contained in the distortion tensor
The outer distortion KO and inner distortion KI are defined via the Rayleigh quotients
The outer distortion can also be characterized by means of an inequality similar to that given in the two-dimensional case. If Ω is an open set in Rn, then a function has finite distortion if its Jacobian is locally integrable and does not change sign, and there is a measurable function KO (the outer distortion) such that

|Df(x)|n ≤ KO(x) J(x,ƒ)
almost everywhere.
See also
Deformation (mechanics)
References
Conformal mappings
Real analysis
Complex analysis
Topology
Measure theory
Euclidean geometry | Distortion (mathematics) | [
"Physics",
"Mathematics"
] | 392 | [
"Spacetime",
"Topology",
"Space",
"Geometry"
] |
50,708,946 | https://en.wikipedia.org/wiki/Genome%20Project%E2%80%93Write | The Genome Project–Write (also known as GP-Write) is a large-scale collaborative research project (an extension of genome projects, which have aimed at reading genomes since 1984) that focuses on the development of technologies for the synthesis and testing of genomes of many different species of microbes, plants, and animals, including the human genome in a sub-project known as Human Genome Project–Write (HGP-Write). Formally announced on 2 June 2016, the project leverages two decades of work on synthetic biology and artificial gene synthesis.
The newly created GP-Write project will be managed by the Center of Excellence for Engineering Biology, an American nonprofit organization. Researchers expect that the ability to artificially synthesize large portions of many genomes will result in many scientific and medical advances.
Science & development
In May 2021, GP-Write and Twist Bioscience launched a new CAD platform for whole genome design. The GP-Write CAD will automate workflows to enable collaborative efforts critical for scale-up from designing plasmids to megabases across entire genomes.
Microbial Genome Projects–Write
Technologies for constructing and testing yeast artificial chromosomes (YACs), synthetic yeast genomes (Sc2.0), and virus/phage-resistant bacterial genomes have industrial, agricultural, and medical applications.
Human Genome Project–Write
A complete haploid copy of the human genome consists of at least three billion DNA nucleotide base pairs, which have been described in the Human Genome Project–Read program (95% completed as of 2004). Among the many goals of GP-Write are the making of cell lines resistant to all viruses, and synthesis assembly lines to test variants of unknown significance that arise in research and diagnostic sequencing of human genomes (which has been improving exponentially in cost, quality, and interpretation).
See also
BRAIN Initiative
ENCODE
EuroPhysiome
Genome Compiler
HUGO Gene Nomenclature Committee
Human Cytome Project
Human Microbiome Project
Human Proteome Project
Human Protein Atlas
Human Variome Project
List of biological databases
Personal Genome Project
References
Further reading
A 361-page book examining the intellectual origins, history, and motivations of the project to map the human genome; draws on interviews with key figures.
Genome Project-Write: official information page of the consortium
National Human Genome Research Institute (NHGRI). NHGRI led the National Institutes of Health's contribution to the International Human Genome Project. This project, which had as its primary goal the sequencing of the three billion base pairs that make up human genome, was 95% complete in April 2004.
Biotechnology
Genome projects
Human Genome Project scientists
Life sciences industry | Genome Project–Write | [
"Engineering",
"Biology"
] | 536 | [
"Life sciences industry",
"Biotechnology",
"Human Genome Project scientists",
"nan",
"Genome projects",
"Human genome projects"
] |
50,713,436 | https://en.wikipedia.org/wiki/Data-independent%20acquisition | In mass spectrometry, data-independent acquisition (DIA) is a method of molecular structure determination in which all ions within a selected m/z range are fragmented and analyzed in a second stage of tandem mass spectrometry. Tandem mass spectra are acquired either by fragmenting all ions that enter the mass spectrometer at a given time (called broadband DIA) or by sequentially isolating and fragmenting ranges of m/z. DIA is an alternative to data-dependent acquisition (DDA) where a fixed number of precursor ions are selected and analyzed by tandem mass spectrometry.
Broadband
One of the first DIA approaches was a nozzle-skimmer dissociation method called shotgun collision-induced dissociation (CID). Fragmentation can occur in the ion source of the mass spectrometer by increasing the nozzle-skimmer voltage in electrospray ionization.
MSE is a broadband DIA technique that uses alternating low-energy CID and high-energy CID. The low-energy CID is used to acquire precursor ion mass spectra whereas the high-energy CID is used to obtain product ion information by tandem mass spectrometry.
Data analysis
Data analysis is generally challenging for DIA methods as the resulting fragment ion spectra are highly multiplexed. In DIA spectra therefore the direct relation between a precursor ion and its fragment ions is lost since the fragment ions in DIA spectra may potentially result from multiple precursor ions (any precursor ion present in the m/z range from which the DIA spectrum was derived).
One approach to DIA data analysis attempts to use the database search engines used in data-dependent acquisition to search the resulting multiplexed spectra. This approach can be improved by assigning individual fragment ions to precursor ions observed in precursor ion scans, using the elution profiles of the fragment ions and the precursor ions, and then searching the resulting "pseudo-spectra".
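The elution-profile idea can be sketched in a few lines: a fragment ion trace is assigned to whichever co-isolated precursor it co-elutes with best, and the assigned fragments form a pseudo-spectrum for that precursor. The chromatographic profiles below are invented toy data, and Pearson correlation stands in for the scoring used by real tools.

```python
import numpy as np

precursors = {
    "P1": np.array([0.0, 1.0, 4.0, 9.0, 4.0, 1.0, 0.0]),  # apex at scan 3
    "P2": np.array([0.0, 0.0, 1.0, 3.0, 8.0, 3.0, 1.0]),  # apex at scan 4
}
fragment = np.array([0.1, 0.8, 3.9, 8.6, 4.2, 1.1, 0.0])  # which parent?

def corr(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two elution profiles."""
    return float(np.corrcoef(a, b)[0, 1])

best = max(precursors, key=lambda p: corr(precursors[p], fragment))
print({p: round(corr(prof, fragment), 3) for p, prof in precursors.items()})
print("fragment assigned to", best)  # -> P1
```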
A second approach to DIA data analysis is based on a targeted analysis, also known as SWATH-MS (Sequential Windowed Acquisition of All Theoretical Fragment Ion Mass Spectra). This approach uses targeted extraction of fragment ion traces directly for identification and quantification without an explicit attempt to de-multiplex the DIA fragment ion spectra.
See also
Fragmentation (mass spectrometry)
Ion-mobility spectrometry–mass spectrometry
Targeted mass spectrometry
References
Further reading
Tandem mass spectrometry
Proteomics | Data-independent acquisition | [
"Physics"
] | 489 | [
"Mass spectrometry",
"Spectrum (physical sciences)",
"Tandem mass spectrometry"
] |
50,714,830 | https://en.wikipedia.org/wiki/Hydraulic%20Press%20Channel | Hydraulic Press Channel (HPC) is a YouTube channel operated by Finnish workshop owner Lauri Vuohensilta. Launched in October 2015, the channel publishes videos of various objects being crushed in a hydraulic press, as well as occasional experiments using different devices. On 31 October 2015, the channel published a video of Vuohensilta unsuccessfully attempting to fold a piece of paper more than seven times with the hydraulic press. The video was subsequently posted to the social news website Reddit in March 2016, causing it to receive more than two million views within a day.
The channel's unexpected success caused Vuohensilta to continue producing videos for the Hydraulic Press Channel. In June 2016, the channel became eligible for both the silver and the gold YouTube Play Buttons, leading to his attempt to crush the silver one with the press. Analysis of the channel's success often cites the excitement of the unexpected results, Vuohensilta's sense of humor, and his distinctive Finnish accent.
Overview
Each video begins with an intro, in which Vuohensilta announces, "Welcome to the Hydraulic Press Channel". He then introduces one or more objects that he is going to crush using the hydraulic press.
Objects that have been crushed using the press include a golf ball, a book, a rubber duck, a bearing ball, a bowling ball and pin, a hockey puck, Lego toys, a Nokia 3310, a Barbie doll, a diamond, and multiple smaller hydraulic presses. Videos may also feature the press crushing an assortment of items, such as explosive materials, objects that have been placed in liquid nitrogen, fruits, and Australian memorabilia.
Originally at the end of each video, after the outro, a clay figure made by his then wife, Anni Vuohensilta, often described by Lauri as "very dangerous", was "dealt with" by the press as "extra content" of the day.
Hydraulic press specifications
It is noted in the "how to use hydraulic press" video that the press weighs .
The press can exert of force. The main pump maxes out at 100 tonnes of force, then an additional smaller pump supplies the remainder of the total force.
Vuohensilta noted that the green colour was painted by him and that the press was not always green.
Since 2023, a new press capable of exerting 300 tonnes of force has been used in partnership with Dutch manufacturer Profi Press.
In addition to this, Vuohensilta has constructed a bulletproof concrete bunker for the press to be enclosed in, allowing for more dangerous experiments to be performed.
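For a sense of scale, the tonne-force ratings quoted above convert to SI units as follows; the piston bore used in the pressure estimate is an invented figure, not a published specification.

```python
import math

G = 9.80665  # standard gravity, m/s^2

# Convert the quoted tonne-force ratings to SI units of force.
for tonnes in (100, 300):
    print(f"{tonnes} t is roughly {tonnes * 1000 * G / 1e6:.2f} MN")

# Hydraulic pressure that would deliver the 300 t rating on a
# hypothetical 0.30 m diameter piston (p = F / A).
area_m2 = math.pi * 0.15 ** 2
print(f"required pressure: {300 * 1000 * G / area_m2 / 1e6:.0f} MPa")
```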
History
The channel officially launched on 6 October 2015. Living in Tampere, Finland, Vuohensilta was inspired to create the Hydraulic Press Channel after discovering other YouTube channels committed to destroying objects, especially a channel called carsandwater, popular for videos of a man using a red-hot ball of nickel to melt various objects. Although Vuohensilta originally promised a new video every week, the channel became dormant after uploading a video on 31 October 2015 of him attempting unsuccessfully to fold a piece of paper more than seven times with the hydraulic press. The paper explosively collapsed into a brittle, stone-like material at the seventh fold. Thomas Amidon, a paper engineering professor at the State University of New York College of Environmental Science and Forestry, speculated in an interview with Popular Science that the cause of the explosion might have been the collapse of calcium carbonate within the paper, which had provided it with stiffness and opacity.
Despite its dormancy, the channel received widespread attention in March 2016 after the paper video was submitted to the social news website Reddit and subsequently received more than two million views within 24 hours. Following its unexpected popularity, the channel began publishing videos again, with new videos typically receiving over a million views within days of their release. The channel's success allowed Vuohensilta to enter a deal with a 3D printing company and receive a 3D printer. Vuohensilta planned to first use the printer to make more sophisticated safety equipment, and then to allow people to send him "... earmarks of stuff that they want to be crushed, and then I can just print them out here and crush them and make the video."
In June 2016, the channel was awarded the silver YouTube Play Button, in commemoration of the channel reaching 100,000 subscribers. On 20 June 2016, the channel uploaded a video in which Vuohensilta attempts to crush the trophy using the press. Also in June 2016, the channel reached its millionth subscriber, making it eligible for the gold Play Button. Vuohensilta considered acquiring a more powerful press to accommodate the achievement.
The fourth most-viewed upload on the channel, with 24 million views, was produced in 2017 in partnership with 20th Century Fox to promote the then-upcoming film Logan, in which Vuohensilta tries to crush an "adamantium" bearing ball and then Wolverine's claws, both resulting in damage or destruction of the "hardened" pressing tool.
Anni Vuohensilta took leave of the channel in 2021 due to burnout and mental health issues, as well as having lost interest. In December 2022, Lauri and Anni announced their amicable divorce.
Since 2023, Lauri has run the channel with his new partner, Hanna Korpisaari.
Response
Brad Reed, in an article published in Boy Genius Report, wrote that the "couple’s reactions are part of what make the videos so funny", highlighting Vuohensilta's wife Anni's laugh, which can frequently be heard in the background of the Hydraulic Press Channel's videos, as well as Vuohensilta's tendency to "state the obvious in a fairly deadpan manner". Vuohensilta has a distinctive Finnish accent, which he believes influenced the Hydraulic Press Channel's success. Jesse Singal, in an article published on New York magazine's website, wrote that the channel attracts viewers by combining the "tension" created by the hydraulic press's destructive power with Vuohensilta's "goofy nerdiness".
Vuohensilta also attributes the channel's attractiveness to the excitement of the explosions and the unexpected results, as well as "the humor value of everything, my accent and stupid jokes". In 2017, the channel received a Shorty Award in the "weird" category. They received two copies of the award and crushed one of them.
Beyond the Press channel
In April 2016, the Vuohensilta couple opened a secondary YouTube channel called Beyond the Press, featuring behind-the-scenes material from the Hydraulic Press Channel. The video content includes, for example, everyday work in the workshop and experimental videos, as well as various creative ways of exploding or destroying things without the hydraulic press. The channel has over 700,000 subscribers.
References
External links
Hydraulic Press Channel on YouTube
Beyond the Press, one of their secondary channels containing extra video content
2015 establishments in Finland
Comedy-related YouTube channels
English-language mass media in Finland
English-language YouTube channels
Entertainment-related YouTube channels
Finnish YouTubers
Hydraulic engineering
Tampere
Year of birth missing (living people)
YouTube channels launched in 2015 | Hydraulic Press Channel | [
"Physics",
"Engineering",
"Environmental_science"
] | 1,480 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
50,715,374 | https://en.wikipedia.org/wiki/Utako%20Okamoto | Utako Okamoto (1918–2016) was a Japanese medical doctor working as a medical scientist who discovered tranexamic acid in the 1950s in her quest to find a drug that would treat bleeding after childbirth (post-partum haemorrhage). After publishing results in 1962 she became a chair at Kobe Gakuin University, where she worked from 1966 until her retirement in 1990. Okamoto's career was hampered by a very male-dominated environment.
During her lifetime she was unable to persuade obstetricians at Kobe to trial the antifibrinolytic agent, which became a drug on the WHO list of essential medicines in 2009. She lived to see the start in 2010 of the study of tranexamic acid in 20,000 women with post-partum haemorrhage, but died before its completion in 2016 and the 2017 publication of the results, which showed, as she had predicted, that tranexamic acid prevents fatalities.
Education
Okamoto began studying dentistry in 1936. She very soon switched to medicine enrolling at the Tokyo Women's Medical University and graduated in December 1941.
Career
In January 1942, Okamoto started out as a research assistant at Tokyo Women's Medical University researching the cerebellum under a neurophysiologist who "created many more opportunities for [women] than were otherwise available at the time."
After World War II and the Second Sino-Japanese War ended in 1945, she moved to Keio University in Shinanomachi in Tokyo. As resources were scarce, she and her husband Shosuke Okamoto changed to research on blood: "If there was not enough we could simply use our own". They hoped to find a treatment for post-partum haemorrhage, a potent drug to stop bleeding after childbirth. They began by studying epsilon-amino-caproic acid (EACA). They then studied a related chemical, 1-(aminomethyl)-cyclohexane-4-carboxylic acid (AMCHA), also known as tranexamic acid. The Okamotos found it was 27 times as powerful, and thus a promising hemostatic agent, and published their findings in the Keio Journal of Medicine in 1962.
In 1966, Okamoto was granted a chair at Kobe Gakuin University. In 1980, she founded a local Committee for Projects on Thrombosis and Haemostasis with Shosuke, who also worked at Kobe. She retired from the University in 1990. After her husband died in 2004, she led the committee until 2014. She could never persuade obstetricians to trial the drug in post-partum hemorrhage.
Achievements
Tranexamic acid's value remained unappreciated for years, and it was not until 2009 that it was included on the WHO list of essential medicines, for use during cardiac surgery.
In 2010, a large randomised controlled trial in trauma patients showed its remarkable benefit if given within 3 hours of injury.
Also in 2010, the WOMAN (World Maternal Antifibrinolytic) trial began, a randomised, double-blind, placebo-controlled study of tranexamic acid in 20,060 women with post-partum haemorrhage. Enrollment was completed in 2016, and in April 2017 the results were published, showing that tranexamic acid reduced deaths among the 10,036 treated women versus the 9,985 on placebo, with no adverse effects.
Obstacles
In male dominated Japan, Okamoto had to fight against sexism. She had a supervisor sympathetic to women in science during the early stages of her career.
However she and a coworker were asked to leave a pediatric conference, because the event was not for "women and children" (onna kodomo), a term she said in a 2012 interview she had never heard before.
After she had presented her research for the first time, the male audience members ridiculed her by asking if she was going to dance for them.
In the video interview, Okamoto said: "Men are always aware of the fundamental differences between men and women, and so cannot help but think of themselves as superior. So I used that to my advantage by stroking their egos. [...] Until [I had a child] I could compensate for the disadvantages of being a woman by working longer hours—10 hours per day instead of the 8 that the men worked." At Keio University, she could not find day care for her daughter and brought her to the laboratory, "[hoping] that she would behave herself". She carried her on her back as an infant while working in the lab.
Personal life
Utako Okamoto was married to Shosuke Okamoto and at her death was survived by one daughter, Kumi Nakamura.
She had one miscarriage, which she said was not related to overworking but "coming home late from work".
Ian Roberts, Professor of Epidemiology and Public Health at the London School of Hygiene & Tropical Medicine who had been coordinating the 2010 trauma trial visited Okamoto, then about 92 in Japan. He said that he "found a fascinating character, really lively and vigorous and still very much engaged with research, meeting with researchers, and reading journal articles".
See also
Obstetrical bleeding
Women in Japan
Sexism in academia
References
External links
CRASH-2 Utako Okamoto, 15 min video, YouTube, TheLancetTV, 13 December 2013, accessed 3 June 2016
2016 deaths
Japanese medical researchers
1918 births
Japanese women academics
Physicians from Tokyo
Drug discovery
Discrimination in Japan
20th-century Japanese physicians
20th-century Japanese women physicians
21st-century Japanese physicians
21st-century Japanese women physicians | Utako Okamoto | [
"Chemistry",
"Biology"
] | 1,169 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
50,718,540 | https://en.wikipedia.org/wiki/Atomic%20Data%20and%20Nuclear%20Data%20Tables | Atomic Data and Nuclear Data Tables is a quarterly peer-reviewed scientific journal covering nuclear physics. It is published by Elsevier and was established in 1969. The journal was established with the aid of Katharine Way, who later served as its editor until 1973. As of 2016, Boris Pritychenko is the journal's editor-in-chief.
Abstracting and indexing
The journal is abstracted and indexed in:
Chemical Abstracts Service
Current Contents/Physics, Chemical, & Earth Sciences
Energy Research Abstracts
Science Citation Index
Scopus
According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.623.
References
Bibliography
External links
Elsevier academic journals
Nuclear physics journals
English-language journals
Quarterly journals
Academic journals established in 1969 | Atomic Data and Nuclear Data Tables | [
"Physics"
] | 148 | [
"Nuclear physics journals",
"Nuclear and atomic physics stubs",
"Nuclear physics"
] |
52,170,563 | https://en.wikipedia.org/wiki/CD4%2B/CD8%2B%20ratio | The CD4+/CD8+ ratio is the ratio of T helper cells (with the surface marker CD4) to cytotoxic T cells (with the surface marker CD8). Both CD4+ and CD8+ T cells contain several subsets.
The CD4+/CD8+ ratio in the peripheral blood of healthy adults and mice is about 2:1, and an altered ratio can indicate diseases relating to immunodeficiency or autoimmunity. An inverted CD4+/CD8+ ratio (namely, less than 1/1) indicates an impaired immune system. Conversely, an increased CD4+/CD8+ ratio corresponds to increased immune function.
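As a trivial illustration of the arithmetic described above, the following sketch computes the ratio from absolute T-cell counts and flags an inverted result. The thresholds follow the conventions stated in this article (about 2:1 in healthy adults, inversion below 1:1); the counts themselves are hypothetical.

```python
def cd4_cd8_ratio(cd4_count: float, cd8_count: float) -> float:
    """Return the CD4+/CD8+ ratio from absolute T-cell counts (cells/uL)."""
    if cd8_count <= 0:
        raise ValueError("CD8+ count must be positive")
    return cd4_count / cd8_count

def interpret(ratio: float) -> str:
    # A ratio below 1.0 is "inverted" and indicates an impaired immune
    # system; ~2.0 is typical in healthy adults.
    if ratio < 1.0:
        return "inverted ratio: possible impaired immune function"
    return "ratio within the typical range"

if __name__ == "__main__":
    # Hypothetical flow-cytometry counts, for illustration only.
    print(cd4_cd8_ratio(800, 400))              # 2.0 -> typical healthy value
    print(interpret(cd4_cd8_ratio(350, 700)))   # 0.5 -> inverted
```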
Obesity and dysregulated lipid metabolism in the liver lead to loss of CD4+, but not CD8+ cells, contributing to the induction of liver cancer. Regulatory CD4+ cells decline with expanding visceral fat, whereas CD8+ T-cells increase.
Decreased ratio with infection
A reduced CD4+/CD8+ ratio is associated with reduced resistance to infection.
Patients with tuberculosis show a reduced CD4+/CD8+ ratio.
HIV infection leads to low levels of CD4+ T cells (lowering the CD4+/CD8+ ratio) through a number of mechanisms, including the killing of infected CD4+ T cells. Acquired immunodeficiency syndrome (AIDS) is (by one definition) a CD4+ T cell count below 200 cells per μL. HIV progresses with declining numbers of CD4+ cells and expanding numbers of CD8+ cells (especially CD8+ memory cells), resulting in high morbidity and mortality. When CD4+ T cell numbers decline below a critical level, cell-mediated immunity is lost, and the body becomes progressively more susceptible to opportunistic infections. A declining CD4+/CD8+ ratio has been found to be a prognostic marker of HIV disease progression.
COVID-19
In COVID-19, B cell, natural killer cell, and total lymphocyte counts decline, but both CD4+ and CD8+ cells decline to a far greater extent. Low CD4+ predicted greater likelihood of intensive care unit admission, and CD4+ cell count was the only parameter that predicted length of time for viral RNA clearance.
Decreased ratio with aging
A declining CD4+/CD8+ ratio is associated with ageing, and is an indicator of immunosenescence. Compared to CD4+ T-cells, CD8+ T-cells show a greater increase in adipose tissue in obesity and aging, thereby reducing the CD4+/CD8+ ratio. Amplification of CD8+ cell numbers is required for adipose tissue inflammation and macrophage infiltration, whereas numbers of CD4+ cells are reduced under those conditions. Antibodies against CD8+ T-cells reduce inflammation associated with diet-induced obesity, indicating that CD8+ T-cells are an important cause of the inflammation. CD8+ cell recruitment of macrophages into adipose tissue can initiate a vicious cycle of further recruitment of both cell types.
Elderly persons commonly have a CD4+/CD8+ ratio less than one. A study of Swedish elderly found that a CD4+/CD8+ ratio less than one was associated with short-term likelihood of death.
Immunological aging is characterized by low proportions of naive CD8+ cells and high numbers of memory CD8+ cells, particularly when cytomegalovirus is present. Exercise can reduce or reverse this effect, when not done at extreme intensity and duration.
Both effector helper T cells (Th1 and Th2) and regulatory T cells (Treg) cells have a CD4 surface marker, such that although total CD4+ T cells decrease with age, the relative percent of CD4+ T cells increases. The increase in Treg with age results in suppressed immune response to infection, vaccination, and cancer, without suppressing the chronic inflammation associated with aging.
See also
Helper/suppressor ratio
List of distinct cell types in the adult human body
References
Clusters of differentiation
Immunology
T cells | CD4+/CD8+ ratio | [
"Biology"
] | 856 | [
"Immunology"
] |
52,170,925 | https://en.wikipedia.org/wiki/Magnetic%20resonance%20enterography | Magnetic resonance enterography is a magnetic resonance imaging technique used to evaluate bowel wall features of both upper and lower gastro-intestinal tract, although it is usually used for small bowel evaluation. It is a less invasive technique with the advantages of no ionizing radiation exposure, multiplanarity and high contrast resolution for soft tissue.
The terms MR enterography and MR enteroclysis are similar, but the first refers to an MR exam with orally administered enteric contrast media, and the second to a more invasive technique in which enteric contrast media are administered through a fluoroscopy-guided nasojejunal tube.
The need for imaging assessment of small bowel diseases comes from the limits of traditional endoscopy in evaluating ileal loops, while other modern techniques such as capsule endoscopy are not routinely performed because they are seldom available in most centers. For many years, assessment of small bowel diseases was performed by barium follow-through, or upper and lower gastrointestinal series, which provided plain films of the bowel loop lumen after swallowing or instillation of radiopaque agents mixed with water or other neutral contrast media. Gastrointestinal series can depict lumen caliber, gross mucosal alterations, and wide fistulous tracts, but are poorly diagnostic for submucosal or extraluminal features. CT instead provides cross-sectional and multiplanar images of intraluminal, extra-mucosal, extra-luminal, or even extra-enteric features, but at the cost of a higher radiation dose.
The spread of MR techniques has revolutionized the diagnostic imaging of small bowel loops, restricting CT to particular situations, such as emergencies or MR contraindications (patients with pacemaker implants, recently implanted vascular or biliary stents, or other ferromagnetic prostheses/devices). It is a safe, multiplanar imaging modality with high soft tissue contrast resolution that involves no exposure to ionizing radiation, so it is suitable for young patients or when several follow-ups are required.
Preparation
Cathartic preparation should be performed in order to clear residual stool from bowel loops and allow better visualization of mucosal features as well as easier luminal distention. This type of preparation usually implies a fiber-restricted diet and intake of a water solution with laxative effect a few days before the exam, and abstaining from food intake starting six hours prior to the study.
Use of enteric contrast media is recommended, aiming to distend small bowel loops, and it is administered orally at regular intervals approximately 40 minutes before the study.
The type of endoluminal contrast medium varies among negative contrast media (superparamagnetic agents that produce low signal on both T1- and T2-weighted images), positive contrast media (paramagnetic agents that produce high signal on both sequences), and biphasic contrast media (which give high signal intensity on T2-weighted and low intensity on T1-weighted images).
The latter, which consists of water, methyl cellulose, or polyethylene glycol, is the most used because of its wide availability, low cost, good patient compliance, and good taste. A water enema may be administered as well in order to distend the bowel loops (MR colonography).
Intravenous contrast media increase the diagnostic capability of MR enterography. Although they are better tolerated than the iodinated contrast media used for CT, the use of a gadolinium-based contrast agent should always be preceded by an assessment of kidney function, in order to reduce the risk of nephrogenic systemic fibrosis, and by a prophylactic protocol in case of previous allergic reactions.
Antispasmodic agents may be used to reduce the motion artifacts due to peristalsis.
Protocol
High field MR scanners and the use of multi-channel phased array surface coil are suggested in order to obtain adequately diagnostic images.
The subject drinks 1.5 litres of oral contrast (3% mannitol) over 30 to 45 minutes before the scan. After that, venous access is obtained and Buscopan (hyoscine butylbromide) is given to reduce gastrointestinal tract movement, thus reducing motion artifacts on the MRI scan.
The patient is placed in the prone position, which provides better separation of bowel loops and reduces breathing-movement artifacts. Although MR enterography protocols may vary among different hospitals/institutions, the main sequences are the following:
Axial and coronal balanced steady-state free precession imaging (SSFP, commercial name FISP)
Axial and coronal single-shot-fast spin echo (commercial name HASTE) with fat saturation
Axial and coronal 3D spoiled gradient echo (commercial name VIBE) before and after gadolinium contrast administration
Axial Diffusion Weighted Imaging (DWI) sequences, using at least 2 B-value
Cine loop technique using SSFP sequences
Indications
The most common indication of MR enterography is diagnosis and follow up of inflammatory and neoplastic small bowel disease.
Risks and contraindications
Risks and contraindications are the same of any MR exam.
References
Fidler JL, Guimaraes L, Einstein DM. MR Imaging of the Small Bowel. RadioGraphics 2009; 29:1811–1825
Ilangovan R, Burling D, George A, Gupta A, Marshall M, and Taylor SA. CT enterography: review of technique and practical tips. Br J Radiol. 2012 Jul; 85(1015): 876–886
Lo Re G, Midiri M, et al. Crohn's disease. Radiological features and clinical-surgical correlations; Chap. 12:107–113; Chap. 14:128–133
Magnetic resonance imaging | Magnetic resonance enterography | [
"Chemistry"
] | 1,199 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
33,835,047 | https://en.wikipedia.org/wiki/Synthetic%20biological%20circuit | Synthetic biological circuits are an application of synthetic biology where biological parts inside a cell are designed to perform logical functions mimicking those observed in electronic circuits. Typically, these circuits are categorized as either genetic circuits, RNA circuits, or protein circuits, depending on the types of biomolecule that interact to create the circuit's behavior. The applications of all three types of circuit range from simply inducing production to adding a measurable element, like green fluorescent protein, to an existing natural biological circuit, to implementing completely new systems of many parts.
The goal of synthetic biology is to generate an array of tunable and characterized parts, or modules, with which any desirable synthetic biological circuit can be easily designed and implemented. These circuits can serve as a method to modify cellular functions, create cellular responses to environmental conditions, or influence cellular development. By implementing rational, controllable logic elements in cellular systems, researchers can use living systems as engineered "biological machines" to perform a vast range of useful functions.
History
The first natural gene circuit studied in detail was the lac operon. In studies of diauxic growth of E. coli on two-sugar media, Jacques Monod and Francois Jacob discovered that E.coli preferentially consumes the more easily processed glucose before switching to lactose metabolism. They discovered that the mechanism that controlled the metabolic "switching" function was a two-part control mechanism on the lac operon. When lactose is present in the cell the enzyme β-galactosidase is produced to convert lactose into glucose or galactose. When lactose is absent in the cell the lac repressor inhibits the production of the enzyme β-galactosidase to prevent any inefficient processes within the cell.
The lac operon is used in the biotechnology industry for production of recombinant proteins for therapeutic use. The gene or genes for producing an exogenous protein are placed on a plasmid under the control of the lac promoter. Initially the cells are grown in a medium that does not contain lactose or other sugars, so the new genes are not expressed. Once the cells reach a certain point in their growth, isopropyl β-D-1-thiogalactopyranoside (IPTG) is added. IPTG, a molecule similar to lactose, but with a sulfur bond that is not hydrolyzable so that E. coli does not digest it, is used to activate or "induce" the production of the new protein. Once the cells are induced, it is difficult to remove IPTG from the cells and therefore it is difficult to stop expression.
Two early examples of synthetic biological circuits were published in Nature in 2000. One, by Tim Gardner, Charles Cantor, and Jim Collins working at Boston University, demonstrated a "bistable" switch in E. coli. The switch is turned on by heating the culture of bacteria and turned off by addition of IPTG. They used green fluorescent protein as a reporter for their system. The second, by Michael Elowitz and Stanislas Leibler, showed that three repressor genes could be connected to form a negative feedback loop termed the Repressilator that produces self-sustaining oscillations of protein levels in E. coli.
Currently, synthetic circuits are a burgeoning area of research in systems biology with more publications detailing synthetic biological circuits published every year. There has been significant interest in encouraging education and outreach as well: the International Genetically Engineered Machines Competition manages the creation and standardization of BioBrick parts as a means to allow undergraduate and high school students to design their own synthetic biological circuits.
Interest and goals
Both immediate and long-term applications exist for synthetic biological circuits, including applications in metabolic engineering and synthetic biology. Successfully demonstrated applications include pharmaceutical production and fuel production. However, methods involving direct genetic introduction are not inherently effective without invoking the basic principles of synthetic cellular circuits. For example, each of these successful systems employs a method to introduce all-or-none induction or expression: a biological circuit in which a simple repressor or promoter is introduced to facilitate creation of the product or inhibition of a competing pathway. However, limited understanding of cellular networks and natural circuitry hinders the implementation of more robust schemes with more precise control and feedback. Therein lies the immediate interest in synthetic cellular circuits.
Development in understanding cellular circuitry can lead to exciting new modifications, such as cells which can respond to environmental stimuli. For example, cells could be developed that signal toxic surroundings and react by activating pathways used to degrade the perceived toxin. To develop such a cell, it is necessary to create a complex synthetic cellular circuit which can respond appropriately to a given stimulus.
Given synthetic cellular circuits represent a form of control for cellular activities, it can be reasoned that with complete understanding of cellular pathways, "plug and play" cells with well defined genetic circuitry can be engineered. It is widely believed that if a proper toolbox of parts is generated, synthetic cells can be developed implementing only the pathways necessary for cell survival and reproduction. From this cell, to be thought of as a minimal genome cell, one can add pieces from the toolbox to create a well defined pathway with appropriate synthetic circuitry for an effective feedback system. Because of the basic ground up construction method, and the proposed database of mapped circuitry pieces, techniques mirroring those used to model computer or electronic circuits can be used to redesign cells and model cells for easy troubleshooting and predictive behavior and yields.
Example circuits
Oscillators
Repressilator
Mammalian tunable synthetic oscillator
Bacterial tunable synthetic oscillator
Coupled bacterial oscillator
Globally coupled bacterial oscillator
Elowitz et al. and Fung et al. created oscillatory circuits that use multiple self-regulating mechanisms to create a time-dependent oscillation of gene product expression.
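The oscillation produced by such circuits can be reproduced numerically with the standard dimensionless three-gene Repressilator model. The sketch below is illustrative only: the parameter values are commonly used demonstration values, not fitted to any particular experiment.

```python
# Minimal sketch of the dimensionless Repressilator model: three genes in a
# cycle, each protein repressing the next gene's transcription.
from scipy.integrate import solve_ivp

alpha, alpha0, beta, n = 216.0, 0.216, 5.0, 2.0  # illustrative values

def repressilator(t, y):
    m1, m2, m3, p1, p2, p3 = y
    dm1 = -m1 + alpha / (1 + p3**n) + alpha0   # gene 1 repressed by protein 3
    dm2 = -m2 + alpha / (1 + p1**n) + alpha0   # gene 2 repressed by protein 1
    dm3 = -m3 + alpha / (1 + p2**n) + alpha0   # gene 3 repressed by protein 2
    dp1 = -beta * (p1 - m1)                    # translation and protein decay
    dp2 = -beta * (p2 - m2)
    dp3 = -beta * (p3 - m3)
    return [dm1, dm2, dm3, dp1, dp2, dp3]

sol = solve_ivp(repressilator, (0, 100), [1, 2, 3, 0, 0, 0], dense_output=True)
# Protein 1 rises and falls cyclically; sample a few time points to see it.
for t in (60, 70, 80, 90):
    print(t, round(float(sol.sol(t)[3]), 1))
```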
Bistable switches
Toggle-switch
Gardner et al. used mutual repression between two control units to create an implementation of a toggle switch capable of controlling cells in a bistable manner: transient stimuli resulting in persistent responses.
Gene regulation is an essential part of developmental processes. During development, genes are turned on and off in different tissues; changes in regulatory mechanisms may result in genetic switching in a bistable system. The gene switches serve as binding sites for regulatory molecules: proteins that activate transcription when they bind a gene switch, thereby expressing a gene that can operate as a memory device, allowing cell fate decisions to be made and maintained.
The toggle switch operates using two mutually inhibitory genes: each promoter is inhibited by the repressor that is transcribed from the opposing promoter. Toggle switch design: inducer 1 inactivates repressor 1, so repressor 2 is produced; repressor 2, in turn, stops transcription of the repressor 1 gene and the reporter gene.
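As a rough illustration, the mutual-repression motif just described can be written as a pair of ODEs in the dimensionless form popularized by Gardner et al. The parameters below (alpha = 10, n = 2) are assumptions chosen only to place the system in the bistable regime.

```python
# Minimal sketch of the two-repressor toggle switch.
from scipy.integrate import solve_ivp

alpha, n = 10.0, 2.0   # repressor strength and cooperativity (assumed values)

def toggle(t, y):
    u, v = y
    du = alpha / (1 + v**n) - u   # repressor 1, inhibited by repressor 2
    dv = alpha / (1 + u**n) - v   # repressor 2, inhibited by repressor 1
    return [du, dv]

# Two different transient stimuli (initial conditions) settle into two
# different persistent states: the "memory" behavior described above.
for u0, v0 in [(5.0, 0.1), (0.1, 5.0)]:
    sol = solve_ivp(toggle, (0, 50), [u0, v0])
    print([round(float(x), 1) for x in sol.y[:, -1]])  # ~(9.9, 0.1) or (0.1, 9.9)
```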
Logical operators
Analog tuners
Using negative feedback and identical promoters, linearizer gene circuits can impose uniform gene expression that depends linearly on extracellular chemical inducer concentration.
Controllers of gene expression heterogeneity
Synthetic gene circuits can control gene expression heterogeneity independently of the gene expression mean.
Other engineered systems
Engineered systems are the result of implementation of combinations of different control mechanisms. A limited counting mechanism was implemented by a pulse-controlled gene cascade and application of logic elements enables genetic "programming" of cells as in the research of Tabor et al., which synthesized a photosensitive bacterial edge detection program.
Circuit design
Recent developments in artificial gene synthesis and the corresponding increase in competition within the industry have led to a significant drop in price and wait time of gene synthesis and helped improve methods used in circuit design. At the moment, circuit design is improving at a slow pace because of insufficient organization of known multiple gene interactions and mathematical models. This issue is being addressed by applying computer-aided design (CAD) software to provide multimedia representations of circuits through images, text and programming language applied to biological circuits. Some of the more well known CAD programs include GenoCAD, Clotho framework and j5. GenoCAD uses grammars, which are either opensource or user generated "rules" which include the available genes and known gene interactions for cloning organisms. Clotho framework uses the Biobrick standard rules.
References
External links
Gene regulation: Towards a circuit engineering discipline
Synthetic Genetic Oscillators
Synthetic biology | Synthetic biological circuit | [
"Engineering",
"Biology"
] | 1,727 | [
"Synthetic biology",
"Biological engineering",
"Molecular genetics",
"Bioinformatics"
] |
33,835,279 | https://en.wikipedia.org/wiki/Verlinde%20algebra | In mathematics, a Verlinde algebra is a finite-dimensional associative algebra introduced by , with a basis of elements φλ corresponding to primary fields of a rational two-dimensional conformal field theory, whose structure constants N describe fusion of primary fields.
Verlinde formula
In terms of the modular S-matrix, the fusion coefficients are given by

$$N_{\lambda\mu}^{\;\nu} = \sum_{\sigma} \frac{S_{\lambda\sigma}\, S_{\mu\sigma}\, \bar{S}_{\nu\sigma}}{S_{0\sigma}},$$

where $\bar{S}$ is the component-wise complex conjugate of $S$.
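The formula can be checked numerically in a case where the S-matrix is known in closed form. The sketch below assumes the standard S-matrix of the SU(2) WZW model at level k, S_ab = sqrt(2/(k+2)) sin(π(a+1)(b+1)/(k+2)) for a, b = 0, ..., k, and recovers the familiar truncated SU(2) fusion rules as (numerically) non-negative integers.

```python
# Numerical sanity check of the Verlinde formula for SU(2) at level k.
import numpy as np

k = 3
labels = np.arange(k + 1)
S = np.sqrt(2.0 / (k + 2)) * np.sin(
    np.pi * np.outer(labels + 1, labels + 1) / (k + 2)
)

def N(lam, mu, nu):
    # Verlinde formula: sum_sigma S[lam,s] S[mu,s] conj(S[nu,s]) / S[0,s]
    return np.sum(S[lam] * S[mu] * np.conj(S[nu]) / S[0])

# Fusing each primary with the fundamental (label 1) yields a 0/1 pattern,
# the adjacency structure of the truncated SU(2) fusion rules.
for lam in labels:
    print([int(round(float(N(lam, 1, nu)))) for nu in labels])
```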
Twisted equivariant K-theory
If G is a compact Lie group, there is a rational conformal field theory whose primary fields correspond to the representations λ at some fixed level of the loop group of G. For this special case, Freed, Hopkins, and Teleman showed that the Verlinde algebra can be identified with the twisted equivariant K-theory of G.
See also
Fusion rules
Notes
References
MathOverflow discussion with a number of references.
Representation theory
Conformal field theory | Verlinde algebra | [
"Mathematics"
] | 179 | [
"Representation theory",
"Fields of abstract algebra"
] |
42,057,256 | https://en.wikipedia.org/wiki/Hemispherical%20resonator%20gyroscope | The hemispherical resonator gyroscope (HRG), also called wine-glass gyroscope or mushroom gyro, is a compact, low-noise, high-performance angular rate or rotation sensor. An HRG is made using a thin solid-state hemispherical shell, anchored by a thick stem. This shell is driven to a flexural resonance by electrostatic forces generated by electrodes which are deposited directly onto separate fused-quartz structures that surround the shell. The gyroscopic effect is obtained from the inertial property of the flexural standing waves. Although the HRG is a mechanical system, it has no moving parts, and can be very compact.
Operation
The HRG makes use of a small thin solid-state hemispherical shell, anchored by a thick stem. This shell is driven to a flexural resonance by dedicated electrostatic forces generated by electrodes which are deposited directly onto separate fused quartz structures that surround the shell.
For a single-piece design (i.e., the hemispherical shell and stem form a monolithic part) made from high-purity fused quartz, it is possible to reach a Q factor of 30–50 million in vacuum, so the corresponding angle random walk is extremely low. The Q factor is limited by the coating (an extremely thin film of gold or platinum) and by fixture losses. Such resonators have to be fine-tuned by ion-beam micro-erosion of the glass or by laser ablation in order to be perfectly dynamically balanced. When coated, tuned, and assembled within the housing, the Q factor remains over 10 million.
In application to the HRG shell, Coriolis forces cause a precession of the vibration pattern around the axis of rotation: the standing wave precesses slowly about this axis, at an angular rate that differs from the input rate. This is the wave inertia effect, discovered in 1890 by the British scientist George Hartley Bryan (1864–1928). Therefore, when subject to rotation around the shell's symmetry axis, the standing wave does not rotate exactly with the shell, but the difference between the two rotations is nevertheless perfectly proportional to the input rotation. The device is then able to sense rotation.
The electronics which sense the standing waves are also able to drive them. Therefore, the gyros can operate in either a "whole angle mode" that sense the standing waves' position or a "force rebalance mode" that holds the standing wave in a fixed orientation with respect to the gyro.
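A minimal sketch of whole-angle operation follows: the standing-wave pattern angle integrates the input rotation, scaled by the angular gain (Bryan's factor). The value of roughly 0.277 used below is the commonly quoted figure for an ideal thin hemispherical shell on the n = 2 wineglass mode, and should be treated here as an assumption.

```python
# Whole-angle mode sketch: recover rotation from the wave pattern angle.
import numpy as np

K = 0.277    # angular gain (Bryan's factor), assumed ideal-shell value
dt = 0.001   # sample interval, s

def pattern_angle(rates_dps):
    """Integrate case rotation rate (deg/s) into the standing-wave angle (deg)."""
    # The wave pattern lags the case rotation by the factor K, so the
    # pattern angle is -K times the integrated rotation.
    return -K * np.cumsum(rates_dps) * dt

rates = np.full(1000, 10.0)          # constant 10 deg/s rotation for 1 s
theta = pattern_angle(rates)[-1]     # ~ -2.77 deg relative to the case
print(round(theta, 2), "deg; recovered rotation:", round(-theta / K, 1), "deg")
```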
Originally used in space applications (attitude and orbit control systems for spacecraft), HRG is now used in advanced inertial navigation systems, in attitude and heading reference systems, and HRG gyrocompasses.
Advantages
The HRG is extremely reliable because of its very simple hardware (two or three pieces of machined fused quartz). It has no moving parts; its core is made of a monolithic part which includes the hemispherical shell and its stem. They have demonstrated outstanding reliability since their initial use in 1996 on the NEAR Shoemaker spacecraft.
The HRG is highly accurate and is not sensitive to external environmental perturbations. The resonating shell weighs only a few grams and it is perfectly balanced, which makes it insensitive to vibrations, accelerations, and shocks.
The HRG exhibits superior SWAP (size, weight, and power) characteristics compared to other gyroscope technologies.
The HRG generates neither acoustic nor radiated noise because the resonating shell is perfectly balanced and operates under vacuum.
The material of the resonator, the fused quartz, is naturally radiation hard in any space environment. This confers intrinsic immunity to deleterious space radiation effects to the HRG resonator. Thanks to the extremely high Q factor of the resonating shell, the HRG has an ultra-low angular random walk and extremely low power dissipation.
The HRG, unlike optical gyros (fibre-optic gyroscope and ring laser gyroscope), has inertial memory: if the power is lost for a short period of time (typically a few seconds), the sensitive element continues to integrate the input motion (angular rate) so that when the power returns, the HRG signals the angle turned while power was off.
Disadvantages
The HRG is a very high-tech device which requires sophisticated manufacturing tools and processes. The control electronics required to sense and drive the standing waves are sophisticated. This high level of sophistication limits the availability of the technology; few companies have been able to produce it. Currently three companies manufacture HRGs: Northrop Grumman, Safran Electronics & Defense and Raytheon Anschütz.
Classical HRG is relatively expensive due to the cost of the precision ground and polished hollow quartz hemispheres. This manufacturing cost restricts its use to high-added-value applications such as satellites and spacecraft. Nevertheless manufacturing costs can be dramatically reduced by design changes and engineering controls. Rather than depositing electrodes on an internal hemisphere that must perfectly match the shape of the outer resonating hemisphere, electrodes are deposited on a flat plate that matches the equatorial plane of the resonating hemisphere. In such configuration, HRG becomes very cost effective and is well suitable for high grade but cost sensitive applications.
Applications
Space – Inside the Spacecraft Bus in the James Webb Space Telescope and other satellites and spacecraft
Sea – Marine maintenance-free gyrocompasses as well as attitude and heading reference systems. Naval navigation systems for both surface vessels and submarines.
Land – Target locators, land navigation systems, and artillery pointing
Air – HRGs are poised to be used in commercial air transport navigation systems
See also
Fibre-optic gyroscope
Gyroscope
HRG gyrocompass
Inertial measurement unit
Quantum gyroscope
Ring laser gyroscope
Vibrating structure gyroscope a.k.a. Coriolis vibratory gyroscope
References
Bibliography
Lynch D.D. HRG Development at Delco, Litton, and Northrop Grumman. Proceedings of Anniversary Workshop on Solid-State Gyroscopy (19–21 May 2008. Yalta, Ukraine). - Kyiv-Kharkiv. ATS of Ukraine. 2009.
L.Rosellini, JM Caron - REGYS 20: A promising HRG-based IMU for space application - 7th International ESA Conference on Guidance, Navigation & Control Systems. 2–5 June 2008, Tralee, County Kerry, Ireland
D. Roberfroid, Y. Folope, G. Remillieux (Sagem Défense Sécurité, Paris, FRANCE) - HRG and Inertial Navigation - Inertial Sensors and Systems – Symposium Gyro Technology 2012
A Carre, L Rosellini, O Prat (Sagem Défense Sécurité, Paris, France) HRG and North Finding -17th Saint Petersburg International Conference on Integrated Navigation Systems 31 May – 2 June 2010, Russia
Alain Jeanroy; Gilles Grosset; Jean-Claude Goudon; Fabrice Delhaye - HRG by Sagem from laboratory to mass production - 2016 IEEE International Symposium on Inertial Sensors and Systems
Alexandre Lenoble, Thomas Rouilleault - SWAP-oriented IMUs for multiple applications- Inertial Sensors and Systems (ISS), 2016 DGON - Karlsruhe, Germany
Fabrice Delhaye - HRG by Safran - The game-changing technology - 2018 IEEE International Symposium on Inertial Sensors and Systems - Lake Como, Italy
Fabrice Delhaye; Jean-Philippe Girault - SpaceNaute®, HRG technological breakthrough for advanced space launcher inertial reference system - 25th Saint Petersburg International Conference on Integrated Navigation Systems 31–29 May 2018, Russia
B.Deleaux, Y.Lenoir - The world smallest, most accurate and reliable pure inertial navigator: ONYX™ - Inertial Sensors and Systems 2018, Braunschweig - 12 September 2018, Germany
Y. Foloppe, Y.Lenoir - HRG CrystalTM DUAL CORE: Rebooting the INS revolution - Inertial Sensors and Systems 2019, Braunschweig - 10 September 2019, Germany
F. Delhaye, Ch. De Leprevier - SkyNaute by Safran – How the HRG technological breakthrough benefits to a disruptive IRS (Inertial Reference System) for commercial aircraft - Inertial Sensors and Systems 2019, Braunschweig - 11 September 2019, Germany
Aerospace engineering
Aircraft instruments
Avionics
Missile guidance
Navigational aids
Navigational equipment
Spacecraft components
Technology systems | Hemispherical resonator gyroscope | [
"Technology",
"Engineering"
] | 1,782 | [
"Systems engineering",
"Technology systems",
"Avionics",
"Measuring instruments",
"Aircraft instruments",
"nan",
"Aerospace engineering"
] |
42,061,923 | https://en.wikipedia.org/wiki/Modified%20Korteweg-De%20Vries%20equation | The modified Korteweg–de Vries (KdV) equation is an integrable nonlinear partial differential equation:
where is an arbitrary (nonzero) constant.
This is a special case of the Gardner equation.
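Under the focusing sign convention α = 6, the equation admits the travelling-wave soliton u = √c sech(√c (x − ct)). The following sketch verifies this symbolically with SymPy; the choice α = 6 is an assumption made for concreteness.

```python
# Symbolic check that the sech profile solves the mKdV equation (alpha = 6).
import sympy as sp

x, t, c = sp.symbols("x t c", positive=True)
u = sp.sqrt(c) * sp.sech(sp.sqrt(c) * (x - c * t))

# Residual of u_t + 6 u^2 u_x + u_xxx; rewriting in exponentials makes the
# cancellation purely algebraic.
residual = sp.diff(u, t) + 6 * u**2 * sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(residual.rewrite(sp.exp)))   # -> 0
```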
See also
Korteweg–de Vries equation
Notes
References
Nonlinear partial differential equations
Integrable systems | Modified Korteweg-De Vries equation | [
"Physics"
] | 70 | [
"Integrable systems",
"Theoretical physics",
"Theoretical physics stubs"
] |
42,063,180 | https://en.wikipedia.org/wiki/Gardner%20equation | The Gardner equation is an integrable nonlinear partial differential equation introduced by the mathematician Clifford Gardner in 1968 to generalize KdV equation and modified KdV equation. The Gardner equation has applications in hydrodynamics, plasma physics and quantum field theory
where is an arbitrary real parameter.
See also
Korteweg–de Vries equation
Notes
References
Nonlinear partial differential equations
Integrable systems | Gardner equation | [
"Physics"
] | 81 | [
"Integrable systems",
"Theoretical physics",
"Theoretical physics stubs"
] |
42,067,613 | https://en.wikipedia.org/wiki/Single-cell%20sequencing | Single-cell sequencing examines the nucleic acid sequence information from individual cells with optimized next-generation sequencing technologies, providing a higher resolution of cellular differences and a better understanding of the function of an individual cell in the context of its microenvironment. For example, in cancer, sequencing the DNA of individual cells can give information about mutations carried by small populations of cells. In development, sequencing the RNAs expressed by individual cells can give insight into the existence and behavior of different cell types. In microbial systems, a population of the same species can appear genetically clonal. Still, single-cell sequencing of RNA or epigenetic modifications can reveal cell-to-cell variability that may help populations rapidly adapt to survive in changing environments.
Background
A typical human cell contains about 2 × 3.3 billion base pairs of DNA and 600 million bases of mRNA. Usually, a mix of millions of cells is used in sequencing DNA or RNA by traditional methods like Sanger sequencing or next-generation sequencing. By deep sequencing of DNA and RNA from a single cell, cellular functions can be investigated extensively. Like typical next-generation sequencing experiments, single-cell sequencing protocols generally contain the following steps: isolation of a single cell, nucleic acid extraction and amplification, sequencing library preparation, sequencing, and bioinformatic data analysis. It is more challenging to perform single-cell sequencing than sequencing from cells in bulk. The minimal amount of starting material from a single cell means that degradation, sample loss, and contamination exert pronounced effects on the quality of sequencing data. In addition, because only picogram amounts of nucleic acids are available, heavy amplification is often needed during sample preparation, resulting in uneven coverage, noise, and inaccurate quantification of sequencing data.
Recent technical improvements make single-cell sequencing a promising tool for approaching a set of seemingly inaccessible problems. For example, heterogeneous samples, rare cell types, cell lineage relationships, mosaicism of somatic tissues, analyses of microbes that cannot be cultured, and disease evolution can all be elucidated through single-cell sequencing. Single-cell sequencing was selected as the method of the year 2013 by Nature Publishing Group.
Genome (DNA) sequencing
Single-cell DNA genome sequencing involves isolating a single cell, amplifying the whole genome or region of interest, constructing sequencing libraries, and then applying next-generation DNA sequencing (for example Illumina, Ion Torrent). Single-cell DNA sequencing has been widely applied in mammalian systems to study normal physiology and disease. Single-cell resolution can uncover the roles of genetic mosaicism or intra-tumor genetic heterogeneity in cancer development or treatment response. In the context of microbiomes, a genome from a single unicellular organism is referred to as a single amplified genome (SAG). Advancements in single-cell DNA sequencing have enabled the collection of genomic data from uncultivated prokaryotic species present in complex microbiomes. Although SAGs are characterized by low completeness and significant bias, recent computational advances have achieved the assembly of near-complete genomes from composite SAGs. Data obtained from microorganisms might establish processes for culturing in the future. Some of the genome assembly tools used in single-cell sequencing include SPAdes, IDBA-UD, Cortex, and HyDA.
Methods
A list of more than 100 different single-cell omics methods has been published.
Multiple displacement amplification (MDA) is a widely used technique that enables amplification of femtograms of DNA from a bacterium to the micrograms required for sequencing. Reagents required for MDA reactions include random primers and the DNA polymerase from bacteriophage phi29. In an isothermal reaction at 30 °C, DNA is amplified with the included reagents. As the polymerases synthesize new strands, a strand displacement reaction takes place, producing multiple copies from each DNA template; strands that were extended previously are displaced at the same time. MDA products average about 12 kb in length and range up to around 100 kb, enabling their use in DNA sequencing. In 2017, a major improvement to this technique, called WGA-X, was introduced by taking advantage of a thermostable mutant of the phi29 polymerase, leading to better genome recovery from individual cells, in particular those with high G+C content. MDA has also been implemented in a microfluidic droplet-based system to achieve highly parallelized single-cell whole genome amplification. By encapsulating single cells in droplets for DNA capture and amplification, this method offers reduced bias and enhanced throughput compared to conventional MDA.
Another common method is MALBAC. As done in MDA, this method begins with isothermal amplification, but the primers are flanked with a “common” sequence for downstream PCR amplification. As the preliminary amplicons are generated, the common sequence promotes self-ligation and the formation of “loops” to prevent further amplification. In contrast with MDA, the highly branched DNA network is not formed. Instead, the loops are denatured in another temperature cycle allowing the fragments to be amplified with PCR. MALBAC has also been implemented in a microfluidic device, but the amplification performance was not significantly improved by encapsulation in nanoliter droplets.
Comparing MDA and MALBAC, MDA results in better genome coverage, but MALBAC provides more even coverage across the genome. MDA could be more effective for identifying SNPs, whereas MALBAC is preferred for detecting copy number variants. While performing MDA with a microfluidic device markedly reduces bias and contamination, the chemistry involved in MALBAC does not demonstrate the same potential for improved efficiency.
A method particularly suitable for the discovery of genomic structural variation is single-cell DNA template strand sequencing (a.k.a. Strand-seq). Using the principle of single-cell tri-channel processing, which jointly models read orientation, read depth, and haplotype phase, Strand-seq enables discovery of the full spectrum of somatic structural variation classes ≥200 kb in size. Strand-seq overcomes the limitations of whole-genome-amplification-based methods for identifying somatic genetic variation in single cells, because it is not susceptible to the read chimeras that lead to calling artefacts (discussed in detail in the section below), and it is less affected by dropouts. The choice of method depends on the goal of the sequencing, because each method presents different advantages.
Limitations
MDA of individual cell genomes results in highly uneven genome coverage, i.e. relative overrepresentation and underrepresentation of various regions of the template, leading to loss of some sequences. There are two components to this process: a) stochastic over- and under-amplification of random regions; and b) systematic bias against high %GC regions. The stochastic component may be addressed by pooling single-cell MDA reactions from the same cell type, by employing fluorescent in situ hybridization (FISH) and/or post-sequencing confirmation. The bias of MDA against high %GC regions can be addressed by using thermostable polymerases, such as in the process called WGA-X.
Single-nucleotide polymorphisms (SNPs), which account for a large share of the genetic variation in the human genome, and copy number variation (CNV) both pose problems in single-cell sequencing, as does the limited amount of DNA extracted from a single cell. Because of the scant amount of DNA, accurate analysis is difficult even after amplification, since coverage is low and susceptible to errors. With MDA, average genome coverage is less than 80%, and SNPs that are not covered by sequencing reads are missed. In addition, MDA shows a high rate of allele dropout, failing to detect alleles from heterozygous samples. Various SNP-calling algorithms are currently in use, but none are specific to single-cell sequencing. MDA also poses the problem of false CNVs that conceal the real CNVs. To address this, algorithms that learn the patterns generated by false CNVs can detect and remove this noise to recover the true variants.
Strand-seq overcomes limitations of methods based on whole genome amplification for genetic variant calling: since Strand-seq does not require reads (or read pairs) traversing the boundaries (or breakpoints) of CNVs or copy-balanced structural variant classes, it is less susceptible to common artefacts of single-cell methods based on whole genome amplification, which include variant calling dropouts due to missing reads at the variant breakpoint and read chimeras. Strand-seq discovers the full spectrum of structural variation classes of at least 200 kb in size, including breakage-fusion-bridge cycles and chromothripsis events, as well as balanced inversions and copy-number balanced or imbalanced translocations. Structural variant calls made by Strand-seq are resolved to chromosome-length haplotypes, which provides additional variant calling specificity. As a current limitation, Strand-seq requires dividing cells for strand-specific labelling using bromodeoxyuridine (BrdU), and the method does not detect variants smaller than 200 kb in size, such as mobile element insertions.
Applications
Microbiomes are among the main targets of single cell genomics due to the difficulty of culturing the majority of microorganisms in most environments. Single-cell genomics is a powerful way to obtain microbial genome sequences without cultivation. This approach has been widely applied on marine, soil, subsurface, organismal, and other types of microbiomes in order to address a wide array of questions related to microbial ecology, evolution, public health and biotechnology potential.
Cancer sequencing is also an emerging application of scDNAseq. Fresh or frozen tumors may be analyzed and categorized with respect to SCNAs, SNVs, and rearrangements quite well using whole-genome scDNAseq approaches. Cancer scDNAseq is particularly useful for examining the depth of complexity and compound mutations present in amplified therapeutic targets such as receptor tyrosine kinase genes (EGFR, PDGFRA etc.), where conventional population-level approaches on the bulk tumor cannot resolve the co-occurrence patterns of these mutations within single cells of the tumor. Such overlap may provide redundancy of pathway activation and tumor cell resistance.
DNA methylome sequencing
Single-cell DNA methylome sequencing quantifies DNA methylation. There are several known types of methylation that occur in nature, including 5-methylcytosine (5mC), 5-hydroxymethylcytosine (5hmC), 6-methyladenosine (6mA), and 4-methylcytosine (4mC). In eukaryotes, especially animals, 5mC is widespread along the genome and plays an important role in regulating gene expression by repressing transposable elements. Sequencing 5mC in individual cells can reveal how epigenetic changes across genetically identical cells from a single tissue or population give rise to cells with different phenotypes.
Methods
Bisulfite sequencing has become the gold standard in detecting and sequencing 5mC in single cells. Treatment of DNA with bisulfite converts cytosine residues to uracil, but leaves 5-methylcytosine residues unaffected. Therefore, DNA that has been treated with bisulfite retains only methylated cytosines. To obtain the methylome readout, the bisulfite-treated sequence is aligned to an unmodified genome. Whole genome bisulfite sequencing was achieved in single cells in 2014. The method overcomes the loss of DNA associated with the typical procedure, where sequencing adapters are added prior to bisulfite fragmentation. Instead, the adapters are added after the DNA is treated and fragmented with bisulfite, allowing all fragments to be amplified by PCR. Using deep sequencing, this method captures ~40% of the total CpGs in each cell. With existing technology DNA cannot be amplified prior to bisulfite treatment, as the 5mC marks will not be copied by the polymerase.
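The readout logic just described can be illustrated with a toy comparison of a bisulfite-converted read against the reference: an unmethylated C reads as T after conversion, while 5mC still reads as C. The sequences below are hypothetical, and a real pipeline would additionally be strand-aware and CpG-context-aware.

```python
# Toy illustration of bisulfite methylation calling at cytosine positions.
reference      = "ACGTCCGTACGC"
bisulfite_read = "ATGTCCGTATGC"   # hypothetical read from one cell

def call_methylation(ref: str, read: str):
    calls = []
    for i, (r, b) in enumerate(zip(ref, read)):
        if r == "C":
            # C -> C implies the cytosine was protected (5mC);
            # C -> T implies it was unmethylated and deaminated to uracil.
            calls.append((i, "methylated" if b == "C" else "unmethylated"))
    return calls

print(call_methylation(reference, bisulfite_read))
# [(1, 'unmethylated'), (4, 'methylated'), (5, 'methylated'),
#  (9, 'unmethylated'), (11, 'methylated')]
```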
Single-cell reduced representation bisulfite sequencing (scRRBS) is another method. This method leverages the tendency of methylated cytosines to cluster at CpG islands (CGIs) to enrich for areas of the genome with a high CpG content. This reduces the cost of sequencing compared to whole-genome bisulfite sequencing, but limits the coverage of the method. When RRBS is applied to bulk samples, the majority of the CpG sites in gene promoters are detected, but sites in gene promoters account for only 10% of the CpG sites in the entire genome. In single cells, 40% of the CpG sites from the bulk sample are detected. To increase coverage, this method can also be applied to a small pool of single cells. In a sample of 20 pooled single cells, 63% of the CpG sites from the bulk sample were detected. Pooling single cells is one strategy to increase methylome coverage, but at the cost of obscuring the heterogeneity in the population of cells.
Limitations
While bisulfite sequencing remains the most widely used approach for 5mC detection, the chemical treatment is harsh and fragments and degrades the DNA. This effect is exacerbated when moving from bulk samples to single cells. Other methods to detect DNA methylation include methylation-sensitive restriction enzymes. Restriction enzymes also enable the detection of other types of methylation, such as 6mA with DpnI. Nanopore-based sequencing also offers a route for direct methylation sequencing without fragmentation or modification to the original DNA. Nanopore sequencing has been used to sequence the methylomes of bacteria, which are dominated by 6mA and 4mC (as opposed to 5mC in eukaryotes), but this technique has not yet been scaled down to single cells.
Applications
Single-cell DNA methylation sequencing has been widely used to explore epigenetic differences in genetically similar cells. To validate these methods during their development, the single-cell methylome data of a mixed population were successfully classified by hierarchical clustering to identify distinct cell types. Another application is studying single cells during the first few cell divisions in early development to understand how different cell types emerge from a single embryo. Single-cell whole-genome bisulfite sequencing has also been used to study rare but highly active cell types in cancer such as circulating tumor cells (CTCs).
Transposase-accessible chromatin sequencing (scATAC-seq)
Single cell transposase-accessible chromatin sequencing maps chromatin accessibility across the genome. A transposase inserts sequencing adapters directly into open regions of chromatin, allowing those regions to be amplified and sequenced.
Methods
The two methods for library preparation in scATAC-Seq are based on split-pool cellular indexing and microfluidics.
Transcriptome sequencing (scRNA-seq)
Standard methods such as microarrays and bulk RNA-seq analyze the RNA expression from large populations of cells. These measurements may obscure critical differences between individual cells in mixed-cell populations.
Single-cell RNA sequencing (scRNA-seq) provides the expression profiles of individual cells and is considered the gold standard for defining cell states and phenotypes as of 2020. Although it is impossible to obtain complete information on every RNA expressed by each cell, due to the small amount of material available, gene expression patterns can be identified through gene clustering analyses. This can uncover rare cell types within a cell population that may never have been seen before. For example, one group of scientists performing scRNA-seq on neuroblastoma tumor tissue identified a rare pan-neuroblastoma cancer cell, which may be attractive for novel therapy approaches.
Methods
Current scRNA-seq protocols involve isolating single cells and their RNA, and then following the same steps as bulk RNA-seq: reverse transcription (RT), amplification, library generation and sequencing. Early methods separated individual cells into separate wells; more recent methods encapsulate individual cells in droplets in a microfluidic device, where the reverse transcription reaction takes place, converting RNAs to cDNAs. Each droplet carries a DNA "barcode" that uniquely labels the cDNAs derived from a single cell. Once reverse transcription is complete, the cDNAs from many cells can be mixed together for sequencing, because transcripts from a particular cell are identified by the unique barcode.
Challenges for scRNA-Seq include preserving the initial relative abundance of mRNA in a cell and identifying rare transcripts. The reverse transcription step is critical as the efficiency of the RT reaction determines how much of the cell's RNA population will be eventually analyzed by the sequencer. The processivity of reverse transcriptases and the priming strategies used may affect full-length cDNA production and the generation of libraries biased toward 3’ or 5' end of genes.
In the amplification step, either PCR or in vitro transcription (IVT) is currently used to amplify cDNA. One advantage of PCR-based methods is the ability to generate full-length cDNA. However, sequences with different PCR efficiencies (determined, for instance, by GC content and snapback structure) may be exponentially over- or under-amplified, producing libraries with uneven coverage. On the other hand, while libraries generated by IVT can avoid PCR-induced sequence bias, specific sequences may be transcribed inefficiently, causing sequence drop-out or generating incomplete sequences.
Several scRNA-seq protocols have been published:
Tang et al.,
STRT,
SMART-seq, SORT-seq,
CEL-seq, RAGE-seq,
Quartz-seq, and C1-CAGE.
These protocols differ in terms of strategies for reverse transcription, cDNA synthesis and amplification, and the possibility to accommodate sequence-specific barcodes (i.e., UMIs) or the ability to process pooled samples.
In 2017, two approaches were introduced to simultaneously measure single-cell mRNA and protein expression through oligonucleotide-labeled antibodies known as REAP-seq, and CITE-seq. Collecting cellular contents following electrophysiological recording using patch-clamp has also allowed development of the Patch-Seq method, which is steadily gaining ground in neuroscience.
Example of a droplet based platform - 10X method
This platform of single-cell RNA sequencing allows transcriptomes to be analyzed on a cell-by-cell basis through the use of microfluidic partitioning to capture single cells and prepare next-generation sequencing (NGS) cDNA libraries. The droplet-based platform enables massively parallel sequencing of mRNA in large numbers of individual cells by capturing each single cell in an oil droplet.
Overall, individual cells are first captured separately and lysed; then reverse transcription (RT) of mRNA is performed and a cDNA library is obtained. To select mRNA, the RT is performed with a single-stranded deoxythymine (oligo-dT) primer, which binds specifically to the poly(A) tail of mRNA molecules. Subsequently, the amplified cDNA library is used for sequencing.
So, the first step of the method is single-cell encapsulation and library preparation. Cells are encapsulated into Gel Beads-in-emulsion (GEMs) by an automated instrument, which uses a microfluidic chip to combine all components with oil. Each functional GEM contains a single cell, a single gel bead, and RT reagents. Bound to the gel bead are oligonucleotides composed of four distinct parts: a PCR primer (essential for the sequencing); a 10X barcode; a unique molecular identifier (UMI) sequence; and a poly(dT) sequence that enables capture of polyadenylated mRNA molecules.
Within each GEM reaction vesicle, a single cell is lysed and undergoes reverse transcription. cDNAs from the same cell are identified by a common 10X barcode.
In addition, the number of UMIs reflects the gene expression level, and its analysis allows highly variable genes to be detected. These data are often used for cellular phenotype classification or identification of new subpopulations.
The final step of the platform is sequencing. The libraries generated can be used directly for single-cell whole transcriptome sequencing or targeted sequencing workflows. The sequencing is performed using the Illumina dye sequencing method, which is based on the sequencing-by-synthesis (SBS) principle and uses reversible dye terminators that enable the identification of each single nucleotide.
In order to read the transcript sequences on one end, and the barcode and UMI on the other end, paired-end sequencing readers are required.
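A minimal sketch of how this paired-end layout is parsed is given below: read 1 carries the cell barcode and UMI, and read 2 the transcript sequence. The 16 bp barcode + 12 bp UMI split corresponds to one common chemistry version and is an assumption here, as is the `align` function, which stands in for a real aligner.

```python
# Sketch: demultiplex reads by cell barcode and count unique molecules (UMIs).
from collections import defaultdict

BARCODE_LEN, UMI_LEN = 16, 12   # assumed lengths for one chemistry version

def count_umis(read_pairs, align):
    """read_pairs: iterable of (read1, read2); align: read2 -> gene or None."""
    # Deduplicate on (cell, gene, UMI): PCR copies of one molecule share a UMI.
    molecules = set()
    for r1, r2 in read_pairs:
        cell = r1[:BARCODE_LEN]
        umi = r1[BARCODE_LEN:BARCODE_LEN + UMI_LEN]
        gene = align(r2)
        if gene is not None:
            molecules.add((cell, gene, umi))
    counts = defaultdict(int)            # (cell, gene) -> unique molecule count
    for cell, gene, _ in molecules:
        counts[(cell, gene)] += 1
    return counts

# Two reads with the same barcode, UMI, and gene collapse to one molecule.
toy = [("A" * 16 + "T" * 12 + "NN", "GATTACA"),
       ("A" * 16 + "T" * 12 + "NN", "GATTACA")]
print(dict(count_umis(toy, lambda seq: "GENE1")))
# {('AAAAAAAAAAAAAAAA', 'GENE1'): 1}
```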
The droplet-based platform allows the detection of rare cell types thanks to its high throughput. In fact, 500 to 10,000 cells are captured per sample from a single cell suspension. The protocol is performed easily and allows a high cell recovery rate of up to 65%. The global workflow of the droplet-based platform takes 8 hours and so is faster than the Microwell-based method (BD Rhapsody), which takes 10 hours.
However, it has some limitations, such as the need for fresh samples and the capture of only about 10% of mRNA molecules.
The major difference between the droplet-based method and the microwell-based method is the technique used for partitioning cells.
Limitations
Most RNA-seq methods depend on poly(A) tail capture to enrich mRNA and deplete abundant and uninformative rRNA. Thus, they are often restricted to sequencing polyadenylated mRNA molecules. However, recent studies are now starting to appreciate the importance of non-poly(A) RNA, such as long-noncoding RNA and microRNAs in gene expression regulation. Small-seq is a single-cell method that captures small RNAs (<300 nucleotides) such as microRNAs, fragments of tRNAs and small nucleolar RNAs in mammalian cells. This method uses a combination of “oligonucleotide masks” (that inhibit the capture of highly abundant 5.8S rRNA molecules) and size selection to exclude large RNA species such as other highly abundant rRNA molecules. To target larger non-poly(A) RNAs, such as long non-coding mRNA, histone mRNA, circular RNA, and enhancer RNA, size selection is not applicable for depleting the highly abundant ribosomal RNA molecules (18S and 28s rRNA). Single-cell RamDA-Seq is a method that achieves this by performing reverse transcription with random priming (random displacement amplification) in the presence of “not so random” (NSR) primers specifically designed to avoid priming on rRNA molecule. While this method successfully captures full-length total RNA transcripts for sequencing and detected a variety of non-poly(A) RNAs with high sensitivity, it has some limitations. The NSR primers were carefully designed according to rRNA sequences in the specific organism (mouse), and designing new primer sets for other species would take considerable effort. Recently, a CRISPR-based method named scDASH (single-cell depletion of abundant sequences by hybridization) demonstrated another approach to depleting rRNA sequences from single-cell total RNA-seq libraries.
Bacteria and other prokaryotes are currently not amenable to single-cell RNA-seq due to the lack of polyadenylated mRNA. Thus, the development of single-cell RNA-seq methods that do not depend on poly(A) tail capture will also be instrumental in enabling single-cell resolution microbiome studies. Bulk bacterial studies typically apply general rRNA depletion to overcome the lack of polyadenylated mRNA on bacteria, but at the single-cell level, the total RNA found in one cell is too small. Lack of polyadenylated mRNA and scarcity of total RNA found in single bacteria cells are two important barriers limiting the deployment of scRNA-seq in bacteria.
Applications
scRNA-Seq is becoming widely used across biological disciplines including Developmental biology, Neurology, Oncology, Immunology, Cardiovascular research and Infectious disease.
Using machine learning methods, data from bulk RNA-Seq has been used to increase the signal/noise ratio in scRNA-Seq. Specifically, scientists have used gene expression profiles from pan-cancer datasets in order to build coexpression networks, and then have applied these on single cell gene expression profiles, obtaining a more robust method to detect the presence of mutations in individual cells using transcript levels.
Some scRNA-seq methods have also been applied to single cell microorganisms. SMART-seq2 has been used to analyze single cell eukaryotic microbes, but since it relies on poly(A) tail capture, it has not been applied in prokaryotic cells. Microfluidic approaches such as Drop-seq and the Fluidigm IFC-C1 devices have been used to sequence single malaria parasites or single yeast cells. The single-cell yeast study sought to characterize the heterogeneous stress tolerance in isogenic yeast cells before and after the yeast are exposed to salt stress. Single-cell analysis of the several transcription factors by scRNA-seq revealed heterogeneity across the population. These results suggest that regulation varies among members of a population to increase the chances of survival for a fraction of the population.
The first single-cell transcriptome analysis in a prokaryotic species was accomplished using the terminator exonuclease enzyme to selectively degrade rRNA and rolling circle amplification (RCA) of mRNA. In this method, the ends of single-stranded DNA were ligated together to form a circle, and the resulting loop was then used as a template for linear RNA amplification. The final product library was then analyzed by microarray, with low bias and good coverage. However, RCA has not been tested with RNA-seq, which typically employs next-generation sequencing. Single-cell RNA-seq for bacteria would be highly useful for studying microbiomes. It would address issues encountered in conventional bulk metatranscriptomics approaches, such as failing to capture species present in low abundance, and failing to resolve heterogeneity among cell populations.
scRNA-Seq has provided considerable insight into the development of embryos and organisms, including the worm Caenorhabditis elegans, and the regenerative planarian Schmidtea mediterranea and axolotl Ambystoma mexicanum. The first vertebrate animals to be mapped in this way were Zebrafish and Xenopus laevis. In each case multiple stages of the embryo were studied, allowing the entire process of development to be mapped on a cell-by-cell basis. Science recognized these advances as the 2018 Breakthrough of the Year.
A molecular cell atlas of mouse testes was established to define BDE47-induced prepubertal testicular toxicity using the scRNA-seq approach, providing novel insight into the mechanisms and pathways involved in BDE47-associated testicular injury at single-cell resolution.
Considerations
Isolation of single cells
There are several ways to isolate individual cells prior to whole genome amplification and sequencing. Fluorescence-activated cell sorting (FACS) is a widely used approach. Individual cells can also be collected by micromanipulation, for example by serial dilution or by using a patch pipette or nanotube to harvest a single cell. The advantages of micromanipulation are ease and low cost, but such methods are laborious and susceptible to misidentification of cell types under the microscope. Laser-capture microdissection (LCM) can also be used to collect single cells. Although LCM preserves knowledge of the spatial location of a sampled cell within a tissue, it is hard to capture a whole single cell without also collecting material from neighboring cells.
High-throughput methods for single cell isolation also include microfluidics. Both FACS and microfluidics are accurate, automatic and capable of isolating unbiased samples. However, both methods require detaching cells from their microenvironments first, thereby causing perturbation to the transcriptional profiles in RNA expression analysis.
Number of cells to be sequenced and analyzed
scRNA-Seq
The single-cell RNA-Seq protocols vary in efficiency of RNA capture, which results in differences in the number of transcripts generated from each single cell. Single-cell libraries are usually sequenced to a depth of 1,000,000 reads because a large majority of genes are detected with 500,000 reads. Increasing the number of cells and decreasing the read depth increases the power of identifying major cell populations. However, low read depths may not always provide necessary information about the genes, and the difference in their expression between the cell populations is dependent on the stability and detection of the mRNA molecules.
Quality control covariates serve as a strategy for deciding which cells to retain for analysis. These covariates mainly include filtering based on count depth, the number of detected genes, and the fraction of counts from mitochondrial genes, which together help separate intact cells from damaged cells and technical artifacts before cellular signals are interpreted.
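The snippet below is a minimal sketch of such covariate-based filtering using the scanpy toolkit; the dataset loader is scanpy's bundled PBMC example, and the thresholds are illustrative assumptions rather than universal recommendations.

```python
import scanpy as sc

adata = sc.datasets.pbmc3k()                       # small example 10x dataset

# Flag mitochondrial genes and compute the standard QC covariates
adata.var["mt"] = adata.var_names.str.startswith("MT-")
sc.pp.calculate_qc_metrics(
    adata, qc_vars=["mt"], percent_top=None, log1p=False, inplace=True
)

# Filter on count depth, number of detected genes, and mitochondrial fraction
mask = (
    (adata.obs["total_counts"] > 500)
    & (adata.obs["n_genes_by_counts"] > 200)
    & (adata.obs["pct_counts_mt"] < 5.0)
)
adata = adata[mask].copy()
print(adata.n_obs, "cells pass QC")
```

In practice the thresholds are chosen per dataset, typically by inspecting the distributions of these covariates rather than applying fixed cutoffs.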
See also
Single-cell analysis
Single-cell transcriptomics
Single cell epigenomics
Tcr-seq
DNA sequencing
Whole genome sequencing
References
External links
DNA sequencing
Molecular biology techniques
Biotechnology | Single-cell sequencing | [
"Chemistry",
"Biology"
] | 6,204 | [
"Biotechnology",
"Molecular biology techniques",
"DNA sequencing",
"nan",
"Molecular biology"
] |
32,236,479 | https://en.wikipedia.org/wiki/Hanle%20effect | The Hanle effect, also known as zero-field level crossing, is a reduction in the polarization of light when the atoms emitting the light are subject to a magnetic field in a particular direction, and when they have themselves been excited by polarized light.
Experiments which utilize the Hanle effect include measuring the lifetime of excited states, and detecting the presence of magnetic fields.
History
The first experimental evidence for the effect came from Robert W. Wood and Lord Rayleigh. The effect is named after Wilhelm Hanle, who was the first to explain the effect, in terms of classical physics, in Zeitschrift für Physik in 1924. Initially, the causes of the effect were controversial, and many theorists mistakenly thought it was a version of the Faraday effect. Attempts to understand the phenomenon were important in the subsequent development of quantum physics.
An early theoretical treatment of level crossing effect was given by Gregory Breit.
Theory
Classical model
The classical explanation for this effect involves the Lorentz oscillator model, which treats the electron bound to the nucleus as a classical oscillator. When light interacts with this oscillator, it sets the electron in motion in the direction of its polarization. Consequently, the radiation emitted by this moving electron is polarized in the same direction as the incident light, as explained by classical electrodynamics.
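In this classical picture the emitting dipole precesses at the Larmor frequency while it decays, which washes out the emitted polarization as the field grows. The sketch below (added here, not from the cited sources) evaluates the standard zero-field level-crossing result, P(B) = P0/(1 + (2ωLτ)²); the function name, g-factor, and lifetime are illustrative assumptions.

```python
import numpy as np

HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s
MU_B = 9.274_010_078e-24   # Bohr magneton, J/T

def hanle_polarization(B, tau, g=1.0, P0=1.0):
    """Degree of polarization vs. magnetic field B (tesla), Lorentzian model."""
    w_L = g * MU_B * B / HBAR                 # Larmor angular frequency
    return P0 / (1.0 + (2.0 * w_L * tau) ** 2)

# Example: a 16 ns excited-state lifetime gives a half-width near 0.35 mT,
# so fitting the measured curve width yields the lifetime.
B = np.linspace(-2e-3, 2e-3, 5)
print(hanle_polarization(B, tau=16e-9))
```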
Applications
Observation of the Hanle effect on the light emitted by the Sun is used to indirectly measure the magnetic fields within the Sun, see:
Polarization in astronomy
Imaging spectroscopy
The effect was initially considered in the context of gases, followed by applications to solid state physics. It has been used to measure both the states of localized electrons and free electrons. For spin-polarized electrical currents, the Hanle effect provides a way to measure the effective spin lifetime in a particular device.
Related effects
The zero-field Hanle level crossings involve magnetic fields, in which the states which are degenerate at zero magnetic field are split due to the Zeeman effect. There is also the closely analogous zero-field Stark level crossings with electric fields, in which the states which are degenerate at zero electric field are split due to the Stark effect. Tests of zero field Stark level crossings came after the Hanle-type measurements, and are generally less common, due to the increased complexity of the experiments.
See also
Larmor precession
Resonance fluorescence
Optical pumping
References
Atomic physics
Magnetism
Foundational quantum physics
Physical phenomena | Hanle effect | [
"Physics",
"Chemistry"
] | 503 | [
"Physical phenomena",
"Foundational quantum physics",
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
32,239,521 | https://en.wikipedia.org/wiki/Ion%20transport%20number | In chemistry, ion transport number, also called the transference number, is the fraction of the total electric current carried in an electrolyte by a given ionic species :
Differences in transport number arise from differences in electrical mobility. For example, in an aqueous solution of sodium chloride, less than half of the current is carried by the positively charged sodium ions (cations) and more than half is carried by the negatively charged chloride ions (anions) because the chloride ions are able to move faster, i.e., chloride ions have higher mobility than sodium ions. The sum of the transport numbers for all of the ions in solution always equals unity:

$$\sum_i t_i = 1$$
The concept and measurement of transport number were introduced by Johann Wilhelm Hittorf in the year 1853. Liquid junction potential can arise from ions in a solution having different ion transport numbers.
At zero concentration, the limiting ion transport numbers may be expressed in terms of the limiting molar conductivities of the cation ($\lambda_+^0$), anion ($\lambda_-^0$), and electrolyte ($\Lambda^0$):

$$t_+ = \frac{\nu_+ \lambda_+^0}{\Lambda^0}$$

and

$$t_- = \frac{\nu_- \lambda_-^0}{\Lambda^0},$$

where $\nu_+$ and $\nu_-$ are the numbers of cations and anions respectively per formula unit of electrolyte. In practice the molar ionic conductivities are calculated from the measured ion transport numbers and the total molar conductivity. For the cation $\lambda_+^0 = t_+\Lambda^0/\nu_+$, and similarly for the anion. In solutions, where ionic complexation or association are important, two different transport/transference numbers can be defined.
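A worked numerical check of these limiting-law expressions for NaCl (ν+ = ν− = 1) is sketched below; the conductivity values are standard handbook figures for 25 °C, and the variable names are ours.

```python
# Limiting molar ionic conductivities at 25 C, in S cm^2 mol^-1
lam_Na = 50.1    # Na+
lam_Cl = 76.3    # Cl-

Lambda0 = lam_Na + lam_Cl          # limiting molar conductivity of NaCl
t_plus = lam_Na / Lambda0
t_minus = lam_Cl / Lambda0

print(f"t+ = {t_plus:.3f}, t- = {t_minus:.3f}, sum = {t_plus + t_minus:.1f}")
# t+ = 0.396, t- = 0.604: the chloride ion carries more than half the
# current, as stated above for aqueous NaCl.
```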
The practical importance of high (i.e. close to 1) transference numbers of the charge-shuttling ion (i.e. Li+ in lithium-ion batteries) is related to the fact that in single-ion devices (such as lithium-ion batteries), with electrolytes whose transference number for the working ion is near 1, concentration gradients do not develop. A constant electrolyte concentration is maintained during charge-discharge cycles. In the case of porous electrodes, a more complete utilization of solid electroactive materials at high current densities is possible, even if the ionic conductivity of the electrolyte is reduced.
Experimental measurement
There are several experimental techniques for the determination of transport numbers. The Hittorf method is based on measurements of ion concentration changes near the electrodes. The moving boundary method involves measuring the speed of displacement of the boundary between two solutions due to an electric current.
Hittorf method
This method was developed by German physicist Johann Wilhelm Hittorf in 1853, and is based on observations of the changes in concentration of an electrolyte solution in the vicinity of the electrodes. In the Hittorf method, electrolysis is carried out in a cell with three compartments: anode, central, and cathode. Measurement of the concentration changes in the anode and cathode compartments determines the transport numbers. The exact relationship depends on the nature of the reactions at the two electrodes. For the electrolysis of aqueous copper(II) sulfate (CuSO4) as an example, with Cu2+ and SO42− ions, the cathode reaction is the reduction Cu2+ + 2 e− → Cu and the anode reaction is the corresponding oxidation of Cu to Cu2+. At the cathode, the passage of $Q$ coulombs of electricity leads to the reduction of $Q/2F$ moles of Cu2+, where $F$ is the Faraday constant. Since the Cu2+ ions carry a fraction $t_+$ of the current, the quantity of Cu2+ flowing into the cathode compartment is $t_+Q/2F$ moles, so there is a net decrease of Cu2+ in the cathode compartment equal to $(1 - t_+)Q/2F$. This decrease may be measured by chemical analysis in order to evaluate the transport numbers. Analysis of the anode compartment gives a second pair of values as a check, while there should be no change of concentrations in the central compartment unless diffusion of solutes has led to significant mixing during the time of the experiment and invalidated the results.
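The bookkeeping above can be turned into a short calculation; the sketch below uses made-up but plausible numbers (the charge passed and the measured concentration change are assumptions) to recover the transport number.

```python
F = 96485.0          # Faraday constant, C/mol

Q = 965.0            # charge passed during electrolysis, C (assumed)
dn_cathode = 2.0e-3  # measured net decrease of Cu2+ in the cathode
                     # compartment, mol (assumed)

# net decrease = (1 - t+) * Q / (2F), so solve for t+
t_plus = 1.0 - dn_cathode * 2 * F / Q
print(f"t+(Cu2+) = {t_plus:.2f}")   # -> 0.60 with these numbers
```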
Moving boundary method
This method was developed by British physicists Oliver Lodge in 1886 and William Cecil Dampier in 1893. It depends on the movement of the boundary between two adjacent electrolytes under the influence of an electric field. If a colored solution is used and the interface stays reasonably sharp, the speed of the moving boundary can be measured and used to determine the ion transference numbers.
The cation of the indicator electrolyte should not move faster than the cation whose transport number is to be determined, and it should have the same anion as the principal electrolyte. In addition, the principal electrolyte (e.g., HCl) is kept light so that it floats on the indicator electrolyte. Cadmium chloride (CdCl2) serves best because Cd2+ is less mobile than H+ and Cl− is common to both CdCl2 and the principal electrolyte HCl.
For example, the transport numbers of hydrochloric acid (HCl(aq)) may be determined by electrolysis between a cadmium anode and an Ag-AgCl cathode. The anode reaction is Cd → Cd2+ + 2 e−, so that a cadmium chloride (CdCl2) solution is formed near the anode and moves toward the cathode during the experiment. An acid-base indicator such as bromophenol blue is added to make visible the boundary between the acidic HCl solution and the near-neutral CdCl2 solution. The boundary tends to remain sharp since the leading solution, HCl, has a higher conductivity than the indicator solution, CdCl2, and therefore a lower electric field to carry the same current. If a more mobile H+ ion diffuses into the CdCl2 solution, it will rapidly be accelerated back to the boundary by the higher electric field; if a less mobile Cd2+ ion diffuses into the HCl solution, it will decelerate in the lower electric field and return to the CdCl2 solution. Also the apparatus is constructed with the anode below the cathode, so that the denser CdCl2 solution forms at the bottom.
The cation transport number of the leading solution is then calculated as

$$t_+ = \frac{z_+ c\, L A F}{I\, \Delta t},$$

where $z_+$ is the cation charge, $c$ the concentration, $L$ the distance moved by the boundary in time $\Delta t$, $A$ the cross-sectional area, $F$ the Faraday constant, and $I$ the electric current.
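A numerical sketch of this formula for dilute HCl follows; the geometry, current, and elapsed time are illustrative assumptions chosen to give a realistic result.

```python
F = 96485.0       # Faraday constant, C/mol
z = 1             # charge number of H+
c = 10.0          # concentration, mol/m^3 (i.e. 0.01 mol/L)
A = 1.0e-5        # cross-sectional area, m^2 (0.1 cm^2)
L = 0.05          # distance moved by the boundary, m (assumed)
I = 1.0e-3        # current, A (assumed)
dt = 588.0        # elapsed time, s (assumed)

t_plus = z * c * L * A * F / (I * dt)
print(f"t+(H+) = {t_plus:.2f}")   # -> 0.82, close to the accepted value
                                  # for H+ in dilute HCl
```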
Concentration cells
This quantity can be calculated from the slope of the EMF as a function of electrolyte activity for two concentration cells, without or with ionic transport.
The EMF of the concentration cell with transference involves both the transport number of the cation and its activity coefficient:

$$E = \frac{2 t_+ RT}{F}\ln\frac{a_R}{a_L},$$

where $a_R$ and $a_L$ are activities of the HCl solutions of the right and left hand electrodes, respectively, and $t_+$ is the transport number of H+.
Electrophoretic magnetic resonance imaging method
This method is based on magnetic resonance imaging of the distribution of ions comprising NMR-active nuclei (usually 1H, 19F, 7Li) in an electrochemical cell upon application of an electric current.
See also
Activity coefficient
Born equation
Debye length
Einstein relation (kinetic theory)
Electrochemical kinetics
Ion selective electrode
ITIES
Law of dilution
Liquid junction potential
Solvated electron
Solvation shell
Supporting electrolyte
Thermogalvanic cell
van't Hoff factor
Notes
External links
Electrochemistry
Physical quantities | Ion transport number | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,382 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Electrochemistry",
"Physical properties"
] |
32,240,109 | https://en.wikipedia.org/wiki/Metabolism%3A%20Clinical%20and%20Experimental | Metabolism: Clinical and Experimental is a monthly peer-reviewed medical journal covering all aspects of human metabolism. It was established in 1952 and is published by Elsevier. The editor-in-chief is Christos Socrates Mantzoros (Harvard Medical School) who has reinvigorated the journal during his tenure.
Abstracting and indexing
The journal is abstracted and indexed in
BIOSIS Previews
Current Contents/Life Sciences
Index Medicus/MEDLINE/PubMed
Science Citation Index
Scopus
According to the Journal Citation Reports, the journal has a 2021 impact factor of 13.93 and a current Cite Score (the equivalent of a 4-year impact factor) of 16.5 placing the journal in the top 3% of Endocrinology, Diabetes and Metabolism Journals.
References
External links
Academic journals established in 1952
Monthly journals
English-language journals
Elsevier academic journals
Endocrinology journals
Metabolism | Metabolism: Clinical and Experimental | [
"Chemistry",
"Biology"
] | 182 | [
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
32,241,229 | https://en.wikipedia.org/wiki/Fundamental%20resolution%20equation | The fundamental resolution equation is used in chromatography to help relate adjustable chromatographic parameters to resolution, and is as follows:
$$R_s = \frac{\sqrt{N}}{4}\cdot\frac{\alpha - 1}{\alpha}\cdot\frac{k_2'}{1 + k_2'},$$ where

$N$ = number of theoretical plates
$\alpha$ = selectivity term = $k_2'/k_1'$
$k_2'$ = retention factor of the second (later-eluting) peak

The $\sqrt{N}/4$ term is the column factor, the $(\alpha - 1)/\alpha$ term is the thermodynamic factor, and the $k_2'/(1 + k_2')$ term is the retention factor. The three factors are not completely independent, but they are nearly so and can be treated as such.
To increase the resolution of two peaks on a chromatogram, one of the three terms of the equation must therefore be modified.
1) N can be increased by lengthening the column (least effective, as doubling the column length yields only a $\sqrt{2}$, or about 1.41x, increase in resolution).
2) Increasing k' also helps. This can be done by lowering the column temperature in G.C., or by choosing a weaker mobile phase in L.C. (moderately effective)
3) Changing α is the most effective way of increasing resolution. This can be done by choosing a stationary phase that has a greater difference between k1' and k2'. It can also be done in L.C. by using pH to invoke secondary equilibria (if applicable).
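The trade-offs among the three terms are easy to see numerically. The short sketch below (function name and parameter values are illustrative) compares doubling the plate count against a modest improvement in selectivity.

```python
from math import sqrt

def resolution(N, alpha, k2):
    """Rs from the fundamental resolution equation."""
    return (sqrt(N) / 4) * ((alpha - 1) / alpha) * (k2 / (1 + k2))

base = resolution(N=10_000, alpha=1.05, k2=5.0)
doubled_N = resolution(N=20_000, alpha=1.05, k2=5.0)     # 2x column length
better_alpha = resolution(N=10_000, alpha=1.10, k2=5.0)  # better selectivity

print(f"base Rs            = {base:.2f}")
print(f"2x column length   = {doubled_N:.2f} ({doubled_N / base:.2f}x)")
print(f"alpha 1.05 -> 1.10 = {better_alpha:.2f} ({better_alpha / base:.2f}x)")
# Doubling N gives only a 1.41x gain, while the small change in alpha
# nearly doubles the resolution, consistent with point 3 above.
```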
The fundamental resolution equation is derived as follows:

For two closely spaced peaks, $\omega_1 = \omega_2$ and $\sigma_1 = \sigma_2$,

so $$R_s = \frac{t_{r2} - t_{r1}}{\omega_2} = \frac{t_{r2} - t_{r1}}{4\sigma_2},$$

where $t_{r1}$ and $t_{r2}$ are the retention times of the two separate peaks.

Since $N = (t_{r2}/\sigma_2)^2$, then $\sigma_2 = t_{r2}/\sqrt{N}$.

Using substitution, $$R_s = \sqrt{N}\,\frac{t_{r2} - t_{r1}}{4 t_{r2}} = \frac{\sqrt{N}}{4}\left(1 - \frac{t_{r1}}{t_{r2}}\right)$$

Now using the following equations and solving for $t_{r1}$ and $t_{r2}$:

$$k_1' = \frac{t_{r1} - t_0}{t_0}; \quad t_{r1} = t_0(k_1' + 1)$$
$$k_2' = \frac{t_{r2} - t_0}{t_0}; \quad t_{r2} = t_0(k_2' + 1)$$

Substituting again gives:

$$R_s = \frac{\sqrt{N}}{4}\left[1 - \frac{k_1' + 1}{k_2' + 1}\right] = \frac{\sqrt{N}}{4}\cdot\frac{k_2' - k_1'}{1 + k_2'}$$

And finally, substituting $\alpha = k_2'/k_1'$ yields the fundamental resolution equation:

$$R_s = \frac{\sqrt{N}}{4}\cdot\frac{\alpha - 1}{\alpha}\cdot\frac{k_2'}{1 + k_2'}$$
References
Spring 2009 Class Notes, CHM 5154, Chemical Separations taught by Dr. John Dorsey, Ph.D, Florida State University
Chromatography | Fundamental resolution equation | [
"Chemistry"
] | 680 | [
"Chromatography",
"Separation processes"
] |
32,244,084 | https://en.wikipedia.org/wiki/Antidynamo%20theorem | In physics and in particular in the theory of magnetism, an antidynamo theorem is one of several results that restrict the type of magnetic fields that may be produced by dynamo action.
One notable example is Thomas Cowling's antidynamo theorem, which states that no axisymmetric magnetic field can be maintained through a self-sustaining dynamo action by an axially symmetric current. Similarly, Zeldovich's antidynamo theorem states that a two-dimensional, planar flow cannot maintain dynamo action.
Consequences
Apart from the Earth's magnetic field, some other bodies such as Jupiter and Saturn, and the Sun have significant magnetic fields whose major component is a dipole, an axisymmetric magnetic field. These magnetic fields are self-sustained through fluid motion in the Sun or planets, with the necessary non-symmetry for the planets deriving from the Coriolis force caused by their rapid rotation, and one cause of non-symmetry for the Sun being its differential rotation.
The magnetic fields of planets with slow rotation periods and/or solid cores, such as Mercury, Venus, and Mars, have dissipated to almost nothing by comparison.
The impact of the known anti-dynamo theorems is that successful dynamos do not possess a high degree of symmetry.
See also
Dynamo theory
Magnetosphere of Jupiter
Magnetosphere of Saturn
References
Geomagnetism
Magnetohydrodynamics
Physics theorems
No-go theorems | Antidynamo theorem | [
"Physics",
"Chemistry"
] | 291 | [
"No-go theorems",
"Equations of physics",
"Fluid dynamics",
"Physics theorems",
"Magnetohydrodynamics"
] |
29,957,827 | https://en.wikipedia.org/wiki/Flow%2C%20Turbulence%20and%20Combustion | Flow, Turbulence and Combustion is a peer-reviewed scientific journal on fluid mechanics. It covers original research on fluid mechanics and combustion, with the areas of interest including industrial, geophysical, and environmental applications. The journal was established in 1949 under the name Applied Scientific Research. It obtained its present name in 1998, which also reflects its association with the European Research Community on Flow, Turbulence and Combustion (ERCOFTAC).
Since its establishment, the journal was published by Martinus Nijhoff Publishers. In the late 1980s it was taken over by Kluwer Academic Publishers, which subsequently became part of the current publisher, Springer Science+Business Media.
References
External links
European Research Community on Flow, Turbulence and Combustion
Energy and fuel journals
English-language journals
Fluid mechanics
Fluid dynamics journals
Academic journals established in 1949
Springer Science+Business Media academic journals
8 times per year journals | Flow, Turbulence and Combustion | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 177 | [
"Fluid dynamics journals",
"Environmental science journals",
"Energy and fuel journals",
"Civil engineering",
"Fluid mechanics",
"Fluid dynamics"
] |
29,961,903 | https://en.wikipedia.org/wiki/Plant%20matrix%20metalloproteinase | Plant matrix metalloproteinases are metalloproteins and zinc enzymes found in plants.
Matrix Metalloproteinase
Matrix metalloproteinases (MMPs) are zinc endopeptidases, commonly called metzincins. MMP enzymes represent an ancient family of proteins with major similarities in genetic make-up that are present in a range of diverse organisms from unicellular bacteria to multicellular vertebrates and invertebrates. The superfamily is distinguished by its motif consisting of three histidines bonded to zinc at the catalytic site. The metzincins are divided into four smaller families: serralysins, astacins, adamalysins (ADAMs), and MMPs. The MMP family is formed by twenty related zinc-dependent enzymes. They are noted for having the ability to degrade extracellular matrix proteins, such as collagens, laminin, and proteoglycans. These calcium- and zinc-dependent proteases are activated at neutral pH, and twenty-three have been found in mammalian cells. Plant MMPs show structural similarity to MMPs found in mammals, such as the presence of an auto-regulatory cysteine switch domain and a zinc-binding catalytic domain.
MMPs are synthesized primarily by connective tissues and contribute substantially to the initial events of tissue degradation. There are three major groups of the MMP family, and each group has more than one distinct gene product, distinguished from the others on immunological and biochemical criteria. The groups differ in substrate specificity: MMPs in the first group, called collagenases, cleave interstitial collagens. The second group, called gelatinases, catalytically degrade denatured collagens. The third group, called stromelysins, have the broadest proteolytic action and were originally mistaken for proteoglycanases. A less clearly described group of MMPs is the PUMP. Its RNA was taken from stromal cells in human breast carcinomas. Based on the PUMP sequence and the functionality of carcinomas in the progression of malignancy, a new branch of the MMP family may have been discovered.
Extracellular Matrix
The most basic description of the plant extracellular matrix (ECM) is the cell wall, but it is actually the cell surface continuum that includes a variety of proteins with major roles in plant growth, development, and response. The ECM is composed of the primary and secondary cell walls, along with the intercellular gap between its neighboring cells. The ECM has a functional structure, along with aid in the regulation of turgor, which acts as a protective barrier and communicates with other cells using signaling pathways. In mammalian animals, extracellular matrix metalloproteinases (MMPs) modify the ECM to play significant roles in biological processes. The important role of MMP function in the extracellular matrix modification and subsequent mammalian development and signaling suggests that further study on the structure and function of these extracellular metalloproteinases may reveal new aspects of ECM modification in plant development.
Plant MMPs
All known MMPs have been studied in vertebrates; it is hypothesized that they are involved in remodeling connective tissue during development and healing. Ongoing biochemical research continues to analyze MMP-ECM interactions and their effects during plant development, stress induction, and xylem-phloem differentiation. SMEP1, soybean metalloendoproteinase 1, has been sequenced and characterized. SMEP1 diverges from the typical MMP family in several ways. For example, SMEP1 is said to have a free cysteine at position 94, a non-homologous insert from V103 to S121, a free sulfhydryl group, and the complete lack of the aspartate that is found in all of the other MMPs.
Studies of plant MMPs
Protein inhibitors of proteases, are present in plants, animals, and microorganisms. They are ubiquitous in nature and have a small molecular mass ranging from four to twenty-five kilo-Daltons. Different types of protease inhibition are directed toward a single class of protease. There are few reports on natural inhibitors of metalloproteinases. The metalloproteinase inhibitors (MPIs) can prevent unwanted proteolysis by denaturing their target proteases through non-competitive inhibition at an allosteric site. Five novel Lupinus albus MPIs were found and constitute the first reported protein inhibitors of metalloproteinases in plants and the first reported plant peptide inhibitors against a matrixin proteinase.
MtMMPL1, a Medicago truncatula nodulin gene identified by transcriptomics, is said to represent a novel and specific marker for root and nodule infection by Sinorhizobium meliloti. The possible role in the nitrogen-fixing symbiosis of a nodulin gene was investigated. The immune response of the plant to the alterations in the exopolysaccharides (EPSs) and lipopolysaccharides (LPSs) of various rhizobia led to the formation of enlarged infection threads (ITs) with thickened cell walls, which is often associated with plant defense reactions, and to the production of ineffective nodules in their plant host. Even though its precise role is classified as unknown, MTMMPL1 is noted as the first member of this biologically important protein family with a clear function in plant-microbe symbiotic associations.
At2-MMP from arabidopsis was found in leaves and roots of young arabidopsis and leaves, roots, and inflorescences of mature flowering plants showing strong increase of transcript abundance with aging. In the leaves, the MMP gene was expressed in the phloem, developing xylem elements, neighboring mesophyll cell layers, and epidermal cells. The flowers were noted as having the gene in pistils, ovules, and receptacles. It was concluded that the At2-MMP has a physiological role in mature aging tissue and the possibility of being involved in plant senescence.
The fungus Chondrostereum purpureum, the causal agent of silver leaf, was grown in liquid culture and agar, which caused it to secrete extracellular proteinases into the medium. Activity of the dialysed culture fluid was restored by the addition of metal ions, which confirmed the presence of metalloproteinases. The silverleaf disease is a basidiomycete pathogenic on a wide range of host plants. The most notable host plant species include pomaceous and stone fruit species, which are substantial for New Zealand's economy. Cations, such as copper, zinc, and cobalt, are all inhibitory for the control extract and stimulatory for the EDTA-dialysed extract, which could possibly make them native cofactors of the process. The amount of proteinases could vary with the duration of the infection's presence. Activity was found throughout the infected zone and not just the wound site; therefore, fungal growth and proteinase activity have a direct relationship. Even though zinc-binding metalloproteinases have been found to aid processes such as protein turnover and embryogenesis, it is still unclear what role they play in plants. To try to better understand MMPs' role in plant tissue, SMEP1 was cloned and analyzed using a polymerase chain reaction (PCR) and the rapid amplification of cDNA ends (RACE) reaction. It was found only to be present in mature leaves, which suggests that SMEP1 may play an important role in tissue modeling.
References
Notes
Bibliography
Cao, J. & Zucker, S. (n.d.). Introduction to the MMP and TIMP families (structures, substrates) and an overview of diseases where MMPs have been incriminated. Biology and chemistry of matrix metalloproteinases (MMPs). Retrieved from http://www.abcam.com/index.html?pageconfig=resource&rid=11034
Murphy, G., Murphy, G., & Reynolds, J. (1991). The origin of matrix metalloproteinases and their familial relationships. Federation of European Biochemical Societies, 289 (1), 4-7.
Flinn, B. (2008). Review: Plant extracellular matrix metalloproteinases. Functional Plant Biology, 35, 1183-1193.
McGeehan, G., Burkhart, W., Anderegg, R., Becherer, J. D., Gillikin, J. W., & Graham, J. S. (1992). Sequencing and Characterization of the Soybean Leaf Metalloproteinase. Plant Physiol., 99, 1179-1183.
Carrilho, D., Duarte, I., Francisco, R., Ricardo, C., & Duque-Magalhaes, M. (2009). Discovery of Novel Plant Peptides as Strong Inhibitors of Metalloproteinases. Protein and Peptide Letters, 16, 543-551.
Combier, J., Vernie, T., Billy, F., Yahyaoui, F., Mathis, R., & Gamas, P. (2007). The MtMMPL1 Early Nodulin is a novel member of the matrix metalloproteinase family with a role in Medicago truncatula infection by Sinorhizobium meliloti. Plant Physiology, 144, 703-716.
Golldack, D., Popova, O., & Dietz, K. (2002). Mutation of the Matrix Metalloproteinase At2-MMP Inhibits Growth and Causes Late Flowering and Early Senescence in Arabidopsis. The Journal of Biological Chemistry, 277 (7) 5541-5547.
Graham, J. S., Xiong, J., & Gillikin, J. W. (1991). Purification and developmental Analysis of a Metalloendoproteinase from the Leaves of Glycine max. Plant Physiol., 97, 786-792.
Ao, C., Li, A., Elzaawely, A., & Tawata, S. (2008). MMP-13 Inhibitory Activity of Thirteen Selected Plant Species from Okinawa. International Journal of Pharmacology, 4 (3), 202-207.
Metalloproteins
Zinc enzymes | Plant matrix metalloproteinase | [
"Chemistry"
] | 2,258 | [
"Metalloproteins",
"Bioinorganic chemistry"
] |
28,426,193 | https://en.wikipedia.org/wiki/Port%20and%20starboard | Port and starboard are nautical terms for watercraft and spacecraft, referring respectively to the left and right sides of the vessel, when aboard and facing the bow (front).
Vessels with bilateral symmetry have left and right halves which are mirror images of each other. One asymmetric feature is where access to a boat, ship, or aircraft is at the side; it is usually only on the port side (hence the name).
Side
Port side and starboard side respectively refer to the left and right sides of the vessel, when aboard and facing the bow. The port and starboard sides of the vessel always refer to the same portion of the vessel's structure, and do not depend on the position of someone aboard the vessel.
The port side is the side to the left of an observer aboard the vessel and facing the bow, towards the direction the vessel is heading when underway in the forward direction. The starboard side is to the right of such an observer.
This convention allows orders and information to be communicated unambiguously, without needing to know which way any particular crew member is facing.
Etymology
The term starboard derives from the Old English steorbord, meaning the side on which the ship is steered. Before ships had rudders on their centrelines, they were steered with a steering oar at the stern of the ship on the right hand side of the ship, because more people are right-handed. The "steer-board" etymology is shared by the German Steuerbord, Dutch stuurboord and Swedish styrbord, which gave rise to the French tribord, Italian tribordo, Catalan estribord, Portuguese estibordo, Spanish estribor and Estonian tüürpoord.
Since the steering oar was on the right side of the boat, it would tie up at the wharf on the other side. Hence the left side was called port. The Oxford English Dictionary cites port in this usage since 1543.
Formerly, larboard was often used instead of port. This is from Middle English ladebord and the term lade is related to the modern load. Larboard sounds similar to starboard and in 1844 the Royal Navy ordered that port be used instead. The United States Navy followed suit in 1846. Larboard continued to be used well into the 1850s by whalers. In chapter 12 of Life on the Mississippi (1883) Mark Twain writes larboard to refer to the left side of the ship (Mississippi River steamboat) in his days on the river – circa 1857–1861. Lewis Carroll rhymed larboard and starboard in "Fit the Second" of The Hunting of the Snark (1876).
An Anglo-Saxon record of a voyage by Ohthere of Hålogaland used the word "bæcbord" ("back-board") for the left side of a ship. With the steering rudder on the starboard side the man on the rudder had his back to the bagbord (Nordic for portside) side of the ship. The words for "port side" in other European languages, such as German Backbord, Dutch and Afrikaans bakboord, Swedish babord, Spanish babor, Portuguese bombordo, Italian babordo, French bâbord and Estonian pakpoord, are derived from the same root.
Importance of standard terms
The navigational treaty convention, the International Regulations for Preventing Collisions at Sea—for instance, as appears in the UK's Merchant Shipping (Distress Signals and Prevention of Collisions) Regulations 1996 (and comparable US documents from the US Coast Guard)—sets forth requirements for maritime vessels to avoid collisions, whether by sail or powered, and whether a vessel is overtaking, approaching head-on, or crossing. To set forth these navigational rules, the terms starboard and port are essential, and to aid in in situ decision-making, the two sides of each vessel are marked, dusk to dawn, by navigation lights, the vessel's starboard side by green and its port side by red. Aircraft are lit in the same way.
Other nautical uses
Port and starboard are also commonly used when dividing crews; for example with a two watch system the teams supplying the personnel are often named Port and Starboard. This may extend to entire crews, such as the forward-deployed crews of the Royal Navy’s Gulf-based frigate, or ballistic missile submarines.
See also
Anatomical terms of location, another example of terms of directionality that do not depend on the location of the observer for things that are bilaterally symmetrical
Dexter and sinister, in heraldry
Direction (disambiguation)
Glossary of nautical terms (disambiguation)
Handedness
Laterality, preference in humans etc. for doing things with the left or right hand etc.
Proper right and proper left, in images of people etc.
Reflection symmetry
Sinistral and dextral, chirality, in scientific contexts
Terms of orientation
Notes
References
Aeronautics
Nautical terminology
Orientation (geometry) | Port and starboard | [
"Physics",
"Mathematics"
] | 1,015 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
28,426,846 | https://en.wikipedia.org/wiki/Heteronuclear%20molecule | A heteronuclear molecule is a molecule composed of atoms of more than one chemical element. For example, a molecule of water (H2O) is heteronuclear because it has atoms of two different elements, hydrogen (H) and oxygen (O).
Similarly, a heteronuclear ion is an ion that contains atoms of more than one chemical element. For example, the carbonate ion () is heteronuclear because it has atoms of carbon (C) and oxygen (O). The lightest heteronuclear ion is the helium hydride ion (HeH+). This is in contrast to a homonuclear ion, which contains all the same kind of atom, such as the dihydrogen cation, or atomic ions that only contain one atom such as the hydrogen anion (H−).
References
See also
Homonuclear molecule
Chemical compound
Molecules
Sets of chemical elements | Heteronuclear molecule | [
"Physics",
"Chemistry"
] | 197 | [
"Molecular physics",
"Molecules",
"Physical objects",
"nan",
"Atoms",
"Matter"
] |
28,431,115 | https://en.wikipedia.org/wiki/The%20Earth%20Awards | The Earth Awards is an aspirational platform for consumer-driven ideas that challenge designers and innovators to build a new economy. It is an annual competition since 2007, aiming to "transform visionary ideas into market-ready solutions by offering finalists the unique opportunities to pitch their project to world business leaders". The Awards are open to students, graduates and industry professionals - the public is invited to submit innovations to be judged.
Background
The Earth Awards originated from a collective of designers, architects, scientists, writers, and entrepreneurs. The event was founded by Nicole Ting-Yap, as an initiative of the ecoStyle Project, established in 2007 by the Malaysian Government.
Current Organization
As of 2010 the event is produced by NYC Inc, and Karena Albers of the organization kontentreal is the Director of the Awards.
Submissions are judged by a panel that includes Yves Behar, Richard Branson, David DeRothschild, Bill McKibben, and TreeHugger Founder Graham Hill.
Categories and Criteria
The Earth Awards is a global search for creative solutions designed for the 21st century. The award represents six categories: Built Environment, Product, Future, Systems, Fashion, and Social Justice.
Ideas, great or small, realized or prototypes, are considered but must distinguish themselves in six criteria: Achievable, Scalable, Measurable, Useful, Original and Ecological.
Annual results
The Earth Awards 2010
One finalist from each of the six categories will have their sustainable designs showcased in September 2010. This will include an exhibition in London, in conjunction with the Financial Times’ Sustainable Business Conference and gala dinner that will invite CEOs, entrepreneurs and venture capitalists to match innovation with investment.
The 2010 Selection Committee includes:
Paola Antonelli, Curator for Architecture and Design, Museum of Modern Art
Yves Béhar, Founder, Fuseproject
Sir Richard Branson, Founder and CEO, Virgin Group
Graydon Carter, Editor-In-Chief, Vanity Fair
Majora Carter, President, The Majora Carter Group
Tony Chambers, Editor-in-Chief, Wallpaper* Magazine
Alexandra Cousteau, Founder, Blue Legacy International
David de Rothschild, Founder, Adventure Ecology
The Gyalwang Drukpa, Spiritual Leader, The Drukpa Lineage
Rick Fedrizzi, President and CEO, United States Green Building Council
Julie Gilhart, Fashion Director, Barneys New York
Dr. Jane Goodall, Jane Goodall Institute & UN Messenger of Peace
Scott Mackinlay Hahn, Co-founder, Rogan and Loomstate
Peter Head, Director, ARUP
Graham Hill, Founder of TreeHugger
Khaldoon Khalifa Al Mubarak, CEO, Mubadala Development Company
Yang Lan, Chairwoman, Sun Television
Ira C. Magaziner, Chairman, Clinton Climate Initiative
Bill McKibben, Writer, Environmentalist
Barry Nalebuff, Professor, Yale School of Management
Sergio Palleroni, Co-founder and Director, BaSiC Initiative
Karim Rashid, Founder, Karim Rashid Inc.
Jonathan Rose RIBA, Principal, AECOM and Masterplanning Practice Leader
Cameron Sinclair, Founder, Architecture for Humanity
Werner Sobek, Founder, Werner Sobek Engineering + Design
Philippe Starck, Founder, Starck Network
Diane von Furstenberg, Founder, DvF
Dilys Williams, Director, Center for Sustainable Fashion
Ken Yeang, Principal, Llewelyn Davies Yeang
The Earth Awards 2009
In 2009, The Earth Awards ceremony took place in New York City. Neri Oxman's project FAB.REcology won the grand prize for combining principles of biomimicry with the design and construction of built environments.
A prestigious and eclectic panel served on the Selection Committee, including: Paola Antonelli, Adam Bly, David Buckland, Antonio de la Rua, David de Rothschild, Nicky Gavron, Scott Hahn, Peter Head, Graham Hill, Dr. Dan Kammen, Yang Lan, Thom Mayne, Michael McDonough, Khaldoon Khalifa Al Mubarak, Barry Nalebuff, Sergio Palleroni, John Picard, Werner Sobek, Terry Tamminen, Suzanne Trocmé, Dilys Williams, and Dr. Kenneth Yeang.
References
External links
The Earth Awards
Earth Awards
American awards | The Earth Awards | [
"Engineering"
] | 863 | [
"Design",
"Design awards"
] |
28,431,595 | https://en.wikipedia.org/wiki/Lagrange%2C%20Euler%2C%20and%20Kovalevskaya%20tops | In classical mechanics, the rotation of a rigid body such as a spinning top under the influence of gravity is not, in general, an integrable problem. There are however three famous cases that are integrable, the Euler, the Lagrange, and the Kovalevskaya top, which are in fact the only integrable cases when the system is subject to holonomic constraints.
In addition to the energy, each of these tops involves two additional constants of motion that give rise to the integrability.
The Euler top describes a free top without any particular symmetry moving in the absence of any external torque, and for which the fixed point is the center of gravity. The Lagrange top is a symmetric top, in which two moments of inertia are the same and the center of gravity lies on the symmetry axis. The Kovalevskaya top is a special symmetric top with a unique ratio of the moments of inertia which satisfy the relation

$$I_1 = I_2 = 2I_3.$$

That is, two moments of inertia are equal, the third is half as large, and the center of gravity is located in the plane perpendicular to the symmetry axis (parallel to the plane of the two degenerate principal axes).
Hamiltonian formulation of classical tops
The configuration of a classical top is described at time $t$ by three time-dependent principal axes, defined by the three orthogonal vectors $\hat{\mathbf{e}}^1$, $\hat{\mathbf{e}}^2$ and $\hat{\mathbf{e}}^3$ with corresponding moments of inertia $I_1$, $I_2$ and $I_3$, and the angular velocity about those axes. In a Hamiltonian formulation of classical tops, the conjugate dynamical variables are the components of the angular momentum vector $\mathbf{L}$ along the principal axes,

$$(\ell_1, \ell_2, \ell_3) = (\mathbf{L}\cdot\hat{\mathbf{e}}^1, \mathbf{L}\cdot\hat{\mathbf{e}}^2, \mathbf{L}\cdot\hat{\mathbf{e}}^3),$$

and the z-components of the three principal axes,

$$(n_1, n_2, n_3) = (\hat{\mathbf{z}}\cdot\hat{\mathbf{e}}^1, \hat{\mathbf{z}}\cdot\hat{\mathbf{e}}^2, \hat{\mathbf{z}}\cdot\hat{\mathbf{e}}^3).$$

The Poisson bracket relations of these variables are given by

$$\{\ell_a, \ell_b\} = -\varepsilon_{abc}\,\ell_c, \quad \{\ell_a, n_b\} = -\varepsilon_{abc}\,n_c, \quad \{n_a, n_b\} = 0.$$

If the position of the center of mass is given by $\mathbf{R}_{cm} = a\,\hat{\mathbf{e}}^1 + b\,\hat{\mathbf{e}}^2 + c\,\hat{\mathbf{e}}^3$, then the Hamiltonian of a top is given by

$$H = \frac{\ell_1^2}{2I_1} + \frac{\ell_2^2}{2I_2} + \frac{\ell_3^2}{2I_3} + mg(a\,n_1 + b\,n_2 + c\,n_3).$$

The equations of motion are then determined by

$$\dot{\ell}_a = \{\ell_a, H\}, \quad \dot{n}_a = \{n_a, H\}.$$

Explicitly, these are

$$\dot{\ell}_1 = \left(\frac{1}{I_3} - \frac{1}{I_2}\right)\ell_2\ell_3 + mg(c\,n_2 - b\,n_3),$$
$$\dot{n}_1 = \frac{\ell_3}{I_3}\,n_2 - \frac{\ell_2}{I_2}\,n_3,$$

and cyclic permutations of the indices.
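As a sanity check on these equations, the sketch below (not part of the original article) integrates them numerically with SciPy for an asymmetric heavy top and verifies that the Hamiltonian stays constant; all parameter values are arbitrary illustrations.

```python
import numpy as np
from scipy.integrate import solve_ivp

I = np.array([1.0, 2.0, 3.0])   # principal moments of inertia I1, I2, I3
r = np.array([0.1, 0.2, 0.3])   # center-of-mass position (a, b, c)
mg = 1.0                        # weight m*g

def hamiltonian(y):
    l, n = y[:3], y[3:]
    return np.sum(l**2 / (2 * I)) + mg * np.dot(r, n)

def rhs(t, y):
    l, n = y[:3], y[3:]
    w = l / I                   # angular velocity components
    # Vector form of the explicit equations above and their cyclic
    # permutations: dl/dt = l x w + mg (n x r), dn/dt = n x w.
    return np.concatenate([np.cross(l, w) + mg * np.cross(n, r),
                           np.cross(n, w)])

y0 = np.array([1.0, 0.5, 0.2, 0.0, 0.0, 1.0])   # (l, n) at t = 0
sol = solve_ivp(rhs, (0.0, 20.0), y0, rtol=1e-10, atol=1e-12)

drift = abs(hamiltonian(sol.y[:, -1]) - hamiltonian(y0))
print(f"energy drift after t = 20: {drift:.2e}")  # ~1e-9 or smaller
```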
Mathematical description of phase space
In mathematical terms, the spatial configuration of the body is described by a point $R$ on the Lie group $SO(3)$, the three-dimensional rotation group, which is the rotation matrix from the lab frame to the body frame. The full configuration space or phase space is the cotangent bundle $T^*SO(3)$, with the fibers parametrizing the angular momentum at spatial configuration $R$. The Hamiltonian is a function on this phase space.
Euler top
The Euler top, named after Leonhard Euler, is an untorqued top (for example, a top in free fall), with Hamiltonian

$$H_E = \frac{\ell_1^2}{2I_1} + \frac{\ell_2^2}{2I_2} + \frac{\ell_3^2}{2I_3}.$$

The four constants of motion are the energy $H_E$ and the three components of angular momentum in the lab frame,

$$\mathbf{L} = \ell_1\,\hat{\mathbf{e}}^1 + \ell_2\,\hat{\mathbf{e}}^2 + \ell_3\,\hat{\mathbf{e}}^3.$$
Lagrange top
The Lagrange top, named after Joseph-Louis Lagrange, is a symmetric top ($I_1 = I_2$) with the center of mass along the symmetry axis at location $\mathbf{R}_{cm} = h\,\hat{\mathbf{e}}^3$, with Hamiltonian

$$H_L = \frac{\ell_1^2 + \ell_2^2}{2I_1} + \frac{\ell_3^2}{2I_3} + mgh\,n_3.$$

The four constants of motion are the energy $H_L$, the angular momentum component along the symmetry axis, $\ell_3$, the angular momentum in the z-direction,

$$L_z = \ell_1 n_1 + \ell_2 n_2 + \ell_3 n_3,$$

and the magnitude of the n-vector,

$$n^2 = n_1^2 + n_2^2 + n_3^2.$$
Kovalevskaya top
The Kovalevskaya top is a symmetric top in which , and the center of mass lies in the plane perpendicular to the symmetry axis . It was discovered by Sofia Kovalevskaya in 1888 and presented in her paper "Sur le problème de la rotation d'un corps solide autour d'un point fixe", which won the Prix Bordin from the French Academy of Sciences in 1888. The Hamiltonian is
The four constants of motion are the energy , the Kovalevskaya invariant
where the variables are defined by
the angular momentum component in the z-direction,
and the magnitude of the n-vector
Nonholonomic constraints
If the constraints are relaxed to allow nonholonomic constraints, there are other possible integrable tops besides the three well-known cases. The nonholonomic Goryachev–Chaplygin top (introduced by D. Goryachev in 1900 and integrated by Sergey Chaplygin in 1948) is also integrable (here $I_1 = I_2 = 4I_3$, with the motion restricted to the invariant manifold on which the angular momentum in the z-direction vanishes). Its center of gravity lies in the equatorial plane.
See also
Cardan suspension
References
External links
Kovalevskaya Top – from Eric Weisstein's World of Physics
Kovalevskaya Top
Spinning tops
Hamiltonian mechanics | Lagrange, Euler, and Kovalevskaya tops | [
"Physics",
"Mathematics"
] | 861 | [
"Hamiltonian mechanics",
"Theoretical physics",
"Classical mechanics",
"Dynamical systems"
] |
28,439,122 | https://en.wikipedia.org/wiki/Comminution | Comminution is the reduction of solid materials from one average particle size to a smaller average particle size, by crushing, grinding, cutting, vibrating, or other processes. In geology, it occurs naturally during faulting in the upper part of the Earth's crust. In industry, it is an important unit operation in mineral processing, ceramics, electronics, and other fields, accomplished with many types of mill. In dentistry, it is the result of mastication of food. In general medicine, it is one of the most traumatic forms of bone fracture.
Within industrial uses, the purpose of comminution is to reduce the size and to increase the surface area of solids. It is also used to free useful materials from matrix materials in which they are embedded, and to concentrate minerals.
Energy requirements
The comminution of solid materials consumes energy, which is used to break up the solid into smaller pieces. The comminution energy can be estimated by the following empirical laws (a worked example using Bond's law appears after the list):
Rittinger's law, which assumes that the energy consumed is proportional to the newly generated surface area;
Kick's law, which related the energy to the sizes of the feed particles and the product particles;
Bond's law, which assumes that the total work useful in breakage is inversely proportional to the square root of the diameter of the product particles, [implying] theoretically that the work input varies as the length of the new cracks made in breakage.
Holmes's law, which modifies Bond's law by substituting the square root with an exponent that depends on the material.
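As a rough numerical illustration, the sketch below applies Bond's law in its usual engineering form, W = 10·Wi·(1/√P80 − 1/√F80), where W is in kWh per short ton, Wi is the Bond work index of the material, and F80/P80 are the 80%-passing sizes of feed and product in micrometres; this specific form and the work-index value are assumptions of the example, not stated in the text above.

```python
from math import sqrt

def bond_energy(Wi, F80_um, P80_um):
    """Specific comminution energy (kWh/short ton) by Bond's law."""
    return 10.0 * Wi * (1.0 / sqrt(P80_um) - 1.0 / sqrt(F80_um))

# Grinding an ore with Wi ~ 13 from 10 mm down to a 100 um product:
W = bond_energy(Wi=13.0, F80_um=10_000, P80_um=100)
print(f"{W:.1f} kWh/short ton")   # -> about 11.7 kWh/short ton
# Note how the 1/sqrt(P80) term dominates: halving the product size
# again raises the energy requirement steeply.
```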
Forces
There are three forces which are typically used to affect the comminution of particles: impact, shear, and compression.
Methods
There are several methods of comminution. Comminution of solid materials requires different types of crushers and mills depending on the feed properties such as hardness at various size ranges and application requirements such as throughput and maintenance. The most common machines for the comminution of coarse feed material (primary crushers) are the jaw crusher (1m > P80 > 100 mm), cone crusher (P80 > 20 mm) and hammer crusher. Primary crusher products in intermediate feed particle size ranges (100mm > P80 > 20mm) can be ground in autogenous (AG) or semi-autogenous (SAG) mills depending on feed properties and application requirements. For comminution of finer particle size ranges (20mm > P80 > 30 μm) machines like the ball mill, vertical roller mill, hammer mill, roller press or high compression roller mill, vibration mill, jet mill and others are used. For yet finer grind sizes (sometimes referred to as "ultrafine grinding"), specialist mills such as the IsaMill are used.
Trituration, for instance, is comminution (or substance breakdown) by rubbing. Furthermore, methods of trituration include levigation, which is the trituration of a powder with a non-solvent liquid, and pulverization by intervention, which is trituration with a solvent that can be easily removed after the substance has been broken down.
See also
Electromagnetic vortex intensifier with ferromagnetic particles - Special equipment for ultrafine grinding
References
Industrial processes | Comminution | [
"Physics",
"Chemistry",
"Engineering"
] | 663 | [
"Chemical equipment",
"nan",
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
31,229,259 | https://en.wikipedia.org/wiki/Coordinating%20Committee%20for%20Earthquake%20Prediction | The Coordinating Committee for Earthquake Prediction (CCEP) (Japanese: 地震予知連絡会, Jishin Yochi Renraku-kai) in Japan was founded in April 1969, as part of the Geodesy Council's Second Earthquake Prediction Plan, in order to carry out a comprehensive evaluation of earthquake data in Japan. The committee consists of 30 members and meets four times each year, as well as publishing a report on its activities twice each year. The CCEP brings together representatives from 20 governmental bodies and universities engaged in earthquake prediction and research. It has a secretariat within the Ministry of Land, Infrastructure, Transport and Tourism.
History
The first moves towards the committee were taken after earthquake researchers published Earthquake Prediction - Current Status and Action Plan in 1962. This was adopted by the General Assembly of the Geodesy Council with the launch of their first prediction plan in 1964. Following earthquakes in 1964, 1965, and 1968, the CCEP was founded to coordinate future prediction activities.
Geographical Areas of Observation
In order to focus future work, based on the geological evidence, and as well as the prediction of a Tōkai earthquake in the relatively near future, in 1970, the CCEP designated certain areas of Japan as Areas of Specified Observation or Areas of Intensified Observation. The Tōkai region was upgraded to an Area of Intensified Observation in 1974.
By 1978, when some of the boundaries were also changed, eight Areas of Specified Observation and two Areas of Intensified Observation had been designated.
Areas of Intensified Observation
South Kantō
Tōkai region
Participating organisations
The following organisations are represented on the CCEP:
Universities
Institute of Seismology and Volcanology, Hokkaido University
Research Center for Prediction of Earthquakes and Volcanic Eruptions, Tohoku University
School of Life and Environmental Sciences, University of Tsukuba
School of Science, University of Tokyo
Earthquake Research Institute, University of Tokyo
Volcanic Fluid Research Center, Tokyo Institute of Technology
Research Center for Seismology, Volcanology and Disaster Mitigation, Nagoya University
Department of Geophysics, Kyoto University
Disaster Prevention Research Institute, Kyoto University
Geospheric Structure and Dynamics Laboratory, Tottori University
Institute of Seismology and Volcanology, Kyushu University
Nansei-Toko Observatory for Earthquakes and Volcanoes, Kagoshima University
Institute of Statistical Mathematics
Governmental organisations
National Research Institute for Earth Science and Disaster Prevention
Japan Agency for Marine-Earth Science and Technology
Geological Survey of Japan, National Institute of Advanced Industrial Science and Technology
Hydrographic and Oceanographic Department, Japan Coast Guard
Japan Meteorological Agency / Meteorological Research Institute
Geospatial Information Authority of Japan
Other bodies
Tono research institute of Earthquake Science
Hot Springs Research Institute, Kanagawa Prefecture
See also
Kiyoo Mogi, former chair of the CCEP
Seismicity in Japan
Nuclear Power in Japan - Seismicity
References
External links
Science and technology in Japan
Organizations established in 1969
Geology of Japan
Earthquake and seismic risk mitigation
Prediction
Earthquakes in Japan
1969 establishments in Japan | Coordinating Committee for Earthquake Prediction | [
"Engineering"
] | 583 | [
"Structural engineering",
"Earthquake and seismic risk mitigation"
] |
31,232,369 | https://en.wikipedia.org/wiki/Markov%20perfect%20equilibrium | A Markov perfect equilibrium is an equilibrium concept in game theory. It has been used in analyses of industrial organization, macroeconomics, and political economy. It is a refinement of the concept of subgame perfect equilibrium to extensive form games for which a pay-off relevant state space can be identified. The term appeared in publications starting about 1988 in the work of economists Jean Tirole and Eric Maskin.
Definition
In extensive form games, and specifically in stochastic games, a Markov perfect equilibrium is a set of mixed strategies for each of the players which satisfy the following criteria:
The strategies have the Markov property of memorylessness, meaning that each player's mixed strategy can be conditioned only on the state of the game. These strategies are called Markov reaction functions.
The state can only encode payoff-relevant information. This rules out strategies that depend on non-substantive moves by the opponent. It excludes strategies that depend on signals, negotiation, or cooperation between the players (e.g. cheap talk or contracts).
The strategies form a subgame perfect equilibrium of the game.
Focus on symmetric equilibria
In symmetric games, when the players have a strategy and action sets which are mirror images of one another, often the analysis focuses on symmetric equilibria, where all players play the same mixed strategy. As in the rest of game theory, this is done both because these are easier to find analytically and because they are perceived to be stronger focal points than asymmetric equilibria.
Lack of robustness
Markov perfect equilibria are not stable with respect to small changes in the game itself. A small change in payoffs can cause a large change in the set of Markov perfect equilibria. This is because a state with a tiny effect on payoffs can be used to carry signals, but if its payoff difference from any other state drops to zero, it must be merged with it, eliminating the possibility of using it to carry signals.
Examples
For examples of this equilibrium concept, consider the competition between firms which have invested heavily into fixed costs and are dominant producers in an industry, forming an oligopoly. The players are taken to be committed to levels of production capacity in the short run, and the strategies describe their decisions in setting prices. The firms' objectives are modelled as maximizing the present discounted value of profits.
Airfare game
Often an airplane ticket for a certain route has the same price on either airline A or airline B. Presumably, the two airlines do not have exactly the same costs, nor do they face the same demand function given their varying frequent-flyer programs, the different connections their passengers will make, and so forth. Thus, a realistic general equilibrium model would be unlikely to result in nearly identical prices.
Both airlines have made sunk investments into the equipment, personnel, and legal framework, thus committing to offering service. They are engaged or trapped, in a strategic game with one another when setting prices.
Consider the following strategy of an airline for setting the ticket price for a certain route. At every price-setting opportunity:
if the other airline is charging $300 or more, or is not selling tickets on that flight, charge $300
if the other airline is charging between $200 and $300, charge the same price
if the other airline is charging $200 or less, choose randomly between the following three options with equal probability: matching that price, charging $300, or exiting the game by ceasing indefinitely to offer service on this route.
This is a Markov strategy because it does not depend on a history of past observations. It satisfies also the Markov reaction function definition because it does not depend on other information which is irrelevant to revenues and profits.
Assume now that both airlines follow this strategy exactly. Assume further that passengers always choose the cheapest flight and so if the airlines charge different prices, the one charging the higher price gets zero passengers. Then if each airline assumes that the other airline will follow this strategy, there is no higher-payoff alternative strategy for itself, i.e. it is playing a best response to the other airline strategy. If both airlines followed this strategy, it would form a Nash equilibrium in every proper subgame, thus a subgame-perfect Nash equilibrium.
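The Markov reaction function above is simple enough to simulate directly. The toy sketch below (an illustration added here, not drawn from the cited literature) encodes the strategy and shows that each airline's price depends only on the other's current price, with matched prices being absorbing.

```python
import random

def respond(other_price):
    """One airline's Markov reaction function to the other's current price."""
    if other_price is None or other_price >= 300:
        return 300
    if other_price > 200:
        return other_price                 # match prices in ($200, $300)
    # other airline is at or below $200: match, jump to $300, or exit
    return random.choice([other_price, 300, None])

b = 250                                    # arbitrary starting price for B
for step in range(5):
    a = respond(b)                         # A observes only B's price
    if a is None:
        print("airline A exits the route"); break
    b = respond(a)                         # B observes only A's price
    if b is None:
        print("airline B exits the route"); break
    print(f"step {step}: A = ${a}, B = ${b}")   # prices lock together
```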
A Markov-perfect equilibrium concept has also been used to model aircraft production, as different companies evaluate their future profits and how much they will learn from production experience in light of demand and what others firms might supply.
Discussion
Airlines do not literally or exactly follow these strategies, but the model helps explain the observation that airlines often charge exactly the same price, even though a general equilibrium model specifying non-perfect substitutability would generally not provide such a result. The Markov perfect equilibrium model helps shed light on tacit collusion in an oligopoly setting, and make predictions for cases not observed.
One strength of an explicit game-theoretical framework is that it allows us to make predictions about the behaviours of the airlines if and when the equal-price outcome breaks down, and interpret and examine these price wars in light of different equilibrium concepts. In contrast to another equilibrium concept, Maskin and Tirole identify an empirical attribute of such price wars: in a Markov strategy price war, "a firm cuts its price not to punish its competitor, [rather only to] regain market share" whereas in a general repeated game framework a price cut may be a punishment to the other player. The authors claim that the market share justification is closer to the empirical account than the punishment justification, and so the Markov perfect equilibrium concept proves more informative, in this case.
Notes
References
Bibliography
Tirole, Jean. 1988. The Theory of Industrial Organization. Cambridge, MA: The MIT Press.
Maskin, Eric, and Jean Tirole. 1988. "A Theory of Dynamic Oligopoly: I & II" Econometrica 56:3, 549-600.
Game theory equilibrium concepts
Non-cooperative games | Markov perfect equilibrium | [
"Mathematics"
] | 1,226 | [
"Game theory",
"Non-cooperative games",
"Game theory equilibrium concepts"
] |
31,235,878 | https://en.wikipedia.org/wiki/Fixed%20orbit | A fixed orbit is the concept, in atomic physics, where an electron is considered to remain in a specific orbit, at a fixed distance from an atom's nucleus, for a particular energy level.
The concept was promoted by quantum physicist Niels Bohr c. 1913.
The idea of the fixed orbit is considered a major component of the Bohr model (or Bohr theory).
References
Quantum mechanics
Niels Bohr | Fixed orbit | [
"Physics",
"Chemistry"
] | 87 | [
" and optical physics stubs",
"Theoretical physics",
"Quantum mechanics",
" molecular",
"Atomic",
"Physical chemistry stubs",
" and optical physics"
] |
31,236,248 | https://en.wikipedia.org/wiki/Stationary%20orbit | In celestial mechanics, a stationary orbit is an orbit around a planet or moon where the orbiting satellite or spacecraft remains over the same spot on the surface. From the ground, the satellite would appear to be standing still, hovering above the surface in the same spot, day after day.
In practice, this is accomplished by matching the rotation of the surface below: the satellite is placed in an equatorial orbit at the particular altitude where its orbital period matches the rotation below. If the speed slowly decreases, an additional boost is needed to bring it back to the matching speed; if the satellite moves too fast, a retro-rocket can be fired to slow it.
The stationary-orbit region of space is known as the Clarke Belt, named after British science fiction writer Arthur C. Clarke, who published the idea in Wireless World magazine in 1945. A stationary orbit is sometimes referred to as a "fixed orbit".
Stationary Earth orbit
Around the Earth, stationary satellites orbit at altitudes of approximately 22,300 miles (35,900 km). Writing in 1945, the science-fiction author Arthur C. Clarke imagined communications satellites as travelling in stationary orbits, where those satellites would travel around the Earth at the same speed the globe is spinning, making them hover stationary over one spot on the Earth's surface.
A satellite being propelled into place, into a stationary orbit, is first fired into a special equatorial orbit called a "geostationary transfer orbit" (GTO). Within this oval-shaped (elliptical) orbit, the satellite will alternately swing out to the high altitude and then back down to an altitude of only about 100 miles (160 km) above the Earth (223 times closer). Then, at a planned time and place, an attached "kick motor" will push the satellite out to maintain an even, circular orbit at the 22,300-mile altitude, which follows from Kepler's third law as sketched below.
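A minimal sketch of that altitude calculation, assuming only Kepler's third law for a circular orbit, r = (GM·T²/4π²)^(1/3), with the period T set to Earth's sidereal rotation period (the constants are standard reference values):

```python
import math

GM_EARTH = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY_S = 86_164.1    # Earth's rotation period, seconds
EARTH_RADIUS_M = 6.378137e6  # equatorial radius, metres

# Kepler's third law for a circular orbit: T^2 = 4*pi^2 * r^3 / GM
orbit_radius_m = (GM_EARTH * SIDEREAL_DAY_S**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (orbit_radius_m - EARTH_RADIUS_M) / 1000

print(f"altitude: {altitude_km:,.0f} km ({altitude_km * 0.621371:,.0f} miles)")
```

This yields roughly 35,800 km (about 22,200 miles), in line with the figure quoted above; the same formula with Mars's gravitational parameter and rotation period gives the areostationary distance discussed in the next section.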
Stationary Mars orbit
An areostationary orbit or areosynchronous equatorial orbit (abbreviated AEO) is a circular areosynchronous orbit in the Martian equatorial plane about 20,428 km (12,693 mi) from the centre of mass of Mars, any point on which revolves about Mars in the same direction and with the same period as the Martian surface. Areostationary orbit is a concept similar to Earth's geostationary orbit. The prefix areo- derives from Ares, the ancient Greek god of war and counterpart to the Roman god Mars, with whom the planet was identified. The modern Greek word for Mars is Άρης (Áris).
See also
Lagrangian point
Cytherocentric orbit
References
Orbits
Astrophysics
Spaceflight concepts | Stationary orbit | [
"Physics",
"Astronomy"
] | 503 | [
"Astronomical sub-disciplines",
"Astrophysics"
] |
31,236,452 | https://en.wikipedia.org/wiki/Ecodistrict | An ecodistrict or eco-district (from "ecological" and "district") is a neighborhood, urban area, or region whose urban planning aims to integrate objectives of sustainable development and social equity, and to reduce the district's ecological footprint. The notion of an "ecodistrict" insists on the consideration of all environmental issues, via a collaborative process.
Designing an ecodistrict typically requires a complete rethinking of the district's energy system. The use of photovoltaic panels and electric vehicles is common.
Examples
Ecodistricts can be found in metropolises such as:
Stockholm (Hammarby Sjöstad) (Sweden)
Hanover (Germany)
Marseille (Euroméditerranée) (France)
Bordeaux (Ginko) (France)
Freiburg im Breisgau (Vauban, Freiburg) (Germany)
Malmö (BO01) (Sweden)
London (BedZED) (United Kingdom)
Grenoble (De Bonne and Blanche Monier) (France)
Dongtan (China)
EVA Lanxmeer (Netherlands)
Amsterdam-Noord (Netherlands)
Jono district low-carbon project (Kitakyushu, Japan)
Frequel-Fontarabie (Paris, France)
Atlanta (Midtown, Atlanta Georgia) (United States)
Energy Hub Project— Tweewaters Leuven (Belgium)
Etna, Pennsylvania, named the first US ecodistrict in 2019 (United States)
See also
Ecological footprint
Ecological debt
Ecovillage
Green building
Green retrofit
Peri-urbanisation
Sustainable city
Sustainable design
Sustainable transport
Transition town
Urban agriculture
Urban ecology
Urban forest
Urban green space
Urban vitality
Vertical farming
References
City
Sustainable design
Sustainable urban planning
Environmental planning
Landscape architecture | Ecodistrict | [
"Engineering"
] | 351 | [
"Landscape architecture",
"Architecture"
] |
31,239,754 | https://en.wikipedia.org/wiki/Lupeol | Lupeol is a pharmacologically active pentacyclic triterpenoid. It has several potential medicinal properties, such as anticancer and anti-inflammatory activity.
Natural occurrences
Lupeol is found in a variety of plants, including mango, Acacia visco and Abronia villosa. It is also found in dandelion coffee. Lupeol is present as a major component in Camellia japonica leaf.
Total synthesis
The first total synthesis of lupeol was reported by Gilbert Stork et al.
In 2009, Surendra and Corey reported a more efficient and enantioselective total synthesis of lupeol, starting from (1E,5E)-8-[(2S)-3,3-dimethyloxiran-2-yl]-2,6-dimethylocta-1,5-dienyl acetate by use of a polycyclization.
Biosynthesis
Lupeol is produced by several organisms from squalene epoxide. Dammarane and baccharane skeletons are formed as intermediates. The reactions are catalyzed by the enzyme lupeol synthase. A study on the metabolomics of Camellia japonica leaf revealed that lupeol is produced from squalene epoxide, with squalene serving as the precursor.
Pharmacology
Lupeol has a complex pharmacology, displaying antiprotozoal, antimicrobial, anti-inflammatory, antitumor and chemopreventive properties.
Animal models suggest lupeol may act as an anti-inflammatory agent. A 1998 study found lupeol to decrease paw swelling in rats by 39%, compared to 35% for the standardized control compound indomethacin.
One study has also found some activity as a dipeptidyl peptidase-4 inhibitor and prolyl oligopeptidase inhibitor at high concentrations (in the millimolar range).
It is an effective inhibitor in laboratory models of prostate and skin cancers.
As an anti-inflammatory agent, lupeol functions primarily on the interleukin system. Lupeol decreases interleukin 4 (IL-4) production by T-helper type 2 cells.
Lupeol has been found to have a contraceptive effect due to its inhibiting effect on the calcium channel of sperm (CatSper).
Lupeol has also been shown to exert anti-angiogenic and anti-cancer effects via the downregulation of TNF-alpha and VEGFR-2.
See also
Betulin
Betulinic acid
References
Triterpenes
Secondary alcohols
Total synthesis
Cyclopentanes | Lupeol | [
"Chemistry"
] | 582 | [
"Total synthesis",
"Chemical synthesis"
] |
36,426,069 | https://en.wikipedia.org/wiki/Hasse%20invariant%20of%20an%20algebra | In mathematics, the Hasse invariant of an algebra is an invariant attached to a Brauer class of algebras over a field. The concept is named after Helmut Hasse. The invariant plays a role in local class field theory.
Local fields
Let K be a local field with valuation v and D a K-algebra. We may assume D is a division algebra with centre K of degree n. The valuation v can be extended to D, for example by extending it compatibly to each commutative subfield of D: the value group of this valuation is (1/n)Z.
There is a commutative subfield L of D which is unramified over K, and D splits over L. The field L is not unique but all such extensions are conjugate by the Skolem–Noether theorem, which further shows that any automorphism of L is induced by a conjugation in D. Take γ in D such that conjugation by γ induces the Frobenius automorphism of L/K and let v(γ) = k/n. Then k/n modulo 1 is the Hasse invariant of D. It depends only on the Brauer class of D.
The Hasse invariant is thus a map defined on the Brauer group of a local field K to the divisible group Q/Z. Every class in the Brauer group is represented by a class in the Brauer group of an unramified extension L/K of degree n, which by the Grunwald–Wang theorem and the Albert–Brauer–Hasse–Noether theorem we may take to be a cyclic algebra (L,φ,π^k) for some k mod n, where φ is the Frobenius map and π is a uniformiser. The invariant map attaches the element k/n mod 1 to the class. This exhibits the invariant map as a homomorphism

inv : Br(L/K) → (1/n)Z/Z, sending the class of (L,φ,π^k) to k/n mod 1.
The invariant map extends to Br(K) by representing each class by some element of Br(L/K) as above.
For a non-Archimedean local field, the invariant map is a group isomorphism.
In the case of the field R of real numbers, there are two Brauer classes, represented by the algebra R itself and the quaternion algebra H. It is convenient to assign invariant zero to the class of R and invariant 1/2 modulo 1 to the quaternion class.
In the case of the field C of complex numbers, the only Brauer class is the trivial one, with invariant zero.
Global fields
For a global field K, given a central simple algebra D over K, for each valuation v of K we can consider the extension of scalars Dv = D ⊗ Kv. The extension Dv splits for all but finitely many v, so that the local invariant of Dv is almost always zero. The Brauer group Br(K) fits into an exact sequence

0 → Br(K) → ⊕_{v ∈ S} Br(Kv) → Q/Z → 0,

where S is the set of all valuations of K and the right arrow is the sum of the local invariants. The injectivity of the left arrow is the content of the Albert–Brauer–Hasse–Noether theorem. Exactness in the middle term is a deep fact from global class field theory.
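As a standard worked illustration (a textbook fact, not drawn from this article's sources), the rational quaternion algebra (−1,−1) over Q is ramified exactly at 2 and the real place, and its local invariants sum to zero as the exact sequence requires:

```latex
\[
  \operatorname{inv}_v\!\big((-1,-1)_{\mathbf{Q}}\big) =
  \begin{cases}
    \tfrac{1}{2} \bmod 1, & v = 2 \text{ or } v = \infty, \\
    0 \bmod 1,            & \text{all other places } v,
  \end{cases}
  \qquad
  \sum_{v} \operatorname{inv}_v = \tfrac{1}{2} + \tfrac{1}{2} \equiv 0 \bmod 1 .
\]
```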
References
Further reading
Field (mathematics)
Algebraic number theory | Hasse invariant of an algebra | [
"Mathematics"
] | 680 | [
"Algebraic number theory",
"Number theory"
] |
36,433,745 | https://en.wikipedia.org/wiki/Factor%20system | In mathematics, a factor system (sometimes called factor set) is a fundamental tool of Otto Schreier’s classical theory for the group extension problem. It consists of a set of automorphisms and a binary function on a group satisfying a certain condition (the so-called cocycle condition). In fact, a factor system constitutes a realisation of the cocycles in the second cohomology group in group cohomology.
Introduction
Suppose G is a group and A is an abelian group. For a group extension

1 → A → X → G → 1,

there exists a factor system, which consists of a function f : G × G → A and a homomorphism σ : G → Aut(A) such that it makes the cartesian product A × G a group X with the multiplication

(a, g) * (b, h) = (a σ(g)(b) f(g, h), gh).

So f must be a "group 2-cocycle" (and thus define an element in H2(G, A), as studied in group cohomology), satisfying

f(g, h) f(gh, k) = σ(g)(f(h, k)) f(g, hk).

In fact, A does not have to be abelian, but the situation is more complicated for non-abelian groups.
If f is trivial, then X splits over A, so that X is the semidirect product of G with A.
If a group algebra is given, then a factor system f modifies that algebra to a skew-group algebra by modifying the group operation xy to f(x, y) xy.
Application: for Abelian field extensions
Let G be a group and L a field on which G acts as automorphisms. A cocycle or (Noether) factor system is a map c : G × G → L* satisfying

c(g, h) c(gh, k) = g(c(h, k)) c(g, hk)  for all g, h, k in G.

Cocycles c, c′ are equivalent if there exists some system of elements a : G → L* with

c′(g, h) = a(g) g(a(h)) a(gh)⁻¹ c(g, h).

Cocycles of the form

c(g, h) = a(g) g(a(h)) a(gh)⁻¹

are called split. Cocycles under multiplication modulo split cocycles form a group, the second cohomology group H2(G,L*).
Crossed product algebras
Let us take the case that G is the Galois group of a field extension L/K. A factor system c in H2(G,L*) gives rise to a crossed product algebra A, which is a K-algebra containing L as a subfield, generated by the elements λ in L and u_g for g in G, with multiplication

u_g λ = g(λ) u_g,
u_g u_h = c(g, h) u_{gh}.

Equivalent factor systems correspond to a change of basis in A over K. We may write

A = (L, G, c).
The crossed product algebra A is a central simple algebra (CSA) of degree equal to [L : K]. The converse holds: every central simple algebra over K that splits over L and such that deg A = [L : K] arises in this way. The tensor product of algebras corresponds to multiplication of the corresponding elements in H2. We thus obtain an identification of the Brauer group, where the elements are classes of CSAs over K, with H2.
Cyclic algebra
Let us further restrict to the case that L/K is cyclic with Galois group G of order n generated by t. Let A be a crossed product (L,G,c) with factor set c. Let u = u_t be the generator in A corresponding to t. We can define the other generators

u_{t^i} = u^i  (for i = 1, ..., n − 1),

and then we have u^n = a in K. This element a specifies a cocycle c by

c(t^i, t^j) = 1 if i + j < n,  and  c(t^i, t^j) = a if i + j ≥ n.

It thus makes sense to denote A simply by (L,t,a). However a is not uniquely specified by A since we can multiply u by any element λ of L* and then a is multiplied by the product of the conjugates of λ. Hence A corresponds to an element of the norm residue group K*/N_{L/K}L*. We obtain the isomorphisms

K*/N_{L/K}L* ≅ Br(L/K) ≅ H2(G,L*).
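A standard textbook example (not taken from this article): Hamilton's quaternions H arise as the cyclic algebra with L = C, K = R, t complex conjugation and a = −1. Since −1 is not a norm from C (complex norms are the non-negative reals |z|²), the class of H in R*/N(C*) is nontrivial, so H is a division algebra:

```latex
\[
  \mathbf{H} \;\cong\; (\mathbf{C},\, t,\, -1),
  \qquad u^{2} = -1,
  \qquad u\,\lambda = \bar{\lambda}\, u \quad (\lambda \in \mathbf{C}).
\]
```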
References
Cohomology theories
Group theory | Factor system | [
"Mathematics"
] | 698 | [
"Group theory",
"Fields of abstract algebra"
] |
36,437,024 | https://en.wikipedia.org/wiki/Perfect%20thermal%20contact | Perfect thermal contact of the surface of a solid with the environment (convective heat transfer) or another solid occurs when the temperatures of the mating surfaces are equal.
Perfect thermal contact conditions
Perfect thermal contact supposes that on the boundary surface S there holds an equality of the temperatures

T₁ = T₂ on S,

and an equality of heat fluxes

λ₁ ∂T₁/∂n = λ₂ ∂T₂/∂n on S,

where T₁, T₂ are the temperatures of the solid and environment (or mating solid), respectively; λ₁, λ₂ are the thermal conductivity coefficients of the solid and the mating laminar layer (or solid), respectively; and n is the normal to the surface S.
If there is a heat source on the boundary surface S, e.g. caused by sliding friction, the latter equality transforms in the following manner:

λ₂ ∂T₂/∂n − λ₁ ∂T₁/∂n = q on S,

where q is the heat-generation rate per unit area.
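A minimal numeric sketch of these boundary conditions (all material values are illustrative assumptions, not taken from the references below): in a steady one-dimensional two-layer wall, perfect thermal contact means the layers share one interface temperature and carry the same heat flux.

```python
k1, L1 = 50.0, 0.02   # conductivity W/(m*K), thickness m of layer 1 (assumed)
k2, L2 = 0.8, 0.10    # conductivity and thickness of layer 2 (assumed)
T_left, T_right = 400.0, 300.0   # outer surface temperatures, K (assumed)

# Equal flux through both layers (series thermal resistances):
q = (T_left - T_right) / (L1 / k1 + L2 / k2)

# Perfect contact: a single shared interface temperature.
T_interface = T_left - q * L1 / k1

print(f"heat flux q = {q:.1f} W/m^2")
print(f"interface temperature = {T_interface:.2f} K")
```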
References
H. S. Carslaw, J. C. Jaeger (1959). Conduction of heat in solids. Oxford: Clarendon Press.
M. Shillor, M. Sofonea, J. J. Telega (2004). Models and analysis of quasistatic contact. Variational methods. Berlin: Springer.
Heat transfer
Boundary conditions | Perfect thermal contact | [
"Physics",
"Chemistry"
] | 217 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamics"
] |
26,616,666 | https://en.wikipedia.org/wiki/Building%20engineering%20physics | The term building engineering physics was introduced in a report released in January 2010 commissioned by The Royal Academy of Engineering (RAeng). The report, entitled Engineering a Low Carbon Built Environment: The Discipline of Building Engineering Physics, presents the initiative of many at the Royal Academy of Engineering in developing a field that addresses our fossil fuel dependence while working towards a more sustainably built environment for the future.
The field of building engineering physics combines the existing professions of building services engineering, applied physics and building construction engineering into a single field designed to investigate the energy efficiency of old and new buildings. The application of building engineering physics allows the construction and renovation of high performance, energy efficient buildings, while minimizing their environmental impacts.
Building engineering physics addresses several different areas in building performance, including air movement, thermal performance, control of moisture, ambient energy, acoustics, light, climate and biology. This field employs creative ways of manipulating these principal aspects of a building's indoor and outdoor environments so that a more eco-friendly standard of living is obtained. Building engineering physics differs from other established applied sciences or engineering professions in that it combines the sciences of architecture, engineering and human biology and physiology. Building engineering physics not only addresses energy efficiency and building sustainability, but also a building's internal environment conditions that affect the comfort and performance levels of its occupants.
Throughout the 20th century, a large percentage of buildings were constructed completely dependent on fossil fuels. Rather than focusing on energy efficiency, architects and engineers were more concerned with experimenting with “new materials and structural forms” to further aesthetic ideals. Now in the 21st century, building energy performance standards are pushing towards a zero carbon standard in old and new buildings alike. The threat of global change and the need for energy independence and sustainability have prompted governments across the globe to adopt firm carbon-reducing standards. A significant way to meet these stringent standards is in the construction of buildings that minimize environmental impacts, as well as the refurbishing of older buildings to meet carbon emission standards. The application of building engineering physics can aid in this transition toward less energy-dependent buildings, providing for the demands of a growing population and a better standard of living. The 2010 RAEng report expressed the expectation that growth in the application of this field would largely be due to the introduction of regulations requiring the calculation of carbon emissions to demonstrate compliance, principally the Energy Performance of Buildings Directive (EPBD).
As of 2010, the discipline of building engineering physics had not been adapted widely in the construction industry.
References
Sources
Applied and interdisciplinary physics
Energy conservation
Environmental design
Environmental science | Building engineering physics | [
"Physics",
"Engineering",
"Environmental_science"
] | 513 | [
"Environmental design",
"Applied and interdisciplinary physics",
"Building engineering",
"Civil engineering",
"nan",
"Design",
"Architecture"
] |
26,625,162 | https://en.wikipedia.org/wiki/Cell%20division%20control%20protein%204 | Cdc4 (cell division control protein 4) is a substrate recognition component of the SCF (SKP1-CUL1-F-box protein) ubiquitin ligase complex, which acts as a mediator of ubiquitin transfer to target proteins, leading to their subsequent degradation via the ubiquitin-proteasome pathway. Cdc4 targets primarily cell cycle regulators for proteolysis. It serves the function of an adaptor that brings target molecules to the core SCF complex.
Cdc4 was originally identified in the model organism Saccharomyces cerevisiae.
CDC4 gene function is required at G1/S and G2/M transitions during mitosis and at various stages during meiosis.
Homologues
The human homologue of the cdc4 gene is called FBXW7. The corresponding gene product is the F-box/WD repeat-containing protein 7.
In the nematode C. elegans, the homologue to Cdc4 is F-box/WD repeat-containing protein sel-10.
Some general features
Cdc4 has a molecular weight of 86,089 Da, an isoelectric point of 7.14, and consists of 779 amino acids. It resides exclusively in the nucleus because of a single monopartite nuclear localisation sequence (NLS) comprising amino acids 82–85 in the N-terminal domain.
Structure
Cdc4 is one component of the E3 complex SCF (CDC4), which comprises CDC53, SKP1, RBX1, and CDC4.
Its 779 amino acids (in S. cerevisiae) are arranged into one F-box domain (approximately 40 amino acids ("F-box" motif)) and 7 WD repeats.
Cdc4 is a WD-40 repeat F-box protein. Like all members of this family, it contains a conserved dimerization motif called the D domain. In yeast Cdc4, the D domain protomers arrange in a superhelical homodimeric manner. SCF (Cdc4) dimerization hardly affects the affinity for target molecules, but significantly increases ubiquitin conjugation. Cdc4 adopts a suprafacial configuration: the substrate-binding sites lie in the same plane as the catalytic sites, with a separation of 64Å within and 102Å between each SCF monomer. In Cdc4, the substrate binding domain is built on WD40 domains, which use repeats of about 40 amino acids, each forming four anti-parallel beta-strands, to assemble the blades of a so-called beta-propeller. Beta-propellers are a quite frequent form of adaptable surface for interaction between different proteins. This substrate interaction region is located C-terminally. There are three isoforms of Cdc4 in mammals: α, β, and γ. These are produced via alternative splicing of 3 unique 5’ exons to 10 common 3’ exons. This results in proteins that differ only at their N-termini.
Cdc4 protein interacts with Cdc34, a ubiquitin-conjugating enzyme, and Cdc53 in vivo. (There is a Cdc4p/Cdc53p-binding region on Cdc34p.) All three proteins are stable throughout the cell cycle.
Function
Various cellular regulatory mechanisms heavily depend on ubiquitin-dependent degradation. The SCF (Cdc4) complex has a regulatory function in cell cycle progression, signal transduction, and transcription.
In order for the cell cycle to proceed, several inhibitory proteins, as well as cyclins, have to be eliminated at given time points. Cdc4 assists in this by recruiting target molecules via its C-terminal substrate interaction domain (WD40 repeat domain) to the ubiquitination machinery. This causes transfer of ubiquitin molecules to the target, marking it for degradation.
Cdc4 recognizes and binds to phosphorylated target proteins.
Cdc4 can be essential, or nonessential, depending on the organism. For instance, it is essential in S. cerevisiae, while it is non-essential in C. albicans.
It is essential for initiation of DNA replication and separation of spindle pole bodies, hence for the formation of the poles of the mitotic spindle. In budding yeast it is also involved in bud development, fusion of zygotic nuclei (karyogamy) after conjugation, and several aspects of sporulation.
Roughly speaking, in the cell cycle Cdc4 function is required for G1/S and G2/M transition.
Some important interactions in which Cdc4 is involved are:
ubiquitination of the phosphorylated form of the cell cycle kinase inhibitor (CKI) SIC1
degradation of the CKI FAR1 in absence of pheromone; restriction of FAR1 degradation to the nucleus (since Cdc4 is exclusively nuclear)
transcription activation of the HTA1-HTB1 locus
degradation of the phosphorylated form of Cdc6
Onset of S-phase
Swi5 is a transcriptional activator of Sic1, which inhibits S-phase CDKs. Thus, Sic1 protein degradation is necessary to enter S-phase. SCF (Cdc4) complex’s regulatory function concerning S-phase entry comprises not only degradation of Sic1, but also degradation of Swi5.
In order for the substrate adapter unit Cdc4 to bind to Sic1, a minimum of any six of the nine cyclin-dependent kinase sites on Sic1 have to be phosphorylated. In other words, there is a threshold number of phosphorylation sites required to achieve receptor–ligand binding. As has been stated, this "suggests that the ultrasensitivity in the Sic1-Cdc4 system may be driven at least in part by cumulative electrostatic interactions". In general, an ultrasensitive enzyme requires less than an 81-fold increase in stimulus to drive it from 10% to 90% activity; "ultrasensitivity" highlights that the upstroke of the stimulus/response curve is steeper than that obtained for a hyperbolic Michaelis–Menten enzyme (see the sketch below). Thus, ultrasensitivity allows a highly sensitive response: a graded input can be transformed into a sharply thresholded output. The development of B-type cyclin–cyclin-dependent kinase activity, as well as the onset of DNA replication, requires degradation of Sic1 in the late G1 phase of the cell cycle. The WD domain of Cdc4 binds to the phosphorylated form of Sic1. Each bond to a Sic1 phosphate is weak, but together the binding is strong enough to enable Sic1 degradation via the pathway described before. Hence, in this case ultrasensitivity allows precise definition ("fine tuning") of the time point at which destruction of Sic1 occurs, leading to initiation of the next step in the cell cycle (→ DNA replication).
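A small sketch of the 81-fold rule quoted above: for a Hill-type response, activity = Sⁿ/(Kⁿ + Sⁿ), and solving at 10% and 90% activity gives a required stimulus ratio of 81^(1/n); n = 1 recovers the Michaelis–Menten value of 81, while larger n (a crude stand-in for multisite phosphorylation such as Sic1's) gives the sharper, switch-like behaviour. The numbers are generic, not fitted to Sic1 data.

```python
def stimulus_ratio_10_to_90(n):
    """Fold-increase in stimulus that drives a Hill response
    a(S) = S**n / (K**n + S**n) from 10% to 90% activity.
    From a/(1-a) = (S/K)**n, the ratio S90/S10 equals 81**(1/n)."""
    return 81 ** (1 / n)

for n in (1, 2, 4, 6):
    print(f"Hill coefficient n={n}: {stimulus_ratio_10_to_90(n):.1f}-fold")
```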
G2/M transition
It is not yet well understood how Cdc4 triggers the G2/M transition. In general, the second degradation complex involved in cell cycle progression, APC, is responsible for proteolysis at that stage.
However, experimental data suggest that Cdc4 function in the G2/M transition may be linked to the degradation of Pds1 (an anaphase inhibitor). Moreover, CDC4 and CDC20, an activator of APC, interact genetically.
Cdc4 recruits several other substrates than Sic1 to the SCF core complex, including the Cln-Cdc28 inhibitor / cytoskeletal scaffold protein Far1, the transcription factor Gcn4, and the replication protein Cdc6.
In addition to the functions mentioned above, Cdc4 is involved in some other degradation-dependent events in S. cerevisiae, such as the unfolded protein response.
Clinical significance
In mammals, amongst others c-Myc, Src3, Cyclin E, and the Notch intracellular domain are substrates of Cdc4. Due to its involvement in degradation of various cell cycle regulators, as well as several components of signaling pathways (e.g. Notch), Cdc4 is a highly sensitive component of every organism in which it functions.
The cdc4 gene is a haplo-insufficient tumor suppressor gene. Knock-out of this gene in mice leads to an embryonic lethal phenotype. CDC4 mutations occur in a number of cancer types. They are described best in colorectal tumors, and also have been found to be a mutational target in pancreatic cancer.
E3 has an additional function to its primary role in the degradation of certain cell cycle regulators: it is also involved in formation of the neural crest. Hence, Cdc4 is a protein "with separable but complementary functions in control of cell proliferation and differentiation". This evokes the assumption that, beyond regulating cell cycle progression, Cdc4 as a tumor suppressor protein may extend its ability to directly regulate tissue differentiation. However, its concrete role in diseases is still to be elucidated.
See also
ubiquitin ligase
ubiquitin proteasome system
cell cycle
References
Cell cycle
Saccharomyces cerevisiae genes
Proteins | Cell division control protein 4 | [
"Chemistry",
"Biology"
] | 1,924 | [
"Biomolecules by chemical classification",
"Cellular processes",
"Molecular biology",
"Proteins",
"Cell cycle"
] |
26,626,178 | https://en.wikipedia.org/wiki/Failure%20of%20electronic%20components | Electronic components have a wide range of failure modes. These can be classified in various ways, such as by time or cause. Failures can be caused by excess temperature, excess current or voltage, ionizing radiation, mechanical shock, stress or impact, and many other causes. In semiconductor devices, problems in the device package may cause failures due to contamination, mechanical stress of the device, or open or short circuits.
Failures most commonly occur near the beginning and near the end of the lifetime of the parts, resulting in the bathtub curve graph of failure rates. Burn-in procedures are used to detect early failures. In semiconductor devices, parasitic structures, irrelevant for normal operation, become important in the context of failures; they can be both a source of failure and a protection against it.
Applications such as aerospace systems, life support systems, telecommunications, railway signals, and computers use great numbers of individual electronic components. Analysis of the statistical properties of failures can give guidance in designs to establish a given level of reliability. For example, the power-handling ability of a resistor may be greatly derated when applied in high-altitude aircraft to obtain adequate service life.
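A hedged sketch of the reliability arithmetic such designs rely on: under the common constant-failure-rate (exponential) model, component failure rates simply add in a series system, from which the system failure rate, mean time between failures, and mission survival probability follow directly. All part counts and rates below are invented illustrative numbers.

```python
import math

FIT = 1e-9  # one FIT = one failure per 10^9 device-hours

# Hypothetical board: (failure rate in FIT, quantity) for each part type.
parts = [(5, 120), (2, 400), (50, 4), (10, 30)]

system_rate = sum(rate * FIT * qty for rate, qty in parts)  # failures/hour
mtbf_hours = 1 / system_rate

mission_h = 10_000
reliability = math.exp(-system_rate * mission_h)  # P(no failure in mission)

print(f"system failure rate: {system_rate:.3e} /h")
print(f"MTBF: {mtbf_hours:,.0f} h")
print(f"P(survive {mission_h:,} h) = {reliability:.4f}")
```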
A sudden fail-open fault can cause multiple secondary failures if it is fast and the circuit contains an inductance; this causes large voltage spikes, which may exceed 500 volts. A broken metallisation on a chip may thus cause secondary overvoltage damage. Thermal runaway can cause sudden failures including melting, fire or explosions.
Packaging failures
The majority of electronic parts failures are packaging-related. Packaging, as the barrier between electronic parts and the environment, is very susceptible to environmental factors. Thermal expansion produces mechanical stresses that may cause material fatigue, especially when the thermal expansion coefficients of the materials are different. Humidity and aggressive chemicals can cause corrosion of the packaging materials and leads, potentially breaking them and damaging the inside parts, leading to electrical failure. Exceeding the allowed environmental temperature range can cause overstressing of wire bonds, thus tearing the connections loose, cracking the semiconductor dies, or causing packaging cracks. Humidity may also cause cracking, as may mechanical damage or shock.
During encapsulation, bonding wires can be severed, shorted, or touch the chip die, usually at the edge. Dies can crack due to mechanical overstress or thermal shock; defects introduced during processing, like scribing, can develop into fractures. Lead frames may contain excessive material or burrs, causing shorts. Ionic contaminants like alkali metals and halogens can migrate from the packaging materials to the semiconductor dies, causing corrosion or parameter deterioration. Glass-metal seals commonly fail by forming radial cracks that originate at the pin-glass interface and permeate outwards; other causes include a weak oxide layer on the interface and poor formation of a glass meniscus around the pin.
Various gases may be present in the package cavity, either as impurities trapped during manufacturing, from outgassing of the materials used, or from chemical reactions, as when the packaging material gets overheated (the products are often ionic and facilitate corrosion with delayed failure). Helium is often included in the inert atmosphere inside the packaging as a tracer gas to detect leaks during testing. Carbon dioxide and hydrogen may form from organic materials, moisture is outgassed by polymers, and amine-cured epoxies outgas ammonia. Formation of cracks and intermetallic growth in die attachments may lead to the formation of voids and delamination, impairing heat transfer from the chip die to the substrate and heatsink and causing a thermal failure. As some semiconductors like silicon and gallium arsenide are infrared-transparent, infrared microscopy can check the integrity of die bonding and under-die structures.
Red phosphorus, used as a char-promoting flame retardant, facilitates silver migration when present in packaging. It is normally coated with aluminium hydroxide; if the coating is incomplete, the phosphorus particles oxidize to the highly hygroscopic phosphorus pentoxide, which reacts with moisture to form phosphoric acid. This is a corrosive electrolyte that in the presence of electric fields facilitates dissolution and migration of silver, short-circuiting adjacent packaging pins, lead frame leads, tie bars, chip mount structures, and chip pads. The silver bridge may be interrupted by thermal expansion of the package; thus, disappearance of the shorting when the chip is heated and its reappearance after cooling is an indication of this problem. Delamination and thermal expansion may move the chip die relative to the packaging, deforming and possibly shorting or cracking the bonding wires.
Contact failures
Electrical contacts exhibit contact resistance, the magnitude of which is governed by surface structure and the composition of surface layers. Ideally contact resistance should be low and stable; however, weak contact pressure, mechanical vibration, and corrosion can alter contact resistance significantly, leading to resistive heating and circuit failure.
Soldered joints can fail in many ways, such as by electromigration and the formation of brittle intermetallic layers. Some failures show only at extreme joint temperatures, hindering troubleshooting. Thermal expansion mismatch between the printed circuit board material and its packaging strains the part-to-board bonds; while leaded parts can absorb the strain by bending, leadless parts rely on the solder to absorb stresses. Thermal cycling may lead to fatigue cracking of the solder joints, especially with elastic solders; various approaches are used to mitigate such incidents. Loose particles, like bonding wire and weld flash, can form in the device cavity and migrate inside the packaging, causing often intermittent and shock-sensitive shorts. Corrosion may cause buildup of oxides and other nonconductive products on the contact surfaces. When closed, these then show unacceptably high resistance; they may also migrate and cause shorts. Tin whiskers can form on tin-coated metals like the internal side of the packaging; loose whiskers can then cause intermittent short circuits inside the packaging. Cables, in addition to the failure modes described above, may fail by fraying and fire damage.
Printed circuit board failures
Printed circuit boards (PCBs) are vulnerable to environmental influences; for example, the traces are corrosion-prone and may be improperly etched leaving partial shorts, while the vias may be insufficiently plated through or filled with solder. The traces may crack under mechanical loads, often resulting in unreliable PCB operation. Residues of solder flux may facilitate corrosion; those of other materials on PCBs can cause electrical leaks. Polar covalent compounds can attract moisture like antistatic agents, forming a thin layer of conductive moisture between the traces; ionic compounds like chlorides tend to facilitate corrosion. Alkali metal ions may migrate through plastic packaging and influence the functioning of semiconductors. Chlorinated hydrocarbon residues may hydrolyze and release corrosive chlorides; these are problems that occur after years. Polar molecules may dissipate high-frequency energy, causing parasitic dielectric losses.
Above the glass transition temperature of PCBs, the resin matrix softens and becomes susceptible to contaminant diffusion. For example, polyglycols from the solder flux can enter the board and increase its humidity intake, with corresponding deterioration of dielectric and corrosion properties. Multi-layer substrates using ceramics suffer from many of the same problems.
Conductive anodic filaments (CAFs) may grow within the boards along the fibers of the composite material. Metal is introduced to a vulnerable surface typically from plating the vias, then migrates in presence of ions, moisture, and electrical potential; drilling damage and poor glass-resin bonding promotes such failures. The formation of CAFs usually begins by poor glass-resin bonding; a layer of adsorbed moisture then provides a channel through which ions and corrosion products migrate. In presence of chloride ions, the precipitated material is atacamite; its semiconductive properties lead to increased current leakage, deteriorated dielectric strength, and short circuits between traces. Absorbed glycols from flux residues aggravate the problem. The difference in thermal expansion of the fibers and the matrix weakens the bond when the board is soldered; the lead-free solders which require higher soldering temperatures increase the occurrence of CAFs. Besides this, CAFs depend on absorbed humidity; below a certain threshold, they do not occur. Delamination may occur to separate the board layers, cracking the vias and conductors to introduce pathways for corrosive contaminants and migration of conductive species.
Relay and switch failures
Every time the contacts of an electromechanical relay, switch or contactor are opened or closed, there is a certain amount of contact wear. An electric arc occurs between the contact points (electrodes) both during the transition from closed to open (break) or from open to closed (make). The arc caused during the contact break (break arc) is akin to arc welding, as the break arc is typically more energetic and more destructive.
The heat and current of the electrical arc across the contacts creates specific cone & crater formations from metal migration. In addition to the physical contact damage, a coating of carbon and other matter also appears. This degradation drastically limits the overall operating life of a relay or contactor to a range of perhaps 100,000 operations, a level representing 1% or less of the mechanical life expectancy of the same device.
Semiconductor failures
Many failures result in generation of hot electrons. These are observable under an optical microscope, as they generate near-infrared photons detectable by a CCD camera. Latchups can be observed this way. If visible, the location of failure may present clues to the nature of the overstress. Liquid crystal coatings can be used for localization of faults: cholesteric liquid crystals are thermochromic and are used for visualisation of locations of heat production on the chips, while nematic liquid crystals respond to voltage and are used for visualising current leaks through oxide defects and of charge states on the chip surface (particularly logical states). Laser marking of plastic-encapsulated packages may damage the chip if glass spheres in the packaging line up and direct the laser to the chip.
Examples of semiconductor failures relating to semiconductor crystals include:
Nucleation and growth of dislocations. This requires an existing defect in the crystal, as is done by radiation, and is accelerated by heat, high current density and emitted light. With LEDs, gallium arsenide and aluminium gallium arsenide are more susceptible to this than gallium arsenide phosphide and indium phosphide; gallium nitride and indium gallium nitride are insensitive to this defect.
Accumulation of charge carriers trapped in the gate oxide of MOSFETs. This introduces permanent gate biasing, influencing the transistor's threshold voltage; it may be caused by hot carrier injection, ionizing radiation or nominal use. With EEPROM cells, this is the major factor limiting the number of erase-write cycles.
Migration of charge carriers from floating gates. This limits the lifetime of stored data in EEPROM and flash EPROM structures.
Improper passivation. Corrosion is a significant source of delayed failures; semiconductors, metallic interconnects, and passivation glasses are all susceptible. The surface of semiconductors subjected to moisture has an oxide layer; the liberated hydrogen reacts with deeper layers of the material, yielding volatile hydrides.
Parameter failures
Vias are a common source of unwanted series resistance on chips; defective vias show unacceptably high resistance and therefore increase propagation delays. As their resistivity drops with increasing temperature, a chip whose maximum operating frequency degrades as it cools — the opposite of normal behaviour — is an indicator of such a fault. Mousebites are regions where metallization has a decreased width; such defects usually do not show during electrical testing but present a major reliability risk. Increased current density in the mousebite can aggravate electromigration problems; a large degree of voiding is needed to create a temperature-sensitive propagation delay.
Sometimes, circuit tolerances can make erratic behaviour difficult to trace; for example, a weak driver transistor, a higher series resistance and the capacitance of the gate of the subsequent transistor may be within tolerance but can significantly increase signal propagation delay. These can manifest only at specific environmental conditions, high clock speeds, low power supply voltages, and sometimes specific circuit signal states; significant variations can occur on a single die. Overstress-induced damage like ohmic shunts or a reduced transistor output current can increase such delays, leading to erratic behavior. As propagation delays depend heavily on supply voltage, tolerance-bound fluctuations of the latter can trigger such behavior.
Gallium arsenide monolithic microwave integrated circuits can have these failures:
Degradation of IDSS by gate sinking and hydrogen poisoning. This failure is the most common and easiest to detect, and is affected by reduction of the active channel of the transistor in gate sinking and depletion of the donor density in the active channel for hydrogen poisoning.
Degradation in gate leakage current. This occurs at accelerated life tests or high temperatures and is suspected to be caused by surface-state effects.
Degradation in pinch-off voltage. This is a common failure mode for gallium arsenide devices operating at high temperature, and primarily stems from semiconductor-metal interactions and degradation of gate metal structures, with hydrogen being another reason. It can be hindered by a suitable barrier metal between the contacts and gallium arsenide.
Increase in drain-to-source resistance. It is observed in high-temperature devices, and is caused by metal-semiconductor interactions, gate sinking and ohmic contact degradation.
Metallisation failures
Metallisation failures are more common and serious causes of FET transistor degradation than material processes; amorphous materials have no grain boundaries, hindering interdiffusion and corrosion. Examples of such failures include:
Electromigration moving atoms out of active regions, causing dislocations and point defects acting as nonradiative recombination centers producing heat. This may occur with aluminium gates in MESFETs with RF signals, causing erratic drain current; electromigration in this case is called gate sinking. This issue does not occur with gold gates. With structures having aluminium over a refractory metal barrier, electromigration primarily affects aluminium but not the refractory metal, causing the structure's resistance to erratically increase. Displaced aluminium may cause shorts to neighbouring structures; 0.5-4% of copper in the aluminium increases electromigration resistance, the copper accumulating on the alloy grain boundaries and increasing the energy needed to dislodge atoms from them. Other than that, indium tin oxide and silver are subject to electromigration, causing leakage current and (in LEDs) nonradiative recombination along chip edges. In all cases, electromigration can cause changes in dimensions and parameters of the transistor gates and semiconductor junctions.
Mechanical stresses, high currents, and corrosive environments forming of whiskers and short circuits. These effects can occur both within packaging and on circuit boards.
Formation of silicon nodules. Aluminium interconnects may be silicon-doped to saturation during deposition to prevent alloy spikes. During thermal cycling, the silicon atoms may migrate and clump together forming nodules that act as voids, increasing local resistance and lowering device lifetime.
Ohmic contact degradation between metallisation and semiconductor layers. With gallium arsenide, a layer of gold-germanium alloy (sometimes with nickel) is used to achieve low contact resistance; an ohmic contact is formed by diffusion of germanium, forming a thin, highly n-doped region under the metal facilitating the connection, leaving gold deposited over it. Gallium atoms may migrate through this layer and get scavenged by the gold above, creating a defect-rich gallium-depleted zone under the contact; gold and oxygen then migrate oppositely, resulting in increased resistance of the ohmic contact and depletion of effective doping level. Formation of intermetallic compounds also plays a role in this failure mode.
Electrical overstress
Most stress-related semiconductor failures are electrothermal in nature microscopically; locally increased temperatures can lead to immediate failure by melting or vaporising metallisation layers, melting the semiconductor or by changing structures. Diffusion and electromigration tend to be accelerated by high temperatures, shortening the lifetime of the device; damage to junctions not leading to immediate failure may manifest as altered current–voltage characteristics of the junctions. Electrical overstress failures can be classified as thermally-induced, electromigration-related and electric field-related failures; examples of such failures include:
Thermal runaway, where clusters in the substrate cause localised loss of thermal conductivity, leading to damage producing more heat; the most common causes are voids caused by incomplete soldering, electromigration effects and Kirkendall voiding. Clustered distribution of current density over the junction or current filaments lead to current crowding localised hot spots, which may evolve to a thermal runaway.
Reverse bias. Some semiconductor devices are diode junction-based and are nominally rectifiers; however, the reverse-breakdown mode may be at a very low voltage, with a moderate reverse bias voltage causing immediate degradation and vastly accelerated failure. 5 V is a maximum reverse-bias voltage for typical LEDs, with some types having lower figures.
Severely overloaded Zener diodes in reverse bias shorting. A sufficiently high voltage causes avalanche breakdown of the Zener junction; that and a large current being passed through the diode causes extreme localised heating, melting the junction and metallisation and forming a silicon-aluminium alloy that shorts the terminals. This is sometimes intentionally used as a method of hardwiring connections via fuses.
Latchups (when the device is subjected to an over- or undervoltage pulse); a parasitic structure acting as a triggered SCR then may cause an overcurrent-based failure. In ICs, latchups are classified as internal (like transmission line reflections and ground bounces) or external (like signals introduced via I/O pins and cosmic rays); external latchups can be triggered by an electrostatic discharge while internal latchups cannot. Latchups can be triggered by charge carriers injected into chip substrate or another latchup; the JEDEC78 standard tests susceptibility to latchups.
Electrostatic discharge
Electrostatic discharge (ESD) is a subclass of electrical overstress and may cause immediate device failure, permanent parameter shifts and latent damage causing increased degradation rate. It has at least one of three components, localized heat generation, high current density and high electric field gradient; prolonged presence of currents of several amperes transfer energy to the device structure to cause damage. ESD in real circuits causes a damped wave with rapidly alternating polarity, the junctions stressed in the same manner; it has four basic mechanisms:
Oxide breakdown occurring at field strengths above 6–10 MV/cm.
Junction damage manifesting as reverse-bias leakage increases to the point of shorting.
Metallisation and polysilicon burnout, where damage is limited to metal and polysilicon interconnects, thin film resistors and diffused resistors.
Charge injection, where hot carriers generated by avalanche breakdown are injected into the oxide layer.
Catastrophic ESD failure modes include:
Junction burnout, where a conductive path forms through the junction and shorts it
Metallisation burnout, where melting or vaporizing of a part of the metal interconnect interrupts it
Oxide punch-through, formation of a conductive path through the insulating layer between two conductors or semiconductors; the gate oxides are thinnest and therefore most sensitive. The damaged transistor shows a low-ohmic junction between gate and drain terminals.
A parametric failure only shifts the device parameters and may manifest in stress testing; sometimes, the degree of damage can lower over time. Latent ESD failure modes occur in a delayed fashion and include:
Insulator damage by weakening of the insulator structures.
Junction damage by lowering minority carrier lifetimes, increasing forward-bias resistance and increasing reverse-bias leakage.
Metallisation damage by conductor weakening.
Catastrophic failures require the highest discharge voltages, are the easiest to test for and are rarest to occur. Parametric failures occur at intermediate discharge voltages and occur more often, with latent failures the most common. For each parametric failure, there are 4–10 latent ones. Modern VLSI circuits are more ESD-sensitive, with smaller features, lower capacitance and higher voltage-to-charge ratio. Silicon deposition of the conductive layers makes them more conductive, reducing the ballast resistance that has a protective role.
The gate oxide of some MOSFETs can be damaged by 50 volts of potential, the gate isolated from the junction and potential accumulating on it causing extreme stress on the thin dielectric layer; stressed oxide can shatter and fail immediately. The gate oxide does not always fail immediately: its degradation can be accelerated by stress-induced leakage current, with the oxide damage leading to a delayed failure after prolonged operation; on-chip capacitors using oxide or nitride dielectrics are also vulnerable. Smaller structures are more vulnerable because of their lower capacitance, meaning the same amount of charge carriers charges the capacitor to a higher voltage. All thin layers of dielectrics are vulnerable; hence, chips made by processes employing thicker oxide layers are less vulnerable.
Current-induced failures are more common in bipolar junction devices, where Schottky and PN junctions are predominant. The high power of the discharge, above 5 kilowatts for less than a microsecond, can melt and vaporise materials. Thin-film resistors may have their value altered by a discharge path forming across them, or having part of the thin film vaporized; this can be problematic in precision applications where such values are critical.
Newer CMOS output buffers using lightly doped silicide drains are more ESD sensitive; the N-channel driver usually suffers damage in the oxide layer or n+/p well junction. This is caused by current crowding during the snapback of the parasitic NPN transistor. In P/NMOS totem-pole structures, the NMOS transistor is almost always the one damaged. The structure of the junction influences its ESD sensitivity; corners and defects can lead to current crowding, reducing the damage threshold. Forward-biased junctions are less sensitive than reverse-biased ones because the Joule heat of forward-biased junctions is dissipated through a thicker layer of the material, as compared to the narrow depletion region in reverse-biased junction.
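For a sense of the magnitudes involved, the sketch below uses the widely used human-body-model (HBM) discharge network — a 100 pF capacitance discharged through 1.5 kΩ — to estimate peak current, stored energy, and the RC time constant at a few example charge voltages; the voltages are arbitrary illustrations.

```python
C = 100e-12   # HBM capacitance, farads
R = 1.5e3     # HBM series resistance, ohms

for volts in (500, 2_000, 8_000):
    peak_current = volts / R               # amperes at the start of discharge
    energy_uj = 0.5 * C * volts**2 * 1e6   # microjoules stored in C
    tau_ns = R * C * 1e9                   # RC time constant, nanoseconds
    print(f"{volts:>5} V: peak {peak_current:.2f} A, "
          f"energy {energy_uj:.1f} uJ, tau {tau_ns:.0f} ns")
```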
Passive element failures
Resistors
Resistors can fail open or short, alongside their value changing under environmental conditions and outside performance limits. Examples of resistor failures include:
Manufacturing defects causing intermittent problems. For example, improperly crimped caps on carbon or metal resistors can loosen and lose contact, and the varying resistor-to-cap resistance can change the value of the resistor.
Surface-mount resistors delaminating where dissimilar materials join, like between the ceramic substrate and the resistive layer.
Nichrome thin-film resistors in integrated circuits attacked by phosphorus from the passivation glass, corroding them and increasing their resistance.
SMD resistors with silver metallization of contacts suffering open-circuit failure in a sulfur-rich environment, due to buildup of silver sulfide.
Copper dendrites growing from Copper(II) oxide present in some materials (like the layer facilitating adhesion of metallization to a ceramic substrate) and bridging the trimming kerf slot.
Potentiometers and trimmers
Potentiometers and trimmers are three-terminal electromechanical parts, containing a resistive path with an adjustable wiper contact. Along with the failure modes for normal resistors, mechanical wear on the wiper and the resistive layer, corrosion, surface contamination, and mechanical deformations may lead to intermittent path-wiper resistance changes, which are a problem with audio amplifiers. Many types are not perfectly sealed, with contaminants and moisture entering the part; an especially common contaminant is the solder flux. Mechanical deformations (like an impaired wiper-path contact) can occur by housing warpage during soldering or mechanical stress during mounting. Excess stress on leads can cause substrate cracking and open failure when the crack penetrates the resistive path.
Capacitors
Capacitors are characterized by their capacitance, parasitic resistance in series and parallel, breakdown voltage and dissipation factor; both parasitic parameters are often frequency- and voltage-dependent. Structurally, capacitors consist of electrodes separated by a dielectric, connecting leads, and housing; deterioration of any of these may cause parameter shifts or failure. Shorted failures and leakage due to increase of parallel parasitic resistance are the most common failure modes of capacitors, followed by open failures. Some examples of capacitor failures include:
Dielectric breakdown due to overvoltage or aging of the dielectric, occurring when breakdown voltage falls below operating voltage. Some types of capacitors "self-heal", as internal arcing vaporizes parts of the electrodes around the failed spot. Others form a conductive pathway through the dielectric, leading to shorting or partial loss of dielectric resistance.
Electrode materials migrating across the dielectric, forming conductive paths.
Leads separated from the capacitor by rough handling during storage, assembly or operation, leading to an open failure. The failure can occur invisibly inside the packaging but can be detected by measurement.
Increase of dissipation factor due to contamination of capacitor materials, particularly from flux and solvent residues.
Electrolytic capacitors
In addition to the problems listed above, electrolytic capacitors suffer from these failures:
Aluminium versions having their electrolyte dry out, causing a gradual increase in leakage and equivalent series resistance and a loss of capacitance. Power dissipation by high ripple currents and internal resistances causes an increase of the capacitor's internal temperature beyond specifications, accelerating the deterioration rate; such capacitors usually fail short.
Electrolyte contamination (like from moisture) corroding the electrodes, leading to capacitance loss and shorts.
Electrolytes evolving a gas, increasing pressure inside the capacitor housing and sometimes causing an explosion; an example is the capacitor plague.
Tantalum versions being electrically overstressed, permanently degrading the dielectric and sometimes causing open or short failure. Sites that have failed this way are usually visible as a discolored dielectric or as a locally melted anode.
Metal oxide varistors
Metal oxide varistors typically have lower resistance as they heat up; if connected directly across a power bus, for protection against voltage spikes, a varistor with a lowered trigger voltage can slide into catastrophic thermal runaway and sometimes a small explosion or fire. To prevent this, the fault current is typically limited by a thermal fuse, circuit breaker, or other current limiting device.
MEMS failures
Microelectromechanical systems suffer from various types of failures:
Stiction causing moving parts to stick; an external impulse sometimes restores functionality. Non-stick coatings, reduction of contact area, and increased awareness mitigate the problem in contemporary systems.
Particles migrating in the system and blocking their movements. Conductive particles may short out circuits like electrostatic actuators. Wear damages the surfaces and releases debris that can be a source of particle contamination.
Fractures causing loss of mechanical parts.
Material fatigue inducing cracks in moving structures.
Dielectric charging leading to change of functionality and at some point parameter failures.
Recreating failure modes
In order to reduce failures, a precise knowledge of bond strength quality measurement during product design and subsequent manufacture is of vital importance. The best place to start is with the failure mode. This is based on the assumption that there is a particular failure mode, or range of modes, that may occur within a product. It is therefore reasonable to assume that the bond test should replicate the mode, or modes of interest. However, exact replication is not always possible. The test load must be applied to some part of the sample and transferred through the sample to the bond. If this part of the sample is the only option and is weaker than the bond itself, the sample will fail before the bond.
See also
Reliability (semiconductor)
References
Further reading
Herfst, R.W., Steeneken, P.G., Schmitz, J., Time and voltage dependence of dielectric charging in RF MEMS capacitive switches, (2007) Annual Proceedings – Reliability Physics (Symposium), art. no. 4227667, pp. 417–421.
External links
http://www.esda.org - ESD Association
Semiconductor device defects
Engineering failures | Failure of electronic components | [
"Technology",
"Engineering"
] | 5,973 | [
"Systems engineering",
"Reliability engineering",
"Technological failures",
"Semiconductor device defects",
"Engineering failures",
"Civil engineering"
] |
40,644,951 | https://en.wikipedia.org/wiki/Bishop%E2%80%93Phelps%20theorem | In mathematics, the Bishop–Phelps theorem is a theorem about the topological properties of Banach spaces named after Errett Bishop and Robert Phelps, who published its proof in 1961.
Statement
The Bishop–Phelps theorem states that if C is a closed, bounded, convex subset of a real Banach space X, then the continuous linear functionals that attain their supremum on C are norm-dense in the dual space X*; in particular, the norm-attaining functionals are dense in X*. Importantly, this theorem fails for complex Banach spaces.
However, for the special case where C is the closed unit ball, the theorem does hold for complex Banach spaces.
See also
References
Banach spaces
Theorems in functional analysis | Bishop–Phelps theorem | [
"Mathematics"
] | 84 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Theorems in functional analysis",
"Mathematical analysis stubs"
] |
40,645,022 | https://en.wikipedia.org/wiki/GlnALG%20operon | The glnALG operon is an operon that regulates the nitrogen content of a cell. It codes for the structural gene glnA and the two regulatory genes glnL and glnG. glnA encodes glutamine synthetase, an enzyme which catalyzes the conversion of glutamate and ammonia to glutamine, thereby controlling the nitrogen level in the cell. glnG encodes NRI, which regulates the expression of the glnALG operon at three promoters: glnAp1 and glnAp2 (located upstream of glnA) and glnLp (in the intercistronic glnA–glnL region). glnL encodes NRII, which regulates the activity of NRI.
No significant homology is found in eukaryotes.
Structure
The glnALG operon contains three genes:
glnA: encodes glutamine synthetase, an enzyme which catalyzes the conversion of glutamate and ammonia to glutamine.
glnL: encodes NRII, which regulates the activity of NRI.
glnG: encodes NRI, which regulates the expression of the glnALG operon at three promoters, which are glnAp1, glnAp2 and glnLp.
Physiological significance of glnALG
The glnALG operon, along with glnD and glnF and their gene products, plays an extremely important role in regulating the nitrogen level inside the cell. It also plays a role in the ammonium (methylammonium) transport system (Amt). Hence it increases the ammonia content of the cell when grown on glutamine or glutamate.
Hence, along with histidase, the glnALG operon maintains homeostasis within the cell.
Mechanism of Regulation
The glnALG operon is regulated by an intricate network of repressors and activators. Along with NRI and NRII, there are gene products of glnF and glnD which play a key role in this network.
The expression of the glnALG operon is regulated by NRI at three promoters: glnAp1, glnAp2 and glnLp. The initiation of transcription at glnAp1 is stimulated exclusively under carbon starvation conditions and in stationary phase, during which cAMP accumulates in high concentration in the cell. The binding of cAMP to the catabolite activator protein (CAP) causes CAP to bind to a specific DNA site in glnAp1, and glnAp1 is repressed by NRI. Initiation of transcription at glnAp2 requires the activated form of NRI, i.e. NRI–P (phosphorylated NRI), as well as the glnF gene product, σ54, and it is regulated by NRII. NRII, in the presence of ATP, catalyzes the transfer of the γ-phosphate of ATP to NRI. In the presence of PII, which is encoded by glnB, NRII catalyzes the dephosphorylation of NRI–P.
The nitrogen content in the cell is directly proportional to the ratio of the concentration of glutamine to the concentration of 2-ketoglutarate. When the nitrogen content is low, the product of the glnD gene, uridylyl transferase, catalyzes the conversion of PII to PII-UMP, hampering PII's ability to dephosphorylate NRI–P. Uridylyl transferase catalyzes this reaction because the high concentration of 2-ketoglutarate allosterically activates it. In the case of high nitrogen, there is an excess of NRI, which represses transcription from the promoters glnAp1, glnAp2 and glnLp, which in turn represses the synthesis of glutamine synthetase.
References
Gene expression
Operons | GlnALG operon | [
"Chemistry",
"Biology"
] | 833 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Operons"
] |
40,646,055 | https://en.wikipedia.org/wiki/Alignment-free%20sequence%20analysis | In bioinformatics, alignment-free sequence analysis approaches to molecular sequence and structure data provide alternatives to alignment-based approaches.
The emergence and need for the analysis of different types of data generated through biological research has given rise to the field of bioinformatics. Molecular sequence and structure data of DNA, RNA, and proteins, gene expression profiles or microarray data, and metabolic pathway data are some of the major types of data being analysed in bioinformatics. Among them, sequence data is increasing at an exponential rate due to the advent of next-generation sequencing technologies. Since the origin of bioinformatics, sequence analysis has remained the major area of research, with a wide range of applications in database searching, genome annotation, comparative genomics, molecular phylogeny and gene prediction. The pioneering approaches for sequence analysis were based on sequence alignment, either global or local, pairwise or multiple. Alignment-based approaches generally give excellent results when the sequences under study are closely related and can be reliably aligned, but when the sequences are divergent, a reliable alignment cannot be obtained and hence the applications of sequence alignment are limited. Another limitation of alignment-based approaches is their computational complexity; they are time-consuming and thus of limited use with large-scale sequence data. The advent of next-generation sequencing technologies has resulted in the generation of voluminous sequencing data. The size of this sequence data poses challenges on alignment-based algorithms in its assembly, annotation and comparative studies.
Alignment-free methods
Alignment-free methods can broadly be classified into six categories: a) methods based on k-mer/word frequency, b) methods based on the length of common substrings, c) methods based on the number of (spaced) word matches, d) methods based on micro-alignments, e) methods based on information theory, and f) methods based on graphical representation. Alignment-free approaches have been used in sequence similarity searches, clustering and classification of sequences, and more recently in phylogenetics.
Such molecular phylogeny analyses employing alignment-free approaches are said to be part of next-generation phylogenomics. A number of review articles provide in-depth review of alignment-free methods in sequence analysis.
The AFproject is an international collaboration to benchmark and compare software tools for alignment-free sequence comparison.
Methods based on k-mer/word frequency
The popular methods based on k-mer/word frequencies include the feature frequency profile (FFP), composition vector (CV), return time distribution (RTD), frequency chaos game representation (FCGR), and Spaced Words.
Feature frequency profile (FFP)
The FFP-based method starts by calculating the count of each possible k-mer (4^k possible k-mers for a nucleotide sequence, 20^k for a protein sequence) in each sequence. Each k-mer count is then normalized by dividing it by the total count of all k-mers in that sequence, converting each sequence into its feature frequency profile. The pairwise distance between two sequences is then calculated as the Jensen–Shannon (JS) divergence between their respective FFPs. The distance matrix thus obtained can be used to construct a phylogenetic tree using clustering algorithms such as neighbor-joining or UPGMA.
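For illustration, a minimal Python sketch of the two steps just described (building normalized k-mer profiles and comparing them with the Jensen–Shannon divergence) might look as follows; the function names are illustrative, not those of the published FFP software:

```python
from collections import Counter
from math import log2

def ffp(seq, k):
    """Normalized k-mer frequency profile (FFP) of a sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two frequency profiles."""
    keys = set(p) | set(q)
    m = {x: 0.5 * (p.get(x, 0.0) + q.get(x, 0.0)) for x in keys}
    def kl(a):
        # Kullback-Leibler divergence against the mixture m, which is
        # nonzero wherever a is nonzero, so the logs are well defined.
        return sum(pa * log2(pa / m[x]) for x, pa in a.items() if pa > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

d = js_divergence(ffp("ACGTACGTGACG", 3), ffp("ACGTTCGTGACG", 3))
```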
Composition vector (CV)
In this method the frequency of appearance of each possible k-mer in a given sequence is calculated. The next characteristic step of this method is the subtraction of the random background of these frequencies using a Markov model, to reduce the influence of random neutral mutations and highlight the role of selective evolution. The normalized frequencies are put in a fixed order to form the composition vector (CV) of a given sequence. A cosine distance function is then used to compute the pairwise distance between the CVs of sequences. The distance matrix thus obtained can be used to construct a phylogenetic tree using clustering algorithms such as neighbor-joining or UPGMA. This method can be extended, by resorting to efficient pattern matching algorithms, to include in the computation of the composition vectors: (i) all k-mers for any value of k, (ii) all substrings of any length up to an arbitrarily set maximum k value, (iii) all maximal substrings, where a substring is maximal if extending it by any character would cause a decrease in its occurrence count.
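A minimal sketch of the cosine-distance step, assuming the composition vectors list k-mer frequencies in the same fixed order and omitting the Markov background subtraction:

```python
from math import sqrt

def cosine_distance(u, v):
    """1 - cosine similarity of two equally ordered composition vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm if norm else 1.0
```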
Return time distribution (RTD)
The RTD-based method does not calculate the count of k-mers in sequences; instead it computes the time required for the reappearance of k-mers. The time refers to the number of residues between successive appearances of a particular k-mer. Thus the occurrence of each k-mer in a sequence is calculated in the form of an RTD, which is then summarised using two statistical parameters, mean (μ) and standard deviation (σ). Thus each sequence is represented in the form of a numeric vector of size 2⋅4^k containing μ and σ of the 4^k RTDs. The pairwise distance between sequences is calculated using the Euclidean distance measure. The distance matrix thus obtained can be used to construct a phylogenetic tree using clustering algorithms such as neighbor-joining or UPGMA. A recent approach, Pattern Extraction through Entropy Retrieval (PEER), directly detects the k-mer length and summarises the occurrence intervals using entropy.
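A minimal sketch of the RTD computation described above might look as follows (illustrative only; absent k-mers would contribute (0, 0) entries to the full 2⋅4^k vector):

```python
from collections import defaultdict
from statistics import mean, pstdev

def rtd_summary(seq, k):
    """Mean and standard deviation of the return times of each k-mer."""
    positions = defaultdict(list)
    for i in range(len(seq) - k + 1):
        positions[seq[i:i + k]].append(i)
    summary = {}
    for kmer, pos in positions.items():
        gaps = [b - a for a, b in zip(pos, pos[1:])]  # return times
        # A k-mer seen only once has no return time; use (0, 0) for it.
        summary[kmer] = (mean(gaps), pstdev(gaps)) if gaps else (0.0, 0.0)
    return summary
```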
Frequency chaos game representation (FCGR)
The FCGR methods have evolved from the chaos game representation (CGR) technique, which provides a scale-independent representation of genomic sequences. A CGR can be divided by grid lines, where each grid square denotes the occurrence of oligonucleotides of a specific length in the sequence. Such a representation of a CGR is termed a Frequency Chaos Game Representation (FCGR). This leads to the representation of each sequence as an FCGR. The pairwise distance between the FCGRs of sequences can be calculated using the Pearson distance, the Hamming distance or the Euclidean distance.
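The following sketch illustrates one common way to compute an FCGR matrix by iterating the chaos game and binning the resulting points; the corner assignment is an assumed convention, which varies between authors:

```python
import numpy as np

# Assumed corner assignment for the unit square; conventions differ.
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def fcgr(seq, k):
    """2^k x 2^k matrix of k-mer frequencies derived from the CGR points.

    Assumes a sequence over the ACGT alphabet with len(seq) >= k.
    """
    n = 2 ** k
    grid = np.zeros((n, n))
    x, y = 0.5, 0.5
    for i, base in enumerate(seq):
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0  # move halfway to the corner
        if i >= k - 1:  # the current point now encodes the last k bases
            grid[int(y * n), int(x * n)] += 1
    return grid / grid.sum()
```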
Spaced-word frequencies
While most alignment-free algorithms compare the word composition of sequences, Spaced Words uses a pattern of match and don't-care positions. The occurrence of a spaced word in a sequence is then defined by the characters at the match positions only, while the characters at the don't-care positions are ignored. Instead of comparing the frequencies of contiguous words in the input sequences, this approach compares the frequencies of the spaced words according to the pre-defined pattern. Note that the pre-defined pattern can be selected by analysing the variance of the number of matches, the probability of the first occurrence under several models, or the Pearson correlation coefficient between the expected word frequency and the true alignment distance.
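A minimal sketch of spaced-word extraction under a binary pattern, where '1' marks a match position and '0' a don't-care position (the pattern shown is arbitrary):

```python
from collections import Counter

def spaced_word_counts(seq, pattern="1101"):
    """Count spaced words under a binary match/don't-care pattern."""
    span = len(pattern)
    words = Counter()
    for i in range(len(seq) - span + 1):
        window = seq[i:i + span]
        # Keep only the characters at the match ('1') positions.
        words["".join(c for c, p in zip(window, pattern) if p == "1")] += 1
    return words
```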
Methods based on length of common substrings
The methods in this category employ the similarities and differences of substrings in a pair of sequences. These algorithms were mostly used for string processing in computer science.
Average common substring (ACS)
In this approach, for a chosen pair of sequences $A$ and $B$ of lengths $n$ and $m$ respectively, the longest substring starting at each position $i$ of sequence $A$ that exactly matches a substring of sequence $B$ at any position is identified; let $\ell_i$ denote its length. All these lengths are averaged to derive the measure $L(A,B) = \frac{1}{n}\sum_{i=1}^{n} \ell_i$. Intuitively, the larger $L(A,B)$, the more similar the two sequences are. To account for differences in the lengths of the sequences, $L(A,B)$ is normalized [i.e. $L(A,B)/\log m$]. This gives the similarity measure between the sequences.
In order to derive a distance measure, the inverse of the similarity measure is taken and a correction term is subtracted from it to ensure that $d(A,A)$ will be zero. Thus $d(A,B) = \frac{\log m}{L(A,B)} - \frac{\log n}{L(A,A)}$.
This measure is not symmetric, so one has to compute $d_{\mathrm{ACS}}(A,B) = \frac{d(A,B) + d(B,A)}{2}$, which gives the final ACS measure between the two strings ($A$ and $B$). The substring search can be efficiently performed by using suffix trees.
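For illustration, a deliberately naive Python version of the similarity $L(A,B)$ is shown below; as noted above, practical implementations use suffix trees instead of this quadratic search:

```python
def acs_similarity(a, b):
    """L(A,B): average length of the longest substring of `a` starting at
    each position that also occurs somewhere in `b` (naive search)."""
    n = len(a)
    total = 0
    for i in range(n):
        length = 0
        # Extend the match from position i while the prefix occurs in b.
        while i + length < n and a[i:i + length + 1] in b:
            length += 1
        total += length
    return total / n
```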
k-mismatch average common substring approach (kmacs)
This approach is a generalization of the ACS approach. To define the distance between two DNA or protein sequences, kmacs estimates for each position i of the first sequence the longest substring starting at i and matching a substring of the second sequence with up to k mismatches. It defines the average of these values as a measure of similarity between the sequences and turns this into a symmetric distance measure. Kmacs does not compute exact k-mismatch substrings, since this would be computationally too costly, but approximates such substrings.
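The following naive sketch conveys the k-mismatch matching idea; as noted above, the actual kmacs program approximates these matches far more efficiently:

```python
def longest_kmismatch_match(a, i, b, k):
    """Length of the longest substring of `a` starting at position i that
    matches a substring of `b` with at most k mismatches (naive search)."""
    best = 0
    for j in range(len(b)):
        mismatches, length = 0, 0
        while i + length < len(a) and j + length < len(b):
            if a[i + length] != b[j + length]:
                mismatches += 1
                if mismatches > k:
                    break
            length += 1
        best = max(best, length)
    return best
```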
Mutation distances (Kr)
This approach is closely related to ACS; it calculates the number of substitutions per site between two DNA sequences using the shortest absent substring (termed a shustring).
Length distribution of k-mismatch common substrings
This approach uses the program kmacs to calculate longest common substrings with up to k mismatches for a pair of DNA sequences. The phylogenetic distance between the sequences can then be estimated from a local maximum in the length distribution of the k-mismatch common substrings.
Methods based on the number of (spaced) word matches
D2S and D2*
These approaches are variants of the D2 statistic, which counts the number of k-mer matches between two sequences. They improve on the simple D2 statistic by taking the background distributions of the compared sequences into account.
MASH
This is an extremely fast method that uses the MinHash bottom sketch strategy for estimating the Jaccard index of the multi-sets of k-mers of two input sequences. That is, it estimates the ratio of k-mer matches to the total number of k-mers of the sequences. This can be used, in turn, to estimate the evolutionary distance between the compared sequences, measured as the number of substitutions per sequence position since the sequences evolved from their last common ancestor.
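The bottom-sketch strategy can be illustrated with the following minimal Python sketch; the parameters and hashing choices here are illustrative, not those of the MASH tool:

```python
import hashlib

def bottom_sketch(seq, k=12, s=64):
    """Bottom-s MinHash sketch of the k-mer set of a sequence."""
    hashes = {
        int.from_bytes(hashlib.sha1(seq[i:i + k].encode()).digest()[:8], "big")
        for i in range(len(seq) - k + 1)
    }
    return sorted(hashes)[:s]  # the s smallest hash values

def jaccard_estimate(s1, s2, s=64):
    """Jaccard estimate from two bottom sketches, as in the MASH approach."""
    merged = sorted(set(s1) | set(s2))[:s]  # bottom sketch of the union
    shared = len(set(merged) & set(s1) & set(s2))
    return shared / len(merged)
```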
Slope-Tree
This approach calculates a distance value between two protein sequences based on the decay of the number of k-mer matches as k increases.
Slope-SpaM
This method calculates the number of k-mer or spaced-word matches (SpaM) for different values of the word length k or, respectively, of the number of match positions in the underlying pattern. The slope of an affine-linear function that depends on k is calculated to estimate the Jukes–Cantor distance between the input sequences.
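The Jukes–Cantor distance referred to here is the standard correction converting an observed fraction $p$ of mismatched positions into an estimated number of substitutions per position:

```latex
d_{\mathrm{JC}} = -\frac{3}{4}\,\ln\!\left(1 - \frac{4p}{3}\right)
```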
Skmer
Skmer calculates distances between species from unassembled sequencing reads. Similar to MASH, it uses the Jaccard index on the sets of k-mers from the input sequences. In contrast to MASH, the program is still accurate for low sequencing coverage, so it can be used for genome skimming.
Methods based on micro-alignments
Strictly speaking, these methods are not alignment-free. They use simple gap-free micro-alignments in which sequences are required to match at certain pre-defined positions. The nucleotides aligned at the remaining positions of the micro-alignments, where mismatches are allowed, are then used for phylogeny inference.
Co-phylog
This method searches for so-called structures, defined as pairs of k-mer matches between two DNA sequences that are one position apart in both sequences. The two k-mer matches are called the context; the position between them is called the object. Co-phylog then defines the distance between two sequences as the fraction of such structures for which the two nucleotides in the object differ. The approach can be applied to unassembled sequencing reads.
andi
andi estimates phylogenetic distances between genomic sequences based on ungapped local alignments that are flanked by maximal exact word matches. Such word matches can be efficiently found using suffix arrays. The gap-free alignments between the exact word matches are then used to estimate phylogenetic distances between genome sequences. The resulting distance estimates are accurate for up to around 0.6 substitutions per position.
Filtered Spaced-Word Matches (FSWM)
FSWM uses a pre-defined binary pattern P representing so-called match positions and don't-care positions. For a pair of input DNA sequences, it then searches for spaced-word matches w.r.t. P, i.e. for local gap-free alignments with matching nucleotides at the match positions of P and possible mismatches at the don't-care positions. Spurious low-scoring spaced-word matches are discarded; evolutionary distances between the input sequences are estimated based on the nucleotides aligned to each other at the don't-care positions of the remaining, homologous spaced-word matches. FSWM has been adapted to estimate distances based on unassembled NGS reads; this version of the program is called Read-SpaM.
Prot-SpaM
Prot-SpaM (Proteome-based Spaced-word Matches) is an implementation of the FSWM algorithm for partial or whole proteome sequences.
Multi-SpaM
Multi-SpaM (Multiple Spaced-word Matches) is an approach to genome-based phylogeny reconstruction that extends the FSWM idea to multiple sequence comparison. Given a binary pattern P of match positions and don't-care positions, the program searches for P-blocks, i.e. local gap-free four-way alignments with matching nucleotides at the match positions of P and possible mismatches at the don't-care positions. Such four-way alignments are randomly sampled from a set of input genome sequences. For each P-block, an unrooted tree topology is calculated using RAxML. The program Quartet MaxCut is then used to calculate a supertree from these trees.
Methods based on information theory
Information theory has provided successful methods for alignment-free sequence analysis and comparison. Existing applications of information theory range from global and local characterization of DNA, RNA and proteins and estimation of genome entropy to motif and region classification. It also holds promise in gene mapping, next-generation sequencing analysis and metagenomics.
Base–base correlation (BBC)
Base–base correlation (BBC) converts the genome sequence into a unique 16-dimensional numeric vector using the following equation: $T_{ij}(K) = \sum_{\ell=1}^{K} P_{ij}(\ell)\,\log_2\!\left(\frac{P_{ij}(\ell)}{P_i P_j}\right)$.
Here $P_i$ and $P_j$ denote the probabilities of bases $i$ and $j$ in the genome, $P_{ij}(\ell)$ indicates the probability of bases $i$ and $j$ occurring at distance $\ell$ in the genome, and the parameter $K$ indicates the maximum distance between the bases $i$ and $j$. The variation in the values of the 16 parameters reflects variation in the genome content and length.
Information correlation and partial information correlation (IC-PIC)
The IC-PIC (information correlation and partial information correlation) based method employs the base correlation property of DNA sequences. IC and PIC are calculated using the following formulas,
The final vector is obtained as follows:
which defines the range of distance between bases.
The pairwise distance between sequences is calculated using the Euclidean distance measure. The distance matrix thus obtained can be used to construct a phylogenetic tree using clustering algorithms such as neighbor-joining or UPGMA.
Compression
Compression-based methods rely on effective approximations of Kolmogorov complexity, for example Lempel-Ziv complexity. In general, compression-based methods use the mutual information between the sequences. This is expressed in conditional Kolmogorov complexity, that is, the length of the shortest self-delimiting program required to generate one string given prior knowledge of the other string. This measure is related to measuring k-words in a sequence, as they can easily be used to generate the sequence. Compression is sometimes a computationally intensive approach. The theoretical basis for the Kolmogorov complexity approach was laid by Bennett, Gacs, Li, Vitanyi, and Zurek (1998), who proposed the information distance. Since Kolmogorov complexity is incomputable, it is approximated by compression algorithms; the better they compress, the better they approximate it. Li, Badger, Chen, Kwong, Kearney, and Zhang (2001) used a non-optimal but normalized form of this approach; an optimal normalized form was introduced by Li, Chen, Li, Ma, and Vitanyi (2003) and was treated more extensively and proven by Cilibrasi and Vitanyi (2005).
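The normalized compression distance (NCD) that emerged from this line of work can be sketched in a few lines, here using zlib as a stand-in compressor; published studies often use stronger compressors:

```python
import zlib

def c(data: bytes) -> int:
    """Compressed length, an effective approximation of Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

print(ncd(b"ACGTACGTACGT" * 10, b"ACGTTCGTGACG" * 10))
```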
Otu and Sayood (2003) used the Lempel-Ziv complexity method to construct five different distance measures for phylogenetic tree construction.
Context modeling compression
In context modeling, the next-symbol predictions of one or more statistical models are combined or compete to yield a prediction based on events recorded in the past. The algorithmic information content derived from each symbol prediction can be used to compute algorithmic information profiles in a time proportional to the length of the sequence. The process has been applied to DNA sequence analysis.
Methods based on graphical representation
Iterated maps
The use of iterated maps for sequence analysis was first introduced by HJ Jeffrey in 1990, when he proposed applying the chaos game to map genomic sequences into a unit square. That report coined the procedure chaos game representation (CGR). However, only 3 years later this approach was dismissed as a projection of a Markov transition table by N Goldman. This objection was overruled by the end of that decade when the opposite was found to be the case: CGR bijectively maps Markov transitions into a fractal, order-free (degree-free) representation. The realization that iterated maps provide a bijective map between the symbolic space and numeric space led to the identification of a variety of alignment-free approaches to sequence comparison and characterization. These developments were reviewed in late 2013 by JS Almeida. A number of web apps, such as https://github.com/usm/usm.github.com/wiki, are available to demonstrate how to encode and compare arbitrary symbolic sequences in a manner that takes full advantage of modern MapReduce distribution developed for cloud computing.
Comparison of alignment based and alignment-free methods
Applications of alignment-free methods
Genomic rearrangements
Molecular phylogenetics
Metagenomics
Next generation sequence data analysis
Epigenomics
Barcoding of species
Population genetics
Horizontal gene transfer
Sero/genotyping of viruses
Allergenicity prediction
SNP discovery
Recombination detection
Viral Classification
Archaea Taxonomic Identification
Taxonomic Classification
Temporal Analysis
Low-complexity Regions Identification
List of web servers/software for alignment-free methods
See also
Sequence analysis
Multiple sequence alignment
Phylogenomics
Bioinformatics
Metagenomics
Next-generation sequencing
Population genetics
SNPs
Recombination detection program
Genome skimming
References
Bioinformatics
Computational biology | Alignment-free sequence analysis | [
"Engineering",
"Biology"
] | 3,828 | [
"Bioinformatics",
"Biological engineering",
"Computational biology"
] |
40,646,963 | https://en.wikipedia.org/wiki/Photonic%20molecule | Photonic molecules are a form of matter in which photons bind together to form "molecules". They were first predicted in 2007. Photonic molecules are formed when individual (massless) photons "interact with each other so strongly that they act as though they have mass". In an alternative definition (which is not equivalent), photons confined to two or more coupled optical cavities also reproduce the physics of interacting atomic energy levels, and have been termed as photonic molecules.
Researchers drew analogies between the phenomenon and the fictional "lightsaber" from Star Wars.
Construction
Gaseous rubidium atoms were pumped into a vacuum chamber. The cloud was cooled using lasers to just a few degrees above absolute zero. Using weak laser pulses, small numbers of photons were fired into the cloud.
As the photons entered the cloud, their energy excited atoms along their path, causing them to lose speed. Inside the cloud medium, the photons dispersively coupled to strongly interacting atoms in highly excited Rydberg states. This caused the photons to behave as massive particles with strong mutual attraction (photon molecules). Eventually the photons exited the cloud together as normal photons (often entangled in pairs).
The effect is caused by a so-called Rydberg blockade, which, in the presence of one excited atom, prevents nearby atoms from being excited to the same degree. In this case, as two photons enter the atomic cloud, the first excites an atom, annihilating itself in the interaction, but the transmitted energy must move forward inside the excited atom before the second photon can excite nearby atoms. In effect the two photons push and pull each other through the cloud as their energy is passed from one atom to the next, forcing them to interact. This photonic interaction is mediated by the electromagnetic interaction between photons and atoms.
Possible applications
The interaction of the photons suggests that the effect could be employed to build a system that can preserve quantum information, and process it using quantum logic operations.
The system could also be useful in classical computing, given the much-lower power required to manipulate photons than electrons.
It may be possible to arrange the photonic molecules in such a way within the medium that they form larger two-dimensional structures (similar to drawings).
Interacting optical cavities as photonic molecules
The term photonic molecule has been also used since 1998 for an unrelated phenomenon involving electromagnetically interacting optical microcavities. The properties of quantized confined photon states in optical micro- and nanocavities are very similar to those of confined electron states in atoms. Owing to this similarity, optical microcavities can be termed 'photonic atoms'. Taking this analogy even further, a cluster of several mutually-coupled photonic atoms forms a photonic molecule. When individual photonic atoms are brought into close proximity, their optical modes interact and give rise to a spectrum of hybridized super-modes of photonic molecules. This is very similar to what happens when two isolated systems are coupled, like two hydrogen atomic orbitals coming together to form the bonding and antibonding orbitals of the hydrogen molecule, which are hybridized super-modes of the total coupled system.
"A micrometer-sized piece of semiconductor can trap photons inside it in such a way that they act like electrons in an atom. Now the 21 September PRL describes a way to link two of these "photonic atoms" together. The result of such a close relationship is a "photonic molecule," whose optical modes bear a strong resemblance to the electronic states of a diatomic molecule like hydrogen." "Photonic molecules, named by analogy with chemical molecules, are clusters of closely located electromagnetically interacting microcavities or "photonic atoms"." "Optically coupled microcavities have emerged as photonic structures with promising properties for investigation of fundamental science as well as for applications."
The first photonic realization of the two-level system of a photonic molecule was by Spreeuw et al., who used optical fibers to realize a ring resonator, although they did not use the term "photonic molecule". The two modes forming the molecule could then be the polarization modes of the ring or the clockwise and counterclockwise modes of the ring. This was followed by the demonstration of a lithographically fabricated photonic molecule, inspired by an analogy with a simple diatomic molecule. However, other nature-inspired PM structures (such as ‘photonic benzene’) have been proposed and shown to support confined optical modes closely analogous to the ground-state molecular orbitals of their chemical counterparts.
Photonic molecules offer advantages over isolated photonic atoms in a variety of applications, including bio(chemical) sensing, cavity optomechanics, and microlasers, Photonic molecules can also be used as quantum simulators of many-body physics and as building blocks of future optical quantum information processing networks.
In complete analogy, clusters of metal nanoparticles – which support confined surface plasmon states – have been termed ‘plasmonic molecules’.
Finally, hybrid photonic-plasmonic (or opto-plasmonic) and elastic molecules have also been proposed and demonstrated.
See also
Luminiferous aether
Photoluminescence
References
Atomic physics
Particle physics | Photonic molecule | [
"Physics",
"Chemistry"
] | 1,093 | [
"Quantum mechanics",
"Atomic physics",
" molecular",
"Particle physics",
"Atomic",
" and optical physics"
] |
40,650,445 | https://en.wikipedia.org/wiki/Copenhagen%20Amber%20Museum | The Copenhagen Amber Museum (Danish: Københavns Ravmuseum) is a museum on Kongens Nytorv in central Copenhagen, Denmark. The museum is owned by House of Amber. The museum holds an extensive collection of amber antiques and artifacts, including a wide array of entombed insects from prehistoric times. The collection includes one of the largest pieces of amber in the world.
Kanneworff House
The museum is located in Kanneworff House (Kanneworffs Hus), one of Copenhagen’s oldest houses, situated on the square Kongens Nytorv right at the entrance to Nyhavn. Kanneworff House was built in 1606, even before Kongens Nytorv was founded and the channel of Nyhavn was dug. Its current appearance is largely due to an adaptation in the 1780s which added an extra floor and the Mansard roof. The three-story building consists of three bays on Bredgade, four bays on Kongens Nytorv and two bays on Store Strandstræde. Another adaptation in 1904 moved the entrance to Bredgade. Through the years the house has been inhabited by all kinds of people, from barbers, tobacco spinners, carpenters and grocers to the lackey of a noble count. In 1836 the wool and cloth grocer Lars Kanneworff bought the house, and for the next century it housed one of Copenhagen’s tailor establishments.
Museum collection
One of the main attractions of the museum is the collection of more than 100 pieces of amber with inclusions of insects and plants. Magnifying glasses enable visitors to observe the more than 30 million-year-old insects and plants closely. The Copenhagen Amber Museum also presents its visitors with the world's largest piece of amber, which weighs 47.5 kg.
Denmark’s biggest amber find in modern times can be seen in the Copenhagen Amber Museum. In June 2010, a Danish fisherman caught one of the largest pieces of amber ever found in Denmark, pulling it up in his net on a fishing trip far out to sea. The amber rock weighed 4,125 g and was the largest piece of amber found in Denmark since 1767.
References
External links
Københavns Ravmuseum website
House of Amber website
Museums in Copenhagen
Amber | Copenhagen Amber Museum | [
"Physics"
] | 512 | [
"Amorphous solids",
"Unsolved problems in physics",
"Amber"
] |
40,651,129 | https://en.wikipedia.org/wiki/Cycleanine | Cycleanine is a selective vascular calcium antagonist isolated from Stephania.
External links
Calcium antagonist properties of the bisbenzylisoquinoline alkaloid cycleanine
Calcium channel blockers
Alkaloids found in plants | Cycleanine | [
"Chemistry"
] | 47 | [
"Organic chemistry stubs"
] |
40,651,215 | https://en.wikipedia.org/wiki/Olivacine | Olivacine is an antimalarial alkaloid.
External links
Antimalarial agents
Indole alkaloids
Isoquinoline alkaloids
Carbazoles | Olivacine | [
"Chemistry"
] | 38 | [
"Isoquinoline alkaloids",
"Alkaloids by chemical classification",
"Indole alkaloids"
] |
40,652,472 | https://en.wikipedia.org/wiki/Advisory%20Group%20on%20Greenhouse%20Gases | The Advisory Group on Greenhouse Gases, created in 1986, was an advisory body for the review of studies into the greenhouse effect. The group was created by the International Council of Scientific Unions, the United Nations Environment Programme, and the World Meteorological Organization to follow up on the recommendations of the International Conference on the Assessment of the Role of Carbon Dioxide and of Other Greenhouse Gases in Climate Variations and Associated Impacts, held at Villach, Austria, in October 1985.
The seven-member panel included Swedish meteorologist Bert Bolin and Canadian climatologist Kenneth Hare.
The group held its last meeting in 1990. It was gradually replaced by the Intergovernmental Panel on Climate Change.
References
Greenhouse gases
Organizations established in 1986 | Advisory Group on Greenhouse Gases | [
"Chemistry",
"Environmental_science"
] | 144 | [
"Greenhouse gases",
"Environmental chemistry"
] |
46,689,173 | https://en.wikipedia.org/wiki/FCA%20Global%20Medium%20Engine | The Global Medium Engine (GME for short) is a family of engines created by the powertrain division of Alfa Romeo and in production since 2016.
The GME family is composed of two new series of engines: one created by Alfa Romeo (project code name Giorgio) for the Alfa Romeo Giulia and Stelvio, and the second (project code name Hurricane) by FCA US for American vehicles made by Chrysler, Dodge, and Jeep. Both are produced in Termoli, Italy at the Termoli Powertrain Plant.
The first vehicle to use the GME T4 engine is the 2016 Alfa Romeo Giulia introduced in April 2016, followed by the Alfa Romeo Stelvio. The first American Hurricane was adopted by the new Jeep Wrangler (JL) in 2018 followed by the facelift 2019 Jeep Cherokee (KL) and the Chinese Jeep Grand Commander. It is currently available only in 2.0L capacities, with different tunings.
The 2.0L GME-T4 received an update in 2025 dubbed Hurricane4 EVO, and is expected to debut in the 2026 Jeep Grand Cherokee WL mid-cycle refresh. This engine will ultimately replace the Pentastar V6 engine.
Production
Around 2018, it was rumored production of the Hurricane would move to the Trenton Engine Plant in Trenton, Michigan which also builds the World Gasoline Engine and the Chrysler Pentastar engine. However, FCA announced on March 5, 2020, it will invest $400 million to repurpose the idled Indiana Transmission Plant II in Kokomo, Indiana, to build the GME for the United States market. Production of the USA-built Hurricane began in 2022.
By June 2018, the GME T4 was also being built in Changsha (China) by the GAC Fiat Chrysler Powertrain plant for Chinese-made vehicles.
Production Plants
Termoli Powertrain Plant in Termoli, Italy (since 2016 for European and United States markets)
GAC Fiat Chrysler in Changsha, China (2018-2022 for Chinese markets)
Kokomo Engine Plant (formerly Indiana Transmission Plant II) in Kokomo, Indiana (since 2022 for United States markets)
Dundee Engine Plant in Dundee, Michigan (since 2025 for United States markets)(GME-T4 EVO)
Applications
GME T4
2016- Alfa Romeo Giulia (952)
2017- Alfa Romeo Stelvio
2018- Jeep Wrangler (JL)
2018-2023 Jeep Cherokee (KL)
2018- Jeep Grand Commander
2021- Maserati Ghibli Hybrid
2021- Maserati Levante Hybrid
2021- Jeep Wrangler 4xe
2022- Jeep Grand Cherokee (WL)
2022- Jeep Grand Cherokee 4xe (WL)
2022- Maserati Grecale
2023- Alfa Romeo Tonale / Dodge Hornet
2023- Ram Rampage
2023- Jeep Compass
2025- Jeep Gladiator 4xe
2025- Jeep Commander
GME T4-EVO
2026- Jeep Grand Cherokee (WL)
2026- Jeep Wrangler (JL)
2026- Jeep Gladiator (JT)
See also
Stellantis Hurricane engine
References
Automobile engines
Hurricane
Fiat Chrysler Automobiles
Straight-four engines
Gasoline engines by model
Stellantis
Stellantis engines | FCA Global Medium Engine | [
"Technology"
] | 675 | [
"Engines",
"Automobile engines"
] |
46,690,508 | https://en.wikipedia.org/wiki/Cerebrolysin | Cerebrolysin (developmental code name FPF-1070) is an experimental mixture of enzymatically-treated peptides derived from pig brain whose constituents can include brain-derived neurotrophic factor (BDNF), glial cell line-derived neurotrophic factor (GDNF), nerve growth factor (NGF), and ciliary neurotrophic factor (CNTF). Although it is under preliminary study for its potential to treat various brain diseases, it is used as a therapy in dozens of countries in Eurasia.
Cerebrolysin has been studied for potential treatment of several neurodegenerative diseases, with only preliminary research, as of 2023. No clear benefit in the treatment of acute stroke has been found, and an increased rate of spontaneous adverse effects requiring hospitalization is reported. Some positive effects have been reported when cerebrolysin is used to treat vascular dementia.
Research
Stroke
A 2023 review indicated that cerebrolysin or cerebrolysin-like peptide mixtures from cattle brain likely provide no benefit for preventing all-cause death in acute ischemic stroke, and that higher quality studies are needed. In addition, cerebrolysin might cause a higher rate of spontaneous adverse events requiring hospitalization.
Studies of ischemic stroke in Asian subpopulations found an absence of benefit. A 2020 study suggested a lack of benefit in hemorrhagic stroke related to cerebral aneurysm.
Dementia
Reviews of preliminary research indicate a possible improvement in cognitive function using cerebrolysin for vascular dementia and Alzheimer's disease, although further high-quality research is needed.
Other
Early studies have suggested potential use of cerebrolysin with a wide variety of neurodegenerative disorders, including traumatic brain injury, schizophrenia, multiple sclerosis, cerebral palsy and spinal cord injury although research is still preliminary.
Adverse effects
Upon injection, adverse effects of cerebrolysin include nausea, dizziness, headache, and sweating. It is not recommended for use in people with epilepsy, kidney disease, or hypersensitivity to the compound constituents.
In trials studying the use of cerebrolysin after acute stroke, there was no increased risk of "serious adverse events" requiring hospitalization. These were specifically defined as "...any untoward medical occurrence that, at any dose, resulted in death, [was] life-threatening, required inpatient hospitalisation or resulted in prolongation of existing hospitalisation, resulted in persistent or significant disability/incapacity, [was] a congenital anomaly/birth defect, or [was] a medically important event or reaction".
Pharmacology
Laboratory studies indicate there may be neurotrophic effects of cerebrolysin similar to endogenous mechanisms, although its specific molecular effects are not clear.
Cerebrolysin is given by injection. Some of the peptides in cerebrolysin are short-lived once in the blood (for example, the half-life of BDNF is only 10 minutes).
Regulatory
Although cerebrolysin is used in Russia, Eastern European countries, China, and other Asian countries, its status as a government-approved drug is unclear. It is available only by prescription from a physician. According to the manufacturer, the European Medicines Agency has declared cerebrolysin safe.
It is not an approved drug in the United States.
References
Antidementia agents
Neuroprotective agents
Nootropics
Peptides
Management of stroke | Cerebrolysin | [
"Chemistry"
] | 740 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
46,691,835 | https://en.wikipedia.org/wiki/Microbial%20cytology | Microbial cytology is the study of the microscopic and submicroscopic details of microorganisms. "Microbial" (first recorded 1880–85) derives from the Greek mīkro- ("small") and bíos ("life"); "cytology" (1857) derives from the Greek kytos ("hollow, as a cell or container") and -logy ("the study of"). In microbial cytology, cells collected from a part of the body are analyzed under a microscope. The main purpose of microbial cytology is to see the structure of the cells, and how they form and operate.
References
Microbiology | Microbial cytology | [
"Chemistry",
"Biology"
] | 138 | [
"Microbiology",
"Microscopy"
] |
46,694,032 | https://en.wikipedia.org/wiki/Participatory%20monitoring | Participatory monitoring (also known as collaborative monitoring, community-based monitoring, locally based monitoring, or volunteer monitoring) is the regular collection of measurements or other kinds of data (monitoring), usually of natural resources and biodiversity, undertaken by local residents of the monitored area, who rely on local natural resources and thus have more local knowledge of those resources. Those involved usually live in communities with considerable social cohesion, where they regularly cooperate on shared projects.
Participatory monitoring has emerged as an alternative or addition to professional scientist-executed monitoring. Scientist-executed monitoring is often costly and hard to sustain, especially in those regions of the world where financial resources are limited. Moreover, scientist-executed monitoring can be logistically and technically difficult and is often perceived to be irrelevant by resource managers and the local communities. Involving local people and their communities in monitoring is often part of the process of sharing the management of land and resources with the local communities. It is connected to the devolution of rights and power to the locals. Aside from potentially providing high-quality information, participatory monitoring can raise local awareness and build the community and local government expertise that is needed for addressing the management of natural resources.
Participatory monitoring is sometimes included in terms such as citizen science, crowd-sourcing, ‘public participation in scientific research’ and participatory action research.
Definition
The term ‘participatory monitoring’ embraces a broad range of approaches, from self-monitoring of harvests by local resource users themselves, to censuses by local rangers, and inventories by amateur naturalists. The term includes techniques labelled as ‘self-monitoring’, ranger-based monitoring’, ‘event-monitoring’, ‘participatory assessment, monitoring and evaluation of biodiversity’, ‘community-based observing’, and ‘community-based monitoring and information systems’.
Many of these approaches are directly linked to resource management, but the entities being monitored vary widely, from individual animals and plants, through habitats, to ecosystem goods and services. However, all of the approaches have in common that the monitoring is carried out by individuals who live in the monitored places and rely on local natural resources, and that local people or local government staff are directly involved in formulation of research questions, data collection, and (in most instances) data analysis, and implementation of management solutions based on research findings.
Participatory monitoring is included in the term ’participatory monitoring and management’, which has been defined as "approaches used by local and Indigenous communities, informed by traditional and local knowledge, and, increasingly, by contemporary science, to assess the status of resources and threats on their land and advance sustainable economic opportunities based on the use of natural resources". The term ’participatory monitoring and management’ is particularly used in tropical, Arctic and developing regions, where communities are most often the custodians of valuable biodiversity and extensive natural ecosystems.
Alternative definitions
Other definitions for participatory monitoring have also been proposed, including:
"The systematic collection of information at regular intervals for initial assessment and for the monitoring of change. This collection is undertaken by locals in a community who do not have professional training".
Likewise, the term ’community-based monitoring of natural resources’ has been defined as:
"A process where concerned citizens, government agencies, industry, academia, community groups and local institutions collaborate to monitor, track, and respond to issues of common community concern".
"Monitoring of natural resources undertaken by local stakeholders using their own resources and in relation to aims and objectives that make sense to them".
"A process of routinely observing environmental or social phenomena, or both, that is led and undertaken by community members and can involve external collaboration and support of visiting researchers and government agencies".
Limitations
It has been suggested that participatory monitoring is unlikely to provide quantitative data on large-scale changes in habitat area, or on populations of cryptic species that are hard to identify or census reliably. It has also been suggested that participatory monitoring is not suitable for monitoring resources that are so valuable they attract powerful outsiders. Likewise, in areas where changes, threats, or interventions operate in complex fashions, where rural people do not depend on the use of natural resources and there are no real benefits flowing to the local people from doing monitoring work (or the costs to local people of involvement exceed the benefits), or where there is a poor relationship between the authorities and the local people, participatory monitoring is probably less likely to yield useful data and management solutions than conventional scientific approaches.
History
Whereas government censuses of human populations, which date perhaps to the 16th century B.C., were likely the first formal attempts at environmental monitoring, farmers, fishers and forest users have informally monitored resource conditions for even longer, their observations influencing survival strategies and resource use.
Participatory monitoring schemes are in operation on all the inhabited continents, and the approach is beginning to appear in textbooks.
Conferences
An international symposium on participatory monitoring was hosted by the Nordic Agency for Development and Ecology and the Zoology Department at Cambridge University in Denmark in April 2004. It led to a special issue of Biodiversity and Conservation October 2005.
In the Arctic, a symposium on data management and local knowledge was hosted by ELOKA and held in Boulder, USA, in November 2011. It led to a special issue of Polar Geography in 2014.
In the Arctic, three circumpolar meetings were held in 2013-2014:
In November 2013 in Cambridge Bay, Nunavut, hosted by Oceans North Canada,
In December 2013 in Copenhagen, Denmark, hosted by Greenland Department of Fisheries, Hunting and Agriculture, ELOKA, and Nordic Foundation for Development and Ecology,
In March 2014 in Kautokeino, Norway, hosted by International Centre for Reindeer Husbandry, UNESCO and other partners.
The first global conference on Participatory Monitoring and Management was hosted by the Brazilian Ministry of Environment (MMA) and the Chico Mendes Institute for Biodiversity Conservation (ICMBio) and held in Manaus, Brazil in September 2014.
Approaches
Thematically, participatory monitoring has considerable potential in several areas, including:
For connecting knowledge systems: in efforts to bring Indigenous and local knowledge systems into the science–policy interface such as the Intergovernmental Platform for Biodiversity and Ecosystem Services.
For monitoring rapidly changing environments: to inform resource management in rapidly changing environments such as the Arctic, where Indigenous and local communities have detailed knowledge of key components of their environment, such as sea-ice, snow, weather patterns, caribou and other natural resources.
In Payment for Ecosystem Services (PES) programs: to connect environmental performance with payment schemes such as REDD+.
For reinforcing international agreements: in efforts to link international environmental agreements to decision-making in the ‘real world’.
Typology
A typology of monitoring schemes has been proposed, determined on the basis of the relative contributions of local stakeholders and professional researchers, and supported by findings from statistical analysis of published schemes. The typology identified 5 categories of monitoring schemes that between them span the full spectrum of natural resource monitoring protocols:
Category A. Autonomous Local Monitoring. In this category the whole monitoring process—from design, to data collection, to analysis, and finally to use of data for management decisions—is carried out autonomously by local stakeholders. There is no direct involvement of external agencies.
Category B. Collaborative Monitoring with Local Data Interpretation. In these schemes, the original initiative was taken by scientists but local stakeholders collect, process and interpret the data, although external scientists may provide advice and training. The original data collected by local people remain in the area being monitored, which helps create local ownership of the scheme and its results, but copies of the data may be sent to professional researchers for in-depth or larger-scale analysis.
Category C. Collaborative Monitoring with External Data Interpretation. The third most distinct group is monitoring scheme category C. These schemes were designed by scientists who also analyse the data, but the local stakeholders collect the data, take decisions on the basis of the findings and carry out the management interventions emanating from the monitoring scheme.
Category D. Externally Driven Monitoring with Local Data Collectors. This category of monitoring scheme involves local stakeholders only in data collection. The design, analysis, and interpretation of the monitoring results are undertaken by professional researchers—generally far from the site. Monitoring schemes of category D are mostly long-running ‘citizen science’ projects from Europe and North America.
Category E. Externally Driven, Professionally Executed Monitoring. Monitoring schemes of category E do not involve local stakeholders. Design of the scheme, analysis of the results, and management decisions derived from these analyses are all undertaken by professional scientists funded by external agencies.
The use of technology for participatory monitoring
Traditional methods of data collection for participatory monitoring use paper and pen. This has advantages in terms of low cost of materials and training, simplicity, and reduced potential for technical hitches. However, all data must be transcribed for analysis, which takes time and can be subject to transcription errors. Increasingly, participatory monitoring initiatives incorporate technology, from GPS recorders to georeference the data collected on paper, to drones to survey remote areas, phones to send simple reports via SMS, or smartphones to collect and store data. Various apps exist to create and manage data collection forms on smartphones (e.g. ODK, Sapelli and others).
Some initiatives find that the use of smartphones for data collection has advantages over paper-based systems. The advantages include that very little equipment need be carried on a survey, a large amount and variety of data can be stored (geographical locations, photos and audio, as well as data entered onto monitoring forms) and data can be shared rapidly for analysis without transcription errors. The use of smartphones can incentivise young people to get involved in monitoring, sparking an interest in conservation. Some apps are especially designed to be usable by illiterate monitors. If local people risk threats or violence by monitoring illegal activities, the true purpose of the phones can be denied, and the monitoring data locked away. However, phones are expensive; are vulnerable to damage and technical issues; necessitate additional training - not least due to rapid technological change; phone charging can be a challenge (especially under thick forest canopies); and uploading data for analysis is difficult in areas without network connections.
Data sharing in participatory monitoring
A key challenge for participatory monitoring is to develop ways to store, manage and share data and to do this in ways that respect the rights of the communities that supplied the data. A ‘rights-based approach to data sharing’ can be based on principles of free, prior and informed consent, and prioritise the protection of the rights of those who generated the data, and/or those potentially affected by data-sharing. Local people can do much more than simply collect data: they can also define the ways that this data is used, and who has access to it.
Clear agreements on data sharing are especially important for initiatives where diverse data is collected, of variable relevance to different stakeholders. For example, monitoring could on the one hand, investigate sensitive social problems within a community, or contested resources at the centre of local conflicts or illegal exploitation - data that community leaders might want to keep confidential and address locally; on the other hand, the same initiative could generate data on forest biomass, of greater interest to external stakeholders.
One way to establish the rules around data sharing is to set up a data sharing protocol. This can define:
The infrastructure for data storage and management (computer programmes, hard drives and cloud storage). Local capacity should be strong enough to access, manage and retain control of the data.
Data classification: discussions in the communities can set out how different types of data can be used – for example a traffic light system can define ‘red’ data that is confidential to the community, ‘amber’ data which should be discussed prior to any use, and ‘green’ data that is approved for release.
Processes for data sharing: this defines the roles and responsibilities of different people, and the processes to be followed for requests to access data, dependent on how that data is classified.
Reporting: the protocol can set out how data should be reported, for example specifying the manner and frequency with which findings are reported to the local community, and ensuring that technical data is presented in a way that is compatible with external systems (e.g. government databanks or processes to respond to findings).
See also
References
Further reading
Gardner, T.A. 2010. Monitoring Forest Biodiversity: Improving Conservation through Ecologically Responsible Management. Earthscan, London.
Johnson, N. et al. 2015. Community-Based Monitoring in a Changing Arctic: A Review for the Sustaining Arctic Observing Network. Final report of Sustaining Arctic Observing Networks Task #9. Ottawa, ON: Inuit Circumpolar Council.
Lawrence, A. (Ed.). 2010. Taking Stock of Nature. Cambridge Univ. Press, Cambridge, UK.
Nordic Council of Ministers 2015. Local knowledge and resource management. On the use of indigenous and local knowledge to document and manage natural resources in the Arctic. TemaNord 2015-506. Nordic Council of Ministers, Copenhagen, Denmark. .
Special issue of Biodiversity and Conservation on the potential of locally based approaches to monitoring of biodiversity and resource use, available at www.monitoringmatters.org (Danielsen et al. 2005b).
Special issue of Polar Geography on local and traditional knowledge and data management in the Arctic http://www.tandfonline.com/toc/tpog20/37/1#.VTd0oTrtU3Q
Tebtebba 2013. Developing and Implementing Community-Based Monitoring and Information Systems: The Global Workshop and the Philippine Workshop Reports. http://tebtebba.org/index.php/all-resources/category/8-books?download=890:developing-and-implementing-cbmis-the-global-workshop-and-the-philippine-workshop-reports
Participatory democracy
Participatory budgeting
Measurement
Environmental monitoring
Volunteered geographic information
Citizen science | Participatory monitoring | [
"Physics",
"Mathematics"
] | 2,915 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
46,694,677 | https://en.wikipedia.org/wiki/Bamboo%20construction | Bamboo construction involves the use of bamboo as a building material for scaffolding, bridges, houses and buildings. Bamboo, like wood, is a natural composite material with a high strength-to-weight ratio useful for structures. Bamboo's strength-to-weight ratio is similar to timber, and its strength is generally similar to a strong softwood or hardwood timber.
Historic use of bamboo for construction
In its natural form, bamboo as a construction material is traditionally associated with the cultures of South Asia, East Asia, the South Pacific, and Central and South America. In China and India, bamboo was used to hold up simple suspension bridges, either by making cables of split bamboo or twisting whole culms of sufficiently pliable bamboo together. One such bridge in the area of Qian-Xian is referenced in writings dating back to 960 AD and may have stood since as far back as the third century BC, due largely to continuous maintenance.
Bamboo has also long been used as scaffolding; the practice has been banned in mainland China for buildings over six stories, but is still in continuous use for skyscrapers in Hong Kong. In the Philippines, the nipa hut is a fairly typical example of the most basic sort of housing where bamboo is used; the walls are split and woven bamboo, and bamboo slats and poles may be used as its support. In Japanese architecture, bamboo is used primarily as a supplemental and/or decorative element in buildings such as fencing, fountains, grates and gutters, largely due to the ready abundance of quality timber.
In parts of India, bamboo is used for drying clothes indoors, both as a rod high up near the ceiling to hang clothes on, and as a stick wielded with acquired expert skill to hoist, spread, and to take down the clothes when dry. It is also commonly used to make ladders, which apart from their normal function, are also used for carrying bodies in funerals. In Maharashtra, the bamboo groves and forests are called Veluvana, the name velu for bamboo is most likely from Sanskrit, while vana means forest. Furthermore, bamboo is also used to create flagpoles.
In Central and South America, bamboo has formed an essential part of the construction culture. Vernacular forms of housing such as bahareque have developed that use bamboo in highly seismic areas. When well-maintained and in good condition, these have been found to perform surprisingly well in earthquakes.
Modern use of bamboo round poles for construction
Over the past few decades, there has been a growing interest in using bamboo round poles for construction, primarily because of its sustainability. Famous bamboo architects and builders include Simón Velez, Marcelo Villegas, Oscar Hidalgo-López, Jörg Stamm, Vo Trong Nghia, Elora Hardy and John Hardy. To date, the most high-profile bamboo construction projects have tended to be in Vietnam, Bali (Indonesia), China and Colombia. The greatest advancements in the structural use of bamboo have been in Colombia, where universities have been conducting significant research into element and joint design, and where large high-profile buildings and bridges have been constructed. In Brazil, bamboo has been studied for structural applications for more than 40 years at the Pontifical Catholic University of Rio de Janeiro (PUC-Rio). Important results include tensegrity bamboo structures, bamboo bicycles, a bamboo space structure with rigid steel joints, deployable bamboo pavilion structures with flexible joints, and the bamboo active-bending pantographic amphitheater structure developed by the Bambutec Design company.
Structural design codes
The first structural design codes for bamboo in-the-round were published by ISO in 2004 (ISO 22156 Bamboo – Structural design; ISO 22157-1 Bamboo – Determination of physical and mechanical properties, part 1; and ISO 22157-2 Bamboo – Determination of physical and mechanical properties, part 2: Laboratory manual). Colombia was the first country to publish a country-specific code on the structural use of bamboo (NSR-10 G12). Since then, Ecuador, Peru, India and Bangladesh have all published codes; however, the Colombian code is still widely considered the most reliable and comprehensive.
Curved structural shapes
Heat and pressure are sometimes used, traditionally, to form curved shapes in bamboo.
Structural behaviour
Typical bamboo shows nonlinear stress-strain behaviour. It can sustain strains of up to 0.05 before breaking, at which point the stress level can be about 300 MPa.
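As a rough worked example using the quoted figures (a secant value only; since the response is nonlinear, the tangent modulus at small strains is higher):

E_secant ≈ σ/ε = 300 MPa / 0.05 = 6 GPa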
Durability
Bamboo is more susceptible to decay than timber, due to a lack of natural toxins and its typically thin walls, which means that a small amount of decay can mean a significant percentage change in capacity. There are three causes of decay: beetle attack, termite attack and fungal attack (rot). Untreated bamboo can last 2–6 years internally, and less than a year if exposed to water.
In order to protect bamboo from decay, two design principles are required:
The bamboo must be kept dry throughout its life to protect it against rot (fungi). This fundamental architectural principle is called "durability by design", and involves keeping the bamboo dry through good design practices such as elevating the structure above the ground, using damp proof membranes, having good drip details, having good roof overhangs, using waterproof coatings for the walls, etc.
The bamboo must be treated to protect it against insects (namely beetles and termites). The most common and appropriate chemical for treating bamboo is boron, normally applied as a mixture of borax and boric acid, although it is also available as a single compound (disodium tetraborate decahydrate).
Both principles must be applied to a design in order to protect bamboo. Boron by itself is inadequate to protect against rot, and it will wash out if exposed to water.
Modern fixed preservatives, such as copper azole, may be used as alternatives to boron; however, little bamboo has been reliably tested using these methods to date. In addition, they tend to be more hazardous for treatment workers and end users, and are therefore less appropriate for developing countries, where bamboo is currently mostly used.
Natural forms of bamboo treatment, such as soaking in water and exposure to smoke, may provide some limited protection against beetles; however, there is little evidence to show they are effective against termites and rot, and they are therefore not typically used in modern construction.
Modern use of laminated bamboo for construction
Bamboo can be cut and laminated into sheets and planks. This process involves cutting stalks into thin strips, planing them flat, and drying the strips; they are then glued, pressed and finished. Long used in China and Japan, entrepreneurs started developing and selling laminated bamboo flooring in the West during the mid-1990s; products made from bamboo laminate, including flooring, cabinetry, furniture and even decorations, are currently surging in popularity, transitioning from the boutique market to mainstream providers such as Home Depot. The bamboo goods industry (which also includes small goods, fabric, etc.) is expected to be worth $25 billion by 2012. The quality of bamboo laminate varies among manufacturers and varies according to the maturity of the plant from which it was harvested (six years being considered the optimum).
Case studies
Bamboo was used for the structural members of the India pavilion at Expo 2010 in Shanghai. The pavilion is the world's largest bamboo dome, about in diameter, with bamboo beams/members overlaid with a ferro-concrete slab, waterproofing, copper plate, solar PV panels, a small windmill, and live plants. A total of of bamboo was used. The dome is supported on 18-m-long steel piles and a series of steel ring beams. The bamboo was treated with borax and boric acid as a fire retardant and insecticide and bent in the required shape. The bamboo sections were joined with reinforcement bars and concrete mortar to achieve the necessary lengths.
Bamboo has been used successfully for housing in Costa Rica, Ecuador, El Salvador, Colombia, Mexico, Nepal and the Philippines. An appropriate way of using bamboo for housing is considered to be "bahareque encementado", or "improved bahareque"/"engineered bahareque". This method takes the Latin American vernacular construction system bahareque (a derivative of wattle and daub) and engineers it, making it considerably more durable and resistant to earthquakes and typhoons.
Panyaden International School in northern Thailand expanded its campus with a bamboo sports hall designed by Chiangmai Life Architects. Inspired by the lotus flower, the hall spans 782 square meters and includes courts for various sports and a liftable stage. The innovative design uses prefabricated bamboo trusses spanning over 17 meters, ensuring the structure can withstand high-speed winds and earthquakes. The hall's natural ventilation and insulation provide year-round comfort, while its use of bamboo maintains a zero-carbon footprint.
Bamboo is one of the primary materials for the flood resistant homes in Pakistan designed by Yasmeen Lari. The technique is derived from the vernacular tradition of Sindh. It uses bamboo and mud brick.
Cultivation
Harvesting
Bamboo used for construction purposes must be harvested when the culms reach their greatest strength and when sugar levels in the sap are at their lowest, as high sugar content increases the ease and rate of pest infestation.
Harvesting of bamboo is typically undertaken according to the following cycles:
Life cycle of the culm
As each individual culm goes through a 5–7 year life cycle, culms are ideally allowed to reach this level of maturity prior to full capacity harvesting. The clearing out or thinning of culms, particularly older decaying culms, helps to ensure adequate light and resources for new growth. Well-maintained clumps may have a productivity 3–4× that of an unharvested wild clump. Consistent with the life cycle described above, bamboo is harvested from two to three years through to five to seven years, depending on the species.
Annual cycle
As all growth of new bamboo occurs during the wet season, disturbing the clump during this phase will potentially damage the upcoming crop. Also during this high rainfall period, sap levels are at their highest, and then diminish towards the dry season. Picking immediately prior to the wet/growth season may also damage new shoots. Hence, harvesting is best a few months prior to the start of the wet season.
Daily cycle
During the height of the day, photosynthesis is at its peak, producing the highest levels of sugar in sap, making this the least ideal time of day to harvest. Many traditional practitioners believe the best time to harvest is at dawn or dusk on a waning moon.
Additional images
See also
Bamboo bicycle
Bamboo textiles
International Network for Bamboo and Rattan
References
External links
Elora Hardy: Magical houses, made of bamboo at TED
Arup: Full-scale shake-table test of earthquake-proof housing for El Salvador
Bambutec Design: Deployable Bamboo Structure Pavilion
Bamboo buildings and structures
Building materials | Bamboo construction | [
"Physics",
"Engineering"
] | 2,240 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
35,509,452 | https://en.wikipedia.org/wiki/Cabled%20observatory | A cabled observatory is a seabed oceanographic research platform connected to land by cables that provide power and communication. Observatories are outfitted with a multitude of scientific instruments that can collect many kinds of data from the seafloor and water column. By removing the limitations of undersea power sources and sonar or RF communications, cabled observatories allow persistent study of underwater phenomena. Data from these instruments is relayed to a land station and data networks, such as Ocean Networks Canada, in real time.
On-board sensors
Cabled observatories have the benefit of high-power cable connections that can support a variety of instrumentation at any time. Such instrumentation can include cameras and microphones that record high-definition video and audio, standard sensors that measure pressure, temperature, oxygen content, conductivity, turbidity, and chlorophyll-a fluorescence, and custom sensors for specialized purposes. Over 200 instruments can be installed on a cabled observatory at a time, as seen on the NEPTUNE and VENUS observatories.
Comparison with other data collection methods
Cabled observatories are ideal for use in complex regions of the ocean where continuous data sampling is required for understanding the area of interest. Such areas include the complex biospheres of the temperate coasts and polar regions, which are sensitive to climate change. Conventional methods for oceanographic data collection, such as by ship, are often limited by typically harsh weather conditions and cannot sample data continuously. Mooring systems have also been a common method for long-term ocean data sampling; however, they require scientific cruises for scientists to retrieve data or to discover damage to the mooring system and carry out repairs. Data collection by ship and by mooring system in complex or harsh environments has historically led to data losses and inaccurate conclusions. By eliminating the need for regular ship use, and bolstered with extensive sensor sets enabled by direct power connections, cabled observatories have the capability to provide continuous and detailed data sampling for regions of the ocean that are otherwise inaccessible.
Usage locations
Cabled observatories are permanently fixed in one area and cannot take measurements beyond it; however, they can support sensors and apparatuses that travel vertically in the water column, and observatory data can be combined with ship data to create a more complete understanding of the area. An observatory can be placed as far as 300 km from shore if conditions permit. Observatories can be placed in waters as deep as 2500 meters and as shallow as 10 meters, even where the wave height is greater than the water depth.
Operational limitations
Several issues affecting data reliability and completeness have arisen and been investigated by teams running cabled observatories, including data loss and sensor failure. The sources of these issues are diverse; common causes are improper operation, biofouling, cable connection problems, and leakages. Systematic improvements to lessen the impact of such factors are currently being studied by groups such as Ocean Networks Canada. Additionally, data loss can occur from improper installation or operation of sensors and from poor data management, which are more likely when those responsibilities are taken on by research groups external to the observatory team. This issue prompted the COSYNA observatory team to stream final probe data to partner research groups, and streaming is now a common method of data communication for other observatory teams.
Examples of cabled observatories
MARS (Monterey Accelerated Research System)
NEPTUNE (North-East Pacific Time-series Undersea Networked Experiments)
VENUS (Victoria Experimental Network Under the Sea)
Liquid Jungle Lab (LJL) Panama- PLUTO
H2O (Hawaii-2 Observatory) – early experiment
ALOHA
ESONET
Ocean Observatories Initiative Cabled Array
Exploration & Remote Instrumentation by Students (ERIS)
See also
Mooring (oceanography)
Benthic lander
Oceanography
Ocean observations
References
Oceanography
Oceanographic instrumentation
Submarine communications cables | Cabled observatory | [
"Physics",
"Technology",
"Engineering",
"Environmental_science"
] | 807 | [
"Hydrology",
"Oceanographic instrumentation",
"Applied and interdisciplinary physics",
"Oceanography",
"Measuring instruments"
] |
35,510,934 | https://en.wikipedia.org/wiki/Two-state%20vector%20formalism | The two-state vector formalism (TSVF) is a description of quantum mechanics in terms of a causal relation in which the present is caused by quantum states of the past and of the future taken in combination.
Theory
The two-state vector formalism is one example of a time-symmetric interpretation of quantum mechanics (see Interpretations of quantum mechanics). Time-symmetric interpretations of quantum mechanics were first suggested by Walter Schottky in 1921, and later by several other scientists. The two-state vector formalism was first developed by Satosi Watanabe in 1955, who named it the Double Inferential state-Vector Formalism (DIVF). Watanabe proposed that information given by forwards evolving quantum states is not complete; rather, both forwards and backwards evolving quantum states are required to describe a quantum state: a first state vector that evolves from the initial conditions towards the future, and a second state vector that evolves backwards in time from future boundary conditions. Past and future measurements, taken together, provide complete information about a quantum system. Watanabe's work was later rediscovered by Yakir Aharonov, Peter Bergmann and Joel Lebowitz in 1964, who later renamed it the Two-State Vector Formalism (TSVF). Conventional prediction, as well as retrodiction, can be obtained formally by separating out the initial conditions (or, conversely, the final conditions) by performing sequences of coherence-destroying operations, thereby cancelling out the influence of the two state vectors.
The two-state vector is represented by:

⟨Φ| |Ψ⟩

where the state ⟨Φ| evolves backwards from the future and the state |Ψ⟩ evolves forwards from the past.
In the example of the double-slit experiment, the first state vector evolves from the electron leaving its source, the second state vector evolves backwards from the final location of the electron on the detection screen, and the combination of forwards and backwards evolving state vectors determines what occurs when the electron passes the slits.
The two-state vector formalism provides a time-symmetric description of quantum mechanics, and is constructed so as to be time-reversal invariant. It can be employed in particular for analyzing pre- and post-selected quantum systems. Building on the notion of the two-state vector, Reznik and Aharonov constructed a time-symmetric formulation of quantum mechanics that encompasses probabilistic observables as well as nonprobabilistic weak observables.
Relation to other work
In view of the TSVF approach, and in order to allow information to be obtained about quantum systems that are both pre- and post-selected, Yakir Aharonov, David Albert and Lev Vaidman developed the theory of weak values.
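In the standard notation of this approach, the weak value of an observable A for a system pre-selected in |Ψ⟩ and post-selected in ⟨Φ| is:

A_w = ⟨Φ|A|Ψ⟩ / ⟨Φ|Ψ⟩

a quantity that can lie outside the eigenvalue spectrum of A and may even be complex.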
In TSVF, causality is time-symmetric; that is, the usual chain of causality is not simply reversed. Rather, TSVF combines causality both from the past (forward causation) and the future (backwards causation, or retrocausality).
As with the de Broglie–Bohm theory, TSVF yields the same predictions as standard quantum mechanics. Lev Vaidman emphasizes that TSVF fits very well with Hugh Everett's many-worlds interpretation, with the difference that initial and final conditions single out one branch of wavefunctions (our world).
The two-state vector formalism has similarities with the transactional interpretation of quantum mechanics proposed by John G. Cramer in 1986, although Ruth Kastner has argued that the two interpretations (transactional and two-state vector) have important differences as well. It shares the property of time symmetry with the Wheeler–Feynman absorber theory by Richard Feynman and John Archibald Wheeler and with the time-symmetric theories of Kenneth B. Wharton and Michael B. Heaney.
See also
Satosi Watanabe
Yakir Aharonov
Weak measurement
Delayed choice experiment
Wheeler–Feynman absorber theory
Positive operator valued measure
References
Further reading
Yakir Aharonov, Lev Vaidman: The Two-State Vector Formalism of Quantum Mechanics: an Updated Review. In: Juan Gonzalo Muga, Rafael Sala Mayato, Íñigo Egusquiza (eds.): Time in Quantum Mechanics, Volume 1, Lecture Notes in Physics, vol. 734, pp. 399–447, 2nd ed., Springer, 2008, , DOI 10.1007/978-3-540-73473-4_13, arXiv:quant-ph/0105101v2 (submitted 21 May 2001, version of 10 June 2007)
Lev Vaidman: The Two-State Vector Formalism, arXiv:0706.1347v1 (submitted 10 June 2007)
Yakir Aharonov, Eyal Y. Gruss: Two-time interpretation of quantum mechanics, arXiv:quant-ph/0507269v1 (submitted 28 July 2005)
Eyal Gruss: A Suggestion for a Teleological Interpretation of Quantum Mechanics, arXiv:quant-ph/0006070v2 (submitted 14 June 2000, version of 4 August 2000)
Causality
Quantum mechanics | Two-state vector formalism | [
"Physics"
] | 1,065 | [
"Theoretical physics",
"Quantum mechanics"
] |
35,514,066 | https://en.wikipedia.org/wiki/Chromium%28II%29%20fluoride | Chromium(II) fluoride is an inorganic compound with the formula CrF2. It exists as a blue-green iridescent solid. Chromium(II) fluoride is sparingly soluble in water, almost insoluble in alcohol, and is soluble in boiling hydrochloric acid, but is not attacked by hot distilled sulfuric acid or nitric acid. Like other chromous compounds, chromium(II) fluoride is oxidized to chromium(III) oxide in air.
Preparation and structure
The compound is prepared by passing anhydrous hydrogen fluoride over anhydrous chromium(II) chloride. The reaction proceeds at room temperature, but the mixture is typically heated to 100–200 °C to ensure completion:
CrCl2 + 2 HF → CrF2 + 2 HCl
Like many difluorides, CrF2 adopts a rutile-like structure with octahedral molecular geometry about Cr(II) and trigonal geometry at F−. Two of the six Cr–F bonds are long, at 2.43 Å, and four are short, near 2.00 Å. This distortion is a consequence of the Jahn–Teller effect, which arises from the d4 electron configuration of the chromium(II) ion.
See also
Chromyl fluoride
Chromium(II) chloride
References
External links
Crystal Structure
Chromium(II) compounds
Fluorides
Metal halides | Chromium(II) fluoride | [
"Chemistry"
] | 310 | [
"Inorganic compounds",
"Fluorides",
"Metal halides",
"Salts"
] |
45,349,681 | https://en.wikipedia.org/wiki/Pulsed%20field%20magnet | A pulsed field magnet is a strong electromagnet which is powered by a brief pulse of electric current through its windings rather than a continuous current, producing a brief but strong pulse of magnetic field. Pulsed field magnets are used in research in fields such as materials science to study the effect of strong magnetic fields, since they can produce stronger fields than continuous magnets. The maximum field strength that continuously-powered high-field electromagnets can produce is limited by the enormous waste heat generated in the windings by the large currents required. Therefore by applying brief pulses of current, with time between the pulses to allow the heat to dissipate, stronger currents can be used and thus stronger magnetic fields can be generated. The magnetic field produced by pulsed field magnets can reach between 50 and 100 T, and lasts several tens of milliseconds.
References
Bernd Ctortecka, High-field NMR in pulsed magnets, Max-Planck Innovation.
Electromagnetic coils
Nuclear magnetic resonance | Pulsed field magnet | [
"Physics",
"Chemistry"
] | 204 | [
"Nuclear chemistry stubs",
"Nuclear magnetic resonance",
"Nuclear magnetic resonance stubs",
"Nuclear physics"
] |
45,355,813 | https://en.wikipedia.org/wiki/Hertz%E2%80%93Knudsen%20equation | In surface chemistry, the Hertz–Knudsen equation, also known as Knudsen–Langmuir equation describes evaporation rates, named after Heinrich Hertz and Martin Knudsen.
Definition
Non-dissociative adsorption (Langmuirian adsorption)
The Hertz–Knudsen equation describes the non-dissociative adsorption of a gas molecule on a surface by expressing the number of molecules impacting the surface per unit area and per unit time as a function of the gas pressure and of other parameters that characterise both the gas-phase molecule and the surface:

φ = P / √(2π·m·kB·T)

where φ is the impingement flux, P is the gas pressure, m is the mass of one gas molecule, kB is the Boltzmann constant and T is the absolute temperature. Normalised per adsorption site, the result has units of s−1 and can be treated as a rate constant for the adsorption process.
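As a quick numerical illustration, the sketch below evaluates the standard impingement-flux form (with unit sticking coefficient); the N2 example reproduces the well-known order of magnitude for air at ambient conditions:

```python
import math

def hertz_knudsen_flux(pressure_pa, molar_mass_kg_per_mol, temperature_k):
    """Impingement flux in molecules per m^2 per s (sticking coefficient = 1)."""
    k_b = 1.380649e-23               # Boltzmann constant, J/K
    n_a = 6.02214076e23              # Avogadro constant, 1/mol
    m = molar_mass_kg_per_mol / n_a  # mass of one molecule, kg
    return pressure_pa / math.sqrt(2.0 * math.pi * m * k_b * temperature_k)

# N2 at 1 atm and 298 K: roughly 2.9e27 molecules m^-2 s^-1
print(f"{hertz_knudsen_flux(101325.0, 28.0e-3, 298.0):.2e}")
```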
See also
Langmuir (unit)
References
Surface science | Hertz–Knudsen equation | [
"Physics",
"Chemistry",
"Materials_science"
] | 170 | [
"Condensed matter physics",
"Surface science"
] |
45,355,837 | https://en.wikipedia.org/wiki/A4%20polytope | {{DISPLAYTITLE:A4 polytope}}
In 4-dimensional geometry, there are 9 uniform polytopes with A4 symmetry. There is one self-dual regular form, the 5-cell with 5 vertices.
Symmetry
A4 symmetry, or [3,3,3], is order 120, with Conway quaternion notation +1/60[I×Ī].21. Its abstract structure is the symmetric group S5. Three forms with symmetric Coxeter diagrams have extended symmetry, [[3,3,3]], of order 240, with Conway notation ±1/60[I×Ī].2 and abstract structure S5×C2.
Visualizations
Each can be visualized as symmetric orthographic projections in Coxeter planes of the A4 Coxeter group, and other subgroups. Three Coxeter plane 2D projections are given, for the A4, A3, A2 Coxeter groups, showing symmetry order 5,4,3, and doubled on even Ak orders to 10,4,6 for symmetric Coxeter diagrams.
The 3D pictures are drawn as Schlegel diagram projections, centered on the cell at position 3, with a consistent orientation; the 5 cells at position 0 are shown solid.
Coordinates
The coordinates of uniform 4-polytopes with pentachoric symmetry can be generated as permutations of simple integers in 5-space, all in hyperplanes with normal vector (1,1,1,1,1). The A4 Coxeter group is palindromic, so repeated polytopes exist in pairs of dual configurations. There are 3 symmetric positions, and 6 pairs making the total 15 permutations of one or more rings. All 15 are listed here in order of binary arithmetic for clarity of the coordinate generation from the rings in each corresponding Coxeter diagram.
The number of vertices can be deduced from the permutations of the coordinates, peaking at 5 factorial (120) for the omnitruncated form, which has 5 unique coordinate values, as the sketch below illustrates.
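A minimal Python example of this counting; the ring coordinates shown are standard integer representatives, assumed here for illustration rather than taken from the tables above:

```python
from itertools import permutations

def vertex_count(point):
    # Distinct permutations of a coordinate 5-tuple lying in the
    # hyperplane with normal vector (1,1,1,1,1).
    return len(set(permutations(point)))

examples = [
    ("5-cell",               (0, 0, 0, 0, 1)),  # 5 vertices
    ("rectified 5-cell",     (0, 0, 0, 1, 1)),  # 10 vertices
    ("truncated 5-cell",     (0, 0, 0, 1, 2)),  # 20 vertices
    ("omnitruncated 5-cell", (0, 1, 2, 3, 4)),  # 120 = 5! vertices
]
for name, point in examples:
    print(f"{name}: {vertex_count(point)} vertices")
```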
References
J.H. Conway and M.J.T. Guy: Four-Dimensional Archimedean Polytopes, Proceedings of the Colloquium on Convexity at Copenhagen, page 38 und 39, 1965
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26)
H.S.M. Coxeter:
H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, Wiley::Kaleidoscopes: Selected Writings of H.S.M. Coxeter
(Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10]
(Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591]
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
External links
Uniform, convex polytopes in four dimensions:, Marco Möller
Uniform 4-polytopes | A4 polytope | [
"Physics"
] | 763 | [
"Uniform 4-polytopes",
"Uniform polytopes",
"Symmetry"
] |
45,355,856 | https://en.wikipedia.org/wiki/List%20of%20F4%20polytopes | {{DISPLAYTITLE:List of F4 polytopes}}
In 4-dimensional geometry, there are 9 uniform 4-polytopes with F4 symmetry, and one chiral half symmetry, the snub 24-cell. There is one self-dual regular form, the 24-cell with 24 vertices.
Visualization
Each can be visualized as symmetric orthographic projections in Coxeter planes of the F4 Coxeter group, and other subgroups.
The 3D pictures are drawn as Schlegel diagram projections, centered on the cell at position 3, with a consistent orientation; the 5 cells at position 0 are shown solid.
Coordinates
Vertex coordinates for all 15 forms are given below, including dual configurations from the two regular 24-cells. (The dual configurations are named in bold.) Active rings in the first and second nodes generate points in the first column. Active rings in the third and fourth nodes generate the points in the second column. The sums of these points are then permuted by coordinate position and sign combination, which generates all vertex coordinates. Edge lengths are 2.
The only exception is the snub 24-cell, which is generated by half of the coordinate permutations (only those with an even number of coordinate swaps). φ = (√5+1)/2.
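A short sketch can verify the vertex count implied by the half-permutation rule. It assumes one standard coordinate representation of the snub 24-cell with edge length 2, namely the even permutations of (0, ±1, ±φ, ±φ²); this coordinate set is an assumption for illustration, not quoted from the tables above:

```python
from itertools import permutations, product
from math import sqrt

phi = (1 + sqrt(5)) / 2  # golden ratio

def is_even(perm):
    # Permutation parity via inversion count.
    inversions = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return inversions % 2 == 0

base = (0.0, 1.0, phi, phi ** 2)  # assumed representative point
vertices = set()
for perm in permutations(range(4)):
    if is_even(perm):  # keep only even permutations of the coordinates
        coords = tuple(base[i] for i in perm)
        for signs in product((1.0, -1.0), repeat=4):
            vertices.add(tuple(round(s * c, 9) for s, c in zip(signs, coords)))

print(len(vertices))  # -> 96, the vertex count of the snub 24-cell
```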
References
J.H. Conway and M.J.T. Guy: Four-Dimensional Archimedean Polytopes, Proceedings of the Colloquium on Convexity at Copenhagen, page 38 und 39, 1965
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26)
H.S.M. Coxeter:
H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, Wiley::Kaleidoscopes: Selected Writings of H.S.M. Coxeter
(Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10]
(Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591]
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
External links
Uniform, convex polytopes in four dimensions:, Marco Möller
Uniform 4-polytopes | List of F4 polytopes | [
"Physics"
] | 616 | [
"Uniform 4-polytopes",
"Uniform polytopes",
"Symmetry"
] |
49,610,754 | https://en.wikipedia.org/wiki/Lockheed%20Martin%20X-59%20Quesst | The Lockheed Martin X-59 Quesst ("Quiet SuperSonic Technology"), sometimes styled QueSST, is an American experimental supersonic aircraft under development by Skunk Works for NASA's Low-Boom Flight Demonstrator project. Preliminary design started in February 2016, with the X-59 planned to begin flight testing in 2021. After delays, as of January 2024, it is planned to be delivered to NASA for flight testing in 2024. It is expected to cruise at at an altitude of , creating a low 75 effective perceived noise level (EPNdB) thump to evaluate supersonic transport acceptability.
Development
In February 2016, Lockheed Martin was awarded a preliminary design contract, aiming to fly in the 2020 timeframe.
A 9%-scale model was to be wind tunnel tested from Mach 0.3 to Mach 1.6 between February and April 2017. The preliminary design review was originally planned to be completed by June 2017. While NASA received three inquiries for its August 2017 request for proposals, Lockheed was the sole bidder.
On April 2, 2018, NASA awarded Lockheed Martin a $247.5 million contract to design, build and deliver in late 2021 the Low-Boom X-plane. On June 26, 2018, the US Air Force informed NASA it had assigned the X-59 QueSST designation to the demonstrator. By October, NASA Langley had completed three weeks of wind tunnel testing of an 8%-scale model, with high AOAs up to 50° and 88° at very low speed, up from 13° in previous tunnel campaigns. Testing was for static stability and control, dynamic forced oscillations, and laser flow visualization, expanding on previous experimental and computational predictions.
From November 5, 2018, NASA was to begin tests over two weeks to gather feedback: up to eight thumps a day at different locations, to be monitored by 20 noise sensors and described by 400 residents, who received $25 per week in compensation.
To simulate the thump, an F/A-18 Hornet dives from to briefly go supersonic, producing reduced shock waves over Galveston, Texas, an island, and a stronger boom over water.
By then, Lockheed Martin had begun machining the first part in Palmdale, California.
In May 2019, the initial major structural parts were loaded in the tooling assembly. In June, assembly was getting underway. The external vision system (XVS) was flight tested on a King Air at NASA Langley. This is to be followed by high speed wind tunnel tests to verify inlet performance predictions with a 9.5%-scale model at NASA Glenn Research Center.
The critical design review was successfully held on September 9–13, before the report to NASA's Integrated Aviation Systems Program by November. Then, 80–90% of the drawings should be released to engineering. The wing assembly was to be completed in 2020. In December 2020, construction was halfway completed with the first flight then planned for 2022.
After flight-clearance testing at the Armstrong Flight Research Center, an acoustic validation, including air-to-air Schlieren imaging backlit by the Sun to confirm the shockwave pattern, was slated to be completed through September 2022. NASA planned to conduct flight tests over U.S. cities to verify the safety and performance of the X-59's quiet supersonic technologies and evaluate community responses for regulators, which could enable commercial supersonic travel over land.
As of 2018, community-response flight tests starting in 2023–2025 were planned to be used for ICAO's Committee on Aviation Environmental Protection meeting (CAEP13) establishing a sonic boom standard. As of 2022, the results of the community overflights were slated to be delivered to the ICAO and the FAA in 2027, allowing for a decision to be made to revise the rules on commercial supersonic travel over land in 2028.
NASA reported the installation of the General Electric F414-GE-100 engine on the X-59, which took place at Lockheed Martin's Skunk Works in Palmdale, California early November 2022. The engine is long and produces of thrust. The X-59's first flight was initially planned for 2024.
Lockheed Martin released a video showing an assembled X-59 rolling out of a hangar on August 4, 2023. The corporation unveiled the X-59 on January 12, 2024. In November 2024, the X-59's engine was tested for the first time, with plans for the aircraft's first flight to take place in 2025.
Design
The Low-Boom X-plane is long with a wingspan for a maximum takeoff weight of . Propelled by a General Electric F414 engine, it should reach a maximum speed of Mach 1.5 or , and cruise at Mach 1.42 or at .
The cockpit, ejection seat and canopy come from a Northrop T-38 and the landing gear from an F-16. With afterburner, its engine will provide of thrust.
As of 2017, the ground noise was expected to be around 60 dB(A), about 1/1000 as loud as current supersonic aircraft. This was to be achieved by using a long, narrow airframe and canards to keep the shock waves from coalescing.
A 2018 projection was that the aircraft would create a 75 EPNdB thump on ground, as loud as closing a car door, compared with 105-110 EPNdB for the Concorde. The central engine has a top-mounted intake for low boom, but inlet flow distortion due to vortices is a concern.
The flush cockpit means that the long and pointed nose-cone will obstruct all forward vision. The X-59 will use an enhanced flight vision system (EVS), consisting of a forward 4K camera with a 33° by 19° angle of view, which will compensate for the lack of forward visibility.
In January 2019, RTX Corporation subsidiary Collins Aerospace was selected to supply its Pro Line Fusion Cockpit avionics, displaying the boom on the ground, and EVS with long-wave infrared sensors. The Collins EVS-3600 multispectral imaging system, beneath the nose, is used for landing, while the NASA external vision system (XVS), in front of the cockpit, gives a forward view.
See also
References
External links
QueSST
NASA programs
2020s United States experimental aircraft
Supersonic transports
Mid-wing aircraft
Single-engined jet aircraft
Aircraft with retractable tricycle landing gear | Lockheed Martin X-59 Quesst | [
"Physics"
] | 1,329 | [
"Physical systems",
"Transport",
"Supersonic transports"
] |
49,613,313 | https://en.wikipedia.org/wiki/Rise%20in%20core | The rise in core (RIC) method is an alternate reservoir wettability characterization method described by S. Ghedan and C. H. Canbaz in 2014. The method enables estimation of all wetting regions such as strongly water wet, intermediate water, oil wet and strongly oil wet regions in relatively quick and accurate measurements in terms of Contact angle rather than wettability index.
During RIC experiments, core samples saturated with a selected reservoir fluid are subjected to imbibition by a second reservoir fluid. RIC wettability measurements have been compared with Amott, modified-Amott and USBM measurements using core plug pairs from different heights of a thick carbonate reservoir, and the results show good coherence. The RIC method is thus an alternative to the Amott and USBM methods that efficiently characterizes reservoir wettability.
Cut-off values vs wettability index
One study used the water-advancing contact angle to estimate the wettability of fifty-five oil reservoirs. De-oxygenated synthetic formation brine and dead anaerobic crude were tested on quartz and calcite crystals at reservoir temperature. Contact angles from 0 to 75 degrees were deemed water-wet, 75 to 105 degrees intermediate and 105 to 180 degrees oil-wet. Although the range of wettabilities was divided into three regions, these were arbitrary divisions. The wettability of different reservoirs can vary within the broad spectrum from strongly water-wet to strongly oil-wet.
Another study described two initial conditions, reference and non-reference, for calculating cut-off values using advancing and receding contact angles and spontaneous imbibition data. The limiting value between the water-wet and intermediate zones was described as 62 degrees. Similarly, cut-off values for the advancing contact angle were described as 0 to 62 degrees for the water-wet region, 62 to 133 degrees for the intermediate-wet zone, and 133 to 180 degrees for the oil-wet zone.
Chilingar and Yen examined extensive research work on 161 limestone, dolomitic limestone, calcitic dolomite, and dolomite cores. Cut-off values were classified as 160 to 180 degrees for strongly oil-wet, 100 to 160 degrees for oil-wet, 80 to 100 degrees for intermediate-wet, 20 to 80 degrees for water-wet and 0 to 20 degrees for strongly water-wet.
Rise in core uses a combination of the Chilingar et al. and Morrow wettability cut-off criteria. The contact-angle range 80–100 degrees indicates neutral wetness, the range 100–133 degrees slight oil-wetness, the range 133–160 degrees oil-wetness, and the range 160–180 degrees strong oil-wetness. The range 62–80 degrees indicates slight water-wetness, the range 20–62 degrees water-wetness, and the range 0–20 degrees strong water-wetness, as encoded in the sketch below.
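The combined cut-off criteria amount to a simple lookup; the function below is a minimal illustrative encoding (the function name and structure are not from the source):

```python
def ric_wettability(theta):
    """Classify an advancing contact angle (degrees) per the combined cut-offs."""
    bands = [
        (20,  "strongly water-wet"),
        (62,  "water-wet"),
        (80,  "slightly water-wet"),
        (100, "neutral-wet"),
        (133, "slightly oil-wet"),
        (160, "oil-wet"),
        (180, "strongly oil-wet"),
    ]
    for upper, label in bands:
        if theta <= upper:
            return label
    raise ValueError("contact angle must be within 0-180 degrees")

print(ric_wettability(45))   # -> water-wet
```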
Technique
RIC wettability characterization technique is based on a modified form of Washburn's equation (1921). The technique enables relatively quick and accurate measurements of wettability in terms of contact angle while requiring no complex equipment. The method is applicable for any set of reservoir fluids, on any type of reservoir rock and at any heterogeneity level. It characterizes wettability across the board from strongly water to strongly oil wet conditions.
Deriving the modified form of the Washburn equation for a rock/liquid/liquid system starts from the Washburn equation for a rock/air/liquid system, which is represented by:

m² = (C·ρ²·γ·cos θ / μ)·t (Eq. 1).

Herein, "t" is the penetration time of the liquid into the porous sample, "μ" is the liquid's viscosity, "ρ" is the liquid's density, "γ" is the liquid's surface tension, "θ" is the liquid's contact angle, "m" is the mass of the liquid that penetrates the porous sample and "C" is the constant characterizing the porous sample. Young's equation for a liquid/liquid/rock system (Figure 2) relates the value of "γos" to the value of "γws" and is represented as:

γos = γws + γow·cos θwo (Eq. 2).
"γow" is the surface tension between the oil and water system, "γos" is the surface tension between oil and solid system and "γws" is the surface tension between water and the solid system. Using Young's equation for a rock surface/ water/air system and substituting in equation (2) to obtain equation 3:
(Eq. 3).
Rearranging equation (1) to factor out γLV, the liquid–vapor surface tension, gives equation (4):

γLV·cos θ = (m²·μ)/(t·C·ρ²) (Eq. 4).
Noting that γLV (the liquid–vapor surface tension) is equivalent to γo (the oil–air surface tension) or γw (the water–air surface tension), substituting equation (4) into equation (3) and cancelling similar terms gives equation (5):
(Eq. 5).
Therein, γLV is the liquid–vapor surface tension, γo is the oil–air surface tension, γw is the water–air surface tension, μo is the viscosity of oil and μw is the viscosity of water; cos θwo involves the contact angle between water and oil. The relationship between the mass of water imbibed into the core sample and the mass of oil imbibed is represented by equation (6):

ρw·Vw·g = ρo·Vo·g (Eq. 6).
Therein, ρw is the density of water and Vw the volume of water imbibed, while ρo is the density of oil and Vo the volume of oil imbibed; the amounts of water and oil imbibed under gravity are the same. Air behaves as a strong non-wetting phase in both the oil–air–solid and the water–air–solid systems, indicating that both oil and water behave as strong wetting phases, resulting in equal air/oil and air/water capillary forces for the same porous medium and for a given pore size distribution. Thus, the mass change of a core sample due to water imbibition is equal to the mass change due to oil imbibition, because penetration of the porous medium by water or oil at any time is a function of a balance between gravity and capillary forces. For core samples of the same rock type and dimensions, and for equal capillary forces, the mass of water imbibed into a core sample is approximately equal to the mass of oil imbibed;
Cancelling out g in equation (6) gives equation (7):

ρw·Vw = ρo·Vo (Eq. 7),
which means
mw = mo (Eq. 8).
Therein, mw is the mass of water and mo is the mass of oil. Substituting this equality into Eq. 5 gives Eq. 9, the modified Washburn equation:
(Eq. 9).
Therein, θ12 is the contact angle of the liquid/liquid/rock system, μ1 is the viscosity of the oil phase, μ2 is the viscosity of the water phase, ρ1 is the density of the oil phase in g/cm3, ρ2 is the density of the water phase in g/cm3, m is the mass of fluid that has penetrated the porous rock, t is the time in min, γL1L2 is the interfacial tension between oil and water in dyne/cm, and C is the characteristic constant of the porous rock.
Experimental setup and procedure
A schematic view of the experimental setup for the RIC wettability testing method is shown in Figure 1. Core plugs are divided into 3–4 core samples, each of 3.8 cm average diameter and 1.5 cm length. The lateral area of each core sample is sealed with epoxy resin to ensure one-dimensional liquid penetration into the core by imbibition. A hook is mounted on the top side of the core sample.
The RIC setup includes a beaker to host the imbibing fluid. A thin rope connects the core sample to a high-precision balance (accurate to 0.001 g). The hanging core sample is positioned with its bottom barely touching the imbibing fluid in the beaker. The relative saturation, as well as the mass, of the core sample starts to change during imbibition. A computer connected to the balance continuously monitors the core sample's mass change over time. Plots of squared mass change versus time are generated.
Determination of "C" constant
The RIC experiment is first performed with an n-dodecane–air–rock system to determine the constant C of the Washburn equation. N-dodecane imbibes into one of the core samples and the imbibition curve is recorded (Figure 2). Dodecane is an alkane with low surface energy that very strongly wets the rock sample in the presence of air, with contact angle θ equal to zero. The constant C is determined by setting cos θ = 1 for the dodecane/air/rock system, using the physical properties of n-dodecane (ρ, μ, γ), and rearranging equation (1):
C = (m²/t)·μ/(ρ²·γ) (Eq. 10)
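A minimal numerical sketch of this step follows, assuming the standard mass-uptake form of the Washburn equation reconstructed above; the slope is a hypothetical measurement and the n-dodecane properties are approximate literature values, not numbers from the source:

```python
# C-constant determination (Eq. 10) in CGS units.
rho   = 0.75    # n-dodecane density, g/cm^3 (approx. literature value)
mu    = 0.0134  # n-dodecane viscosity, g/(cm*s), i.e. 1.34 cP
gamma = 25.3    # n-dodecane surface tension, dyne/cm

slope = 2.0e-4  # hypothetical measured slope m^2/t, g^2/s

# cos(theta) = 1 for the perfectly wetting dodecane/air/rock system:
C = slope * mu / (rho**2 * gamma)
print(f"C = {C:.3e} cm^5")  # C carries units of cm^5 in this form
```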
Experiment
The second step of the RIC experimental process is to saturate the neighboring core sample with crude oil and subject it to water imbibition. Substituting the slope of the RIC curve, the fluid properties of the oil/brine system (ρ, μ, γ) and the C value determined from the neighboring core sample into Eq. 9 yields the contact angle, θ.
References
Reservoirs
Petroleum engineering
Fluid dynamics | Rise in core | [
"Chemistry",
"Engineering"
] | 2,003 | [
"Chemical engineering",
"Petroleum engineering",
"Energy engineering",
"Piping",
"Fluid dynamics"
] |
49,615,155 | https://en.wikipedia.org/wiki/Giatec%20Scientific | Giatec Scientific Inc. is a Canadian-based company with headquarters located in Ottawa, Ontario. It is a developer and manufacturer of nondestructive testing quality control and condition assessment devices for the construction industry.
History
Giatec Scientific Inc. was co-founded by Pouria Ghods of Carleton University and Aali R. Alizadeh of the University of Ottawa in September 2010. The pair began working with advisers at Invest Ottawa, who arranged sources, funding and ideas to bring Giatec's products to the market.
The company's first product was a sensor to detect corrosion speed in the rebar/steel inside concrete. Unlike other non-destructive testing methods available at the time, Giatec used mobile-based applications software and smart technology to collect and analyze data.
In 2012, Giatec became independent of Invest Ottawa. That year, after the collapse of the Algo Centre Mall in Elliot Lake in 2012, Giatec's equipment was used in the forensic structural examination that was initiated as part of the public inquiry.
Giatec has developed a variety of testing devices and sensors for measurement of concrete permeability, electrical resistivity measurement of concrete, half-cell corrosion, corrosion rate, concrete temperature, and concrete maturity. Giatec won the Rio Info 2013 Innovation Award, and in 2014 the company was included in the Ottawa Business Journal's annual list of "Startups to Watch". Giatec has also been named one of Ottawa's Top 10 Fastest Growing Companies and, in 2018, one of Canada's Top 500 Fastest Growing Companies.
Giatec also began to develop Internet of Things (IoT) applications for the construction industry through wireless concrete temperature and maturity sensors. In March 2015, the company released a new electrical resistivity monitoring device that sends data directly to a smartphone through a downloadable application. In October 2016, Giatec released Smart Concrete, a new IoT-based solution for ready-mix concrete producers. Giatec later changed the name of their product due to opposition from Kryton International Inc., which holds trademark registrations for "Smart Concrete".
Giatec was awarded a $2.4M grant by the SDTC to commercialize a new clean-tech solution to optimize the amount of cement used by ready-mix concrete producers. On Nov. 19th, Paul Loucks (an Ottawa tech veteran and former CEO of Halogen Software) joined Giatec as its new CEO.
After 11 years of organic growth, Giatec raised single-digit million Euro strategic funding from HeidelbergCement in May 2022 followed by $5M from BDC Capital to develop and commercialize new software and sensor solutions for concrete monitoring and AI-based concrete mixture optimization.
Products
The Giatec product range of nondestructive testing devices can be divided into three areas: laboratory devices, which include products that can be used to measure the permeability of concrete specimens; hand-held portable field inspection devices that can be used to conduct in-situ condition assessment of concrete structures such as bridges; and embedded wireless sensors for real-time monitoring of concrete properties such as temperature, humidity, maturity, and strength.
References
External links
Concrete
Nondestructive testing
Sensors
Internet of things companies
2010 establishments in Canada | Giatec Scientific | [
"Materials_science",
"Technology",
"Engineering"
] | 656 | [
"Structural engineering",
"Measuring instruments",
"Nondestructive testing",
"Materials testing",
"Concrete",
"Sensors"
] |
39,335,050 | https://en.wikipedia.org/wiki/Force%20Concept%20Inventory | The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research. Men score on average about 10% higher.
References
Psychological tests and scales
Physics education | Force Concept Inventory | [
"Physics"
] | 247 | [
"Applied and interdisciplinary physics",
"Physics education"
] |
39,336,181 | https://en.wikipedia.org/wiki/Weather%20Machine | Weather Machine is a lumino kinetic bronze sculpture and columnar machine that serves as a weather beacon, displaying a weather prediction each day at noon. Designed and constructed by Omen Design Group Inc., the approximately sculpture was installed in 1988 in a corner of Pioneer Courthouse Square in Portland, Oregon, United States. Two thousand people attended its dedication, which was broadcast live nationally from the square by Today weatherman Willard Scott. The machine costs $60,000.
During its daily two-minute sequence, which includes a trumpet fanfare, mist, and flashing lights, the machine displays one of three metal symbols as a prediction of the weather for the following 24-hour period: a sun for clear and sunny weather, a blue heron for drizzle and transitional weather, or a dragon and mist for rainy or stormy weather. The sculpture includes two bronze wind scoops and displays the temperature via colored lights along its stem. The air quality index is also displayed by a light system below the stainless steel globe. Weather predictions are made based on information obtained by employees of Pioneer Courthouse Square from the National Weather Service and the Department of Environmental Quality. Considered a tourist attraction, Weather Machine has been praised for its quirkiness, and has been compared to a giant scepter.
Description and history
Weather Machine is a lumino kinetic bronze sculpture that serves as a weather beacon, designed and constructed by Omen Design Group Inc. Contributors included Jere and Ray Grimm, Dick Ponzi, who won a 40-entry international competition to design the machine for Pioneer Courthouse Square (1984), and Roger Patrick Sheppard. The group described their efforts as "collaborative", but Sheppard considered Ponzi the "maestro" of the project. Ponzi did the engineering and hydraulics, and the machine was assembled at his vineyard near Beaverton. The sculpture was inspired by Portland-born-and-based writer Terence O'Donnell, who suffered from osteomyelitis during his childhood, and his "funny Irish jig". Weather Machine, which took five years to plan and build and cost $60,000, was installed in the square in August 1988. Today weatherman Willard Scott broadcast live from the square to dedicate the sculpture on its August 24 opening. Two thousand people were present as early as 4 a.m. for the dedication. Financial contributors included Pete and Mary Mark, the AT&T Foundation, Alyce R. Cheatham, Alexandra MacColl, E. Kimbark MacColl, Meier & Frank, the Oregon Department of Environmental Quality, David Pugh and Standard Insurance Company. Information about the donors was included on a plaque added to the sculpture's stem in the weeks following the dedication.
Each day at noon, the columnar machine performs a two-minute sequence that begins with a trumpet fanfare of the opening bars of Aaron Copland's Fanfare for the Common Man, and produces mist and flashing lights. It eventually reveals one of three metal symbols: a stylized golden sun ("helia") for clear and sunny weather, a blue heron (Portland's official bird) for drizzle and transitional weather, or mist and a "fierce, open-mouthed" dragon for heavy rain or stormy weather. The fanciful symbols change at the same time every day, representing weather predictions for the following 24-hour period. "Helia", described as "gleaming", was designed by Jere Grimm; her design would later be applied to one of her husband's pots, exhibited in 1989. The trumpets are allowed to play at noon due to a waiver of Portland's noise ordinance for that time period. Ray Grimm constructed the blue heron symbol, and the group collaborated on the dragon symbol based on his drawings. In order for the machine to display an accurate weather prediction, as reported by The Oregonian in 1988, employees of Pioneer Courthouse Square contact the National Weather Service each morning at 10:30 a.m. for the forecast, and then enter information into the machine's computer, located behind a nearby door.
The machine, whose height is reported to be between , includes two bronze wind scoops that turn in opposite directions. It also indicates the temperature (when or above) via vertical colored lights along the sculpture's stem. Measured by an internal gauge, the machine displays blue lights for temperatures below freezing, white lights for above freezing and red lights to mark every ten degrees (°F). Referring to an additional light system (below the stainless steel globe) that indicates air quality, The Oregonian reported in 1988 that a green light indicates good air quality, amber reflects "semismoggy" air and a red light indicates poor air quality. However, in 1998, one writer for The Oregonian warned: "you don't want to breathe so much when the white light is on". Pioneer Courthouse Square employees enter air quality information into the machine's computer following routine checks with the Department of Environmental Quality.
In addition to its pre-dawn dedication on national television, Weather Machine had a public dedication at noon on August 24, attended by Mayor Bud Clark and other city officials. On that day, the machine displayed the sun symbol and a green light for good air quality, and indicated a temperature of . Following the fanfare, known officially as "Fanfare for Weather Machine with Four Trumpets", jazz singer Shirley Nanette led the crowd in a rendition of "You Are My Sunshine". Portland had good weather in the days following its dedication, preventing visitors from seeing all three symbols for an extended length of time (though all three symbols are displayed briefly during the daily two-minute sequence). This prompted the executive director of Pioneer Courthouse Square to consider altering the machine's schedule so that the public would have a chance to see all three symbols. The sculpture maintained good operation until winter 1995, when its mechanical performance temporarily began deviating away from noon and the temperature gauge had difficulties working properly. In 2012, the machine malfunctioned and stopped operating for about a week.
Reception
In the weeks following Weather Machine's dedication, an estimated 300 to 400 people gathered at the square daily to witness the noon sequence. Following the dedication, The Oregonian wrote: "It takes nothing from its fascination to know that a human on the staff of the square will be making the daily phone calls to the Weather Service and the Department of Environmental Quality, and pushing the necessary buttons to cue the pillar's performance ... They have given Portland an attraction no other city has. We're going to like it."
Ponzi described the machine as "light-hearted ... active, distinctive—and fun". O'Donnell, who inspired the sculpture, called it a "gentle spectacle" and described the work as "a cartoon contraption, an odd little thingamajig. It has bells and whistles and other mechanized wonders that confirm rain sometime after the downpour and proudly announce sunshine in the bright light of day." In 1994, The Oregonian reported that O'Donnell regarded Weather Machine with a "mixture of wonder and embarrassment" and stated that he "[didn't] think it [was] all that attractive". The publication's Vivian McInerny said of O'Donnell and the machine: "Practical people may wonder why the square needs such a silly weather machine when a glance out the window works as well .... And these practical people may be the very ones who make the world go 'round. But it is the less practical people, the dreamers like O'Donnell, who make it worth going 'round."
In 1995, The Oregonian's Jonathan Nicholas wrote, "To this day, nobody is exactly sure what happens when the thing sounds off each day at noon. It's like having a governor in blue jeans. We can't really explain it: It just happens." Grant Butler of The Oregonian gave the machine's trumpet fanfare as one of three examples of ways in which people could be certain it was noon in Portland.
The machine is considered a tourist attraction, recommended in visitor guides for Portland and included in walking tours. One travel contributor recommended a visit to the sculpture for people with children seeking a "perfect family day". Weather Machine has been compared to a giant scepter and has been called "bizarre", "eccentric", "playful", "unique", "wacky", "whimsical", "zany", and a "piece of wizardry".
See also
1988 in art
Allow Me (Portland, Oregon), a bronze sculpture also located in Pioneer Courthouse Square
References
External links
Weather Machine, (sculpture)., Smithsonian Institution
Grounds map (PDF), Pioneer Courthouse Square
Image showing "helia" symbol, Americans for the Arts (PDF, p. 7)
Summer at the Square (PDF), Pioneer Courthouse Square (2009)
1988 establishments in Oregon
1988 sculptures
Bronze sculptures in Portland, Oregon
Interactive art
Kinetic sculptures in the United States
Outdoor sculptures in Southwest Portland, Oregon
Sculptures of birds in Oregon
Sculptures of dragons
Sound sculptures
Stainless steel sculptures in Oregon
Steel sculptures in Portland, Oregon
Weather prediction | Weather Machine | [
"Physics"
] | 1,858 | [
"Weather",
"Weather prediction",
"Physical phenomena"
] |
39,337,116 | https://en.wikipedia.org/wiki/Biomaterial%20surface%20modifications | Biomaterials exhibit various degrees of compatibility with the harsh environment within a living organism. They need to be nonreactive chemically and physically with the body, as well as integrate when deposited into tissue. The extent of compatibility varies based on the application and material required. Often modifications to the surface of a biomaterial system are required to maximize performance. The surface can be modified in many ways, including plasma modification and applying coatings to the substrate. Surface modifications can be used to affect surface energy, adhesion, biocompatibility, chemical inertness, lubricity, sterility, asepsis, thrombogenicity, susceptibility to corrosion, degradation, and hydrophilicity.
Background of Polymer Biomaterials
Polytetrafluoroethylene (Teflon)
Teflon is a hydrophobic polymer composed of a carbon chain saturated with fluorine atoms. The carbon–fluorine bond is strongly polar, yet the tightly bound fluorine sheath is barely polarizable, so the surface supports only very weak Van der Waals attractions and other materials will not stick to it. Teflon is commonly used to reduce friction in biomaterial applications such as in arterial grafts, catheters, and guide wire coatings.
Polyetheretherketone (PEEK)
PEEK is a semicrystalline polymer composed of benzene, ketone, and ether groups. PEEK is known for having good physical properties including high wear resistance and low moisture absorption and has been used for biomedical implants due to its relative inertness inside of the human body.
Plasma modification of biomaterials
Plasma modification is one way to alter the surface of biomaterials to enhance their properties. During plasma modification techniques, the surface is subjected to high levels of excited gases that alter the surface of the material. Plasmas are generally generated with a radio frequency (RF) field. Additional methods include applying a large (~1 kV) DC voltage across electrodes engulfed in a gas. The plasma is then used to expose the biomaterial surface, which can break or form chemical bonds. This is the result of physical collisions or chemical reactions of the excited gas molecules with the surface. This changes the surface chemistry, and therefore the surface energy, of the material, which affects the adhesion, biocompatibility, chemical inertness, lubricity, and sterilization of the material. The table below shows several biomaterial applications of plasma treatments.
Abbreviations used in table: PC: polycarbonate, PS: polystyrene, PP: polypropylene, PET: poly (ethylene terephthalate), PTFE: polytetrafluoroethylene, UHMWPE: ultra high molecular weight PE, SiR: silicone rubber
Surface Energy
The surface energy is equal to the sum of disrupted molecular bonds that occur at the interface between two different phases. Surface energy can be estimated from contact angle measurements using a version of the Young–Laplace (Young) equation:

\gamma_{SV} = \gamma_{SL} + \gamma_{LV} \cos\theta

where \gamma_{SV} is the surface tension at the interface of solid and vapor, \gamma_{SL} is the surface tension at the interface of solid and liquid, \gamma_{LV} is the surface tension at the interface of liquid and vapor, and \theta is the measured contact angle. Plasma modification techniques alter the surface of the material, and subsequently the surface energy. Changes in surface energy then alter the surface properties of the material.
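A minimal sketch of how the Young balance might be evaluated from a single test liquid of known surface tension (the function name and example values are illustrative assumptions, not from the article):

```python
import math

def young_balance(gamma_lv: float, theta_deg: float) -> float:
    """Return gamma_SV - gamma_SL (mN/m) from Young's equation,
    given the liquid's surface tension and the measured contact angle."""
    return gamma_lv * math.cos(math.radians(theta_deg))

# Water (gamma_LV ~ 72.8 mN/m) beading on an untreated hydrophobic polymer:
print(young_balance(72.8, 110.0))  # negative -> low-energy, poorly wetted surface
# After plasma treatment the contact angle drops, raising the balance:
print(young_balance(72.8, 40.0))   # positive -> higher effective surface energy
```

A full surface-energy determination in practice requires several probe liquids and a model such as OWRK; this snippet only evaluates the Young balance itself.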
Surface Functionalization
Surface modification techniques have been extensively researched for the application of adsorbing biological molecules. Surface functionalization can be performed by exposing surfaces to RF plasma. Many gases can be excited and used to functionalize surfaces for a wide variety of applications. Common techniques include using air plasma, oxygen plasma, and ammonia plasma as well as other exotic gases. Each gas can have varying effects on a substrate. These effects decay with time as reactions with molecules in air and contamination occur.
Plasma Treatment to Reduce Thrombogenesis
Ammonia plasma treatment can be used to attach amine functional groups. These functional groups bind anticoagulants such as heparin, decreasing thrombogenicity.
Covalent Immobilization by Gas Plasma RF Glow Discharge
Polysaccharides have been used as thin film coatings for biomaterial surfaces. Polysaccharides are extremely hydrophilic and will have small contact angles. They can be used for a wide range of applications due to their wide range of compositions. They can be used to reduce the adsorption of proteins to biomaterial surfaces. Additionally, they can be used as receptor sites, targeting specific biomolecules. This can be used to activate specific biological responses.
Covalent attachment to a substrate is necessary to immobilize polysaccharides, which would otherwise rapidly desorb in a biological environment. This can be a challenge because the majority of biomaterials do not possess the surface properties needed to covalently attach polysaccharides. It can be achieved by the introduction of amine groups by RF glow discharge plasma. Gases used to form amine groups, including ammonia or n-heptylamine vapor, can be used to deposit a thin film coating containing surface amines. Polysaccharides must also be activated by oxidation of anhydroglucopyranoside subunits. This can be accomplished with sodium metaperiodate (NaIO4). This reaction converts anhydroglucopyranoside subunits to cyclic hemiacetal structures, which can be reacted with amine groups to form a Schiff base linkage (a carbon–nitrogen double bond). These linkages are unstable and easily dissociate. Sodium cyanoborohydride (NaBH3CN) can be used as a stabilizer by reducing the Schiff base to a stable amine linkage.
Surface Cleaning
There are many examples of contamination of biomaterials that are specific to the preparation or manufacturing process. Additionally, nearly all surfaces are prone to contamination of organic impurities in the air. Contamination layers are usually limited to a monolayer or less of atoms and are thus only detectable by surface analysis techniques, such as XPS. It is unknown whether this sort of contamination is harmful, yet it is still regarded as contamination and will most certainly affect surface properties.
Glow discharge plasma treatment is a technique that is used for cleaning contamination from biomaterial surfaces. Plasma treatment has been used for various biological evaluation studies to increase the surface energy of biomaterial surfaces, as well as cleaning. Plasma treatment has also been proposed for sterilization of biomaterials for potential implants.
Modification of Biomaterials with Polymer Coatings
Another method of altering surface properties of biomaterials is to coat the surface. Coatings are used in many applications to improve biocompatibility and alter properties such as adsorption, lubricity, thrombogenicity, degradation, and corrosion.
Adhesion of Coatings
In general, the lower the surface tension of a liquid coating, the easier it will be to form a satisfactory wet film from it. The difference between the surface tension of a coating and the surface energy of the solid substrate to which it is applied affects how the liquid coating flows out over the substrate. It also affects the strength of the adhesive bond between the substrate and the dry film. If, for instance, the surface tension of the coating is higher than the surface tension of the substrate, then the coating will not spread out and form a film. As the surface energy of the substrate is increased, it will reach a point where the coating successfully wets the substrate but has poor adhesion. Continued increase in the substrate's surface energy results in better wetting in film formation and better dry-film adhesion.
More specifically, whether a liquid coating will spread across a solid substrate can be determined from the surface energies of the involved materials by using the following equation:

S = \gamma_{s} - \gamma_{l} - \gamma_{sl}

where S is the coefficient of spreading, \gamma_{s} is the surface energy of the substrate in air, \gamma_{l} is the surface energy of the liquid coating in air and \gamma_{sl} is the interfacial energy between the coating and the substrate. If S is positive the liquid will cover the surface and the coating will adhere well. If S is negative the coating will not completely cover the surface, producing poor adhesion.
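A short sketch of this decision rule (the numerical values are illustrative placeholders, not measured data):

```python
def spreading_coefficient(gamma_s: float, gamma_l: float, gamma_sl: float) -> float:
    """S = gamma_s - gamma_l - gamma_sl, all in mN/m."""
    return gamma_s - gamma_l - gamma_sl

S = spreading_coefficient(gamma_s=45.0, gamma_l=30.0, gamma_sl=5.0)
print(S, "-> spreads, good adhesion" if S > 0 else "-> incomplete coverage, poor adhesion")
```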
Corrosion Protection
Organic coatings are a common way to protect a metallic substrate from corrosion. Up until about 1950 it was thought that coatings act as a physical barrier that keeps moisture and oxygen from contacting the metallic substrate and forming a corrosion cell. This cannot be the case, because the permeability of paint films is very high. It has since been discovered that corrosion protection of steel depends greatly upon the adhesion of a noncorrosive coating in the presence of water. With low adhesion, osmotic cells form underneath the coating with pressures high enough to form blisters, which expose more unprotected steel. Additional non-osmotic mechanisms have also been proposed. In either case, sufficient adhesion to resist displacement forces is required for corrosion protection.
Guide Wires
Guide wires are an example of an application for biomedical coatings. Guide wires are used in coronary angioplasty to correct the effects of coronary artery disease, a disease in which plaque builds up on the walls of the arteries. The guide wire is threaded up through the femoral artery to the obstruction. The guide wire guides the balloon catheter to the obstruction, where the catheter is inflated to press the plaque against the arterial walls. Guide wires are commonly made from stainless steel or Nitinol and require polymer coatings as a surface modification to reduce friction in the arteries. The coating of the guide wire can affect the trackability (the ability of the wire to move through the artery without kinking), the tactile feel (the ability of the doctor to feel the guide wire's movements), and the thrombogenicity of the wire.
Hydrophilic Coatings
Hydrophilic coatings can reduce friction in the arteries by up to 83% when compared to bare wires due to their high surface energy. When the hydrophilic coatings come into contact with bodily fluids they form a waxy surface texture that allows the wire to slide easily through the arteries. Guide wires with hydrophilic coatings have increased trackability and are not very thrombogenic; however the low coefficient of friction increases the risk of the wire slipping and perforating the artery.
Hydrophobic Coatings
Teflon and Silicone are commonly used hydrophobic coatings for coronary guide wires. Hydrophobic coatings have a lower surface energy and reduce friction in the arteries by up to 48%. Hydrophobic coatings do not need to be in contact with fluids to form a slippery texture. Hydrophobic coatings maintain tactile sensation in the artery, giving doctors full control of the wire at all times and reducing the risk of perforation; though, the coatings are more thrombogenic than hydrophilic coatings. The thrombogenicity is due to the proteins in the blood adapting to the hydrophobic environment when they adhere to the coating. This causes an irreversible change for the protein, and the protein remains stuck to the coating allowing for a blood clot to form.
Magnetic Resonance Compatible Guide Wires
Using an MRI to image the guide wire during use would have an advantage over using x-rays because the surrounding tissue can be examined while the guide wire is advanced. Because most guide wires' core materials are stainless steel, they are not capable of being imaged with an MRI. Nitinol wires are not magnetic and could potentially be imaged, but in practice the conductive nitinol heats up in the scanner's oscillating electromagnetic fields, which would damage surrounding tissues. An alternative being examined is to replace contemporary guide wires with PEEK cores coated with iron-particle-embedded synthetic polymers.
References
Biomaterials | Biomaterial surface modifications | [
"Physics",
"Biology"
] | 2,425 | [
"Biomaterials",
"Materials",
"Matter",
"Medical technology"
] |
39,339,697 | https://en.wikipedia.org/wiki/Iribarren%20number | In fluid dynamics, the Iribarren number or Iribarren parameter – also known as the surf similarity parameter and breaker parameter – is a dimensionless parameter used to model several effects of (breaking) surface gravity waves on beaches and coastal structures. The parameter is named after the Spanish engineer Ramón Iribarren Cavanilles (1900–1967), who introduced it to describe the occurrence of wave breaking on sloping beaches. The parameter is used to describe breaking wave types on beaches, as well as wave run-up on – and reflection by – beaches, breakwaters and dikes.
Iribarren's work was further developed by Jurjen Battjes in 1974, who named the parameter after Iribarren.
Definition
The Iribarren number – which is often denoted as Ir or ξ – is defined as:

\xi = \frac{\tan\alpha}{\sqrt{H / L_0}}

with

L_0 = \frac{g T^2}{2\pi}

where ξ is the Iribarren number, α is the angle of the seaward slope of a structure, H is the wave height, L0 is the deep-water wavelength, T is the period and g is the gravitational acceleration. Depending on the application, different definitions of H and T are used, for example: for periodic waves the wave height H0 at deep water or the breaking wave height Hb at the edge of the surf zone. Or, for random waves, the significant wave height Hs at a certain location.
Breaker types
The type of breaking wave – spilling, plunging, collapsing or surging – depends on the Iribarren number. According to Battjes (1974), for periodic waves propagating on a plane beach, two possible choices for the Iribarren number are:

\xi_0 = \frac{\tan\alpha}{\sqrt{H_0 / L_0}}

or

\xi_b = \frac{\tan\alpha}{\sqrt{H_b / L_0}}

where H0 is the offshore wave height in deep water, and Hb is the value of the wave height at the break point (where the waves start to break). Then the dependence of the breaker type on the Iribarren number (either ξ0 or ξb) is approximately:

surging or collapsing: ξ0 > 3.3, or ξb > 2.0
plunging: 0.5 < ξ0 < 3.3, or 0.4 < ξb < 2.0
spilling: ξ0 < 0.5, or ξb < 0.4
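A minimal sketch of this classification (the thresholds are the approximate Battjes (1974) ranges quoted above; the function names and example inputs are illustrative):

```python
import math

def iribarren(tan_alpha: float, H0: float, T: float, g: float = 9.81) -> float:
    """Deep-water Iribarren number: xi_0 = tan(alpha) / sqrt(H0 / L0)."""
    L0 = g * T**2 / (2 * math.pi)        # deep-water wavelength
    return tan_alpha / math.sqrt(H0 / L0)

def breaker_type(xi0: float) -> str:
    """Approximate breaker classification on a plane beach, using xi_0."""
    if xi0 < 0.5:
        return "spilling"
    if xi0 < 3.3:
        return "plunging"
    return "collapsing/surging"

xi = iribarren(tan_alpha=0.10, H0=1.5, T=8.0)  # 1:10 slope, 1.5 m waves, 8 s period
print(round(xi, 2), breaker_type(xi))          # ~0.82 -> plunging
```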
References
Footnotes
Other
Water waves
Dimensionless numbers of fluid mechanics
Coastal engineering | Iribarren number | [
"Physics",
"Chemistry",
"Engineering"
] | 395 | [
"Physical phenomena",
"Water waves",
"Coastal engineering",
"Waves",
"Civil engineering",
"Fluid dynamics"
] |
50,721,357 | https://en.wikipedia.org/wiki/Interleukin-38 | Interleukin-38 (IL-38) is a member of the interleukin-1 (IL-1) family and the interleukin-36 (IL-36) subfamily. It is important in inflammation and host defense. This cytokine is named IL-1F10 in humans and has a three-dimensional structure similar to that of the IL-1 receptor antagonist (IL-1Ra). The organisation of the IL-1F10 gene is conserved with other members of the IL-1 family within chromosome 2q13. IL-38 produced by mammalian cells may bind IL-1 receptor type I. It is expressed in the basal epithelia of the skin, in proliferating B cells of the tonsil, in the spleen and in other tissues. This cytokine plays an important role in the regulation of innate and adaptive immunity.
Discovery
IL-38 probably originated from a common ancestral gene – an ancient IL-1RN gene. This cytokine has 41% homology with IL-1Ra and 43% homology with IL-36Ra. IL-38 is expressed in skin, spleen, tonsil, thymus, heart, placenta and fetal liver. In tissues which do not play a special role in the immune response, IL-38 is expressed in low quantity, similar to other members of the IL-1 family. In disease settings, especially when the activation of the inflammatory response is dysregulated, the expression of IL-38 is altered – for example, in ankylosing spondylitis, cardiovascular disease, rheumatoid arthritis or hidradenitis suppurativa.
Processing and signaling
According to the consensus cleavage site of the IL-1 family, it is predicted that two amino acids (AA) should be removed to generate a processed 3–152AA IL-38 protein. The protease which cleaves IL-38 is still unknown, and it is also not known which form of IL-38 is the natural variant present in the human body. It has been reported that the 20–152AA IL-38 form has increased biological activity.
IL-38 has a non-characteristic dose–response curve, and it binds to IL-36R (IL-1R6). This cytokine blocks the Candida-induced interleukin-17 (IL-17) response better at low concentrations than at higher concentrations, even if the induction of the cytokine is not blocked. It is therefore possible that IL-38 released by apoptotic cells can bind to the Three Immunoglobulin Domain-containing IL-1 receptor-related 2 (TIGIRR-2, gene name IL1RAPL1, also known as IL-1R9), in which case IL-38 would have an antagonistic effect on the induction of inflammatory cytokines. It is possible that IL-38 is the first ligand of TIGIRR-2, a former orphan receptor of the IL-1 family.
Role in disease
Studies have shown that IL-38 could play an important role in rheumatic diseases. IL-38 is also one of five proteins whose levels are related to C-reactive protein (CRP) levels in the serum. The association of IL-38 with CRP could mean that IL-38 also plays a role in inflammatory diseases such as cardiovascular disease.
Function
Knockdown of IL-38 with siRNA in peripheral blood mononuclear cells increased the production of interleukin-6 (IL-6), APRIL and CCL-2 in response to TLR ligands, so IL-38 acted like an antagonist in this case. There are also studies which show an agonistic effect. One study compared the function of full-length IL-38 and truncated IL-38 and showed that high concentrations of the truncated IL-38 decreased production of IL-6 in response to interleukin-1β (IL-1β) in human macrophages, while the full-length form increased IL-6 at the same concentrations. IL-38 could therefore have agonistic as well as antagonistic effects, depending on processing and concentration.
When a spontaneous murine model of systemic lupus erythematosus (SLE) was treated with recombinant IL-38, the mice had fewer symptoms, such as proteinuria and skin lesions. Serum levels of IL-17 and interleukin-22 were also lower in these mice, which supports the in vitro observation that IL-38 could inhibit Th17 responses. Patients with SLE had higher serum concentrations of IL-38 than healthy controls, and patients with active disease had higher serum concentrations of IL-38 than patients with inactive disease.
Sjögren's syndrome is a disease related to SLE. Gland biopsies from patients with primary Sjögren's syndrome show increased expression of IL-38. The IL-36 axis is important for the modulation of this disease; IL-38 is probably an antagonist of IL-36 signaling, similar to IL-36Ra, which may play an important role in the pathogenesis of this autoimmune disease.
IL-38 has also been found in the synovium of patients with rheumatoid arthritis, as well as in mice with collagen-induced arthritis (CIA). IL-38 concentrations correlated with IL-1β. Overexpression of IL-38 ameliorated disease in murine models of collagen-induced and serum transfer-induced arthritis, but not in antigen-induced arthritis. TNF production and IL-17 responses were decreased in these models. These data show that IL-38 could have anti-inflammatory properties in rheumatoid arthritis and could probably be used in a therapeutic strategy.
References
Immunology
Cytokines | Interleukin-38 | [
"Chemistry",
"Biology"
] | 1,195 | [
"Immunology",
"Cytokines",
"Signal transduction"
] |
50,727,550 | https://en.wikipedia.org/wiki/East%20Ural%20Nature%20Reserve | East Ural Nature Reserve () is a Russian 'zapovednik' (strict nature reserve) that is near the site of the 1957 Kyshtym disaster, the world's second highest radioactivity release after Chernobyl. As a state "radiation reserve", the site functions for the protection of a contaminated area, and for long-term scientific study of the effects of radiation on the forest-steppe ecology on the east slope of the southern Ural Mountains. The reserve is situated in Ozyorsk, Chelyabinsk Oblast. It was formally established in 1968, and covers . The reserve, as of 2007, is under the control of Rosatom, a state-run corporation, which conducts regular radiation and radio ecological monitoring.
Topography
The East Ural Reserve is oblong in shape, pointing towards the northeast, with a width of approximately 10 km and a length of 50 km.
Ecoregion and climate
East Ural Nature Reserve is located in the West Siberian taiga ecoregion, a region that covers the West Siberian Plain, from the Urals to the Central Siberian Plateau. It is a region of extensive conifer boreal forests, and also extensive wetlands, including bogs and mires.
The climate of East Ural Nature Reserve is a cool-summer humid continental climate (Köppen climate classification subarctic climate). This climate is characterized by mild summers (only 1–3 months above 10 °C (50 °F)) and cold, snowy winters (coldest month below −3 °C (27 °F)).
Ecoeducation and access
As a state radiation and strict nature reserve, the East Ural Reserve is not accessible to the public.
See also
List of Russian Nature Reserves (class 1a 'zapovedniks')
References
External links
Map of East Ural Reserve, OpenStreetMap
Discussion of the East Ural Reserve, and map, at Wikimapia
Nature reserves in Russia
Radioactively contaminated areas
Protected areas established in 1968
1968 establishments in Russia
Geography of Chelyabinsk Oblast
Zapovednik | East Ural Nature Reserve | [
"Chemistry",
"Technology"
] | 404 | [
"Radioactively contaminated areas",
"Soil contamination",
"Radioactive contamination"
] |
50,728,706 | https://en.wikipedia.org/wiki/Oscillatory%20baffled%20reactor | A Continuous Oscillatory Baffled Reactor (COBR) is a specially designed chemical reactor that achieves plug flow under laminar flow conditions. Achieving plug flow has previously been limited to either a large number of continuous stirred-tank reactors (CSTRs) in series or conditions of highly turbulent flow. The technology incorporates annular baffles into a tubular reactor framework to create eddies when liquid is pushed up through the tube. Likewise, when liquid is on a downstroke through the tube, eddies are created on the other side of the baffles. Eddy generation on both sides of the baffles creates very effective mixing while still maintaining plug flow. By using a COBR, potentially higher yields of product can be made with greater control and reduced waste.
Design
A standard COBR consists of a 10–150 mm ID tube with equally spaced baffles throughout. There are typically two pumps in a COBR: one reciprocating pump generates continuous oscillatory flow, and a second pump creates net flow through the tube. This design offers control over mixing intensity that conventional tubular reactors cannot achieve. Each baffled cell acts as a CSTR, and because the secondary pump creates a net laminar flow, much longer residence times can be achieved relative to turbulent-flow systems.
With conventional tubular reactors, mixing is accomplished through stirring mechanisms or turbulent flow conditions, which are difficult to control. By changing variables such as baffle spacing or thickness, COBRs can operate with much better mixing control. For instance, it has been found that a spacing of 1.5 times the tube diameter is the most effective mixing condition; furthermore, vortex deformation increases with baffle thickness above 3 mm. A geometry sketch based on these figures is given below.
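A minimal layout helper built only from the dimensions quoted above (the function name and example values are illustrative assumptions, not from the article):

```python
def baffle_layout(tube_id_mm: float, tube_length_mm: float,
                  spacing_factor: float = 1.5) -> list[float]:
    """Axial baffle positions (mm), using the reported optimum
    spacing of 1.5 x tube inner diameter."""
    if not 10 <= tube_id_mm <= 150:
        raise ValueError("standard COBRs use 10-150 mm ID tubes")
    spacing = spacing_factor * tube_id_mm
    count = int(tube_length_mm // spacing)
    return [round(i * spacing, 1) for i in range(1, count + 1)]

print(baffle_layout(tube_id_mm=25, tube_length_mm=500))  # a baffle every 37.5 mm
```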
Biological applications
The low shear rate and enhanced mass transfer provided by the COBR make it an ideal reactor for various biological processes. For shear rate, it has been found that COBRs have an evenly distributed, five-fold reduction in shear rate relative to conventional tubular reactors; this is especially important for biological processes, given that high shear rates can damage microorganisms.
In the case of mass transfer, COBR fluid mechanics allows for an increase in oxygen gas residence time. Furthermore, the vortices created in COBRs cause gas bubbles to break up, increasing the surface area for gas transfer. For aerobic biological processes, therefore, COBRs again present an advantage. An especially promising aspect of COBR technology is its ability to scale up processes while still retaining the advantages in shear rate and mass transfer.
Limitations
Though the prospect for COBR applications in fields like bioprocessing is very promising, a number of improvements are necessary before wider adoption. There is additional complexity in the COBR design relative to other bioreactors, which can introduce complications in operation. Furthermore, for bioprocessing it is possible that fouling of baffles and internal surfaces becomes an issue. Perhaps the most significant advancement needed is further comprehensive study demonstrating that COBR technology can indeed be useful in industry. There are currently no COBRs in use at industrial bioprocessing plants, and the evidence of their effectiveness, though very promising and theoretically an improvement over current industrial reactors, is limited to smaller laboratory-scale experiments.
References
Bioreactors | Oscillatory baffled reactor | [
"Chemistry",
"Engineering",
"Biology"
] | 676 | [
"Bioreactors",
"Biological engineering",
"Chemical reactors",
"Biochemical engineering",
"Microbiology equipment"
] |
50,729,160 | https://en.wikipedia.org/wiki/Ascidian%20mitochondrial%20code | The ascidian mitochondrial code (translation table 13) is a genetic code found in the mitochondria of Ascidia.
Code
AAs = FFLLSSSSYY**CCWWLLLLPPPPHHQQRRRRIIMMTTTTNNKKSSGGVVVVAAAADDEEGGGG
Starts = ---M------------------------------MM---------------M------------
Base1 = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG
Base2 = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG
Base3 = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG
Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U).
Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), Valine (Val, V)
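The table above is itself machine-readable. A short sketch of building the codon table from those four strings and translating with it (function names are illustrative; the strings are copied verbatim from the table):

```python
aas    = "FFLLSSSSYY**CCWWLLLLPPPPHHQQRRRRIIMMTTTTNNKKSSGGVVVVAAAADDEEGGGG"
starts = "---M------------------------------MM---------------M------------"
base1  = "TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG"
base2  = "TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG"
base3  = "TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG"

# Codon -> amino acid, plus the set of alternative start codons.
codon_table  = {b1 + b2 + b3: aa
                for b1, b2, b3, aa in zip(base1, base2, base3, aas)}
start_codons = {b1 + b2 + b3
                for b1, b2, b3, s in zip(base1, base2, base3, starts) if s == "M"}

def translate(seq: str) -> str:
    """Translate a nucleotide sequence with the ascidian mitochondrial code."""
    seq = seq.upper().replace("U", "T")
    return "".join(codon_table[seq[i:i + 3]] for i in range(0, len(seq) - 2, 3))

assert codon_table["AGA"] == codon_table["AGG"] == "G"   # glycine, not Arg/STOP
assert start_codons == {"ATA", "ATG", "GTG", "TTG"}
print(translate("ATGAGAGGTTAA"))   # -> "MGG*"
```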
Differences from the standard code
Systematic range and comments
There is evidence from a phylogenetically diverse sample of tunicates (Urochordata) that AGA and AGG code for glycine. In other organisms, AGA/AGG code for either arginine or serine, and in vertebrate mitochondria they code for a STOP. Evidence for glycine translation of AGA/AGG was first found in 1993 in Pyura stolonifera and Halocynthia roretzi. It was then confirmed by tRNA sequencing and sequencing whole mitochondrial genomes.
Alternative initiation codons
ATA, GTG and TTG
ATT is the start codon for the CytB gene in Halocynthia roretzi.
See also
List of genetic codes
References
Molecular genetics
Gene expression
Protein biosynthesis | Ascidian mitochondrial code | [
"Chemistry",
"Biology"
] | 645 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
25,197,285 | https://en.wikipedia.org/wiki/Adamkiewicz%20reaction | The Adamkiewicz reaction is part of a biochemical test used to detect the presence of the amino acid tryptophan in proteins. When concentrated sulfuric acid is combined with a solution of protein and glyoxylic acid, a red/purple colour is produced. It was named after its discoverer, Albert Wojciech Adamkiewicz. Pure sulfuric acid and a minimal amount of pure formaldehyde, along with an oxidizing agent introduced into the sulfuric acid, allow the reaction to proceed. Later studies clarified the reaction's dependence on glyoxylic acid and its specific interaction with the amino acid tryptophan. These findings also shed light on the underlying chemical mechanism.
Dependence on glyoxylic acid
In 1901, researchers Frederick Hopkins and Sydney W. Cole determined that glyoxylic acid, an impurity in acetic acid, was an essential component in the Adamkiewicz reaction. It was observed that the characteristic violet-red colour of the reaction occurred only when glyoxylic acid was present in the acetic acid used in the reaction. Without glyoxylic acid, the reaction failed, even if other conditions remained unchanged. Their work demonstrated that glyoxylic acid, in the presence of concentrated sulfuric acid and tryptophan, reacted with proteins to produce the characteristic violet-red coloration of the Adamkiewicz reaction.
Mechanism and the indole ring
The reaction relies on the interaction between glyoxylic acid and the indole ring of the amino acid tryptophan, a structural feature found in most proteins. When proteins are exposed to concentrated sulfuric acid and glyoxylic acid, the indole group undergoes a reaction that produces a highly colored compound. This interaction highlights tryptophan's central role in the test, as proteins lacking this amino acid do not produce the characteristic color change. Hopkins and Cole further noted that the sulfuric acid provided the acidic environment and acted as an oxidizing agent necessary for the reaction to proceed.
Later studies proposed that the reaction involves a condensation process, where glyoxylic acid combines with the indole group of tryptophan to form a complex quinonoid structure. This process explains the strong color change observed in the test and has been key to understanding tryptophan's chemical properties and its function in proteins.
See also
Tryptophan
Glyoxylic acid
Indole
References
Organic reactions
Biochemistry | Adamkiewicz reaction | [
"Chemistry",
"Biology"
] | 507 | [
"Biochemistry",
"nan",
"Organic reactions"
] |
52,180,843 | https://en.wikipedia.org/wiki/Testing%20and%20inspection%20of%20diving%20cylinders | Transportable pressure vessels for high-pressure gases are routinely inspected and tested as part of the manufacturing process. They are generally marked as evidence of passing the tests, either individually or as part of a batch (some tests are destructive), and certified as meeting the standard of manufacture by the authorised testing agency, making them legal for import and sale. When a cylinder is manufactured, its specification, including manufacturer, working pressure, test pressure, date of manufacture, capacity and weight are stamped on the cylinder.
Most countries require diving cylinders to be checked on a regular basis. This usually consists of an internal visual inspection and a hydrostatic test. The inspection and testing requirements for scuba cylinders may be very different from the requirements for other compressed gas containers due to the more corrosive environment in which they are used. After a cylinder passes the test, the test date, (or the test expiry date in some countries such as Germany), is punched into the shoulder of the cylinder for easy verification at fill time. The international standard for the stamp format is ISO 13769, Gas cylinders - Stamp marking.
A hydrostatic test involves pressurising the cylinder to its test pressure (usually 5/3 or 3/2 of the working pressure) and measuring its volume before and after the test. A permanent increase in volume above the tolerated level means the cylinder fails the test and must be permanently removed from service.
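A minimal sketch of the pass/fail arithmetic, assuming a 10% permanent-expansion rejection limit — the article only says "above the tolerated level", so the limit here is an assumed placeholder for whatever the applicable standard specifies:

```python
def hydro_test(working_pressure_bar: float, total_expansion_ml: float,
               permanent_expansion_ml: float, ratio: float = 5 / 3,
               perm_limit: float = 0.10) -> dict:
    """Evaluate a volumetric-expansion hydrostatic test.

    ratio: test pressure as a multiple of working pressure (5/3 or 3/2).
    perm_limit: assumed maximum permanent/total expansion fraction.
    """
    perm_fraction = permanent_expansion_ml / total_expansion_ml
    return {"test_pressure_bar": round(ratio * working_pressure_bar, 1),
            "permanent_fraction": round(perm_fraction, 3),
            "result": "PASS" if perm_fraction <= perm_limit else "FAIL"}

print(hydro_test(232, total_expansion_ml=40.0, permanent_expansion_ml=2.0))
# {'test_pressure_bar': 386.7, 'permanent_fraction': 0.05, 'result': 'PASS'}
```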
An inspection may include external and internal inspection for damage, corrosion, and correct colour and markings. The failure criteria vary according to the published standards of the relevant authority, but may include inspection for bulges, overheating, dents, gouges, electrical arc scars, pitting, line corrosion, general corrosion, cracks, thread damage, defacing of permanent markings, and colour coding.
Gas filling operators may be required to check the cylinder markings and perform an external visual inspection before filling the cylinder and may refuse to fill non-standard or out-of-test cylinders.
Quality assurance during manufacture
Standard seamless high pressure aluminium cylinders do not have a limited life. They may be used until they fail test or inspection. Three cylinders from each batch are pulsation hydrostatically tested for 10,000 cycles from 0 to test pressure at 12 cycles per minute. An alternative test approved by the US Department of Transportation is 100,000 cycles from 0 to working pressure. For the batch to pass, there must be no leaks or failures. Bursting pressure when new is about 2.5 times working pressure.
Intervals between inspections and tests
A cylinder is due to be inspected and tested at the first time it is to be filled after the expiry of the interval as specified by the United Nations Recommendations on the Transport of Dangerous Goods, Model Regulations, or as specified by national or international standards applicable in the region of use.
An external visual pre-fill inspection should be done before filling a cylinder.
In the United States, an annual visual inspection is not required by the US DOT, though a hydrostatic test is required every five years. The visual inspection requirement is a diving industry standard based on observations made during a review by the National Underwater Accident Data Center.
In European Union countries a visual inspection is required every two and a half years, and a hydrostatic test every five years.
In Norway a hydrostatic test (including a visual inspection) is required three years after production date, then every two years.
In Australia the applicable Australian Standards require that cylinders are hydrostatically tested every twelve months.
In South Africa a hydrostatic test is required every four years, and visual inspection every two years, or more often if the service history or external condition suggests it is necessary. Eddy current testing of neck threads must be done according to the manufacturer's recommendations.
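The national rules listed above reduce to a simple lookup. A sketch with the structure and values transcribed from this section (treat it as illustrative, since the underlying regulations change):

```python
# Inspection/test intervals in years; None = not legally required.
INTERVALS = {
    "United States":  {"visual": None, "hydrostatic": 5},  # visual is industry practice
    "European Union": {"visual": 2.5,  "hydrostatic": 5},
    "Norway":         {"visual": 2,    "hydrostatic": 2},  # first test 3 y after manufacture
    "Australia":      {"visual": 1,    "hydrostatic": 1},
    "South Africa":   {"visual": 2,    "hydrostatic": 4},
}

def next_hydro_due(region: str, last_test_year: int) -> int:
    """Year the next hydrostatic test falls due in the given region."""
    return last_test_year + INTERVALS[region]["hydrostatic"]

print(next_hydro_due("European Union", 2023))  # -> 2028
```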
Procedures for periodic inspections and tests
If a cylinder passes the listed procedures, but the condition remains doubtful, further tests can be applied to ensure that the cylinder is fit for use. Cylinders that fail the tests or inspection and cannot be fixed should be rendered unserviceable after notifying the owner of the reason for failure.
Identification of cylinder and preparation for testing
Before starting work the cylinder must be identified from the labelling and permanent stamp markings, and the ownership and contents verified.
Depressurisation and removal of cylinder valve
Before internal inspection the valve must be removed after depressurising and verifying that the valve is open. Cylinders containing breathing gases do not need special precautions for discharge except that high oxygen fraction gases should not be released in an enclosed space because of the fire hazard. If the valve is blocked or stuck closed it may be necessary to release the pressure by removing the burst disc or drilling into the valve body below the valve seat. These operations require care to avoid injury.
External visual inspection
Before inspection the cylinder must be clean and free of loose coatings, corrosion products and other materials which may obscure the surface. Foreign materials may be removed by brushing, controlled shot-blasting, water-jet cleaning, chemical cleaning or other non-destructive methods. The method used must not remove a significant amount of cylinder material, and steel cylinders may not be heated above 300 °C. Aluminium cylinders are even more restricted in the temperatures permitted, which are specified by the manufacturer.
The cylinder is inspected for dents, cracks, gouges, cuts, bulges, laminations and excessive wear, heat damage, torch or electric arc burns, and corrosion damage. The cylinder is also checked for illegible, incorrect or unauthorised permanent stamp markings, and unauthorised additions or modifications. If the cylinder exceeds the rejection criteria for these items it is unsuitable for further service and will be made permanently unserviceable.
Typical rejection criteria include:
Bulges
Burns
Cracks
Cuts
Dents
Defaced stamp markings
Gouges
Corrosion
General corrosion
Line corrosion
Pit corrosion
(details to be added)
Internal visual inspection
Unless the cylinder walls are examined by ultrasonic methods, the interior must be visually inspected using sufficient illumination to identify any damage and defects, particularly corrosion. If the inner surface is not clearly visible it should be cleaned by an approved method which does not remove a significant amount of wall material. Methods allowed include shot-blasting, water jet cleaning, flailing, steam or hot water jet, rumbling and chemical cleaning. The cylinder must be internally inspected after cleaning.
Typical rejection criteria include:
Cracks
Corrosion
General corrosion
Line corrosion
Pit corrosion
Damage to neck threads
(details to be added)
Supplementary tests
When there is uncertainty whether a defect found during visual inspection meets the rejection criteria, additional tests may be applied, such as ultrasonic measurement of pitting wall thickness, or weight checks to establish total weight lost to corrosion. Hardness tests on aluminium cylinders are done on the cylindrical body and must avoid making deep impressions.
Cylinder neck inspection
While the valve is off, the threads of cylinder and valve must be checked to identify the thread type and condition. The threads of cylinder and valve must be of matching thread specification, clean and full form, undamaged and free of cracks, burrs and other imperfections. Tap marks are acceptable and should not be confused with cracks. Other neck surfaces will also be examined to be sure they are free from cracks. In some cases threads may be re-tapped, but if the threads are altered they must be checked with the appropriate thread gauges.
The aluminium alloys used for diving cylinders are 6061 and 6351. 6351 alloy is subject to sustained load cracking, and cylinders manufactured of this alloy should be periodically eddy current tested according to national legislation and the manufacturer's recommendations. 6351 alloy has been superseded for new manufacture, but many old cylinders are still in service, and are still legal and considered safe if they pass the periodic hydrostatic, visual and eddy current tests required by regulation and as specified by the manufacturer. The number of cylinders that have failed catastrophically is on the order of 50 out of some 50 million manufactured. A larger number have failed the eddy current test and visual inspection of neck threads, or have leaked and been removed from service without harm to anyone.
Pressure test or ultrasonic examination
Ultrasonic inspection may be substituted for the pressure test, which is usually a hydrostatic test and may be either a proof test or a volumetric expansion test, depending on the cylinder design specification. Test pressure is specified in the stamp markings of the cylinder. The results of a correctly performed pressure test are final.
Inspection of valve
Valves that are to be reused must be inspected and maintained to ensure they remain fit for service.
The recommended practice for valve inspection and maintenance includes inspection, and where applicable correction of threads, cleaning of components, replacement of excessively worn and damaged parts, packing and safety devices, lubrication as applicable with approved lubricants for the gas service, checks for correct operation and sealing at intended operating pressure. Checks may be done with the valve fitted to the cylinder after inspection and testing, or before the valve is fitted.
Gauging of threads may be mandatory to ensure the integrity of parallel threads. If the gauge exceeds the maximum gauge limit for taper threads, re-tapping may be considered at the discretion of the competent person.
Final operations
The interior of the cylinder must be thoroughly dried immediately after cleaning or hydrostatic testing, and the interior inspected to ensure that there is no trace of free water or other contaminants.
If the cylinder is repainted or plastic coated, the temperature must not exceed 300 °C for steel cylinders, or the temperature specified by the manufacturer for aluminium cylinders.
Before fitting the valve the thread type must be checked to ensure that a valve with matching thread specification is fitted. Fitting of valves should follow the procedures specified in ISO 13341 Transportable gas cylinders - Fitting of valves to gas cylinders.
After the tests have been satisfactorily completed, a cylinder passing the test will be marked accordingly. Stamp marking will include the registered mark of the inspection facility and the date of testing (month and year).
Records of a periodic inspection and test are made by the test station and kept available for inspection. These include:
Identification of the cylinder:
name of current owner;
cylinder serial number;
mass of cylinder;
name of the cylinder manufacturer;
manufacturer's serial number;
cylinder design specification;
cylinder water capacity or size;
date of test during manufacture.
Records of the tests and inspections:
type of inspections and tests done;
test pressure;
date of the test;
whether the cylinder passed or failed the inspections and tests (giving reasons for failure);
identification stamp mark of the test station;
identification of tester;
details of any repairs made.
Rejection and rendering cylinder unserviceable
If a cylinder fails inspection or testing and cannot be recovered, the owner must be notified before making the empty cylinder unserviceable by crushing, burning a hole in the shoulder, irregular cutting of the neck or cylinder or bursting using a safe method. If the owner does not give permission they become legally responsible for any consequences.
Pre-fill visual inspection
Before filling a cylinder the filling operator may be required by regulations, code of practice, or operations manual, to inspect the cylinder and valve for any obvious external defects or damage, and to reject for filling any cylinder that does not comply with the standards. They may also be required to record cylinder details in the filling log.
International variation
In South Africa test stations are accredited by the South African National Accreditation System (SANAS) under the approval of the Department of Employment and Labour.
Notes
References
CEN. EN 1089-2:2002 Transportable gas Cylinders, Part 2 - Precautionary labels Superseded by EN ISO 7225:2007.
CEN. EN 1089-3:2004 Transportable gas Cylinders, Part 3 - Colour coding Current standard.
Sources
External links
Pressure vessels
Safety
Quality assurance | Testing and inspection of diving cylinders | [
"Physics",
"Chemistry",
"Engineering"
] | 2,368 | [
"Structural engineering",
"Chemical equipment",
"Physical systems",
"Hydraulics",
"Pressure vessels"
] |
52,182,319 | https://en.wikipedia.org/wiki/QIIME | QIIME (pronounced "chime") is a bioinformatics data science platform, originally developed for analysis of high-throughput microbiome marker gene (e.g., 16S or 18S rRNA genes) amplicon sequencing data. There have been two major versions of the QIIME platform, QIIME 1 and QIIME 2.
While microbiome marker gene analysis continues to be a major focus in QIIME 2, the developers describe it as a microbiome multi-omics platform, and support exists or is being added for analysis of shotgun metagenomics and metatranscriptomics data, as well as metabolomics mass spectrometry data.
Development of QIIME 1 was initiated in the Knight Lab at the University of Colorado at Boulder, and the first version of QIIME 1 was released on 26 January 2010. Beginning in August 2011, QIIME 1 development was led as a collaboration between the Caporaso Lab at Northern Arizona University and the Knight Lab. QIIME 2 development is led by the Caporaso Lab, but the project remains a community effort, with developers dispersed around the world. In January 2018, QIIME 2 succeeded QIIME 1; the QIIME 2 community offers official help through the QIIME 2 forum.
"QIIME" was originally coined as an acronym for Quantitative Insights Into Microbial Ecology, but since the development of QIIME 2 this acronym has not been used.
See also
QIIME 2 website
Microbial ecology
Microbiome
References
Bioinformatics software
Metagenomics | QIIME | [
"Biology"
] | 312 | [
"Bioinformatics",
"Bioinformatics software"
] |
52,187,512 | https://en.wikipedia.org/wiki/Topological%20recursion | In mathematics, topological recursion is a recursive definition of invariants of spectral curves.
It has applications in enumerative geometry, random matrix theory, mathematical physics, string theory, knot theory.
Introduction
The topological recursion is a construction in algebraic geometry. It takes as initial data a spectral curve: the data of (Σ, x, ω_{0,1}, ω_{0,2}), where: x : Σ → Σ_0 is a covering of Riemann surfaces with ramification points; ω_{0,1} is a meromorphic differential 1-form on Σ, regular at the ramification points; ω_{0,2} is a symmetric meromorphic bilinear differential form on Σ² having a double pole on the diagonal and no residue.
The topological recursion is then a recursive definition of infinite sequences of symmetric meromorphic n-forms ω_{g,n} on Σ^n, with poles at ramification points only, for integers g ≥ 0 such that 2g−2+n > 0. The definition is a recursion on the integer 2g−2+n.
In many applications, the n-form ω_{g,n} is interpreted as a generating function that measures a set of surfaces of genus g and with n boundaries. The recursion is on 2g−2+n, minus the Euler characteristic, whence the name "topological recursion".
Origin
The topological recursion was first discovered in random matrices. One main goal of random matrix theory is to find the large-size asymptotic expansion of n-point correlation functions, and in some suitable cases, the asymptotic expansion takes the form of a power series. The n-form ω_{g,n} is then the gth coefficient in the asymptotic expansion of the n-point correlation function. It was found that the coefficients always obey the same recursion on 2g−2+n. The idea to consider this universal recursion relation beyond random matrix theory, and to promote it as a definition of invariants of algebraic curves, occurred in Eynard–Orantin 2007, who studied the main properties of those invariants.
An important application of topological recursion was to Gromov–Witten invariants. Mariño and BKMP conjectured that the Gromov–Witten invariants of a toric Calabi–Yau 3-fold X are the TR invariants of a spectral curve that is the mirror of X.
Since then, topological recursion has generated a lot of activity in particular in enumerative geometry.
The link to Givental formalism and Frobenius manifolds has been established.
Definition
(Case of simple branch points. For higher order branchpoints, see the section Higher order ramifications below)
For g ≥ 0 and n ≥ 0 with 2g−2+n > 0, writing J = {z_1, …, z_n}:

\omega_{g,n+1}(z_0, J) = \sum_{a} \operatorname*{Res}_{z \to a} K(z_0, z) \left( \omega_{g-1,n+2}(z, \sigma(z), J) + \sum_{\substack{g_1+g_2 = g \\ I_1 \sqcup I_2 = J}}{}' \omega_{g_1, 1+|I_1|}(z, I_1)\, \omega_{g_2, 1+|I_2|}(\sigma(z), I_2) \right)

where the sum runs over the ramification points a, and K is called the recursion kernel:

K(z_0, z) = \frac{\tfrac12 \int_{z'=\sigma(z)}^{z} \omega_{0,2}(z_0, z')}{\omega_{0,1}(z) - \omega_{0,1}(\sigma(z))}

and σ is the local Galois involution near a branch point a; it is such that x(σ(z)) = x(z).
The primed sum means excluding the two terms (g_1, I_1) = (0, ∅) and (g_2, I_2) = (0, ∅).
For g ≥ 2 and n = 0:

\omega_{g,0} = F_g = \frac{1}{2-2g} \sum_{a} \operatorname*{Res}_{z \to a} \Phi(z)\, \omega_{g,1}(z)

with Φ any antiderivative of ω_{0,1} (i.e. dΦ = ω_{0,1}).
The definition of F_0 = ω_{0,0} and F_1 = ω_{1,0} is more involved and can be found in the original article of Eynard–Orantin.
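As a concrete illustration (not part of the original article), one can run the recursion once on the Airy spectral curve, here taken with the convention x(z) = z²/2, y(z) = −z (so ω_{0,1} = −z² dz) and ω_{0,2}(z_1, z_2) = dz_1 dz_2/(z_1 − z_2)²; signs of y, and hence of the resulting forms, vary between references. The only branch point is a = 0, with σ(z) = −z, and the kernel evaluates to

K(z_0, z) = \frac{\tfrac12 \int_{-z}^{z} \frac{dz_0\, dz'}{(z_0 - z')^2}}{-2 z^2\, dz} = -\frac{dz_0}{2 z\,(z_0^2 - z^2)\, dz}.

The first step of the recursion then gives

\omega_{0,3}(z_0, z_1, z_2) = \operatorname*{Res}_{z \to 0} K(z_0, z) \big( \omega_{0,2}(z, z_1)\, \omega_{0,2}(-z, z_2) + (z_1 \leftrightarrow z_2) \big) = \frac{dz_0\, dz_1\, dz_2}{z_0^2\, z_1^2\, z_2^2},

and similarly \omega_{1,1}(z_0) = \operatorname*{Res}_{z \to 0} K(z_0, z)\, \omega_{0,2}(z, -z) = \frac{dz_0}{8 z_0^4}, consistent with the Witten–Kontsevich numbers \langle \tau_0^3 \rangle_0 = 1 and \langle \tau_1 \rangle_1 = 1/24 (see below).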
Main properties
Symmetry: each ω_{g,n} is a symmetric n-form on Σ^n.
Poles: each ω_{g,n} is meromorphic; it has poles only at branchpoints, with vanishing residues.
Homogeneity: ω_{g,n} is homogeneous of degree 2−2g−n. Under the change ω_{0,1} → λ ω_{0,1}, we have ω_{g,n} → λ^{2−2g−n} ω_{g,n}.
Dilaton equation:

\sum_{a} \operatorname*{Res}_{z \to a} \Phi(z)\, \omega_{g,n+1}(z_1, \dots, z_n, z) = (2-2g-n)\, \omega_{g,n}(z_1, \dots, z_n)

where dΦ = ω_{0,1}.
Loop equations: The following forms have no poles at branchpoints

\omega_{g-1,n+2}(z, \sigma(z), J) + \sum_{\substack{g_1+g_2 = g \\ I_1 \sqcup I_2 = J}} \omega_{g_1,1+|I_1|}(z, I_1)\, \omega_{g_2,1+|I_2|}(\sigma(z), I_2)

where the sum has no prime, i.e. no term excluded.
Deformations: The ω_{g,n} satisfy deformation equations
Limits: given a family of spectral curves S_t, whose limit as t → 0 is a singular curve, resolved by rescaling by a power of t, then lim_{t→0} ω_{g,n}(S_t) = ω_{g,n}(lim_{t→0} S_t).
Symplectic invariance: In the case where Σ is a compact algebraic curve with a marking of a symplectic basis of cycles, x is meromorphic and y is meromorphic and ω_{0,2} is the fundamental second kind differential normalized on the marking, then the spectral curves (Σ, x, y dx, ω_{0,2}) and (Σ, y, x dy, ω_{0,2}) have the same F_g = ω_{g,0}, shifted by some terms.
Modular properties: In the case where Σ is a compact algebraic curve with a marking of a symplectic basis of cycles, and ω_{0,2} is the fundamental second kind differential normalized on the marking, then the invariants ω_{g,n} are quasi-modular forms under the modular group of marking changes. The invariants satisfy BCOV equations.
Generalizations
Higher order ramifications
In case the branchpoints are not simple, the definition is amended as follows (simple branchpoints correspond to k=2):
The first sum is over partitions of with non empty parts , and in the second sum, the prime means excluding all terms such that .
is called the recursion kernel:
The base point * of the integral in the numerator can be chosen arbitrarily in a vicinity of the branchpoint, the invariants will not depend on it.
Topological recursion invariants and intersection numbers
The invariants ω_{g,n} can be written in terms of intersection numbers of tautological classes:
(*)
where the sum is over dual graphs of stable nodal Riemann surfaces of total arithmetic genus g, with n smooth labeled marked points p_1, …, p_n, and equipped with a map sending each component to a branchpoint.
ψ_i is the Chern class of the cotangent line bundle whose fiber is the cotangent plane at the i-th marked point.
κ_k is the k-th Mumford kappa class.
The coefficients appearing in (*) are the Taylor expansion coefficients of ω_{0,1} and ω_{0,2} in the vicinity of the branchpoints, as follows:
in the vicinity of a branchpoint a (assumed simple), a local coordinate is ζ = \sqrt{x - x(a)}. The Taylor expansion of ω_{0,2} near the branchpoints a, a′ defines the coefficients
.
The Taylor expansion at , defines the 1-forms coefficients
whose Taylor expansion near a branchpoint is
.
Write also the Taylor expansion of
.
Equivalently, the coefficients can be found from expansion coefficients of the Laplace transform, and the coefficients are the expansion coefficients of the log of the Laplace transform
.
For example, we have
The formula (*) generalizes the ELSV formula, as well as Mumford's formula and the Mariño–Vafa formula.
Some applications in enumerative geometry
Mirzakhani's recursion
M. Mirzakhani's recursion for hyperbolic volumes of moduli spaces is an instance of topological recursion.
For the choice of spectral curve
the n-form is the Laplace transform of the Weil-Petersson volume
where \mathcal{M}_{g,n}(L_1, \dots, L_n) is the moduli space of hyperbolic surfaces of genus g with n geodesic boundaries of respective lengths L_1, …, L_n, and dVol_{WP} is the Weil–Petersson volume form.
The topological recursion for the n-forms ω_{g,n} is then equivalent to Mirzakhani's recursion.
Witten–Kontsevich intersection numbers
For the choice of spectral curve

x(z) = \tfrac12 z^2, \qquad y(z) = -z, \qquad \omega_{0,2}(z_1, z_2) = \frac{dz_1\, dz_2}{(z_1 - z_2)^2},

the n-form is

\omega_{g,n}(z_1, \dots, z_n) = \sum_{d_1, \dots, d_n} \langle \tau_{d_1} \cdots \tau_{d_n} \rangle_g \prod_{i=1}^{n} \frac{(2 d_i + 1)!!\, dz_i}{z_i^{2 d_i + 2}},

where \langle \tau_{d_1} \cdots \tau_{d_n} \rangle_g is the Witten–Kontsevich intersection number of Chern classes of cotangent line bundles in the compactified moduli space of Riemann surfaces of genus g with n smooth marked points.
Hurwitz numbers
For the choice of spectral curve
the n-form is
where H_{g,\mu} is the connected simple Hurwitz number of genus g with ramification μ: the number of branched covers of the Riemann sphere by a genus g connected surface, with 2g−2+n simple ramification points, and one point with ramification profile given by the partition μ.
Gromov–Witten numbers and the BKMP conjecture
Let X be a toric Calabi–Yau 3-fold, with Kähler moduli t_1, …, t_k.
Its mirror manifold is singular over a complex plane curve Σ given by a polynomial equation P(e^x, e^y) = 0, whose coefficients are functions of the Kähler moduli.
For the choice of spectral curve
with ω_{0,2} the fundamental second kind differential on Σ,
According to the BKMP conjecture, the n-form ω_{g,n} is
where
N_{g,\beta,w} is the genus g Gromov–Witten number, representing the number of holomorphic maps of a surface of genus g into X, with n boundaries mapped to a special Lagrangian submanifold L. β is the 2nd relative homology class of the surface's image, and w_1, …, w_n are homology classes (winding numbers) of the boundary images.
The BKMP conjecture has since then been proven.
Notes
References
Topology
Algebraic geometry
Mathematical physics
String theory | Topological recursion | [
"Physics",
"Astronomy",
"Mathematics"
] | 1,607 | [
"Astronomical hypotheses",
"Applied mathematics",
"Theoretical physics",
"Fields of abstract algebra",
"Topology",
"Space",
"Geometry",
"Algebraic geometry",
"Spacetime",
"String theory",
"Mathematical physics"
] |
42,068,958 | https://en.wikipedia.org/wiki/Drinfeld%E2%80%93Sokolov%E2%80%93Wilson%20equation | The Drinfeld–Sokolov–Wilson (DSW) equations are an integrable system of two coupled nonlinear partial differential equations proposed by Vladimir Drinfeld and Vladimir Sokolov, and independently by George Wilson. One commonly quoted form of the system is:

\partial_t u + 3 v\, \partial_x v = 0
\partial_t v + 2\, \partial_x^3 v + 2 u\, \partial_x v + v\, \partial_x u = 0
References
Nonlinear partial differential equations
Integrable systems | Drinfeld–Sokolov–Wilson equation | [
"Physics"
] | 57 | [
"Integrable systems",
"Theoretical physics",
"Theoretical physics stubs"
] |
42,070,652 | https://en.wikipedia.org/wiki/Abnormal%20grain%20growth | In materials science, abnormal or discontinuous grain growth, also referred to as exaggerated or secondary recrystallisation grain growth, is a grain growth phenomenon in which certain energetically favorable grains (crystallites) grow rapidly in a matrix of finer grains, resulting in a bimodal distribution of grain size.
In ceramic materials, this phenomenon can result in the formation of elongated prismatic, acicular (needle-like) grains in a densified matrix. This microstructure has the potential to improve fracture toughness by impeding the propagation of cracks.
Mechanisms
Abnormal grain growth (AGG) is encountered in metallic or ceramic systems exhibiting one or more of several characteristics:
Systems with secondary phase inclusions, precipitates or impurities above a certain threshold concentration.
Systems with a highly anisotropic surface energy.
Systems far from chemical equilibrium.
Abnormal grain growth occurs due to very high local rates of interface migration and is enhanced by the localized formation of liquid at grain boundaries. In 2023, Liss et al. have shown that the spontaneous activation of a grain boundary opens diffusion pathways, leading to the activation of one grain in an otherwise inactive microstructure and allowing the grain to rotate and coalesce with a neighbor grain. However, due to competition with the surrounding grains, rotation may proceed erratically. Coupled with spontaneous activation, this makes abnormal grain growth a largely erratic process. While the activation of grain boundaries (leading to rotation and growth) can occur at temperatures well below the temperatures required for partial melting of the grain boundaries, the effect is emphasized when melting occurs.
Significance
In the sintering of ceramic materials, abnormal grain growth is often viewed as an undesirable phenomenon because rapidly growing grains may lower the hardness of the bulk material through Hall-Petch-type effects. However, the controlled introduction of dopants to bring about controlled AGG may be used to impart fibre-toughening in ceramic materials. Additionally, AGG is undesirable in piezoelectric ceramics, as it may degrade the piezoelectric effect.
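The Hall–Petch relation mentioned above makes the hardness penalty easy to see numerically. A minimal sketch (the parameter values are invented for illustration and are not from the article):

```python
import math

def hall_petch_strength(d_um: float, sigma0_mpa: float = 100.0,
                        k: float = 600.0) -> float:
    """Hall-Petch yield strength (MPa) for grain size d in micrometres:
    sigma_y = sigma_0 + k / sqrt(d)."""
    return sigma0_mpa + k / math.sqrt(d_um)

fine_grain, abnormal_grain = 1.0, 100.0    # bimodal sizes after AGG, in um
print(hall_petch_strength(fine_grain))      # 700 MPa
print(hall_petch_strength(abnormal_grain))  # 160 MPa -> abnormal grains are softer
```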
Example systems
Rutile (TiO2) frequently exhibits a prismatic or acicular growth habit. In the presence of alkali dopants or a solid-state ZrSiO4 dopant, rutile has been observed to crystallise from an anatase parent-phase in the form of abnormally large grains existing in a matrix of finer, equiaxed anatase or rutile grains.
Alumina, Al2O3 with silica and/or yttria dopants/impurities has been reported to exhibit undesirable AGG.
BaTiO3 barium titanate with an excess of TiO2 is known to exhibit abnormal grain growth with profound consequences on piezoelectric performance.
Tungsten carbide has been reported to exhibit AGG of faceted grains in the presence of a liquid cobalt-containing phase at grain boundaries
Silicon nitride (Si3N4) may exhibit AGG depending on the size distribution of β-phase material in an α-Si3N4 precursor. This type of grain growth is of importance in the toughening of silicon nitride materials
Silicon carbide has been shown to exhibit improved fracture toughness as the result of AGG processes yielding elongated crack-tip/wake-bridging grains, with consequences for applications in ballistic armor. This enhancement of fracture toughness in ceramic materials via crack-bridging resulting from AGG is consistent with reported morphological effects on crack propagation in ceramics
Strontium barium niobate, used for electro-optics and dielectric applications, is known to exhibit AGG with significant consequences on the electronic performance of the material
Calcium titanate (CaTiO3, perovskite) systems doped with BaO have been observed to exhibit AGG without the formation of liquid as the result of polytype interfaces between solid phases
See also
Crystallites
Fractography
Grain boundary
Metallurgy
Micrography
Micrograph
Microstructure
Sintering
References
External links
Abnormal Grain Growth by Cyclic Heat Treatment
University of Virginia, Surface Energy
Materials science
Crystallography
Mineralogy concepts | Abnormal grain growth | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 849 | [
"Applied and interdisciplinary physics",
"Materials science",
"Crystallography",
"Condensed matter physics",
"nan"
] |
42,071,402 | https://en.wikipedia.org/wiki/Transcriptome%20in%20vivo%20analysis%20tag | A transcriptome in vivo analysis tag (TIVA tag) is a multifunctional, photoactivatable mRNA-capture molecule designed for isolating mRNA from a single cell in complex tissues.
Background
A transcript is an RNA molecule that is copied, or transcribed, from a DNA template. A transcript can be further processed by alternative splicing, which is the retention of different combinations of exons. These unique combinations of exons are termed RNA transcript isoforms. The transcriptome is the set of all RNA, including rRNA, mRNA, tRNA, and non-coding RNA. Specifically, mRNA transcripts can be used to investigate differences in gene expression patterns. Transcriptome profiling is determining the composition of transcripts and their relative expression levels in a given reference set of cells. This analysis involves characterization of all functional genomic elements, coding and non-coding.
The current RNA capture methods involve sorting cells in suspension from acutely dissociated tissue, and thus can lose information about cell morphology and microenvironment. Transcript abundance and isoforms are significantly different across tissues and are continually changing throughout an individual’s life. Gene expression is highly tissue specific, therefore with traditional RNA capture methods one must be cautious in the interpretation of gene expression patterns, as they often reflect expression of a heterogeneous mix of cell populations. Even in the same cell type, tissue measurements, where a population of cells is obtained, mask both low-level mRNA expression in single cells and variation in expression between cells. The photoactivatable TIVA tag is engineered to capture the mRNA of a single cell in complex tissues.
Chemical structure
TIVA tags are created initially via solid-phase synthesis, with the cell-penetrating peptide conjugated afterwards. The functional components of the tag can be summarized as follows:
Biotin: binds to streptavidin beads for tag isolation.
Cy3 fluorophore: used to validate cleavage of the photocleavable linker. If cleaved, the cell will appear green upon exposure to 514 nm light.
Cy5 fluorophore: used to validate uptake into cells. If uptake is successful, and if Cy5 has not yet been cleaved from the TIVA tag, energy from 514 nm light will be transferred via FRET from Cy3 to Cy5, and cells that have taken up the TIVA tag will appear red.
PolyU 18-mer oligonucleotide: used to bind mRNA via complementary base pairing of their polyadenylated tails. Before cleavage of photocleavable linkers, it is caged by complementary base pairing to two polyA 7-mer oligonucleotides.
PolyA 7-mer oligonucleotides: before the cleavage of photocleavable linkers, 2 polyA 7-mer molecules conjugate to polyU oligonucleotides to cage the TIVA tag, and thus prevent it from binding mRNA molecules. After photocleavable linkers are cleaved, the melting temperature decreases from 59 °C to less than 25 °C, leading to the disassociation of the PolyA 7-mer oligonucleotides from the TIVA tag.
Photocleavable linker: links and stabilizes Cy5 fluorophore and PolyA 7-mer oligonucleotides to the TIVA tag. It is cleaved upon photoactivation.
Cell-penetrating peptide CPP: guides the TIVA tag through cell membranes into tissues. It is linked to the TIVA tag by a disulphide bond that is cleaved once exposed to extracellular environment.
Methodology of a TIVA Experiment
Tissue preparation
Tissue fixation is performed by chemical fixation using formalin. This prevents the postmortem degeneration of the tissue and hardens soft tissue. The tissue is dehydrated using ethanol and the alcohol is cleared using an organic solvent such as xylene. The tissue is embedded in paraffin which infiltrates the microscopic spaces present throughout the tissue. The embedded tissue is sliced using a microtome and subsequently stained to produce contrast needed to visualize the tissue.
Loading of the TIVA tag into cells and validation
A cell saline buffer containing the TIVA tag is added to the coverslip and incubated. During the incubation period, the TIVA tag penetrates the cell membrane via the CPP that is bound to it. Subsequently, the cytosolic environment cleaves the CPP and the TIVA tag is trapped inside the cell. After incubation, the coverslip is rinsed twice with cell saline buffer and then transferred to an imaging chamber. Using a confocal microscope, loading of the tag is confirmed by detecting the Cy5 signal at a wavelength of 561 nm.
Photoactivation of the TIVA tag in target cell and validation
Photolysis is performed resulting in photoactivation of the TIVA tag in the target cell or cells. Specifically, uncaging of the TIVA tag is accomplished using a 405-nm laser while measuring FRET excited by 514 nm. During this process, the mRNA-capturing moiety is released and subsequently anneals to the poly(A) tail of cellular mRNA. To confirm that the cell is not damaged during photolysis, the cell is imaged with the confocal microscope.
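The two fluorescence readouts used for validation amount to a simple decision rule. Below is a minimal sketch of that logic in Python; the function, threshold, and signal names are illustrative assumptions, not part of any published TIVA protocol.

def classify_tiva_cell(cy3_emission, cy5_emission, threshold=100.0):
    """Classify a cell from its emission under 514 nm excitation.

    cy3_emission: green donor (Cy3) signal.
    cy5_emission: red acceptor (Cy5) signal produced via FRET.
    threshold is an arbitrary illustrative detector count.
    """
    if cy5_emission > threshold:
        # Intact tag: Cy3 and Cy5 are held close together, so FRET
        # dominates and the loaded cell appears red.
        return "loaded, caged"
    if cy3_emission > threshold:
        # Photocleaved tag: Cy5 has been released, donor emission
        # dominates and the photoactivated cell appears green.
        return "loaded, photoactivated"
    return "no tag detected"

print(classify_tiva_cell(cy3_emission=20.0, cy5_emission=400.0))   # red: still caged
print(classify_tiva_cell(cy3_emission=350.0, cy5_emission=15.0))   # green: uncaged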
Extraction, lysis of target cell and affinity purification of TIVA tag
Using a glass pipette, the photolysed cell is isolated by aspiration. Cells are lysed and affinity purification is performed using streptavidin-coated beads that bind, immobilize and purify the biotinylated TIVA tag.
RNA-seq analysis
RNA-seq uses reverse transcriptase to convert the mRNA template to cDNA. During library preparation, the cDNA is fragmented into small pieces, which then serve as the template for sequencing. After sequencing, RNA-seq analysis can be performed.
Advantages and Disadvantages
Advantages
Noninvasive method for capturing mRNA from single cells in living, intact tissues for transcriptome analysis.
Other methods, such as laser capture microdissection and patch-pipette aspiration, can also isolate single cells, but TIVA tags cause no damage to the cells and no tissue deformation from penetration of the pipette that might alter components of the transcriptional profile.
Can be performed on various cell types, while existing methods depend on transgenic rodent models to identify cells of interest.
Disadvantages
CPPs have been used to transport a variety of biomolecules into cells both in vitro and in vivo. Care must be taken in choosing the CPP, as different CPPs promote movement into different cell types and cellular compartments.
If the TIVA tag is not used within 3 months of synthesis, the FRET signal is weakened.
The TIVA tag should be stored in dried form in a −80 °C freezer.
References
RNA
Gene expression | Transcriptome in vivo analysis tag | [
"Chemistry",
"Biology"
] | 1,424 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
42,071,443 | https://en.wikipedia.org/wiki/Shot%20peening%20of%20steel%20belts | Shot peening can be used to recondition distorted steel conveyor belts.
The shot peening process
Shot peening is a conservation process for flattening a deformed steel belt in which the surface of the belt is impacted by small stainless steel or carbon steel balls, called peening shot. Each ball hitting the belt functions as a peening hammer, forming a small indentation, or dimple, on the steel belt surface.
For the indentation to be formed, the steel belt surface layer must yield in tension. Below the surface, the compressed grains try to restore the surface to its original shape, producing a hemisphere of cold-worked metal highly stressed in compression. Overlapping indentations create a continuous layer of residual compressive stress. It is well known that cracks will not propagate in a compressively stressed zone. Since most fatigue and stress corrosion failures originate at the surface, the compressive stresses from shot peening significantly enhance the belt’s lifespan. Note that:
Although it is possible to peen while the belt is in production, care must be taken to ensure that there is no loss of shot which would contaminate the finished product or the press system.
The belt is run at a speed of 15–20 ft/min to start with, but this may be increased if the levelling function is satisfactory. The faster the belt runs, the less effective the peening process becomes.
The process starts with a low pressure (20 PSI) and works up in steps of 10 PSI until a noticeable effect is seen in the belt curve (a sketch of this ramp follows this list). For a precipitation-hardened type of stainless steel belt, the required pressure could be as high as 90 PSI.
If the shot becomes contaminated with oil from the belt, it becomes less effective as a blasting medium, and the oil also clogs the air blast system. If oil pickup is unavoidable, then frequent cleaning of the equipment and washing of the shot will be required.
Peening has to be done over a flat surface.
Peening starts from the center of a section and progresses towards the edge. Several light passes across the belt are less likely to over-compress the surface than one heavy pass which could distort the belt in the opposite direction.
Peening must be applied to the concave side of a curve to stretch the metal on the "short" side.
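As promised above, here is a minimal Python sketch of the pressure ramp. The curvature-measurement callable and the stopping threshold are hypothetical stand-ins; in practice an operator judges the belt curve by eye.

def peening_pressure_ramp(measure_curvature, start_psi=20, step_psi=10, max_psi=90):
    """Step the blast pressure upward until the belt curve responds.

    measure_curvature: callable returning the belt's cross-curvature after
    a pass at the given pressure (hypothetical; stands in for operator
    inspection). max_psi reflects the figure quoted above for
    precipitation-hardened stainless steel belts.
    """
    baseline = measure_curvature(0)
    pressure = start_psi
    while pressure <= max_psi:
        if measure_curvature(pressure) < 0.9 * baseline:  # assumed "noticeable effect"
            return pressure
        pressure += step_psi
    return None  # no satisfactory response within the allowed range

# Example with a fake response curve: curvature eases once pressure passes 50 PSI.
print(peening_pressure_ramp(lambda p: 1.0 if p < 50 else 0.5))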
Portable shot blasting unit
Press belts may become deformed and worn over time. The portable shot blasting unit is primarily used to flatten deformed press belts and prepare the belt material for reuse. The unit is designed for field use and is portable, allowing for efficient operation. All necessary equipment (excluding the carriage frame and air compressor) fits into a box with dimensions of approximately 350 × 350 × 320 mm. The combined weight of the blaster, valve, air hose, and other components is about 25 kg, with the blasting machine itself weighing 9 kg.
A pair of universal channels (38 mm × 76 mm), typically 500 mm longer than the belt width, must be provided on-site. These channels are welded together to allow the blaster to move smoothly across the belt’s surface. The total installation time, including assembling the carriage frame, generally takes only a few hours, after which the peening process can begin.
An electric shut-off valve is mounted on the inlet air hose to protect the belt from over-blasting should it suddenly stop during the blasting operation. The valve solenoid must be connected (interlocked) to the press machine's power supply to be effective. For best blasting results, an air supply of 4,200 liters per minute is required at a pressure of 6 bar. The unit is supplied with a flexible air hose that connects it to the local air supply. All local supply pipes should have a minimum bore diameter of 1 inch. The recommended shot blasting medium is tungsten shot (beads) with a diameter ranging from 0.2 to 0.4 mm and a hardness exceeding 40 HRC. The machine operates by drawing a quantity of tungsten shot from the bottom of the scroll case into the high-velocity nozzles. The shot is blasted onto the surface of the belt, and most of the shot bounces back into the scroll case. The air is vented through the filter socks, and any shot carried with the air is filtered out and dropped back into the scroll case.
Flattening out deformed belts
Since the 1980s, the standard procedure used to solve the problem of deformed belts was to turn the belt over, i.e. what was previously the back of the belt was used to form the new product side. The belt became flatter when turned due to the equalization of stresses on both sides. However, over time, the belt typically reverted to its original shape, albeit in the opposite direction. As a result, it often became necessary to turn the belt again after approximately one year. This process was extremely time-consuming and costly, as it required cutting the belt, dismantling it from the press, turning it and then reinstalling it. The reinstallation involved belt joining operations such as welding and grinding of the joint, as well as running-in procedures. Additionally, this process demanded specialized equipment for handling the belt, welding jigs, and skilled personnel for joint welding. Compounding the issue, production had to be halted during these operations, with stoppages lasting up to a week not uncommon. The steel belt shot peening process was introduced as a solution to address the belt cross-curvature problem.
Shot peening | Shot peening of steel belts | [
"Materials_science"
] | 1,134 | [
"Strengthening mechanisms of materials",
"Shot peening"
] |
42,075,215 | https://en.wikipedia.org/wiki/Discretization%20of%20Navier%E2%80%93Stokes%20equations | Discretization of the Navier–Stokes equations of fluid dynamics is a reformulation of the equations in such a way that they can be applied to computational fluid dynamics. Several methods of discretization can be applied:
Finite volume method
Finite elements method
Finite difference method
Finite volume method
Incompressible flow
We begin with the incompressible form of the momentum equation. The equation has been divided through by the density (P = p/ρ) and density has been absorbed into the body force term:

\frac{\partial \mathbf{u}}{\partial t} + \nabla \cdot (\mathbf{u}\mathbf{u}) = -\nabla P + \nu \nabla^2 \mathbf{u} + \mathbf{f}
The equation is integrated over the control volume of a computational cell:

\int_\Omega \left( \frac{\partial \mathbf{u}}{\partial t} + \nabla \cdot (\mathbf{u}\mathbf{u}) \right) dV = \int_\Omega \left( -\nabla P + \nu \nabla^2 \mathbf{u} + \mathbf{f} \right) dV
The time-dependent term and the body force term are assumed constant over the volume of the cell. The divergence theorem is applied to the advection, pressure gradient, and diffusion terms:

V \frac{\partial \mathbf{u}}{\partial t} + \oint_S (\mathbf{u}\mathbf{u}) \cdot \mathbf{n} \, dS = -\oint_S P \mathbf{n} \, dS + \nu \oint_S \nabla \mathbf{u} \cdot \mathbf{n} \, dS + V \mathbf{f}
where n is the normal of the surface of the control volume and V is the volume. If the control volume is a polyhedron and values are assumed constant over each face, the area integrals can be written as summations over each face:

V \frac{\partial \mathbf{u}}{\partial t} + \sum_{nbr} \mathbf{u}_{nbr} (\mathbf{u}_{nbr} \cdot \mathbf{n}_{nbr}) A_{nbr} = -\sum_{nbr} P_{nbr} \mathbf{n}_{nbr} A_{nbr} + \nu \sum_{nbr} (\nabla \mathbf{u})_{nbr} \cdot \mathbf{n}_{nbr} A_{nbr} + V \mathbf{f}
where the subscript nbr denotes the value at any given face.
Two-dimensional uniformly-spaced Cartesian grid
For a two-dimensional Cartesian grid, the equation can be expanded to
On a staggered grid, the x-momentum equation is
and the y-momentum equation is
The goal at this point is to determine expressions for the face values of u, v, and P and to approximate the derivatives using finite difference approximations. For this example we will use a backward difference for the time derivative and central differences for the spatial derivatives. For both momentum equations, the time derivative becomes

\frac{\partial u}{\partial t} \approx \frac{u^n - u^{n-1}}{\Delta t}
where n is the current time index and Δt is the time-step. As an example for the spatial derivatives, the derivative in the west-face diffusion term in the x-momentum equation becomes

\nu \left( \frac{\partial u}{\partial x} \right)_w \approx \nu \frac{u_{I,J} - u_{I-1,J}}{\Delta x}
where I and J are the indices of the x-momentum cell of interest.
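A compact numerical sketch of these face-flux approximations, in Python with NumPy. This is illustrative only: the grid, viscosity, and time step are arbitrary; the pressure and advection terms, the staggered-grid bookkeeping, and boundary conditions are all omitted; and although the backward time difference above yields an implicit system, the sketch advances the diffusion term explicitly for brevity.

import numpy as np

nu, dx, dt = 0.1, 0.01, 2e-5   # assumed viscosity, grid spacing, time step
u = np.zeros((32, 32))         # x-velocity samples on a uniform Cartesian grid
u[12:20, 12:20] = 1.0          # arbitrary initial disturbance

# Summing the central-difference diffusive fluxes over the four faces of a
# uniform cell -- e.g. the west face contributes nu*(u[I,J] - u[I-1,J])/dx --
# collapses to the familiar five-point Laplacian:
lap = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
       - 4.0 * u[1:-1, 1:-1]) / dx**2
u[1:-1, 1:-1] += dt * nu * lap  # one explicit update of the diffusion term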
Finite elements method
Finite difference method
Fluid dynamics | Discretization of Navier–Stokes equations | [
"Chemistry",
"Engineering"
] | 386 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
42,075,360 | https://en.wikipedia.org/wiki/Patternation | Patternation is the specialized technical art of performing quantitative measurements of specific properties of particles within a spray and visualizing the patterns of this specific property within the spray. In order to understand patternation, we need to consider the role sprays play in our daily lives.
Uses of sprays
Sprays have a number of uses. In their natural form, sprays appear in waterfall mists, rains and ocean sprays, according to Arthur Lefebvre in his book, Atomization and Sprays.
Within the household sphere, sprays are used in showers, garden hoses, spray paint cans, hair spray, deodorant sprays and more. Industrial uses of sprays include spray drying, coating, washing, and irrigating.
Sprays are also used in many internal combustion engines to disperse the fuel directly into the combustion chamber and mix it with air, so that the mixture either ignites spontaneously under high pressure and temperature or is ignited using spark plugs.
Understanding the importance of spray patternation
Patternation of sprays is important in a variety of applications including IC engines, turbines, spray coating, spray drying, agriculture, and consumer products. For example, asymmetries in patternation directly degrade surface finish quality during painting and product quality during spray drying.
Similarly, in gas turbines, variation in the spray pattern leads to fuel lean and fuel rich pockets resulting in excessive turbine wear and increased particulate emission.
For metered-dose inhalers, the pattern of the spray is very important to ensure that the maximum amount of the drug goes through the throat passageways into the lungs.
In agricultural nozzles the pattern of the spray is important so as to optimize the delivery of pesticides and fertilizers to the plants. Spray drying requires that the droplet sizes be closely controlled. In general, increasing the total drop surface area within the dryer will lead to higher evaporation rates and greater efficiency for the process.
Similarly, pharmaceutical tablets owe their thin-film surface coating to an atomizer-produced spray that needs to be perfect. The coating not only masks taste but also performs key functions such as sealing the tablet from moisture to improve shelf life and controlling drug release rates to produce slow- and extended-release tablets.
How scientists study spray patterns
Spray patterns are studied using diagnostic tools known as patternators. A patternator quantifies the spatial location of the drops emitted from a sprayer and visualizes the patterns of sprays. There are different types of patternators, mechanical and optical.
Mechanical patternators
Mechanical patternators are called intrusive patternators.
They typically collect the mass flux of liquid in small containers, which are then weighed to provide local information, or spray color onto papers to reveal the pattern. Mechanical patternators are not very accurate and suffer from a number of systematic and random errors. In addition, they interfere with the spray pattern itself, and are generally not suitable for obtaining accurate data on spray patterns.
Optical patternators
To avoid the errors in mechanical patternation, it is generally accepted that the patternation technique has to be optical.
Optical patternators provide the distribution of drop surface areas rather than the mass flux. In many instances this is advantageous, as all local transfer phenomena, such as mass, momentum, energy and species transfer, are directly proportional to the surface area density of the droplets in the spray.
There are three principal types of optical patternators: (a) those that use laser sheets, (b) those that use planar liquid laser induced fluorescence and (c) those that use laser extinction tomography.
Using a laser sheet
The first is based on using a laser sheet to illuminate a plane orthogonal to the spray direction or along the spray direction and capturing the image of the scattered light using an off-axis camera. It is assumed that the scattered intensity is proportional to the local drop surface area density.
Using planar liquid laser induced fluorescence
A second type of patternators are based on using planar liquid laser induced fluorescence. The liquid is mixed with some fluorescent material and illuminated with a high power laser sheet. The resulting fluorescence image is captured and analyzed to provide the local mass concentrations within the spray.
Both these methods have significant errors due to laser extinction, signal attenuation, and secondary emission.
Using laser extinction tomography
The third set of optical patternators is based on laser extinction or laser extinction tomography.
The laser extinction tomography system provides the local drop surface area density (number of drops per unit volume at a specific location multiplied by the surface area of the drops). This quantity is directly proportional to the evaporation of the drops and is very important in applications involving combustion and spray drying.
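The measurement principle can be sketched with the Beer–Lambert law: the transmitted fraction of laser light decays exponentially with the path integral of the extinction coefficient, which for droplets much larger than the wavelength is proportional to the local surface area density. A toy Python illustration follows; the constant k and the profile values are assumed, and a real patternator inverts many such paths tomographically to recover the local values.

import numpy as np

ds = 1e-3   # assumed step along the laser path, in metres
# Assumed droplet surface area density profile along one path (m^2 per m^3):
sad = np.array([0.0, 5.0, 20.0, 45.0, 20.0, 5.0, 0.0])

k = 0.25    # assumed optical constant linking extinction to surface area
optical_depth = k * np.sum(sad) * ds        # path integral of k * SAD
transmittance = np.exp(-optical_depth)      # what the detector measures, I/I0

# The path-integrated quantity is recovered from the measurement as -ln(I/I0);
# tomographic reconstruction then distributes it along the path.
print(transmittance, -np.log(transmittance))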
The picture shows results from testing a Gasoline Direct Injector (GDI) using an optical patternator that is based on extinction tomography. This optical patternator can be used to analyze highly dense sprays.
References
Aerosols
Measurement | Patternation | [
"Physics",
"Chemistry",
"Mathematics"
] | 992 | [
"Physical quantities",
"Quantity",
"Colloids",
"Measurement",
"Size",
"Aerosols"
] |
42,075,644 | https://en.wikipedia.org/wiki/Adsorbed%20natural%20gas | Adsorbed natural gas (ANG) is a process to store natural gas. Natural gas burns cleanly as a fuel, making it useful in many vehicles and applications such as cooking, heating or running generators. It contains mostly methane and ethane. These light gases have very high vapor pressure at ambient temperatures, and their storage requires either high-pressure compression (CNG) or an extreme reduction of temperature (LNG); or adsorbent systems—this is ANG. In the ANG process, natural gas adsorbs to a porous adsorbent at relatively low pressure (100 to 900 psi) and ambient temperature, solving both the high-pressure and low-temperature problems. If a suitable adsorbent is used, it is possible to store more gas in an adsorbent-filled vessel than in an empty vessel at the same pressure. The amount of adsorbed gas depends on pressure, temperature and adsorbent type. Since this adsorption process is exothermic, an increase in pressure or a decrease in temperature enhances the efficiency of the adsorption process.
It is possible to combine the ANG and CNG technologies to reach an increased natural gas storage capacity. In this process, known as high-pressure ANG, a high-pressure CNG tank is filled with an adsorbent such as activated carbon (an adsorbent with high surface area) and stores natural gas by both the CNG and ANG mechanisms.
Currently, researchers are developing new adsorbents with higher adsorption capacity to optimize this process, including MOFs (metal-organic frameworks).
References
Natural gas storage | Adsorbed natural gas | [
"Chemistry"
] | 338 | [
"Natural gas storage",
"Natural gas technology"
] |
48,363,015 | https://en.wikipedia.org/wiki/NGC%201528 | NGC 1528 is an open cluster in the constellation Perseus. It was discovered by William Herschel in 1790. It is located in the north-eastern part of the constellation, just under 3 degrees north of μ Persei. Less than 1.5° to the southeast is the open cluster NGC 1545 (m = 6.2). The NGC 1528 is clearly visible with 10x50 binoculars. 165 stars are recognised as members of NGC 1528, the brightest of which has apparent magnitude 8.7.
See also
List of NGC objects (1001–2000)
References
External links
SEDS
1528
Perseus (constellation)
Open clusters
Discoveries by William Herschel | NGC 1528 | [
"Astronomy"
] | 140 | [
"Perseus (constellation)",
"Constellations"
] |
48,363,641 | https://en.wikipedia.org/wiki/Artificial%20turf%E2%80%93cancer%20hypothesis | Artificial turf is surface of synthetic fibers resembling natural grass. It is widely used for sports fields for being more hard-wearing and resistant than natural surfaces. Most use infills of crumb rubber from recycled tires; this use is controversial because of concerns that the tires contain carcinogens, though research into the issue is ongoing.
Studies
An unpublished study by Rutgers University examined crumb rubber from synthetic fields in New York City. It found six possibly carcinogenic polycyclic aromatic hydrocarbons at levels exceeding state regulations. The researchers warned that the findings could have been made inaccurate by the solvent extraction used to release the chemicals from the rubber.
In a statistical study of the list of soccer players with cancer provided by UW coach Amy Griffin, public health researchers for the State of Washington found that the rates of cancer were actually lower than was estimated for the general population. While they did not state any conclusions on the safety of this form of artificial turf, they did recommend that players not restrict their play due to the presumed health benefits of being active.
In 2007, the California Office of Environmental Health Hazard Assessment (OEHHA) simulated the exposures children can receive after coming into direct contact with artificial turf. Results showed that five chemicals, including four polycyclic aromatic hydrocarbons (PAHs), were found in samples. One of these compounds, chrysene, was present at levels higher than the standard established by OEHHA. Chrysene is a known carcinogen, meaning it can increase the risk of a child developing cancer.
In late 2015, the United States Congress' House Energy and Commerce Committee directed the Environmental Protection Agency (EPA) to investigate a link. As of 2016, the EPA, the Consumer Product Safety Commission and the Centers for Disease Control and Prevention were investigating.
In 2018, a study commissioned by the Dutch minister of Health, Welfare and Sport from the Dutch National Institute for Public Health and the Environment found that "our findings for a representative number of Dutch pitches are consistent with those of prior and contemporary studies observing no elevated health risk from playing sports on synthetic turf pitches with recycled rubber granulate".
A 2019 Yale study showed that there were 306 chemicals in crumb rubber and that 52 of these chemicals were classified as carcinogens by the Environmental Protection Agency (EPA). The authors described "a vacuum in our knowledge about the carcinogenic properties" of many crumb rubber infill chemicals, noting that "the crumb rubber infill of artificial turf fields contains or emits chemicals that can affect human physiology".
In 2020, the European Risk Assessment Study on Synthetic Turf Rubber Infill was completed; published in Science of the Total Environment, this was a scientific study funded by companies and industry association from the tyre granulate supply chain, drawing on data from diverse parts of Europe. The researchers concluded that "there are no relevant health risks associated with the use of synthetic turfs with ELT-derived infill material".
A 2022 study published in the same journal analyzed the composition of synthetic turf football pitches from 17 countries. It confirmed the presence of "hazardous substances in the recycled crumb rubber samples collected all around the world" including PAHs of high and very high concern. The study concluded that different stakeholders "must work on a consensus to protect not only human health but also the environment, since there is evidence that crumb rubber hazardous chemicals can reach the environment and affect wildlife." The paper did not, however, discuss cancer risk in any detail.
In March 2023, investigative reporters from the Philadelphia Inquirer bought souvenir samples of the old Veterans Stadium AstroTurf used from 1977–81 and commissioned diagnostics through the Eurofins Environmental Testing laboratory. The resulting lab report linked per- and polyfluoroalkyl substances (PFAS) to the turf. Six former Philadelphia Phillies who played at Veterans Stadium, home to the team from 1971 to 2003, died from glioblastoma, an aggressive brain cancer: Tug McGraw, Darren Daulton, John Vukovich, Johnny Oates, Ken Brett, and David West.
Testimonies
Nigel Maguire, formerly a chief executive for the National Health Service in Cumbria, claims that his son, a goalkeeper, could have developed Hodgkin's lymphoma by playing on an artificial surface. He has called for a ban on the surfaces, saying "It is obscene so little research has been done."
In 2014, Amy Griffin, soccer coach at the University of Washington, surveyed American players of the sport who had developed cancer. Of 38 players, 34 were goalkeepers, a position in which diving to the surface makes accidental ingestion or blood contact with crumb rubber more likely, Griffin has asserted. Lymphoma and leukemia, cancers of the blood, predominated.
Sports organizations
FIFA, the world governing body of association football (soccer), has stated that the evidence weighs in favour of artificial pitches being safe. The Football Association of England stated in February 2016 that they were observing reports and conducting their own research on the issue.
References
Cancer
Cancer | Artificial turf–cancer hypothesis | [
"Chemistry"
] | 1,030 | [
"Synthetic materials",
"Artificial turf"
] |
48,369,399 | https://en.wikipedia.org/wiki/Marine%20pump | A Marine pump is a pump which is used on board a vessel (ship) or an offshore platform.
Upper category
It is a kind of general equipment, usually driven by an electric motor; refer to the pump category.
It is widely used as a machine in the marine industry; refer to the marine industry category.
General
A pump is a device that moves fluids (liquids or gases), or sometimes slurries, by mechanical action. A marine pump is an important piece of auxiliary equipment in the marine and shipbuilding industries. It is widely used in all kinds of marine vessels, such as barges, tug boats, container ships, carriers, fixed offshore structures, drilling jack-up rigs and so on.
These marine pumps can serve cooling, circulating, ballast, general service (G/S), fire-fighting, boiler feed water, condensate water, fresh (drinking) water, sanitary water, bilge and sludge, F.O. transfer, L.O. transfer, F.O. and F.S. cargo pumping, cargo stripping, the hydrophore tank unit, the sewage treatment unit, the oil water separator, the incinerator, the fresh water generator, and so on.
Type
A marine pump is named by its usage or application, and covers many types of pumps, such as:
The marine centrifugal pump is used to transport fluids by the conversion of rotational kinetic energy to the hydrodynamic energy of the fluid flow. The rotational energy typically comes from an engine or electric motor. The fluid enters the pump impeller along or near to the rotating axis and is accelerated by the impeller, flowing radially outward into a diffuser or volute chamber (casing), from where it exits. A speed-scaling sketch based on the pump affinity laws follows this list.
The gear pump uses the meshing of gears to pump fluid by displacement. It is one of the most common types of pumps for hydraulic fluid power applications. Gear pumps are also widely used in chemical installations to pump high-viscosity fluids. There are two main variations: external gear pumps, which use two external spur gears, and internal gear pumps, which use an external and an internal spur gear (internal spur gear teeth face inwards). Gear pumps are positive displacement (or fixed displacement), meaning they pump a constant amount of fluid for each revolution. Some gear pumps are designed to function as either a motor or a pump.
The screw pump is a positive-displacement (PD) pump that uses one or several screws to move fluids or solids along the screw(s) axis. In its simplest form (the Archimedes' screw pump), a single screw rotates in a cylindrical cavity, thereby moving the material along the screw's spindle.
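For the centrifugal pump described above, a common back-of-the-envelope tool is the set of pump affinity laws: at fixed impeller diameter, flow scales with shaft speed, head with its square, and power with its cube. A small Python sketch, with invented baseline numbers:

def affinity_scaled(flow, head, power, n_old_rpm, n_new_rpm):
    """Scale a centrifugal pump operating point with shaft speed.

    Affinity laws at fixed impeller diameter: Q ~ N, H ~ N^2, P ~ N^3.
    Units are arbitrary but must be consistent.
    """
    r = n_new_rpm / n_old_rpm
    return flow * r, head * r**2, power * r**3

# Hypothetical sea-water cooling pump rated at 1450 rpm, slowed to 1200 rpm:
print(affinity_scaled(flow=200.0, head=25.0, power=18.0,
                      n_old_rpm=1450, n_new_rpm=1200))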
See also
Handy billy
References
Pumps
Marine engineering | Marine pump | [
"Physics",
"Chemistry",
"Engineering"
] | 566 | [
"Pumps",
"Turbomachinery",
"Physical systems",
"Hydraulics",
"Marine engineering"
] |
33,843,216 | https://en.wikipedia.org/wiki/Brinicle | A brinicle (brine icicle, also known as an ice stalactite) is a downward-growing hollow tube of ice enclosing a plume of descending brine that is formed beneath developing sea ice.
As seawater freezes in the polar ocean, salt brine concentrates are expelled from the sea ice, creating a downward flow of dense, extremely cold, saline water, with a lower freezing point than the surrounding water. When this plume comes into contact with the neighboring ocean water, its extremely low temperature causes ice to instantly form around the flow. This creates a hollow stalactite, or icicle, referred to as a brinicle.
Formation
The formation of ice from salt water produces marked changes in the composition of the nearby unfrozen water. When water freezes, most impurities are excluded from the water crystals; even ice from seawater is relatively fresh compared to the seawater from which it is formed. As a result of forcing the impurities (such as salt and other ions) out, sea ice is very porous and spongelike, quite different from the solid ice produced when fresh water freezes.
As the seawater freezes and salt is forced out of the pure ice crystal lattice, the surrounding water becomes more saline as concentrated brine leaks out. This lowers its freezing temperature and increases its density. Lowering the freezing temperature allows this surrounding, brine-rich water to remain liquid and not freeze immediately. The increase in density causes this layer to sink. Tiny tunnels called brine channels are created all through the ice as this supersaline, supercooled water sinks away from the frozen pure water. The stage is now set for the creation of a brinicle.
As this supercooled saline water reaches unfrozen seawater below the ice, it will cause the creation of additional ice. Water moves from high to low concentrations. Because the brine possesses a lower concentration of water, it therefore attracts the surrounding water. Due to the low temperature of the brine, the newly attracted water freezes. If the brine channels are relatively evenly distributed, the ice pack grows downward evenly. However, if brine channels are concentrated in one small area, the downward flow of the cold brine (now so salt-rich that it cannot freeze at its normal freezing point) begins to interact with unfrozen seawater as a flow. Just as hot air from a fire rises as a plume, this cold, dense water sinks as a plume. Its outer edges begin accumulating a layer of ice as the surrounding water, cooled by this jet to below its freezing point, ices up. A brinicle has now been formed, resembling an inverted "chimney" of ice enclosing a downward flow of this supercooled, super salinated water.
When the brinicle becomes thick enough, it becomes self-sustaining. As ice accumulates around the down-flowing cold jet, it forms an insulating layer that prevents the cold, saline water from diffusing and warming. As a result, the ice jacket surrounding the jet grows downward with the flow. The inner wall temperature of the stalactite remains on the salinity-determined freezing curve, so as the stalactite grows and the temperature deficit of the brine goes into the growth of ice, the inner wall melts to dilute and cool the adjacent brine back to its freezing point. It is like an icicle turned inside-out; rather than cold air freezing liquid water into layers, down-rushing cold water is freezing the surrounding water, enabling it to descend even deeper. As it does, it creates more ice, and the brinicle grows longer.
A brinicle is limited in size by the depth of the water, the growth of the overlying sea ice fueling its flow, and the surrounding water itself. In 2011, brinicle formation was filmed for the first time. The salinity of the liquid water within the brinicle has been confirmed to vary depending on the temperature of the air: the lower the temperature, the greater the brine concentration. A January 2014 study along the coast of the White Sea recorded that at an air temperature of −1 °C the brine salinity was between 30 and 35 psu while the salinity at sea was 28 psu. When the temperature was −12 °C, the salinity of the brine increased to between 120 and 156 psu.
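The relation between those salinities and temperatures can be sketched with a simple freezing-point-depression estimate. For seawater-range salinities the freezing point is roughly linear in salinity; the −0.0575 °C per psu slope used in the Python sketch below is a common low-salinity approximation, and extending it to brine strengths above 100 psu is a crude extrapolation shown only to illustrate the trend.

def freezing_point_c(salinity_psu):
    """Rough linear freezing-point estimate for saline water.

    The slope is a standard low-salinity approximation; real brines
    above ~100 psu deviate substantially from this line.
    """
    return -0.0575 * salinity_psu

for s in (28, 35, 120, 156):   # values reported in the White Sea observations
    print(s, round(freezing_point_c(s), 2))
# Sea water at 28 psu freezes near -1.6 degC, while brine at 120-156 psu stays
# liquid several degrees colder, so the descending plume can chill surrounding
# sea water below its freezing point without itself freezing.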
Structure
At the time of its creation, a brinicle resembles a pipe of ice reaching down from the underside of a layer of sea ice. Inside the pipe is extremely cold and saline water produced by the growth of the sea ice above, accumulated through brine channels. At first a brinicle is very fragile, with thin walls, but the constant flow of colder brine sustains its growth and hinders the melting that contact with the less cold surrounding water would otherwise cause. As ice accumulates and the walls become thicker, brinicles become more stable.
A brinicle can, under the proper conditions, reach down to the seafloor. To do so, the supercold brine from the pack ice overhead must continue to flow, the surrounding water must be significantly less saline than the brine, the water cannot be very deep, the overhead sea ice pack must be still, and currents in the area must be minimal or still. If the surrounding water is too saline, its freezing point will be too low to create a significant amount of ice around the brine plume. If the water is too deep, the brinicle is likely to break free under its own weight before reaching the seafloor. If the icepack is mobile or currents too strong, strain will break the brinicle.
Under the right conditions, including favorable ocean floor topography, a brine pool may be created. However, unlike brine pools created by cold seeps, brinicle brine pools are likely to be very transient as the brine supply will eventually cease.
On reaching the seafloor, it will continue to accumulate ice as surrounding water freezes. The brine will travel along the seafloor in a down-slope direction until it reaches the lowest possible point, where it will pool. Any bottom-dwelling sea creatures, including starfish or sea urchins, can be encased in this expanding web of ice and be trapped, ultimately freezing to death.
Research history
Brinicles have been known since the 1960s. The generally accepted model of their formation was proposed by the US oceanographer Seelye Martin in 1974. The formation of a brinicle was first filmed in 2011 by producer Kathryn Jeffs and cameramen Hugh Miller and Doug Anderson for the BBC series Frozen Planet. The first numerical model for a brinicle formation was developed in 2023 by researchers at the National University of Colombia in collaboration with the Lawrence Livermore National Laboratory.
References
The 'Finger of Death' that Freezes Everything it Touches, Earth's Great Seasons, BBC Earth
External links
Attenborough's polar trip: The tech that made Frozen Planet possible
Modelling and simulation of brinicle formation
Sea ice | Brinicle | [
"Physics"
] | 1,472 | [
"Physical phenomena",
"Earth phenomena",
"Sea ice"
] |