Lactoferrin (https://en.wikipedia.org/wiki/Lactoferrin)

Lactoferrin (LF), also known as lactotransferrin (LTF), is a multifunctional protein of the transferrin family. Lactoferrin is a globular glycoprotein with a molecular mass of about 80 kDa that is widely represented in various secretory fluids, such as milk, saliva, tears, and nasal secretions. Lactoferrin is also present in secondary granules of PMNs (polymorphonuclear neutrophils) and is secreted by some acinar cells. Lactoferrin can be purified from milk or produced recombinantly. Human colostrum ("first milk") has the highest concentration, followed by human milk, then cow milk (150 mg/L).
Lactoferrin is one of the components of the body's immune system; it has antimicrobial activity (bactericidal, fungicidal) and is part of the innate defense, mainly at mucosal surfaces. It is constantly produced and released into saliva and tears, as well as into seminal and vaginal fluids. Lactoferrin provides antibacterial activity to human infants. Lactoferrin interacts with DNA and RNA, polysaccharides, and heparin, and shows some of its biological functions in complexes with these ligands.
A recent meta-analysis of randomized controlled trials found that lactoferrin supplements reduce the risk of respiratory tract infections. As with any supplement sold online, quality may be an issue, because the production of nutritional supplements is not subject to the same strict regulatory controls as medicines.
History
The occurrence of an iron-containing red protein in bovine milk was reported as early as 1939; however, the protein could not be properly characterized at the time because it could not be extracted with sufficient purity. The first detailed studies were reported around 1960. They documented its molecular weight, isoelectric point, and optical absorption spectra, and the presence of two iron atoms per protein molecule. The protein was extracted from milk, contained iron, and was structurally and chemically similar to serum transferrin. It was therefore named lactoferrin in 1961, though the name lactotransferrin was used in some earlier publications; later studies demonstrated that the protein is not restricted to milk. The antibacterial action of lactoferrin was also documented in 1961 and was associated with its ability to bind iron.
Structure
Genes of lactoferrin
At least 60 gene sequences of lactoferrin have been characterized in 11 species of mammals. In most species the stop codon is TAA; in Mus musculus it is TGA. Deletions, insertions, and mutations of stop codons affect the coding region, whose length varies between 2,055 and 2,190 nucleotide pairs. Gene polymorphism between species is much more diverse than the intraspecific polymorphism of lactoferrin. There are differences in amino acid sequences: 8 variant residues in Homo sapiens, 6 in Mus musculus, 6 in Capra hircus, 10 in Bos taurus, and 20 in Sus scrofa. This variation may indicate functional differences between different types of lactoferrin.
In humans, the lactoferrin gene LTF is located on the third chromosome at locus 3q21-q23. In oxen, the coding sequence consists of 17 exons and has a length of about 34,500 nucleotide pairs. Exons of the lactoferrin gene in oxen are similar in size to the exons of other genes of the transferrin family, whereas the sizes of the introns differ within the family. The similarity in the size of exons and their distribution among the domains of the protein molecule indicates that the evolutionary development of the lactoferrin gene occurred by duplication. Studying the polymorphism of genes that encode lactoferrin helps in selecting livestock breeds that are resistant to mastitis.
Molecular structure
Lactoferrin is one of the transferrin proteins that transfer iron to the cells and control the level of free iron in the blood and external secretions. It is present in the milk of humans and other mammals, in blood plasma and neutrophils, and is one of the major proteins of virtually all exocrine secretions of mammals, such as saliva, bile, tears, and pancreatic fluid. The concentration of lactoferrin in milk varies from 7 g/L in colostrum to 1 g/L in mature milk.
X-ray diffraction reveals that lactoferrin consists of one polypeptide chain that contains about 700 amino acids and forms two homologous globular domains named the N- and C-lobes. The N-lobe corresponds to amino acid residues 1–333 and the C-lobe to residues 345–692, and the two domains are connected by a short α-helix. Each lobe consists of two subdomains (N1, N2 and C1, C2) and contains one iron-binding site and one glycosylation site. The degree of glycosylation of the protein varies, and therefore the molecular weight of lactoferrin ranges between 76 and 80 kDa. The stability of lactoferrin has been associated with its high degree of glycosylation.
Lactoferrin is a basic protein; its isoelectric point is 8.7. It exists in two forms: iron-rich hololactoferrin and iron-free apolactoferrin. Their tertiary structures differ: apolactoferrin is characterized by an "open" conformation of the N-lobe and a "closed" conformation of the C-lobe, whereas both lobes are closed in hololactoferrin.
Each lactoferrin molecule can reversibly bind two ions of iron, zinc, copper, or other metals. The binding sites are localized in each of the two protein globules. There, each ion is coordinated by six ligands: four from the polypeptide chain (two tyrosine residues, one histidine residue, and one aspartic acid residue) and two from carbonate or bicarbonate ions.
Lactoferrin forms a reddish complex with iron; its affinity for iron is 300 times higher than that of transferrin. The affinity increases in weakly acidic media. This facilitates the transfer of iron from transferrin to lactoferrin during inflammation, when the pH of tissues decreases due to accumulation of lactic and other acids. The iron saturation of lactoferrin in human milk is estimated at 10 to 30% (100% corresponding to all lactoferrin molecules containing two iron atoms). Lactoferrin has been shown to be involved not only in the transport of iron, zinc, and copper, but also in the regulation of their uptake. The presence of free zinc and copper ions does not affect the iron-binding ability of lactoferrin and might even increase it.
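The saturation figure quoted above follows directly from the two-sites-per-molecule stoichiometry. The following is an illustrative calculation only (the function name and example concentrations are ours, not values from the literature):

```python
# Illustrative only: percent iron saturation of lactoferrin, taking
# 100% to mean every molecule carries its full complement of two iron ions.

def iron_saturation_percent(iron_molar: float, lactoferrin_molar: float) -> float:
    """Percent saturation = bound iron / (2 binding sites per molecule) * 100."""
    if lactoferrin_molar <= 0:
        raise ValueError("lactoferrin concentration must be positive")
    return 100.0 * iron_molar / (2.0 * lactoferrin_molar)

# Example: 0.4 uM of bound iron with 1.0 uM lactoferrin gives 20% saturation,
# inside the 10-30% range reported for human milk.
print(iron_saturation_percent(0.4e-6, 1.0e-6))  # 20.0
```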
Polymeric forms
Both in blood plasma and in secretory fluids, lactoferrin can exist in different polymeric forms ranging from monomers to tetramers. Lactoferrin tends to polymerize both in vitro and in vivo, especially at high concentrations. Several authors found that the dominant form of lactoferrin in physiological conditions is a tetramer, with a monomer:tetramer ratio of 1:4 at a protein concentration of 10⁻⁵ M.
It has been suggested that the oligomeric state of lactoferrin is determined by its concentration and that polymerization of lactoferrin is strongly affected by the presence of Ca²⁺ ions. In particular, monomers were dominant at concentrations below 10⁻¹⁰–10⁻¹¹ M in the presence of Ca²⁺, but they converted into tetramers at lactoferrin concentrations above 10⁻⁹–10⁻¹⁰ M. The titer of lactoferrin in the blood corresponds to this particular "transition concentration", and thus lactoferrin in the blood should be present as both a monomer and a tetramer. Many functional properties of lactoferrin depend on its oligomeric state. In particular, monomeric, but not tetrameric, lactoferrin can strongly bind to DNA.
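The concentration dependence described above can be pictured with a minimal mass-action sketch for a single 4 monomer ⇌ tetramer equilibrium. The association constant K below is a hypothetical value chosen only so that the crossover falls near 1 nM, as the text describes; it is not a measured constant for lactoferrin:

```python
# Minimal mass-action sketch: 4 M <-> T with K = [T]/[M]^4 (hypothetical K).

def monomer_concentration(total: float, K: float) -> float:
    """Solve total = [M] + 4*K*[M]**4 for [M] by bisection."""
    lo, hi = 0.0, total
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid + 4.0 * K * mid**4 < total:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

K = 2e27  # M^-3, hypothetical, placing the 50% crossover near 1 nM
for total in (1e-11, 1e-10, 1e-9, 1e-8):
    m = monomer_concentration(total, K)
    tetramer_fraction = 1.0 - m / total
    print(f"{total:.0e} M: {100 * tetramer_fraction:.0f}% of protein in tetramers")
# Monomers dominate at 1e-11 M; tetramers dominate by 1e-8 M.
```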
Function
Lactoferrin belongs to the innate immune system. Apart from its main biological function, namely binding and transport of iron ions, lactoferrin also has antibacterial, antiviral, antiparasitic, catalytic, anti-cancer, and anti-allergic functions and properties.
Enzymatic activity of lactoferrin
Lactoferrin hydrolyzes RNA and exhibits the properties of pyrimidine-specific secretory ribonucleases. In particular, by destroying the RNA genome, milk RNase inhibits reverse transcription of retroviruses that cause breast cancer in mice. Parsi women in western India have markedly lower milk RNase levels than other groups, and their breast cancer rate is three times higher than average. Thus, ribonucleases of milk, and lactoferrin in particular, might play an important role in pathogenesis.
Lactoferrin receptor
The lactoferrin receptor plays an important role in the internalization of lactoferrin; it also facilitates the absorption of iron ions by lactoferrin. Expression of the receptor gene has been shown to increase with age in the duodenum and decrease with age in the jejunum.
The moonlighting glycolytic enzyme glyceraldehyde-3-phosphate dehydrogenase (GAPDH) has been demonstrated to function as a receptor for lactoferrin.
Bone activity
Ribonuclease-enriched lactoferrin has been used to examine how lactoferrin affects bone. Lactoferrin has been shown to have positive effects on bone turnover, aiding in decreasing bone resorption and increasing bone formation. This was indicated by a decrease in the levels of two bone resorption markers (deoxypyridinoline and N-telopeptide) and an increase in the levels of two bone formation markers (osteocalcin and alkaline phosphatase). It also reduced osteoclast formation, which signifies a decrease in pro-inflammatory responses and an increase in anti-inflammatory responses, again indicating a reduction in bone resorption.
Interaction with nucleic acids
One of the important properties of lactoferrin is its ability to bind nucleic acids. The protein fraction extracted from milk contains 3.3% RNA, but the protein binds preferentially to double-stranded rather than single-stranded DNA. The ability of lactoferrin to bind DNA is used for its isolation and purification by affinity chromatography, with columns containing immobilized DNA-containing sorbents, such as agarose with immobilized single-stranded DNA.
Clinical significance
Antibacterial activity
Lactoferrin's primary role is to sequester free iron, thereby removing an essential substrate required for bacterial growth. The antibacterial action of lactoferrin is also explained by the presence of specific receptors on the cell surface of microorganisms. Lactoferrin binds to lipopolysaccharide in bacterial cell walls, and the oxidized iron part of lactoferrin oxidizes bacteria via the formation of peroxides. This affects membrane permeability and results in cell breakdown (lysis).
Although lactoferrin also has other antibacterial mechanisms not related to iron, such as stimulation of phagocytosis, the interaction with the outer bacterial membrane described above is the most dominant and most studied. Lactoferrin not only disrupts the membrane but even penetrates into the cell. Its binding to the bacterial cell wall is associated with a specific peptide, lactoferricin, which is located in the N-lobe of lactoferrin and is produced by in vitro cleavage of lactoferrin with the protease trypsin. Another reported antimicrobial mechanism is that lactoferrin targets the H+-ATPase and interferes with proton translocation in the cell membrane, resulting in a lethal effect in vitro.
Lactoferrin prevents the attachment of H. pylori in the stomach, which in turn aids in reducing digestive system disorders. Bovine lactoferrin has more activity against H. pylori than human lactoferrin.
Antiviral activity
At sufficient concentrations, lactoferrin acts on a wide range of human and animal viruses with DNA and RNA genomes, including herpes simplex virus 1 and 2, cytomegalovirus, HIV, hepatitis C virus, hantaviruses, rotaviruses, poliovirus type 1, human respiratory syncytial virus, murine leukemia viruses, and Mayaro virus. Activity against COVID-19 has been speculated but not proven.
The most studied mechanism of lactoferrin's antiviral activity is its diversion of virus particles from target cells. Many viruses tend to bind to the lipoproteins of the cell membranes and then penetrate into the cell. Lactoferrin binds to the same lipoproteins, thereby repelling the virus particles. Iron-free apolactoferrin is more efficient in this function than hololactoferrin, while lactoferricin, which is responsible for the antimicrobial properties of lactoferrin, shows almost no antiviral activity.
Besides interacting with the cell membrane, lactoferrin also directly binds to viral particles, such as the hepatitis viruses. This mechanism is also confirmed by the antiviral activity of lactoferrin against rotaviruses, which act on different cell types.
Lactoferrin also suppresses virus replication after the virus has penetrated the cell. Such an indirect antiviral effect is achieved by affecting natural killer cells, granulocytes, and macrophages – cells that play a crucial role in the early stages of viral infections, such as severe acute respiratory syndrome (SARS).
Antifungal activity
Lactoferrin and lactoferricin inhibit the in vitro growth of Trichophyton mentagrophytes, a fungus responsible for several skin diseases such as ringworm. Lactoferrin also acts against Candida albicans, a diploid fungus (a form of yeast) that causes opportunistic oral and genital infections in humans. Fluconazole has long been used against Candida albicans, which has resulted in the emergence of strains resistant to this drug. However, a combination of lactoferrin with fluconazole can act against fluconazole-resistant strains of Candida albicans as well as other species of Candida: C. glabrata, C. krusei, C. parapsilosis, and C. tropicalis. Antifungal activity is observed upon sequential incubation of Candida with lactoferrin and then with fluconazole, but not vice versa. The antifungal activity of lactoferricin exceeds that of lactoferrin. In particular, the synthetic peptide lactoferricin 1–11 shows much greater activity against Candida albicans than native lactoferricin.
Administration of lactoferrin through drinking water to mice with weakened immune systems and symptoms of aphthous ulcers reduced the number of Candida albicans strains in the mouth and the size of the damaged areas of the tongue. Oral administration of lactoferrin to animals also reduced the number of pathogenic organisms in tissues close to the gastrointestinal tract. Candida albicans could also be completely eradicated with a mixture containing lactoferrin, lysozyme, and itraconazole in HIV-positive patients who were resistant to other antifungal drugs. Such antifungal action when other drugs prove ineffective is characteristic of lactoferrin and is especially valuable for HIV-infected patients. In contrast to the antiviral and antibacterial actions of lactoferrin, very little is known about the mechanism of its antifungal action. Lactoferrin appears to bind the plasma membrane of C. albicans, inducing an apoptosis-like process.
Anticarcinogenic activity
The anticancer activity of bovine lactoferrin (bLF) has been demonstrated in experimental lung, bladder, tongue, colon, and liver carcinogeneses on rats, possibly by suppression of phase I enzymes, such as cytochrome P450 1A2 (CYP1A2). Also, in another experiment done on hamsters, bovine lactoferrin decreased the incidence of oral cancer by 50%. Currently, bLF is used as an ingredient in yogurt, chewing gums, infant formulas, and cosmetics.
Cystic fibrosis
The human lung and saliva contain a wide range of antimicrobial compounds, including the lactoperoxidase system, which produces hypothiocyanite, and lactoferrin; hypothiocyanite is missing in cystic fibrosis patients. Lactoferrin, a component of innate immunity, prevents bacterial biofilm development. A loss of microbicidal activity and increased formation of biofilm due to decreased lactoferrin activity are observed in patients with cystic fibrosis, in whom antibiotic susceptibility may also be modified by lactoferrin. These findings demonstrate the important role of lactoferrin in human host defense, especially in the lung. Lactoferrin with hypothiocyanite has been granted orphan drug status by the EMEA and the FDA.
Necrotizing enterocolitis
Low-quality evidence suggests that oral lactoferrin supplementation, with or without the addition of a probiotic, may decrease late-onset sepsis and necrotizing enterocolitis (stage II or III) in preterm infants, with no adverse effects.
In diagnosis
Lactoferrin levels in tear fluid have been shown to decrease in dry eye diseases such as Sjögren's syndrome. A rapid, portable test utilizing microfluidic technology has been developed to enable measurement of lactoferrin levels in human tear fluid at the point-of-care with the aim of improving diagnosis of Sjögren's syndrome and other forms of dry eye disease.
Technology
Extraction
Bovine lactoferrin can be isolated from raw milk, colostrum, or whey using methods such as salt extraction, chromatography, and membrane filtration. Lactoferrin from a variety of species, including humans, can also be produced using transgenic organisms as a recombinant protein.
Nanotechnology
Lactotransferrin has been used in the synthesis of fluorescent gold quantum clusters, which has potential applications in nanotechnology.
See also
Respiratory tract antimicrobial defense system
References
External links
Uniprot
LTF on the National Center for Biotechnology Information
FDA Lactoferrin Considered Safe to Fight E. Coli.
Glycoproteins
Transferrins
Jean Weigle (https://en.wikipedia.org/wiki/Jean%20Weigle)

Jean-Jacques Weigle (9 July 1901 – 28 December 1968) was a Swiss molecular biologist at Caltech and formerly a physicist at the University of Geneva from 1931 to 1948. He is known for his major contributions to the field of bacteriophage λ research, focused on the interactions between those viruses and their E. coli hosts.
Biography
Jean Weigle was born in Geneva, Switzerland, where he obtained his PhD in physics in 1923, from the University of Geneva.
He married Ruth Juliette Falk, a widow.
He died in Pasadena, California, after suffering a heart attack in 1968.
Research
As a physicist, he was recognized for his research on the application of X-ray diffraction to the study of crystal structure, the effects of temperature on this diffraction, and the diffraction of light by ultrasonics.
He worked as a professor of physics at the University of Pittsburgh in the 1920s.
At the University of Geneva he became director of the Institute of Physics in 1931. He developed the first electron microscope made in Switzerland, an important factor in the molecular biology studies that led to the creation in 1964 of the Institute of Molecular Biology (MOLBIO) in Geneva by Eduard Kellenberger and others.
After suffering his first heart attack in 1946, he resigned from the faculty of the University of Geneva, emigrated to the US in 1948, and went to Caltech in Pasadena, California.
There he turned to biology and worked in the Phage group of Max Delbrück, Seymour Benzer, Elie Wollman, and Gunther Stent. While at Caltech, Weigle worked with other notable molecular biologists, including George Streisinger (whom Weigle mentored as a postdoctoral researcher), Giuseppe Bertani, and Nobel laureate Werner Arber.
In 1952, Salvador Luria discovered the phenomenon of "restriction modification" – the modification of phage growing within an infected bacterium, such that upon their release and re-infection of a related bacterium, the phage's growth is restricted (also described in Luria's autobiography, pp. 45 and 99). Work by Jean Weigle and Giuseppe Bertani at almost the same time, and later work by others, clarified the basis for this phenomenon. They showed that restriction was actually due to attack by specific bacterial enzymes on the modified phage's DNA. This work led to the discovery of the class of enzymes now known as restriction enzymes. These enzymes allowed controlled manipulation of DNA in the laboratory, thus providing the foundation for the development of genetic engineering.
He is most noted for his demonstration, with Matthew Meselson, of Caltech and Grete Kellenberger of Geneva, that genetic recombination involves actual breakage and reunion of DNA molecules. He created the classic induction of a lysogen, which involved irradiating the infected cells with ultraviolet light. He demonstrated through his classical experiments the inducible nature of the DNA repair system.
The induction of DNA damage-response genes in bacteria has come to be known as the SOS response. This response includes DNA damage inducible mutagenesis (now referred to as Weigle mutagenesis in his honor) and inducible DNA repair following DNA damage (termed Weigle reactivation).
Selected works
Weigle, J. J., and M. Delbrück. 1951. "Mutual exclusion between an infecting phage and a carried phage". J. Bacteriol. 62:301-318.
Weigle, J. J. (1953). "Induction of Mutations in a Bacterial Virus". Proc Natl Acad Sci USA 39 (7): 628–636.
Awards and honours
In 1947 he received an honorary doctorate from Case Institute of Technology. In 1962 he was awarded the Prix des trois physiciens.
Legacy
"So Weigle was the pioneer of the whole lambda genetics business, which is now a real industrial operation".
"The interest of physical scientists such Max Delbrück and Jean Weigle in the 20th Century had a revolutionizing effect on biology".
In his honor the institutions where he worked created the Weigle Memorial Service and the Weigle Memorial Lecture at Caltech, and several friends established the Jean Weigle Memorial Fund.
The Weigle lecture honors his memory, since he was instrumental for the development of Molecular Biology in Geneva.
References
External links
Weigle lectures
History of MOLBIO at Geneva University
1901 births
1968 deaths
Molecular biologists
Mutagenesis
Swiss physicists
University of Geneva alumni
Academic staff of the University of Geneva
Scientists from Geneva
California Institute of Technology faculty
Phage workers
20th-century Swiss biologists
University of Pittsburgh faculty
Arctic geoengineering (https://en.wikipedia.org/wiki/Arctic%20geoengineering)

Arctic geoengineering is a type of climate engineering in which polar climate systems are intentionally manipulated to reduce the undesired impacts of climate change. As a proposed solution to climate change, Arctic geoengineering is relatively new and has not been implemented on a large scale. It is based on the principle that Arctic albedo plays a significant role in regulating the Earth's temperature and that there are large-scale engineering solutions that can help maintain Earth's hemispheric albedo. According to researchers, projections of sea ice loss, when adjusted to account for recent rapid Arctic shrinkage, indicate that the Arctic will likely be free of summer sea ice sometime between 2059 and 2078. Advocates of Arctic geoengineering believe that climate engineering methods can be used to prevent this from happening.
Current proposed methods of arctic geoengineering include using sulfate aerosols to reflect sunlight, pumping water up to freeze on the surface, and using hollow glass microspheres to increase albedo. These methods are highly debated and have drawn criticism from some researchers, who argue that these methods may be ineffective, counterproductive, or produce unintended consequences.
Background
History
The main goal of geoengineering from the 19th to mid 20th century was to create rain for use in irrigation or as offensive military action. In 1965, the Johnson administration in the US issued a report that brought the focus of geoengineering to climate change. Some of the early plans for geoengineering in the Arctic came from a 2006 NASA conference on the topic of "managing solar radiation" where astrophysicist Lowell Wood advanced the proposition of bombarding the Arctic stratosphere with sulfates to build up an ice sheet. Other arctic geoengineering methods have since been proposed including the use of hollow glass microspheres.
Motivation
The Arctic's albedo plays a significant role in modulating the amount of solar radiation absorbed by Earth's surface. With the loss of Arctic sea ice and the recent average darkening of Arctic albedo, the Arctic is less able to reflect solar radiation and thus cool Earth's surface. Increased absorbed solar radiation causes higher surface temperatures and results in a positive feedback loop in which Arctic ice melts and albedo decreases further; this is known as the ice-albedo feedback loop. Such a feedback loop can push temperatures past a tipping point for certain irreversible climate domino effects.
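The feedback can be made concrete with a toy zero-dimensional energy-balance model. This is a minimal illustrative sketch, not a model from the cited research; the emissivity, albedo values, and thresholds are assumptions chosen only to demonstrate how temperature-dependent albedo creates two stable states:

```python
# Toy zero-dimensional energy-balance model of the ice-albedo feedback.
# All parameter values are illustrative, not tuned to the real Arctic.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
EMISS = 0.62       # effective emissivity standing in for greenhouse trapping

def albedo(T: float) -> float:
    """More ice (higher albedo) when cold, less when warm."""
    if T <= 260.0:
        return 0.6                              # fully ice-covered
    if T >= 290.0:
        return 0.25                             # ice-free
    return 0.6 - 0.35 * (T - 260.0) / 30.0      # partial cover in between

def equilibrium_temperature(T: float, steps: int = 10000) -> float:
    """Relax the absorbed-vs-emitted imbalance toward a steady state."""
    for _ in range(steps):
        absorbed = 0.25 * S0 * (1.0 - albedo(T))
        emitted = EMISS * SIGMA * T**4
        T += 0.01 * (absorbed - emitted)        # small pseudo-time step
    return T

# Starting cold vs. warm lands on different equilibria (about 249 K vs.
# 292 K here): the hysteresis that makes albedo loss hard to reverse.
print(equilibrium_temperature(240.0))
print(equilibrium_temperature(300.0))
```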
Arctic sea ice retreat is further exacerbated by the release of methane, a greenhouse gas that is stored in Arctic permafrost in the form of methane clathrate. Excess methane released into the atmosphere could result in another positive feedback loop in which temperatures continue to rise and more Arctic sea ice melts. At the current melting rate, if the global temperature rises 3°C above pre-industrial levels, 30–85% of the top permafrost layers of the Arctic could melt and cause a climate emergency. The IPCC Fourth Assessment Report of 2007 states that "in some projections, Arctic late-summer sea ice disappears almost entirely by the latter part of the 21st century." However, Arctic late-summer sea ice has since undergone significant retreat, reaching a record low in surface area in 2007 before recovering slightly in 2008. Climate engineering has been proposed for preventing or reversing tipping point events in the Arctic, in particular to halt the retreat of the sea ice.
Goals
Proponents of Arctic geoengineering believe that it may be one way of stabilizing carbon storage in the Arctic. Arctic permafrost holds an estimated 1,700 billion metric tons of carbon, about 51 times the amount of carbon that was released globally as fossil fuel emissions in 2019. Permafrost in the Northern Hemisphere also contains about twice as much carbon as the atmosphere, and Arctic air temperature has increased about six times more than the global average over the past few decades. Arctic ecosystems are more sensitive to climate changes and could significantly contribute to global warming if Arctic sea ice continues to melt at the current rate. Preventing further ice loss is important for climate control, because Arctic ice helps regulate global temperatures by restraining strong greenhouse gases like methane and carbon dioxide, which trap heat in Earth's atmosphere.
Proponents believe that geoengineering techniques could be applied in the Arctic to protect existing sea ice and to promote further ice buildup by increasing ice production, reducing solar radiation from reaching the ice's surface, and slowing the melting of ice. The various proposed methods of recovering arctic ice vary in terms of cost and complexity, with some of the more intensive methods requiring significant economic investments and complex infrastructure systems. One proposed method of increasing Earth's albedo is the injection of sulfate aerosols into the stratosphere. Other proposed geoengineering methods to recover arctic ice include pumping seawater on top of existing arctic sea ice, and covering arctic sea ice with small hollow glass spheres.
Proposed methods
Stratospheric sulfate aerosols
The idea of injecting sulfate aerosols into the stratosphere comes from simulating volcanic eruptions. Sulfate particles in the atmosphere help scatter sunlight, which increases the albedo and, in theory, produces a cooler climate on Earth.
Caldeira and Wood analyzed the effect of climate engineering in the Arctic using stratospheric sulfate aerosols. They found that the Earth's average temperature change per unit albedo is unaffected by latitude, because climate system feedbacks have a stronger presence at high latitudes, where less sunlight is reflected.
Building thicker sea ice
It has been proposed to actively enhance the polar ice cap by spraying or pumping water onto the top of it which would build thicker sea ice. A benefit of this method is that the increased salt content of the melting ice will tend to strengthen downwelling currents when the ice re-melts. Some ice in the sea is frozen seawater. Other ice comes from glaciers, which come from compacted snow, and is thus fresh water ice.
A proposed method to build thicker sea ice is to use wind-powered water pumps. These pumps contain a buoy with a wind turbine attached to it, which transfers wind energy to power the pump. The buoy also has a tank attached to store and release water as necessary. In theory, pumping 1.3 meters of water on top of the ice, at the right time, could increase the ice's thickness by 1.0 meter. The goal of this pump is to increase ice thickness in a way that is energy efficient. Pumps driven by wind power have been successfully used at the South Pole to increase ice thickness.
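A back-of-the-envelope check of the 1.3 m → 1.0 m figure: by mass conservation alone, water expands on freezing (density ratio roughly 1.09), so 1.3 m of water is an upper bound of about 1.4 m of ice; the lower net gain quoted above implies real-world losses. The sketch below lumps those losses into an assumed efficiency factor, which is a guess on our part, not a published value:

```python
# Ice thickness formed from a given depth of pumped seawater.
RHO_SEAWATER = 1025.0  # kg m^-3
RHO_SEA_ICE = 920.0    # kg m^-3

def ice_gain_m(pumped_water_m: float, efficiency: float = 0.7) -> float:
    """Net ice thickness (m) gained from a pumped water depth (m).

    `efficiency` lumps together runoff, brine drainage, and the slower
    growth of already-thick ice; 0.7 roughly reproduces the quoted
    1.3 m -> 1.0 m figure but is an assumption, not a measurement.
    """
    return pumped_water_m * (RHO_SEAWATER / RHO_SEA_ICE) * efficiency

print(ice_gain_m(1.3))  # ~1.0 m
```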
Glass beads to increase albedo
Ice911, a non-profit organization whose goal is to reduce climate change, conducted a laboratory experiment and found that releasing reflective material on top of ice increased its albedo. The reasoning behind this finding is that raising the reflectivity of the ice's surface increases its ability to reflect sunlight and therefore reduces the temperature at the ice's surface. Of the materials tested, Ice911 found glass was not only effective in raising the ice's albedo but also financially feasible and environmentally friendly. The team then conducted field tests in California, Minnesota, and Alaska. In all field testing locations, the albedo was higher for ice that had the glass beads poured on top of it than for ice without the glass beads. The findings indicate that glass beads placed on top of ice increase its reflectivity.
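A simple area-weighted mixing estimate shows how partial bead coverage raises surface albedo. The albedo values and coverage fraction below are illustrative assumptions, not Ice911's measured numbers:

```python
# Linear area-weighted mix of bead albedo and bare-surface albedo.

def effective_albedo(coverage: float, bead_albedo: float, surface_albedo: float) -> float:
    """Fraction `coverage` of the surface reflects like beads; the rest like ice."""
    if not 0.0 <= coverage <= 1.0:
        raise ValueError("coverage must be between 0 and 1")
    return coverage * bead_albedo + (1.0 - coverage) * surface_albedo

# Darkening melt-season ice (albedo ~0.5) half-covered with bright
# beads (albedo ~0.9) would reflect ~70% of incoming sunlight.
print(effective_albedo(0.5, 0.9, 0.5))  # 0.7
```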
Decreasing water salinity
Decreased salinity of ocean water causes it to become less dense, which in turn changes ocean currents. For this reason, it has been suggested that locally influencing the salinity and temperature of the Arctic Ocean by changing the ratio of Pacific and fluvial waters entering through the Bering Strait could play a key role in preserving Arctic sea ice. The purpose would be to create a relative increase in fresh water inflow from the Yukon River, while blocking part of the warmer, saltier water from the Pacific Ocean. Proposed geoengineering options include a dam connecting St. Lawrence Island and a threshold under the narrow part of the strait.
Limitations and risks
Adverse weather conditions
Because geoengineering is a relatively new concept, there are no real studies on the ramifications of these new technologies and how they may affect weather patterns, ecosystems, and the climate in the long term. Certain methods of Arctic geoengineering, such as injecting sulfate aerosol into the stratosphere to reflect more sunlight, or marine cloud brightening, may trigger a chain of events that could be irreversible. In the case of sulfur injection, such effects may include ocean acidification or crop failure, due either to delayed precipitation patterns or to a reduction in the amount of sunlight crops need to grow. The latter effects are similar for marine cloud brightening, a process that involves using boats to increase sea-water aerosol particles in the clouds closest to Earth's surface in order to reflect sunlight.
Rapid ozone depletion
Nobel laureate Paul Crutzen proposed a method of geoengineering in which emitting sulfates into the stratosphere – mimicking the material released by a volcanic eruption – would lead to global cooling and theoretically help tackle climate change. The possible downside is that injecting sulfates into the stratosphere can lead to ozone depletion: sulfate particles provide surfaces on which atmospheric chlorine compounds are chemically altered. This reaction is estimated to be capable of depleting between one-third and one-half of the ozone layer over the Arctic if the method were put into effect. A proposed alternative to prevent this is to swap out sulfates for calcite particles. There have not been any prototypes of such an experiment thus far, and while this method would not reverse the damage already done to the environment, it may aid in reducing some of the long-term potential damage.
Effectiveness of reflective particles
There are concerns surrounding the effectiveness of using glass and other reflective particles to increase albedo. A study conducted by Webster and Warren found that these particles can actually increase the melting rate of sea ice. Webster and Warren argue that spreading glass over new ice appears to work only because new ice forms in the months when there is little sunlight, so the effect cannot definitively be credited to the beads themselves. Additionally, Webster and Warren argue that the glass beads used in the study absorbed some sunlight and overall decreased the albedo, which could potentially lead to a faster melting rate of sea ice.
References
Planetary engineering
Arctic research
Environment of the Arctic
Climate change policy
Climate engineering
Transhumanist Bill of Rights (https://en.wikipedia.org/wiki/Transhumanist%20Bill%20of%20Rights)

The Transhumanist Bill of Rights is a crowdsourced document that conveys rights and laws to humans and all sapient entities while specifically targeting future scenarios of humanity. The original version was created by transhumanist US presidential candidate Zoltan Istvan, who posted it on the wall of the United States Capitol building on December 14, 2015.
History
In an act reminiscent of Martin Luther, Zoltan Istvan was filmed writing the Transhumanist Bill of Rights on the steps of the US Supreme Court on December 13, 2015. The following day, while surrounded by Capitol police and warned that he would be arrested for trespassing, Istvan posted the one-page bill on the north side wall of the US Capitol, an act partially documented in the documentary Immortality or Bust. The bill quickly fell off the building.
Version 2.0 was published in Wired magazine by Bruce Sterling. By the time Version 3.0 was published in December 2018, the document had quadrupled in size from Version 1.0. Versions 2.0 and 3.0 were developed using electronic ranked-preference voting, in which members of the U.S. Transhumanist Party proposed articles and then selected among the proposed wordings.
Istvan has spoken about the bill at the World Economic Forum (Global Council Meeting), Congreso Futuro, the World Bank, and the US Navy.
Content
The most current version of the Transhumanist Bill of Rights focuses on protecting the rights of: human beings; genetically modified human beings; cyborgs; digital intelligences; intellectually enhanced, previously non-sapient animals; any species of plant or animal which has been enhanced to possess the capacity for intelligent thought; and other advanced sapient life forms. The section on morphological freedom has received particular attention in both the press and the scholarly literature.
The Transhumanist Bill of Rights has been widely discussed: major media outlets have published information on it, books have discussed it, and academics have written papers about it. Its 43 articles cover items such as the right to abolish all suffering, the right to morphological freedom, the right to universal basic income and healthcare, the right to strive for radical life extension, and the legal requirement for sentient entities to protect themselves against existential risk. To help guard against existential risk and ensure a bright future for humanity, Article 5 of the bill mandates that governments "take all reasonable measures to embrace and fund space travel". The bill also requires aging to be classified as a disease by all governments.
Criticism
In an article at The American Spectator titled "A Transhumanist Bill of Wrongs", perennial transhumanist critic Wesley Smith argued that the laws in the Transhumanist Bill of Rights would cost too much and harm human exceptionalism. Dr. Michael Cook questions why transhumanists even need a bill of rights and asks instead whether society would need a bill of rights against transhumanists and their goals. When critiquing Version 2.0 of the bill, Michael Cook, along with commentator Jasper Hamill of The Metro, erroneously assumed that when Article IV references a right to "ending involuntary suffering", it refers to euthanasia. As U.S. Transhumanist Party chair Gennady Stolyarov II has explained, no such implication was intended; the text actually refers to David Pearce's idea that suffering itself should be abolished for entities who desire this, as expressed in his philosophy of abolitionism.
Text of Version 1.0
References
External links
Version 1.0
Version 2.0
Version 3.0
Transhumanism
Transhumanist politics
Chlorophyllum agaricoides (https://en.wikipedia.org/wiki/Chlorophyllum%20agaricoides)

Chlorophyllum agaricoides, commonly known as the gasteroid lepiota, puffball parasol, false puffball, or puffball agaric, is a species of fungus belonging to the family Agaricaceae. When young, it is edible, and it has traditionally been eaten in Turkey for many years.
It has cosmopolitan distribution, with notable documentation in China, Mongolia, Bulgaria, and Turkey. It is also a protected species in Hungary, and is believed to be in decline across Europe due to habitat destruction.
Description
It is a secotioid mushroom, meaning its hymenium takes the form of a gleba made of underdeveloped gills, completely enclosed by the cap, which never fully opens. This protects the mushroom from desiccation. The cap is egg-shaped to spherical, often tapering upward to form a blunt, conical point, 1–7 cm wide and 2–10 cm tall. It is white, becoming dark brown with age. It is mostly smooth, with some small fibrils, though it may also develop fibrous scales. The gills are contorted, irregularly chambered, and underdeveloped, making up an enclosed gleba which is white, aging to a mustardy yellow-brown. The stipe is 0–3 cm long and 0.5–2 cm thick. There is no ring. Its odor becomes cabbage-like with age. It grows singly or in clusters, mostly on cultivated land or grass, though occasionally on the forest floor. The spores are 6.5–9.5 × 5–7 μm, globose to elliptic, green to yellow-brown, turning reddish brown in Melzer's reagent. The germ pore is indistinct. Cheilocystidia and pleurocystidia are absent. Agaricus inapertus is a look-alike, although unlike C. agaricoides, it prefers forests and develops a black gleba with age.
References
Agaricaceae
Fungus species
Secotioid fungi
Plancherel theorem (https://en.wikipedia.org/wiki/Plancherel%20theorem)

In mathematics, the Plancherel theorem (sometimes called the Parseval–Plancherel identity) is a result in harmonic analysis, proven by Michel Plancherel in 1910. It is a generalization of Parseval's theorem, is often used in the fields of science and engineering, and establishes the unitarity of the Fourier transform.
The theorem states that the integral of a function's squared modulus is equal to the integral of the squared modulus of its frequency spectrum. That is, if $f(x)$ is a function on the real line, and $\hat{f}(\xi)$ is its frequency spectrum, then

$$\int_{-\infty}^{\infty} |f(x)|^2 \, dx = \int_{-\infty}^{\infty} |\hat{f}(\xi)|^2 \, d\xi.$$
A more precise formulation is that if a function is in both the Lebesgue spaces $L^1(\mathbb{R})$ and $L^2(\mathbb{R})$, then its Fourier transform is in $L^2(\mathbb{R})$, and the Fourier transform is an isometry with respect to the $L^2$ norm. This implies that the Fourier transform restricted to $L^1(\mathbb{R}) \cap L^2(\mathbb{R})$ has a unique extension to a linear isometric map $L^2(\mathbb{R}) \to L^2(\mathbb{R})$, sometimes called the Plancherel transform. This isometry is actually a unitary map. In effect, this makes it possible to speak of Fourier transforms of quadratically integrable functions.
A proof of the theorem is available from Rudin (1987, Chapter 9). The basic idea is to prove it for Gaussian distributions and then use density. A standard Gaussian is transformed to itself under the Fourier transformation, so the theorem is trivial in that case, and the standard transformation properties of the Fourier transform then imply Plancherel for all Gaussians.
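In one common convention (the ordinary-frequency transform; other conventions move factors of $2\pi$ around), the key identity behind this sketch is

$$\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi} \, dx, \qquad f(x) = e^{-\pi x^2} \;\Longrightarrow\; \hat{f}(\xi) = e^{-\pi \xi^2},$$

so $\|f\|_2 = \|\hat{f}\|_2$ holds trivially for the standard Gaussian, and the density step extends the identity from dilated, translated, and modulated Gaussians to all of $L^2(\mathbb{R})$.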
Plancherel's theorem remains valid as stated on $n$-dimensional Euclidean space $\mathbb{R}^n$. The theorem also holds more generally in locally compact abelian groups. There is also a version of the Plancherel theorem which makes sense for non-commutative locally compact groups satisfying certain technical assumptions. This is the subject of non-commutative harmonic analysis.
Due to the polarization identity, one can also apply Plancherel's theorem to the inner product of two functions. That is, if $f(x)$ and $g(x)$ are two $L^2(\mathbb{R})$ functions, and $\mathcal{P}$ denotes the Plancherel transform, then

$$\int_{-\infty}^{\infty} f(x)\,\overline{g(x)} \, dx = \int_{-\infty}^{\infty} (\mathcal{P}f)(\xi)\, \overline{(\mathcal{P}g)(\xi)} \, d\xi,$$

and if $f(x)$ and $g(x)$ are furthermore $L^1(\mathbb{R})$ functions, then

$$\mathcal{P}f = \hat{f} \qquad \text{and} \qquad \mathcal{P}g = \hat{g},$$

so

$$\int_{-\infty}^{\infty} f(x)\,\overline{g(x)} \, dx = \int_{-\infty}^{\infty} \hat{f}(\xi)\, \overline{\hat{g}(\xi)} \, d\xi.$$
Locally compact groups
There is also a Plancherel theorem for the Fourier transform in locally compact groups. In the case of an abelian group $G$, there is a Pontrjagin dual group $\hat{G}$ of characters on $G$. Given a Haar measure $\mu$ on $G$, the Fourier transform of a function $f$ in $L^1(G)$ is

$$\hat{f}(\chi) = \int_G \overline{\chi(g)}\, f(g) \, d\mu(g)$$

for $\chi$ a character on $G$.

The Plancherel theorem states that there is a Haar measure $\nu$ on $\hat{G}$, the dual measure, such that

$$\|f\|_{L^2(G,\,\mu)} = \|\hat{f}\|_{L^2(\hat{G},\,\nu)}$$

for all $f \in L^1(G) \cap L^2(G)$ (and the Fourier transform of such $f$ is then also in $L^2(\hat{G},\nu)$).
The theorem also holds in many non-abelian locally compact groups, except that the set $\hat{G}$ of irreducible unitary representations may not be a group. For example, when $G$ is a finite group, $\hat{G}$ is the set of irreducible characters. From basic character theory, if $f$ is a class function, we have the Parseval formula

$$\|f\|_{L^2(G)}^2 = \sum_{\chi \in \hat{G}} |\langle f, \chi \rangle|^2.$$

More generally, when $f$ is not a class function, the norm is

$$\|f\|_{L^2(G)}^2 = \sum_{\pi \in \hat{G}} (\dim \pi)\, \|\hat{f}(\pi)\|_{\mathrm{HS}}^2,$$

so the Plancherel measure weights each representation by its dimension.
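The finite abelian case is easy to check numerically: for $G = \mathbb{Z}/N$ with counting measure, the characters are the DFT exponentials and the dual measure is $1/N$ times counting measure, so Parseval reads $\sum_n |f(n)|^2 = \frac{1}{N}\sum_k |\hat{f}(k)|^2$. A short NumPy verification (using NumPy's unnormalized DFT convention):

```python
# Verify Parseval/Plancherel for G = Z/N via the unnormalized DFT.
import numpy as np

rng = np.random.default_rng(0)
N = 64
f = rng.normal(size=N) + 1j * rng.normal(size=N)
f_hat = np.fft.fft(f)  # f_hat[k] = sum_n f[n] * exp(-2*pi*1j*k*n/N)

lhs = np.sum(np.abs(f) ** 2)
rhs = np.sum(np.abs(f_hat) ** 2) / N  # dual measure contributes the 1/N
print(np.allclose(lhs, rhs))  # True
```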
In full generality, a Plancherel theorem is

$$\|f\|_{L^2(G)}^2 = \int_{\hat{G}} \|\hat{f}(\pi)\|_{\mathrm{HS}}^2 \, d\nu(\pi),$$

where the norm is the Hilbert–Schmidt norm of the operator

$$\hat{f}(\pi) = \int_G f(g)\, \pi(g)^{*} \, d\mu(g),$$

and the measure $\nu$, if one exists, is called the Plancherel measure.
See also
Carleson's theorem
Plancherel theorem for spherical functions
References
Rudin, Walter (1987). Real and Complex Analysis (3rd ed.). McGraw-Hill.
External links
Plancherel's Theorem on Mathworld
Theorems in functional analysis
Theorems in harmonic analysis
Theorems in Fourier analysis
Lp spaces
Ferrocenium hexafluorophosphate (https://en.wikipedia.org/wiki/Ferrocenium%20hexafluorophosphate)

Ferrocenium hexafluorophosphate is an organometallic compound with the formula [Fe(C5H5)2]PF6. This salt consists of the cation [Fe(C5H5)2]+ and the hexafluorophosphate anion ([PF6]−). The related tetrafluoroborate is also a popular reagent with similar properties. The ferrocenium cation is often abbreviated Fc+ or Cp2Fe+. The salt is deep blue in color and paramagnetic.
Ferrocenium salts are one-electron oxidizing agents, and the reduced product, ferrocene, is relatively inert and readily separated from ionic products. The ferrocene–ferrocenium couple is often used as a reference in electrochemistry. In acetonitrile solution that is 0.1 M in NBu4PF6, the Fc+/Fc couple is +0.641 V with respect to the normal hydrogen electrode.
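For illustration, here is the bookkeeping this reference convention implies, as a small Python helper. The function name is ours, and the +0.641 V offset applies only under the quoted conditions (acetonitrile, 0.1 M NBu4PF6):

```python
# Convert a potential measured against the Fc+/Fc couple to the NHE scale.
FC_VS_NHE_V = 0.641  # V, Fc+/Fc vs. NHE in MeCN with 0.1 M NBu4PF6

def potential_vs_nhe(potential_vs_fc: float) -> float:
    """Shift a potential from the Fc+/Fc reference scale to the NHE scale."""
    return potential_vs_fc + FC_VS_NHE_V

# A couple measured at -0.250 V vs. Fc+/Fc sits at about +0.391 V vs. NHE.
print(potential_vs_nhe(-0.250))  # 0.391
```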
Preparation and structure
Commercially available, this compound may be prepared by oxidizing ferrocene with ferric salts followed by addition of hexafluorophosphoric acid.
The compound is monoclinic, with well-separated cations and anions; the [PF6]− anion may rotate freely. The average Fe–C bond length is 2.047 Å, which is virtually indistinguishable from the Fe–C distance in ferrocene.
References
Ferrocenes
Hexafluorophosphates
Oxidizing agents
Wood frog (https://en.wikipedia.org/wiki/Wood%20frog)

Lithobates sylvaticus or Rana sylvatica, commonly known as the wood frog, is a frog species with a broad distribution over North America, extending from the boreal forest of the north to the southern Appalachians, with several notable disjunct populations including lowland eastern North Carolina. The wood frog has garnered attention from biologists because of its freeze tolerance, relatively great degree of terrestrialism (for a ranid), interesting habitat associations (peat bogs, vernal pools, uplands), and relatively long-range movements.
The ecology and conservation of the wood frog has attracted research attention in recent years because they are often considered "obligate" breeders in ephemeral wetlands (sometimes called "vernal pools"), which are themselves more imperiled than the species that breed in them. The wood frog has been proposed to be the official state amphibian of New York.
Description
Wood frogs range from in length. Females are larger than males. Adult wood frogs are usually brown, tan, or rust-colored, and usually have a dark eye mask. Individual frogs are capable of varying their color; Conant (1958) depicts one individual which was light brown and dark brown at different times. The underparts of wood frogs are pale with a yellow or green cast; in northern populations, the belly may be faintly mottled. Their body colour may change seasonally; exposure to sunlight causes darkening.
Geographic range
The contiguous wood frog range is from northern Georgia and northeastern Canada in the east to Alaska and southern British Columbia in the west. They range all throughout the boreal forests of Canada. It is the most widely distributed frog in Alaska. It is also found in the Medicine Bow National Forest.
Habitat
Wood frogs are forest-dwelling organisms that breed primarily in ephemeral, freshwater wetlands: woodland vernal pools. They are nonarboreal and spend most of their time on the forest floor. Long-distance migration plays an important role in their life history. Individual wood frogs range widely (hundreds of metres) among their breeding pools and neighboring freshwater swamps, cool-moist ravines, and/or upland habitats. Genetic neighborhoods of individual pool-breeding populations extend more than a kilometre away from the breeding site. Thus, conservation of this species requires a landscape perspective (multiple habitats at appropriate spatial scales). They are also well camouflaged against their surroundings.
A study of wood frog dispersal patterns in five ponds in the Appalachian Mountains reported that adult wood frogs were 100% faithful to the pond of their first breeding, but 18% of juveniles dispersed to breed in other ponds.
Adult wood frogs spend summer months in moist woodlands, forested swamps, ravines, or bogs. During the fall, they leave summer habitats and migrate to neighboring uplands to overwinter. Some may remain in moist areas to overwinter. Hibernacula tend to be in the upper organic layers of the soil, under leaf litter. By overwintering in uplands adjacent to breeding pools, adults ensure a short migration to thawed pools in early spring. Wood frogs are mostly diurnal and are rarely seen at night, except maybe in breeding choruses. They are one of the first amphibians to emerge for breeding right when the snow melts, along with spring peepers.
Feeding
Wood frogs eat a variety of small, forest-floor invertebrates, with a diet primarily consisting of insects. The tadpoles are omnivorous, feeding on plant detritus and algae along with other tadpoles of their own and other species.
The feeding pattern of the wood frog is similar to that of other ranids. It is triggered by prey movement and consists of a bodily lunge that terminates with the mouth opening and an extension of the tongue onto the prey. The ranid tongue is attached to the floor of the mouth near the tip of the jaw, and when the mouth is closed, the tongue lies flat, extended posteriorly from its point of attachment.
In the feeding strike, the tongue is swung forward as though on a hinge, so some portion of the normally dorsal and posterior tongue surface makes contact with the prey. At this point in the feeding strike, the wood frog differs markedly from more aquatic Lithobates species, such as the green frog, leopard frog, and bullfrog. The wood frog makes contact with the prey with just the tip of its tongue, much like a toad. A more extensive amount of tongue surface is applied in the feeding strikes of these other frog species, with the result that usually the prey is engulfed by the fleshy tongue and considerable tongue surface contacts the surrounding substrate.
Cold tolerance
Similar to other northern frogs that enter dormancy close to the surface in soil and/or leaf litter, wood frogs can tolerate the freezing of their blood and other tissues. Urea is accumulated in tissues in preparation for overwintering, and liver glycogen is converted in large quantities to glucose in response to internal ice formation. Both urea and glucose act as cryoprotectants to limit the amount of ice that forms and to reduce osmotic shrinkage of cells. Frogs found in southern Canada and the American Midwest can tolerate freezing temperatures of . However, wood frogs in Interior Alaska exhibit even greater tolerance, with much of their body water freezing while they still survive. When frozen, wood frogs have no detectable vital signs: no heartbeat, breathing, blood circulation, muscle movement, or detectable brain activity.
Wood frogs in natural hibernation remain frozen for 193 ± 11 consecutive days and reach an average (October–May) temperature of and an average minimum temperature of . The wood frog has evolved various physiological adaptations that allow it to tolerate the freezing of 65–70% of its total body water. When water freezes, ice crystals form in cells and break up their structure, so that when the ice thaws the cells are damaged. Frozen frogs also need to endure the interruption of oxygen delivery to their tissues, as well as strong dehydration and shrinkage of their cells as water is drawn out of cells to freeze. The wood frog has evolved traits that prevent its cells from being damaged when frozen and thawed, allowing it to effectively withstand prolonged ischemia/anoxia and extreme cellular dehydration. One crucial mechanism is the accumulation of high amounts of glucose, which acts as a cryoprotectant.
Frogs can survive many freeze/thaw events during winter if no more than about 65–70% of their total body water freezes. Wood frogs have a series of seven amino acid substitutions in the ATP-binding site of the sarco/endoplasmic reticulum Ca2+-ATPase 1 (SERCA 1) enzyme that allow this pump to function at lower temperatures relative to less cold-tolerant species (e.g. Lithobates clamitans).
Studies on northern subpopulations found that Alaskan wood frogs had a larger liver glycogen reserve and greater urea production compared to those in more temperate zones of its range. These conspecifics also showed higher glycogen phosphorylase enzymatic activity, which facilitates their adaptation to freezing.
The phenomenon of cold resistance is observed in other anuran species. The Japanese tree frog shows even greater cold tolerance than the wood frog, surviving in temperatures as low as for up to 120 days.
Reproduction
L. sylvaticus primarily breeds in ephemeral pools rather than permanent water bodies such as ponds or lakes. This is believed to provide some protection for the adult frogs and their offspring (eggs and tadpoles) from predation by fish and other predators of permanent water bodies. Adult wood frogs typically hibernate within 65 meters of breeding pools. They emerge from hibernation in early spring and migrate to the nearby pools. There, males chorus, emitting duck-like quacking sounds.
Wood frogs are considered explosive breeders; many populations will conduct all mating in the span of a week. Males actively search for mates by swimming around the pool and calling. Females, on the other hand, will stay under the water and rarely surface, most likely to avoid sexual harassment. A male approaches a female and clasps her from behind her forearms before hooking his thumbs together in a hold called "amplexus", which is continued until the female deposits the eggs. Females deposit eggs attached to submerged substrate, typically vegetation or downed branches. Most commonly, females deposit eggs adjacent to other egg masses, creating large aggregations of masses.
Some advantage is conferred on pairs that are first to breed, as clutches closer to the center of the raft absorb heat and develop faster than those on the periphery, and have more protection from predators. If pools dry before tadpoles metamorphose into froglets, they die. This constitutes the risk counterbalancing the anti-predator protection of ephemeral pools. By breeding in early spring, however, wood frogs increase their offspring's chances of metamorphosing before pools dry.
The larvae undergo two stages of development: fertilization to free-living tadpoles, and free-living tadpoles to juvenile frogs. During the first stage, the larvae are adapted for rapid development, and their growth depends on the temperature of the water. Variable larval survival is a major contributor to fluctuations in wood frog population size from year to year. The second stage of development features rapid development and growth, and depends on environmental factors including food availability, temperature, and population density.
Some studies suggest that road-salts, as used in road de-icing, may have toxic effects on wood frog larvae. A study exposed wood frog tadpoles to NaCl and found that tadpoles experienced reduced activity and weight, and even displayed physical abnormalities. There was also significantly lower survivorship and decreased time to metamorphosis with increasing salt concentration. De-icing agents may pose a serious conservation concern to wood frog larvae. Another study has found increased tolerance to salt with higher concentrations, though the authors caution against over-extrapolating from short-term, high concentration studies to longer-term, lower concentration conditions, as contradictory outcomes occur.
Following metamorphosis, a small percentage (less than 20%) of juveniles will disperse, permanently leaving the vicinity of their natal pools. The majority of offspring are philopatric, returning to their natal pool to breed. Most frogs breed only once in their lives, although some will breed two or three times, generally with differences according to age. The success of the larvae and tadpoles is important in populations of wood frogs because they affect the gene flow and genetic variation of the following generations.
Conservation status
Although the wood frog is not endangered or threatened, in many parts of its range, urbanization is fragmenting populations. Several studies have shown, under certain thresholds of forest cover loss or over certain thresholds of road density, wood frogs and other common amphibians begin to "drop out" of formerly occupied habitats. Another conservation concern is that wood frogs are primarily dependent on smaller, "geographically isolated" wetlands for breeding. At least in the United States, these wetlands are largely unprotected by federal law, leaving it up to states to tackle the problem of conserving pool-breeding amphibians.
The wood frog has a complex lifecycle that depends on multiple habitats, damp lowlands, and adjacent woodlands. Their habitat conservation is, therefore, complex, requiring integrated, landscape-scale preservation.
Wood frog development in the tadpole stage is known to be negatively affected by road salt contaminating freshwater ecosystems. Tadpoles have also been shown to develop abnormalities due to a combination of warmer conditions and toxic metals from pesticides near their habitats. These conditions make them easier prey for dragonfly larvae, whose attacks often cause missing limbs.
References
Further reading
(Rana sylvatica, new species, p. 282).
External links
Photographs, video and audio recording of breeding Wood Frogs
Lithobates
Cryozoa
Frogs of North America
Amphibians of Canada
Amphibians of the United States
Frog, Wood
Fauna of the Great Lakes region (North America)
Extant Pliocene first appearances
Amphibians described in 1825
Taxa named by John Eatton Le Conte
Thanet Earth (https://en.wikipedia.org/wiki/Thanet%20Earth)

Thanet Earth is a large industrial agriculture and plant factory project consortium on the Isle of Thanet in Kent, England. It is the largest greenhouse complex in the UK, covering 90 hectares of land. The glasshouses produce approximately 400 million tomatoes, 24 million peppers and 30 million cucumbers a year, equal to roughly 12, 11 and 8 per cent respectively of Britain's entire annual production of those salad ingredients. Thanet Earth's main customers are Asda, Sainsbury's, Tesco, M&S and the agency HRGO.
Food production
The complex began producing in October 2008. Cucumbers and peppers are picked continuously from February to October, and tomatoes are harvested every day of the week, 52 weeks a year.
The UK's largest privately owned fresh produce supplier, Fresca Group Ltd, has a 50% stake in the trading business that sells all the crops grown at the site, Thanet Earth Marketing Limited. The remaining 50% of Thanet Earth Marketing Limited is owned by three specialist salad-growing companies that each own and operate a glasshouse at the site – Kaaij Greenhouses UK, Rainbow Growers, and A&A, which owns a six-hectare glasshouse. Planning permission exists for a further four greenhouses on the site, making seven in total. In time for planting vine tomatoes in January 2013, an additional eight hectares of greenhouses were built.
Power
The complex is powered by combined heat and power systems that create heat, power and carbon dioxide (which is absorbed by the plants) for the greenhouses. Through a partnership with a virtual power plant, they also export their excess power to the grid and automatically add extra power to the grid at times of peak demand.
Controversies
It has been reported that during misty nights the lit glasshouses were a source of light pollution in the form of a clearly visible night glow. The company was quoted as saying that "For ventilation purposes we have to leave tiny gaps where the blinds meet. Even when the blinds are fully closed we estimate that approximately 2 per cent of area is uncovered."
Media
As the first of its kind in the UK, the Thanet Earth project received minor but national press coverage.
See also
Thanet Offshore Wind Project
References
External links
Official Website
Virtual Tour
Greenhouses in the United Kingdom
Hydroponics
Farms in Kent
Thanet
Companies based in Kent
Intensive farming | Thanet Earth | Chemistry | 475 |
22,132,096 | https://en.wikipedia.org/wiki/Laser%20ablation%20synthesis%20in%20solution | Laser ablation synthesis in solution (LASiS) is a commonly used method for obtaining colloidal solutions of nanoparticles in a variety of solvents. Nanoparticles (NPs) are useful in chemistry, engineering and biochemistry due to their large surface-to-volume ratio, which gives them unique physical properties. LASiS is considered a "green" method because it does not use toxic chemical precursors to synthesize nanoparticles.
In the LASiS method, nanoparticles are produced by a laser beam hitting a solid target in a liquid; the nanoparticles form during the condensation of the resulting plasma plume. Since the ablation occurs in a liquid rather than in air, vacuum or gas, the environment confines the plume more strongly, so its expansion, cooling and condensation take place at higher temperature, pressure and density. These conditions yield smaller, more refined nanoparticles. LASiS is usually considered a top-down physical approach. It emerged as a reliable alternative to traditional chemical reduction methods for obtaining noble metal nanoparticles (NMNp). LASiS is also used for the synthesis of silver nanoparticles (AgNPs), which are known for their antimicrobial effects. Producing AgNPs via LASiS yields nanoparticles with varying antimicrobial characteristics, owing to the fine tuning of NP size that liquid ablation allows.
Pros and Cons
LASiS has some limitations in the size control of NMNp, which can be overcome by laser treatments of NMNp. Other cons of LASiS include the slow rate of NP production, high energy consumption, the cost of laser equipment, and decreased ablation efficiency with longer laser usage within a session. Pros of LASiS include minimal waste production, minimal manual operation, and refined size control of nanoparticles.
References
Nanoparticles
Plasma technology and applications
Chemical synthesis | Laser ablation synthesis in solution | Physics,Chemistry,Materials_science | 411 |
4,349,040 | https://en.wikipedia.org/wiki/Dipper%20%28Chinese%20constellation%29 | The Dipper mansion (斗宿, pinyin: Dǒu Xiù) is one of the Twenty-eight mansions of the Chinese constellations. It is one of the northern mansions of the Black Tortoise. In Taoism, it is known as the "Six Stars of the Southern Dipper" (南斗六星, Nándǒu liù xīng), in contrast to the Big Dipper north of this mansion.
Asterisms
Stars
ζ Sgr
τ Sgr
σ Sgr
φ Sgr
λ Sgr
μ Sgr
References
Chinese constellations | Dipper (Chinese constellation) | Astronomy | 114 |
10,158,584 | https://en.wikipedia.org/wiki/Kicker%20magnet | Kicker magnets are dipole magnets used to rapidly switch a particle beam between two paths. Conceptually similar to a railroad switch in function, a kicker magnet must switch on very rapidly, then maintain a stable magnetic field for some minimum time. Switch-off time is also important, but less critical.
An injection kicker magnet merges two beams incoming from different directions. Most commonly, there is a beam circulating in a synchrotron, in the form of a particle train which only partially fills the arc. As soon as the circulating particle train has passed the kicker, it is switched on so that an additional batch of particles may be appended to the train. The magnet must then be switched off in time to not affect the head of the train when it next rounds the synchrotron.
An ejection kicker magnet does the opposite, diverting a circulating beam so it leaves the synchrotron. Almost always, an ejection kicker is used to eject the entire particle train, emptying the synchrotron. This means that it has the entire tail-to-head gap in the synchrotron to function, and the switch-off time is essentially irrelevant. However, it must hold a stable field for longer (one full rotation of the synchrotron), and must generate a stronger magnetic field, as it is used to eject a higher energy beam that has been accelerated in the synchrotron.
The magnets are powered by a high voltage (usually in the range of tens of thousands of volts) source called a power modulator which uses a pulse forming network to produce a short pulse of current (usually in the range of a few nanoseconds to a microsecond and thousands of amperes in amplitude). The current produces a magnetic field in the magnet, which in turn imparts a Lorentz force on the particles as they traverse the magnet's length, causing the beam to deflect into the proper trajectory.
Because a kicker magnet applies a particular lateral impulse to the beam, to achieve a fixed deflection angle the strength of the kick must be accurately matched to the momentum of the particles. This is part of the power modulator's job.
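A back-of-the-envelope sketch of this matching, assuming a small deflection angle and the standard beam-rigidity relation p [GeV/c] ≈ 0.2998·B·ρ [T·m]; the field, length and momentum values below are purely illustrative:

```python
import math

def kick_angle_mrad(b_tesla, length_m, p_gev_per_c, charge=1):
    """Small-angle deflection (in mrad) of a charged particle by a kicker.

    Derived from the beam-rigidity relation p [GeV/c] = 0.2998 * B*rho [T*m],
    so theta ~ 0.2998 * q * B * L / p for small angles.
    """
    return 1e3 * 0.2998 * charge * b_tesla * length_m / p_gev_per_c

# Illustrative numbers: a 0.05 T kicker field over 1 m acting on a 26 GeV/c proton beam
print(kick_angle_mrad(0.05, 1.0, 26.0))  # ~0.58 mrad
```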
References
Accelerator physics | Kicker magnet | Physics | 460 |
40,654,603 | https://en.wikipedia.org/wiki/Conodurine | Conodurine is an acetylcholinesterase inhibitor and butyrylcholinesterase inhibitor isolated from Tabernaemontana.
See also
Conolidine
Confoline
References
Acetylcholinesterase inhibitors
Alkaloids found in Apocynaceae
Indole alkaloids
Tertiary amines | Conodurine | Chemistry | 65 |
564,685 | https://en.wikipedia.org/wiki/Digital%20control | Digital control is a branch of control theory that uses digital computers to act as system controllers.
Depending on the requirements, a digital control system can range from a microcontroller to an ASIC to a standard desktop computer.
Since a digital computer is a discrete system, the Laplace transform is replaced with the Z-transform. Since a digital computer has finite precision (see quantization), extra care is needed to ensure that errors in coefficients, analog-to-digital conversion, digital-to-analog conversion, etc. do not produce undesired or unplanned effects.
Since the creation of the first digital computers in the early 1940s, the price of digital computers has dropped considerably, which has made them key pieces of control systems: they are easy to configure and reconfigure through software, can scale to the limits of the memory or storage space without extra cost, allow program parameters to change with time (see adaptive control), and are much less prone to environmental conditions than capacitors, inductors, etc.
Digital controller implementation
A digital controller is usually cascaded with the plant in a feedback system. The rest of the system can either be digital or analog.
Typically, a digital controller requires:
Analog-to-digital conversion to convert analog inputs to machine-readable (digital) format
Digital-to-analog conversion to convert digital outputs to a form that can be input to a plant (analog)
A program that relates the outputs to the inputs
Output program
Outputs from the digital controller are functions of current and past input samples, as well as past output samples - this can be implemented by storing relevant values of input and output in registers. The output can then be formed by a weighted sum of these stored values.
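A minimal sketch of such a controller as a difference equation, with past samples held in register-like deques; the coefficients below are illustrative, not taken from any particular design:

```python
from collections import deque

def make_controller(b, a):
    """Difference-equation controller:
    u[k] = b[0]*e[k] + b[1]*e[k-1] + ... - a[0]*u[k-1] - a[1]*u[k-2] - ...
    Past input and output samples are stored in deques, mirroring the
    register-based implementation described above.
    """
    e_hist = deque([0.0] * len(b), maxlen=len(b))
    u_hist = deque([0.0] * len(a), maxlen=max(len(a), 1))

    def step(e):
        e_hist.appendleft(e)
        u = sum(bi * ei for bi, ei in zip(b, e_hist))
        u -= sum(ai * ui for ai, ui in zip(a, u_hist))
        u_hist.appendleft(u)
        return u

    return step

# Example: a discrete PI-like law u[k] = u[k-1] + 1.2*e[k] - 0.9*e[k-1]
pi = make_controller(b=[1.2, -0.9], a=[-1.0])
for e in [1.0, 0.5, 0.2]:
    print(pi(e))
```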
The programs can take numerous forms and perform many functions
A digital filter for low-pass filtering
A state space model of a system to act as a state observer
A telemetry system
Stability
Although a controller may be stable when implemented as an analog controller, it could be unstable when implemented as a digital controller due to a large sampling interval. Sampling introduces aliasing, which modifies the cutoff parameters. Thus the sample rate characterizes the transient response and stability of the compensated system, and the controller input must be updated often enough so as not to cause instability.
When substituting the frequency into the z operator, the regular stability criteria still apply to discrete control systems. The Nyquist criterion applies to z-domain transfer functions, as it is general for complex-valued functions; the Bode stability criteria apply similarly.
The Jury criterion determines discrete-system stability from its characteristic polynomial.
Design of digital controller in s-domain
The digital controller can also be designed in the s-domain (continuous). The Tustin transformation can transform the continuous compensator to the respective digital compensator. The digital compensator will achieve an output that approaches the output of its respective analog controller as the sampling interval is decreased.
Tustin transformation deduction
The Tustin transformation is the Padé(1,1) approximation of the exponential function z = e^(sT):

z = e^(sT) ≈ (1 + sT/2) / (1 − sT/2)

And its inverse

s ≈ (2/T) · (z − 1) / (z + 1)
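A sketch of a Tustin discretization using SciPy's cont2discrete with the 'bilinear' (Tustin) method; the lead compensator and sample time below are illustrative:

```python
import numpy as np
from scipy import signal

# Continuous lead compensator C(s) = (s + 1) / (s + 10), chosen for illustration
num, den = [1.0, 1.0], [1.0, 10.0]

T = 0.01  # 10 ms sample time
numd, dend, _ = signal.cont2discrete((num, den), T, method='bilinear')
print(np.squeeze(numd), dend)  # discrete compensator coefficients in z

# As T is decreased, the discrete compensator's response approaches the analog one.
```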
Digital control theory is the technique of designing strategies in discrete time (and/or quantized amplitude, and/or binary coded form) to be implemented in computer systems (microcontrollers, microprocessors) that will control the analog (continuous in time and amplitude) dynamics of analog systems. From this consideration many errors of classical digital control were identified and solved, and new methods were proposed:
Marcelo Tredinnick and Marcelo Souza and their new type of analog-digital mapping
Yutaka Yamamoto and his "lifting function space model"
Alexander Sesekin and his studies about impulsive systems.
M.U. Akhmetov and his studies about impulsive and pulse control
Design of digital controller in z-domain
The digital controller can also be designed in the z-domain (discrete). The Pulse Transfer Function (PTF) represents the digital viewpoint of the continuous process when interfaced with appropriate ADC and DAC, and for a specified sample time T is obtained as:

G(z) = (1 − z⁻¹) Z{ G(s) / s }

Where Z{·} denotes the z-transform for the chosen sample time T. There are many ways to directly design a digital controller to achieve a given specification. For a type-0 system under unity negative feedback control, Michael Short and colleagues have shown that a relatively simple but effective method to synthesize a controller for a given (monic) closed-loop denominator polynomial and preserve the (scaled) zeros of the PTF numerator is to use the design equation:
Where the scalar term ensures the controller exhibits integral action, and a steady-state gain of unity is achieved in the closed-loop. The resulting closed-loop discrete transfer function from the z-Transform of reference input to the z-Transform of process output is then given by:
Since process time delay manifests as leading co-efficient(s) of zero in the process PTF numerator , the synthesis method above inherently yields a predictive controller if any such delay is present in the continuous plant.
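The zero-order-hold PTF above can be computed numerically; a sketch using SciPy, with an illustrative plant and sample time:

```python
import numpy as np
from scipy import signal

# Continuous plant G(s) = 1 / (s^2 + s), sampled through a ZOH DAC and ideal ADC
plant = ([1.0], [1.0, 1.0, 0.0])
T = 0.1  # sample time, seconds

# Pulse transfer function G(z) = (1 - z^-1) * Z{ G(s)/s }, evaluated via the
# zero-order-hold method
numd, dend, _ = signal.cont2discrete(plant, T, method='zoh')
print(np.squeeze(numd), dend)
```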
See also
Sampled data systems
Adaptive control
Analog control
Control theory
Digital
Feedback, Negative feedback, Positive feedback
Laplace transform
Real-time control
Z-transform
References
Franklin, G.F.; Powell, J.D.; Emami-Naeini, A. Digital Control of Dynamical Systems, 3rd ed. (1998). Ellis-Kagle Press, Half Moon Bay, CA.
Katz, P. Digital Control Using Microprocessors. Englewood Cliffs: Prentice-Hall, 293 p. 1981.
Ogata, K. Discrete-Time Control Systems. Englewood Cliffs: Prentice-Hall, 984 p. 1987.
Phillips, C.L.; Nagle, H.T. Digital Control System Analysis and Design. Englewood Cliffs, New Jersey: Prentice Hall International. 1995.
Fadali, M. Sami; Visioli, Antonio (2009). Digital Control Engineering. Academic Press.
Jury, E.I. Sampled-Data Control Systems. New York: John Wiley. 1958.
Control theory | Digital control | Mathematics | 1,251
7,349,608 | https://en.wikipedia.org/wiki/Dennis%20Sullivan | Dennis Parnell Sullivan (born February 12, 1941) is an American mathematician known for his work in algebraic topology, geometric topology, and dynamical systems. He holds the Albert Einstein Chair at the Graduate Center of the City University of New York and is a distinguished professor at Stony Brook University.
Sullivan was awarded the Wolf Prize in Mathematics in 2010 and the Abel Prize in 2022.
Early life and education
Sullivan was born in Port Huron, Michigan, on February 12, 1941. His family moved to Houston soon afterwards.
He entered Rice University to study chemical engineering but switched his major to mathematics in his second year after encountering a particularly motivating mathematical theorem. The change was prompted by a special case of the uniformization theorem.
He received his Bachelor of Arts degree from Rice University in 1963. He obtained his Doctor of Philosophy from Princeton University in 1966 with his thesis, Triangulating homotopy equivalences, under the supervision of William Browder.
Career
Sullivan worked at the University of Warwick on a NATO Fellowship from 1966 to 1967. He was a Miller Research Fellow at the University of California, Berkeley from 1967 to 1969 and then a Sloan Fellow at Massachusetts Institute of Technology from 1969 to 1973. He was a visiting scholar at the Institute for Advanced Study in 1967–1968, 1968–1970, and again in 1975.
Sullivan was an associate professor at Paris-Sud University from 1973 to 1974, and then became a permanent professor at the Institut des Hautes Études Scientifiques (IHÉS) in 1974. In 1981, he became the Albert Einstein Chair in Science (Mathematics) at the Graduate Center of the City University of New York and reduced his duties at the IHÉS to a half-time appointment. He joined the mathematics faculty at Stony Brook University in 1996 and left the IHÉS the following year.
Sullivan was involved in the founding of the Simons Center for Geometry and Physics and is a member of its board of trustees.
Research
Topology
Geometric topology
Along with Browder and his other students, Sullivan was an early adopter of surgery theory, particularly for classifying high-dimensional manifolds. His thesis work was focused on the Hauptvermutung.
In an influential set of notes in 1970, Sullivan put forward the radical concept that, within homotopy theory, spaces could directly "be broken into boxes" (or localized), a procedure hitherto applied to the algebraic constructs made from them.
The Sullivan conjecture, proved in its original form by Haynes Miller, states that the classifying space BG of a finite group G is sufficiently different from any finite CW complex X, that it maps to such an X only 'with difficulty'; in a more formal statement, the space of all mappings BG to X, as pointed spaces and given the compact-open topology, is weakly contractible. Sullivan's conjecture was also first presented in his 1970 notes.
Sullivan and Daniel Quillen (independently) created rational homotopy theory in the late 1960s and 1970s. It examines "rationalizations" of simply connected topological spaces with homotopy groups and singular homology groups tensored with the rational numbers, ignoring torsion elements and simplifying certain calculations.
Kleinian groups
Sullivan and William Thurston generalized Lipman Bers' density conjecture from singly degenerate Kleinian surface groups to all finitely generated Kleinian groups in the late 1970s and early 1980s. The conjecture states that every finitely generated Kleinian group is an algebraic limit of geometrically finite Kleinian groups, and was independently proven by Ohshika and Namazi–Souto in 2011 and 2012 respectively.
Conformal and quasiconformal mappings
The Connes–Donaldson–Sullivan–Teleman index theorem is an extension of the Atiyah–Singer index theorem to quasiconformal manifolds due to a joint paper by Simon Donaldson and Sullivan in 1989 and a joint paper by Alain Connes, Sullivan, and Nicolae Teleman in 1994.
In 1987, Sullivan and Burton Rodin proved Thurston's conjecture about the approximation
of the Riemann map by circle packings.
String topology
Sullivan and Moira Chas started the field of string topology, which examines algebraic structures on the homology of free loop spaces. They developed the Chas–Sullivan product to give a partial singular homology analogue of the cup product from singular cohomology. String topology has been used in multiple proposals to construct topological quantum field theories in mathematical physics.
Dynamical systems
In 1975, Sullivan and Bill Parry introduced the topological Parry–Sullivan invariant for flows in one-dimensional dynamical systems.
In 1985, Sullivan proved the no-wandering-domain theorem. This result was described by mathematician Anthony Philips as leading to a "revival of holomorphic dynamics after 60 years of stagnation."
Awards and honors
1971 Oswald Veblen Prize in Geometry
1981 Prix Élie Cartan, French Academy of Sciences
1983 Member, National Academy of Sciences
1991 Member, American Academy of Arts and Sciences
1994 King Faisal International Prize for Science
2004 National Medal of Science
2006 Steele Prize for lifetime achievement
2010 Wolf Prize in Mathematics, for "his contributions to algebraic topology and conformal dynamics"
2012 Fellow of the American Mathematical Society
2014 Balzan Prize in Mathematics (pure or applied)
2022 Abel Prize
Personal life
Sullivan is married to fellow mathematician Moira Chas.
See also
Assembly map
Double bubble conjecture
Flexible polyhedron
Formal manifold
Loch Ness monster surface
Normal invariant
Ring lemma
Rummler–Sullivan theorem
Ruziewicz problem
References
External links
Sullivan's homepage at the City University of New York
Sullivan's homepage at Stony Brook University
Dennis Sullivan International Balzan Prize Foundation
1941 births
Living people
20th-century American mathematicians
21st-century American mathematicians
Abel Prize laureates
Dynamical systems theorists
CUNY Graduate Center faculty
Fellows of the American Mathematical Society
Homotopy theory
Mathematicians from Michigan
Members of the United States National Academy of Sciences
National Medal of Science laureates
Princeton University alumni
Recipients of the Great Cross of the National Order of Scientific Merit (Brazil)
Rice University alumni
Stony Brook University faculty
American topologists
Wolf Prize in Mathematics laureates | Dennis Sullivan | Mathematics | 1,239 |
31,271,694 | https://en.wikipedia.org/wiki/Elasticity%20%28data%20store%29 | The elasticity of a data store relates to the flexibility of its data model and clustering capabilities. The greater the number of data model changes that can be tolerated, and the more easily the clustering can be managed, the more elastic the data store is considered to be.
Types
Clustering elasticity
Clustering elasticity is the ease of adding or removing nodes from the distributed data store. Usually, this is a difficult and delicate task to be done by an expert in a relational database system. Some NoSQL data stores, like Apache Cassandra, have an easy solution: a node can be added or removed with a few changes in the configuration properties and by specifying at least one seed node.
Data-modelling elasticity
Relational databases are most often very inelastic, as they have a predefined data model that can only be adapted through redesign. Most NoSQL data stores, however, do not have a fixed schema. Each row can have a different number and even different types of columns. From the data store's perspective, schema modifications are no problem. This makes this kind of data store more elastic concerning the data model. The drawback is that the programmer has to take into account that the data model may change over time.
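A minimal in-memory illustration of this flexibility, using plain Python dicts rather than any particular NoSQL store; the field names are made up for the example:

```python
# Rows in a schema-less store may differ in both the number and type of columns.
rows = [
    {"id": 1, "name": "Alice", "age": 34},
    {"id": 2, "name": "Bob", "email": "bob@example.com"},  # no "age", extra "email"
    {"id": 3, "tags": ["a", "b"]},                         # different columns entirely
]

# The application code, not the store, must cope with the varying shape:
for row in rows:
    print(row.get("name", "<unknown>"), sorted(row.keys()))
```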
References
See also
Apache Cassandra
Oracle NoSQL Database
Data store
Data modeling
Databases | Elasticity (data store) | Engineering | 269 |
46,911,081 | https://en.wikipedia.org/wiki/Tanzanian%20units%20of%20measurement | A number of units of measurement have been used in Tanzania to measure length, mass, capacity, etc. The metric system was adopted in Tanzania from 1967 to 1969.
System before metric system
A number of units were used.
Zanzibar
Several units were used in Zanzibar
Length
One ohra was equal to 0.571 m (22.48 in).
Weight
One bazla was equal to 15.525 kg (32.226 lb). One mane was equal to 2.0071 lb. One franzella of 36 rotoli was equal to 35.2822 lb.
Capacity
One djezla was equal to 257.4 L (7.305 bushels).
References
Culture of Tanzania
Tanzania | Tanzanian units of measurement | Mathematics | 148 |
11,470,145 | https://en.wikipedia.org/wiki/Carousel%20feeding | Carousel feeding is a cooperative hunting method used by Norwegian orcas (Orcinus orca) to capture wintering Norwegian spring-spawning herring (Clupea harengus). The term carousel feeding was first used to describe a similar hunting behaviour in bottlenose dolphins (Tursiops truncatus) in the Black Sea. There are two main phases of carousel feeding in orcas, the herding phase and the feeding phase. In the herding phase the orcas surround a school of herring and herd them into a tight ball. They tighten the ball by blowing bubbles, flashing their white undersides and slapping their tails on the surface. They move the ball of herring toward the surface of the water before initiating the feeding phase. During the feeding phase several orcas begin to eat while the others continue herding the fish to maintain the ball. The feeding orcas whip their tails into the ball to stun and kill several herring at a time. The dead and stunned herring are then consumed and their heads and spines discarded.
Herding
Carousel feeding begins when an orca pod locates a school of herring. This is primarily done by echolocation. Orcas can detect herring at a much greater distance than the herring can detect the predator. This gives the orcas an advantage over the herring. The matriarch orca leads the pod (group of 3–9 orcas) in splitting the herring school into a smaller more manageable group. The orcas then circle the herring forcing them into a ball shape. The diameter of the tight ball can range anywhere between two and seven meters. During this period the orcas are highly vocal including clicks and whistling. While tightening the herring ball the orcas push their prey towards the surface of the water. It has been speculated that surface feeding is beneficial because the animals do not have to deep dive so energy is saved, and since the pressure is less intense each tail strike is more effective. In addition, the light conditions are better so the orcas can be more accurate, and the sea surface provides a barrier for the prey. During the herding process herring can be seen jumping at the sea surface. The final stages of herding include blowing bubbles to tighten the ball, flashing the orca's white underbelly to blind and disorient the herring, and slapping the sea surface with their tails. Once the herring are tightly compacted into a conical or elliptical shape near the surface the feeding stage begins.
Feeding
The second stage of carousel feeding is when the orcas consume their prey. A portion of the pod feeds while the rest continues herding; after a while the roles switch so all the orcas in the pod get a chance to feed. There are always more orcas herding than there are feeding to ensure the herring ball stays tight. The feeding orcas whip their tails into the herring to stun and kill them. The stunning is a result of the loud noise and physical contact of the tail and the fish. In addition, the fish are debilitated by the pressure change and turbulence, which makes it easy for the orcas to catch them. The orcas consume the stunned and dead herring and spit out the heads and spines. The orcas can catch and kill up to 15 herring with each successful slap. Once the orcas are satisfied they release the remaining herring. A carousel feeding event can last from ten minutes to three hours depending on the herring available and the number of orcas in the pod, as well as environmental conditions.
Ecological impacts
Orcas
Cooperative feeding is a strategy common to social carnivores like orcas. Some of the strategies orcas employ include producing large waves to knock seals off ice floes, or even beaching themselves to catch sea lions. The strategies orcas develop depend on their typical prey type and the most efficient method to capture them considering environmental conditions. Norwegian orcas have developed carousel feeding because it is an effective method to capture spring-spawning herring.
Carousel feeding teaches young individuals important hunting skills. This gives the orcas an evolutionary advantage that helps them ensure the survival of their young. In a K-selected species like orcas parental investment in offspring is very important. This includes teaching the offspring skills to improve their chances of survival. Young orcas learn prey-specific hunting techniques by both imitation of conspecifics and social learning. The amount of parental involvement in learning a hunting behaviour depends on how risky it is. Since carousel feeding is not very risky for young orcas they can participate as soon as they are able. For hunting behaviours that involve sharks or beaching themselves the young orcas are slowly introduced.
The abundance of orcas in a particular area is related to the distribution of spring-spawn herring. Norwegian orcas are most commonly seen in the waters near northern Norway in the fall. This coincides with the wintering of spring-spawning herring. Cod and saithe are also common in the area but studies examining the stomach contents of Norwegian killer whales show the primary biomass consumed is from herring. This is likely due to herring being the most abundant at this time. In the summer many Norwegian orca pods move to the coast of the Lofoten and Vesterålen islands where adolescent herring, mackerel and saithe are more abundant. In addition, whales return to the same regions each season where herring are abundant. Since herring are the Norwegian orca's main food source it is clearly adaptive to have produced a cooperative feeding method specific to the predation of this species.
Herring
Herring are an important food source for Norwegian orcas; therefore, their distributions influence each other. Herring influence the distribution of orcas and orcas can have a large impact on the population of herring. The herring population is not completely depleted by orcas because they never eat the whole herring ball during the feeding phase of carousel feeding. The herring that are not consumed are able to escape from the orcas. This means the orcas do not completely deplete their food source and potentially the strongest herring will survive the event.
Herring also have certain adaptive behaviours to protect themselves against predation. The schooling behaviour of herring is largely the reason that many can survive these predation events. Herring are known to form very dense schools when swimming in the more dangerous open ocean, and smaller, less dense schools closer to the shore where there is less risk of predation. Their ability to change behaviour depending on the situation allows more herring to survive an attack. The selective pressure just one orca pod can put on herring in the area can emphasize anti-predator behaviours over feeding or mating behaviours. Another example of anti-predator behaviour is the release of gas bubbles: orcas force the herring up from large depths, which causes the herring to release gas bubbles in an attempt to disorient predators visually and acoustically.
Carousel feeding can provide food for other species as well as orcas. For example, during a feeding event when the herring have been pushed to the surface of the water, seabirds are often seen feeding on the herring from above. In addition, stunned herring that are left behind by the orcas can be consumed by other fish.
References
External links
Killer Whale Hunting Strategies
Carousel Feeding
Orcas
Ethology | Carousel feeding | Biology | 1,468 |
5,634,235 | https://en.wikipedia.org/wiki/Small%20cleaved%20cells | Small cleaved cells are a distinctive type of cell that appears in certain types of lymphoma.
When used to uniquely identify a type of lymphoma, they are usually categorized as follicular or diffuse.
The "small cleaved cells" are usually centrocytes that express B-cell markers such as CD20. The disease is strongly correlated with the genetic translocation t(14;18), which results in juxtaposition of the bcl-2 proto-oncogene with the heavy chain JH locus, and thus in overexpression of bcl-2. Bcl-2 is a well known anti-apoptotic gene, and thus its overexpression results in the "failure to die" motif of cancer seen in follicular lymphoma.
Follicular lymphoma must be carefully monitored, as it often progresses into a more aggressive "Diffuse Large B-Cell Lymphoma."
External links
Histopathology | Small cleaved cells | Chemistry | 215 |
26,239,776 | https://en.wikipedia.org/wiki/26%20Draconis | 26 Draconis is a triple star system in the constellation Draco, located 46 light years from the Sun. Two of the system components, A and B, form a spectroscopic binary that completes an orbit every 76 years. The composite spectral classification of the AB pair is G0V, which decomposes to individual spectral types F9V and K3V. A 1962 study estimated the masses of these two stars as 1.30 and 0.83 times the mass of the Sun, respectively. The stars are considered moderately metal-poor compared to the Sun, which means they have a lower proportion of elements other than hydrogen or helium.
Gliese 685
The third component, GJ 685, is a red dwarf with a spectral classification of M0V. As of 1970, this star is separated by 737.9 arc seconds from the AB pair, and they share a common proper motion. GJ 685 has one known orbiting planet, detected by the radial velocity method.
The space velocity components of 26 Draconis are U = +36.5, V = −4.3 and W = −21.8 km/s. This system is on an orbit through the Milky Way galaxy that has an eccentricity of 0.14, periodically carrying it closer to and farther from the galactic core. The inclination of this orbit carries the star system above the plane of the galactic disk. This system may be a member of the Ursa Major moving group.
References
Triple star systems
G-type main-sequence stars
K-type main-sequence stars
M-type main-sequence stars
Solar-type stars
Draco (constellation)
Durchmusterung objects | 26 Draconis | Astronomy | 362
42,653,446 | https://en.wikipedia.org/wiki/Jesper%20deClaville%20Christiansen | Jesper deClaville Christiansen (born 30 June 1963 in Skive, Denmark) is a Danish professor in materials science and technology. Professor Christiansen is known for his work in the mechanics of polymers, diffusion, rheology, and especially micro- and nanocomposites.
Professor Jesper deClaville Christiansen was knighted (Ridder af Dannebrog) on 11 April 2014 by Queen Margrethe II of Denmark.
Professor Christiansen received his PhD degree in 1989 after joint studies at Aalborg University, Denmark and Brunel University in London, U.K. He was appointed Professor in Materials Science in 1998, when a five-year research professorship in the rheology of silicates established a chair in Materials Science.
Since 1 October 2012 Professor Christiansen has been coordinator of the European Community Research Programme FP7 large-scale project EVOLUTION under the "Green Car" initiative, which aims at a new electric car 40% lighter than existing cars, using green materials and technology (12 million euros). He was also coordinator of the successful European Community Research Programme FP7 large-scale project Nanotough (2008–2011), in which light, tough and strong nanocomposites were developed for space and automotive applications.
Professor Christiansen is active as a reviewer for several journals, including: Langmuir; Journal of Polymer Science: Polymer Physics; Macromolecular Materials and Engineering; Oil & Gas Science and Technology – Revue de l'IFP; Composites Part A; Materials Science and Engineering; Geochimica et Cosmochimica Acta; Journal of Engineering Education; Journal of Non-Newtonian Fluid Mechanics; American Mineralogist; Polymers and Polymer Composites; Journal of Rheology; and Polymer Engineering and Science.
Professor Christiansen is head of the doctoral programme in Mechanical and Manufacturing Engineering at Aalborg University.
Professor Christiansen is the author or co-author of more than 200 publications.
References
Living people
People from Skive Municipality
1963 births
Academic staff of Aalborg University
Materials scientists and engineers
Alumni of Brunel University London
Aalborg University alumni | Jesper deClaville Christiansen | Materials_science,Engineering | 423 |
236,851 | https://en.wikipedia.org/wiki/Dictaphone | Dictaphone was an American company founded by Alexander Graham Bell that produced dictation machines. It is now a division of Nuance Communications, based in Burlington, Massachusetts.
Although the name "Dictaphone" is a trademark, it has become genericized as a means to refer to any dictation machine.
History
The Volta Laboratory was established by Alexander Graham Bell in Washington, D.C. in 1881. When the Laboratory's sound-recording inventions were sufficiently developed with the assistance of Charles Sumner Tainter and others, Bell and his associates set up the Volta Graphophone Company, which later merged with the American Graphophone Company (founded in 1887) which itself later evolved into Columbia Records (founded as the Columbia Phonograph Company in 1889).
The name "Dictaphone" was trademarked in 1907 by the Columbia Graphophone Company, which soon became the leading manufacturer of such devices. This perpetuated the use for voice recording of wax cylinders, which had otherwise been eclipsed by disc-based technology. Dictaphone was spun off into a separate company in 1923 under the leadership of C. King Woodbridge.
In 1947, having relied on wax-cylinder recording to the end of World War II, Dictaphone introduced its Dictabelt technology. This cut a mechanical groove into a Lexan plastic belt instead of into a wax cylinder. The advantage of the Lexan belt was that recordings were permanent and admissible in court. Eventually IBM introduced a dictating machine using an erasable belt made of magnetic tape which enabled the user to correct dictation errors rather than marking errors on a paper tab. Dictaphone in turn added magnetic recording models while still selling the models recording on the Lexan belts. Machines based on magnetic tape recording were introduced in the late seventies, initially using the standard compact (or "C") cassette, but soon, in dictation machines, using mini-cassettes or microcassettes instead. Using smaller cassette sizes was important to the manufacturer for reducing the size of portable recorders.
Walter D. Fuller became the director of the company in 1952. In 1969 he was appointed as chairman.
In Japan, JVC was licensed to produce machines designed and developed by Dictaphone. Dictaphone and JVC later developed the picocassette, released in 1985, which was even smaller than a microcassette but retained a good recording quality and duration.
Dictaphone also developed "endless loop" recording using magnetic tape, introduced in the mid-seventies as the "Thought Tank". The recording medium did not need to be moved from where the dictation took place to the location where the typists worked, such as a typing pool. This was normally operated via a dedicated in-house telephone system, enabling dictation to be made from a variety of locations within the hospital or other organizations with typing pools. One version calculated each typist's turnaround time and allocated the next piece of dictation accordingly.
Dictaphone was prominent in the provision of multi-channel recorders, used extensively in the emergency services to record emergency telephone calls (to numbers such as 911, 999, 112) and subsequent conversations.
Additionally, Dictaphone at one point expanded its product line to market a line of electronic (desktop and portable) calculators.
In 1979, Dictaphone was purchased by Pitney Bowes and kept as a wholly owned but independent subsidiary.
Dictaphone bought Dual Display Word Processor, a stiff competitor to Wang Laboratories, the industry leader.
In 1982, it marketed a word processor from Symantec. The hardware sold for $5,950 in 1982. The software was an additional $600. The advent of the personal computer, MS-DOS, and general-purpose word-processing software saw the demise of the dedicated word-processor, and the division was closed.
In 1995, Pitney Bowes sold Dictaphone to the investment group Stonington Partners of Connecticut for a reported $462 million. Dictaphone thereafter sold a range of products that included speech-recognition and voicemail software with limited success as the market only existed among some early adopters despite its vertical markets' enhancements.
In 2000, Dictaphone was acquired by the then-leading Belgian voice-recognition and translation company Lernout & Hauspie for nearly $1 billion. Lernout & Hauspie provided the voice-recognition technology for Dictaphone's enhanced voice-recognition-based transcription system. Soon after the purchase, however, the SEC raised questions about Lernout & Hauspie's finances, focusing on the supposedly skyrocketing income reported from its East Asian endeavors. Subsequently, the company and all its subsidiaries, including Dictaphone, were forced into bankruptcy protection.
In early 2002, Dictaphone emerged from bankruptcy as a privately held organization, with Rob Schwager as its chairman and CEO. In 2004, it was split into three divisions:
IHS, focusing on dictation for the healthcare and medical industries;
IVS, focusing on dictation in law offices and police stations;
CRS (Communications Recording Solutions), focusing on voice logging and radios for use by public-safety organizations and quality-monitoring by call centers.
In June 2005, Dictaphone Corporation announced the sale of its Communications Recording Systems to NICE Systems for $38.5 million. This was considered a great bargain in the industry and came after NICE was ordered to pay Dictaphone $10 million in settlements related to a patent-infringement suit in late 2003.
In September 2005, Dictaphone sold its IVS business outside the United States to a private Swiss group around its former VP Martin Niederberger, who formed Dictaphone IVS AG (later Calison AG) in Urdorf, Switzerland and developed "FRISBEE", the first hardware-independent dictation-management software system with integrated speech-recognition and workflow management. In 2008, iSpeech AG took over the activities and products of the former Calison AG.
In February and March 2006, the remainder of Dictaphone was sold for $357 million to Nuance Communications (formerly ScanSoft), ending its short tenure as an independent company that had begun in 2002. This, in effect, closed a circle of events, as Dictaphone had been sold to Lernout & Hauspie prior to L&H's bankruptcy which resulted in Dictaphone becoming an independent company.
In March 2007, Nuance acquired Focus Informatics and, with the intention of further expansion in its healthcare-transcription business, linked it with its Dictaphone division.
See also
Carl Lindström Company, creator of the Parlophone company, which evolved into a music label that first released The Beatles albums
References
Products introduced in 1907
Audio storage
Brands that became generic
Office equipment
Sound recording technology | Dictaphone | Technology | 1,401 |
492,249 | https://en.wikipedia.org/wiki/Smith%20chart | The Smith chart (sometimes also called the Smith diagram, Mizuhashi chart, Mizuhashi–Smith chart, Volpert–Smith chart or Mizuhashi–Volpert–Smith chart) is a graphical calculator or nomogram designed for electrical and electronics engineers specializing in radio frequency (RF) engineering to assist in solving problems with transmission lines and matching circuits.
It was independently proposed by Tōsaku Mizuhashi in 1937, and by Amiel Volpert and Phillip H. Smith in 1939.
Starting with a rectangular diagram, Smith had developed a special polar coordinate chart by 1936, which, with the input of his colleagues Enoch B. Ferrell and James W. McRae, who were familiar with conformal mappings, was reworked into the final form in early 1937, which was eventually published in January 1939.
While Smith had originally called it a "transmission line chart" and other authors first used names like "reflection chart", "circle diagram of impedance", "immittance chart" or "Z-plane chart", early adopters at MIT's Radiation Laboratory started to refer to it simply as "Smith chart" in the 1940s, a name generally accepted in the Western world by 1950.
The Smith chart can be used to simultaneously display multiple parameters including impedances, admittances, reflection coefficients, scattering parameters, noise figure circles, constant gain contours and regions for unconditional stability. The Smith chart is most frequently used at or within the unity radius region. However, the remainder is still mathematically relevant, being used, for example, in oscillator design and stability analysis.
While the use of paper Smith charts for solving the complex mathematics involved in matching problems has been largely replaced by software based methods, the Smith chart is still a very useful method of showing how RF parameters behave at one or more frequencies, an alternative to using tabular information.
Thus most RF circuit analysis software includes a Smith chart option for the display of results and all but the simplest impedance measuring instruments can plot measured results on a Smith chart display.
Overview
The Smith chart is a mathematical transformation of the two-dimensional Cartesian complex plane. Complex numbers with positive real parts map inside the circle. Those with negative real parts map outside the circle.
If we are dealing only with impedances with non-negative resistive components, our interest is focused on the area inside the circle.
The transformation, for an impedance Smith chart, is:

Γ = (z − 1) / (z + 1)

where z = Z/Z₀ is the complex impedance Z normalized by the reference impedance Z₀. The impedance Smith chart is then an Argand plot of impedances thus transformed. Impedances with non-negative resistive components will appear inside a circle with unit radius; the origin will correspond to the reference impedance Z₀.
The Smith chart is plotted on the complex reflection coefficient plane in two dimensions and may be scaled in normalised impedance (the most common), normalised admittance or both, using different colours to distinguish between them. These are often known as the Z, Y and YZ Smith charts respectively. Normalised scaling allows the Smith chart to be used for problems involving any characteristic or system impedance which is represented by the center point of the chart. The most commonly used normalization impedance is 50 ohms. Once an answer is obtained through the graphical constructions described below, it is straightforward to convert between normalised impedance (or normalised admittance) and the corresponding unnormalized value by multiplying by the characteristic impedance (admittance). Reflection coefficients can be read directly from the chart as they are unitless parameters.
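A small numerical sketch of this normalisation and transformation; the load impedance is illustrative and Z₀ = 50 Ω is the common reference:

```python
Z0 = 50.0                  # reference (characteristic) impedance, ohms
ZT = 25.0 + 25.0j          # illustrative load impedance, ohms

z = ZT / Z0                # normalised impedance
gamma = (z - 1) / (z + 1)  # point plotted on the Smith chart

print(z, gamma, abs(gamma))  # |gamma| <= 1 for passive loads
```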
The Smith chart has a scale around its circumference or periphery which is graduated in wavelengths and degrees. The wavelengths scale is used in distributed component problems and represents the distance measured along the transmission line connected between the generator or source and the load to the point under consideration. The degrees scale represents the angle of the voltage reflection coefficient at that point. The Smith chart may also be used for lumped-element matching and analysis problems.
Use of the Smith chart and the interpretation of the results obtained using it requires a good understanding of AC circuit theory and transmission-line theory, both of which are prerequisites for RF engineers.
As impedances and admittances change with frequency, problems using the Smith chart can only be solved manually using one frequency at a time, the result being represented by a point. This is often adequate for narrow band applications (typically up to about 5% to 10% bandwidth) but for wider bandwidths it is usually necessary to apply Smith chart techniques at more than one frequency across the operating frequency band. Provided the frequencies are sufficiently close, the resulting Smith chart points may be joined by straight lines to create a locus.
A locus of points on a Smith chart covering a range of frequencies can be used to visually represent:
how capacitive or how inductive a load is across the frequency range
how difficult matching is likely to be at various frequencies
how well matched a particular component is.
The accuracy of the Smith chart is reduced for problems involving a large locus of impedances or admittances, although the scaling can be magnified for individual areas to accommodate these.
Mathematical basis
Actual and normalised impedance and admittance
A transmission line with a characteristic impedance of Z₀ may be universally considered to have a characteristic admittance of Y₀ where

Y₀ = 1 / Z₀

Any impedance Z_T expressed in ohms may be normalised by dividing it by the characteristic impedance, so the normalised impedance, using the lower case z_T, is given by

z_T = Z_T / Z₀

Similarly, for normalised admittance

y_T = Y_T / Y₀
The SI unit of impedance is the ohm with the symbol of the upper case Greek letter omega (Ω) and the SI unit for admittance is the siemens with the symbol of an upper case letter S. Normalised impedance and normalised admittance are dimensionless. Actual impedances and admittances must be normalised before using them on a Smith chart. Once the result is obtained it may be de-normalised to obtain the actual result.
The normalised impedance Smith chart
Using transmission-line theory, if a transmission line is terminated in an impedance (Z_T) which differs from its characteristic impedance (Z₀), a standing wave will be formed on the line comprising the resultant of both the incident or forward (V_F) and the reflected or reversed (V_R) waves. Using complex exponential notation:

V_F = A exp(jωt) exp(+γℓ) and
V_R = B exp(jωt) exp(−γℓ)

where

exp(jωt) is the temporal part of the wave
exp(±γℓ) is the spatial part of the wave and
ω = 2πf

where

ω is the angular frequency in radians per second (rad/s)
f is the frequency in hertz (Hz)
t is the time in seconds (s)
A and B are constants
ℓ is the distance measured along the transmission line from the load toward the generator in metres (m)

Also

γ = α + jβ is the propagation constant, which has SI units of radians per metre

where

α is the attenuation constant in nepers per metre (Np/m)
β is the phase constant in radians per metre (rad/m)

The Smith chart is used with one frequency (f) at a time, and only for one moment (t) at a time, so the temporal part of the phase (exp(jωt)) is fixed. All terms are actually multiplied by this to obtain the instantaneous phase, but it is conventional and understood to omit it. Therefore,

V_F = A exp(+γℓ) and
V_R = B exp(−γℓ)

where A and B are respectively the forward and reverse voltage amplitudes at the load.
The variation of complex reflection coefficient with position along the line
The complex voltage reflection coefficient Γ is defined as the ratio of the reflected wave to the incident (or forward) wave. Therefore,

Γ = V_R / V_F = [B exp(−γℓ)] / [A exp(+γℓ)] = C exp(−2γℓ)

where C is also a constant.

For a uniform transmission line (in which γ is constant), the complex reflection coefficient of a standing wave varies according to the position on the line. If the line is lossy (α is non-zero) this is represented on the Smith chart by a spiral path. In most Smith chart problems however, losses can be assumed negligible (α = 0) and the task of solving them is greatly simplified. For the loss free case therefore, the expression for complex reflection coefficient becomes

Γ = Γ_L exp(−2jβℓ)

where Γ_L is the reflection coefficient at the load, and ℓ is the line length from the load to the location where the reflection coefficient is measured. The phase constant β may also be written as

β = 2π / λ

where λ is the wavelength within the transmission line at the test frequency.

Therefore,

Γ = Γ_L exp(−4πjℓ / λ)
This equation shows that, for a standing wave, the complex reflection coefficient and impedance repeats every half wavelength along the transmission line. The complex reflection coefficient is generally simply referred to as reflection coefficient. The outer circumferential scale of the Smith chart represents the distance from the generator to the load scaled in wavelengths and is therefore scaled from zero to 0.50.
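A quick numerical check of this half-wavelength periodicity; the load reflection coefficient is illustrative:

```python
import numpy as np

# Gamma(l) = Gamma_L * exp(-2j*beta*l) on a loss-free line, with beta = 2*pi/lambda
gamma_L = 0.5 * np.exp(1j * np.deg2rad(60))      # illustrative load value
l_over_lambda = np.array([0.0, 0.25, 0.5])       # distances toward the generator
gamma = gamma_L * np.exp(-2j * 2 * np.pi * l_over_lambda)

print(np.abs(gamma))              # constant magnitude on a loss-free line
print(np.angle(gamma, deg=True))  # phase repeats after l = lambda/2
```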
The variation of normalised impedance with position along the line
If V_T and I_T are the voltage across and the current entering the termination at the end of the transmission line respectively, then

V_T = V_F + V_R

and

I_T = (V_F − V_R) / Z₀.

By dividing these equations and substituting for both the voltage reflection coefficient

Γ = V_R / V_F

and the normalised impedance of the termination represented by the lower case z, subscript T

z_T = V_T / (I_T Z₀)

gives the result:

z_T = (1 + Γ) / (1 − Γ)

Alternatively, in terms of the reflection coefficient

Γ = (z_T − 1) / (z_T + 1)

These are the equations which are used to construct the Z Smith chart. Mathematically speaking, Γ and z_T are related via a Möbius transformation.

Both Γ and z_T are expressed in complex numbers without any units. They both change with frequency, so for any particular measurement the frequency at which it was performed must be stated, together with the characteristic impedance.
Γ may be expressed in magnitude and angle on a polar diagram. Any actual reflection coefficient must have a magnitude of less than or equal to unity so, at the test frequency, this may be expressed by a point inside a circle of unity radius. The Smith chart is actually constructed on such a polar diagram. The Smith chart scaling is designed in such a way that reflection coefficient can be converted to normalised impedance or vice versa. Using the Smith chart, the normalised impedance may be obtained with appreciable accuracy by plotting the point representing the reflection coefficient, treating the Smith chart as a polar diagram, and then reading its value directly using the characteristic Smith chart scaling. This technique is a graphical alternative to substituting the values in the equations.
By substituting the expression for how the reflection coefficient changes along an unmatched loss-free transmission line,

Γ = Γ_L exp(−2jβℓ)

for the loss free case, into the equation for normalised impedance in terms of reflection coefficient,

z_T = (1 + Γ) / (1 − Γ)

and using Euler's formula

exp(jθ) = cos θ + j sin θ

yields the impedance-version transmission-line equation for the loss free case:

Z_in = Z₀ · (Z_T + jZ₀ tan(βℓ)) / (Z₀ + jZ_T tan(βℓ))

where Z_in is the impedance 'seen' at the input of a loss free transmission line of length ℓ, terminated with an impedance Z_T.
Versions of the transmission-line equation may be similarly derived for the admittance loss free case and for the impedance and admittance lossy cases.
The Smith chart graphical equivalent of using the transmission-line equation is to normalise Z_T, to plot the resulting point on a Smith chart and to draw a circle through that point centred at the Smith chart centre. The path along the arc of the circle represents how the impedance changes whilst moving along the transmission line. In this case the circumferential (wavelength) scaling must be used, remembering that this is the wavelength within the transmission line and may differ from the free space wavelength.
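A sketch of the loss-free transmission-line equation in code; the quarter-wave example illustrates the well-known impedance-inverting behaviour:

```python
import numpy as np

def z_in(Z0, ZT, l_over_lambda):
    """Zin = Z0*(ZT + j*Z0*tan(beta*l)) / (Z0 + j*ZT*tan(beta*l)),
    for a loss-free line of electrical length l/lambda terminated in ZT."""
    t = np.tan(2 * np.pi * l_over_lambda)
    return Z0 * (ZT + 1j * Z0 * t) / (Z0 + 1j * ZT * t)

# A quarter-wave line acts as an impedance inverter: Zin ~ Z0**2 / ZT
print(z_in(50.0, 100.0 + 0j, 0.25))  # ~25 ohms
```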
Regions of the Smith chart
If a polar diagram is mapped on to a cartesian coordinate system it is conventional to measure angles relative to the positive x-axis using a counterclockwise direction for positive angles. The magnitude of a complex number is the length of a straight line drawn from the origin to the point representing it. The Smith chart uses the same convention, noting that, in the normalised impedance plane, the positive x-axis extends from the center of the Smith chart at z_T = 1 (the matched point) towards the point z_T = ∞ (the open circuit). The region above the x-axis represents inductive impedances (positive imaginary parts) and the region below the x-axis represents capacitive impedances (negative imaginary parts).
If the termination is perfectly matched, the reflection coefficient will be zero, represented effectively by a circle of zero radius or in fact a point at the centre of the Smith chart. If the termination was a perfect open circuit or short circuit the magnitude of the reflection coefficient would be unity, all power would be reflected and the point would lie at some point on the unity circumference circle.
Circles of constant normalised resistance and constant normalised reactance
The normalised impedance Smith chart is composed of two families of circles: circles of constant normalised resistance and circles of constant normalised reactance. In the complex reflection coefficient plane the Smith chart occupies a circle of unity radius centred at the origin. In cartesian coordinates therefore the circle would pass through the points (+1,0) and (−1,0) on the -axis and the points (0,+1) and (0,−1) on the -axis.
Since both Γ and z_T are complex numbers, in general they may be written as:

z_T = a + jb
Γ = c + jd

with a, b, c and d real numbers.

Substituting these into the equation relating normalised impedance and complex reflection coefficient:

Γ = (z_T − 1) / (z_T + 1)

gives the following result:

Γ = c + jd = [(a² + b² − 1) + j·2b] / [(a + 1)² + b²]
This is the equation which describes how the complex reflection coefficient changes with the normalised impedance and may be used to construct both families of circles.
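The two circle families can be generated directly from the standard centre/radius formulas in the Γ-plane (constant resistance r: centre (r/(1+r), 0), radius 1/(1+r); constant reactance x: centre (1, 1/x), radius 1/|x|); a sketch:

```python
import numpy as np

def const_resistance_circle(r, n=181):
    """Circle of constant normalised resistance r in the Gamma-plane:
    centre (r/(1+r), 0), radius 1/(1+r)."""
    t = np.linspace(0, 2 * np.pi, n)
    return r/(1 + r) + np.cos(t)/(1 + r), np.sin(t)/(1 + r)

def const_reactance_circle(x, n=181):
    """Circle of constant normalised reactance x: centre (1, 1/x), radius 1/|x|.
    Only the arc inside the unit circle belongs to the chart."""
    t = np.linspace(0, 2 * np.pi, n)
    return 1 + np.cos(t)/abs(x), 1/x + np.sin(t)/abs(x)

# e.g. the r = 1 circle passes through the origin and the point (1, 0):
cx, cy = const_resistance_circle(1.0)
print(cx.min(), cx.max())  # 0.0, 1.0
```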
The normalised admittance Smith chart
The Y Smith chart is constructed in a similar way to the Z Smith chart case, but by expressing values of voltage reflection coefficient in terms of normalised admittance instead of normalised impedance. The normalised admittance y_T is the reciprocal of the normalised impedance z_T, so

y_T = 1 / z_T = Y_T / Y₀

Therefore:

y_T = (1 − Γ) / (1 + Γ)

and

Γ = (1 − y_T) / (1 + y_T)

The Y Smith chart appears like the normalised impedance (Z) type, but with the graphic nested circles rotated through 180°, the numeric scale remaining in its same position (not rotated) as on the Z chart.

Similarly, taking

y_T = a + jb
Γ = c + jd

for real a, b, c and d gives an analogous result, although with more and different minus signs:

Γ = c + jd = [(1 − a² − b²) − j·2b] / [(1 + a)² + b²]
The region above the x-axis represents capacitive admittances and the region below the x-axis represents inductive admittances. Capacitive admittances have positive imaginary parts and inductive admittances have negative imaginary parts.
Again, if the termination is perfectly matched the reflection coefficient will be zero, represented by a 'circle' of zero radius or in fact a point at the centre of the Smith chart. If the termination was a perfect open or short circuit the magnitude of the voltage reflection coefficient would be unity, all power would be reflected and the point would lie at some point on the unity circumference circle of the Smith chart.
Practical examples
A point with a reflection coefficient magnitude 0.63 and angle 60°, represented in polar form as 0.63∠60°, is shown as point P1 on the Smith chart. To plot this, one may use the circumferential (reflection coefficient) angle scale to find the 60° graduation and a ruler to draw a line passing through this and the centre of the Smith chart. The length of the line would then be scaled to P1 assuming the Smith chart radius to be unity. For example, if the actual radius measured from the paper was 100 mm, the length OP1 would be 63 mm.
The following table gives some similar examples of points which are plotted on the Z Smith chart. For each, the reflection coefficient is given in polar form together with the corresponding normalised impedance in rectangular form. The conversion may be read directly from the Smith chart or by substitution into the equation.
Working with both the Z Smith chart and the Y Smith charts
In RF circuit and matching problems, sometimes it is more convenient to work with admittances (representing conductances and susceptances) and sometimes it is more convenient to work with impedances (representing resistances and reactances). Solving a typical matching problem will often require several changes between both types of Smith chart, using normalised impedance for series elements and normalised admittance for parallel elements. For these a dual (normalised) impedance and admittance Smith chart may be used. Alternatively, one type may be used and the scaling converted to the other when required. In order to change from normalised impedance to normalised admittance or vice versa, the point representing the value of reflection coefficient under consideration is moved through exactly 180 degrees at the same radius. For example, the point P1 in the example, representing a reflection coefficient of 0.63∠60°, has a normalised impedance of z_P1 = 0.80 + j1.40. To graphically change this to the equivalent normalised admittance point, say Q1, a line is drawn with a ruler from P1 through the Smith chart centre to Q1, an equal radius in the opposite direction. This is equivalent to moving the point through a circular path of exactly 180 degrees. Reading the value from the Smith chart for Q1, remembering that the scaling is now in normalised admittance, gives y_Q1 = 0.30 − j0.54. Performing the calculation

y_T = 1/z_T = 1/(0.80 + j1.40) ≈ 0.30 − j0.54

manually will confirm this.
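The 180° rotation can also be checked numerically. The short Python sketch below (variable names are illustrative) reproduces the P1/Q1 example with complex arithmetic:

```python
import cmath

# Point P1: reflection coefficient 0.63 at 60 degrees.
gamma_p1 = cmath.rect(0.63, cmath.pi / 3)

# Read as a normalised impedance: z = (1 + gamma) / (1 - gamma).
z_p1 = (1 + gamma_p1) / (1 - gamma_p1)        # ~0.80 + j1.40

# Rotating through 180 degrees at the same radius negates gamma (point Q1);
# the same read-off formula then yields the normalised admittance.
gamma_q1 = -gamma_p1
y_q1 = (1 + gamma_q1) / (1 - gamma_q1)        # ~0.30 - j0.54

print(z_p1, y_q1)
print(1 / z_p1)   # agrees with y_q1, confirming y = 1/z
```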
Once a transformation from impedance to admittance has been performed, the scaling changes to normalised admittance until a later transformation back to normalised impedance is performed.
The table below shows examples of normalised impedances and their equivalent normalised admittances obtained by rotation of the point through 180°. Again, these may be obtained either by calculation or using a Smith chart as shown, converting between the normalised impedance and normalised admittance planes.
Choice of Smith chart type and component type
The choice of whether to use the Z Smith chart or the Y Smith chart for any particular calculation depends on which is more convenient. Impedances in series and admittances in parallel add, while impedances in parallel and admittances in series are related by a reciprocal equation. If Z_TS is the equivalent impedance of series impedances and Z_TP is the equivalent impedance of parallel impedances, then

Z_TS = Z_1 + Z_2 + Z_3 + ...

1/Z_TP = 1/Z_1 + 1/Z_2 + 1/Z_3 + ...

For admittances the reverse is true, that is

Y_TP = Y_1 + Y_2 + Y_3 + ...

1/Y_TS = 1/Y_1 + 1/Y_2 + 1/Y_3 + ...
Dealing with the reciprocals, especially in complex numbers, is more time-consuming and error-prone than using linear addition. In general therefore, most RF engineers work in the plane where the circuit topography supports linear addition. The following table gives the complex expressions for impedance (real and normalised) and admittance (real and normalised) for each of the three basic passive circuit elements: resistance, inductance and capacitance. Using just the characteristic impedance (or characteristic admittance) and test frequency an equivalent circuit can be found and vice versa.
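As a sketch of the kind of expressions such a table contains (the table itself is not reproduced here, and the 50 Ω / 800 MHz values are example assumptions), the following Python helpers compute the normalised impedance of each basic passive element at a test frequency; the normalised admittance is the reciprocal in each case:

```python
import math

Z0 = 50.0            # assumed characteristic (system) impedance, ohms
f = 800e6            # assumed test frequency, Hz
w = 2 * math.pi * f  # angular frequency, rad/s

def z_resistor(R):   return R / Z0               # z = R / Z0
def z_inductor(L):   return 1j * w * L / Z0      # z = j*w*L / Z0
def z_capacitor(C):  return -1j / (w * C * Z0)   # z = -j / (w*C*Z0)

print(z_inductor(6.5e-9))        # ~0 + j0.65 for a 6.5 nH inductor
print(1 / z_capacitor(2.6e-12))  # normalised admittance of a 2.6 pF capacitor
```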
Using the Smith chart to solve conjugate matching problems with distributed components
Distributed matching becomes feasible and is sometimes required when the physical size of the matching components is more than about 5% of a wavelength at the operating frequency. Here the electrical behaviour of many lumped components becomes rather unpredictable. This occurs in microwave circuits and when high power requires large components in shortwave, FM and TV broadcasting.
For distributed components the effects on reflection coefficient and impedance of moving along the transmission line must be allowed for using the outer circumferential scale of the Smith chart which is calibrated in wavelengths.
The following example shows how a transmission line, terminated with an arbitrary load, may be matched at one frequency either with a series or parallel reactive component in each case connected at precise positions.
Supposing a loss-free, air-spaced transmission line of characteristic impedance Z_0 = 50 Ω, operating at a frequency of 800 MHz, is terminated with a circuit comprising a 17.5 Ω resistor in series with a 6.5 nanohenry (6.5 nH) inductor. How may the line be matched?
From the table above, the reactance of the inductor forming part of the termination at 800 MHz is

X_L = ωL = 2πfL = 32.7 Ω

so the impedance of the combination (Z_T) is given by

Z_T = 17.5 + j32.7 Ω

and the normalised impedance (z_T) is

z_T = Z_T / Z_0 = 0.35 + j0.65
This is plotted on the Z Smith chart at point P20. The line OP20 is extended through to the wavelength scale, where it intersects at the point L_1 = 0.098λ. As the transmission line is loss-free, a circle centred at the centre of the Smith chart is drawn through the point P20 to represent the path of the constant magnitude reflection coefficient due to the termination. At point P21 the circle intersects with the unity circle of constant normalised resistance at

z_P21 = 1.00 + j1.52.

The extension of the line OP21 intersects the wavelength scale at L_2 = 0.177λ, therefore the distance from the termination to this point on the line is given by

L_2 − L_1 = 0.177λ − 0.098λ = 0.079λ
Since the transmission line is air-spaced, the wavelength at 800 MHz in the line is the same as that in free space and is given by

λ = c / f

where c is the velocity of electromagnetic radiation in free space and f is the frequency in hertz. The result gives λ = 375 mm, making the position of the matching component 0.079 × 375 mm = 29.6 mm from the load.
The conjugate match for the impedance at P21 (z_P21 = 1.00 + j1.52) requires a normalised reactance of

z_match = −j1.52

As the Smith chart is still in the normalised impedance plane, from the table above a series capacitor C_m is required, where

−j1.52 = −j / (ωC_m Z_0)

Rearranging, we obtain

C_m = 1 / (1.52 ω Z_0).

Substitution of known values gives

C_m = 2.6 pF
To match the termination at 800 MHz, a series capacitor of 2.6 pF must be placed in series with the transmission line at a distance of 29.6 mm from the termination.
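The whole series-match calculation can be reproduced numerically, bypassing the graphical steps. In the Python sketch below (a check under the same assumptions, not the graphical procedure itself), exact arithmetic gives a reactance of about +j1.55 at the unity-resistance crossing where the chart reading above was 1.52; both round to the same 2.6 pF capacitor:

```python
import cmath
import math

Z0, f = 50.0, 800e6
w = 2 * math.pi * f

# Termination: 17.5-ohm resistor in series with 6.5 nH.
zt = (17.5 + 1j * w * 6.5e-9) / Z0    # ~0.35 + j0.65 (point P20)
gamma = (zt - 1) / (zt + 1)           # reflection coefficient at the load
m2 = abs(gamma) ** 2

# Intersection of the constant-|gamma| circle with z = 1 + jx:
# |gamma|^2 = x^2 / (4 + x^2)  =>  x = sqrt(4*m2 / (1 - m2))
x = math.sqrt(4 * m2 / (1 - m2))      # ~1.55 (chart read ~1.52)

# Moving a distance d toward the generator rotates gamma clockwise
# by 4*pi*d/lambda radians.
lam = 3e8 / f                         # 0.375 m in an air-spaced line
d = (cmath.phase(gamma) - cmath.phase(1j * x / (2 + 1j * x))) * lam / (4 * math.pi)
print(d)                              # ~0.0296 m, i.e. 29.6 mm

C = 1 / (w * x * Z0)                  # series capacitor cancelling +jx
print(C)                              # ~2.6e-12 F
```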
An alternative shunt match could be calculated after performing a Smith chart transformation from normalised impedance to normalised admittance. Point Q20 is the equivalent of P20 but expressed as a normalised admittance. Reading from the Smith chart scaling, remembering that this is now a normalised admittance, gives

y_Q20 ≈ 0.65 − j1.20

(In fact this value is not actually used.) However, the extension of the line OQ20 through to the wavelength scale gives L_3 = 0.348λ. The earliest point at which a shunt conjugate match could be introduced, moving towards the generator, would be at Q21, the same position as the previous P21, but this time representing a normalised admittance given by

y_Q21 = 1.00 + j1.52.
The distance along the transmission line is in this case

(0.500 − 0.348)λ + 0.177λ = 0.329λ

which converts to 123 mm.
The conjugate matching component is required to have a normalised admittance (y_match) of

y_match = −j1.52.

From the table it can be seen that a negative admittance would require an inductor, connected in parallel with the transmission line. If its value is L_m, then

−j1.52 = −j Z_0 / (ωL_m)

This gives the result

L_m = Z_0 / (1.52 ω) = 6.5 nH
A suitable inductive shunt matching would therefore be a 6.5 nH inductor in parallel with the line positioned at 123 mm from the load.
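A short numerical check of the shunt value, under the same Z_0 = 50 Ω and 800 MHz assumptions:

```python
import math

Z0, f, b = 50.0, 800e6, 1.52   # b: normalised susceptance to cancel at Q21
w = 2 * math.pi * f

# A shunt inductor has admittance -j/(w*L); normalised (multiplied by Z0)
# this must equal -j*b, so L = Z0 / (w * b).
L = Z0 / (w * b)
print(L)   # ~6.5e-09, i.e. a 6.5 nH inductor
```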
Using the Smith chart to analyze lumped-element circuits
The analysis of lumped-element components assumes that the wavelength at the frequency of operation is much greater than the dimensions of the components themselves. The Smith chart may be used to analyze such circuits, in which case the movements around the chart are generated by the (normalized) impedances and admittances of the components at the frequency of operation. In this case the wavelength scaling on the Smith chart circumference is not used. The following circuit will be analyzed using a Smith chart at an operating frequency of 100 MHz. At this frequency the free space wavelength is 3 m. The component dimensions themselves will be in the order of millimetres, so the assumption of lumped components will be valid. Despite there being no transmission line as such, a system impedance must still be defined to enable normalization and de-normalization calculations, and Z_0 = 50 Ω is a good choice here as R1 = 50 Ω. If there were very different values of resistance present, a value closer to these might be a better choice.
The analysis starts with a Z Smith chart looking into R1 only, with no other components present. As R1 is the same as the system impedance, this is represented by a point at the centre of the Smith chart. The first transformation is OP1 along the line of constant normalized resistance, in this case the addition of a normalized reactance of −j0.80, corresponding to a series capacitor of 40 pF. Points with suffix P are in the Z plane and points with suffix Q are in the Y plane. Therefore, transformations P1 to Q1 and P3 to Q3 are from the Z Smith chart to the Y Smith chart, and transformation Q2 to P2 is from the Y Smith chart to the Z Smith chart. The following table shows the steps taken to work through the remaining components and transformations, returning eventually back to the centre of the Smith chart and a perfect 50 ohm match.
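As a numerical cross-check of that first transformation (the 40 pF value is taken from the step just described; the sketch is illustrative):

```python
import math

Z0, f = 50.0, 100e6        # system impedance and operating frequency
w = 2 * math.pi * f

C1 = 40e-12                # series capacitor in the first transformation
x_c1 = -1 / (w * C1 * Z0)  # normalised series reactance it adds on the Z chart
print(x_c1)                # ~-0.80, matching the move O -> P1
```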
3D Smith chart
A generalization of the Smith chart to a three-dimensional sphere, based on the extended complex plane (Riemann sphere) and inversive geometry, was proposed by Muller et al. in 2011.
The chart unifies the passive and active circuit design on little and big circles on the surface of a unit sphere, using a stereographic conformal map of the reflection coefficient's generalized plane. Considering the point at infinity, the space of the new chart includes all possible loads: The north pole is the perfectly matched point, while the south pole is the completely mismatched point.
The 3D Smith chart has been further extended outside of the spherical surface, for plotting various scalar parameters such as group delay, quality factors, or frequency orientation. The visual frequency orientation (clockwise vs. counter-clockwise) enables one to differentiate between a negative capacitance and a positive inductance whose reflection coefficients are the same when plotted on a 2D Smith chart, but whose orientations diverge as frequency increases.
See also
Binary tiling
Bode plot
Nyquist plot
Heyland–Ossanna circle diagram
cis (mathematics)
Transversal (instrument making)
References
External links
Non-commercial, interactive Smith Chart that looks best in Excel 2007+.
Non-commercial, available for Windows, Mac, and Linux. Many Smith chart tutorial videos. No circuit size restrictions. Not limited to ladder circuits.
Commercial and free Smith chart for Windows
Free web based Smith Chart Educational tool available on GitHub.
2D and 3D Smith chart generalized tool for active and passive circuits (free for academia/education).
Electrical engineering
Charts | Smith chart | Engineering | 5,264 |
57,829 | https://en.wikipedia.org/wiki/Keystroke%20logging | Keystroke logging, often referred to as keylogging or keyboard capturing, is the action of recording (logging) the keys struck on a keyboard, typically covertly, so that a person using the keyboard is unaware that their actions are being monitored. Data can then be retrieved by the person operating the logging program. A keystroke recorder or keylogger can be either software or hardware.
While the programs themselves are legal, with many designed to allow employers to oversee the use of their computers, keyloggers are most often used for stealing passwords and other confidential information. Keystroke logging can also be utilized to monitor activities of children in schools or at home and by law enforcement officials to investigate malicious usage.
Keylogging can also be used to study keystroke dynamics or human-computer interaction. Numerous keylogging methods exist, ranging from hardware and software-based approaches to acoustic cryptanalysis.
History
In the mid-1970s, the Soviet Union developed and deployed a hardware keylogger targeting typewriters. Termed the "selectric bug", it measured the movements of the print head of IBM Selectric typewriters via subtle influences on the regional magnetic field caused by the rotation and movements of the print head. An early keylogger was written by Perry Kivolowitz and posted to the Usenet newsgroups net.unix-wizards and net.sources on November 17, 1983. The posting seems to be a motivating factor in restricting access to /dev/kmem on Unix systems. The user-mode program operated by locating and dumping character lists (clists) as they were assembled in the Unix kernel.
In the 1970s, spies installed keystroke loggers in the US Embassy and Consulate buildings in Moscow.
They installed the bugs in Selectric II and Selectric III electric typewriters.
Soviet embassies used manual typewriters, rather than electric typewriters, for classified information—apparently because they are immune to such bugs.
As of 2013, Russian special services still use typewriters.
Applications of keyloggers
Software-based keyloggers
A software-based keylogger is a computer program designed to record any input from the keyboard. Keyloggers are used in IT organizations to troubleshoot technical problems with computers and business networks. Families and businesspeople use keyloggers legally to monitor network usage without their users' direct knowledge. Microsoft publicly stated that Windows 10 has a built-in keylogger in its final version "to improve typing and writing services". However, malicious individuals can use keyloggers on public computers to steal passwords or credit card information. Most keyloggers are not stopped by HTTPS encryption because that only protects data in transit between computers; software-based keyloggers run on the affected user's computer, reading keyboard inputs directly as the user types.
From a technical perspective, there are several categories:
Hypervisor-based: The keylogger can theoretically reside in a malware hypervisor running underneath the operating system, which thus remains untouched. It effectively becomes a virtual machine. Blue Pill is a conceptual example.
Kernel-based: A program on the machine obtains root access to hide in the OS and intercepts keystrokes that pass through the kernel. This method is difficult both to write and to combat. Such keyloggers reside at the kernel level, which makes them difficult to detect, especially for user-mode applications that do not have root access. They are frequently implemented as rootkits that subvert the operating system kernel to gain unauthorized access to the hardware. This makes them very powerful. A keylogger using this method can act as a keyboard device driver, for example, and thus gain access to any information typed on the keyboard as it goes to the operating system.
API-based: These keyloggers hook keyboard APIs inside a running application. The keylogger registers keystroke events as if it was a normal piece of the application instead of malware. The keylogger receives an event each time the user presses or releases a key. The keylogger simply records it.
Windows APIs such as GetAsyncKeyState(), GetForegroundWindow(), etc. are used to poll the state of the keyboard or to subscribe to keyboard events. A more recent example simply polls the BIOS for pre-boot authentication PINs that have not been cleared from memory.
Form grabbing based: Form grabbing-based keyloggers log Web form submissions by recording the form data on submit events. This happens when the user completes a form and submits it, usually by clicking a button or pressing enter. This type of keylogger records form data before it is passed over the Internet.
JavaScript-based: A malicious script tag is injected into a targeted web page, and listens for key events such as onKeyUp(). Scripts can be injected via a variety of methods, including cross-site scripting, man-in-the-browser, man-in-the-middle, or a compromise of the remote website.
Memory-injection-based: Memory-injection-based (MitB) keyloggers perform their logging function by altering the memory tables associated with the browser and other system functions. By patching the memory tables or injecting directly into memory, this technique can be used by malware authors to bypass Windows UAC (User Account Control). The Zeus and SpyEye trojans use this method exclusively. Remote-access software keyloggers additionally allow access to the locally recorded data from a remote location. Remote communication may be achieved when one of these methods is used:
Data is uploaded to a website, database or an FTP server.
Data is periodically emailed to a pre-defined email address.
Data is wirelessly transmitted employing an attached hardware system.
The software enables a remote login to the local machine from the Internet or the local network, for data logs stored on the target machine.
Keystroke logging in writing process research
Since 2006, keystroke logging has been an established research method for the study of writing processes. Different programs have been developed to collect online process data of writing activities, including Inputlog, Scriptlog, Translog and GGXLog.
Keystroke logging is used legitimately as a suitable research instrument in several writing contexts. These include studies on cognitive writing processes, which include
descriptions of writing strategies; the writing development of children (with and without writing difficulties),
spelling,
first and second language writing, and
specialist skill areas such as translation and subtitling.
Keystroke logging can be used specifically to research writing processes. It can also be integrated into educational domains for second-language learning, programming skills, and typing skills.
Related features
Software keyloggers may be augmented with features that capture user information without relying on keyboard key presses as the sole input. Some of these features include:
Clipboard logging. Anything that has been copied to the clipboard can be captured by the program.
Screen logging. Screenshots are taken to capture graphics-based information. Applications with screen logging abilities may take screenshots of the whole screen, of just one application, or even just around the mouse cursor. They may take these screenshots periodically or in response to user behaviors (for example, when a user clicks the mouse). Screen logging can be used to capture data inputted with an on-screen keyboard.
Programmatically capturing the text in a control. The Microsoft Windows API allows programs to request the text 'value' in some controls. This means that some passwords may be captured, even if they are hidden behind password masks (usually asterisks).
The recording of every program/folder/window opened including a screenshot of every website visited.
The recording of search engine queries, instant messenger conversations, FTP downloads and other Internet-based activities (including the bandwidth used).
Hardware-based keyloggers
Hardware-based keyloggers do not depend upon any software being installed as they exist at a hardware level in a computer system.
Firmware-based: BIOS-level firmware that handles keyboard events can be modified to record these events as they are processed. Physical and/or root-level access is required to the machine, and the software loaded into the BIOS needs to be created for the specific hardware that it will be running on.
Keyboard hardware: Hardware keyloggers are used for keystroke logging utilizing a hardware circuit that is attached somewhere in between the computer keyboard and the computer, typically inline with the keyboard's cable connector. There are also USB connector-based hardware keyloggers, as well as ones for laptop computers (the Mini-PCI card plugs into the expansion slot of a laptop). More stealthy implementations can be installed or built into standard keyboards so that no device is visible on the external cable. Both types log all keyboard activity to their internal memory, which can be subsequently accessed, for example, by typing in a secret key sequence. Hardware keyloggers do not require any software to be installed on a target user's computer, therefore not interfering with the computer's operation and less likely to be detected by software running on it. However, its physical presence may be detected if, for example, it is installed outside the case as an inline device between the computer and the keyboard. Some of these implementations can be controlled and monitored remotely using a wireless communication standard.
Wireless keyboard and mouse sniffers: These passive sniffers collect packets of data being transferred from a wireless keyboard and its receiver. As encryption may be used to secure the wireless communications between the two devices, this may need to be cracked beforehand if the transmissions are to be read. In some cases, this enables an attacker to type arbitrary commands into a victim's computer.
Keyboard overlays: Criminals have been known to use keyboard overlays on ATMs to capture people's PINs. Each keypress is registered by the keyboard of the ATM as well as the criminal's keypad that is placed over it. The device is designed to look like an integrated part of the machine so that bank customers are unaware of its presence.
Acoustic keyloggers: Acoustic cryptanalysis can be used to monitor the sound created by someone typing on a computer. Each key on the keyboard makes a subtly different acoustic signature when struck. It is then possible to identify which keystroke signature relates to which keyboard character via statistical methods such as frequency analysis. The repetition frequency of similar acoustic keystroke signatures, the timings between different keyboard strokes and other context information such as the probable language in which the user is writing are used in this analysis to map sounds to letters. A fairly long recording (1000 or more keystrokes) is required so that a large enough sample is collected.
Electromagnetic emissions: It is possible to capture the electromagnetic emissions of a wired keyboard from a distance, without being physically wired to it. In 2009, Swiss researchers tested 11 different USB, PS/2 and laptop keyboards in a semi-anechoic chamber and found them all vulnerable, primarily because of the prohibitive cost of adding shielding during manufacture. The researchers used a wide-band receiver to tune into the specific frequency of the emissions radiated from the keyboards.
Optical surveillance: Optical surveillance, while not a keylogger in the classical sense, is nonetheless an approach that can be used to capture passwords or PINs. A strategically placed camera, such as a hidden surveillance camera at an ATM, can allow a criminal to watch a PIN or password being entered.
Physical evidence: For a keypad that is used only to enter a security code, the keys which are in actual use will have evidence of use from many fingerprints. A passcode of four digits, if the four digits in question are known, is reduced from 10,000 possibilities to just 24 (10⁴ versus 4!, the factorial of four). These could then be used on separate occasions for a manual "brute force attack".
Smartphone sensors: Researchers have demonstrated that it is possible to capture the keystrokes of nearby computer keyboards using only the commodity accelerometer found in smartphones. The attack is made possible by placing a smartphone near a keyboard on the same desk. The smartphone's accelerometer can then detect the vibrations created by typing on the keyboard and then translate this raw accelerometer signal into readable sentences with as much as 80 percent accuracy. The technique involves working through probability by detecting pairs of keystrokes, rather than individual keys. It models "keyboard events" in pairs and then works out whether the pair of keys pressed is on the left or the right side of the keyboard and whether they are close together or far apart on the QWERTY keyboard. Once it has worked this out, it compares the results to a preloaded dictionary where each word has been broken down in the same way. Similar techniques have also been shown to be effective at capturing keystrokes on touchscreen keyboards while in some cases, in combination with gyroscope or with the ambient-light sensor.
Body keyloggers: Body keyloggers track and analyze body movements to determine which keys were pressed. The attacker needs to be familiar with the key layout of the tracked keyboard in order to correlate body movements with key positions, although with a suitably large sample this can be deduced. Tracking audible signals of the user interface (e.g. a sound the device produces to inform the user that a keystroke was registered) may reduce the complexity of the body-keylogging algorithms, as it marks the moment at which a key was pressed.
Cracking
Writing simple software applications for keylogging can be trivial, and like any nefarious computer program, can be distributed as a trojan horse or as part of a virus. What is not trivial for an attacker, however, is installing a covert keystroke logger without getting caught and downloading data that has been logged without being traced. An attacker that manually connects to a host machine to download logged keystrokes risks being traced. A trojan that sends keylogged data to a fixed e-mail address or IP address risks exposing the attacker.
Trojans
Researchers Adam Young and Moti Yung discussed several methods of sending keystroke logging. They presented a deniable password snatching attack in which the keystroke logging trojan is installed using a virus or worm. An attacker who is caught with the virus or worm can claim to be a victim. The cryptotrojan asymmetrically encrypts the pilfered login/password pairs using the public key of the trojan author and covertly broadcasts the resulting ciphertext. They mentioned that the ciphertext can be steganographically encoded and posted to a public bulletin board such as Usenet.
Use by police
In 2000, the FBI used FlashCrest iSpy to obtain the PGP passphrase of Nicodemo Scarfo, Jr., son of mob boss Nicodemo Scarfo.
Also in 2000, the FBI lured two suspected Russian cybercriminals to the US in an elaborate ruse, and captured their usernames and passwords with a keylogger that was covertly installed on a machine that they used to access their computers in Russia. The FBI then used these credentials to gain access to the suspects' computers in Russia to obtain evidence to prosecute them.
Countermeasures
The effectiveness of countermeasures varies because keyloggers use a variety of techniques to capture data and the countermeasure needs to be effective against the particular data capture technique. In the case of Windows 10 keylogging by Microsoft, changing certain privacy settings may disable it. An on-screen keyboard will be effective against hardware keyloggers; transparency will defeat some—but not all—screen loggers. An anti-spyware application that can only disable hook-based keyloggers will be ineffective against kernel-based keyloggers.
Keylogger program authors may be able to update their program's code to adapt to countermeasures that have proven effective against it.
Anti-keyloggers
An anti-keylogger is a piece of software specifically designed to detect keyloggers on a computer, typically comparing all files in the computer against a database of keyloggers, looking for similarities which might indicate the presence of a hidden keylogger. As anti-keyloggers have been designed specifically to detect keyloggers, they have the potential to be more effective than conventional antivirus software; some antivirus software does not consider keyloggers to be malware, as under some circumstances a keylogger can be considered a legitimate piece of software.
Live CD/USB
Rebooting the computer using a Live CD or write-protected Live USB is a possible countermeasure against software keyloggers if the CD is clean of malware and the operating system contained on it is secured and fully patched so that it cannot be infected as soon as it is started. Booting a different operating system does not impact the use of a hardware or BIOS based keylogger.
Anti-spyware / Anti-virus programs
Many anti-spyware applications can detect some software based keyloggers and quarantine, disable, or remove them. However, because many keylogging programs are legitimate pieces of software under some circumstances, anti-spyware often neglects to label keylogging programs as spyware or a virus. These applications can detect software-based keyloggers based on patterns in executable code, heuristics and keylogger behaviors (such as the use of hooks and certain APIs).
No software-based anti-spyware application can be 100% effective against all keyloggers. Software-based anti-spyware cannot defeat non-software keyloggers (for example, hardware keyloggers attached to keyboards will always receive keystrokes before any software-based anti-spyware application).
The particular technique that the anti-spyware application uses will influence its potential effectiveness against software keyloggers. As a general rule, anti-spyware applications with higher privileges will defeat keyloggers with lower privileges. For example, a hook-based anti-spyware application cannot defeat a kernel-based keylogger (as the keylogger will receive the keystroke messages before the anti-spyware application), but it could potentially defeat hook- and API-based keyloggers.
Network monitors
Network monitors (also known as reverse-firewalls) can be used to alert the user whenever an application attempts to make a network connection. This gives the user the chance to prevent the keylogger from "phoning home" with their typed information.
Automatic form filler programs
Automatic form-filling programs may prevent keylogging by removing the requirement for a user to type personal details and passwords using the keyboard. Form fillers are primarily designed for Web browsers to fill in checkout pages and log users into their accounts. Once the user's account and credit card information has been entered into the program, it will be automatically entered into forms without ever using the keyboard or clipboard, thereby reducing the possibility that private data is being recorded. However, someone with physical access to the machine may still be able to install software that can intercept this information elsewhere in the operating system or while in transit on the network. (Transport Layer Security (TLS) reduces the risk that data in transit may be intercepted by network sniffers and proxy tools.)
One-time passwords (OTP)
Using one-time passwords may prevent unauthorized access to an account which has had its login details exposed to an attacker via a keylogger, as each password is invalidated as soon as it is used. This solution may be useful for someone using a public computer. However, an attacker who has remote control over such a computer can simply wait for the victim to enter their credentials before performing unauthorized transactions on their behalf while their session is active.
Another common way to protect access codes from being stolen by keystroke loggers is by asking users to provide a few randomly selected characters from their authentication code. For example, they might be asked to enter the 2nd, 5th, and 8th characters. Even if someone is watching the user or using a keystroke logger, they would only get a few characters from the code without knowing their positions.
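A minimal server-side sketch of such a partial-code challenge is shown below. It is an illustration only, not a recommended design: all names are hypothetical, and the scheme inherently requires the code to be stored in a recoverable form rather than as a one-way hash.

```python
import secrets

def make_challenge(code_length: int, k: int = 3) -> list[int]:
    """Pick k distinct 1-based positions of the stored code to ask for."""
    positions = list(range(1, code_length + 1))
    secrets.SystemRandom().shuffle(positions)
    return sorted(positions[:k])

def verify(stored_code: str, challenge: list[int], answers: str) -> bool:
    """Check only the characters at the challenged positions."""
    expected = "".join(stored_code[p - 1] for p in challenge)
    return secrets.compare_digest(expected, answers)

code = "S3cr3tC0de"
print(make_challenge(len(code)))        # e.g. [2, 5, 8]
print(verify(code, [2, 5, 8], "330"))   # True: characters at positions 2, 5, 8
```

A keylogger observing a single login captures at most the few characters requested in that session.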
Security tokens
Use of smart cards or other security tokens may improve security against replay attacks in the face of a successful keylogging attack, as accessing protected information would require both the (hardware) security token as well as the appropriate password/passphrase. Knowing the keystrokes, mouse actions, display, clipboard, etc. used on one computer will not subsequently help an attacker gain access to the protected resource. Some security tokens work as a type of hardware-assisted one-time password system, and others implement a cryptographic challenge–response authentication, which can improve security in a manner conceptually similar to one time passwords. Smartcard readers and their associated keypads for PIN entry may be vulnerable to keystroke logging through a so-called supply chain attack where an attacker substitutes the card reader/PIN entry hardware for one which records the user's PIN.
On-screen keyboards
Most on-screen keyboards (such as the on-screen keyboard that comes with Windows XP) send normal keyboard event messages to the external target program to type text. Software key loggers can log these typed characters sent from one program to another.
Keystroke interference software
Keystroke interference software is also available.
These programs attempt to trick keyloggers by introducing random keystrokes, although this simply results in the keylogger recording more information than it needs to. An attacker has the task of extracting the keystrokes of interest—the security of this mechanism, specifically how well it stands up to cryptanalysis, is unclear.
Speech recognition
Similar to on-screen keyboards, speech-to-text conversion software can also be used against keyloggers, since there are no typing or mouse movements involved. The weakest point of using voice-recognition software may be how the software sends the recognized text to target software after the user's speech has been processed.
Handwriting recognition and mouse gestures
Many PDAs and lately tablet PCs can already convert pen (also called stylus) movements on their touchscreens to computer understandable text successfully. Mouse gestures use this principle by using mouse movements instead of a stylus. Mouse gesture programs convert these strokes to user-definable actions, such as typing text. Similarly, graphics tablets and light pens can be used to input these gestures, however, these are becoming less common.
The same potential weakness of speech recognition applies to this technique as well.
Macro expanders/recorders
With the help of many programs, a seemingly meaningless text can be expanded to a meaningful text and most of the time context-sensitively, e.g. "en.wikipedia.org" can be expanded when a web browser window has the focus. The biggest weakness of this technique is that these programs send their keystrokes directly to the target program. However, this can be overcome by using the 'alternating' technique described below, i.e. sending mouse clicks to non-responsive areas of the target program, sending meaningless keys, sending another mouse click to the target area (e.g. password field) and switching back-and-forth.
Deceptive typing
Alternating between typing the login credentials and typing characters somewhere else in the focus window can cause a keylogger to record more information than it needs to, but this could be easily filtered out by an attacker. Similarly, a user can move their cursor using the mouse while typing, causing the logged keystrokes to be in the wrong order e.g., by typing a password beginning with the last letter and then using the mouse to move the cursor for each subsequent letter. Lastly, someone can also use context menus to remove, cut, copy, and paste parts of the typed text without using the keyboard. An attacker who can capture only parts of a password will have a larger key space to attack if they choose to execute a brute-force attack.
Another very similar technique uses the fact that any selected text portion is replaced by the next key typed. e.g., if the password is "secret", one could type "s", then some dummy keys "asdf". These dummy characters could then be selected with the mouse, and the next character from the password "e" typed, which replaces the dummy characters "asdf".
These techniques assume incorrectly that keystroke logging software cannot directly monitor the clipboard, the selected text in a form, or take a screenshot every time a keystroke or mouse click occurs. They may, however, be effective against some hardware keyloggers.
See also
Anti-keylogger
Black-bag cryptanalysis
Computer surveillance
Cybercrime
Digital footprint
Hardware keylogger
Reverse connection
Session replay
Spyware
Trojan horse
Virtual keyboard
Web tracking
References
External links
Cryptographic attacks
Spyware
Surveillance
Cybercrime
Security breaches | Keystroke logging | Technology | 5,132 |
9,655,935 | https://en.wikipedia.org/wiki/XMPP%20Standards%20Foundation | XMPP Standards Foundation (XSF) is the foundation in charge of the standardization of the protocol extensions of XMPP, the open standard of instant messaging and presence of the IETF.
History
The XSF was originally called the Jabber Software Foundation (JSF). The Jabber Software Foundation was originally established to provide an independent, non-profit, legal entity to support the development community around Jabber technologies (and later XMPP). Originally its main focus was on developing JOSL, the Jabber Open Source License (since deprecated), and an open standards process for documenting the protocols used in the Jabber/XMPP developer community. Its founders included Michael Bauer and Peter Saint-Andre.
Process
Members of the XSF vote on acceptance of new members, a technical Council, and a Board of Directors. However, membership is not required to publish, view, or comment on the standards that it promulgates. The unit of work at the XSF is the XMPP Extension Protocol (XEP); XEP-0001 specifies the process for XEPs to be accepted by the community. Most of the work of the XSF takes place on the XMPP Extension Discussion List, the jdev and the xsf chat room.
Organization
Board of directors
The Board of Directors of the XMPP Standards Foundation oversees the business affairs of the organization. As elected by the XSF membership, the Board of Directors for 2020-2021 consists of the following individuals:
Ralph Meijer (XSF Chair)
Dave Cridland
Severino Ferrer de la Peñita
Arc Riley
Matthew Wild
Council
The XMPP Council is the technical steering group that approves XMPP Extension Protocols, as governed by the XSF Bylaws and XEP-0001. The Council is elected by the members of the XMPP Standards Foundation each year in September. The XMPP Council (2020–2021) consists of the following individuals:
Kim Alvefur
Dave Cridland
Daniel Gultsch
Georg Lukas
Jonas Schäfer
Members
There are currently 66 elected members of the XSF.
Emeritus Members
The following individuals are emeritus members of the XMPP Standards Foundation:
Ryan Eatmon
Peter Millard (deceased)
Jeremie Miller
Julian Missig
Thomas Muldowney
Dave Smith
XEPs
One of the most important outputs of the XSF is a series of "XEPs", or XMPP Extension Protocols, auxiliary protocols defining additional features. Some have chosen to pronounce "XEP" as if it were spelled "JEP", rather than "ZEP", in order to keep with a sense of tradition. Some XEPs of note include:
Data Forms
Service Discovery
Multi-User Chat
Publish-Subscribe
XHTML-IM
Entity Capabilities
Bidirectional-streams Over Synchronous HTTP (BOSH)
Jingle
Serverless Messaging
XMPP Summit
The XSF holds a biannual XMPP Summit where software and protocol developers from all around the world meet to share ideas and discuss topics around the XMPP protocol and the XEPs. In winter it takes place around the FOSDEM event in Brussels, Belgium, and in summer it takes place around the RealtimeConf event in Portland, USA. These meetings are open to anyone and focus on discussing both technical and non-technical issues that the XSF members wish to discuss, with no costs attached for the participants; however, the XSF is open to donations. The first XMPP Summit took place on July 24 and 25, 2006, in Portland.
References
External links
Instant messaging
Standards organizations in the United States
Free and open-source software organizations
Organizations based in Denver
XMPP | XMPP Standards Foundation | Technology | 761 |
30,008,292 | https://en.wikipedia.org/wiki/Descartes%20snark | In the mathematical field of graph theory, a Descartes snark is an undirected graph with 210 vertices and 315 edges. It is a snark, a graph with three edges at each vertex that cannot be partitioned into three perfect matchings. It was first discovered by William Tutte in 1948 under the pseudonym Blanche Descartes.
A Descartes snark is obtained from the Petersen graph by replacing each vertex with a nonagon and each edge with a particular graph closely related to the Petersen graph. Because there are multiple ways to perform this procedure, there are multiple Descartes snarks.
References
Graph families | Descartes snark | Mathematics | 133 |
50,951,887 | https://en.wikipedia.org/wiki/List%20of%20antimicrobial%20peptides%20in%20the%20female%20reproductive%20tract | Antimicrobial peptides are short peptides that possess antimicrobial properties. The female reproductive tract and its tissues produce antimicrobial peptides as part of the immune response. These peptides are able to fight pathogens and at the same time allow the maintenance of the microbiota that are part of the reproductive system in women.
Defensins
alpha-Defensins
beta-Defensins
theta-defensins
Cathelicidins
LL-37
Whey acid proteins
SLPI
Elafin
HE-4
Lysozyme
S100 proteins
Calprotectin
Psoriasin (S100A7)
C-type lectins
SP-A
SP-D
Iron metabolism proteins
Lactoferrin
Kinocidins
CCL20/Mip-3-alpha
External links
Defensins Database, Singapore
Innate ( Nonspecific ) Immunity at Western Kentucky University
References
Immunology
Immune system
Peripheral membrane proteins
Medical lists | List of antimicrobial peptides in the female reproductive tract | Biology | 195 |
298,551 | https://en.wikipedia.org/wiki/Downstream%20%28networking%29 | In a telecommunications network or computer network, downstream refers to data sent from a network service provider to a customer.
One process sending data primarily in the downstream direction is downloading. However, the overall download speed depends on the downstream speed of the user, the upstream speed of the server, and the network between them.
In the client–server model, downstream can refer to the direction from the server to the client.
References
Data transmission
Orientation (geometry)
67,892,328 | https://en.wikipedia.org/wiki/Long%20Lived%20In-situ%20Solar%20System%20Explorer | Long Lived In-situ Solar System Explorer (LLISSE) is a possible NASA payload on the Russian Venera-D mission to Venus. LLISSE uses new materials and heat-resistant electronics that would enable independent operation for about 90 Earth days. This endurance may allow it to obtain periodic measurements of weather data to update global circulation models and quantify near surface atmospheric chemistry variability. Its anticipated instruments include wind speed/direction sensors, temperature sensors, pressure sensors, and a chemical multi-sensor array. LLISSE is a small cube of about . The Venera-D lander may carry two LLISSE units; one would be battery-powered (3,000 h), and the other would be wind-powered.
References
Venera program
NASA
Spacecraft instruments
Meteorological instrumentation and equipment
Space science experiments | Long Lived In-situ Solar System Explorer | Astronomy,Technology,Engineering | 163 |
63,437,511 | https://en.wikipedia.org/wiki/Barnard%20203 | The dark nebula Barnard 203 or Lynds 1448 is located about one degree southwest of NGC 1333 in the Perseus molecular cloud, at a distance of about 800 light-years. Three infrared sources were observed in this region by IRAS, called IRS 1, IRS 2 and IRS 3.
The region also contains multiple Herbig-Haro objects, including HH 193–197, which are driven by the protostars in this region.
The young stellar object population
The source IRS 1 is a class I young stellar object and a binary. IRS 1 is more evolved than most of the protostars in this region and less well-studied.
The source IRS 2 is a binary that is very young (a class 0 young stellar object), surrounded by a rotating disk, and the system shows a bipolar outflow signature. The system has an hourglass-shaped magnetic field that is aligned with the bipolar outflow. Towards the east is the source IRS 2E, a source at an evolutionary stage between a pre-stellar core and a protostar.
The source IRS 3B was studied the most, and ALMA observations showed that it is a triple protostar system with one star forming via disk fragmentation. The two outer stars are separated by 61 and 183 astronomical units from the central star, and all three stars are surrounded by a circumstellar disk that shows spiral arms. IRS 3B is a class 0 young stellar object and might be younger than 150,000 years. The two protostellar objects in the center have a combined mass of about 1 solar mass and the protostar further from the center has a mass of about 0.085 solar masses. The disk that surrounds the three protostars has an estimated mass of about 0.30 solar masses. The sources IRS 3A, B and C show molecular outflows. IRS 3 is also called L1448N.
Another well-studied source in this region is called L1448-mm or L1448C. It is a class 0 young stellar object that drives a highly collimated outflow, detected in carbon monoxide, silicon monoxide and water.
References
Dark nebulae
Barnard objects
Star-forming regions
Perseus (constellation) | Barnard 203 | Astronomy | 434 |
2,344,516 | https://en.wikipedia.org/wiki/NF-%CE%BAB | Nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) is a family of transcription factor protein complexes that controls transcription of DNA, cytokine production and cell survival. NF-κB is found in almost all animal cell types and is involved in cellular responses to stimuli such as stress, cytokines, free radicals, heavy metals, ultraviolet irradiation, oxidized LDL, and bacterial or viral antigens. NF-κB plays a key role in regulating the immune response to infection. Incorrect regulation of NF-κB has been linked to cancer, inflammatory and autoimmune diseases, septic shock, viral infection, and improper immune development. NF-κB has also been implicated in processes of synaptic plasticity and memory.
Discovery
NF-κB was discovered by Ranjan Sen in the lab of Nobel laureate David Baltimore via its interaction with an 11-base pair sequence in the immunoglobulin light-chain enhancer in B cells. Later work by Alexander Poltorak and Bruno Lemaitre in mice and Drosophila fruit flies established Toll-like receptors as universally conserved activators of NF-κB signalling. These works ultimately contributed to awarding of the Nobel Prize to Bruce Beutler and Jules A. Hoffmann, who were the principal investigators of those studies.
Structure
All proteins of the NF-κB family share a Rel homology domain in their N-terminus. A subfamily of NF-κB proteins, including RelA, RelB, and c-Rel, have a transactivation domain in their C-termini. In contrast, the NF-κB1 and NF-κB2 proteins are synthesized as large precursors, p105 and p100, which undergo processing to generate the mature p50 and p52 subunits, respectively. The processing of p105 and p100 is mediated by the ubiquitin/proteasome pathway and involves selective degradation of their C-terminal region containing ankyrin repeats. Whereas the generation of p52 from p100 is a tightly regulated process, p50 is produced from constitutive processing of p105. The p50 and p52 proteins have no intrinsic ability to activate transcription and thus have been proposed to act as transcriptional repressors when binding κB elements as homodimers. Indeed, this confounds the interpretation of p105-knockout studies, where the genetic manipulation is removing an IκB (full-length p105) and a likely repressor (p50 homodimers) in addition to a transcriptional activator (the RelA-p50 heterodimer).
Members
NF-κB family members share structural homology with the retroviral oncoprotein v-Rel, resulting in their classification as NF-κB/Rel proteins.
There are five proteins in the mammalian NF-κB family: RelA (p65), RelB, c-Rel, NF-κB1 (p105/p50) and NF-κB2 (p100/p52).
The NF-κB/Rel proteins can be divided into two classes, which share general structural features: the Rel subfamily (RelA, RelB and c-Rel), whose members contain C-terminal transactivation domains, and the NF-κB subfamily (NF-κB1 and NF-κB2), whose members are synthesized as the precursors p105 and p100 and lack intrinsic transactivation domains.
Species distribution and evolution
In addition to mammals, NF-κB is found in a number of simple animals as well. These include cnidarians (such as sea anemones, coral and hydra), porifera (sponges), single-celled eukaryotes including Capsaspora owczarzaki and choanoflagellates, and insects (such as moths, mosquitoes and fruitflies). The sequencing of the genomes of the mosquitoes A. aegypti and A. gambiae, and the fruitfly D. melanogaster has allowed comparative genetic and evolutionary studies on NF-κB. In those insect species, activation of NF-κB is triggered by the Toll pathway (which evolved independently in insects and mammals) and by the Imd (immune deficiency) pathway.
Signaling
Effect of activation
NF-κB is crucial in regulating cellular responses because it belongs to the category of "rapid-acting" primary transcription factors, i.e., transcription factors that are present in cells in an inactive state and do not require new protein synthesis in order to become activated (other members of this family include transcription factors such as c-Jun, STATs, and nuclear hormone receptors). This allows NF-κB to be a first responder to harmful cellular stimuli. Known inducers of NF-κB activity are highly variable and include reactive oxygen species (ROS), tumor necrosis factor alpha (TNFα), interleukin 1-beta (IL-1β), bacterial lipopolysaccharides (LPS), isoproterenol, cocaine, endothelin-1 and ionizing radiation.
NF-κB suppression of tumor necrosis factor cytotoxicity (apoptosis) is due to induction of antioxidant enzymes and sustained suppression of c-Jun N-terminal kinases (JNKs).
Receptor activator of NF-κB (RANK), which is a type of TNFR, is a central activator of NF-κB. Osteoprotegerin (OPG), which is a decoy receptor homolog for RANK ligand (RANKL), inhibits RANK by binding to RANKL, and, thus, osteoprotegerin is tightly involved in regulating NF-κB activation.
Many bacterial products and stimulation of a wide variety of cell-surface receptors lead to NF-κB activation and fairly rapid changes in gene expression. The identification of Toll-like receptors (TLRs) as specific pattern recognition molecules and the finding that stimulation of TLRs leads to activation of NF-κB improved our understanding of how different pathogens activate NF-κB. For example, studies have identified TLR4 as the receptor for the LPS component of Gram-negative bacteria. TLRs are key regulators of both innate and adaptive immune responses.
Unlike RelA, RelB, and c-Rel, the p50 and p52 NF-κB subunits do not contain transactivation domains in their C terminal halves. Nevertheless, the p50 and p52 NF-κB members play critical roles in modulating the specificity of NF-κB function. Although homodimers of p50 and p52 are, in general, repressors of κB site transcription, both p50 and p52 participate in target gene transactivation by forming heterodimers with RelA, RelB, or c-Rel. In addition, p50 and p52 homodimers also bind to the nuclear protein Bcl-3, and such complexes can function as transcriptional activators.
Inhibition
In unstimulated cells, the NF-κB dimers are sequestered in the cytoplasm by a family of inhibitors, called IκBs (Inhibitor of κB), which are proteins that contain multiple copies of a sequence called ankyrin repeats. By virtue of their ankyrin repeat domains, the IκB proteins mask the nuclear localization signals (NLS) of NF-κB proteins and keep them sequestered in an inactive state in the cytoplasm.
IκBs are a family of related proteins that have an N-terminal regulatory domain, followed by six or more ankyrin repeats and a PEST domain near their C terminus. Although the IκB family consists of IκBα, IκBβ, IκBε, and Bcl-3, the best-studied and major IκB protein is IκBα. Due to the presence of ankyrin repeats in their C-terminal halves, p105 and p100 also function as IκB proteins. The C-terminal half of p100, which is often referred to as IκBδ, also functions as an inhibitor. IκBδ degradation in response to developmental stimuli, such as those transduced through LTβR, potentiates NF-κB dimer activation in a NIK-dependent non-canonical pathway.
Activation process (canonical/classical)
Activation of the NF-κB is initiated by the signal-induced degradation of IκB proteins. This occurs primarily via activation of a kinase called the IκB kinase (IKK). IKK is composed of a heterodimer of the catalytic IKKα and IKKβ subunits and a "master" regulatory protein termed NEMO (NF-κB essential modulator) or IKKγ. When activated by signals, usually coming from the outside of the cell, the IκB kinase phosphorylates two serine residues located in an IκB regulatory domain. When phosphorylated on these serines (e.g., serines 32 and 36 in human IκBα), the IκB proteins are modified by a process called ubiquitination, which then leads them to be degraded by a cell structure called the proteasome.
With the degradation of IκB, the NF-κB complex is then freed to enter the nucleus where it can 'turn on' the expression of specific genes that have DNA-binding sites for NF-κB nearby. The activation of these genes by NF-κB then leads to the given physiological response, for example, an inflammatory or immune response, a cell survival response, or cellular proliferation. Translocation of NF-κB to the nucleus can be detected immunocytochemically and measured by laser scanning cytometry. NF-κB turns on expression of its own repressor, IκBα. The newly synthesized IκBα then re-inhibits NF-κB and, thus, forms an auto-feedback loop, which results in oscillating levels of NF-κB activity. In addition, several viruses, including the AIDS virus HIV, have binding sites for NF-κB that control the expression of viral genes, which in turn contribute to viral replication or viral pathogenicity. In the case of HIV-1, activation of NF-κB may, at least in part, be involved in activation of the virus from a latent, inactive state. YopP is a factor secreted by Yersinia pestis, the causative agent of plague, that prevents the ubiquitination of IκB. This allows the pathogen to effectively inhibit the NF-κB pathway and thus block the immune response of a human infected with Yersinia.
Inhibitors of NF-κB activity
Concerning known protein inhibitors of NF-κB activity, one of them is IFRD1, which represses the activity of NF-κB p65 by enhancing the HDAC-mediated deacetylation of the p65 subunit at lysine 310, by favoring the recruitment of HDAC3 to p65. In fact IFRD1 forms trimolecular complexes with p65 and HDAC3.
The NAD-dependent protein deacetylase and longevity factor SIRT1 inhibits NF-κB gene expression by deacetylating the RelA/p65 subunit of NF-κB at lysine 310.
Non-canonical/alternate pathway
A select set of cell-differentiating or developmental stimuli, such as lymphotoxin β-receptor (LTβR), BAFF or RANKL, activate the non-canonical NF-κB pathway to induce the NF-κB/RelB:p52 dimer in the nucleus. In this pathway, activation of the NF-κB-inducing kinase (NIK) upon receptor ligation leads to the phosphorylation and subsequent proteasomal processing of the NF-κB2 precursor protein p100 into the mature p52 subunit in an IKK1/IKKα-dependent manner. p52 then dimerizes with RelB to appear as a nuclear RelB:p52 DNA-binding activity. RelB:p52 regulates the expression of homeostatic lymphokines, which instruct lymphoid organogenesis and lymphocyte trafficking in the secondary lymphoid organs. In contrast to canonical signaling, which relies on NEMO-IKK2 mediated degradation of IκBα, -β and -ε, non-canonical signaling depends on NIK-mediated processing of p100 into p52. Given their distinct regulation, these two pathways were thought to be independent of each other. However, it was found that syntheses of the constituents of the non-canonical pathway, viz RelB and p52, are controlled by canonical IKK2-IκB-RelA:p50 signaling. Moreover, generation of the canonical and non-canonical dimers, viz RelA:p50 and RelB:p52, within the cellular milieu is mechanistically interlinked. These analyses suggest that an integrated NF-κB system network underlies activation of both RelA- and RelB-containing dimers and that a malfunctioning canonical pathway will lead to an aberrant cellular response also through the non-canonical pathway. Most intriguingly, a recent study identified that TNF-induced canonical signalling subverts non-canonical RelB:p52 activity in the inflamed lymphoid tissues, limiting lymphocyte ingress. Mechanistically, TNF inactivated NIK in LTβR‐stimulated cells and induced the synthesis of Nfkb2 mRNA encoding p100; together these potently accumulated unprocessed p100, which attenuated the RelB activity. A role of p100/Nfkb2 in dictating lymphocyte ingress in the inflamed lymphoid tissue may have broad physiological implications.
In addition to its traditional role in lymphoid organogenesis, the non-canonical NF-κB pathway also directly reinforces inflammatory immune responses to microbial pathogens by modulating canonical NF-κB signalling. It was shown that p100/Nfkb2 mediates stimulus-selective and cell-type-specific crosstalk between the two NF-κB pathways and that Nfkb2-mediated crosstalk protects mice from gut pathogens. On the other hand, a lack of p100-mediated regulation repositions RelB under the control of TNF-induced canonical signalling. In fact, mutational inactivation of p100/Nfkb2 in multiple myeloma enabled TNF to induce a long-lasting RelB activity, which imparted resistance to chemotherapeutic drugs in myeloma cells.
In immunity
NF-κB is a major transcription factor that regulates genes responsible for both the innate and adaptive immune response. Upon activation of either the T- or B-cell receptor, NF-κB becomes activated through distinct signaling components. Upon ligation of the T-cell receptor, protein kinase Lck is recruited and phosphorylates the ITAMs of the CD3 cytoplasmic tail. ZAP70 is then recruited to the phosphorylated ITAMs and helps recruit LAT and PLC-γ, which causes activation of PKC. Through a cascade of phosphorylation events, the kinase complex is activated and NF-κB is able to enter the nucleus to upregulate genes involved in T-cell development, maturation, and proliferation.
In the nervous system
In addition to roles in mediating cell survival, studies by Mark Mattson and others have shown that NF-κB has diverse functions in the nervous system including roles in plasticity, learning, and memory. In addition to stimuli that activate NF-κB in other tissues, NF-κB in the nervous system can be activated by Growth Factors (BDNF, NGF) and synaptic transmission such as glutamate. These activators of NF-κB in the nervous system all converge upon the IKK complex and the canonical pathway.
Recently there has been a great deal of interest in the role of NF-κB in the nervous system. Current studies suggest that NF-κB is important for learning and memory in multiple organisms including crabs, fruit flies, and mice. NF-κB may regulate learning and memory in part by modulating synaptic plasticity, synapse function, as well as by regulating the growth of dendrites and dendritic spines.
Genes that have NF-κB binding sites have been shown to have increased expression following learning, suggesting that the transcriptional targets of NF-κB in the nervous system are important for plasticity. NF-κB target genes that may be important for plasticity and learning include growth factors (BDNF, NGF), cytokines (TNF-alpha, TNFR) and kinases (PKAc).
Despite the functional evidence for a role for Rel-family transcription factors in the nervous system, it is still not clear that the neurological effects of NF-κB reflect transcriptional activation in neurons. Most manipulations and assays are performed in the mixed-cell environments found in vivo, in "neuronal" cell cultures that contain significant numbers of glia, or in tumor-derived "neuronal" cell lines. When transfections or other manipulations have been targeted specifically at neurons, the endpoints measured are typically electrophysiology or other parameters far removed from gene transcription. Careful tests of NF-κB-dependent transcription in highly purified cultures of neurons generally show little to no NF-κB activity.
Some of the reports of NF-κB in neurons appear to have been an artifact of antibody nonspecificity. Of course, artifacts of cell culture, such as removal of neurons from the influence of glia, could create spurious results as well. But this has been addressed in at least two co-culture approaches. Moerman et al. used a coculture format whereby neurons and glia could be separated after treatment for EMSA analysis, and they found that the NF-κB induced by glutamatergic stimuli was restricted to glia (and, intriguingly, only glia that had been in the presence of neurons for 48 hours). The same investigators explored the issue in another approach, utilizing neurons from an NF-κB reporter transgenic mouse cultured with wild-type glia; glutamatergic stimuli again failed to activate NF-κB in neurons. Some of the DNA-binding activity noted under certain conditions (particularly that reported as constitutive) appears to result from Sp3 and Sp4 binding to a subset of κB enhancer sequences in neurons. This activity is actually inhibited by glutamate and other conditions that elevate intraneuronal calcium. In the final analysis, the role of NF-κB in neurons remains opaque due to the difficulty of measuring transcription in cells that are simultaneously identified for type. Certainly, learning and memory could be influenced by transcriptional changes in astrocytes and other glial elements. And it should be considered that there could be mechanistic effects of NF-κB aside from direct transactivation of genes.
Clinical significance
Cancers
NF-κB is widely used by eukaryotic cells as a regulator of genes that control cell proliferation and cell survival. As such, many different types of human tumors have misregulated NF-κB: that is, NF-κB is constitutively active. Active NF-κB turns on the expression of genes that keep the cell proliferating and protect the cell from conditions that would otherwise cause it to die via apoptosis. In cancer, proteins that control NF-κB signaling are mutated or aberrantly expressed, leading to defective coordination between the malignant cell and the rest of the organism. This is evident both in metastasis, as well as in the inefficient eradication of the tumor by the immune system.
Normal cells can die when removed from the tissue they belong to, or when their genome cannot operate in harmony with tissue function: these events depend on feedback regulation of NF-κB, and fail in cancer.
Defects in NF-κB result in increased susceptibility to apoptosis, leading to increased cell death. This is because NF-κB regulates anti-apoptotic genes, especially TRAF1 and TRAF2, and therefore abrogates the activities of the caspase family of enzymes, which are central to most apoptotic processes.
In tumor cells, NF-κB activity is enhanced, for example in 41% of nasopharyngeal carcinomas and in colorectal, prostate and pancreatic tumors. This is either due to mutations in genes encoding the NF-κB transcription factors themselves or in genes that control NF-κB activity (such as IκB genes); in addition, some tumor cells secrete factors that cause NF-κB to become active. Blocking NF-κB can cause tumor cells to stop proliferating, to die, or to become more sensitive to the action of anti-tumor agents. Thus, NF-κB is the subject of much active research among pharmaceutical companies as a target for anti-cancer therapy.
However, even though convincing experimental data identify NF-κB as a critical promoter of tumorigenesis, creating a solid rationale for antitumor therapies based on suppression of NF-κB activity, caution should be exercised when considering anti-NF-κB activity as a broad therapeutic strategy in cancer treatment: data have also shown that NF-κB activity enhances tumor-cell sensitivity to apoptosis and senescence. In addition, canonical NF-κB has been shown to be a Fas transcription activator, while the alternative NF-κB pathway is a Fas transcription repressor. NF-κB therefore promotes Fas-mediated apoptosis in cancer cells, and inhibition of NF-κB may suppress Fas-mediated apoptosis and so impair host immune cell-mediated tumor suppression.
Inflammation
Because NF-κB controls many genes involved in inflammation, it is not surprising that NF-κB is found to be chronically active in many inflammatory diseases, such as inflammatory bowel disease, arthritis, sepsis, gastritis, asthma, atherosclerosis and others. It is important to note, though, that elevation of some NF-κB activators, such as osteoprotegerin (OPG), is associated with elevated mortality, especially from cardiovascular diseases. Elevated NF-κB has also been associated with schizophrenia. Recently, NF-κB activation has been suggested as a possible molecular mechanism for the catabolic effects of cigarette smoke in skeletal muscle and sarcopenia. Research has shown that during inflammation the function of a cell depends on signals it activates in response to contact with adjacent cells and to combinations of hormones, especially cytokines, that act on it through specific receptors. A cell's phenotype within a tissue develops through mutual stimulation of feedback signals that coordinate its function with other cells; this is especially evident during reprogramming of cell function when a tissue is exposed to inflammation, because cells alter their phenotype and gradually express combinations of genes that prepare the tissue for regeneration after the cause of inflammation is removed. Particularly important are the feedback responses that develop between tissue-resident cells and circulating cells of the immune system.
Fidelity of feedback responses between diverse cell types and the immune system depends on the integrity of mechanisms that limit the range of genes activated by NF-κB, allowing expression only of genes which contribute to an effective immune response and, subsequently, a complete restoration of tissue function after resolution of inflammation. In cancer, mechanisms that regulate gene expression in response to inflammatory stimuli are altered to the point that a cell ceases to link its survival with the mechanisms that coordinate its phenotype and its function with the rest of the tissue. This is often evident in severely compromised regulation of NF-κB activity, which allows cancer cells to express abnormal cohorts of NF-κB target genes. As a result, not only do the cancer cells function abnormally, but cells of the surrounding tissue alter their function and cease to support the organism exclusively. Additionally, several types of cells in the microenvironment of a cancer may change their phenotypes to support cancer growth. Inflammation, therefore, is a process that tests the fidelity of tissue components, because the process that leads to tissue regeneration requires coordination of gene expression between diverse cell types.
NEMO
NEMO deficiency syndrome is a rare genetic condition relating to a fault in IKBKG, the gene encoding NEMO, which is required for activation of NF-κB. It mostly affects males and has a highly variable set of symptoms and prognoses.
Aging and obesity
NF-κB is increasingly expressed with obesity and aging, resulting in reduced levels of the anti-inflammatory, pro-autophagy, anti-insulin resistance protein sirtuin 1. NF-κB increases the levels of the microRNA miR-34a, which inhibits nicotinamide adenine dinucleotide (NAD) synthesis by binding to its promoter region, resulting in lower levels of sirtuin 1.
NF-κB and interleukin 1 alpha mutually induce each other in senescent cells in a positive feedback loop causing the production of senescence-associated secretory phenotype (SASP) factors. NF-κB and the NAD-degrading enzyme CD38 also mutually induce each other.
NF-κB is a central component of the cellular response to damage. NF-κB is activated in a variety of cell types that undergo normal or accelerated aging. Genetic or pharmacologic inhibition of NF-κB activation can delay the onset of numerous aging related symptoms and pathologies. This effect may be explained, in part, by the finding that reduction of NF-κB reduces the production of mitochondria-derived reactive oxygen species that can damage DNA.
Addiction
NF-κB is one of several induced transcriptional targets of ΔFosB, which facilitates the development and maintenance of an addiction to a stimulus. In the caudate putamen, NF-κB induction is associated with increases in locomotion, whereas in the nucleus accumbens, NF-κB induction enhances the positive reinforcing effect of a drug through reward sensitization.
Non-drug inhibitors
Many natural products (including anti-oxidants) that have been promoted to have anti-cancer and anti-inflammatory activity have also been shown to inhibit NF-κB. There is a controversial US patent (US patent 6,410,516) that applies to the discovery and use of agents that can block NF-κB for therapeutic purposes. This patent is involved in several lawsuits, including Ariad v. Lilly. Recent work by Karin, Ben-Neriah and others has highlighted the importance of the connection between NF-κB, inflammation, and cancer, and underscored the value of therapies that regulate the activity of NF-κB.
Extracts from a number of herbs and dietary plants are efficient inhibitors of NF-κB activation in vitro. Nobiletin, a flavonoid isolated from citrus peels, has been shown to inhibit the NF-κB signaling pathway in mice. The circumsporozoite protein of Plasmodium falciparum has been shown to be an inhibitor of NF-κB. Likewise, various withanolides of Withania somnifera (Ashwagandha) have been found to have inhibiting effects on NF-κB through inhibition of proteasome mediated ubiquitin degradation of IκBα.
As a drug target
Aberrant activation of NF-κB is frequently observed in many cancers. Moreover, suppression of NF-κB limits the proliferation of cancer cells. In addition, NF-κB is a key player in the inflammatory response. Hence, methods of inhibiting NF-κB signaling have potential therapeutic application in cancer and inflammatory diseases.
Both the canonical and non-canonical NF-κB pathways require proteasomal degradation of regulatory pathway components for NF-κB signalling to occur. The proteasome inhibitor bortezomib broadly blocks this activity and is approved for the treatment of NF-κB-driven mantle cell lymphoma and multiple myeloma.
The discovery that activation of NF-κB nuclear translocation can be separated from the elevation of oxidant stress offers a promising avenue for the development of strategies targeting NF-κB inhibition.
The drug denosumab acts to raise bone mineral density and reduce fracture rates in many patient sub-groups by inhibiting RANKL. RANKL acts through its receptor RANK, which in turn promotes NF-κB; RANKL normally works by enabling the differentiation of osteoclasts from monocytes.
Disulfiram, olmesartan and dithiocarbamates can inhibit the NF-κB signaling cascade. Efforts to develop direct NF-κB inhibitors have emerged with compounds such as (-)-DHMEQ, PBS-1086, IT-603 and IT-901. (-)-DHMEQ and PBS-1086 are irreversible binders of NF-κB, while IT-603 and IT-901 are reversible binders. DHMEQ covalently binds to Cys 38 of p65.
Anatabine's anti-inflammatory effects are claimed to result from modulation of NF-κB activity. However, the studies purporting its benefit use abnormally high doses in the millimolar range (similar to the extracellular potassium concentration), which are unlikely to be achieved in humans.
BAY 11-7082 has also been identified as a drug that can inhibit the NF-κB signaling cascade. It irreversibly prevents the phosphorylation of IKK-α, thereby downregulating NF-κB activation.
It has been shown that administration of BAY 11-7082 rescued renal functionality in Sprague-Dawley rats with induced diabetes by suppressing NF-κB-regulated oxidative stress.
Research has shown that the N-acylethanolamine palmitoylethanolamide is capable of PPAR-mediated inhibition of NF-κB.
The biological target of iguratimod, a drug marketed to treat rheumatoid arthritis in Japan and China, was unknown as of 2015, but the primary mechanism of action appeared to be preventing NF-κB activation.
See also
IKK2
RELA
TNF receptor superfamily
Imd pathway
Notes
References
External links
Aging-related proteins
Protein complexes
Programmed cell death
Transcription factors | NF-κB | Chemistry,Biology | 6,530 |
3,609,793 | https://en.wikipedia.org/wiki/Human%20mail | Human mail is the transportation of a person through the postal system, usually as a stowaway. While rare, there have been some reported cases of people attempting to travel through the mail.
Real occurrences
Henry Brown (age 42), an African-American slave from Virginia, successfully escaped in a shipping box sent north to the free state of Pennsylvania in 1849. He was known thereafter as Henry "Box" Brown.
Among his many human-mail stunts in the 1890s, Austrian tailor Herman Zeitung mailed himself in a box from New York to the World's Columbian Exposition in Chicago, arriving on July 28, 1893. Four days earlier, another Austrian, Ignatz Lefkovitz, did the same.
W. Reginald Bray mailed himself within England by ordinary mail in 1900 and then by registered mail in 1903.
Suffragettes Elspeth Douglas McClelland and Daisy Solomon had themselves mailed to the then Prime Minister of the United Kingdom, H. H. Asquith, at 10 Downing Street on 23 February 1909, but his office refused to accept the parcels.
Reg Spiers mailed himself from Heathrow Airport, London, to Perth Airport, Western Australia, in 1964. His 63-hour journey was spent in a box made by fellow British javelin thrower John McSorley. Spiers spent some time outside his container in the cargo hold of the plane, and suffered from dehydration when he was offloaded onto the tarmac of Bombay Airport. He arrived in Perth undetected and returned home to Adelaide.
In 1965, Brian Robson posted himself from Australia to the United Kingdom; he was discovered in the United States in transit and sent back to London. The journey took four days, with the box repeatedly being stored upside down. Two men, Paul and John, assisted him in the trip by nailing the box shut.
Charles McKinley (age 25) shipped himself from New York City to Dallas, Texas in a box in 2003. He was attempting to visit his parents and wanted to save on the air fare by charging the shipping fees to his former employer. However, he was discovered during the final leg of his journey, having successfully travelled by plane.
An inmate (age 42) serving a seven-year drug conviction sentence in Germany escaped from a prison in 2008 by climbing into a box in the mail room, which was picked up by a courier.
In August 2012, Hu Seng of Chongqing, a city in southwestern China, shipped himself to his girlfriend as a prank. He nearly died when the courier took three hours to deliver the package: there was little air in the box, and its walls were too thick for him to puncture a breathing hole. When the package arrived at the destination address, his girlfriend found him unconscious, and he had to be revived by paramedics.
Mailing children
The mailing of people light enough to fall within parcel weight limits, i.e., children, was occasionally practiced due to a legal ambiguity when the United States first introduced domestic parcel post in 1913, but was restricted by 1914. The children were carried along by mail carriers, but were not put in boxes.
See also
Freighthopping
References
Postal systems
Stowaways | Human mail | Technology | 640 |
7,330,494 | https://en.wikipedia.org/wiki/Hepatocyte%20growth%20factor | Hepatocyte growth factor (HGF) or scatter factor (SF) is a paracrine cellular growth, motility and morphogenic factor. It is secreted by mesenchymal cells and targets and acts primarily upon epithelial cells and endothelial cells, but also acts on haemopoietic progenitor cells and T cells. It has been shown to have a major role in embryonic organ development, specifically in myogenesis, in adult organ regeneration, and in wound healing.
Function
Hepatocyte growth factor regulates cell growth, cell motility, and morphogenesis by activating a tyrosine kinase signaling cascade after binding to the proto-oncogenic c-Met receptor. Hepatocyte growth factor is secreted by platelets and mesenchymal cells and acts as a multi-functional cytokine on cells of mainly epithelial origin. Its ability to stimulate mitogenesis, cell motility, and matrix invasion gives it a central role in angiogenesis, tumorigenesis, and tissue regeneration.
Structure
It is secreted as a single inactive polypeptide and is cleaved by serine proteases into a 69-kDa alpha-chain and 34-kDa beta-chain. A disulfide bond between the alpha and beta chains produces the active, heterodimeric molecule. The protein belongs to the plasminogen subfamily of S1 peptidases but has no detectable protease activity.
Clinical significance
Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease as well as treatment for the damage that occurs to the heart after myocardial infarction.
As well as the well-characterised effects of HGF on epithelial cells, endothelial cells and haemopoietic progenitor cells, HGF also regulates the chemotaxis of T cells into heart tissue. Binding of HGF by c-Met, expressed on T cells, causes the upregulation of c-Met, CXCR3, and CCR4 which in turn imbues them with the ability to migrate into heart tissue. HGF also promotes angiogenesis in ischemia injury.
HGF may further play a role as an indicator for prognosis of chronicity for Chikungunya virus induced arthralgia. High HGF levels correlate with high rates of recovery.
Excessive local expression of HGF in the breasts has been implicated in macromastia. HGF is also importantly involved in normal mammary gland development.
HGF has been implicated in a variety of cancers, including of the lungs, pancreas, thyroid, colon, and breast.
Increased expression of HGF has been associated with the enhanced and scarless wound healing capabilities of fibroblast cells isolated from the oral mucosa tissue.
Circulating plasma levels
Plasma from patients with advanced heart failure shows increased levels of HGF, which correlate with a negative prognosis and a high risk of mortality. Circulating HGF has also been identified as a prognostic marker of severity in patients with hypertension, and has also been suggested as an early biomarker for the acute phase of bowel inflammation.
Pharmacokinetics
Exogenous HGF administered by intravenous injection is cleared rapidly from circulation by the liver, with a half-life of approximately 4 minutes.
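For intuition, simple first-order clearance with the roughly 4-minute half-life quoted above implies the dose fractions computed in the sketch below; the time points chosen are illustrative assumptions.

```python
# Fraction of an intravenous HGF dose remaining under simple first-order
# clearance with a 4-minute half-life (value from the text above).
HALF_LIFE_MIN = 4.0

def fraction_remaining(t_min: float) -> float:
    return 0.5 ** (t_min / HALF_LIFE_MIN)

for t in (4, 8, 20, 60):
    print(f"after {t:>2} min: {100 * fraction_remaining(t):7.3f}% of the dose remains")
```

After 20 minutes only about 3% of a bolus remains, which is one motivation for the plasmid DNA delivery approaches discussed under Clinical significance.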
Modulators
Dihexa is an orally active, centrally penetrant small-molecule compound that directly binds to HGF and potentiates its ability to activate its receptor, c-Met. It is a strong inducer of neurogenesis and is being studied for the potential treatment of Alzheimer's disease and Parkinson's disease.
Interactions
Hepatocyte growth factor has been shown to interact with the protein product of the c-Met oncogene, identified as the HGF receptor (HGFR). Both overexpression of the Met/HGFR receptor protein and autocrine activation of Met/HGFR by simultaneous expression of the hepatocyte growth factor ligand have been implicated in oncogenesis.
Hepatocyte growth factor interacts with the sulfated glycosaminoglycans heparan sulfate and dermatan sulfate. The interaction with heparan sulfate allows hepatocyte growth factor to form a complex with c-Met that is able to transduce intracellular signals leading to cell division and cell migration.
See also
Epidermal growth factor
Insulin-like growth factor 1
Epithelial–mesenchymal transition
Madin-Darby Canine Kidney Cells
References
Further reading
External links
Hepatocyte growth factor on the Atlas of Genetics and Oncology
UCSD Signaling Gateway Molecule Page on HGF
Growth factors
Developmental genes and proteins
Cytokines | Hepatocyte growth factor | Chemistry,Biology | 1,017 |
20,872,294 | https://en.wikipedia.org/wiki/Bench%20%28geology%29 | In geomorphology, geography and geology, a bench or benchland is a long, relatively narrow strip of relatively level or gently inclined land bounded by distinctly steeper slopes above and below it. Benches can be of different origins and created by very different geomorphic processes.
First, the differential erosion of rocks or sediments of varying hardness and resistance to erosion can create benches; Earth scientists call such benches "structural benches." Second, other benches are narrow fluvial terraces created by the abandonment of a floodplain by a river or stream and entrenchment of the river valley into it. Finally, a bench is also the name of a narrow flat area often seen at the base of a sea cliff created by waves or other physical or chemical erosion near the shoreline. These benches are typically referred to as either "coastal benches," "wave-cut benches," or "wave-cut platforms."
In mining, a bench is a narrow strip of land cut into the side of an open-pit mine. These step-like zones are created along the walls of an open-pit mine for access and mining.
See also
References
External links
Landforms
Fluvial landforms
Geomorphology
Riparian zone | Bench (geology) | Environmental_science | 249 |
2,697,320 | https://en.wikipedia.org/wiki/Brucella%20suis | Brucella suis is a bacterium that causes swine brucellosis, a zoonosis that affects pigs. The disease typically causes chronic inflammatory lesions in the reproductive organs of susceptible animals or orchitis, and may even affect joints and other organs. The most common symptom is abortion in pregnant susceptible sows at any stage of gestation. Other manifestations are temporary or permanent sterility, lameness, posterior paralysis, spondylitis, and abscess formation. It is transmitted mainly by ingestion of infected tissues or fluids, semen during breeding, and suckling infected animals.
Since brucellosis threatens the food supply and causes undulant fever, Brucella suis and other Brucella species (B. melitensis, B. abortus, B. ovis, B. canis) are recognized as potential agricultural, civilian, and military bioterrorism agents.
Symptoms and signs
The most frequent clinical signs following B. suis infection are abortion in pregnant females, reduced milk production, and infertility. Cattle can also be transiently infected when they share pasture or facilities with infected pigs, and B. suis can be transmitted by cow's milk.
Swine also develop orchitis (swelling of the testicles), lameness (movement disability), hind limb paralysis, or spondylitis (inflammation in joints).
Cause
Brucella suis is a Gram-negative, facultative, intracellular coccobacillus, capable of growing and reproducing inside of host cells, specifically phagocytic cells. They are also not spore-forming, capsulated, or motile. Flagellar genes, however, are present in the B. suis genome, but are thought to be cryptic remnants because some were truncated and others were missing crucial components of the flagellar apparatus. In mouse models, the flagellum is essential for a normal infectious cycle, where the inability to assemble a complete flagellum leads to severe attenuation of the bacteria.
Brucella suis is differentiated into five biovars (strains), where biovars 1–3 infect wild boar and domestic pigs, and biovars 1 and 3 may cause severe diseases in humans.
In contrast, biovar 2 found in wild boars in Europe shows mild or no clinical signs and cannot infect healthy humans, but does infect pigs and hares.
Pathogenesis
Phagocytes are an essential component of the host's innate immune system with various antimicrobial defense mechanisms to clear pathogens by oxidative burst, acidification of phagosomes, and fusion of the phagosome and lysosome. B. suis, in return, has developed ways to counteract the host cell defense to survive in the macrophage and to deter host immune responses.
B. suis possesses smooth lipopolysaccharide (LPS), which has a full-length O-chain, as opposed to rough LPS, which has a truncated or no O-chain. This structural characteristic allows B. suis to interact with lipid rafts on the surface of macrophages to be internalized, and the lipid-rich phagosome that forms is able to avoid fusion with lysosomes through this endocytic pathway. In addition, this furtive entry into macrophages does not affect the cell's normal trafficking. The smooth LPS also inhibits host cell apoptosis by O-polysaccharides through a TNF-alpha-independent mechanism, which allows B. suis to avoid activating the host immune system.
Once inside macrophages, B. suis is able to endure the rapid acidification in the phagosome to pH 4.0–4.5 by expressing metabolism genes mainly for amino acid synthesis. The acidic pH is actually essential for replication of the bacteria by inducing major virulence genes of the virB operon and the synthesis of DnaK chaperones. DnaK is part of the heat shock protein 70 family, and aids in the correct synthesis and activation of certain virulence factors.
In addition, the B. suis gene for nickel transport, nikA, is activated by metal ion deficiency and is expressed once in the phagosome. Nickel is essential for many enzymatic reactions, including ureolysis to produce ammonia which in turn may neutralize acidic pH. Since B. suis is unable to grow in a strongly acidic medium, it could be protected from acidification by the ammonia.
Summary:
Brucella suis encounters a macrophage, but no oxidative burst occurs.
Lipid rafts are necessary for macrophage penetration.
The phagosome rapidly acidifies, creating a stressful environment for bacteria, which triggers activation of virulence genes.
Lipid rafts on phagosomes prevent lysosomal fusion, and normal cell trafficking is unaffected.
Diagnosis
Treatment
Because B. suis is a facultative intracellular pathogen able to adapt to environmental conditions in macrophages, treatment failure and relapse rates are high. The only effective way to control and eradicate the zoonosis is vaccination of all susceptible hosts and elimination of infected animals. The Brucella abortus (rough LPS Brucella) vaccine, developed for bovine brucellosis and licensed by the USDA Animal and Plant Health Inspection Service, has shown protection for some swine and is also effective against B. suis infection, but there is currently no approved vaccine for swine brucellosis.
Biological warfare
In the United States, B. suis was the first biological agent weaponized, in 1952, and was field-tested with B. suis-filled M33 cluster bombs. It is, however, considered an agent of lesser threat because many infections are asymptomatic and mortality is low; it would serve more as an incapacitating agent.
References
Swine diseases
Bacterial diseases
Biological agents
Theriogenology
Hyphomicrobiales | Brucella suis | Biology,Environmental_science | 1,255 |
26,020,588 | https://en.wikipedia.org/wiki/Name-bearing%20type | Under the International Code of Zoological Nomenclature (Code), the name-bearing type or onomatophore is the biological type that determines the application of a name. Each animal taxon regulated by the Code at least potentially has a name-bearing type. The name-bearing type can be either a type genus (family group), type species (genus group), or one or more type specimens (species group). For example, the name Mabuya maculata (Gray, 1839) has often been used for the Noronha skink (currently Trachylepis atlantica), but because the name-bearing type of the former, a lizard preserved in the Muséum national d'histoire naturelle in Paris, does not represent the same species as the Noronha skink, the name maculata cannot be used for the latter.
Effect on synonymy
Under the ICZN, two names of the same rank that have the same name-bearing type are objective synonyms, as are two whose name-bearing types are themselves objectively synonymous names; for example, the names Didelphis brevicaudata Erxleben, 1777, and Didelphys brachyuros Schreber, 1778, were both based on a specimen (now in the British Museum of Natural History) described by Seba in 1734 and are therefore objective synonyms (the species they refer to, a small South American opossum, is currently known as Monodelphis brevicaudata). In contrast, a subjective synonym is based on a different name-bearing type, but is regarded as representing the same taxon; for example, the name Viverra touan Shaw, 1800, is based on a different name-bearing type (a specimen in the Field Museum of Natural History), but is currently regarded as representing the same species as Didelphis brevicaudata and Didelphys brachyuros.
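The decision rule above is simple enough to state as code. The sketch below is an illustration of the rule, not part of the Code itself; the specimen labels are shorthand for the examples in this section.

```python
# Objective vs. subjective synonymy follows from the name-bearing types:
# identical types -> objective synonyms; distinct types judged to represent
# the same taxon -> subjective synonyms.

def synonym_kind(type_of_a: str, type_of_b: str, judged_same_taxon: bool) -> str:
    if type_of_a == type_of_b:
        return "objective synonyms"
    return "subjective synonyms" if judged_same_taxon else "not synonyms"

# Didelphis brevicaudata and Didelphys brachyuros rest on Seba's specimen:
print(synonym_kind("Seba 1734 specimen (BMNH)", "Seba 1734 specimen (BMNH)", True))
# Viverra touan rests on a different (FMNH) neotype but is judged conspecific:
print(synonym_kind("Seba 1734 specimen (BMNH)", "FMNH neotype", True))
```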
Family group
"Family-group" ranks include the superfamily and all other ranks below it and above the genus, including the family and tribe. The name of a family-group taxon is based on the stem of a designated type genus; for example, the Central American rodent tribe Nyctomyini has Nyctomys as its type genus and its name consists of the stem of the type genus, Nyctomy-, and the appropriate ending for a tribe, -ini.
Genus group
"Genus group" ranks consist of the genus and subgenus. The name-bearing type for a genus-group taxon is the type species, which must be one of the species included when that taxon ("genus" hereafter for brevity) was first formally named or, when no species were included when the genus was named, one of the first species that were subsequently included in it. A genus described after 1930 (1999 for ichnotaxa) must have its type species fixed when first named; in taxa described earlier without such an explicit designation, the type species can be fixed subsequently. For example, the skink genus Euprepis contained nine species when first described by Wagler in 1830, but no type species was designated. In 2002, Mausfeld and others used the name for a mainly African group of skinks, designating Lacerta punctata Linnaeus, 1758, as the type species (currently Lygosoma punctatum), but in 2003, Bauer noted that Loveridge had already fixed the type species of Euprepis in 1957 as Scincus agilis (currently Mabuya agilis), invalidating the later fixation by Mausfeld and others. Accordingly, Euprepis is now a subjective synonym of Mabuya and the mostly African group Mausfeld and others incorrectly called Euprepis is known as Trachylepis.
Species group
Official "species-group" ranks consist of just the species and subspecies. (Note: a species group defined as a taxon, rather than as a category of ranks, has an unofficial rank, one of several such ranks between the subgenus and species levels sometimes used by zoologists in taxa with many species; see Taxonomic rank.) The name-bearing type of a species-group taxon (hereafter "species" for brevity) is an actual specimen or set of specimens; the Code recommends that great care should be exercised to ensure the preservation of such specimens. It can either be designated in the publication establishing the name or designated later. In the former case, there is either a single name-bearing type, a holotype, or a set of syntypes. In species named before 2000 without explicit designation of a holotype, all specimens in the type series are considered as syntypes. Name-bearing types designated after the original publication include lectotypes and neotypes. If a taxon has syntypes, one of those can be selected as the lectotype, upon which act the others lose the status of syntype. A neotype may be designated to replace the previous name-bearing type when the original type is lost or by application to the Commission when the previous name-bearing type cannot be identified. For example, Shaw's name Viverra touan was based on a description of "Le Touan" by Georges-Louis Leclerc, Comte de Buffon, which left the identity of the name uncertain, and in 2001 Voss and others selected as the neotype a specimen in the Field Museum of Natural History, which thereby becomes the name-bearing type.
The name-bearing type is usually an individual animal in a museum collection; for example, the name-bearing type (in this case, lectotype) of the skink species currently known as Trachylepis maculata (Gray, 1839) is a lizard preserved in the collections of the French Muséum national d'histoire naturelle. Other kinds of name-bearing types are also allowed by the Code, including colonies of asexually reproducing animals, natural casts of fossils, a series of stages of the life cycle of a living protistan (a hapantotype), and some others. If an illustration or description is used as the basis of a species, the specimen or group of specimens illustrated or described is the name-bearing type (not the illustration or description itself), even if no longer in existence.
See also
Glossary of scientific naming
Notes
References to "Code" refer to the International Code for Zoological Nomenclature (International Commission on Zoological Nomenclature, 1999).
Literature cited
Bauer, A.M. 2003. "On the identity of Lacerta punctata (Linnaeus 1758), the type species of the genus Euprepis (Wagler 1830), and the generic assignment of Afro-Malagasy skinks." African Journal of Herpetology 52:1–7.
Groves, C.P. 2005. "Order Primates." Pp. 111–184 in Wilson, D.E. & Reeder, D.M. (eds.). Mammal Species of the World: a taxonomic and geographic reference, 3rd ed. Baltimore: Johns Hopkins University Press, 2 vols., 2142 pp.
Holthuis, L.B. 1996. "Original watercolours donated by Cornelius Sittardus to Conrad Gesner, and published by Gesner in his (1558–1670) works on aquatic animals". Zoologische Mededelingen 70:169–196.
International Commission on Zoological Nomenclature. 1999. International Code of Zoological Nomenclature, 4th ed. London: International Trust for Zoological Nomenclature.
Mausfeld, P. and Vrcibradic, D. 2002. "On the nomenclature of the skink (Mabuya) endemic to the western Atlantic archipelago of Fernando de Noronha, Brazil." Journal of Herpetology 36(2):292–295.
Miralles, A., Chaparro, J.C. and Harvey, M.B. 2009. "Three rare and enigmatic South American skinks." Zootaxa 2012:47–68.
Musser, G.G. and Carleton, M.D. 2005. "Superfamily Muroidea." Pp. 894–1531 in Wilson, D.E. and Reeder, D.M. (eds.). "Mammal Species of the World: a taxonomic and geographic reference, 3rd ed." Baltimore: Johns Hopkins University Press, 2 vols., 2142 pp.
Voss, R.S., Lunde, D.P. and Simmons, N.B. 2001. "The mammals of Paracou, French Guiana: a Neotropical lowland rainforest fauna. Part 2. Nonvolant species." Bulletin of the American Museum of Natural History 263:1–236.
Zoological nomenclature | Name-bearing type | Biology | 1,806 |
18,915,587 | https://en.wikipedia.org/wiki/Fischer%20assay | The Fischer assay is a standardized laboratory test for determining the oil yield from oil shale to be expected from a conventional shale oil extraction. A 100 gram oil shale sample crushed to <2.38 mm is heated in a small aluminum retort to at a rate of 12°C/min (22°F/min), and held at that temperature for 40 minutes. The distilled vapors of oil, gas, and water are passed through a condenser and cooled with ice water into a graduated centrifuge tube. The oil yields achieved by other technologies are often reported as a percentage of the Fischer Assay oil yield.
The original Fischer assay test was developed by Franz Joseph Emil Fischer and Hans Schrader in early research on low-temperature coal retorting. It was adapted for evaluating oil shale yields in 1949 by K. E. Stanfield and I. C. Frost.
See also
Fischer–Tropsch process
References
Oil shale technology
Name reactions
Chemical processes | Fischer assay | Chemistry | 201 |
65,910 | https://en.wikipedia.org/wiki/Printed%20circuit%20board | A printed circuit board (PCB), also called printed wiring board (PWB), is a laminated sandwich structure of conductive and insulating layers, each with a pattern of traces, planes and other features (similar to wires on a flat surface) etched from one or more sheet layers of copper laminated onto or between sheet layers of a non-conductive substrate. PCBs are used to connect or "wire" components to one another in an electronic circuit. Electrical components may be fixed to conductive pads on the outer layers, generally by soldering, which both electrically connects and mechanically fastens the components to the board. Another manufacturing process adds vias, metal-lined drilled holes that enable electrical interconnections between conductive layers, to boards with more than a single side.
Printed circuit boards are used in nearly all electronic products today. Alternatives to PCBs include wire wrap and point-to-point construction, both once popular but now rarely used. PCBs require additional design effort to lay out the circuit, but manufacturing and assembly can be automated. Electronic design automation software is available to do much of the work of layout. Mass-producing circuits with PCBs is cheaper and faster than with other wiring methods, as components are mounted and wired in one operation. Large numbers of PCBs can be fabricated at the same time, and the layout has to be done only once. PCBs can also be made manually in small quantities, with reduced benefits.
PCBs can be single-sided (one copper layer), double-sided (two copper layers on both sides of one substrate layer), or multi-layer (stacked layers of substrate with copper plating sandwiched between each and on the outside layers). Multi-layer PCBs provide much higher component density, because circuit traces on the inner layers would otherwise take up surface space between components. The rise in popularity of multilayer PCBs with more than two, and especially with more than four, copper planes was concurrent with the adoption of surface-mount technology. However, multilayer PCBs make repair, analysis, and field modification of circuits much more difficult and usually impractical.
The world market for bare PCBs exceeded US$60.2 billion in 2014, and was estimated at $80.33 billion in 2024, forecast to be $96.57 billion for 2029, growing at 4.87% per annum.
History
Predecessors
Before the development of printed circuit boards, electrical and electronic circuits were wired point-to-point on a chassis. Typically, the chassis was a sheet metal frame or pan, sometimes with a wooden bottom. Components were attached to the chassis, usually by insulators when the connecting point on the chassis was metal, and then their leads were connected directly or with jumper wires by soldering, or sometimes using crimp connectors, wire connector lugs on screw terminals, or other methods. Circuits were large, bulky, heavy, and relatively fragile (even discounting the breakable glass envelopes of the vacuum tubes that were often included in the circuits), and production was labor-intensive, so the products were expensive.
Development of the methods used in modern printed circuit boards started early in the 20th century. In 1903, a German inventor, Albert Hanson, described flat foil conductors laminated to an insulating board, in multiple layers. Thomas Edison experimented with chemical methods of plating conductors onto linen paper in 1904. Arthur Berry in 1913 patented a print-and-etch method in the UK, and in the United States Max Schoop obtained a patent to flame-spray metal onto a board through a patterned mask. Charles Ducas in 1925 patented a method of electroplating circuit patterns.
Predating the printed circuit invention, and similar in spirit, was John Sargrove's 1936–1947 Electronic Circuit Making Equipment (ECME) that sprayed metal onto a Bakelite plastic board. The ECME could produce three radio boards per minute.
Early PCBs
The Austrian engineer Paul Eisler invented the printed circuit as part of a radio set while working in the UK around 1936. In 1941 a multi-layer printed circuit was used in German magnetic influence naval mines.
Around 1943 the United States began to use the technology on a large scale to make proximity fuzes for use in World War II. Such fuzes required an electronic circuit that could withstand being fired from a gun, and could be produced in quantity. The Centralab Division of Globe Union submitted a proposal which met the requirements: a ceramic plate would be screenprinted with metallic paint for conductors and carbon material for resistors, with ceramic disc capacitors and subminiature vacuum tubes soldered in place. The technique proved viable, and the resulting patent on the process, which was classified by the U.S. Army, was assigned to Globe Union. It was not until 1984 that the Institute of Electrical and Electronics Engineers (IEEE) awarded Harry W. Rubinstein its Cledo Brunetti Award for early key contributions to the development of printed components and conductors on a common insulating substrate. Rubinstein was honored in 1984 by his alma mater, the University of Wisconsin-Madison, for his innovations in the technology of printed electronic circuits and the fabrication of capacitors. This invention also represents a step in the development of integrated circuit technology, as not only wiring but also passive components were fabricated on the ceramic substrate.
Post-war developments
In 1948, the US released the invention for commercial use. Printed circuits did not become commonplace in consumer electronics until the mid-1950s, after the Auto-Sembly process was developed by the United States Army. At around the same time in the UK work along similar lines was carried out by Geoffrey Dummer, then at the RRDE.
Motorola was an early leader in bringing the process into consumer electronics, announcing in August 1952 the adoption of "plated circuits" in home radios after six years of research and a $1M investment. Motorola soon began using its trademarked term for the process, PLAcir, in its consumer radio advertisements. Hallicrafters released its first "foto-etch" printed circuit product, a clock-radio, on November 1, 1952.
Even as circuit boards became available, the point-to-point chassis construction method remained in common use in industry (such as TV and hi-fi sets) into at least the late 1960s. Printed circuit boards were introduced to reduce the size, weight, and cost of parts of the circuitry. In 1960, a small consumer radio receiver might be built with all its circuitry on one circuit board, but a TV set would probably contain one or more circuit boards.
Originally, every electronic component had wire leads, and a PCB had holes drilled for each wire of each component. The component leads were then inserted through the holes and soldered to the copper PCB traces. This method of assembly is called through-hole construction. In 1949, Moe Abramson and Stanislaus F. Danko of the United States Army Signal Corps developed the Auto-Sembly process in which component leads were inserted into a copper foil interconnection pattern and dip soldered. The patent they obtained in 1956 was assigned to the U.S. Army. With the development of board lamination and etching techniques, this concept evolved into the standard printed circuit board fabrication process in use today. Soldering could be done automatically by passing the board over a ripple, or wave, of molten solder in a wave-soldering machine. However, the wires and holes are inefficient since drilling holes is expensive and consumes drill bits and the protruding wires are cut off and discarded.
From the 1980s onward, small surface mount parts have been used increasingly instead of through-hole components; this has led to smaller boards for a given functionality and lower production costs, but with some additional difficulty in servicing faulty boards.
In the 1990s the use of multilayer surface boards became more frequent. As a result, size was further minimized and both flexible and rigid PCBs were incorporated in different devices. In 1995 PCB manufacturers began using microvia technology to produce High-Density Interconnect (HDI) PCBs.
Recent advances
Recent advances in 3D printing have enabled several new techniques in PCB creation. 3D-printed electronics (PE) can be produced by printing a substrate item layer by layer and then depositing onto it a liquid ink that contains electronic functionalities.
HDI (High Density Interconnect) technology allows for a denser design on the PCB and thus potentially smaller PCBs with more traces and components in a given area. As a result, the paths between components can be shorter. HDIs use blind/buried vias, or a combination that includes microvias. With multi-layer HDI PCBs the interconnection of several vias stacked on top of each other (stacked vias, instead of one deep buried via) can be made stronger, thus enhancing reliability in all conditions. The most common applications for HDI technology are computer and mobile phone components as well as medical equipment and military communication equipment. A 4-layer HDI microvia PCB is equivalent in quality to an 8-layer through-hole PCB, so HDI technology can reduce costs. HDI PCBs are often made using build-up film such as Ajinomoto build-up film, which is also used in the production of flip chip packages. Some PCBs have optical waveguides, similar to optical fibers, built on the PCB.
Composition
A basic PCB consists of a flat sheet of insulating material and a layer of copper foil, laminated to the substrate. Chemical etching divides the copper into separate conducting lines called tracks or circuit traces, pads for connections, vias to pass connections between layers of copper, and features such as solid conductive areas for electromagnetic shielding or other purposes. The tracks function as wires fixed in place, and are insulated from each other by air and the board substrate material. The surface of a PCB may have a coating that protects the copper from corrosion and reduces the chances of solder shorts between traces or undesired electrical contact with stray bare wires. For its function in helping to prevent solder shorts, the coating is called solder resist or solder mask.
The pattern to be etched into each copper layer of a PCB is called the "artwork". The etching is usually done using photoresist which is coated onto the PCB, then exposed to light projected in the pattern of the artwork. The resist material protects the copper from dissolution into the etching solution. The etched board is then cleaned. A PCB design can be mass-reproduced in a way similar to the way photographs can be mass-duplicated from film negatives using a photographic printer.
FR-4 glass epoxy is the most common insulating substrate. Another substrate material is cotton paper impregnated with phenolic resin, often tan or brown.
When a PCB has no components installed, it is less ambiguously called a printed wiring board (PWB) or etched wiring board. However, the term "printed wiring board" has fallen into disuse. A PCB populated with electronic components is called a printed circuit assembly (PCA), printed circuit board assembly or PCB assembly (PCBA). In informal usage, the term "printed circuit board" most commonly means "printed circuit assembly" (with components). The IPC preferred term for an assembled board is circuit card assembly (CCA), and for an assembled backplane it is backplane assembly. "Card" is another widely used informal term for a "printed circuit assembly"; an expansion card is one example.
A PCB may be printed with a legend identifying the components, test points, or identifying text. Originally, silkscreen printing was used for this purpose, but today other, finer quality printing methods are usually used. Normally the legend does not affect the function of a PCBA.
Layers
A printed circuit board can have multiple layers of copper, which almost always are arranged in pairs. The number of layers and the interconnections designed between them (vias, PTHs) provide a general estimate of the board's complexity. Using more layers allows more routing options and better control of signal integrity, but is also time-consuming and costly to manufacture. Likewise, the selection of vias for the board allows fine-tuning of the board size, escape routing of signals from complex ICs, routing, and long-term reliability, but is tightly coupled with production complexity and cost.
One of the simplest boards to produce is the two-layer board. It has copper on both sides that are referred to as external layers; multi layer boards sandwich additional internal layers of copper and insulation. After two-layer PCBs, the next step up is the four-layer. The four layer board adds significantly more routing options in the internal layers as compared to the two layer board, and often some portion of the internal layers is used as ground plane or power plane, to achieve better signal integrity, higher signaling frequencies, lower EMI, and better power supply decoupling.
In multi-layer boards, the layers of material are laminated together in an alternating sandwich: copper, substrate, copper, substrate, copper, etc.; each plane of copper is etched, and any internal vias (that will not extend to both outer surfaces of the finished multilayer board) are plated-through, before the layers are laminated together. Only the outer layers need be coated; the inner copper layers are protected by the adjacent substrate layers.
Component mounting
"Through hole" components are mounted by their wire leads passing through the board and soldered to traces on the other side. "Surface mount" components are attached by their leads to copper traces on the same side of the board. A board may use both methods for mounting components. PCBs with only through-hole mounted components are now uncommon. Surface mounting is used for transistors, diodes, IC chips, resistors, and capacitors. Through-hole mounting may be used for some large components such as electrolytic capacitors and connectors.
The first PCBs used through-hole technology, mounting electronic components by lead inserted through holes on one side of the board and soldered onto copper traces on the other side. Boards may be single-sided, with an unplated component side, or more compact double-sided boards, with components soldered on both sides. Horizontal installation of through-hole parts with two axial leads (such as resistors, capacitors, and diodes) is done by bending the leads 90 degrees in the same direction, inserting the part in the board (often bending leads located on the back of the board in opposite directions to improve the part's mechanical strength), soldering the leads, and trimming off the ends. Leads may be soldered either manually or by a wave soldering machine.
Surface-mount technology emerged in the 1960s, gained momentum in the early 1980s, and became widely used by the mid-1990s. Components were mechanically redesigned to have small metal tabs or end caps that could be soldered directly onto the PCB surface, instead of wire leads to pass through holes. Components became much smaller and component placement on both sides of the board became more common than with through-hole mounting, allowing much smaller PCB assemblies with much higher circuit densities. Surface mounting lends itself well to a high degree of automation, reducing labor costs and greatly increasing production rates compared with through-hole circuit boards. Components can be supplied mounted on carrier tapes. Surface mount components can be about one-quarter to one-tenth of the size and weight of through-hole components, and passive components are much cheaper. However, prices of semiconductor surface mount devices (SMDs) are determined more by the chip itself than the package, with little price advantage over larger packages, and some wire-ended components, such as 1N4148 small-signal switching diodes, are actually significantly cheaper than SMD equivalents.
Electrical properties
Each trace consists of a flat, narrow part of the copper foil that remains after etching. Its resistance, determined by its width, thickness, and length, must be sufficiently low for the current the conductor will carry. Power and ground traces may need to be wider than signal traces. In a multi-layer board one entire layer may be mostly solid copper to act as a ground plane for shielding and power return. For microwave circuits, transmission lines can be laid out in a planar form such as stripline or microstrip with carefully controlled dimensions to assure a consistent impedance. In radio-frequency and fast switching circuits the inductance and capacitance of the printed circuit board conductors become significant circuit elements, usually undesired; conversely, they can be used as a deliberate part of the circuit design, as in distributed-element filters, antennae, and fuses, obviating the need for additional discrete components. High-density interconnect (HDI) PCBs have tracks or vias with a width or diameter of under 152 micrometers.
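As a rough illustration of the resistance point above, the sketch below applies R = ρL/(wt) to an example trace; the geometry and the "1 oz" (35 µm) foil thickness are assumed example values, not figures from this article.

```python
# Resistance of a rectangular copper trace: R = rho * L / (w * t).
RHO_CU_OHM_M = 1.68e-8   # copper resistivity near room temperature

def trace_resistance(length_m: float, width_m: float, thickness_m: float) -> float:
    return RHO_CU_OHM_M * length_m / (width_m * thickness_m)

# Example: 10 cm long, 0.25 mm wide trace in 35 um ("1 oz") copper foil.
r = trace_resistance(0.10, 0.25e-3, 35e-6)
print(f"{1000 * r:.0f} milliohms -> {1000 * r * 0.1:.1f} mV drop at 100 mA")
```

About 0.2 Ω for this geometry, which is why power and ground traces are made wider than signal traces.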
Materials
Laminates
Laminates are manufactured by curing layers of cloth or paper with thermoset resin under pressure and heat to form an integral final piece of uniform thickness; panels can be several feet in width and length. Varying cloth weaves (threads per inch or cm), cloth thickness, and resin percentage are used to achieve the desired final thickness and dielectric characteristics. Available standard laminate thicknesses are listed in ANSI/IPC-D-275.
The cloth or fiber material used, resin material, and the cloth to resin ratio determine the laminate's type designation (FR-4, CEM-1, G-10, etc.) and therefore the characteristics of the laminate produced. Important characteristics are the level to which the laminate is fire retardant, the dielectric constant (er), the loss tangent (tan δ), the tensile strength, the shear strength, the glass transition temperature (Tg), and the Z-axis expansion coefficient (how much the thickness changes with temperature).
There are quite a few different dielectrics that can be chosen to provide different insulating values depending on the requirements of the circuit. Some of these dielectrics are polytetrafluoroethylene (Teflon), FR-4, FR-1, CEM-1 or CEM-3. Well known pre-preg materials used in the PCB industry are FR-2 (phenolic cotton paper), FR-3 (cotton paper and epoxy), FR-4 (woven glass and epoxy), FR-5 (woven glass and epoxy), FR-6 (matte glass and polyester), G-10 (woven glass and epoxy), CEM-1 (cotton paper and epoxy), CEM-2 (cotton paper and epoxy), CEM-3 (non-woven glass and epoxy), CEM-4 (woven glass and epoxy), CEM-5 (woven glass and polyester). Thermal expansion is an important consideration especially with ball grid array (BGA) and naked die technologies, and glass fiber offers the best dimensional stability.
FR-4 is by far the most common material used today. The board stock with unetched copper on it is called "copper-clad laminate".
With decreasing size of board features and increasing frequencies, small non-homogeneities like uneven distribution of fiberglass or other filler, thickness variations, and bubbles in the resin matrix, and the associated local variations in the dielectric constant, are gaining importance.
Key substrate parameters
The circuit-board substrates are usually dielectric composite materials. The composites contain a matrix (usually an epoxy resin) and a reinforcement (usually a woven, sometimes non-woven, glass fibers, sometimes even paper), and in some cases a filler is added to the resin (e.g. ceramics; titanate ceramics can be used to increase the dielectric constant).
The reinforcement type defines two major classes of materials: woven and non-woven. Woven reinforcements are cheaper, but the high dielectric constant of glass may not be favorable for many higher-frequency applications. The spatially non-homogeneous structure also introduces local variations in electrical parameters, due to different resin/glass ratio at different areas of the weave pattern. Non-woven reinforcements, or materials with low or no reinforcement, are more expensive but more suitable for some RF/analog applications.
The substrates are characterized by several key parameters, chiefly thermomechanical (glass transition temperature, tensile strength, shear strength, thermal expansion), electrical (dielectric constant, loss tangent, dielectric breakdown voltage, leakage current, tracking resistance...), and others (e.g. moisture absorption).
At the glass transition temperature the resin in the composite softens and its thermal expansion increases significantly; exceeding Tg then exerts mechanical overload on the board components, e.g. the solder joints and the vias. Below Tg the thermal expansion of the resin roughly matches that of copper and glass; above it, the expansion becomes significantly higher. As the reinforcement and copper confine the board along the plane, virtually all volume expansion projects into the thickness and stresses the plated-through holes. Repeated soldering or other exposure to higher temperatures can cause failure of the plating, especially with thicker boards; thick boards therefore require a matrix with a high Tg.
The materials used determine the substrate's dielectric constant. This constant is also dependent on frequency, usually decreasing with frequency. As this constant determines the signal propagation speed, its frequency dependence introduces phase distortion in wideband applications; a dielectric constant that is as flat as achievable across frequency is important here. The impedance of transmission lines decreases with frequency, so faster signal edges reflect more than slower ones.
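The dependence of propagation speed on the dielectric constant can be quantified as v ≈ c/√εr for a signal travelling in the bulk dielectric (a stripline sees roughly this value; a microstrip sees a lower effective permittivity). A short sketch under the assumption of a typical FR-4 value of εr ≈ 4.4, which varies in practice with resin content and frequency:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def propagation(er: float) -> tuple[float, float]:
    """Return (signal speed in m/s, delay in ps per cm) for relative permittivity er."""
    v = C / math.sqrt(er)
    delay_ps_per_cm = 0.01 / v * 1e12
    return v, delay_ps_per_cm

v, d = propagation(4.4)  # assumed typical FR-4 value
print(f"{v / 1e8:.2f}e8 m/s, {d:.0f} ps/cm")  # ~1.43e8 m/s, ~70 ps/cm
```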
Dielectric breakdown voltage determines the maximum voltage gradient the material can be subjected to before suffering a breakdown (conduction, or arcing, through the dielectric).
Tracking resistance determines how the material resists high voltage electrical discharges creeping over the board surface.
Loss tangent determines how much of the electromagnetic energy from the signals in the conductors is absorbed in the board material. This factor is important for high frequencies. Low-loss materials are more expensive. Choosing unnecessarily low-loss material is a common engineering error in high-frequency digital design; it increases the cost of the boards without a corresponding benefit. Signal degradation by loss tangent and dielectric constant can be easily assessed by an eye pattern.
Moisture absorption occurs when the material is exposed to high humidity or water. Both the resin and the reinforcement may absorb water; water may also be soaked up by capillary forces through voids in the materials and along the reinforcement. Epoxies of the FR-4 materials are not particularly susceptible, with absorption of only 0.15%. Teflon has very low absorption of 0.01%. Polyimides and cyanate esters, on the other hand, suffer from high water absorption. Absorbed water can lead to significant degradation of key parameters; it impairs tracking resistance, breakdown voltage, and dielectric parameters. The relative dielectric constant of water is about 73, compared to about 4 for common circuit board materials. Absorbed moisture can also vaporize on heating, as during soldering, and cause cracking and delamination, the same effect responsible for "popcorning" damage on wet packaging of electronic parts. Careful baking of the substrates may be required to dry them prior to soldering.
Common substrates
Often encountered materials:
FR-2, phenolic paper or phenolic cotton paper, paper impregnated with a phenol formaldehyde resin. Common in consumer electronics with single-sided boards. Electrical properties inferior to FR-4. Poor arc resistance. Generally rated to 105 °C.
FR-4, a woven fiberglass cloth impregnated with an epoxy resin. Low water absorption (up to about 0.15%), good insulation properties, good arc resistance. Very common. Several grades with somewhat different properties are available. Typically rated to 130 °C.
Aluminum, or metal core board or insulated metal substrate (IMS), clad with a thermally conductive thin dielectric - used for parts requiring significant cooling, such as power switches and LEDs. Usually consists of a single-layer, sometimes double-layer, thin circuit board based on e.g. FR-4, laminated onto aluminum sheet metal, commonly 0.8, 1, 1.5, 2 or 3 mm thick. The thicker laminates sometimes also come with thicker copper metallization.
Flexible substrates - can be a standalone copper-clad foil or can be laminated to a thin stiffener, e.g. 50-130 μm
Kapton or UPILEX, a polyimide foil. Used for flexible printed circuits, in this form common in small form-factor consumer electronics or for flexible interconnects. Resistant to high temperatures.
Pyralux, a polyimide-fluoropolymer composite foil. Copper layer can delaminate during soldering.
Less-often encountered materials:
FR-1, like FR-2, typically specified to 105 °C, some grades rated to 130 °C. Room-temperature punchable. Similar to cardboard. Poor moisture resistance. Low arc resistance.
FR-3, cotton paper impregnated with epoxy. Typically rated to 105 °C.
FR-5, woven fiberglass and epoxy, high strength at higher temperatures, typically specified to 170 °C.
FR-6, matte glass and polyester
G-10, woven glass and epoxy - high insulation resistance, low moisture absorption, very high bond strength. Typically rated to 130 °C.
G-11, woven glass and epoxy - high resistance to solvents, high flexural strength retention at high temperatures. Typically rated to 170 °C.
CEM-1, cotton paper and epoxy
CEM-2, cotton paper and epoxy
CEM-3, non-woven glass and epoxy
CEM-4, woven glass and epoxy
CEM-5, woven glass and polyester
PTFE, ("Teflon") - expensive, low dielectric loss, for high frequency applications, very low moisture absorption (0.01%), mechanically soft. Difficult to laminate, rarely used in multilayer applications.
PTFE, ceramic filled - expensive, low dielectric loss, for high frequency applications. Varying ceramics/PTFE ratio allows adjusting dielectric constant and thermal expansion.
RF-35, fiberglass-reinforced ceramics-filled PTFE. Relatively less expensive, good mechanical properties, good high-frequency properties.
Alumina, a ceramic. Hard, brittle, very expensive, very high performance, good thermal conductivity.
Polyimide, a high-temperature polymer. Expensive, high-performance. Higher water absorption (0.4%). Can be used from cryogenic temperatures to over 260 °C.
Copper thickness
Copper thickness of PCBs can be specified directly or as the weight of copper per area (in ounce per square foot) which is easier to measure. One ounce per square foot is 1.344 mils or 34 micrometers thickness (0.001344 inches). Heavy copper is a layer exceeding three ounces of copper per ft2, or approximately 4.2 mils (105 μm) (0.0042 inches) thick. Heavy copper layers are used for high current or to help dissipate heat.
On the common FR-4 substrates, 1 oz copper per ft2 (35 μm) is the most common thickness; 2 oz (70 μm) and 0.5 oz (17.5 μm) thicknesses are often options. Less common are 12 and 105 μm; 9 μm is sometimes available on some substrates. Flexible substrates typically have thinner metallization. Metal-core boards for high power devices commonly use thicker copper; 35 μm is usual but 140 and 400 μm can also be encountered.
In the US, copper foil thickness is specified in units of ounces per square foot (oz/ft2), commonly referred to simply as ounce. Common thicknesses are 1/2 oz/ft2 (150 g/m2), 1 oz/ft2 (300 g/m2), 2 oz/ft2 (600 g/m2), and 3 oz/ft2 (900 g/m2). These work out to thicknesses of 17.05 μm (0.67 thou), 34.1 μm (1.34 thou), 68.2 μm (2.68 thou), and 102.3 μm (4.02 thou), respectively.
1/2 oz/ft2 foil is not widely used as a finished copper weight, but is used for outer layers when plating for through holes will increase the finished copper weight. Some PCB manufacturers refer to 1 oz/ft2 copper foil as having a thickness of 35 μm (it may also be referred to as 35 μ, 35 micron, or 35 mic). Copper weights per side are commonly abbreviated as follows:
1/0 – denotes 1 oz/ft2 copper one side, with no copper on the other side.
1/1 – denotes 1 oz/ft2 copper on both sides.
H/0 or H/H – denotes 0.5 oz/ft2 copper on one or both sides, respectively.
2/0 or 2/2 – denotes 2 oz/ft2 copper on one or both sides, respectively.
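Converting between copper weight and foil thickness, and decoding the side notation above, is simple enough to script. A sketch using the 34.1 μm-per-ounce figure from the text; the function names are invented for illustration.

```python
UM_PER_OZ = 34.1  # thickness of 1 oz/ft^2 copper foil in micrometres (see text)

def oz_to_um(oz: float) -> float:
    """Copper weight in oz/ft^2 -> foil thickness in micrometres."""
    return oz * UM_PER_OZ

def parse_weight_spec(spec: str) -> tuple[float, float]:
    """Parse a 'top/bottom' copper spec such as '1/0', '2/2' or 'H/H'.
    'H' denotes half-ounce copper; returns (top, bottom) thicknesses in um."""
    def side(tok: str) -> float:
        return oz_to_um(0.5 if tok.upper() == "H" else float(tok))
    top, bottom = spec.split("/")
    return side(top), side(bottom)

print(parse_weight_spec("1/0"))  # (34.1, 0.0)
print(parse_weight_spec("H/H"))  # (17.05, 17.05)
```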
Manufacturing
Printed circuit board manufacturing involves manufacturing bare printed circuit boards and then populating them with electronic components. In large-scale board manufacturing, multiple PCBs are grouped on a single panel for efficient processing. After assembly, they are separated (depaneled).
Types
Breakout boards
A minimal PCB for a single component, used for prototyping, is called a breakout board. The purpose of a breakout board is to "break out" the leads of a component onto separate terminals so that manual connections to them can be made easily. Breakout boards are especially useful for surface-mount components or any components with a fine lead pitch.
Advanced PCBs may contain components embedded in the substrate, such as capacitors and integrated circuits, to reduce the amount of space taken up by components on the surface of the PCB while improving electrical characteristics.
Multiwire boards
Multiwire is a patented technique of interconnection which uses machine-routed insulated wires embedded in a non-conducting matrix (often plastic resin). It was used during the 1980s and 1990s. Multiwire is still available through Hitachi.
Since it was quite easy to stack interconnections (wires) inside the embedding matrix, the approach allowed designers to forget completely about the routing of wires (usually a time-consuming operation of PCB design): anywhere the designer needed a connection, the machine would draw a wire in a straight line from one location/pin to another. This led to very short design times (no complex algorithms to use even for high density designs) as well as reduced crosstalk (which is worse when wires run parallel to each other, which almost never happens in Multiwire), though the cost is too high to compete with cheaper PCB technologies when large quantities are needed.
Corrections can be made to a Multiwire board layout more easily than to a PCB layout.
Cordwood construction
Cordwood construction can save significant space and was often used with wire-ended components in applications where space was at a premium (such as fuzes, missile guidance, and telemetry systems) and in high-speed computers, where short traces were important. In cordwood construction, axial-leaded components were mounted between two parallel planes. The name comes from the way axial-lead components (capacitors, resistors, coils, and diodes) are stacked in parallel rows and columns, like a stack of firewood. The components were either soldered together with jumper wire or they were connected to other components by thin nickel ribbon welded at right angles onto the component leads. To avoid shorting together different interconnection layers, thin insulating cards were placed between them. Perforations or holes in the cards allowed component leads to project through to the next interconnection layer. One disadvantage of this system was that special nickel-leaded components had to be used to allow reliable interconnecting welds to be made. Differential thermal expansion of the component could put pressure on the leads of the components and the PCB traces and cause mechanical damage (as was seen in several modules on the Apollo program). Additionally, components located in the interior are difficult to replace. Some versions of cordwood construction used soldered single-sided PCBs as the interconnection method (as pictured), allowing the use of normal-leaded components at the cost of being difficult to remove the boards or replace any component that is not at the edge.
Before the advent of integrated circuits, this method allowed the highest possible component packing density; because of this, it was used by a number of computer vendors including Control Data Corporation.
Uses
Printed circuit boards have been used beyond their typical role in electronics, notably in biomedical engineering, thanks to the versatility of their layers, especially the copper layer. PCB layers have been used to fabricate sensors, such as capacitive pressure sensors and accelerometers; actuators such as microvalves and microheaters; platforms of sensors and actuators for Lab-on-a-chip (LoC) applications, for example to perform polymerase chain reaction (PCR); and fuel cells, to name a few.
Repair
Manufacturers may not support component-level repair of printed circuit boards because of the relatively low cost to replace compared with the time and cost of troubleshooting to a component level. In board-level repair, the technician identifies the board (PCA) on which the fault resides and replaces it. This shift is economically efficient from a manufacturer's point of view but is also materially wasteful, as a circuit board with hundreds of functional components may be discarded and replaced due to the failure of one minor and inexpensive part, such as a resistor or capacitor, and this practice is a significant contributor to the problem of e-waste.
Legislation
In many countries (including all European Single Market participants, the United Kingdom, Turkey, and China), legislation restricts the use of lead, cadmium, and mercury in electrical equipment. PCBs sold in such countries must therefore use lead-free manufacturing processes and lead-free solder, and attached components must themselves be compliant.
Safety Standard UL 796 covers component safety requirements for printed wiring boards for use as components in devices or appliances. Testing analyzes characteristics such as flammability, maximum operating temperature, electrical tracking, heat deflection, and direct support of live electrical parts.
See also
Breadboard
BT-Epoxy - resin used in PCBs
Certified interconnect designer - qualification for PCB designers
Occam process - solder-free circuit board manufacture method
References
Further reading
Electrical engineering
Electronics substrates
Electronics manufacturing
Electronic engineering
Printed circuit board manufacturing | Printed circuit board | Technology,Engineering | 7,253 |
64,105,317 | https://en.wikipedia.org/wiki/Flortaucipir%20%2818F%29 | Flortaucipir (18F), sold under the brand name Tauvid, is a radioactive diagnostic agent indicated for use with positron emission tomography (PET) imaging to image the brain.
The most common adverse reactions include headache, injection site pain and increased blood pressure.
Two proteins – tau and amyloid – are recognized as hallmarks of Alzheimer's disease. In people with Alzheimer's disease, pathological forms of tau proteins develop inside neurons in the brain, creating neurofibrillary tangles. After flortaucipir (18F) is administered intravenously, it binds to sites in the brain associated with this tau protein misfolding. The brain can then be imaged with a PET scan to help identify the presence of tau pathology.
It is the first drug used to help image a distinctive characteristic of Alzheimer's disease in the brain called tau pathology. The US Food and Drug Administration (FDA) considers it to be a first-in-class medication.
Medical uses
Flortaucipir (18F) is a radioactive diagnostic agent for adults with cognitive impairment who are being evaluated for Alzheimer's disease. It is indicated for positron emission tomography (PET) imaging of the brain to estimate the density and distribution of aggregated tau neurofibrillary tangles (NFTs), a primary marker of Alzheimer's disease.
Flortaucipir (18F) is not indicated for use in the evaluation of people for chronic traumatic encephalopathy (CTE).
Chemistry
Chemically, flortaucipir F 18 is 7-(6-[F-18]fluoropyridin-3-yl)-5H-pyrido[4,3-b]indole.
History
Flortaucipir (also known as 18F-T807) was discovered by the Siemens biomarker research group, headed by Hartmuth Kolb and Katrin Szardenings, which also conducted the first-in-human trials. Flortaucipir (18F) was approved for medical use in the United States in May 2020.
The safety and effectiveness of flortaucipir (18F) imaging was evaluated in two clinical studies. In each study, five evaluators read and interpreted the flortaucipir (18F) imaging. The evaluators were blinded to clinical information and interpreted the imaging as positive or negative.
The first study enrolled 156 participants who were terminally ill and agreed to undergo flortaucipir (18F) imaging and participate in a post-mortem brain donation program. In 64 of the participants who died within nine months of the flortaucipir (18F) brain scan, evaluators' reading of the flortaucipir (18F) scan was compared to post-mortem readings from independent pathologists who evaluated the density and distribution of neurofibrillary tangles (NFTs) in the same brain. The study showed evaluators reading the flortaucipir (18F) images had a high probability of correctly evaluating participants with tau pathology and had an average-to-high probability of correctly evaluating participants without tau pathology.
The second study included the same participants with terminal illness as the first study, plus 18 additional participants with terminal illness, and 159 participants with cognitive impairment being evaluated for Alzheimer's disease (the indicated patient population). The study gauged how well the evaluators' readings of the flortaucipir (18F) scans agreed with one another. Perfect reader agreement would be 1, while no reader agreement would be 0. In this study, reader agreement was 0.87 across all 241 participants. In a separate subgroup analysis that included the 82 terminally ill participants diagnosed after death and the 159 participants with cognitive impairment, reader agreement was 0.90 for the participants in the indicated population and 0.82 in the terminally ill participants.
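The article does not name the agreement statistic, but a chance-corrected agreement index running from 0 to 1 for a fixed panel of readers is commonly computed as Fleiss' kappa; the sketch below assumes that choice, with a made-up toy table of read results purely for illustration.

```python
def fleiss_kappa(ratings: list[list[int]]) -> float:
    """Fleiss' kappa for a table where ratings[i][j] is the number of raters
    assigning subject i to category j (same rater count for every subject)."""
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    n_categories = len(ratings[0])
    # Proportion of all assignments falling in each category.
    p_j = [sum(row[j] for row in ratings) / (n_subjects * n_raters)
           for j in range(n_categories)]
    # Per-subject observed agreement.
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in ratings]
    p_bar = sum(p_i) / n_subjects        # mean observed agreement
    p_e = sum(p * p for p in p_j)        # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)

# Illustrative only: 4 scans, 5 readers each, categories [positive, negative].
table = [[5, 0], [4, 1], [0, 5], [5, 0]]
print(round(fleiss_kappa(table), 2))  # 0.76 for this toy table
```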
The US Food and Drug Administration (FDA) approved flortaucipir (18F) based on evidence of 1921 participants from 19 trials conducted at 322 sites in the United States, Australia, Belgium, Canada, France, Japan, Netherlands and Poland.
The ability of flortaucipir (18F) to detect tau pathology was assessed in participants with generally severe stages of dementia and may be lower in participants in earlier stages of cognitive decline than in the participants with terminal illness who were studied.
Society and culture
Legal status
The US Food and Drug Administration (FDA) granted the application for flortaucipir (18F) priority review and granted approval of Tauvid to Avid Radiopharmaceuticals, Inc.
In June 2024, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Tauvid, intended for the diagnosis of Alzheimer's disease. The applicant for this medicinal product is Eli Lilly Nederland B.V. Flortaucipir (18F) was approved for medical use in the European Union in August 2024.
Names
Flortaucipir (18F) is the international nonproprietary name (INN).
References
Further reading
External links
Alzheimer's disease
Medicinal radiochemistry
PET radiotracers
Radiopharmaceuticals | Flortaucipir (18F) | Chemistry | 1,135 |
13,168,288 | https://en.wikipedia.org/wiki/Jackup%20rig | A jackup rig or a self-elevating unit is a type of mobile platform that consists of a buoyant hull fitted with a number of movable legs, capable of raising its hull over the surface of the sea. The buoyant hull enables transportation of the unit and all attached machinery to a desired location. Once on location the hull is raised to the required elevation above the sea surface supported by the sea bed. The legs of such units may be designed to penetrate the sea bed, may be fitted with enlarged sections or footings, or may be attached to a bottom mat. Generally jackup rigs are not self-propelled and rely on tugs or heavy lift ships for transportation.
Jackup platforms are almost exclusively used as exploratory oil and gas drilling platforms and as offshore and wind farm service platforms. Jackup rigs can either be triangular in shape with three legs or square in shape with four legs. Jackup platforms have been the most popular and numerous of various mobile types in existence. The total number of jackup drilling rigs in operation numbered about 540 at the end of 2013. The tallest jackup rig built to date is the Noble Lloyd Noble, completed in 2016 with legs 214 metres (702 feet) tall.
Name
Jackup rigs are so named because they are self-elevating, with three, four, six or even eight movable legs that can be extended ("jacked") above or below the hull. Jackups are towed or moved under self-propulsion to the site with the hull lowered to the water level and the legs extended above the hull. The hull is actually a water-tight barge that floats on the water's surface. When the rig reaches the work site, the crew jacks the legs downward through the water and into the sea floor (or onto the sea floor with mat-supported jackups). This anchors the rig and holds the hull well above the waves.
History
An early design was the DeLong platform, designed by Leon B. DeLong. In 1949 he started his own company, DeLong Engineering & Construction Company. In 1950 he constructed the DeLong Rig No. 1 for Magnolia Petroleum, consisting of a barge with six legs. In 1953 DeLong entered into a joint venture with McDermott, which built the DeLong-McDermott No.1 in 1954 for Humble Oil. This was the first mobile offshore drilling platform. This barge had ten legs which had spud cans to prevent them from digging into the seabed too deep. When DeLong-McDermott was taken over by the Southern Natural Gas Company, which formed The Offshore Company, the platform was called Offshore No. 51.
In 1954, Zapata Offshore, owned by George H. W. Bush, ordered the Scorpion. It was designed by R. G. LeTourneau and featured three electro-mechanically-operated lattice type legs. Built on the shores of the Mississippi River by the LeTourneau Company, it was launched in December 1955. The Scorpion was put into operation in May 1956 off Port Aransas, Texas. The second, also designed by LeTourneau, was called Vinegaroon.
Operation
A jackup rig is a barge fitted with long support legs that can be raised or lowered. The jackup is maneuvered (self-propelled or by towing) into location with its legs up and the hull floating on the water. Upon arrival at the work location, the legs are jacked down onto the seafloor. Then "preloading" takes place, where the weight of the barge and additional ballast water are used to drive the legs securely into the sea bottom so they will not penetrate further while operations are carried out. After preloading, the jacking system is used to raise the entire barge above the water to a predetermined height or "air gap", so that wave, tidal and current loading acts only on the relatively slender legs and not on the barge hull.
Modern jacking systems use a rack and pinion gear arrangement where the pinion gears are driven by hydraulic or electric motors and the rack is affixed to the legs.
Jackup rigs can only be placed in relatively shallow waters, generally less than about 120 metres (400 feet) of water. However, a specialized class of jackup rigs known as premium or ultra-premium jackups are known to have operational capability in water depths ranging from 150 to 190 meters (500 to 625 feet).
Types
Mobile offshore Drilling Units (MODU)
This type of rig is commonly used in connection with oil and/or natural gas drilling. There are more jackup rigs in the worldwide offshore rig fleet than any other type of mobile offshore drilling rig. Other types of offshore rigs include semi-submersibles (which float on pontoon-like structures) and drillships, which are ship-shaped vessels with rigs mounted in their center. These rigs drill through holes in the drillship hulls, known as moon pools.
Turbine Installation Vessel (TIV)
This type of rig is commonly used in connection with offshore wind turbine installation.
Barges
Jackup rigs can also refer to specialized barges that are similar to an oil and gas platform but are used as a base for servicing other structures such as offshore wind turbines, long bridges, and drilling platforms.
See also
Crane vessel
Offshore geotechnical engineering
Oil platform
Rack phase difference
TIV Resolution
References
Oil platforms
Ship types | Jackup rig | Chemistry,Engineering | 1,091 |
11,775,607 | https://en.wikipedia.org/wiki/Pucciniastrum%20hydrangeae | Pucciniastrum hydrangeae is a plant pathogen infecting hydrangeas.
References
Fungal plant pathogens and diseases
Ornamental plant pathogens and diseases
Pucciniales
Fungi described in 1906
Fungus species | Pucciniastrum hydrangeae | Biology | 44 |
20,954,465 | https://en.wikipedia.org/wiki/Flue-gas%20condensation | Flue gas condensation is a process, where flue gas is cooled below its water dew point and the heat released by the resulting condensation of water is recovered as low temperature heat.
Cooling of the flue gas can be performed either directly with a heat exchanger or indirectly via a condensing scrubber.
The condensation of water releases more than 2 GJ of latent heat per ton of condensed water, which can be recovered in the cooler for e.g. district heating purposes.
Excess condensed water must continuously be removed from the process.
The downstream gas is saturated with water, so even though significant amounts of water may have been removed from the cooled gas, it is likely to leave a visible stack plume of water vapor.
If the fuel contains sulfur, the flue gases will contain oxides of sulfur. If the flue gases are cooled below the acid dew-point the acid vapor (sulfuric acid, H2SO4) will begin to condense. Acid condensation can result in low-temperature corrosion, which can threaten the safety of plant. Appropriate corrosion resistant material selection is important.
The heat recovery potential of flue gas condensation is highest for fuels with a high moisture content (e.g. biomass and municipal waste), and where heat is useful at the lowest possible temperatures. Thus flue gas condensation is normally implemented at biomass-fired boilers and waste incinerators connected to district heating grids with relatively low return temperatures.
Efficiency exceeding 100 %
Flue gas condensation may cause the heat recovered to exceed the Lower Heating Value of the input fuel, and thus an efficiency greater than 100%. Since historically most combustion processes have not condensed the water vapor in the flue gas, usual efficiency calculations assume the combustion products are not condensed. This assumption is implicit when basing calculations on the Lower Heating Value. A more rigorous approach is to base efficiency calculations on the Higher Heating Value, which typically results in efficiencies less than 100%.
Should the flue gases be cooled below the reference temperature of the heating value definition, even efficiencies based on the Higher Heating Value may exceed 100%, since typical heating value definitions assume that all heat is released when combustion products are cooled to a standard reference temperature (commonly between 15 and 25 °C).
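The arithmetic behind these over-100% figures is straightforward to reproduce. A sketch with illustrative numbers only (none are from the article): a fuel with a Lower Heating Value of 10 MJ/kg and a Higher Heating Value of 12 MJ/kg, the difference being the latent heat of the water in the flue gas, and a condensing boiler recovering 10.5 MJ of useful heat per kg of fuel.

```python
def efficiencies(useful_heat_mj: float, lhv_mj: float, hhv_mj: float) -> tuple[float, float]:
    """Boiler efficiency per kg of fuel on both heating value bases, in percent."""
    return 100 * useful_heat_mj / lhv_mj, 100 * useful_heat_mj / hhv_mj

# Illustrative wet-biomass-like figures (MJ per kg fuel), not from the article:
eta_lhv, eta_hhv = efficiencies(useful_heat_mj=10.5, lhv_mj=10.0, hhv_mj=12.0)
print(f"LHV basis: {eta_lhv:.0f} %, HHV basis: {eta_hhv:.1f} %")  # 105 % vs 87.5 %
```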
See also
Condensing boiler
Scrubber
Wet scrubber
Heat exchanger
District heating
Energy
References
External links
On Flue gas Condensation by Götaverken Miljö AB
Scrubbers
Industrial processes
Heat exchangers | Flue-gas condensation | Chemistry,Engineering | 487 |
20,589,553 | https://en.wikipedia.org/wiki/Melt%20%28manufacturing%29 | Melt is the working material in the steelmaking process, in making glass, and when forming thermoplastics. In thermoplastics, melt is the plastic in its forming temperature, which can vary depending on how it is being used. For steelmaking, it refers to steel in liquid form.
See also
Wax melter
Crucible
References
Notes
Bibliography
Plastics industry
Steelmaking | Melt (manufacturing) | Chemistry | 81 |
75,871,822 | https://en.wikipedia.org/wiki/NGC%201513 | NGC 1513 is an open cluster of stars in the northern constellation of Perseus, positioned 2° SSE of the faint star Lambda Persei. The same telescope field contains the clusters NGC 1528 and NGC 1545. NGC 1513 was discovered in 1790 by the German-British astronomer William Herschel. The brightest component star is of magnitude 11, so a medium-sized amateur telescope is needed to observe 20-30 members. With a aperture telescope, most of the member stars can be resolved. This cluster is located at a distance of from the Sun, but is drawing closer with a radial velocity of −14.7 km/s.
This cluster has a rating of II2m in the Trumpler Catalogue, indicating it is moderately rich in stars with little central concentration. It is partially obscured by dust from the Perseus dark cloud complex. NGC 1513 is 363 million years old, and at least 433 stars in the field are members with a minimum 50% probability. The cluster has a subsolar metallicity, indicating a lower abundance of elements more massive than helium compared to the Sun.
References
Further reading
Open clusters
Perseus (constellation)
1513
Astronomical objects discovered in 1790
Discoveries by William Herschel | NGC 1513 | Astronomy | 265 |
44,079,804 | https://en.wikipedia.org/wiki/Internet%20Digital%20DIOS | The LG Internet Digital DIOS (also known as R-S73CT) is an internet refrigerator released by LG Electronics in June 2000. The technology is the result of a project that started in 1997 and staffed by a team of 55 researchers with a budget cost of 15 billion won (US$49.2 million).
Features
The refrigerator has a TFT-LCD (thin-film transistor liquid crystal display) screen with TV functionality and a Local Area Network (LAN) port. It includes an LCD information window that features electronic pen, data memo, video messaging and schedule management functions and provides information such as inside temperature, the freshness of stored foods, nutrition information and recipes. Other features are a webcam that is used as a scanner and tracks what is inside the refrigerator, an MP3 player and a three-level automatic icemaker. In addition, the electricity consumption is half the level of conventional refrigerators and the noise level is only 23 decibels.
References
Home appliances
Internet of things
Food storage
Cooling technology
Food preservation
LG Electronics
Refrigerators | Internet Digital DIOS | Physics,Technology | 219 |
4,573,623 | https://en.wikipedia.org/wiki/Proximity%20marketing | Proximity marketing is the localized wireless distribution of advertising content associated with a particular place. Transmissions can be received by individuals in that location who wish to receive them and have the necessary equipment to do so.
Distribution may be via a traditional localized broadcast, or more commonly is specifically targeted to devices known to be in a particular area.
The location of a device may be determined by:
A cellular phone being in a particular cell
A Bluetooth- or Wi-Fi-enabled device being within range of a transmitter
An Internet enabled device with GPS enabling it to request localized content from Internet servers
An NFC-enabled phone can read an RFID chip on a product or media and launch localized content from internet servers
Communications may be further targeted to specific groups within a given location, for example content in tourist hot spots may only be distributed to devices registered outside the local area.
Communications may be both time and place specific, e.g. content at a conference venue may depend on the event in progress.
Uses of proximity marketing include distribution of media at concerts, information (weblinks on local facilities), gaming and social applications, and advertising.
Bluetooth-based systems
Bluetooth, a short-range wireless system supported by many mobile devices, is one transmission medium used for proximity marketing.
The process of Bluetooth-based proximity marketing involves setting up Bluetooth "broadcasting" equipment at a particular location and then sending information which can be text, images, audio or video to Bluetooth enabled devices within range of the broadcast server. These devices are often referred to as beacons. Other standard data exchange formats such as vCard can also be used. This form of proximity marketing is also referred to as close range marketing.
It used to be the case that, due to security fears or a desire to save battery life, many users kept their Bluetooth devices in OFF mode, or ON but not set to be 'discoverable'. Because of this, in regions where Bluetooth proximity marketing is in operation, it is often accompanied by advertising via traditional media - such as posters, television screens or field marketing teams - suggesting people make their Bluetooth handsets 'discoverable' in order to receive free content; this is often referred to as a "Call-to-Action." A 'discoverable' Bluetooth device within range of the server is automatically sent a message asking if the user would like to receive the free content.
Current mobile phones usually have bluetooth switched ON by default, and some users leave bluetooth switched on for easy connection with car kits and headsets.
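Beacon-style Bluetooth systems often go one step further and estimate range from received signal strength. A common rough approach is the log-distance path-loss model; the sketch below assumes that model and a calibrated 1 m reference power, both generic assumptions rather than the behaviour of any particular vendor's beacon.

```python
def estimate_distance(rssi_dbm: float, tx_power_dbm: float = -59.0,
                      path_loss_exponent: float = 2.0) -> float:
    """Rough range estimate (metres) from the log-distance path-loss model:
    RSSI = tx_power - 10 * n * log10(d), where tx_power is the calibrated
    RSSI at 1 m and n is the path-loss exponent (2.0 in free space)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

print(f"{estimate_distance(-59):.1f} m")  # 1.0 m, by definition of tx_power
print(f"{estimate_distance(-75):.1f} m")  # ~6.3 m in free space (n = 2)
```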
Wi-Fi-based systems
There are systems capable of detecting the signals periodically emitted by electronic devices equipped with Wi-Fi or Bluetooth technology, and of subsequently using the gathered information to detect the position, presence, or flows of those devices in a statistical or aggregate form.
This technology is used in a manner equivalent to other systems, such as Radio-frequency Identification (RFID), which serve for locating devices within a controlled environment; it works in conjunction with signals from Wi-Fi issuers (also called wireless tags) and receiving antennas, in different locations, so that the movements and presence of Wi-Fi-equipped devices can be analyzed in terms of arrival time, length of visit per zone, paths of movement, general flows, etc.
The continuously increasing use of smartphones and tablets has fueled a boom in Wi-Fi tracking technology, specially in the retail environment. Such technology can be used by managers of a physical business to ascertain how many devices are present in a given area, and to observe or optimize business marketing and management.
Technically, such technology is based on two main models:
1. Re-use of standard Access Point (AP) technologies with a Captive Portal, already deployed in numerous locations (airports, malls, shops, etc.).
2. Use of antennas for the detection of signals in the 2.4 or 5 GHz frequency bands, positioning the detected devices within strategic areas, in order to obtain a unique identifier for every mobile device detected in such locations; with the corresponding HTML5, iOS and Android SDKs integrated into an app or website, this allows proximity-based interaction with users through their mobile devices.
The first option manifests a weaker ability to detect and send messages to the public, because AP devices were created for purposes other than wireless tracking and operate by extracting information only from select devices (smartphones or tablets which have previously connected to the AP in question). In practice, and depending on the environment, only as many as 10-20% of visitors access the captive portal when they visit a point of sale.
The second option is to analyze all signals detected within the bands used by the Wi-Fi and Bluetooth technology, offering a higher detection ratio of total visitors (about 60%-70%) and extracting behavior patterns that allow the assignment of a unique identifier, each time a device is detected. Such identifiers are not linked to any data present on the device, nor to any information from the device manufacturer, so that relation to any particular user of the device cannot be made. Unlike in the above case, visitor security (in the sense of anonymity) is total.
Assignment of the same unique identifier to tracking information obtained from the antennas, APP and Web APIs remains a challenge, but makes it possible to combine online and offline behaviour information to optimize proximity communication campaigns in a non-intrusive way.
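A minimal sketch of the aggregate-counting idea described in this section: given timestamped, already-anonymized device identifiers captured per zone (the sniffing and hashing layer is assumed to exist upstream and is not shown), count unique visitors and mean dwell time. All field names and data are invented for illustration.

```python
from collections import defaultdict

# Each observation: (zone, device_hash, unix_timestamp). The capture layer
# that produces anonymized device hashes is assumed to exist upstream.
observations = [
    ("entrance", "a1f3", 1000), ("entrance", "b2c4", 1010),
    ("aisle-3", "a1f3", 1200), ("aisle-3", "a1f3", 1500),
]

def zone_stats(obs):
    """Unique visitor count and mean dwell time (s) per zone, in aggregate form."""
    sightings = defaultdict(lambda: defaultdict(list))
    for zone, dev, ts in obs:
        sightings[zone][dev].append(ts)
    stats = {}
    for zone, devices in sightings.items():
        dwells = [max(ts) - min(ts) for ts in devices.values()]
        stats[zone] = {"visitors": len(devices),
                       "mean_dwell_s": sum(dwells) / len(dwells)}
    return stats

print(zone_stats(observations))
# {'entrance': {'visitors': 2, 'mean_dwell_s': 0.0},
#  'aisle-3': {'visitors': 1, 'mean_dwell_s': 300.0}}
```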
NFC-based systems
Near Field Communication (NFC) tags are embedded in the NFC Smart Poster, Smart Product or Smart Book. The tag has a RFID chip with an embedded command. The command can be to open the mobile browser on a given page or offer. Any NFC-enabled phone can activate this tag by placing the device in close proximity. The information can be anything from product details, special accommodation deals, and information on local restaurants.
The German drugstore chain, Budnikowsky, launched the first NFC-enabled Smart Poster in October 2011, which allowed train commuters to tap their phones on the poster to shop and find more information. In November 2011, Atria Books/Simon & Schuster launched the Impulse Economy, the first NFC-enabled Smart Book.
In the UK NFC is being adopted by most of the outdoor poster contractors. Clear Channel have installed over 25,000 Adshel posters with NFC tags (and QR codes for Apple phones).
Retailers are also looking at NFC as it offers a cost-effective method by which consumers can engage with brands but does not require integrating the technology into their IT systems - which is a barrier to many new technologies like BLE. A number of retailers have already started using NFC to enhance the shopping experience, such as Casino in France and Vic in Holland.
Proximity Marketing Strategy using NFC Technology has been widely adopted in Japan and uses 'pull' rather than 'push' marketing allowing the consumer the choice of where and when they receive marketing messages.
There are a number of NFC-enabled phones entering the market, spurred by NFC mobile wallet trials globally. NFC wallets include Google Wallet and ISIS (mobile payment system). While mobile payment is the driver for NFC, proximity marketing is an immediate beneficiary in-market.
Apple did not include this technology in their initial smartphone models. Apple added NFC to the iPhone 6 and iPhone 6 Plus.
GSM-based systems
Proximity marketing via SMS relies on GSM 03.41, which defines the Short Message Service - Cell Broadcast. SMS-CB allows messages (such as advertising or public information) to be broadcast to all mobile users in a specified geographical area. In the Philippines, GSM-based proximity broadcast systems are used by select government agencies for information dissemination on government-run community-based programs, taking advantage of SMS's reach and popularity (the Philippines has the world's highest traffic of SMS). It is also used for a commercial service known as Proxima SMS. Bluewater, a super-regional shopping centre in the UK, has a GSM-based system supplied by NTL to help its GSM coverage for calls; it also allows each customer with a mobile phone to be tracked through the centre, recording which shops they go into and for how long. The system enables special offer texts to be sent to the phone.
See also
Mobile marketing
Narrowcasting
Geotargeting
Sideloading
Low-power broadcasting
Locative media
Location-based service
Hypertag
Spamming
References
Mobile technology
Types of marketing
Technology in society
Neologisms
Bluetooth
Radio-frequency identification
Near-field communication | Proximity marketing | Technology,Engineering | 1,730 |
228,431 | https://en.wikipedia.org/wiki/Johann%20Wolfgang%20D%C3%B6bereiner | Johann Wolfgang Döbereiner (13 December 1780 – 24 March 1849) was a German chemist who is known best for work that was suggestive of the periodic law for the chemical elements, and for inventing the first lighter, which was known as the Döbereiner's lamp. He became a professor of chemistry and pharmacy for the University of Jena.
Life and work
As a coachman's son, Döbereiner had little opportunity for formal schooling. Thus, he was apprenticed to an apothecary, and began to read widely and to attend science lectures. He eventually became a professor for the University of Jena in 1810 and also studied chemistry at Strasbourg. In work published during 1829, Döbereiner reported trends in certain properties of selected groups of elements. For example, the average of the atomic masses of lithium and potassium was close to the atomic mass of sodium. A similar pattern was found with calcium, strontium, and barium; with sulfur, selenium, tellurium; and with chlorine, bromine, and iodine. Moreover, the densities for some of these triads had a similar pattern. These sets of elements became known as "Döbereiner's triads".
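The triad pattern is easy to verify with modern atomic masses. A quick check in Python, using current standard atomic weights rather than the cruder values available to Döbereiner:

```python
# Modern standard atomic weights in unified atomic mass units (u).
triads = {
    "Li/Na/K":  (6.94, 22.99, 39.10),
    "Ca/Sr/Ba": (40.08, 87.62, 137.33),
    "S/Se/Te":  (32.06, 78.97, 127.60),
    "Cl/Br/I":  (35.45, 79.90, 126.90),
}

for name, (light, middle, heavy) in triads.items():
    mean = (light + heavy) / 2  # average of the lightest and heaviest member
    print(f"{name}: mean of outer pair = {mean:.2f}, middle element = {middle}")
# e.g. Li/Na/K: 23.02 vs 22.99 for sodium -- the pattern Döbereiner noticed
```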
Döbereiner is also known for his discovery of furfural, for his work concerning the use of platinum as a catalyst, and for the invention of a lighter, known as Döbereiner's lamp. By 1828 hundreds of thousands of these lighters had been mass produced by the German manufacturer Gottfried Piegler in Schleiz.
The German writer Goethe was a friend of Döbereiner, attended his lectures weekly, and used his theories of chemical affinities as a basis for his famous 1809 novella Elective Affinities.
Works
Deutsches Apothekerbuch, Vol. 1–3. Balz, Stuttgart, 1842–1848. Digital edition by the University and State Library Düsseldorf.
References
Further reading
McGrath, Kimberley A.; Travers, Bridget (1999). World of Scientific Discovery. Gale Research.
Scerri, Eric (2020). The Periodic Table: Its Story and Its Significance, 2nd edition, Oxford University Press, New York.
1780 births
1849 deaths
People from Hof, Bavaria
19th-century German chemists
18th-century German chemists
People involved with the periodic table
19th-century German inventors | Johann Wolfgang Döbereiner | Chemistry | 483 |
65,402,298 | https://en.wikipedia.org/wiki/George%20A%20Danos | George A Danos (Greek: Γιώργος Α Δανός) is a Cypriot space scientist, space diplomat, engineer, astronomer, entrepreneur and science communicator. He is a graduate and eminent alumnus of Imperial College London. He is the President of the Cyprus Space Exploration Organisation (CSEO) and the President of the Parallel Parliament for Entrepreneurship of the Republic of Cyprus.
He is an Honorary Member of the International Astronomical Union (IAU), in recognition of his significant contributions to the progress of astronomy, and Vice-chair of the international Committee on Space Research Panel on Innovative Solutions.
He led several societies, groups, companies and organisations in the UK, Cyprus, Ireland, and in Europe and internationally.
Early career
Whilst a student, he was President of the Imperial College Students for the Exploration and Development of Space (IC-SEDS) and Board Member of UK-SEDS.
Whilst aged 27, he became Founder and Chief Technical Officer of Virgin Biznet, one of the most lucrative business ventures of Sir Richard Branson's Virgin Group, after pitching at the “House” of Richard Branson.
In 1996 he was part of the team that brought Virgin Radio's broadcast to the internet, making it the first European radio station to simulcast its live program 24 hours a day on the internet.
Space exploration career
Space exploration in Cyprus
In 2013 he was elected President of the Cyprus Space Exploration Organisation (CSEO), a position he continues to hold.
As President of CSEO, he led the national campaign that saw Cyprus join the European Space Agency, as a PECS Member.
He mentored and nurtured the local space community of Cyprus that saw notable achievements and multiple awards won by many teams of CSEO, and brokered many international agreements with international synergies and space research projects.
Involvement in international space exploration
He is Council Member – the highest governing body – of the international Committee on Space Research (COSPAR). In October 2020, he was appointed as Vice-chair of COSPAR's Panel on Innovative Solutions (PoIS) (see below for details in this role).
He is representing the Republic of Cyprus to the Global Experts Group on Sustainable Lunar Activities (GEGSLA).
He is the official representative of Cyprus to COSPAR and as President of CSEO – the National Member to the International Astronomical Union (IAU) – he is also the official representative of Cyprus to the IAU. He is also the official representative of CSEO to the International Astronautical Federation (IAF).
He serves as Chair of the Analogue Working Group of the Moon Village Association (MVA) and as Middle East & Africa Regional Coordinator for the MVA.
During the 70th International Astronautical Congress held in Washington, D.C., as a panelist of the "Martian and Lunar Analogues" Global Networking Forum, he announced the International Moon Analogue Consortium.
EASTRO - Contribution to European Space Policy and Institutional Activities
He is Executive Board Member for Communication of the European Association of Space Technology Research Organisations (EASTRO), which represents Research and Technology Organisations (RTOs) with Space activities in Europe towards the ESA, the European Commission and other major Institutional and Industrial stakeholders in Europe and the World. Its member institutes have more than 65,000 employees, with a combined turnover of more than 7,000 M€, and contribute to all areas of space in Europe, from advanced structures, satellite sub-systems and instruments to quantum technologies and the utilisation of remote sensing data for climate change analysis with the objective to create innovations with impact on societal goals with economic impact.
COSPAR - Artificial Intelligence and Space Weather prediction
As vice-chair of the COSPAR Panel on Innovative Solutions (PoIS), he managed the creation of the Space Innovation Lab of COSPAR, which bridges the science of space weather with the engineering tools of artificial intelligence, analyzing space weather data with the goal of predicting dangerous storms heading towards our planet and raising a warning alarm if needed.
During the 44th COSPAR General Assembly in July 2022, as Main Scientific Organizer (MSO) of the PoIS.2 panel session, he led the effort to bridge global industry and the scientific community towards the above goals.
Science communicator
He is a science communicator and advocate of solar system and space exploration.
He gave many presentations worldwide, including a TEDx talk, and he is the presenter of the "2030: SpaceWorks" global webinars, with a viewership of over 80,000 people worldwide.
Parallel Parliament of the Republic of Cyprus
He served for one year (Nov 2019 - Oct 2020) as Dean and M.P. of the Parallel Parliament for Research, Innovation and Digital Governance of the Republic of Cyprus.
In October 2020, he was elected President of the Parallel Parliament for Entrepreneurship of the Republic of Cyprus.
Recognitions and awards
Recognition: Academician of the International Academy of Astronautics (IAA)
In recognition of his contributions and achievements in promoting astronautics and space exploration he was elected Corresponding Member and Academician of the International Academy of Astronautics.
Recognition: Honorary Member of the International Astronomical Union (IAU)
The International Astronomical Union (IAU) selected him as an Honorary Member of the IAU, in recognition of his significant contributions to the progress of astronomy, including leading campaigns that saw Cyprus join ESA, the IAU and COSPAR, as well as establishing the Cyprus Space Centre and helping Cyprus be selected as an International Astronomy Education Centre of the IAU Office for Astronomy Education (OAE).
Award: Ambassador of Hellenic Culture of the Hellenic Foundation for Culture (HFC)
The Hellenic Foundation for Culture of the Hellenic Republic named him an Ambassador of Hellenic Culture, in recognition of his international contribution to the advancement and promotion of space sciences, for his international successes, and for representing the Greek Spirit, Language and Culture.
Notable positions
Mar 2023 - Ambassador of Hellenic Culture, Hellenic Foundation for Culture, Hellenic Republic.
Dec 2022 - Executive Board Member for Communication, European Association of Space Technology Research Organisations.
Aug 2021 - Honorary Member of the International Astronomical Union (IAU).
Feb 2021 - Representative of the Republic of Cyprus to the Global Experts Group on Sustainable Lunar Activities (GEGSLA).
Oct 2020 - President of the Parallel Parliament for Entrepreneurship of the Republic of Cyprus.
Oct 2020 - Vice-chair of the Panel on Innovative Solutions (PoIS) of the international Committee on Space Research (COSPAR).
Nov 2019 - Dean of the Parallel Parliament for Research, Innovation and Digital Governance of the Republic of Cyprus.
Sep 2019 - Middle East & Africa Regional Coordinator of the Moon Village Association (MVA).
Jun 2018 - Chair of the Analogue Working Group of the Moon Village Association (MVA).
Mar 2018 - Corresponding Member and Academician of the International Academy of Astronautics (IAA).
Dec 2016 - Council Member of the international Committee on Space Research (COSPAR).
Oct 2013 - President of the Cyprus Space Exploration Organisation (CSEO).
May 1999 - Founder and Chief Technical Officer of Virgin Biznet.
Oct 1992 - Board Member of the United Kingdom Students for the Exploration and Development of Space (UK-SEDS).
Oct 1991 - President of the Imperial College Students for the Exploration and Development of Space (IC-SEDS).
References
Living people
Year of birth missing (living people)
Cypriot scientists
21st-century astronomers
Amateur astronomers
Space scientists | George A Danos | Astronomy | 1,500 |
25,056,802 | https://en.wikipedia.org/wiki/Amalka%20Supercomputing%20facility | The Amalka Supercomputing facility is the largest of the three Czech parallel supercomputers. It is used by Department of Space Physics,
Institute of Atmospheric Physics, Academy of Sciences of the Czech Republic.
The primary task is computation and visualisation in the area of space research for the European Space Agency or NASA, such as the preparation of the Demeter (satellite) launch.
Amalka Supercomputing facility is credited with computing the first kinetic magnetic field model of Mercury in the MESSENGER project. It also helped to understand the results from the Cluster II mission.
At present, the facility is supporting the THEMIS (Time History of Events and Macroscale Interactions during Substorms) project. The results will be useful in planning permanent human bases on the Moon that will be protected from the solar wind.
The current version runs Slackware Linux and delivers 6.38 TFlops. Expansion and optimization of the infrastructure is being implemented by Sprinx Systems.
References
External links
I am Amálka, the most powerful supercomputer in the Czech Republic (Czech)
Supercomputers
Science and technology in the Czech Republic
Information technology in the Czech Republic | Amalka Supercomputing facility | Technology | 235 |
31,677,721 | https://en.wikipedia.org/wiki/Electrodiffusiophoresis | Electrodiffusiophoresis is a motion of particles dispersed in liquid induced by external homogeneous electric field, which makes it similar to electrophoresis.
Overview
In contrast with electrophoresis, which is the motion of particles in a homogeneous electric field alone, electrodiffusiophoresis occurs in areas of the dispersion that experience concentration polarization due to, for instance, electrochemical reactions (see electrochemistry). There are concentration gradients in such areas that strongly affect particle motion. First, they create inhomogeneity in the electric field. Second, they cause diffusiophoresis. These peculiarities of particle motion in areas subjected to concentration polarization justify the introduction of a special term for this electrokinetic effect: electrodiffusiophoresis.
One of the most important differences between electrodiffusiophoresis and electrophoresis is that the former produces a directed particle drift even in an alternating electric field. Electrophoresis, by contrast, causes only oscillation of particles about the same spot. This difference opens opportunities for important applications.
Electrodiffusiophoresis was theoretically predicted in 1980–82.
It was experimentally observed microscopically in 1982.
The first application of this effect was explanation of particles depositing at some distance from the surface of the ion selective membrane.
These earlier experiments and theory were described in a review published in 1990. This review also presents the application of electrodiffusiophoresis for making bactericidal coatings.
This effect has attracted new attention in 2010 with regard to microfluidics.
See also
Interface and colloid science
References
Colloidal chemistry | Electrodiffusiophoresis | Chemistry | 349 |
212,011 | https://en.wikipedia.org/wiki/Timeline%20of%20antibiotics | This is the timeline of modern antimicrobial (anti-infective) therapy.
The years show when a given drug was released onto the pharmaceutical
market. This is not a timeline of the development of the antibiotics themselves.
1911 – Arsphenamine, also Salvarsan
1912 – Neosalvarsan
1935 – Prontosil (an oral precursor to sulfanilamide), the first sulfonamide
1936 – Sulfanilamide
1938 – Sulfapyridine (M&B 693)
1939 – sulfacetamide
1940 – sulfamethizole
1942 – benzylpenicillin, the first penicillin
1942 – gramicidin S, the first peptide antibiotic
1942 – sulfadimidine
1943 – sulfamerazine
1944 – streptomycin, the first aminoglycoside
1947 – sulfadiazine
1948 – chlortetracycline, the first tetracycline
1949 – chloramphenicol, the first amphenicol
1949 – neomycin
1950 – oxytetracycline
1950 – penicillin G procaine
1952 – erythromycin, the first macrolide
1954 – benzathine penicillin
1955 – spiramycin
1955 – tetracycline
1955 – thiamphenicol
1955 – vancomycin, the first glycopeptide
1956 – phenoxymethylpenicillin
1958 – colistin, the first polymyxin
1958 – demeclocycline
1959 – virginiamycin
1960 – methicillin
1960 – metronidazole, the first nitroimidazole
1961 – ampicillin
1961 – spectinomycin
1961 – sulfamethoxazole
1961 – trimethoprim, the first dihydrofolate reductase inhibitor
1962 – oxacillin
1962 – cloxacillin
1962 – fusidic acid
1963 – fusafungine
1963 – lymecycline
1964 – gentamicin
1964 – cefalotin, the first cephalosporin
1966 – doxycycline
1967 – carbenicillin
1967 – rifampicin
1967 – nalidixic acid, the first quinolone
1968 – clindamycin, the second lincosamide
1970 – cefalexin
1971 – cefazolin
1971 – pivampicillin
1971 – tinidazole
1972 – amoxicillin
1972 – cefradine
1972 – minocycline
1972 – pristinamycin
1973 – fosfomycin
1974 – talampicillin
1975 – tobramycin
1975 – bacampicillin
1975 – ticarcillin
1976 – amikacin
1977 – azlocillin
1977 – cefadroxil
1977 – cefamandole
1977 – cefoxitin
1977 – cefuroxime
1977 – mezlocillin
1977 – pivmecillinam
1979 – cefaclor
1980 – cefmetazole
1980 – cefotaxime
1980 – piperacillin
1981 – co-amoxiclav (amoxicillin/clavulanic acid)
1981 – cefoperazone
1981 – cefotiam
1981 – cefsulodin
1981 – latamoxef
1981 – netilmicin
1982 – ceftriaxone
1982 – micronomicin
1983 – cefmenoxime
1983 – ceftazidime
1983 – ceftizoxime
1983 – norfloxacin
1984 – cefonicid
1984 – cefotetan
1984 – temocillin
1985 – cefpiramide
1985 – imipenem/cilastatin, the first carbapenem
1985 – ofloxacin
1986 – mupirocin
1986 – aztreonam
1986 – cefoperazone/sulbactam
1986 – co-ticarclav (ticarcillin/clavulanic acid)
1987 – ampicillin/sulbactam
1987 – cefixime
1987 – roxithromycin
1987 – sultamicillin
1987 – ciprofloxacin, the first 2nd-gen fluoroquinolone
1987 – rifaximin, the first ansamycin
1988 – azithromycin
1988 – flomoxef
1988 – isepamycin
1988 – midecamycin
1988 – rifapentine
1988 – teicoplanin
1989 – cefpodoxime
1989 – enrofloxacin
1989 – lomefloxacin
1989 – moxifloxacin
1990 – arbekacin
1990 – cefodizime
1990 – clarithromycin
1991 – cefdinir
1992 – cefetamet
1992 – cefpirome
1992 – cefprozil
1992 – ceftibuten
1992 – fleroxacin
1992 – loracarbef
1992 – piperacillin/tazobactam
1992 – rufloxacin
1993 – brodimoprim
1993 – dirithromycin
1993 – levofloxacin
1993 – nadifloxacin
1993 – panipenem/betamipron
1993 – sparfloxacin
1994 – cefepime
1996 – meropenem
1999 – quinupristin/dalfopristin
2000 – linezolid, the first oxazolidinone
2001 – telithromycin, the first ketolide
2003 – daptomycin
2005 – tigecycline, the first glycylcycline
2005 – doripenem
2009 – telavancin, the first lipoglycopeptide
2010 – ceftaroline
2011 – fidaxomicin
2012 – bedaquiline
2014 – tedizolid
2014 – dalbavancin
2014 – ceftolozane/tazobactam
2015 – ceftazidime/avibactam
2017 – meropenem/vaborbactam
2019 – imipenem/cilastatin/relebactam
2019 – cefiderocol
See also
Timeline of medicine and medical technology
List of antibiotics, grouped by class
References
Antibiotics
History of pharmacy
Antibiotics, timeline | Timeline of antibiotics | Chemistry,Biology | 1,296 |
25,514,559 | https://en.wikipedia.org/wiki/Nicholas%20Harrison%20%28physicist%29 | Nicholas Harrison FRSC FInstP (born 5 November 1964) is an English theoretical physicist known for his work on developing theory and computational methods for discovering and optimising advanced materials. He is the Professor of Computational Materials Science in the Department of Chemistry at Imperial College London, where he is co-director of the Institute of Molecular Science and Engineering.
Education
Harrison was educated at University College London and the University of Birmingham, graduating with a BSc in Physics in 1986 and a PhD in Theoretical Physics in 1989. He performed the research that led to his PhD within the Theory and Computational Science department at Daresbury Laboratory.
Career
Nicholas Mark Harrison was born in Streetly, Sutton Coldfield, in the United Kingdom. His father was a manager at Lloyds Bank. He took a degree in physics at University College London and the University of Birmingham after which he was appointed as a research scientist at Daresbury Laboratory, spending a year in 1993 as a visiting scientist at Pacific Northwest National Laboratory. In 1994 he was appointed head of the Computational Materials Science Group at Daresbury Laboratory. In 2000 he became the Professor of Computational Materials Science at Imperial College London. He was elected a Fellow of the Institute of Physics in 2004 and a Fellow of the Royal Society of Chemistry in 2008. He is currently a co-director of the Institute for Molecular Science and Engineering at Imperial College London.
Research
Harrison has authored or co-authored a wide range of articles. His research career started with his PhD, which was concerned with developing a quantitative and predictive theory of the electronic states in substitutionally disordered systems.
Harrison has furthered the practical use of quantum theory for predictive calculations in materials discovery and optimisation. He has developed methods for robust and efficient calculations on functional materials in which strong electronic interactions are dominant, and used them to study processes in previously poorly understood materials such as transition metal oxides, oxide interfaces, and functional materials. In doing so he has made significant contributions to the understanding of catalysis and photocatalysis at surfaces, the stability of polar surfaces, spin-dependent transport in low-dimensional systems, high-temperature magnetism in organic and metal-organic materials, and the thermodynamics of energy storage materials.
The techniques he has developed have consistently extended the state of the art and are now used worldwide in both academic and commercial research programmes.
References
External links
Nicholas Harrison at Imperial College London
The Computational Materials Science Group
The Institute for Molecular Science and Engineering
The Thomas Young Centre
The London Centre for Nanotechnology
Living people
Academics of Imperial College London
Alumni of the University of Birmingham
British physicists
Scientists from Birmingham, West Midlands
Fellows of the Institute of Physics
Fellows of the Royal Society of Chemistry
Computational chemists
1964 births | Nicholas Harrison (physicist) | Chemistry | 542 |
31,954,954 | https://en.wikipedia.org/wiki/Four%20Advisors | The Four Advisors is a traditional Chinese asterism found in the Purple Forbidden enclosure. It consists of four stars found in the modern constellations of Ursa Minor and Camelopardalis and represents the four assistants of ancient emperors. During the Qing dynasty, a star from the constellation Draco was added to the asterism.
References
Astronomy in China | Four Advisors | Astronomy | 76 |
18,559,216 | https://en.wikipedia.org/wiki/Russula%20brevipes | Russula brevipes is a species of mushroom commonly known as the short-stemmed russula or the stubby brittlegill. It is widespread in North America, and was reported from Pakistan in 2006. The fruit bodies are white and large, with convex to funnel-shaped caps set atop a thick, short stipe. The gills on the cap underside are closely spaced and sometimes have a faint bluish tint. Spores are roughly spherical, and have a network-like surface dotted with warts.
Fruiting from summer to autumn, the mushrooms often develop under masses of leaves or conifer needles in a mycorrhizal association with trees from several genera, including fir, spruce, Douglas-fir, and hemlock. Forms of the mushroom that develop a bluish band at the top of the stipe are sometimes referred to as variety acrior. Although edible, the mushrooms have a bland or bitter flavor. They become more palatable once parasitized by the ascomycete fungus Hypomyces lactifluorum, a bright orange mold that covers the fruit body and transforms them into lobster mushrooms.
Taxonomy
Russula brevipes was initially described by American mycologist Charles Horton Peck in 1890, from specimens collected in Quogue, New York. It is classified in the subsection Lactaroideae, a grouping of similar Russula species characterized by having whitish to pale yellow fruit bodies, compact and hard flesh, abundant lamellulae (short gills), and the absence of clamp connections. Other related Russula species with a similar range of spore ornamentation heights include Russula delica, R. romagnesiana, and R. pseudodelica.
There has been considerable confusion in the literature over the naming of Russula brevipes. Some early 20th-century American mycologists referred to it as Russula delica, although that fungus was described from Europe by Elias Fries with a description not accurately matching the North American counterparts. Fries's concept of R. delica included: a white fruit body that did not change color; a smooth, shiny cap; and thin, widely spaced gills. To add to the confusion, Rolf Singer and later Robert Kühner and Henri Romagnesi described other species they named Russula delica. Robert Shaffer summarized the taxonomic conundrum in 1964: Russula delica is a species that everybody knows, so to speak, but the evidence indicates that R. delica sensu Fries (1838) is not R. delica sensu Singer (1938), which in turn is not R. delica sensu Kühner and Romagnesi (1953)… It is best to use R. brevipes for the North American collections which most authors but not Kühner and Romagnesi (1953), call R. delica. The name, R. brevipes, is attached to a type collection, has a reasonably explicit original description, and provides a stable point about which a species concept can be formed.
Shaffer defined the Russula brevipes varieties acrior and megaspora in 1964 from Californian specimens. The former is characterized by a greenish-blue band that forms at the top of the stipe, while the latter variety has large spores. The nomenclatural database Index Fungorum does not consider these varieties to have independent taxonomical significance. In a 2012 publication, mycologist Mike Davis and colleagues suggest that western North American Russula brevipes comprise a complex of at least four distinct species. According to MycoBank, the European species Russula chloroides is synonymous with R. brevipes, although Index Fungorum and other sources consider them distinct species.
The specific epithet brevipes is derived from the Latin words brevis "short" and pes "foot", hence "short-footed". Common names used to refer to the mushroom include short-stemmed russula, short-stalked white russula, and stubby brittlegill.
Description
Fully grown, the cap is whitish to dull-yellow and funnel-shaped with a central depression. The gills are narrow and thin, decurrent in attachment, nearly white when young but becoming pale yellow to buff with age, and sometimes forked near the stipe. The stipe is 3–8 cm long and 2.5–4 cm thick. It is initially white but develops yellowish-brownish discolorations with age. The mushroom sometimes develops a pale green band at the top of the stipe. The spore print is white to light cream.
Spores of R. brevipes are egg-shaped to more or less spherical, and measure 7.5–10 by 6.5–8.5 μm; they have a partially reticulate (network-like) surface dotted with warts measuring up to 1 μm high. The cap cuticle is arranged in the form of a cutis (characterized by hyphae that run parallel to the cap surface) comprising interwoven hyphae with rounded tips. There are no cystidia on the cap (pileocystidia).
The variant R. brevipes var. acrior Shaffer has a subtle green shading at the stipe apex and on the gills. R. brevipes var. megaspora has spores measuring 9–14 by 8–12 μm.
Similar species
The subalpine waxy cap (Hygrophorus subalpinus) is somewhat similar in appearance to R. brevipes but lacks its brittle flesh, and it has a sticky, glutinous cap. The Pacific Northwest species Russula cascadensis also resembles R. brevipes, but has an acrid taste and smaller fruit bodies. Another lookalike, R. vesicatoria, has gills that often fork near the stipe attachment. R. angustispora is quite similar to R. brevipes, but has narrower spores measuring 6.5–8.5 by 4.5–5 μm, and it does not have the pale greenish band that sometimes develops in the latter species. The European look-alike R. delica is widely distributed, although rarer in the northern regions of the continent. Similar to R. brevipes in overall morphology, it has somewhat larger spores (9–12 by 7–8.5 μm) with a surface ornamentation featuring prominent warts interconnected by a zebra-like pattern of ridges. The milk-cap mushroom Lactifluus piperatus can be distinguished from R. brevipes by the production of latex when the mushroom tissue is cut or injured.
Distribution and habitat
It is a common ectomycorrhizal fungus associated with several hosts across temperate forest ecosystems. Typical hosts include trees in the genera Abies, Picea, Pseudotsuga, and Tsuga. The fungus has been reported in Pakistan's Himalayan moist temperate forests associated with Pinus wallichiana. Fruit bodies grow singly or in groups; fruiting season occurs from summer to autumn. It appears from July to October in eastern North America, and from October to January in western North America (most abundantly in late autumn). The mushrooms are usually found as "shrumps"—low, partially emerged mounds on the forest floor, and have often been partially consumed by mammals such as rodents or deer.
Studies have demonstrated that geographically separated R. brevipes populations (globally and continentally) develop significant genetic differentiation, suggesting that gene flow between these populations is small. In contrast, there was little genetic differentiation observed between populations sampled from a smaller area. R. brevipes is one of several Russula species that associates with the myco-heterotrophic orchid Limodorum abortivum.
Edibility
Russula brevipes is a non-descript edible species that tends to assume the flavors of meats and sauces it is cooked with. It is one of several Russula species harvested in the wild from Mexico's Iztaccíhuatl–Popocatépetl National Park and sold in local markets in nearby Ozumba. The mushrooms are suitable for pickling due to their crisp texture.
Fruit bodies are commonly parasitized by the ascomycete fungus Hypomyces lactifluorum, transforming them into an edible known as a lobster mushroom. In this form, the surface of the fruit body develops into a hard, thin crust dotted with minute pimples, and the gills are reduced to blunt ridges. The flesh of the mushroom—normally brittle and crumbly—becomes compacted and less breakable. Mycologist David Arora wrote that R. brevipes is "edible, but better kicked than picked".
Bioactive compounds
Sesquiterpene lactones are a diverse group of biologically active compounds that are being investigated for their antiinflammatory and antitumor activities. Some of these compounds have been isolated and chemically characterized from Russula brevipes: russulactarorufin, lactarorufin-A, and 24-ethyl-cholesta-7,22E-diene-3β,5α,6β-triol.
See also
List of Russula species
References
Edible fungi
Fungi described in 1890
Fungi of Pakistan
Fungi of North America
brevipes
Taxa named by Charles Horton Peck
Fungus species | Russula brevipes | Biology | 1,942 |
10,217,961 | https://en.wikipedia.org/wiki/Connecticut%20statistical%20areas | The U.S. state of Connecticut currently has nine statistical areas that have been delineated by the Office of Management and Budget (OMB). On July 21, 2023, the OMB delineated two combined statistical areas, five metropolitan statistical areas, and two micropolitan statistical areas in Connecticut. As of 2023, the largest of these in the state is the New Haven-Hartford-Waterbury, CT CSA, encompassing the entire state outside of the Bridgeport-Stamford-Danbury, CT MSA in the southwest.
Table
See also
Geography of Connecticut
Demographics of Connecticut
Notes
References
External links
Office of Management and Budget
United States Census Bureau
United States statistical areas
Statistical areas of Connecticut | Connecticut statistical areas | Mathematics | 140 |
43,275,595 | https://en.wikipedia.org/wiki/Into%20the%20Dalek | "Into the Dalek" is the second episode of the eighth series of the British science fiction television programme Doctor Who. It was written by Phil Ford and Steven Moffat, and directed by Ben Wheatley, and first broadcast on BBC One on 30 August 2014.
In the episode, the alien time traveller the Doctor (Peter Capaldi) and his companion Clara Oswald (Jenna Coleman) enter the body of a damaged Dalek captured by rebels to determine what is making the usually-hate-filled creature "good".
It was watched by 5.2 million viewers in the UK on its initial transmission, according to unofficial overnight figures, taking a 24.7 per cent share of the entire TV audience and making it the second-highest rated programme of the evening; the final figures gave a total of 7.29 million viewers. The episode received positive reviews, with the characterisation of the Dalek being acclaimed.
Plot
Danny Pink, a former soldier emotionally scarred from his experiences, begins teaching Maths at Coal Hill School in the present. Clara, an English teacher at the school, invites him out for a drink. He agrees. Back at her office, Clara is briefed by the Twelfth Doctor about a damaged Dalek taken aboard the human rebel ship Aristotle in the future that declares its own race must be destroyed. Clara agrees to assist his efforts to help the "good" Dalek, despite the Doctor's contention that Daleks cannot be turned good.
The Doctor, Clara, and three rebel soldiers are miniaturised so they can enter the Dalek—nicknamed "Rusty" by the Doctor—to determine what is making it good. Entering Rusty, they come upon its "cortex vault", which the Doctor describes as Dalek technology designed to suppress any developing compassion within the living mutant inside the shell, as well as store all of its memories. Rusty, speaking to the Doctor, relates the beauty it had witnessed in the galaxy, including the creation of a star. Rusty drew from this that Daleks must be destroyed for wanting to destroy that beauty. The Doctor repairs Rusty's power cell. This unintentionally causes it to revert to its normal thinking pattern.
Rusty contacts the Dalek mothership, which sends other Daleks to destroy the rebel ship. Inside Rusty, Clara convinces the Doctor to reconsider his conviction that Daleks are irreversibly "evil". Inside the cortex vault, Clara awakens Rusty's memory of seeing a star's creation. The Doctor then links his mind to Rusty's consciousness, showing it the beauty of the universe. However, Rusty also assimilates the Doctor's own deep-rooted hatred towards the Daleks. It exterminates its fellow Daleks as they attempt to destroy the rebel ship. After leaving the inside of Rusty, the Doctor is disturbed that Rusty saw only hatred within him; he'd hoped for a "victory" in creating a "good" Dalek. Rusty responds that the Doctor is a good Dalek, while Rusty is not. Rusty sends a retreat signal to the Dalek mothership, causing it to believe the Aristotle has self-destructed.
The Doctor turns down the surviving soldier Journey's offer to travel with him, telling her that he wishes that she was not a soldier. The Doctor returns Clara to her office moments after she left. On leaving, she bumps into Danny, who is glad his being an ex-soldier does not put her off dating him.
Continuity
Scenes from "Dalek" (2005) and "Journey's End" (2008) can be seen in the background as the Doctor 'merges' with Rusty's mind. The Doctor refers to his first encounter with the Daleks on Skaro in The Daleks (1963–64).
Production
Co-writer and executive producer Steven Moffat conceived the idea while discussing possible concepts for a Doctor Who computer game.
The read-through for the episode took place on 17 December 2013, the same day as "Deep Breath". Filming began on 25 January 2014, and took place at the Uskmouth power station, which had previously served as a location for the 2011 Christmas special, "The Doctor, the Widow and the Wardrobe". Filming also took place in St Athan, Newport, and a hangar outside Cardiff. Regular filming concluded on 18 February 2014. The last scene to be filmed was the one featuring Gretchen (Laura dos Santos) and Missy (Michelle Gomez), which was filmed concurrently with the similar scene with Half-Face Man from "Deep Breath" on 23 May 2014. Since Wheatley was unavailable on the date, an uncredited Rachel Talalay directed both scenes; she consulted Wheatley and attempted to incorporate his ideas.
Broadcast and reception
Pre-broadcast leak
As part of the series 8 leaks, both the script and a rough cut of the episode were leaked online from a server in Miami. Despite the fact that the initial online copy of the episode contained a glitch that prevented downloading, a workable version found its way online by the second week of August 2014.
Promotion
Two clips from the episode were featured alongside an interview with Peter Capaldi on BBC News on 7 August 2014. On 25 August, a ten-second clip was released showing the Doctor's reunion with a lone Dalek. The same clips were re-released on 27 August in slightly extended form.
Ratings
"Into the Dalek" was watched by 5.2 million viewers in the UK upon its initial transmission, according to unofficial overnight figures, taking a 24.7 per cent share of the entire TV audience and making it the second-highest rated programme of the evening. The episode was watched by 7.29 million people according to the final viewing figures, making it the 2nd most watched programme over the entire week on BBC1 and the 9th most watched over all channels. In the United States, the première airing on BBC America had an audience of 1.22 million viewers, well below the 2.19 million viewers earned on "Deep Breath". The episode received an Audience Appreciation Index score of 84, considered Excellent.
Critical reception
"Into the Dalek" received largely positive reviews, with the characterization of the Dalek being acclaimed. Simon Brew of Den of Geek wrote that the episode "stakes one hell of a claim... as a series highlight," and that it was "a really good, really entertaining episode," noting the similarities to 2005's "Dalek". Brew praised the new characterisation for Clara and Capaldi's emergence as the Doctor. The Guardian found the episode to be "better than we might have expected." They singled out the character development of Zawe Ashton's character in such a short period of time, and Ben Wheatley for "evoking a genuine sense of claustrophobic menace." Terry Ramsey of The Daily Telegraph gave the episode four stars out of five, praising Capaldi: "It may be hard to believe in a good Dalek, but after Saturday night it is easy to believe this will be a good Doctor." IGN also praised the episode, particularly Ford and Moffat's script, stating that it "evolves along with its characters". They ultimately labelled the episode "an entertaining new take on a classic old foe", awarding it 8.4 out of 10.
Neela Debnath of The Independent was positive regarding the episode, calling it "A classic sci-fi adventure with all the spectacle of a blockbuster," while praising the new dynamic between The Doctor and Clara. Tim Liew of Metro commented, "I rather enjoyed this episode." He was also positive about the character development for Jenna Coleman and noted, "the tight focus on a single enemy makes this the most menacing Dalek episode since 'Dalek'." Morgan Jeffery of Digital Spy felt the episode was an improvement over the series' opening episode and that it "felt like the proper debut of our new lead." He felt the dynamic between Capaldi and Coleman was very similar to that of Christopher Eccleston and Billie Piper, and called the episode "smart, stirring and visually spectacular," awarding the episode four stars out of five.
Forbes gave a negative review of the episode, criticising the Daleks' appearance so soon after Capaldi's entrance, saying, "It feels like the BBC have a lack of confidence in the public accepting Capaldi as The Doctor," and the "flawed" concept of the episode. They were critical of the characterisation of Capaldi's Doctor: "the modern Doctor is not the irascible, self-centred William Hartnell of those early episodes." However, they were positive about the characterisation of Clara and overall said, "Nonetheless, this action-packed second episode made me much more likely to tune in next week than last week’s opener."
References
External links
Twelfth Doctor episodes
2014 British television episodes
Television episodes written by Phil Ford (writer)
Television episodes written by Steven Moffat
Dalek television stories
Fiction about size change
Television episodes set in the 2010s | Into the Dalek | Physics,Mathematics | 1,836 |
11,647,120 | https://en.wikipedia.org/wiki/Key%20risk%20indicator | A key risk indicator (KRI) is a measure used in management to indicate how risky an activity is. Key risk indicators are metrics used by organizations to provide an early signal of increasing risk exposures in various areas of the enterprise. It differs from a key performance indicator (KPI) in that the latter is meant as a measure of how well something is being done while the former is an indicator of the possibility of future adverse impact. KRI give an early warning to identify potential events that may harm continuity of the activity/project.
KRIs are a mainstay of operational risk analysis.
Definitions
According to OECD
A risk indicator is an indicator that estimates the potential for some form of resource degradation using mathematical formulas or models.
Risk management
Security risk management
According to the Risk IT framework by ISACA, key risk indicators are metrics capable of showing that the organization is subject, or has a high probability of being subject, to a risk that exceeds the defined risk appetite.
Organizations differ in size and environment, so every enterprise should choose its own KRIs, taking into account the following steps:
Consider the different stakeholders of the organization
Make a balanced selection of risk indicators, covering performance indicators, lead indicators and trends
Ensure that the selected indicators drill down to the root cause of the events
Choose indicators with high relevance and a high probability of predicting important risks:
High business impact
Easy to measure
With high correlation with the risk
Sensitivity
Determine thresholds and triggers for the set of KRIs
Locate and fold in data sources that contribute or feed data into KRI triggers
Determine notification methods, recipients, and action or response sequences
Constant measurement of KRIs can bring the following benefits to the organization:
Provide an early warning: a proactive action can take place
Provide a backward-looking view on risk events, so lessons can be learned from the past
Provide an indication that the risk appetite and tolerance are reached
Provide real time actionable intelligence to decision makers and risk managers
Advances in hosted cloud data storage, data federation, and data aggregation have enabled data supply chains for real-time calculation of key risk indicators across heretofore unlinked or disconnected data sources. Risk-level dashboards can be supplemented with real-time push notifications of risk. Systems, methods, and tools that trigger notifications when key risk indicator targets are reached have continued to evolve. Calculating and enabling notifications of key risk indicators used to be a unique benefit of enterprise software packages. With the evolution of APIs to calculate trigger values for key risk indicators across various data sources, the potential for risk managers to include data external to an enterprise or external to an enterprise database has changed the risk management landscape.
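The following minimal sketch illustrates one way thresholds, triggers, and notifications for a KRI can be wired together; the indicator name, threshold values, and notification hook are hypothetical placeholders rather than part of any particular framework or product.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class KeyRiskIndicator:
    """A single KRI with an early-warning threshold and a breach (trigger) threshold."""
    name: str
    warning_level: float   # early-warning threshold
    trigger_level: float   # level at which the defined risk appetite is exceeded

def evaluate(kri: KeyRiskIndicator, value: float, notify: Callable[[str], None]) -> str:
    """Compare a measured value against the KRI thresholds and fire a notification."""
    if value >= kri.trigger_level:
        notify(f"{kri.name}: value {value} exceeds risk appetite ({kri.trigger_level})")
        return "BREACH"
    if value >= kri.warning_level:
        notify(f"{kri.name}: value {value} is approaching the trigger ({kri.trigger_level})")
        return "WARNING"
    return "OK"

# Hypothetical usage: monthly system downtime (hours) tracked as a KRI.
downtime = KeyRiskIndicator("monthly_downtime_hours", warning_level=4.0, trigger_level=8.0)
print(evaluate(downtime, 5.5, notify=print))  # fires a warning, then prints "WARNING"
```

In practice the notification hook would post to a risk dashboard or messaging system rather than print to the console.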
Qualities of good key risk indicators
Some qualities of a good key risk indicator include:
Ability to measure the right thing (e.g., supports the decisions that need to be made)
Quantifiable (e.g., damages in dollars of profit loss)
Capability to be measured precisely and accurately
Ability to be validated against ground truth, and confidence level one has in the assertions made within the framework of the metric
Comparability over time and business units
Assessment of risk owners’ performance
See also
Committee of Sponsoring Organizations of the Treadway Commission
Enterprise risk management
ISO 31000
References
Metrics
Operational risk | Key risk indicator | Mathematics | 650 |
14,667,240 | https://en.wikipedia.org/wiki/HD%20142415 | HD 142415 is a single star in the southern constellation of Norma, positioned next to the southern constellation border with Triangulum Australe and less than a degree to the west of NGC 6025. With an apparent visual magnitude of 7.33, it is too faint to be visible to the naked eye. The distance to this star is 116 light years from the Sun based on parallax, but it is drifting closer with a radial velocity of −12 km/s. It is a candidate member of the NGC 1901 open cluster of stars.
This is an ordinary G-type main-sequence star with a stellar classification of G1V. It has been identified as a solar twin by Datson et al. (2012), which means its physical properties are very similar to the Sun. It has 10% more mass than the Sun but only a 3% larger radius. The star is estimated to be 1.6 billion years old and is spinning with a projected rotational velocity of 4.2 km/s. It is radiating 1.16 times the luminosity of the Sun from its photosphere at an effective temperature of 5,869 K.
The star is currently known to have one planet, designated HD 142415 b. This was detected via the radial velocity method and announced in 2004. The orbital period is just over a year, which made a determination of the orbital eccentricity more difficult due to undersampling over part of the orbit, in combination with jitter. The authors chose to pin the eccentricity value to 0.5, although solutions in the range 0.2–0.8 would be equally plausible.
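The shape of a radial-velocity curve depends on the eccentricity and the argument of periastron, which is why sparse sampling over part of an orbit can leave the eccentricity poorly constrained. The following minimal sketch is a standard Keplerian radial-velocity model, not the discovery team's actual fitting code; all parameter values are illustrative placeholders rather than the published orbital solution.

```python
import math

def radial_velocity(t: float, period: float, K: float, e: float,
                    omega: float, t_peri: float) -> float:
    """Keplerian radial velocity at time t (same units as the semi-amplitude K).

    period: orbital period; K: semi-amplitude; e: eccentricity;
    omega: argument of periastron (radians); t_peri: time of periastron passage.
    """
    M = 2.0 * math.pi * (t - t_peri) / period           # mean anomaly
    E = M
    for _ in range(100):                                # fixed-point solution of M = E - e*sin(E)
        E = M + e * math.sin(E)
    nu = 2.0 * math.atan2(math.sqrt(1.0 + e) * math.sin(E / 2.0),
                          math.sqrt(1.0 - e) * math.cos(E / 2.0))  # true anomaly
    return K * (math.cos(nu + omega) + e * math.cos(omega))

# Illustrative placeholders only: a roughly one-year orbit with the eccentricity pinned at 0.5.
for day in (0, 90, 180, 270):
    v = radial_velocity(day, period=380.0, K=50.0, e=0.5, omega=1.0, t_peri=0.0)
    print(f"day {day}: {v:+.1f} m/s")
```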
See also
HD 141937
HD 142022
List of extrasolar planets
References
G-type main-sequence stars
Planetary systems with one confirmed planet
Norma (constellation)
Durchmusterung objects
142415
078169 | HD 142415 | Astronomy | 383 |
11,543,936 | https://en.wikipedia.org/wiki/Huber%27s%20equation | Huber's equation, first derived by the Polish engineer Tytus Maksymilian Huber, is a basic formula in elastic material tension calculations, an equivalent of the equation of state, but applying to solids. In its simplest and most commonly used form it reads:

\sigma_{red} = \sqrt{\sigma^{2} + 3\tau^{2}}

where \sigma is the tensile stress and \tau is the shear stress, measured in newtons per square meter (N/m2, also called pascals, Pa), while \sigma_{red}—called the reduced tension—is the resultant tension of the material.
It finds application in calculating the span width of bridges, their beam cross-sections, and so on.
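A worked illustration of the formula follows; the stress values in this minimal sketch are arbitrary examples, not taken from any particular design case.

```python
import math

def reduced_tension(sigma: float, tau: float) -> float:
    """Huber's reduced tension for combined tensile stress sigma and shear stress tau (both in Pa)."""
    return math.sqrt(sigma**2 + 3.0 * tau**2)

# Example: 120 MPa of tensile stress combined with 40 MPa of shear.
sigma, tau = 120e6, 40e6
print(f"reduced tension = {reduced_tension(sigma, tau) / 1e6:.1f} MPa")  # about 138.6 MPa
```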
See also
Yield surface
Stress–energy tensor
Tensile stress
von Mises yield criterion
References
Physical quantities
Structural analysis | Huber's equation | Physics,Mathematics,Engineering | 148 |
56,960,128 | https://en.wikipedia.org/wiki/Kristina%20H%C3%A5kansson | Kristina Håkansson is an analytical chemist known for her contribution in Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry for biomolecular identification and structural characterization.
Education
Håkansson received a M.Sc. in Molecular Biotechnology in 1996 and a Ph.D., also in Molecular Biotechnology in 2000, from Uppsala University. Following her graduation, she did post-doctoral research with Alan G. Marshall at the National High Magnetic Field Laboratory of Florida State University.
Career and research
Håkansson began her academic career at the University of Michigan in 2003 and became the director of the National High Magnetic Field Laboratory's Ion Cyclotron Resonance facility at Florida State University in 2024. She served as editor for Rapid Communications in Mass Spectrometry from 2021 to 2023.
Her research focuses on mass spectrometry, primarily identification and characterization of protein posttranslational modifications by complementary fragmentation techniques such as electron-capture dissociation (ECD)/negative ion ECD (niECD) and infrared multiphoton dissociation (IRMPD) at low (femtomole) levels.
Awards
2022 Berzelius Gold Medal, Swedish Society for Mass Spectrometry
2018 Agilent Thought Leader Award
2017 Hach Lecturer, University of Wyoming
2016 Biemann Medal, American Society for Mass Spectrometry
2006–2011 National Science Foundation CAREER Award
2005–2007 Eli Lilly Analytical Chemistry Award
2005–2008 Dow Corning Assistant Professorship, University of Michigan
2005 American Society for Mass Spectrometry Research Award
2004 Elisabeth Caroline Crosby Research Award, University of Michigan
2004–2007 Searle Scholar Award
2000–2002 Swedish Foundation for International Cooperation in Research and Higher Education (STINT) postdoctoral fellow
References
External links
Year of birth missing (living people)
Living people
Uppsala University alumni
University of Michigan faculty
Mass spectrometrists
Florida State University faculty
Swedish scientists | Kristina Håkansson | Physics,Chemistry | 387 |
4,004,632 | https://en.wikipedia.org/wiki/Quotient%20category | In mathematics, a quotient category is a category obtained from another category by identifying sets of morphisms. Formally, it is a quotient object in the category of (locally small) categories, analogous to a quotient group or quotient space, but in the categorical setting.
Definition
Let C be a category. A congruence relation R on C is given by: for each pair of objects X, Y in C, an equivalence relation RX,Y on Hom(X,Y), such that the equivalence relations respect composition of morphisms. That is, if

f1, f2 : X → Y

are related in Hom(X, Y) and

g1, g2 : Y → Z

are related in Hom(Y, Z), then g1f1 and g2f2 are related in Hom(X, Z).
Given a congruence relation R on C we can define the quotient category C/R as the category whose objects are those of C and whose morphisms are equivalence classes of morphisms in C. That is,

HomC/R(X, Y) = HomC(X, Y)/RX,Y
Composition of morphisms in C/R is well-defined since R is a congruence relation.
Properties
There is a natural quotient functor from C to C/R which sends each morphism to its equivalence class. This functor is bijective on objects and surjective on Hom-sets (i.e. it is a full functor).
Every functor F : C → D determines a congruence on C by saying f ~ g iff F(f) = F(g). The functor F then factors through the quotient functor C → C/~ in a unique manner. This may be regarded as the "first isomorphism theorem" for categories.
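In symbols, with Q denoting the quotient functor and [f] the equivalence class of a morphism f, this standard factorization can be written as:

```latex
% F factors uniquely through the quotient functor Q (\xrightarrow requires amsmath):
%   \bar{F} is well defined because f \sim g implies F(f) = F(g).
\mathcal{C} \xrightarrow{\;Q\;} \mathcal{C}/{\sim} \xrightarrow{\;\bar{F}\;} \mathcal{D},
\qquad \bar{F}([f]) = F(f)
```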
Examples
Monoids and groups may be regarded as categories with one object. In this case the quotient category coincides with the notion of a quotient monoid or a quotient group.
The homotopy category of topological spaces hTop is a quotient category of Top, the category of topological spaces. The equivalence classes of morphisms are homotopy classes of continuous maps.
Let k be a field and consider the abelian category Mod(k) of all vector spaces over k with k-linear maps as morphisms. To "kill" all finite-dimensional spaces, we can call two linear maps f,g : X → Y congruent iff their difference has finite-dimensional image. In the resulting quotient category, all finite-dimensional vector spaces are isomorphic to 0. [This is actually an example of a quotient of additive categories, see below.]
Related concepts
Quotients of additive categories modulo ideals
If C is an additive category and we require the congruence relation ~ on C to be additive (i.e. if f1, f2, g1 and g2 are morphisms from X to Y with f1 ~ f2 and g1 ~g2, then f1 + g1 ~ f2 + g2), then the quotient category C/~ will also be additive, and the quotient functor C → C/~ will be an additive functor.
The concept of an additive congruence relation is equivalent to the concept of a two-sided ideal of morphisms: for any two objects X and Y we are given an additive subgroup I(X,Y) of HomC(X, Y) such that for all f ∈ I(X,Y), g ∈ HomC(Y, Z) and h∈ HomC(W, X), we have gf ∈ I(X,Z) and fh ∈ I(W,Y). Two morphisms in HomC(X, Y) are congruent iff their difference is in I(X,Y).
Every unital ring may be viewed as an additive category with a single object, and the quotient of additive categories defined above coincides in this case with the notion of a quotient ring modulo a two-sided ideal.
Localization of a category
The localization of a category introduces new morphisms to turn several of the original category's morphisms into isomorphisms. This tends to increase the number of morphisms between objects, rather than decrease it as in the case of quotient categories. But in both constructions it often happens that two objects become isomorphic that weren't isomorphic in the original category.
Serre quotients of abelian categories
The Serre quotient of an abelian category by a Serre subcategory is a new abelian category which is similar to a quotient category but also in many cases has the character of a localization of the category.
References
Category theory
Category | Quotient category | Mathematics | 998 |
21,022,418 | https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28force%29 | The following list shows different orders of magnitude of force.
Since weight under gravity is a force, several of these examples refer to the weight of various objects. Unless otherwise stated, these are weights under average Earth gravity at sea level.
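As a minimal illustration of the relationship underlying these examples, weight in newtons is mass multiplied by standard gravity; the example masses below are arbitrary.

```python
STANDARD_GRAVITY = 9.80665  # average Earth gravity at sea level, in m/s^2

def weight_newtons(mass_kg: float) -> float:
    """Weight (force in newtons) of a given mass under standard Earth gravity."""
    return mass_kg * STANDARD_GRAVITY

# Arbitrary examples: a 100 g object weighs roughly 1 N; a 70 kg person, roughly 690 N.
for mass in (0.1, 70.0):
    print(f"{mass} kg -> {weight_newtons(mass):.2f} N")
```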
Below 1 N
1 N and above
Notes
External links
Force | Orders of magnitude (force) | Mathematics | 59 |
27,752,237 | https://en.wikipedia.org/wiki/Jute%20genome | A consortium of researchers in Bangladesh successfully completed draft genome sequencing for the jute plant.
The consortium consisted of Dhaka University, Bangladesh Jute Research Institute, and software company DataSoft Systems Bangladesh Ltd. It worked in collaboration with Centre for Chemical Biology, University of Science Malaysia, and University of Hawaii at Manoa.
On June 16, 2010, Bangladeshi Prime Minister Sheikh Hasina disclosed in the parliament that Bangladeshi researchers had successfully completed the draft genome sequencing, which was anticipated to contribute to improvements in jute fiber production.
History
It all began in February 2008, when Maqsudul Alam approached Professor Ahmad Shamsul Islam, Coordinator of GNOBB (Global Network of Bangladeshi Biotechnologists), regarding the possibility of sequencing the jute genome. The Bangladeshi science community, which was already looking into the possibility of getting the jute genome sequenced, responded to this offer, which started the process. The whole process began with many long conference calls between Dr. Alam and plant molecular biologists, Professors Haseena Khan and Zeba Islam Seraj of the Department of Biochemistry and Molecular Biology, University of Dhaka. They established connections with the University of Hawaii, USA, and the University of Science Malaysia for technical support, and prepared a project proposal to collect funds from different institutions. At the beginning there were many assurances, but the reality was different. In the primary stage, the Genome Research Center USA and the University of Science Malaysia gave some technical help to collect research data about jute from all over the world. Analyzing the huge amount of data required a supercomputer, and funding for field research was still needed. The "Swapnajatra" team became frustrated by the lack of proper support, and it became difficult to keep the team members engaged. In 2009, The Daily Prothom Alo published an article about the research that changed everything. Agriculture Minister Matia Chowdhury introduced Dr. Maqsudul Alam to Prime Minister Sheikh Hasina, and further support was assured. Thus team "Swapnajatra" regained their confidence and continued their work.
Resources
Genomic DNA (gDNA) from Tossa Jute (Corchorus olitorius O-4) was used for high-throughput Next Generation Sequencing (NGS) platforms, including 454 GS FLX, Illumina/Solexa, and SOLiD. More than 50X coverage (over 100 billion A, C, G, and Ts) of jute genome-sequencing data was used for the draft assembly. Several open-source and commercial genome assembly and annotation pipelines were used to assemble and analyze the raw data. To validate the draft genome, transcriptome analysis was also carried out. For data analysis, different computational resources were used, ranging from a high-performance cluster server to Dell servers to Silicon Graphics SGI Altix-350 and 450 machines.
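Here "50X coverage" means that the total number of sequenced bases is about fifty times the genome length. The minimal sketch below illustrates only the arithmetic; the genome length used is a round placeholder, not the measured jute genome size.

```python
def coverage(total_bases: float, genome_length: float) -> float:
    """Average sequencing depth: total bases sequenced divided by genome length."""
    return total_bases / genome_length

total_bases = 100e9      # "over 100 billion" A, C, G and Ts, as stated above
genome_length = 2.0e9    # hypothetical genome length in base pairs, for illustration only
print(f"{coverage(total_bases, genome_length):.0f}X coverage")  # -> 50X
```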
See also
Genome project
Structural genomics
References
External links
Jute genome Homepage
Genome projects
Jute
Research in Bangladesh | Jute genome | Biology | 597 |
30,373,351 | https://en.wikipedia.org/wiki/Pin-back%20button | A pin-back button or pinback button, pin button, button badge, or simply pin-back or badge, is a button or badge that can be temporarily fastened to the surface of a garment using a safety pin, or a pin formed from wire, a clutch or other mechanism. This fastening mechanism is anchored to the back side of a button-shaped metal disk, either flat or concave, which leaves an area on the front of the button to carry an image or printed message. The word is commonly associated with a campaign button used during a political campaign. The first design for a pin-back button in the United States was patented in 1896, and contemporary buttons have many of the same design features.
History
Buttons have been used around the world to allow people to personally promote/advertise their political affiliations.
In 1787 Josiah Wedgwood of the Wedgwood pottery dynasty ordered the production of the Wedgwood anti-slavery medallion to promote the British anti-slavery movement to the House of Commons. This is believed to be the first use of a slogan on a product and a forerunner of today's political campaign button. The original was a stamp for wax but the image was later reproduced by Wedgwood as a porcelain cameo.
In the United States since the first presidential inauguration in 1789, George Washington's supporters wore buttons imprinted with a slogan. These early buttons were sewn to the lapel of a coat or worn as a pendant on a string. Some of the earliest campaign buttons to feature photographs were produced to promote the political platform of Abraham Lincoln in 1860.
Benjamin S. Whitehead patented the first innovation to the design in 1893 by inserting a sheet of transparent film made of celluloid over a photograph mounted on a badge to protect the image from scratches and abrasion. Whitehead had patents for various designs of ornamental badges and medallions previously, patented as early as 1892. Another patent was issued to Whitehead & Hoag on 21 July 1896 for a "Badge Pin or Button" which used a metal pin anchored to the back of the button to fasten the badge.
My present invention has reference to improvements in badges for use as lapel pins or buttons, or other like uses, and has for its primary object to provide ... a novel means for connecting the ornamental shell or button to the bar or pin for securing the badge to the lapel of the coat.
Other improvements and modifications to the basic design were patented in the following years by other inventors.
Early pin-back buttons from 1898 were printed with a popular cartoon character, The Yellow Kid, and offered as prizes with chewing gum or tobacco products to increase sales.
These buttons were produced with a concave opening on the back side (which provided space to insert advertising), or with a closed back, filled with metal insert and fastener. These are called "open back" and "closed back" buttons.
In 1945, the Kellogg Company, the pioneer in cereal box prizes, inserted prizes in the form of pin-back buttons into each box of Pep Cereal. Pep pins have included U.S. Army squadrons as well as characters from newspaper comics. There were 5 series of comic characters and 18 different buttons in each set, with a total of 90 in the collection.
See also
Badge
Campaign button
Lapel pin
Prizes
Promotional merchandise
Safety pin
References
External links
The Busy Beaver Button Museum
Collection of United States Social Pinback Buttons. Yale Collection of Western Americana, Beinecke Rare Book and Manuscript Library.
Badges
Fashion accessories
Sales promotion
Collecting
Ephemera | Pin-back button | Mathematics | 716 |
33,264,130 | https://en.wikipedia.org/wiki/Heterocyclic%20amine%20formation%20in%20meat | Heterocyclic amines (HCAs) are a group of chemical compounds, many of which can be formed during cooking. They are found in meats that are cooked to the "well done" stage, in pan drippings and in meat surfaces that show a brown or black crust. Epidemiological studies show associations between intakes of heterocyclic amines and cancers of the colon, rectum, breast, prostate, pancreas, lung, stomach, and esophagus, and animal feeding experiments support a causal relationship. The U.S. Department of Health and Human Services Public Health Service labeled several heterocyclic amines as likely carcinogens in its 13th Report on Carcinogens. Changes in cooking techniques reduce the level of heterocyclic amines.
Compounds
More than 20 compounds fall into the category of heterocyclic amines. Table 1 shows the chemical name and abbreviation of those most commonly studied.
All four of these compounds are included in the 13th Report on Carcinogens.
Meat
The compounds found in food are formed when creatine (a non-protein amino acid found in muscle tissue), other amino acids and monosaccharides are heated together at high temperatures (125-300 °C or 275-572 °F) or cooked for long periods. HCAs form at the lower end of this range when the cooking time is long; at the higher end of the range, HCAs are formed within minutes.
Cooked ground beef
A review of 14 studies from northern Europe and the U.S. of HCA content in ground beef cooked under home conditions found a range of values (Table 2). Because a standard U.S. serving of meat is 3 ounces, Table 2 includes a projection of the maximum amount of HCAs that could be found in a ground beef patty.
(n.d.= none detected)
United States
Meat is a major component of American diets. Data from 1960 show the combined annual per capita consumption of beef, pork and chicken at 148 pounds; in 2004, that amount increased to 195 pounds a year. Ground beef made up 42% of the beef market in 2000. Beef consumption, particularly ground and processed beef, is highest in households with incomes at or below 130 percent of the poverty level.
Patterns of beef intake by race/ethnicity show that non-Hispanic whites and Asians consumed the least amount of beef. Non-Hispanic African-Americans had the highest per capita intake of processed beef, ground beef and steaks compared to three other race/ethnicity groups.
More than half of beef purchased in the U.S. comes from retail stores and is prepared at home. Ground beef makes up the highest per capita intakes of beef both at home and away from home.
Ground beef consumption is highest among males age 12-19 who consume on average 50 pounds per year per capita. The 12-19 age group showed the highest consumption of ground beef for females, but the amount (28.5 lbs) is much lower than that of males.
US dietary exposure has been estimated at 1-17 ng/kg bodyweight per day. Table 3 shows the average daily lifetime consumption of HCAs for subgroups of the U.S. population. This analysis was based on the food intake data of 27,215 people participating in the 1994 to 1996 Continuing Survey of Food Intakes by Individuals (CSFII) survey. Approximately 16 percent of HCA exposure came from hamburgers.
African American males had 50-100% higher intakes than white males and African American males consumed three times as many HCAs as white males (Table 4).
Cooking
HCA formation during cooking depends on the type of meat, cooking temperature, the degree of browning and the cooking time. Meats that are lower in fat and water content show higher concentrations of HCAs after cooking. More HCAs are formed when pan surface temperatures are higher than 220 °C (428 °F) such as with most frying or grilling. However, HCAs also form at lower temperatures when the cooking time is long, as in roasting. HCA concentrations are higher in browned or burned crusts that result from high temperature. The pan drippings and meat bits that remain after meat is fried have high concentrations of HCAs. Beef, chicken and fish have higher concentrations than pork. Sausages are high in fat and water and show lower concentrations.
Ground beef patties show lower levels of HCAs if they are flipped every minute until the target temperature is reached. Beef patties cooked while frozen show no difference in HCA levels compared to room-temperature patties.
Cancer
After scientists discovered the carcinogenic components in cigarette smoke, they questioned whether carcinogens could also be found in smoked/burned foods, such as meats. In 1977, heterocyclic amines, a group of cancer-causing compounds, were discovered in food as a result of household cooking processes.
The most potent of the HCAs, MeIQ, is almost 24 times more carcinogenic than aflatoxin, a carcinogen produced by mold.
Most of the 20 HCAs are more toxic than benzopyrene, a carcinogen found in cigarette smoke and coal tar. MeIQ, IQ and 8-MeIQx are the most potent mutagens according to the Ames test. These HCAs are 100 times more potent carcinogens than PhIP, the compound most commonly found as a result of normal cooking.
HCAs contribute to the development of cancer by causing gene mutations that allow new cells to grow in an uncontrolled manner and form a tumor. Epidemiological studies have linked consumption of well-done meats with increased risk of certain cancers, including cancer of the colon or rectum. A review of research articles on meat consumption and colon cancer estimated that red meat consumption contributed to 7 to 9% of colon cancer cases in European men and women.
Animal studies
Long-term rat studies showed that PhIP causes cancer of the colon and mammary gland in rats. Female rats given doses of 0, 12.4, 25, 50, 100 or 200 ppm of PhIP showed a dose-dependent incidence of adenocarcinomas. The offspring of female rats exposed to PhIP while pregnant had a higher prevalence of adenocarcinomas than those whose mothers had not been exposed. This was true even for offspring who were not exposed to PhIP. PhIP was transferred from mothers to offspring in their milk.
Epidemiological studies
The effects of HCAs and well-done cooked meat on humans are less well established. Meat consumption, especially of well-done meat and meat cooked at a high temperature, can be used as an indirect measure of exposure to HCAs. A review of all research studies reported between 1996 and 2007 that examined relationships between HCAs, meat and cancer. Twenty-two studies were found; of these, 18 showed a relationship between either meat intake or HCA exposure and some form of cancer. HCA exposure was measured in 10 of the studies and of those, 70% showed an association with cancer. The authors concluded that high intake of well-done meat and/or high exposure to certain HCAs may be associated with cancer of the colon, breast, prostate, pancreas, lung, stomach and esophagus.
A recent study found that the relative risk for colorectal cancer increased at intakes >41.4 ng/day. Some evidence of increased relative risk occurred with intakes of MeIQx greater than or equal to 19.9 ng/day, but the trend was not as strong as for PhIP.
Recent studies had mixed results, finding no relationship between dietary heterocyclic amines and lung cancer in women who had never smoked, no relationship between HCA intake and prostate cancer risk, but suggesting a positive association between red meat, PhIP and bladder cancer and increased risk of advanced prostate cancer with intakes of meat cooked at high temperatures.
Although not all studies report an association between HCA and/or meat intake and cancers, the U.S. Department of Health and Human Services Public Health Service National Toxicology Program found sufficient evidence to label four HCAs as "reasonably anticipated to be a human carcinogen" in its twelfth Report on Carcinogens, published in 2011. The HCA known as IQ was first listed in the tenth report in 2002. MeIQ, MeIQx and PhIP were added to the list of anticipated carcinogens in 2004. The Report on Carcinogens stated that MeIQ has been associated with rectal and colon cancer, MeIQx with lung cancer, IQ with breast cancer and PhIP with stomach and breast cancer. However, no current federal guidelines focus on the recommended consumption limit of HCA levels in meat.
References
Food safety
Carcinogens
Heterocyclic compounds
Imidazoquinoxalines
Imidazoquinolines | Heterocyclic amine formation in meat | Chemistry,Environmental_science | 1,815 |
223,289 | https://en.wikipedia.org/wiki/Bukkake | Bukkake is a sex act in which one participant is ejaculated on by multiple participants. It is often portrayed in pornographic films.
Bukkake videos are a relatively prevalent niche in contemporary pornographic films. Originating in Japan in the 1980s, the genre subsequently spread to North America and Europe, and crossed over into gay pornography.
Etymology
Bukkake is the noun form of the Japanese verb bukkakeru (ぶっ掛ける, to dash or sprinkle water), and means "to dash", "splash", or "heavy splash". The compound verb can be decomposed into a prefix and a verb: butsu (ぶつ) and kakeru (掛ける). Butsu is a prefix derived from the verb "buchi", which literally means "to hit", but the usage of the prefix is a verb-intensifier.
Kakeru in this context means to shower or pour. The word bukkake is often used in Japanese to describe pouring out a liquid with sufficient momentum to cause splashing or spilling. Indeed, bukkake is used in Japan to describe a type of dish where hot broth is poured over noodles, as in bukkake udon and bukkake soba.
History
Bukkake was first represented in pornographic films in the mid-to-late 1980s in Japan. One source named Jesus Clits Superstar part 1 released in 1987 and directed by Saki Goto as the first Japanese pornographic movie with 10 facials. According to one commentator, a significant factor in the development of bukkake as a pornographic form was the mandatory censorship in Japan where genitals must be pixelated by a "mosaic". One consequence of this is that Japanese pornography tends to focus more on the face and body of actresses rather than on their genitals. Since film producers could not show penetration, they sought other ways to depict sex acts without violating Japanese law, and since semen did not need to be censored, a loophole existed for harder sex scenes.
The first usage of the word bukkake in a pornographic film title was Bukkake Milky Showers 01 by Shuttle Japan, released in 1995. According to Shiruou, an early Shuttle Japan employee, actor, and owner of the bukkake label Milky Cat, the term crossed over into Western usage when the English language site bukkakebath.com appeared in 1998 using stolen content from Shuttle Japan. However, the popularization of the act and the term for it have been credited to the director Kazuhiko Matsumoto in 1998. The Japanese adult video studio Shuttle Japan registered the term "ぶっかけ/BUKKAKE" as a trademark (No. 4545137) in January 2001.
The practice spread from Japan to the United States and Europe in the late 1990s. The first US bukkake-themed film was American Bukkake 1, by JM Productions released in 1999. The appearance of bukkake videos was part of a trend towards "harder" pornography in the 1990s, preceded by a fashion for double penetration videos in the mid-1990s and occurring in parallel to the appearance of gang bang videos towards the end of that decade. There was an economic advantage for Western pornographers to produce bukkake films since they only require one actress, and often amateur male performers whose pay rates are low. However, Western-style bukkake videos differ in some aspects from those in Japan; in Japanese bukkake videos, female performers are frequently dressed as office ladies or in school uniforms. They are being humiliated, whereas women in Western-style bukkake videos are portrayed as enjoying the scene. Another Japanese variant of bukkake is gokkun, in which several men ejaculate into a container for the receiver to drink. Bukkake is less popular than some other porn niches in the West, possibly because the implicit subordination of the woman does not appeal to many consumers, and because cum shots are normally the climax of a scene, rather than the main events.
The genre has also spread to gay pornography, featuring scenes in which several men ejaculate on another man. "Lesbian bukkake" videos are also produced. The 17th World Congress of Sexology in Montreal in July 2005 included a presentation on bukkake.
More recently, cum tributes have been compared to bukkake since both involve multiple men ejaculating towards a single woman.
Male viewers' motivation
American editor and publisher Russ Kick, quoting a sexologist, states that men enjoy a "sense of closure and finality about sex", something that watching other men ejaculate provides. The viewer identifies with the ejaculating men, experiencing a sense of vicarious pleasure.
Reception
A number of authors have described bukkake as premised on humiliation. Forensic psychologist Karen Franklin has described bukkake as symbolic group rape, characterising its primary purpose as the humiliation, degradation and objectification of women. Lisa Jean Moore and Juliana Weissbein view the use of ejaculation in bukkake as part of a humiliation ritual, noting that it generally does not involve any of the female participants experiencing orgasm.
Gail Dines describes the money shot of a man ejaculating on the face or body of a woman, taken to a new extreme in bukkake through the involvement of multiple men, as "one of the most degrading acts in porn".
Tristan Taormino, feminist author and sex educator, has likened bukkake to a "gay circle jerk", noting the inconsistency between its billing as a heterosexual practice and the fact that it features a group of naked men standing in close proximity to each other, masturbating together.
Phillip Vannini, associate professor in the School of Communication and Culture at Royal Roads University, quotes "self-proclaimed net sex commentator" George Kranz, who views recent American interpretations of bukkake as a "significant advance in human behaviour", emphasising the lively, almost party-like atmosphere of American bukkake videos compared to the more subdued Japanese style.
See also
BDSM
Cum shot
Facial
Ekiben (sexual act)
Group sex
Mating press
Pornography in Japan
References
External links
Bukkake
Japanese sex terms
Pornography terminology
Sexual acts
Ejaculation
Sexuality in Japan | Bukkake | Biology | 1,293 |
10,469,862 | https://en.wikipedia.org/wiki/Technological%20somnambulism | Technological somnambulism is a concept used when talking about the philosophy of technology. The term was used by Langdon Winner in his essay Technology as forms of life. Winner puts forth the idea that we are simply in a state of sleepwalking in our mediations with technology. This sleepwalking is caused by a number of factors. One of the primary causes is the way we view technology as tools, something that can be put down and picked up again. Because we view these objects as something we can easily separate ourselves from, we fail to look at the long-term implications of using them. A second factor is the separation of those who make the technology and those who use the technology. This division causes there to be little thought and research going into the effects of using/developing that technology. The third and most important idea is the way in which technology seems to create new worlds in which we live. These worlds are created by the restructuring of the common and seemingly everyday things around us. In most situations the changes take place with little attention or care from us because we are more focused on the menial aspects of the technology (Winner 105–107).
The concept can be found in the earlier work of Marshall McLuhan, cf. Understanding Media, where he refers to a comment made by David Sarnoff expressing a socially deterministic view of "value free" technology whose value is solely defined by its usage as representing, "...the voice of the current somnambulism". Given that this piece by McLuhan has become standard reading in Media Theory it is reasonable to suspect that Winner encountered the concept there or elsewhere and then went on to develop it further.
See also
Compatibilism and incompatibilism
Daniel Chandler's inevitability thesis
Democratic rationalization
Philosophy of technology
Privileged positions of business and science
Science, technology and society
Social construction of technology
Social determinism
Social shaping of technology
Sociocultural evolution
Technological determinism
Technological fix
References
Science and technology studies
Media studies
Technological change | Technological somnambulism | Technology | 418 |
14,772,272 | https://en.wikipedia.org/wiki/UGT2B15 | UDP-glucuronosyltransferase 2B15 is an enzyme that in humans is encoded by the UGT2B15 gene.
The UGTs are of major importance in the conjugation and subsequent elimination of potentially toxic xenobiotics and endogenous compounds. UGT2B8 demonstrates reactivity with estriol. See UGT2B4 (MIM 600067). [supplied by OMIM]
References
Further reading | UGT2B15 | Chemistry | 100 |
2,319,776 | https://en.wikipedia.org/wiki/Lunar%20effect | The lunar effect is a purported correlation between specific stages of the roughly 29.5-day lunar cycle and behavior and physiological changes in living beings on Earth, including humans. A considerable number of studies have examined the effect on humans. By the late 1980s, there were at least 40 published studies on the purported lunar-lunacy connection, and at least 20 published studies on the purported lunar-birthrate connection. Literature reviews and meta-analyses have found no correlation between the lunar cycle and human biology or behavior. In cases such as the approximately monthly cycle of menstruation in humans (but not other mammals), the coincidence in timing reflects no known lunar influence. The widespread and persistent beliefs about the influence of the Moon may depend on illusory correlation – the perception of an association that does not in fact exist.
In a number of marine animals, there is stronger evidence for the effects of lunar cycles. Observed effects relating to reproductive synchrony may depend on external cues relating to the presence or amount of moonlight. Corals contain light-sensitive cryptochromes, proteins that are sensitive to different levels of light. Coral species such as Dipsastraea speciosa tend to synchronize spawning in the evening or night, around the last quarter moon of the lunar cycle. In Dipsastraea speciosa, a period of darkness between sunset and moonrise appears to be a trigger for synchronized spawning. Another marine animal, the bristle worm Platynereis dumerilii, spawns a few days after a full moon. It contains a protein with light-absorbing flavin structures that differentially detect moonlight and sunlight. It is used as a model for studying the biological mechanisms of marine lunar cycles.
Contexts
Claims of a lunar connection have appeared in the following contexts:
Fertility
It is widely believed that the Moon has a relationship with fertility due to the corresponding human menstrual cycle, which averages 28 days. However, no connection between lunar rhythms and menstrual onset has been conclusively shown to exist, and the similarity in length between the two cycles is most likely coincidental.
Birth rate
Multiple studies have found no connection between birth rate and lunar phases. A 1957 analysis of 9,551 births in Danville, Pennsylvania, found no correlation between birth rate and the phase of the Moon. Records of 11,961 live births and 8,142 natural births (not induced by drugs or cesarean section) over a 4-year period (1974–1978) at the UCLA hospital did not correlate in any way with the cycle of lunar phases. Analysis of 3,706 spontaneous births (excluding births resulting from induced labor) in 1994 showed no correlation with lunar phase. The distribution of 167,956 spontaneous vaginal deliveries, at 37 to 40 weeks gestation, in Phoenix, Arizona, between 1995 and 2000, showed no relationship with lunar phase. Analysis of 564,039 births (1997 to 2001) in North Carolina showed no predictable influence of the lunar cycle on deliveries or complications. Analysis of 6,725 deliveries (2000 to 2006) in Hannover revealed no significant correlation of birth rate to lunar phases. A 2001 analysis of 70,000,000 birth records from the National Center for Health Statistics revealed no correlation between birth rate and lunar phase. An extensive review of 21 studies from seven different countries showed that the majority of studies reported no relationship to lunar phase, and that the positive studies were inconsistent with each other. A review of six additional studies from five different countries similarly showed no evidence of relationship between birth rate and lunar phase.
In 2021, an analysis of 38.7 million births in France over 50 years, with a detailed correction for birth variations linked to holidays, and robust statistical methods to avoid false detections linked to multiple tests, found a very small (+0.4%) but statistically significant surplus of births on the full moon day, and to a lesser extent the following day. The probability of this excess being due to chance is very low, of the order of one chance in 100,000 (p-value = 1.5 × 10⁻⁵). The belief that there is a large surplus of births on full moon days is incorrect, and it is completely impossible for an observer to detect the small increase of +0.4% in a maternity hospital, even on a long time scale.
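The scale of data needed to see an effect of this size can be illustrated with a back-of-the-envelope power calculation. The sketch below is illustrative only (a simple binomial model with hypothetical sample sizes, not the study's actual methodology): it shows why a +0.4% surplus is invisible at the scale of a single maternity hospital yet detectable across tens of millions of births.

```python
from math import sqrt

# Illustrative binomial model: each birth independently falls on the
# full-moon day with baseline probability p0 = 1/29.53 (one day of a
# synodic month). We ask how large the expected z-score of a +0.4%
# relative surplus on that day would be for a given number of births.
p0 = 1.0 / 29.53
excess = 0.004  # +0.4% relative surplus reported for the full-moon day

def expected_z(n_births: int) -> float:
    expected = n_births * p0             # expected full-moon-day births
    surplus = expected * excess          # extra births implied by +0.4%
    sd = sqrt(n_births * p0 * (1 - p0))  # binomial standard deviation
    return surplus / sd

for n in (10_000, 1_000_000, 38_700_000):
    print(f"{n:>12,} births -> expected z ~ {expected_z(n):.2f}")
# ~0.07 for a large hospital, ~0.75 for a million births,
# ~4.7 for a 50-year national dataset of the size analyzed.
```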
Blood loss
It is sometimes claimed that surgeons used to refuse to operate during the full Moon because of the increased risk of death of the patient through blood loss. One team, in Barcelona, Spain, reported a weak correlation between lunar phase and hospital admissions due to gastrointestinal bleeding, but only when comparing full Moon days to all non-full Moon days lumped together. This methodology has been criticized, and the statistical significance of the results disappears if one compares day 29 of the lunar cycle (full Moon) to days 9, 12, 13, or 27 of the lunar cycle, which all have an almost equal number of hospital admissions. The Spanish team acknowledged that the wide variation in the number of admissions throughout the lunar cycle limited the interpretation of the results.
In October 2009, British politician David Tredinnick asserted that during a full Moon "[s]urgeons will not operate because blood clotting is not effective and the police have to put more people on the street." A spokesman for the Royal College of Surgeons said they would "laugh their heads off" at the suggestion they could not operate on the full Moon.
Human behavior
Epilepsy
A study into epilepsy found a significant negative correlation between the mean number of epileptic seizures per day and the fraction of the Moon that is illuminated, but the effect resulted from the overall brightness of the night, rather than from the moon phase per se.
Law and order
Senior police officers in Brighton, UK, announced in June 2007 that they were planning to deploy more officers over the summer to counter trouble they believe is linked to the lunar cycle. This followed research by the Sussex Police force that concluded there was a rise in violent crime when the Moon was full. A spokeswoman for the police force said "research carried out by us has shown a correlation between violent incidents and full moons". A police officer responsible for the research told the BBC that "From my experience of 19 years of being a police officer, undoubtedly on full moons we do seem to get people with sort of strange behavior – more fractious, argumentative."
Police in Ohio and Kentucky have blamed temporary rises in crime on the full Moon.
In January 2008, New Zealand's Justice Minister Annette King suggested that a spate of stabbings in the country could have been caused by the lunar cycle.
A reported correlation between Moon phase and the number of homicides in Miami-Dade County was found, through later analysis, not to be supported by the data and to have been the result of inappropriate and misleading statistical procedures.
Motorcycle fatalities
A study of 13,029 motorcyclists killed in nighttime crashes found that there were 5.3% more fatalities on nights with a full moon compared to other nights. The authors speculate that the increase might be due to visual distractions created by the moon, especially when it is near the horizon and appears abruptly between trees, around turns, etc.
Stock market
Several studies have argued that the stock market's average returns are much higher during the half of the month closest to the new moon than the half closest to the full moon. The reasons for this have not been studied, but the authors suggest this may be due to lunar influences on mood. Another study has found contradictory results and questioned these claims.
Meta-analyses
A meta-analysis of thirty-seven studies that examined relationships between the Moon's four phases and human behavior revealed no significant correlation. The authors found that, of twenty-three studies that had claimed to show correlation, nearly half contained at least one statistical error. Similarly, in a review of twenty studies examining correlations between Moon phase and suicides, most of the twenty studies found no correlation, and the ones that did report positive results were inconsistent with each other. A 1978 review of the literature also found that lunar phases and human behavior are not related.
Sleep quality
A 2013 study by Christian Cajochen and collaborators at the University of Basel suggested a correlation between the full Moon and human sleep quality. However, the validity of these results may be limited because of a relatively small (n=33) sample size and inappropriate controls for age and sex. A 2014 study with larger sample sizes (n1=366, n2=29, n3=870) and better experimental controls found no effect of the lunar phase on sleep quality metrics. A 2015 study of 795 children found a three-minute increase in sleep duration near the full moon, but a 2016 study of 5,812 children found a five-minute decrease in sleep duration near the full moon. No other modifications in activity behaviors were reported, and the lead scientist concluded: "Our study provides compelling evidence that the moon does not seem to influence people's behavior." A study published in 2021 by researchers from the University of Washington, Yale University, and the National University of Quilmes showed a correlation between lunar cycles and sleep cycles. During the days preceding a full moon, people went to bed later and slept for shorter periods (in some cases with differences of up to 90 minutes), even in locations with full access to electric light. Finally, a Swedish study including one-night at-home sleep recordings from 492 women and 360 men found that men whose sleep was recorded during nights in the waxing period of the lunar cycle exhibited lower sleep efficiency and increased time awake after sleep onset compared to men whose sleep was measured during nights in the waning period. In contrast, the sleep of women remained largely unaffected by the lunar cycle. These results were robust to adjustment for chronic sleep problems and obstructive sleep apnea severity.
As for how the belief started in the first place, a 1999 study conjectures that the alleged connection of moon to lunacy might be a ‘cultural fossil’ from a time before the advent of outdoor lighting, when the bright light of the full moon might have induced sleep deprivation in people living outside, thereby triggering erratic behaviour in predisposed people with mental conditions such as bipolar disorder.
In animals
Corals contain light-sensitive cryptochromes, proteins that are sensitive to different levels of light.
Spawning of coral Platygyra lamellina occurs at night during the summer on a date determined by the phase of the Moon; in the Red Sea, this is the three- to five-day period around the new Moon in July and the similar period in August. Acropora coral time their simultaneous release of sperm and eggs to just one or two days a year, after sundown with a full moon.
Dipsastraea speciosa tends to synchronize spawning in the evening or night, around the last quarter moon of the lunar cycle.
Another marine animal, the bristle worm Platynereis dumerilii, also spawns a few days after a full moon. It is used as a model for studying cryptochromes and photoreduction in proteins. The L-Cry protein can distinguish between sunlight and moonlight through the differential activity of two protein strands that contain light-absorbing structures called flavins. Another molecule, called r-Opsin, may act as a moonrise sensor. Exactly how different biological signals are transmitted within the worm is not yet known.
Correlation between hormonal changes in the testis and lunar periodicity was found in streamlined spinefoot (a type of fish), which spawns synchronously around the last Moon quarter. In orange-spotted spinefoot, lunar phases affect the levels of melatonin in the blood.
California grunion fish have an unusual mating and spawning ritual during the spring and summer months. The egg-laying takes place on four consecutive nights, beginning on the nights of the full and new Moons, when tides are highest. This well-understood reproductive strategy is related to tides, which are highest when the Sun, Earth, and Moon are aligned, i.e., at new Moon or full Moon.
In insects, the lunar cycle may affect hormonal changes. The body weight of honeybees peaks during new Moon. The midge Clunio marinus has a biological clock synchronized with the Moon.
Evidence for lunar effect in reptiles, birds and mammals is scant, but among reptiles marine iguanas (which live in the Galápagos Islands) time their trips to the sea in order to arrive at low tide.
A relationship between the Moon and the birth rate of cows was reported in a 2016 study.
In 2000, a retrospective study in the United Kingdom reported an association between the full moon and significant increases in animal bites to humans. The study reported that patients presenting to the A&E with injuries stemming from bites from an animal rose significantly at the time of a full moon in the period 1997–1999. The study concluded that animals have an increased inclination to bite a human during a full moon period. It did not address the question of how humans came into contact with the animals, and whether this was more likely to happen during the full moon.
In plants
Serious doubts have been raised about the claim that a species of Ephedra synchronizes its pollination peak to the full moon in July. Reviewers conclude that more research is needed to answer this question.
See also
List of topics characterized as pseudoscience
Astrology
Human menstrual cycle
Menstrual synchrony
Reproductive synchrony
Solunar theory
Tide
References
Bibliography
Abell, George (1979). Review of the book The Alleged Lunar Effect by Arnold Lieber, Skeptical Inquirer, Spring 1979, 68–73. Reprinted in Science Confronts the Paranormal, edited by Kendrick Frazier, Prometheus Books.
Abell, George and Barry Singer (1981). Science and the Paranormal – probing the existence of the supernatural, Charles Scribner's Sons, chapter 5.
Berman, Bob (2003). Fooled by the Full Moon – Scientists search for the sober truth behind some loony ideas, Discover, September 2003, page 30.
Caton, Dan (2001). Natality and the Moon Revisited: Do Birth Rates Depend on the Phase of the Moon?, Bulletin of the American Astronomical Society, Vol 33, No. 4, 2001, p. 1371. A summary of the results of the paper.
Sanduleak, Nicholas (1985). The Moon is Acquitted of Murder in Cleveland, Skeptical Inquirer, Spring 1985, 236–242. Reprinted in Science Confronts the Paranormal, edited by Kendrick Frazier, Prometheus Books.
External links
The Skeptic's Dictionary on the lunar effect
Moon myths
Pseudoscience
Chronobiology
Astrology
Periodic phenomena | Lunar effect | Astronomy,Biology | 3,033 |
87,793 | https://en.wikipedia.org/wiki/Joseph-Louis%20Lagrange | Joseph-Louis Lagrange (born Giuseppe Luigi Lagrangia or Giuseppe Ludovico De la Grange Tournier; 25 January 1736 – 10 April 1813), also reported as Giuseppe Luigi Lagrange or Lagrangia, was an Italian mathematician, physicist and astronomer, later naturalized French. He made significant contributions to the fields of analysis, number theory, and both classical and celestial mechanics.
In 1766, on the recommendation of Leonhard Euler and d'Alembert, Lagrange succeeded Euler as the director of mathematics at the Prussian Academy of Sciences in Berlin, Prussia, where he stayed for over twenty years, producing many volumes of work and winning several prizes of the French Academy of Sciences. Lagrange's treatise on analytical mechanics (Mécanique analytique, 4. ed., 2 vols. Paris: Gauthier-Villars et fils, 1788–89), which was written in Berlin and first published in 1788, offered the most comprehensive treatment of classical mechanics since Isaac Newton and formed a basis for the development of mathematical physics in the nineteenth century.
In 1787, at age 51, he moved from Berlin to Paris and became a member of the French Academy of Sciences. He remained in France until the end of his life. He was instrumental in the decimalisation process in Revolutionary France, became the first professor of analysis at the École Polytechnique upon its opening in 1794, was a founding member of the Bureau des Longitudes, and became Senator in 1799.
Scientific contribution
Lagrange was one of the creators of the calculus of variations, deriving the Euler–Lagrange equations for extrema of functionals. He extended the method to include possible constraints, arriving at the method of Lagrange multipliers. Lagrange invented the method of solving differential equations known as variation of parameters, applied differential calculus to the theory of probabilities and worked on solutions for algebraic equations. He proved that every natural number is a sum of four squares. His treatise Theorie des fonctions analytiques laid some of the foundations of group theory, anticipating Galois. In calculus, Lagrange developed a novel approach to interpolation and Taylor's theorem. He studied the three-body problem for the Earth, Sun and Moon (1764) and the movement of Jupiter's satellites (1766), and in 1772 found the special-case solutions to this problem that yield what are now known as Lagrangian points. Lagrange is best known for transforming Newtonian mechanics into a branch of analysis, Lagrangian mechanics. He presented the mechanical "principles" as simple results of the variational calculus.
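For readers who want the two results most often attached to his name in symbols, here is a compact modern statement (standard textbook notation, not Lagrange's own):

```latex
% Euler–Lagrange equation: the functional J[y] = \int_a^b F(x, y, y')\,dx
% is stationary only if
\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'} = 0 .

% Lagrange multipliers: an extremum of f(x) subject to the constraint
% g(x) = 0 occurs where, for some multiplier \mu,
\nabla f(x) = \mu\,\nabla g(x), \qquad g(x) = 0 .
```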
Biography
Early years
Firstborn of eleven children as Giuseppe Lodovico Lagrangia, Lagrange was of Italian and French descent. His paternal great-grandfather was a French captain of cavalry, whose family originated from the French region of Tours. After serving under Louis XIV, he had entered the service of Charles Emmanuel II, Duke of Savoy, and married a Conti from the noble Roman family. Lagrange's father, Giuseppe Francesco Lodovico, was a doctor in Law at the University of Torino, while his mother was the only child of a rich doctor of Cambiano, in the countryside of Turin. He was raised as a Roman Catholic (but later on became an agnostic).
His father, who had charge of the King's military chest and was Treasurer of the Office of Public Works and Fortifications in Turin, should have maintained a good social position and wealth, but before his son grew up he had lost most of his property in speculations. A career as a lawyer was planned out for Lagrange by his father, and certainly Lagrange seems to have accepted this willingly. He studied at the University of Turin and his favourite subject was classical Latin. At first, he had no great enthusiasm for mathematics, finding Greek geometry rather dull.
It was not until he was seventeen that he showed any taste for mathematics – his interest in the subject being first excited by a paper by Edmond Halley from 1693, which he came across by accident. Alone and unaided he threw himself into mathematical studies; at the end of a year's incessant toil he was already an accomplished mathematician. Charles Emmanuel III appointed Lagrange to serve as the "Sostituto del Maestro di Matematica" (mathematics assistant professor) at the Royal Military Academy of the Theory and Practice of Artillery in 1755, where he taught courses in calculus and mechanics to support the Piedmontese army's early adoption of the ballistics theories of Benjamin Robins and Leonhard Euler. In that capacity, Lagrange was the first to teach calculus in an engineering school. According to Alessandro Papacino D'Antoni, the academy's military commander and famous artillery theorist, Lagrange unfortunately proved to be a problematic professor with his oblivious teaching style, abstract reasoning, and impatience with artillery and fortification-engineering applications. In this academy one of his students was François Daviet.
Variational calculus
Lagrange is one of the founders of the calculus of variations. Starting in 1754, he worked on the problem of the tautochrone, discovering a method of maximizing and minimizing functionals in a way similar to finding extrema of functions. Lagrange wrote several letters to Leonhard Euler between 1754 and 1756 describing his results. He outlined his "δ-algorithm", leading to the Euler–Lagrange equations of variational calculus and considerably simplifying Euler's earlier analysis. Lagrange also applied his ideas to problems of classical mechanics, generalising the results of Euler and Maupertuis.
Euler was very impressed with Lagrange's results. It has been stated that "with characteristic courtesy he withheld a paper he had previously written, which covered some of the same ground, in order that the young Italian might have time to complete his work, and claim the undisputed invention of the new calculus"; however, this chivalric view has been disputed. Lagrange published his method in two memoirs of the Turin Society in 1762 and 1773.
Miscellanea Taurinensia
In 1758, with the aid of his pupils (mainly with Daviet), Lagrange established a society, which was subsequently incorporated as the Turin Academy of Sciences, and most of his early writings are to be found in the five volumes of its transactions, usually known as the Miscellanea Taurinensia. Many of these are elaborate papers. The first volume contains a paper on the theory of the propagation of sound; in this he indicates a mistake made by Newton, obtains the general differential equation for the motion, and integrates it for motion in a straight line. This volume also contains the complete solution of the problem of a string vibrating transversely; in this paper, he points out a lack of generality in the solutions previously given by Brook Taylor, D'Alembert, and Euler, and arrives at the conclusion that the form of the curve at any time t is given by the equation y = a sin(mx) sin(nt). The article concludes with a masterly discussion of echoes, beats, and compound sounds. Other articles in this volume are on recurring series, probabilities, and the calculus of variations.
The second volume contains a long paper embodying the results of several papers in the first volume on the theory and notation of the calculus of variations, and he illustrates its use by deducing the principle of least action, and by solutions of various problems in dynamics.
The third volume includes the solution of several dynamical problems by means of the calculus of variations; some papers on the integral calculus; a solution of Fermat's problem: given an integer n which is not a perfect square, to find a number x such that x²n + 1 is a perfect square; and the general differential equations of motion for three bodies moving under their mutual attractions.
The next work he produced was in 1764 on the libration of the Moon, and an explanation as to why the same face was always turned to the earth, a problem which he treated by the aid of virtual work. His solution is especially interesting as containing the germ of the idea of generalised equations of motion, equations which he first formally proved in 1780.
Berlin
Already by 1756, Euler and Maupertuis, seeing Lagrange's mathematical talent, tried to persuade Lagrange to come to Berlin, but he shyly refused the offer. In 1765, d'Alembert interceded on Lagrange's behalf with Frederick of Prussia and by letter, asked him to leave Turin for a considerably more prestigious position in Berlin. He again turned down the offer, responding that
It seems to me that Berlin would not be at all suitable for me while M. Euler is there.
In 1766, after Euler left Berlin for Saint Petersburg, Frederick himself wrote to Lagrange expressing the wish of "the greatest king in Europe" to have "the greatest mathematician in Europe" resident at his court. Lagrange was finally persuaded. He spent the next twenty years in Prussia, where he produced a long series of papers published in the Berlin and Turin transactions, and composed his monumental work, the Mécanique analytique. In 1767, he married his cousin Vittoria Conti.
Lagrange was a favourite of the king, who frequently lectured him on the advantages of perfect regularity of life. The lesson was accepted, and Lagrange studied his mind and body as though they were machines, and experimented to find the exact amount of work which he could do before exhaustion. Every night he set himself a definite task for the next day, and on completing any branch of a subject he wrote a short analysis to see what points in the demonstrations or the subject-matter were capable of improvement. He carefully planned his papers before writing them, usually without a single erasure or correction.
Nonetheless, during his years in Berlin, Lagrange's health was rather poor, and that of his wife Vittoria was even worse. She died in 1783 after years of illness and Lagrange was very depressed. In 1786, Frederick II died, and the climate of Berlin became difficult for Lagrange.
Paris
In 1786, following Frederick's death, Lagrange received similar invitations from states including Spain and Naples, and he accepted the offer of Louis XVI to move to Paris. In France he was received with every mark of distinction and special apartments in the Louvre were prepared for his reception, and he became a member of the French Academy of Sciences, which later became part of the Institut de France (1795). At the beginning of his residence in Paris, he was seized with an attack of melancholy, and even the printed copy of his Mécanique on which he had worked for a quarter of a century lay for more than two years unopened on his desk. Curiosity as to the results of the French Revolution first stirred him out of his lethargy, a curiosity which soon turned to alarm as the revolution developed.
It was about the same time, 1792, that the unaccountable sadness of his life and his timidity moved the compassion of 24-year-old Renée-Françoise-Adélaïde Le Monnier, daughter of his friend, the astronomer Pierre Charles Le Monnier. She insisted on marrying him and proved a devoted wife to whom he became warmly attached.
In September 1793, the Reign of Terror began. Under the intervention of Antoine Lavoisier, who himself was by then already thrown out of the academy along with many other scholars, Lagrange was specifically exempted by name in the decree of October 1793 that ordered all foreigners to leave France. On 4 May 1794, Lavoisier and 27 other tax farmers were arrested and sentenced to death and guillotined on the afternoon after the trial. Lagrange said on the death of Lavoisier:
It took only a moment to cause this head to fall and a hundred years will not suffice to produce its like.
Though Lagrange had been preparing to escape from France while there was yet time, he was never in any danger; different revolutionary governments (and at a later time, Napoleon) gave him honours and distinctions. This luckiness or safety may to some extent be due to the attitude to life he had expressed many years before: "I believe that, in general, one of the first principles of every wise man is to conform strictly to the laws of the country in which he is living, even when they are unreasonable". A striking testimony to the respect in which he was held was shown in 1796 when the French commissary in Italy was ordered to attend in full state on Lagrange's father and tender the congratulations of the republic on the achievements of his son, who "had done honour to all mankind by his genius, and whom it was the special glory of Piedmont to have produced". It may be added that Napoleon, when he attained power, warmly encouraged scientific studies in France, and was a liberal benefactor of them. Appointed senator in 1799, he was the first signer of the Sénatus-consulte which in 1802 annexed his fatherland Piedmont to France. He acquired French citizenship in consequence. The French claimed he was a French mathematician, but the Italians continued to claim him as Italian.
Units of measurement
Lagrange was involved in the development of the metric system of measurement in the 1790s. He was offered the presidency of the Commission for the reform of weights and measures (la Commission des Poids et Mesures) when he was preparing to escape. After Lavoisier's death in 1794, it was largely Lagrange who influenced the choice of the metre and kilogram units with decimal subdivision, by the commission of 1799. Lagrange was also one of the founding members of the Bureau des Longitudes in 1795.
École Normale
In 1795, Lagrange was appointed to a mathematical chair at the newly established École Normale, which enjoyed only a short existence of four months. His lectures there were elementary; they contain nothing of any mathematical importance, though they do provide a brief historical insight into his reason for proposing undecimal, or base 11, as the base number for the reformed system of weights and measures. The lectures were published because the professors had to "pledge themselves to the representatives of the people and to each other neither to read nor to repeat from memory" ["Les professeurs aux Écoles Normales ont pris, avec les Représentants du Peuple, et entr'eux l'engagement de ne point lire ou débiter de mémoire des discours écrits"]. The discourses were ordered and taken down in shorthand to enable the deputies to see how the professors acquitted themselves. It was also thought the published lectures would interest a significant portion of the citizenry ["Quoique des feuilles sténographiques soient essentiellement destinées aux élèves de l'École Normale, on doit prévoir qu'elles seront lues par une grande partie de la Nation"].
École Polytechnique
In 1794, Lagrange was appointed professor of the École Polytechnique; and his lectures there, described by mathematicians who had the good fortune to be able to attend them, were almost perfect both in form and matter. Beginning with the merest elements, he led his hearers on until, almost unknown to themselves, they were themselves extending the bounds of the subject: above all he impressed on his pupils the advantage of always using general methods expressed in a symmetrical notation.
However, Lagrange does not seem to have been a successful teacher. Fourier, who attended his lectures in 1795, wrote:
his voice is very feeble, at least in that he does not become heated; he has a very marked Italian accent and pronounces the s like z [...] The students, of whom the majority are incapable of appreciating him, give him little welcome, but the professeurs make amends for it.
Late years
In 1810, Lagrange started a thorough revision of the Mécanique analytique, but he was able to complete only about two-thirds of it before his death in Paris in 1813, in 128 rue du Faubourg Saint-Honoré. Napoleon honoured him with the Grand Croix of the Ordre Impérial de la Réunion just two days before he died. He was buried that same year in the Panthéon in Paris. The inscription on his tomb reads in translation: JOSEPH LOUIS LAGRANGE. Senator. Count of the Empire. Grand Officer of the Legion of Honour. Grand Cross of the Imperial Order of the Reunion. Member of the Institute and the Bureau of Longitude. Born in Turin on 25 January 1736. Died in Paris on 10 April 1813.
Work in Berlin
Lagrange was extremely active scientifically during the twenty years he spent in Berlin. Not only did he produce his Mécanique analytique, but he contributed between one and two hundred papers to the Academy of Turin, the Berlin Academy, and the French Academy. Some of these are really treatises, and all without exception are of a high order of excellence. Except for a short time when he was ill he produced on average about one paper a month. Of these, note the following as amongst the most important.
First, his contributions to the fourth and fifth volumes, 1766–1773, of the Miscellanea Taurinensia; of which the most important was the one in 1771, in which he discussed how numerous astronomical observations should be combined so as to give the most probable result. And later, his contributions to the first two volumes, 1784–1785, of the transactions of the Turin Academy; to the first of which he contributed a paper on the pressure exerted by fluids in motion, and to the second an article on integration by infinite series, and the kind of problems for which it is suitable.
Most of the papers sent to Paris were on astronomical questions, and among these were his paper on the Jovian system in 1766, his essay on the problem of three bodies in 1772, his work on the secular equation of the Moon in 1773, and his treatise on cometary perturbations in 1778. These were all written on subjects proposed by the Académie française, and in each case the prize was awarded to him.
Lagrangian mechanics
Between 1772 and 1788, Lagrange re-formulated Classical/Newtonian mechanics to simplify formulas and ease calculations. These mechanics are called Lagrangian mechanics.
Algebra
The greater number of his papers during this time were, however, contributed to the Prussian Academy of Sciences. Several of them deal with questions in algebra.
His discussion of representations of integers by quadratic forms (1769) and by more general algebraic forms (1770).
His tract on the Theory of Elimination, 1770.
Lagrange's theorem that the order of a subgroup H of a group G must divide the order of G.
His papers of 1770 and 1771 on the general process for solving an algebraic equation of any degree via the Lagrange resolvents. This method fails to give a general formula for solutions of an equation of degree five and higher, because the auxiliary equation involved has a higher degree than the original one. The significance of this method is that it exhibits the already known formulas for solving equations of the second, third, and fourth degrees as manifestations of a single principle, and was foundational in Galois theory. The complete solution of a binomial equation (namely an equation of the form x^n ± a = 0) is also treated in these papers.
In 1773, Lagrange considered a functional determinant of order 3, a special case of a Jacobian. He also proved the expression for the volume of a tetrahedron with one of the vertices at the origin as the one-sixth of the absolute value of the determinant formed by the coordinates of the other three vertices.
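Lagrange's determinant formula for the tetrahedron is easy to check numerically. The sketch below uses NumPy with illustrative vertex coordinates (the matrix M is hypothetical example data):

```python
import numpy as np

# One vertex at the origin; the other three are the rows of M.
# Lagrange's 1773 result: the volume is one-sixth of |det M|.
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

volume = abs(np.linalg.det(M)) / 6.0
print(volume)  # 1.0 — a right-corner tetrahedron with legs 1, 2 and 3
```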
Number theory
Several of his early papers also deal with questions of number theory.
Lagrange (1766–1769) was the first European to prove that Pell's equation x² − ny² = 1 has a nontrivial solution in the integers for any non-square natural number n.
He proved the theorem, stated by Bachet without justification, that every positive integer is the sum of four squares, 1770.
He proved Wilson's theorem that (for any integer n > 1): n is a prime if and only if (n − 1)! + 1 is a multiple of n, 1771. (A brute-force numerical check of this and of the four-square theorem follows this list.)
His papers of 1773, 1775, and 1777 gave demonstrations of several results enunciated by Fermat, and not previously proved.
His Recherches d'Arithmétique of 1775 developed a general theory of binary quadratic forms to handle the general problem of when an integer is representable by the form ax² + bxy + cy².
He made contributions to the theory of continued fractions.
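As a quick illustration of two of the results above, here is a brute-force check (illustrative code, not Lagrange's methods) of Wilson's theorem and of the four-square theorem for small integers:

```python
from math import factorial, isqrt

def is_prime_wilson(n: int) -> bool:
    """Wilson's theorem: n > 1 is prime iff (n - 1)! + 1 is a multiple of n."""
    return n > 1 and (factorial(n - 1) + 1) % n == 0

def four_squares(n: int):
    """Find (a, b, c, d) with a^2 + b^2 + c^2 + d^2 == n by exhaustive search."""
    for a in range(isqrt(n), -1, -1):
        for b in range(isqrt(n - a * a), -1, -1):
            for c in range(isqrt(n - a * a - b * b), -1, -1):
                d2 = n - a * a - b * b - c * c
                d = isqrt(d2)
                if d * d == d2:
                    return a, b, c, d

print([k for k in range(2, 20) if is_prime_wilson(k)])  # [2, 3, 5, 7, 11, 13, 17, 19]
print(four_squares(310))                                # (17, 4, 2, 1)
```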
Other mathematical work
There are also numerous articles on various points of analytical geometry. In two of them, written rather later, in 1792 and 1793, he reduced the equations of the quadrics (or conicoids) to their canonical forms.
During the years from 1772 to 1785, he contributed a long series of papers which created the science of partial differential equations. A large part of these results was collected in the second edition of Euler's integral calculus which was published in 1794.
Astronomy
Lastly, there are numerous papers on problems in astronomy. Of these the most important are the following:
Attempting to solve the general three-body problem, with the consequent discovery of the two constant-pattern solutions, collinear and equilateral, 1772. Those solutions were later seen to explain what are now known as the Lagrangian points.
On the attraction of ellipsoids, 1773: this is founded on Maclaurin's work.
On the secular equation of the Moon, 1773; also noticeable for the earliest introduction of the idea of the potential. The potential of a body at any point is the sum of the mass of every element of the body when divided by its distance from the point. Lagrange showed that if the potential of a body at an external point were known, the attraction in any direction could be at once found. The theory of the potential was elaborated in a paper sent to Berlin in 1777.
On the motion of the nodes of a planet's orbit, 1774.
On the stability of the planetary orbits, 1776.
Two papers in which the method of determining the orbit of a comet from three observations is completely worked out, 1778 and 1783: this has not indeed proved practically available, but his system of calculating the perturbations by means of mechanical quadratures has formed the basis of most subsequent researches on the subject.
His determination of the secular and periodic variations of the elements of the planets, 1781–1784: the upper limits assigned for these agree closely with those obtained later by Le Verrier, and Lagrange proceeded as far as the knowledge then possessed of the masses of the planets permitted.
Three papers on the method of interpolation, 1783, 1792 and 1793: the part of finite differences dealing therewith is now in the same stage as that in which Lagrange left it.
Fundamental treatise
Over and above these various papers he composed his fundamental treatise, the Mécanique analytique.
In this book, he lays down the law of virtual work, and from that one fundamental principle, by the aid of the calculus of variations, deduces the whole of mechanics, both of solids and fluids.
The object of the book is to show that the subject is implicitly included in a single principle, and to give general formulae from which any particular result can be obtained. The method of generalised co-ordinates by which he obtained this result is perhaps the most brilliant result of his analysis. Instead of following the motion of each individual part of a material system, as D'Alembert and Euler had done, he showed that, if we determine its configuration by a sufficient number of variables x, called generalized coordinates, whose number is the same as that of the degrees of freedom possessed by the system, then the kinetic and potential energies of the system can be expressed in terms of those variables, and the differential equations of motion thence deduced by simple differentiation. For example, in dynamics of a rigid system he replaces the consideration of the particular problem by the general equation, which is now usually written in the form
\[ \frac{d}{dt}\frac{\partial T}{\partial \dot{q}} - \frac{\partial T}{\partial q} + \frac{\partial V}{\partial q} = 0, \]
where T represents the kinetic energy and V represents the potential energy of the system.
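As a concrete sketch of the generalized-coordinate recipe, the following uses SymPy's euler_equations helper on a one-dimensional harmonic oscillator; the symbols m, k and the coordinate q are illustrative choices, and the library call is a modern convenience rather than anything in Lagrange's text:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.Symbol('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')(t)          # a single generalized coordinate

T = m * q.diff(t)**2 / 2         # kinetic energy
V = k * q**2 / 2                 # potential energy
L = T - V                        # the Lagrangian

# d/dt (dL/dq') - dL/dq = 0, i.e. m q'' + k q = 0:
print(euler_equations(L, q, t))
# [Eq(-k*q(t) - m*Derivative(q(t), (t, 2)), 0)]
```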
He then presented what we now know as the method of Lagrange multipliers—though this is not the first time that method was published—as a means to solve this equation.
Amongst other minor theorems here given it may suffice to mention the proposition that the kinetic energy imparted by the given impulses to a material system under given constraints is a maximum, and the principle of least action. All the analysis is so elegant that Sir William Rowan Hamilton said the work could be described only as a scientific poem. Lagrange remarked that mechanics was really a branch of pure mathematics analogous to a geometry of four dimensions, namely, the time and the three coordinates of the point in space; and it is said that he prided himself that from the beginning to the end of the work there was not a single diagram. At first no printer could be found who would publish the book; but Legendre at last persuaded a Paris firm to undertake it, and it was issued under the supervision of Laplace, Cousin, Legendre (editor) and Condorcet in 1788.
Work in France
Differential calculus and calculus of variations
Lagrange's lectures on the differential calculus at École Polytechnique form the basis of his treatise Théorie des fonctions analytiques, which was published in 1797. This work is the extension of an idea contained in a paper he had sent to the Berlin papers in 1772, and its object is to substitute for the differential calculus a group of theorems based on the development of algebraic functions in series, relying in particular on the principle of the generality of algebra.
A somewhat similar method had been previously used by John Landen in the Residual Analysis, published in London in 1758. Lagrange believed that he could thus get rid of those difficulties, connected with the use of infinitely large and infinitely small quantities, to which philosophers objected in the usual treatment of the differential calculus. The book is divided into three parts: of these, the first treats of the general theory of functions, and gives an algebraic proof of Taylor's theorem, the validity of which is, however, open to question; the second deals with applications to geometry; and the third with applications to mechanics.
Another treatise on the same lines was his Leçons sur le calcul des fonctions, issued in 1804, with the second edition in 1806. It is in this book that Lagrange formulated his celebrated method of Lagrange multipliers, in the context of problems of variational calculus with integral constraints. These works devoted to differential calculus and calculus of variations may be considered as the starting point for the researches of Cauchy, Jacobi, and Weierstrass.
Infinitesimals
At a later period Lagrange fully embraced the use of infinitesimals in preference to founding the differential calculus on the study of algebraic forms; and in the preface to the second edition of the Mécanique Analytique, which was issued in 1811, he justifies the employment of infinitesimals, and concludes by saying that:
When we have grasped the spirit of the infinitesimal method, and have verified the exactness of its results either by the geometrical method of prime and ultimate ratios, or by the analytical method of derived functions, we may employ infinitely small quantities as a sure and valuable means of shortening and simplifying our proofs.
Number theory
His Résolution des équations numériques, published in 1798, was also the fruit of his lectures at École Polytechnique. There he gives the method of approximating the real roots of an equation by means of continued fractions, and enunciates several other theorems. In a note at the end, he shows how Fermat's little theorem, that is
\[ a^{p-1} \equiv 1 \pmod{p}, \]
where p is a prime and a is prime to p, may be applied to give the complete algebraic solution of any binomial equation. He also here explains how the equation whose roots are the squares of the differences of the roots of the original equation may be used so as to give considerable information as to the position and nature of those roots.
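The congruence is easy to verify with modular exponentiation; the prime and the bases below are arbitrary illustrative choices:

```python
p = 101  # a prime
for a in (2, 3, 10, 99):          # each a is prime to p
    assert pow(a, p - 1, p) == 1  # Fermat's little theorem
print("a^(p-1) ≡ 1 (mod p) holds for all tested a")
```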
Celestial mechanics
A theory of the planetary motions had formed the subject of some of the most remarkable of Lagrange's Berlin papers. In 1806 the subject was reopened by Poisson, who, in a paper read before the French Academy, showed that Lagrange's formulae led to certain limits for the stability of the orbits. Lagrange, who was present, now discussed the whole subject afresh, and in a letter communicated to the academy in 1808 explained how, by the variation of arbitrary constants, the periodical and secular inequalities of any system of mutually interacting bodies could be determined.
Prizes and distinctions
Euler proposed Lagrange for election to the Berlin Academy and he was elected on 2 September 1756. He was elected a Fellow of the Royal Society of Edinburgh in 1790, a Fellow of the Royal Society and a foreign member of the Royal Swedish Academy of Sciences in 1806. In 1808, Napoleon made Lagrange a Grand Officer of the Legion of Honour and a Count of the Empire. He was awarded the Grand Croix of the Ordre Impérial de la Réunion in 1813, a week before his death in Paris, and was buried in the Panthéon, a mausoleum dedicated to the most honoured French people.
Lagrange was awarded the 1764 prize of the French Academy of Sciences for his memoir on the libration of the Moon. In 1766 the academy proposed a problem of the motion of the satellites of Jupiter, and the prize again was awarded to Lagrange. He also shared or won the prizes of 1772, 1774, and 1778.
Lagrange is one of the 72 prominent French scientists who were commemorated on plaques at the first stage of the Eiffel Tower when it first opened. Rue Lagrange in the 5th Arrondissement in Paris is named after him. In Turin, the street where the house of his birth still stands is named via Lagrange. The lunar crater Lagrange and the asteroid 1006 Lagrangea also bear his name.
See also
List of things named after Joseph-Louis Lagrange
Four-dimensional space
Gauss's law
History of the metre
Lagrange's role in measurement reform
Seconds pendulum
Notes
References
Citations
Sources
The initial version of this article was taken from the public domain resource A Short Account of the History of Mathematics (4th edition, 1908) by W. W. Rouse Ball.
Columbia Encyclopedia, 6th ed., 2005, "Lagrange, Joseph Louis."
W. W. Rouse Ball, 1908, "Joseph Louis Lagrange (1736–1813)" A Short Account of the History of Mathematics, 4th ed. also on Gutenberg
Chanson, Hubert, 2007, "Velocity Potential in Real Fluid Flows: Joseph-Louis Lagrange's Contribution," La Houille Blanche 5: 127–31.
Fraser, Craig G., 2005, "Théorie des fonctions analytiques" in Grattan-Guinness, I., ed., Landmark Writings in Western Mathematics. Elsevier: 258–76.
Lagrange, Joseph-Louis. (1811). Mécanique Analytique. Courcier (reissued by Cambridge University Press, 2009).
Lagrange, J.L. (1781) "Mémoire sur la Théorie du Mouvement des Fluides" (Memoir on the Theory of Fluid Motion) in Serret, J.A., ed., 1867. Oeuvres de Lagrange, Vol. 4. Paris: Gauthier-Villars: 695–748.
Pulte, Helmut, 2005, "Méchanique Analytique" in Grattan-Guinness, I., ed., Landmark Writings in Western Mathematics. Elsevier: 208–24.
External links
Lagrange, Joseph Louis de: The Encyclopedia of Astrobiology, Astronomy and Space Flight
The Founders of Classical Mechanics: Joseph Louis Lagrange
The Lagrange Points
Derivation of Lagrange's result (not Lagrange's method)
Lagrange's works (in French) Oeuvres de Lagrange, edited by Joseph Alfred Serret, Paris 1867, digitized by Göttinger Digitalisierungszentrum (Mécanique analytique is in volumes 11 and 12.)
Joseph Louis de Lagrange – Œuvres complètes Gallica-Math
Inventaire chronologique de l'œuvre de Lagrange Persee
Mécanique analytique (Paris, 1811-15)
Lagrangian mechanics
1736 births
1813 deaths
Scientists from Turin
18th-century Italian mathematicians
19th-century Italian mathematicians
Burials at the Panthéon, Paris
Counts of the First French Empire
Italian people of French descent
Naturalized citizens of France
French agnostics
18th-century French astronomers
18th-century Italian astronomers
Mathematical analysts
Members of the French Academy of Sciences
Members of the Prussian Academy of Sciences
Members of the Royal Swedish Academy of Sciences
Honorary members of the Saint Petersburg Academy of Sciences
Number theorists
French geometers
Scientists from the Kingdom of Sardinia
Grand Officers of the Legion of Honour
Fellows of the Royal Society
18th-century French mathematicians
19th-century French mathematicians | Joseph-Louis Lagrange | Physics,Mathematics | 6,903 |
11,547,974 | https://en.wikipedia.org/wiki/KvLQT3 | Kv7.3 (KvLQT3) is a potassium channel protein coded for by the gene KCNQ3.
It is associated with benign familial neonatal epilepsy.
The M channel is a slowly activating and deactivating potassium channel that plays a critical role in the regulation of neuronal excitability. The M channel is formed by the association of the protein encoded by this gene and one of two related proteins encoded by the KCNQ2 and KCNQ5 genes, both integral membrane proteins. M channel currents are inhibited by M1 muscarinic acetylcholine receptors and activated by retigabine, a novel anti-convulsant drug. Defects in this gene are a cause of benign familial neonatal convulsions type 2 (BFNC2), also known as epilepsy, benign neonatal type 2 (EBN2).
Interactions
KvLQT3 has been shown to interact with KCNQ5.
References
Further reading
External links
Ion channels
Proteins | KvLQT3 | Chemistry | 219 |
1,382,023 | https://en.wikipedia.org/wiki/Quartic%20interaction | In quantum field theory, a quartic interaction or φ4 theory is a type of self-interaction in a scalar field. Other types of quartic interactions may be found under the topic of four-fermion interactions. A classical free scalar field φ satisfies the Klein–Gordon equation. If a scalar field is denoted φ, a quartic interaction is represented by adding a potential energy term (λ/4!)φ4 to the Lagrangian density. The coupling constant λ is dimensionless in 4-dimensional spacetime.
This article uses the metric signature (+, −, −, −) for Minkowski space.
Lagrangian for a real scalar field
The Lagrangian density for a real scalar field with a quartic interaction is
\[ \mathcal{L} = \frac{1}{2}\,\partial^\mu\varphi\,\partial_\mu\varphi - \frac{1}{2}m^2\varphi^2 - \frac{\lambda}{4!}\varphi^4. \]
This Lagrangian has a global Z2 symmetry mapping φ → −φ.
Lagrangian for a complex scalar field
The Lagrangian for a complex scalar field can be motivated as follows. For two scalar fields φ1 and φ2 the Lagrangian has the form
\[ \mathcal{L} = \frac{1}{2}\,\partial^\mu\varphi_1\,\partial_\mu\varphi_1 + \frac{1}{2}\,\partial^\mu\varphi_2\,\partial_\mu\varphi_2 - \frac{1}{2}m^2\left(\varphi_1^2 + \varphi_2^2\right) - \frac{\lambda}{4}\left(\varphi_1^2 + \varphi_2^2\right)^2, \]
which can be written more concisely introducing a complex scalar field φ defined as
\[ \phi = \frac{1}{\sqrt{2}}\left(\varphi_1 + i\varphi_2\right). \]
Expressed in terms of this complex scalar field, the above Lagrangian becomes
\[ \mathcal{L} = \partial^\mu\phi^*\,\partial_\mu\phi - m^2\phi^*\phi - \lambda\left(\phi^*\phi\right)^2, \]
which is thus equivalent to the SO(2) model of real scalar fields φ1, φ2, as can be seen by expanding the complex field in real and imaginary parts.
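A quick symbolic check of this equivalence, under the normalization conventions used above (coupling λ/4 for the two real fields and φ = (φ1 + iφ2)/√2), can be done with SymPy; it is a sanity check of the algebra, not part of the original article:

```python
import sympy as sp

lam, m = sp.symbols('lambda m', positive=True)
p1, p2 = sp.symbols('phi1 phi2', real=True)   # the two real fields

phi = (p1 + sp.I * p2) / sp.sqrt(2)           # the complex combination
complex_V = m**2 * sp.conjugate(phi) * phi + lam * (sp.conjugate(phi) * phi)**2
real_V = m**2 * (p1**2 + p2**2) / 2 + lam * (p1**2 + p2**2)**2 / 4

# The two potential terms agree identically:
print(sp.simplify(sp.expand(complex_V - real_V)))  # 0
```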
With N real scalar fields φ1, ..., φN, we can have a model with a global SO(N) symmetry given by the Lagrangian
\[ \mathcal{L} = \frac{1}{2}\,\partial^\mu\varphi_a\,\partial_\mu\varphi_a - \frac{1}{2}m^2\,\varphi_a\varphi_a - \frac{\lambda}{4}\left(\varphi_a\varphi_a\right)^2, \qquad a = 1, \ldots, N, \]
with summation over the repeated index a implied.
In all of the models above, the coupling constant λ must be positive, since otherwise the potential would be unbounded below and there would be no stable vacuum. Also, the Feynman path integral discussed below would be ill-defined. In 4 dimensions, φ4 theories have a Landau pole. This means that without a cut-off on the high-energy scale, renormalization would render the theory trivial.
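The Landau-pole statement can be made quantitative at one loop. The standard leading-order beta function for the quartic coupling is β(λ) = 3λ²/(16π²); integrating dλ/d ln μ = β(λ) gives a coupling that diverges at a finite scale. The starting value below is illustrative:

```python
from math import pi, exp

# One-loop running: lambda(mu) = lam0 / (1 - 3*lam0/(16*pi^2) * ln(mu/mu0)).
# The denominator vanishes — the Landau pole — at ln(mu/mu0) = 16*pi^2/(3*lam0).
lam0 = 0.5                               # illustrative coupling at scale mu0
log_ratio_pole = 16 * pi**2 / (3 * lam0)
print(f"pole at mu/mu0 = exp({log_ratio_pole:.1f}) ~ {exp(log_ratio_pole):.2e}")
# A weak coupling at mu0 pushes the pole to astronomically high scales,
# but it sits at a finite scale for any lam0 > 0.
```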
The φ4 model belongs to the Griffiths-Simon class, meaning that it can be represented also as the weak limit of an Ising model on a certain type of graph. The triviality of both the φ4 model and the Ising model in dimension d ≥ 4 can be shown via a graphical representation known as the random current expansion.
Feynman integral quantization
The Feynman diagram expansion may be obtained also from the Feynman path integral formulation. The time-ordered vacuum expectation values of polynomials in φ, known as the n-particle Green's functions, are constructed by integrating over all possible fields, normalized by the vacuum expectation value with no external fields,
\[ \langle\Omega|\mathcal{T}\{\varphi(x_1)\cdots\varphi(x_n)\}|\Omega\rangle = \frac{\int\mathcal{D}\varphi\;\varphi(x_1)\cdots\varphi(x_n)\,e^{i\int d^4x\,\mathcal{L}[\varphi]}}{\int\mathcal{D}\varphi\;e^{i\int d^4x\,\mathcal{L}[\varphi]}}. \]
All of these Green's functions may be obtained by expanding the exponential in J(x)φ(x) in the generating function
\[ Z[J] = \int\mathcal{D}\varphi\;e^{\,i\int d^4x\left(\mathcal{L}[\varphi] + J(x)\varphi(x)\right)}. \]
A Wick rotation may be applied to make time imaginary. Changing the signature to (++++) then gives a φ4 statistical mechanics integral over a 4-dimensional Euclidean space,
\[ Z[J] = \int\mathcal{D}\varphi\;e^{-\int d^4x\left(\frac{1}{2}(\partial\varphi)^2 + \frac{1}{2}m^2\varphi^2 + \frac{\lambda}{4!}\varphi^4 - J\varphi\right)}. \]
Normally, this is applied to the scattering of particles with fixed momenta, in which case a Fourier transform is useful, giving instead the Green's functions in momentum space; each of them carries an overall factor δ(p1 + ... + pn) enforcing momentum conservation, where δ is the Dirac delta function.
The standard trick to evaluate this functional integral is to write it as a product of exponential factors, schematically,
\[ Z[J] = \int\mathcal{D}\varphi\;e^{-\int d^4x\left(\frac{1}{2}(\partial\varphi)^2 + \frac{1}{2}m^2\varphi^2\right)}\;e^{-\frac{\lambda}{4!}\int d^4x\,\varphi^4}\;e^{\int d^4x\,J\varphi}. \]
The second two exponential factors can be expanded as power series, and the combinatorics of this expansion can be represented graphically. The integral with λ = 0 can be treated as a product of infinitely many elementary Gaussian integrals, and the result may be expressed as a sum of Feynman diagrams, calculated using the following Feynman rules:
Each field in the n-point Euclidean Green's function is represented by an external line (half-edge) in the graph, and associated with momentum p.
Each vertex is represented by a factor -λ.
At a given order λ^k, all diagrams with n external lines and k vertices are constructed such that the total momentum flowing into each vertex is zero. Each internal line is represented by a factor 1/(q² + m²), where q is the momentum flowing through that line.
Any unconstrained momenta are integrated over all values.
The result is divided by a symmetry factor, which is the number of ways the lines and vertices of the graph can be rearranged without changing its connectivity.
Do not include graphs containing "vacuum bubbles", connected subgraphs with no external lines.
The last rule takes into account the effect of dividing by Z[0]. The Minkowski-space Feynman rules are similar, except that each vertex is represented by a factor −iλ, while each internal line is represented by a factor i/(q² − m² + iε), where the ε term represents the small Wick rotation needed to make the Minkowski-space Gaussian integral converge.
Renormalization
The integrals over unconstrained momenta, called "loop integrals", in the Feynman graphs typically diverge. This is normally handled by renormalization, which is a procedure of adding divergent counter-terms to the Lagrangian in such a way that the diagrams constructed from the original Lagrangian and counterterms are finite. A renormalization scale must be introduced in the process, and the coupling constant and mass become dependent upon it. It is this dependence that leads to the Landau pole mentioned earlier, and requires that the cutoff be kept finite. Alternatively, if the cutoff is allowed to go to infinity, the Landau pole can be avoided only if the renormalized coupling runs to zero, rendering the theory trivial.
Spontaneous symmetry breaking
An interesting feature can occur if m2 turns negative, but with λ still positive. In this case, the vacuum consists of two lowest-energy states, each of which spontaneously breaks the Z2 global symmetry of the original theory. This leads to the appearance of interesting collective states like domain walls. In the O(2) theory, the vacua would lie on a circle, and the choice of one would spontaneously break the O(2) symmetry. A continuous broken symmetry leads to a Goldstone boson. This type of spontaneous symmetry breaking is the essential component of the Higgs mechanism.
Spontaneous breaking of discrete symmetries
The simplest relativistic system in which we can see spontaneous symmetry breaking is one with a single scalar field with Lagrangian
\[ \mathcal{L} = \frac{1}{2}\,\partial^\mu\varphi\,\partial_\mu\varphi + \frac{1}{2}\mu^2\varphi^2 - \frac{\lambda}{4}\varphi^4 = \frac{1}{2}\,\partial^\mu\varphi\,\partial_\mu\varphi - V(\varphi), \]
where μ² > 0 and
\[ V(\varphi) = -\frac{1}{2}\mu^2\varphi^2 + \frac{\lambda}{4}\varphi^4. \]
Minimizing the potential with respect to φ leads to
\[ V'(\varphi_0) = 0, \qquad \varphi_0 = \pm\frac{\mu}{\sqrt{\lambda}} \equiv \pm v. \]
We now expand the field around this minimum writing
\[ \varphi(x) = v + \sigma(x), \]
and substituting in the Lagrangian we get
\[ \mathcal{L} = \frac{1}{2}\,\partial^\mu\sigma\,\partial_\mu\sigma - \mu^2\sigma^2 - \sqrt{\lambda}\,\mu\,\sigma^3 - \frac{\lambda}{4}\sigma^4 + \text{const}, \]
where we notice that the scalar σ now has a positive mass term (with mass squared 2μ²).
Thinking in terms of vacuum expectation values lets us understand what happens to a symmetry when it is spontaneously broken. The original Lagrangian was invariant under the symmetry φ → −φ. Since
\[ \varphi_0 = \pm v \]
are both minima, there must be two different vacua |Ω±⟩ with
\[ \langle\Omega_\pm|\varphi|\Omega_\pm\rangle = \pm v. \]
Since the symmetry takes φ → −φ, it must take |Ω+⟩ → |Ω−⟩ as well.
The two possible vacua for the theory are equivalent, but one has to be chosen. Although it seems that in the new Lagrangian the symmetry has disappeared, it is still there, but it now acts as
\[ \sigma \to -\sigma - 2v. \]
This is a general feature of spontaneously broken symmetries: the vacuum breaks them, but they are not actually broken in the Lagrangian, just hidden, and often realized only in a nonlinear way.
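The expansion around the chosen vacuum can be verified symbolically. The sketch below, with the same potential V(φ) = −μ²φ²/2 + λφ⁴/4 as above, checks that the linear term cancels and that the σ field picks up a positive mass term:

```python
import sympy as sp

mu, lam = sp.symbols('mu lambda', positive=True)
sigma = sp.Symbol('sigma', real=True)

v = mu / sp.sqrt(lam)                              # the chosen minimum
V = lambda f: -mu**2 * f**2 / 2 + lam * f**4 / 4   # the potential above

shifted = sp.expand(V(v + sigma) - V(v))
print(sp.collect(shifted, sigma))
# mu**2*sigma**2 + sqrt(lambda)*mu*sigma**3 + lambda*sigma**4/4
# -> no linear term; the sigma mass squared is 2*mu**2 > 0.
```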
Exact solutions
There exists a set of exact classical solutions to the equation of motion of the theory written in the form

∂²φ + m²φ + λφ³ = 0

that can be written for the massless, m = 0, case as

φ(x) = μ(2/λ)^(1/4) sn(p·x + θ, −1)

where sn is the Jacobi elliptic sine function and μ, θ are two integration constants, provided the following dispersion relation holds

p² = μ²(λ/2)^(1/2)
The interesting point is that we started with a massless equation but the exact solution describes a wave with a dispersion relation proper to a massive solution. When the mass term is not zero one gets

φ(x) = ±(2μ⁴/(m² + √(m⁴ + 2λμ⁴)))^(1/2) sn(p·x + θ, κ)

with elliptic parameter κ = −λμ⁴/(m⁴ + m²√(m⁴ + 2λμ⁴) + λμ⁴), being now the dispersion relation

p² = m² + λμ⁴/(m² + √(m⁴ + 2λμ⁴))
Finally, for the case of a symmetry breaking, m² < 0, one has

φ(x) = ±v dn(p·x + θ, κ)

being v an integration constant setting the amplitude and κ = 2 + 2m²/(λv²) the elliptic parameter, and the following dispersion relation holds

p² = λv²/2
These wave solutions are interesting because, notwithstanding that we started with an equation with a wrong mass sign, the dispersion relation has the right one. Besides, the Jacobi function dn has no real zeros, and so the field is never zero but moves around a given constant value that is initially chosen, describing a spontaneous breaking of symmetry.
A proof of uniqueness can be provided if we note that the solution can be sought in the form φ(x) = f(ξ), with ξ = p·x + θ. Then the partial differential equation becomes an ordinary differential equation that is the one defining the Jacobi elliptic function, with p satisfying the proper dispersion relation.
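The massless solution can also be spot-checked numerically. The sketch below (ours; it assumes mpmath's ellipfun accepts a negative elliptic parameter, which its general implementation of the Jacobi functions allows, and the test values of μ and λ are arbitrary) verifies that f(ξ) = μ(2/λ)^(1/4) sn(ξ, −1) gives a vanishing residual for p²f″(ξ) + λf(ξ)³ along the wave coordinate ξ = p·x + θ:

    import mpmath as mp

    mu, lam = 1.3, 0.7                       # arbitrary test values
    A = mu * (2.0 / lam) ** 0.25             # amplitude mu * (2/lambda)^(1/4)
    p2 = mu**2 * mp.sqrt(lam / 2.0)          # dispersion relation p^2

    f = lambda xi: A * mp.ellipfun('sn', xi, m=-1)

    for xi in [0.3, 1.1, 2.7]:
        residual = p2 * mp.diff(f, xi, 2) + lam * f(xi) ** 3
        print(xi, mp.nstr(residual, 5))      # ~0 to numerical precision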
See also
Scalar field theory
Quantum triviality
Landau pole
Renormalization
Higgs mechanism
Goldstone boson
Coleman–Weinberg potential
References
Further reading
't Hooft, G., "The Conceptual Basis of Quantum Field Theory" (online version).
Quantum field theory
Subatomic particles with spin 0 | Quartic interaction | Physics | 1,779 |
16,731,070 | https://en.wikipedia.org/wiki/HD%2097048 | HD 97048 or CU Chamaeleontis is a Herbig Ae/Be star in the constellation Chamaeleon. It is a variable star embedded in a dust cloud containing a stellar nursery, and is itself surrounded by a dust disk.
HD 97048 is a young star still contracting towards the main sequence. Its brightness varies between magnitudes 8.38 and 8.48 and it is classified as an Orion variable. It was given the variable star designation CU Chamaeleontis in 1981. Its spectrum is also variable. The spectral class is usually given as A0 or B9, sometimes with a giant luminosity class, sometimes main sequence. The spectrum shows strong variable emission lines indicative of a shell surrounding the star.
HD 97048 is a member of the Chamaeleon T1 stellar association and is still embedded within the dark molecular cloud that it is forming from. It illuminates a small reflection nebula against the dark cloud.
Planetary system
This star has a substantial dust disk with a central cavity of 40–46 AU radius. The disk shows a kink in its carbon monoxide gas velocity and an intensity gap at 130 AU, which is suspected to be caused by a superjovian planet. In 2019, HCO+ ion and hydrogen cyanide emission was detected from the disk, suggesting that a large amount of gas orbits beyond a 200 AU radius.
In the system, a kink in the velocity of carbon monoxide gas (CO 3–2) as well as a gap in the dust emission of the disk are seen as evidence for a jovian protoplanet. The protoplanet is located 130 AU from the star and has a mass of about 2.5 Jupiter masses. It is one of the lowest-mass protoplanets discovered as of 2023.
References
Chamaeleon
Circumstellar disks
097048
Chamaeleontis, CU
054413
CD-76 488
Herbig Ae/Be stars
Hypothetical planetary systems | HD 97048 | Astronomy | 405 |
3,438,859 | https://en.wikipedia.org/wiki/Monoglyceride | Monoglycerides (also: acylglycerols or monoacylglycerols) are a class of glycerides which are composed of a molecule of glycerol linked to a fatty acid via an ester bond. As glycerol contains both primary and secondary alcohol groups, two different types of monoglycerides may be formed: 1-monoacylglycerols, where the fatty acid is attached to a primary alcohol, or 2-monoacylglycerols, where the fatty acid is attached to the secondary alcohol.
Synthesis
Monoglycerides are produced both biologically and industrially. They are naturally present at very low levels (0.1-0.2%) in some seed oils such as olive oil, rapeseed oil and cottonseed oil. They are biosynthesized by the enzymatic hydrolysis of triglycerides by lipoprotein lipase and the enzymatic hydrolysis of diglycerides by diacylglycerol lipase; or as an intermediate in the alkanoylation of glycerol to form fats. Several monoglycerides are pharmacologically active (e.g. 2-oleoylglycerol, 2-arachidonoylglycerol).
Industrial production is primarily achieved by a glycerolysis reaction between triglycerides and glycerol. The commercial raw materials for the production of monoacylglycerols may be either vegetable oils or animal fats.
Uses
Monoglycerides are primarily used as surfactants, usually in the form of emulsifiers. Together with diglycerides, monoglycerides are commonly added to commercial food products in small quantities as "E471" (see also Mono- and diglycerides of fatty acids), which helps to prevent mixtures of oils and water from separating. They are also often found in bakery products, beverages, ice cream, chewing gum, shortening, whipped toppings, margarine, spreads, peanut butter, and confections. In bakery products, monoglycerides are useful in improving loaf volume and texture, and as antistaling agents. Monoglycerides are also used to enhance the physical stability of milk beverages towards creaming.
Examples
See also
Diglyceride
Dietary sources of fatty acids, their digestion, absorption, transport in the blood and storage
Lipid
Triglyceride
Ultra-processed food
References
Fatty acid esters
Lipids | Monoglyceride | Chemistry | 536 |
16,852,727 | https://en.wikipedia.org/wiki/Lifelog | A lifelog is a personal record of one's daily life in a varying amount of detail, for a variety of purposes. The record contains a comprehensive dataset of a human's activities. The data could be used to increase knowledge about how people live their lives. In recent years, some lifelog data has been automatically captured by wearable technology or mobile devices. People who keep lifelogs about themselves are known as lifeloggers (or sometimes lifebloggers or lifegloggers).
The sub-field of computer vision that processes and analyses visual data captured by a wearable camera is called "egocentric vision" or egography.
Examples
A known lifelogger was Robert Shields, who manually recorded 25 years of his life from 1972 to 1997, at 5-minute intervals. This record resulted in a 37-million word diary, thought to be the longest ever written.
Steve Mann was the first person to capture continuous physiological data along with a live first-person video from a wearable camera. Starting in 1994, Mann continuously transmitted his life — 24 hours a day, 7 days a week. Using a wearable camera and wearable display, he invited others to see what he was looking at, as well as to send him live feeds or messages in real-time. In 1998 Mann started a community of lifeloggers (also known as lifebloggers or lifegloggers) which has grown to more than 20,000 members. Throughout the 1990s Mann presented this work to the U.S. Army, with two visits to US Natick Army Research Labs.
In 1996, Jennifer Ringley started JenniCam, broadcasting photographs from a webcam in her college bedroom every fifteen seconds; the site was turned off in 2003.
"We Live In Public" was a 24/7 Internet conceptual art experiment created by Josh Harris in December 1999. With a format similar to TV's Big Brother, Harris placed tapped telephones, microphones and 32 robotic cameras in the home he shared with his girlfriend, Tanya Corrin. Viewers talked to Harris and Corrin in the site's chatroom. Harris recently launched the online live video platform, Operator 11.
In 2001, Kiyoharu Aizawa discussed the problem of how to handle a huge amount of videos continuously captured in one's life and presented an automatic summarization.
The lifelog DotComGuy ran throughout 2000, when Mitch Maddox lived the entire year without leaving his house. After Joi Ito's discussion of Moblogging, which involves web publishing from a mobile device, came Gordon Bell's MyLifeBits (2004), an experiment in digital storage of a person's lifetime, including full-text search, text/audio annotations, and hyperlinks.
In 2003, a project called LifeLog was started at the Defense Advanced Research Projects Agency (DARPA), under the supervision of Douglas Gage. This project would combine several technologies to record life activities, in order to create a life diary. Shortly after, the notion of lifelogging was identified as a technology and cultural practice that could be exploited by governments, businesses or militaries through surveillance. The DARPA lifelogging project was cancelled by 2004, but this project helped to popularize the idea, and the usage of the term lifelogging in everyday discourse. It contributed to the growing acceptance of using technology for augmented memory.
In 2003, Kiyoharu Aizawa introduced a context-based video retrieval system that was designed to handle data continuously captured from various sources, including a wearable camera, a microphone, and multiple sensors such as a GPS receiver, an acceleration sensor, a gyro sensor, and a brain-wave analyzer. By extracting contextual information from these inputs, the system can retrieve specific scenes captured by the wearable camera.
In 2004, conceptual media artist Alberto Frigo began tracking everything his right hand (his dominant hand) had used, then began adding different tracking and documentation projects. His tracking was done manually rather than using technology.
In 2004 Arin Crumley and Susan Buice met online and began a relationship. They decided to forgo verbal communication during the initial courtship and instead spoke to each other via written notes, sketches, video clips, and Myspace. They went on to create an autobiographical film about their experience, called Four Eyed Monsters. It was part-documentary, part-narrative, with a few scripted elements added. They went on to produce a two-season podcast about the making of the film to promote it.
In 2007 Justin Kan began streaming continuous live video and audio from a webcam attached to a cap, beginning at midnight on March 19, 2007. He created a website, Justin.tv, for the purpose. He described this procedure as "lifecasting".
In recent years, with the advent of smartphones and similar devices, lifelogging became much more accessible. For instance, UbiqLog and Experience Explorer employ mobile sensing to perform life logging, while other lifelogging devices, like the Autographer, use a combination of visual sensors and GPS tracking to simultaneously document one's location and what one can see. Lifelogging was popularized by the mobile app Foursquare, which had users "check in" as a way of sharing and saving their location; this later evolved into the popular lifelogging app, Swarm.
Life caching
Life caching refers to the social act of storing and sharing one's entire life events in an open and public forum such as Facebook. Modern life caching is considered a form of social networking and typically takes place on the internet. The term was introduced in 2005 by trendwatching.com, in a report predicting this would soon be a trend, given the availability of relevant technology. However, life log information is privacy-sensitive, and therefore sharing such information is associated with risks.
Mobile and wearable apps
To assist in their efforts of tracking, some lifeloggers use mobile devices and apps. Utilizing the GPS and motion processors of digital devices enables lifelogging apps to easily record metadata related to daily activities. Myriad lifelogging apps are available in the App Store (iOS), Google Play and other app distribution platforms, but some commonly cited apps include: Instant, Reporter, Journey, Path, Moves, and HeyDay, insight for Wear (a smartwatch app).
Xperia also has a native mobile application which is called Lifelog. The app works standalone but gets enriched when used with Sony Smart Bands.
Swarm is a lifelogging app that motivates users to check-in, recording every place they've visited, while inspiring them to visit new places.
See also
Cathal Gurrin
Diary
Digital footprint
Dymaxion Chronofile
Egocentric vision
Gordon Bell
Lifecasting (video stream)
Lifestreaming
Microsoft SenseCam
MyLifeBits
Narrative Clip
Personal knowledge base
Quantified self
Smartglasses
Sousveillance
Wearable computer
References
Bibliography
External links
"On the Record, All the Time" from the Chronicle of Higher Education
"The Data-Driven life" from The New York Times
Lifelogging, An Inevitability (Kevin Kelly)
Lifelogging | Lifelog | Technology | 1,466 |
3,859,329 | https://en.wikipedia.org/wiki/Music%20store | A music store or musical instrument store is a retail business that sells musical instruments and related equipment and accessories. Some music stores sell additional services, such as music lessons, music instrument or equipment rental, or repair services.
Products
Music stores range from full-line stores that sell products across all musical instrument and even pro audio categories (sometimes including DJ equipment and visual stage components such as lights or fog machines), to music stores that focus on a subset of those categories (e.g. a store that sells acoustic and digital pianos, or a store that specializes only in drums and percussion), to highly-specialized stores focused on a single product type (e.g. a guitar boutique focused on vintage collectible guitars, or a sheet music store). In the United States and Canada, another common distinction exists between “Band and Orchestra” stores that cater to the needs of school music programs and their students, versus “Combo” stores that focus on instruments and equipment used by a rock band.
Music stores arose to service the needs of the local community. This included not only individual amateur musicians, but schools from elementary to college level, civic bands and orchestras, churches, and entertainment ensembles that performed at events of the community and its organizations. In service of this diverse clientele, store owners might focus on some specialty or niche market (pianos, sheet music, percussion). Instruments might be purchased outright, leased or rented. Specific or non-stock items could be ordered through the store.
More commonly, music stores offered some variety, depending upon the tastes and resources of the owners and the desires of their clientele (whether actual or sought-after). This might include some mixture of fretted instruments (electric guitars, acoustic guitars, mandolins, ukuleles); brass, woodwind, and violin-family instruments; drums and percussion; pianos and organs; consumable items (strings, reeds, drum sticks); accessories (metronomes, music stands); and sheet music.
In more recent decades, stores began to include instrument amplifiers, guitar effects units, electronic keyboards, microphones, sound recording equipment and digital audio software. Recorded musical instruction became a niche, beginning with LPs and evolving through formats of cassette tape, VHS video, compact disk, and DVD.
Some music stores provided instrument maintenance and repair, music lessons, or leasing of instruments and equipment.
Specialized stores
In the 2010s, general music stores have had to face competition from online music stores, which offer a huge selection of instruments and equipment.
Electric guitars
Electric guitars started appearing in the 1930s. Mainstream electric guitars stores sell well-known brands like Gibson, Fender and Ibanez. Most guitar stores sell six-string models, bass guitars, left handed guitars and electric guitar packages for beginners, which typically include a budget-priced electric guitar, a small practice amplifier, a strap and picks.
Guitar World magazine states that since guitar stores require patrons to try out guitars and amplifiers on the premises, some guitar players are nervous about playing in front of the store staff and other patrons. A University Press of Kentucky book on women in music states that customers did not treat a woman who worked at a guitar store as though she knew anything about guitars until she used specialized guitar terminology.
Acoustic guitars
Acoustic guitar sections are one of the main areas in many music stores. Some stores create a separate area with a door, both to create a quieter area for customers to play the instruments and to enable humidifiers to be used. Famous acoustic guitar brands include C. F. Martin & Company, Taylor Guitars, Fender, Gibson, Guild, Washburn, and Lowden Guitars.
Piano
One common specialty store is the piano store, which typically sells a range of upright pianos and grand pianos. In the 2010s, some piano stores sell high-end digital pianos, including grand pianos equipped with a digital player piano mechanism that can play back a recorded performance by activating the hammers.
Piano sales are on the decline, in part because high-quality, properly-maintained pianos can remain playable for 60 to 80 years after their original purchase. Some piano stores offer rental of new pianos; as well, some piano stores sell used pianos.
The high price of pianos is one factor that is causing the closing of piano stores: "A good grand piano from a respected name costs about as much as a luxury automobile", and as such, children (and their parents) are choosing less expensive instruments, such as electronic keyboards or stringed instruments.
Though sales of acoustic pianos and quality keyboard instruments continue to decline in the United States, in China "piano sales are booming", with most instruments being intended for home use. This rise in sales is in part because the costly instruments are viewed as a status symbol in China.
Violin family
Another specialty shop is the "violin shop", which, despite its name, often sells various violin family instruments (violin, viola, cello and often double bass, and the bows, strings, rosin, chinrests, and other accessories used with these instruments). Violin shops are often operated by luthiers (violinmakers) who make violin family instruments and bows for sale. Luthiers also do maintenance and repairs on violin family instruments and bows.
Sheet music
Sheet music stores sell printed classical music for songs, instrumental solo pieces, chamber music, and scores for major symphonies and choral works, along with instrumental method books, "etudes" (studies), and graded musical exercises. Many sheet music stores also carry printed music for popular genres such as rock, pop, and musical theatre, including individual songs and collections of songs grouped by artist, musical, or genre. Music for guitarists or electric bass players may be in tablature notation, which depicts where on the instrument the performer should play a line. In the 2010s, sheet music stores often sell legal, copyright-compliant jazz fake books. Sheet music stores often carry some practice accessories, such as metronomes, music stands, and tuning forks.
Pro audio
Pro audio stores sell and in many cases, rent sound reinforcement system components, PA systems, microphones and other audio gear. Some stores also rent "backline" musical gear, such as stage pianos and bass amps.
Organ stores
Prior to the widespread availability of lightweight electronic clonewheel organs in the 1980s and 1990s that emulate the sound of a heavy, electromechanical Hammond organ, many cities had organ stores which sold large electric and electronic theatre organs and spinet organs made by Hammond, Lowrey and other manufacturers. These organs were sold for use in private homes and in churches; electric and electronic organs were popular for churches, because they cost significantly less than a pipe organ.
Used stores
Music stores may sell used, possibly vintage or collectible instruments and sound gear.
Used-gear stores may employ or offer a consignment model, in which the store (acting as the consignee) sells the piece on behalf of the actual owner (the consignor) and takes a portion of the purchase price.
Stores that primarily sell used equipment may also carry some new merchandise, at minimum guitar strings, patch cords, and microphone cables. In the United States, nationwide chains such as Music Go Round (about 30% new gear) and Sam Ash Music carry on a steady trade of used instruments and equipment.
Online stores
Since the 2000s, many music retailers have begun to adopt an online model. Retailers sell instruments and sound gear through websites that contain photos of the equipment, which are grouped into categories (e.g., electric guitars, amplifiers, PA gear). Each photo of a product is accompanied by the name and model number of each item, a description of each product's features, and the price. The sophistication of online music stores varies. Some online music stores have a single photo of the item, the product name and price, and a few bullets about the features. On the other hand, some online music stores have interactive Web 2.0 features, such as 360-degree virtual reality-style images of the products, in which the viewer can "turn" the product around to see the back and sides, online comments sections where customers can review their purchases and additional music-related content, such as articles on musical instruments or sound gear written by store staff. Patrons pay electronically at online music stores using a credit card, PayPal or other electronic payment systems. The goods are shipped through the mail or by express delivery companies such as FedEx. Some music stores sell their products solely online. In other cases, some stores operate both a "brick and mortar" store (or chain) and an online store.
References
Music industry
Musical instruments
Music technology
Sound production technology
Sound recording technology
Musical instrument retailers
Retailers by type of merchandise sold | Music store | Technology | 1,759 |
61,092,979 | https://en.wikipedia.org/wiki/Macaroons%20%28computer%20science%29 | In computer security, macaroons are authorization credentials that support decentralized delegation between principals.
Macaroons are used in a variety of systems, including the Ubuntu Snappy package manager, the HyperDex data store, the Matrix communication protocol, and the Python Package Index.
Claims
A macaroon is composed of a series of "caveats", for example:
may upload files to /user/A/ (issued by server)
only to /user/A/album/123 (derived by A)
only GIFs, up to 1MB (derived by B)
until noon today (derived by C)
The macaroon model does not specify a language for these caveats; the original paper proposes a model of subjects and rights, but the details are left to individual implementations.
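To make the construction concrete, below is a minimal sketch of the chained-HMAC mechanism (our own illustration, not code from the original paper or any particular library; real implementations such as pymacaroons also handle locations, third-party caveats, and serialization). The server mints a signature as an HMAC of the identifier under its secret root key; each caveat added by a holder replaces the signature with an HMAC of the previous signature, so authority can be narrowed but never widened:

    import hashlib
    import hmac

    def _hmac(key: bytes, msg: bytes) -> bytes:
        return hmac.new(key, msg, hashlib.sha256).digest()

    def mint(root_key: bytes, identifier: bytes) -> dict:
        # Only the server knows root_key; sig = HMAC(root_key, identifier).
        return {"id": identifier, "caveats": [], "sig": _hmac(root_key, identifier)}

    def add_caveat(m: dict, caveat: bytes) -> dict:
        # Any holder can attenuate: the new signature chains off the old one.
        return {"id": m["id"],
                "caveats": m["caveats"] + [caveat],
                "sig": _hmac(m["sig"], caveat)}

    def verify(root_key: bytes, m: dict, satisfied) -> bool:
        # The server recomputes the chain and checks every caveat predicate.
        sig = _hmac(root_key, m["id"])
        for c in m["caveats"]:
            if not satisfied(c):
                return False
            sig = _hmac(sig, c)
        return hmac.compare_digest(sig, m["sig"])

For example, a macaroon minted for "may upload files to /user/A/" can be attenuated by A with add_caveat(m, b"only to /user/A/album/123") before being handed to B; verification succeeds only if the signature chain matches and every caveat predicate holds.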
Related technologies
Macaroons are similar to some other technologies.
Compared to JSON Web Tokens (JWT):
The holder of a macaroon can issue a sub-macaroon with reduced authority, while a JWT is fixed once issued
A macaroon is notably longer than a JWT
A macaroon is equivalent to a signed JWT, but offers no equivalent of an encrypted JWT
Compared to certificates:
Macaroons are based on a symmetric model, while certificates are based on an asymmetric one
Macaroons are computationally cheaper and require simpler cryptographic primitives
Using a macaroon (sent to a server) can disclose some private information held by the macaroon holder, meaning that the server must be trusted; using a certificate means signing a payload with a private key that is never sent to the server, so communication with untrusted servers is less risky
Invalidation
Implementations need to decide whether the entire macaroon tree is invalidated at once from its root, the server's secret key, or whether intermediate macaroons can be blacklisted, comparable to time-bound JWTs.
See also
Authorization
HTTP cookie
OAuth
OpenID
Simple public-key infrastructure
References
Computer access control | Macaroons (computer science) | Technology,Engineering | 406 |
18,503,091 | https://en.wikipedia.org/wiki/HD%20215114 | HD 215114 is a double star in the equatorial constellation of Aquarius. As of 2012, the pair have an angular separation of 2.29″ along a position angle of 306.4°.
References
External links
Image HD 215114
Aquarius (constellation)
215114
Spectroscopic binaries
A-type main-sequence stars
8645
Durchmusterung objects
112168 | HD 215114 | Astronomy | 82 |
20,998,975 | https://en.wikipedia.org/wiki/Channel-stopper | In semiconductor device fabrication, a channel-stopper (or channel-stop) is a region in a semiconductor device produced by implantation or diffusion of ions, by growing or patterning silicon oxide, or by other isolation methods. Its primary function is to limit the spread of the channel area or to prevent the formation of parasitic channels (inversion layers).
References
Semiconductor device fabrication | Channel-stopper | Materials_science | 75 |
7,082,603 | https://en.wikipedia.org/wiki/Aqueous%20Wastes%20from%20Petroleum%20and%20Petrochemical%20Plants | Aqueous Wastes from Petroleum and Petrochemical Plants is a book about the composition and treatment of the various wastewater streams produced in the hydrocarbon processing industries (i.e., oil refineries, petrochemical plants and natural gas processing plants). When it was published in 1967, it was the first book devoted to that subject.
The book is notable for being the first technical publication of a method for the rigorous tray-by-tray design of steam distillation towers for removing hydrogen sulfide from oil refinery wastewaters. Such towers are commonly referred to as sour water strippers. The design method was also presented at a World Petroleum Congress Meeting shortly after the book was published.
The subjects covered in the book include wastewater pollutants and the pertinent governmental regulations, oil refinery and petrochemical plant wastewater effluents, treatment methods, miscellaneous effluents, data on the cost of various wastewater treatment methods, and an extensive reference list.
Availability in libraries
The book became a classic in its field and is available in major university, public, and industrial libraries worldwide. The book has no ISBN because ISBNs were not yet in use in 1967. The Library of Congress catalog number (LCCN) is 67019834 and the British Library system number is 012759691. It is no longer in print, but photocopies can be obtained from the ProQuest Company's Books On Demand service.
Book reviews
One of the book reviews is that of Dr. Nelson V. Nemerow, a Civil Engineering professor at Syracuse University in New York state, published in 1968 in the American Chemical Society's journal Environmental Science and Technology.
References
1967 in the environment
Engineering books
Oil refining
Books about petroleum
Environmental non-fiction books
Technology books | Aqueous Wastes from Petroleum and Petrochemical Plants | Chemistry | 360 |
1,741,558 | https://en.wikipedia.org/wiki/SCIgen | SCIgen is a paper generator that uses context-free grammar to randomly generate nonsense in the form of computer science research papers. Its original data source was a collection of computer science papers downloaded from CiteSeer. All elements of the papers are formed, including graphs, diagrams, and citations. Created by scientists at the Massachusetts Institute of Technology, its stated aim is "to maximize amusement, rather than coherence." Originally created in 2005 to expose the lack of scrutiny of submissions to conferences, the generator subsequently became used, primarily by Chinese academics, to create large numbers of fraudulent conference submissions, leading to the retraction of 122 SCIgen generated papers and the creation of detection software to combat its use.
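The core technique is simple to sketch (the toy grammar below is our own illustration, not SCIgen's actual grammar or code): nonterminals are expanded by randomly chosen productions until only terminal words remain, producing grammatical-looking nonsense:

    import random

    # Toy grammar: keys are nonterminals, values are lists of productions.
    GRAMMAR = {
        "SENTENCE": [["We", "ADVERB", "VERB", "the", "ADJECTIVE", "NOUN", "."]],
        "ADVERB": [["conclusively"], ["probabilistically"], ["lazily"]],
        "VERB": [["refine"], ["visualize"], ["synthesize"]],
        "ADJECTIVE": [["stochastic"], ["cacheable"], ["interposable"]],
        "NOUN": [["framework"], ["methodology"], ["hash table"]],
    }

    def expand(symbol: str) -> str:
        if symbol not in GRAMMAR:                 # terminal: a literal word
            return symbol
        production = random.choice(GRAMMAR[symbol])
        return " ".join(expand(s) for s in production)

    print(expand("SENTENCE"))  # e.g. "We lazily refine the stochastic framework ."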
Sample output
Opening abstract of Rooter: A Methodology for the Typical Unification of Access Points and Redundancy:
"Many physicists would agree that, had it not been for congestion control, the evaluation of web browsers might never have occurred. In fact, few hackers worldwide would disagree with the essential unification of voice-over-IP and public-private key pair. In order to solve this riddle, we confirm that SMPs can be made stochastic, cacheable, and interposable."
Prominent results
In 2005, a paper generated by SCIgen, Rooter: A Methodology for the Typical Unification of Access Points and Redundancy, was accepted as a non-reviewed paper to the 2005 World Multiconference on Systemics, Cybernetics and Informatics (WMSCI) and the authors were invited to speak. The authors of SCIgen described their hoax on their website, and it soon received great publicity when picked up by Slashdot. WMSCI withdrew their invitation, but the SCIgen team went anyway, renting space in the hotel separately from the conference and delivering a series of randomly generated talks on their own "track". The organizer of these WMSCI conferences is Professor Nagib Callaos. From 2000 until 2005, the WMSCI was also sponsored by the Institute of Electrical and Electronics Engineers. The IEEE stopped granting sponsorship to Callaos from 2006 to 2008.
Submitting the paper was a deliberate attempt to embarrass WMSCI, which the authors claim accepts low-quality papers and sends unsolicited requests for submissions in bulk to academics. As the SCIgen website states:
Computing writer Stan Kelly-Bootle noted in ACM Queue that many sentences in the "Rooter" paper were individually plausible, which he regarded as posing a problem for automated detection of hoax articles. He suggested that even human readers might be taken in by the effective use of jargon ("The pun on root/router is par for MIT-graduate humor, and at least one occurrence of methodology is mandatory") and attribute the paper's apparent incoherence to their own limited knowledge. His conclusion was that "a reliable gibberish filter requires a careful holistic review by several peer domain experts".
Schlangemann
The pseudonym "Herbert Schlangemann" was used to publish fake scientific articles in international conferences that claimed to practice peer review. The name is taken from the Swedish short film Der Schlangemann.
In 2008, in response to a series of Call-for-Paper e-mails, SCIgen was used to generate a false scientific paper titled Towards the Simulation of E-Commerce, using "Herbert Schlangemann" as the author. The article was accepted at the 2008 International Conference on Computer Science and Software Engineering (CSSE 2008), co-sponsored by the IEEE, to be held in Wuhan, China, and the author was invited to be a session chair on grounds of his fictional Curriculum Vitae. The official review comment: "This paper presents cooperative technology and classical Communication. In conclusion, the result shows that though the much-touted amphibious algorithm for the refinement of randomized algorithms is impossible, the well-known client-server algorithm for the analysis of voice-over-IP by Kumar and Raman runs in Ω(n) time. The authors can clearly identify important features of visualization of DHTs and analyze them insightfully. It is recommended that the authors should develop ideas more cogently, organizes them more logically, and connects them with clear transitions." The paper was available for a short time in the IEEE Xplore Database, but was then removed. The entire story is described in the official "Herbert Schlangemann" blog, and it also received attention in Slashdot and the German-language technology-news site Heise Online.
In 2009, the same incident happened and Herbert Schlangemann's latest fake paper PlusPug: A Methodology for the Improvement of Local-Area Networks was accepted for oral presentation at the 2009 International Conference on e-Business and Information System Security (EBISS 2009), also co-sponsored by IEEE, to be held again in Wuhan, China.
In all cases, the published papers were withdrawn from the conferences' proceedings, and the conference organizing committee as well as the names of the keynote speakers were removed from their websites.
List of works with notable acceptance
In conferences
Rob Thomas: Rooter: A Methodology for the Typical Unification of Access Points and Redundancy, 2005 for WMSCI (see above)
Mathias Uslar's paper was accepted to the IPSI-BG conference.
Professor Genco Gulan published a paper in the 3rd International Symposium of Interactive Media Design.
A 2013 scientometrics paper demonstrated that at least 85 SCIgen papers have been published by IEEE and Springer. Over 120 SCIgen papers were removed according to this research.
In journals
Students at Iran's Sharif University of Technology published a paper in Elsevier's Journal of Applied Mathematics and Computation. The students wrote under the surname "MosallahNejad", which translates literally from Persian language (in spite of not being a traditional Persian name) as "from an Armed Breed". The paper was subsequently removed when the publishers were informed that it was a joke paper.
Mikhail Gelfand published a translation of the "Rooter" article in the Russian-language Journal of Scientific Publications of Aspirants and Doctorants in August 2008. Gelfand was protesting against the journal, which was apparently not peer reviewed and was being used by Russian PhD candidates to publish in an "accredited" scientific journal, charging them 4000 Rubles to do so. The accreditation was revoked two weeks later. (See Dissernet for related information.)
Springer Science+Business Media and IEEE were also the subject of similar pranks.
Spoofing Google Scholar and h-index calculators
Refereeing performed on behalf of the Institute of Electrical and Electronics Engineers has also been subject to criticism after fake papers were discovered in conference publications, most notably by Labbé and a researcher using the pseudonym of Schlangemann.
In a 2010 paper, Cyril Labbé from Grenoble University demonstrated the vulnerability of h-index calculations based on Google Scholar output by feeding it a large set of SCIgen-generated documents that cited each other, effectively an academic link farm. Using this method, he managed, for instance, to rank the fictional author "Ike Antkare" ahead of Albert Einstein.
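The quantity being gamed is simple to compute: the h-index is the largest h such that at least h papers have at least h citations each. A short sketch (ours) shows why a farm of n mutually citing fake papers hands their shared "author" an h-index of n − 1:

    def h_index(citations: list[int]) -> int:
        # Largest h such that at least h papers have >= h citations each.
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(counts, start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    # 100 fake papers all citing one another: each gets 99 citations.
    print(h_index([99] * 100))  # -> 99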
2013 retractions
In 2013, over 122 published conference papers created by SCIgen were retracted by Springer and the IEEE. Unlike previous submissions that were intended as pranks, these submissions were largely made by Chinese academics, who were using SCIgen papers to boost their publication records.
SciDetect
In 2015, SciDetect was released by Springer. This software, developed by Cyril Labbé, is designed to automatically detect papers generated by SCIgen.
2021 report
In 2021, a study was published on 243 SCIgen papers that had been published in the academic literature. They found that SCIgen papers made up 75 per million papers (<0.01%) in information science, and that only a small fraction of the detected papers had been dealt with.
See also
References
Further reading
External links
Copy of the fake paper: Towards the Simulation of E-Commerce by Herbert Schlangemann
SCIgen - An Automatic CS Paper Generator
SCIgen detection website
Academic scandals
Applications of artificial intelligence
Formal languages
Hoaxes in science
Natural language generation
Academic publishing | SCIgen | Mathematics | 1,594 |
58,535,251 | https://en.wikipedia.org/wiki/Aquila%20X-1 | Aquila X-1 (frequently abbreviated to Aql X-1) is a low-mass X-ray binary (LMXB) and the most luminous X-ray source in the constellation Aquila. It was first observed by the satellite Vela 5B, which detected several outbursts from this source between 1969 and 1976. Its optical counterpart is variable, so it was named V1333 Aql according to IAU standards. The system hosts a neutron star that accretes matter from a main-sequence star of spectral type K4. The binary's orbital period is 18.9479 hours.
The neutron star radiation flux is slightly variable due to the nuclear burning of the accreted helium on the surface.
References
K-type main-sequence stars
Neutron stars
X-ray binaries
Aquila (constellation)
Aquilae, V1333
J19111604+0035058 | Aquila X-1 | Astronomy | 193 |
208,151 | https://en.wikipedia.org/wiki/10 | 10 (ten) is the even natural number following 9 and preceding 11. Ten is the base of the decimal numeral system, the most common system of denoting numbers in both spoken and written language.
Linguistics
A collection of ten items (most often ten years) is called a decade.
The ordinal adjective is decimal; the distributive adjective is denary.
Increasing a quantity by one order of magnitude is most widely understood to mean multiplying the quantity by ten.
To reduce something by one tenth is to decimate. (In ancient Rome, the killing of one in ten soldiers in a cohort was the punishment for cowardice or mutiny; or, one-tenth of the able-bodied men in a village as a form of retribution, thus causing a labor shortage and threat of starvation in agrarian societies.)
Mathematics
Ten is the smallest noncototient number. There are exactly 10 small Pisot numbers that do not exceed the golden ratio.
Decagon
A ten sided polygon is called a decagon.
Science
The metric system is based on the number 10, so converting units is done by adding or removing zeros (e.g. 1 centimetre = 10 millimetres, 1 decimetre = 10 centimetres, 1 metre = 100 centimetres, 1 dekametre = 10 metres, 1 kilometre = 1,000 metres).
Mysticism
In Pythagoreanism, the number 10 played an important role and was symbolized by the tetractys.
See also
List of highways numbered 10
Notes
References
External links
Integers | 10 | Mathematics | 321 |
959,928 | https://en.wikipedia.org/wiki/Getting%20Things%20Done | Getting Things Done (GTD) is a personal productivity system developed by David Allen and published in a book of the same name. GTD is described as a time management system. Allen states "there is an inverse relationship between things on your mind and those things getting done".
The GTD method rests on the idea of moving all items of interest, relevant information, issues, tasks and projects out of one's mind by recording them externally and then breaking them into actionable work items with known time limits. This allows one's attention to focus on taking action on each task listed in an external record, instead of recalling them intuitively.
The book was first published in 2001; a revised edition was released in 2015 to reflect the changes in information technology during the preceding decade.
Themes
Allen first demonstrates stress reduction from the method with the following exercise, centered on a task that has an unclear outcome or whose next action is not defined. Allen calls these sources of stress "open loops", "incompletes", or "stuff".
The most annoying, distracting, or interesting task is chosen, and defined as an "incomplete".
A description of the successful outcome of the "incomplete" is written down in one sentence, along with the criteria by which the task will be considered completed.
The next step required to approach completion of the task is written down.
A self-assessment is made of the emotions experienced after completing the steps of this process.
He claims stress can be reduced and productivity increased by putting reminders about everything one is not working on into a trusted system external to one's mind. In this way, one can work on the task at hand without distraction from the "incompletes". The system in GTD requires one to have the following tools within easy reach:
An inbox
A trash can
A filing system for reference material
Several lists (detailed below)
A calendar (either a paper-based or digital calendar)
These tools can be physical or electronic as appropriate (e.g., a physical "in" tray or an email inbox). Then, as "stuff" enters one's life, it is captured in these tools and processed with the following workflow.
Workflow
The GTD workflow consists of five stages: capture, clarify, organize, reflect, and engage. (The first edition used the names collect, process, organize, plan, and do; the descriptions of the stages are similar in both editions.) Once all the material ("stuff") is captured (or collected) in the inbox, each item is clarified and organized by asking and answering a series of questions about it in turn. As a result, items end up in one of the eight end points listed below (a minimal code sketch of this decision logic follows the list):
in the trash
on the someday/maybe list
in a neat reference filing system
on a list of tasks, with the outcome and next action defined if the "incomplete" is a "project" (i.e., if it will require two or more steps to complete it)
immediately completed and checked off if it can be completed in under two minutes
delegated to someone else and, if one wants a reminder to follow up, added to a "waiting for" list
on a context-based "next action" list if there is only one step to complete it
on one's calendar
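Below is a minimal sketch of that decision logic as code (our own illustration, not from Allen's book; the boolean fields stand in for the questions asked of each item, and the eight return values correspond to the eight end points above):

    from dataclasses import dataclass

    @dataclass
    class Item:
        description: str
        actionable: bool
        someday: bool = False          # might do it later
        reference: bool = False        # worth filing as reference material
        multi_step: bool = False       # a "project": two or more steps
        two_minutes: bool = False      # doable in under two minutes
        delegate: bool = False         # someone else should do it
        time_specific: bool = False    # tied to a date or time

    def organize(item: Item) -> str:
        if not item.actionable:
            if item.someday:
                return "someday/maybe list"
            if item.reference:
                return "reference filing system"
            return "trash"
        if item.multi_step:
            return "project list (define outcome and next action)"
        if item.two_minutes:
            return "do it immediately"
        if item.delegate:
            return "waiting-for list"
        if item.time_specific:
            return "calendar"
        return "context-based next-action list"

    print(organize(Item("Reply to invoice email", actionable=True, two_minutes=True)))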
Empty one's inbox or inboxes daily or at least weekly ("in" to empty). Do not use one's inbox as a "to do" list. Do not put clarified items back into the inbox. Emptying one's inbox does not mean finishing everything. It just means applying the "capture, clarify, organize" steps to all one's "stuff".
Next, reflection (termed planning in the first edition) occurs. Multi-step projects identified above are assigned a desired outcome and a single "next action". Finally, a task from one's task list is worked on ("engage" in the 2nd edition, "do" in the 1st edition) unless the calendar dictates otherwise. One selects which task to work on next by considering where one is (i.e., the "context", such as at home, at work, out shopping, by the phone, at one's computer, with a particular person), time available, energy available, and priority.
Implementation
Because hardware and software are changing so rapidly, GTD is deliberately technologically neutral. (In fact, Allen advises people to start with a paper-based system.) Many task management tools claim to implement GTD methodology, and Allen maintains a list of some technology that has been adopted in or designed for GTD. Some are designated "GTD Enabled", meaning Allen was involved in the design.
Perspective
Allen emphasizes two key elements of GTD—control and perspective. The workflow is the center of the control aspect. The goal of the control processes in GTD is to get everything except the current task out of one's head and into this trusted system external to one's mind. He borrows a simile used in martial arts termed "mind like water". When a small object is thrown into a pool of water, the water responds appropriately with a small splash followed by quiescence. When a large object is thrown in, the water again responds appropriately with a large splash followed by quiescence. The opposite of "mind like water" is a mind that never returns to quiescence but remains continually stressed by every input. With a trusted system and "mind like water" one can have a better perspective on one's life. Allen recommends reflection from six levels, called "Horizons of Focus":
Horizon 5: Life
Horizon 4: Long-term visions
Horizon 3: 1–2 year goals
Horizon 2: Areas of focus and accountability
Horizon 1: Current projects
Ground: Current actions
Unlike some theories, which focus on top-down goal-setting, GTD works in the opposite direction. Allen argues that it is often difficult for individuals to focus on big picture goals if they cannot sufficiently control the day-to-day tasks that they frequently must face. By developing and using the trusted system that deals with day-to-day inputs, an individual can free up mental space to begin moving up to the next level.
Allen recommends scheduling a weekly review, reflecting on the six different levels. The perspective gained from these reviews should drive one's priorities at the project level. Priorities at the project level in turn determine the priority of the individual tasks and commitments gathered during the workflow process. During a weekly review, determine the context for the tasks and put each task on its appropriate list. An example of grouping together similar tasks would be making a list of outstanding telephone calls, or the tasks/errands to perform while out shopping. Context lists can be defined by the set of tools available or by the presence of individuals or groups for whom one has items to discuss or present.
Summary
GTD is based on storing, tracking, and retrieving the information related to each thing that needs to get done. Mental blocks we encounter are caused by insufficient 'front-end' planning. This involves thinking in advance, and generating a series of actions which can later be undertaken without further planning. The mind's "reminder system" is inefficient and seldom (or too often) reminds us of what we need to do at the time and place when we can do it. Consequently, the "next actions" stored by context in the "trusted system" act as an external support which ensures that we are presented with the right reminders at the right time. As GTD relies on external reminders, it can be seen as an application of the theories of distributed cognition or the extended mind.
Reception
In 2004, James Fallows in The Atlantic described GTD's main promise as not only allowing the practitioner to do more work but to feel less anxious about what they can and cannot do.
In 2005, Wired called GTD a "new cult for the info age", describing the enthusiasm for this method among information technology and knowledge workers as a kind of cult following. Allen's ideas have also been popularized through The Howard Stern Show (Stern referenced it daily throughout 2012's summer) and the Internet, especially via blogs such as 43 Folders, Lifehacker, and The Simple Dollar.
In 2005, Ben Hammersley interviewed David Allen for The Guardian article titled "Meet the man who can bring order to your universe", saying: "For me, as with the hundreds of thousands around the world who press the book into their friends' hands with fire in their eyes, Allen's ideas are nothing short of life-changing".
In 2007, Time magazine called Getting Things Done the self-help business book of its time.
In 2007, Wired ran another article about GTD and Allen, quoting him as saying "the workings of an automatic transmission are more complicated than a manual transmission ... to simplify a complex event, you need a complex system".
A 2008 paper in the journal Long Range Planning by Francis Heylighen and Clément Vidal of the Free University of Brussels showed "recent insights in psychology and cognitive science support and extend GTD's recommendations".
See also
Human multitasking
Life hack
Pomodoro Technique
Notes
References
Further reading
External links
Management books
Self-help books
Personal development
Time management
2001 non-fiction books
Penguin Books books | Getting Things Done | Physics,Biology | 1,949 |
23,742,625 | https://en.wikipedia.org/wiki/C9H6O3 | The molecular formula C9H6O3 (molar mass: 162.14 g/mol, exact mass: 162.031694 u) may refer to:
Umbelliferone, also known as 7-hydroxycoumarin or hydrangine
4-Hydroxycoumarin
Molecular formulas | C9H6O3 | Physics,Chemistry | 81 |
62,654,535 | https://en.wikipedia.org/wiki/Derek%20de%20Solla%20Price%20Memorial%20Medal | The Derek de Solla Price Memorial Award, or Price Medal, was conceived to honor Derek J. de Solla Price for his contributions to information science and for his crucial role in developing the field of scientometrics. The award was launched by Tibor Braun, founder of the international journal Scientometrics, and is periodically awarded by the journal to scientists with outstanding contributions to the fields of quantitative studies of science. The awarding ceremony is part of the annual ISSI conference. The first medal was awarded to Eugene Garfield in 1984.
External links
ISSI - Derek de Solla Price Memorial Medal
References
Bibliometrics | Derek de Solla Price Memorial Medal | Mathematics,Technology | 139 |
42,121,389 | https://en.wikipedia.org/wiki/NGC%201847 | NGC 1847 is a young, massive star cluster in the bar of the Large Magellanic Cloud in the constellation Dorado. It was discovered in 1835 by John Herschel with an 18.7-inch reflecting telescope.
References
External links
1847
Dorado
Star clusters
Large Magellanic Cloud | NGC 1847 | Astronomy | 59 |
752,048 | https://en.wikipedia.org/wiki/Zinc%20chloride | Zinc chloride is an inorganic chemical compound with the formula ZnCl2·nH2O, with n ranging from 0 to 4.5, forming hydrates. Zinc chloride, anhydrous and its hydrates, are colorless or white crystalline solids, and are highly soluble in water. Five hydrates of zinc chloride are known, as well as four polymorphs of anhydrous zinc chloride.
All forms of zinc chloride are deliquescent. They can usually be produced by the reaction of zinc or its compounds with some form of hydrogen chloride. The anhydrous compound is a Lewis acid, readily forming complexes with a variety of Lewis bases. Zinc chloride finds wide application in textile processing, metallurgical fluxes, the chemical synthesis of organic compounds such as benzaldehyde, and processes to produce other compounds of zinc.
History
Zinc chloride has long been known, but the currently practiced industrial applications all evolved in the latter half of the 20th century.
An amorphous cement formed from aqueous zinc chloride and zinc oxide was first investigated in 1855 by Stanislas Sorel. Sorel later went on to investigate the related magnesium oxychloride cement, which bears his name.
Dilute aqueous zinc chloride was used as a disinfectant under the name "Burnett's Disinfecting Fluid". From 1839 Sir William Burnett promoted its use as a disinfectant as well as a wood preservative. The Royal Navy conducted trials into its use as a disinfectant in the late 1840s, including during the cholera epidemic of 1849; and at the same time experiments were conducted into its preservative properties as applicable to the shipbuilding and railway industries. Burnett had some commercial success with his eponymous fluid. Following his death however, its use was largely superseded by that of carbolic acid and other proprietary products.
Structure and properties
Unlike other metal dichlorides, zinc dichloride adopts several crystalline forms (polymorphs). Four polymorphs are known: α, β, γ, and δ. Each features Zn2+ centers surrounded in a tetrahedral manner by four chloride ligands.
In the crystallographic data for these polymorphs, a, b, and c are the lattice constants, Z is the number of structure units per unit cell, and ρ is the density calculated from the structure parameters.
The orthorhombic form (δ) rapidly changes to another polymorph upon exposure to the atmosphere; a possible explanation is that ions originating from the absorbed water facilitate the rearrangement. Rapid cooling of molten ZnCl2 gives a glass.
Molten ZnCl2 has a high viscosity at its melting point and a comparatively low electrical conductivity, which increases markedly with temperature. As indicated by a Raman scattering study, the viscosity is explained by the presence of polymeric species. A neutron scattering study indicated the presence of tetrahedral ZnCl4 centers, which requires aggregation of ZnCl2 monomers as well.
Hydrates
A variety of hydrates of zinc chloride are known, with n = 1, 1.33, 2.5, 3, and 4.5. The 1.33-hydrate, previously thought to be the hemitrihydrate, consists of trans-Zn(H2O)4Cl2 centers with the chlorine atoms connected to repeating ZnCl4 chains. The hemipentahydrate, structurally formulated [Zn(H2O)5][ZnCl4], consists of Zn(H2O)5Cl octahedra where the chlorine atom is part of a [ZnCl4]2- tetrahedron. The trihydrate consists of distinct hexaaquazinc(II) cations and tetrachlorozincate anions, formulated [Zn(H2O)6][ZnCl4]. Finally, the heminonahydrate, structurally formulated [Zn(H2O)6][ZnCl4]·3H2O, also consists of distinct hexaaquazinc(II) cations and tetrachlorozincate anions like the trihydrate, but has three extra water molecules. These hydrates can be produced by evaporation of aqueous solutions of zinc chloride at different temperatures.
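As a small worked example of the ZnCl2·nH2O stoichiometry (our own sketch; atomic masses rounded), the molar mass of each hydrate follows directly from n:

    # Molar masses of the hydrates ZnCl2·nH2O discussed above.
    ATOMIC = {"Zn": 65.38, "Cl": 35.45, "H": 1.008, "O": 15.999}

    def hydrate_molar_mass(n: float) -> float:
        """Molar mass of ZnCl2·nH2O in g/mol."""
        zncl2 = ATOMIC["Zn"] + 2 * ATOMIC["Cl"]
        water = 2 * ATOMIC["H"] + ATOMIC["O"]
        return zncl2 + n * water

    for n in (0, 1, 1.33, 2.5, 3, 4.5):
        print(n, round(hydrate_molar_mass(n), 2))  # e.g. 0 -> 136.28 g/mol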
Preparation and purification
Historically, zinc chlorides were prepared from the reaction of hydrochloric acid with zinc metal or zinc oxide. Aqueous acids cannot be used to produce anhydrous zinc chloride. According to an early procedure, a suspension of powdered zinc in diethyl ether is treated with hydrogen chloride, followed by drying. The overall method remains useful in industry, but without the solvent:

Zn + 2 HCl → ZnCl2 + H2

Aqueous solutions may be readily prepared similarly by treating Zn metal, zinc carbonate, zinc oxide, or zinc sulfide with hydrochloric acid:

ZnCO3 + 2 HCl → ZnCl2 + H2O + CO2
ZnO + 2 HCl → ZnCl2 + H2O
ZnS + 2 HCl → ZnCl2 + H2S
Hydrates can be produced by evaporation of an aqueous solution of zinc chloride. The temperature of the evaporation determines which hydrate forms. For example, evaporation at room temperature produces the 1.33-hydrate; lower evaporation temperatures produce higher hydrates.
Commercial samples of zinc chloride typically contain water and products from hydrolysis as impurities. Laboratory samples may be purified by recrystallization from hot dioxane. Anhydrous samples can be purified by sublimation in a stream of hydrogen chloride gas, followed by heating the sublimate to 400 °C in a stream of dry nitrogen gas. A simple method relies on treating the zinc chloride with thionyl chloride.
Reactions
Chloride complexes
A number of salts containing the tetrachlorozincate anion, [ZnCl4]2−, are known. "Caulton's reagent", [V2Cl3(THF)6]2[Zn2Cl6], which is used in organic chemistry, is an example of a salt containing the [Zn2Cl6]2− anion. The compound Cs3ZnCl5 contains tetrahedral [ZnCl4]2− and Cl− anions, so the compound is not caesium pentachlorozincate, but caesium tetrachlorozincate chloride. No compounds containing the [ZnCl6]4− ion (hexachlorozincate ion) have been characterized. A chlorozincate also crystallizes from a solution of zinc chloride in hydrochloric acid; it contains a polymeric anion (Zn2Cl5−)n with balancing monohydrated hydronium ions, H5O2+.
Adducts
The 1:2 adduct with tetrahydrofuran, ZnCl2(thf)2, illustrates the tendency of zinc chloride to form 1:2 adducts with weak Lewis bases. Being soluble in ethers and lacking acidic protons, this complex is used in the synthesis of organozinc compounds. A related 1:2 complex is ZnCl2(NH2OH)2 (zinc dichloride di(hydroxylamine)). Known as Crismer's salt, this complex releases hydroxylamine upon heating. The distinctive ability of aqueous solutions of ZnCl2 to dissolve cellulose is attributed to the formation of zinc-cellulose complexes, illustrating the stability of its adducts. Cellulose also dissolves in molten ZnCl2 hydrate. Overall, this behavior is consistent with Zn2+ as a hard Lewis acid.
When solutions of zinc chloride are treated with ammonia, diverse ammine complexes are produced. In addition to the tetrahedral 1:2 complex ZnCl2(NH3)2, the complex [Zn(NH3)4]Cl2 has also been isolated. The latter contains the [Zn(NH3)4]2+ ion. The species in aqueous solution have been investigated and show that [Zn(NH3)4]2+ is the main species present, with [Zn(NH3)3Cl]+ also present at lower NH3:Zn ratios.
Aqueous solutions of zinc chloride
Zinc chloride dissolves readily in water to give zinc aquo-chloro complexes and some free chloride. Aqueous solutions of ZnCl2 are acidic: a 6 M aqueous solution has a pH of 1. The acidity of aqueous ZnCl2 solutions relative to solutions of other Zn2+ salts (say the sulfate) is due to the formation of tetrahedral chloro-aqua complexes such as [ZnCl3(H2O)]−; most metal dichlorides form octahedral complexes, with stronger O-H bonds. The combination of hydrochloric acid and ZnCl2 gives a reagent known as "Lucas reagent". Such reagents were once used as a test for primary alcohols. Similar reactions are the basis of industrial routes from methanol and ethanol respectively to methyl chloride and ethyl chloride.
In alkali solution, zinc chloride converts to various zinc hydroxychlorides. These include basic salts such as Zn(OH)Cl and the insoluble Zn5(OH)8Cl2·H2O; the latter is the mineral simonkolleite. When zinc chloride hydrates are heated, hydrogen chloride evolves and hydroxychlorides result.
In aqueous solution ZnCl2, like the other zinc halides (bromide, iodide), behaves interchangeably for the preparation of other zinc compounds. These salts give precipitates of zinc carbonate when treated with aqueous carbonate sources:

ZnCl2 + Na2CO3 → ZnCO3 + 2 NaCl
Ninhydrin reacts with amino acids and amines to form a colored compound, "Ruhemann's purple" (RP). Spraying with a zinc chloride solution, which is colorless, forms a 1:1 complex RP:Zn, which is more readily detected as it fluoresces more intensely than RP.
Redox
Anhydrous zinc chloride melts and even boils without any decomposition up to 900 °C. When zinc metal is dissolved in molten ZnCl2 at 500–700 °C, a yellow diamagnetic solution is formed containing the [Zn2]2+ ion, which has zinc in the oxidation state +1. The nature of this dizinc dication has been confirmed by Raman spectroscopy. Although Zn(I) is unusual, mercury, a heavy congener of zinc, forms a wide variety of salts of the analogous [Hg2]2+ ion.
In the presence of oxygen, zinc chloride converts to zinc oxide above 400 °C. Again, this observation indicates the nonoxidation of Zn2+: the zinc remains in the +2 oxidation state.
Zinc hydroxychloride
Concentrated aqueous zinc chloride dissolves zinc oxide to form zinc hydroxychloride, which is obtained as colorless crystals:

ZnCl2 + ZnO + H2O → 2 Zn(OH)Cl
The same material forms when hydrated zinc chloride is heated.
The ability of zinc chloride to dissolve metal oxides (MO) is relevant to the utility of as a flux for soldering. It dissolves passivating oxides, exposing the clean metal surface.
Organic syntheses with zinc chloride
Zinc chloride is an occasional laboratory reagent, often serving as a Lewis acid. A dramatic example is the conversion of methanol into hexamethylbenzene using zinc chloride as the solvent and catalyst:
15 CH3OH → C6(CH3)6 + 3 CH4 + 15 H2O
This kind of reactivity has been investigated for the valorization of C1 precursors.
Examples of zinc chloride as a Lewis acid include the Fischer indole synthesis.
Related Lewis-acid behavior is illustrated by a traditional preparation of the dye fluorescein from phthalic anhydride and resorcinol, which involves a Friedel-Crafts acylation. This transformation has in fact been accomplished even with hydrated samples of zinc chloride. Many examples describe the use of zinc chloride in Friedel-Crafts acylation reactions.
Zinc chloride also activates benzylic and allylic halides towards substitution by weak nucleophiles such as alkenes.
In similar fashion, ZnCl2 promotes selective reduction of tertiary, allylic, or benzylic halides to the corresponding hydrocarbons.
Zinc enolates, prepared from alkali metal enolates and ZnCl2, provide control of stereochemistry in aldol condensation reactions. This control is attributed to chelation at the zinc. In one reported example, the threo product was favored over the erythro by a factor of 5:1 when ZnCl2 was used.
Organozinc precursor
Being inexpensive and anhydrous, ZnCl2 is widely used for the synthesis of many organozinc reagents, such as those used in the palladium-catalyzed Negishi coupling with aryl halides or vinyl halides. The prominence of this reaction was highlighted by the award of the 2010 Nobel Prize in Chemistry to Ei-ichi Negishi.
Rieke zinc, a highly reactive form of zinc metal, is generated by reduction of zinc dichloride with lithium. Rieke Zn is useful for the preparation of polythiophenes and for the Reformatsky reaction.
Uses
Industrial organic chemistry
Zinc chloride is used as a catalyst or reagent in diverse reactions conducted on an industrial scale. Benzaldehyde, 20,000 tons of which are produced annually in Western countries, is made from inexpensive toluene by exploiting the catalytic properties of zinc chloride. This process begins with the chlorination of toluene to give benzal chloride. In the presence of a small amount of anhydrous zinc chloride, the benzal chloride is treated continuously with water according to the following stoichiometry:
C6H5CHCl2 + H2O → C6H5CHO + 2 HCl
Similarly, zinc chloride is employed in the hydrolysis of benzotrichloride, the main route to benzoyl chloride. It also serves as a catalyst for the production of methylene-bis(dithiocarbamate).
As a metallurgical flux
The use of zinc chloride as a flux, sometimes in a mixture with ammonium chloride (see also Zinc ammonium chloride), involves the production of HCl and its subsequent reaction with surface oxides.
Zinc chloride forms two double salts with ammonium chloride, which decompose on heating, liberating HCl just as zinc chloride hydrate does. The action of zinc chloride/ammonium chloride fluxes, for example in the hot-dip galvanizing process, produces hydrogen chloride gas and ammonia fumes.
Other uses
Relevant to its affinity for paper and textiles, zinc chloride is used as a fireproofing agent and in the process of making vulcanized fibre, which is made by soaking paper in concentrated zinc chloride. Zinc chloride is also used as a deodorizing agent and to make zinc soaps.
Safety and health
Zinc and chloride are essential for life. Zn2+ is a component of several enzymes, e.g., carboxypeptidase and carbonic anhydrase. Thus, aqueous solutions of zinc chloride are rarely problematic as an acute poison. Anhydrous zinc chloride is, however, an aggressive Lewis acid that can burn skin and other tissues. Ingestion of zinc chloride, often from soldering flux, requires endoscopic monitoring. Another source of zinc chloride is the zinc chloride smoke mixture ("HC") used in smoke grenades. The mixture, containing zinc oxide, hexachloroethane, and aluminium powder, releases zinc chloride, carbon, and aluminium oxide smoke, an effective smoke screen. Such smoke screens can lead to fatalities.
References
Further reading
N. N. Greenwood, A. Earnshaw, Chemistry of the Elements, 2nd ed., Butterworth-Heinemann, Oxford, UK, 1997.
The Merck Index, 7th edition, Merck & Co, Rahway, New Jersey, USA, 1960.
D. Nicholls, Complexes and First-Row Transition Elements, Macmillan Press, London, 1973.
J. March, Advanced Organic Chemistry, 4th ed., p. 723, Wiley, New York, 1992.
G. J. McGarvey, in Handbook of Reagents for Organic Synthesis, Volume 1: Reagents, Auxiliaries and Catalysts for C-C Bond Formation, (R. M. Coates, S. E. Denmark, eds.), pp. 220–3, Wiley, New York, 1999.
External links
Grades and Applications of Zinc Chloride
PubChem ZnCl2 summary.
zinc
chloride
Inorganic compounds
Metal halides
Deliquescent materials | Zinc chloride | Chemistry | 3,182 |
61,328,143 | https://en.wikipedia.org/wiki/Performance%20and%20modelling%20of%20AC%20transmission | Performance modelling is the abstraction of a real system into a simplified representation to enable the prediction of performance. The creation of a model can provide insight into how a proposed or actual system will or does work. The term can, however, mean different things to practitioners in different fields of work.
Performance modelling has many benefits, which includes:
Relatively inexpensive prediction of future performance.
A clearer understanding of a system's performance characteristics.
Additionally, it may provide a mechanism for risk management and reduction, with design support for future projects.
A model will often be created specifically so that it can be interpreted by a software tool that simulates the system's behaviour, based on the information contained in the performance model. Such tools provide further insight into the system's behaviour and can be used to identify bottlenecks or hot spots where the design is inadequate. Solutions to the problems identified might involve the provision of more physical resources or change in the structure of the design.
Performance modelling is found helpful in cases such as:
Estimating the performance of a new system.
Estimating the impact on the performance of an existing system when a new system is interacting with it.
Estimating the impact of a change of workload or input on an existing system.
Modelling of a transmission line is done to analyse its performance and characteristics. The information gathered via simulating the model can be used to reduce losses or to compensate for them. Moreover, it gives more insight into the working of transmission lines and helps to find ways to improve the overall transmission efficiency at minimum cost.
Overview
Electric power transmission is the bulk movement of electrical energy from a generating site, such as a power plant, to an electrical substation, and is distinct from the local wiring between high-voltage substations and customers, which is typically referred to as electric power distribution. The interconnected network which facilitates this movement is known as a transmission line. A transmission line is a set of electrical conductors carrying an electrical signal from one place to another. Coaxial cable and twisted pair cable are examples. The transmission line is capable of transmitting electrical power from one place to another. In many electric circuits, the length of the wires connecting the components can, for the most part, be ignored. That is, the voltage on the wire at a given time can be assumed to be the same at all points. However, when the voltage changes in a time interval comparable to the time it takes for the signal to travel down the wire, the length becomes important and the wire must be treated as a transmission line. Stated another way, the length of the wire is important when the signal includes frequency components with corresponding wavelengths comparable to or less than the length of the wire. Transmission lines have been categorized and defined in many ways, and several approaches to modelling them have been developed, most of them mathematical and based on assumed equivalent circuits.
Transmission can be of two types :
HVDC Transmission (High Voltage Direct Current transmission)
HVAC Transmission (High Voltage Alternating Current Transmission)
HVDC transmission
High-voltage direct current (HVDC) is used to transmit large amounts of power over long distances or for interconnections between asynchronous grids. When electrical energy is to be transmitted over very long distances, the power lost in AC transmission becomes appreciable and it is less expensive to use direct current instead of alternating current. For a very long transmission line, these lower losses (and the reduced construction cost of a DC line) can offset the additional cost of the required converter stations at each end. In a DC transmission system, a rectifier (historically a mercury-arc rectifier) converts the alternating current into DC, the DC transmission line transmits the bulk power over the long distance, and at the consumer end an inverter (historically a thyratron) converts the DC back into AC.
HVAC transmission
The AC transmission line is used for transmitting the bulk of the power generation end to the consumer end. The power is generated in the generating station. The transmission line transmits the power from generation to the consumer end. High-voltage power transmission allows for lesser resistive losses over long distances in the wiring. This efficiency of high voltage transmission allows for the transmission of a larger proportion of the generated power to the substations and in turn to the loads, translating to operational cost savings. The power is transmitted from one end to another with the help of step-up and step down transformer. Most transmission lines are high-voltage three-phase alternating current (AC), although single phase AC is sometimes used in railway electrification systems. Electricity is transmitted at high voltages (115 kV or above) to reduce the energy loss which occurs in long-distance transmission.
Power is usually transmitted through overhead power lines. Underground power transmission has a significantly higher installation cost and greater operational limitations, but reduced maintenance costs. Underground transmission is sometimes used in urban areas or environmentally sensitive locations.
Terminologies
Lossless line
The lossless line approximation is the least accurate model; it is often used on short lines when the inductance of the line is much greater than its resistance. For this approximation, the voltage and current are identical at the sending and receiving ends.
The characteristic impedance is purely real, meaning the impedance is resistive, and for a lossless line it is often called the surge impedance. When a lossless line is terminated by its surge impedance, there is no voltage drop. Though the phase angles of voltage and current are rotated, the magnitudes of voltage and current remain constant along the length of the line. For load > SIL, the voltage will drop from the sending end and the line will "consume" VARs. For load < SIL, the voltage will increase from the sending end, and the line will generate VARs.
Power factor
In electrical engineering, the power factor of an AC electrical power system is defined as the ratio of the real power absorbed by the load to the apparent power flowing in the circuit, and is a dimensionless number in the closed interval of −1 to 1.
A power factor of less than one indicates the voltage and current are not in phase, reducing the instantaneous product of the two. A negative power factor occurs when the device (which is normally the load) generates power, which then flows back towards the source.
Real power is the instantaneous product of voltage and current and represents the capacity of the electricity for performing work.
Apparent power is the average product of current and voltage. Due to energy stored in the load and returned to the source, or due to a non-linear load that distorts the wave shape of the current drawn from the source, the apparent power may be greater than the real power (pf ≤ 0.5).
In an electric power system, a load with a low power factor draws more current than a load with a high power factor for the same amount of useful power transferred. The higher currents increase energy loss in the distribution system and require larger wires and other equipment. Because of the costs of larger equipment and wasted energy, electrical utilities will usually charge a higher cost to industrial or commercial customers where there is a low power factor.
Surge impedance
The characteristic impedance or surge impedance (usually written Z0) of a uniform transmission line is the ratio of the amplitudes of voltage and current of a single wave propagating along the line; that is, a wave travelling in one direction in the absence of reflections in the other direction. Alternatively and equivalently it can be defined as the input impedance of a transmission line when its length is infinite. Characteristic impedance is determined by the geometry and materials of the transmission line and, for a uniform line, is not dependent on its length. The SI unit of characteristic impedance is the ohm (Ω).
Surge impedance determines the loading capability of the line and the reflection coefficient of the current or voltage propagating waves. For a lossless line it is given by:
Z0 = √(L/C)
Where,
Z0 = Characteristic Impedance of the Line
L = Inductance per unit length of the Line
C = Capacitance per unit length of the Line
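As an illustration, the following minimal Python sketch evaluates this relationship; the per-km line constants and the 400 kV voltage are assumed example values, not data for any particular line:

import math

# Assumed per-unit-length constants for a typical overhead line
L = 1.2e-3   # series inductance, H/km
C = 9.5e-9   # shunt capacitance, F/km

Z0 = math.sqrt(L / C)   # surge (characteristic) impedance, ohms
print(f"Surge impedance Z0 = {Z0:.1f} ohm")

# Surge impedance loading (SIL) for an assumed 400 kV line-to-line voltage
V_ll = 400e3
SIL = V_ll**2 / Z0      # watts
print(f"SIL = {SIL / 1e6:.0f} MW")

The SIL computed this way is the natural loading discussed under the lossless line above: loads above it make the line consume VARs, loads below it make the line generate VARs.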
Line parameters
The transmission line has mainly four parameters: resistance, inductance, capacitance, and shunt conductance. These parameters are uniformly distributed along the line; hence they are also called the distributed parameters of the transmission line.
Ferranti Effect
In electrical engineering, the Ferranti effect is the increase in voltage occurring at the receiving end of a very long (> 200 km) AC electric power transmission line, relative to the voltage at the sending end, when the load is very small, or no load is connected. It can be stated as a factor, or as a percent increase.
The capacitive line charging current produces a voltage drop across the line inductance that is in-phase with the sending-end voltage, assuming negligible line resistance. Therefore, both line inductance and capacitance are responsible for this phenomenon. This can be analysed by considering the line as a transmission line where the source impedance is lower than the load impedance (unterminated). The effect is similar to an electrically short version of the quarter-wave impedance transformer, but with smaller voltage transformation.
The Ferranti effect is more pronounced the longer the line and the higher the voltage applied. The relative voltage rise is proportional to the square of the line length and the square of frequency.
The Ferranti effect is much more pronounced in underground cables, even in short lengths, because of their high capacitance per unit length, and lower electrical impedance.
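For a rough feel of the magnitude, the no-load rise on a lossless line follows VR/VS = 1/cos(βl). A minimal Python sketch, assuming a 50 Hz system and propagation at approximately the speed of light:

import math

f = 50.0                       # system frequency, Hz (assumed)
c = 3e8                        # propagation speed, m/s (approximately light speed)
beta = 2 * math.pi * f / c     # phase constant, rad/m

for l_km in (200, 400, 800):
    rise = 1 / math.cos(beta * l_km * 1e3)   # VR/VS at no load
    print(f"{l_km} km: receiving-end voltage = {rise:.3f} x sending-end")

Consistent with the proportionality noted above, doubling the length roughly quadruples the relative rise for moderate lengths.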
Corona discharge
A corona discharge is an electrical discharge brought on by the ionization of a fluid such as air surrounding a conductor that is electrically charged. Spontaneous corona discharges occur naturally in high-voltage systems unless care is taken to limit the electric field strength. A corona will occur when the strength of the electric field (potential gradient) around a conductor is high enough to form a conductive region, but not high enough to cause electrical breakdown or arcing to nearby objects. It is often seen as a bluish (or another colour) glow in the air adjacent to pointed metal conductors carrying high voltages and emits light by the same property as a gas discharge lamp.
In many high voltage applications, the corona is an unwanted side effect. Corona discharge from high voltage electric power transmission lines constitutes an economically significant waste of energy. Corona discharges are suppressed by improved insulation, corona rings, and making high voltage electrodes in smooth rounded shapes.
ABCD parameters
A, B, C, D are the constants also known as the transmission parameters or chain parameters. These parameters are used for the analysis of an electrical network. It is also used for determining the performance of input, output voltage and current of the transmission network.
Propagation constant
The propagation constant of the sinusoidal electromagnetic wave is a measure of the change undergone by the amplitude and phase of the wave as it propagates in a given direction. The quantity being measured can be the voltage, the current in a circuit, or a field vector such as electric field strength or flux density. The propagation constant itself measures the change per unit length, but it is otherwise dimensionless. In the context of two-port networks and their cascades, propagation constant measures the change undergone by the source quantity as it propagates from one port to the next.
Attenuation constant
The real part of the propagation constant is the attenuation constant and is denoted by Greek lowercase letter α (alpha). It causes signal amplitude to decrease along a transmission line.
Phase constant
The imaginary part of the propagation constant is the phase constant and is denoted by Greek lowercase letter β (beta). It causes the signal phase to shift along a transmission line. Generally denoted in radians per meter (rad/m).
The propagation constant is denoted by Greek lowercase letter γ (gamma), and γ = α + jβ
Voltage Regulation
Voltage regulation is a measure of the change in the voltage magnitude between the sending and receiving end of a component, such as a transmission or distribution line. It is given in percentage for different lines.
Mathematically, voltage regulation is given by,
Voltage regulation (%) = (|VR, no load| − |VR, full load|) / |VR, full load| × 100
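A minimal Python sketch of this calculation, with assumed example voltage magnitudes:

def voltage_regulation(v_no_load: float, v_full_load: float) -> float:
    """Percent voltage regulation from receiving-end voltage magnitudes."""
    return (v_no_load - v_full_load) / v_full_load * 100.0

# Assumed example: 245 kV at no load, 230 kV at full load
print(f"{voltage_regulation(245e3, 230e3):.1f} %")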
Line parameters of AC transmission
The AC transmission line has four line parameters: the series resistance and inductance, and the shunt capacitance and admittance. These parameters are responsible for the distinct behaviour of voltage and current waveforms along the transmission line. Line parameters are generally represented in their respective units per km of line length. These parameters depend upon the geometric alignment of the transmission lines (number of conductors used, shape of conductors, physical spacing between conductors, height above the ground, etc.). They are independent of the current and voltage at either the sending or the receiving end.
Series resistance
Definition
The electrical resistance of an object is the property of a substance by which it restricts the flow of electric current driven by a potential difference across its two ends. The inverse quantity is electrical conductance, the ease with which an electric current passes. Electrical resistance shares some conceptual parallels with the notion of mechanical friction. The SI unit of electrical resistance is the ohm (Ω), while electrical conductance is measured in siemens (S).
Characteristics
The resistance of an object depends in large part on the material it is made of—objects made of electrical insulators like rubber tend to have very high resistance and low conductivity, while objects made of electrical conductors like metals tend to have very low resistance and high conductivity. This material dependence is quantified by resistivity or conductivity. However, resistance and conductance are extensive rather than bulk properties, meaning that they also depend on the size and shape of an object. For example, a wire's resistance is higher if it is long and thin, and lower if it is short and thick. All objects show some resistance, except for superconductors, which have a resistance of zero.
The resistance (R) of an object is defined as the ratio of voltage across it (V) to current through it (I), while the conductance (G) is the inverse:
R = V / I
G = I / V = 1 / R
For a wide variety of materials and conditions, V and I are directly proportional to each other, and therefore R and G are constants (although they will depend on the size and shape of the object, the material it is made of, and other factors like temperature or strain). This proportionality is called Ohm's law, and materials that satisfy it are called ohmic materials. In other cases, such as a transformer, diode or battery, V and I are not directly proportional. The ratio V/I is sometimes still useful, and is referred to as a "chordal resistance" or "static resistance", since it corresponds to the inverse slope of a chord between the origin and an I–V curve. In other situations, the derivative may be most useful; this is called the "differential resistance".
Transmission lines, as they consist of conducting wires of very great length, have an electrical resistance that cannot be neglected.
Series inductance
Definition
When current flows within a conductor, magnetic flux is set up. With the variation of current in the conductor, the number of lines of flux also changes, and an emf is induced in it (Faraday's Law). This induced emf is represented by the parameter known as inductance. It is customary to use the symbol L for inductance, in honour of the physicist Heinrich Lenz.
In the SI system, the unit of inductance is the henry (H), which is the amount of inductance which causes a voltage of 1 volt when the current is changing at a rate of one ampere per second. It is named for Joseph Henry, who discovered inductance independently of Faraday.
Types of inductance
The flux linking with the conductor consists of two parts, namely the internal flux and the external flux:
The internal flux is induced due to the current flow in the conductor.
The external flux produced around the conductor is due to its current and the current of the other conductors place around it. The total inductance of the conductor is determined by the calculation of the internal and external flux.
Characteristics
The transmission line wiring is also inductive in nature, and the inductance of a single circuit line can be given mathematically (per conductor, per unit length) by:
L = (μ0 / 2π) · ln(D / r′) H/m
Where,
D is the physical spacing between the conductors.
r′ is the radius of the fictitious conductor having no internal flux linkages but the same inductance as the original conductor of radius r. A factor of e^(−1/4) (≈ 0.7788) is multiplied with the actual radius of the conductor in order to account for the internal flux linkages (applicable to solid round conductors only), i.e. r′ = r·e^(−1/4).
μ0 is the permeability of free space, 4π × 10^−7 H/m.
For transposed lines with two or more phases, the inductance between any two lines can be calculated using:
L = (μ0 / 2π) · ln(Dm / r′) H/m
Where, Dm is the geometric mean distance (GMD) between the conductors.
If the lines are not properly transposed, the inductances become unequal and contain imaginary terms due to mutual inductances. With proper transposition, all the conductors occupy each of the available positions for an equal distance, and thus the imaginary terms cancel out and all the line inductances become equal.
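A minimal Python sketch of the per-conductor inductance expression above, with assumed spacing and conductor radius:

import math

MU0 = 4 * math.pi * 1e-7      # permeability of free space, H/m

def inductance_per_m(d_m: float, r_m: float) -> float:
    """Per-conductor inductance in H/m for spacing d_m and radius r_m."""
    r_prime = r_m * math.exp(-0.25)   # effective radius of a solid round conductor
    return MU0 / (2 * math.pi) * math.log(d_m / r_prime)

# Assumed values: 8 m spacing, 15 mm conductor radius
L = inductance_per_m(8.0, 0.015)
print(f"L = {L * 1e6:.3f} mH/km")   # H/m -> mH/km is a factor of 1e6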
Shunt capacitance
Definition
Capacitance is the ratio of the change in an electric charge in a system to the corresponding change in its electric potential. The capacitance is a function only of the geometry of the design (e.g. area of the plates and the distance between them) and the permittivity of the dielectric material between the plates of the capacitor. For many dielectric materials, the permittivity and thus the capacitance is independent of the potential difference between the conductors and the total charge on them.
The SI unit of capacitance is the farad (symbol: F), named after the English physicist Michael Faraday. A 1 farad capacitor, when charged with 1 coulomb of electrical charge, has a potential difference of 1 volt between its plates. The reciprocal of capacitance is called elastance.
Types of capacitance
There are two closely related notions of capacitance, namely self-capacitance and mutual capacitance:
For an isolated conductor, there exists a property called self-capacitance, which is the amount of electric charge that must be added to an isolated conductor to raise its electric potential by one unit (i.e. one volt, in most measurement systems). The reference point for this potential is a theoretical hollow conducting sphere, of infinite radius, with the conductor centered inside this sphere. Any object that can be electrically charged exhibits self-capacitance. A material with a large self-capacitance holds more electric charge at a given voltage than one with low self-capacitance.
The notion of mutual capacitance is particularly important for understanding the operation of the capacitor, one of the three elementary linear electronic components (along with resistors and inductors). In electrical circuits, the term capacitance is usually a shorthand for the mutual capacitance between two adjacent conductors, such as the two plates of a capacitor.
Characteristics
Transmission line conductors constitute a capacitor between them, exhibiting mutual capacitance. The conductors of the transmission line act as a parallel plate of the capacitor and the air is just like a dielectric medium between them. The capacitance of a line gives rise to the leading current between the conductors. It depends on the length of the conductor. The capacitance of the line is proportional to the length of the transmission line. Their effect is negligible on the performance of lines with a short length and low voltage. In the case of high voltage and long lines, it is considered as one of the most important parameters. The shunt capacitance of the line is responsible for Ferranti effect.
The capacitance of a single-phase transmission line can be given mathematically by:
C = πε0 / ln(D / r) F/m (line to line)
Where,
D is the physical spacing between the conductors.
r is the radius of each conductor.
ε0 is the permittivity of air, approximately 8.854 × 10^−12 F/m.
For lines with two or more phases, the capacitance between any line and neutral can be calculated using:
C = 2πε0 / ln(Dm / r) F/m
Where, Dm is the geometric mean distance of the conductors.
The effect of self-capacitance, on a transmission line, is generally neglected because the conductors are not isolated and thus there exists no detectable self-capacitance.
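A companion Python sketch for the line-to-neutral capacitance, using the same assumed geometry as the inductance example:

import math

EPS0 = 8.854e-12   # permittivity of free space, F/m

def capacitance_per_m(d_m: float, r_m: float) -> float:
    """Line-to-neutral capacitance of a transposed line, F/m."""
    return 2 * math.pi * EPS0 / math.log(d_m / r_m)

# Assumed values: 8 m spacing, 15 mm conductor radius
C = capacitance_per_m(8.0, 0.015)
print(f"C = {C * 1e12:.2f} nF/km")   # F/m -> nF/km is a factor of 1e12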
Shunt admittance
Definition
In electrical engineering, admittance is a measure of how easily a circuit or device will allow a current to flow. It is defined as the reciprocal of impedance. The SI unit of admittance is the siemens (symbol S); the older, synonymous unit is mho, and its symbol is ℧ (an upside-down uppercase omega Ω). Oliver Heaviside coined the term admittance in December 1887.
Admittance is defined as
Y = 1 / Z
where
Y is the admittance, measured in siemens
Z is the impedance, measured in ohms
Characteristics
Resistance is a measure of the opposition of a circuit to the flow of a steady current, while impedance takes into account not only the resistance but also dynamic effects (known as reactance). Likewise, admittance is not only a measure of the ease with which a steady current can flow but also the dynamic effects of the material's susceptance to polarization:
Y = G + jB
where
Y is the admittance, measured in siemens.
G is the conductance, measured in siemens.
B is the susceptance, measured in siemens.
The dynamic effects of the material's susceptance relate to the universal dielectric response, the power-law scaling of a system's admittance with frequency under alternating current conditions.
In the context of electrical modelling of transmission lines, shunt components that provide paths of least resistance in certain models are generally specified in terms of their admittance. Transmission lines can span hundreds of kilometres, over which the line's capacitance can affect voltage levels. For short-length transmission line analysis, this capacitance can be ignored and shunt components are not necessary for the model. Lines of greater length contain a shunt admittance governed by
Y = y·l = jωC
where
Y – total shunt admittance
y – shunt admittance per unit length
l – length of the line
C – capacitance of the line
Modelling of transmission lines
Two port networks
A two-port network (a kind of four-terminal network or quadripole) is an electrical network (circuit) or device with two pairs of terminals to connect to external circuits. Two terminals constitute a port if the currents applied to them satisfy the essential requirement known as the port condition: the electric current entering one terminal must equal the current emerging from the other terminal on the same port. The ports constitute interfaces where the network connects to other networks, the points where signals are applied or outputs are taken. In a two-port network, often port 1 is considered the input port and port 2 is considered the output port.
The two-port network model is used in mathematical circuit analysis techniques to isolate portions of larger circuits. A two-port network is regarded as a "black box" with its properties specified by a matrix of numbers. This allows the response of the network to signals applied to the ports to be calculated easily, without solving for all the internal voltages and currents in the network. It also allows similar circuits or devices to be compared easily. For example, transistors are often regarded as two-ports, characterized by their h-parameters (see below) which are listed by the manufacturer. Any linear circuit with four terminals can be regarded as a two-port network provided that it does not contain an independent source and satisfies the port conditions.
Transmission matrix and ABCD parameters
Oftentimes, we are only interested in the terminal characteristics of the transmission line, which are the voltage and current at the sending and receiving ends, for performance analysis of the line. The transmission line itself is then modelled as a "black box" and a 2 by 2 transmission matrix is used to model its behaviour, as follows:
[VS; IS] = [A, B; C, D] × [VR; IR]
Derivation
This equation in matrix form consists of two individual equations, as stated below:
VS = A·VR + B·IR
IS = C·VR + D·IR
Where,
VS is the sending end voltage
VR is the receiving end voltage
IS is the sending end current
IR is the receiving end current
Now, if we apply open circuit at the receiving end, the effective load current will be zero (i.e. IR = 0)
1. A = VS / VR (at IR = 0)
So, the parameter A is the ratio of sending end voltage to receiving end voltage, thus called the voltage ratio. Being the ratio of two like quantities, the parameter A is unitless.
2. C = IS / VR (at IR = 0)
So, the parameter C is the ratio of sending end current to receiving end voltage, thus called the transfer admittance, and the unit of C is mho (℧).
Now, if we apply short circuit at the receiving end, the effective receiving end voltage will be zero (i.e. VR = 0)
1. B = VS / IR (at VR = 0)
So, the parameter B is the ratio of sending end voltage to receiving end current, thus called the transfer impedance, and the unit of B is ohm (Ω).
2. D = IS / IR (at VR = 0)
So, the parameter D is the ratio of sending end current to receiving end current, thus called the current ratio. Being the ratio of two like quantities, the parameter D is unitless.
ABCD parameter values
To summarize, the ABCD parameters for a two-port (four-terminal) passive, linear and bilateral network are given as:
A = VS/VR at IR = 0 (voltage ratio, unitless)
B = VS/IR at VR = 0 (transfer impedance, Ω)
C = IS/VR at IR = 0 (transfer admittance, ℧)
D = IS/IR at VR = 0 (current ratio, unitless)
Properties
The line is assumed to be a reciprocal, symmetrical network, meaning that the receiving and sending labels can be switched with no consequence. The transmission matrix T also has the following properties:
The A, B, C and D constants are complex numbers due to the complex values of the transmission parameters. Because of their complex nature, they are represented as vectors in the complex plane (phasors).
AD − BC = 1 (condition for reciprocity)
A = D (condition for symmetry)
The parameters A, B, C, and D differ depending on how the desired model handles the line's resistance (R), inductance (L), capacitance (C), and shunt (parallel, leak) conductance G. The four main models are the short line approximation, the medium line approximation, the long line approximation (with distributed parameters), and the lossless line. In all models described, a capital letter such as R refers to the total quantity summed over the line and a lowercase letter such as r refers to the per-unit-length quantity.
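To make the two-port bookkeeping concrete, here is a minimal Python sketch representing an ABCD matrix as a plain tuple, with a reciprocity check and a cascade helper; the numeric impedance below is an assumed example:

def is_reciprocal(abcd, tol=1e-9):
    """Check the reciprocity condition AD - BC = 1."""
    A, B, C, D = abcd
    return abs(A * D - B * C - 1) < tol

def cascade(m1, m2):
    """ABCD matrix of two networks in cascade (m1 nearest the sending end)."""
    A1, B1, C1, D1 = m1
    A2, B2, C2, D2 = m2
    return (A1 * A2 + B1 * C2, A1 * B2 + B1 * D2,
            C1 * A2 + D1 * C2, C1 * B2 + D1 * D2)

# Assumed short line with series impedance Z = 5 + 20j ohm
Z = 5 + 20j
short_line = (1, Z, 0, 1)
print(is_reciprocal(short_line))         # True
print(cascade(short_line, short_line))   # two identical lines in series: (1, 2Z, 0, 1)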
Classification of AC transmission line
Classification overview
The AC transmission line has resistance R, inductance L, capacitance C and the shunt or leakage conductance G. These parameters, along with the load and the transmission line itself, determine the performance of the line. The term performance covers the sending end voltage, sending end current, sending end power factor, power loss in the line, efficiency of the transmission line, and the regulation and limits of power flow during steady-state and transient conditions. The AC transmission line is generally categorized into three classes
Short Transmission Line (line length ≤ 60 km)
Medium Transmission Line (80 km ≤ Line length ≤ 250 km)
Long Transmission Line (Line length ≥ 250 km)
The classification of the transmission line depends on the frequency of power transfer and is an assumption made for ease of calculation of line performance parameters and losses. Because of this, the range of lengths used to categorize a transmission line is not rigid. The ranges of length may vary a little, and all of them are valid in their areas of approximation.
Basis of classification
Derivation of voltage/current wavelength
Current and voltage propagate in a transmission line at a speed approximately equal to the speed of light (c), i.e. about 3 × 10^8 m/s, and the frequency (f) of the voltage or current is 50 Hz (although in the Americas and parts of Asia it is typically 60 Hz).
Therefore, the wavelength (λ) can be calculated as below:
λ = c / f
or, λ = (3 × 10^8) / 50 m
or, λ = 6 × 10^6 m = 6,000 km
Reason behind classification
A transmission line with 60 km of length is very small (about 1/100th) compared with the wavelength, i.e. 6,000 km. Up to 240 km (about 1/25th of the wavelength; 250 km is taken for easy remembering) of line length, the current or voltage waveform deviates so little that it can be approximated as a straight line for all practical purposes. For line lengths of about 240 km, parameters are assumed to be lumped (though practically these parameters are always distributed). Therefore, the response of a transmission line up to 250 km long can be considered linear, and hence the equivalent circuit of the line can be approximated by a linear circuit.
But if the length of the line is more than 250 km, say 400 km, i.e. about 1/15th of the wavelength, then the waveform of current or voltage cannot be considered linear, and therefore we need to use integration for the analysis of these lines.
Lines of up to 60 km in length are so short that the effect of shunt parameters is nearly undetectable throughout the line. Hence, these linear lines are categorized as Short Transmission Lines.
For the lines of effective length between 60 km and 250 km, the effect of shunt parameters cannot be neglected. Hence, they are assumed to be lumped either at the middle of the line (the nominal T representation) or at the two ends of the line (the nominal Π representation). These linear lines are categorized as Medium Transmission Lines.
For transmission lines of effective length above 250 km, the equivalent circuit cannot be considered linear. The parameters are distributed, and rigorous calculations are required for performance analysis. These non-linear lines are categorized as Long Transmission Lines.
Short transmission line
The transmission lines which have a length less than 60 km are generally referred to as short transmission lines. Because of the short length, parameters like the electrical resistance, impedance and inductance of these lines are assumed to be lumped. The shunt capacitance of a short line is almost negligible and thus is not taken into account (or assumed to be zero).
Derivation of ABCD parameter values
Now, if the impedance per km of the line is z, then for a line of length l km the total impedance is Z = z·l. Let the sending end and receiving end voltages make angles of φS and φR respectively with the receiving end current.
The sending end voltage and current for this approximation are given by:
VS = VR + Z·IR
IS = IR
In this, the sending and receiving end voltages are denoted by VS and VR respectively. Also, the currents IS and IR are entering and leaving the network respectively.
So, by considering the equivalent circuit model for the short transmission line, the transmission matrix can be obtained as follows:
[VS; IS] = [1, Z; 0, 1] × [VR; IR]
Therefore, the ABCD parameters are given by:
A = D = 1, B = Z Ω and C = 0
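A minimal Python sketch of the short-line model, computing sending-end conditions for an assumed load; all numbers are illustrative assumptions:

import cmath, math

# Assumed 132 kV, 40 km line with z = 0.15 + j0.40 ohm/km
Z = 40 * (0.15 + 0.40j)        # total series impedance, ohm
VR = 132e3 / math.sqrt(3)      # receiving-end phase voltage, V
P, pf = 30e6, 0.85             # assumed three-phase load, lagging power factor

IR = P / (3 * VR * pf) * cmath.exp(-1j * math.acos(pf))  # load current, A
VS = VR + Z * IR               # short line: VS = VR + Z*IR, IS = IR
reg = (abs(VS) - abs(VR)) / abs(VR) * 100   # since A = 1, no-load VR equals VS
print(f"|VS| = {abs(VS) / 1e3:.1f} kV per phase, regulation = {reg:.1f} %")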
Medium transmission line
The transmission line having its effective length more than 80 km but less than 250 km is generally referred to as a medium transmission line. Due to the line length being considerably high, shunt capacitance along with admittance Y of the network does play a role in calculating the effective circuit parameters, unlike in the case of short transmission lines. For this reason, the modelling of a medium length transmission line is done using lumped shunt admittance along with the lumped impedance in series to the circuit.
Counterintuitive behaviours of medium-length transmission lines:
voltage rise at no load or small current (Ferranti effect)
receiving-end current can exceed sending-end current
These lumped parameters of a medium length transmission line can be represented using two different models, namely:
Nominal Π representation
In case of a nominal Π representation, the total lumped shunt admittance is divided into 2 equal halves, and each half with value Y ⁄ 2 is placed at both the sending & receiving end, while the entire circuit impedance is lumped in between the two halves. The circuit, so formed resembles the symbol of pi (Π), hence is known as the nominal Π (or Π network representation) of a medium transmission line. It is mainly used for determining the general circuit parameters and performing load flow analysis.
Derivation of ABCD parameter values
Applying KCL at the two shunt ends, we get the shunt currents
IC1 = VS·(Y/2) and IC2 = VR·(Y/2)
In this,
The sending and receiving end voltages are denoted by VS and VR respectively. Also, the currents IS and IR are entering and leaving the network respectively.
IC1 and IC2 are the currents through the shunt capacitances at the sending and receiving end respectively, whereas I1 is the current through the series impedance.
Again, the current through the series impedance is
I1 = IR + IC2 = IR + VR·(Y/2)
or, VS = VR + Z·I1 = (1 + ZY/2)·VR + Z·IR
So, by substituting IS = I1 + IC1 we get:
IS = I1 + VS·(Y/2)
or, IS = Y·(1 + ZY/4)·VR + (1 + ZY/2)·IR
The equations obtained thus can be written into matrix form as follows:
[VS; IS] = [1 + ZY/2, Z; Y·(1 + ZY/4), 1 + ZY/2] × [VR; IR]
So, the ABCD parameters are:
A = D = (1 + ZY/2) per unit
B = Z Ω
C = Y·(1 + ZY/4) mho
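A minimal Python sketch building the nominal Π parameters from assumed per-km constants:

def nominal_pi_abcd(z_per_km: complex, y_per_km: complex, length_km: float):
    """ABCD parameters (A, B, C, D) of a medium line in nominal Pi form."""
    Z = z_per_km * length_km   # total series impedance
    Y = y_per_km * length_km   # total shunt admittance
    A = D = 1 + Z * Y / 2
    B = Z
    C = Y * (1 + Z * Y / 4)
    return A, B, C, D

# Assumed constants: z = 0.1 + j0.5 ohm/km, y = j3.2e-6 S/km, 200 km line
A, B, C, D = nominal_pi_abcd(0.1 + 0.5j, 3.2e-6j, 200.0)
print(A, B, C, D)
print(abs(A * D - B * C - 1) < 1e-9)   # reciprocity AD - BC = 1 holds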
Nominal T representation
In the nominal T model of a medium transmission line, the net series impedance is divided into two halves and placed on either side of the lumped shunt admittance i.e. placed in the middle. The circuit so formed resembles the symbol of a capital T or star(Y), and hence is known as the nominal T network of a medium length transmission line.
Derivation of ABCD parameter values
The application of KCL at the juncture (the neutral point for the Y connection) gives the shunt current as
IY = Y·VM, where VM is the voltage at the midpoint.
The midpoint voltage can be expressed from the receiving-end half of the series impedance as
VM = VR + (Z/2)·IR
Here, the sending and receiving end voltages are denoted by VS and VR respectively. Also, the currents IS and IR are entering and leaving the network respectively.
Now, applying KCL for the currents, we can write:
IS = IR + IY
By rearranging the equation and replacing IY with the derived value, we get:
IS = Y·VR + (1 + ZY/2)·IR
Now, the sending end voltage can be written as:
VS = VM + (Z/2)·IS
Replacing the values of VM and IS in the above equation:
VS = (1 + ZY/2)·VR + Z·(1 + ZY/4)·IR
The equations obtained thus can be written into matrix form as follows:
[VS; IS] = [1 + ZY/2, Z·(1 + ZY/4); Y, 1 + ZY/2] × [VR; IR]
So, the ABCD parameters are:
A = D = (1 + ZY/2) per unit
B = Z·(1 + ZY/4) Ω
C = Y mho
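A companion sketch for the nominal T parameters; note that A and D match the Π form, while the correction factor moves from C to B:

def nominal_t_abcd(z_per_km: complex, y_per_km: complex, length_km: float):
    """ABCD parameters (A, B, C, D) of a medium line in nominal T form."""
    Z = z_per_km * length_km
    Y = y_per_km * length_km
    A = D = 1 + Z * Y / 2
    B = Z * (1 + Z * Y / 4)
    C = Y
    return A, B, C, D

# Same assumed constants as the Pi example above
A, B, C, D = nominal_t_abcd(0.1 + 0.5j, 3.2e-6j, 200.0)
print(abs(A * D - B * C - 1) < 1e-9)   # reciprocity holds for the T form as well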
Long transmission line
A transmission line having a length more than 250 km is considered as a long transmission line. Unlike short and medium lines the line parameters of long transmission line are assumed to be distributed at each point of line uniformly. Thus modelling of a long line is somewhat difficult. But a few approaches can be made based on the length and values of line parameters. For a long transmission line, it is considered that the line may be divided into various sections, and each section consists of inductance, capacitance, resistance and conductance, as shown in the RLC (resistance and inductance in series, with shunt capacitance) cascade model.
Derivation of ABCD parameter values
Cascaded Model approach
Consider a small element of a long transmission line having length dx, situated at a distance x from the receiving end. The series impedance of the element is represented by z·dx and y·dx is its shunt admittance. Due to charging current and corona loss, the current is not uniform along the line. The voltage is also different at different parts of the line because of the inductive reactance.
Where,
z – series impedance per unit length, per phase
y – shunt admittance per unit length, per phase to neutral
Again, as the voltage rises by dV across the series impedance of the element,
dV = I·z·dx, or dV/dx = z·I
Now for the current through the strip, applying KCL we get,
dI = (V + dV)·y·dx = V·y·dx + dV·y·dx
The second term of the above equation is the product of two small quantities and therefore can be neglected. So we have,
dI/dx = y·V
Taking the derivative with respect to x of both sides of the voltage equation, we get
d²V/dx² = z·(dI/dx)
Substitution in the above equation results in
d²V/dx² = y·z·V
The roots of the characteristic equation of this differential equation are located at ±√(y·z).
Hence the solution is of the form,
V = A1·e^(√(yz)·x) + A2·e^(−√(yz)·x)
Taking the derivative with respect to x, we get
dV/dx = √(yz)·(A1·e^(√(yz)·x) − A2·e^(−√(yz)·x))
Combining these two (using I = (1/z)·dV/dx), we have,
I = √(y/z)·(A1·e^(√(yz)·x) − A2·e^(−√(yz)·x))
The following two quantities are defined as,
Zc = √(z/y), which is called the characteristic impedance
γ = √(y·z), which is called the propagation constant
Then the previous equations can be written in terms of the characteristic impedance and propagation constant as,
V = A1·e^(γx) + A2·e^(−γx)
I = (A1·e^(γx) − A2·e^(−γx)) / Zc
Now, at x = 0 we have V = VR and I = IR.
Therefore, by putting x = 0 in the above equations we get,
VR = A1 + A2
IR = (A1 − A2) / Zc
Solving these two equations we get the following values for A1 and A2:
A1 = (VR + Zc·IR) / 2
A2 = (VR − Zc·IR) / 2
Also, for x = l, we have V = VS and I = IS.
Therefore, by replacing x by l we get,
VS = ((VR + Zc·IR)/2)·e^(γl) + ((VR − Zc·IR)/2)·e^(−γl)
IS = ((VR/Zc + IR)/2)·e^(γl) − ((VR/Zc − IR)/2)·e^(−γl)
Where,
((VR + Zc·IR)/2)·e^(γl) is called the incident voltage wave
((VR − Zc·IR)/2)·e^(−γl) is called the reflected voltage wave
We can rewrite these equations in terms of hyperbolic functions as,
VS = VR·cosh(γl) + Zc·IR·sinh(γl)
IS = (VR/Zc)·sinh(γl) + IR·cosh(γl)
So, by considering the corresponding analogy for the long transmission line, the obtained equations can be written into matrix form as follows:
[VS; IS] = [cosh(γl), Zc·sinh(γl); sinh(γl)/Zc, cosh(γl)] × [VR; IR]
The ABCD parameters are given by:
A = D = cosh(γl)
B = Zc·sinh(γl) Ω
C = sinh(γl)/Zc mho
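A minimal Python sketch of the distributed-parameter result, using complex hyperbolic functions; the per-km constants are assumed illustrative values:

import cmath

def long_line_abcd(z_per_km: complex, y_per_km: complex, length_km: float):
    """Exact ABCD parameters of a long line from distributed parameters."""
    gl = cmath.sqrt(z_per_km * y_per_km) * length_km   # gamma times length
    Zc = cmath.sqrt(z_per_km / y_per_km)               # characteristic impedance
    A = D = cmath.cosh(gl)
    B = Zc * cmath.sinh(gl)
    C = cmath.sinh(gl) / Zc
    return A, B, C, D

# Assumed constants: z = 0.05 + j0.45 ohm/km, y = j3.4e-6 S/km, 400 km line
A, B, C, D = long_line_abcd(0.05 + 0.45j, 3.4e-6j, 400.0)
print(A, B, C, D)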
Π Representation approach
Like the medium transmission line, the long line can also be approximated into an equivalent Π representation. In the Π-equivalent of a long transmission line, the series impedance is denoted by Z′ while the shunt admittance is denoted by Y′.
So, the ABCD parameters of this long line can be defined like those of the medium transmission line as:
A = D = (1 + Z′Y′/2) per unit
B = Z′ Ω
C = Y′·(1 + Z′Y′/4) mho
Comparing it with the ABCD parameters of the cascaded long transmission model (by equating the B parameters), we can write:
Z′ = Zc·sinh(γl)
or, Z′ = Z·(sinh(γl) / γl)
Where Z (= z·l) is the total impedance of the line.
By rearranging the equation for the A parameter (1 + Z′Y′/2 = cosh(γl)),
Y′/2 = (cosh(γl) − 1) / Z′
or, Y′/2 = (1/Zc)·tanh(γl/2)
This can be further reduced to,
Y′/2 = (Y/2)·(tanh(γl/2) / (γl/2))
where Y (= y·l) is called the total admittance of the line.
Now, if the line length (l) is small, sinh(γl) ≈ γl and tanh(γl/2) ≈ γl/2, and it is found that Z′ = Z and Y′ = Y.
This means that if the line length (l) is small, the nominal-π representation incorporating the assumption of lumped parameters can be fitting. But if the length of the line (l) exceeds a certain boundary (about 240 to 250 km), the nominal-π representation becomes erroneous and cannot be used further for performance analysis.
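The following sketch applies these correction factors, comparing the nominal (lumped) and equivalent Π elements for an assumed 400 km line, reusing the constants from the long-line example:

import cmath

z, y, l = 0.05 + 0.45j, 3.4e-6j, 400.0    # assumed per-km constants and length
Z, Y = z * l, y * l                        # nominal (lumped) totals
gl = cmath.sqrt(z * y) * l                 # gamma times length

Z_eq = Z * cmath.sinh(gl) / gl                        # corrected series impedance
Y_eq_half = (Y / 2) * cmath.tanh(gl / 2) / (gl / 2)   # corrected half shunt admittance

print("nominal:    Z =", Z, " Y/2 =", Y / 2)
print("equivalent: Z' =", Z_eq, " Y'/2 =", Y_eq_half)

For this assumed length the correction is a few percent; as the length shrinks, sinh(γl)/γl and tanh(γl/2)/(γl/2) both approach 1 and the nominal and equivalent elements coincide, matching the small-length limit above.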
Travelling waves
Travelling waves are the current and voltage waves that create a disturbance and move along the transmission line from the sending end to the other end at a constant speed. Travelling waves play a major role in knowing the voltages and currents at all points in the power system. These waves also help in designing insulators, protective equipment, the insulation of terminal equipment, and overall insulation coordination.
When the switch is closed at the transmission line's sending end, the voltage does not appear instantaneously at the other end. This is caused by the transient behaviour of the inductances and capacitances that are present in the transmission line. The transmission line may not have physical inductor and capacitor elements, but the effects of inductance and capacitance exist in the line. Therefore, when the switch is closed, the voltage builds up gradually over the line conductors. This phenomenon is usually described as the voltage wave travelling from the transmission line's sending end to the other end. Similarly, the gradual charging of the capacitances happens due to the associated current wave.
If the switch is closed at any instant of time, the voltage at the load does not appear instantly. The first section charges first, and then it charges the next section. Until a section gets charged, the successive section will not be charged; thus the process is a gradual one. It can be visualized as several water tanks connected in a row, with water flowing from the first tank to the last.
See also
Electric power transmission
Dynamic demand (electric power)
Demand response
List of energy storage projects
Traction power network
Backfeeding
Conductor marking lights
Double-circuit transmission line
Electromagnetic Transients Program (EMTP)
Flexible AC transmission system (FACTS)
Geomagnetically induced current, (GIC)
Grid-tied electrical system
List of high voltage underground and submarine cables
Load profile
Power line communications (PLC)
Power system simulation
Radio frequency power transmission
Wheeling (electric power transmission)
References
Further reading
Grigsby, L. L., et al. The Electric Power Engineering Handbook. USA: CRC Press. (2001).
The Physics of Everyday Stuff - Transmission Lines
Electrical engineering
Electric power transmission
Scientific modelling
Power engineering | Performance and modelling of AC transmission | Engineering | 8,207 |
63,446,562 | https://en.wikipedia.org/wiki/Impact%20of%20the%20COVID-19%20pandemic%20on%20social%20media | Social media became an important platform for interaction during the COVID-19 pandemic, coinciding with the onset of social distancing. According to a study conducted by Facebook's analytics department, messaging rates rose by over 50% during this period. Individuals confined to their homes utilized social media not only to maintain social connections but also as a source of entertainment to alleviate boredom. Concerns arose regarding the overreliance on social media for primary social interactions, particularly given the constraints imposed by the pandemic.
People worldwide turned to social networking services to disseminate information, find humor through internet memes, and cope with the challenges of social distancing. The shift to virtual interactions exacerbated mental health issues for many, prompting the rapid rise of online counselling that leveraged social media platforms to connect mental health workers with those in need.
The COVID-19 pandemic highlighted the phenomenon of misinformation on social media, often referred to as an "infodemic." Platforms like Twitter and YouTube provided direct access to content, making users susceptible to rumors and unreliable information that could significantly impact individual behaviors and undermine collective efforts against the virus. Furthermore, social media became crucial for politicians, political movements, and health organizations at various levels to disseminate critical information swiftly and effectively reach the public.
Increase in usage
Messaging and video call services
Multiple social media websites reported a sharp increase in usage after social distancing measures were put into place. Since many people could not connect with their friends and family in person, social media became the main form of communication to maintain these connections. For example, the number of Facebook users went up to about 1.9 billion worldwide by the end of 2020, marking an 8.7% increase over 2019. Meanwhile, WhatsApp reported a 40% increase in usage overall. Moreover, there was a noticeable increase in the use of Zoom since the start of the pandemic. Global downloads for TikTok went up 5% in March 2020 compared to February. A new service called Quarantine Chat, which connected users randomly, reported having over 15,000 users a month after its launch on 1 March 2020. Zoom also followed a similar procedure to connect users.
Facebook, Twitter, and YouTube have all increased reliance on spam filters because staff members who moderate content were unable to work.
Online counseling services
Particularly in countries where the virus had a greater impact, online mental health services received a surge in demand, as COVID-19 social distancing obstructed patients from meeting with therapists or psychologists in person. In China, medical staff used social media programs like WeChat, Weibo, and TikTok to roll out online mental health education programs. In Canada, the provincial government of Alberta launched a $53 million COVID-19 mental health response plan, which included increasing accessibility to phone and online support with existing helplines. Additionally, the Canadian province of Ontario's government provided emergency funding of up to $12 million to expand online and virtual mental health support.
Effect of COVID-19 on mental health
Extensive psychological research shows that connectivity with others develops a sense of belonging and psychosocial well-being, which enhances mental health and reduces the risk of anxiety and depression. The overload of information and the constant use of social media have been shown to correlate positively with an increase in depression and anxiety, yet also with improvement in communication skills. Following social distancing measures can cause feelings of loneliness and isolation, increasing anxiety and stress. Many adults also report specific negative impacts on their mental health and well-being, such as difficulty sleeping (60%) or eating (80%), increases in alcohol consumption or substance abuse (50%), and worsening of chronic conditions (35%), due to worry and stress over education and employment conditions. While living through a global pandemic can be stressful and cause anxiety, there are ways for individuals to support themselves and their families.
Effect of COVID-19 on face-to-face communication
The increased use of face masks makes interpretation during face-to-face contact much more challenging because masks hide a large portion of the face, posing difficulties in reading basic communication signals like intention and emotion. Wearing a face mask causes individuals to focus on oral cues, leading to potential mistrust, misinterpretation, linguistic misunderstandings, and difficulties in comprehension. Alongside the disconnection caused by face masks, social distancing, and self-isolation, there are risks of increased social rejection, growing impersonality, individualism, and a loss of community. Data suggests that the implementation of face masks, increased social distancing, and self-isolation present challenges in fostering positive interpersonal relationships and a sense of community.
The COVID-19 epidemic has significantly affected the way people communicate with each other. Preventive measures to limit the spread of the virus require changes in communication patterns regarding greetings and handshakes. This situation has prompted people to adopt greetings that do not require physical contact, such as "peaceful gestures" and "hands on the chest". Additionally, telecommunications has seen a notable emphasis on personal space and social distance as business meetings, conferences, and educational activities shift to virtual communication through platforms like Zoom, Cisco WebEx, Skype, and Microsoft Teams.
Effect of COVID-19 on online businesses
The COVID-19 pandemic forced many businesses to shut down or implement remote work, leading to significant layoffs. Families were confined to home in self-isolation and quarantine as effective measures to prevent the spread of COVID-19. Since the start of the pandemic, many businesses have experienced a drastic increase in online orders. Those facing declining sales had to adapt to new consumer spending habits.
Effects of COVID-19 on visual arts
Global shutdowns compelled artists, museums, and galleries to explore new ways to engage with the public. The Getty Museum initiated a social media challenge encouraging users to recreate artworks from their collection using household items and share the results online. Galleries like David Zwirner moved scheduled exhibits to virtual spaces. Artist Benjamin Cook's Social Distance Gallery used Instagram to host mini thesis exhibitions for students worldwide who had their graduation shows cancelled.
Increased engagement
A study of people's internet and social media engagement from July 2019 to 2020 indicated a 10.5% increase in active social media users. Instagram reported a 70% surge in viewership of live videos from February to March when lockdown measures began. Another study conducted in July, four months after the initial COVID-19 lockdowns, surveyed individuals on their primary reasons for using social media and other connectivity technologies. Eighty-three percent of respondents stated that social media helped them cope with COVID-19-related lockdowns. This response was the highest, surpassing other reasons such as education (76%), staying in touch with friends and family (74%), and work-related activities (67%). It underscores the crucial role of social media in people's lives during the pandemic.
Due to the pandemic, people reduced their social activities to safeguard others. Students transitioned to online learning, with many relying on social media as a new study tool. Researchers have identified both advantages and disadvantages of using social media for studying. UNESCO reported that school closures affecting 890 million students across 114 countries disrupted traditional education. Social media became indispensable for students during the pandemic, providing an effective means to collaborate and develop skills while at home. For instance, collaborating with peers on social media enables students to learn communication and teamwork skills as they work together to solve problems.
Use as entertainment
During the pandemic, numerous Internet memes emerged related to the COVID-19 situation. One notably popular Facebook group among young people, predominantly Generation Z, was "Zoom Memes for Self Quaranteens." This group humorously played on the pun of increased Zoom usage and self-quarantine among teenagers, amassing over 500,000 members as of April 2020. Members shared and created memes about the pandemic, providing entertainment for many young people who had transitioned to online schooling and needed ways to pass the time and cope with the situation.
Various social media challenges also gained traction during this period, serving to connect individuals and provide entertainment. One such example was the See10Do10 challenge, where participants performed and recreated 10 push-ups. Other challenges included sharing baby photos, participating in dance challenges, and voting in candy and chocolate March Madness bracket polls. Additionally, the V-pop hit "Ghen" by artists Erik and Min was remixed by lyricist Khắc Hưng to create "Ghen Cô Vy," which supported Vietnam's National Institute of Occupational Safety and Health with a song encouraging handwashing. The song went viral after dancer Quang Đăng posted a dance to it on TikTok, sparking the #GhenCoVyChallenge.
Teens also used TikTok to create videos sharing their experiences in quarantine, using humour to relate to their peers and keep themselves entertained. From January to March 2020, TikTok experienced a 48.3% increase in unique visitors.
Makeup artists on YouTube adapted their content to showcase makeup looks that accommodate mask-wearing during the pandemic.
In April, The Actors Fund organized a charity livestream of The Phantom of the Opera performance from London's Royal Albert Hall, which raised funds over 48 hours. Similarly, Phoebe Waller-Bridge's stage performance of Fleabag was streamed for charity and entertainment purposes. Authors, musicians, actors, actresses, and dancers collaborated on numerous concerts, live streams of past productions, readings, and performances that were either free or required an entrance fee or suggested charitable donation.
Spreading information
Social media has been used by news outlets, organizations, and the general public to disseminate both accurate information and misinformation about the pandemic. The CDC, WHO, medical journals, and healthcare organizations have been actively updating and sharing information across various platforms, often partnering with Facebook, Google Scholar, TikTok, and Twitter. Additionally, frontline healthcare professionals, such as emergency medicine physicians in New York hospitals, have utilized their social media accounts to provide firsthand accounts of combating COVID-19. A social listening study conducted from January 1 to March 19 indicated a significant increase in COVID-19-related conversations, with a 1,000% rise among healthcare professionals and a 2,500% increase among consumers. Despite hypotheses that increased public discourse and research would enhance trust in science during the pandemic, early studies reported null findings.
Accurate and reliable information disseminated through social media platforms plays a crucial role in combatting infodemics, misinformation, and rumors related to COVID-19. An article in The Lancet stated that real-time surveillance via social media can also serve as a valuable tool for public health agencies and organizations in implementing effective interventions.
Medical professionals have formed groups on social media to share information and insights on treating COVID-19. For instance, the PMG COVID-19 Subgroup on Facebook had approximately 30,000 members globally by the end of March, while the Physician Moms Group, established five years prior to the pandemic, experienced such high demand that Facebook's join feature temporarily malfunctioned.
Healthcare workers have used social media to educate the public about the challenges of wearing personal protective equipment (PPE) for extended shifts. Many participated in trends showcasing their faces post-shift, revealing marks and injuries caused by prolonged mask use.
Government use of social media
Governments have utilized social media extensively during the pandemic. The Chinese government, for example, has employed social media to disseminate scientific information about COVID-19 in accessible language to aid public understanding. In contrast, Australian health authorities have focused less on platforms popular among younger demographics, such as Instagram and TikTok, when sharing COVID-19 information. Researchers argue that effective governmental use of social media can mitigate public panic and contribute to societal stability. Governments should take proactive measures to communicate effectively on social media using language that resonates with the public, thereby reducing the spread of misinformation and fostering social stability based on evidence-backed information.
Role of World Health Organization and other international organizations
The COVID-19 pandemic significantly amplified the World Health Organization's (WHO) utilization of social media. In response to the declaration of COVID-19 as a Public Health Emergency, the WHO Information Network for Epidemics was established. This platform, staffed by 20 individuals, is dedicated to providing evidence-based responses to counteract rumours circulating across various social media platforms. It ensures that searches related to "coronavirus" on social media and Google direct users to reliable information sources such as the WHO website or the Centers for Disease Control and Prevention.
In April 2020, the United Nations launched the United Nations Communications Response initiative aimed at curbing the spread of misinformation during the pandemic. This initiative sought to mitigate hate speech and prevent disinformation from exacerbating political divisions online. Additionally, on 11 May 2020, the United Nations issued a Guidance Note on Addressing and Countering COVID-19-related Hate Speech, further targeting misinformation challenges online.
Limitations in the use of social media to spread information
Social media platforms do not uniformly impact all demographics. Older age groups often do not utilize social media as extensively as younger populations, preferring traditional communication channels. Approximately 69% of individuals aged 50 to 64 engage with some form of social media, highlighting the necessity to devise alternative methods to reach the remaining 31% of this demographic.
Social media platforms lack editorial oversight. Unlike peer-reviewed publications, there is no mandatory peer review process for content posted online, contributing to the proliferation of misinformation. Although social media platforms employ fact-checking teams, it remains impractical to manually verify every piece of content posted across these platforms.
Misinformation
The COVID-19 pandemic has been characterized as the first major "social-media infodemic" by MIT Technology Review, highlighting social media's pivotal role as the primary source of information and communication during this period. National Geographic reported a surge in "fake animal news" circulated on social media platforms during the pandemic. Research indicates a significant shift in information consumption patterns, with many individuals increasingly relying on social media over traditional search engines and browsers, thereby influencing behaviors and potentially undermining government response efforts to the virus.
There is preliminary evidence suggesting that public trust in science and scientists may influence the perceived credibility of COVID-19 misinformation. However, caution is advised in interpreting these findings pending further study.
Social media platforms, including Twitter, have become crucial channels for news updates, although concerns persist regarding the proliferation of misinformation disseminated through automated “bot” accounts. The challenge of distinguishing reliable information sources from misinformation has contributed to varying levels of skepticism and distrust among users.
Misinformation varies widely across countries and can be disseminated intentionally or inadvertently, exacerbating the severity of the pandemic.
The algorithms behind some social platforms may have inadvertently facilitated the spread of misinformation. Platforms came to rely more heavily on automated moderation when many human moderators were unable to work remotely during shelter-in-place orders or faced contractual restrictions, which compromised their ability to effectively manage content and prevent the dissemination of COVID-19 misinformation.
Fox News reported instances where social media groups spread rumors opposing vaccines and campaigning against 5G mobile phone networks. For example, the Stop 5G French group on Facebook shared an article from BBC News claiming, "It is becoming pretty clear that the Hunan coronavirus is an engineered bio-weapon that was either purposely or accidentally released." These online rumors led to mob attacks in India, mass poisonings in Iran, and vandalism of phone masts in the United Kingdom.
Social media has become a primary source of misinformation during the pandemic. In China, misinformation spread through platforms like Messenger included false reports that fireworks could kill the virus in the air, and that vinegar and indigowoad root could cure infections. This misinformation resulted in panic-buying of supplies, depleting resources needed by professionals. Additionally, outdated claims, such as the reported benefits of hydroxychloroquine, continued to circulate even after the WHO ended trials due to safety concerns, potentially risking patient safety.
Misinformation and conspiracy theories related to COVID-19 have been flagged, removed, or restricted by Facebook and Instagram on their respective social media platforms. For instance, Facebook has taken measures to curb false claims about cures and prevention methods. However, the efficacy of Facebook's third-party fact-checkers in limiting the spread of false content by notifying and providing accurate information to users remains varied.
A study conducted in May 2021 identified that a small number of individuals were responsible for a significant portion (85%) of false information surrounding COVID-19 vaccines circulating on social media, prompting actions such as content blocking for some of these prolific disseminators, colloquially known as the "Disinformation Dozen."
Older adults are often exposed to misinformation on social media platforms. Research by the WHO indicates that over half (59.1%) of those surveyed are aware of and can recognize COVID-19-related fake news. Misinformation also significantly affects young people, although 60.1% reportedly disregard false information encountered on social media. Addressing this challenge involves not only helping individuals identify misinformation but also mobilizing efforts to actively counter and mitigate its effects.
Usage by celebrities
Throughout the pandemic, many celebrities utilized social media platforms to engage with their fan bases and address the challenging circumstances through various means, including posts, acts of kindness, or participation in trends. Some celebrities faced swift public criticism for their posts, such as Gwyneth Paltrow, who deleted an Instagram post showcasing designer fashion, and Jared Leto, who sparked controversy with a Twitter post after emerging from a 12-day silent meditation retreat in the desert. Similarly, Ellen DeGeneres and Gal Gadot received backlash for their social media activities, with DeGeneres criticized for comments about quarantine life in her California mansion, and Gadot for organizing a celebrity rendition of John Lennon's "Imagine."
Several celebrities or their family members also used social media to announce their positive COVID-19 diagnoses, including Tom Hanks and Rita Wilson, and Idris Elba. Daniel Dae Kim used his platform to highlight his donation of plasma containing active antibodies to a Vitalant blood donation center, potentially aiding others affected by the virus. Notably, K-pop star Kim Jae-joong drew controversy with an Instagram post claiming a COVID-19 hospitalization, later revealed as an April Fools' Day prank that he said was meant to raise awareness about the pandemic.
Moreover, celebrities leveraged social media to promote charitable action during the pandemic. For instance, Ansel Elgort used his Instagram platform creatively, drawing attention to a GoFundMe campaign by actor Jeffrey Wright aimed at feeding frontline workers, albeit initially raising eyebrows with a provocative post captioned "OnlyFans LINK IN BIO."
Usage by world leaders
On 7 April 2020, former U.S. President Donald Trump utilized Twitter and the #AmericaWorksTogether hashtag to highlight companies aiding in mitigating the economic impacts of the virus by hiring employees and supplying health workers with necessary equipment.
Queen Elizabeth II and other members of the British royal family have also used social media to communicate with the public. Comments from the Queen were shared on the royal family's Instagram account, and in the lead-up to V-E Day, information based on the Queen's memories from a 1985 interview was posted on Instagram. Several royal family members participated in Zoom calls with nurses to commemorate International Nurses Day, which were subsequently posted on their YouTube page. Prince William and Catherine Middleton allowed their Instagram account to be "taken over" for 24 hours by Shout 85258, the UK's first 24/7 crisis text line, which they launched with Prince Harry and Meghan Markle in May 2019. The Dutch royal family used their Instagram account to share a video of King Willem-Alexander, Queen Máxima and their teenage daughters clapping for first responders, accompanied by a brief speech by the King.
Censorship
In Turkey, more than 400 individuals were arrested for posting "provocative" messages about the pandemic on social media. Chinese social media networks, such as WeChat, have reportedly censored terms related to the pandemic since 31 December 2019. Notably, Dr. Li Wenliang was censored by the Wuhan police for posting about the pandemic in a private group chat. Doctors in China were instructed by local authorities to delete social media posts appealing for donations of medical supplies.
NetBlocks, a civil society group advocating for digital rights, cybersecurity, and Internet governance, reported internet outages in Wuhan during the pandemic. They also noted that the Farsi version of Wikipedia was blocked for 24 hours in Iran. The VPN company Surfshark reported a roughly 50% drop-off in its network usage in Iran after the WHO declared the pandemic on 11 March.
In an August 2024 letter to the American House Judiciary committee, Meta chairman Mark Zuckerberg wrote "In 2021, senior officials from the Biden Administration, including the White House, repeatedly pressured our teams for months to censor certain COVID-19 content, including humor and satire" and continued that "[he feels] strongly that we should not compromise our content standards due to pressure from any Administration in either direction – and we’re ready to push back if something like this happens again".
References
External links
Coronavirus misinformation on social media, CBS News
How to spot fake coronavirus news on social media, Los Angeles Times
2020 in Internet culture
COVID-19 pandemic in popular culture
Internet memes related to the COVID-19 pandemic
Social media | Impact of the COVID-19 pandemic on social media | Technology | 4,574 |
10,453,104 | https://en.wikipedia.org/wiki/Google%20Pinyin | Google Pinyin IME is a discontinued input method developed by Google China Labs. The tool was made publicly available on April 4, 2007. Aside from Pinyin input, it also includes stroke count method input. As of March 2019, Google Pinyin has been discontinued and the download page has been deleted. However, Google Pinyin IME can still be obtained from https://dl.google.com/pinyin/v2/GooglePinyinInstaller.exe.
Availability
Windows
Google Pinyin was available for Windows XP, Windows Vista, Windows 7, Windows 8 and Windows 10 version 1511 or below. Both 32-bit and 64-bit versions were available.
Android
Google released a Pinyin IME system for Android 1.5 or newer in March 2009. The Android Pinyin IME supports user dictionary synchronization with the desktop version.
Linux
By the end of 2008, more than 20% users of Google Pinyin wanted a Linux version of the input method, which was answered in the FAQ section with a general PR phrase "We always strive to provide a better user experience and we never stop our hard work to fulfill the customer needs".
However, the Linux user community ported the Android Google Pinyin IME to the non-Android Linux IME framework SCIM in the scim-googlepinyin module.
After Christmas 2009, the Google Pinyin module for SCIM also became available for the Nokia Maemo 5 platform, which meant it could be downloaded to any Nokia N900 phone through the official application repositories.
Mac OS X
A closed beta version of Google Pinyin for Mac OS X was leaked on September 14, 2010. A public version was never made available.
Copyright infringement
After Google Pinyin was initially released in April 2007, it was soon discovered that Google Pinyin's dictionary database contained employee names of Sogou Pinyin, an indication that the dictionary was taken from Sogou, one of Google's competitors in the Chinese Internet market. On April 8, 2007, Google admitted that they used "non-Google database resources". Shortly thereafter, a new version of Google Pinyin was released which no longer appeared to be based on Sogou's database.
Synchronization failure
Google Pinyin for Windows has been failing to synchronize for years because of the deprecation of Google ClientLogin authentication. A client with an alternative authentication method has not been announced yet. Google Pinyin for Android can still synchronize (within this platform only).
See also
Pinyin input method
Google IME
Google Japanese Input
Microsoft Pinyin IME
Sogou Pinyin
References
External links
Google Pinyin Website
Official Google Group (Old)
Version History
Pinyin
CJK input methods
Freeware
Windows text-related software
Android (operating system) software
Han pinyin input | Google Pinyin | Technology | 564 |
34,519,215 | https://en.wikipedia.org/wiki/Qattara%20Depression%20Project | The Qattara Depression Project or Qattara Project is a macro-engineering project concept in Egypt. Rivalling the Aswan High Dam in scope, the intention is to develop the hydroelectric potential of the Qattara Depression by creating an artificial lake.
The Qattara depression is a region that lies below sea level on average and is currently a vast, uninhabited desert. Water could be let into the area by connecting it to the Mediterranean Sea with tunnels and/or canals. The inflowing water would then evaporate quickly because of the desert climate. A controlled balance of inflow and evaporation would produce a continuous flow to generate hydroelectricity. Eventually, the depression would become a hypersaline lake or a salt pan as the evaporating seawater leaves the salt it contains behind. This would return the Qattara Depression to its current state but with its sabkha soils tens of meters higher, allowing for salt mining.
The concept calls for excavating a large canal or tunnel, its length depending on the route chosen, to bring seawater from the Mediterranean Sea into the area. An alternative would be a 320 kilometre (200 mile) pipeline north-east to the freshwater Nile River south of Rosetta. In comparison, Egypt's Suez Canal is currently 193 kilometres in length. By balancing the inflow and evaporation, the lake's water level can be held constant. Several proposed lake levels are 70, 60 and 50 meters below sea level.
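The balance of inflow and evaporation described above lends itself to a back-of-the-envelope check. A minimal Python sketch, assuming a lake area of roughly 12,000 km², an evaporation rate of 1.8 m/year and a 90% turbine efficiency (all illustrative assumptions, not figures from the project studies; only the 60 m head follows from one of the proposed lake levels):

```python
# Back-of-the-envelope steady-state power for an evaporation-fed hydro scheme.
# Lake area, evaporation rate and efficiency are illustrative assumptions.
SECONDS_PER_YEAR = 3.156e7

lake_area_m2 = 12_000e6      # assumed lake surface at the proposed level
evap_m_per_year = 1.8        # assumed desert evaporation rate
head_m = 60.0                # drop from sea level to the proposed lake level
rho_seawater = 1025.0        # kg/m^3
g = 9.81                     # m/s^2
efficiency = 0.9             # assumed turbine/generator efficiency

# At steady state, the inflow must replace exactly what evaporates.
inflow_m3_per_s = lake_area_m2 * evap_m_per_year / SECONDS_PER_YEAR

# Hydropower: P = rho * g * Q * H * eta
power_w = rho_seawater * g * inflow_m3_per_s * head_m * efficiency

print(f"steady inflow: {inflow_m3_per_s:.0f} m^3/s")   # ~680 m^3/s
print(f"continuous power: {power_w / 1e6:.0f} MW")     # a few hundred MW
```

With these assumed inputs the answer lands in the same order of magnitude as the 670 MW cited later for the first project phase.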
Proposals
Roudaire
The first documented suggestion for flooding large parts of the Sahara desert was by French geographer François Élie Roudaire, whose proposal inspired the writer Jules Verne's final book Invasion of the Sea. Plans to use the Qattara Depression for the generation of electricity reportedly date back to 1912 and the Berlin geographer Albrecht Penck.
Ball
The subject was first discussed in more detail by John Ball in 1927. Ball also made the first preliminary calculations on the achievable filling rate, inflow rate, electricity production and salinity.
Qattara's nature as a depression seems to have been unknown until after the First World War. The credit for its discovery goes to John Ball, English director of the Survey of Egypt, who oversaw the mapping of the depression in 1927 and who first suggested using it to generate hydroelectricity.
In 1957 the American Central Intelligence Agency proposed to President Dwight Eisenhower that peace in the Middle East could be achieved by flooding the Qattara Depression. The resulting lagoon, according to the CIA, would have four benefits:
It would be spectacular and peaceful.
It would materially alter the climate in adjacent areas.
It would provide work during construction and living areas after completion.
It would get Egyptian president Gamel Abdel Nasser's "mind on other matters" because "he need[ed] some way to get off the Soviet Hook."
Bassler
From 1964 onward Prof. Friedrich Bassler led the international "Board of Advisers" which was responsible for planning and financing activities of the project. He also advised the Egyptian government on the matter from 1975 onward. He was appointed to make a first preliminary feasibility study by the German Federal Ministry of Economics in Bonn.
Bassler was the driving force behind the Qattara Project for nearly a decade. Halfway through the seventies, a team of eight mostly German scientists and technicians was working on the planning of the first hydro-solar depression power station in the world. The first "Bassler study" of 1973 laid the basis for the Egyptian government to commission a study of its own. It decided in 1975 that Bassler and a group of companies known as "Joint Venture Qattara" should conduct a feasibility study of the project.
The project concept was: Mediterranean water should be channeled through a canal or tunnel towards the Qattara Depression, which lies below sea level. This water would then fall into the depression through penstocks for electricity generation. The water would evaporate quickly because of the very dry and hot weather once in the depression. This would allow for more water to enter the depression and would create a continuous source of electricity.
A canal 60 meters deep would connect the Mediterranean with the depression's edge at the narrowest point between the two. This canal would deliver water to the depression and serve as a shipping route to the Qattara lake, with a harbor and fishing grounds in the depression. The depression was to be filled to a height of 60 m below sea level, which would take a total of 10 years. After that, the incoming flow would balance the outgoing evaporation and the lake level would stop changing.
In the first phase of the project the Qattara 1 station was to generate 670 megawatts. The second phase was to generate an additional 1,200 megawatts. A pumped-storage hydroelectricity facility would increase the peak production capacity with another 4,000 megawatts, totaling about 5,800 megawatts.
The core problem of the project was the cost and technical difficulty of diverting seawater to the depression. Calculations showed that digging a canal or tunnel would be too expensive. Demining would be needed to remove some of the millions of unexploded ordnance left from World War II in Northern Egypt. Consequently, use of nuclear explosives to excavate the canal was another proposal by Bassler. This plan called for the detonation in boreholes of 213 nuclear devices, each yielding 1.5 megatons (i.e. 100 times that of the atomic bomb used against Hiroshima). This fit within the Atoms for Peace program proposed by President Dwight Eisenhower in 1953. Evacuation plans cited numbers of at least 25,000 evacuees. The shock waves from the explosion might also affect the tectonically unstable Red Sea Rift located just 450 km away from the blast site. Another danger was increased coast erosion because sea currents could change in such a way that even very remote coastal areas would erode. Because of the concerns about using a nuclear solution the Egyptian government turned down the plan, and the project's stakeholders gave up on the project.
Continued interest
Since then, scientists and engineers still occasionally explore the viability of such a project, as a key to resolving economic, population, and ecological stresses in the area, but the project has yet to be undertaken.
On April 11, 2023, Egypt announced a contract with EGIT Consulting to study the feasibility of the project.
As of 2024, Saudi Arabia and the UAE are exploring projects to mine lithium for electric vehicles from existing onshore salt pans as well as salt pans supplemented with Persian Gulf seawater. Although the added value of additional table salt on global markets is low, the clean energy boom presents a unique lithium opportunity if a scheme such as the Qattara Depression Project were to materialize. As of December 2024, no such project is yet being seriously considered.
See also
Arpa–Sevan tunnel
Sahara Sea
Salton Sea
References
Citations
Bibliography
M. A. Eizel-Din and M. B. Khalil.: "Egypt's Qattara Depression potential hydropower". – In: Proceedings of the international conference "Handshake across the Jordan – Water and Understanding in the Middle East". In: Forum Umwelttechnik und Wasserbau, Nr. 10. IUP – Innsbruck University Press, Innsbruck, 2011, pp. 89-96
Macro-engineering
Proposed energy infrastructure in Egypt
Water supply and sanitation in Egypt | Qattara Depression Project | Engineering | 1,515 |
8,031,769 | https://en.wikipedia.org/wiki/Vlaams%20Instituut%20voor%20Biotechnologie | VIB is a research institute located in Flanders, Belgium. It was founded by the Flemish government in 1995, and became a full-fledged institute on 1 January 1996. The main objective of VIB is to strengthen the excellence of Flemish life sciences research and to turn the results into new economic growth. VIB spends almost 80% of its budget on research activities, while almost 12% is spent on technology transfer activities and stimulating the creation of new businesses, in addition VIB spends approximately 2% on socio-economic activities. VIB is member of EU-LIFE, an alliance of leading life sciences research centres in Europe.
The institute is led by Christine Durinx and Jérôme Van Biervliet. Ajit Shetty is chairman of the board of directors.
Goals
VIB's mission is to conduct frontline biomolecular research in life sciences for the benefit of scientific progress and the benefit of society. The strategic goals of the VIB are:
Strategic basic research
Technology transfer policy to transfer the inventions to consumers and patients
Scientific information for the general public
Research Centers
VIB scientists work on the normal and abnormal (pathological) processes occurring in cells, organs and organisms (humans, plants, microorganisms). Instead of relocating scientists to a new campus, VIB researchers work in research departments on six Flemish campuses: Ghent University, KU Leuven, University of Antwerp, Vrije Universiteit Brussel, IMEC and Hasselt University.
Ghent University:
VIB Inflammation Research Center, UGent (Bart Lambrecht)
VIB Center for Plant Systems Biology, UGent (Dirk Inzé)
VIB Medical Biotechnology Center, UGent (Nico Callewaert)
Institute of Plant Biotechnology Outreach (IPBO), UGent (Marc Van Montagu)
KU Leuven:
VIB Center for Cancer Biology, KU Leuven (Scientific directors: Diether Lambrechts and Chris Marine)
VIB Center for Brain & Disease Research, KU Leuven (Scientific directors: Patrik Verstreken and Joris de Wit)
VIB Center for Microbiology, KU Leuven (Scientific director: Kevin Verstrepen)
IMEC Campus
NERF, a joint research initiative between IMEC, VIB and KU Leuven
University of Antwerp:
VIB Department of Molecular Genetics, University of Antwerp (Rosa Rademakers)
VIB Center for Molecular Neurology, University of Antwerp
Vrije Universiteit Brussel:
VIB Structural Biology Research Center, Vrije Universiteit Brussel (Jan Steyaert)
VIB Laboratory Myeloid Cell Immunology, Vrije Universiteit Brussel (Jo Van Ginderachter)
VIB Nanobody Service Facility, Vrije Universiteit Brussel
Hasselt University Campus
Service facilities
VIB has established several core facilities focused on advanced technologies, which make high-throughput technologies available to academic and industrial researchers in Flanders.
VIB BioInformatics Training and Service facility
VIB Compound Screening service Facility, UGent
VIB Genetic Service Facility, University of Antwerp
VIB Nucleomics Core, KU Leuven
VIB Nanobody Service Facility, Vrije Universiteit Brussel
VIB Protein Service Facility, UGent
VIB Proteomics Expertise Center, UGent
VIB Bio Imaging Core, UGent and KU Leuven
VIB Metabolomics Core, UGent
Spin-offs
VIB was involved in the creation of spin-offs from academic research groups, such as Ablynx, DevGen, CropDesign, ActoGeniX, Pronota (formerly Peakadilly), Agrosavfe, Multiplicom, Q-biologicals, SoluCel, Aphea.Bio and Aelin Therapeutics.
See also
Belgian Society of Biochemistry and Molecular Biology
BIOMED (University of Hasselt)
EMBL
Flanders Investment and Trade
Flemish institute for technological research
GIMV
Herman Van Den Berghe
Institute for the promotion of Innovation by Science and Technology (IWT)
Jozef Schell
Lisbon Strategy
Marc Van Montagu
Participatiemaatschappij Vlaanderen
Raymond Hamers
Science and technology in Flanders
Walter Fiers
Wellcome Trust
References
Sources
J. Comijn, P. Raeymaekers, A. Van Gysel, M. Veugelers, Today = Tomorrow : a tribute to life sciences research and innovation : 10 years of VIB, Snoeck, 2006,
Biotechnology industry in Belgium
External links
Official website
Bioinformatics organizations
Biological research institutes
Biology societies
Education in Belgium
Educational organisations based in Belgium
Flanders
Genetics organizations
Gene banks
Information technology organizations based in Europe
International research institutes
International scientific organizations based in Europe
Medical and health organisations based in Belgium
Molecular biology institutes
Molecular biology organizations
Scientific organisations based in Belgium
Research institutes
Research institutes in Belgium
Science and technology in Belgium
Science and technology in Europe
Systems science institutes
Vrije Universiteit Brussel | Vlaams Instituut voor Biotechnologie | Chemistry,Biology | 997 |
316,824 | https://en.wikipedia.org/wiki/Nozzle | A nozzle is a device designed to control the direction or characteristics of a fluid flow (especially to increase velocity) as it exits (or enters) an enclosed chamber or pipe.
A nozzle is often a pipe or tube of varying cross-sectional area, and it can be used to direct or modify the flow of a fluid (liquid or gas). Nozzles are frequently used to control the rate of flow, speed, direction, mass, shape, and/or the pressure of the stream that emerges from them. In a nozzle, the velocity of the fluid increases at the expense of its pressure energy.
Types
Jet
A gas jet, fluid jet, or hydro jet is a nozzle intended to eject gas or fluid in a coherent stream into a surrounding medium. Gas jets are commonly found in gas stoves, ovens, or barbecues. Gas jets were commonly used for light before the development of electric light. Other types of fluid jets are found in carburetors, where smooth calibrated orifices are used to regulate the flow of fuel into an engine, and in jacuzzis or spas.
Another specialized jet is the laminar jet. This is a water jet that contains devices to smooth out the pressure and flow, and gives laminar flow, as its name suggests. This gives better results for fountains.
The foam jet is another type of jet which uses foam instead of a gas or fluid.
Nozzles used for feeding hot blast into a blast furnace or forge are called tuyeres.
Jet nozzles are also used in large rooms where the distribution of air via ceiling diffusers is not possible or not practical. Diffusers that use jet nozzles are called jet diffusers; they are typically arranged in side-wall areas to distribute air. When the temperature difference between the supply air and the room air changes, the supply air stream is deflected upwards, to supply warm air, or downwards, to supply cold air.
High velocity
Frequently, the goal of a nozzle is to increase the kinetic energy of the flowing medium at the expense of its pressure and internal energy.
Nozzles can be described as convergent (narrowing down from a wide diameter to a smaller diameter in the direction of the flow) or divergent (expanding from a smaller diameter to a larger one). A de Laval nozzle has a convergent section followed by a divergent section and is often called a convergent-divergent (CD) nozzle ("con-di nozzle").
Convergent nozzles accelerate subsonic fluids. If the nozzle pressure ratio is high enough, then the flow will reach sonic velocity at the narrowest point (i.e. the nozzle throat). In this situation, the nozzle is said to be choked.
Increasing the nozzle pressure ratio further will not increase the throat Mach number above one. Downstream (i.e. external to the nozzle) the flow is free to expand to supersonic velocities; however, Mach 1 can be a very high speed for a hot gas because the speed of sound varies as the square root of absolute temperature. This fact is used extensively in rocketry where hypersonic flows are required and where propellant mixtures are deliberately chosen to further increase the sonic speed.
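The temperature dependence noted above is easy to quantify with the ideal-gas expression for the speed of sound, a = sqrt(gamma * R * T). A minimal Python sketch; the property values chosen for the hot gas are illustrative assumptions:

```python
from math import sqrt

def speed_of_sound(gamma: float, R: float, T: float) -> float:
    """Ideal-gas speed of sound a = sqrt(gamma * R * T), in m/s."""
    return sqrt(gamma * R * T)

# Cool ambient air versus a hot combustion gas: the specific-heat ratio
# and gas constant assumed for the hot gas are illustrative only.
print(speed_of_sound(1.4, 287.0, 288.0))   # air at 288 K: ~340 m/s
print(speed_of_sound(1.3, 290.0, 2000.0))  # hot gas at 2000 K: ~870 m/s
```

A choked throat at Mach 1 in a 2000 K combustion gas therefore already corresponds to roughly two and a half times the speed of sound in cool ambient air.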
Divergent nozzles slow fluids if the flow is subsonic, but they accelerate sonic or supersonic fluids.
Convergent-divergent nozzles can therefore accelerate fluids that have choked in the convergent section to supersonic speeds. This CD process is more efficient than allowing a convergent nozzle to expand supersonically externally.
The shape of the divergent section also ensures that the direction of the escaping gases is directly backwards, as any sideways component would not contribute to thrust.
Propelling
A jet exhaust produces thrust from the energy obtained from burning fuel. The hot gas is at a higher pressure than the outside air and escapes from the engine through a propelling nozzle, which increases the speed of the gas.
Exhaust speed needs to be faster than the aircraft speed in order to produce thrust but an excessive speed difference wastes fuel (poor propulsive efficiency). Jet engines for subsonic flight use convergent nozzles with a sonic exit velocity. Engines for supersonic flight, such as used for fighters and SST aircraft (e.g. Concorde) achieve the high exhaust speeds necessary for supersonic flight by using a divergent extension to the convergent engine nozzle which accelerates the exhaust to supersonic speeds.
Rocket motors maximise thrust and exhaust velocity by using convergent-divergent nozzles with very large area ratios and therefore extremely high pressure ratios. Mass flow is at a premium because all the propulsive mass is carried with the vehicle, and very high exhaust speeds are desirable.
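The premium on mass flow and exhaust speed described above is captured by the standard one-dimensional thrust equation (a textbook relation, not specific to any particular engine discussed here):

$$ F = \dot{m}\, v_e + (p_e - p_a)\, A_e , $$

where $\dot{m}$ is the propellant mass flow rate, $v_e$ the exhaust velocity, $p_e$ the nozzle exit pressure, $p_a$ the ambient pressure, and $A_e$ the nozzle exit area. The pressure term vanishes when the nozzle is ideally expanded ($p_e = p_a$), leaving thrust proportional to mass flow times exhaust speed.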
Magnetic
Magnetic nozzles have also been proposed for some types of propulsion, such as VASIMR, in which the flow of plasma is directed by magnetic fields instead of walls made of solid matter.
Spray
Many nozzles produce a very fine spray of liquids.
Atomizer nozzles are used for spray painting, perfumes, carburetors for internal combustion engines, spray on deodorants, antiperspirants and many other similar uses.
Air-aspirating nozzles use an opening in the cone shaped nozzle to inject air into a stream of water based foam (CAFS/AFFF/FFFP) to make the concentrate "foam up". Most commonly found on foam extinguishers and foam handlines.
Swirl nozzles inject the liquid in tangentially, and it spirals into the center and then exits through the central hole. Due to the vortexing this causes the spray to come out in a cone shape.
Vacuum
Vacuum nozzles are used in vacuum cleaners and come in several different shapes.
Shaping
Some nozzles are shaped to produce a stream that is of a particular shape. For example, extrusion molding is a way of producing lengths of metals or plastics or other materials with a particular cross-section. This nozzle is typically referred to as a die.
See also
Fire hose#Forces on fire hoses and nozzles
Rocket engine nozzle
SERN
References
External links
Fluid mechanics | Nozzle | Engineering | 1,286 |
76,683,769 | https://en.wikipedia.org/wiki/NGC%204680 | NGC 4680 is a spiral/lenticular galaxy in the constellation Virgo. It is estimated to be 106 million light-years from the Milky Way and has a diameter of about 45,000 ly. In the same area of the sky there are, among other things: the galaxies NGC 4700 and NGC 4708. NGC 4680 was discovered on May 27, 1835, by John Herschel using an 18-inch reflecting telescope, who described it as "eF, S, has one or two small stars entangled in it".
One supernova has been observed in NGC 4680. SN 1997bp (type Ia, mag. 13.8) was discovered by Robert Evans on 6 April 1997.
See also
List of NGC objects (4001–5000)
References
External links
Spiral galaxies
Lenticular galaxies
Virgo (constellation)
4680
043118
-02-33-007
Astronomical objects discovered in 1835
Discoveries by John Herschel
12443-1121 | NGC 4680 | Astronomy | 201 |
52,452,554 | https://en.wikipedia.org/wiki/NGC%206540 | NGC 6540 is a globular cluster of stars in the southern constellation Sagittarius, positioned about 4.66° away from the Galactic Center. It was discovered by German-British astronomer William Herschel on May 24, 1784, with an 18.7-inch mirror telescope, who described the cluster as "pretty faint, not large, crookedly extended, easily resolvable". It has an apparent visual magnitude of 9.3 with an angular diameter of about 9.5 arcminutes.
The cluster is located at a distance of from the Sun, and from the Galactic Center. It was originally thought to be an open cluster before being designated a globular. The cluster includes a peculiar X-ray source of uncertain type.
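The angular diameter quoted above can be converted to a physical size with the small-angle relation D ≈ d·θ. A minimal Python sketch; because the article's distance figure did not survive extraction, the distance used below is purely a placeholder assumption:

```python
from math import radians

def physical_diameter_ly(distance_ly: float, ang_diam_arcmin: float) -> float:
    """Small-angle approximation D = d * theta, with theta in radians."""
    theta_rad = radians(ang_diam_arcmin / 60.0)  # arcmin -> degrees -> radians
    return distance_ly * theta_rad

# Placeholder distance only: the article's value was lost in extraction.
assumed_distance_ly = 17_000
print(physical_diameter_ly(assumed_distance_ly, 9.5))  # ~47 light-years
```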
References
External links
NGC 6540
Robert Burnham, Jr, Burnham's Celestial Handbook: An observer's guide to the universe beyond the solar system, vol 3, p. 1556
Globular clusters
Astronomical X-ray sources
Sagittarius (constellation)
6540 | NGC 6540 | Astronomy | 211 |
8,207,512 | https://en.wikipedia.org/wiki/Biological%20response%20modifier | Biological response modifiers (BRMs) are substances that modify immune responses. They can be endogenous (produced naturally within the body) or exogenous (as pharmaceutical drugs), and they can either enhance an immune response or suppress it. Some of these substances arouse the body's response to an infection, and others can keep the response from becoming excessive. Thus they serve as immunomodulators in immunotherapy (therapy that makes use of immune responses), which can be helpful in treating cancer (where targeted therapy often relies on the immune system being used to attack cancer cells) and in treating autoimmune diseases (in which the immune system attacks the self), such as some kinds of arthritis and dermatitis. Most BRMs are biopharmaceuticals (biologics), including monoclonal antibodies, interleukin 2, interferons, and various types of colony-stimulating factors (e.g., CSF, GM-CSF, G-CSF). "Immunotherapy makes use of BRMs to enhance the activity of the immune system to increase the body's natural defense mechanisms against cancer", whereas BRMs for rheumatoid arthritis aim to reduce inflammation.
Some conditions which biologics are used to treat are rheumatic disorders such as psoriatic arthritis, ankylosing spondylitis and non-radiographic axial spondyloarthritis, and inflammatory bowel disease.
Medical uses
Biologics provide immunotherapy and can function as disease-modifying antirheumatic drugs.
Biologics can generally be grouped by their "class", that is, their specific mechanism of action and affected targets. Some classes are TNF inhibitors, anti-IL-17A antibodies, and IL-23 antibodies.
For people with moderate to severe psoriatic arthritis, biologics can provide some relief of the symptoms, and even slow down or halt the progression of the disease. Classes of biologics typically used for psoriatic arthritis include TNF inhibitors, anti-IL17-A antibodies, IL-23 antibodies, and those that act on both IL-12 and IL-23.
Biologics can treat inflammatory bowel disease. Classes of biologics typically used for inflammatory bowel disease include TNF inhibitors, and anti-CD28 antibodies.
Contraindications
Biologics are generally used after considering other less invasive treatments. Before using biologics to treat psoriasis, treatment with topical moisturizers or steroids, or light therapy may provide relief. Other drugs which may provide relief include acitretin, ciclosporin, and methotrexate, but since these drugs have their own major side effects, doctors and patients should discuss whether to try one of these or a biologic first.
Most biologics are injections so are not appropriate for use by someone with intense fear of needles. A person with any infection should not use biologics.
Other contraindications for biologics include cancer, certain neurologic disorders, being pregnant or breastfeeding, history of heart failure, or history of tuberculosis.
Adverse effects
Common adverse effects of biologic administration are injection site reactions including redness, pain, and itching. Other adverse effects include headache, skin reactions, respiratory tract infection, and urinary tract infection. Adverse effects may be class-dependent, and so switching to a biologic of another class may ameliorate those effects.
Potential serious adverse effects include allergic reactions, liver damage, cancer, and serious infections including tuberculosis, pneumonia, staph infection, and fungal infection.
Patients with systemic lupus erythematosus (SLE) who are treated with the standard of care, including biologic response modifiers, experience a higher risk of mortality and opportunistic infection compared to the general population.
Examples
Biopharmaceuticals
Biologics for immunosuppression include adalimumab, certolizumab, etanercept, golimumab, infliximab, ixekizumab, belimumab, and ustekinumab.
Natural BRMs
Extracts from some medicinal mushrooms are natural biological response modifiers.
Manufacturing
Genetically engineered cell cultures in pharmaceutical labs produce the biologics.
History
Biologics are the second generation of biopharmaceutical products. The first generation were biopharmaceutical products that could be extracted from organisms without the biotechnology of the Information Age, such as blood for transfusion, early insulin extracted from animals, and vaccines grown in eggs.
When biologic drugs became available they led to significant changes in the management of various autoimmune diseases.
Society and culture
Term
The term "biologic therapy" is nonspecific, and can refer to any biopharmaceutical medication. However, many sources use the term to refer to immunotherapy treatments.
The explanation for this is that while "biologic" or "biopharmaceutical" refers to the chemical composition of medications which might be used to treat a range of medical conditions, when the term "biologic" became popular, many biologic medications available provided immunosuppression.
Legal status
Biosimilar is a term used to describe a biopharmaceutical product which seems so close in composition and effect to another that they are functionally identical, analogous to generic drugs. In this context, some publications describe "biologics" as "biosimilars".
Economics
Biologic drugs are expensive. In the United States treatment with biologic drugs typically costs –6,000 per month, compared to –600 per month for conventional (small-molecule) DMARDs.
References
Deepak A. Rao; Le, Tao; Bhushan, Vikas. First Aid for the USMLE Step 1 2008 (First Aid for the Usmle Step 1). McGraw-Hill Medical. .
Immune system
Immunotherapy
Disease-modifying antirheumatic drugs
Immunosuppressants
Monoclonal antibodies | Biological response modifier | Biology | 1,269 |
23,522,474 | https://en.wikipedia.org/wiki/C19H22N2O | The molecular formula C19H22N2O may refer to:
Amedalin
Cinchonidine
Cinchonine
Ketipramine
Normacusine B
Noxiptiline
Rhazinilam
Tombozine
Vinburnine (Eburnamonine) | C19H22N2O | Chemistry | 72 |
24,211,366 | https://en.wikipedia.org/wiki/ALIL%20pseudoknot | ALIL pseudoknot is an RNA element that induces frameshifting in bacteria. The expression of a minority of genes requires frameshifting to occur, and the frequency of frameshifting is increased by an RNA secondary structure located on the 3' side of the shift site. This structure can be either a pseudoknot or a stem-loop and acts as a physical barrier to mRNA translocation, thereby causing ribosome pausing.
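The distinction between a pseudoknot and a simple stem-loop can be made concrete in dot-bracket notation, where a second bracket layer marks a helix whose base pairs cross those of the first. A minimal Python sketch using a generic H-type illustration, not the actual ALIL sequence or structure:

```python
# Dot-bracket notation with two bracket layers: '()' marks stem 1 and
# '[]' marks stem 2. In a pseudoknot the two helices cross; in a plain
# stem-loop arrangement they nest or sit side by side without crossing.

def paired_positions(structure: str, open_ch: str, close_ch: str):
    """Return (i, j) index pairs for one bracket layer."""
    stack, pairs = [], []
    for i, ch in enumerate(structure):
        if ch == open_ch:
            stack.append(i)
        elif ch == close_ch:
            pairs.append((stack.pop(), i))
    return pairs

def is_pseudoknot(structure: str) -> bool:
    """True if any '()' pair crosses any '[]' pair."""
    layer1 = paired_positions(structure, "(", ")")
    layer2 = paired_positions(structure, "[", "]")
    return any(i < k < j < l or k < i < l < j
               for i, j in layer1 for k, l in layer2)

print(is_pseudoknot("((((...[[[[...))))...]]]]"))  # True: the helices cross
print(is_pseudoknot("((((...))))...[[[[...]]]]"))  # False: the helices nest
```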
The ALIL pseudoknot was identified through comparative analysis of a class of transposable elements belonging to the insertion sequence 3 (IS3) family and has been shown to be conserved across a number of bacterial species. This pseudoknot stimulates programmed -1 ribosomal frameshifting (PRF-1), which in turn stimulates the expression of transposase, an enzyme required for transposition. Mutagenesis and chemical probing were used to determine the secondary structure of this pseudoknot, and it has been proposed that the pseudoknot is formed by interactions between an apical loop and an internal loop.
References
External links
RNA
Bacteria | ALIL pseudoknot | Biology | 220 |
1,741,453 | https://en.wikipedia.org/wiki/Parent%E2%80%93offspring%20conflict | Parent–offspring conflict (POC) is an expression coined in 1974 by Robert Trivers. It is used to describe the evolutionary conflict arising from differences in optimal parental investment (PI) in an offspring from the standpoint of the parent and the offspring. PI is any investment by the parent in an individual offspring that decreases the parent's ability to invest in other offspring, while the selected offspring's chance of surviving increases.
POC occurs in sexually reproducing species and is based on a genetic conflict: Parents are equally related to each of their offspring and are therefore expected to equalize their investment among them. Offspring are only half or less related to their siblings (and fully related to themselves), so they try to get more PI than the parents intended to provide even at their siblings' disadvantage.
However, POC is limited by the close genetic relationship between parent and offspring: If an offspring obtains additional PI at the expense of its siblings, it decreases the number of its surviving siblings. Therefore, any gene in an offspring that leads to additional PI decreases (to some extent) the number of surviving copies of itself that may be located in siblings. Thus, if the costs in siblings are too high, such a gene might be selected against despite the benefit to the offspring.
The problem of specifying how an individual is expected to weigh a relative against itself has been examined by W. D. Hamilton in 1964 in the context of kin selection. Hamilton's rule says that altruistic behavior will be positively selected if the benefit to the recipient multiplied by the genetic relatedness of the recipient to the performer is greater than the cost to the performer of a social act. Conversely, selfish behavior can only be favoured when Hamilton's inequality is not satisfied. This leads to the prediction that, other things being equal, POC will be stronger under half siblings (e.g., unrelated males father a female's successive offspring) than under full siblings.
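In symbols, Hamilton's rule states that a gene for a social act is favoured when

$$ rB > C , $$

where $r$ is the genetic relatedness of the actor to the recipient, $B$ is the fitness benefit to the recipient, and $C$ is the fitness cost to the actor. As a worked example with illustrative numbers: an act costing the actor $C = 1$ unit of fitness while giving a sibling $B = 3$ units is favoured among full siblings ($r = 0.5$, since $0.5 \times 3 = 1.5 > 1$) but disfavoured among half siblings ($r = 0.25$, since $0.25 \times 3 = 0.75 < 1$). This is the arithmetic behind the prediction that POC is stronger among half siblings.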
Occurrence
In plants
In plants, POC over the allocation of resources to the brood members may affect both brood size (number of seeds matured within a single fruit) and seed size. Concerning brood size, the most economic use of maternal resources is achieved by packing as many seeds as possible in one fruit, i.e., minimizing the cost of packing per seed. In contrast, offspring benefits from low numbers of seeds per fruit, which reduces sibling competition before and after dispersal. Conflict over seed size arises because there usually exists an inverse exponential relationship between seed size and fitness, that is, the fitness of a seed increases at a diminishing rate with resource investment but the fitness of the maternal parent has an optimum, as demonstrated by Smith and Fretwell (see also marginal value theorem). However, the optimum resource investment from the offspring's point of view would be the amount that optimizes its inclusive fitness (direct and indirect fitness), which is higher than the maternal parent's optimum.
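The Smith and Fretwell argument can be stated compactly; the following is the textbook formulation rather than a formula from the studies cited here. Let $f(s)$ be offspring fitness as a function of seed size $s$, increasing at a diminishing rate, and let the mother divide a fixed resource pool $R$ among $R/s$ seeds, so that her fitness is

$$ W(s) = \frac{R}{s} f(s), \qquad \frac{dW}{ds} = 0 \;\Rightarrow\; f'(s^{*}) = \frac{f(s^{*})}{s^{*}} . $$

The maternal optimum $s^{*}$ is thus the point where a ray from the origin is tangent to $f(s)$, exactly as in the marginal value theorem, while each offspring, weighing its own fitness fully but its siblings' fitness only in proportion to relatedness, favours a seed size larger than $s^{*}$.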
This conflict about resource allocation is most obviously manifested in the reduction of brood size (i.e. a decrease in the proportion of ovules matured into seeds). Such reduction can be assumed to be caused by the offspring: If the maternal parent's interest were to produce as few seeds as observed, selection would not favour the production of extra ovules that do not mature into seeds. (Although other explanations for this phenomenon exist, such as genetic load, resource depletion or maternal regulation of offspring quality, they could not be supported by experiments.)
There are several ways in which the offspring can affect parental resource allocation to brood members. Evidence exists for siblicide by dominant embryos: embryos formed early kill the remaining embryos through an aborting chemical. In oaks, early-fertilized ovules prevent the fertilization of other ovules by inhibiting pollen tube entry into the embryo sac. In some species, the maternal parent has evolved postfertilization abortion of few-seeded pods. Nevertheless, cheating by the offspring is also possible here, namely through late siblicide, after the postfertilization abortion has ceased.
According to the general POC model, reduction of brood size – if caused by POC – should depend on genetic relatedness between offspring in a fruit. Indeed, abortion of embryos is more common in out-crossing than in self-pollinating plants (seeds in cross-pollinating plants are less related than in self-pollinating plants). Moreover, the level of solicitation of resources by the offspring is also increased in cross-pollinating plants: There are several reports that the average weight of crossed seeds is greater than of seeds produced by self-fertilization.
In birds
Some of the earliest examples of parent-offspring conflict were seen in bird broods and especially in raptor species. While parent birds often lay two eggs and attempt to raise two or more young, the strongest fledgling takes a greater share of the food brought by parents and will often kill the weaker sibling (siblicide). Such conflicts have been suggested as a driving force in the evolution of optimal clutch size in birds.
In the blue-footed booby, parent-offspring conflict results in times of food scarcity. When there is less food available in a given year, the older, dominant chick will often kill the younger chick by either attacking directly, or by driving it from the nest. Parents try to prevent siblicide by building nests with steeper sides and by laying heavier second eggs.
In mammals
Even before POC theory arose, debates took place over whether infants wean themselves or mothers actively wean their infants. Furthermore, it was discussed whether maternal rejections increase infant independence. It turned out that both mother and infant contribute to infant independence. Maternal rejections can be followed by a short-term increase in infant contact, but they eventually result in a long-term decrease of contact, as has been shown for several primates: in wild baboons, infants that are rejected early and frequently spend less time in contact, whereas those that are not rejected stay much longer in the proximity of their mother and suckle or ride even at advanced ages. In wild chimpanzees, an abrupt increase in maternal rejections and a decrease in mother-offspring contact is found when mothers resume estrus and consort with males. In rhesus macaques, a high probability of conception in the following mating season is associated with a high rate of maternal rejection. Rejection and behavioral conflicts can occur during the first months of an infant's life and when the mother resumes estrus. These findings suggest that the mother's reproduction is influenced by her interaction with her offspring, so there is potential for conflict over PI.
It was also observed in rhesus macaques that the number of contacts made by offspring during a mating season is significantly higher than the number of contacts made by the mother, whereas the opposite holds for the number of broken contacts. This suggests that the mother resists the offspring's demands for contact, whereas the offspring is apparently more interested in spending time in contact. At three months of infant age, responsibility for maintaining contact shifts from mother to infant. So as the infant becomes more independent, its effort to maintain proximity to its mother increases. This might sound paradoxical but becomes clear when one takes into account that POC increases during the period of PI. In summary, all these findings are consistent with POC theory.
One might object that time in contact is not a reasonable measure of PI and that, for example, time for milk transfer (lactation) would be a better one. Here one can argue that mother and infant have different thermoregulatory needs because their surface-to-volume ratios differ, resulting in more rapid heat loss in infants than in adults. Infants may therefore be more sensitive to low temperatures than their mothers and might try to compensate by increasing contact time with the mother, which could initiate a behavioral conflict over contact time. This hypothesis is consistent with observations in Japanese macaques, where decreasing temperatures result in more maternal rejections and an increased number of contacts made by infants.
In social insects
In eusocial species, the parent-offspring conflict takes on a unique role because of haplodiploidy and the prevalence of sterile workers. Sisters are more related to each other (0.75) than to their mothers (0.5) or brothers (0.25). In most cases, this drives female workers to try to obtain a sex ratio of 3:1 (females to males) in the colony. However, queens are equally related to both sons and daughters, so they prefer a sex ratio of 1:1. The conflict in social insects is about the level of investment the queen should provide for each sex for current and future offspring. It is generally thought that workers will win this conflict and the sex ratio will be closer to 3:1; however, there are examples, as in Bombus terrestris, where the queen has considerable control in forcing a 1:1 ratio.
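The 0.75 figure follows from haplodiploid inheritance in one line of arithmetic (a standard derivation, not specific to any one species). Full sisters receive identical copies of their haploid father's genome, so the paternal half of the genome is shared with probability 1, while the maternal half is shared with probability 0.5, as in ordinary diploid siblings:

$$ r_{\text{sisters}} = 0.5 \times 1 + 0.5 \times 0.5 = 0.75 . $$

A sister shares only maternally derived alleles with her brothers, which yields the 0.25 figure quoted above from the sister's perspective.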
In amphibians
Many species of frogs and salamanders display complex social behavior with highly involved parental care that includes egg attendance, tadpole transport, and tadpole feeding.
Energy expenditure
Both males and females of the strawberry poison-dart frog care for their offspring, however, females invest in more costly ways. Females of certain poison frog species produce unfertilized, non-developing trophic eggs which provide nutrition to her tadpoles. The tadpoles vibrate vigorously against mother frogs to solicit nutritious eggs. These maternal trophic eggs are beneficial for offspring, positively influencing larval survival, size at metamorphosis, and post metamorphic survival.
In the neotropical, foam-nesting pointedbelly frog (Leptodactylus podicipinus), females providing parental care to tadpoles have reduced body condition and food ingestion. Females that are attending to their offspring have significantly lower body mass, ovary mass, and stomach volume. This indicates that the cost of parental care in the pointedbelly frog has the potential to affect future reproduction of females through the reduction in body condition and food intake.
In the Puerto Rican common coqui, parental care is performed exclusively by males and consists of attending to the eggs and tadpoles at an oviposition site. When brooding, males have a higher frequency of empty stomachs and lose a significant portion of their initial body mass during parental care. Abdominal fat bodies of brooding males during the middle of parental care were significantly smaller than those of non-brooding males. Another major behavioral component of parental care is nest defense against conspecific egg cannibals. This defense behavior includes aggressive calling, sustained biting, wrestling, and blocking directed against the nest intruder.
Females of the Allegheny Mountain dusky salamander exhibit less activity and become associated with the nest site well in advance of oviposition in preparation for the reproductive season. This results in a reduced food intake and a decrease in body weight over the brooding period. Females either stop or greatly reduce their foraging activities and instead will eat opportunistically following oviposition. Since nutritional intake is reduced, there is a decrease in body weight in females. Females of the red-backed salamander make a substantial parental investment in terms of clutch size and brooding behavior. When brooding, females usually do not leave their eggs to forage but rather rely upon their fat reserves and any resources they encounter at their oviposition site. In addition, females could experience metabolic costs while safeguarding their offspring from desiccation, intruders, and predators.
Time investment
The plasticity of tadpoles may play a role in the weaning conflict in egg-feeding frogs, in which the offspring prefer to devote resources to growth, while the mother prefers nutrients to help her young become independent. A similar conflict happens in direct-developing frogs that care for clutches, with protected tadpoles having the advantage of a slower, safer development, but they need to be ready to reach independence rapidly due to the risks of predation or desiccation.
In the neotropical Zimmerman's poison frog, males provide a specific form of parental care: transportation. The tadpoles are cannibalistic, which is why the males typically separate them from their siblings after hatching by transporting them to small bodies of water. However, in some cases parents do not transport their tadpoles but let them all hatch into the same pool. In order to escape their cannibalistic siblings, the tadpoles will actively seek transport by a parent. When a male frog approaches the water body in which the tadpoles have been deposited, the tadpoles will almost "jump" on the back of the adult, mimicking an attack, while the adults do not assist with this movement. While this is an obvious example of sibling conflict, the one-sided interaction between tadpoles and frogs could be seen as a form of parent-offspring conflict, in which the offspring attempts to extract more from the interaction than the parent is willing to provide. In this scenario, a tadpole climbing onto an unwilling frog (one that enters the pool for reasons other than tadpole transportation, such as egg deposition, cooling off, or sleeping) might be analogous to mammalian offspring seeking to nurse after weaning. In times of danger, the tadpoles of Zimmerman's poison frog do not passively await parental assistance but instead take an almost aggressive approach, mounting the adult frogs.
Trade-offs with mating
Reproductive attempts in the strawberry poison-dart frog, such as courtship activity, significantly decrease or cease entirely in tadpole-rearing females compared to non-rearing females. Most brooding males of the common coqui cease calling during parental care while gravid females are still available and known to mate, so non-calling males miss potential opportunities to reproduce. Caring for tadpoles comes at the cost of other current reproductive opportunities for females, leading to the hypothesis that frequent reproduction is associated with reduced survival in frogs.
In humans
An important illustration of POC within humans is provided by David Haig’s (1993) work on genetic conflicts in pregnancy. Haig argued that fetal genes would be selected to draw more resources from the mother than would be optimal for the mother to give. The placenta, for example, secretes allocrine hormones that decrease the sensitivity of the mother to insulin and thus make a larger supply of blood sugar available to the fetus. The mother responds by increasing the level of insulin in her bloodstream and to counteract this effect the placenta has insulin receptors that stimulate the production of insulin-degrading enzymes.
About 30 percent of human conceptions do not progress to full term (22 percent before becoming clinical pregnancies) creating a second arena for conflict between the mother and the fetus. The fetus will have a lower quality cut off point for spontaneous abortion than the mother. The mother's quality cut-off point also declines as she nears the end of her reproductive life, which becomes significant for older mothers. Older mothers have a higher incidence of offspring with genetic defects. Indeed, with parental age on both sides, the mutational load increases as well.
Initially, the maintenance of pregnancy is controlled by the maternal hormone progesterone, but in later stages it is controlled by the fetal human chorionic gonadotrophin released into the maternal bloodstream. The release of fetal human chorionic gonadotrophin causes the release of maternal progesterone. There is also conflict over blood supply to the placenta, with the fetus being prepared to demand a larger blood supply than is optimal for the mother (or even for itself, since high birth weight is a risk factor). This results in hypertension and, significantly, high birth weight is positively correlated with maternal blood pressure.
A tripartite (fetus–mother–father) immune conflict in humans and other placentals
During pregnancy, there is a two-way traffic of immunologically active cell lines through the placenta. Fetal lymphocyte lines may survive in women even decades after giving birth.
See also
Intrauterine cannibalism
The kinship theory of genomic imprinting
Intragenomic and intrauterine conflict in humans
References
External links
Genetic conflicts in human pregnancy
Parent-offspring conflict
Evolutionary biology
Reproduction | Parent–offspring conflict | Biology | 3,344 |