| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
35,035,097 | https://en.wikipedia.org/wiki/C17H13N5O2 | {{DISPLAYTITLE:C17H13N5O2}}
The molecular formula C17H13N5O2 (molar mass: 319.317 g/mol, exact mass: 319.1069 u) may refer to:
Nitrazolam
SB-334867
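A quick arithmetic check of the quoted molar mass (a worked example added for illustration, using standard atomic masses; it is not part of the original set index):

\[
M = 17(12.011) + 13(1.008) + 5(14.007) + 2(15.999) \approx 319.32\ \text{g/mol},
\]

which agrees with the quoted 319.317 g/mol; the small discrepancy comes from the precision of the atomic-mass values used.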
Molecular formulas | C17H13N5O2 | [
"Physics",
"Chemistry"
] | 66 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
35,035,304 | https://en.wikipedia.org/wiki/C21H25N | The molecular formula C21H25N (molar mass: 291.43 g/mol, exact mass: 291.1987 u) may refer to:
L-687,384
Terbinafine
Melitracen
Molecular formulas | C21H25N | [
"Physics",
"Chemistry"
] | 51 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
35,035,679 | https://en.wikipedia.org/wiki/Radio%20Astronomy%20Laboratory | The Radio Astronomy Lab (RAL) is an Organized Research Unit (ORU) within the Astronomy Department at the University of California, Berkeley. It was founded by faculty member Harold Weaver in 1958.
Until 2012, RAL maintained a radio astronomy observatory at Hat Creek, near Mt. Lassen. It continues to support on-campus laboratory facilities in Campbell Hall. From 1998 to 2012, the RAL collaborated with the SETI Institute of Mountain View, California, to design, build and operate the Allen Telescope Array (ATA).
RAL has been central to the creation of several radio observatories, including:
Hat Creek Radio Observatory (HCRO),
the Allen Telescope Array (ATA),
the Berkeley-Illinois-Maryland Association (BIMA) array,
the Combined Array for Research in Millimeter-wave Astronomy (CARMA) array,
the Precision Array for Probing the Epoch of Reionization (PAPER) array
Research interests
Millimeter-wavelength interferometry
Very long baseline interferometry
Low-frequency radio interferometry targeting the Epoch of Reionization
Pulsars and other radio transients
Digital signal processing (DSP) instrumentation
Directors
Carl Heiles (director, 2010–current)
Donald Backer (director, 2008–2010), deceased
Leo Blitz (director, 1996–2008)
William "Jack" Welch (director, 1972–1996), retired
Harold Weaver (director, 1958–1972), retired
Current faculty
Current faculty include:
Carl Heiles
Leo Blitz
Imke de Pater
Geoff Bower
Aaron Parsons
Senior scientific staff
Dick Plambeck
Melvyn Wright
See also
List of astronomical societies
References
External links
RAL homepage
RAL 50th anniversary symposium
University of California, Berkeley
Astronomy organizations
1959 establishments in California
Science and technology in the San Francisco Bay Area | Radio Astronomy Laboratory | [
"Astronomy"
] | 363 | [
"Astronomy organizations"
] |
35,038,133 | https://en.wikipedia.org/wiki/Pathogen | In biology, a pathogen (Greek: πάθος, pathos, "suffering", "passion", and -γενής, -genēs, "producer of"), in the oldest and broadest sense, is any organism or agent that can produce disease. A pathogen may also be referred to as an infectious agent, or simply a germ.
The term pathogen came into use in the 1880s. Typically, the term pathogen is used to describe an infectious microorganism or agent, such as a virus, bacterium, protozoan, prion, viroid, or fungus. Small animals, such as helminths and insects, can also cause or transmit disease. However, these animals are usually referred to as parasites rather than pathogens. The scientific study of microscopic organisms, including microscopic pathogenic organisms, is called microbiology, while parasitology refers to the scientific study of parasites and the organisms that host them.
There are several pathways through which pathogens can invade a host. The principal pathways have different episodic time frames, but soil has the longest or most persistent potential for harboring a pathogen.
Diseases in humans that are caused by infectious agents are known as pathogenic diseases. Not all diseases are caused by pathogens; other causes include exposure to pollutants (such as black lung from coal dust), genetic disorders (such as sickle cell disease), and autoimmune diseases (such as lupus).
Pathogenicity
Pathogenicity is the potential disease-causing capacity of pathogens, involving a combination of infectivity (pathogen's ability to infect hosts) and virulence (severity of host disease). Koch's postulates are used to establish causal relationships between microbial pathogens and diseases. Whereas meningitis can be caused by a variety of bacterial, viral, fungal, and parasitic pathogens, cholera is only caused by some strains of Vibrio cholerae. Additionally, some pathogens may only cause disease in hosts with an immunodeficiency. These opportunistic infections often involve hospital-acquired infections among patients already combating another condition.
Infectivity involves pathogen transmission through direct contact with the bodily fluids or airborne droplets of infected hosts, indirect contact involving contaminated areas/items, or transfer by living vectors like mosquitos and ticks. The basic reproduction number of an infection is the expected number of subsequent cases it is likely to cause through transmission.
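To make the reproduction-number idea concrete, one standard textbook relation (an illustration assuming the simple SIR model, which the article itself does not invoke) expresses the basic reproduction number as the transmission rate divided by the recovery rate, with an outbreak able to grow only when it exceeds one:

\[
R_0 = \frac{\beta}{\gamma}, \qquad R_0 > 1 \Rightarrow \text{the infection can spread}, \qquad R_0 < 1 \Rightarrow \text{it dies out}.
\]

For example, with \(\beta = 0.3\) new infections per infectious person per day and a mean infectious period of \(1/\gamma = 5\) days, \(R_0 = 0.3 \times 5 = 1.5\).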
Virulence involves pathogens extracting host nutrients for their survival and evading host immune systems by producing microbial toxins and causing immunosuppression. Optimal virulence describes a theorized equilibrium in which a pathogen balances spreading to additional hosts, whose resources it parasitizes, against lowering its virulence so that hosts survive long enough for vertical transmission to their offspring.
Types
Algae
Algae include single-celled eukaryotes that are generally non-pathogenic. Green algae from the genus Prototheca lack chlorophyll and are known to cause the disease protothecosis in humans, dogs, cats, and cattle, typically involving the soil-associated species Prototheca wickerhamii.
Bacteria
Bacteria are single-celled prokaryotes that range in size from 0.15 to 700 μm. While the vast majority are either harmless or beneficial to their hosts, such as members of the human gut microbiome that support digestion, a small percentage are pathogenic and cause infectious diseases. Bacterial virulence factors include adherence factors to attach to host cells, invasion factors supporting entry into host cells, capsules to prevent opsonization and phagocytosis, toxins, and siderophores to acquire iron.
The bacterial disease tuberculosis, primarily caused by Mycobacterium tuberculosis, has one of the highest disease burdens, killing 1.6 million people in 2021, mostly in Africa and Southeast Asia. Bacterial pneumonia is primarily caused by Streptococcus pneumoniae, Staphylococcus aureus, Klebsiella pneumoniae, and Haemophilus influenzae. Foodborne illnesses typically involve Campylobacter, Clostridium perfringens, Escherichia coli, Listeria monocytogenes, and Salmonella. Other infectious diseases caused by pathogenic bacteria include tetanus, typhoid fever, diphtheria, and leprosy.
Fungi
Fungi are eukaryotic organisms that can function as pathogens. There are approximately 300 known fungi that are pathogenic to humans, including Candida albicans, which is the most common cause of thrush, and Cryptococcus neoformans, which can cause a severe form of meningitis. Typical fungal spores are 4.7 μm long or smaller.
Prions
Prions are misfolded proteins that transmit their abnormal folding pattern to other copies of the protein without using nucleic acids. Besides being acquired from others, these misfolded proteins can arise from genetic differences, either due to family history or sporadic mutations. Plants take up prions from contaminated soil and transport them into their stems and leaves, potentially transmitting the prions to herbivorous animals. Additionally, wood, rocks, plastic, glass, cement, stainless steel, and aluminum have been shown to bind, retain, and release prions, showing that the proteins resist environmental degradation.
Prions are best known for causing transmissible spongiform encephalopathy (TSE) diseases like Creutzfeldt–Jakob disease (CJD), variant Creutzfeldt–Jakob disease (vCJD), Gerstmann–Sträussler–Scheinker syndrome (GSS), fatal familial insomnia (FFI), and kuru in humans.
While prions are typically viewed as pathogens that cause protein amyloid fibers to accumulate into neurodegenerative plaques, Susan Lindquist led research showing that yeast use prions to pass on evolutionarily beneficial traits.
Viroids
Not to be confused with virusoids or viruses, viroids are the smallest known infectious pathogens. Viroids are small single-stranded, circular RNA that are only known to cause plant diseases, such as the potato spindle tuber viroid that affects various agricultural crops. Viroid RNA is not protected by a protein coat, and it does not encode any proteins, only acting as a ribozyme to catalyze other biochemical reactions.
Viruses
Viruses are generally between 20–200 nm in diameter. For survival and replication, viruses inject their genome into host cells, insert those genes into the host genome, and hijack the host's machinery to produce hundreds of new viruses until the cell bursts open to release them for additional infections. The lytic cycle describes this active state of rapidly killing hosts, while the lysogenic cycle describes potentially hundreds of years of dormancy while integrated in the host genome. Alongside the taxonomy organized by the International Committee on Taxonomy of Viruses (ICTV), the Baltimore classification separates viruses by seven classes of mRNA production:
I: dsDNA viruses (e.g., Adenoviruses, Herpesviruses, and Poxviruses) cause herpes, chickenpox, and smallpox
II: ssDNA viruses (+ strand or "sense") DNA (e.g., Parvoviruses) include parvovirus B19
III: dsRNA viruses (e.g., Reoviruses) include rotaviruses
IV: (+)ssRNA viruses (+ strand or sense) RNA (e.g., Coronaviruses, Picornaviruses, and Togaviruses) cause COVID-19, dengue fever, Hepatitis A, Hepatitis C, rubella, and yellow fever
V: (−)ssRNA viruses (− strand or antisense) RNA (e.g., Orthomyxoviruses and Rhabdoviruses) cause ebola, influenza, measles, mumps, and rabies
VI: ssRNA-RT viruses (+ strand or sense) RNA with DNA intermediate in life-cycle (e.g., Retroviruses) cause HIV/AIDS
VII: dsDNA-RT viruses DNA with RNA intermediate in life-cycle (e.g., Hepadnaviruses) cause Hepatitis B
Other parasites
Protozoans are single-celled eukaryotes that feed on microorganisms and organic tissues. Many protozoans act as pathogenic parasites to cause diseases like malaria, amoebiasis, giardiasis, toxoplasmosis, cryptosporidiosis, trichomoniasis, Chagas disease, leishmaniasis, African trypanosomiasis (sleeping sickness), Acanthamoeba keratitis, and primary amoebic meningoencephalitis (naegleriasis).
Parasitic worms (helminths) are macroparasites that can be seen by the naked eye. Worms live and feed in their living host, acquiring nutrients and shelter in the digestive tract or bloodstream of their host. They also manipulate the host's immune system by secreting immunomodulatory products which allows them to live in their host for years. Helminthiasis is the generalized term for parasitic worm infections, which typically involve roundworms, tapeworms, and flatworms.
Pathogen hosts
Bacteria
While bacteria are typically viewed as pathogens, they serve as hosts to bacteriophage viruses (commonly known as phages). The bacteriophage life cycle involves the viruses injecting their genome into bacterial cells, inserting those genes into the bacterial genome, and hijacking the bacteria's machinery to produce hundreds of new phages until the cell bursts open to release them for additional infections. Typically, bacteriophages are only capable of infecting a specific species or strain.
Streptococcus pyogenes uses a Cas9 nuclease to cleave foreign DNA matching the Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) associated with bacteriophages, removing the viral genes to avoid infection. This mechanism has been modified for artificial CRISPR gene editing.
Plants
Plants can play host to a wide range of pathogen types, including viruses, bacteria, fungi, nematodes, and even other plants. Notable plant viruses include the papaya ringspot virus, which has caused millions of dollars of damage to farmers in Hawaii and Southeast Asia, and the tobacco mosaic virus which caused scientist Martinus Beijerinck to coin the term "virus" in 1898. Bacterial plant pathogens cause leaf spots, blight, and rot in many plant species. The most common bacterial pathogens for plants are Pseudomonas syringae and Ralstonia solanacearum, which cause leaf browning and other issues in potatoes, tomatoes, and bananas.
Fungi are another major pathogen type for plants. They can cause a wide variety of issues such as shorter plant height, growths or pits on tree trunks, root or seed rot, and leaf spots. Common and serious plant fungi include the rice blast fungus, Dutch elm disease, chestnut blight and the black knot and brown rot diseases of cherries, plums, and peaches. It is estimated that pathogenic fungi alone cause up to a 65% reduction in crop yield.
Overall, plants have a wide array of pathogens and it has been estimated that only 3% of the disease caused by plant pathogens can be managed.
Animals
Animals are often infected with many of the same or similar pathogens as humans, including prions, viruses, bacteria, and fungi. While wild animals often get illnesses, the larger danger is for livestock animals. It is estimated that in rural settings, 90% or more of livestock deaths can be attributed to pathogens. Animal transmissible spongiform encephalopathies (TSEs) involving prions include bovine spongiform encephalopathy (mad cow disease), chronic wasting disease, scrapie, transmissible mink encephalopathy, feline spongiform encephalopathy, and ungulate spongiform encephalopathy. Other animal diseases include a variety of immunodeficiency disorders caused by viruses related to human immunodeficiency virus (HIV), such as BIV and FIV.
Humans
Humans can be infected with many types of pathogens, including prions, viruses, bacteria, and fungi, causing symptoms like sneezing, coughing, fever, vomiting, and potentially lethal organ failure. While some symptoms are caused by the pathogenic infection, others are caused by the immune system's efforts to kill the pathogen, such as feverishly high body temperatures meant to denature pathogenic cells.
Treatment
Prions
Despite many attempts, no therapy has been shown to halt the progression of prion diseases.
Viruses
A variety of prevention and treatment options exist for some viral pathogens. Vaccines are one common and effective preventive measure against a variety of viral pathogens. Vaccines prime the immune system of the host, so that when the potential host encounters the virus in the wild, the immune system can defend against infection quickly. Vaccines designed against viruses include annual influenza vaccines and the two-dose MMR vaccine against measles, mumps, and rubella. Vaccines are not available against the viruses responsible for HIV/AIDS, dengue, and chikungunya.
Treatment of viral infections often involves treating the symptoms of the infection, rather than providing medication to combat the viral pathogen itself. Treating the symptoms of a viral infection gives the host immune system time to develop antibodies against the viral pathogen. However, for HIV, highly active antiretroviral therapy (HAART) is conducted to prevent the viral disease from progressing into AIDS as immune cells are lost.
Bacteria
Much like viral pathogens, infection by certain bacterial pathogens can be prevented via vaccines. Vaccines against bacterial pathogens include the anthrax vaccine and pneumococcal vaccine. Many other bacterial pathogens lack vaccines as a preventive measure, but infection by these bacteria can often be treated or prevented with antibiotics. Common antibiotics include amoxicillin, ciprofloxacin, and doxycycline. Each antibiotic is effective against different bacteria and has a different mechanism for killing them. For example, doxycycline inhibits the synthesis of new proteins in both gram-negative and gram-positive bacteria, which makes it a broad-spectrum antibiotic capable of killing most bacterial species.
Due to misuse of antibiotics, such as prematurely ended prescriptions exposing bacteria to evolutionary pressure under sublethal doses, some bacterial pathogens have developed antibiotic resistance. For example, a genetically distinct strain of Staphylococcus aureus called MRSA is resistant to the commonly prescribed beta-lactam antibiotics. A 2013 report from the Centers for Disease Control and Prevention (CDC) estimated that in the United States, at least 2 million people get an antibiotic-resistant bacterial infection annually, with at least 23,000 of those patients dying from the infection.
Because antibiotics are indispensable for combating bacteria, new antibiotics are required for medical care. One target for new antimicrobial medications involves inhibiting DNA methyltransferases, as these proteins control the levels of expression for other genes, such as those encoding virulence factors.
Fungi
Infection by fungal pathogens is treated with anti-fungal medication. Athlete's foot, jock itch, and ringworm are fungal skin infections that are treated with topical anti-fungal medications like clotrimazole. Infections involving the yeast species Candida albicans cause oral thrush and vaginal yeast infections. These internal infections can either be treated with anti-fungal creams or with oral medication. Common anti-fungal drugs for internal infections include the echinocandin family of drugs and fluconazole.
Algae
While algae are commonly not thought of as pathogens, the genus Prototheca causes disease in humans. Treatment for protothecosis is currently under investigation, and there is no consistency in clinical treatment.
Sexual interactions
Many pathogens are capable of sexual interaction. Among pathogenic bacteria, sexual interaction occurs between cells of the same species by the process of genetic transformation. Transformation involves the transfer of DNA from a donor cell to a recipient cell and the integration of the donor DNA into the recipient genome through genetic recombination. The bacterial pathogens Helicobacter pylori, Haemophilus influenzae, Legionella pneumophila, Neisseria gonorrhoeae, and Streptococcus pneumoniae frequently undergo transformation to modify their genome for additional traits and evasion of host immune cells.
Eukaryotic pathogens are often capable of sexual interaction by a process involving meiosis and fertilization. Meiosis involves the intimate pairing of homologous chromosomes and recombination between them. Examples of eukaryotic pathogens capable of sex include the protozoan parasites Plasmodium falciparum, Toxoplasma gondii, Trypanosoma brucei, Giardia intestinalis, and the fungi Aspergillus fumigatus, Candida albicans and Cryptococcus neoformans.
Viruses may also undergo sexual interaction when two or more viral genomes enter the same host cell. This process involves pairing of homologous genomes and recombination between them by a process referred to as multiplicity reactivation. The herpes simplex virus, human immunodeficiency virus, and vaccinia virus undergo this form of sexual interaction.
These processes of sexual recombination between homologous genomes support the repair of genetic damage caused by environmental stressors and host immune systems.
See also
Antigenic escape
Ecological competence
Emerging Pathogens Institute
Human pathogen
Pathogen-Host Interaction Database (PHI-base)
References
External links
Pronunciation Guide to Microorganisms (1)
Pronunciation Guide to Microorganisms (2)
Infectious diseases
Microbiology
Hazardous materials | Pathogen | [
"Physics",
"Chemistry",
"Technology",
"Biology"
] | 3,694 | [
"Microbiology",
"Materials",
"Microscopy",
"Hazardous materials",
"Matter"
] |
35,040,003 | https://en.wikipedia.org/wiki/Polarization%20scrambling | Polarization scrambling is the process of rapidly varying the polarization of light within a system using a polarization controller so that the average polarization over time is effectively randomized. Polarization scrambling can be used in scientific experiments to cancel out errors caused by polarization effects. Polarization scrambling is also used on long-distance fibre optic transmission systems with optical amplifiers, in order to avoid polarization hole-burning. Polarization scrambling, together with the variation of polarization mode dispersion, is a mandatory test procedure for fiber optic data transmission systems based on polarization-division multiplexing.
Polarization scramblers usually vary the normalized Stokes vector of the polarization state over the entire Poincaré sphere. They are commercially available with speeds of 20 Mrad/s on the Poincaré sphere (see external link). Various speed distributions such as peaked and quasi-Rayleigh can be generated.
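To illustrate what sweeping the normalized Stokes vector over the entire Poincaré sphere achieves, the sketch below (a minimal NumPy simulation under the assumption of uniformly random states; it does not model any particular scrambler or vendor interface) draws random polarization states and shows that the time-averaged polarization tends toward zero:

```python
import numpy as np

def random_stokes(n, rng=None):
    """Draw n normalized Stokes vectors (S1, S2, S3) uniformly over the Poincare sphere."""
    if rng is None:
        rng = np.random.default_rng(0)
    s3 = rng.uniform(-1.0, 1.0, n)           # uniform polar coordinate -> uniform area density
    phi = rng.uniform(0.0, 2.0 * np.pi, n)   # azimuth
    r = np.sqrt(1.0 - s3**2)
    return np.column_stack((r * np.cos(phi), r * np.sin(phi), s3))

states = random_stokes(100_000)
mean_state = states.mean(axis=0)
residual_dop = np.linalg.norm(mean_state)    # degree of polarization of the time-averaged light

print("mean Stokes vector:", np.round(mean_state, 4))
print("residual degree of polarization:", round(float(residual_dop), 4))  # ~0 for an ideal scrambler
```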
Some experiments have implemented ultrafast polarization scrambling on a polaritonic platform, with speeds on the order of Trad/s on the Poincaré sphere.
See also
Depolarizer (optics)
Polarization-division multiplexing
Polarization controller
Polarization mixing
References
External links
Polarization (waves) | Polarization scrambling | [
"Physics",
"Astronomy"
] | 245 | [
"Astrophysics stubs",
"Polarization (waves)",
"Astronomy stubs",
"Astrophysics"
] |
35,041,210 | https://en.wikipedia.org/wiki/Donald%20Truhlar | Donald Gene Truhlar (born 27 February 1944, Chicago) is an American scientist working in theoretical and computational chemistry and chemical physics with special emphases on quantum mechanics and chemical dynamics.
Early life, education, and early work
Donald Gene Truhlar was born in Chicago on 27 February 1944 to John Joseph Truhlar and Lucille Marie Vancura, both of Czech ancestry. Truhlar received a B.A. from St. Mary's College of Minnesota (1965) and a Ph.D. from Caltech (1970), under Aron Kuppermann. He has been on the faculty of the University of Minnesota since 1969.
Important contributions
Truhlar is known for his contributions to theoretical chemical dynamics of chemical reactions; quantum mechanical scattering theory of chemical reactions and molecular energy transfer; electron scattering; theoretical kinetics and chemical dynamics; potential energy surfaces and molecular interactions; path integrals; variational transition state theory; the use of electronic structure theory for calculations of chemical structure, reaction rates, electronically nonadiabatic processes, and solvation effects; photochemistry; combustion chemistry; heterogeneous, homogeneous, and enzyme catalysis; atmospheric and environmental chemistry; drug design; nanoparticle structure and energetics; basis set development; and density functional theory, including the Minnesota Functionals.
Awards and honors
Truhlar was elected to the International Academy of Quantum Molecular Science (2006), the National Academy of Sciences of the USA (2009), and the American Academy of Arts and Sciences (2015). He became an Honorary Fellow of the Chinese Chemical Society in 2015. He was named a Fellow of the American Association for the Advancement of Science (1986), the American Chemical Society (2009), the American Physical Society (1986), the Royal Society of Chemistry (2009), and the World Association of Theoretical and Computational Chemists (2006). He was a visiting fellow at the Battelle Memorial Institute (1973) and at the Joint Institute for Laboratory Astrophysics (1975–76), and was named an Alfred P. Sloan Foundation Research Fellow (1973).
He received the NSF Creativity Award (1993), the ACS Award for Computers in Chemical and Pharmaceutical Research (2000), the Minnesota Award (2003) ("for outstanding contributions to the chemical sciences"), the National Academy of Sciences Award for Scientific Reviewing (2004); the ACS Peter Debye Award for Physical Chemistry (2006), the Lise Meitner Lectureship Award (2006), the Schrödinger Medal of The World Association of Theoretical and Computational Chemists (2006), the Dudley R. Herschbach Award for Research in Molecular Collision Dynamics (2009), Doctor honoris causa of Technical University of Lodz, Poland, (2010), the Royal Society of Chemistry Chemical Dynamics Award (2012), the APS Earle K. Plyler Prize for Molecular Spectroscopy and Dynamics (2015), the 2019 ACS Award in Theoretical Chemistry (2018), and the 2023 Joseph O. Hirschfelder Prize in Theoretical Chemistry.
Truhlar has served as an associate editor of the Journal of the American Chemical Society (1984–2016), as (successively) editor, editor-in-chief, associate editor, and chief advisory editor of Theoretical Chemistry Accounts (formerly Theoretica Chimica Acta) (1985–present), and as a principal editor of Computer Physics Communications (1986–2015). He is currently a Regents professor at the University of Minnesota, which is the university's highest honor. The University of Minnesota has also honored him as a Distinguished University Teaching Professor and a College of Science & Engineering Distinguished Professor.
References
External links
The Truhlar Group
Google Scholar Citations
1944 births
Living people
University of Minnesota faculty
Theoretical chemists
21st-century American chemists
Members of the United States National Academy of Sciences
Schrödinger Medal recipients
Computational chemists
Fellows of the American Association for the Advancement of Science
Fellows of the American Chemical Society
Fellows of the American Physical Society
Fellows of the Royal Society of Chemistry
Sloan Research Fellows
American people of Czech descent | Donald Truhlar | [
"Chemistry"
] | 838 | [
"Theoretical chemists",
"American theoretical chemists"
] |
35,043,676 | https://en.wikipedia.org/wiki/Sky%20quality%20meter | A sky quality meter (SQM) is an instrument used to measure the luminance of the night sky, more specifically the Night Sky Brightness (NSB) at the zenith, with a bandwidth ranging from 390 nm to 600 nm. It is used, typically by amateur astronomers, to quantify the skyglow aspect of light pollution and uses units of "magnitudes per square arcsecond" favoured by astronomers.
Structure and design
The SQM is equipped with a silicon photodiode functioning as the detector, which is partially covered by a rejection filter for near-infrared wavelengths. The system has a high response to wavelengths up to the near-infrared (from 350 nm to 2500 nm), thanks to a light-to-frequency converter.
This structure tries to mimic the human eye spectral response under the photopic regime.
A final spectral response is provided by the combination of the photodiode and the near-infrared cut-off transmission filter. This response overlaps the Johnson B and V bands, well known in astronomical photometry, in the wavelength range between 320 nm and 720 nm, which includes the visible light spectrum.
Beyond amateur astronomers, the SQM photometers have become very popular among researchers from different fields of study, including associations involved in fighting light pollution.
The instrument has a quoted systematic uncertainty of 10% (0.1 mag arcsec−2). Uncertainty is also related to the stability of these radiometers: variations in instrument behaviour (mainly due to sensor ageing, the influence of air temperature and atmospheric conditions, and internal temperature) could be confused with changes in sky brightness, especially when NSB tracking is performed over a long time interval.
Models and production
There are several models of SQM made, offering different fields of view (i.e. measuring different angular areas on the sky) and various automatic measurement and data logging or data communication capabilities. The current versions have only one band of observation, which can produce misinterpretations if the dominant light pollution source changes from sodium-vapor lamps to LEDs.
The SQM-L, or "Sky Quality Meter - L," is a model with an additional integrated lens, offering a narrower measurement range of 20° compared to the 84° range of the standard SQM model.
The SQM is produced by the Canadian company Unihedron in Grimsby, Ontario.
Scale
The values reported by the SQM are in units of magnitudes per square arcsecond (mag arcsec−2).
Typically, the data provided by SQMs are recorded in magnitudes, denoted as m or mag, specifically as m_SQM (or mag_SQM), where the subscript SQM indicates that the measured radiance is calculated by weighting the electromagnetic radiation according to the spectral responsivity of these instruments.
As astronomical magnitudes are an inverse logarithmic scale, smaller values indicate a brighter sky, and a difference of 5 mag arcsec−2 corresponds to a factor of 100 in luminance. Typical values range from around 16 for bright urban skies to 22 for the darkest skies on Earth.
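The factor-of-100 statement follows directly from the magnitude definition; as a short worked example (added for illustration, not part of the original article):

\[
\frac{L_1}{L_2} = 10^{\,0.4\,(m_2 - m_1)},
\]

so a difference of 5 mag arcsec−2 gives \(10^{0.4 \times 5} = 100\), and the span from a bright urban sky (about 16 mag arcsec−2) to the darkest sites (about 22 mag arcsec−2) corresponds to a luminance ratio of \(10^{0.4 \times 6} \approx 251\).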
Limits and considerations
SQM response can be influenced by ambient temperature variations, so it is important to verify these effects. Since SQMs are not waterproof, they must be protected from moisture using a housing, which is generally provided by the manufacturer. This housing protects the device but also traps heat generated during operation, which is minimal for USB models (SQM-LU) but more significant for Ethernet models (SQM-LE).
In urban environments, SQMs frequently record large variations in radiance due to the presence or absence of clouds. Radiance measurements taken by SQM-LU devices are stable within the temperature range of −15 °C to 35 °C, with variations smaller than the 10% systematic uncertainty stated by the manufacturer.
Citizen science
SQM measurements can be submitted to a database on the manufacturer's website and to the citizen science project GLOBE at Night.
References
External links
Product page on Unihedron webpage
GLOBE at Night project
Network of SQM measurements
Scale and Instructions on Unihedron webpage
Measuring instruments | Sky quality meter | [
"Technology",
"Engineering"
] | 848 | [
"Measuring instruments"
] |
24,849,570 | https://en.wikipedia.org/wiki/Molecular%20Biology%20of%20the%20Cell%20%28book%29 | Molecular Biology of the Cell is a cellular and molecular biology textbook published by W.W. Norton & Co and currently authored by Bruce Alberts, Rebecca Heald, David Morgan, Martin Raff, Keith Roberts, and Peter Walter. The book was first published in 1983 by Garland Science and is now in its seventh edition. The molecular biologist James Watson contributed to the first three editions.
Molecular Biology of the Cell is widely used in introductory courses at the university level, being considered a reference in many libraries and laboratories around the world. It describes the current understanding of cell biology and includes basic biochemistry, experimental methods for investigating cells, the properties common to most eukaryotic cells, the expression and transmission of genetic information, the internal organization of cells, and the behavior of cells in multicellular organisms. Molecular Biology of the Cell has been described as "the most influential cell biology textbook of its time". The sixth edition is dedicated to the memory of co-author Julian Lewis, who died in early 2014.
The book was the first to position cell biology as a central discipline for biology and medicine, and immediately became a landmark textbook. It was written in intense collaborative sessions in which the authors lived together over periods of time, organized by editor Miranda Robertson, then-Biology Editor of Nature.
References
External links
7th edition official page
6th edition on Google Books
1983 non-fiction books
American non-fiction books
English-language non-fiction books
Biology textbooks
Cell biology
Molecular biology
Collaborative non-fiction books | Molecular Biology of the Cell (book) | [
"Chemistry",
"Biology"
] | 300 | [
"Biochemistry",
"Cell biology",
"Molecular biology"
] |
24,854,118 | https://en.wikipedia.org/wiki/Maritime%20impacts%20of%20volcanic%20eruptions | Volcanic eruptions can have various impacts on maritime transportation. When a volcano erupts, large amounts of noxious gases, steam, rock, and ash are released into the atmosphere; fine ash can be transported thousands of miles from the volcano, while high concentrations of coarse particles fall out of the air near the volcano. The high concentrations of hazardous toxic gases are localized in the immediate vicinity of the volcano.
Until recently, public focus has mainly been on aviation effects: ash, which can be undetectable, can cause an aircraft's engines to cut out, with potentially catastrophic results. However, the July 2008 eruption of Okmok Volcano in Alaska drew attention to the maritime effects. Employees at the National Weather Service Ocean Prediction Center's Ocean Applications Branch examined this event and partnered with the Alaska Volcano Observatory to compile information on the topic.
Ash can affect marine transportation in many ways:
Volcanic ash can clog air intake filters in a matter of minutes, crippling airflow to vital machinery. Ash particles are very abrasive and, if they get into an engine's moving parts, can cause severe damage very quickly.
Water is the main component in volcanic eruptions; it is what makes them so explosive. Through chemical reactions, toxic gases that are released in eruptions can bond or adsorb to ashfall particles. As the particles land on skin, metal, or other exposed shipboard equipment, they can begin to corrode those surfaces.
Certain types of volcanic ash do not dissolve easily in water. Instead, they clump on the surface of the ocean in pumice rafts. These rafts can clog salt water intake strainers very quickly, which can result in overheating of shipboard machinery dependent on sea water service cooling.
Heavy amounts of volcanic ash reduce visibility to less than ½ mi, which is a hazard to navigation. This, combined with the three other main impacts above, makes sailing in the vicinity of volcanic ash very dangerous for mariners.
National Weather Service Ashfall Advisories
National Weather Service Instruction 10-311 includes new text guidance for the offshore and high seas text weather forecasts issued by the Ocean Prediction Center and the Tropical Prediction Center's Tropical Analysis and Forecast Branch (TAFB).
Reported incidents
A few cases of ash impact on ships that have been documented are:
The 2008 eruption of Chaitén Volcano in Chile prompted mass evacuations, in which the Chilean Navy participated. There are reports that the Chilean Navy encountered pumice rafts that were sucked into the ships' salt water service systems, clogging sea strainers and overheating the engines, almost making the ships unable to escape.
The NOAA Ship Miller Freeman reported light accumulations of volcanic ash during the 2008 Okmok eruption in Dutch Harbor, Alaska. Due to volcanic-ash clogged ventilation systems, the ship remained in port until the event subsided.
In 1891 the Australian steam ship Catterthun reported steaming "for miles through masses of volcanic debris" after an eruption on the island of Sagir in the Indonesian archipelago. It was rumoured that all of the island's 12,000 inhabitants had perished in the eruption.
References
External links
NWS Alaska Region Volcano Coordination Page
USGS Volcano Hazards Program
USGS Alaska Volcano Observatory
USGS Cascades Volcano Observatory
Ocean Prediction Center Volcano Webpage
https://www.nytimes.com/1892/07/18/archives/volcanos-terrible-work-an-island-and-all-of-its-inhabitants.html
Volcanology
Weather hazards | Maritime impacts of volcanic eruptions | [
"Physics"
] | 722 | [
"Weather",
"Physical phenomena",
"Weather hazards"
] |
24,860,148 | https://en.wikipedia.org/wiki/Capacitor%20electric%20vehicle | A capacitor electric vehicle is a vehicle that uses supercapacitors (also called ultracapacitors) to store electricity.
The best ultracapacitors can only store about 5% of the energy that lithium-ion rechargeable batteries can, limiting them to a couple of miles per charge. This makes them ineffective as a general energy storage medium for passenger vehicles. But ultracapacitors can charge much faster than batteries, so in vehicles such as buses that have to stop frequently at known points where charging facilities can be provided, energy storage based exclusively on ultracapacitors becomes viable.
Capabus
China is experimenting with a new form of electric bus, known as Capabus, which runs without continuous overhead lines (is an autonomous vehicle) by using power stored in large onboard electric double-layer capacitors (EDLCs), which are quickly recharged whenever the vehicle stops at any bus stop (under so-called electric umbrellas), and fully charged in the terminus.
A few prototypes were being tested in Shanghai in early 2005. In 2006, two commercial bus routes began to use electric double-layer capacitor buses; one of them is route 11 in Shanghai. In 2009, Sinautec Automobile Technologies, based in Arlington, Virginia, and its Chinese partner Shanghai Aowei Technology Development Company reported that 17 forty-one-seat Ultracap Buses had been serving the Greater Shanghai area since 2006 without any major technical problems. During the Shanghai Expo in 2010, however, 40 supercapacitor buses were used on a special Expo bus service, and some of them broke down owing to the supercapacitors overheating. Buses in the Shanghai pilot are made by Germantown, Tennessee-based Foton America Bus Company. Another 60 buses, with ultracapacitors that supply 10 watt-hours per kilogram, were to be delivered the following year.
The buses have very predictable routes and need to stop regularly at short intervals, allowing quick recharging at charging stations at bus stops. A collector on the top of the bus rises a few feet and touches an overhead charging line at the stop; within a couple of minutes the ultracapacitor banks stored under the bus seats are fully charged. The buses can also capture energy from braking, and the company says that recharging stations can be equipped with solar panels. A third generation of the product, with greater range per charge, is planned.
Sinautec estimates that one of its buses has one-tenth the energy cost of a diesel bus and can achieve lifetime fuel savings of $200,000. The buses use 40% less electricity even when compared to an electric trolley bus, mainly because they are lighter. The ultracapacitors are made of activated carbon and have an energy density of six watt-hours per kilogram (for comparison a high-performance lithium-ion battery can achieve 200 watt-hours per kilogram, but the ultracapacitor bus is about 40% cheaper than a lithium-ion battery bus and far more reliable).
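A rough back-of-the-envelope estimate shows why such buses must recharge at stops. The sketch below uses the 6 Wh/kg figure from the text, but the bank mass and per-kilometre consumption are illustrative assumptions, not quoted specifications:

```python
# Illustrative range estimate for an ultracapacitor bus.
ENERGY_DENSITY_WH_PER_KG = 6       # from the text above
BANK_MASS_KG = 1000                # assumption: roughly a tonne of ultracapacitors on board
CONSUMPTION_KWH_PER_KM = 1.2       # assumption: typical electric city-bus consumption

stored_kwh = ENERGY_DENSITY_WH_PER_KG * BANK_MASS_KG / 1000
range_km = stored_kwh / CONSUMPTION_KWH_PER_KM

print(f"stored energy: {stored_kwh:.1f} kWh")   # 6.0 kWh
print(f"range per charge: {range_km:.1f} km")   # ~5 km, hence recharging every few stops
```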
There is also a plug-in hybrid version, which also uses ultracaps.
RATP, the publicly owned company that manages most of Paris' public transport system, is currently performing tests using a hybrid bus outfitted with ultracapacitors. The model, called Lion's City Hybrid, is supplied by German manufacturer MAN.
GSP Belgrade, Serbia, has launched the first bus line operated solely by supercapacitor buses from Chinese manufacturer Higer. The first sustainable ultracapacitor (UC) e-bus in the EU was presented by the Chariot Motors Company in Sofia, Bulgaria, in 2014. The 18-month pilot project was successful and had a great public response. The UC bus was tested by the German laboratory Belicon GmbH and was found to be among the vehicles with the lowest energy consumption.
Based on the pilot's success, Sofia, the capital of Bulgaria and one of the most polluted European cities, chose UC e-buses as an innovative technology suitable for its city transport. Sofia's public transport operator, Stolichen Elektrotransport, put 45 Chariot-Higer 12 m UC electric buses into operation, 15 in 2020 and 30 in 2021. The vehicles are equipped with 40 kWh ultracapacitors; the buses run on routes 6, 11, 60, 73, 74, 84, 123 and 184, with an average unduplicated route length of 11 km.
In Graz, Austria, lines 50 and 34E are running with short intermediate recharging, using 24–32 kWh EDLC supercapacitors.
Current collectors at bus stops
Pantographs and ground-level power supply current collectors are integrated in bus stops to recharge electric buses quickly, making it possible to use a smaller battery on the bus, which reduces the capital and running costs.
Subway and tram
In a subway car or tram, an insulator at a track switch may cut off power from the car for a few feet along the line; a large onboard capacitor can store enough energy to drive the car through this gap in the power feed.
The new Nanjing tram uses supercapacitor technology, with charging hardware at each stop instead of continuous catenary. The first line started operating in 2014. The rail vehicles were produced by CSR Zhuzhou; according to the manufacturers, they are the world's first low-floor tram completely powered by supercapacitors. Several similar rail vehicles have been ordered for the Guangzhou Tram line as well.
Other deployments
In 2001 and 2002 VAG, the public transport operator in Nuremberg, Germany, tested a hybrid bus which uses a diesel-electric drive system with electric double-layer capacitors.
Since 2003 Mannheim Stadtbahn in Mannheim, Germany, has operated a capa vehicle, an LRV (light-rail vehicle), which uses electric double-layer capacitors to store braking energy.
Other companies from the public transport manufacturing sector are developing electric double-layer capacitor technology: The Transportation Systems division of Siemens AG is developing a mobile energy storage based on EDLCs called Sibac Energy Storage and also Sitras SES, a stationary version integrated into the trackside power supply.
Adetel Group has developed its own energy saver named "NeoGreen" for LRV, LRT and metros.
The company Cegelec is also developing an EDLC-based energy storage system.
Proton Power Systems has created the world's first triple hybrid forklift truck, which uses fuel cells and batteries as primary energy storage with EDLCs to supplement them.
University of Southampton spin-out Nanotecture has received a government grant to develop supercapacitors for hybrid vehicles. The company is set to receive £376,000 from the DTI in the UK for a project entitled "next generation supercapacitors for hybrid vehicle applications". The project also involves Johnson Matthey and HILTech Developments. The project will use supercapacitor technology to improve hybrid electric vehicles and increase overall energy efficiency.
Future developments
Sinautec is in discussions with MIT's Schindall about developing ultracapacitors of higher energy density using vertically aligned carbon nanotube structures that give the devices more surface area for holding a charge. So far they are able to get twice the energy density of an existing ultracapacitor, but they are trying to get about five times. This would create an ultracapacitor with one-quarter of the energy density of a lithium-ion battery.
Future developments include the use of inductive charging under the street, to avoid overhead wiring. A pad under each bus stop and at each stop light along the way would be used.
Motor racing
The FIA, the governing body for many motor racing events, proposed in the Power-Train Regulation Framework for Formula 1 version 1.3 of 23 May 2007 that a new set of power train regulations be issued that includes a hybrid drive of up to 200 kW input and output power using "superbatteries" made with both batteries and supercapacitors.
UltraBatteries
Ultracapacitors are used in some electric vehicles to store rapidly available energy with their high power density, in order to keep batteries within safe resistive heating limits and extend battery life. The Ultrabattery combines a supercapacitor and a battery in a single unit, creating an electric vehicle battery that lasts longer, costs less and is more powerful than current technologies used in plug-in hybrid electric vehicles (PHEVs).
See also
Battery electric bus
Charging station
Electric bus
Electric road
Fuel cell bus
Gyrobus
Power Vehicle Innovation
Trolleybus
ABB TOSA (bus) (TOSA Flash Mobility, Clean City, Smart Bus)
External links
An Electric Bus That Recharges While You Step Inside.
References
Electric buses
Capacitors
Electric vehicles
Energy storage
Vehicles introduced in 2001 | Capacitor electric vehicle | [
"Physics"
] | 1,803 | [
"Capacitance",
"Capacitors",
"Physical quantities"
] |
33,473,923 | https://en.wikipedia.org/wiki/Dra%C4%8Da | Drača (Serbian Cyrillic: Драча) is a village in the municipality of Stanovo, Serbia. According to the preliminary results of the 2011 census, the village has a population of 911 people. The geographical centre of Serbia is located within the village's territory, about 7 kilometres from the centre of the city of Kragujevac.
References
Populated places in Šumadija District
Geographical centres | Drača | [
"Physics",
"Mathematics"
] | 79 | [
"Point (geometry)",
"Geometric centers",
"Geographical centres",
"Symmetry"
] |
33,476,292 | https://en.wikipedia.org/wiki/Out%20of%20Phase%20Stereo | Out of Phase Stereo (OOPS) is an audio technique which manipulates the phase of a stereo audio track to isolate or remove certain components of the stereo mix. It works on the principle of phase cancellation, in which two identical but inverted waveforms summed together cancel each other out.
Process
When a sine wave is mixed with one of identical frequency but opposite amplitude (i.e. of inverse polarity), the combined result is silence. A two-channel stereo recording contains a number of waveforms; sounds that are panned to the extreme left or right will contain the greatest difference in amplitude between the two channels, while those towards the centre will contain the smallest. A mix of the left channel with the polar inverse of the right channel will reduce centre-panned sounds towards silence, while preserving those towards the extremities.
In practice, the OOPS technique can be performed by inverting the polarity of one speaker or signal lead. It can also be performed using digital audio software by inverting one of the channels of a stereo audio waveform, and then summing both channels together to create a single mono channel.
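A minimal digital sketch of the summation described above (assuming NumPy and a synthetic test signal; real material would be loaded with an audio I/O library rather than generated):

```python
import numpy as np

def oops_mono(stereo: np.ndarray) -> np.ndarray:
    """Out of Phase Stereo: invert one channel, then sum to mono.

    stereo has shape (n_samples, 2). Centre-panned content (identical in both
    channels) cancels out; content panned towards the extremes survives.
    """
    left, right = stereo[:, 0], stereo[:, 1]
    return left + (-right)                        # equivalent to left - right

# Toy signal: a "vocal" panned dead centre plus a "guitar" hard-panned left.
t = np.linspace(0.0, 1.0, 48_000, endpoint=False)
vocal = 0.5 * np.sin(2 * np.pi * 440 * t)         # identical in both channels
guitar = 0.5 * np.sin(2 * np.pi * 196 * t)        # left channel only
stereo = np.column_stack((vocal + guitar, vocal))

mono = oops_mono(stereo)
print(np.allclose(mono, guitar))                  # True: the centred vocal has cancelled out
```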
Applications in music
This technique has been previously used to eliminate vocals in a stereo track (as vocals tend to be panned centre) to create crude karaoke tracks, or generate surround channels from a stereo source, such as in Dolby Pro Logic. It has also been used in the recording process to include tracks that were only audible once an OOPS technique was applied. This feature can be observed in several of the Beatles' stereo albums. Australian band Cinema Prague recorded a single track Meldatype that contained two songs played simultaneously, one of which was only audible after an OOPS technique was applied. It consisted of two mono tracks: a loud and distorted electric guitar playing chords repetitively, as well as a quiet vocal track. The guitar had one of the channels inverted, while the vocal track was identical in both channels. During normal playback, the guitar would be heard throughout the entire track. When the channels were summed to mono, however, the regular and inverted guitar tracks would cancel out, revealing the vocal track to the listener.
References
Interference
Stereophonic techniques
Wave mechanics | Out of Phase Stereo | [
"Physics"
] | 453 | [
"Waves",
"Wave mechanics",
"Physical phenomena",
"Classical mechanics"
] |
33,476,903 | https://en.wikipedia.org/wiki/Sepro%20Mineral%20Systems | Sepro Mineral Systems Corp. is a Canadian company founded in 1987 and headquartered in British Columbia, Canada. Formed through the acquisition of Sepro Mineral Processing International by Falcon Concentrators in 2008, the company's key focus is the production of mineral processing equipment for the mining and aggregate industries. Sepro Mineral Systems Corp. also provides engineering and process design services. Products sold by Sepro include grinding mills, ore scrubbers, vibrating screens, centrifugal gravity concentrators, agglomeration drums, and dense media separators. The company is also a supplier of single-source modular pre-designed and custom-designed plants and circuits.
Today, Sepro Mineral Systems Corp. is represented by global agents in over 15 countries and has equipment operating in over 31 countries around the world.
Products
Falcon Gravity Concentrators
The Falcon Concentrator is a type of gravity separation device for the recovery of valuable metals and minerals. There are three types of Falcon Concentrators: Falcon Semi-Batch (SB), Falcon Continuous (C) and Falcon Ultra-Fine (UF). All models of Falcon Concentrator rely on the creation of centrifugal forces by way of a rapidly rotating, vertical bowl in order to stratify and separate particles based on weight. The amount of gravitational force generated and the method of collecting these heavier particles differs for each model.
Falcon Semi-Batch
The Falcon semi batch centrifugal concentrator is primarily used for the recovery of free (liberated) precious metals such as gold, silver and platinum. The machine generates forces up to 200 times the force of gravity (200 G's) and makes use of a two-stage rotating bowl for mineral separation. The smooth-walled lower portion is for particle stratification and then a fluidized upper portion is used for the collection of the heavier particles. The machine is stopped periodically to rinse and collect the valuable concentrate from the bowl. The Falcon SB concentrator is used for gold recovery at many mines around the world, including Quadra FNX Mining's Robinson mine in the United States, Newcrest's Telfer Gold Mine in Australia and the Sadiola Gold Mine (owned principally by AngloGold Ashanti and Iamgold) in Mali.
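To relate the quoted G-levels to bowl speed, the standard centripetal-acceleration relation can be used; the 0.25 m bowl radius below is an assumed illustrative value, not a Sepro specification:

\[
a = \omega^2 r \quad\Rightarrow\quad \omega = \sqrt{\frac{200 \times 9.81\ \mathrm{m/s^2}}{0.25\ \mathrm{m}}} \approx 89\ \mathrm{rad/s} \approx 846\ \mathrm{rpm},
\]

so a bowl of 0.25 m radius would have to spin at roughly 850 rpm to produce 200 G at its wall, while the 600 G quoted below for the Ultra-Fine model at the same radius would require about \(\sqrt{3}\) times that speed (roughly 1460 rpm).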
Falcon Continuous
The Falcon Continuous (C) centrifugal concentrator is primarily used for the separation of heavy minerals which occur in ore concentrations above 0.1% by weight, such as cassiterite, tantalum and scheelite. It is also used for coal cleaning and pre-concentration of gold bearing ores. The machine generates forces up to 300 times the force of gravity (300 G's) and operates by using a smooth-walled, rotating bowl to stratify the material into heavier and lighter fractions then uses pneumatic valves to control the amount of heavy material that reports to the concentrate collection stream. It does not use any fluidization water and relies entirely on centrifugal force for separation. The Falcon C concentrator is used in various process plants around the world, such as the Tanco mine in Canada, the Sekisovskoye mine in Kazakhstan and the Renison tin mine in Tasmania.
Falcon Ultra-Fine
The Falcon Ultra-Fine (UF) centrifugal concentrator is primarily used for the separation of heavy minerals which occur in ore concentrations above 0.1% by weight, such as cassiterite, tantalum and scheelite when the majority of the particles are smaller than 75 μm. The machine generates forces up to 600 times the force of gravity (600 G's) and uses a smooth-walled bowl for particle stratification with a pneumatically controlled rubber lip for heavy material collection. The machine is stopped periodically to rinse and collect the valuable concentrate from the bowl. Studies have found that the deposition of heavy material within the bowl can be predicted by a hindered settling model. The Falcon UF concentrator is used in a number of process plants around the world such as the Tanco mine in Canada and the Bluestone tin mine in Tasmania.
Sepro Tyre Drive Scrubbers
Scrubbers are material washers used to break down and disperse clays in order to prepare mineral ores or construction aggregates for further processing. Sepro Tyre Drive Scrubbers are manufactured up to 3.6m in diameter and are capable of processing up to 1500 tonnes per hour of material. Shell supported Scrubbers such as the Sepro PTD Scrubber minimize stress on the shell by spreading the power drive over the full length of the washing drum. These scrubbers operate in many applications on feeds with high clay content, and are commonly used for difficult ore and stone washing duties. A few specified applications of Sepro Scrubbers include removal of gold-"robbing" carbonaceous material and other contaminants from gold ores, the processing of bauxite ores for aluminum production, the washing of laterites (gold, nickel, cobalt) to liberate fine metals for gravity recovery, and the washing of crushed aggregate, gravel and sand to remove clay contamination.
Sepro Tyre Drive Agglomeration Drums
Sepro Agglomeration Drums are specifically designed to prepare feeds with high fines content on Gold and Base Metal heap loading operations. Processes where a Sepro Agglomeration Drum can be utilized include gold, copper, uranium and nickel laterite. The action in the agglomeration drum, combined with small additions of cement or lime, binds the fines into a "pelletised" product, which can be heaped and leached out without "pooling" and "channeling" caused by loss of heap permeability due to blinding by fines. The machine uses flexible rubber liners to prevent build up without the use of lifter bars and is adjustable on a pivotable base frame. Shell supported agglomerators such as the Sepro PTD Agglomeration Drum minimize stress on the shell by spreading the power drive over the full length of the unit.
Sepro Tyre Drive Grinding Mills
Sepro Tyre Driven Grinding Mills are designed for small and medium capacity grinding applications, specifically small tonnage plants, regrinding mills, reagent prep and lime slaking. Sepro Pneumatic Tyre Driven (PTD) mills provide an alternative to standard trunnion drive systems. The drive consists of multiple gears boxes and electric motors directly connected and controlled through an AC variable frequency drive. Shell supported mills such as the Sepro PTD mills minimize stress on the mill shell by spreading the power drive over the full length of the mill. Sepro Mills are suitable for ball, rod and pebble charges and are available with overflow or grate discharge to suit the application. Shell supported mills such as the Sepro PTD Grinding Mills minimize stress on the shell by spreading the power drive over the full length of the unit.
Condor Dense Medium Separators
The Condor Dense Medium Separator (DMS) is a multi-stage, high efficiency media separation machine for mineral processing operations at the rougher and scavenger stage. It is typically used in a pre-concentration duty prior to processing or milling to reject barren material. The unit is manufactured with either two or three stages of separation depending on the media with one or two valuable densities resulting, while the unit can produce up to four products from one dense medium vessel altogether. The Condor DMS can take a larger feed particle size compared to a DMS cyclone of the same diameter and capacity, and is capable of handling higher sinks or floats loading without affecting performance. The valuable dense material (or 'sinks') can be combined or separated at the final stage and is then pumped onto the next process in the circuit. Sepro Mineral Systems Corp. supplies customizable DMS Plants for a wide variety of application requirements. Sepro's standard two product (concentrate, tailings) DMS Plant utilizes a two-stage Condor Separator and single density medium circuit, while the three product (concentrate, middlings, tailings) DMS Plant utilizes a three-stage Condor Separator and two medium circuits at high and low density.
Sepro Leach Reactors
The Sepro Leach Reactor is a high concentration leach reactor developed to treat the gold concentrate produced by the Falcon Concentrator. The unit consists of a concentrate holding tank and a leach tank and impeller, which are linked by a Sepro vertical bowl pump. The SLR uses either peroxide or oxygen gas to achieve the elevated levels of dissolved oxygen required to accelerate the leaching process, with no other reagents required. The pregnant leach solution produced can be directly electrowon. With the addition of an electrowinning unit the final product becomes a gold-plated carbon that can be directly refined to produce gold bullion. Extensive test work of the SLR on site has shown over 99% of the target mineral is recovered through a simple, fully automated process that is easily incorporated into recovery operations. Sepro Mineral Systems Corp. supplies SLR units in a range of capacities.
Sepro Pumps
Sepro supplies horizontal slurry pumps, vertical sump pumps, vertical froth pumps, vertical tank pumps and horizontal fluid process pump models which are metal lined or rubber lined, one option being SH46® material for advanced wear resistance. They are designed to operate in the mining, aggregate, chemical and industrial sectors. Applications suitable for Sepro Pumps include mill discharge, mineral concentrate, dense media, coarse / fine tailings, process water and aggregates. Sepro engineers mobile, modular and fixed mineral processing plant designs which incorporate the complete line of slurry, sump, froth, tank and fluid Sepro Pumps.
Sepro Blackhawk 100 Cone Crushers
The Sepro Blackhawk 100 Cone Crusher is a modern, hydraulically operated cone crusher designed to be simple, rugged and effective for heavy duty mining and aggregate applications. The combination of the speed and eccentric throw of the crusher provides fine crushing capability and high capacity in a very compact design. The Blackhawk is capable of being applied as a secondary or tertiary crusher as well as a pebble crusher. The Blackhawk 100 is driven directly via a flexible coupling to the electric drive motor. This arrangement eliminates the need for sheaves and v-belts, allowing for simplified operation and maintenance. A variable speed drive package is included to optimize the speed of the machine to the given liner profile, feed and production conditions.
Sepro-Sizetec Screens
Sepro-Sizetec Screens are used for a variety of particle size separation and dewatering duties in mineral processing and aggregate applications. In mineral processing applications, particle size separation is of utmost importance in order to optimize crushing, grinding and gravity separation as well as many other processes. In aggregate applications, proper size separation and dewatering is essential to generate a saleable product. High capacity capable and featuring interchangeable screen decks, Sepro-Sizetec Screens are used for gold ore processing, fine aggregates, industrial minerals, soil remediation and coal processing applications.
Plants and process design
Sepro designs and builds modular and mobile processing plants for a wide range of mineral applications. Complete plants can be assembled using Sepro manufactured equipment along with equipment from third-party vendors and sub-contractors.
Sepro Mobile Plants are designed to be easily re-locatable as they are mounted on road transportable custom built trailer assemblies. These include the Sepro Mobile Mill Plant and Sepro Mobile Flotation Plant, both of which were installed by Banks Island Gold Ltd at the company's Yellow Giant Gold Property on the coast of British Columbia. They can be designed to encompass a wide variety of process options from crushing through to the final concentrate collection.
Sepro Modular and Skid Mounted Plants are engineered around structural elements that are simple and easy to erect on site. These plants can be designed with larger equipment for higher tonnage applications than that of the Sepro Mobile Plants. One example is a 360 TPD Gold Processing Plant Sepro supplied to ProEurasia LCC for the Vladimirskaya Project in Russia. This included milling, gravity and smelting circuits.
Sepro also offers standard process modules which are designed around a single recovery or procession option. Dense Media Separation and Gravity Concentration are two examples of standard Sepro process modules.
References
External links
Sepro Mineral Systems Corp. website
iCON Gold Recovery Corp. website
Metallurgical processes
Industrial processes
Centrifuges
Gold mining
Mining equipment companies
Manufacturing companies of Canada | Sepro Mineral Systems | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,546 | [
"Centrifugation",
"Mining equipment",
"Chemical equipment",
"Metallurgical processes",
"Metallurgy",
"Mining equipment companies",
"Centrifuges"
] |
33,478,695 | https://en.wikipedia.org/wiki/Non-linear%20effects | In enantioselective synthesis, a non-linear effect refers to a process in which the enantiopurity of the catalyst (or the chiral auxiliary) does not correlate linearly with the enantiopurity of the product produced. This deviation from linearity is described as the non-linear effect, NLE. The linearity can be expressed mathematically, as shown in Equation 1. Stereoselection (i.e. the eeproduct) that is higher or lower than the enantiomeric excess of the catalyst (eecatalyst, relative to the equation) is considered non-routine behavior.
For an ideal asymmetric reaction, the eeproduct may be described as the product of eemax and the eecatalyst (Equation 1): eeproduct = eemax × eecatalyst. This is not the case for reactions exhibiting NLEs.
In 1976, Wynberg and Feringa observed different chemical behavior in the reaction of an enantiopure and racemic substrate in a phenol coupling reaction. In 1981, Kagan and collaborators described the first non-linear effects in asymmetric catalysis and gave rational explanations for these phenomena. General definitions and mathematical models are essential for understanding nonlinear effects and their application to specific chemical reactions. In recent decades, the study of nonlinear effects has helped elucidate reaction mechanism and guide synthetic applications.
Types of non-linear effects
Positive non-linear effect, (+)-NLE
A positive non-linear effect, (+)-NLE, is present in an asymmetric reaction which demonstrates a higher product ee (eeproduct) than predicted by an ideal linear situation (Figure 1). It is often referred to as asymmetric amplification, a term coined by Oguni and co-workers. An example of a positive non-linear effect is observed in the case of Sharpless epoxidation with the substrate geraniol. In all cases of chemical reactivity exhibiting (+)-NLE, there is an innate tradeoff between overall reaction rate and enantioselectivity: the overall rate is slower and the enantioselectivity is higher relative to a linearly behaving reaction.
Negative non-linear effect, (−)-NLE
Referred to as asymmetric depletion, a negative non-linear effect is present when the eeproduct is lower than predicted by an ideal linear situation. In contrast to a (+)-NLE, a (−)-NLE results in a faster overall reaction rate and a decrease in enantioselectivity. Synthetically, a (−)-NLE could be beneficial when a reasonable assay for separating the product enantiomers is available and high throughput is necessary. An interesting example of a (−)-NLE has been reported in asymmetric sulfide oxidations.
Modeling non-linear effects
In 1986, Henri B. Kagan and coworkers observed a series of known reactions that followed non-ideal behavior. A correction factor, f, was adapted to Equation 1 to fit the kinetic behavior of reactions with NLEs (Equation 2).
Equation 2: A general mathematical equation that describes non-linear behavior
Unfortunately, Equation 2 is too general to apply to specific chemical reactions. Due to this, Kagan and coworkers also developed simplified mathematical models to describe the behavior of catalysts which lead to non-linear effects. These models involve generic MLn species, based on a metal (M) bound to n number of enantiomeric ligands (L). The type of MLn model varies among asymmetric reactions, based on the goodness of fit with reaction data. With accurate modeling, NLE may elucidate mechanistic details of an enantioselective, catalytic reaction.
ML2 model
General description
The simplest model to describe a non-linear effect, the ML2 model involves a metal system (M) with two chiral ligands, LR and LS. In addition to the catalyzed reaction of interest, the model accounts for a steady state equilibrium between the unbound and bound catalyst complexes. There are three possible catalytic complexes at equilibrium (MLSLR, MLSLS, MLRLR). The two enantiomerically pure complexes ( MLSLS, MLRLR) are referred to as homochiral complexes. The possible heterochiral complex, MLRLS, is often referred to as a meso-complex.
The equilibrium constant that describes this equilibrium, K, is presumably independent of the catalytic chemical reaction. In Kagan's model, K is determined by the amount of aggregation present in the chemical environment. A value of K=4 corresponds to a statistical distribution of ligands among the metal complexes; in other words, at K=4 there is no thermodynamic disadvantage or advantage to the formation of heterochiral complexes.
Obeying the same kinetic rate law, each of the three catalytic complexes catalyzes the desired reaction to form product. As enantiomers of each other, the homochiral complexes catalyze the reaction at the same rate, although opposite absolute configurations of the product are induced (i.e. rRR=rSS). The heterochiral complex, however, forms a racemic product with a different rate constant (i.e. rRS).
Mathematical model for the ML2 Model
In order to describe the ML2 model in quantitative parameters, Kagan and coworkers described the following formula:
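The expression itself is not reproduced in the text above; the form usually quoted for Kagan's ML2 model, reconstructed here from the definitions of β and g that follow, is:

eeproduct = eemax × eecatalyst × (1 + β)/(1 + gβ)

in which the factor (1 + β)/(1 + gβ) plays the role of the correction factor f of Equation 2.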
In the correction factor, Kagan and co-workers introduced two new parameters absent in Equation 1, β and g. In general, these parameters represent the concentration and activity of the three catalytic complexes relative to each other. β represents the relative amount of the heterochiral complex (MLRLS) as shown in Equation 3. It is important to recognize that the equilibrium constant K is independent of both β and g. As described by Donna Blackmond at Scripps Research Institute, "the parameter K is an inherent property of the catalyst mixture, independent of the eecatalyst. K is also independent of the catalytic reaction itself, and therefore independent of the parameter g."
Equation 3: The correction factor β is defined as β = z/(x + y), where z is the concentration of the heterochiral complex and x and y are the respective concentrations of the two homochiral complexes.
The parameter g represents the reactivity of the heterochiral complex relative to the homochiral complexes. As shown in Equation 4, this may be described in terms of rate constants. Since the homochiral complexes react at identical rates, g can then be described as the rate constant corresponding to the heterochiral complex divided by the rate constant corresponding to either homochiral complex.
Equation 4: The correction parameter, g, can be described as the rate of product formation with the heterochiral catalyst MLRLS divided by the rate of product formation of the homochiral complex (MLRLR or MLSLS).
Interpretation of the mathematical results of the ML2 Model
If β=0 or g=1, the ML2 equation simplifies to Equation 1. No meso catalyst complex is present or active, so the simple additive properties apply and a linear relationship between product enantioselectivity and the enantiopurity of the chiral catalyst is observed.
If the correction factor is greater than one, the reaction displays an asymmetric amplification, also known as a positive non-linear effect. Under the ML2 model, a (+)-NLE implies a less reactive heterochiral catalyst. In this case, the equilibrium constant K also increases as the correction factor increases. Although the product enantioselectivity is relatively high compared to the enantiopurity of the chiral catalyst, this comes at the cost of the overall reaction rate. In order to achieve an asymmetric amplification, there must be a relatively large concentration of the heterochiral complex. In addition, this heterochiral complex must have a substantially slower rate of reactivity, rRS. Therefore, the concentration of the reactive catalytic species decreases, leading to an overall slower reaction rate.
If the correction factor is less than one, the reaction displays an asymmetric depletion, also known as a negative non-linear effect. In this scenario, the heterochiral catalyst is relatively more reactive than the homochiral catalyst complexes. In this case, the (−)-NLE may result in an overall faster although less selective product formation.
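As a numerical illustration, the following short Python sketch evaluates the reconstructed ML2 expression, treating β and g as independent inputs rather than deriving β from the equilibrium constant K and the ligand ee (which would require solving the equilibrium explicitly); all values are hypothetical:

def kagan_ml2(ee_catalyst, ee_max, beta, g):
    """Product ee predicted by the reconstructed ML2 expression."""
    correction = (1.0 + beta) / (1.0 + g * beta)
    return ee_max * ee_catalyst * correction

ee_cat, ee_max = 0.5, 0.95
linear = ee_max * ee_cat                                   # ideal behavior (Equation 1)
amplified = kagan_ml2(ee_cat, ee_max, beta=1.0, g=0.1)     # slow meso complex -> (+)-NLE
depleted = kagan_ml2(ee_cat, ee_max, beta=1.0, g=5.0)      # fast meso complex -> (-)-NLE
print(f"linear: {linear:.3f}   (+)-NLE: {amplified:.3f}   (-)-NLE: {depleted:.3f}")

A value of g below one raises the product ee above the linear prediction, while g above one depresses it, mirroring the two cases discussed above.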
Reaction kinetics with the ML2 model
Following H.B. Kagan's publication of the ML2 model, Professor Donna Blackmond at Scripps demonstrated how this model could also be used to calculate overall reaction rates. With these relative reaction rates, Blackmond showed how the ML2 model could be used to formulate kinetic predictions which could then be compared to experimental data. The overall rate equation, Equation 6, is shown below.
In addition to the goodness of fit to the model, kinetic information about the overall reaction may further validate the proposed reaction mechanism. For instance, a positive NLE in the ML2 should result in an overall lower reaction rate. By solving the reaction rate from Equation 6, one can confirm if that is the case.
M*L2 Model
General description
Similar to the ML2 model, this modified system involves chiral ligands binding to a metal center (M) to create a new center of chirality. There are four pairs of enantiomeric chiral complexes in the M*L2 model, as shown in Figure 5.
In this model, one can make the approximation that the dimeric complexes dissociate irreversibly to the monomeric species. In this case, the same mathematical equations that applied to the ML2 model also apply to the M*L2 model.
ML3 model
General description
A higher level of modeling, the ML3 model involves four active catalytic complexes: MLRLRLR, MLSLSLS, MLRLRLS, MLSLSLR. Unlike the ML2 model, where only the two homochiral complexes reacted to form enantiomerically enriched product, all four of the catalytic complexes react enantioselectively. However, the same steady state assumption applies to the equilibrium between unbound and bound catalytic complexes as in the more simple ML2 model. This relationship is shown below in Figure 7.
Mathematical modeling
Calculating the eeproduct is considerably more challenging than in the simple ML2 model. Each of the two heterochiral catalytic complexes should react at the same rate. The homochiral catalytic complexes, similar to the ML2 case, should also react at the same rate. As such, the correction parameter g is still calculated as the rate of the heterochiral catalytic complex divided by the rate of the homochiral catalytic complex. However, since the heterochiral complexes lead to enantiomerically enriched product, the overall equation for calculating the eeproduct becomes more difficult. In Figure 8, the mathematical formula for calculating enantioselectivity is shown.
Figure 8: The mathematical formula describing an ML3 system. The eeproduct is calculated by multiplying the eemax by the correction factor developed by Kagan and co-workers.
Interpretation of the ML3 Model
In general, interpreting the correction parameter values of g to predict positive and negative non-linear effects is considerably more difficult. In the case where the heterochiral complexes MLRLRLS and MLSLSLR are less reactive than the homochiral complexes MLSLSLS and MLRLRLR, a kinetic behavior similar to the ML2 model is observed (Figure 9). However, a substantially different behavior is observed in the case where the heterochiral complexes are more reactive than the homochiral complexes. In such case, Kagan and collaborators showed that it is possible to have a case “where the enantiomeric excess could take on much larger values for a partially resolved ligand than for an enantiomerically pure ligand”. The authors proposed the term “hyperpositive nonlinear effect” to characterize this situation.
Reservoir Effect
General Description
Often described alongside the ML2 model, the reservoir effect describes the scenario in which part of the chiral ligand is sequestered in a pool of inactive heterochiral catalytic complexes outside the catalytic cycle. A pool of unreactive heterochiral catalysts, described with an eepool, develops an equilibrium with the catalytically active homochiral complexes, described with an eeeffective. Depending on the concentration of the inactive pool of catalysts, one can calculate the enantiopurity of the active catalyst complexes. The general result of the reservoir effect is an asymmetric amplification, also known as a (+)-NLE.
Origin of the Reservoir Effect
The pool of unreactive catalytic complexes, as described in the reservoir effect, can be the result of several factors. One of these could potentially be an aggregation effect amongst the heterochiral catalytic complexes that takes place prior to the steady state equilibrium.
Early Examples of the Non-Linear Effect
Sharpless Epoxidation of Geraniol
In 1986, Kagan and co-workers were able to demonstrate NLE with the Sharpless epoxidation of (E)-Geraniol (Figure 11). Under Sharpless oxidizing conditions with Ti(O-i-Pr)4/(+)-DET/t-BuOOH, Kagan and coworkers were able to demonstrate that there was a non-linear correlation between the eeproduct and the ee of the chiral catalyst, diethyl tartrate (DET). As one can see from Figure 11, a greater eeproduct than expected was observed. According to the ML2 model, Kagan and coworkers were able to conclude that a less reactive heterochiral DET complex was present. This would therefore explain the asymmetric amplification observed. The NLE data is also consistent with the Sharpless mechanism of asymmetric epoxidation.
Asymmetric Sulfide Oxidation
In 1994, Kagan and co-workers reported a NLE in asymmetric sulfide oxidation. The goodness of fit for the reaction data matched the ML4 model. This implied that a dimeric Titanium complexed with 4 DET ligands was the active catalytic species. In this case, the reaction rate would be significantly faster relative to ideal reaction kinetics. The downfall, as is the case in all (−)-NLE scenarios, is that the enantioselectivity was lower than expected. Below, in Figure 12, one can see the concavity of the data points is highly indicative of a (−)-NLE.
Prebiotic Catalysis and the Non-linear Effect
In prebiotic chemistry, autocatalytic systems play a significant role in understanding the origin of chirality in life. An autocatalytic reaction, a reaction in which the product acts as a catalyst for itself, serves as a model for homochirality. The asymmetric Soai reaction is commonly cited as chemical plausibility for this prebiotic hypothesis. In this system, an asymmetric amplification is observed during autocatalysis. Professor Donna Blackmond has studied the NLE of this reaction extensively using Kagan's ML2 model. From this mathematical analysis, Blackmond was able to conclude that a dimeric, homochiral complex was the active catalyst in promoting homochirality for the Soai reaction.
Notes
Catalysis | Non-linear effects | [
"Chemistry"
] | 3,268 | [
"Catalysis",
"Chemical kinetics"
] |
33,482,057 | https://en.wikipedia.org/wiki/Reaction%20progress%20kinetic%20analysis | In chemistry, reaction progress kinetic analysis (RPKA) is a subset of a broad range of kinetic techniques utilized to determine the rate laws of chemical reactions and to aid in elucidation of reaction mechanisms. While the concepts guiding reaction progress kinetic analysis are not new, the process was formalized by Professor Donna Blackmond (currently at Scripps Research Institute) in the late 1990s and has since seen increasingly widespread use. Unlike more common pseudo-first-order analysis, in which an overwhelming excess of one or more reagents is used relative to a species of interest, RPKA probes reactions at synthetically relevant conditions (i.e. with concentrations and reagent ratios resembling those used in the reaction when not exploring the rate law.) Generally, this analysis involves a system in which the concentrations of multiple reactants are changing measurably over the course of the reaction. As the mechanism can vary depending on the relative and absolute concentrations of the species involved, this approach obtains results that are much more representative of reaction behavior under commonly utilized conditions than do traditional tactics. Furthermore, information obtained by observation of the reaction over time may provide insight regarding unexpected behavior such as induction periods, catalyst deactivation, or changes in mechanism.
Monitoring reaction progress
Reaction progress kinetic analysis relies on the ability to accurately monitor the reaction conversion over time. This goal may be accomplished by a range of techniques, the most common of which are described below. While these techniques are sometimes categorized as differential (monitoring reaction rate over time) or integral (monitoring the amount of substrate and/or product over time), simple mathematical manipulation (differentiation or integration) allows interconversion of the data obtained by either of the two. Regardless of the technique implemented, it is generally advantageous to confirm the validity in the system of interest by monitoring with an additional independent method.
Reaction progress NMR
NMR spectroscopy is often the method of choice for monitoring reaction progress, where substrate consumption and/or product formation may be observed over time from the change of peak integration relative to a non-reactive standard. From the concentration data, the rate of reaction over time may be obtained by taking the derivative of a polynomial fit to the experimental curve. Reaction progress NMR may be classified as an integral technique as the primary data collected are proportional to concentration vs. time. While this technique is extremely convenient for clearly defined systems with distinctive, isolated product and/or reactant peaks, it has the drawback of requiring a homogeneous system amenable to reaction in an NMR tube. While NMR observation may allow for the identification of reaction intermediates, the presence of any given species over the course of the reaction does not necessarily implicate it in a productive process. Reaction progress NMR may, however, often be run at variable temperature, allowing the rate of reaction to be adjusted to a level convenient for observation. Examples of utilization of reaction progress NMR abound, with notable examples including investigation of Buchwald–Hartwig amination (One might note that considerable debate surrounded the best approach to mechanistic development of the Buchwald-Hartwig amination as indicated by a number of contradictory and competing reports published over a short period of time. See the designated article and references therein.)
In situ FT-IR
In situ infrared spectroscopy may be used to monitor the course of a reaction, provided a reagent or product shows distinctive absorbance in the IR spectral region. The rate of reactant consumption and/or product formation may be abstracted from the change of absorbance over time (by application of Beer's Law). Even when reactant and product spectra display some degree of overlap, modern instrumentation software is generally able to accurately deconvolute the relative contributions provided there is a dramatic change in the absolute absorbance of the peak of interest over time. In situ IR may be classified as an integral technique as the primary data collected are proportional to concentration vs. time. From these data, the starting material or product concentration over time may be obtained by simply taking the integral of a polynomial fit to the experimental curve. With increases in the availability of spectrometers with in situ monitoring capabilities, FT-IR has seen increasing use in recent years. Examples of note include mechanistic analysis of the amido-thiourea catalyzed asymmetric Strecker synthesis of unnatural amino acids and of the Lewis base catalyzed halolactonization and cycloetherification.
In situ UV-vis
Analogously to the in situ IR experiments described above, in situ UV-visible absorbance spectroscopy may be used to monitor the course of a reaction, provided a reagent or product shows distinctive absorbance in the UV spectral region. The rate of reactant consumption and/or product formation may be abstracted from the change of absorbance over time (by application of Beer's Law), again leading to classification as an integral technique. Due to the spectral region utilized, UV-vis techniques are more commonly utilized on inorganic or organometallic systems than on purely organic reactions, and examples include exploration of the samarium Barbier reaction.
Reaction calorimetry
Calorimetry may be used to monitor the course of a reaction, since the instantaneous heat flux of the reaction, which is directly related to the enthalpy change for the reaction, is monitored. Reaction calorimetry may be classified as a differential technique since the primary data collected are proportional to rate vs. time. From these data, the starting material or product concentration over time may be obtained by simply taking the integral of a polynomial fit to the experimental curve.
While reaction calorimetry is less frequently employed than a number of other techniques, it has found use as an effective tool for catalyst screening. Reaction calorimetry has also been applied as an efficient method for mechanistic study of individual reactions including the prolinate-catalyzed α-amination of aldehydes and the palladium catalyzed Buchwald-Hartwig amination reaction.
Further techniques
While gas chromatography, HPLC, and mass spectrometry are all excellent techniques for distinguishing mixtures of compounds (and sometimes even enantiomers), the time resolution of these measurements is less precise than that of the techniques described above. Regardless, these techniques have still seen use, such as in the investigation of the Heck reaction, where the heterogeneous nature of the reaction precluded utilization of the techniques described above, and in the study of SOMO activation by organocatalysts. Despite their shortcomings, these techniques may serve as excellent calibration methods.
Data manipulation and presentation
Reaction progress data may often most simply be presented as a plot of substrate concentration ([A]t) vs. time (t) or fraction conversion (F) vs. time (t). The latter requires minor algebraic manipulation to convert concentration/absorbance values to fractional conversion (F), by:
F = ([A]0 − [A]t) / [A]0
where [A]0 is the amount, absorbance, or concentration of substrate initially present and [A]t is the amount, absorbance, or concentration of that reagent at time, t. Normalizing data to fractional conversion may be particularly helpful as it allows multiple reactions run with different absolute amounts or concentrations to be compared on the same plot.
Data may also commonly be presented as a plot of reaction rate (v) vs. time (t). Again, simple algebraic manipulation is required; for example, calorimetric experiments give:
v = q / (V ΔH)
where q is the instantaneous heat transfer, ΔH is the known enthalpy change of the reaction, and V is the reaction volume.
Data from reaction progress kinetics experiments are also often presented via a rate (v) vs. substrate concentration ([S]) plot. This requires obtaining and combining both the [S] vs. t and the v vs. t plots described above (note that one may be obtained from the other by simple differentiation or integration.) The combination leads to a standard set of curves in which reaction progress is read from right to left along the x-axis and reaction rate is read from bottom to top along the y-axis. While these plots often provide a visually compelling demonstration of basic kinetic trends, differential methods are generally superior for extracting numerical rate constants. (see below)
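A minimal sketch of this manipulation in Python (synthetic concentration–time data stand in for experimental measurements; a simple polynomial fit is used for smoothing and differentiation):

import numpy as np

# hypothetical substrate concentrations [A] (M) sampled over time (s)
t = np.linspace(0.0, 3600.0, 50)
A = 0.10 * np.exp(-0.0012 * t)                 # stand-in data for a decaying substrate

# integral data -> fractional conversion
F = (A[0] - A) / A[0]

# fit a polynomial to [A](t) and differentiate it to obtain the rate v = -d[A]/dt
p = np.polynomial.Polynomial.fit(t, A, deg=6)
rate = -p.deriv()(t)

# the rate vs. substrate concentration plot is then (A, rate), read right to left
for conc, v in zip(A[::10], rate[::10]):
    print(f"[A] = {conc:.4f} M    v = {v:.3e} M/s    conversion = {1 - conc / A[0]:.2f}")

The same arrays can equally well be produced in the opposite direction, integrating differential (calorimetric) data to recover concentration profiles.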
Catalytic kinetics and catalyst resting state
In catalytic kinetics, two basic approximations are useful (in different circumstances) to describe the behavior of many systems. The situations in which the pre-equilibrium and steady-state approximations are valid can often be distinguished by reaction progress kinetic analysis, and the two situations are closely related to the resting state of the catalyst.
The steady-state approximation
Under steady-state conditions, the catalyst and substrate undergo reversible association followed by a relatively rapid consumption of the catalyst–substrate complex (by both forward reactions to product and reverse reactions to unbound catalyst). The steady-state approximation holds that the concentration of the catalyst-substrate complex is not changing over time; the total concentration of this complex remains low as it is whisked away almost immediately after formation. The numerator of a steady-state rate law contains all of the rate constants and species required to go from starting material to product, while the denominator consists of a sum of terms describing the relative rates of the forward and reverse reactions consuming the steady-state intermediate. For the simplest case where one substrate goes to one product through a single intermediate:
d[P]/dt = k1k2[A][Cat]total / (k−1 + k2)
In a slightly more complex situation where two substrates bind in sequence followed by product release:
d[P]/dt = k1k2[A][B][Cat]total / (k−1 + k2[B])
Increasingly complex systems can be described simply with the algorithm described in this reference.
In the case of the steady-state conditions described above, the catalyst resting state is the unbound form (because the substrate-bound intermediate is, by definition, only present at a minimal concentration.)
The pre-equilibrium approximation
Under pre-equilibrium conditions, the catalyst and substrate undergo rapid and reversible association prior to a relatively slow step leading to product formation and release. Under these conditions, the system can be described by a "one-plus" rate law where the numerator consists of all rate constants and species required to go from starting material to product, and the denominator consists of a sum of terms describing each of the states in which the catalyst exists (and 1 corresponds to the free catalyst). For the simplest case where one substrate goes to one product through a single intermediate:
d[P]/dt = K1k2[A][Cat]total / (1 + K1[A])
In the slightly more complex situation where two substrates bind in sequence followed by product release:
d[P]/dt = K1k2[A][B][Cat]total / (1 + K1[A])
In the case of the simple pre-equilibrium conditions described above, the catalyst resting state is either entirely or partially (depending on the magnitude of the equilibrium constant) the substrate bound complex.
Saturation kinetics
Saturation conditions can be viewed as a special case of pre-equilibrium conditions. At the concentration of substrate examined, formation of the catalyst-substrate complex is rapid and essentially irreversible. The catalyst resting state consists entirely of the bound complex, and [A] is no longer present in the rate law; changing [A] will have no effect on reaction rate because the catalyst is already completely bound and reacting as rapidly as k2 allows. The simplest case of saturation kinetics is the well-studied Michaelis-Menten model for enzyme kinetics.
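A short numerical sketch of the two limiting rate laws (using the two-substrate forms reconstructed above; all parameter values are arbitrary illustrations) shows how the pre-equilibrium "one-plus" expression levels off to saturation, zero-order in [A], once K1[A] >> 1:

def rate_steady_state(A, B, cat_total, k1, k_1, k2):
    # steady-state form: the denominator sums the processes consuming the intermediate
    return k1 * k2 * A * B * cat_total / (k_1 + k2 * B)

def rate_pre_equilibrium(A, B, cat_total, K1, k2):
    # "one-plus" form: the denominator enumerates catalyst states (free + substrate-bound)
    return K1 * k2 * A * B * cat_total / (1.0 + K1 * A)

cat = 0.001  # M
for A in (0.01, 0.1, 1.0, 10.0):
    v_ss = rate_steady_state(A, B=0.5, cat_total=cat, k1=10.0, k_1=1.0, k2=0.2)
    v_pe = rate_pre_equilibrium(A, B=0.5, cat_total=cat, K1=100.0, k2=0.2)
    print(f"[A] = {A:6.2f} M   steady-state v = {v_ss:.2e}   pre-equilibrium v = {v_pe:.2e}")
# the pre-equilibrium rate levels off (saturation in [A]); the steady-state form stays first order in [A]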
Changes in catalyst resting state
While a reaction may exhibit one set of kinetic behavior at early conversion, that behavior may change due to:
changes in catalyst resting state influenced by changing substrate concentrations
multiple or changing mechanisms influenced by substrate or product concentrations
catalyst activation (an initiation period)
product inhibition
irreversible (or reversible) catalyst death
In the case of saturation kinetics described above, provided that [A] is not present in a large excess relative to [B], saturation conditions will only apply at the beginning of the reaction. As the substrate is consumed, the concentration decreases and eventually [A] is no longer sufficient to completely overwhelm [Cat]. This is manifested by a gradual change in rate from 0-order to some higher (i.e. 1st, 2nd, etc.) order in [A]. This can also be described as a change in catalyst resting state from the bound form to the unbound form over the course of the reaction.
In addition to simply slowing the reaction, a change in catalyst resting state over the course of the reaction may result in competing paths or processes. Multiple mechanisms may be present to access the product, in which case the order in catalyst or substrate may change depending on the conditions or point in the reaction. A particularly useful probe for changes in reaction mechanism involves examination of the normalized reaction rate vs. catalyst loading at multiple, fixed conversion points. Note that the normalized reaction rate:
k =
adjusts for the consumption of substrate over the course of the reaction, so only rate changes due to catalyst loading will be observed. A linear dependence on catalyst loading for a given conversion is indicative of a first order dependence on catalyst at that conversion, and one can similarly imagine the non-linear plots resulting from higher order dependence. Changes in the linearity or non-linearity from one set of conversion points to another are indicative of changes in the dependence on catalyst over the course of the reaction. Conversely, changes in the linearity or non-linearity of regions of the plot conserved over multiple conversion points (i.e. at 30, 50, and 70%) are indicative of a change in the dependence on catalyst based on the absolute catalyst concentration.
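This probe can be sketched numerically as follows (a toy rate law that is first order in both catalyst and substrate is assumed, and the rate is normalized by the remaining substrate concentration on that assumption; all values are hypothetical):

import numpy as np

def rate_at_conversion(cat, conversion, A0=0.10, k=0.5):
    """Toy rate law v = k*[cat]*[A], evaluated at a fixed fraction conversion."""
    A = A0 * (1.0 - conversion)
    return k * cat * A

loadings = np.array([0.001, 0.002, 0.005, 0.010])       # catalyst concentrations (M)
for conv in (0.3, 0.5, 0.7):
    v = rate_at_conversion(loadings, conv)
    k_norm = v / (0.10 * (1.0 - conv))                  # divide out the remaining substrate
    slope = np.polyfit(loadings, k_norm, 1)[0]
    print(f"conversion {conv:.0%}: normalized rates {np.round(k_norm, 4)}, slope {slope:.2f}")
# a linear trend of the normalized rate vs. loading at every conversion point
# is consistent with a first-order dependence on catalyst throughout the reaction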
Catalyst interactions with multiple components of a reaction mixture can lead to a complex kinetic dependence. While off-cycle catalyst-substrate or catalyst-product interactions are generally considered "poisonous" to the system (certainly the case in the event of irreversible complexation) cases do exist in which the off-cycle species actually protects the catalyst from permanent deactivation.
In either case, it is often essential to understand the role of the catalyst resting state.
Same-excess experiments
The variable parameter of greatest interest in reaction progress kinetic analysis is the excess (e) of one substrate over another, given in units of molarity. The initial concentrations of two species in a reaction may be defined by:
[B]0 = [A]0 + e
and, assuming a one-to-one reaction stoichiometry, that excess of one substrate over the other is quantitatively preserved over the course of the entire reaction such that:
[B]t = [A]t + e
A similar set of relationships can be constructed for reactions with higher order stoichiometry, in which case the excess varies predictably over the course of the reaction. While e may be any value (positive, negative, or zero), generally positive or negative values smaller in magnitude than one equivalent of substrate are used in reaction progress kinetic analysis. (One might note that pseudo-zero-order kinetics uses excess values much greater in magnitude than one equivalent of substrate.)
Defining the parameter of excess (e) allows for the construction of same-excess experiments in which two or more runs of a kinetic experiment with different initial concentrations, but the same-excess allow one to artificially enter the reaction at any point. These experiments are critical for RPKA of catalytic reactions, as they enable one to probe a number of mechanistic possibilities including catalyst activation (induction periods), catalyst deactivation, and product inhibition described in further detail below.
Determining catalyst turnover frequency
Prior to further mechanistic investigation, it is important to determine the kinetic dependence of the reaction of interest on the catalyst. The turnover frequency (TOF) of the catalyst can be expressed as the reaction rate normalized to the concentration of catalyst:
TOF = v / [Cat]total
This TOF is determined by running any two or more same-excess experiments in which the absolute catalyst concentration is varied. Because the catalyst concentration is constant over the course of the reaction, the resulting plots are normalized by an unchanging value. If the resulting plots overlay perfectly, then the reaction is, in fact, first-order in catalyst. If the reaction fails to overlay, higher-order processes are at work and require a more detailed analysis than described here. It is also worth noting that the normalization-overlay manipulation described here is only one approach for interpretation of the raw data. Equally valid results may be obtained by fitting the observed kinetic behavior to simulated rate laws.
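A sketch of the corresponding normalization-overlay test (simulated data for a toy reaction that is first order in both substrate and catalyst stand in for two experiments at different loadings; the constants are invented):

import numpy as np

def progress_curve(cat, A0=0.10, k=2.0, t_end=3000.0, n=300):
    """[A](t) and rate for a toy reaction that is first order in substrate and catalyst."""
    t = np.linspace(0.0, t_end, n)
    A = A0 * np.exp(-k * cat * t)
    return A, k * cat * A

for cat in (0.001, 0.002):
    A, rate = progress_curve(cat)
    tof = rate / cat                       # rate normalized by the catalyst concentration
    for target in (0.08, 0.05, 0.02):
        i = np.argmin(np.abs(A - target))
        print(f"[cat] = {cat}: at [A] = {A[i]:.3f} M, normalized rate = {tof[i]:.4f} s^-1")
# the normalized curves coincide at equal [A] for both loadings -> first order in catalyst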
Exploring catalyst activation and deactivation
As described above, same-excess experiments are conducted with two or more experiments holding the excess, (e) constant while changing the absolute concentrations of the substrates (in this case, the catalyst is also treated as a substrate.) Note that this construction causes the number of equivalents and therefore the mole percentage of each reagent/catalyst to differ between reactions. These experiments enable one to artificially "enter" the reaction at any point, as the initial concentrations of one experiment (the intercepting reaction) are chosen to map directly onto the anticipated concentrations at some intermediate time, t, in another (the parent reaction). One would expect the reaction progress, described by the rate vs. substrate concentration plots detailed above, to map directly onto each other from that interception point onward. This will hold true, however, only if the rate of the reaction is not altered by changes to the active substrate/catalyst concentration (such as by catalyst activation, catalyst deactivation, or product inhibition) before that interception.
A perfect overlay of multiple experiments with the same-excess but different initial substrate loadings suggests that no changes in the active substrate/catalyst concentration occur over the course of the reaction. The failure of the plots to overlay is generally indicative of catalyst activation, deactivation, or product inhibition under the reaction conditions. These cases may be distinguished by the position of the reaction progress curves relative to each other. Intercepting reactions lying below (slower rates at the same substrate concentration) the parent reactions on the rate vs. substrate concentration plot, are indicative of catalyst activation under reaction conditions. Intercepting reactions lying above (faster rates at the same substrate concentration) the parent reactions on the rate vs. substrate concentration plot, are indicative of catalyst deactivation under reaction conditions; further experimentation is necessary to distinguish product inhibition from other forms of catalyst death.
One key difference between the intercepting reaction and the parent reaction described above is the presence of some amount of product in the parent reaction at the interception point. Product inhibition has long been known to influence catalyst efficiency of many systems, and in the case of same-excess experiments, it prevents the intercepting and parent reactions from overlaying. While same-excess experiments as described above cannot attribute catalyst deactivation to any particular cause, product inhibition can be probed by further experiments in which some initial amount of product is added to the intercepting reaction (designed to mimic the amount of product expected to be present in the parent reaction at the same substrate concentration). A perfect overlay of the rate vs. substrate concentration plots under same-excess-same product conditions indicates that product inhibition does occur under the reaction conditions used. While the failure of the rate vs. substrate concentration plots to overlay under same-excess-same product conditions does not preclude product inhibition, it does, at least, indicate that other catalyst deactivation paths must also be active.
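The logic of these intercepting experiments can be illustrated with a small simulation (a toy bimolecular rate law with an optional product-inhibition term; none of the constants correspond to a real system):

import numpy as np
from scipy.integrate import odeint

def rhs(y, t, k, cat, Ki):
    A, B, P = y
    v = k * cat * A * B / (1.0 + Ki * P)        # toy rate law with optional product inhibition
    return [-v, -v, v]

def run(A0, B0, k=50.0, cat=0.001, Ki=50.0, t_end=20000.0):
    t = np.linspace(0.0, t_end, 2000)
    A, B, P = odeint(rhs, [A0, B0, 0.0], t, args=(k, cat, Ki)).T
    return A, k * cat * A * B / (1.0 + Ki * P)

A1, v1 = run(A0=0.10, B0=0.12)    # "parent" run, excess e = 0.02 M
A2, v2 = run(A0=0.05, B0=0.07)    # "intercepting" run: same excess, but no product present yet

for target in (0.04, 0.03, 0.02):
    i1, i2 = np.argmin(np.abs(A1 - target)), np.argmin(np.abs(A2 - target))
    print(f"[A] ~ {target} M: parent rate {v1[i1]:.2e}, intercepting rate {v2[i2]:.2e}")
# with Ki > 0 the intercepting run is faster at equal [A] (the curves fail to overlay),
# as expected for product inhibition; setting Ki = 0 makes the two runs overlay

Adding an initial amount of product to the intercepting run, as described below, restores the overlay when product inhibition is the only deactivation pathway.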
Same-excess experiments probing catalyst deactivation and product inhibition are among the most widely used applications of reaction progress kinetic analysis. Among the numerous examples in the literature, some include investigation of the amino alcohol-catalyzed zinc alkylation of aldehydes, the amido-thiourea catalyzed asymmetric Strecker synthesis of unnatural amino acids, and the SOMO-activation of organocatalysts.
Determining reaction stoichiometry
Differential methods for extracting rate constants
With the wealth of data available from monitoring reaction progress over time paired with the power of modern computing methods, it has become reasonably straightforward to numerically evaluate the rate law, mapping the integrated rate laws of simulated reaction paths onto a fit of reaction progress over time. Due to the principles of the propagation of error, rate constants and rate laws can be determined by these differential methods with significantly lower uncertainty than by the construction of graphical rate equations (above).
Different-excess experiments
While RPKA allows observation of rates over the course of the entire reaction, conducting only same-excess experiments does not provide sufficient information for determination of the corresponding rate constants. In order to construct enough independent relationships to solve for all of the unknown rate constants, it is necessary to examine systems with different-excess.
Consider again the simple example discussed above where the catalyst associates with substrate A, followed by reaction with B to form product P and free catalyst. Regardless of the approximation applied, multiple independent parameters (k2 and K1 in the case of pre-equilibrium; k1, k−1, and k2 in the case of steady-state) are required to define the system. While one could imagine constructing multiple equations to describe the unknowns at different concentrations, when the data are obtained from a same-excess experiment, [A] and [B] are not independent:
e = [B] − [A]
Multiple experiments using different values of e are necessary to establish multiple independent equations defining the multiple independent rate constants in terms of experimental rates and concentrations. Non-linear least squares analysis may then be employed to obtain best fit values of the unknown rate constants to those equations.
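A minimal sketch of such a fit (a toy two-substrate catalytic cycle integrated numerically; all rate constants, concentrations and the two different-excess runs are invented for illustration):

import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def simulate(k1, k_1, k2, A0, B0, cat, t):
    """[A](t) for the toy cycle Cat + A <-> I (k1, k-1), then I + B -> P + Cat (k2)."""
    def rhs(y, _t):
        A, B, I = y
        free_cat = cat - I
        return [-k1 * A * free_cat + k_1 * I,
                -k2 * I * B,
                k1 * A * free_cat - k_1 * I - k2 * I * B]
    return odeint(rhs, [A0, B0, 0.0], t)[:, 0]

t = np.linspace(0.0, 600.0, 60)
true = (4.0, 0.05, 3.0)                                 # "unknown" k1, k-1, k2 used to fake data
runs = [(0.10, 0.12), (0.10, 0.15)]                     # two different-excess experiments
data = [simulate(*true, A0, B0, 0.001, t) for A0, B0 in runs]

def residuals(p):
    return np.concatenate([simulate(*p, A0, B0, 0.001, t) - d
                           for (A0, B0), d in zip(runs, data)])

fit = least_squares(residuals, x0=[1.0, 0.01, 1.0], bounds=(0.0, np.inf))
print("fitted k1, k-1, k2:", np.round(fit.x, 3))
# note: individual constants can be strongly correlated (only certain combinations may be
# well determined); parameter identifiability should always be checked alongside the fit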
Graphical rate laws
Kineticists have historically relied on linearization of rate data to extrapolate rate constants, perhaps best demonstrated by the widespread use of the standard Lineweaver–Burk linearization of the Michaelis–Menten equation. Linearization techniques were of particular importance before the advent of computing techniques capable of fitting complex curves, and they remain a staple in kinetics due to their intuitively simple presentation. It is important to note that linearization techniques should NOT be used to extract numerical rate constants as they introduce a large degree of error relative to alternative numerical techniques. Graphical rate laws do, however, maintain that intuitive presentation of linearized data, such that visual inspection of the plot can provide mechanistic insight regarding the reaction at hand. The basis for a graphical rate law rests on the rate (v) vs. substrate concentration ([S]) plots discussed above. For example, in the simple cycle discussed with regard to different-excess experiments, a plot of v/[A] vs. [B] and its twin v/[B] vs. [A] can provide intuitive insight about the order of each of the reagents. If plots of v/[A] vs. [B] overlay for multiple experiments with different-excess, the data are consistent with a first-order dependence on [A]. The same could be said for a plot of v/[B] vs. [A]; overlay is consistent with a first-order dependence on [B]. Non-overlaying results of these graphical rate laws are possible and are indicative of higher order dependence on the substrates probed. Blackmond has proposed presenting the results of different-excess experiments with a series of graphical rate equations (that she presents in a flow-chart adapted here), but it is important to note that her proposed method is only one of many possible methods to display the kinetic relationship. Furthermore, while the presentation of graphical rate laws may at times be considered a visually simplified way to present complex kinetic data, fitting the raw kinetic data for analysis by differential or other rigorous numerical methods is necessary to extract accurate and quantitative rate constants and reaction orders.
Reaction stoichiometry and mechanism
It is important to note that even while kinetic analysis is a powerful tool for determining the stoichiometry of the turn-over limiting transition state relative to the ground state, it cannot answer all mechanistic questions. It is possible for two mechanisms to be kinetically indistinguishable, especially under catalytic conditions. For any thorough mechanistic evaluation it is necessary to conduct kinetic analysis of both the catalytic process and its individual steps (when possible) in concert with other forms of analysis such as evaluation of linear free energy relationships, isotope effect studies, computational analysis, or any number of alternative approaches. Finally, it is important to note that no mechanistic hypothesis can ever be proven; alternative mechanistic hypothesis can only be disproven. It is, therefore, essential to conduct any investigation in a hypothesis-driven manner. Only by experimentally disproving reasonable alternatives can the support for a given hypothesis be strengthened.
See also
Chemical kinetics
Enzyme kinetics
Hill equation (biochemistry)
Langmuir adsorption model
Michaelis-Menten kinetics
Monod equation
Rate equation (chemistry)
Reaction mechanism
Steady state (chemistry)
References
Chemical kinetics | Reaction progress kinetic analysis | [
"Chemistry"
] | 5,029 | [
"Chemical reaction engineering",
"Chemical kinetics"
] |
47,833,599 | https://en.wikipedia.org/wiki/Lead-tin%20yellow | Lead-tin yellow is a yellow pigment, of historical importance in oil painting, sometimes called the "Yellow of the Old Masters" because of the frequency with which it was used by those famous painters.
Nomenclature
The name lead-tin yellow is a modern label. During the thirteenth to eighteenth centuries, when it was in widest use, it was known by a variety of names. In Italy, it was giallorino or giallolino. In other countries of Europe it went by various local names, among them massicot and, in English, general. All of these names were often applied to other yellow pigments as well as lead-tin yellow.
Composition
Lead-tin yellow historically occurred in two varieties. The first and more common one, today known as "Type I", was a lead stannate, an oxide of lead and tin with the chemical formula Pb2SnO4. The second, "Type II", was a silicate of lead and tin. Lead-tin yellow was produced by heating a powder mixture of lead oxide and tin oxide to about 900 °C. In "Type II" the mixture also contained quartz. Its hue is a rather saturated yellow. The pigment is opaque and lightfast. As a type of lead paint, it presents the hazard of lead poisoning if ingested, inhaled, or contacted.
History
The origin of lead-tin yellow can be dated back to at least the thirteenth century when Type II was applied in frescos, perhaps having been discovered as a by-product of crystal glass production. Until the eighteenth century, Type I was the standard yellow used in oil painting.
Lead-tin yellow was widely employed in the Renaissance by painters such as Titian (Bacchus and Ariadne), Bellini (The Feast of the Gods) and Raphael (Sistine Madonna), and during the Baroque period by Rembrandt (Belshazzar's Feast), Vermeer (The Milkmaid), and Velázquez (Apollo in the Forge of Vulcan).
In the early eighteenth century, lead-tin yellow was almost completely replaced in use by Naples yellow. After 1750, no paintings seem to have been made containing the pigment, and its existence was eventually forgotten for reasons that are not entirely clear. Lead-tin yellow was rediscovered in 1941 by the German scientist Richard Jakobi, then-director of the Doerner Institute. Jakobi called it Blei-Zinn-Gelb; the English "lead-tin yellow" is a literal translation of the German term.
After 1967, Hermann Kühn in a series of studies proved its general use in the traditional oil technique of earlier centuries, coining the distinction between the Type I and Type II varieties.
Conjecture about disappearance
One prominent hypothesis for its disappearance from collective memory is confusion with other yellow pigments like massicot. Lead-tin yellow was sometimes called massicot, although it is a different substance. Prior to the development of modern analytical tools allowing for microscopic testing of paint, it was not always possible for art historians to distinguish between similar pigments, meaning that most yellow pigment containing lead was generally labeled Naples yellow.
Increased use of other pigments such as the less-opaque Naples yellow may also have displaced lead-tin yellow in common use. During the nineteenth century, after lead-tin yellow had vanished from common use, newer inorganic yellow pigments came into use, such as chrome yellow (lead chromate), cadmium sulfide, and cobalt yellow.
See also
List of inorganic pigments
References
Further reading
Nicholas John Eastaugh, Lead tin yellow: its history, manufacture, colour and structure. University of London, 1988.
Inorganic pigments
Lead compounds
Stannates | Lead-tin yellow | [
"Chemistry"
] | 758 | [
"Inorganic pigments",
"Inorganic compounds"
] |
47,843,705 | https://en.wikipedia.org/wiki/Korsunsky%20Work-of-Indentation%20Approach | The Korsunsky work-of-indentation approach is a method of extracting values of hardness and stiffness for a small volume of material from indentation test data, first developed by Alexander M. Korsunsky.
Instead of relying on measurements or assumptions pertaining to the observed area of contact between indenter and sample, the method uses the load-displacement data registered in the Continuously Recorded Indentation Testing (CRIT) that is widely applied in nanoindentation experiments. In particular, the Korsunsky method re-defines hardness and expresses it in terms of the energy (work) associated with indenting the surface of a material by the probe. The work-of-indentation used in the analysis may refer to the total, elastic or dissipated energy, depending on the formulation. The approach can be used in the analysis of thin coatings, nano-multi-layers, nanoscale features.
The original application of the approach was developed for the problem of finding the composite hardness of a coated system. The composite hardness is known to vary depending on the applied load and/or indentation depth. In the Korsunsky work-of-indentation approach, the composite hardness is given by a simple expression (the "knee function") of the relative indentation depth (the indentation depth normalized with respect to the coating thickness), and the substrate and coating hardness. The function contains a single fitting parameter, which describes a wide range of composite and indenter properties such as coating brittleness, interfacial strength, indenter geometry, etc. This model of hardness determination has been verified by numerous researchers investigating different coated systems.
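A minimal numerical sketch of the knee function (the functional form used here, Hc = Hs + (Hf − Hs)/(1 + k·β²) with β the relative indentation depth, is the one commonly quoted for the Korsunsky model; the hardness values are purely illustrative):

def composite_hardness(beta, H_s, H_f, k):
    """Korsunsky 'knee function': composite hardness vs. relative indentation depth beta."""
    return H_s + (H_f - H_s) / (1.0 + k * beta ** 2)

# illustrative values: hard coating (20 GPa) on a softer substrate (2 GPa), k = 5
for beta in (0.05, 0.1, 0.3, 1.0, 3.0):
    H_c = composite_hardness(beta, H_s=2.0, H_f=20.0, k=5.0)
    print(f"relative depth {beta:4.2f} -> composite hardness {H_c:5.2f} GPa")
# shallow indents recover the coating hardness; deep indents tend toward the substrate value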
This approach has undergone numerous modifications since its inception. Most recently, Jha et al. found that the Korsunsky work-of-indentation approach measures the nominal hardness of a material, which is defined as the maximum load divided by the area of maximum contact. The nominal hardness of a material is different from its true hardness (determined by the Oliver-Pharr method), but the two concepts are interrelated. Jha et al. derived an expression that determines the true hardness of a material from its nominal counterpart. In doing so, they employed a dimensionless energy-based parameter that relates the contact depth to the maximum depth of penetration. For a soft material, the difference between the contact depth and the maximum depth of penetration is small, and hence its nominal and true hardness values are practically the same. For a harder material the difference is larger, and the two types of hardness diverge. The model proposed by Jha et al. in its current form is applicable when the indenter is ideally sharp or when the maximum depth of penetration is sufficiently large compared to the indenter tip radius. The advantage of the modified work-of-indentation approach is that it does not require the computation of contact area, which is the main limitation of the conventional Oliver-Pharr method. The approach requires further modification in order to incorporate the effect of bluntness at the tip of an indenter on the measured hardness of a material.
References
Hardness tests | Korsunsky Work-of-Indentation Approach | [
"Materials_science"
] | 628 | [
"Hardness tests",
"Materials testing"
] |
47,845,063 | https://en.wikipedia.org/wiki/Stochastic%20block%20model | The stochastic block model is a generative model for random graphs. This model tends to produce graphs containing communities, subsets of nodes characterized by being connected with one another with particular edge densities. For example, edges may be more common within communities than between communities. Its mathematical formulation was first introduced in 1983 in the field of social network analysis by Paul W. Holland et al. The stochastic block model is important in statistics, machine learning, and network science, where it serves as a useful benchmark for the task of recovering community structure in graph data.
Definition
The stochastic block model takes the following parameters:
The number of vertices n;
a partition of the vertex set {1, ..., n} into disjoint subsets C1, ..., Cr, called communities;
a symmetric r × r matrix P of edge probabilities.
The edge set is then sampled at random as follows: any two vertices u ∈ Ci and v ∈ Cj are connected by an edge with probability Pij. An example problem is: given a graph with n vertices, where the edges are sampled as described, recover the communities C1, ..., Cr.
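A minimal sketch of sampling from this model (written here in Python with numpy; the community sizes and probabilities are arbitrary illustrations):

import numpy as np

def sample_sbm(sizes, P, seed=0):
    """Sample an undirected, loop-free graph from a stochastic block model.

    sizes -- list of community sizes, e.g. [50, 50]
    P     -- symmetric matrix of between-community edge probabilities
    Returns the adjacency matrix and the community label of every vertex.
    """
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(len(sizes)), sizes)
    probs = P[np.ix_(labels, labels)]                    # per-pair edge probabilities
    upper = np.triu(rng.random(probs.shape) < probs, k=1)
    A = (upper | upper.T).astype(int)                    # symmetric adjacency, no self-loops
    return A, labels

# assortative planted partition example: p = 0.3 within, q = 0.05 between communities
A, labels = sample_sbm([50, 50], np.array([[0.30, 0.05], [0.05, 0.30]]))
print("vertices:", A.shape[0], "  edges:", A.sum() // 2)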
Special cases
If the probability matrix is a constant, in the sense that Pij = p for all i, j, then the result is the Erdős–Rényi model G(n, p). This case is degenerate—the partition into communities becomes irrelevant—but it illustrates a close relationship to the Erdős–Rényi model.
The planted partition model is the special case in which the values of the probability matrix are a constant p on the diagonal and another constant q off the diagonal. Thus two vertices within the same community share an edge with probability p, while two vertices in different communities share an edge with probability q. Sometimes it is this restricted model that is called the stochastic block model. The case where p > q is called an assortative model, while the case p < q is called disassortative.
Returning to the general stochastic block model, a model is called strongly assortative if Pii > Pjk whenever j ≠ k: all diagonal entries dominate all off-diagonal entries. A model is called weakly assortative if Pii > Pij whenever i ≠ j: each diagonal entry is only required to dominate the rest of its own row and column. Disassortative forms of this terminology exist, by reversing all inequalities. For some algorithms, recovery might be easier for block models with assortative or disassortative conditions of this form.
Typical statistical tasks
Much of the literature on algorithmic community detection addresses three statistical tasks: detection, partial recovery, and exact recovery.
Detection
The goal of detection algorithms is simply to determine, given a sampled graph, whether the graph has latent community structure. More precisely, a graph might be generated, with some known prior probability, from a known stochastic block model, and otherwise from a similar Erdos-Renyi model. The algorithmic task is to correctly identify which of these two underlying models generated the graph.
Partial recovery
In partial recovery, the goal is to approximately determine the latent partition into communities, in the sense of finding a partition that is correlated with the true partition significantly better than a random guess.
Exact recovery
In exact recovery, the goal is to recover the latent partition into communities exactly. The community sizes and probability matrix may be known or unknown.
Statistical lower bounds and threshold behavior
Stochastic block models exhibit a sharp threshold effect reminiscent of percolation thresholds. Suppose that we allow the number of vertices n to grow, keeping the community sizes in fixed proportions. If the probability matrix remains fixed, tasks such as partial and exact recovery become feasible for all non-degenerate parameter settings. However, if we scale down the probability matrix at a suitable rate as n increases, we observe a sharp phase transition: for certain settings of the parameters, it will become possible to achieve recovery with probability tending to 1, whereas on the opposite side of the parameter threshold, the probability of recovery tends to 0 no matter what algorithm is used.
For partial recovery, the appropriate scaling is to take Pij = Cij/n for a fixed matrix C, resulting in graphs of constant average degree. In the case of two equal-sized communities, in the assortative planted partition model with edge probabilities p = a/n within communities and q = b/n between them,
partial recovery is feasible with probability tending to 1 whenever (a − b)² > 2(a + b), whereas any estimator fails partial recovery with probability tending to 1 whenever (a − b)² < 2(a + b).
For exact recovery, the appropriate scaling is to take $P_{ij} = \tilde{P}_{ij} \ln n / n$, resulting in graphs of logarithmic average degree. Here a similar threshold exists: for the assortative planted partition model with equal-sized communities and probabilities $a \ln n / n$ within and $b \ln n / n$ between communities, the threshold lies at $\sqrt{a} - \sqrt{b} = \sqrt{2}$. In fact, the exact recovery threshold is known for the fully general stochastic block model.
Algorithms
In principle, exact recovery can be solved in its feasible range using maximum likelihood, but this amounts to solving a constrained or regularized cut problem such as minimum bisection that is typically NP-complete. Hence, no known efficient algorithms will correctly compute the maximum-likelihood estimate in the worst case.
However, a wide variety of algorithms perform well in the average case, and many high-probability performance guarantees have been proven for algorithms in both the partial and exact recovery settings. Successful algorithms include spectral clustering of the vertices, semidefinite programming, forms of belief propagation, and community detection among others.
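One simple spectral approach is sketched below for the case of two equal-sized communities, assuming an adjacency matrix `A` and planted labels such as those produced by the sampler above; splitting vertices by the sign of the eigenvector belonging to the second-largest adjacency eigenvalue is one standard heuristic, not any particular published algorithm.

```python
import numpy as np

def spectral_two_communities(A):
    """Estimate a two-community partition from the adjacency matrix A by the
    sign of the eigenvector of the second-largest eigenvalue of A."""
    A = np.asarray(A, dtype=float)
    eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
    v2 = eigvecs[:, -2]                    # eigenvector of the second-largest eigenvalue
    return (v2 > 0).astype(int)

def agreement(estimate, truth):
    """Fraction of correctly classified vertices, up to relabelling the two groups."""
    match = np.mean(estimate == truth)
    return max(match, 1.0 - match)

# Example usage with the sampler above:
# est = spectral_two_communities(A); print(agreement(est, labels))
```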
Variants
Several variants of the model exist. One minor tweak allocates vertices to communities randomly, according to a categorical distribution, rather than in a fixed partition. More significant variants include the degree-corrected stochastic block model, the hierarchical stochastic block model, the geometric block model, censored block model and the mixed-membership block model.
Topic models
The stochastic block model has been recognised as a topic model on bipartite networks. In a network of documents and words, the stochastic block model can identify topics: groups of words with a similar meaning.
Extensions to signed graphs
Signed graphs allow for both favorable and adverse relationships and serve as a common model choice for various data analysis applications, e.g., correlation clustering. The stochastic block model can be trivially extended to signed graphs by assigning both positive and negative edge weights or equivalently using a difference of adjacency matrices of two stochastic block models.
DARPA/MIT/AWS Graph Challenge: streaming stochastic block partition
GraphChallenge encourages community approaches to developing new solutions for analyzing graphs and sparse data derived from social media, sensor feeds, and scientific data to enable relationships between events to be discovered as they unfold in the field. Streaming stochastic block partition is one of the challenges since 2017.
Spectral clustering has demonstrated outstanding performance compared to the original and even improved base algorithm, matching its quality of clusters while being multiple orders of magnitude faster.
See also
blockmodeling
for generating benchmark networks with communities
References
Random graphs
Networks
Blockmodeling | Stochastic block model | [
"Mathematics"
] | 1,345 | [
"Mathematical relations",
"Graph theory",
"Random graphs"
] |
41,721,510 | https://en.wikipedia.org/wiki/Pseudocapacitance | Pseudocapacitance is the electrochemical storage of electricity in an electrochemical capacitor that occurs due to faradaic charge transfer originating from a very fast sequence of reversible faradaic redox, electrosorption or intercalation processes on the surface of suitable electrodes. Pseudocapacitance is accompanied by an electron charge-transfer between electrolyte and electrode coming from a de-solvated and adsorbed ion. One electron per charge unit is involved. The adsorbed ion has no chemical reaction with the atoms of the electrode (no chemical bonds arise) since only a charge-transfer takes place. Supercapacitors that rely primarily on pseudocapacitance are sometimes called pseudocapacitors.
Faradaic pseudocapacitance only occurs together with static double-layer capacitance. Pseudocapacitance and double-layer capacitance both contribute inseparably to the total capacitance value. The amount of pseudocapacitance depends on the surface area, material and structure of the electrodes. Pseudocapacitance may contribute more capacitance than double-layer capacitance for the same surface area by 100x.
The amount of electric charge stored in a pseudocapacitance is linearly proportional to the applied voltage. The unit of pseudocapacitance is the farad.
History
Development of the double layer and pseudocapacitance model see Double layer (interfacial)
Development of the electrochemical components see Supercapacitors
Redox reactions
Differences
Rechargeable batteries
Redox reactions in batteries with faradaic charge-transfer between an electrolyte and the surface of an electrode were characterized decades ago. These chemical processes are associated with chemical reactions of the electrode materials usually with attendant phase changes. Although these chemical processes are relatively reversible, battery charge/discharge cycles often irreversibly produce unreversed chemical reaction products of the reagents. Accordingly, the cycle-life of rechargeable batteries is usually limited. Further, the reaction products lower power density. Additionally, the chemical processes are relatively slow, extending charge/discharge times.
Electro-chemical capacitors
A fundamental difference between redox reactions in batteries and in electrochemical capacitors (supercapacitors) is that in the latter, the reactions are a very fast sequence of reversible processes with electron transfer without any phase changes of the electrode molecules. They do not involve making or breaking chemical bonds. The de-solvated atoms or ions contributing the pseudocapacitance simply cling to the atomic structure of the electrode and charges are distributed on surfaces by physical adsorption processes. Compared with batteries, supercapacitor faradaic processes are much faster and more stable over time, because they leave only traces of reaction products. Despite the reduced amount of these products, they cause capacitance degradation. This behavior is the essence of pseudocapacitance.
Pseudocapacitive processes lead to a charge-dependent, linear capacitive behavior, as well as the accomplishment of non-faradaic double-layer capacitance in contrast to batteries, which have a nearly charge-independent behavior. The amount of pseudocapacitance depends on the surface area, material and structure of the electrodes. The pseudocapacitance may exceed the value of double-layer capacitance for the same surface area by 100x.
Capacitance functionality
Applying a voltage at the capacitor terminals moves the polarized ions or charged atoms in the electrolyte to the opposite polarized electrode. Between the surfaces of the electrodes and the adjacent electrolyte an electric double-layer forms. One layer of ions on the electrode surface and the second layer of adjacent polarized and solvated ions in the electrolyte move to the opposite polarized electrode. The two ion layers are separated by a single layer of electrolyte molecules. Between the two layers, a static electric field forms that results in double-layer capacitance. Accompanied by the electric double-layer, some de-solvated electrolyte ions pervade the separating solvent layer and are adsorbed by the electrode's surface atoms. They are specifically adsorbed and deliver their charge to the electrode. In other words, the ions in the electrolyte within the Helmholtz double-layer also act as electron donors and transfer electrons to the electrode atoms, resulting in a faradaic current. This faradaic charge transfer, originated by a fast sequence of reversible redox reactions, electrosorptions or intercalation processes between electrolyte and the electrode surface is called pseudocapacitance.
Depending on the electrode's structure or surface material, pseudocapacitance can originate when specifically adsorbed ions pervade the double-layer, proceeding in several one-electron stages. The electrons involved in the faradaic processes are transferred to or from the electrode's valence-electron states (orbitals) and flow through the external circuit to the opposite electrode where a second double-layer with an equal number of opposite-charged ions forms. The electrons remain in the strongly ionized and electrode surface's "electron hungry" transition-metal ions and are not transferred to the adsorbed ions. This kind of pseudocapacitance has a linear function within narrow limits and is determined by the potential-dependent degree of surface coverage of the adsorbed anions. The storage capacity of the pseudocapacitance is limited by the finite quantity of reagent or of available surface.
Systems that give rise to pseudocapacitance:
Redox system: Ox + ze‾ ⇌ Red
Intercalation system: in ""
Electrosorption, underpotential deposition of metal adatoms or H: Mz+ + ze‾ + S ⇌ SM or H+ + e‾ + S ⇌ SH (S = surface lattice sites)
All three types of electrochemical processes have appeared in supercapacitors.
When discharging pseudocapacitance, the charge transfer is reversed and the ions or atoms leave the double-layer and spread throughout the electrolyte.
Materials
Electrodes' ability to produce pseudocapacitance strongly depends on the electrode materials' chemical affinity to the ions adsorbed on the electrode surface as well as on the electrode pore structure and dimension. Materials exhibiting redox behavior for use as pseudocapacitor electrodes are transition-metal oxides inserted by doping in the conductive electrode material such as active carbon, as well as conducting polymers such as polyaniline or derivatives of polythiophene covering the electrode material.
Transition metal oxides/sulfides
These materials provide high pseudocapacitance and were thoroughly studied by Conway. Many oxides of transition metals like ruthenium (RuO2), iridium (IrO2), iron (Fe3O4), manganese (MnO2) or sulfides such as titanium sulfide (TiS2) or their combinations generate faradaic electron–transferring reactions with low conducting resistance.
Ruthenium dioxide (RuO2) in combination with sulfuric acid (H2SO4) electrolyte provides one of the best examples of pseudocapacitance, with a charge/discharge over a window of about 1.2 V per electrode. Furthermore, the reversibility on these transition metal electrodes is excellent, with a cycle life of more than several hundred-thousand cycles. Pseudocapacitance originates from a coupled, reversible redox reaction with several oxidation steps with overlapping potential. The electrons mostly come from the electrode's valence orbitals. The electron transfer reaction is very fast and can be accompanied with high currents.
The electron transfer reaction takes place according to:
RuO2 + xH+ + xe‾ ⇌ RuO2−x(OH)x,
where 0 ≤ x ≤ 2.
During charge and discharge, H+ (protons) are incorporated into or removed from the crystal lattice, which generates storage of electrical energy without chemical transformation. The OH groups are deposited as a molecular layer on the electrode surface and remain in the region of the Helmholtz layer. Since the measurable voltage from the redox reaction is proportional to the charged state, the reaction behaves like a capacitor rather than a battery, whose voltage is largely independent of the state of charge.
Conducting polymers
Another type of material with a high amount of pseudocapacitance is electron-conducting polymers. Conductive polymer such as polyaniline, polythiophene, polypyrrole and polyacetylene have a lower reversibility of the redox processes involving faradaic charge transfer than transition metal oxides, and suffer from a limited stability during cycling. Such electrodes employ electrochemical doping or dedoping of the polymers with anions and cations. Highest capacitance and power density are achieved with a n/p-type polymer configuration, with one negatively charged (n-doped) and one positively charged (p-doped) electrode.
Structure
Pseudocapacitance may originate from the electrode structure, especially from the material pore size. The use of carbide-derived carbons (CDCs) or carbon nanotubes (CNTs) as electrodes provides a network of small pores formed by nanotube entanglement. These nanoporous materials have diameters in the range of <2 nm that can be referred to as intercalated pores. Solvated ions in the electrolyte are unable to enter these small pores, but de-solvated ions that have reduced their ion dimensions are able to enter, resulting in larger ionic packing density and increased charge storage. The tailored sizes of pores in nano-structured carbon electrodes can maximize ion confinement, increasing specific capacitance by faradaic adsorption treatment. Occupation of these pores by de-solvated ions from the electrolyte solution occurs according to (faradaic) intercalation.
Verification
Pseudocapacitance properties can be expressed in a cyclic voltammogram. For an ideal double-layer capacitor, the current flow is reversed immediately upon reversing the potential yielding a rectangular-shaped voltammogram, with a current independent of the electrode potential. For double-layer capacitors with resistive losses, the shape changes to a parallelogram. In faradaic electrodes the electrical charge stored in the capacitor is strongly dependent on the potential, therefore, the voltammetry characteristics deviate from the parallelogram due to a delay while reversing the potential, ultimately coming from kinetic charging processes.
Examples
Brezesinki et al. showed that mesoporous films of α-MoO3 have improved charge storage due to lithium ions inserting into the gaps of α-MoO3. They claim this intercalation pseudocapacitance takes place on the same timescale as redox pseudocapacitance and gives better charge-storage capacity without changing kinetics in mesoporous MoO3. This approach is promising for batteries with rapid charging ability, comparable to that of lithium batteries, and is promising for efficient energy materials.
Other groups have used vanadium oxide thin films on carbon nanotubes for pseudocapacitors. Kim et al. electrochemically deposited amorphous V2O5·H2O onto a carbon nanotube film. The three-dimensional structure of the carbon nanotubes substrate facilitates high specific lithium-ion capacitance and shows three times higher capacitance than vanadium oxide deposited on a typical Pt substrate. These studies demonstrate the capability of deposited oxides to effectively store charge in pseudocapacitors.
Conducting polymers, such as polypyrrole (PPy) and poly(3,4-ethylenedioxythiophene) (PEDOT), have tunable electronic conductivity and can achieve high doping levels with the proper counterion. A high-performing conducting polymer pseudocapacitor has high cycling stability after undergoing charge/discharge cycles. Successful approaches include embedding the redox polymer in a host phase (e.g. titanium carbide) for stability and depositing a carbonaceous shell onto the conducting polymer electrode. These techniques improve cyclability and stability of the pseudocapacitor device.
Applications
Pseudocapacitance is an important property in supercapacitors.
References
Literature
Capacitors | Pseudocapacitance | [
"Physics"
] | 2,535 | [
"Capacitance",
"Capacitors",
"Physical quantities"
] |
41,721,530 | https://en.wikipedia.org/wiki/Double-layer%20capacitance | Double-layer capacitance is the important characteristic of the electrical double layer which appears at the interface between a surface and a fluid (for example, between a conductive electrode and an adjacent liquid electrolyte). At this boundary two layers of electric charge with opposing polarity form, one at the surface of the electrode, and one in the electrolyte. These two layers, electrons on the electrode and ions in the electrolyte, are typically separated by a single layer of solvent molecules that adhere to the surface of the electrode and act like a dielectric in a conventional capacitor. The amount of charge stored in double-layer capacitor depends on the applied voltage.
The double-layer capacitance is the physical principle behind the electrostatic double-layer type of supercapacitors.
History
Development of the double layer and pseudocapacitance model see Double layer (interfacial)
Development of the electrochemical components see Supercapacitors
Capacitance
Helmholtz laid the theoretical foundations for understanding the double layer phenomenon. The formation of double layers is exploited in every electrochemical capacitor to store electrical energy.
Every capacitor has two electrodes, mechanically separated by a separator. These are electrically connected via the electrolyte, a mixture of positive and negative ions dissolved in a solvent such as water. Where the liquid electrolyte contacts the electrode's conductive metallic surface, an interface is formed which represents a common boundary between the two phases of matter. It is at this interface that the double layer effect occurs.
When a voltage is applied to the capacitor, two layers of polarized ions are generated at the electrode interfaces. One layer is within the solid electrode (at the surfaces of crystal grains from which it is made that are in contact with the electrolyte). The other layer, with opposite polarity, forms from dissolved and solvated ions distributed in the electrolyte that have moved towards the polarized electrode. These two layers of polarized ions are separated by a monolayer of solvent molecules. The molecular monolayer forms the inner Helmholtz plane (IHP). It adheres by physical adsorption on the electrode surface and separates the oppositely polarized ions from each other, forming a molecular dielectric.
The amount of charge in the electrode is matched by the magnitude of counter-charges in the outer Helmholtz plane (OHP). This is the area close to the IHP, in which the polarized electrolyte ions are collected. This separation of two layers of polarized ions through the double-layer stores electrical charges in the same way as in a conventional capacitor. The double-layer charge forms a static electric field in the molecular IHP layer of the solvent molecules that corresponds to the strength of the applied voltage.
The "thickness" of a charged layer in the metallic electrode, i.e., the average extension perpendicular to the surface, is about 0.1 nm, and mainly depends on the electron density because the atoms in solid electrodes are stationary. In the electrolyte, the thickness depends on the size of the solvent molecules and of the movement and concentration of ions in the solvent. It ranges from 0.1 to 10 nm as described by the Debye length. The sum of the thicknesses is the total thickness of a double layer.
The IHP's small thickness creates a strong electric field E over the separating solvent molecules. At a potential difference of, for example, U = 2 V and a molecular thickness of d = 0.4 nm, the electric field strength is
E = U/d = 2 V / 0.4 nm = 5000 kV/mm.
To compare this figure with values from other capacitor types requires an estimation for electrolytic capacitors, the capacitors with the thinnest dielectric among conventional capacitors. The voltage proof of aluminum oxide, the dielectric layer of aluminum electrolytic capacitors, is approximately 1.4 nm/V. For a 6.3 V capacitor therefore the layer is 8.8 nm. The electric field is 6.3 V/8.8 nm = 716 kV/mm, around 7 times lower than in the double-layer. The field strength of some 5000 kV/mm is unrealizable in conventional capacitors. No conventional dielectric material could prevent charge carrier breakthrough. In a double-layer capacitor the chemical stability of the solvent's molecular bonds prevents breakthrough.
The forces that cause the adhesion of solvent molecules in the IHP are physical forces rather than chemical bonds. Chemical bonds exist within the adsorbed molecules, but they are polarized.
The magnitude of the electric charge that can accumulate in the layers corresponds to the concentration of the adsorbed ions and the electrodes surface. Up to the electrolyte's decomposition voltage, this arrangement behaves like a capacitor in which the stored electrical charge is linearly dependent on the voltage.
The double-layer is like the dielectric layer in a conventional capacitor, but with the thickness of a single molecule. Using the early Helmholtz model to calculate the capacitance, the model predicts a constant differential capacitance C that is independent of the charge density and depends only on the dielectric constant ε and the charge layer separation d.
If the electrolyte solvent is water, then the influence of the high field strength creates a permittivity of 6 (instead of 80 without an applied electric field) and a layer separation of ca. 0.3 nm, so the Helmholtz model predicts a differential capacitance value of about 18 μF/cm2. This value can be used to calculate capacitance values using the standard formula for conventional plate capacitors if only the surface of the electrodes is known. This capacitance can be calculated with:
C = ε ε0 A / d.
The capacitance is greatest in components made from materials with a high permittivity ε, large electrode plate surface areas A and a small distance d between plates. Because activated carbon electrodes have a very high surface area and an extremely thin double-layer distance which is on the order of a few ångströms (0.3-0.8 nm), it is understandable why supercapacitors have the highest capacitance values among the capacitors (in the range of 10 to 40 μF/cm2).
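A small numerical check of the plate-capacitor estimate above, using the quoted permittivity of 6 and a layer separation of 0.3 nm; the variable names are illustrative.

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m
eps_r = 6          # relative permittivity of water in the strong double-layer field
d = 0.3e-9         # Helmholtz layer separation, m

c_per_area = eps_r * EPS0 / d              # capacitance per unit area, F/m^2
print(c_per_area * 1e6 / 1e4, "uF/cm^2")   # ~17.7 uF/cm^2, consistent with ~18 uF/cm^2
```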
In real produced supercapacitors with a high amount of double-layer capacitance the capacitance value depends first on electrode surface and DL distance. Parameters such as electrode material and structure, electrolyte mixture, and amount of pseudocapacitance also contribute to capacitance value.
Because an electrochemical capacitor is composed out of two electrodes, electric charge in the Helmholtz layer at one electrode is mirrored (with opposite polarity) in the second Helmholtz layer at the second electrode. Therefore, the total capacitance value of a double-layer capacitor is the result of two capacitors connected in series. If both electrodes have approximately the same capacitance value, as in symmetrical supercapacitors, the total value is roughly half that of one electrode.
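In symbols, the series combination of the two electrode capacitances $C_1$ and $C_2$ gives the total cell capacitance:
$$\frac{1}{C_\text{total}} = \frac{1}{C_1} + \frac{1}{C_2}
\quad\Longrightarrow\quad
C_\text{total} = \frac{C_1 C_2}{C_1 + C_2},$$
which reduces to $C/2$ when both electrodes have the same capacitance $C_1 = C_2 = C$.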
Literature
Double layer (surface science)
References
Capacitors | Double-layer capacitance | [
"Physics"
] | 1,489 | [
"Capacitance",
"Capacitors",
"Physical quantities"
] |
41,726,381 | https://en.wikipedia.org/wiki/Nanoprobing | Nanoprobing is method of extracting device electrical parameters through the use of nanoscale tungsten wires, used primarily in the semiconductor industry. The characterization of individual devices is instrumental to engineers and integrated circuit designers during initial product development and debug. It is commonly utilized in device failure analysis laboratories to aid with yield enhancement, quality and reliability issues and customer returns. Commercially available nanoprobing systems are integrated into either a vacuum-based scanning electron microscope (SEM) or atomic force microscope (AFM). Nanoprobing systems that are based on AFM technology are referred to as Atomic Force nanoProbers (AFP).
Principles and operation
AFM-based nanoprobers enable up to eight probe tips to be scanned to generate high resolution AFM topography images, as well as Conductive AFM, Scanning Capacitance, and Electrostatic Force Microscopy images. Conductive AFM provides pico-amp resolution to identify and localize electrical failures such as shorts, opens, resistive contacts and leakage paths, enabling accurate probe positioning for current-voltage measurements. AFM-based nanoprobers enable nanometer scale device defect localization and accurate transistor device characterization without the physical damage and electrical bias induced by high energy electron beam exposure.
For SEM-based nanoprobers, the ultra-high resolution of the microscopes that house the nanoprobing system allows the operator to navigate the probe tips with precise movement, letting the user see exactly where the tips will be landed, in real time. Existing nanoprobe needles or “probe tips” have a typical end-point radius ranging from 5 to 35 nm. The fine tips enable access to individual contact nodes of modern IC transistors. Navigation of the probe tips in SEM-based nanoprobers is typically controlled by precision piezoelectric manipulators. Typical systems have anywhere from 2 to 8 probe manipulators, with high-end tools having better than 5 nm of placement resolution in the X, Y & Z axes and a high-accuracy sample stage for navigation of the sample under test.
Application and capabilities for semiconductor devices
Common nanoprobing techniques include, but are not limited to:
General
DC transistor characterization (Id-Vg and Id-Vd Measurements)
Characterizing SRAM bitcells
BEOL Metal Resistance Measurements
AFM-based tools specific
Conductive Atomic Force Microscopy (CAFM)
Scanning Capacitance Microscopy (SCM)
Electrostatic Force Microscopy (EFM)
SEM-based tools specific
Electron-Beam Absorbed Current Imaging (EBAC)
Electron-Beam Induced Current (EBIC)
Electron Beam Induced Resistance Change (EBIRCH)
Challenges
Common issues that arise:
Nanoprobe manipulator stability
Live image resolution
Maintaining probe conductivity
Chamber/Surface contamination
References
External links
Conference proceedings of the ASM International Symposium for Testing and Failure Analysis (ISTFA)
IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA)
Technical papers on SEM-based nanoprober
SEM-based shuttle nanoprober
Mobile robot based nanoprober for SEM
Python scriptable nanoprober
Electronic engineering
Nanoelectronics
Semiconductor analysis | Nanoprobing | [
"Materials_science",
"Technology",
"Engineering"
] | 644 | [
"Computer engineering",
"Electronic engineering",
"Nanoelectronics",
"Nanotechnology",
"Electrical engineering"
] |
31,905,871 | https://en.wikipedia.org/wiki/S-Adenosylmethionine%20synthetase%20enzyme | S-Adenosylmethionine synthetase (), also known as methionine adenosyltransferase (MAT), is an enzyme that creates S-adenosylmethionine (also known as AdoMet, SAM or SAMe) by reacting methionine (a non-polar amino acid) and ATP (the basic currency of energy).
Function
AdoMet is a methyl donor for transmethylation. It gives away its methyl group and is also the propylamino donor in polyamine biosynthesis. S-adenosylmethionine synthesis can be considered the rate-limiting step of the methionine cycle.
As a methyl donor SAM allows DNA methylation. Once DNA is methylated, it switches the genes off and therefore, S-adenosylmethionine can be considered to control gene expression.
SAM is also involved in gene transcription, cell proliferation, and production of secondary metabolites. Hence SAM synthetase is fast becoming a drug target, in particular for the following diseases: depression, dementia, vacuolar myelopathy, liver injury, migraine, osteoarthritis, and as a potential cancer chemopreventive agent.
This article discusses the protein domains that make up the SAM synthetase enzyme and how these domains contribute to its function. More specifically, this article explores the shared pseudo-3-fold symmetry that makes the domains well-adapted to their functions.
This enzyme catalyses the following chemical reaction
ATP + L-methionine + H2O ⇌ phosphate + diphosphate + S-adenosyl-L-methionine
Conserved motifs in the 3'UTR of MAT2A mRNA
A computational comparative analysis of vertebrate genome sequences have identified a cluster of 6 conserved hairpin motifs in the 3'UTR of the MAT2A messenger RNA (mRNA) transcript. The predicted hairpins (named A-F) have strong evolutionary conservation and 3 of the predicted RNA structures (hairpins A, C and D) have been confirmed by in-line probing analysis. No structural changes were observed for any of the hairpins in the presence of metabolites SAM, S-adenosylhomocysteine or L-Methionine. They are proposed to be involved in transcript stability and their functionality is currently under investigation.
Protein overview
The S-adenosylmethionine synthetase enzyme is found in almost every organism bar parasites which obtain AdoMet from their host. Isoenzymes are found in bacteria, budding yeast and even in mammalian mitochondria. Most MATs are homo-oligomers and the majority are tetramers. The monomers are organised into three domains formed by nonconsecutive stretches of the sequence, and the subunits interact through a large flat hydrophobic surface to form the dimers.
S-adenosylmethionine synthetase N terminal domain
In molecular biology the protein domain S-adenosylmethionine synthetase N terminal domain is found at the N-terminal of the enzyme.
N terminal domain function
The N terminal domain is well conserved across different species. This may be due to its important function in substrate and cation binding. The residues involved in methionine binding are found in the N-terminal domain.
N terminal domain structure
The N terminal region contains two alpha helices and four beta strands.
S-adenosylmethionine synthetase Central domain
Central terminal domain function
The precise function of the central domain has not been fully elucidated, but it is thought to be important in aiding catalysis.
Central domain structure
The central region contains two alpha helices and four beta strands.
S-adenosylmethionine synthetase, C terminal domain
In molecular biology, the protein domain S-adenosylmethionine synthetase, C-terminal domain refers to the C terminus of the S-adenosylmethionine synthetase
C terminal domain function
The function of the C-terminal domain has been experimentally determined as being important for cytoplasmic localisation. The residues are scattered along the C-terminal domain sequence however once the protein folds, they position themselves closely together.
C terminal domain structure
The C-terminal domain contains two alpha-helices and four beta-strands.
References
External links
Protein domains
Enzymes
Methylating agents
Gene expression
Protein families
Transferases
Metabolism
EC 2.5.1 | S-Adenosylmethionine synthetase enzyme | [
"Chemistry",
"Biology"
] | 935 | [
"Gene expression",
"Protein classification",
"Methylating agents",
"Molecular genetics",
"Cellular processes",
"Methylation",
"Molecular biology",
"Biochemistry",
"Protein domains",
"Protein families",
"Metabolism"
] |
31,906,011 | https://en.wikipedia.org/wiki/Squamosa%20promoter%20binding%20protein | The SQUAMOSA promoter binding protein-like (SBP or SPL) family of transcription factors are defined by a plant-specific DNA-binding domain. The founding member of the family was identified based on its specific in vitro binding to the promoter of the snapdragon SQUAMOSA gene. SBP proteins are thought to be transcriptional activators.
Function
SBP proteins have roles in leaf development, vegetative phase change, flower and fruit development, plant architecture, sporogenesis, gibberellic acid signaling and toxin response.
Structure
The domain contains 10 conserved cysteine and histidine residues that probably are zinc ligands.
The SBP domain is a highly conserved DNA-binding domain. It is approximately 80 amino acids in length and contains a zinc finger motif with two zinc-binding sites: Cys-Cys-His-Cys and Cys-Cys-Cys-His. It has a three-stranded antiparallel beta-sheet.
References
External links
SBP family at PlantTFDB: Plant Transcription Factor Database
Protein domains
Transcription factors | Squamosa promoter binding protein | [
"Chemistry",
"Biology"
] | 225 | [
"Gene expression",
"Protein classification",
"Molecular biology stubs",
"Signal transduction",
"Protein domains",
"Induced stem cells",
"Molecular biology",
"Transcription factors"
] |
31,906,999 | https://en.wikipedia.org/wiki/Dihalomethane | The dihalomethanes are organic compounds in which two hydrogen atoms in methane are replaced by halogen atoms. They belong to the haloalkanes, specifically the subgroup of halomethanes, and contains ten members.
There are four members with only one kind of halogen atom: difluoromethane, dichloromethane, dibromomethane and diiodomethane.
There are six members with two kinds of halogen atoms:
Bromochloromethane
Bromofluoromethane
Bromoiodomethane
Chlorofluoromethane
Chloroiodomethane
Fluoroiodomethane
Reference
See also
Monohalomethane
Trihalomethane
Tetrahalomethane
Chemical substances | Dihalomethane | [
"Physics",
"Chemistry"
] | 160 | [
"Materials",
"Chemical substances",
"nan",
"Matter"
] |
31,907,150 | https://en.wikipedia.org/wiki/Phonon%20noise | Phonon noise, also known as thermal fluctuation noise, arises from the random exchange of energy between a thermal mass and its surrounding environment. This energy is quantized in the form of phonons. Each phonon has an energy of order , where is the Boltzmann constant and is the temperature. The random exchange of energy leads to fluctuations in temperature. This occurs even when the thermal mass and the environment are in thermal equilibrium, i.e. at the same time-average temperature. If a device has a temperature-dependent electrical resistance, then these fluctuations in temperature lead to fluctuations in resistance. Examples of devices where phonon noise is important include bolometers and calorimeters. The superconducting transition edge sensor (TES), which can be operated either as a bolometer or a calorimeter, is an example of a device for which phonon noise can significantly contribute to the total noise.
Although Johnson–Nyquist noise shares many similarities with phonon noise (e.g. the noise spectral density depends on the temperature and is white at low frequencies), these two noise sources are distinct. Johnson–Nyquist noise arises from the random thermal motion of electrons, whereas phonon noise arises from the random exchange of phonons. Johnson–Nyquist noise is easily modeled at thermal equilibrium, where all components of the circuit are held at the same temperature. A general equilibrium model for phonon noise is usually impossible because different components of the thermal circuit are nonuniform in temperature and also often not time invariant, as in the occasional energy deposition from particles incident on a detector. The transition edge sensor typically maintains the temperature through negative electrothermal feedback associated with changes in internal electrical power.
An approximate formula for the noise-equivalent power (NEP) due to phonon noise in a bolometer when all components are very close to a temperature T is
$$\mathrm{NEP} = \sqrt{4 k_B T^2 G},$$
where G is the thermal conductance and the NEP is measured in $\mathrm{W}/\sqrt{\mathrm{Hz}}$. In calorimetric detectors, the rms energy resolution due to phonon noise near quasi-equilibrium is described using a similar formula,
$$\Delta E_{\mathrm{rms}} = \sqrt{k_B T^2 C},$$
where C is the heat capacity.
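A small numerical illustration of these two expressions follows; the chosen temperature, thermal conductance and heat capacity are representative values assumed for a sub-kelvin detector, not values taken from the text.

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant, J/K

T = 0.1              # operating temperature, K (illustrative)
G = 1e-10            # thermal conductance, W/K (illustrative)
C = 1e-12            # heat capacity, J/K (illustrative)

nep = np.sqrt(4 * K_B * T**2 * G)    # noise-equivalent power, W / sqrt(Hz)
sigma_E = np.sqrt(K_B * T**2 * C)    # rms energy resolution, J

print(f"NEP ~ {nep:.2e} W/Hz^0.5, energy resolution ~ {sigma_E:.2e} J")
```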
A real bolometer or calorimeter is not at equilibrium because of a temperature gradient between the absorber and the bath. Since G and C are generally nonlinear functions of temperature, a more advanced model may include the temperature of both the absorber and the bath and treat G or C as a power law across this temperature range.
See also
Thermal fluctuations
References
Condensed matter physics
Noise (electronics)
Superconducting detectors | Phonon noise | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 527 | [
"Matter",
"Superconductivity",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Superconducting detectors"
] |
31,907,287 | https://en.wikipedia.org/wiki/Heterogeneous%20random%20walk%20in%20one%20dimension | In dynamics, probability, physics, chemistry and related fields, a heterogeneous random walk in one dimension is a random walk in a one dimensional interval with jumping rules that depend on the location of the random walker in the interval.
For example, say that both the time and the interval are discrete; namely, the random walker jumps every time step either left or right. A possible heterogeneous random walk draws in each time step a random number that determines the local jumping probabilities and then a random number that determines the actual jump direction. Specifically, say that the interval has 9 sites (labeled 1 through 9), and the sites (also termed states) are connected with each other linearly, where the edge sites are connected to their adjacent sites and to each other. In each time step, the jump probabilities (from the current site) are determined by flipping a coin: for heads we set the probability of jumping left to 1/3, whereas for tails we set the probability of jumping left to 0.55. Then, a random number is drawn from a uniform distribution: when the random number is smaller than the probability of jumping left, the jump is to the left; otherwise, the jump is to the right. Usually, in such a system, we are interested in the probability of staying in each of the various sites after t jumps, and in the limit of this probability when t is very large, $t \to \infty$.
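A direct simulation of the 9-site example just described is sketched below; treating the two edge sites as connected to each other (a ring) follows the reading of the paragraph above and is an assumption, as are the number of steps and the site labelling starting from zero.

```python
import numpy as np

def heterogeneous_walk(n_sites=9, n_steps=10_000, rng=None):
    """Simulate the 9-site example: at every step a fair coin fixes the local
    left-jump probability (1/3 for heads, 0.55 for tails), then a uniform
    random number decides the jump direction.  Sites form a ring, so the two
    edge sites are connected to each other."""
    rng = np.random.default_rng() if rng is None else rng
    site = 0
    visits = np.zeros(n_sites, dtype=int)
    for _ in range(n_steps):
        p_left = 1 / 3 if rng.random() < 0.5 else 0.55   # coin flip sets the local rule
        step = -1 if rng.random() < p_left else +1       # uniform number picks the direction
        site = (site + step) % n_sites                   # ring boundary
        visits[site] += 1
    return visits / n_steps      # occupation frequencies after n_steps jumps

print(heterogeneous_walk())
```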
Generally, the time in such processes can also vary in a continuous way, and the interval is also either discrete or continuous. Moreover, the interval is either finite or without bounds. In a discrete system, the connections are among adjacent states. The basic dynamics are either Markovian, semi-Markovian, or even not Markovian depending on the model. In discrete systems, heterogeneous random walks in 1d have jump probabilities that depend on the location in the system, and/or different jumping time (JT) probability density functions (PDFs) that depend on the location in the system.
General solutions for heterogeneous random walks in 1d obey equations ()-(), presented in what follows.
Introduction
Random walks in applications
Random walks can be used to describe processes in biology, chemistry, and physics, including chemical kinetics and polymer dynamics. Random walks also appear when studying individual molecules, individual channels, individual biomolecules, individual enzymes, and quantum dots. Importantly, PDFs and special correlation functions can be easily calculated from single molecule measurements but not from ensemble measurements. This unique information can be used for discriminating between distinct random walk models that share some properties, and this demands a detailed theoretical analysis of random walk models. In this context, utilizing the information content in single molecule data is a matter of ongoing research.
Formulations of random walks
The actual random walk obeys a stochastic equation of motion, but its probability density function (PDF) obeys a deterministic equation. PDFs of random walks can be formulated in terms of the (discrete in space) master equation and the generalized master equation or the (continuous in space and time) Fokker Planck equation and its generalizations. Continuous time random walks, renewal theory, and the path representation are also useful formulations of random walks. The network of relationships between the various descriptions provides a powerful tool in the analysis of random walks. Arbitrarily heterogeneous environments make the analysis difficult, especially in high dimensions.
Results for random walks in one dimension
Simple systems
Known important results in simple systems include:
In a symmetric Markovian random walk, the Green's function (also termed the PDF of the walker) for occupying state i is a Gaussian in the position and has a variance that scales like the time. This is correct for a system with discrete time and space, yet also in a system with continuous time and space. These results are for systems without bounds.
When there is a simple bias in the system (i.e. a constant force is applied on the system in a particular direction), the average distance of the random walker from its starting position is linear with time.
When trying to reach a distance L from the starting position in a finite interval of length L, the time for reaching this distance is exponential with the length L: . Here, the diffusion is against a linear potential.
Heterogeneous systems
The solution for the Green's function for a semi-Markovian random walk in an arbitrarily heterogeneous environment in 1D was recently given using the path representation. (The function is the PDF for occupying state i at time t given that the process started at state j exactly at time 0.) A semi-Markovian random walk in 1D is defined as follows: a random walk whose dynamics are described by the (possibly) state- and direction-dependent JT-PDFs, , for transitions between states i and i ± 1, that generates stochastic trajectories of uncorrelated waiting times that are not-exponential distributed. obeys the normalization conditions (see fig. 1)
The dynamics can also include state- and direction-dependent irreversible trapping JT-PDFs, , with I=i+L. The environment is heterogeneous when depends on i. The above process is also a continuous time random walk and has an equivalent generalized master equation representation for the Green's function.
Explicit expressions for heterogeneous random walks in 1D
In a completely heterogeneous semi-Markovian random walk in a discrete system of L (> 1) states, the Green's function was found in Laplace space (the Laplace transform of a function $f(t)$ is defined by $\bar{f}(s) = \int_0^\infty e^{-st} f(t)\,dt$). Here, the system is defined through the jumping time (JT) PDFs: connecting state i with state j (the jump is from state i). The solution is based on the path representation of the Green's function, calculated when including all the path probability density functions of all lengths:
Here,
and
Also, in Eq. (),
and
with
and
For L = 1, . In this paper, the symbol [L/2], as appearing in the upper bound of the sum in eq. () is the floor operation (round towards zero). Finally, the factor in eq. () has the same form as in in eqs. ()-(), yet it is calculated on a lattice . Lattice is constructed from the original lattice by taking out from it the states i and j and the states between them, and then connecting the obtained two fragments. For cases in which a fragment is a single state, this fragment is excluded; namely, lattice is the longer fragment. When each fragment is a single state, .
Equations ()-() hold for any 1D semi-Markovian random walk in a L-state chain, and form the most general solution in an explicit form for random walks in 1d.
Path representation of heterogeneous random walks
Clearly, in Eqs. ()-() solves the corresponding continuous time random walk problem and the equivalent generalized master equation. Equations ()-() enable
analyzing semi-Markovian random walks in 1D chains from a wide variety of aspects. Inversion to time domain gives the Green’s function, but also moments and correlation functions can be calculated from Eqs. ()-(), and then inverted into time domain (for relevant quantities). The closed-form also manifests its utility when numerical inversion of the generalized master equation is unstable. Moreover, using in simple analytical manipulations gives, (i) the first passage time PDF, (ii)–(iii) the Green’s functions for a random walk with a special WT-PDF for the first event and for a random walk in a circular L-state 1D chain, and (iv) joint PDFs in space and time with many arguments.
Still, the formalism used in this article is the path representation of the Green's function , and this supplies further information on the process. The path representation follows:
The expression for in Eq. () follows,
is the PDF of reaching state i exactly at time t when starting at state j exactly at time 0. This is the path PDF in time that is built from all paths with transitions that connect states j with i. Two different path types contribute to : paths made of the same states appearing in different orders and different paths of the same length of transitions. Path PDFs for translation invariant chains are mono-peaked. Path PDF for translation invariant chains mostly contribute to the Green's function in the vicinity of its peak, but this behavior is believed to characterize heterogeneous chains as well.
We also note that the following relation holds, . Using this relation, we focus in what follows on solving .
Path PDFs
Complementary information on the random walk with that supplied with the Green’s function is contained in path PDFs. This is evident, when constructing approximations for Green’s functions, in which path PDFs are the building blocks in the analysis. Also, analytical properties of the Green’s function are clarified only in path PDF analysis. Here, presented is the recursion relation for in the length n of path PDFs for any fixed value of L. The recursion relation is linear in path PDFs with the s in Eq. () serving as the n independent coefficients, and is of order [L / 2]:
The recursion relation is used for explaining the universal formula for the coefficients in Eq. ().
The solution of the recursion relation is obtained by applying a z transform:
Setting in Eq. () gives . The Taylor expansion of Eq. () gives . The result follows:
In Eq. () is one for , and otherwise,
where
The initial number follow:
and,
References
Other Bibliography
Variants of random walks
Statistical mechanics | Heterogeneous random walk in one dimension | [
"Physics"
] | 2,036 | [
"Statistical mechanics"
] |
31,908,510 | https://en.wikipedia.org/wiki/Bootleg%20ground | In building wiring installed with separate neutral and protective ground bonding conductors (a TN-S network), a bootleg ground (or a false ground) is a connection between the neutral side of a receptacle or light fixture and the ground lug or enclosure of the wiring device.
Description
A bootleg ground connects the neutral side of the receptacle to the conductive casing of an appliance or lamp. This can be a hazard because the neutral wire is a current-carrying conductor, which means the exposed casing can become energized. In addition, a fault condition to a bootleg ground will not trip a GFCI breaker, nor protect a receptacle that is wired from the load side of a GFCI receptacle.
Before 1996, in the United States it was common to ground the frames of large 120/240-volt permanently-connected appliances (such as a clothes dryer or oven) to neutral conductors. This has been prohibited in new installations since the 1996 National Electrical Code (upon local adoption by legislation or regulation). Existing installations are permitted to continue in accordance with NEC 250.140 Exception.
Correct-polarity bootleg ground
In the less-dangerous instance of a bootleg ground, a short wire jumper is connected from the bonding screw terminal (usually colored green) on a NEMA 5-15R or 5-20R outlet to the neutral (a.k.a. grounded conductor, colored white according to code) or directly to the white neutral wire via a pigtail. This practice is a NEC code violation, but a standard 3-lamp receptacle tester will report the outlet as correctly wired.
Reverse-polarity bootleg ground
In the very-dangerous instance of a reverse polarity bootleg ground, the hot and neutral wires have been connected to the opposite terminals, and a jumper or pigtail connection is made between the green bonding screw terminal and what is believed to be the neutral circuit. But because the wiring has been crossed at some point, the hot 120-volt wire is now connected directly to the ground on the receptacle, placing low-impedance live voltage on all grounded parts of all equipment plugged into that outlet. This hazardous connection allows people to come into contact with a deadly voltage with a current path back to the source (the power transformer) that will not trip either a normal circuit breaker, a GFCI, nor an AFCI quickly enough to prevent electrocution.
Safer alternatives
If a safe equipment ground path is not provided via the existing cables installed within the building, they ideally should be replaced with new cabling which includes a safety ground conductor.
A safe alternative (where local electrical code allows it), permitted by recent editions of the National Electrical Code [NEC Sec. 406.4(D)(2)(b)] if a grounding connection is not practicable, is to install a GFCI and leave the grounding terminal screw unconnected. This is permitted if a permanent label is installed that says "No Equipment Ground" on the GFCI, and a label that states “GFCI Protected” and “No Equipment Ground” is placed on all downstream receptacles.
Other countries
West Germany banned bootleg grounding in 1973, although it was common practice before and can still be found in older installations.
In Finland, using neutral as a ground conductor was a common practice until 1989. After that, a thicker PEN-wire was used as both ground and neutral until it was banned in 2007.
See also
Earthing System
Electrical wiring in North America
References
Electric power
Electric power in the United States
Electrical safety
Electrical wiring | Bootleg ground | [
"Physics",
"Engineering"
] | 747 | [
"Physical quantities",
"Electrical systems",
"Building engineering",
"Physical systems",
"Power (physics)",
"Electric power",
"Electrical engineering",
"Electrical wiring"
] |
31,913,717 | https://en.wikipedia.org/wiki/Two-state%20trajectory | A two-state trajectory (also termed two-state time trajectory or a trajectory with two states) is a dynamical signal that fluctuates between two distinct values: ON and OFF, open and closed, , etc. Mathematically, the signal has, for every either the value or .
In most applications, the signal is stochastic; nevertheless, it can have deterministic ON-OFF components. A completely deterministic two-state trajectory is a square wave. There are many ways one can create a two-state signal, e.g. flipping a coin repeatedly.
A stochastic two-state trajectory is among the simplest stochastic processes. Extensions include: three-state trajectories, higher discrete state trajectories, and continuous trajectories in any dimension.
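A minimal sketch of a stochastic two-state (random telegraph) trajectory with exponentially distributed dwell times follows; the dwell-time parameters, the starting state and the ON/OFF labels are illustrative assumptions, not properties of two-state trajectories in general.

```python
import numpy as np

def telegraph_trajectory(t_max=10.0, tau_on=1.0, tau_off=0.5, rng=None):
    """Generate the switching times of a two-state signal alternating between
    ON (1) and OFF (0), with exponentially distributed dwell times in each state."""
    rng = np.random.default_rng() if rng is None else rng
    t, state = 0.0, 1                       # start in the ON state (illustrative)
    times, states = [0.0], [state]
    while t < t_max:
        dwell = rng.exponential(tau_on if state == 1 else tau_off)
        t += dwell
        state = 1 - state                   # flip ON <-> OFF
        times.append(min(t, t_max))
        states.append(state)
    return np.array(times), np.array(states)

times, states = telegraph_trajectory()
```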
Two state trajectories in biophysics, and related fields
Two state trajectories are very common. Here, we focus on relevant trajectories in scientific experiments: these are seen in measurements in chemistry, physics, and the biophysics of individual molecules (e.g. measurements of protein dynamics and DNA and RNA dynamics, activity of ion channels, enzyme activity, quantum dots). From these experiments, one aims at finding the correct model explaining the measured process. We explain about various relevant systems in what follows.
Ion channels
Since the ion channel is either opened or closed, when recording the number of ions that go through the channel when time elapses, observed is a two-state trajectory of the current versus time.
Enzymes
Here, there are several possible experiments on the activity of individual enzymes with a two-state signal. For example, one can create substrate that only upon the enzymatic activity shines light when activated (with a laser pulse). So, each time the enzyme acts, we see a burst of photons during the time period that the product molecule is in the laser area.
Dynamics of biological molecules
Structural changes of molecules are viewed in various experiments' type. Förster resonance energy transfer is an example.
In many cases one sees a time trajectory that fluctuates among several cleared defined states.
Quantum dots
Another system that fluctuates among an on state and an off state is a quantum dot. Here, the fluctuations are since the molecule is either in a state that emits photons or in a dark state that does not emit photons (the dynamics among the states are influenced also from its interactions with the surroundings).
See also
Single-molecule experiment
Reduced dimensions form
Kinetic scheme
Master equation
Wave
References
Statistical mechanics
Stochastic processes | Two-state trajectory | [
"Physics"
] | 527 | [
"Statistical mechanics"
] |
37,537,790 | https://en.wikipedia.org/wiki/Xenophagy | Xenophagy (Greek "strange" + "eating") and allotrophy (Greek "other" + "nutrient") are changes in established patterns of biological consumption, by individuals or groups.
In entomology, xenophagy is a categorical change in diet, such as an herbivore becoming carnivorous, a predator becoming necrophagous, a coprophage becoming necrophagous or carnivorous, or a reversal of such changes. Allotrophy is a less extreme change in diet, such as in the case of the seven-spot ladybird, which can diversify a diet of aphids to sometimes include pollen. There are several apparent cases of allotrophy in Israeli Longitarsus beetles.
In microbiology, xenophagy is the process by which a cell directs autophagy against pathogens, as reflected in the study of antiviral defenses. Cellular xenophagy is an innate component of immune responses, though the general importance of xenophagy is not yet certain.
In ecology, allotrophy is also reflected in eutrophication, a change in nutrient source, such as when an aquatic ecosystem starts receiving new nutrients from drainage of the surrounding land.
References
Insect behavior
Eating behaviors
Cellular processes | Xenophagy | [
"Biology"
] | 268 | [
"Biological interactions",
"Eating behaviors",
"Behavior",
"Cellular processes"
] |
37,540,308 | https://en.wikipedia.org/wiki/Rulkov%20map | The Rulkov map is a two-dimensional iterated map used to model a biological neuron. It was proposed by Nikolai F. Rulkov in 2001. The use of this map to study neural networks has computational advantages because the map is easier to iterate than a continuous dynamical system. This saves memory and simplifies the computation of large neural networks.
The model
The Rulkov map, with $n$ as discrete time, can be represented by the following dynamical equations:
$$x_{n+1} = \frac{\alpha}{1+x_n^2} + y_n,$$
$$y_{n+1} = y_n - \mu (x_n - \sigma),$$
where $x_n$ represents the membrane potential of the neuron. The variable $y_n$ in the model is a slow variable due to a very small value of the parameter $\mu$ ($0 < \mu \ll 1$). Unlike variable $x_n$, variable $y_n$ does not have explicit biological meaning, though some analogy to gating variables can be drawn. The parameter $\sigma$ can be thought of as an external dc current given to the neuron and $\alpha$ is a nonlinearity parameter of the map. Different combinations of parameters $\sigma$ and $\alpha$ give rise to different dynamical states of the neuron like resting, tonic spiking and chaotic bursts. The chaotic bursting is enabled above $\alpha \approx 4$.
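A brief sketch iterating the map written above; the parameter values and initial conditions are illustrative choices in the bursting regime, not values prescribed by the model.

```python
import numpy as np

def rulkov(alpha=4.5, sigma=-1.0, mu=0.001, n_steps=5000, x0=-1.0, y0=-3.0):
    """Iterate the Rulkov map:
        x_{n+1} = alpha / (1 + x_n**2) + y_n
        y_{n+1} = y_n - mu * (x_n - sigma)
    Returns the membrane-potential-like trace x and the slow variable y."""
    x = np.empty(n_steps)
    y = np.empty(n_steps)
    x[0], y[0] = x0, y0
    for n in range(n_steps - 1):
        x[n + 1] = alpha / (1.0 + x[n] ** 2) + y[n]
        y[n + 1] = y[n] - mu * (x[n] - sigma)
    return x, y

x, y = rulkov()
```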
Analysis
The dynamics of the Rulkov map can be analyzed by analyzing the dynamics of its one dimensional fast submap. Since the variable evolves very slowly, for moderate amount of time it can be treated as a parameter with constant value in the variable's evolution equation (which we now call as one dimensional fast submap because as compared to , is a fast variable). Depending on the value of , this submap can have either one or three fixed points. One of these fixed points is stable, another is unstable and third may change the stability. As increases, two of these fixed points (stable one and unstable one) merge and disappear by saddle-node bifurcation.
Coupling
Coupling of two neurons has been investigated by Irina Bashkirtseva and Alexander Pisarchik, who explored transitions between stationary, periodic, quasiperiodic, and chaotic regimes. They also addressed the additional consequences of random disturbances on this system, leading to noise-induced transitions between periodic and chaotic stochastic oscillations.
Other applications
Adaptations of the Rulkov map have found applications in labor and industrial economics, particularly in the realm of corporate dynamics. The proposed framework leverages synchronization and chaos regularization to account for dynamic transitions among multiple equilibria, incorporate skewness and idiosyncratic elements, and unveil the influence of effort on corporate profitability. The results are substantiated through empirical validation with real-world data. Orlando and Bufalo introduced a deterministic model based on the Rulkov map, effectively modeling volatility fluctuations in corporate yields and spreads, even during distressed periods like COVID-19. Comparing it to the ARIMA-EGARCH model, designed for handling various volatility aspects, both models yield comparable results. Nevertheless, the deterministic nature of the Rulkov map model may provide enhanced explanatory capabilities.
Other applications of the Rulkov map include memristors, financial markets, biological systems, etc.
See also
Biological neuron model
Hodgkin–Huxley model
FitzHugh–Nagumo model
Chialvo map
References
Dynamical systems | Rulkov map | [
"Physics",
"Mathematics"
] | 650 | [
"Mechanics",
"Dynamical systems"
] |
37,541,861 | https://en.wikipedia.org/wiki/Big%20Brake | The Big Brake is a theoretical scientific model suggested as one of the possibilities for the ultimate fate of the universe. In this model the effect of dark energy reverses, stopping the accelerating expansion of the Universe, and causing an infinite rate of deceleration. All cosmic matter would be subjected to extreme tidal forces and be destroyed. Another possibility is matter may still exist, albeit in a different form and organization. The consequences for space and time are also unclear.
See also
Big Bang
Big Crunch
Big Freeze
Big Rip
Big Whimper
References
Physical cosmology
Ultimate fate of the universe | Big Brake | [
"Physics",
"Astronomy"
] | 115 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Physical cosmology",
"Astrophysics"
] |
37,542,163 | https://en.wikipedia.org/wiki/Horrocks%20construction | In mathematics, the Horrocks construction is a method for constructing vector bundles, especially over projective spaces, introduced by Geoffrey Horrocks. His original construction gave an example of an indecomposable rank 2 vector bundle over 3-dimensional projective space, and generalizes to give examples of vector bundles of higher ranks over other projective spaces. The Horrocks construction is used in the ADHM construction to construct instantons over the 4-sphere.
References
Vector bundles | Horrocks construction | [
"Mathematics"
] | 92 | [
"Topology stubs",
"Topology"
] |
28,117,700 | https://en.wikipedia.org/wiki/Coulomb%20excitation | Coulomb excitation is a technique in experimental nuclear physics to probe the electromagnetic aspect of nuclear structure. In Coulomb excitation, a nucleus is excited by an inelastic collision with another nucleus through the electromagnetic interaction. In order to ensure that the interaction is electromagnetic in nature — and not nuclear — the distance of closest approach of the colliding nuclei has to be sufficiently large. In particular, in low-energy Coulomb excitation (taking place at beam energies of a few megaelectronvolts per nucleon) the commonly adopted empirical criterion is that if the surfaces of the colliding nuclei are separated by at least 5 femtometers, the contribution of the short-range nuclear interaction to the excitation process can be neglected.
From the measured excitation cross sections, electromagnetic transition probabilities between the nuclear energy levels can be extracted.
This method is particularly useful for investigating collectivity in nuclei, as collective excitations are often connected by strong electric quadrupole transitions.
Moreover, it is the only experimental method in nuclear physics that is sensitive to electric quadrupole moments of excited nuclear states with lifetimes shorter than nanoseconds.
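As a rough illustration of the "safe" distance criterion, the following Python sketch estimates the surface separation at the distance of closest approach for a head-on collision. It is a minimal sketch under stated assumptions: the point-charge Rutherford formula, the nuclear radius parameterization R = 1.25 A^(1/3) fm, and the particular beam–target combination are all illustrative choices, not values from this article.

# Minimal sketch (not from the article): checks the "safe" Coulomb-excitation
# criterion by estimating the head-on distance of closest approach.
# Assumptions: point-charge Rutherford trajectory, nuclear radius R = 1.25*A^(1/3) fm.

E2 = 1.44  # e^2 / (4*pi*eps0) in MeV*fm

def surface_separation(z1, a1, z2, a2, e_lab_per_nucleon):
    """Surface-to-surface separation (fm) at closest approach, head-on collision."""
    e_lab = e_lab_per_nucleon * a1                  # total lab kinetic energy (MeV)
    e_cm = e_lab * a2 / (a1 + a2)                   # centre-of-mass energy (non-relativistic)
    d_min = z1 * z2 * E2 / e_cm                     # distance of closest approach (fm)
    radii = 1.25 * (a1 ** (1 / 3) + a2 ** (1 / 3))  # assumed radius parameterization
    return d_min - radii

# Example values (assumed): a 104Pd beam at 3 MeV/nucleon on a 208Pb target.
sep = surface_separation(z1=46, a1=104, z2=82, a2=208, e_lab_per_nucleon=3.0)
print(f"surface separation ~ {sep:.1f} fm ->", "safe" if sep >= 5 else "not safe")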
References
Nuclear physics | Coulomb excitation | [
"Physics"
] | 250 | [
"Nuclear and atomic physics stubs",
"Nuclear physics"
] |
28,125,915 | https://en.wikipedia.org/wiki/Solid-state%20reaction%20route | The solid-state reaction route is the most widely used method for the preparation of polycrystalline solids from a mixture of solid starting materials. Solids do not react together at room temperature over normal time scales and it is necessary to heat them to much higher temperatures, often to 1000 to 1500 °C, in order for the reaction to occur at an appreciable rate. The factors on which the feasibility and rate of a solid state reaction depend include reaction conditions, structural properties of the reactants, surface area of the solids, their reactivity and the thermodynamic free energy change associated with the reaction.
Outline of the experimental procedure
Reagents
These are the solid reactants from which it is proposed to prepare a solid crystalline compound. The selection of reactant chemicals depends on the reaction conditions and the expected nature of the product. The reactants are dried thoroughly prior to weighing. As an increase in surface area enhances the reaction rate, fine-grained materials should be used if possible.
Mixing
After the reactants have been weighed out in the required amounts, they are mixed. For manual mixing of small quantities, usually an agate mortar and pestle are employed. A sufficient amount of some volatile organic liquid – preferably acetone or alcohol – is added to the mixture to aid homogenization. This forms a paste which is mixed thoroughly. During the process of grinding and mixing, the organic liquid gradually volatilizes and has usually evaporated completely after 10 to 15 minutes. For quantities much larger than about 20 g, mechanical mixing using a ball mill is usually adopted.
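As an illustration of the weighing arithmetic, the Python sketch below computes reagent masses for a stoichiometric mixture. The BaCO3 + TiO2 -> BaTiO3 + CO2 example, the 1:1 stoichiometry and the molar masses are assumptions for illustration only and are not taken from this article.

# Minimal sketch (illustrative): reagent masses for a stoichiometric
# solid-state synthesis, here BaCO3 + TiO2 -> BaTiO3 + CO2.
MOLAR_MASS = {"BaCO3": 197.34, "TiO2": 79.87, "BaTiO3": 233.19}  # g/mol (approximate)

def reagent_masses(target_mass_g, product, reagents):
    """Masses of reagents needed for target_mass_g of product (1:1 stoichiometry assumed)."""
    moles = target_mass_g / MOLAR_MASS[product]
    return {r: moles * MOLAR_MASS[r] for r in reagents}

for reagent, mass in reagent_masses(10.0, "BaTiO3", ["BaCO3", "TiO2"]).items():
    print(f"{reagent}: {mass:.2f} g")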
Container material
For the subsequent reaction at high temperatures, it is necessary to choose a suitable container material which is chemically inert to the reactants under the heating conditions used. The noble metals, platinum and gold, are usually suitable. Containers may be crucibles or boats made from foil. For low-temperature reactions, other metals such as nickel (below 600–700 °C) can be used.
Heat treatment
The heating programme to be used depends very much on the form and reactivity of the reactants. The nature of the reactant chemicals is considered in detail when controlling either the temperature or the atmosphere. A suitable furnace is used for the heat treatment. Pelleting of the samples prior to heating is preferred, since it increases the area of contact between the grains.
Analysis
The product materials are analyzed using various characterization techniques such as X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), etc.
References
Chemical reactions
Chemical reaction engineering | Solid-state reaction route | [
"Chemistry",
"Engineering"
] | 523 | [
"Chemical engineering",
"Chemical reaction engineering",
"nan"
] |
28,126,249 | https://en.wikipedia.org/wiki/C24H28N2O3 | {{DISPLAYTITLE:C24H28N2O3}}
The molecular formula C24H28N2O3 (molar mass: 392.49 g/mol, exact mass: 392.2100 u) may refer to:
ADL-5859
Indacaterol
Ivacaftor
Naftopidil | C24H28N2O3 | [
"Chemistry"
] | 74 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
44,957,547 | https://en.wikipedia.org/wiki/Reuptake%20modulator | A reuptake modulator, or transporter modulator, is a type of drug which modulates the reuptake of one or more neurotransmitters via their respective neurotransmitter transporters. Examples of reuptake modulators include reuptake inhibitors (transporter blockers) and reuptake enhancers.
See also
Releasing agent
Release modulator
Transporter substrate
Channel modulator
Enzyme modulator
Receptor modulator
Drugs by mechanism of action
Psychopharmacology | Reuptake modulator | [
"Chemistry"
] | 103 | [
"Psychopharmacology",
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
44,959,691 | https://en.wikipedia.org/wiki/Rosiwal%20scale | The Rosiwal scale is a hardness scale in mineralogy, named in memory of the Austrian geologist August Karl Rosiwal. The Rosiwal scale attempts to give more quantitative values of scratch hardness, unlike the Mohs scale, which is a qualitative measurement with relative values.
The Rosiwal method (also called the Delesse-Rosiwal method) is a method of petrographic analysis and is performed by scratching a polished surface under a known load using a scratch-tip with a known geometry. The hardness is calculated by finding the volume of removed material, but this measurement can be difficult and must sample a large enough number of grains in order to have statistical significance.
Rosiwal scale values
The scale measures the scratch hardness of a mineral expressed as a quantitative value. These measurements must be performed in a laboratory, since the surfaces must be flat and smooth. The reference value of the Rosiwal scale is corundum, set to 1000 (unitless).
See also
Hardness
August Karl Rosiwal
Friedrich Mohs
References
External links
hardness minerals
Geotopo XXI:Description of program content
hardness and toughness
Glossary of technical mining
Materials science
Mineralogy
Hardness tests | Rosiwal scale | [
"Physics",
"Materials_science",
"Engineering"
] | 369 | [
"Applied and interdisciplinary physics",
"Materials science",
"Materials testing",
"nan",
"Hardness tests"
] |
44,960,861 | https://en.wikipedia.org/wiki/Auticon | Auticon (stylized "auticon") is an international information technology consulting firm that exclusively employs adults on the autism spectrum as Information technology (IT) consultants. Auticon identifies as a social enterprise.
Services mainly focus on IT quality management, including transformation, migration, data analytics, data analysis, security and deep web analysis, as well as compliance and reporting.
Auticon is based in Berlin and currently employs more than 200 members of staff, around 150 of whom are on the autism spectrum. Auticon has offices in the United Kingdom, United States, Germany, France, Switzerland, Canada, Australia, and Italy.
History
Dirk Müller-Remus, who has a son on the autism spectrum, launched Auticon in 2011 with an investment of the Munich-based Ananda Social Venture Fund. The launch was inspired by the Belgian company Passwerk. Auticon's concept to employ people on the autism spectrum as ICT consultants has since been acknowledged internationally. The Auticon model was presented at the G8 Social Impact Investment Forum, held in London on 6 June 2013, in front of 150 leaders in social impact investment.
In 2018, Auticon acquired MindSpark, a company in Santa Monica, California, founded in 2013 by Gray Benoist, whose two sons are also on the autism spectrum.
Awards
2013: IQ Award
2014: BITKOM Innovator's Pitch
2015: Deutsche Bank, Land der Ideen
2015: New Work Award
2015: Sonderpreis, Deutscher Gründerpreis
2017: Social Enterprise UK Awards, One to Watch Award
2019: Milestone Autism Resources "Visionary Employer Award"
2020: Fast Company World Changing Ideas
See also
Specialisterne
References
Autism-related organizations
Software testing
Social enterprises
Companies based in Berlin
ICT service providers
German companies established in 2011 | Auticon | [
"Engineering"
] | 367 | [
"Software engineering",
"Software testing"
] |
44,962,990 | https://en.wikipedia.org/wiki/Grid%20compass | A grid compass, also known as a grid steering compass, is a navigational instrument. It is a design of magnetic compass that facilitates steering a steady course without the risk of parallax error.
The grid compass is the simplest steering compass from the pilot's or helmsman's point of view, because he does not need to watch the number (or the division mark) of the desired course. He has only to steer the craft so that the N/S compass needle lies parallel between the lines of the overlay disc. The principle is similar to that of a compass-controlled autopilot. Although sophisticated electronics have taken over in commercial navigation, light aircraft, gliders and yachtsmen still use the grid compass because of its simplicity and ease of use.
Description
The compass card is in the form of a bold parallel-sided arrow which indicates magnetic north. Some models have an east/west cross bar as well. Overlaying this, but in the same gimbal or suspension, is a transparent plate which can be rotated around the same axis as the compass card but has sufficient friction (or a mechanical clamp) to stay fixed relative to the gimbal system once set to a course. Across this disk are engraved a series of parallel lines. The outer edge of this disk is marked clockwise in degrees, the radial line meeting 0° being parallel to the engraved lines, so that a course can be laid for any bearing from 0° to 359°. By keeping the arrow on the card and the lines on the overlay parallel, the pilot or helmsman can keep the course set. The frequency of the degree markings depends upon the size of the compass.
To set a course the rotating ring is (unlocked and) turned so that the heading in degrees on the ring aligns with the centre line of the craft. The craft comes onto the required course when the arrow on the compass card is parallel with the lines on the ring.
The grid steering compasses (Type P8 to Type P11) were fitted in World War II Spitfire aeroplanes, replacing the old P4 series of instruments. They were used for course setting and reading, and as a check compass on aircraft fitted with a remote indicating compass.
See also
Astrocompass
Compass
Silva Compass
Solar compass
Marine sandglass
Bearing compass
References
Bibliography
Aircraft Instruments
External links
A Compass for ‘Sandpiper’
"A Job Thought Impossible" , the story of Chrysler Corporation's mass-production of previously hand-made compasses for World War II naval requirements.
Navigational equipment
Avionics
Aircraft instruments | Grid compass | [
"Technology",
"Engineering"
] | 520 | [
"Avionics",
"Aircraft instruments",
"Measuring instruments"
] |
44,965,027 | https://en.wikipedia.org/wiki/Dipicolylamine | Dipicolylamine is an organic compound with the formula HN(CH2C5H4N)2. It is a yellow liquid that is soluble in polar organic solvents. The molecule is a secondary amine with two picolyl substituents. The compound is a common tridentate ligand in coordination chemistry.
The compound can be prepared by several methods: alkylation of picolylamine with picolyl chloride, deamination of picolylamine, and reductive amination of picolylamine with pyridine-2-carboxaldehyde. It is commonly used to bind to bacteria in purifying mixtures that require separation.
Related compounds
Tris(2-pyridylmethyl)amine
References
Amines
2-Pyridyl compounds | Dipicolylamine | [
"Chemistry"
] | 172 | [
"Amines",
"Bases (chemistry)",
"Functional groups"
] |
44,965,598 | https://en.wikipedia.org/wiki/Eoxin%20C4 | {{DISPLAYTITLE:Eoxin C4}}
Eoxin C4 (EXC4), also known as 14,15-leukotriene C4, is an eoxin. Cells make eoxins by metabolizing arachidonic acid with a 15-lipoxygenase enzyme to form 15(S)-hydroperoxyeicosatetraenoic acid (i.e. 15(S)-HpETE). This product is then converted serially to EXA4, EXC4, EXD4, and EXE4 by LTC4 synthase, an unidentified gamma-glutamyltransferase, and an unidentified dipeptidase, respectively, in a pathway which appears similar if not identical to the pathway which forms leukotrienes, i.e. LTA4, LTC4, LTD4, and LTE4. This pathway is schematically shown as follows:

arachidonic acid → 15(S)-HpETE → EXA4 → EXC4 → EXD4 → EXE4
EXA4 is viewed as an intracellular-bound, short-lived intermediate which is rapidly metabolized to the downstream eoxins. The eoxins downstream of EXA4 are secreted from their parent cells and, it is proposed but not yet proven, serve to regulate allergic responses and the development of certain cancers (see eoxins).
References
Eicosanoids | Eoxin C4 | [
"Chemistry",
"Biology"
] | 284 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
44,965,599 | https://en.wikipedia.org/wiki/Eoxin%20D4 | {{DISPLAYTITLE:Eoxin D4}}
Eoxin D4 (EXD4), also known as 14,15-leukotriene D4, is an eoxin. Cells make eoxins by metabolizing arachidonic acid with a 15-lipoxygenase enzyme to form 15(S)-hydroperoxyeicosatetraenoic acid (i.e. 15(S)-HpETE). This product is then converted serially to EXA4, EXC4, EXD4, and EXE4 by LTC4 synthase, an unidentified gamma-glutamyltransferase, and an unidentified dipeptidase, respectively, in a pathway which appears similar if not identical to the pathway which forms leukotrienes, i.e. LTA4, LTC4, LTD4, and LTE4. This pathway is schematically shown as follows:

arachidonic acid → 15(S)-HpETE → EXA4 → EXC4 → EXD4 → EXE4
EXA4 is viewed as an intracellular-bound, short-lived intermediate which is rapidly metabolized to the downstream eoxins. The eoxins downstream of EXA4 are secreted from their parent cells and, it is proposed but not yet proven, serve to regulate allergic responses and the development of certain cancers (see eoxins).
References
Eicosanoids | Eoxin D4 | [
"Chemistry",
"Biology"
] | 284 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
44,965,600 | https://en.wikipedia.org/wiki/Eoxin%20E4 | {{DISPLAYTITLE:Eoxin E4}}
Eoxin E4 (EXE4), also known as 14,15-leukotriene E4, is an eoxin. Cells make eoxins by metabolizing arachidonic acid with a 15-lipoxygenase enzyme to form 15(S)-hydroperoxyeicosatetraenoic acid (i.e. 15(S)-HpETE). This product is then converted serially to EXA4, EXC4, EXD4, and EXE4 by LTC4 synthase, an unidentified gamma-glutamyltransferase, and an unidentified dipeptidase, respectively, in a pathway which appears similar if not identical to the pathway which forms leukotrienes, i.e. LTA4, LTC4, LTD4, and LTE4. This pathway is schematically shown as follows:

arachidonic acid → 15(S)-HpETE → EXA4 → EXC4 → EXD4 → EXE4
EXA4 is viewed as an intracellular-bound, short-lived intermediate which is rapidly metabolized to the downstream eoxins. The eoxins downstream of EXA4 are secreted from their parent cells and, it is proposed but not yet proven, serve to regulate allergic responses and the development of certain cancers (see eoxins).
References
Alpha-Amino acids
Eicosanoids
Thioethers | Eoxin E4 | [
"Chemistry",
"Biology"
] | 292 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
29,486,469 | https://en.wikipedia.org/wiki/Phase%20curve%20%28astronomy%29 | In astronomy, a phase curve describes the brightness of a reflecting body as a function of its phase angle (the arc subtended by the observer and the Sun as measured at the body). The brightness usually refers to the object's absolute magnitude, which, in turn, is its apparent magnitude at a distance of one astronomical unit from the Earth and Sun.
The phase curve is useful for characterizing an object's regolith (soil) and atmosphere. It is also the basis for computing the geometrical albedo and the Bond albedo of the body. In ephemeris generation, the phase curve is used in conjunction with the distances from the object to the Sun and the Earth to calculate the apparent magnitude.
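A hedged sketch of that calculation is given below in Python: the apparent magnitude follows from the absolute magnitude, the Sun–body and observer–body distances, and a phase correction. The simple linear phase coefficient used here is a placeholder assumption, not one of the published planet-specific phase functions.

import math

def apparent_magnitude(abs_mag, r_sun_au, d_obs_au, phase_deg, phase_coeff=0.03):
    """Apparent magnitude from absolute magnitude, distances (AU) and a phase correction.

    The phase correction used here, phase_coeff * alpha (mag/deg), is a simple
    placeholder for a body's measured phase curve, not a published fit.
    """
    distance_term = 5.0 * math.log10(r_sun_au * d_obs_au)
    phase_term = phase_coeff * phase_deg
    return abs_mag + distance_term + phase_term

# Illustrative values only (assumed): H = -0.6, r = 0.39 AU, d = 1.0 AU, alpha = 60 deg.
print(round(apparent_magnitude(-0.6, 0.39, 1.0, 60.0), 2))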
Mercury
The phase curve of Mercury is very steep, which is characteristic of a body on which bare regolith (soil) is exposed to view. At phase angles exceeding 90° (crescent phase) the brightness falls off especially sharply. The shape of the phase curve indicates a mean slope on the surface of Mercury of about 16°, which is slightly smoother than that of the Moon. Approaching phase angle 0° (fully illuminated phase) the curve rises to a sharp peak. This surge in brightness is called the opposition effect because for most bodies (though not Mercury) it occurs at astronomical opposition when the body is opposite from the Sun in the sky. The width of the opposition surge for Mercury indicates that both the compaction state of the regolith and the distribution of particle sizes on the planet are similar to those on the Moon.
Early visual observations contributing to the phase curve of Mercury were obtained by G. Muller in the 1800s and by André-Louis Danjon in the mid-twentieth century. W. Irvine and colleagues used photoelectric photometry in the 1960s. Some of these early data were analyzed by G. de Vaucouleurs, summarized by D. Harris and used for predicting apparent magnitudes in the Astronomical Almanac for several decades. Highly accurate new observations covering the widest range of phase angles to date (2 to 170°) were carried out by A. Mallama, D. Wang and R. Howard using the Large Angle and Spectrometric Coronagraph (LASCO) on the Solar and Heliospheric Observatory (SOHO) satellite. They also obtained new CCD observations from the ground. These data are now the major source of the phase curve used in the Astronomical Almanac for predicting apparent magnitudes.
The apparent brightness of Mercury as seen from Earth is greatest at phase angle 0° (superior conjunction with the Sun) when it can reach magnitude −2.6. At phase angles approaching 180° (inferior conjunction) the planet fades to about magnitude +5 with the exact brightness depending on the phase angle at that particular conjunction. This difference of more than 7 magnitudes corresponds to a change of over a thousand times in apparent brightness.
Venus
The relatively flat phase curve of Venus is characteristic of a cloudy planet. In contrast to Mercury where the curve is strongly peaked approaching phase angle zero (full phase) that of Venus is rounded. The wide illumination scattering angle of clouds, as opposed to the narrower scattering of regolith, causes this flattening of the phase curve. Venus exhibits a brightness surge near phase angle 170°, when it is a thin crescent, due to forward scattering of sunlight by droplets of sulfuric acid that are above the planet's cloud tops. Even beyond 170° the brightness does not decline very steeply.
The history of observation and analysis of the phase curve of Venus is similar to that of Mercury. The best set of modern observations and interpretation was reported by A. Mallama, D. Wang and R. Howard. They used the LASCO instrument on SOHO and ground-based, CCD equipment to observe the phase curve from 2 to 179°. As with Mercury, these new data are the major source of the phase curve used in the Astronomical Almanac for predicting apparent magnitudes.
In contrast to Mercury the maximal apparent brightness of Venus as seen from Earth does not occur at phase angle zero. Since the phase curve of Venus is relatively flat while its distance from the Earth can vary greatly, maximum brightness occurs when the planet is a crescent, at phase angle 125°, at which time Venus can be as bright as magnitude −4.9. Near inferior conjunction the planet typically fades to about magnitude −3 although the exact value depends on the phase angle. The typical range in apparent brightness for Venus over the course of one apparition is less than a factor of 10 or merely 1% that of Mercury.
Earth
The phase curve of the Earth has not been determined as accurately as those for Mercury and Venus because its integrated brightness is difficult to measure from the surface. Instead of direct observation, earthshine reflected from the portion of the Moon not lit by the Sun has served as a proxy. A few direct measurements of the Earth's luminosity have been obtained with the EPOXI spacecraft. While they do not cover much of the phase curve they reveal a rotational light curve caused by the transit of dark oceans and bright land masses across the hemisphere. P. Goode and colleagues at Big Bear Solar Observatory have measured the earthshine and T. Livengood of NASA analyzed the EPOXI data.
Earth as seen from Venus near opposition from the Sun would be extremely bright at magnitude −6. To an observer on Mars, outside the Earth's orbit, our planet would appear most luminous near the time of its greatest elongation from the Sun, at about magnitude −1.5.
Mars
Only about half of the Martian phase curve can be observed from Earth because it orbits farther from the Sun than our planet. There is an opposition surge but it is less pronounced than that of Mercury. The rotation of bright and dark surface markings across its disk and variability of its atmospheric state (including its dust storms) superimpose variations on the phase curve. R. Schmude obtained many of the Mars brightness measurements used in a comprehensive phase curve analysis performed by A. Mallama.
Because the orbit of Mars is considerably eccentric, its brightness at opposition can range from magnitude −3.0 to −1.4. The minimum brightness is about magnitude +1.6 when Mars is on the opposite side of the Sun from the Earth. Rotational variations can elevate or suppress the brightness of Mars by 5% and global dust storms can increase its luminosity by 25%.
Giant planets
The outermost planets (Jupiter, Saturn, Uranus, and Neptune) are so distant that only small portions of their phase curves near 0° (full phase) can be evaluated from the Earth. That part of the curve is generally fairly flat, like that of Venus, for these cloudy planets.
The apparent magnitude of Jupiter ranges from −2.9 to −1.4, Saturn from −0.5 to +1.4, Uranus from +5.3 to +6.0, and Neptune from +7.8 to +8.0. Most of these variations are due to distance. However, the magnitude range for Saturn also depends on its ring system as explained below.
The rings of Saturn
The brightness of the Saturn system depends on the orientation of its ring system. The rings contribute more to the overall brightness of the system when they are more inclined to the direction of illumination from the Sun and to the view of the observer. Wide open rings contribute about one magnitude of brightness to the disk alone. The icy particles that compose the rings also produce a strong opposition surge. Hubble Space Telescope and Cassini spacecraft images have been analyzed in an attempt to characterize the ring particles based on their phase curves.
The Moon
The phase curve of the Moon approximately resembles that of Mercury due to the similarities of the surfaces and the lack of an atmosphere on either body. Clementine spacecraft data analyzed by J. Hillier, B. Buratti and K. Hill indicate a lunar opposition surge. The Moon's apparent magnitude at full phase is −12.7 while at quarter phase it is 21 percent as bright.
Planetary satellites
The phase curves of many natural satellites of other planets have been observed and interpreted. The icy moons often exhibit opposition brightness surges. This behavior has been used to model their surfaces.
Asteroids
The phase curves of many asteroids have also been observed and they too may exhibit opposition surges. Asteroids can be physically classified in this way. The effects of rotation can be very large and have to be factored in before the phase curve is computed. An example of such a study is reported by R. Baker and colleagues.
Exoplanets
Programs for characterizing planets outside of the solar system depend largely on spectroscopy to identify atmospheric constituents and states, especially those that point to the presence of life forms or which could support life. However, brightness can be measured for very distant Earth-sized objects that are too faint for spectroscopic analysis. A. Mallama has demonstrated that phase curve analysis may be a useful tool for identifying planets that are Earth-like. Additionally, J. Bailey has pointed out that phase curve anomalies such as the brightness excess of Venus could be useful indicators of atmospheric constituents such as water, which might be essential to life in the universe.
Criticisms on phase curve modelling
Inferences about regoliths from phase curves are frequently based on Hapke parameterization. However, in a blind test M. Shepard and P. Helfenstein found no strong evidence that a particular set of Hapke parameters derived from photometric data could uniquely reveal the physical state of laboratory samples. These tests included modeling the three-term Henyey-Greenstein phase functions and the coherent backscatter opposition effect. This negative finding suggests that the radiative transfer model developed by B. Hapke may be inadequate for physical modeling based on photometry.
References
Observational astronomy
Radiometry
Scattering, absorption and radiative transfer (optics) | Phase curve (astronomy) | [
"Chemistry",
"Astronomy",
"Engineering"
] | 2,011 | [
"Telecommunications engineering",
" absorption and radiative transfer (optics)",
"Observational astronomy",
"Scattering",
"Astronomical sub-disciplines",
"Radiometry"
] |
29,491,194 | https://en.wikipedia.org/wiki/Radiation%20stress | In fluid dynamics, the radiation stress is the depth-integrated – and thereafter phase-averaged – excess momentum flux caused by the presence of the surface gravity waves, which is exerted on the mean flow. The radiation stresses behave as a second-order tensor.
The radiation stress tensor describes the additional forcing due to the presence of the waves, which changes the mean depth-integrated horizontal momentum in the fluid layer. As a result, varying radiation stresses induce changes in the mean surface elevation (wave setup) and the mean flow (wave-induced currents).
For the mean energy density in the oscillatory part of the fluid motion, the radiation stress tensor is important for its dynamics, in case of an inhomogeneous mean-flow field.
The radiation stress tensor, as well as several of its implications on the physics of surface gravity waves and mean flows, were formulated in a series of papers by Longuet-Higgins and Stewart in 1960–1964.
Radiation stress derives its name from the analogous effect of radiation pressure for electromagnetic radiation.
Physical significance
The radiation stress – mean excess momentum-flux due to the presence of the waves – plays an important role in the explanation and modeling of various coastal processes:
Wave setup and setdown – the radiation stress consists in part of a radiation pressure, exerted at the free surface elevation of the mean flow. If the radiation stress varies spatially, as it does in the surf zone where the wave height reduces by wave breaking, this results in changes of the mean surface elevation called wave setup (in case of an increased level) and setdown (for a decreased water level);
Wave-driven current, especially a longshore current in the surf zone – for oblique incidence of waves on a beach, the reduction in wave height inside the surf zone (by breaking) introduces a variation of the shear-stress component Sxy of the radiation stress over the width of the surf zone. This provides the forcing of a wave-driven longshore current, which is of importance for sediment transport (longshore drift) and the resulting coastal morphology;
Bound long waves or forced long waves, part of the infragravity waves – for wave groups the radiation stress varies along the group. As a result, a non-linear long wave propagates together with the group, at the group velocity of the modulated short waves within the group. While, according to the dispersion relation, a long wave of this length should propagate at its own – higher – phase velocity. The amplitude of this bound long wave varies with the square of the wave height, and is only significant in shallow water;
Wave–current interaction – in varying mean-flow fields, the energy exchanges between the waves and the mean flow, as well as the mean-flow forcing, can be modeled by means of the radiation stress.
Definitions and values derived from linear wave theory
One-dimensional wave propagation
For uni-directional wave propagation – say in the x-coordinate direction – the component of the radiation stress tensor of dynamical importance is Sxx. It is defined as:

$S_{xx} = \overline{\int_{-h}^{\eta} \left( p + \rho\, \tilde{u}_x^2 \right) \text{d}z} \; - \; \frac{1}{2} \rho g \left( h + \bar{\eta} \right)^2,$

where p(x,z,t) is the fluid pressure, $\tilde{u}_x$ is the horizontal x-component of the oscillatory part of the flow velocity vector, z is the vertical coordinate, t is time, z = −h(x) is the bed elevation of the fluid layer, and z = η(x,t) is the surface elevation. Further ρ is the fluid density and g is the acceleration by gravity, while an overbar denotes phase averaging. The last term on the right-hand side, $\tfrac{1}{2}\rho g (h+\bar{\eta})^2$, is the integral of the hydrostatic pressure over the still-water depth.
To lowest (second) order, the radiation stress Sxx for traveling periodic waves can be determined from the properties of surface gravity waves according to Airy wave theory:

$S_{xx} = \left( 2\, \frac{c_g}{c_p} - \frac{1}{2} \right) E,$

where cp is the phase speed and cg is the group speed of the waves. Further E is the mean depth-integrated wave energy density (the sum of the kinetic and potential energy) per unit of horizontal area. From the results of Airy wave theory, to second order, the mean energy density E equals:

$E = \frac{1}{2} \rho g a^2 = \frac{1}{8} \rho g H^2,$

with a the wave amplitude and H = 2a the wave height. Note this equation is for periodic waves: in random waves the root-mean-square wave height Hrms should be used with Hrms = Hm0 / $\sqrt{2}$, where Hm0 is the significant wave height. Then E = $\tfrac{1}{16}$ ρgHm02.
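A minimal Python sketch of this second-order result is given below: it solves the Airy dispersion relation for the wavenumber and evaluates E and Sxx for uni-directional periodic waves. The water density and the example wave height, period and depth are illustrative assumptions.

import math

RHO, G = 1025.0, 9.81  # sea-water density (kg/m^3) and gravity (m/s^2); assumed values

def wavenumber(period_s, depth_m, iterations=50):
    """Solve the Airy dispersion relation omega^2 = g*k*tanh(k*h) by relaxed fixed-point iteration."""
    omega = 2.0 * math.pi / period_s
    k = omega ** 2 / G  # deep-water first guess
    for _ in range(iterations):
        k = 0.5 * (k + omega ** 2 / (G * math.tanh(k * depth_m)))
    return k

def radiation_stress_sxx(height_m, period_s, depth_m):
    """S_xx = (2*cg/cp - 1/2) * E for uni-directional periodic waves (N/m)."""
    k = wavenumber(period_s, depth_m)
    n = 0.5 * (1.0 + 2.0 * k * depth_m / math.sinh(2.0 * k * depth_m))  # cg/cp from Airy theory
    energy = RHO * G * height_m ** 2 / 8.0                              # mean energy density E
    return (2.0 * n - 0.5) * energy

# Illustrative example: H = 1 m, T = 8 s waves in 5 m of water.
print(round(radiation_stress_sxx(1.0, 8.0, 5.0), 1))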
Two-dimensional wave propagation
For wave propagation in two horizontal dimensions the radiation stress is a second-order tensor with components:

$\mathbf{S} = \begin{pmatrix} S_{xx} & S_{xy} \\ S_{yx} & S_{yy} \end{pmatrix}.$

With, in a Cartesian coordinate system (x,y,z):

$S_{xx} = \overline{\int_{-h}^{\eta} \left( p + \rho\, \tilde{u}_x^2 \right) \text{d}z} - \frac{1}{2} \rho g \left( h + \bar{\eta} \right)^2,$

$S_{xy} = S_{yx} = \overline{\int_{-h}^{\eta} \rho\, \tilde{u}_x\, \tilde{u}_y\; \text{d}z},$

$S_{yy} = \overline{\int_{-h}^{\eta} \left( p + \rho\, \tilde{u}_y^2 \right) \text{d}z} - \frac{1}{2} \rho g \left( h + \bar{\eta} \right)^2,$

where $\tilde{u}_x$ and $\tilde{u}_y$ are the horizontal x- and y-components of the oscillatory part of the flow velocity vector.

To second order – in wave amplitude a – the components of the radiation stress tensor for progressive periodic waves are:

$S_{xx} = \left[ \frac{c_g}{c_p} \left( \frac{k_x^2}{k^2} + 1 \right) - \frac{1}{2} \right] E,$

$S_{xy} = S_{yx} = \frac{c_g}{c_p}\, \frac{k_x\, k_y}{k^2}\, E,$

$S_{yy} = \left[ \frac{c_g}{c_p} \left( \frac{k_y^2}{k^2} + 1 \right) - \frac{1}{2} \right] E,$

where kx and ky are the x- and y-components of the wavenumber vector k, with length k = |k| = $\sqrt{k_x^2 + k_y^2}$ and the vector k perpendicular to the wave crests. The phase and group speeds, cp and cg respectively, are the lengths of the phase and group velocity vectors: cp = |cp| and cg = |cg|.
Dynamical significance
The radiation stress tensor is an important quantity in the description of the phase-averaged dynamical interaction between waves and mean flows. Here, the depth-integrated dynamical conservation equations are given, but – in order to model three-dimensional mean flows forced by or interacting with surface waves – a three-dimensional description of the radiation stress over the fluid layer is needed.
Mass transport velocity
Propagating waves induce a – relatively small – mean mass transport in the wave propagation direction, also called the wave (pseudo) momentum. To lowest order, the wave momentum Mw is, per unit of horizontal area:

$\boldsymbol{M}_w = \frac{E}{c_p}\, \frac{\boldsymbol{k}}{k},$

which is exact for progressive waves of permanent form in irrotational flow. Above, cp is the phase speed relative to the mean flow:

$c_p = \frac{\sigma}{k} \qquad \text{with} \qquad \sigma = \omega - \boldsymbol{k} \cdot \bar{\boldsymbol{u}},$

with σ the intrinsic angular frequency, as seen by an observer moving with the mean horizontal flow-velocity $\bar{\boldsymbol{u}}$, while ω is the apparent angular frequency of an observer at rest (with respect to 'Earth'). The difference $\boldsymbol{k} \cdot \bar{\boldsymbol{u}}$ is the Doppler shift.

The mean horizontal momentum M, also per unit of horizontal area, is the mean value of the integral of momentum over depth:

$\boldsymbol{M} = \overline{\int_{-h}^{\eta} \rho\, \boldsymbol{v}\; \text{d}z} = \rho\, \left( h + \bar{\eta} \right) \bar{\boldsymbol{u}} + \boldsymbol{M}_w,$

with v(x,y,z,t) the total flow velocity at any point below the free surface z = η(x,y,t). The mean horizontal momentum M is also the mean of the depth-integrated horizontal mass flux, and consists of two contributions: one by the mean current and the other (Mw) is due to the waves.

Now the mass transport velocity $\boldsymbol{U}$ is defined as:

$\boldsymbol{U} = \frac{\boldsymbol{M}}{\rho\, \left( h + \bar{\eta} \right)}.$

Observe that first the depth-integrated horizontal momentum is averaged, before the division by the mean water depth $(h+\bar{\eta})$ is made.
Mass and momentum conservation
Vector notation
The equation of mean mass conservation is, in vector notation:

$\frac{\partial}{\partial t} \left[ \rho \left( h + \bar{\eta} \right) \right] + \nabla \cdot \left[ \rho \left( h + \bar{\eta} \right) \boldsymbol{U} \right] = 0,$

with $\boldsymbol{U}$ including the contribution of the wave momentum Mw.

The equation for the conservation of horizontal mean momentum is:

$\frac{\partial}{\partial t} \left[ \rho \left( h + \bar{\eta} \right) \boldsymbol{U} \right] + \nabla \cdot \left[ \rho \left( h + \bar{\eta} \right) \boldsymbol{U} \otimes \boldsymbol{U} + \mathbf{S} + \frac{1}{2} \rho g \left( h + \bar{\eta} \right)^2 \mathbf{I} \right] = \rho g \left( h + \bar{\eta} \right) \nabla h + \boldsymbol{\tau}_w - \boldsymbol{\tau}_b,$

where ⊗ denotes the tensor product of $\boldsymbol{U}$ with itself, and τw is the mean wind shear stress at the free surface, while τb is the bed shear stress. Further I is the identity tensor, with components given by the Kronecker delta δij. Note that the right hand side of the momentum equation provides the non-conservative contributions of the bed slope ∇h, as well the forcing by the wind and the bed friction.

In terms of the horizontal momentum M the above equations become:

$\frac{\partial}{\partial t} \left[ \rho \left( h + \bar{\eta} \right) \right] + \nabla \cdot \boldsymbol{M} = 0,$

$\frac{\partial \boldsymbol{M}}{\partial t} + \nabla \cdot \left[ \frac{\boldsymbol{M} \otimes \boldsymbol{M}}{\rho \left( h + \bar{\eta} \right)} + \mathbf{S} + \frac{1}{2} \rho g \left( h + \bar{\eta} \right)^2 \mathbf{I} \right] = \rho g \left( h + \bar{\eta} \right) \nabla h + \boldsymbol{\tau}_w - \boldsymbol{\tau}_b.$
Component form in Cartesian coordinates
In a Cartesian coordinate system, the mass conservation equation becomes:

$\frac{\partial}{\partial t} \left[ \rho \left( h + \bar{\eta} \right) \right] + \frac{\partial}{\partial x} \left[ \rho \left( h + \bar{\eta} \right) U_x \right] + \frac{\partial}{\partial y} \left[ \rho \left( h + \bar{\eta} \right) U_y \right] = 0,$

with $U_x$ and $U_y$ respectively the x and y components of the mass transport velocity $\boldsymbol{U}$.

The horizontal momentum equations are:

$\frac{\partial}{\partial t} \left[ \rho \left( h + \bar{\eta} \right) U_x \right] + \frac{\partial}{\partial x} \left[ \rho \left( h + \bar{\eta} \right) U_x^2 + S_{xx} + \frac{1}{2} \rho g \left( h + \bar{\eta} \right)^2 \right] + \frac{\partial}{\partial y} \left[ \rho \left( h + \bar{\eta} \right) U_x U_y + S_{xy} \right] = \rho g \left( h + \bar{\eta} \right) \frac{\partial h}{\partial x} + \tau_{w,x} - \tau_{b,x},$

$\frac{\partial}{\partial t} \left[ \rho \left( h + \bar{\eta} \right) U_y \right] + \frac{\partial}{\partial x} \left[ \rho \left( h + \bar{\eta} \right) U_x U_y + S_{yx} \right] + \frac{\partial}{\partial y} \left[ \rho \left( h + \bar{\eta} \right) U_y^2 + S_{yy} + \frac{1}{2} \rho g \left( h + \bar{\eta} \right)^2 \right] = \rho g \left( h + \bar{\eta} \right) \frac{\partial h}{\partial y} + \tau_{w,y} - \tau_{b,y}.$
Energy conservation
For an inviscid flow the mean mechanical energy of the total flow – that is the sum of the energy of the mean flow and the fluctuating motion – is conserved. However, the mean energy of the fluctuating motion itself is not conserved, nor is the energy of the mean flow. The mean energy E of the fluctuating motion (the sum of the kinetic and potential energies) satisfies:

$\frac{\partial E}{\partial t} + \nabla \cdot \left[ \left( \bar{\boldsymbol{u}} + \boldsymbol{c}_g \right) E \right] + \mathbf{S} : \left( \nabla \bar{\boldsymbol{u}} \right) = - \varepsilon,$

where ":" denotes the double-dot product, and ε denotes the dissipation of mean mechanical energy (for instance by wave breaking). The term $\mathbf{S} : \left( \nabla \bar{\boldsymbol{u}} \right)$ is the exchange of energy with the mean motion, due to wave–current interaction. The mean horizontal wave-energy transport $\left( \bar{\boldsymbol{u}} + \boldsymbol{c}_g \right) E$ consists of two contributions:

$\bar{\boldsymbol{u}}\, E$ : the transport of wave energy by the mean flow, and

$\boldsymbol{c}_g\, E$ : the mean energy transport by the waves themselves, with the group velocity cg as the wave-energy transport velocity.

In a Cartesian coordinate system, the above equation for the mean energy E of the flow fluctuations becomes:

$\frac{\partial E}{\partial t} + \frac{\partial}{\partial x} \left[ \left( \bar{u}_x + c_{g,x} \right) E \right] + \frac{\partial}{\partial y} \left[ \left( \bar{u}_y + c_{g,y} \right) E \right] + S_{xx} \frac{\partial \bar{u}_x}{\partial x} + S_{xy} \left( \frac{\partial \bar{u}_x}{\partial y} + \frac{\partial \bar{u}_y}{\partial x} \right) + S_{yy} \frac{\partial \bar{u}_y}{\partial y} = - \varepsilon,$

So the radiation stress changes the wave energy E only in the case of a spatially inhomogeneous current field $\bar{\boldsymbol{u}}(x,y)$.
Notes
References
Primary sources
Further reading
Physical oceanography
Water waves | Radiation stress | [
"Physics",
"Chemistry"
] | 1,847 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Water waves",
"Waves",
"Physical oceanography",
"Fluid dynamics"
] |
29,491,519 | https://en.wikipedia.org/wiki/Hypercycle%20%28chemistry%29 | In chemistry, a hypercycle is an abstract model of organization of self-replicating molecules connected in a cyclic, autocatalytic manner. It was introduced in an ordinary differential equation (ODE) form by the Nobel Prize in Chemistry winner Manfred Eigen in 1971 and subsequently further extended in collaboration with Peter Schuster. It was proposed as a solution to the error threshold problem encountered during modelling of replicative molecules that hypothetically existed on the primordial Earth (see: abiogenesis). As such, it explained how life on Earth could have begun using only relatively short genetic sequences, which in theory were too short to store all essential information. The hypercycle is a special case of the replicator equation. The most important properties of hypercycles are autocatalytic growth, competition between cycles, once-for-ever selective behaviour, utilization of small selective advantage, rapid evolvability, increased information capacity, and selection against parasitic branches.
Central ideas
The hypercycle is a cycle of connected, self-replicating macromolecules. In the hypercycle, all molecules are linked such that each of them catalyses the creation of its successor, with the last molecule catalysing the first one. In such a manner, the cycle reinforces itself. Furthermore, each molecule is additionally a subject for self-replication. The resultant system is a new level of self-organization that incorporates both cooperation and selfishness. The coexistence of many genetically non-identical molecules makes it possible to maintain a high genetic diversity of the population. This can be a solution to the error threshold problem, which states that, in a system without ideal replication, an excess of mutation events would destroy the ability to carry information and prevent the creation of larger and fitter macromolecules. Moreover, it has been shown that hypercycles could originate naturally and that incorporating new molecules can extend them. Hypercycles are also subject to evolution and, as such, can undergo a selection process. As a result, not only does the system gain information, but its information content can be improved. From an evolutionary point of view, the hypercycle is an intermediate state of self-organization, but not the final solution.
Over the years, the hypercycle theory has experienced many reformulations and methodological approaches. Among them, the most notable are applications of partial differential equations, cellular automata, and stochastic formulations of Eigen's problem. Despite many advantages that the concept of hypercycles presents, there were also some problems regarding the traditional model formulation using ODEs: a vulnerability to parasites and a limited size of stable hypercycles. In 2012, the first experimental proof for the emergence of a cooperative network among fragments of self-assembling ribozymes was published, demonstrating their advantages over self-replicating cycles. However, even though this experiment proves the existence of cooperation among the recombinase ribozyme subnetworks, this cooperative network does not form a hypercycle per se, so we still lack the experimental demonstration of hypercycles.
Model formulation
Model evolution
Error threshold problem
When a model of replicating molecules was created, it was found that, for effective storage of information, macromolecules on prebiotic Earth could not exceed a certain threshold length. This problem is known as the error threshold problem. It arises because replication is an imperfect process, and during each replication event, there is a risk of incorporating errors into a new sequence, leading to the creation of a quasispecies. In a system that is deprived of high-fidelity replicases and error-correction mechanisms, mutations occur with a high probability. As a consequence, the information stored in a sequence can be lost due to the rapid accumulation of errors, a so-called error catastrophe. Moreover, it was shown that the genome size of any organism is roughly equal to the inverse of mutation rate per site per replication. Therefore, a high mutation rate imposes a serious limitation on the length of the genome. To overcome this problem, a more specialized replication machinery that is able to copy genetic information with higher fidelity is needed. Manfred Eigen suggested that proteins are necessary to accomplish this task. However, to encode a system as complex as a protein, longer nucleotide sequences are needed, which increases the probability of a mutation even more and requires even more complex replication machinery. John Maynard Smith and Eörs Szathmáry named this vicious circle Eigen's Paradox.
According to current estimations, the maximum length of a replicated chain that can be correctly reproduced and maintained in enzyme-free systems is about 100 bases, which is assumed to be insufficient to encode replication machinery. This observation was the motivation for the formulation of the hypercycle theory.
Models
It was suggested that the problem with building and maintaining larger, more complex, and more accurately replicated molecules can be circumvented if several information carriers, each of them storing a small piece of information, are connected such that they only control their own concentration. Studies of the mathematical model describing replicating molecules revealed that to observe a cooperative behaviour among self-replicating molecules, they have to be connected by a positive feedback loop of catalytic actions. This kind of closed network consisting of self-replicating entities connected by a catalytic positive-feedback loop was named an elementary hypercycle. Such a concept, apart from an increased information capacity, has another advantage. Linking self-replication with mutual catalysis can produce nonlinear growth of the system. This, first, makes the system resistant to so-called parasitic branches. Parasitic branches are species coupled to a cycle that do not provide any advantage to the reproduction of a cycle, which, in turn, makes them useless and decreases the selective value of the system. Secondly, it reinforces the self-organization of molecules into the hypercycle, allowing the system to evolve without losing information, which solves the error threshold problem.
Analysis of potential molecules that could form the first hypercycles in nature prompted the idea of coupling an information carrier function with enzymatic properties. At the time of the hypercycle theory formulation, enzymatic properties were attributed only to proteins, while nucleic acids were recognized only as carriers of information. This led to the formulation of a more complex model of a hypercycle with translation. The proposed model consists of a number of nucleotide sequences I (I stands for intermediate) and the same number of polypeptide chains E (E stands for enzyme). Sequences I have a limited chain length and carry the information necessary to build catalytic chains E. The sequence Ii provides the matrix to reproduce itself and a matrix to build the protein Ei. The protein Ei gives the catalytic support to build the next sequence in the cycle, Ii+1. The self-replicating sequences I form a cycle consisting of positive and negative strands that periodically reproduce themselves. Therefore, many cycles of the +/− nucleotide collectives are linked together by the second-order cycle of enzymatic properties of E, forming a catalytic hypercycle. Without the secondary loop provided by catalysis, I chains would compete and select against each other instead of cooperating. The reproduction is possible thanks to translation and polymerization functions encoded in I chains. In his principal work, Manfred Eigen stated that the E coded by the I chain can be a specific polymerase or an enhancer (or a silencer) of a more general polymerase acting in favour of formation of the successor of nucleotide chain I. Later, he indicated that a general polymerase leads to the death of the system. Moreover, the whole cycle must be closed, so that En must catalyse I1 formation for some integer n > 1.
Alternative concepts
During their research, Eigen and Schuster also considered types of protein and nucleotide coupling other than hypercycles. One such alternative was a model with one replicase that performed polymerase functionality and that was a translational product of one of the RNA matrices existing among the quasispecies. This RNA-dependent RNA polymerase catalysed the replication of sequences that had specific motifs recognized by this replicase. The other RNA matrices, or just one of their strands, provided translational products which had specific anticodons and were responsible for unique assignment and transportation of amino acids.
Another concept devised by Eigen and Schuster was a model in which each RNA template's replication was catalysed by its own translational product; at the same time, this RNA template performed a transport function for one amino acid type. Existence of more than one such RNA template could make translation possible.
Nevertheless, in both alternative concepts, the system will not survive due to the internal competition among its constituents. Even if none of the constituents of such a system is selectively favoured, which potentially allows coexistence of all of the coupled molecules, they are not able to coevolve and optimize their properties. In consequence, the system loses its internal stability and cannot live on. The reason for inability to survive is the lack of mutual control of constituent abundances.
Mathematical model
Elementary hypercycle
The dynamics of the elementary hypercycle can be modelled using the following differential equation:

$\dot{x}_i = x_i \left( k_i + \sum_{j=1}^{n} k_{i,j}\, x_j \right) - \frac{x_i}{x}\, \varphi, \qquad i = 1, \ldots, n,$

where

$\varphi = \sum_{j=1}^{n} x_j \left( k_j + \sum_{l=1}^{n} k_{j,l}\, x_l \right).$

In the equation above, xi is the concentration of template Ii; x is the total concentration of all templates; ki is the excess production rate of template Ii, which is a difference between formation fi by self-replication of the template and its degradation di, usually by hydrolysis; ki,j is the production rate of template Ii catalysed by Ij; and φ is a dilution flux, which guarantees that the total concentration is constant. Production and degradation rates are expressed in numbers of molecules per time unit at unit concentration (xi = 1). Assuming that at high concentration x the term ki can be neglected, and, moreover, in the hypercycle, a template can be replicated only by itself and the previous member of the cycle, the equation can be simplified to:

$\dot{x}_i = k_i\, x_i\, x_{i-1} - \frac{x_i}{x}\, \varphi, \qquad \varphi = \sum_{j=1}^{n} k_j\, x_j\, x_{j-1},$

where, according to the cyclic properties, it can be assumed that

$k_i \equiv k_{i,\,i-1} \qquad \text{and} \qquad x_0 = x_n.$
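As a concrete illustration of these dynamics, the following Python sketch integrates the simplified elementary hypercycle equations with forward Euler. It is a minimal sketch under stated assumptions: the number of members, the rate constants and the initial concentrations are illustrative and not taken from the article; the dilution flux keeps the total concentration constant as described above.

import numpy as np

def simulate_hypercycle(k, x0, dt=0.01, steps=20000):
    """Forward-Euler integration of the simplified elementary hypercycle equations."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        growth = k * x * np.roll(x, 1)         # k_i * x_i * x_{i-1}, indices taken cyclically
        phi = growth.sum()                     # dilution flux keeps the total concentration fixed
        x += dt * (growth - x / x.sum() * phi)
    return x

# Illustrative 4-member hypercycle with unequal rate constants and a random start.
rng = np.random.default_rng(0)
k = np.array([1.0, 0.8, 1.2, 0.9])
x_final = simulate_hypercycle(k, rng.uniform(0.5, 1.5, size=4))
print(x_final, x_final.sum())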
Hypercycle with translation
A hypercycle with translation consists of polynucleotides Ii (with concentration xi) and polypeptides Ei (with concentration yi). It is assumed that the kinetics of nucleotide synthesis follows a Michaelis–Menten-type reaction scheme in which the concentration of complexes cannot be neglected. During replication, molecules form complexes IiEi-1 (occurring with concentration zi). Thus, the total concentration of molecules (xi0 and yi0) will be the sum of free molecules and molecules involved in a complex:

$x_i^0 = x_i + z_i, \qquad y_i^0 = y_i + z_{i+1}.$

The dynamics of the hypercycle with translation can be described using a system of differential equations modelling the total number of molecules:

$\dot{x}_i^0 = f_i\, z_i - \frac{x_i^0}{c_I}\, \varphi_x,$

$\dot{y}_i^0 = k_i\, x_i - \frac{y_i^0}{c_E}\, \varphi_y,$

where

$\varphi_x = \sum_{j=1}^{n} f_j\, z_j \qquad \text{and} \qquad \varphi_y = \sum_{j=1}^{n} k_j\, x_j.$
In the above equations, cE and cI are total concentrations of all polypeptides and all polynucleotides, φx and φy are dilution fluxes, ki is the production rate of polypeptide Ei translated from the polynucleotide Ii, and fi is the production rate of polynucleotide Ii synthesised by the complex IiEi-1 (through replication and polymerization).
Coupling nucleic acids with proteins in such a model of hypercycle with translation demanded the proper model for the origin of translation code as a necessary condition for the origin of hypercycle organization. At the time of hypercycle theory formulation, two models for the origin of translation code were proposed by Crick and his collaborators. These were models stating that the first codons were constructed according to either an RRY or an RNY scheme, in which R stands for the purine base, Y for pyrimidine, and N for any base, with the latter assumed to be more reliable. Nowadays, it is assumed that the hypercycle model could be realized by utilization of ribozymes without the need for a hypercycle with translation, and there are many more theories about the origin of the genetic code.
Evolution
Formation of the first hypercycles
Eigen made several assumptions about conditions that led to the formation of the first hypercycles. Some of them were the consequence of the lack of knowledge about ribozymes, which were discovered a few years after the introduction of the hypercycle concept and negated Eigen's assumptions in the strict sense. The primary assumption was that the formation of hypercycles required the availability of both types of chains: nucleic acids forming a quasispecies population and proteins with enzymatic functions. Nowadays, taking into account the knowledge about ribozymes, it may be possible that a hypercycle's members were selected from the quasispecies population and the enzymatic function was performed by RNA. According to the hypercycle theory, the first primitive polymerase emerged precisely from this population. As a consequence, the catalysed replication could exceed the uncatalysed reactions, and the system could grow faster. However, this rapid growth was a threat to the emerging system, as the whole system could lose control over the relative amount of the RNAs with enzymatic function. The system required more reliable control of its constituents—for example, by incorporating the coupling of essential RNAs into a positive feedback loop. Without this feedback loop, the replicating system would be lost. These positive feedback loops formed the first hypercycles.
In the process described above, the fact that the first hypercycles originated from the quasispecies population (a population of similar sequences) created a significant advantage. One possibility of linking different chains I—which is relatively easy to achieve taking into account the quasispecies properties—is that the one chain I improves the synthesis of the similar chain I’. In this way, the existence of similar sequences I originating from the same quasispecies population promotes the creation of the linkage between molecules I and I’.
Evolutionary dynamics
After formation, a hypercycle reaches either an internal equilibrium or a state with oscillating concentrations of each type of chain I, but with the total concentration of all chains remaining constant. In this way, the system consisting of all chains can be expressed as a single, integrated entity. During the formation of hypercycles, several of them could be present in comparable concentrations, but very soon, a selection of the hypercycle with the highest fitness value will take place. Here, the fitness value expresses the adaptation of the hypercycle to the environment, and the selection based on it is very sharp. After one hypercycle wins the competition, it is very unlikely that another one could take its place, even if the new hypercycle would be more efficient than the winner. Usually, even large fluctuations in the numbers of internal species cannot weaken the hypercycle enough to destroy it. In the case of a hypercycle, we can speak of one-for-ever selection, which is responsible for the existence of a unique translation code and a particular chirality.
The above-described idea of a hypercycle's robustness results from an exponential growth of its constituents caused by the catalytic support. However, Eörs Szathmáry and Irina Gladkih showed that an unconditional coexistence can be obtained even in the case of a non-enzymatic template replication that leads to a subexponential or a parabolic growth. This could be observed during the stages preceding a catalytic replication that are necessary for the formation of hypercycles. The coexistence of various non-enzymatically replicating sequences could help to maintain a sufficient diversity of RNA modules used later to build molecules with catalytic functions.
From the mathematical point of view, it is possible to find conditions required for cooperation of several hypercycles. However, in reality, the cooperation of hypercycles would be extremely difficult, because it requires the existence of a complicated multi-step biochemical mechanism or an incorporation of more than two types of molecules. Both conditions seem very improbable; therefore, the existence of coupled hypercycles is assumed impossible in practice.
Evolution of a hypercycle ensues from the creation of new components by the mutation of its internal species. Mutations can be incorporated into the hypercycle, enlarging it if, and only if, two requirements are satisfied. First, a new information carrier Inew created by the mutation must be better recognized by one of the hypercycle's members Ii than the chain Ii+1 that was previously recognized by it. Secondly, the new member Inew of the cycle has to better catalyse the formation of the polynucleotide Ii+1 that was previously catalysed by the product of its predecessor Ii. In theory, it is possible to incorporate into the hypercycle mutations that do not satisfy the second condition. They would form parasitic branches that use the system for their own replication but do not contribute to the system as a whole. However, it was noticed that such mutants do not pose a threat to the hypercycle, because other constituents of the hypercycle grow nonlinearly, which prevents the parasitic branches from growing.
Evolutionary dynamics: a mathematical model
According to the definition of a hypercycle, it is a nonlinear, dynamic system, and, in the simplest case, it can be assumed that it grows at a rate determined by a system of quadratic differential equations. Then, the competition between evolving hypercycles can be modelled using the differential equation:

$\dot{C}_l = q_l\, C_l^2 - \frac{C_l}{C}\, \varphi, \qquad l = 1, \ldots, m,$

where

$\varphi = \sum_{l=1}^{m} q_l\, C_l^2.$
Here, Cl is the total concentration of all polynucleotide chains belonging to a hypercycle Hl, C is the total concentration of polynucleotide chains belonging to all hypercycles, ql is the rate of growth, and φ is a dilution flux that guarantees that the total concentration is constant. According to the above model, in the initial phase, when several hypercycles exist, the selection of the hypercycle with the largest ql value takes place. When one hypercycle wins the selection and dominates the population, it is very difficult to replace it, even with a hypercycle with a much higher growth rate q.
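The following Python sketch, a minimal illustration with assumed values, integrates the competition equation above for two hypercycles and shows the once-for-ever selection just described: an already dominant hypercycle with a lower growth rate q is not displaced by a rarer competitor with a higher q, because the quadratic growth favours whichever hypercycle already has the larger value of q_l C_l.

import numpy as np

def compete(q, c0, dt=0.001, steps=200000):
    """Forward-Euler integration of the hypercycle competition equation."""
    c = np.array(c0, dtype=float)
    for _ in range(steps):
        growth = q * c ** 2
        phi = growth.sum()                 # dilution flux keeps the total concentration constant
        c += dt * (growth - c / c.sum() * phi)
    return c

# Illustrative values: hypercycle 1 has the lower growth rate q but a head start
# in concentration, and still wins the selection.
q = np.array([1.0, 1.3])
print(compete(q, c0=[0.8, 0.2]))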
Compartmentalization and genome integration
Hypercycle theory proposed that hypercycles are not the final state of organization, and further development of more complicated systems is possible by enveloping the hypercycle in some kind of membrane. After evolution of compartments, a genome integration of the hypercycle can proceed by linking its members into a single chain, which forms a precursor of a genome. After that, the whole individualized and compartmentalized hypercycle can behave like a simple self-replicating entity. Compartmentalization provides some advantages for a system that has already established a linkage between units. Without compartments, genome integration would boost competition by limiting space and resources. Moreover, adaptive evolution requires the package of transmissible information for advantageous mutations in order not to aid less-efficient copies of the gene. The first advantage is that it maintains a high local concentration of molecules, which helps to locally increase the rate of synthesis. Secondly, it keeps the effect of mutations local, while at the same time affecting the whole compartment. This favours preservation of beneficial mutations, because it prevents them from spreading away. At the same time, harmful mutations cannot pollute the entire system if they are enclosed by the membrane. Instead, only the contaminated compartment is destroyed, without affecting other compartments. In that way, compartmentalization allows for selection for genotypic mutations. Thirdly, membranes protect against environmental factors because they constitute a barrier for high-weight molecules or UV irradiation. Finally, the membrane surface can work as a catalyst.
Despite the above-mentioned advantages, there are also potential problems connected to compartmentalized hypercycles. These problems include difficulty in the transport of ingredients in and out, synchronizing the synthesis of new copies of the hypercycle constituents, and division of the growing compartment linked to a packing problem.
In the initial works, the compartmentalization was stated as an evolutionary consequence of the hypercyclic organization. Carsten Bresch and coworkers raised an objection that hypercyclic organization is not necessary if compartments are taken into account. They proposed the so-called package model in which one type of a polymerase is sufficient and copies all polynucleotide chains that contain a special recognition motif. However, as pointed out by the authors, such packages are—contrary to hypercycles—vulnerable to deleterious mutations as well as a fluctuation abyss, resulting in packages that lack one of the essential RNA molecules. Eigen and colleagues argued that simple package of genes cannot solve the information integration problem and hypercycles cannot be simply replaced by compartments, but compartments may assist hypercycles. This problem, however, raised more objections, and Eörs Szathmáry and László Demeter reconsidered whether packing hypercycles into compartments is a necessary intermediate stage of the evolution. They invented a stochastic corrector model that assumed that replicative templates compete within compartments, and selective values of these compartments depend on the internal composition of templates. Numerical simulations showed that when stochastic effects are taken into account, compartmentalization is sufficient to integrate information dispersed in competitive replicators without the need for hypercycle organization. Moreover, it was shown that compartmentalized hypercycles are more sensitive to the input of deleterious mutations than a simple package of competing genes. Nevertheless, package models do not solve the error threshold problem that originally motivated the hypercycle.
Ribozymes
At the time of the hypercycle theory formulation, ribozymes were not known. After the discovery of RNA's catalytic properties in 1982, it was realized that RNA had the ability to integrate protein and nucleotide-chain properties into one entity. Ribozymes potentially serving as templates and catalysers of replication can be considered components of quasispecies that can self-organize into a hypercycle without the need to invent a translation process. In 2001, a partial RNA polymerase ribozyme was designed via directed evolution. Nevertheless, it was able to catalyse the polymerization only of chains about 14 nucleotides long, even though it was 200 nucleotides long. The most up-to-date version of this polymerase was shown in 2013. While it has an ability to catalyse polymerization of longer sequences, even of its own length, it cannot replicate itself due to a lack of sequence generality and its inability to traverse secondary structures of long RNA templates. However, it was recently shown that those limitations could in principle be overcome by the assembly of active polymerase ribozymes from several short RNA strands. In 2014, a cross-chiral RNA polymerase ribozyme was demonstrated. It was hypothesized that it offers a new mode of recognition between an enzyme and substrates, which is based on the shape of the substrate, and allows avoiding the Watson-Crick pairing and, therefore, may provide greater sequence generality. Various other experiments have shown that, besides bearing polymerase properties, ribozymes could have developed other kinds of evolutionarily useful catalytic activity such as synthase, ligase, or aminoacylase activities. Ribozymal aminoacylators and ribozymes with the ability to form peptide bonds might have been crucial to inventing translation. An RNA ligase, in turn, could link various components of quasispecies into one chain, beginning the process of a genome integration. An RNA with a synthase or a synthetase activity could be critical for building compartments and providing building blocks for growing RNA and protein chains as well as other types of molecules. Many examples of this kind of ribozyme are currently known, including a peptidyl transferase ribozyme, a ligase, and a nucleotide synthetase. A transaminoacylator described in 2013 has five nucleotides, which is sufficient for a trans-amino acylation reaction and makes it the smallest ribozyme that has been discovered. It supports a peptidyl-RNA synthesis that could be a precursor for the contemporary process of linking amino acids to tRNA molecules. An RNA ligase's catalytic domain, consisting of 93 nucleotides, proved to be sufficient to catalyse a linking reaction between two RNA chains. Similarly, an acyltransferase ribozyme 82 nucleotides long was sufficient to perform an acyltransfer reaction. Altogether, the results concerning the RNA ligase's catalytic domain and the acyltransferase ribozyme are in agreement with the estimated upper limit of 100 nucleotides set by the error threshold problem. However, it was hypothesized that even if the putative first RNA-dependent RNA-polymerases are estimated to be longer—the smallest RNA-dependent polymerase ribozyme reported to date is 165 nucleotides long—they did not have to arise in one step. It is more plausible that ligation of smaller RNA chains performed by the first RNA ligases resulted in a longer chain with the desired catalytically active polymerase domain.
Forty years after the publication of Manfred Eigen's primary work dedicated to hypercycles, Nilesh Vaidya and colleagues showed experimentally that ribozymes can form catalytic cycles and networks capable of expanding their sizes by incorporating new members. However, this is not a demonstration of a hypercycle in accordance with its definition, but an example of a collectively autocatalytic set. Earlier computer simulations showed that molecular networks can arise, evolve and be resistant to parasitic RNA branches. In their experiments, Vaidya et al. used an Azoarcus group I intron ribozyme that, when fragmented, has an ability to self-assemble by catalysing recombination reactions in an autocatalytic manner. They mutated the three-nucleotide-long sequences responsible for recognition of target sequences on the opposite end of the ribozyme (namely, Internal Guide Sequences or IGSs) as well as these target sequences. Some genotypes could introduce cooperation by recognizing target sequences of the other ribozymes, promoting their covalent binding, while other selfish genotypes were only able to self-assemble. In separation, the selfish subsystem grew faster than the cooperative one. After mixing selfish ribozymes with cooperative ones, the emergence of cooperative behaviour in a merged population was observed, outperforming the self-assembling subsystems. Moreover, the selfish ribozymes were integrated into the network of reactions, supporting its growth. These results were also explained analytically by the ODE model and its analysis. They differ substantially from results obtained in evolutionary dynamics. According to evolutionary dynamics theory, selfish molecules should dominate the system even if the growth rate of the selfish subsystem in isolation is lower than the growth rate of the cooperative system. Moreover, Vaidya et al. proved that, when fragmented into more pieces, ribozymes that are capable of self-assembly can not only still form catalytic cycles but, indeed, favour them. Results obtained from experiments by Vaidya et al. gave a glimpse on how inefficient prebiotic polymerases, capable of synthesizing only short oligomers, could be sufficient at the pre-life stage to spark off life. This could happen because coupling the synthesis of short RNA fragments by the first ribozymal polymerases to a system capable of self-assembly not only enables building longer sequences but also allows exploiting the fitness space more efficiently with the use of the recombination process. Another experiment performed by Hannes Mutschler et al. showed that the RNA polymerase ribozyme, which they described, can be synthesized in situ from the ligation of four smaller fragments, akin to a recombination of Azoarcus ribozyme from four inactive oligonucleotide fragments described earlier. Apart from a substantial contribution of the above experiments to the research on the origin of life, they have not proven the existence of hypercycles experimentally.
Related problems and reformulations
The hypercycle concept has been continuously studied since its origin. Shortly after Eigen and Schuster published their main work regarding hypercycles, John Maynard Smith raised an objection that the catalytic support for the replication given to other molecules is altruistic. Therefore, it cannot be selected and maintained in a system. He also underlined hypercycle vulnerability to parasites, as they are favoured by selection. Later on, Josef Hofbauer and Karl Sigmund indicated that in reality, a hypercycle can maintain only fewer than five members. In agreement with Eigen and Schuster's principal analysis, they argued that systems with five or more species exhibit limited and unstable cyclic behaviour, because some species can die out due to stochastic events and break the positive feedback loop that sustains the hypercycle. The extinction of the hypercycle then follows. It was also emphasized that a hypercycle size of up to four is too small to maintain the amount of information sufficient to cross the information threshold.
Several researchers proposed a solution to these problems by introducing space into the initial model either explicitly or in the form of a spatial segregation within compartments. Bresch et al. proposed a package model as a solution for the parasite problem. Later on, Szathmáry and Demeter proposed a stochastic corrector machine model. Both compartmentalized systems proved to be robust against parasites. However, package models do not solve the error threshold problem that originally motivated the idea of the hypercycle. A few years later, Maarten Boerlijst and Paulien Hogeweg, and later Nobuto Takeuchi, studied the replicator equations with the use of partial differential equations and cellular automata models, methods that already proved to be successful in other applications. They demonstrated that spatial self-structuring of the system completely solves the problem of global extinction for large systems and, partially, the problem of parasites. The latter was also analysed by Robert May, who noticed that an emergent rotating spiral wave pattern, which was observed during computational simulations performed on cellular automata, proved to be stable and able to survive the invasion of parasites if they appear at some distance from the wave core. Unfortunately, in this case, rotation decelerates as the number of hypercycle members increases, meaning that selection tends toward decreasing the amount of information stored in the hypercycle. Moreover, there is also a problem with adding new information into the system. In order to be preserved, the new information has to appear near to the core of the spiral wave. However, this would make the system vulnerable to parasites, and, as a consequence, the hypercycle would not be stable. Therefore, stable spiral waves are characterized by once-for-ever selection, which creates the restrictions that, on the one hand, once the information is added to the system, it cannot be easily abandoned; and on the other hand, new information cannot be added.
Another model based on cellular automata, taking into account a simpler replicating network of continuously mutating parasites and their interactions with one replicase species, was proposed by Takeuchi and Hogeweg and exhibited an emergent travelling wave pattern. Surprisingly, travelling waves not only proved to be stable against moderately strong parasites, if the parasites' mutation rate is not too high, but the emergent pattern itself was generated as a result of interactions between parasites and replicase species. The same technique was used to model systems that include formation of complexes. Finally, hypercycle simulation extending to three dimensions showed the emergence of the three-dimensional analogue of a spiral wave, namely, the scroll wave.
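The flavour of these spatial models can be conveyed by a minimal stochastic cellular automaton in which each member of the hypercycle replicates into empty neighbouring sites at a rate that grows with the local presence of its catalytic predecessor. This is only an illustration of the modelling approach; the lattice size, rates and neighbourhood rule are arbitrary choices, and the sketch is not a reproduction of the published models of Boerlijst and Hogeweg or Takeuchi and Hogeweg.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SPECIES = 5                  # hypercycle members I_1..I_5; 0 marks an empty site
L = 60                         # lattice size (periodic boundaries)
P_DEATH = 0.1                  # spontaneous decay of a replicator per update
BASE, CATALYSIS = 0.05, 1.0    # uncatalysed vs. catalysed replication weight

grid = rng.integers(0, N_SPECIES + 1, size=(L, L))   # random initial mixture

def neighbours(x, y):
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

def sweep(grid):
    for _ in range(L * L):                    # one Monte Carlo sweep
        x, y = rng.integers(0, L, size=2)
        if grid[x, y] != 0:                   # occupied site: possible decay
            if rng.random() < P_DEATH:
                grid[x, y] = 0
            continue
        nbrs = neighbours(x, y)               # empty site: neighbours may replicate into it
        occupied = [grid[i, j] for i, j in nbrs if grid[i, j] != 0]
        rng.shuffle(occupied)
        for s in occupied:
            catalyst = s - 1 if s > 1 else N_SPECIES          # predecessor in the cycle
            support = sum(grid[i, j] == catalyst for i, j in nbrs)
            p = (BASE + CATALYSIS * support) / (BASE + CATALYSIS * len(nbrs))
            if rng.random() < p:              # replication is boosted by local catalysis
                grid[x, y] = s
                break
    return grid

for _ in range(100):
    grid = sweep(grid)
print("sites occupied by each member:", [(grid == s).sum() for s in range(1, N_SPECIES + 1)])
```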
Comparison with other theories of life
The hypercycle is just one of several current theories of life, including the chemoton of Tibor Gánti, the (M,R) systems of Robert Rosen, autopoiesis (or self-building) of Humberto Maturana and Francisco Varela, and the autocatalytic sets of Stuart Kauffman, similar to an earlier proposal by Freeman Dyson.
All of these (including the hypercycle) found their original inspiration in Erwin Schrödinger's book What is Life? but at first they appear to have little in common with one another, largely because the authors did not communicate with one another, and none of them made any reference in their principal publications to any of the other theories. Nonetheless, there are more similarities than may be obvious at first sight, for example between Gánti and Rosen. Until recently there have been almost no attempts to compare the different theories and discuss them together.
Last Universal Common Ancestor (LUCA)
Some authors equate models of the origin of life with LUCA, the Last Universal Common Ancestor of all extant life. This is a serious error resulting from failure to recognize that L refers to the last common ancestor, not to the first ancestor, which is much older: a large amount of evolution occurred before the appearance of LUCA.
Gill and Forterre expressed the essential point as follows:
LUCA should not be confused with the first cell, but was the product of a long period of evolution. Being the "last" means that LUCA was preceded by a long succession of older "ancestors."
References
External links
J. Padgett's Hypercycle model implemented in repast
Origin of life
Self-organization | Hypercycle (chemistry) | [
"Mathematics",
"Biology"
] | 6,823 | [
"Biological hypotheses",
"Self-organization",
"Origin of life",
"Dynamical systems"
] |
29,493,295 | https://en.wikipedia.org/wiki/Wave%E2%80%93current%20interaction | In fluid dynamics, wave–current interaction is the interaction between surface gravity waves and a mean flow. The interaction implies an exchange of energy, so after the start of the interaction both the waves and the mean flow are affected.
For depth-integrated and phase-averaged flows, the quantity of primary importance for the dynamics of the interaction is the wave radiation stress tensor.
Wave–current interaction is also one of the possible mechanisms for the occurrence of rogue waves, such as in the Agulhas Current. When a wave group encounters an opposing current, the waves in the group may pile up on top of one another, which can build into a rogue wave.
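In the usual linear approximation, this mechanism can be described kinematically through the Doppler-shifted dispersion relation for gravity waves on a slowly varying current $\mathbf{U}$ (with $k$ the wavenumber, $h$ the water depth and $g$ the gravitational acceleration): $\omega = \sigma + \mathbf{k}\cdot\mathbf{U}$, where the intrinsic frequency satisfies $\sigma^{2} = g k \tanh(kh)$. For waves meeting an opposing current the absolute frequency $\omega$ is conserved, so $\sigma$ must increase, the wavelength shortens and the waves steepen; a sufficiently strong opposing current can block the group, concentrating wave energy.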
Classification
Five major sub-classes of wave–current interaction can be identified:
interaction of waves with a large-scale current field, with slow – as compared to the wavelength – two-dimensional horizontal variations of the current fields;
interaction of waves with small-scale current changes (in contrast with the case above), where the horizontal current varies suddenly, over a length scale comparable with the wavelength;
the combined wave–current motion for currents varying (strongly) with depth below the free surface;
interaction of waves with turbulence; and
interaction of ship waves and currents, such as in the ship's wake.
See also
generalized Lagrangian mean
rip current
Footnotes
References
Physical oceanography
Water waves | Wave–current interaction | [
"Physics",
"Chemistry"
] | 271 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Water waves",
"Waves",
"Physical oceanography",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
29,496,018 | https://en.wikipedia.org/wiki/North%20Atlantic%20Radio%20System | The North Atlantic Radio System (NARS) was a chain of 5 tropospheric scatter communication sites. It was an expansion of the former Distant Early Warning Line (DEW Line). NARS was built for the United States Air Force (USAF) by Western Electric (AT&T) and its sites were maintained under contract by ITT Federal Electric Corporation (now ITT Federal Services Corp.). All NARS stations were supervised and controlled by the USAF, by agreement with the Canadian and Danish Governments.
Historical information
In the early 1950s, the Arctic environment and weather conditions of northern Canada made the construction and manning of HF and VHF radio or microwave relay stations almost impossible. However, there was an urgent need for reliable data and communication links from the radar stations in the north to their control centers in the south.
The initial phase used tropospheric scatter radio communication (troposcatter). Powerful radio signals in the kilowatt range were scattered off the troposphere towards distant receiving stations, where gigantic 'billboard'-like antennas picked up just a fraction of the forward-scattered signal, meaning that antenna and equipment maintenance and alignment had to be carried out very carefully.
Construction of this system, code-named Polevault, started in 1954; it became operational in 1955 and was extended from 1956. The troposcatter system was supported by an undersea data cable stretching from Thule Air Base, Greenland, via Cape Dyer to Newfoundland, Canada. The undersea cable system, however, proved unreliable, being cut many times by trawlers and icebergs, so a better data transfer system was needed.
From 1962, the new Semi-Automatic Ground Environment (SAGE) system led to a gradual shutdown of the Polevault system. SAGE consisted of large computers and associated networking equipment that coordinated data from many radar sites and processed it to produce a single unified image of the airspace over a wide area. SAGE directed and controlled the North American Air Defense Command (NORAD) response to a Soviet air attack, operating in this role from the late 1950s into the 1980s.
Construction of the large Ballistic Missile Early Warning System (BMEWS) radars at Thule Air Base and at Fylingdales (UK), and of another radar chain through Greenland, Iceland and the Faroe Islands, also called for new powerful troposcatter communication stations linking all radar sites to the NORAD headquarters in Colorado (US). This communication chain became known as the North Atlantic Radio System (NARS).
Equipment used
The NARS used AN/FRC-39(V) and AN/FRC-56(V) transmitting and receiving equipment, manufactured by Radio Engineering Laboratories, which could be configured for 1 kW, 10 kW or 50 kW power output depending on the range and/or quality of signal required.
NARS sites were configured for 10 kW output, with the exception of site 41 (in both directions) and site 42's connection to site 41, which operated at 50 kW. Each set consisted of 2 transmitters and 4 receivers, for redundancy and to boost signal-to-noise ratios, using vacuum tube technology which proved time-consuming to maintain at high levels of efficiency. The 50 kW shots used 120 ft antennas and the 10 kW shots used 60 ft antennas. Equipment was configured for quad diversity (polarity diversity, space diversity, frequency diversity and combiner diversity), as was typical for most troposcatter communications over difficult paths.
Levels of service proved extremely variable, with the effects of weather and temperamental equipment frequently causing loss of connection. Improvements were gained through better maintenance procedures but did not change significantly until the introduction of solid-state technology. By the time the system was closed down in 1992, after 30 years of service, it was able to transmit at 9.6 kbit/s, a high data rate for that era.
With the advent of satellite communications (SATCOM), the days of the troposcatter networks were over, but NARS was closed down early due to the loss of the DYE-2 DEW Line station in 1988, severing the network's connection with the rest of the DEW Line. Site 46 also had to close to make way for the new BMEWS Phased Array Radar at RAF Fylingdales.
NARS sites
From 1960 the troposcatter sites were built as:
Site 41 – Keflavic Air Base Keflavik at grid 64°02′07″N 22°39′16″W in the SW of Iceland 1952–1992. The west facing antennas/building was Dye 5 and Tech Control. This shot was a 50kW shot to Dye 4 in Greenland. The east facing antennas/building shot to NARS site 42 at Hofn Iceland and was also a 50kW shot. The dividing line between the Dew Line system and the NARS system was between the two buildings. Today the site is closed, the Tropo towers are gone and all buildings have been removed.
Site 42 – Höfn, Iceland, at grid 64°14'38"N 14°57'50"W, was a dual-purpose troposcatter radio communications relay site sharing its location with the USAF/NATO radar station at Hofn from 1961 until its closure in 1992. The site was linked to Site 41 (Keflavik, Iceland) and to Site 43 at Sornfelli, Tórshavn (Faroe Islands) by 235 and 292 mile shots respectively. The Keflavik shot used 2x 120 ft antennas operating at 50 kilowatts, while the Sornfelli (Tórshavn) shot used 2x 60 ft antennas operating at 10 kilowatts. Little known is that this site was also the entry point for the SOSUS system. When satellite communications made the troposcatter system obsolete, Site 42 was shut down along with the rest of the NARS system in 1988. After remaining idle through the end of manned operations at Hofn Air Station, the troposcatter antennas and support buildings were demolished in the mid-1990s with the rest of the Air Station. The concrete blocks for the billboard antennas and feed horns still remain.
Site 43 – Tórshavn, Faroe Islands Sornfelli Mtn at grid 62°4'1"N 6°58'0"W was a Danish installation (Island Command Faroes) and NATO early warning radar system consisting of 2 radars until closure in 2002. One of the radars is currently still operating as a civilian airtraffic control radar. This site had Tropo-scatter shots to site 44 and to site 42
Site 44 – RAF Mormond Hill at grid 57°36'13"N 2°1'58"W in the NE of Scotland was home to several troposcatter antennas respectively operated by USAF - providing comms to and from RAF Buchan via site 46 - the British Army and British Telecom. After USAF closure the site was transferred to the MoD in 1993 and is now used for commercial British Telecom ops.
Site 46 – RAF Fylingdales at grid 54°21'32"N 0°39'50"W station was built by the Radio Corporation of America (RCA) in 1962, and originally maintained by RCA but later was operated and maintained by ITT/FELEC. This site consisted of a 10kW shot to site 44 and a 10kW shot south that eventually interfaced with the 486L system and the rest of Europe
See also
Radio propagation
Microwave
ACE High - Cold war era NATO European troposcatter network
White Alice Communications System - Cold war era Alaskan tropospheric communications link
List of White Alice Communications System sites
TV-FM DX
List of DEW Line Sites
Distant Early Warning Line
References
External links
Sornfelli radar
Eyes and Ears of the arctic
Telecommunications equipment of the Cold War
Radio frequency propagation
Tropospheric scatter systems
Military radio systems | North Atlantic Radio System | [
"Physics"
] | 1,629 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves"
] |
29,497,778 | https://en.wikipedia.org/wiki/Omrania%20and%20Associates | Omrania and Associates (), also known as Omrania, is an international architectural, engineering, and urban planning firm based in Riyadh, Saudi Arabia. Founded in 1973, it specializes in the design of contextual and high-performance design projects.
The firm has designed a diverse range of buildings and infrastructure projects in Saudi Arabia, as well as in the Middle East, Europe, and North Africa. Omrania's best-known projects include the Kingdom Centre (Kingdom Tower), the Public Investment Fund (PIF) (formerly Capital Market Authority Headquarters) Tower, and the Aga Khan Award-winning Tuwaiq Palace in Riyadh's Diplomatic Quarter. The firm's most recent high-profile project in development is the King Salman Park in Riyadh, which will become the largest city park in the world.
With approximately 500 employees, Omrania has two offices in Riyadh and additional offices in Jeddah and Amman, Jordan. The four office teams include professionals from more than 30 countries. The company is led by a board of directors representing all design disciplines and administration units.
On November 14, 2023, Egis announced that it had completed the purchase of Omrania.
History and growth
Omrania was founded when architects Basem Al-Shihabi and Nabil Fanous collaborated to win an international design competition for the General Organization for Social Insurance (GOSI) Headquarters in Riyadh. In order to design and supervise the building's construction, they opened a small office that eventually grew into a large, multi-disciplinary practice of Omrania. Since then, Omrania has become a major contributor to Riyadh’s urban expansion, as the capital city grew from 150,000 people in 1960 to 7.6 million by 2017.
In 1980, Omrania established a branch office in London, designing international projects such as the Tuwaiq Palace, the Radwa 4 planned community in Yanbu, Saudi Arabia, and the interior renovation of historic Four Millbank in the City of Westminster, London.
In addition to designing numerous corporate headquarters, one of the firm's key early projects was the Tuwaiq Palace, a diplomatic club in Riyadh completed in 1985. In collaboration with Buro Happold and Frei Otto, Omrania's design drew from vernacular architectural forms such as the tent and fortification, adapted to the desert climate.
Omrania continued to expand its portfolio of mixed-use projects throughout the 1990s. In 1999, in conjunction with Ellerbe Becket, Omrania designed and supervised the construction of Riyadh's Kingdom Centre; a shopping center, hotel, office, and residential complex with a 300-meter (984-feet) tower and skybridge overlooking Riyadh.
In the 21st century, the firm has increased its portfolio of urban planning and public space designs. Omrania designed (with Aukett Fitzroy Robinson) Riyadh's Salam Park (2003), a 25-hectare public park that offers an oasis of urban green space. Omrania also completed a comprehensive Olayya-Batha Corridor planning and transportation study with Perkins + Will and Dornier Consulting.
While most of Omrania's completed works are within the Kingdom of Saudi Arabia, the company opened branch offices in Bahrain and Jordan and, from this expanded base of operations, has completed projects in Tunisia, Yemen, Lebanon, Qatar, the United Arab Emirates, Jordan, and Bahrain. The firm also was an equity holder from 1992 to 2004 in Chovet Engineering, a French manufacturing and process engineering company.
In 2013, Omrania was selected—along with architects Zaha Hadid Architects, Snøhetta, and Gerber Architekten—to design one of the four main intermodal transit hubs of the new Arriyadh Metro (Riyadh Metro) system. Omrania's Arriyadh Metro Western Station, when completed in 2020, will incorporate public gardens and community space as well as a fully enclosed intermodal transit hub serving the new Metro and express bus network. In 2018, Omrania's newly completed Grand Mosque in King Abdullah Financial District (KAFD) in Riyadh gained international recognition for its angular geometric forms and delicate treatment of light. The mosque's distinctive design reflects its modern context, the arid environment of Saudi Arabia, and the traditions of Islam. In 2019, the firm was master planning 25 new residential communities covering 45 square km (17.3 square mi.) in Saudi Arabia's western region on behalf of the nation's Ministry of Housing, all with pedestrian-friendly streetscapes and sustainable infrastructure.
The firm's largest skyscraper is the 385-meter (1,263-foot) Public Investment Fund (PIF) Tower (formerly known as CMA Tower) designed by Omrania and HOK in a joint venture. The tower is the tallest building in Riyadh and the anchor of the city's King Abdullah Financial District (KAFD). Its hexagonal plan tapers inward and outward as it rises, providing open floor plans and a distinctively crystalline profile on the skyline, "inspired from geologic formations polished by the hand of man," according to the architects. Additional Omrania projects completed in 2019 include the Riyadh Hilton Hotel & Residences and the Radisson Blu Hotel & Residences in Riyadh's Diplomatic Quarter. The 2019 announcement of Omrania's role in designing the future King Salman Park, said to be the world's largest city park, placed the firm once again in the international spotlight.
Omrania has teamed with the Center for the Study of the Built Environment since 2008 to sponsor the Omrania | CSBE Student Award for Architectural Design, which recognizes outstanding design projects by graduating students in architecture across the Middle East.
Omrania and sustainable design
The new headquarters campus of the Saudi Electricity Company (SEC) — one of the firm's current large-scale projects — includes vast rooftop solar arrays, computer-modeled shading systems and other passive design measures to reduce energy consumption. This "green building" project has also been described in the architectural media as a healthy building, "designed with the ultimate comfort of its employees in mind.”
Other examples of Omrania's sustainable design include high-performance façade systems on the PIF Tower, Waha Office Building, and the Radisson Blu Diplomatic Quarter, all in Riyadh. The PIF Tower, designed with HOK to achieve LEED-NC Gold certification, has an external layer of fins, gantries, and perforated panels that provide enhanced shade, minimize internal cooling loads, and reduce energy costs. A photovoltaic array further boosts energy performance. The King Abdullah Financial District (KAFD) in Riyadh, the site of Omrania's PIF Tower and Grand Mosque, was the largest green development in the world seeking green building accreditation in 2011 and is set to become the world's first LEED-certified district.
Major projects
Cultural and diplomatic
Grand Mosque, King Abdullah Financial District, Riyadh (2017).
Royal Embassy of the Kingdom of Saudi Arabia, Amman, Jordan (2014).
Tuwaiq Palace, Diplomatic Quarter, Riyadh (1985).
Major office building
PIF Headquarters Tower (formerly CMA Tower), KAFD, Riyadh (2018).
GOSI Office Park (2013).
Kingdom Centre, Riyadh (2001).
NCCI Headquarters, Riyadh (1998).
Gulf Cooperation Council Headquarters, Riyadh (1987).
Four Millbank, London, UK (1988).
Transportation
Western Hub Station, Arriyadh Metro, Riyadh (2020)
Streetscape redesign, Prince Mohammed Bin Abdulaziz Street (2004)
Hospitality and leisure
KAAR Gateway Development (2022).
Hilton Riyadh Hotel and Residences (2019).
Radisson Blu Hotel and Residences, Diplomatic Quarter, Riyadh (2019).
Healthcare
Kingdom Hospital, Riyadh (2001).
Master planning
SUKNA Living Community, Asfahan, Jeddah, KSA (2020)
Ministry of Housing New Town Program, KSA (ongoing)
KA-CARE, Southwest Riyadh, KSA (2010).
Radwa 4 Community Development, Yanbu, KSA (1984)
Public spaces (landscape architecture)
King Salman Park, Riyadh (2024).
Prince Mohammed Bin Abdulaziz Street Revitalisation, Riyadh (2004).
Salam Park, Riyadh (2003).
Diplomatic Quarter Internal Landscaping, Riyadh (1986).
Interiors
Four Seasons Hotel, Riyadh (2001).
Kingdom Centre Mall, Riyadh. (2001).
SAMBA Offices, Riyadh. (2002).
Services
Architectural design
Urban design
Master planning
Landscape architecture
Interior design
Structural and building services engineering (mechanical, electrical, plumbing and fire protection)
Civil engineering and utilities infrastructure
Transportation engineering
Value engineering
Quantity surveying
Construction tendering
Construction contract administration and site supervision
Board of directors
Basem Al-Shihabi: managing director
Othman Al-Washmi: director — Saudi Arabia
Abdulsalam Al-Haddad: marketing and business development director
Majdi El-Shami: technical director
Zouheir Kodeih: head of construction supervision
Mahmoud Abughazal: head of architecture and interior design departments
Mutasem Diab: director — Jordan
Rukn Eldeen Mohammed: senior project manager
Maher Shkoukani: senior project manager
Hisham Al-Weher: corporate financial manager
References
External links
Official website
2023 mergers and acquisitions
Architecture firms of Saudi Arabia
Companies based in Riyadh
International engineering consulting firms
Design companies established in 1973
Saudi Arabian companies established in 1973 | Omrania and Associates | [
"Engineering"
] | 2,016 | [
"Engineering consulting firms",
"International engineering consulting firms"
] |
30,921,568 | https://en.wikipedia.org/wiki/ORiN | ORiN (Open Robot/Resource interface for the Network) is a standard network interface for FA (factory automation) systems. The Japan Robot Association proposed ORiN in 2002, and the ORiN Forum develops and maintains the ORiN standard.
Background
The installation of PC (personal computer) applications in factories has increased dramatically in recent years. Various types of application software, such as production management systems, process management systems, operation monitoring systems and failure analysis systems, have become vital to factory operation and indispensable to the manufacturing system.
However, most of these software systems are only compatible with specific models or specific manufacturers of FA equipment, because the software is custom made for a particular vendor-specific network or protocol. Once this type of application is installed in a factory, if there are no resident software engineers for the system, improvement of the system stops, its cost-effectiveness worsens, and the total value of the system deteriorates.
Another recent problem in production is the rapid increase in product demand at the initial stage of a product's release. Manufacturers lose potential profit if they cannot meet this demand. To cope with the problem, the manufacturing industry is trying to achieve a rapid vertical start-up of production, and high re-usability of both hardware and software is the key to this goal.
To solve these problems, ORiN was developed as a standard PC application platform.
Outline
ORiN was originally developed as a standard platform for robot applications. It has since become a manufacturing application platform for handling a wider range of resources, including robots and other FA devices such as programmable logic controllers (PLC) and numerical control (NC) systems, as well as more generic resources such as databases and local file systems. ORiN specifications cover software only and are independent of hardware. Therefore, ORiN can be smoothly integrated with other existing technologies simply by developing software. By using ORiN, the development of manufacturer-independent and model-independent applications becomes easy.
Through ORiN, the development of various application software and the construction of multi-vendor systems by third-party companies are expected. On the economic side, increased manufacturing competitiveness, expansion of the FA market, advancement of the FA software industry, and the creation of an FA engineering industry are also anticipated.
Features
ORiN is independent from hardware, and all ORiN specifications are for software. ORiN (Version 2) is composed of the following three key technology specifications.
CAO (Controller Access Object), standard program interface specifications : Specifications to facilitate generalization of application software
CRD (Controller Resource Definition), standard data schema specifications : Specifications to facilitate data exchange between application software
CAP (Controller Access Protocol), standard communication protocol : Protocol for communication between FA devices and applications Three types of CAP are defined: CAP (SOAP), e-CAP (HTTP), b-CAP (TCP/UDP).
With these three key standard technologies, ORiN provides the following features.
Unified accessing model and data representation
Variable and file based access to the resources in the device
Applicable to various devices in the factory
No device modification is required for ORiN connection
XML data representation to cooperate with other systems
Easy device access over Internet with simple parameter setup
Configurable application interface
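The vendor-independence that the CAO/CRD/CAP layering aims at can be illustrated with a short, purely hypothetical sketch of a provider-style abstraction. The class and method names below are invented for illustration only and are not the actual ORiN or CAO programming interface.

```python
from abc import ABC, abstractmethod

class DeviceProvider(ABC):
    """Hypothetical provider interface: each vendor supplies one implementation,
    while applications are written only against this abstract class."""

    @abstractmethod
    def read_variable(self, name: str):
        ...

    @abstractmethod
    def write_variable(self, name: str, value) -> None:
        ...

class SimulatedControllerProvider(DeviceProvider):
    """Stand-in for a vendor-specific provider (e.g. a robot or PLC controller)."""

    def __init__(self):
        self._vars = {"MODE": "IDLE", "SPEED": 0}

    def read_variable(self, name):
        return self._vars[name]

    def write_variable(self, name, value):
        self._vars[name] = value

def monitor(provider: DeviceProvider) -> dict:
    """A monitoring application written against the generic interface works
    unchanged with any provider, which is the kind of manufacturer- and
    model-independence ORiN aims for."""
    return {name: provider.read_variable(name) for name in ("MODE", "SPEED")}

print(monitor(SimulatedControllerProvider()))
```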
History
ORiN project started as a part of standardization activities in Japan Robot Association (JARA).
With support from the New Energy and Industrial Technology Development Organization (NEDO), the ORiN system was developed from 1999 over a three-year period.
Member companies of JARA participated in field robot connection tests at the International Robot Exhibition (IREX) in 1999 and 2001. ORiN Version 1.0 was formally created in 2002.
ORiN Forum was created in 2002 to promote and advance the ORiN standard. (Key members of the Forum: FA device manufacture, software development company, system integrator company, etc.)
ORiN Forum member companies participated various ORiN field application tests for three years. The test results were reflected to the ORiN Version 2.0 (created in 2005). ORiN2 SDK was released as a supported software product in 2005.
ORiN application was proposed as an annex of ISO20242-Part4. The DIS of the standard was approved in July 2010.
ORiN related terms
ORiN SDK
ORiN SDK is a software development kit for ORiN Version 1.0, including RAO, standard providers and development tools. The SDK is used to develop original RAO providers and ORiN applications, and the SDK is also used as ORiN execution environment. ORiN Forum distributes the SDK, but the Forum plans to stop the distribution and technical support of the SDK at the end of March 2011.
ORiN2 SDK
ORiN 2 SDK is a software development kit for ORiN Version 2.0. The SDK provides standard interface specifications for applications and devices, standard data schema, and standard communication protocol. Development of provider module (extension module) based on the specifications is also possible with the SDK. DENSO WAVE INCORPORATED, a subsidiary of Denso Corporation, distributes and supports the SDK.
See also
Application software
Architecture
Interface (computing)
Computer network
Software framework
Middleware
XML
External links
ORiN Forum
Industrial computing
2002 establishments in Japan | ORiN | [
"Technology",
"Engineering"
] | 1,075 | [
"Industrial computing",
"Industrial engineering",
"Automation"
] |
30,923,636 | https://en.wikipedia.org/wiki/Weld-On | Weld-On is a division of IPS Corporation, a manufacturer of solvent cements, primers, and cleaners for PVC, CPVC, and ABS plastic piping systems. Weld-On products are commonly used for joining plastic pipes and fittings. Weld-On also manufactures specialty products from repair adhesives for leaking pipes, pipe thread sealants / joint compounds, to test plugs for pipeline pressure testing. Their products are most commonly utilized in the irrigation, industrial, pool & spa, electrical conduit, and plumbing industries.
Headquartered in California, Weld-On has operations throughout the United States, as well as in China, and a worldwide network of sales representatives and distributors.
History
1954: Weld-On founded and established as a division of IPS Corporation.
1955: Weld-On developed the first clear, reactive acrylic adhesive that met U.S. Department of Defense military specification (MIL-SPEC) for use on aircraft canopies
1958: IPS Corporation began endorsing the solvent welding technique, a type of plastic welding, and patented Weld-On solvent cement for use on plastic pipe and fittings.
1978: Weld-On developed and released color-match acrylic adhesive for bonding solid surface countertops.
1992: Weld-On formulated and introduced the first low volatile organic compound (VOC) emission solvent cement in response to growing air quality concerns.
1997: Weld-On 724 was introduced as the first high-strength solvent cement for chemical resistant CPVC plastic joints in the market and formulated for use in a variety of harsh chemical applications such as hypochlorites, acids and caustics.
1999: Weld-On began offering environmentally-responsible, all low-VOC solvent cements, primers and cleaners, and began phasing out regular VOC products.
References
Cement
Piping
Plumbing
Manufacturing companies based in California
American companies established in 1954
Manufacturing companies established in 1954
Companies based in Los Angeles County, California
Compton, California | Weld-On | [
"Chemistry",
"Engineering"
] | 418 | [
"Building engineering",
"Chemical engineering",
"Plumbing",
"Construction",
"Mechanical engineering",
"Piping"
] |
30,928,917 | https://en.wikipedia.org/wiki/Chemfluence | Chemfluence is a national level technical symposium of the Department of Chemical Engineering, Alagappa College of Technology, Anna University, India. Started in 1994 as a college level symposium, it is now in its 29th year. Paper presentations, poster presentations, guest lectures, workshops and events form an integral part of the symposium. The symposium mainly aims at nourishing budding chemical engineers with knowledge of core concepts and providing an opportunity to showcase their talents. With more than 20 events across 3 days, it is one of the most prestigious tech events of South India. It is also one of the very few symposiums in India to host a cultural fest in association with university departments. Chemfluence is conducted annually by the Association of Chemical Engineers (ACE), the official student body of Department of Chemical Engineering, Anna University.
About
Chemfluence, is a national level technical symposium (Technical and Cultural Events) organized annually by the Association of Chemical Engineers, Department of Chemical Engineering, Alagappa College of Technology, Anna University every year since 1994.
With an array of events spread across three days it seeks to provide a platform for budding Chemical Engineers across the nation to show off their technical prowess and to attain their full intellectual potential.
With innovation as inspiration and technical knowledge as a tool, Chemfluence aims at bringing a complete transformation to the very grassroots of the field. Chemfluence gives an opportunity for engineering students to look beyond their course and curriculum, to roll back their sleeves, with the technical wand in their hands and do some real magic.
History
Chemfluence was started in the year 1994 by the students of The Department of Chemical Engineering of Anna University. Since then, Chemfluence had been a massive success and been attracting more participants and has become a phenomenal hit among the events by other Chemical Engineers around the country.
Chemfluence 2013
Chemfluence for the academic year 2012-2013 was conducted by Department of Chemical Engineering, from 27 February to 1 March 2013.
Some of the major events conducted were:
Paper Presentation
Poster Presentation
Live Model
Contraption
Math Modelling
Monetarist
Chemfluence 2014
Chemfluence 2014 was a six-day technical extravaganza organized by the students of Department of Chemical Engineering from 25 February to 3 March 2014 under the theme of Energy and Environment. Several Workshops and Guest lectures were conducted to impart practical and technical knowledge to the budding Engineers. A National Conference on Energy and Environment, EECON'14 was conducted as a part of Chemfluence'14 3 March 2014.
Workshops
Acknowledging the mindset of budding engineers to assimilate concepts in a practical way, Chemfluence'14 was a platform for several Workshops such as
Programmable Logic Controllers
Industrial Safety & Risk Analysis
Statistical Tools for Researchers & Engineers
Instrumental Methods of Analysis
Computational Fluid Dynamics
MATLAB
EECON'14
As part of Chemfluence'14, the final day of the symposium was reserved for EECON'14 - the first National Conference on Energy & Environment. Being the first such conference to be organised in the university by the student fraternity, it began with the EECON'14 souvenir release and a keynote address by Dr. G. Sekaran, Chief Scientist, CLRI, on handling wastes in the tanning sector, at the Colin Mckenzie Auditorium. Subsequently, six paper presentation sessions and a poster presentation session were held on a range of topics such as Solid Waste Management, Air Pollution Control and Modeling of Environmentally Benign Processes. Chairpersons for the sessions included Dr. T. Renganathan, Assistant Professor, IIT Madras, Dr. S. Kanmani, Director, CTDT, Anna University, and Dr. M. K. Gowthaman, CLRI, amongst many others.
Chemfluence 2016
Chemfluence for the academic year 2015-2016 was held from 18 March 2016 to 22 March 2016, based on the theme of waste management. GMW 2k16, a national conference on Global Management of Waste, was conducted by the organizing committee consisting of the students and faculty of the Department of Chemical Engineering. Today's industries need innovative ideas and cost-cutting techniques to increase demand for their products in a highly competitive global market with fast-depleting resources. Waste management is one of the most serious concerns in industry and needs to be tackled with highly efficient, out-of-the-box ideas, thus, in a way, transforming waste into wealth. With this in mind, Chemfluence ’16 had waste management as its theme. The Consortium of Chemical Engineers conducted the following:
An educational fair
An alumni meet
Guest lectures and workshops
National Conference - Global Management of Waste (GMW 2k16)
Conference on Global Management of Waste, 'GMW 2k16'
GMW 2k16 is the national conference on Global Management of Waste 2016. It is a nationwide congregation of engineering graduates, research scholars and professionals to deal with waste management. The aim of the conference is to generate new perspectives on ways to handle waste. Waste generation is a pressing issue. In India, 94% of waste is being dumped on land. Waste includes Municipal, Agricultural, Industrial, Electronic, Fuel Waste and Land waste. Energy can also be generated from waste. Though methods such as conversion of waste water to ethanol and Vermicomposting are available, a large scale, cost effective and feasible solution is very much required. And this is exactly what we hope to achieve through GMW 2k16.
GMW 2k16 will provide a unique stage for establishing new waste management technologies that would "Reform Refuse to Riches". This year, for the first time in India, a video presentation will be conducted instead of the usual poster presentation. This is a common practice at international conferences, and GMW 2k16 will follow suit.
Topics
Solid waste management
Waste to energy
Bioremediation of waste
Wastewater treatment and reclamation
Chemfluence 2020
Chemfluence 2020 was the Silver Jubilee edition of the technical symposium of the Department of Chemical Engineering of ACTech, Anna University.
Featuring over 10+ events (both technical and non-technical) in a challenging environment is a salient feature of every Chemfluence.
The three-day symposium was conducted from 24 to 26 February 2020, wherein students from more than 25 colleges took part.
Chemfluence 2021
Chemfluence 2021 was the 26th installment of the technical symposium of the Department of Chemical Engineering, Anna University. The event was organized by the student committee, Association of Chemical Engineers.
It was a four-day event conducted from March 15 to March 18 in online mode due to the Covid-19 Pandemic.
The title sponsor for this edition of Chemfluence was Kothari Petrochemicals and was Co-sponsored by Chemfine, Redwood Innovation Partners. Events sponsor was Monitpro Solutions.
The following events were conducted:
Oral Presentation
Mock Interview
Picture Perception
Chem - Auction
Chem Connexions
Solve It - Numerical Questions
Chemfluence 2022
Chemfluence 2022 marked the 27th edition of Chemfluence and was organized by the Association of Chemical Engineers from 28 to 30 April 2022. The themes chosen were Carbon Neutrality, Green Energy and Industrial Process Optimization and Safety.
The themes presented were related to the current trend of Environmental Awareness.
The title sponsor for the three-day event was Korcomptenz and was co-sponsored by Ultramarine & Pigments Limited. Event sponsors include Ecolab, Kothari Petrochemicals and Xpro India.
More than 10 technical events including Paper Presentation, Poster Presentation and Workshops were conducted. Workshops on Matlab and Aspen HYSYS were conducted. Events such as Trouble Shooter, Chemcavalier and Chemrelay were some of the key attractions. This edition of Chemfluence witnessed participation of students from about 20 colleges in an around Tamil Nadu.
Chemfluence 2023
ACE conducted its 28th year of technical excellence - 'Chemfluence' from April 11 to 13 with a lot more on offer for the students.
Association of Chemical Engineers had chosen Sustainable Development as the theme of this year.
The following technical events were conducted:
Paper Presentation
Poster Presentation
Mock Interview
Chemcavalier
Chemextract
Chem-o-ladders
Troubleshooter
Chem Relay
Who Am I?
Keep it Safe and Sound
Pen It Down
Workshops Conducted:
Start-up Management
Aspen HYSYS
Technical Process Safety
There were many other non-technical events also conducted.
The title sponsors of the symposium were ONGC and ALCHEMISTS (Alumni Batch 1999-2003).
The event sponsors were CPCL, MUGI, Alumni Batch 2004-2008, Korcomptenz and Indian Oil Corporation.
Chemfluence 2024
Chemfluence, in its 29th installment would be a three-day event to be conducted by the Association of Chemical Engineers, in the month of April 2024.
A variety of events are being planned under the themes of Process Intensification, Alternative Energy Sources and Advanced Process Control.
ACE has also planned to conduct a number of workshops, including Matlab and Process Control.
This year, mock interviews and placement training would be provided to the second and third year students of Actech to improve their technical knowledge.
References
External links
https://www.linkedin.com/in/ace-actech-baaaa7228/ LinkedIn Page
https://www.instagram.com/ace_actech/ Instagram Page
Anna University
Chemfluence FB
Anna University
Chemical engineering organizations
Organizations established in 1994
1994 establishments in Tamil Nadu | Chemfluence | [
"Chemistry",
"Engineering"
] | 2,003 | [
"Chemical engineering",
"Chemical engineering organizations"
] |
30,930,441 | https://en.wikipedia.org/wiki/PERISCOP | The PERISCOP is a pressurized recovery device designed for retrieving deep-sea marine life at depths exceeding 2,000 metres. The device was designed by Bruce Shillito and Gerard Hamel at the Universite Pierre et Marie Curie. The name is an acronym for the French phrase Projet d’Enceinte de Remontée Isobare Servant la Capture d’Organismes Profonds ("Enclosure project for isobaric ascent serving to capture deep organisms").
History
The PERISCOP is a unique pressurized recovery device that contains three chambers – one for capture, one for recovery under exterior pressure, and one for transfer to the laboratory while maintaining pressure. Previous recovery devices used one chamber for all purposes. An arm designed to capture samples by force of suction is attached to the device. During ascent, pressure is maintained within the chamber by use of pressurized water. Upon surfacing, samples can be observed, filmed, and/or photographed through transparent view ports in the device. Due to fluctuations in atmospheric pressure and temperature, recorded pressures during ascent and at the surface ranged from 74% to 111% of the natural pressure at sea depth. The device set a record for the deepest live-fish capture under pressure when it captured a Pachycara at 2,300 m. The previous record was 1,400 m. The capture was the first to be performed at a hydrothermal vent. The device has also recovered several shrimp species (Mirocaris fortunata, Chorocaris chacei, and Rimicaris exoculata) at vent fields Lucky Strike and Rainbow.
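For scale, the ambient pressure that the device has to preserve can be estimated from the hydrostatic relation $P \approx P_{\mathrm{atm}} + \rho g h$. With illustrative values (seawater density $\rho \approx 1025\,\mathrm{kg\,m^{-3}}$, $g \approx 9.81\,\mathrm{m\,s^{-2}}$, capture depth $h = 2300\,\mathrm{m}$) this gives $P \approx 0.1\,\mathrm{MPa} + 23.1\,\mathrm{MPa} \approx 23\,\mathrm{MPa}$, roughly 230 times atmospheric pressure. On this estimate, the reported 74% to 111% range corresponds to internal pressures of roughly 17 to 26 MPa.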
References
Deep sea fish
Marine organisms
Oceanography
Research methods | PERISCOP | [
"Physics",
"Environmental_science"
] | 343 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
36,113,327 | https://en.wikipedia.org/wiki/Electromagnetic%20absorbers | Electromagnetic absorbers are specifically chosen or designed materials that can inhibit the reflection or transmission of electromagnetic radiation. For example, this can be accomplished with materials such as dielectrics combined with metal plates spaced at prescribed intervals or wavelengths. The particular absorption frequencies, thickness, component arrangement and configuration of the materials also determine capabilities and uses. In addition, researchers are studying advanced materials such as metamaterials in hopes of improved performance and diversity of applications. Some intended applications for the new absorbers include emitters, sensors, spatial light modulators, infrared camouflage, wireless communication, and use in thermophotovoltaics.
Generally, there are two types of absorbers: resonant absorbers and broadband absorbers. The resonant absorbers are frequency-dependent because of the desired resonance of the material at a particular wavelength. Different types of resonant absorbers are the Salisbury screen, the Jaumann absorber, the Dallenbach layer, crossed grating absorbers, and circuit analog (CA) absorbers.
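The frequency dependence of a resonant absorber can be illustrated with the classic transmission-line treatment of a Salisbury screen: a resistive sheet of about 377 Ω per square (the impedance of free space) placed a quarter-wavelength in front of a metal plate. The sketch below assumes normal incidence and a lossless air spacer, and is illustrative rather than a design tool.

```python
import numpy as np

ETA0 = 376.73          # impedance of free space (ohms)
C0 = 299_792_458.0     # speed of light (m/s)

def salisbury_reflection(freq_hz, sheet_ohms_per_sq=ETA0, spacer_m=0.0075):
    """Reflection coefficient of a resistive sheet backed by a metal plate at
    distance d, using the standard transmission-line model (normal incidence)."""
    beta = 2 * np.pi * freq_hz / C0                 # free-space phase constant
    z_short = 1j * ETA0 * np.tan(beta * spacer_m)   # shorted air line of length d
    z_in = (sheet_ohms_per_sq * z_short) / (sheet_ohms_per_sq + z_short)
    return (z_in - ETA0) / (z_in + ETA0)

# d = 7.5 mm is a quarter wavelength near 10 GHz, so reflection dips there.
for f in np.linspace(2e9, 18e9, 9):
    print(f"{f/1e9:5.1f} GHz  |reflection| = {abs(salisbury_reflection(f)):.3f}")
```

At the quarter-wave frequency the shorted spacer presents an open circuit, leaving only the matched 377 Ω sheet, so the reflection drops to near zero; away from that frequency the match degrades, which is the frequency dependence characteristic of resonant absorbers.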
Broadband absorbers are independent of a particular frequency and can therefore be effective across a broad spectrum.
References
Further reading
The Salisbury screen, invented by American engineer Winfield Salisbury in 1952.
Salisbury W. W. "Absorbent body for electromagnetic waves", United States patent number 2599944 June 10, 1952. Also cited in Munk
Electromagnetic components
Materials science | Electromagnetic absorbers | [
"Physics",
"Materials_science",
"Engineering"
] | 284 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
39,001,926 | https://en.wikipedia.org/wiki/Cohesive%20zone%20model | The cohesive zone model (CZM) is a model in fracture mechanics where fracture formation is regarded as a gradual phenomenon and separation of the crack surfaces takes place across an extended crack tip, or cohesive zone, and is resisted by cohesive tractions.
The origin of this model can be traced back to the early sixties by Dugdale (1960) and Barenblatt (1962) to represent nonlinear processes located at the front of a pre-existent crack.
Description
The major advantages of the CZM over conventional methods in fracture mechanics, such as LEFM (Linear Elastic Fracture Mechanics) and CTOD (Crack Tip Opening Displacement), are:
It is able to adequately predict the behaviour of uncracked structures, including those with blunt notches.
In the CZM, the size of the non-linear zone need not be negligible in comparison with other dimensions of the cracked geometry, whereas other conventional methods require that it is.
Even for brittle materials, the presence of an initial crack is needed for LEFM to be applicable.
Another important advantage of CZM falls in the conceptual framework for interfaces.
The Cohesive Zone Model does not represent any physical material, but describes the cohesive forces which occur when material elements are being pulled apart.
As the surfaces (known as cohesive surfaces) separate, traction first increases until a maximum is reached, and then subsequently reduces to zero which results in complete separation. The variation in traction in relation to displacement is plotted on a curve and is called the traction-displacement curve. The area under this curve is equal to the energy needed for separation.
CZM maintains continuity conditions mathematically; despite physical separation. It eliminates singularity of stress and limits it to the cohesive strength of the material.
The traction-displacement curve gives the constitutive behavior of the fracture. For each material system, guidelines are to be formed and modelling is done individually. This is how the CZM works.
The amount of fracture energy dissipated in the work region depends on the shape of the model considered. Also, the ratio between the maximum stress and the yield stress affects the length of the fracture process zone. The smaller the ratio, the longer is the process zone. The CZM allows the energy to flow into the fracture process zone, where a part of it is spent in the forward region and the rest in the wake region.
Thus, the CZM provides an effective methodology to study and simulate fracture in solids.
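A minimal numerical sketch of one common choice, a bilinear traction-separation law, is given below; the parameter values are illustrative assumptions rather than data from the article, and the sketch only demonstrates that the area under the traction-displacement curve equals the separation energy.

```python
# A minimal sketch (not from the article) of a bilinear traction-separation law,
# illustrating that the area under the traction-displacement curve equals the
# energy needed for complete separation. All parameter values are illustrative.
import numpy as np

def bilinear_traction(delta, delta_peak, delta_final, t_max):
    """Traction rises linearly to t_max at delta_peak, then softens to zero at delta_final."""
    delta = np.asarray(delta, dtype=float)
    rising = t_max * delta / delta_peak
    softening = t_max * (delta_final - delta) / (delta_final - delta_peak)
    t = np.where(delta <= delta_peak, rising, softening)
    return np.clip(t, 0.0, None)  # zero traction after complete separation

if __name__ == "__main__":
    delta_peak, delta_final, t_max = 0.005e-3, 0.05e-3, 30e6  # m, m, Pa (illustrative)
    d = np.linspace(0.0, delta_final, 2001)
    G_c = np.trapz(bilinear_traction(d, delta_peak, delta_final, t_max), d)
    print(f"Fracture energy (area under the curve): {G_c:.1f} J/m^2")
    print(f"Closed form for the triangle: {0.5 * t_max * delta_final:.1f} J/m^2")
```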
Dugdale and Barenblatt models
Dugdale Model
The Dugdale model (named after Donald S. Dugdale), sometimes referred to as the strip yield model, assumes that thin plastic strips of length $\rho$ extend ahead of the two Mode I crack tips in a thin elastic–perfectly plastic plate.
Plastic zone size
In the case where $\sigma_\infty \ll \sigma_{ys}$, and therefore $\rho \ll a$, the plastic zone size is:
$\rho = \frac{\pi}{8}\left(\frac{K_I}{\sigma_{ys}}\right)^2,$
which is similar to, but slightly smaller than Irwin's predicted plastic zone diameter.
Crack-tip opening displacement
The general form of the crack tip opening displacement according to the Dugdale model at the points $x = a$ and $x = -a$ is:
$\delta = \frac{8\,\sigma_{ys}\,a}{\pi E}\,\ln\!\left[\sec\!\left(\frac{\pi\,\sigma_\infty}{2\,\sigma_{ys}}\right)\right].$
This can be simplified for cases where $\sigma_\infty \ll \sigma_{ys}$ to:
$\delta = \frac{K_I^2}{\sigma_{ys}\,E}.$
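A worked example with assumed material values (the stress intensity factor, yield strength, and Young's modulus below are illustrative, not taken from the article) shows the order of magnitude these small-scale-yielding expressions produce.

```python
# Illustrative numbers only: small-scale-yielding estimates from the Dugdale strip-yield model.
import math

K_I = 30e6        # applied stress intensity factor, Pa*sqrt(m) (assumed)
sigma_ys = 350e6  # yield strength, Pa (assumed)
E = 210e9         # Young's modulus, Pa (assumed)

rho = (math.pi / 8.0) * (K_I / sigma_ys) ** 2   # strip-yield (plastic) zone length
delta_t = K_I ** 2 / (E * sigma_ys)             # crack tip opening displacement, sigma << sigma_ys

print(f"Plastic zone length rho ~ {rho * 1e3:.2f} mm")
print(f"CTOD delta              ~ {delta_t * 1e6:.1f} um")
```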
Barenblatt model
The Barenblatt model (after G.I. Barenblatt) is analogous to the Dugdale model, but is applied to brittle solids. This approach considers the interatomic stresses involved in cracking, but considers a large enough area to apply to continuum fracture mechanics. Barenblatt's model assumes that "the width of the edge [cohesive] region of a crack is small compared to the size of the whole crack" in addition to the assumption for most fracture mechanics models that the stress fields of all cracks are the same for a given specimen geometry regardless of the remote applied stress. In the Barenblatt model, the traction, $\sigma$, is equal to the theoretical bond rupture strength of a brittle solid. This allows the strain energy release rate, $G$, to be defined by the critical crack opening displacement, or the critical cohesive zone size, $\delta_c$, as follows:
$G = \int_0^{\delta_c} \sigma(\delta)\,\mathrm{d}\delta = 2\gamma_s,$
where $\gamma_s$ is the surface energy.
References
Fracture mechanics | Cohesive zone model | [
"Materials_science",
"Engineering"
] | 827 | [
"Structural engineering",
"Materials degradation",
"Materials science",
"Fracture mechanics"
] |
39,002,402 | https://en.wikipedia.org/wiki/Optical%20stretcher | The Optical Stretcher is a dual-beam optical trap that is used for trapping and deforming ("stretching") micrometer-sized soft matter particles, such as biological cells in suspension.
The forces used for trapping and deforming objects arise from photon momentum transfer on the surface of the objects, making the Optical Stretcher – unlike atomic force microscopy or micropipette aspiration – a tool for contact-free rheology measurements.
Overview
The trapping of micrometre-sized particles by two laser beams was first demonstrated by Arthur Ashkin in 1970,
before he developed the single-beam trap now known as optical tweezers.
An advantage of the single-beam design is that no two laser beams need to be exactly adjusted to make their optical axes match.
From the late 1980s on, optical tweezers have been used to trap and hold biological dielectrica, such as cells or viruses.
However, in order to ensure trap stability, the single beam must be highly focused, with the particle trapped close to the focus point.
Preventing damage done to biological material (see Opticution) by the high local light intensities in the focus limits the laser powers that one can use in the optical tweezers to a force range too low for rheology experiments, i.e. optical tweezers are suitable for trapping biological particles, but unsuitable for deforming them.
The optical stretcher, developed at the end of the 1990s by Jochen Guck and Josef A. Käs,
circumvents this problem by going back to the dual-beam design originally developed by Ashkin.
This allows for weakly divergent laser beams, thus preventing damage from localized light intensities and increasing the possible stretching forces to a range that is sufficient for the deformation of soft matter.
The laser powers used in stretching cells are typically on the order of 1 W, generating stretch forces on the order of 100 pN.
The resulting relative cellular deformation then usually lies in the range from 1%–10%.
The optical stretcher has since been developed into a versatile biophysical tool used by many groups worldwide for contact-free, marker-free measurements of whole-cell rheology.
Using automated setups, high throughput rates of more than 100 cells/hour have been achieved, allowing for statistical analysis of the data.
Applications
Cell mechanics and cell rheology play a crucial role in cellular development and also in many diseases.
Due to its high throughput, the optical stretcher has in many biomechanical studies been the tool of choice to investigate the development of or changes in cell mechanics, among them studies on the development of cancer and stem cell differentiation.
An exemplary study in stem cell research sheds light on the process of cell differentiation: Hematopoietic stem cells residing in the bone marrow differentiate into different types of blood cells to produce human blood – i.e., red blood cells and different types of white blood cells. In this study, it was shown that the white blood cell types show different mechanical behaviour depending on their later physiological function and that these differences arise during the process of stem cell differentiation.
Using the optical stretcher, it was also shown that cancerous cells differ significantly in their mechanical properties from their healthy counterparts.
The authors claim that the 'optical deformability' can be used as a biomechanical marker to distinguish cancerous from healthy cells, and even that higher stages of malignancy can be detected.
Optical stretcher setup
A typical optical stretcher setup consists of the following main parts:
A microfluidic system. Typically, a suspension of single cells is pumped through a capillary. When a cell is in the right position to be trapped by the lasers, the flow must be stopped and the lasers turned on.
Two opposing optical fibres from which the two laser beams emerge. One can either use two distinct lasers or one laser source and a beam splitter.
A microscope used for imaging the trapped objects. As single cells are virtually transparent, often phase contrast microscopes are used, but depending on the desired measurement, for example using fluorescence microscopy is also an option. The deformation can be extracted from the images using an edge detection algorithm.
A computer with suitable software can be used to control the microfluidic flow, the lasers and the microscope camera to record images.
Physics of the optical stretcher
Physical origin of the forces in the optical stretcher
Objects trapped in the optical stretcher usually have diameters on the scale of 10 μm, which is very large compared to the laser wavelengths used (often 1064 nm).
It is thus sufficient to consider the interaction with the laser light in terms of ray optics.
When a ray enters the object, it is refracted due to the different refractive index according to Snell's law.
Because photons carry momentum, a change in the direction of propagation of a light ray implies a momentum change, i.e. a force.
According to Newton's third law, a corresponding force pointing in the opposite direction acts on the surface of the object.
These surface forces due to photon momentum change are the origin for the ability of the optical stretcher to trap and stretch objects.
Trapping force
All surface forces can be added up to a resulting force pulling on the center of mass of the object, which is used to trap objects.
Usually, one uses Gaussian laser beams to trap particles.
The most important thing to note is that Gaussian beams have a light intensity gradient, i.e. the light intensity is high in the center of the beam (on the optical axis) and decreases off the axis.
It can be illustrative to decompose the trapping force into two components called the scattering force and the gradient force:
The Gaussian beams used in optical stretchers are – in contrast to optical tweezers – weakly divergent. The momentum carried by the photons thus basically points in the direction of light propagation. After leaving the object, the magnitude of momentum is the same but most photons have changed in direction of propagation, such that on the whole they carry less momentum in the forward direction. This missing momentum is transferred to the object. This part is called the scattering force, because it arises from scattering the light in all directions. Because the scattering force always pushes objects in the direction of beam propagation, one needs two counterpropagating beams whose scattering forces cancel mutually in order to stably trap cells.
The component of the force perpendicular to the laser direction is called the gradient force. If a spheroid-like object is aligned on the optical axis, these forces will cancel due to the rotational symmetry of the Gaussian beam and there is no gradient force. However, if the object is moved off the axis, there will be more rays interacting with it on the side near to the beam axis and less on the outer side.
The rays on the inner side are mostly refracted away from the beam axis (see figure on the right), leading to a corresponding force towards the beam axis on the object.
The gradient force thus pulls objects onto the beam axis.
This requires the refractive index of that object to be higher than the index of the surrounding medium – else the refraction would lead to opposite results, pushing particles out of the beam.
However, the refractive index of biological matter is always higher than that of water or cell medium due to the additional protein content.
In the optical stretcher, two counterpropagating laser beams are used in order to cancel their corresponding scattering forces.
Because their gradient forces point in the same direction, pulling particles towards their common beam axis, they add up, and one arrives at a stable trap position.
An alternative approach to understand the trapping mechanism is to consider the interaction of the particle with the electric fields of the laser beam.
This leads to the known fact that electric dipoles (or dielectric, polarizable media like cells) are pulled to the region of highest field intensities, i.e. to the center of the beam.
See the references for details.
Stretching force
Once the particle is stably trapped, there is no net force on the center of mass of the particle.
However, the forces appearing at the surface of the particle do not cancel, and contrary to what one might naively expect, the light does not squeeze the cell but stretch it:
The magnitude of the photon momentum is given by
$p = \frac{h\,n}{\lambda},$
where h is the Planck constant, n is the refractive index of the medium, and λ is the wavelength of the light.
The photon momentum increases when the photon enters a medium of higher refractive index.
Conservation of momentum then leads to a surface force acting on the particle, pointing in the opposite direction, i.e. outwards.
When a photon leaves the trapped object, its momentum decreases and again conservation of momentum requires that an outward-pointing force be exerted.
Thus, as all surface forces point outwards, they do not cancel but add up.
The highest stretching forces can be found on the beam axis, where the light intensity is highest and the rays impinge at a right angle.
Near the poles of the cell, where virtually no rays impinge, the surface forces vanish.
Different mathematical models have been developed to calculate the stretching forces, based on ray optics
or the solution of Maxwell's equations.
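For orientation, a rough order-of-magnitude estimate along the lines of the ray-optics picture above is sketched below; the refractive indices and beam power are assumed values, and the simple normal-incidence photon counting neglects reflection and beam geometry.

```python
# Order-of-magnitude sketch (assumed values) of the outward surface force in an
# optical stretcher, using p = h*n/lambda for the photon momentum and counting
# the momentum gained by photons entering the optically denser cell at normal incidence.
H = 6.62607015e-34   # Planck constant, J*s
C = 299_792_458.0    # speed of light in vacuum, m/s

def photon_momentum(wavelength_m: float, n: float) -> float:
    """Photon momentum in a medium of refractive index n (p = h*n/lambda)."""
    return H * n / wavelength_m

def surface_force(power_w: float, wavelength_m: float, n_medium: float, n_cell: float) -> float:
    """Net outward force on the illuminated surface for one beam at normal incidence."""
    photons_per_second = power_w * wavelength_m / (H * C)
    dp_per_photon = photon_momentum(wavelength_m, n_cell) - photon_momentum(wavelength_m, n_medium)
    return photons_per_second * dp_per_photon   # simplifies to power_w * (n_cell - n_medium) / C

if __name__ == "__main__":
    F = surface_force(power_w=1.0, wavelength_m=1064e-9, n_medium=1.335, n_cell=1.375)
    print(f"Estimated outward surface force: {F * 1e12:.0f} pN")  # order 100 pN, as quoted above
```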
References
Optical trapping
Biophysics
Cell biology
Articles containing video clips | Optical stretcher | [
"Physics",
"Chemistry",
"Biology"
] | 1,906 | [
"Cell biology",
"Applied and interdisciplinary physics",
"Optical trapping",
"Particle traps",
"Biophysics"
] |
23,387,994 | https://en.wikipedia.org/wiki/Fischler%E2%80%93Susskind%20mechanism | The Fischler–Susskind mechanism, first proposed by Willy Fischler and Leonard Susskind in 1998, is a holographic prescription based on the particle horizon.
The Fischler–Susskind prescription is used to obtain the maximum number of degrees of freedom per Planck volume at the Planck era, compatible with the holographic principle.
References
Physical cosmology | Fischler–Susskind mechanism | [
"Physics",
"Astronomy"
] | 80 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Physical cosmology",
"Astrophysics"
] |
23,388,074 | https://en.wikipedia.org/wiki/Susskind%E2%80%93Glogower%20operator | The Susskind–Glogower operator, first proposed by Leonard Susskind and J. Glogower, refers to the operator where the phase is introduced as an approximate polar decomposition of the creation and annihilation operators.
It is defined as
$V = \frac{1}{\sqrt{a a^\dagger}}\, a$,
and its adjoint
$V^\dagger = a^\dagger\, \frac{1}{\sqrt{a a^\dagger}}$.
Their commutation relation is
$[V, V^\dagger] = |0\rangle\langle 0|$,
where $|0\rangle$ is the vacuum state of the harmonic oscillator.
They may be regarded as a (exponential of) phase operator because
$V\, \hat{n}\, V^\dagger = \hat{n} + 1$,
where $\hat{n}$ is the number operator. So the exponential of the phase operator displaces the number operator in the same fashion as the momentum operator acts as the generator of translations in quantum mechanics:
$e^{\frac{i}{\hbar}\hat{p}\,x_0}\, \hat{x}\, e^{-\frac{i}{\hbar}\hat{p}\,x_0} = \hat{x} + x_0$.
They may be used to solve problems such as atom-field interactions, level-crossings or to define some class of non-linear coherent states, among others.
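A minimal numerical sketch can be used to check the relations above; the truncation of the Fock basis to dimension N is an approximation introduced here, not part of the original formulation, and the last basis state is corrupted by it and is therefore excluded from the comparison.

```python
# A minimal numerical sketch in a truncated Fock basis (dimension N), checking the
# defining relations of the Susskind-Glogower operator; truncation artifacts
# appear only in the last basis state.
import numpy as np

N = 20
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)              # annihilation operator, a|m> = sqrt(m)|m-1>
V = np.diag(1.0 / np.sqrt(n + 1.0)) @ a       # V = (a a^dagger)^(-1/2) a
Vd = V.conj().T

comm = V @ Vd - Vd @ V
vacuum_projector = np.zeros((N, N))
vacuum_projector[0, 0] = 1.0

# Ignore the last row/column, which is corrupted by truncating the Fock space.
print(np.allclose(comm[:-1, :-1], vacuum_projector[:-1, :-1]))                      # [V, V^dag] = |0><0|
print(np.allclose((V @ np.diag(n) @ Vd)[:-1, :-1], np.diag(n + 1.0)[:-1, :-1]))     # V n V^dag = n + 1
```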
References
Quantum optics | Susskind–Glogower operator | [
"Physics"
] | 168 | [
"Quantum optics",
"Quantum mechanics",
"Quantum physics stubs"
] |
23,390,068 | https://en.wikipedia.org/wiki/Regressive%20discrete%20Fourier%20series | In applied mathematics, the regressive discrete Fourier series (RDFS) is a generalization of the discrete Fourier transform where the Fourier series coefficients are computed in a least squares sense and the period is arbitrary, i.e., not necessarily equal to the length of the data. It was first proposed by Arruda (1992a, 1992b). It can be used to smooth data in one or more dimensions and to compute derivatives from the smoothed curve, surface, or hypersurface.
Technique
One-dimensional regressive discrete Fourier series
The one-dimensional RDFS proposed by Arruda (1992a) can be formulated in a very straightforward way. Given a sampled data vector (signal) , one can write the algebraic expression:
Typically , but this is not necessary.
The above equation can be written in matrix form as
The least squares solution of the above linear system of equations can be written as:
where is the conjugate transpose of , and the smoothed signal is obtained from:
The first derivative of the smoothed signal can be obtained from:
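A minimal sketch of the one-dimensional RDFS is given below under assumed notation (the basis matrix W, period T, and number of harmonics are illustrative choices and not necessarily Arruda's symbols): the Fourier coefficients are fitted in a least-squares sense, and the smoothed signal and its derivative are evaluated from the fitted series.

```python
# A minimal sketch (assumed notation) of a one-dimensional regressive discrete
# Fourier series: least-squares fit of a truncated Fourier series with an
# arbitrary period T that need not equal the record length.
import numpy as np

def rdfs_1d(x, y, T, n_harmonics):
    """Least-squares Fourier series of y(x) with period T; returns smoothed y and its derivative."""
    k = np.arange(-n_harmonics, n_harmonics + 1)
    W = np.exp(2j * np.pi * np.outer(x, k) / T)            # basis matrix
    coeffs, *_ = np.linalg.lstsq(W, y.astype(complex), rcond=None)
    y_smooth = (W @ coeffs).real
    dW = (2j * np.pi * k / T) * W                          # analytic derivative of the basis
    dy_smooth = (dW @ coeffs).real
    return y_smooth, dy_smooth

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 200)
    y = np.sin(2 * np.pi * 3 * x) + 0.1 * np.random.default_rng(0).standard_normal(x.size)
    ys, dys = rdfs_1d(x, y, T=1.25, n_harmonics=8)         # period larger than the record length
    print(f"RMS smoothing residual: {np.sqrt(np.mean((ys - y) ** 2)):.3f}")
```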
Two-dimensional regressive discrete Fourier series (RDFS)
The two-dimensional, or bidimensional RDFS proposed by Arruda (1992b) can also be formulated in a straightforward way. Here the equally spaced data case will be treated for the sake of simplicity. The general non-equally-spaced and arbitrary grid cases are given in the reference (Arruda, 1992b). Given a sampled data matrix (bi dimensional signal) one can write the algebraic expression:
The above equation can be written in matrix form for a rectangular grid. For the equally spaced sampling case : we have:
The least squares solution may be shown to be:
and the smoothed bidimensional surface is given by:
where is the conjugate, and is the transpose of .
Differentiation with respect to can be easily implemented analogously to the one-dimensional case (Arruda, 1992b).
Current applications
Spatially dense data condensation applications: Arruda, J.R.F. [1993] applied the RDFS to condense spatially dense measurements made with a laser Doppler vibrometer prior to applying modal analysis parameter estimation methods. More recently, Vanherzeele et al. (2006, 2008a) proposed a generalized and an optimized RDFS for the same kind of application. A review of optical measurement processing using the RDFS was published by Vanherzeele et al. (2009).
Spatial derivative applications: Batista et al. [2009] applied RDFS to obtain spatial derivatives of bi dimensional measured vibration data to identify material properties from transverse modes of rectangular plates.
SHM applications: Vanherzeele et al. [2009] applied a generalized version of the RDFS to tomography reconstruction.
Software
Recently, a package that includes the one- and two-dimensional RDFS was developed to make it easier to use in the free and open-source software R:
An R package for RDFS at GitHub
See also
Discrete Fourier transform
Fourier series
References
Arruda, J.R.F., 1992a: Analysis of non-equally spaced data using a Regressive discrete Fourier series. Journal of Sound and Vibration, 156(3), 571–574.
Arruda, J.R.F., 1992b: Surface smoothing and partial spatial derivatives using a regressive discrete Fourier series. Mechanical Systems and Signal Processing, 6(1), 41–50.
Arruda, J.R.F., 1993: Spatial domain modal analysis of lightly-damped structures using laser velocimeters. Journal of Vibration and Acoustics, 115, 225–231.
Batista, F.B., Albuquerque, E.L., Arruda, J.R.F., Dias Jr., M., 2009: Identification of the bending stiffness of symmetric laminates using regressive discrete Fourier series and finite differences. Journal of Sound and Vibration, 320, 793–807.
Vanherzeele, J., Guillaume, P., Vanlanduit, S., Verboten, P., 2006: Data reduction using a generalized regressive discrete Fourier series, Journal of Sound and Vibration, 298, 1–11.
Vanherzeele, J., Vanlanduit, S., Guillaume, P., 2008a: Reducing spatial data using an optimized regressive discrete Fourier series, Journal of Sound and Vibration, 309, 858–867.
Vanherzeele, J., Longo, R., Vanlanduit, S., Guillaume, P., 2008b: Tomographic reconstruction using a generalized regressive discrete Fourier series, Mechanical Systems and Signal Processing, 22, 1237–1247.
Vanherzeele, J., Vanlanduit, S., Guillaume, P., 2009: Processing optical measurements using a regressive discrete Fourier series, Optical and lasers in engineering, 47, 461–472.
Signal processing
Fourier analysis | Regressive discrete Fourier series | [
"Technology",
"Engineering"
] | 1,048 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
23,391,020 | https://en.wikipedia.org/wiki/Fusion%20Energy%20Foundation | Fusion Energy Foundation (FEF) was an American non-profit think tank co-founded by Lyndon LaRouche in 1974 in New York. It promoted the construction of nuclear power plants, research into fusion power and beam weapons and other causes. The FEF was called fusion's greatest private supporter. It was praised by scientists like John Clarke, who said that the fusion community owed it a "debt of gratitude". By 1980, its main publication, Fusion, claimed 80,000 subscribers.
The FEF included notable scientists and others on its boards, along with LaRouche movement insiders in management positions. It published a popular magazine, Fusion, and a more technical journal as well as books and pamphlets. It conducted seminars and its members testified at legislative hearings. It was known for soliciting subscriptions to their magazines in U.S. airports, where its confrontational methods resulted in conflicts with celebrities and the general public.
The FEF has been described by many writers as a "front" for the U.S. Labor Party and the LaRouche movement. By the mid-1980s, the FEF was being accused of fraudulent fundraising on behalf of other LaRouche entities. Federal prosecutors forced it into bankruptcy in 1986 to collect contempt of court fines, a decision that was later overturned when a federal bankruptcy court found that the government had acted "in bad faith". Key personnel were convicted in 1988.
Personnel
According to an article in The Nation, the Fusion Energy Foundation had physicists, corporate executives, and government planners on its board of advisors, many unaware of the foundation's connection to the U.S. Labor Party, while the board of directors was filled with LaRouche movement regulars and some party outsiders.
A 1983 report published by The Heritage Foundation said that the foundation briefly gained the confidence of respected scientists who lent their reputations to it but it warned that they risked their reputations by doing so.
Lyndon LaRouche was a co-founder and one of the three members of the foundation's board of directors. Steven Bardwell, a nuclear physicist, was another director. The executive director was Morris Levitt in the 1970s and Paul Gallagher in the 1980s. Michael Gelber was the Central New York regional representative. Dennis Speed was the regional coordinator for Boston and Harley Schlanger was the southern regional coordinator. Uwe Parpart Henke was the director of research. Jon Gilbertson was the director of nuclear engineering. Marsha Freeman was a representative of the FEF's International Press Service. Charles B. Stevens, a chemical engineer, authored scores of articles on fusion energy research and development for both the earlier publication, The Fusion Energy Foundation Newsletter, and its successor, Fusion.
Eric Lerner was director of physics in 1977. Other notable scientists who wrote for FEF publications and lectured under its auspices include Friedwardt Winterberg, Krafft Arnold Ehricke, and Winston H. Bostick. Melvin B. Gottlieb received an award from the FEF. Adolf Busemann also received an award at a special dinner.
Advocacy
Nuclear energy
In 1977, Executive Director Morris Levitt asserted that nuclear fusion power plants could be built by 1990 if the U.S. spent $50 to $100 billion on research. The same year he predicted that there would be no United States in the 21st century if President Jimmy Carter's ban on building breeder reactors was maintained. The director of the fusion power program at Argonne National Laboratory, Charles Baker, said in 1983 that the FEF was "overstating" the prospect of practical fusion power in the near future. "The judgment of the vast majority of the people actually working in fusion believe it will take substantially longer" than the few years predicted by the FEF, according to Baker.
By 1980, the Fusion Energy Foundation had close contacts with fusion researchers. They became a conduit for information between researchers who were sequestered in secret research. Even the head of fusion research for the Federal Government cooperated with the foundation. It was praised by scientists like John Clarke, who said that the fusion community owed it a "debt of gratitude". However the politicization of the foundation's journals and the LaRouche views printed in them repelled the scientists involved, according to The Nation.
The FEF received publicity in 1981 when it published a book explaining how to build a hydrogen bomb written by University of Nevada, Reno, professor Friedwardt Winterberg. The publication came two years after a magazine, The Progressive, had tried to print similar information but was prevented by an injunction that became the United States v. The Progressive. The government dropped the case after the information was published by the FEF. The author of the original article later learned that a diagram by Uwe Papert published in 1976 in a LaRouche publication contained two important details of the weapon's design that he had been wrong about.
The colonization of Mars is a major proposal of the LaRouche movement. Friedwardt Winterberg described how rocket engines incorporating fusion micro-explosions could provide enough acceleration to convey a large mass in a reasonable amount of time, a concept derived from Project Daedalus.
Independent Commission of Inquiry
In 1979 the Fusion Energy Foundation created the Independent Commission of Inquiry to investigate the accident at the Three Mile Island nuclear power plant. The commission's members included Morris Levitt, Jon Gilbertson, Charles Bonilia. The commission determined that the accident must have been caused by sabotage because no other explanation was possible. According to Gallagher, "New evidence is accumulating that sabotage very likely occurred". According to the Herald newspaper of Titusville, Pennsylvania, when asked by reporters for evidence Gilbertson said he had none.
Beam weapons
According to Fusion, two members of the FEF went to the Soviet Union to attend conference on "laser interaction" in December 1978.
In 1982 and 1983, members of the LaRouche movement met repeatedly with the director of defense programs for the National Security Council, Ray Pollock, while he was developing the basis for Ronald Reagan's "Star Wars" program, officially called the Strategic Defense Initiative (SDI). Pollock eventually said in the National Security Council (NSC) that LaRouche is "a frightening kind of fellow". The FEF held a seminar on beam weapons in October 1983, at the Dirksen Senate Office Building. According to the American Physical Society, FEF members disrupted a 1986 conference on SDI to which they were not invited, and only stopped after being threatened with police action.
After Ronald Reagan announced SDI the LaRouche movement made claims for having been the originators of the proposal, which reportedly "concerned" some people in the administration and in Congress, but no correction was made by them. The FEF lobbied state legislatures and testified before congressional hearings on behalf of beam weapons. Steven Bardwell resigned from the board of advisors in early 1984, reportedly because of money questions and a belief that the organization was losing its independence by becoming too solicitous of the Reagan administration in general, and in particular the Central Intelligence Agency, the Defense Intelligence Agency, and the NSC.
Other advocacy
As with other LaRouche entities, representatives of the Fusion Energy Foundation gave testimony to a number of congressional hearings. In addition to addressing committees on energy matters, FEF representatives, including Eric Lerner, also testified on matters such as the nomination of Cyrus Vance for Secretary of State.
The FEF campaigned on behalf of Arthur Rudolph, a NASA rocket scientist who was forced to leave the U.S. in 1982 following an investigation into his role in the Mittelwerk rocket factory in Nazi Germany.
Psychiatrist Ned Rosinsky spoke as a representative of the FEF at a Wisconsin state legislative hearing on criminal penalties for drug possession in 1977. He testified that "marijuana is a medically dangerous drug until proved otherwise", citing studies showing brain damage and a reduction in white blood cells caused by the habitual use of cannabis.
Under the auspices of Pakdee Tanapura, a wealthy Thai landowner, the FEF and EIR held a seminar in 1983 on the proposed construction of the Kra Canal across Thailand. Their plan favored the use of nuclear explosions to speed excavation. A second seminar was held in 1984, and in 1986 the FEF published a report by U.H. Von Papart on the feasibility and financing for the project.
Conferences
May 2, 1978: "Conference on the Industrial Development of Southern Africa", held in Washington D.C.
October 1980: "A High Technology Policy for U.S. Reindustrialization", held in California.
1985: "Krafft Ehricke Memorial Conference", held in Reston, Virginia. Cosponsored by the Schiller Institute.
Fundraising
The FEF has been described by many writers as a "front" for the U.S. Labor Party and the LaRouche movement. In a National Review article published in 1979, former member Gregory Rose said that the primary purpose of the Fusion Energy Foundation was raising money. Milton Copulous, director of energy studies for The Heritage Foundation, called FEF "a front the USLP uses to win the confidence of unsuspecting businessmen". In 1981, the FEF reported $3.5 million of revenue.
According to a representative in Toronto, Richard Sanders, FEF contributions gathered in Canada were sent to the United States to support the presidential campaigns of Lyndon LaRouche. In 1983, an FEF spokesperson said that there was no financial link between the foundation and LaRouche's campaigns. Although the FEF denied any financial connection to LaRouche's U.S. Labor Party, the two organizations reportedly shared offices in New York City. According to an interview with a former member presented as evidence in LaRouche vs. NBC in 1984:
Money from the . . . profit-making organizations went into political campaigns and was not correctly reported. Money from the tax-exempt [FEF] was given to the political campaign, unbeknownst to the people who made the contributions. . . . Someone would contribute to the [FEF] because they believed in nuclear power and their contribution would turn up as a contribution for . . . [LaRouche's] presidential campaign.
Barbara Mikulski filed a complaint with the Federal Election Commission asserting that the FEF was improperly raising funds for a LaRouche-affiliated candidate, Debra Freeman, in a 1982 congressional campaign. The FEF replied that the fundraising was done under contract to the Caucus Distributors, Inc. (CDI), another LaRouche enterprise.
When FEF director Steven Bardwell resigned in 1984, he complained that funds raised by the FEF through subscriptions were being diverted to other LaRouche entities. According to Bardwell, LaRouche said that Bardwell's sense of obligation to subscribers was "misplaced", and that "whether or not they knew it, they had contributed money to support Lyndon LaRouche and his ideas". LaRouche reportedly also said that the most important expenditures were for his personal security, and other expenses had a lower priority.
In September 1985, the Internal Revenue Service (IRS) withdrew the FEF's status as a tax-deductible non-profit, Section 501(c)(3), which it had had since 1978. The stated reason was that it had failed to file a tax return in the prior two years. In October 1986, New York Attorney General Robert Abrams sued to dissolve the FEF, charging that it fraudulently solicited donations as tax-deductible after their exemption had been withdrawn, and for failure to file required forms. Paul Gallagher, described the suit as "part of an escalating witch hunt against FEF board member Lyndon LaRouche." Two weeks later, the IRS restored the FEF's tax exempt status, saying it had made an error though privacy rules prevented further elaboration.
Subscribers to Fusion complained that their credit cards were being billed for unauthorized charges. In one example a man who had subscribed to Fusion found that he had been billed for $1000, for which he received promissory notes in the mail. Prosecutors charged that the FEF and other LaRouche related groups had made improper charges to the credit cards of about 1,000 people.
Fundraisers also solicited larger sums. A 71-year-old California woman loaned the FEF $100,000 after making smaller loans to other LaRouche-related entities. FEF fundraisers refused to take a check and drove her to the bank so she could wire the money directly. The FEF made no interest or principal payments on the loans. After she sued the FEF for repayment they settled, acknowledged the loans, and agreed to a schedule of payments. They stopped making payments after sending a few checks, one of which bounced. She filed suit in Virginia in an attempt to attach FEF assets there.
In a widely reported case, a 79-year-old retired steel executive gave or loaned a total of $2.6 million over a 14-month period in amounts ranging from $250 to $350,000, according to a lawsuit. He said he was not a supporter of LaRouche political campaigns, and that he gave the money, "Because I got so many telephone calls requesting donations". He said "I'm mad at myself now" for having turned over the money, most of which went to the FEF. When he told the fundraisers that he only wanted to give money to his family in the future, he was reportedly told that gifts to the LaRouche movement "would be of greater benefit" to the family because LaRouche's supporters "were changing the world situation". The FEF gave the donor a plaque which said, "Benjamin Franklin Award Honoring Special Contributions to the Future of Science". In a Nightline interview, LaRouche called him "a person who's been associated with us as a supporter for a long time." LaRouche's treasurer, Edward Spannaus, said the "drug lobby" was responsible for accusations that the LaRouche movement had encouraged supporters to turn over their savings.
During a 1986 Virginia state investigation, an undercover policeman purchased subscriptions to Fusion and another LaRouche movement publication, Executive Intelligence Review, at Washington National Airport. He then received 22 "abusive and demanding" telephone calls asking for loans or donations. He was told the money was needed to fight AIDS and to keep LaRouche out of jail. When he agreed to make a loan he received a letter of acknowledgement and an invitation to tour the LaRouche headquarters in Leesburg, Virginia.
Not all supporters contributed due to pressure. An Oklahoman oilman subscribed to Fusion and liked LaRouche's views on nuclear power. He donated thousands of dollars as well as buying a $900,000 estate for LaRouche's use, charging rent to cover the mortgage.
Airports
Supporters of the Fusion Energy Foundation became well known for their aggressive fundraising in U.S. airports in the late 1970s and early 1980s, along with Hare Krishnas and Moonies. They set up tables to sell publications from the FEF and other LaRouche organizations and displayed provocatively captioned, hand-lettered posters. The FEF members would shout slogans to passers-by to get attention, and sometimes accused those who disagreed with them of being homosexuals. One writer called them the "most obnoxious of the groups...infesting the airports." An article in The Boston Globe called them "the kooks at the airport" who solicited money using posters often denouncing Jane Fonda, a target of the LaRouche movement because of her support for environmental causes.
The FEF had slogans and bumper stickers with texts like:
Beam the Bomb
More nukes, less kooks
Nuclear plants are built better than Jane Fonda
Nuke Jane Fonda
Feed Jane Fonda to the Whales
In 1981, Fonda's brother, actor Peter Fonda was enraged by a sign in Denver's Stapleton Airport that said, "Feed Jane Fonda to the Whales." He cut up the sign with his pocketknife. The FEF members pressed charges for destruction of property leading Fonda to miss his flight, though he was allowed to leave without posting bond. The case was dropped when the FEF members failed to appear on the court date.
In 1982, Ellen Kaplan, an FEF member raising money in the Newark Airport, spotted former Secretary of State Henry Kissinger and his wife Nancy. Kissinger was flying to Boston for a heart operation. Kaplan went up to Kissinger and asked him why he had "prolonged the war in Vietnam", and then, "Mr. Kissinger, do you sleep with young boys at the Carlyle Hotel?" At that point Nancy Kissinger grabbed Kaplan by the throat and asked, "Do you want to get slugged?" Kaplan later explained that she was a "longtime opponent" of Kissinger, and that she "wanted to confront the man with how low he is." She pressed charges and Dennis Speed, an FEF coordinator, said they would make Kissinger into "a laughingstock". The Newark municipal judge acquitted Mrs. Kissinger, saying that she had exhibited "a reasonable spontaneous, somewhat human reaction" and that there was no injury.
Legal issues
In 1977, the Fusion Energy Foundation received a temporary injunction to prevent the Federal Bureau of Investigation (FBI) from harassing it or interfering with its activities. The suit claimed that the FBI Director, Clarence M. Kelley, had personally ordered FBI agents to disrupt FEF conferences and dissuade scientists from participating. The injunction also included U.S. Attorney General Griffin Bell and Secretary of Energy James R. Schlesinger.
In 1986, the FEF was ordered by a state court to stop raising funds in California due to complaints. In a separate action the same year, the FEF, along with other LaRouche entities, was named in a lawsuit charging violations of the Federal Racketeer Influenced and Corrupt Organizations Act (RICO) that was filed in San Francisco. In an unusual move, the assets of the FEF and related entities were seized before the suit was unsealed, because the plaintiff's lawyer convinced the judge that the entities would hide their assets. In 1987, the FEF and five other LaRouche entities were prohibited from operating in Virginia. In 1988, the FEF was sued by the California Attorney General's office. The suit alleged that FEF fundraisers had flown down from Washington to take a 79-year-old Laguna Hills resident to her bank box, where they obtained from her stock certificates worth $104,452, described by her accountant as the woman's life savings. LaRouche said the charges were "totally frivolous" and the result of corruption in the Attorney General's office.
During a federal grand jury investigation into fundraising practices in 1985, the FEF and other LaRouche entities were given subpoenas requiring that they turn over documents and provide a keeper of records to testify. They failed to surrender the documents and the keepers of records they sent were appointed the day before. When ordered to give the home address of FEF Executive Director Gallagher, the address turned out to be a vacant lot. Five months after the subpoenas were served, and after several hearings on the matter, U.S. District Judge A. David Mazzone found the FEF in contempt of court and levied a fine of $10,000 per day to enforce the subpoena starting in March 1986. Similar fines were placed on other LaRouche organizations, totalling $45,000 per day. The FEF and the other LaRouche entities appealed the fines repeatedly, and were denied each time. They appealed to the U.S. Supreme Court, which refused to review the lower court decision.
In October 1986, hundreds of federal and state law enforcement conducted a coordinated raid on the offices of LaRouche enterprises, including those of the FEF, and seized the documents that had been subpoenaed in 1985. The FEF and other entities argued in court that the search warrants had been improperly executed, and that documents were taken in violation of their Fourth Amendment rights. The Court of Appeals denied their appeal.
Six months later, in April 1987, the federal prosecutors obtained an unusual involuntary bankruptcy procedure against the FEF and other groups in order to settle the contempt of court fines which had grown to $21.4 million. The government claimed that the LaRouche groups were selling properties in order to hide the cash. The petition was granted by Judge Martin V.B. Bostetter and the federal government seized the property of the FEF and other groups. Reportedly, they only recovered $86,000 in assets. In October 1989, the FEF's bankruptcy petition was reviewed by Judge Bostetter who dismissed it, effectively reversing his April 1987 ruling. He noted that two of the entities, including FEF, were nonprofit fund-raisers and therefore ineligible for involuntary bankruptcy actions. He found that the government's actions and representations in obtaining the bankruptcy had the effect of misleading the court as to the status of the organization.
Members of the scientific and fusion community noted the closing of the FEF publications. A full-page advertisement protesting the closures, published in IEEE Spectrum, was signed by people associated with the fusion and SDI fields, including 22 employees of the Lawrence Livermore National Laboratory.
Publications
International Journal of Fusion Energy
The International Journal of Fusion Energy was published intermittently from March 1977 to October 1985, putting out at least 11 issues. For some time Robert James Moon acted as editor-in-chief.
Fusion Magazine
Morris Levitt was the editor-in-chief as of 1979, but by the mid-1980s the job was taken over by Steven Bardwell, and by 1986 it was Carol White. Marjorie Mazel Hecht was the managing editor. By 1980, it claimed 80,000 subscribers.
21st Century Science and Technology
21st Century Science and Technology is a quarterly magazine established in 1988 following the federal government's closing down of its predecessor Fusion Magazine (1977 to 1987). It has the same editor and material as Fusion. The last hard copy issue of the magazine published was the Winter 2005-2006 issue. Subsequent issues are available in electronic PDF format only. The magazine deals with a variety of issues, including criticism of claims of anthropogenic global warming, promotion of the use of DDT and support for an alternative to the standard atomic theory, based on the "Moon model" of Robert James Moon. Notable writers include: J. Gordon Edwards, Zbigniew Jaworowski and Paul Marmet. According to Science and other sources, it is published by supporters of Lyndon LaRouche.
Notable books and pamphlets
The Physical Principles of Thermonuclear Explosive Devices by Friedwardt Winterberg, 1981
As 21st Century Science Associates:
The holes in the ozone scare: the scientific evidence that the sky isn't falling By Rogelio Maduro, Ralf Schauerhammer, 1992
Notes
References
External links
21st Century Science and Technology Official website
Archive Fusion Magazine & IJFE
The LaRouche Connection (PDF) at CIA.gov
Organizations established in 1974
1974 establishments in New York (state)
1986 disestablishments in New York (state)
Nuclear organizations
LaRouche movement | Fusion Energy Foundation | [
"Engineering"
] | 4,851 | [
"Nuclear organizations",
"Energy organizations"
] |
40,291,583 | https://en.wikipedia.org/wiki/Styelin%20A | Styelin A is an antibiotic peptide (nonadecapeptide) isolated from Styela clava.
Notes
Antibiotics | Styelin A | [
"Biology"
] | 28 | [
"Antibiotics",
"Biocides",
"Biotechnology products"
] |
40,291,666 | https://en.wikipedia.org/wiki/Halocyamine | Halocyamines are antibiotic peptides isolated from the ascidian Halocynthia roretzi.
Notes
Antibiotics
Peptides | Halocyamine | [
"Chemistry",
"Biology"
] | 29 | [
"Biomolecules by chemical classification",
"Biotechnology products",
"Halogen-containing alkaloids",
"Antibiotics",
"Alkaloids by chemical classification",
"Molecular biology",
"Biocides",
"Peptides"
] |
40,291,736 | https://en.wikipedia.org/wiki/Polydiscamide%20B | Polydiscamide B, and related compounds, are sea sponge isolates and human sensory neuron-specific G protein coupled receptor agonists.
References
Receptor agonists
4-Bromophenyl compounds
Tryptamines
Formamides
Sulfonic acids
Guanidines | Polydiscamide B | [
"Chemistry"
] | 60 | [
"Receptor agonists",
"Guanidines",
"Functional groups",
"Sulfonic acids",
"Neurochemistry"
] |
40,300,017 | https://en.wikipedia.org/wiki/AdS/CMT%20correspondence | In theoretical physics, anti-de Sitter/condensed matter theory correspondence is the program to apply string theory to condensed matter theory using the AdS/CFT correspondence.
Overview
Over the decades, experimental condensed matter physicists have discovered a number of exotic states of matter, including superconductors and superfluids. These states are described using the formalism of quantum field theory, but some phenomena are difficult to explain using standard field theoretic techniques. Some condensed matter theorists including Subir Sachdev hope that the AdS/CFT correspondence will make it possible to describe these systems in the language of string theory and learn more about their behavior.
So far some success has been achieved in using string theory methods to describe the transition of a superfluid to an insulator. A superfluid is a system of electrically neutral atoms that flows without any friction. Such systems are often produced in the laboratory using liquid helium, but recently experimentalists have developed new ways of producing artificial superfluids by pouring trillions of cold atoms into a lattice of criss-crossing lasers. These atoms initially behave as a superfluid, but as experimentalists increase the intensity of the lasers, they become less mobile and then suddenly transition to an insulating state. During the transition, the atoms behave in an unusual way. For example, the atoms slow to a halt at a rate that depends on the temperature and on the Planck constant, the fundamental parameter of quantum mechanics, which does not enter into the description of the other phases. This behavior has recently been understood by considering a dual description where properties of the fluid are described in terms of a higher dimensional black hole.
Criticism
Despite many physicists turning towards string-based methods to address problems in condensed matter physics, some theorists working in this area have expressed doubts about whether the AdS/CFT correspondence can provide the tools needed to realistically model real-world systems. In a letter to Physics Today, Nobel laureate Philip W. Anderson wrote
See also
AdS/QCD correspondence
Notes
References
Condensed matter physics
String theory | AdS/CMT correspondence | [
"Physics",
"Chemistry",
"Materials_science",
"Astronomy",
"Engineering"
] | 414 | [
"Astronomical hypotheses",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"String theory",
"Matter"
] |
40,304,821 | https://en.wikipedia.org/wiki/Triangular%20tiling%20honeycomb | The triangular tiling honeycomb is one of 11 paracompact regular space-filling tessellations (or honeycombs) in hyperbolic 3-space. It is called paracompact because it has infinite cells and vertex figures, with all vertices as ideal points at infinity. It has Schläfli symbol {3,6,3}, being composed of triangular tiling cells. Each edge of the honeycomb is surrounded by three cells, and each vertex is ideal with infinitely many cells meeting there. Its vertex figure is a hexagonal tiling.
Symmetry
It has two lower reflective symmetry constructions, as an alternated order-6 hexagonal tiling honeycomb, ↔ , and as from , which alternates 3 types (colors) of triangular tilings around every edge. In Coxeter notation, the removal of the 3rd and 4th mirrors, [3,6,3*] creates a new Coxeter group [3[3,3]], , subgroup index 6. The fundamental domain is 6 times larger. By Coxeter diagram there are 3 copies of the first original mirror in the new fundamental domain: ↔ .
Related Tilings
It is similar to the 2D hyperbolic infinite-order apeirogonal tiling, {∞,∞}, with infinite apeirogonal faces, and with all vertices on the ideal surface.
Related honeycombs
The triangular tiling honeycomb is a regular hyperbolic honeycomb in 3-space, and one of eleven paracompact honeycombs.
There are nine uniform honeycombs in the [3,6,3] Coxeter group family, including this regular form as well as the bitruncated form, t1,2{3,6,3}, with all truncated hexagonal tiling facets.
The honeycomb is also part of a series of polychora and honeycombs with triangular edge figures.
Rectified triangular tiling honeycomb
The rectified triangular tiling honeycomb, , has trihexagonal tiling and hexagonal tiling cells, with a triangular prism vertex figure.
Symmetry
A lower symmetry of this honeycomb can be constructed as a cantic order-6 hexagonal tiling honeycomb, ↔ . A second lower-index construction is ↔ .
Truncated triangular tiling honeycomb
The truncated triangular tiling honeycomb, , is a lower-symmetry form of the hexagonal tiling honeycomb, . It contains hexagonal tiling facets with a tetrahedral vertex figure.
Bitruncated triangular tiling honeycomb
The bitruncated triangular tiling honeycomb, , has truncated hexagonal tiling cells, with a tetragonal disphenoid vertex figure.
Cantellated triangular tiling honeycomb
The cantellated triangular tiling honeycomb, , has rhombitrihexagonal tiling, trihexagonal tiling, and triangular prism cells, with a wedge vertex figure.
Symmetry
It can also be constructed as a cantic snub triangular tiling honeycomb, , a half-symmetry form with symmetry [3+,6,3].
Cantitruncated triangular tiling honeycomb
The cantitruncated triangular tiling honeycomb, , has truncated trihexagonal tiling, truncated hexagonal tiling, and triangular prism cells, with a mirrored sphenoid vertex figure.
Runcinated triangular tiling honeycomb
The runcinated triangular tiling honeycomb, , has triangular tiling and triangular prism cells, with a hexagonal antiprism vertex figure.
Runcitruncated triangular tiling honeycomb
The runcitruncated triangular tiling honeycomb, , has hexagonal tiling, rhombitrihexagonal tiling, triangular prism, and hexagonal prism cells, with an isosceles-trapezoidal pyramid vertex figure.
Symmetry
It can also be constructed as a runcicantic snub triangular tiling honeycomb, , a half-symmetry form with symmetry [3+,6,3].
Omnitruncated triangular tiling honeycomb
The omnitruncated triangular tiling honeycomb, , has truncated trihexagonal tiling and hexagonal prism cells, with a phyllic disphenoid vertex figure.
Runcisnub triangular tiling honeycomb
The runcisnub triangular tiling honeycomb, , has trihexagonal tiling, triangular tiling, triangular prism, and triangular cupola cells. It is vertex-transitive, but not uniform, since it contains Johnson solid triangular cupola cells.
See also
Convex uniform honeycombs in hyperbolic space
Regular tessellations of hyperbolic 3-space
Paracompact uniform honeycombs
References
Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
The Beauty of Geometry: Twelve Essays (1999), Dover Publications, (Chapter 10, Regular Honeycombs in Hyperbolic Space) Table III
Jeffrey R. Weeks The Shape of Space, 2nd edition (Chapter 16-17: Geometries on Three-manifolds I, II)
Norman Johnson Uniform Polytopes, Manuscript
N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
N.W. Johnson: Geometries and Transformations, (2018) Chapter 13: Hyperbolic Coxeter groups
Regular 3-honeycombs
Self-dual tilings
Triangular tilings | Triangular tiling honeycomb | [
"Physics"
] | 1,151 | [
"Tessellation",
"Self-dual tilings",
"Symmetry"
] |
26,266,872 | https://en.wikipedia.org/wiki/Corium%20%28nuclear%20reactor%29 | Corium, also called fuel-containing material (FCM) or lava-like fuel-containing material (LFCM), is a material that is created in a nuclear reactor core during a nuclear meltdown accident. Resembling lava in consistency, it consists of a mixture of nuclear fuel, fission products, control rods, structural materials from the affected parts of the reactor, products of their chemical reaction with air, water, steam, and in the event that the reactor vessel is breached, molten concrete from the floor of the reactor room.
Composition and formation
The heat causing the melting of a reactor may originate from the nuclear chain reaction, but more commonly decay heat of the fission products contained in the fuel rods is the primary heat source. The heat production from radioactive decay drops quickly, as the short half-life isotopes that provide most of the heat decay away, with the curve of decay heat being a sum of the decay curves of numerous isotopes decaying at different exponential rates. A significant additional heat source can be the chemical reaction of hot metals with oxygen or steam.
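As a rough illustration of how quickly this heat source falls off after shutdown, the sketch below uses the empirical Way–Wigner approximation with illustrative reactor parameters; neither the coefficient nor the power level is taken from the article.

```python
# A rough sketch of how decay heat falls after shutdown, using the empirical
# Way-Wigner approximation; the coefficient and reactor power are illustrative,
# not taken from the article.
def decay_heat_fraction(t_s: float, t_operation_s: float) -> float:
    """Approximate decay power as a fraction of the pre-shutdown thermal power."""
    return 0.066 * (t_s ** -0.2 - (t_s + t_operation_s) ** -0.2)

if __name__ == "__main__":
    P_thermal = 3.0e9                       # 3 GW(thermal) reactor, illustrative
    t_operation = 365 * 24 * 3600.0         # one year at power, illustrative
    for t in (10.0, 3600.0, 24 * 3600.0, 30 * 24 * 3600.0):
        frac = decay_heat_fraction(t, t_operation)
        print(f"t = {t:>10.0f} s  decay heat ~ {frac * 100:4.1f}%  ({frac * P_thermal / 1e6:6.1f} MW)")
```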
Hypothetically, the temperature of corium depends on its internal heat generation dynamics: the quantities and types of isotopes producing decay heat, dilution by other molten materials, heat losses modified by the corium physical configuration, and heat losses to the environment. An accumulated mass of corium will lose less heat than a thinly spread layer. Corium of sufficient temperature can melt concrete. A solidified mass of corium can remelt if its heat losses drop, by being covered with heat insulating debris, or if water that is cooling the corium evaporates.
Crust can form on the corium mass, acting as a thermal insulator and hindering thermal losses. Heat distribution throughout the corium mass is influenced by different thermal conductivity between the molten oxides and metals. Convection in the liquid phase significantly increases heat transfer.
The molten reactor core releases volatile elements and compounds. These may be gas phase, such as molecular iodine or noble gases, or condensed aerosol particles after leaving the high temperature region. A high proportion of aerosol particles originates from the reactor control rod materials. The gaseous compounds may be adsorbed on the surface of the aerosol particles.
Composition and reactions
The composition of corium depends on the design type of the reactor, and specifically on the materials used in the control rods, coolant and reactor vessel structural materials. There are differences between pressurized water reactor (PWR) and boiling water reactor (BWR) coriums.
In contact with water, hot boron carbide from BWR reactor control rods forms first boron oxide and methane, then boric acid. Boron may also continue to contribute to reactions by the boric acid in an emergency coolant.
Zirconium from zircaloy, together with other metals, reacts with water and produces zirconium dioxide and hydrogen. The production of hydrogen is a major danger in reactor accidents. The balance between oxidizing and reducing chemical environments and the proportion of water and hydrogen influences the formation of chemical compounds. Variations in the volatility of core materials influence the ratio of released elements to unreleased elements. For instance, in an inert atmosphere, the silver-indium-cadmium alloy of control rods releases almost only cadmium. In the presence of water, the indium forms volatile indium(I) oxide and indium(I) hydroxide, which can evaporate and form an aerosol of indium(III) oxide. The indium oxidation is inhibited by a hydrogen-rich atmosphere, resulting in lower indium releases. Caesium and iodine from the fission products can react to produce volatile caesium iodide, which condenses as an aerosol.
During a meltdown, the temperature of the fuel rods increases and they can deform, in the case of zircaloy cladding, above . If the reactor pressure is low, the pressure inside the fuel rods ruptures the control rod cladding. High-pressure conditions push the cladding onto the fuel pellets, promoting formation of uranium dioxide–zirconium eutectic with a melting point of . An exothermic reaction occurs between steam and zirconium, which may produce enough heat to be self-sustaining without the contribution of decay heat from radioactivity. Hydrogen is released in an amount of about of hydrogen (at normal temperature/pressure) per kilogram of zircaloy oxidized. Hydrogen embrittlement may also occur in the reactor materials and volatile fission products can be released from damaged fuel rods. Between , the silver-indium-cadmium alloy of control rods melts, together with the evaporation of control rod cladding. At , the cladding oxides melt and begin to flow. At the uranium oxide fuel rods melt and the reactor core structure and geometry collapses. This can occur at lower temperatures if a eutectic uranium oxide-zirconium composition is formed. At that point, the corium is virtually free of volatile constituents that are not chemically bound, resulting in correspondingly lower heat production (by about 25%) as the volatile isotopes relocate.
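A back-of-the-envelope stoichiometric estimate of the hydrogen volume released per kilogram of zirconium oxidized by steam is sketched below; the ideal-gas molar volume and the assumption of complete oxidation are illustrative, not figures from the article.

```python
# Back-of-the-envelope check (not from the article) of the hydrogen volume released
# by steam oxidation of zirconium: Zr + 2 H2O -> ZrO2 + 2 H2.
M_ZR = 91.224e-3       # molar mass of zirconium, kg/mol
MOLAR_VOLUME = 0.0224  # m^3/mol for an ideal gas at roughly normal temperature and pressure

def hydrogen_volume_per_kg_zr() -> float:
    mol_zr = 1.0 / M_ZR           # moles of Zr in 1 kg
    mol_h2 = 2.0 * mol_zr         # two moles of H2 per mole of Zr oxidized
    return mol_h2 * MOLAR_VOLUME  # m^3 of H2 per kg of zirconium

if __name__ == "__main__":
    print(f"~{hydrogen_volume_per_kg_zr():.2f} m^3 of H2 per kg of zirconium oxidized")
```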
The temperature of corium can be as high as in the first hours after the meltdown, potentially reaching over . A large amount of heat can be released by reaction of metals (particularly zirconium) in corium with water. Flooding of the corium mass with water, or the drop of molten corium mass into a water pool, may result in a temperature spike and production of large amounts of hydrogen, which can result in a pressure spike in the containment vessel. The steam explosion resulting from such sudden corium-water contact can disperse the materials and form projectiles that may damage the containment vessel by impact. Subsequent pressure spikes can be caused by combustion of the released hydrogen. Detonation risks can be reduced by the use of catalytic hydrogen recombiners.
Brief re-criticality (resumption of neutron-induced fission) in parts of the corium is a theoretical but remote possibility with commercial reactor fuel, due to low enrichment and the loss of moderator. This condition could be detected by presence of short life fission products long after the meltdown, in amounts that are too high to remain from the pre-meltdown reactor or be due to spontaneous fission of reactor-created actinides.
Reactor vessel breaching
In the absence of adequate cooling, the materials inside of the reactor vessel overheat and deform as they undergo thermal expansion, and the reactor structure fails once the temperature reaches the melting point of its structural materials. The corium melt then accumulates at the bottom of the reactor vessel. In the case of adequate cooling of the corium, it can solidify and the damage is limited to the reactor itself. Corium may also melt through the reactor vessel and flow out or be ejected as a molten stream by the pressure inside the reactor vessel. The reactor vessel failure may be caused by heating of its vessel bottom by the corium, resulting first in creep failure and then in breach of the vessel. Cooling water from above the corium layer, in sufficient quantity, may obtain a thermal equilibrium below the metal creep temperature, without reactor vessel failure.
If the vessel is sufficiently cooled, a crust between the corium melt and the reactor wall can form. The layer of molten steel at the top of the oxide may create a zone of increased heat transfer to the reactor wall; this condition, known as "heat knife", increases the probability of formation of a localized weakening of the side of the reactor vessel and subsequent corium leak.
In the case of high pressure inside the reactor vessel, breaching of its bottom may result in high-pressure blowout of the corium mass. In the first phase, only the melt itself is ejected; later a depression may form in the center of the hole and gas is discharged together with the melt with a rapid decrease of pressure inside the reactor vessel; the high temperature of the melt also causes rapid erosion and enlargement of the vessel breach. If the hole is in the center of the bottom, nearly all corium can be ejected. A hole in the side of the vessel may lead to only partial ejection of corium, with a retained portion left inside the reactor vessel.
Melt-through of the reactor vessel may take from a few tens of minutes to several hours.
After breaching the reactor vessel, the conditions in the reactor cavity below the core govern the subsequent production of gases. If water is present, steam and hydrogen are generated; dry concrete results in production of carbon dioxide and a smaller amount of steam.
Interactions with concrete
Thermal decomposition of concrete produces water vapor and carbon dioxide, which may further react with the metals in the melt, oxidizing the metals, and reducing the gases to hydrogen and carbon monoxide. The decomposition of the concrete and volatilization of its alkali components is an endothermic process. Aerosols released during this phase are primarily based on concrete-originating silicon compounds; otherwise volatile elements, for example, caesium, can be bound in nonvolatile insoluble silicates.
Several reactions occur between the concrete and the corium melt. Free and chemically bound water is released from the concrete as steam. Calcium carbonate is decomposed, producing carbon dioxide and calcium oxide. Water and carbon dioxide penetrate the corium mass, exothermically oxidizing the non-oxidized metals present in the corium and producing gaseous hydrogen and carbon monoxide; large amounts of hydrogen can be produced. The calcium oxide, silica, and silicates melt and are mixed into the corium. The oxide phase, in which the nonvolatile fission products are concentrated, can stabilize at temperatures of for a considerable period of time. An eventually present layer of more dense molten metal, containing fewer radioisotopes (Ru, Tc, Pd, etc., initially composed of molten zircaloy, iron, chromium, nickel, manganese, silver, and other construction materials and metallic fission products and tellurium bound as zirconium telluride) than the oxide layer (which concentrates Sr, Ba, La, Sb, Sn, Nb, Mo, etc. and is initially composed primarily of zirconium dioxide and uranium dioxide, possibly with iron oxide and boron oxides), can form an interface between the oxides and the concrete farther below, slowing down the corium penetration and solidifying within a few hours. The oxide layer produces heat primarily by decay heat, while the principal heat source in the metal layer is exothermic reaction with the water released from the concrete. Decomposition of concrete and volatilization of the alkali metal compounds consumes a substantial amount of heat.
The fast erosion phase of the concrete basemat lasts for about an hour and progresses to about one meter in depth, then slows to several centimeters per hour, and stops completely when the melt cools below the decomposition temperature of concrete (about ). Complete melt-through can occur in several days even through several meters of concrete; the corium then penetrates several meters into the underlying soil, spreads around, cools and solidifies.
During the interaction between corium and concrete, very high temperatures can be achieved. Less volatile aerosols of Ba, Ce, La, Sr, and other fission products are formed during this phase and introduced into the containment building at a time when most of the early aerosols are already deposited. Tellurium is released with the progress of zirconium telluride decomposition. Bubbles of gas flowing through the melt promote aerosol formation.
The thermal hydraulics of corium-concrete interactions (CCI, or also MCCI, "molten core-concrete interactions") is sufficiently understood.
The dynamics of the movement of corium in and outside the reactor vessel is highly complex, however, and the number of possible scenarios is wide; slow drip of melt into an underlying water pool can result in complete quenching, while the fast contact of a large mass of corium with water may result in a destructive steam explosion. Corium may be completely retained by the reactor vessel, or the reactor floor or some of the instrument penetration holes can be melted through.
The thermal load of corium on the floor below the reactor vessel can be assessed by a grid of fiber optic sensors embedded in the concrete. Pure silica fibers are needed as they are more resistant to high radiation levels.
Some reactor building designs, for example, the EPR, incorporate dedicated corium spread areas (core catchers), where the melt can deposit without coming in contact with water and without excessive reaction with concrete.
Only later, when a crust is formed on the melt, limited amounts of water can be introduced to cool the mass.
Materials based on titanium dioxide and neodymium(III) oxide seem to be more resistant to corium than concrete.
Deposition of corium on the containment vessel inner surface, e.g. by high-pressure ejection from the reactor pressure vessel, can cause containment failure by direct containment heating (DCH).
Specific incidents
Three Mile Island accident
During the Three Mile Island accident, a slow partial meltdown of the reactor core occurred. About of material melted and relocated in about 2 minutes, approximately 224 minutes after the reactor scram. A pool of corium formed at the bottom of the reactor vessel, but the reactor vessel was not breached. The layer of solidified corium ranged in thickness from 5 to 45 cm.
Samples were obtained from the reactor. Two masses of corium were found, one within the fuel assembly, one on the lower head of the reactor vessel. The samples were generally dull grey, with some yellow areas.
The mass was found to be homogeneous, primarily composed of molten fuel and cladding. The elemental constitution was about 70 wt.% uranium, 13.75 wt.% zirconium, 13 wt.% oxygen, with the balance being stainless steel and Inconel incorporated into the melt; the loose debris showed somewhat lower content of uranium (about 65 wt.%) and higher content of structural metals. The decay heat of corium at 224 minutes after scram was estimated to be 0.13 W/g, falling to 0.096 W/g at scram+600 minutes. Noble gases, caesium and iodine were absent, signifying their volatilization from the hot material. The samples were fully oxidized, signifying the presence of sufficient amounts of steam to oxidize all available zirconium.
Some samples contained a small amount of metallic melt (less than 0.5%), composed of silver and indium (from the control rods). A secondary phase composed of chromium(III) oxide was found in one of the samples. Some metallic inclusions contained silver but not indium, suggesting a sufficiently high temperature to cause volatilization of both cadmium and indium. Almost all metallic components, with the exception of silver, were fully oxidized; even silver was oxidized in some regions. The iron- and chromium-rich inclusions probably originate from a molten nozzle that did not have enough time to be distributed through the melt.
The bulk density of the samples varied between 7.45 and 9.4 g/cm3 (the densities of UO2 and ZrO2 are 10.4 and 5.6 g/cm3). The porosity of samples varied between 5.7% and 32%, averaging 18±11%. Striated interconnected porosity was found in some samples, suggesting the corium was liquid for a sufficient time for formation of bubbles of steam or vaporized structural materials and their transport through the melt. A well-mixed (U,Zr)O2 solid solution indicates a peak melt temperature between .
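These porosity figures follow from the measured bulk densities once a pore-free reference density is assumed. The sketch below is an editorial illustration of that relation only; the example bulk density and the assumed pore-free density of the (U,Zr)O2 solid solution are illustrative numbers, not values reported from the samples.

```python
# Porosity estimate from a measured bulk density (illustrative sketch).
rho_bulk = 8.2    # g/cm3, an example value inside the reported 7.45-9.4 range
rho_solid = 9.9   # g/cm3, assumed density of the pore-free (U,Zr)O2 solid solution,
                  # chosen between the densities of UO2 (10.4) and ZrO2 (5.6)
porosity = 1.0 - rho_bulk / rho_solid   # porosity = void volume fraction
print(f"estimated porosity: {porosity:.1%}")   # about 17% for these assumed inputs
```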
The microstructure of the solidified material shows two phases: (U,Zr)O2 and (Zr,U)O2. The zirconium-rich phase was found around the pores and on the grain boundaries and contains some iron and chromium in the form of oxides. This phase segregation suggests slow gradual cooling instead of fast quenching, estimated by the phase separation type to be between 3–72 hours.
Chernobyl accident
The largest known amounts of corium were formed during the Chernobyl disaster. The molten mass of the reactor core dripped beneath the reactor vessel and is now solidified in the form of stalactites, stalagmites, and lava flows; the best-known formation is the "Elephant's Foot", located under the bottom of the reactor in a Steam Distribution Corridor.
The corium was formed in three phases.
The first phase lasted only several seconds, with temperatures locally exceeding , when a zirconium-uranium-oxide melt formed from no more than 30% of the core. Examination of a hot particle showed a formation of Zr-U-O and UOx-Zr phases; the 0.9-mm-thick niobium zircaloy cladding formed successive layers of UOx, UOx+Zr, Zr-U-O, metallic Zr(O), and zirconium dioxide. These phases were found individually or together in the hot particles dispersed from the core.
The second stage, lasting for six days, was characterized by interaction of the melt with silicate structural materials—sand, concrete, serpentinite. The molten mixture is enriched with silica and silicates.
The third stage followed, when lamination of the fuel occurred and the melt broke through into the floors below and solidified there.
The Chernobyl corium is composed of the reactor uranium dioxide fuel, its zircaloy cladding, molten concrete, as well as other materials in and below the reactor, and decomposed and molten serpentinite packed around the reactor as its thermal insulation. Analysis has shown that the corium was heated to at most , and remained above for at least 4 days.
The molten corium settled in the bottom of the reactor shaft, forming a layer of graphite debris on its top. Eight days after the meltdown the melt penetrated the lower biological shield and spread on the reactor room floor, releasing radionuclides. Further radioactivity was released when the melt came in contact with water.
Three different lavas are present in the basement of the reactor building: black, brown and a porous ceramic. They are silicate glasses with inclusions of other materials present within them. The porous lava is brown lava that had dropped into water thus being cooled rapidly.
During radiolysis of the Pressure Suppression Pool water below the Chernobyl reactor, hydrogen peroxide was formed. The hypothesis that the pool water was partially converted to H2O2 is confirmed by the identification of the white crystalline minerals studtite and metastudtite in the Chernobyl lavas, the only minerals that contain peroxide.
The coriums consist of a highly heterogeneous silicate glass matrix with inclusions. Distinct phases are present:
uranium oxides, from the fuel pellets
uranium oxides with zirconium (UOx+Zr)
Zr-U-O
zirconium dioxide with uranium
zirconium silicate with up to 10% of uranium as a solid solution, (Zr,U)SiO4, called chernobylite
uranium-containing glass, the glass matrix material itself; mainly a calcium aluminosilicate with small amount of magnesium oxide, sodium oxide, and zirconium dioxide
metal, present as solidified layers and as spherical inclusions of Fe-Ni-Cr alloy in the glass phase
Five types of material can be identified in Chernobyl corium:
Black ceramics, a glass-like coal-black material with a surface pitted with many cavities and pores. Usually located near the places where corium formed. Its two versions contain about 4–5 wt.% and about 7–8 wt.% of uranium.
Brown ceramics, a glass-like brown material usually glossy but also dull. Usually located on a layer of a solidified molten metal. Contains many very small metal spheres. Contains 8–10 wt.% of uranium. Multicolored ceramics contain 6–7% of fuel.
Slag-like granulated corium, slag-like irregular gray-magenta to dark-brown glassy granules with crust. Formed by prolonged contact of brown ceramics with water, located in large heaps in both levels of the Pressure Suppression Pool.
Pumice, friable pumice-like gray-brown porous formations formed from molten brown corium foamed with steam when immersed in water. Located in the pressure suppression pool in large heaps near the sink openings, where they were carried by water flow as they were light enough to float.
Metal, molten and solidified. Mostly located in the Steam Distribution Corridor. Also present as small spherical inclusions in all the oxide-based materials above. Does not contain fuel per se, but contains some metallic fission products, e.g. ruthenium-106.
The molten reactor core accumulated in room 305/2, until it reached the edges of the steam relief valves; then it migrated downward to the Steam Distribution Corridor. It also broke or burned through into room 304/3. The corium flowed from the reactor in three streams. Stream 1 was composed of brown lava and molten steel; steel formed a layer on the floor of the Steam Distribution Corridor, on the Level +6, with brown corium on its top. From this area, brown corium flowed through the Steam Distribution Channels into the Pressure Suppression Pools on the Level +3 and Level 0, forming porous and slag-like formations there. Stream 2 was composed of black lava, and entered the other side of the Steam Distribution Corridor. Stream 3, also composed of black lavas, flowed to other areas under the reactor. The well-known "Elephant's Foot" structure is composed of two metric tons of black lava, forming a multilayered structure similar to tree bark. It is said to be melted deep into the concrete. The material is dangerously radioactive and hard and strong, and using remote controlled systems was not possible due to high radiation interfering with electronics.
The Chernobyl melt was a silicate melt that contained inclusions of Zr/U phases, molten steel and high levels of uranium zirconium silicate ("chernobylite", a black and yellow technogenic mineral). The lava flow consists of more than one type of material—a brown lava and a porous ceramic material have been found. The uranium to zirconium ratio in different parts of the solid differs a lot, in the brown lava a uranium-rich phase with a U:Zr ratio of 19:3 to about 19:5 is found. The uranium-poor phase in the brown lava has a U:Zr ratio of about 1:10. It is possible from the examination of the Zr/U phases to determine the thermal history of the mixture. It can be shown that before the explosion, in part of the core the temperature was higher than 2,000 °C, while in some areas the temperature was over .
The composition of some of the corium samples is as follows:
Degradation of the lava
The corium undergoes degradation. The Elephant's Foot, hard and strong shortly after its formation, is now cracked enough that a cotton ball treated with glue can remove 1–2 centimeters of material. The shape of the structure itself has changed as the material slides down and settles. The corium temperature is now only slightly different from ambient. The material is therefore subject to both day–night temperature cycling and weathering by water. The heterogeneous nature of corium and the different thermal expansion coefficients of its components cause material deterioration with thermal cycling. Large amounts of residual stress were introduced during solidification due to the uncontrolled cooling rate. Water seeping into pores and microcracks freezes there; this is the same process that creates potholes in roads, and it accelerates cracking.
Corium (and also highly irradiated uranium fuel) has the property of spontaneous dust generation, or spontaneous self-sputtering of the surface. The alpha decay of isotopes inside the glassy structure causes Coulomb explosions, degrading the material and releasing submicron particles from its surface. The level of radioactivity is such that during 100 years, the lava's self irradiation ( α decays per gram and 2 to of β or γ) will fall short of the level required to greatly change the properties of glass (10^18 α decays per gram and 10^8 to 10^9 Gy of β or γ). Also the lava's rate of dissolution in water is very low (10^−7 g·cm−2·day−1), suggesting that the lava is unlikely to dissolve in water.
It is unclear how long the ceramic form will retard the release of radioactivity. From 1997 to 2002, a series of papers was published suggesting that the self-irradiation of the lava would convert all 1,200 tons into a submicrometre, mobile powder within a few weeks. However, it has since been reported that the degradation of the lava is likely to be a slow and gradual process rather than a sudden, rapid one. The same paper states that the loss of uranium from the wrecked reactor is only per year. This low rate of uranium leaching suggests that the lava is resisting its environment. The paper also states that the leaching rate of the lava will decrease when the shelter is improved.
Some of the surfaces of the lava flows have started to show new uranium minerals such as UO3·2H2O (eliantinite), (UO2)O2·4H2O (studtite), uranyl carbonate (rutherfordine), čejkaite (), and the unnamed compound Na3U(CO3)2·2H2O. These are soluble in water, allowing mobilization and transport of uranium. They look like whitish yellow patches on the surface of the solidified corium. These secondary minerals show several hundred times lower concentration of plutonium and several times higher concentration of uranium than the lava itself.
Fukushima Daiichi
The March 11, 2011, Tōhoku earthquake and tsunami caused various nuclear accidents, the worst of which was the Fukushima Daiichi nuclear disaster. At an estimated eighty minutes after the tsunami strike, the temperatures inside Unit 1 of the Fukushima Daiichi Nuclear Power Plant reached over 2,300 ˚C, causing the fuel assembly structures, control rods and nuclear fuel to melt and form corium. (The physical nature of the damaged fuel has not been fully determined but it is assumed to have become molten.) The reactor core isolation cooling system (RCIC) was successfully activated for Unit 3; the Unit 3 RCIC subsequently failed, however, and at about 09:00 on March 13, the nuclear fuel had melted into corium. Unit 2 retained RCIC functions slightly longer and corium is not believed to have started to pool on the reactor floor until around 18:00 on March 14. TEPCO believes the fuel assembly fell out of the pressure vessel to the floor of the primary containment vessel, and that it has found fuel debris on the floor of the primary containment vessel.
References
External links
INSP Chornobyl Photobook (captions)
Nuclear chemistry
Nuclear accidents and incidents
Nuclear reactor safety | Corium (nuclear reactor) | [
"Physics",
"Chemistry"
] | 5,693 | [
"Nuclear accidents and incidents",
"Nuclear chemistry",
"nan",
"Nuclear physics",
"Radioactivity"
] |
26,270,420 | https://en.wikipedia.org/wiki/Formazine | Formazine (formazin) is a heterocyclic polymer produced by reaction of hexamethylenetetramine with hydrazine sulfate.
The hexamethylenetetramine tetrahedral cage-like structure, similar to adamantane, serves as molecular building block to form a tridimensional polymeric network.
Formazine is very poorly soluble in water and, when directly synthesized in aqueous solution by simply mixing its two highly soluble precursors, it forms small colloidal particles. These organic colloids are responsible for the light scattering of formazine suspensions in all directions. The optical properties of colloidal suspensions depend on the size and size distribution of the suspended particles. Because formazine is a stable synthetic material with uniform particle size, it is commonly used as a standard to calibrate turbidimeters and to control the reproducibility of their measurements. Formazine use was first proposed by Kingsbury et al. (1926) for the rapid standardization of turbidity measurements of albumin in urine. The unit is called the Formazin Turbidity Unit (FTU). A suspension of 1.25 mg/L hydrazine sulfate and 12.5 mg/L hexamethylenetetramine in water has a turbidity of one FTU.
In the United States environmental monitoring the turbidity standard unit is called Nephelometric Turbidity Units (NTU), while the international standard unit is called Formazin Nephelometric Unit (FNU). The most generally applicable unit is Formazin Turbidity Unit (FTU), although different measurement methods can give quite different values as reported in FTU.
Turbidity measurement
For turbidity measurement, a formazine suspension is prepared by mixing solutions of 10 g/L hydrazine sulfate and 100 g/L hexamethylenetetramine with ultrapure water. The resulting solution is left for 48 hours, at 25 °C ±1 °C, for the suspension to develop. This produces a suspension with a turbidity value of 4000 NTU/FAU/FTU/FNU. This is then diluted to a value to suit the instrument range. There is no straightforward relationship between FTU/FAU and NTU/FNU because it depends on the optical characteristics of the particular matter in the sample. A general difficulty encountered for preparing formazine standards is to obtain sufficiently reproducible and accurate results. The preparation temperature is essential because it affects the size of the formazine particles. Uncertainties related to temperature fluctuations are of the order of 1.2% per °C.
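The preparation of working standards from the 4000 FTU stock is a simple dilution, and the quoted concentrations are mutually consistent. The short sketch below is an editorial illustration only; it assumes the two precursor solutions are mixed in equal volumes, which the text does not state explicitly.

```python
# Illustrative dilution sketch for formazine working standards (not a lab procedure).
def stock_volume_ml(target_ftu, final_volume_ml, stock_ftu=4000.0):
    """Volume of 4000 FTU stock needed, from the dilution relation C1*V1 = C2*V2."""
    return target_ftu * final_volume_ml / stock_ftu

# Example: a 100 mL standard of 40 FTU needs 1 mL of stock made up to 100 mL with water.
print(stock_volume_ml(target_ftu=40, final_volume_ml=100))   # -> 1.0

# Consistency check with the 1 FTU definition quoted earlier: assuming equal volumes
# of the 10 g/L hydrazine sulfate and 100 g/L hexamethylenetetramine solutions are
# mixed, the 4000 FTU suspension contains 5 g/L and 50 g/L of the precursors,
# i.e. 1.25 mg/L and 12.5 mg/L per FTU.
print(5000 / 4000, 50000 / 4000)   # -> 1.25 12.5  (mg/L per FTU)
```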
The purity of the water used in the preparation of the formazine dispersion is also important, as it must not initially contain colloidal particles. Experience shows that water filtered as required has a residual scatter of about 0.02 FTU = 20 mFTU (inherent brightening effect). This has to be taken into account during calibration and for the detection of very low turbidity levels. The commercially available aqueous dispersions of formazine standard are often traceable according to the EN ISO 7027 norm. The shelf life of formazine dispersions does not exceed a few months (a few weeks if the bottle has been opened) as their characteristics evolve with time due to the ageing of the colloidal particles (Ostwald ripening: change in their size distribution and in their number due to their coalescence/aggregation) and the possible development of micro-organisms such as bacteria, microscopic fungi, and yeast.
See also
Nephelometer
ISO 7027
Water purification
References
External links
EPA water quality: guide for turbidity measurements
Analytical standards
Colloids
Colloidal chemistry
Measuring instruments
Water quality indicators
Water treatment | Formazine | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering",
"Environmental_science"
] | 801 | [
"Colloidal chemistry",
"Water treatment",
"Surface science",
"Colloids",
"Water pollution",
"Environmental engineering",
"Measuring instruments",
"Chemical mixtures",
"Condensed matter physics",
"Analytical standards",
"Water quality indicators",
"Water technology"
] |
26,270,834 | https://en.wikipedia.org/wiki/Consensus%20dynamics | Consensus dynamics or agreement dynamics is an area of research lying at the intersection of systems theory and graph theory. A major topic of investigation is the agreement or consensus problem in multi-agent systems that concerns processes by which a collection of interacting agents achieve a common goal. Networks of agents that exchange information to reach consensus include: physiological systems, gene networks, large-scale energy systems and fleets of vehicles on land, in the air or in space. The agreement protocol or consensus protocol is an unforced dynamical system that is governed by the interconnection topology and the initial condition for each agent. Other problems are the rendezvous problem, synchronization, flocking, formation control. One solution paradigm is distributed constraint reasoning.
Simulation is a useful tool for investigating the argumentation of different subjects; it can be used to measure whether an argument provides additional truth value to a debate.
See also
Consensus (computer science)
References
Ghapani, S.; Mei, J.; Ren, W.; Song, Y. (2016), "Fully distributed flocking with a moving leader for lagrange networks with parametric uncertainties", Automatica, 67–76, doi:10.1016/j.automatica.2016.01.004
Multi-agent systems
Network theory
Control theory
Graph theory
Game theory
Distributed computing
Constraint programming | Consensus dynamics | [
"Mathematics",
"Engineering"
] | 275 | [
"Discrete mathematics",
"Applied mathematics",
"Control theory",
"Graph theory",
"Multi-agent systems",
"Combinatorics",
"Network theory",
"Game theory",
"Mathematical relations",
"Artificial intelligence engineering",
"Dynamical systems"
] |
26,271,144 | https://en.wikipedia.org/wiki/Functional%20renormalization%20group | In theoretical physics, functional renormalization group (FRG) is an implementation of the renormalization group (RG) concept which is used in quantum and statistical field theory, especially when dealing with strongly interacting systems. The method combines functional methods of quantum field theory with the intuitive renormalization group idea of Kenneth G. Wilson. This technique allows to interpolate smoothly between the known microscopic laws and the complicated macroscopic phenomena in physical systems. In this sense, it bridges the transition from simplicity of microphysics to complexity of macrophysics. Figuratively speaking, FRG acts as a microscope with a variable resolution. One starts with a high-resolution picture of the known microphysical laws and subsequently decreases the resolution to obtain a coarse-grained picture of macroscopic collective phenomena. The method is nonperturbative, meaning that it does not rely on an expansion in a small coupling constant. Mathematically, FRG is based on an exact functional differential equation for a scale-dependent effective action.
The flow equation for the effective action
In quantum field theory, the effective action is an analogue of the classical action functional and depends on the fields of a given theory. It includes all quantum and thermal fluctuations. Variation of the effective action yields exact quantum field equations, for example for cosmology or the electrodynamics of superconductors. Mathematically, the effective action is the generating functional of the one-particle irreducible Feynman diagrams. Interesting physics, such as propagators and effective couplings for interactions, can be straightforwardly extracted from it. In a generic interacting field theory, however, the effective action is difficult to obtain. FRG provides a practical tool to calculate it employing the renormalization group concept.
The central object in FRG is a scale-dependent effective action functional, often called the average action or flowing action. The dependence on the RG sliding scale is introduced by adding a regulator (infrared cutoff) to the full inverse propagator. Roughly speaking, the regulator decouples slow modes with momenta below the sliding scale by giving them a large mass, while high-momentum modes are not affected. Thus, the flowing action includes all quantum and statistical fluctuations with momenta above the sliding scale. The flowing action obeys the exact functional flow equation
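In a commonly used convention (the symbols here, the flowing action Γ_k, its second functional derivative Γ_k^{(2)}, the regulator R_k and the scale derivative ∂_k, are supplied for illustration and may differ from other notations in the literature), the equation can be written as

\partial_k \Gamma_k[\phi] \;=\; \frac{1}{2}\,\operatorname{STr}\!\left[\left(\Gamma_k^{(2)}[\phi] + R_k\right)^{-1}\partial_k R_k\right]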
This equation was derived by Christof Wetterich and Tim R. Morris in 1993. Here the scale derivative is taken with respect to the RG scale at fixed values of the fields, and the functional derivatives of the flowing action are taken from the left-hand side and the right-hand side, respectively, due to the tensor structure of the equation. This feature is often shown in simplified form using the second functional derivative of the effective action.
The functional differential equation for the flowing action must be supplemented with an initial condition at the microscopic ultraviolet scale, where the flowing action equals the "classical action" describing the physics at that scale. Importantly, in the infrared limit the full effective action is obtained. In the Wetterich equation the supertrace sums over momenta, frequencies, internal indices, and fields (taking bosons with a plus and fermions with a minus sign). The exact flow equation for the flowing action has a one-loop structure. This is an important simplification compared to perturbation theory, where multi-loop diagrams must be included. The second functional derivative is the full inverse field propagator modified by the presence of the regulator.
The renormalization group evolution of can be illustrated in the theory space, which is a multi-dimensional space of all possible running couplings allowed by the symmetries of the problem. As schematically shown in the figure, at the microscopic ultraviolet scale one starts with the initial condition .
As the sliding scale is lowered, the flowing action evolves in the theory space according to the functional flow equation. The choice of the regulator is not unique, which introduces some scheme dependence into the renormalization group flow. For this reason, different choices of the regulator correspond to the different paths in the figure. At the infrared scale , however, the full effective action is recovered for every choice of the cut-off , and all trajectories meet at the same point in the theory space.
In most cases of interest the Wetterich equation can only be solved approximately. Usually some type of expansion of is performed, which is then truncated at finite order leading to a finite system of ordinary differential equations. Different systematic expansion schemes (such as the derivative expansion, vertex expansion, etc.) were developed. The choice of the suitable scheme should be physically motivated and depends on a given problem. The expansions do not necessarily involve a small parameter (like an interaction coupling constant) and thus they are, in general, of nonperturbative nature.
Note however, that due to multiple choices regarding (prefactor-)conventions and the concrete definition of the effective action, one can find other (equivalent) versions of the Wetterich equation in the literature.
Aspects of functional renormalization
The Wetterich flow equation is an exact equation. However, in practice, the functional differential equation must be truncated, i.e. it must be projected to functions of a few variables or even onto some finite-dimensional sub-theory space. As in every nonperturbative method, the question of error estimate is nontrivial in functional renormalization. One way to estimate the error in FRG is to improve the truncation in successive steps, i.e. to enlarge the sub-theory space by including more and more running couplings. The difference in the flows for different truncations gives a good estimate of the error. Alternatively, one can use different regulator functions in a given (fixed) truncation and determine the difference of the RG flows in the infrared for the respective regulator choices. If bosonization is used, one can check the insensitivity of final results with respect to different bosonization procedures.
In FRG, as in all RG methods, a lot of insight about a physical system can be gained from the topology of RG flows. Specifically, identification of fixed points of the renormalization group evolution is of great importance. Near fixed points the flow of running couplings effectively stops and RG -functions approach zero. The presence of (partially) stable infrared fixed points is closely connected to the concept of universality. Universality manifests itself in the observation that some very distinct physical systems have the same critical behavior. For instance, to good accuracy, critical exponents of the liquid–gas phase transition in water and the ferromagnetic phase transition in magnets are the same. In the renormalization group language, different systems from the same universality class flow to the same (partially) stable infrared fixed point. In this way macrophysics becomes independent of the microscopic details of the particular physical model.
Compared to the perturbation theory, functional renormalization does not make a strict distinction between renormalizable and nonrenormalizable couplings. All running couplings that are allowed by symmetries of the problem are generated during the FRG flow. However, the nonrenormalizable couplings approach partial fixed points very quickly during the evolution towards the infrared, and thus the flow effectively collapses on a hypersurface of the dimension given by the number of renormalizable couplings. Taking the nonrenormalizable couplings into account allows to study nonuniversal features that are sensitive to the concrete choice of the microscopic action and the finite ultraviolet cutoff .
The Wetterich equation can be obtained from the Legendre transformation of the Polchinski functional equation, derived by Joseph Polchinski in 1984. The concept of the effective average action, used in FRG, is, however, more intuitive than the flowing bare action in the Polchinski equation. In addition, the FRG method proved to be more suitable for practical calculations.
Typically, low-energy physics of strongly interacting systems is described by macroscopic degrees of freedom (i.e. particle excitations) which are very different from microscopic high-energy degrees of freedom. For instance, quantum chromodynamics is a field theory of interacting quarks and gluons. At low energies, however, proper degrees of freedom are baryons and mesons. Another example is the BEC/BCS crossover problem in condensed matter physics. While the microscopic theory is defined in terms of two-component nonrelativistic fermions, at low energies a composite (particle-particle) dimer becomes an additional degree of freedom, and it is advisable to include it explicitly in the model. The low-energy composite degrees of freedom can be introduced in the description by the method of partial bosonization (Hubbard–Stratonovich transformation). This transformation, however, is done once and for all at the UV scale . In FRG a more efficient way to incorporate macroscopic degrees of freedom was introduced, which is known as flowing bosonization or rebosonization. With the help of a scale-dependent field transformation, this allows to perform the Hubbard–Stratonovich transformation continuously at all RG scales .
Functional renormalization-group for Wick-ordered effective interaction
Contrary to the flow equation for the effective action, this scheme is formulated for the effective interaction
which generates n-particle interaction vertices, amputated by the bare propagators ;
is the "standard" generating functional for the n-particle Green functions.
The Wick ordering of effective interaction with respect to Green function can be defined by
.
where is the Laplacian in the field space. This operation is similar to Normal order and excludes from the interaction all possible terms, formed by a convolution of source fields with respective Green function D. Introducing some cutoff the Polchinskii equation
takes the form of the Wick-ordered equation
where
Applications
The method was applied to numerous problems in physics, e.g.:
In statistical field theory, FRG provided a unified picture of phase transitions in classical linear O(N)-symmetric scalar theories in different dimensions d, including critical exponents for d = 3 and the Berezinskii–Kosterlitz–Thouless phase transition for d = 2, N = 2.
In gauge quantum field theory, FRG was used, for instance, to investigate the chiral phase transition and infrared properties of QCD and its large-flavor extensions.
In condensed matter physics, the method proved to be successful to treat lattice models (e.g. the Hubbard model or frustrated magnetic systems), repulsive Bose gas, BEC/BCS crossover for two-component Fermi gas, Kondo effect, disordered systems and nonequilibrium phenomena.
Application of FRG to gravity provided arguments in favor of nonperturbative renormalizability of quantum gravity in four spacetime dimensions, known as the asymptotic safety scenario.
In mathematical physics FRG was used to prove renormalizability of different field theories.
See also
Renormalization group
Renormalization
Critical phenomena
Scale invariance
Asymptotic safety in quantum gravity
References
Papers
Pedagogic reviews
Statistical mechanics
Renormalization group
Scaling symmetries
Fixed points (mathematics) | Functional renormalization group | [
"Physics",
"Mathematics"
] | 2,266 | [
"Symmetry",
"Physical phenomena",
"Mathematical analysis",
"Fixed points (mathematics)",
"Critical phenomena",
"Renormalization group",
"Topology",
"Statistical mechanics",
"Scaling symmetries",
"Dynamical systems"
] |
43,234,914 | https://en.wikipedia.org/wiki/Spectral%20line%20ratios | The analysis of line intensity ratios is an important tool to obtain information about laboratory and space plasmas. In emission spectroscopy, the intensity of spectral lines can provide various information about the plasma (or gas) condition. It might be used to determine the temperature or density of the plasma. Since the measurement of an absolute intensity in an experiment can be challenging, the ratio of different spectral line intensities can be used to achieve information about the plasma, as well.
Theory
The emission intensity density of an atomic transition from the upper state to the lower state is:
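In a standard notation (the symbols are supplied here for illustration and match the definitions below; depending on convention a geometric factor such as 1/4π may also appear), labelling the upper state u and the lower state l, the relation can be written as

I_{ul} = n_u \, A_{ul} \, h\nu_{ul}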
where:
n_u is the density of ions in the upper state,
hν_ul is the energy of the emitted photon, which is the product of the Planck constant and the transition frequency,
A_ul is the Einstein coefficient for the specific transition.
The population of atomic states N is generally dependent on plasma temperature and density. Generally, the more hot and dense the plasma, the more the higher atomic states are populated. The observance or not-observance of spectral lines from certain ion species can, therefore, help to give a rough estimation of the plasma parameters.
More accurate results can be obtained by comparing line intensities:
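In the same assumed notation, for two lines labelled 1 and 2 (the Planck constant cancels in the ratio):

\frac{I_1}{I_2} = \frac{n_1 A_1 \nu_1}{n_2 A_2 \nu_2}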
The transition frequencies and Einstein coefficients are well known and listed in various tables, such as the NIST Atomic Spectra Database. Atomic modeling is often required to determine the population densities as a function of density and temperature. While Saha's equation and Boltzmann's formula might be used for the temperature determination of a plasma in thermal equilibrium, the density dependence usually requires atomic modeling.
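As an illustration of the Boltzmann route, the short sketch below estimates a temperature from the ratio of two lines of the same species, assuming local thermodynamic equilibrium. All line parameters and the measured ratio are hypothetical numbers, and the statistical weights g are an addition not discussed above; real work would take the atomic data from a database such as NIST.

```python
import math

k_B = 8.617e-5   # Boltzmann constant in eV/K

# Hypothetical data for two lines of the same species: upper-level energy E (eV),
# statistical weight g, Einstein coefficient A (1/s) and transition frequency nu (Hz).
E1, g1, A1, nu1 = 13.0, 4, 5.0e7, 6.0e14
E2, g2, A2, nu2 = 11.0, 2, 2.0e7, 5.0e14

R_measured = 0.8   # measured intensity ratio I1/I2 (hypothetical)

# With Boltzmann-distributed upper levels,
#   I1/I2 = (g1 A1 nu1)/(g2 A2 nu2) * exp(-(E1 - E2)/(k_B T)),
# which can be inverted for the temperature T.
prefactor = (g1 * A1 * nu1) / (g2 * A2 * nu2)
T = (E1 - E2) / (k_B * math.log(prefactor / R_measured))
print(f"estimated temperature: {T:.0f} K")   # ~11,500 K for these made-up numbers
```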
See also
Plasma diagnostics
Spectral line
External links
NIST Atomic Spectra Database
References
Spectroscopy
Emission spectroscopy
Plasma diagnostics | Spectral line ratios | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 338 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Plasma physics",
"Instrumental analysis",
"Emission spectroscopy",
"Measuring instruments",
"Plasma diagnostics",
"Spectroscopy"
] |
43,236,838 | https://en.wikipedia.org/wiki/Infrared%20atmospheric%20sounding%20interferometer | The infrared atmospheric sounding interferometer (IASI) is a Fourier transform spectrometer based on the Michelson interferometer, associated with an integrated imaging system (IIS).
As part of the payload of the MetOp series of polar-orbiting meteorological satellites, three IASI instruments have flown: on MetOp-A (launched 19 October 2006, with end of mission in November 2021), on MetOp-B (launched 17 September 2012) and on MetOp-C (launched in November 2018); the instruments on MetOp-B and MetOp-C remain in operation.
IASI is a nadir-viewing instrument recording infrared emission spectra from 645 to 2760 cm−1 at 0.25 cm−1 resolution (0.5 cm−1 after apodisation). Although primarily intended to provide information in near real-time on atmospheric temperature and water vapour to support weather forecasting, the concentrations of various trace gases can also be retrieved from the spectra.
Origin and development
IASI belongs to the thermal infrared (TIR) class of spaceborne instruments, which are devoted to tropospheric remote sensing. On the operational side, IASI is a replacement for the HIRS instruments, whereas on the scientific side, it continues the mission of instruments dedicated to atmospheric composition, which are also nadir viewing, Fourier Transform instruments (e.g. Atmospheric Chemistry Experiment). Thus, it blends the demands imposed by both meteorology (high spatial coverage) and atmospheric chemistry (accuracy and vertical information for trace gases). Designed by the Centre national d'Études Spatiales, it combines good horizontal coverage with a moderate spectral resolution. Its counterpart on the Suomi NPP is the Cross-track Infrared Sounder (CrIS).
Under an agreement between CNES and EUMETSAT (European Organisation for the Exploitation of Meteorological Satellites), the former was responsible for developing the instrument and data processing software. The latter is responsible for archiving and distributing the data to the users, as well as for operating IASI itself. Currently, Alcatel Space is the prime contractor of the project and oversees the production of the recurring models.
Main characteristics
Spectral range
The IASI spectral range has been chosen such that the instrument can record data from the following ranges:
carbon dioxide strong absorption around 15 μm
ozone absorption ν2 around 9.6 μm
water vapour ν3 strong absorption
methane absorption up to the edge of TIR
As such, the spectral range of IASI is 645–2760 cm−1 (15.5–3.62 μm). It has 8461 spectral samples distributed across 3 bands within this range, shown in the table below. Correspondingly, the spectral resolution of the measurements after apodisation is 0.5 cm−1.
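A quick consistency check of these figures (an editorial note): sampling the stated range every 0.25 cm−1, as described in the Level 1b section below, reproduces the quoted number of spectral samples.

```python
# 645-2760 cm-1 sampled every 0.25 cm-1 gives exactly 8461 samples.
n_samples = int((2760 - 645) / 0.25) + 1
print(n_samples)   # -> 8461
```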
Each band has a specific purpose, as shown in the following table:
Sampling parameters
As an across track scanning system, IASI has a scan range of 48°20′ on either side of the nadir direction; the corresponding swath is then around 2×1100 km. Here, with respect to the flight direction of MetOp, the scanning executed by IASI starts on the left.
Also, a nominal scan line has three targets it must cover. First, a scan of the Earth where, within each step, there are 30 (15 in each 48°20′ branch) positions at which measurements are made. In addition to that, two views dedicated to calibration - henceforth, they will be referred to as reference views. One of the two is directed into deep space (cold reference), while the other is observing the internal black body (hot reference).
The elementary (or effective) field of view (EFOV) is defined as the useful field of view at each scan position. Each such element consists of a 2×2 circular pixel matrix of what is called instantaneous fields of view (IFOV). Each of the four pixels projected on the ground is circular and has a diameter of 12 km at nadir. The shape of the IFOV at the edge of the scan line is no longer circular: across track, it measures 39 km and along track, 20 km.
Lastly, the IIS field of view is a square area, the side of which has an angular width of 59.63 mrad. Within this area, there are 64×64 pixels and they measure the same area as the EFOV above.
Data processing system
The IASI instrument produces around 1 300 000 spectra every day. It takes around 8 seconds for IASI to acquire data from one complete across track and the onboard calibration. The former consists of 120 interferograms, each one corresponding to one pixel. Of course, as researchers are really interested in the spectra, the data gathered by IASI has to pass through several stages of processing.
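A rough check of these acquisition figures (editorial arithmetic only): 30 scan positions with 4 pixels each give 120 spectra every 8 seconds, which adds up to about 1.3 million spectra per day.

```python
spectra_per_scan_line = 120     # 30 scan positions x 4 pixels
seconds_per_scan_line = 8
spectra_per_day = spectra_per_scan_line / seconds_per_scan_line * 86400
print(f"{spectra_per_day:,.0f} spectra per day")   # -> 1,296,000
```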
Furthermore, IASI has an allocated data transmission rate of 1.5 megabits per second (Mbit/s). However, the data production rate is 45 Mbit/s, and therefore a major part of the data processing must be performed on board. As such, the transmitted data is an encoded spectrum that is band merged and roughly calibrated.
Additionally, there is an offline processing chain located at the Technical Expertise Centre, also referred to as TEC. Its task is to monitor the instrument performance, to compute the level 0 and 1 initialisation parameters in relation to the preceding point and to compute the long-term varying IASI products, as well as to monitor the Near Real Time (NTR) processing (i.e. levels 0 and 1).
IASI processing levels
There are three such processing levels for the IASI data, numbered from 0 to 2. First, Level 0 data gives the raw output of the detectors, which Level 1 transforms into spectra by applying FFT and the necessary calibrations, and finally, Level 2 executes retrieval techniques so as to describe the physical state of the atmosphere that was observed.
The first two levels are dedicated to transforming the interferograms into spectra that are fully calibrated and independent of the state of the instrument at any given time. By contrast, the third is dedicated to the retrieval of meaningful parameters not only from IASI, but from other instruments from MetOp as well.
For example, since the instrument is expected to be linear in energy, a non linearity correction is applied to the interferograms before the computation of the spectra. Next, the two reference views are used for the first step of radiometric calibration. A second step, performed on ground, is used to compensate for certain physical effects that have been ignored in the first (e.g., incidence correction for the scanning mirror, non-blackness effect etc.).
A digital processing subsystem executes a radiometric calibration and an inverse Fourier transform in order to obtain the raw spectra.
Level 0
The central objective of the Level 0 processing is to reduce the transmission rate by calibrating the spectra in terms of radiometry and merging the spectral bands. This is divided into three processing sub-chains:
Interferogram preprocessing that is concerned with:
the non-linearity correction
spike detection that prevents the use of corrupted interferograms during calibration
the computation of NZPD (the sample number of the Zero Path Difference), which determines the pivot sample for the Fourier transform
the algorithm that applies a Fourier Transform to the interferogram to give the spectrum corresponding to the measured interferogram.
The computation of the radiometric coefficients and filtering
The computation of atmospheric spectra involving applying the calibration coefficients, merging the bands and coding the spectra.
by applying a spectral scaling law, removing the offset and applying a bit mask to the merged spectra, the transmission is done at an average rate of 8.2 bits per spectral sample, without losing useful information
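Taken together with the figures quoted earlier, this coding rate implies a downlink budget within the allocation. The check below is editorial arithmetic only and ignores packaging overheads.

```python
bits_per_sample = 8.2
samples_per_spectrum = 8461
spectra_per_scan_line = 120        # 30 scan positions x 4 pixels
seconds_per_scan_line = 8
rate = bits_per_sample * samples_per_spectrum * spectra_per_scan_line / seconds_per_scan_line
print(f"{rate / 1e6:.2f} Mbit/s")  # ~1.04 Mbit/s, below the 1.5 Mbit/s allocation
```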
Level 1
Level 1 is divided into three sublevels. Its main aim is to give the best estimate of the geometry of the interferometer at the time of the measurement. Several of the parameters of the estimation model are computing by the TEC processing chain and serve as input for the Level 1 estimations.
The estimation model is used as a basis to compute a more accurate model by calculating the corresponding spectral calibration and apodisation functions. This allows the removal of all spectral variability of the measurements.
Level 1a
The estimation model is used here to give the correct spectral positions of the spectra samples, since the positions are varying from one pixel to another. Moreover, certain errors ignored in Level 0 are now accounted for, such as the emissivity of the black body not being unity or the dependency of the scanning mirror on temperature.
Also, it estimates the geolocation of IASI using the results from the correlation of AVHRR and the calibrated IIS image.
Level 1b
Here, the spectra are resampled. To perform this operation, the spectra from Level 1a are over-sampled by a factor of 5. These over-sampled spectra are finally interpolated on a new constant wave-number basis (0.25 cm−1), by using a cubic spline interpolation.
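A minimal sketch of the final interpolation step is given below; it is illustrative only (the oversampling by a factor of 5 and all calibration steps are omitted, and the input grid and radiances are made up).

```python
import numpy as np
from scipy.interpolate import CubicSpline

nu_l1a = np.linspace(644.9, 2760.1, 8461)    # hypothetical, slightly shifted Level 1a grid
radiance_l1a = np.random.rand(8461)          # placeholder radiances

nu_l1b = np.arange(645.0, 2760.001, 0.25)    # fixed 0.25 cm-1 grid (8461 points)
radiance_l1b = CubicSpline(nu_l1a, radiance_l1a)(nu_l1b)
print(radiance_l1b.shape)                    # -> (8461,)
```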
Level 1c
The estimated apodisation functions are applied.
It generates the radiance cluster analysis based on AVHRR within the IASI IFOV using the IASI point spread function.
Level 2
This level is concerned with deriving geophysical parameters from the radiance measurements:
Temperature profiles
Humidity profiles
Columnar ozone amounts in thick layers
Surface temperature
Surface emissivity
Fractional cloud cover
Cloud top temperature
Cloud top pressure
Cloud phase
Total column of N2O
Total column of CO
Total column of CH4
Total column of CO2
Error covariance
Processing and equality flags
The processes here are performed synergically with the ATOVS instrument suite, AVHRR and forecast data from numerical weather prediction.
Methods of research
Some researchers prefer to use their own retrieval algorithms, which process Level 1 data, while others use directly the IASI Level 2 data. Multiple algorithms exist to produce Level 2 data, which differ in their assumptions and formulation and will therefore have different strengths and weaknesses (which can be investigated by intercomparison studies). The choice of algorithm is guided by knowledge of these limitations, the resources available and the specific features of the atmosphere that wish to be investigated.
In general, algorithms are based on the optimal estimation method. This essentially involves comparing the measured spectra with an a priori spectrum. Subsequently, the a priori model is contaminated with a certain amount of the item one wants to measure (e.g. SO2) and the resulting spectra are once again compared to the measured ones. The process is repeated iteratively, the aim being to adjust the amount of contaminants such that the simulated spectrum resembles the measured one as closely as possible. It must be noted that a variety of errors must be taken into consideration while perturbing the a priori, such as the error on the a priori, the instrumental error or the expected error.
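A schematic sketch of one optimal-estimation (Gauss-Newton) update is given below. It uses a made-up linear forward model and hypothetical numbers purely for illustration; real IASI retrievals use a full radiative transfer model and much larger state and measurement vectors.

```python
import numpy as np

K = np.array([[1.0, 0.5],
              [0.2, 1.5],
              [0.8, 0.3]])               # Jacobian of the (here linear) forward model
S_e = np.diag([0.01, 0.01, 0.01])        # measurement-error covariance
S_a = np.diag([1.0, 1.0])                # a priori covariance
x_a = np.array([0.0, 0.0])               # a priori state (e.g. trace-gas columns)
y = np.array([1.0, 1.6, 0.9])            # measured spectrum (hypothetical)

def forward_model(x):
    return K @ x                          # toy stand-in for radiative transfer

x = x_a.copy()
for _ in range(5):                        # Gauss-Newton iterations
    A = K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a)
    b = K.T @ np.linalg.inv(S_e) @ (y - forward_model(x) + K @ (x - x_a))
    x = x_a + np.linalg.solve(A, b)

print(x)   # retrieved state; the posterior covariance is the inverse of A
```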
Alternatively, the IASI Level 1 data can be processed by least square fit algorithms. Again, the expected error must be taken into consideration.
Design
IASI's main structure comprises 6 sandwich panels that have an aluminium honeycomb core and carbon cyanate skins. Out of these, the one that supports optical sub-assemblies, electronics and mechanisms is called the main panel.
The instrument's thermal architecture was engineered to split IASI into independent enclosures, optimising the design of each enclosure individually. For example, the optical components can be found in a closed volume containing only low-dissipation elements, while the cube corners are exterior to this volume. Furthermore, the enclosure which contains the interferometer is almost entirely decoupled from the rest of the instrument by Multi-Layer Insulation (MLI). This results in very good thermal stability for the optics of the interferometer: the temporal and spatial gradients are less than 1 °C, which is important for the radiometric calibration performance. Other equipment is either sealed in specific enclosures (for example, dissipative electronics and laser sources) or thermally controlled through the thermal control section of the main structure (for example, the scan mechanisms and the blackbody).
Upon entering the interferometer, the light will encounter the following instruments:
Scan mirror which provides the ±48.3° swath symmetrically about the nadir. Moreover, it views the calibration hot and cold blackbody (internal blackbody and the deep space, respectively). For the step-by-step scene scanning, fluid lubricated bearings are used.
Off-axis afocal telescope which transfers the aperture stop onto the scan mirror.
Michelson interferometer, which has the general structure of a Michelson interferometer but with two silicon carbide cube-corner mirrors in place of plane mirrors. The advantage of using corner reflectors over plane mirrors is that the latter would require dynamic alignment.
Folding and off-axis focusing mirrors of which the first directs the recombined beam onto the latter. This results in an image of the Earth forming at the entrance of the cold box.
The cold box which contains: aperture stops, field stops, field lens that images the aperture stop on the cube corners, dichroic plates dividing the whole spectrum range into the three spectral bands, lenses which produce an image of the field stop onto the detection unit, three focal planes that are equipped with micro lenses. These have the role to image the aperture stop on the detectors and preamplifiers.
To reduce the instrument background and thermo-electronic detector noise, the temperature of the cold box is maintained at 93 K by a passive cryogenic cooler. This was preferred to a cryogenic machine because the vibration levels of the latter could potentially degrade the spectral quality.
Measures against ice contamination
Ice accumulation on the optical surfaces determines loss of transmission. In order to reduce IASI's sensitivity to ice contamination, the emissive cavities have been added with two even holes.
Moreover, it was necessary to ensure protection for the cold optics from residual contamination. To achieve this, sealing improvements have been made (bellows and joints).
Suggested images
IASI at the European Space Agency
IASI data product profile
IASI observations
Depiction of MetOp in orbit
External links
IASI at Centre national d'études spatiales
IASI scanning the Earth
IASI at TACT, LATMOS
IASI at EODG, University of Oxford
References
Interferometers
Atmospheric sounding satellite sensors
Satellite meteorology | Infrared atmospheric sounding interferometer | [
"Technology",
"Engineering"
] | 2,974 | [
"Interferometers",
"Measuring instruments"
] |
43,238,552 | https://en.wikipedia.org/wiki/Flamelet%20generated%20manifold | Flamelet-Generated Manifold (FGM) is a combustion chemistry reduction technique. The approach of FGM is based on the idea that the most important aspects of the internal structure of the flame front should be taken into account. In this view, a low-dimensional chemical manifold is created on the basis of one-dimensional flame structures, including nearly all of the transport and chemical phenomena as observed in three-dimensional flames. In addition, the progress of the flame is generally described by transport equations for a limited number of control variables.
See also
Combustion models for CFD
References
Further reading
Combustion | Flamelet generated manifold | [
"Chemistry"
] | 119 | [
"Chemical reaction stubs",
"Combustion",
"Chemical process stubs"
] |
43,238,710 | https://en.wikipedia.org/wiki/Antarctic%20sea%20ice | Antarctic sea ice is the sea ice of the Southern Ocean. It extends from the far north in the winter and retreats to almost the coastline every summer. Sea ice is frozen seawater that is usually less than a few meters thick. This is the opposite of ice shelves, which are formed by glaciers; they float in the sea, and are up to a kilometre thick. There are two subdivisions of sea ice: fast ice, which are attached to land; and ice floes, which are not.
Sea ice in the Southern Ocean melts from the bottom instead of from the surface, unlike Arctic ice, because it is covered in snow on top. As a result, melt ponds are rarely observed. On average, Antarctic sea ice is younger, thinner, warmer, saltier, and more mobile than Arctic sea ice. Another difference between the two ice packs is that while there is a clear decline in Arctic sea ice, the trend in Antarctica is roughly flat. Antarctic sea ice is not as well studied as Arctic ice since it is less accessible.
Measurements of sea ice
Extent
The Antarctic sea ice cover is highly seasonal, with very little ice in the austral summer, expanding to an area roughly equal to that of Antarctica in winter. It peaks (~18 × 10^6 km2, comparable to the surface area of Pluto) during September, which marks the end of austral winter, and retreats to a minimum (~3 × 10^6 km2) in February. Consequently, most Antarctic sea ice is first-year ice, a few meters thick, but the exact thickness is not known. The area of 18 million km^2 of ice is 18 trillion square meters, so for each meter of thickness, given that the density of sea ice is about 0.88 tonnes per cubic meter (0.88 gigatonnes per cubic kilometer), the mass of the top meter of Antarctic sea ice is roughly 16 teratonnes (trillion metric tons) in late winter. Record low summer sea ice was measured in February 2022 at 741,000 square miles (1.9 million square kilometers) by the National Snow and Ice Data Center.
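The mass estimate quoted above can be spelled out as follows (editorial arithmetic only, using the same rounded inputs).

```python
area_km2 = 18e6            # late-winter sea ice extent in km^2
thickness_m = 1.0          # consider only the top metre of ice
density_t_per_m3 = 0.88    # approximate density of sea ice in tonnes per cubic metre

volume_m3 = area_km2 * 1e6 * thickness_m      # 1 km^2 = 1e6 m^2
mass_t = volume_m3 * density_t_per_m3
print(f"{mass_t / 1e12:.0f} teratonnes")      # -> about 16 teratonnes
```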
Since the ocean off the Antarctic coast usually is much warmer than the air above it, the extent of the sea ice is largely controlled by the winds and currents that push it northwards. If it is pushed quickly, the ice can travel much further north before it melts. Most ice is formed along the coast, as the northward-moving ice leaves areas of open water (coastal latent-heat polynyas), which rapidly freeze.
Thickness
Because Antarctic ice is mainly first-year ice, which is not as thick as multiyear ice, it is generally less than a few meters thick. Snowfall and flooding of the ice can thicken it substantially, and the layer structure of Antarctic ice is often quite complex.
Recent trends and climate change
Sea ice extent in Antarctica varies considerably year by year. This makes it difficult to determine a trend, and record highs and record lows have been observed between 2013 and 2023. The general trend since 1979, the start of the satellite measurements, has been roughly flat. Between 2015 and 2023, there has been a decline in sea ice, but due to the high variability, this does not correspond to a significant trend. The flat trend is in contrast with Arctic sea ice, which has seen a declining trend.
The IPCC AR5 report concluded that "it is very likely" that annual mean Antarctic sea ice extent increased 1.2 to 1.8% per decade, which is 0.13 to 0.20 million km2 per decade, during the period 1979 to 2012. IPCC AR5 also concluded that the lack of data precludes determining the trend in total volume or mass of the sea ice. The increase in sea ice area probably has a number of causes. These are tied to changes in the southern hemispheric westerly winds, which are a combination of natural variability and forced change from greenhouse gases and the ozone hole. The winds drive sea ice drift, and modelling research suggests that the observed sea ice expansion was driven by changes in the sea ice drift velocity.
Another possible driver is ice-shelves melting, which increases freshwater input to the ocean; this increases the weakly stratified ocean surface layer and so reduces the ability of warm subsurface water to reach the surface. A 2015 study found this effect in climate models run to simulate future climate change, resulting in an increase of sea ice in the winter months.
Recent changes in wind patterns, which are connected to regional changes in the number of extratropical cyclones and anticyclones, around Antarctica have advected the sea ice farther north in some areas and not as far north in others.
Atmospheric and oceanic drivers likely have contributed to the formation of regionally varying trends in Antarctic sea-ice extent. For example, temperatures in the atmosphere and Southern Ocean have increased during the period 1979–2004. However, sea ice grows faster than it melts, because of a weakly stratified ocean. Thus, this oceanic mechanism is, among others, contributing to an increase in the net ice production, potentially resulting in more sea ice.
Although thickness observations are limited, modelling suggests that observed ice-drift toward the coastal regions makes an additional contribution for dynamical sea-ice thickening during autumn and winter.
Observed autumn and spring trends in the number of extratropical cyclones, anticyclones and blocks, which have a strong thermodynamic control through temperature advection, and a strong dynamic control through ice-drift, on sea-ice extent during the same and also during following seasons are almost everywhere around Antarctica in agreement with the observed, regionally varying, trends in sea-ice extent. Consequently, the near-surface winds steered around weather systems are thought to explain large parts of the inhomogeneous Antarctica sea-ice trends.
The 2021 IPCC AR6 report confirms the observed increasing trend in the mean Antarctic sea ice area over the period from 1979 to 2014 but assesses that there was a decline after 2014, with the least extent reached in 2017, and a following growth. The report then concludes that there is “high confidence” that there is no significant trend in the satellite observed Antarctic sea ice area from 1979 to 2020 in both winter and summer.
In early January 2023, the National Snow and Ice Data Center reported that Antarctic sea ice extent stood at the lowest in the 45-year satellite record—more than 500,000 square kilometers (193,000 square miles) below the previous record (2018), with four of the five lowest years for the last half of December having occurred since 2016.
Implications
Monitoring changes in sea ice is important as this impacts the psychrophiles that live here.
Changes in Antarctic sea ice are also important because of implications for atmospheric and oceanic circulation. When sea ice forms, it rejects salt (ocean water is saline but sea ice is largely fresh) so dense salty water is formed which sinks and plays a key role in formation of Antarctic Bottom Water.
Effects on Navigation
The force of moving ice is considerable; it can crush ships that are caught in the ice pack, and severely limits the areas where ships can reach the land, even in summer. Icebreakers, iceports and ice piers are used to land supplies.
See also
Arctic sea ice decline
Antarctic ice sheet
References
Forms of water
Hydrology
Sea ice
Sea ice
Oceans surrounding Antarctica
Articles containing video clips
Climate change and the environment | Antarctic sea ice | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,504 | [
"Physical phenomena",
"Earth phenomena",
"Sea ice",
"Hydrology",
"Phases of matter",
"Forms of water",
"Environmental engineering",
"Matter"
] |
43,239,622 | https://en.wikipedia.org/wiki/Physics%20education%20research |
Physics education research (PER) is a form of discipline-based education research specifically related to the study of the teaching and learning of physics, often with the aim of improving the effectiveness of student learning. PER draws from other disciplines, such as sociology, cognitive science, education and linguistics, and complements them by reflecting the disciplinary knowledge and practices of physics. Approximately eighty-five institutions in the United States conduct research in science and physics education.
Goals
One primary goal of PER is to develop pedagogical techniques and strategies that will help students learn physics more effectively and help instructors to implement these techniques. Because even basic ideas in physics can be confusing, together with the possibility of scientific misconceptions formed from teaching through analogies, lecturing often does not erase common misconceptions about physics that students acquire before they are taught physics. Research often focuses on learning more about common misconceptions that students bring to the physics classroom so that techniques can be devised to help students overcome these misconceptions.
In most introductory physics courses, mechanics is usually the first area of physics that is taught. Newton's laws of motion about interactions between forces and objects are central to the study of mechanics. Many students hold the Aristotelian misconception that a net force is required to keep a body moving; instead, motion is modeled in modern physics with Newton's first law of inertia, stating that a body will keep its state of rest or movement unless a net force acts on the body. Like students who hold this misconception, Newton arrived at his three laws of motion through empirical analysis, although he did it with an extensive study of data that included astronomical observations. Students can erase such a misconception in a nearly frictionless environment, where they find that objects move at an almost constant velocity without a constant force.
Major areas
The broad goal of the PER community is to understand the processes involved in the teaching and learning of physics through rigorous scientific investigation.
According to the University of Washington PER group, one of the pioneers in the field, work within PER tends to fall within one or more of several broad descriptions, including:
Identifying student difficulties
Developing methods to address these difficulties and measure learning gains
Developing surveys to measure student performance and other characteristics
Investigating student attitudes and beliefs as relating to physics
Studying small and large group dynamics analyzing student patterns using framing and other new and existing epistemological methods
"An Introduction to Physics Education Research", by Robert Beichner, identifies eight trends in PER:
Conceptual understanding: Investigating what students know and how they learn it is a centerpiece of PER. Early research involved identifying and treating misconceptions about the principles of physics. The term has since evolved to "student difficulties" based on the consideration of alternative theoretical frameworks for student learning. A difficulty with a concept can be built into a correct concept; a misconception is rooted out and replaced by a correct conception. The PER group at the University of Washington specializes in research about conceptual understanding and student difficulty.
Epistemology: PER began as a trial-and-error approach to improve learning. Because of the downsides of such an approach, theoretical bases for research were developed early on, most notably through the University of Maryland. The theoretical underpinnings of PER are mostly built around Piagetian constructivism. Theories on cognition in physics learning were put forward by Redish, Hammer, Elby and Scherr, who built off of diSessa's "Knowledge in Pieces". The Resources Framework, developed from this work, builds off of research in neuroscience, sociology, linguistics, education and psychology. Additional frameworks are forthcoming, most recently the "Possibilities Framework", which builds off of deductive reasoning research started by Wason and Philip Johnson-Laird.
Problem solving: It plays an important role in the processes that advance physics research, featured in high numbers of exercises in conventional textbooks. Most research in this area rests on examining the difference between novice and expert problem solvers (freshmen and sophomores, and graduate-level and postdoctorate students, respectively). Approaches in researching problem solving have been a focus for the University of Minnesota's PER group. Recently, a paper was published in PRL Special Section: PER that identified over 30 behaviors, attitudes, and skills that are used in the solving of a typical physics problem. Greater resolution and specific attention to the details are used in the field of problem solving.
Attitudes: The University of Colorado developed an instrument that reveals student attitudes and expectations about physics as a subject and as a class. Student attitudes are often found to decline after traditional instruction, but recent work by Redish and Hammer show that this can be reversed and positive attitudinal gains can be seen if attention is paid to "explicate the epistemological elements of the implicit curriculum."
Social aspects: Research has been conducted into gender, race, and other socioeconomic issues that can influence learning in physics and other fields. Other research has investigated the impacts on learning physics of body language, group dynamics, and classroom setup.
Technology: Student response systems (clickers) are based on Eric Mazur's work in Peer Instruction. Research in PER examines the influence, applications of, and possibilities for technology in the classroom.
Instructional interventions: PER's curriculum design is based on more than two decades of research in physics education. Notable textbooks include Tutorials in Physics, Physics by Inquiry, Investigative Science Learning Environment, and Paradigms in Physics, as well as many new textbooks in introductory and junior level coursework. The Kansas State University Physics Education Research Group has developed a program, Visual Quantum Mechanics (VQM), to teach quantum mechanics to high school and college students who do not have advanced backgrounds in physics or math.
Instructional materials: For undergraduates, publishers now emphasize a PER basis for their physics textbooks as a major selling point. One of the earliest comprehensive physics textbooks to incorporate PER findings was written by Serway and Beichner. Apart from textbooks, instructional material for pre-college physics students now include PhET (Physics Education Technology) simulations. This is made possible through advances in personal computer hardware, platform-independent software such as Adobe Flash Player and Java, and more recently HTML5, CSS3 and JavaScript. According to Wieman, PhET simulations offer a direct and powerful tool for probing student thinking and learning.
Journal association
Physics education research papers in the United States are primarily issued among four publishing venues. Papers submitted to the American Journal of Physics: Physics Education Research Section (PERS) are mostly to consumers of physics education research. The Journal of the Learning Sciences (JLS) publishes papers that regard real-life or non-laboratory environments, often in the context of technology, and are about learning, not teaching. Meanwhile, papers at Physical Review Special Topics: Physics Education Research (PRST:PER) are aimed at those for whom research is conducted on PER rather than to consumers. The audience for Physics Education Research Conference Proceedings (PERC) is designed for a mix of consumers and researchers. The latter provides a snapshot of the field and as such is open to preliminary results and research in progress, as well as papers that would simply be thought-provoking to the PER community. Other journals include Physics Education (UK), the European Journal of Physics (UK), and The Physics Teacher. Leon Hsu and others published an article about publishing and refereeing papers in physics education research in 2007.
See also
Teaching quantum mechanics
References
Physics education
Educational research | Physics education research | [
"Physics"
] | 1,532 | [
"Applied and interdisciplinary physics",
"Physics education"
] |
43,239,882 | https://en.wikipedia.org/wiki/Environmental%20Toxicology%20and%20Pharmacology | Environmental Toxicology and Pharmacology is a bimonthly peer-reviewed scientific journal covering research on the toxicological and pharmacological effects of environmental contaminants. It is published by Elsevier and was established in 1992 as the Environmental Toxicology and Pharmacology Section of the European Journal of Pharmacology, obtaining its current name in February 1996, when it was founded by Jan H. Koeman (Agricultural University, Wageningen) and Nico P. E. Vermeulen (Vrije Universiteit Amsterdam). Vermeulen was editor-in-chief until 2017, when he retired and Michael D. Coleman (Aston University) took over. According to the Journal Citation Reports, the journal has a 2020 impact factor of 4.860. The journal is included in the Index Medicus and in MEDLINE.
References
External links
Toxicology journals
Environmental health journals
Academic journals established in 1992
Elsevier academic journals
Bimonthly journals
Pharmacology journals
English-language journals | Environmental Toxicology and Pharmacology | [
"Environmental_science"
] | 207 | [
"Environmental science journals",
"Toxicology journals",
"Toxicology",
"Environmental health journals"
] |
43,240,637 | https://en.wikipedia.org/wiki/Bohr%E2%80%93Sommerfeld%20model | The Bohr–Sommerfeld model (also known as the Sommerfeld model or Bohr–Sommerfeld theory) was an extension of the Bohr model to allow elliptical orbits of electrons around an atomic nucleus. Bohr–Sommerfeld theory is named after Danish physicist Niels Bohr and German physicist Arnold Sommerfeld. Sommerfeld showed that, if electronic orbits are elliptical instead of circular (as in Bohr's model of the atom), the fine-structure of the hydrogen atom can be described.
The Bohr–Sommerfeld model added to the quantized angular momentum condition of the Bohr model with a radial quantization (condition by William Wilson, the Wilson–Sommerfeld quantization condition):
$\oint p_r \, dq_r = n h,$
where $p_r$ is the radial momentum canonically conjugate to the coordinate $q_r$, which is the radial position, $n$ is a positive integer, $h$ is the Planck constant, and the integral is taken over one full orbital period $T$. The integral is the action of action-angle coordinates. This condition, suggested by the correspondence principle, is the only one possible, since the quantum numbers are adiabatic invariants.
History
In 1913, Niels Bohr displayed rudiments of the later defined correspondence principle and used it to formulate a model of the hydrogen atom which explained its line spectrum. In the next few years Arnold Sommerfeld extended the quantum rule to arbitrary integrable systems making use of the principle of adiabatic invariance of the quantum numbers introduced by Hendrik Lorentz and Albert Einstein. Sommerfeld made a crucial contribution by quantizing the z-component of the angular momentum, which in the old quantum era was called "space quantization" (German: Richtungsquantelung). This allowed the orbits of the electron to be ellipses instead of circles, and introduced the concept of quantum degeneracy. The theory would have correctly explained the Zeeman effect, except for the issue of electron spin. Sommerfeld's model was much closer to the modern quantum mechanical picture than Bohr's.
In the 1950s Joseph Keller updated Bohr–Sommerfeld quantization using Einstein's interpretation of 1917, now known as Einstein–Brillouin–Keller method. In 1971, Martin Gutzwiller took into account that this method only works for integrable systems and derived a semiclassical way of quantizing chaotic systems from path integrals.
Predictions
The Sommerfeld model predicted that the magnetic moment of an atom measured along an axis will only take on discrete values, a result which seems to contradict rotational invariance but which was confirmed by the Stern–Gerlach experiment. This was a significant step in the development of quantum mechanics. It also described the possibility of atomic energy levels being split by a magnetic field (called the Zeeman effect). Walther Kossel worked with Bohr and Sommerfeld on the Bohr–Sommerfeld model of the atom introducing two electrons in the first shell and eight in the second.
Issues
The Bohr–Sommerfeld model was fundamentally inconsistent and led to many paradoxes. The magnetic quantum number measured the tilt of the orbital plane relative to the xy plane, and it could only take a few discrete values. This contradicted the obvious fact that an atom could be turned this way and that relative to the coordinates without restriction. The Sommerfeld quantization can be performed in different canonical coordinates and sometimes gives different answers. The incorporation of radiation corrections was difficult, because it required finding action-angle coordinates for a combined radiation/atom system, which is difficult when the radiation is allowed to escape. The whole theory did not extend to non-integrable motions, which meant that many systems could not be treated even in principle. In the end, the model was replaced by the modern quantum-mechanical treatment of the hydrogen atom, which was first given by Wolfgang Pauli in 1925, using Heisenberg's matrix mechanics. The current picture of the hydrogen atom is based on the atomic orbitals of wave mechanics, which Erwin Schrödinger developed in 1926.
However, this is not to say that the Bohr–Sommerfeld model was without its successes. Calculations based on the Bohr–Sommerfeld model were able to accurately explain a number of more complex atomic spectral effects. For example, up to first-order perturbations, the Bohr model and quantum mechanics make the same predictions for the spectral line splitting in the Stark effect. At higher-order perturbations, however, the Bohr model and quantum mechanics differ, and measurements of the Stark effect under high field strengths helped confirm the correctness of quantum mechanics over the Bohr model. The prevailing theory behind this difference lies in the shapes of the orbitals of the electrons, which vary according to the energy state of the electron.
The Bohr–Sommerfeld quantization conditions lead to questions in modern mathematics. Consistent semiclassical quantization condition requires a certain type of structure on the phase space, which places topological limitations on the types of symplectic manifolds which can be quantized. In particular, the symplectic form should be the curvature form of a connection of a Hermitian line bundle, which is called a prequantization.
Relativistic orbit
Arnold Sommerfeld derived the relativistic solution of atomic energy levels. We will start this derivation with the relativistic equation for energy in the electric potential
After substitution we get
For momentum , and their ratio the equation of motion is (see Binet equation)
with solution
The angular shift of periapsis per revolution is given by
With the quantum conditions
and
we will obtain the energies
$E_{n_r, n_\varphi} = \frac{m_{\mathrm{e}} c^2}{\sqrt{1 + \dfrac{\alpha^2}{\left(n_r + \sqrt{n_\varphi^{2} - \alpha^{2}}\right)^{2}}}},$
where $\alpha$ is the fine-structure constant. This solution (using substitutions for quantum numbers) is equivalent to the solution of the Dirac equation. Nevertheless, both solutions fail to predict the Lamb shifts.
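Since the displayed equations of this derivation were lost in transcription, the following snippet simply evaluates the textbook form of Sommerfeld's fine-structure result, E = m_e c^2 [1 + α^2/(n_r + sqrt(n_φ^2 − α^2))^2]^{−1/2}, for the lowest hydrogen levels; the quantum-number notation (n_r, n_φ) is a choice made here and may differ from the original presentation.

```python
import math

ALPHA = 7.2973525693e-3        # fine-structure constant
M_E_C2_EV = 510998.95          # electron rest energy in eV

def sommerfeld_energy(n_r, n_phi, Z=1):
    """Total energy (rest mass included) from the standard Sommerfeld
    fine-structure formula for a hydrogen-like atom; n_r >= 0, n_phi >= 1."""
    za = Z * ALPHA
    denom = n_r + math.sqrt(n_phi ** 2 - za ** 2)
    return M_E_C2_EV / math.sqrt(1.0 + (za / denom) ** 2)

# Binding energies (total minus rest energy) for the lowest levels of hydrogen:
for n_r, n_phi in [(0, 1), (1, 1), (0, 2)]:
    binding = sommerfeld_energy(n_r, n_phi) - M_E_C2_EV
    print(f"n_r={n_r}, n_phi={n_phi}: {binding:.6f} eV")
# Ground state ~ -13.6 eV; states with the same n = n_r + n_phi differ slightly,
# which is the fine-structure splitting.
```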
See also
Bohr model
Old quantum theory
References
Atomic physics
Hydrogen physics
Foundational quantum physics
History of physics
Niels Bohr
Arnold Sommerfeld
Old quantum theory | Bohr–Sommerfeld model | [
"Physics",
"Chemistry"
] | 1,227 | [
"Foundational quantum physics",
"Quantum mechanics",
"Old quantum theory",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
24,863,994 | https://en.wikipedia.org/wiki/Wipe%20test%20counter | A wipe test counter is a device used to measure for possible radioactive contamination in a variety of environments. When using radioactive materials it is necessary to test for accidental contamination, whether from use of liquid unsealed sources or to check for leaking sealed sources. A swab or small absorbent smear can be used to “wipe” an area, the wipe is then placed into a test tube and counted, typically using a gamma counter. Testing for leaks in this manner is a method described in the ISO 9978 standard.
Equipment
Survey instruments may be used to detect surface contamination without requiring wiping, however this requires careful calibration and technique to ensure adequate sensitivity is achieved.
A gamma counter is a typical choice for measuring wipe samples for radioactivity as it allows multiple tests to be counted in a largely automated way. These systems detect radiation using a scintillator and photomultiplier tube and may allow the energy spectrum of a sample to be recorded, which can be used to identify the contaminant.
Use of a gamma camera has also been proposed, where collimators are removed to improve sensitivity.
Regulation
Wipe testing is typically a requirement of licenses to hold radioactive materials. In the United States the Nuclear Regulatory Commission requires wipe testing of sealed sources "periodically" using equipment sensitive down to 185 Becquerels. In the United Kingdom the Health and Safety Executive guidance for the Ionising Radiations Regulations 1999 requires wipe testing (usually every two years) and it is also likely to be a requirement of Environment Agency permits. In Australia licence conditions may require adherence to Australian standard AS2243.4 and ISO 9978 for wipe testing of sealed sources.
References
Radiation health effects
Particle detectors | Wipe test counter | [
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 345 | [
"Radiation health effects",
"Particle detectors",
"Measuring instruments",
"Radiation effects",
"Radioactivity"
] |
24,864,184 | https://en.wikipedia.org/wiki/Vladimir%20Korepin | Vladimir E. Korepin (born 1951) is a professor at the C. N. Yang Institute of Theoretical Physics of the Stony Brook University. Korepin made research contributions in several areas of mathematics and physics.
Educational background
Korepin completed his undergraduate study at Saint Petersburg State University, graduating with a diploma in theoretical physics in 1974. In that same year he was employed by the Mathematical Institute of Academy of Sciences. He worked there until 1989, obtaining his PhD in 1977 under the supervision of Ludwig Faddeev. At the same institution he completed his postdoctoral studies.
In 1985, he received a Doctor of Science degree in mathematical physics.
Contributions to physics
Korepin has made contributions to several fields of theoretical physics. Although he is best known for his involvement in condensed matter physics and mathematical physics, he significantly contributed to quantum gravity as well. In recent years, his work has focused on aspects of condensed matter physics relevant for quantum information.
Condensed matter
Among his contributions to condensed matter physics, we mention his studies on low-dimensional quantum gases. In particular, the 1D Hubbard model of strongly correlated fermions, and the 1D Bose gas with delta potential interactions.
In 1979, Korepin presented a solution of the massive Thirring model in one space and one time dimension using the Bethe ansatz. In this work, he provided the exact calculation of the mass spectrum and the scattering matrix.
He studied solitons in the sine-Gordon model. He determined their mass and scattering matrix, both semiclassically and to one loop corrections.
Together with Anatoly Izergin, he discovered the 19-vertex model (sometimes called the Izergin-Korepin model).
In 1993, together with A. R. Its, Izergin and N. A. Slavnov, he calculated space, time and temperature dependent correlation functions in the XX spin chain. The exponential decay in space and time separation of the correlation functions was calculated explicitly.
Quantum gravity
In this field, Korepin has worked on the cancellation of ultra-violet infinities in one loop on mass shell gravity.
Contributions to mathematics
In 1982, Korepin introduced domain wall boundary conditions for the six vertex model, published in Communications in Mathematical Physics. The result plays a role in diverse fields of mathematics such as algebraic combinatorics, alternating sign matrices, domino tiling, Young diagrams and plane partitions. In the same paper the determinant formula was proved for the square of the norm of the Bethe ansatz wave function. It can be represented as a determinant of linearized system of Bethe equations. It can also be represented as a matrix determinant of second derivatives of the Yang action.
The so-called "Quantum Determinant" was discovered in 1981 by A.G. Izergin and V.E. Korepin. It is the center of the Yang–Baxter algebra.
The study of differential equations for quantum correlation functions led to the discovery of a special class of Fredholm integral operators. Now they are referred to as completely integrable integral operators. They have multiple applications not only to quantum exactly solvable models, but also to random matrices and algebraic combinatorics.
Contributions to quantum information and quantum computation
Vladimir Korepin has produced results in the evaluation of the entanglement entropy of different dynamical models, such as interacting spins, Bose gases, and the Hubbard model. He considered models with unique ground states, so that the entropy of the whole ground state is zero. The ground state is partitioned into two spatially separated parts: the block and the environment. He calculated the entropy of the block as a function of its size and other physical parameters. In a series of articles, Korepin was the first to compute the analytic formula for the entanglement entropy of the XX (isotropic) and XY Heisenberg models. He used Toeplitz Determinants and Fisher-Hartwig Formula for the calculation. In the Valence-Bond-Solid states (which is the ground state of the Affleck-Kennedy-Lieb-Tasaki model of interacting spins), Korepin evaluated the entanglement entropy and studied the reduced density matrix. He also worked on quantum search algorithms with Lov Grover. Many of his publications on entanglement and quantum algorithms can be found on ArXiv.
In May 2003, Korepin helped organize a conference on quantum and reversible computations in Stony Brook. Another conference was on November 15–18, 2010, entitled the Simons Conference on New Trends in Quantum Computation.
Books
Essler, F. H. L.; Frahm, H., Goehmann, F., Kluemper, A., & Korepin, V. E., The One-Dimensional Hubbard Model. Cambridge University Press (2005).
V.E. Korepin, N.M. Bogoliubov and A.G. Izergin, Quantum Inverse Scattering Method and Correlation Functions, Cambridge University Press (1993).
Exactly Solvable Models of Strongly Correlated Electrons. Reprint volume, eds. F.H.L. Essler and V.E. Korepin, World Scientific (1994).
Honours
Korepin's H-index is 68 with over 20431 citations.
In 1996 Korepin was elected fellow of the American Physical Society.
Fellow of the International Association of Mathematical Physics and the Institute of Physics.
Editor of Reviews in Mathematical Physics, the International Journal of Modern Physics and Theoretical and Mathematical Physics.
His 60-th birthday was celebrated by Institute of Advanced Studies in Singapore in 2011.
References
External links
Research and achievements
Publications on arXiv
Korepin on INSPIRE-HEP
Faculty webpage
Early publications
1951 births
Quantum physicists
21st-century American physicists
Russian physicists
Russian mathematicians
Living people
Stony Brook University faculty
Russian theoretical physicists
Fellows of the American Physical Society | Vladimir Korepin | [
"Physics"
] | 1,212 | [
"Quantum physicists",
"Quantum mechanics"
] |
24,864,800 | https://en.wikipedia.org/wiki/Fedosov%20manifold | In mathematics, a Fedosov manifold is a symplectic manifold with a compatible torsion-free connection, that is, a triple (M, ω, ∇), where (M, ω) is a symplectic manifold (that is, ω is a symplectic form, a non-degenerate closed exterior 2-form, on a smooth manifold M), and ∇ is a symplectic torsion-free connection on M. (A connection ∇ is called compatible or symplectic if X ⋅ ω(Y,Z) = ω(∇_X Y, Z) + ω(Y, ∇_X Z) for all vector fields X,Y,Z ∈ Γ(TM). In other words, the symplectic form is parallel with respect to the connection, i.e., its covariant derivative vanishes.) Note that every symplectic manifold admits a symplectic torsion-free connection. Cover the manifold with Darboux charts and on each chart define a connection ∇ whose Christoffel symbols vanish in the Darboux coordinates. Then choose a partition of unity (subordinate to the cover) and glue the local connections together to a global connection which still preserves the symplectic form. The famous result of Boris Vasilievich Fedosov gives a canonical deformation quantization of a Fedosov manifold.
Examples
For example, $\mathbb{R}^{2n}$ with the standard symplectic form has the symplectic connection given by the exterior derivative (the standard flat connection). Hence, $(\mathbb{R}^{2n}, \omega, \nabla)$ is a Fedosov manifold.
References
Mathematical physics | Fedosov manifold | [
"Physics",
"Mathematics"
] | 309 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
41,730,066 | https://en.wikipedia.org/wiki/Gillies%27%20conjecture | In number theory, Gillies' conjecture is a conjecture about the distribution of prime divisors of Mersenne numbers and was made by Donald B. Gillies in a 1964 paper in which he also announced the discovery of three new Mersenne primes. The conjecture is a specialization of the prime number theorem and is a refinement of conjectures due to I. J. Good and Daniel Shanks. The conjecture remains an open problem: several papers give empirical support, but it disagrees with the widely accepted (but also open) Lenstra–Pomerance–Wagstaff conjecture.
The conjecture
He noted that his conjecture would imply that
The number of Mersenne primes less than $x$ is approximately $(2/\log 2)\,\log\log x$.
The expected number of Mersenne primes $M_p$ with $x < p < 2x$ is approximately $2$.
The probability that $M_p$ is prime is approximately $\dfrac{2 \log 2p}{p \log 2}$.
Incompatibility with Lenstra–Pomerance–Wagstaff conjecture
The Lenstra–Pomerance–Wagstaff conjecture gives different values:
The number of Mersenne primes less than $x$ is approximately $(e^{\gamma}/\log 2)\,\log\log x$, where $\gamma$ is the Euler–Mascheroni constant.
The expected number of Mersenne primes $M_p$ with $x < p < 2x$ is approximately $e^{\gamma}$.
The probability that $M_p$ is prime is approximately $\dfrac{e^{\gamma} \log ap}{p \log 2}$ with a = 2 if p = 3 mod 4 and 6 otherwise.
Asymptotically these values are about 11% smaller.
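The "about 11% smaller" statement follows from comparing the leading constants of the two conjectures (e^γ for Lenstra–Pomerance–Wagstaff against 2 for Gillies); a quick numerical check, assuming those constants as stated above:

```python
import math

EULER_GAMMA = 0.5772156649015329
lpw_constant = math.exp(EULER_GAMMA)   # ~1.781: LPW expected Mersenne primes per doubling of the exponent
gillies_constant = 2.0                 # corresponding value implied by Gillies' conjecture

ratio = lpw_constant / gillies_constant
print(f"e^gamma = {lpw_constant:.4f}")
print(f"LPW / Gillies = {ratio:.4f}  (about {100 * (1 - ratio):.0f}% smaller)")
```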
Results
While Gillies' conjecture remains open, several papers have added empirical support to its validity, including Ehrman's 1964 paper.
References
Conjectures
Unsolved problems in number theory
Hypotheses
Mersenne primes | Gillies' conjecture | [
"Mathematics"
] | 301 | [
"Unsolved problems in mathematics",
"Unsolved problems in number theory",
"Conjectures",
"Mathematical problems",
"Number theory"
] |
41,733,409 | https://en.wikipedia.org/wiki/Rigidity%20theory%20%28physics%29 | Rigidity theory, or topological constraint theory, is a tool for predicting properties of complex networks (such as glasses) based on their composition. It was introduced by James Charles Phillips in 1979 and 1981, and refined by Michael Thorpe in 1983. Inspired by the study of the stability of mechanical trusses as pioneered by James Clerk Maxwell, and by the seminal work on glass structure done by William Houlder Zachariasen, this theory reduces complex molecular networks to nodes (atoms, molecules, proteins, etc.) constrained by rods (chemical constraints), thus filtering out microscopic details that ultimately don't affect macroscopic properties. An equivalent theory was developed by P. K. Gupta and A. R. Cooper in 1990, where rather than nodes representing atoms, they represented unit polytopes. An example of this would be the SiO4 tetrahedra in pure glassy silica. This style of analysis has applications in biology and chemistry, such as understanding adaptability in protein-protein interaction networks. Rigidity theory applied to the molecular networks arising from phenotypical expression of certain diseases may provide insights regarding their structure and function.
In molecular networks, atoms can be constrained by radial 2-body bond-stretching constraints, which keep interatomic distances fixed, and angular 3-body bond-bending constraints, which keep angles fixed around their average values. As stated by Maxwell's criterion, a mechanical truss is isostatic when the number of constraints equals the number of degrees of freedom of the nodes. In this case, the truss is optimally constrained, being rigid but free of stress. This criterion has been applied by Phillips to molecular networks, which are called flexible, stressed-rigid or isostatic when the number of constraints per atoms is respectively lower, higher or equal to 3, the number of degrees of freedom per atom in a three-dimensional system.
The same condition applies to random packing of spheres, which are isostatic at the jamming point.
Typically, the conditions for glass formation will be optimal if the network is isostatic, which is for example the case for pure silica. Flexible systems show internal degrees of freedom, called floppy modes, whereas stressed-rigid ones are complexity locked by the high number of constraints and tend to crystallize instead of forming glass during a quick quenching.
Derivation of isostatic condition
The conditions for isostaticity can be derived by looking at the internal degrees of freedom of a general 3D network. For $N$ nodes, $N_c$ constraints, and $N_{\mathrm{eq}}$ equations of equilibrium, the number of degrees of freedom is
$F = 3N - N_c - N_{\mathrm{eq}}.$
The node term picks up a factor of 3 due to there being translational degrees of freedom in the x, y, and z directions. By similar reasoning, $N_{\mathrm{eq}} = 6$ in 3D, as there is one equation of equilibrium for translational and rotational modes in each dimension. This yields
$F = 3N - N_c - 6.$
This can be applied to each node in the system by normalizing by the number of nodes,
$f = \frac{F}{N} = 3 - n_c - \frac{6}{N} \approx 3 - n_c,$
where $n_c = N_c/N$, $f = F/N$, and the last term has been dropped since for atomistic systems $N$ is very large. Isostatic conditions are achieved when $f = 0$, yielding the number of constraints per atom in the isostatic condition of $n_c = 3$.
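A direct transcription of this mean-field counting, with the 6/N rigid-body term dropped as in the text (purely illustrative):

```python
def degrees_of_freedom_per_node(n_constraints_per_node, dimension=3):
    """Mean-field Maxwell counting: floppy modes per node in the large-system limit,
    where the global rigid-body motions (the 6/N term in 3D) are negligible."""
    return dimension - n_constraints_per_node

for n_c in (2.0, 3.0, 4.0):
    f = degrees_of_freedom_per_node(n_c)
    state = "flexible" if f > 0 else "isostatic" if f == 0 else "stressed-rigid"
    print(f"n_c = {n_c}: f = {f:+.1f}  ->  {state}")
```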
An alternative derivation is based on analyzing the shear modulus $G$ of the 3D network or solid structure. The isostatic condition, which represents the limit of mechanical stability, is equivalent to setting $G = 0$ in a microscopic theory of elasticity that provides $G$ as a function of the internal coordination number of nodes and of the number of degrees of freedom. The problem has been solved by Alessio Zaccone and E. Scossa-Romano in 2011, who derived the analytical formula for the shear modulus of a 3D network of central-force springs (bond-stretching constraints):
$G = \frac{1}{30}\,\frac{N}{V}\,\kappa\,R_{0}^{2}\,(z - 2d).$
Here, $\kappa$ is the spring constant, $R_0$ is the distance between two nearest-neighbor nodes, $z$ is the average coordination number of the network, $N/V$ is the number density of nodes, and $2d = 6$ in 3D. A similar formula has been derived for 2D networks, where the numerical prefactor differs from its 3D value.
Hence, based on the Zaccone–Scossa-Romano expression for $G$, upon setting $G = 0$ one obtains $z = 2d = 6$, or equivalently in different notation $n_c = 3$, which defines the Maxwell isostatic condition.
A similar analysis can be done for 3D networks with bond-bending interactions (on top of bond-stretching), which leads to the isostatic condition $z = 2.4$, with a lower threshold due to the angular constraints imposed by bond-bending.
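Enumerating both constraint types for an atom of coordination number r gives r/2 bond-stretching and 2r − 3 bond-bending constraints, and setting the total equal to 3 reproduces the threshold of 2.4 quoted above. The sketch below works through this counting; the Ge–Se composition example is a commonly cited illustration, and the coordination numbers (4 for Ge, 2 for Se) are assumptions introduced here for the example.

```python
def constraints_per_atom(r):
    """Bond-stretching (r/2) plus bond-bending (2r - 3) constraints
    for an atom of coordination number r >= 2."""
    return r / 2.0 + (2.0 * r - 3.0)

# Solve r/2 + 2r - 3 = 3  ->  r = 2.4 (rigidity threshold)
r_threshold = 6.0 / 2.5
print(f"isostatic mean coordination: {r_threshold}")

# Example: Ge_x Se_(1-x) glasses, with Ge 4-coordinated and Se 2-coordinated,
# so the mean coordination is <r> = 4x + 2(1 - x) = 2 + 2x.
for x in (0.10, 0.20, 0.30):
    mean_r = 2.0 + 2.0 * x
    n_c = constraints_per_atom(mean_r)
    state = "flexible" if n_c < 3 else "isostatic" if abs(n_c - 3) < 1e-9 else "stressed-rigid"
    print(f"x = {x:.2f}: <r> = {mean_r:.2f}, n_c = {n_c:.2f}  ->  {state}")
```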
Developments in glass science
Rigidity theory allows the prediction of optimal isostatic compositions, as well as the composition dependence of glass properties, by a simple enumeration of constraints. These glass properties include, but are not limited to, elastic modulus, shear modulus, bulk modulus, density, Poisson's ratio, coefficient of thermal expansion, hardness, and toughness. In some systems, due to the difficulty of directly enumerating constraints by hand and knowing all system information a priori, the theory is often employed in conjunction with computational methods in materials science such as molecular dynamics (MD). Notably, the theory played a major role in the development of Gorilla Glass 3. Extended to glasses at finite temperature and finite pressure, rigidity theory has been used to predict glass transition temperature, viscosity and mechanical properties. It was also applied to granular materials and proteins.
In the context of soft glasses, rigidity theory has been used by Alessio Zaccone and Eugene Terentjev to predict the glass transition temperature of polymers and to provide a molecular-level derivation and interpretation of the Flory–Fox equation. The Zaccone–Terentjev theory also provides an expression for the shear modulus of glassy polymers as a function of temperature which is in quantitative agreement with experimental data, and is able to describe the many orders of magnitude drop of the shear modulus upon approaching the glass transition from below.
In 2001, Boolchand and coworkers found that the isostatic compositions in glassy alloys—predicted by rigidity theory—exist not just at a single threshold composition; rather, in many systems it spans a small, well-defined range of compositions intermediate to the flexible (under-constrained) and stressed-rigid (over-constrained) domains. This window of optimally constrained glasses is thus referred to as the intermediate phase or the reversibility window, as the glass formation is supposed to be reversible, with minimal hysteresis, inside the window. Its existence has been attributed to the glassy network consisting almost exclusively of a varying population of isostatic molecular structures. The existence of the intermediate phase remains a controversial, but stimulating topic in glass science.
See also
Rigidity Percolation
References
Materials science
Glass physics | Rigidity theory (physics) | [
"Physics",
"Materials_science",
"Engineering"
] | 1,334 | [
"Glass engineering and science",
"Applied and interdisciplinary physics",
"Materials science",
"Glass physics",
"Condensed matter physics",
"nan"
] |
41,734,511 | https://en.wikipedia.org/wiki/C22H26O7 | The molecular formula C22H26O7 (molar mass: 402.44 g/mol, exact mass: 402.16785312 u) may refer to:
Gmelinol, a lignan
Habenariol, a phenolic compound found in orchids
L-165041, a PPARδ receptor agonist
Molecular formulas | C22H26O7 | [
"Physics",
"Chemistry"
] | 74 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
35,051,717 | https://en.wikipedia.org/wiki/SecDF%20protein-export%20membrane%20protein | SecD and SecF are prokaryotic protein export membrane proteins. They are a part of the larger multimeric protein export complex comprising SecA, D, E, F, G, Y, and YajC. SecD and SecF are required to maintain a proton motive force.
Secretion across the inner membrane in some Gram-negative bacteria occurs via the preprotein translocase pathway. Proteins are produced in the cytoplasm as precursors, and require a chaperone subunit to direct them to the translocase component. From there, the mature proteins are either targeted to the outer membrane, or remain as periplasmic proteins. The translocase protein subunits are encoded on the bacterial chromosome.
The translocase itself comprises 7 proteins, including a chaperone protein (SecB), an ATPase (SecA), an integral membrane complex (SecCY, SecE and SecG), and two additional membrane proteins that promote the release of the mature peptide into the periplasm (SecD and SecF). The chaperone protein SecB is a highly acidic homotetrameric protein that exists as a "dimer of dimers" in the bacterial cytoplasm. SecB maintains preproteins in an unfolded state after translation, and targets these to the peripheral membrane protein ATPase SecA for secretion. Together with SecY and SecG, SecE forms a multimeric channel through which preproteins are translocated, using both proton motive forces and ATP-driven secretion. The latter is mediated by SecA. The structure of the Escherichia coli SecYEG assembly revealed a sandwich of two membranes interacting through the extensive cytoplasmic domains. Each membrane is composed of dimers of SecYEG. The monomeric complex contains 15 transmembrane helices.
This family consists of various prokaryotic SecD and SecF protein export membrane proteins. The SecD and SecF equivalents of the Gram-positive bacterium Bacillus subtilis are jointly present in one polypeptide, denoted SecDF, that is required to maintain a high capacity for protein secretion. Unlike the SecD subunit of the pre-protein translocase of E. coli, SecDF of B. subtilis was not required for the release of a mature secretory protein from the membrane, indicating that SecDF is involved in earlier translocation steps. Comparison with SecD and SecF proteins from other organisms revealed the presence of 10 conserved regions in SecDF, some of which appear to be important for SecDF function. The SecDF protein of B. subtilis has 12 putative transmembrane domains. Thus, SecDF does not only show sequence similarity but also structural similarity to secondary solute transporters.
References
Protein families
Secretion | SecDF protein-export membrane protein | [
"Biology"
] | 588 | [
"Protein families",
"Protein classification"
] |
35,060,092 | https://en.wikipedia.org/wiki/Reverse%20migration%20%28immunology%29 |
Reverse Migration
Within molecular and cell biology, reverse migration is the phenomenon in which some neutrophils migrate away from the inflammation site, against the chemokine gradient, during inflammation resolution. The activation of in vivo inflammatory pathways (such as hypoxia-inducible factor, HIF), alters this behavior of reverse migration. The introduction of HIF and other related inflammatory pathways can alter the usual behavior and pattern of neutrophil migration, allowing these neutrophils to move away from the injury site rather than toward it. Several studies in the last few years have shown that reverse migration of neutrophils can play a dual role in the immune system response. On one hand, reverse migration can help in the resolution of inflammation by removing neutrophils once they have played their role at the site of injury. On the other hand, neutrophils re-entering the bloodstream can further contribute to the spread of a systemic infection. Therefore, it is essential to understand the regulation of reverse migration to treat a wide variety of inflammation-driven diseases including sepsis. However, the mechanisms that regulate the complex process of reverse migration remain poorly understood for the most part.
Role of Reverse Migration in Sepsis
Sepsis is a life-threatening organ dysfunction caused by the failure of the host immune system to respond adequately to infection. Sepsis can result from the spread of any type of infection, but the majority of cases of septic shock are the result of hospital-acquired gram-negative bacilli or gram-positive cocci infections. Septic shock occurs more often in patients who are immunocompromised and in patients that have chronic or debilitating diseases.
During the progression of sepsis, polymorphonuclear neutrophils (PMNs) are the most abundantly recruited innate immune cells at the site of infection, playing a critical role in the healing process. PMNs exhibit reverse migration as sepsis progresses as they migrate away from the injury site back into the vasculature, the arrangement of blood vessels around the site, following initial PMN infiltration. The role of reverse migration in the immune response requires further investigation, but the current thinking is that reverse migration can play a role in both a protective response and also a tissue-damaging event. A better understanding of the role of reverse migration in sepsis can provide a critical branching point in the development of therapeutic approaches to sepsis.
Current Knowledge on The Mechanisms of PMN Reverse Migration (rM)
The mechanisms that regulate polymorphonuclear neutrophil (PMN) migration from inflammatory sites are still not entirely or well understood. Several factors that contribute to PMN forward migration, such as chemotaxis, chemotactic attractants and repellents, chemokine receptors, interactions with endothelial cells, and changes in PMN behavior, are however thought to play integral roles in controlling PMN reverse migration (rM).
Polymorphonuclear Neutrophil (PMN) Response
In a typical infection response, polymorphonuclear neutrophils (PMNs) exhibit antimicrobial activity to clear pathogens from a site of inflammation through degranulation, phagocytosis, and the release of cytokines. Another process recently found to play a critical role in coagulation and neutrophil immune response is the formation of neutrophil extracellular traps (NETs). NETs are networks of chromatin fibres associated with granule-derived antimicrobial peptides and enzymes, which assist in the capture and removal of invading microbial pathogens. Once the antimicrobial functions of PMNs are carried out, it is essential to clear PMNs to restore homeostasis. Previously, PMN clearance was thought to occur through apoptosis or necrosis, followed by phagocytosis by macrophages. However, recent advances in imaging technology have revealed that PMNs can also migrate back into circulation, providing an alternative mechanism for removal of PMNs from the site of inflammation.
Mechanisms of Neutrophil Motility
Neutrophils are highly motile immune cells that play a crucial role in the body’s defense against infection and injury. They exhibit two distinct types of movement: chemokinesis, in which they migrate randomly in response to environmental cues, and chemotaxis, which is a more directed, regulated movement toward a specific location in response to chemical signals. During an inflammation event or injury, a variety of chemical signals, including chemokines and cytokines, orchestrate the movement of neutrophils to and from the injury site. Once neutrophils exit the bloodstream through transendothelial migration, they encounter several chemoattractants that help direct them toward the injured tissue. Once they have arrived at the site of inflammation, neutrophils perform several immune functions to eliminate pathogens and clear any possible debris. However, the effective resolution of inflammation depends not only on the neutrophils' ability to reach the site of injury but also on their timely removal from the site after their immune functions are completed. This removal can occur through programmed cell death (apoptosis) or reverse migration, where neutrophils return to the bloodstream and circulation. Consequently, any impairment in neutrophils' ability to interpret and respond to chemoattractants and complex signaling cues can lead to immune dysfunction or contribute to chronic inflammatory diseases.
Reverse Migration of PMNs as a Novel Drug Target
A major goal in immunology is to identify molecular targets involved in the body's response to wound-induced inflammation, which may include the process of reverse migration and the neutrophils involved. The introduction of necrosis or apoptosis-inducing drugs may cause an overall response of increased inflammation, even though they would aid in the clearance of neutrophils. Thus, there is heightened interest in targeting reverse migration of PMNs for the development of anti-inflammatory therapies. Several clinical trials are currently in place aiming to specifically target neutrophil migration signals. One current phase II trial involves the drug Reparixin, which has the potential to combat ischaemia–reperfusion injury and inflammation after on-pump coronary artery bypass graft surgery. Since this initial study in 2015, Reparixin has also been investigated as a treatment for patients with severe cases of COVID-19 related pneumonia. Innovative approaches to inflammation and infection such as the study of potential therapeutic compounds like Reparixin have the potential to provide unprecedented treatments for traditionally life-threatening infections.
References
Cell biology
Immunology
| Reverse migration (immunology) | [
"Chemistry",
"Biology"
] | 1,650 | [
"Biochemistry stubs",
"Immunology",
"Molecular and cellular biology stubs",
"Cell biology"
] |
21,888,215 | https://en.wikipedia.org/wiki/Swendsen%E2%80%93Wang%20algorithm | The Swendsen–Wang algorithm is the first non-local or cluster algorithm for Monte Carlo simulation for large systems near criticality. It has been introduced by Robert Swendsen and Jian-Sheng Wang in 1987 at Carnegie Mellon.
The original algorithm was designed for the Ising and Potts models, and it was later generalized to other systems as well, such as the XY model by Wolff algorithm and particles of fluids. The key ingredient was the random cluster model, a representation of the Ising or Potts model through percolation models of connecting bonds, due to Fortuin and Kasteleyn. It has been generalized by Barbu and Zhu to arbitrary sampling probabilities by viewing it as a Metropolis–Hastings algorithm and computing the acceptance probability of the proposed Monte Carlo move.
Motivation
The problem of the critical slowing-down affecting local processes is of fundamental importance in the study of second-order phase transitions (like the ferromagnetic transition in the Ising model), as increasing the size of the system in order to reduce finite-size effects has the disadvantage of requiring a far larger number of moves to reach thermal equilibrium. Indeed the correlation time usually grows as a power of the correlation length, $\tau \propto \xi^{z}$ with a dynamical exponent $z \approx 2$ or greater for local updates; since, to be accurate, the simulation time must be much longer than the correlation time, this is a major limitation in the size of the systems that can be studied through local algorithms. The SW algorithm was the first to produce unusually small values of the dynamical critical exponent $z$ for the 2D and 3D Ising models, well below the values of about 2 obtained with standard single-spin-flip simulations.
Description
The algorithm is non-local in the sense that a single sweep updates a collection of spin variables based on the Fortuin–Kasteleyn representation. The update is done on a "cluster" of spin variables connected by open bond variables that are generated through a percolation process, based on the interaction states of the spins.
Consider a typical ferromagnetic Ising model with only nearest-neighbor interaction.
Starting from a given configuration of spins, we associate to each pair of nearest neighbours on sites $n$, $m$ a random variable $b_{n,m} \in \{0, 1\}$ which is interpreted in the following way: if $b_{n,m} = 0$ then there is no link between the sites $n$ and $m$ (the bond is closed); if $b_{n,m} = 1$ then there is a link connecting the spins (the bond is open). These values are assigned according to the following (conditional) probability distribution:
$P[b_{n,m} = 0 \mid \sigma_n \neq \sigma_m] = 1$;
$P[b_{n,m} = 1 \mid \sigma_n \neq \sigma_m] = 0$;
$P[b_{n,m} = 0 \mid \sigma_n = \sigma_m] = e^{-2\beta J_{n,m}}$;
$P[b_{n,m} = 1 \mid \sigma_n = \sigma_m] = 1 - e^{-2\beta J_{n,m}}$;
where $J_{n,m} > 0$ is the ferromagnetic coupling strength between the spins at sites $n$ and $m$, and $\beta = 1/(k_B T)$ is the inverse temperature.
This probability distribution has been derived in the following way: the Hamiltonian of the Ising model is
$H[\sigma] = -\sum_{\langle n,m \rangle} J_{n,m}\,\sigma_n \sigma_m,$
and the partition function is
$Z = \sum_{\{\sigma\}} e^{-\beta H[\sigma]}.$
Consider the interaction between a pair of selected sites $n$ and $m$ and eliminate it from the total Hamiltonian, defining
$H_{n,m}[\sigma] = H[\sigma] + J_{n,m}\,\sigma_n \sigma_m.$
Define also the restricted sums:
$Z_{n,m}^{\mathrm{same}} = \sum_{\{\sigma\}:\,\sigma_n = \sigma_m} e^{-\beta H_{n,m}[\sigma]}; \qquad Z_{n,m}^{\mathrm{diff}} = \sum_{\{\sigma\}:\,\sigma_n \neq \sigma_m} e^{-\beta H_{n,m}[\sigma]}.$
Introduce the quantity
$Z_{n,m}^{\mathrm{ind}} = Z_{n,m}^{\mathrm{same}} + Z_{n,m}^{\mathrm{diff}};$
the partition function can be rewritten as
$Z = \left(e^{\beta J_{n,m}} - e^{-\beta J_{n,m}}\right) Z_{n,m}^{\mathrm{same}} + e^{-\beta J_{n,m}}\, Z_{n,m}^{\mathrm{ind}}.$
Since the first term contains a restriction on the spin values whereas there is no restriction in the second term, the weighting factors (properly normalized) can be interpreted as probabilities of forming/not forming a link between the sites: given equal neighbouring spins, a link is formed with probability $1 - e^{-2\beta J_{n,m}}$, reproducing the conditional distribution given above.
The process can be easily adapted to antiferromagnetic spin systems, as it is sufficient to eliminate in favor of (as suggested by the change of sign in the interaction constant).
After assigning the bond variables, we identify the same-spin clusters formed by connected sites and, independently for each cluster, invert all the spin variables in that cluster with probability 1/2. At the following time step we have a new starting Ising configuration, which will produce a new clustering and a new collective spin-flip.
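A minimal sketch of one Swendsen–Wang sweep for the 2D ferromagnetic Ising model with periodic boundaries, using the bond probability 1 − e^{−2βJ} described above; the lattice size, seed, and all function names are choices made for this illustration rather than anything from the original paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sw_sweep(spins, beta, J=1.0):
    """One Swendsen-Wang cluster update on a 2D Ising lattice with periodic boundaries."""
    L = spins.shape[0]
    p_bond = 1.0 - np.exp(-2.0 * beta * J)   # open-bond probability for aligned neighbours

    # Union-find over the L*L sites.
    parent = np.arange(L * L)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    # Open bonds (with probability p_bond) only between equal nearest-neighbour spins.
    for x in range(L):
        for y in range(L):
            site = x * L + y
            for nx, ny in ((x, (y + 1) % L), ((x + 1) % L, y)):
                if spins[x, y] == spins[nx, ny] and rng.random() < p_bond:
                    union(site, nx * L + ny)

    # Flip each cluster independently with probability 1/2.
    roots = np.array([find(i) for i in range(L * L)])
    flip = {r: rng.random() < 0.5 for r in np.unique(roots)}
    for x in range(L):
        for y in range(L):
            if flip[roots[x * L + y]]:
                spins[x, y] *= -1
    return spins

# Example: short run near the 2D critical temperature (beta_c ~ 0.4407).
L = 16
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(100):
    spins = sw_sweep(spins, beta=0.44)
print("magnetisation per spin:", spins.mean())
```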
Correctness
It can be shown that this algorithm leads to equilibrium configurations. To show this, we interpret the algorithm as a Markov chain, and show that the chain is both ergodic (when used together with other algorithms) and satisfies detailed balance, such that the equilibrium Boltzmann distribution is equal to the stationary distribution of the chain.
Ergodicity means that it is possible to transit from any initial state to any final state with a finite number of updates. It has been shown that the SW algorithm is not ergodic in general (in the thermodynamic limit). Thus in practice, the SW algorithm is usually used in conjunction with single spin-flip algorithms such as the Metropolis–Hastings algorithm to achieve ergodicity.
The SW algorithm does however satisfy detailed-balance. To show this, we note that every transition between two Ising spin states must pass through some bond configuration in the percolation representation. Let's fix a particular bond configuration: what matters in comparing the probabilities related to it is the number of factors $e^{-2\beta J}$, one for each missing bond between neighboring spins with the same value; the probability of going to a certain Ising configuration compatible with a given bond configuration is uniform (say $p$). So the ratio of the transition probabilities of going from one state to another is
$\frac{W(\sigma \to \sigma')}{W(\sigma' \to \sigma)} = e^{-\beta\left[E(\sigma') - E(\sigma)\right]} = \frac{P_{\mathrm{eq}}(\sigma')}{P_{\mathrm{eq}}(\sigma)},$
since the equilibrium Boltzmann weight is $P_{\mathrm{eq}}(\sigma) \propto e^{-\beta E(\sigma)}$.
This is valid for every bond configuration the system can pass through during its evolution, so detailed balance is satisfied for the total transition probability. This proves that the algorithm is correct.
Efficiency
Although not analytically clear from the original paper, the reason why all the values of z obtained with the SW algorithm are much lower than the exact lower bound for single-spin-flip algorithms is that the correlation length divergence is strictly related to the formation of percolation clusters, which are flipped together. In this way the relaxation time is significantly reduced. Another way to view this is through the correspondence between the spin statistics and cluster statistics in the Edwards-Sokal representation. Some mathematically rigorous results on the mixing time of this process have been obtained by Guo and Jerrum.
Generalizations
The algorithm is not efficient in simulating frustrated systems, because the correlation length of the clusters is larger than the correlation length of the spin model in the presence of frustrated interactions. Currently, there are two main approaches to addressing this problem, such that the efficiency of cluster algorithms is extended to frustrated systems.
The first approach is to extend the bond-formation rules to more non-local cells, and the second approach is to generate clusters based on more relevant order parameters. In the first case, we have the KBD algorithm for the fully-frustrated Ising model, where the decision of opening bonds is made on each plaquette, arranged in a checkerboard pattern on the square lattice. In the second case, we have replica cluster moves for low-dimensional spin glasses, where the clusters are generated based on spin overlaps, which is believed to be the relevant order parameter.
See also
Random cluster model
Monte Carlo method
Wolff algorithm
http://www.hpjava.org/theses/shko/thesis_paper/node69.html
http://www-fcs.acs.i.kyoto-u.ac.jp/~harada/monte-en.html
References
Kasteleyn P. W. and Fortuin (1969) J. Phys. Soc. Jpn. Suppl. 26s:11
Monte Carlo methods
Statistical mechanics
Critical phenomena
Phase transitions | Swendsen–Wang algorithm | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 1,428 | [
"Physical phenomena",
"Phase transitions",
"Monte Carlo methods",
"Phases of matter",
"Critical phenomena",
"Computational physics",
"Condensed matter physics",
"Statistical mechanics",
"Matter",
"Dynamical systems"
] |
21,889,398 | https://en.wikipedia.org/wiki/Vitellin | Vitellin is a protein found in the egg yolk. It is a phosphoprotein. Vitellin is a generic name for major of many yolk proteins.
Vitellin has been known since the 1900s. The periodic acid–Schiff method and Sudan black B dye were used to help determine that vitellin is a glycolipoprotein, because it stained positive in both tests. The protein was found to weigh ~540 kDa, based on the weights of its four major subunits.
Vitellin is essential in the fertilization process, and embryonic development in egg-laying organisms.
This phosphoprotein acts as a membrane, 1–3.5 μm thick, that encloses the egg and comprises at least five glycoproteins resembling the zona pellucida of mammalian organisms. When the egg is fertilized, the envelope lifts off from the gamete surface, forming the fertilization membrane in most invertebrates, amphibians, birds, and fishes. During fertilization, the acrosome of the sperm interacts with the vitelline envelope, which provides species-specific recognition and binding for the sperm. The vitelline membrane consists of two major layers: the inner layer, formed in the ovary, and the outer layer, formed in the oviduct. This membrane supports the yolk and separates it from the albumen, or egg white. The proteins that primarily compose the vitelline membrane are lysozyme and ovomucin, which are foundational for membrane growth during embryonic development. Aside from structural functions, it is also a barrier that permits the diffusion of water and nutrients, and in chickens especially, it is a barrier against microbial infection. Vitellin comprises a vast fraction of the proteins found in eggs, and because of this it is easily characterized with biochemical methods, supporting molecular, developmental, and physiological regulation studies.
Vitellin has been studied in honey bees and was found to contribute to the health of the embryo by providing immunity during the embryonic stages. This immunity came from two of the four domains of the protein's structure: in honey bees, the domain of unknown function 1943 (DUF1943) and the von Willebrand factor (vWF) type D domain were linked to pathogen recognition and increased immunity in embryonic honey bees.
Allergen
Vitellin is an umbrella term for many vitellin sub-proteins, and some of these sub-proteins are linked with egg allergies.
See also
Egg allergy
Vitellogenin
References
Hagedorn, H. H., and J. G. Kunkel. "Vitellogenin and vitellin in insects." Annual review of entomology 24.1 (1979): 475-505. https://doi.org/10.1146/annurev.en.24.010179.002355
Zhu, Jiang, Leslie S. Indrasith, and Okitsugu Yamashita. "Characterization of vitellin, egg-specific protein and 30 kDa protein from Bombyx eggs, and their fates during oogenesis and embryogenesis." Biochimica et Biophysica Acta (BBA) - General Subjects 882.3 (1986): 427-436. https://doi.org/10.1016/0304-4165(86)90267-9
Avian proteins
Phosphoproteins | Vitellin | [
"Chemistry"
] | 736 | [
"Biochemistry stubs",
"Protein stubs"
] |
21,890,097 | https://en.wikipedia.org/wiki/Multilinear%20polynomial | In algebra, a multilinear polynomial is a multivariate polynomial that is linear (meaning affine) in each of its variables separately, but not necessarily simultaneously. It is a polynomial in which no variable occurs to a power of or higher; that is, each monomial is a constant times a product of distinct variables. For example is a multilinear polynomial of degree (because of the monomial ) whereas is not. The degree of a multilinear polynomial is the maximum number of distinct variables occurring in any monomial.
Definition
Multilinear polynomials can be understood as a multilinear map (specifically, a multilinear form) applied to the vectors [1 x], [1 y], etc. The general form can be written as a tensor contraction:
p(x_1, …, x_n) = Σ_{i_1, …, i_n ∈ {0,1}} a_{i_1 ⋯ i_n} x_1^{i_1} ⋯ x_n^{i_n}.
For example, in two variables:
p(x, y) = a_{00} + a_{10} x + a_{01} y + a_{11} x y.
Properties
A multilinear polynomial f is linear (affine) when varying only one variable, x_1: f(x_1, …, x_n) = a x_1 + b, where a and b do not depend on x_1. Note that b is generally not zero, so f is linear in the "shaped like a line" sense, but not in the "directly proportional" sense of a multilinear map.
All repeated second partial derivatives are zero: ∂^2 f/∂x_i^2 = 0. In other words, its Hessian matrix is a symmetric hollow matrix.
In particular, the Laplacian ∇^2 f = Σ_i ∂^2 f/∂x_i^2 = 0, so f is a harmonic function. This implies f has maxima and minima only on the boundary of the domain.
More generally, every restriction of f to a subset of its coordinates is also multilinear, so ∂^2 f/∂x_i^2 = 0 still holds when one or more variables are fixed. In other words, f is harmonic on every "slice" of the domain along coordinate axes.
On a rectangular domain
When the domain is rectangular in the coordinate axes (e.g. a hypercube), f will have maxima and minima only on the vertices of the domain, i.e. the finite set of points with minimal and maximal coordinate values. The value of the function on these points completely determines the function, since the value on the edges of the boundary can be found by linear interpolation, and the value on the rest of the boundary and the interior is fixed by Laplace's equation, ∇^2 f = 0.
The value of the polynomial at an arbitrary point can be found by repeated linear interpolation along each coordinate axis. Equivalently, it is a weighted mean of the vertex values, where the weights are the Lagrange interpolation polynomials. These weights also constitute a set of generalized barycentric coordinates for the hyperrectangle. Geometrically, the point divides the domain into smaller hyperrectangles, and the weight of each vertex is the (fractional) volume of the hyperrectangle opposite it.
Algebraically, the multilinear interpolant on the hyperrectangle [a_1, b_1] × ⋯ × [a_n, b_n] is
f(x) = Σ_v f(v) ∏_{i=1}^{n} w_i(v),  with  w_i(v) = (x_i − a_i)/(b_i − a_i) if v_i = b_i and w_i(v) = (b_i − x_i)/(b_i − a_i) if v_i = a_i,
where the sum is taken over the vertices v. Equivalently,
f(x) = (1/V) Σ_v f(v) ∏_{i=1}^{n} |x_i − x̄_i(v)|,
where x̄(v) is the vertex opposite to v and V is the volume of the hyperrectangle.
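To make the vertex weighting explicit, the following Python sketch evaluates the interpolant on the unit hypercube [0, 1]^n from its 2^n vertex values; it is a minimal illustration under that assumption, and the function name and dictionary layout are choices made here rather than a standard API.

```python
from itertools import product

def multilinear_interpolate(vertex_values, x):
    """Evaluate the multilinear interpolant on the unit hypercube [0, 1]^n.

    vertex_values maps each vertex (a tuple of 0s and 1s) to f(vertex); x is a
    point inside the cube. Each vertex is weighted by the fractional volume of
    the sub-box opposite to it: prod_i (x_i if v_i == 1 else 1 - x_i)."""
    total = 0.0
    for v in product((0, 1), repeat=len(x)):
        weight = 1.0
        for xi, vi in zip(x, v):
            weight *= xi if vi == 1 else 1.0 - xi
        total += weight * vertex_values[v]
    return total

# Bilinear check: f(x, y) = 1 + 2x + 3y + 4xy reproduced from its corner values
corners = {(0, 0): 1.0, (1, 0): 3.0, (0, 1): 4.0, (1, 1): 10.0}
print(multilinear_interpolate(corners, (0.5, 0.25)))   # 3.25
```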
The value at the center is the arithmetic mean of the value at the vertices, which is also the mean over the domain boundary, and the mean over the interior. The components of the gradient at the center are proportional to the balance of the vertex values along each coordinate axis.
The vertex values and the coefficients of the polynomial are related by a linear transformation (specifically, a Möbius transform if the domain is the unit hypercube {0, 1}^n, and a Walsh-Hadamard-Fourier transform if the domain is the symmetric hypercube {−1, 1}^n).
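As a sketch of that linear transformation for the unit hypercube {0, 1}^n, the coefficients can be recovered from the vertex values by inclusion-exclusion, which is one standard way to write the Möbius transform; this is an illustrative implementation, not a library routine, and the function name is a choice made here.

```python
from itertools import product

def multilinear_coefficients(vertex_values):
    """Moebius (inclusion-exclusion) transform on {0, 1}^n: recover the
    coefficients c_S of sum_S c_S prod_{i in S} x_i from the values of the
    polynomial on the vertices of the unit hypercube."""
    n = len(next(iter(vertex_values)))
    coeffs = {}
    for v in vertex_values:                   # v encodes the monomial prod_{i: v_i = 1} x_i
        c = 0.0
        for u in product((0, 1), repeat=n):
            if all(ui <= vi for ui, vi in zip(u, v)):   # u ranges over sub-vertices of v
                c += (-1) ** (sum(v) - sum(u)) * vertex_values[u]
        coeffs[v] = c
    return coeffs

# For f(x, y) = 1 + 2x + 3y + 4xy the vertex values give back the coefficients
corners = {(0, 0): 1.0, (1, 0): 3.0, (0, 1): 4.0, (1, 1): 10.0}
print(multilinear_coefficients(corners))
# {(0, 0): 1.0, (1, 0): 2.0, (0, 1): 3.0, (1, 1): 4.0}
```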
Applications
Multilinear polynomials are the interpolants of multilinear or n-linear interpolation on a rectangular grid, a generalization of linear interpolation, bilinear interpolation and trilinear interpolation to an arbitrary number of variables. This is a specific form of multivariate interpolation, not to be confused with piecewise linear interpolation. The resulting polynomial is not a linear function of the coordinates (its degree can be higher than 1), but it is a linear function of the fitted data values.
The determinant, permanent and other immanants of a matrix are homogeneous multilinear polynomials in the elements of the matrix (and also multilinear forms in the rows or columns).
The multilinear polynomials in n variables form a 2^n-dimensional vector space, which is also the basis used in the Fourier analysis of (pseudo-)Boolean functions. Every (pseudo-)Boolean function can be uniquely expressed as a multilinear polynomial (up to a choice of domain and codomain).
Multilinear polynomials are important in the study of polynomial identity testing.
See also
Bilinear and trilinear interpolation, using multivariate polynomials with two or three variables
Zhegalkin polynomial, a multilinear polynomial over the two-element field GF(2)
Multilinear form and multilinear map, multilinear functions that are strictly linear (not affine) in each variable
Linear form, a multivariate linear function
Harmonic polynomial
References
Polynomials | Multilinear polynomial | [
"Mathematics"
] | 1,005 | [
"Polynomials",
"Algebra"
] |
21,890,398 | https://en.wikipedia.org/wiki/Eastern%20blot | The eastern blot, or eastern blotting, is a biochemical technique used to analyze protein post-translational modifications including the addition of lipids, phosphates, and glycoconjugates. It is most often used to detect carbohydrate epitopes. Thus, eastern blot can be considered an extension of the biochemical technique of western blot. Multiple techniques have been described by the term "eastern blot(ting)", most use phosphoprotein blotted from sodium dodecyl sulfate–polyacrylamide gel electrophoresis (SDS-PAGE) gel on to a polyvinylidene fluoride or nitrocellulose membrane. Transferred proteins are analyzed for post-translational modifications using probes that may detect lipids, carbohydrate, phosphorylation or any other protein modification. Eastern blotting should be used to refer to methods that detect their targets through specific interaction of the post-translational modifications and the probe, distinguishing them from a standard far-western blot. In principle, eastern blotting is similar to lectin blotting (i.e., detection of carbohydrate epitopes on proteins or lipids).
History and multiple definitions
Definition of the term eastern blot is somewhat confused due to multiple sets of authors dubbing a new method as eastern blot, or a derivative thereof. All of the definitions are a derivative of the technique of western blot developed by Towbin in 1979. The current definitions are summarized below in order of the first use of the name; however, all are based on some earlier works. In some cases, the technique had been in practice for some time before the introduction of the term.
(1982) The term eastern blotting was specifically rejected by two separate groups: Reinhart and Malamud referred to a protein blot of a native gel as a native blot; Peferoen et al., opted to refer to their method of drawing sodium dodecyl sulfate-gel separated proteins onto nitrocellulose using a vacuum as Vacuum blotting.
(1984) Middle-eastern blotting has been described as a blot of polyA RNA (resolved by agarose) which is then immobilized. The immobilized RNA is then probed using DNA.
(1996) Eastern-western blot was first used by Bogdanov et al. The method involved blotting of phospholipids on polyvinylidene fluoride or nitrocellulose membrane prior to transfer of proteins onto the same nitrocellulose membrane by conventional western blotting and probing with conformation-specific antibodies. This method is based on earlier work by Taki et al. in 1994, which they originally dubbed TLC blotting, and was based on a similar method introduced by Towbin in 1984.
(2000) Far-eastern blotting seems to have been first named in 2000 by Ishikawa & Taki. The method is described more fully in the article on far-eastern blot, but is based on antibody or lectin staining of lipids transferred to polyvinylidene fluoride membranes.
(2001) Eastern blotting was described as a technique for detecting glycoconjugates generated by blotting BSA onto polyvinylidene fluoride membranes, followed by periodate treatment. The oxidized protein is then treated with a complex mixture, generating a new conjugate on the membrane. The membrane is then probed with antibodies for epitopes of interest. This method has also been discussed in later work by the same group. The method is essentially far-eastern blot.
(2002) Eastern blot has also been used to describe an immunoblot performed on proteins blotted to a polyvinylidene fluoride membrane from a PAGE gel run with opposite polarity. Since this is essentially a western blot, the charge reversal was used to dub this method an eastern blot.
(2005) Eastern blot has been used to describe a blot of proteins on polyvinylidene fluoride membrane where the probe is an aptamer rather than an antibody. This could be seen as similar to a Southern blot, however the interaction is between a DNA molecule (the aptamer) and a protein, rather than two DNA molecules. The method is similar to southwestern blot.
(2006) Eastern blotting has been used to refer to the detection of fusion proteins through complementation. The name is based on the use of an enzyme activator (EA) as part of the detection.
(2009) Eastern blotting has most recently been re-dubbed by Thomas et al. as a technique which probes proteins blotted to polyvinylidene fluoride membrane with lectins, cholera toxin and chemical stains to detect glycosylated, lipoylated or phosphorylated proteins. These authors distinguish the method from the far-eastern blot named by Taki et al. in that they use lectin probes and other staining reagents.
(2009) Eastern blot has been used to describe a blot of proteins on nitrocellulose membrane where the probe is an aptamer rather than an antibody. The method is similar to southwestern blot.
(2011) A recent study used the term eastern blotting to describe detection of glycoproteins with lectins such as concanavalin A.
There is clearly no single accepted definition of the term. A recent highlight article has interviewed Ed Southern, originator of the Southern blot, regarding a rechristening of eastern blotting from Tanaka et al. The article likens the eastern blot to "fairies, unicorns, and a free lunch" and states that eastern blots "don't exist." The eastern blot is mentioned in an immunology textbook which compares the common blotting methods (Southern, northern and western), and states that "the eastern blot, however, exists only in test questions."
The principles used for eastern blotting to detect glycans can be traced back to the use of lectins to detect protein glycosylation. The earliest example for this mode of detection is Tanner and Anstee in 1976, where lectins were used to detect glycosylated proteins isolated from human erythrocytes. The specific detection of glycosylation through blotting is usually referred to as lectin blotting. A summary of more recent improvements of the protocol has been provided by H. Freeze.
Applications
One application of the technique includes detection of protein modifications in two bacterial species Ehrlichia- E. muris and IOE. Cholera toxin B subunit (which binds to gangliosides), concanavalin A (which detects mannose-containing glycans) and nitrophospho molybdate-methyl green (which detects phosphoproteins) were used to detect protein modifications. The technique showed that the antigenic proteins of the non-virulent E.muris is more post-translationally modified than the highly virulent IOE.
Significance
Most proteins that are translated from mRNA undergo modifications before becoming functional in cells. These modifications are collectively known as post-translational modifications. The nascent or folded proteins, which are stable under physiological conditions, are then subjected to a battery of specific enzyme-catalyzed modifications on the side chains or backbones.
Post-translational modification of proteins can include acetylation, acylation (myristoylation, palmitoylation), alkylation, arginylation, ADP-ribosylation, biotinylation, formylation, geranylgeranylation, glutamylation, glycosylation, glycylation, hydroxylation, isoprenylation, lactylation, lipoylation, methylation, nitroalkylation, phosphopantetheinylation, phosphorylation, prenylation, selenation, S-nitrosylation, succinylation, sulfation, transglutamination, sulfinylation, sulfonylation and ubiquitination (sumoylation, neddylation).
Post-translational modifications occurring at the N-terminus of the amino acid chain play an important role in translocation across biological membranes. These include secretory proteins in prokaryotes and eukaryotes and also proteins that are intended to be incorporated in various cellular and organelle membranes such as lysosomes, chloroplast, mitochondria and plasma membrane. Expression of posttranslated proteins is important in several diseases.
See also
Western blot
Northwestern blot
Far-eastern blot
Blot
References
Carbohydrate chemistry
Molecular biology techniques
Protein methods
Biochemistry methods | Eastern blot | [
"Chemistry",
"Biology"
] | 1,893 | [
"Biochemistry methods",
"Biochemistry",
"Protein methods",
"Protein biochemistry",
"Molecular biology techniques",
"Carbohydrate chemistry",
"nan",
"Molecular biology",
"Chemical synthesis",
"Glycobiology"
] |
37,548,284 | https://en.wikipedia.org/wiki/Cancer/testis%20antigen%20family%2045%2C%20member%20a5 | Cancer/testis antigen family 45, member A5 is a protein in humans that is encoded by the CT45A5 gene.
This gene represents one of a cluster of six similar genes located on the q arm of chromosome X. The genes in this cluster encode members of the cancer/testis (CT) family of antigens, and are distinct from other CT antigens. These antigens are thought to be novel therapeutic targets for human cancers. Alternative splicing results in multiple transcript variants. A related pseudogene has been identified on chromosome 5. [provided by RefSeq, May 2010].
References
Further reading
Proteins | Cancer/testis antigen family 45, member a5 | [
"Chemistry"
] | 131 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
37,551,054 | https://en.wikipedia.org/wiki/C3H4N2O | {{DISPLAYTITLE:C3H4N2O}}
The molecular formula C3H4N2O may refer to:
2-Aminooxazole
Cyanoacetamide | C3H4N2O | [
"Chemistry"
] | 42 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
37,552,738 | https://en.wikipedia.org/wiki/SNAP-tag | SNAP-tag® is a self-labeling protein tag commercially available in various expression vectors. SNAP-tag is a 182 residue polypeptide (19.4 kDa) that can be fused to any protein of interest and further specifically and covalently tagged with a suitable ligand, such as a fluorescent dye. Since its introduction, SNAP-tag has found numerous applications in biochemistry and for the investigation of the function and localisation of proteins and enzymes in living cells.
Applications
Cell biology utilizes tools that allow manipulation and visualization of proteins in living cells. An important example is the use of fluorescent proteins, such as the green fluorescent protein (GFP) or yellow fluorescent protein (YFP). Molecular biology methods allow these fluorescent proteins to be introduced and expressed in living cells as fusion proteins. However, the photo-physical properties of the fluorescent proteins are generally not suited for single-molecule spectroscopy. Fluorescent proteins have, in comparison to commercially available dyes, a much lower fluorescence quantum yield and are quickly destroyed upon excitation with a focused laser beam (photobleaching).
The SNAP-tag® protein is an engineered version of the ubiquitous mammalian enzyme AGT, encoded in humans by the O-6-methylguanine-DNA methyltransferase (MGMT) gene. SNAP-tag was obtained using a directed evolution strategy, leading to a hAGT variant that accepts O6-benzylguanine derivatives instead of repairing alkylated guanine derivatives in damaged DNA.
An orthogonal tag, called CLIP-tag™, was further engineered from SNAP-tag to accept O2-benzylcytosine derivatives as substrates, instead of O6-benzylguanine. Therefore, Clip-tag- and SNAP-tag-fused proteins can be labeled simultaneously in the same cells. A split-SNAP-tag version suitable for protein complementation assay and protein-protein interaction studies was later developed.
Apart from fluorescence microscopy, SNAP-tag and CLIP-tag have proven useful in the elucidation of numerous biological processes, including the identification of multiprotein complexes using various approaches such as FRET, cross-linking and the proximity ligation assay, as well as the purification of insulin secretory granules of distinct age by pulse-chase experiments. Other applications include the measurement of protein half-lives in vivo and of small molecule-protein interactions.
SNAP-tag® is a registered trademark of New England Biolabs, Inc.
CLIP-tag™ is a trademark of New England Biolabs, Inc.
See also
Protein tag
HaloTag
SpyTag
Fluorescent proteins
References
Further reading
External links
Presentation of SNAP-Tag and CLIP-Tag (NEB)
Self Labeling Protein Tags. In: Bioforum, 2005, No. 6, pp. 50–51.
Biochemistry detection methods | SNAP-tag | [
"Chemistry",
"Biology"
] | 578 | [
"Biochemistry methods",
"Chemical tests",
"Biochemistry detection methods"
] |
37,552,988 | https://en.wikipedia.org/wiki/Kempf%20vanishing%20theorem | In algebraic geometry, the Kempf vanishing theorem, introduced by , states that the higher cohomology group Hi(G/B,L(λ)) (i > 0) vanishes whenever λ is a dominant weight of B. Here G is a reductive algebraic group over an algebraically closed field, B a Borel subgroup, and L(λ) a line bundle associated to λ. In characteristic 0 this is a special case of the Borel–Weil–Bott theorem, but unlike the Borel–Weil–Bott theorem, the Kempf vanishing theorem still holds in positive characteristic.
Andersen (1980) and Haboush (1980) found simpler proofs of the Kempf vanishing theorem using the Frobenius morphism.
References
Algebraic groups
Theorems in algebraic geometry | Kempf vanishing theorem | [
"Mathematics"
] | 156 | [
"Theorems in algebraic geometry",
"Theorems in geometry"
] |
47,857,832 | https://en.wikipedia.org/wiki/SAE%20310S%20stainless%20steel | SAE 310S stainless steel is the low carbon version of 310 and is suggested for applications where sensitisation, and subsequent corrosion by high temperature gases or condensates during shutdown may pose a problem.
Overview
SAE 310 stainless steel is a highly alloyed austenitic stainless steel used for high-temperature applications. The high chromium and nickel content gives the steel excellent oxidation resistance as well as high strength at high temperature. This grade is also very ductile and has good weldability, enabling its widespread usage in many applications.
310/310S find wide application in all high-temperature environments where scaling and corrosion resistance, as well as high temperature strength and good creep resistance are required.
Chemical composition
See also
SAE steel grades
17-4 stainless steel
External links
Stainless Steel Products
Chromium alloys
Stainless steel
Building materials | SAE 310S stainless steel | [
"Physics",
"Chemistry",
"Engineering"
] | 168 | [
"Building engineering",
"Architecture",
"Construction",
"Materials",
"Alloys",
"Chromium alloys",
"Matter",
"Building materials"
] |
47,863,111 | https://en.wikipedia.org/wiki/Susan%20Brown%20%28mathematician%29 | Susan North Brown (22 December 1937 – 11 August 2017) was a professor of mathematics at University College London and a leading researcher in the field of fluid mechanics.
Background and employment
An exact timeline for Susan Brown's career has been difficult to pin down, but a newsletter published by the Department of Mathematical and Physical Sciences at UCL shortly after her death offers a framework for her career achievements and highlights the esteem in which she was held by colleagues and students. Her undergraduate degree in mathematics was from St Hilda's College, Oxford. For about two more years she continued studies at Oxford, in theoretical fluid mechanics, and she then moved to the University of Durham to complete her DPhil in 1964. During this time she held temporary Lectureships at both Durham and Newcastle and in 1964 began a Lectureship that started her long association with UCL. From her Lectureship she advanced to a Readership in 1971 and was appointed to a Professorship in 1986. The afore-mentioned departmental newsletter that recapped her accomplishments after her death expresses the belief that Brown was the first female in the UK to be appointed to a professorship in Mathematics, but Joan E. Walsh was promoted to a professorship in mathematics at the University of Manchester in 1974.
Research
Brown's department described her as an outstanding teacher and someone with an international reputation for her research. She had a productive partnership in fluid dynamics with UCL colleague Keith Stewartson—who also arrived at UCL in 1964. Quoting from the afore-mentioned departmental newsletter, "Together they published 29 papers and pioneered early developments of 'triple-deck' theory, which, in turn, enabled resolution of long-standing questions in steady and unsteady trailing-edge flows, and addressed associated important aerodynamic applications. Another area for which Brown was especially renowned was a series of discussions of critical layers, especially effects of viscosity and nonlinearity and applications to geophysical flows such as atmospheric jets."
Death
Brown died on 11 August 2017, aged 79, in London.
Selected publications
References
1937 births
2017 deaths
British mathematicians
British women mathematicians
Academics of University College London
Fluid mechanics
Alumni of Durham University
Academics of Durham University
Academics of Newcastle University
Alumni of St Hilda's College, Oxford
Scientists from Southampton | Susan Brown (mathematician) | [
"Engineering"
] | 474 | [
"Civil engineering",
"Fluid mechanics"
] |
46,366,448 | https://en.wikipedia.org/wiki/Candida%20insectamens | Candida insectamens is a yeast species in the genus Candida.
References
insectamens
Fungus species | Candida insectamens | [
"Biology"
] | 23 | [
"Fungi",
"Fungus species"
] |
46,373,048 | https://en.wikipedia.org/wiki/Quantum%20microscopy | Quantum microscopy allows microscopic properties of matter and quantum particles to be measured and imaged. Various types of microscopy use quantum principles. The first microscope to do so was the scanning tunneling microscope, which paved the way for development of the photoionization microscope and the quantum entanglement microscope.
Scanning tunneling
The scanning tunneling microscope (STM) uses the concept of quantum tunneling to directly image atoms. A STM can be used to study the three-dimensional structure of a sample, by scanning the surface with a sharp, metal, conductive tip close to the sample. Such an environment is conducive to quantum tunneling: a quantum mechanical effect that occurs when electrons move through a barrier due to their wave-like properties. Tunneling depends on the thickness of the barrier; the Schrödinger equation gives the probability that a particle will be detected on the far side and, for a sufficiently thin barrier, predicts some electrons will cross it. This creates a current across the tunnel. The number of electrons that tunnel is dependent on the thickness of the barrier, therefore the current through the barrier also depends on this thickness. The distance between the tip and the sample affects the current measured by the tip. The tip is formed by a single atom that slowly moves across the surface at a distance of one atomic diameter. By observing the current, the distance can be kept fairly constant, allowing the tip to move up and down according to the structure of the sample.
The STM works best with conducting materials in order to create a current. However, since its creation, various implementations allow for a larger variety of samples, such as spin polarized scanning tunneling microscopy (SPSTM), and atomic force microscopy (AFM).
Photoionization
The wave function is central to quantum mechanics. It contains the maximum information that can be known about a single particle's quantum state. The square of the wave function is the probability of a particle's location at any given moment. Direct imaging of a wave function used to be considered only a gedanken experiment, but became routine. An image of an atom's exact position or the movement of its electrons is almost impossible to measure because any direct observation of an atom disturbs its quantum coherence. As such, observing an atom's wave function and getting an image of its full quantum state requires many measurements to be made, which are then statistically averaged. The photoionization microscope directly visualizes atomic structure and quantum states.
A photoionization microscope employs photoionization, along with quantum properties and principles, to measure atomic properties. The principle is to study the spatial distribution of electrons ejected from an atom in a situation in which the De Broglie wavelength becomes large enough to be observed on a macroscopic scale. An atom in an electric field is ionized by a focused laser. The electron is drawn toward a position-sensitive detector, and the current is measured as a function of position. The application of an electric field during photoionization allows confining the electron flux along one dimension.
Multiple classical paths lead from the atom to any point in the classically allowed region on the detector, and waves travelling along these paths produce an interference pattern. An infinite set of trajectory families lead to a complicated interference pattern on the detector. As such, photoionization microscopy relies on the existence of interference between various trajectories by which the electron moves from the atom to the plane of observation, for example, of a hydrogen atom in parallel electric and magnetic fields.
History and development
The idea stemmed from an experiment proposed by Demkov and colleagues in the early 1980s. The researchers suggested that electron waves could be imaged when interacting with a static electric field as long as the de Broglie Wavelength of these electrons was large enough. It was not until 1996 that anything resembling these ideas bore fruit. In 1996 a team of French researchers developed the first photodetachment microscope. It allowed for direct observation of the oscillatory structure of a wave function. Photodetachment is the removal of electrons from an atom using interactions with photons or other particles. Photodetachment microscopy made it possible to image the spatial distribution of the ejected electron. The microscope developed in 1996 was the first to image photodetachment rings of a negative Bromine ion. These images revealed interference between two electron waves on their way to the detector.
The first attempts to use photoionization microscopy were performed on atoms of Xenon by a team of Dutch researchers in 2001. The differences between direct and indirect ionization create different trajectories for the outbound electron. Direct ionization corresponds to electrons ejected down-field towards the bottleneck in the Coulomb + dc electric field potential, whereas indirect ionization corresponds to electrons ejected away from the bottleneck in the Coulomb + dc electric field and only ionize upon further Coulomb interactions. These trajectories produce a distinct pattern that can be detected by a two-dimensional flux detector and subsequently imaged. The images exhibit an outer ring that correspond to the indirect ionization process and an inner ring, which correspond to the direct ionization process. This oscillatory pattern can be interpreted as interference among the trajectories of the electrons moving from the atom to the detector.
The next group to attempt photoionization microscopy used the excitation of Lithium atoms in the presence of a static electric field. This experiment was the first to reveal evidence of quasibound states. A quasibound state is a "state having a connectedness to true bound state through the variation of some physical parameter". This was done by photoionizing the Lithium atoms in the presence of a ≈1 kV/cm static electric field. This experiment was an important precursor to the imaging of the hydrogen wave function because, contrary to the experiments done with Xenon, Lithium wave function microscopy images are sensitive to the presence of resonances. Therefore, the quasibound states were directly revealed.
In 2013, Aneta Stodolna and colleagues imaged the hydrogen atom's wave function by measuring an interference pattern on a 2D detector. The electrons are excited to their Rydberg state. In this state, the electron orbital is far from the central nucleus. The Rydberg electron is in a dc field, which causes it to be above the classical ionization threshold, but below the field-free ionization energy. The electron wave ends up producing an interference pattern because the portion of the wave directed towards the 2D detector interferes with the portion directed away from the detector. This interference pattern shows a number of nodes that is consistent with the nodal structure of the hydrogen atom orbital.
Future directions
The same team of researchers that imaged the hydrogen electron's wave function are attempting to image helium. They report considerable differences, since helium has two electrons, which may enable them to 'see' entanglement.
Quantum entanglement
Quantum metrology makes precise measurements that cannot be achieved classically. Typically, entanglement of N particles is used to measure a phase with precision ∆φ = 1/N, called the Heisenberg limit. This exceeds the ∆φ = 1/√N precision limit possible with N non-entangled particles, called the standard quantum limit (SQL). The signal-to-noise ratio (SNR) for a given light intensity is limited by the SQL, which is critical for measurements where the probe light intensity is limited in order to avoid damaging the sample. The SQL can be tackled using entangled particles.
The microscope first imaged a relief pattern of a glass plate. In one test, the pattern was 17 nanometers higher than the plate.
Quantum entanglement microscopes are a form of confocal-type differential interference contrast microscope. Entangled photon pairs and more generally, NOON states are the illumination source. Two beams of photons are beamed at adjacent spots on a flat sample. The interference pattern of the beams are measured after they are reflected. When the two beams hit the flat surface, they both travel the same length and produce a corresponding interference pattern. This interference pattern changes when the beams hit regions of different heights. The patterns can be resolved by analysing the interference pattern and phase difference. A standard optical microscope would be unlikely to detect something so small. The image is precise when measured with entangled photons, as each entangled photon gives information about the other. Therefore, they provide more information than independent photons, creating sharper images.
Future directions
Entanglement-enhancement principles can be used to improve the image. Researchers are thereby able to overcome the Rayleigh criterion. This is ideal for studying biological tissues and opaque materials. However, the light intensity must be lowered to avoid damaging the sample.
Entangled-photon microscopy can avoid the phototoxicity and photobleaching that come with two-photon scanning fluorescence microscopy. In addition, since the interaction region in entangled microscopy is controlled by two beams, image site selection is flexible, which provides enhanced axial and lateral resolution.
In addition to biological tissues, high-precision optical phase measurements have applications such as gravitational wave detection, measurement of materials properties, as well as medical and biological sensing.
Biological quantum light microscopes
Researchers have developed quantum light microscopes based on squeezed states of light. Squeezed states of light have noise characteristics that are reduced beneath the shot noise level in one quadrature (such as amplitude or phase) at the expense of increased noise in the orthogonal quadrature. This reduced noise can be used to improve signal-to-noise ratio. Squeezed states have been shown to allow a signal-to-noise ratio improvement of as much as a factor of thirty.
The first biological quantum light microscope used squeezed light in an optical tweezer to probe the interior of a living yeast cell. In experiments it was shown that squeezed light allowed more precise tracking of lipid granules that naturally occur within the cell, and that this provided a more accurate measurement of the local viscosity of the cell. Viscosity is an important property of cells that is connected to their health, structural properties and local function. Later, the same microscope was employed as a photonic force microscope, tracking a granule as it diffused spatially. This allowed quantum enhanced resolution to be demonstrated, and for this to be achieved in a far-sub-diffraction limited microscope.
Squeezed light has also been used to improve nonlinear microscopy. Nonlinear microscopes use intense laser illumination, close to the levels at which biological damage can occur. This damage is a key barrier to improving their performance, preventing the intensity from being increased and therefore putting a hard limit on SNR. By using squeezed light in such a microscope, researchers have shown that this limit can be broken - that SNR beyond that achievable beneath photo-damage limits of regular microscopy can be achieved.
Quantum enhanced fluorescence super-resolution
In a fluorescence microscope, images of objects that contain fluorescent particles are recorded. Each such particle can emit not more than one photon at a time, a quantum-mechanical effect known as photon antibunching. Recording anti-bunching in a fluorescence image provides additional information that can be used to enhance the microscope's resolution beyond the diffraction limit, and was demonstrated for several types of fluorescent particles.
Intuitively, antibunching can be thought of as detection of ‘missing’ events of two photons emitted from every particle that cannot simultaneously emit two photons. It is therefore used to produce an image that would have been produced using photons with half the wavelength of the detected photons. By detecting N-photon events, the resolution can be improved by up to a factor of N over the diffraction limit.
In conventional fluorescence microscopes, antibunching information is ignored, as simultaneous detection of multiple photon emission requires temporal resolution higher than that of most commonly available cameras. However, improved detector technology enabled demonstrations of quantum enhanced super-resolution using fast detector arrays, such as single-photon avalanche diode arrays.
Quantum enhanced Raman microscopy
Quantum correlations offer an SNR beyond the photo-damage limit (the amount of energy that can be delivered without damage to the sample) of conventional microscopy. A coherent Raman microscope offers sub-wavelength resolution and incorporates bright quantum correlated illumination. Molecular bonds within a cell can be imaged with a 35 per cent improved SNR compared with conventional microscopy, corresponding to a 14% concentration sensitivity improvement.
References
External links
Quantum mechanics
Microscopes | Quantum microscopy | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 2,532 | [
"Theoretical physics",
"Quantum mechanics",
"Measuring instruments",
"Microscopes",
"Microscopy"
] |
46,373,107 | https://en.wikipedia.org/wiki/Adsorption/bio-oxidation%20process | The adsorption/bio-oxidation process (AB process) is a two-stage modification of the activated sludge process used for wastewater treatment. It consists of a high-loaded A-stage and low-loaded B-stage. The process is operated without a primary clarifier, with the A-stage being an open dynamic biological system. Both stages have separate settling tanks and sludge recycling lines, thus maintaining unique microbial communities in both reactors.
History
Adsorption/bio-oxidation process was invented in the mid-1970s by Botho Böhnke, a professor of the RWTH Aachen University. It was based on the finding made by the German engineer Karl Imhoff in the 1950s. Imhoff stated that the treatment efficiency of 60–80 percent could be achieved in highly loaded activated sludge basins.
In 1977 Böhnke published his first article on adsorption/bio-oxidation process. The same year the patent was issued. Extensive research of the following years, conducted by Böhnke together with Bernd and Andreas Diering, ended up in 1985 with the establishment of the company Dr.-Ing. Bernd Diering GmbH. The same year, the AB-process was for the first time applied in a full-scale at the Krefeld, Germany sewage treatment plant (800 000 P.E.). In 1990, 19 full scale installations existed in Western Germany alone. Further application of the process in Europe was hindered by the tightening of the effluent discharge requirements with respect to nitrogen and phosphorus. The process came into notice in 2000 again due to the increased interest in energy recovery from wastewater.
Principle of operation
The A-stage, or adsorption stage is the most innovative component of the process. It is not preceded by primary treatment. Influent organic matter is removed in the A-stage mainly by flocculation and sorption to sludge due to the high loading rates (2–10 g BOD • g VSS−1 • d−1) and low sludge age (typically 4–10 h). Hydrolysis of complex organic molecules occurs improving biodegradability of the influent of the B-stage. High loading rates and low sludge age favours development of dynamic biocoenosis with a large fraction of microorganisms present in the exponential growth phase. Diverse sludge biocoenosis increase variety of organic compounds that can be degraded in the A-stage and makes the process more stable towards the shock loads. Altogether, up to 80% of the influent organic matter can be removed in the A-stage. The required reactor volume and oxygen supply are lower if compared to the removal in the conventional activated sludge process.
The B-stage, or bio-oxidation stage, is a typical low-loaded activated sludge process, where biodegradation of the remaining organic material occurs. The B-stage can be designed for nitrogen and/or phosphorus removal by alternating aerobic, anoxic and anaerobic zones in the reactor.
Typical operational conditions of the adsorption/bio-oxidation process
Advantages of the process
Lower aeration requirements decrease energy consumption and aeration costs for 20 percent if compared with conventional single stage activated sludge plant.
The volumes of aeration tanks are 40% lower if compared with conventional single stage activated sludge plant.
Increased sludge production in the A-stage results in increased biogas production in the digester (for plants with anaerobic digestion of excess sludge).
Stability towards the shock loads (pH, chemical oxygen demand (COD), toxic substances) explained by the wide-ranging biochemical potential, high mutation capacity and adaptability of sludge in the A-stage.
A-stage can receive higher organic loads than conventional activated sludge systems.
Effluent concentrations are more stable because of the two-stage process configuration employed.
Heavy metals are mainly removed with the A-stage sludge. Therefore, B-stage sludge has lower concentrations of heavy metals than sludge from conventional activated sludge process and may comply with the agricultural standards.
Drawbacks of the process
Incomplete denitrification is often observed in the B-stage if the influent C/N ratio is low. Direct by-pass of a part of A-stage influent with high organic matter content to the B-stage is used to increase the C/N ratio.
High sludge production in the A-stage is a drawback for WWTPs that are not equipped with anaerobic digestion of sludge because it increases sludge treatment costs.
Sludge from A-stage has poor settling properties.
High retention times cause an increased need for additional reactors to maintain throughput, increasing equipment costs.
Nutrient removal
Nitrogen removal in the A-stage can reach 30–40%, as nitrogen of organic compounds is incorporated in upflow anaerobic sludge blanket (UASB) reactor sludge.
The sludge age of the B-stage is typically between 8 and 20 days, promoting the growth of nitrifiers. Therefore, complete nitrification is usually achieved in the B-stage. Complete denitrification is difficult to achieve because of the low C:N ratio in the influent of the B-stage. Insufficient supply of carbon source to the B-stage occurs due to the high efficiency of organic matter removal in the A-stage. The problem can be solved by decreasing organic matter removal in the A-stage, external carbon source supply, intermittent aeration or decreased HRT of the A-stage, and/or on-line control of certain operational parameters. To achieve biological nitrogen and phosphorus removal, anaerobic and anoxic compartments are introduced before the aerated zone of the B-stage.
Phosphorus removal from the secondary effluent of the B-stage can be achieved by coagulation with ferric and aluminium salts, e.g. FeCl3 or Al2(SO4)3.
Applications for municipal wastewater treatment
The adsorption/bio-oxidation process was applied at the Krefeld plant (800 000 P.E.) in 1985 for the first time. The plant was expanded and modified and currently treats municipal and industrial wastewater of 1 200 000 P.E.
Currently adsorption/bio-oxidation process is applied at the municipal treatment plants in Germany, the Netherlands (WWTP Dokhaven (Rotterdam), WWTP Utrecht, WWTP Garmerwolde (Groningen) etc.), Austria (WWTP Salzburg, WWTP Strass etc.), Spain, US, China etc.
Adsorption/bio-oxidation process is a part of innovative wastewater treatment concept WaterSchoon, realized in the Netherlands. 250 apartments in the new district Noorderhoek (Sneek, the Netherlands) are equipped with separate collection systems for toilet wastewater and the rest of the household wastewater (or so-called greywater). Both streams are treated separately in order to maximize recovery of resources from wastewater. Adsorption/bio-oxidation process is used for grey water treatment to increase sludge production. Sludge, produced in both stages of the process, is digested together with toilet wastewater in the UASB reactor to maximize energy recovery.
Applications for industrial wastewater treatment
The adsorption/bio-oxidation process is used for treatment of industrial wastewater with high COD, including wastewater from:
Pulp and paper industry
Textile industry
Food industry, including dairy industry
Pharmaceutical industry
Leather tanning industry
The C/N and C/P ratios of industrial wastewater is often too high for complete aerobic biodegradation of the influent organic matter, even after the adsorption stage. Addition of nutrients prior to bio-oxidation stage is required in these cases.
See also
Biosorption
List of waste-water treatment technologies
References
Environmental engineering
Pollution control technologies
Treatment
Water pollution
Sanitation | Adsorption/bio-oxidation process | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,619 | [
"Chemical engineering",
"Pollution control technologies",
"Water pollution",
"Sewerage",
"Civil engineering",
"Environmental engineering"
] |
33,486,754 | https://en.wikipedia.org/wiki/Silicon%20boride | Silicon borides (also known as boron silicides) are lightweight ceramic compounds formed between silicon and boron. Several stoichiometric silicon boride compounds, SiBn, have been reported: silicon triboride, SiB3, silicon tetraboride, SiB4, silicon hexaboride, SiB6, as well as SiBn (n = 14, 15, 40, etc.). The n = 3 and n = 6 phases were reported as being co-produced together as a mixture for the first time by Henri Moissan and Alfred Stock in 1900 by briefly heating silicon and boron in a clay vessel. The tetraboride was first reported as being synthesized directly from the elements in 1960 by three independent groups: Carl Cline and Donald Sands; Ervin Colton; and Cyrill Brosset and Bengt Magnusson. It has been proposed that the triboride is a silicon-rich version of the tetraboride. Hence, the stoichiometry of either compound could be expressed as SiB4 - x where x = 0 or 1. All the silicon borides are black, crystalline materials of similar density: 2.52 and 2.47 g cm−3, respectively, for the n = 3(4) and 6 compounds. On the Mohs scale of mineral hardness, SiB4 - x and SiB6 are intermediate between diamond (10) and ruby (9). The silicon borides may be grown from boron-saturated silicon in either the solid or liquid state.
The SiB6 crystal structure contains interconnected icosahedra (polyhedra with 20 faces), icosihexahedra (polyhedra with 26 faces), as well as isolated silicon and boron atoms. Due to the size mismatch between the silicon and boron atoms, silicon can be substituted for boron in the B12 icosahedra up to a limiting stoichiometry corresponding to SiB2.89. The structure of the tetraboride SiB4 is isomorphous to that of boron carbide (B4C), B6P, and B6O. It is metastable with respect to the hexaboride. Nevertheless, it can be prepared due to the relative ease of crystal nucleation and growth.
Both SiB4 - x and SiB6 become superficially oxidized when heated in air or oxygen and each is attacked by boiling sulfuric acid and by fluorine, chlorine, and bromine at high temperatures. The silicon borides are electrically conducting. The hexaboride has a low coefficient of thermal expansion and a high nuclear cross section for thermal neutrons.
The tetraboride was used in the black coating of some of the space shuttle heat shield tiles.
References
Inorganic silicon compounds
Borides
Ceramic materials | Silicon boride | [
"Chemistry",
"Engineering"
] | 601 | [
"Inorganic silicon compounds",
"Ceramic engineering",
"Ceramic materials",
"Inorganic compounds"
] |
33,488,024 | https://en.wikipedia.org/wiki/GPS%20disciplined%20oscillator | A GPS clock, or GPS disciplined oscillator (GPSDO), is a combination of a GPS receiver and a high-quality, stable oscillator such as a quartz or rubidium oscillator whose output is controlled to agree with the signals broadcast by GPS or other GNSS satellites. GPSDOs work well as a source of timing because the satellite time signals must be accurate in order to provide positional accuracy for GPS in navigation. These signals are accurate to nanoseconds and provide a good reference for timing applications.
Applications
GPSDOs serve as an indispensable source of timing in a range of applications, and some technology applications would not be practical without them. GPSDOs are used as the basis for Coordinated Universal Time (UTC) around the world. UTC is the official accepted standard for time and frequency. UTC is controlled by the International Bureau of Weights and Measures (BIPM). Timing centers around the world use GPS to align their own time scales to UTC. GPS based standards are used to provide synchronization to wireless base stations and serve well in standards laboratories as an alternative to cesium-based references.
GPSDOs can be used to provide synchronization of multiple RF receivers, allowing for RF phase coherent operation among the receivers and applications, such as passive radar and ionosondes.
Operation
A GPSDO works by disciplining, or steering a high quality quartz or rubidium oscillator by locking the output to a GPS signal via a tracking loop. The disciplining mechanism works in a similar way to a phase-locked loop (PLL), but in most GPSDOs the loop filter is replaced with a microcontroller that uses software to compensate for not only the phase and frequency changes of the local oscillator, but also for the "learned" effects of aging, temperature, and other environmental parameters.
One of the keys to the usefulness of a GPSDO as a timing reference is the way it is able to combine the stability characteristics of the GPS signal and the oscillator controlled by the tracking loop. GPS receivers have excellent long-term stability (as characterized by their Allan deviation) at averaging times greater than several hours. However, their short-term stability is degraded by limitations of the internal resolution of the one pulse per second (1PPS) reference timing circuits, signal propagation effects such as multipath interference, atmospheric conditions, and other impairments. On the other hand, a quality oven-controlled oscillator has better short-term stability but is susceptible to thermal, aging, and other long-term effects. A GPSDO aims to utilize the best of both sources, combining the short-term stability performance of the oscillator with the long-term stability of the GPS signals to give a reference source with excellent overall stability characteristics.
GPSDOs typically phase-align the internal flywheel oscillator to the GPS signal by using dividers to generate a 1PPS signal from the reference oscillator, then phase comparing this 1PPS signal to the GPS-generated 1PPS signal and using the phase differences to control the local oscillator frequency in small adjustments via the tracking loop. This differentiates GPSDOs from their cousins NCOs (numerically controlled oscillator). Rather than disciplining an oscillator via frequency adjustments, NCOs typically use a free-running, low-cost crystal oscillator and adjust the output phase by digitally lengthening or shortening the output phase many times per second in large phase steps assuring that on average the number of phase transitions per second is aligned to the GPS receiver reference source. This guarantees frequency accuracy at the expense of high phase noise and jitter, a degradation that true GPSDOs do not suffer.
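As a rough sketch of such a tracking loop, the microcontroller can be modelled as a proportional-integral controller that turns each one-second phase comparison into a small fractional-frequency correction for the oscillator's electronic frequency control. This is an illustration only, with made-up gains, names and update interval, not any manufacturer's firmware.

```python
class GpsdoLoop:
    """Toy proportional-integral disciplining loop: steers a local oscillator so
    that its divided 1PPS stays phase-aligned with the GPS 1PPS (sketch only)."""

    def __init__(self, kp=1e-3, ki=1e-5):
        self.kp = kp            # proportional gain (illustrative value)
        self.ki = ki            # integral gain, absorbs frequency offset and aging
        self.integral = 0.0     # accumulated phase error in seconds

    def update(self, phase_error_s):
        """phase_error_s: measured local-minus-GPS 1PPS offset, in seconds.
        Returns a fractional frequency correction to apply to the oscillator's
        electronic frequency control (e.g. via a DAC)."""
        self.integral += phase_error_s
        return -(self.kp * phase_error_s + self.ki * self.integral)

# Example: called once per second with the latest phase comparison
loop = GpsdoLoop()
correction = loop.update(phase_error_s=25e-9)   # a 25 ns offset this second
```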
When the GPS signal becomes unavailable, the GPSDO goes into a state of holdover, where it tries to maintain accurate timing using only the internal oscillator.
Sophisticated algorithms are used to compensate for the aging and temperature stability of the oscillator while the GPSDO is in holdover.
The use of Selective Availability (SA) prior to May 2000 restricted the accuracy of GPS signals available for civilian use and in turn presented challenges to the accuracy of GPSDO derived timing. The turning off of SA resulted in a significant increase in the accuracy that GPSDOs can offer.
GPSDOs are capable of generating frequency accuracies and stabilities on the order of parts per billion for even entry-level, low-cost units, to parts per trillion for more advanced units within minutes after power-on, and are thus one of the highest-accuracy physically-derived reference standards available.
Form factor
GPSDOs could be:
Fully encapsulated, portable and standalone.
Board mounted.
Modular, connecting via external interface such as PCIe.
The main difference is in the size and the power source. While a standalone GPSDO may require an external power supply, board-mounted and modular GPSDOs can draw power from the motherboard.
References
Synchronization | GPS disciplined oscillator | [
"Engineering"
] | 1,024 | [
"Telecommunications engineering",
"Synchronization"
] |
28,132,162 | https://en.wikipedia.org/wiki/Dicing%20tape | Dicing tape is a backing tape used during wafer dicing or some other microelectronic substrate separation, the cutting apart of pieces of semiconductor or other material following wafer or module microfabrication. The tape holds the pieces of the substrate, in case of a wafer called as die, together during the cutting process, mounting them to a thin metal frame. The dies/substrate pieces are removed from the dicing tape later on in the electronics manufacturing process.
Tape types
Dicing tape can be made of PVC, polyolefin, or polyethylene backing material with an adhesive to hold the wafer or substrate in place. In some cases dicing tape will have a release liner that will be removed prior to mounting the tape to the backside of the wafers, with a variety of adhesive strengths, designed for various wafer/substrate sizes and materials.
UV tapes are dicing tapes in which the adhesive bond is broken by exposure to UV light after dicing, allowing the adhesive to be stronger during cutting while still allowing clean and easy removal. UV curing equipment can range from low power (a few mW/cm²) to high power (more than 200 mW/cm²). Higher power results in a more complete cure, lower adhesion and reduced adhesive residue, while lower power is safer.
Thermal release tapes (typically PET material) have been developed for specific cases where etching or material printing is needed after the tape is installed. These tapes can also handle heavy substrates such as ceramic substrates or printed circuit boards (PCBs/PWBs) if needed. Their adhesion disappears when heat is applied.
References
External links
How to mount a dicing tape
Semiconductor device fabrication | Dicing tape | [
"Materials_science"
] | 359 | [
"Semiconductor device fabrication",
"Microtechnology"
] |