| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
66,062,517 | https://en.wikipedia.org/wiki/Borate%20nitrate | Borate nitrates are mixed anion compounds containing separate borate and nitrate anions.
List
References
Nitrates
Borates
Mixed anion compounds | Borate nitrate | [
"Physics",
"Chemistry"
] | 30 | [
"Matter",
"Mixed anion compounds",
"Nitrates",
"Salts",
"Oxidizing agents",
"Ions"
] |
51,902,988 | https://en.wikipedia.org/wiki/Coverage%20%28genetics%29 | In genetics, coverage is one of several measures of the depth or completeness of DNA sequencing, and is more specifically expressed in any of the following terms:
Sequence coverage (or depth) is the number of unique reads that include a given nucleotide in the reconstructed sequence. Deep sequencing refers to the general concept of aiming for a high number of unique reads of each region of a sequence.
Physical coverage, the cumulative length of reads or read pairs expressed as a multiple of genome size.
Genomic coverage, the percentage of all base pairs or loci of the genome covered by sequencing.
Sequence coverage
Rationale
Even though the sequencing accuracy for each individual nucleotide is very high, the very large number of nucleotides in the genome means that if an individual genome is only sequenced once, there will be a significant number of sequencing errors. Furthermore, many positions in a genome contain rare single-nucleotide polymorphisms (SNPs). Hence to distinguish between sequencing errors and true SNPs, it is necessary to increase the sequencing accuracy even further by sequencing individual genomes a large number of times.
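To make the rationale concrete, here is a minimal sketch in Python, assuming a purely illustrative per-base error rate of 0.1% and independent errors; the figures are hypothetical and not taken from the article:

```python
from math import comb

def majority_error_prob(depth, per_base_error=0.001):
    """Probability that more than half of the reads covering a position
    carry an error, assuming independent errors (a simplification)."""
    k_min = depth // 2 + 1
    return sum(
        comb(depth, k) * per_base_error**k * (1 - per_base_error)**(depth - k)
        for k in range(k_min, depth + 1)
    )

genome_size = 3_000_000_000  # illustrative human-genome scale
print(genome_size * 0.001)   # ~3 million erroneous calls if every base is read once

for depth in (1, 5, 10, 30):
    print(depth, majority_error_prob(depth))
```

Even modest depth makes it very unlikely that a majority of reads miscall the same position, which is why repeated sequencing separates random errors from true SNPs.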
Ultra-deep sequencing
The term "ultra-deep" can sometimes also refer to higher coverage (>100-fold), which allows for detection of sequence variants in mixed populations. In the extreme, error-corrected sequencing approaches such as Maximum-Depth Sequencing can make it so that coverage of a given region approaches the throughput of a sequencing machine, allowing coverages of >10^8.
Transcriptome sequencing
Deep sequencing of transcriptomes, also known as RNA-Seq, provides both the sequence and frequency of RNA molecules that are present at any particular time in a specific cell type, tissue or organ. Counting the number of mRNAs that are encoded by individual genes provides an indicator of protein-coding potential, a major contributor to phenotype. Improving methods for RNA sequencing is an active area of research both in terms of experimental and computational methods.
Calculation
The average coverage for a whole genome can be calculated from the length of the original genome (G), the number of reads (N), and the average read length (L) as C = N × L / G. For example, a hypothetical genome with 2,000 base pairs reconstructed from 8 reads with an average length of 500 nucleotides will have 2× redundancy. This parameter also enables one to estimate other quantities, such as the percentage of the genome covered by reads (sometimes also called breadth of coverage). A high coverage in shotgun sequencing is desired because it can overcome errors in base calling and assembly. The subject of DNA sequencing theory addresses the relationships of such quantities.
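A small sketch of the calculation above in Python; the breadth estimate uses the idealised Lander–Waterman model (1 − e^(−C)), which is a standard approximation rather than something stated in the text:

```python
from math import exp

def average_coverage(genome_length, num_reads, avg_read_length):
    """Average sequence coverage C = N * L / G."""
    return num_reads * avg_read_length / genome_length

def expected_breadth(coverage):
    """Idealised Lander-Waterman estimate of the fraction of the genome
    covered by at least one read: 1 - exp(-C)."""
    return 1 - exp(-coverage)

# The hypothetical example from the text: a 2,000 bp genome and 8 reads of ~500 nt.
c = average_coverage(2_000, 8, 500)
print(c)                    # 2.0, i.e. 2x redundancy
print(expected_breadth(c))  # ~0.86 of the genome expected to be covered at least once
```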
Physical coverage
Sometimes a distinction is made between sequence coverage and physical coverage. Where sequence coverage is the average number of times a base is read, physical coverage is the average number of times a base is read or spanned by mate paired reads.
Genomic coverage
In terms of genomic coverage and accuracy, whole genome sequencing can broadly be classified into either of the following:
A draft sequence, covering approximately 90% of the genome at approximately 99.9% accuracy
A finished sequence, covering more than 95% of the genome at approximately 99.99% accuracy
Producing a truly high-quality finished sequence by this definition is very expensive. Thus, most human "whole genome sequencing" results are draft sequences (sometimes above and sometimes below the accuracy defined above).
References
Molecular biology
DNA sequencing | Coverage (genetics) | [
"Chemistry",
"Biology"
] | 688 | [
"Biochemistry",
"Molecular biology techniques",
"DNA sequencing",
"Molecular biology"
] |
64,550,872 | https://en.wikipedia.org/wiki/Katie%20Doores | Katherine Jane Doores is a British biochemist who is a senior lecturer in the School of Immunology & Microbial Sciences at King's College London. During the COVID-19 pandemic Doores studied the levels of antibodies in patients who had suffered from COVID-19.
Early life and education
Doores was born in the United Kingdom. In 2003, Doores received an MChem in chemistry from the University of Oxford. In 2008, she received a PhD in organic chemistry from the University of Oxford, where Ben G. Davis was her advisor. In 2013, she completed postdoctoral work in the Department of Immunology and Microbial Sciences at the Scripps Research Institute in La Jolla, California.
Research and career
When Doores was a graduate student at Oxford, she studied glycoimmunology in the laboratory of Professor Davis. Glycoimmunology is an emerging research field which looks at how immune response is moderated by carbohydrates (glycans). As part of her postdoctoral work at the Scripps Research Institute, she worked on the glycobiology of HIV and broadly neutralizing antibodies. At Scripps, Doores worked alongside Dennis Burton, where she studied the "flower-like" envelope protein on HIV. This envelope protein penetrates host cells and creates antibody-resistant glycans. By investigating this envelope protein, Doores looked to identify sites which are involved with viral function. By neutralising sites such as these (the high-mannose patch), Doores hoped to protect against HIV infection.
From 2013 to 2017, Doores was a lecturer in the Department of Infectious Diseases at King's College London. Doores was awarded a Medical Research Council fellowship to establish her own laboratory at King's College. She was made a European Molecular Biology Organization (EMBO) Young Investigator in 2017. In 2017, Doores became a senior lecturer in the Department of Infectious Diseases at King's College.
Many disease-causing pathogens are coated in carbohydrates. Doores investigates the behaviour of these carbohydrates in host–pathogen interactions. She hopes that by understanding the role of these carbohydrates it will be possible to develop novel therapeutic strategies and vaccinations. Alongside developing new medical therapies, Doores is interested in how the body responds to carbohydrate antigens in the form of antibody recognition. Her work has primarily focussed on the carbohydrate antigens on HIV-1. The envelope glycoprotein GP120 of HIV-1 is covered in N-linked glycans. These glycans are the target of bNAbs (broadly neutralizing HIV-1 antibodies), and Doores is studying how these antibodies evolve in vivo. This understanding should allow the development of new vaccines that encourage the generation of antibodies that can protect against pathogens.
During the COVID-19 pandemic Doores studied the levels of antibodies in patients who had suffered from COVID-19 in Guy's and St Thomas' NHS Foundation Trust. Her research showed that while 60% of COVID-19 patients elicited a strong antibody response, only 17% of them retained this potency three months later. In some cases, patients entirely lost their antibody response. These results implied that immunity to COVID-19 might be short lived, and that people may become reinfected during a second wave of infection.
Selected publications
References
External links
Katie Doores at King's College London
The Doores Lab
Living people
Year of birth missing (living people)
British women biochemists
Alumni of the University of Oxford
HIV vaccine research
Academics of King's College London
Scripps Research faculty
COVID-19 researchers | Katie Doores | [
"Chemistry"
] | 775 | [
"HIV vaccine research",
"Drug discovery"
] |
64,555,681 | https://en.wikipedia.org/wiki/Medicines%20Discovery%20Catapult | The Medicines Discovery Catapult (MDC) is the United Kingdom's catapult centre for medicine research and innovation, headquartered at Alderley Park in Cheshire.
History
The intention to form the company was announced by the Chancellor on 13 July 2015, with funding of £5m, during a visit to Cheshire. It would be part of the Northern Powerhouse initiative.
The Medicines Technologies Catapult was established in December 2015, funded by a £10m grant from Innovate UK and based at the Alderley Park science park in Cheshire. On 1 March 2016 its name changed to the Medicines Discovery Catapult. Further funding of approximately £10m per year was secured from Innovate UK for the years 2018 to 2023.
Precision Medicine Catapult
The PMC was based in Cambridge and had regional centres of excellence at Belfast, Glasgow, Cardiff, Oxford, Leeds and Manchester. It worked with precision medicine. It started from April 2015, and worked with regional parts of the Diagnostic Evidence Cooperative and Academic Health Science Networks (AHSN).
On 26 June 2017 it was announced that the PMC would close, with most of its functions transferred to the MDC. The Leeds site is now the Leeds Centre for Personalised Medicine and Health.
Activities
A not-for-profit company, the MDC works with a range of UK innovators to advance projects and products towards clinical impact. In 2019, the company stated that it worked in four sectors:
Predictive biological models of human disease, for new drug testing
Predictive computational techniques for drug discovery
Collaboration between health service providers and government bodies
Collaboration on drug discovery between research charities and industry.
In the same year, the number of staff increased from 40 to 75, and the company reported that its income comprised £8.5m from Innovate UK and £152,000 from collaborative research and development. After charging £7.1m to administrative expenses, the company reported a loss for the year of £16,000.
In 2020, the company was given the task of setting up one of the first PCR analysis centres for COVID-19 tests – known as Lighthouse labs – elsewhere at the Alderley Park site. By 2021, this centre employed over 700 staff and had a stated capacity of 80,000 test samples per day.
Key people
Dr Robin Brown has been the company's chairman since July 2018; he has a PhD in molecular biology and has worked in venture capital at Advent Healthcare. The company has no shareholders.
Previously, Professor Graham Boulnois was chairman from January 2016; he was head of research from 1992 to 2000 at Zeneca Pharmaceuticals in Cheshire, and Professor of Microbiology from 1984 to 1992 at the University of Leicester.
See also
Innovative Medicines Initiative, OpenPHACTS and European Lead Factory
References
External links
2015 establishments in the United Kingdom
Borough of Cheshire East
British medical research
Catapult centres
Government agencies established in 2015
Health in Cheshire
Medical and health organisations based in the United Kingdom
Medical research organizations
Medicinal chemistry
Organisations based in Cheshire
Pharmaceutical industry in the United Kingdom
Pharmacognosy
Pharmacy organisations in the United Kingdom
Science and technology in Cheshire | Medicines Discovery Catapult | [
"Chemistry",
"Biology"
] | 631 | [
"Pharmacology",
"Pharmacognosy",
"Medicinal chemistry stubs",
"Biochemistry stubs",
"nan",
"Medicinal chemistry",
"Biochemistry"
] |
64,556,269 | https://en.wikipedia.org/wiki/A.%20Charles%20Catania | Anthony Charles Catania (born June 22, 1936) is an American researcher in behavior analysis known for his theoretical, experimental, and applied work. He is an Emeritus professor of psychology at the University of Maryland, Baltimore County (UMBC), where he taught and conducted research for 35 years prior to his retirement in 2008. He received a B.A. (1957) and M.A. (1958) at Columbia University in Psychology. He received his Ph.D. in Psychology at Harvard University in 1961. He remained at Harvard to conduct research as a postdoctoral researcher in B. F. Skinner's laboratory. Prior to his career at UMBC, he held a faculty position for nearly a decade at New York University (NYU).
He studies the behavior of both human and nonhuman animals. He has written over 200 journal articles and book chapters, has edited or co-edited six books, and has written two textbooks on learning. Topics on which he has published include schedules of reinforcement, human verbal behavior, and the history of behavior analysis.
Related to his professional interests in learning and verbal behavior, since 2022 Catania has participated in a published series of one-on-one conversations with linguist and political activist Noam Chomsky. Topics discussed include the history of cognitive science and scientific and philosophical disputes concerning verbal behavior.
At UMBC, Catania founded the graduate-level (MA) program in Applied Behavior Analysis.
Catania was the chief editor at the Journal of the Experimental Analysis of Behavior (1966–69) and served as an associate editor at several journals, including Behavioral and Brain Sciences, Behaviorism, and the European Journal of Behavior Analysis. He served as President of the Maryland Association for Behavior Analysis. He twice served as President of the Society for the Experimental Analysis of Behavior (SEAB; 1966–67 and 1981–83) and as President of the Association for Behavior Analysis [now Association for Behavior Analysis International (ABAI)] from 1981 to 1984. He is a Fellow of Divisions 3, 6, 25, and 28 of the American Psychological Association (APA) and served as President of Division 25 from 1996 to 1998.
He resides in Columbia, Maryland.
References
1936 births
Living people
Behaviourist psychologists
New York University faculty
Harvard Graduate School of Arts and Sciences alumni
University of Maryland, Baltimore County faculty
Columbia College (New York) alumni
American academic journal editors
Scientists from New York City
20th-century American psychologists
21st-century American psychologists
Fellows of the American Psychological Association | A. Charles Catania | [
"Biology"
] | 506 | [
"Behaviourist psychologists",
"Behavior",
"Behaviorism"
] |
64,556,980 | https://en.wikipedia.org/wiki/Yau%20Usman%20Idris | Yau Usman Idris is a Nigerian nuclear physicist and the current director general (CEO) of the Nigerian Nuclear Regulatory Authority (NNRA). He was appointed by Muhammadu Buhari, President of the Federal Republic of Nigeria.
Early life and education
Idris was born in Kauru Local Government Area of Kaduna State. He obtained his B.Sc in physics at the University of Maiduguri in 1988, an M.Sc. in physics at the University of Ibadan in 1992, and a PhD in nuclear physics at Ahmadu Bello University, Zaria, in 1998.
He also obtained a certificate in BPTC (reactor safety and nuclear technology) from the Korea Institute of Nuclear Safety, South Korea, in 2013, and a certificate in nuclear power plant development from Argonne National Laboratory, Chicago, Illinois, in 2009.
Career
Idris has worked in various national and international roles in the nuclear field and is now the director general of the Nigerian Nuclear Regulatory Authority (NNRA).
He was appointed commissioner for Environment and Natural Resources by the Kaduna State Governor, Nasiru Ahmed El-Rufai, serving from August 2015 to February 2016.
Before that, he was a lecturer and researcher at Ahmadu Bello University, Zaria, from 1999 to April 2004, and earlier a lecturer and researcher in the Department of Physics, University of Maiduguri, from 1989 to 1999.
Internationally, in the area of nuclear safety, he is Vice Chairman of the Forum of Nuclear Regulatory Bodies in Africa (FNRBA), African Regional Co-ordinator of the International Atomic Energy Agency (IAEA), an Advisory Board Member of the African Nuclear Business Platform (AFNBP), and co-ordinator of the FNRBA website.
Award and membership
He has received several awards, including the International Leadership Gold Award for Excellence, the Nigerian Self-service Gold Award (NISSGA), and the Legends' Noble Award.
Membership of professional association
He is Secretary of the Forum of Nuclear Regulatory Bodies in Africa (FNRBA), an Advisory Board Member of the African Nuclear Business Platform (AFNBP), and a member of the FNRBA, the Nigerian Society for Radiation Protection (NSRP), the Nigerian Institute of Physics (MNIP), and the Materials Society of Nigeria (MMSN).
Personal life
He is married with children.
See also
Heat energy
Nuclear energy policy by country
Nuclear reactor
Nuclear weapon
Radiation
References
Year of birth missing (living people)
Living people
Ahmadu Bello University alumni
Nigerian physicists
Nuclear physicists
People from Kaduna State
University of Ibadan alumni
University of Maiduguri alumni | Yau Usman Idris | [
"Physics"
] | 542 | [
"Nuclear physicists",
"Nuclear physics"
] |
64,559,901 | https://en.wikipedia.org/wiki/Micralign | The Perkin-Elmer Micralign was a family of aligners introduced in 1973. Micralign was the first projection aligner, a concept that dramatically improved semiconductor fabrication. According to the Chip History Center, it "literally made the modern IC industry".
The Micralign addressed a significant problem in the early integrated circuit (IC) industry, that the vast majority of ICs printed contained defects that rendered them useless. On average, about 1 in 10 complex ICs produced would be operational, a 10% yield. The Micralign improved this to over 50%, and as great as 70% in many applications. In doing so, the price of microprocessors and dynamic RAM products fell about 10 times between 1974 and 1978, by which time the Micralign had become practically universal in the high-end market.
Although Perkin-Elmer initially predicted it might sell perhaps 50 units, it eventually sold about 2,000, making it by far the largest vendor in the semiconductor fabrication equipment space through the second half of the 1970s and early 1980s. The product line was formed into the Microlithography Division, which by 1980 had the largest income of Perkin-Elmer's divisions and provided the majority of the company's profits.
The company was slow to respond to the challenge of the stepper, which replaced projection aligners in most roles starting in the mid-1980s. Its move to deep ultraviolet as a response failed, as the technology was not mature. Another attempt, buying a European stepper company, did nothing to reverse its fortunes. In 1990, Perkin-Elmer sold the division to the Silicon Valley Group, which is today part of ASML Holding.
Background
Integrated circuits (ICs) are produced in a multi-step process known as photolithography. The process begins with thin disks of highly pure silicon being sawn from a crystalline cylinder known as a boule. After initial processing, these disks are known as wafers. The IC consists of one or more layers of lines and areas patterned onto the surface of the wafer.
The wafers are coated in a chemical known as photoresist. One layer of the ultimate chip design is printed on a "mask", similar to a stencil. The mask is placed over the wafer and an ultraviolet (UV) lamp, typically a mercury arc lamp, is shone on the mask. Depending on the process, areas of the photoresist that are exposed to the light either harden or soften, and then the softer areas are washed away using a solvent. The result is a duplication of the pattern from the mask onto the surface of the wafer. Chemical processing is then used on the pattern to give it the desired electrical qualities.
This entire process is repeated several times to build up the complete IC design. Each step uses a different design on a different mask. The features are measured in micrometres, so any previous design already deposited has to be precisely aligned with the new mask that will be applied. This is the purpose of the aligner, a task that was originally completed manually using a microscope.
There is a strong economic argument to use larger wafers, as more individual ICs can be patterned on the surface and produced in a single series of operations, thereby producing more chips during the same period of time. However, larger wafers give rise to significant optical issues; focussing the light over the area while maintaining very high uniformity was a major challenge. By the early 1970s, wafers had been about 2.5 inches in diameter for some time and were just moving to 3 inches, but existing optical systems were having problems with this size. Every time a new wafer size was introduced, the optical systems had to be redesigned from scratch.
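To illustrate the economic side of this argument, the sketch below uses a common rough approximation for die per wafer (gross wafer area divided by die area, minus an edge-loss correction); the 20 mm² die size is purely hypothetical:

```python
from math import pi, sqrt

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Rough die-per-wafer estimate: gross area over die area, minus an
    edge-loss term for partial dies around the wafer rim."""
    radius = wafer_diameter_mm / 2
    return pi * radius**2 / die_area_mm2 - pi * wafer_diameter_mm / sqrt(2 * die_area_mm2)

# Hypothetical 20 mm^2 die on 2.5-inch (63.5 mm) and 3-inch (76.2 mm) wafers.
print(round(dies_per_wafer(63.5, 20)))  # ~127 candidate dies
print(round(dies_per_wafer(76.2, 20)))  # ~190 candidate dies
```

The larger wafer yields roughly half again as many candidate dies from the same sequence of processing steps, which is the economic pull toward bigger wafers described above.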
Contact aligners
In the 1960s, the most common way to hold the mask during the exposure processes was to use a contact aligner. As the name implies, the purpose of this device was to precisely align the mask between each patterning step, and once aligned, hold the mask directly on the surface of the wafer. The reason for holding the mask on the wafer was that at the scale of the lines being drawn, diffraction of the light around the edges of the lines on the mask would blur the image if there was any distance between the mask and the wafer.
There were significant problems with the contact-mask concept. One of the most annoying was that any dust that reached the aligner's interior might stick to the mask and would be imaged on subsequent wafers as if it were part of the pattern. Equally annoying was that uncured photoresist would stick to the mask, and when the mask was lifted, it would pull off the top surface from the wafer, destroying that wafer and once again adding spurious images on the mask. Any one error might not be an issue because only the ICs in that location will be affected, but eventually, enough errors will be picked up that the mask is no longer useful.
As a result of issues like these, masks generally lasted only a dozen uses before having to be replaced. To supply the required number of masks, copies of the original mask were repeatedly printed using conventional silver halide photography on photographic stock, which was then used in the machine. The poor thermal stability of these masks during exposure to bright light caused distortions, which were not a concern in the early days but became an issue as feature sizes continued to shrink. This forced a move from film to glass masks, further increasing costs.
Because any particular wafer could be damaged at any given masking step, the chance that any one wafer would make it through to production without damage was a function of the number of steps. This limited the complexity of the IC designs in spite of the designers being able to make use of many more layers. Microprocessors, in particular, were complex multi-layer designs that had extremely low yield, with perhaps 1 in 10 of the patterns on a wafer delivering a working chip.
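As a hedged illustration of how per-step damage compounds (the per-step survival figure below is hypothetical, not taken from the article): if each masking step independently spares a given fraction of devices, overall survival falls geometrically with the number of layers.

```python
def end_to_end_yield(per_step_yield, num_mask_steps):
    """Chance a device survives every masking step, assuming independent steps."""
    return per_step_yield ** num_mask_steps

for steps in (4, 7, 10):
    print(steps, round(end_to_end_yield(0.90, steps), 3))
# 4 -> 0.656, 7 -> 0.478, 10 -> 0.349
```

This compounding is why complex multi-layer designs such as microprocessors suffered disproportionately low yields on contact aligners.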
Microprojector
The Micralign traces its history to a 1967 contract with the US Air Force for a higher-resolution aligner. At the time, the Air Force was one of the largest users of ICs, which were used in many of their missile systems, notably the Minuteman missile. The cost, and especially time to market, was a significant problem that the Air Force was interested in improving.
There was a second type of aligner in use, the proximity aligner. As the name implies, these held the mask in close proximity to the wafer rather than in direct contact. This improved the life of the mask and allowed a more complex design, but had the downside that diffraction effects limited its use to relatively large features compared to the contact aligners. More annoying was the fact that the mask had to be aligned in three axes to make it perfectly flat relative to the wafer, which was a very slow process, and the machine had to hold the mask in such a way that it did not sag.
The Air Force had worked with Perkin-Elmer for many years on reconnaissance optics, and the Air Force Materiel Command at Wright-Patterson Air Force Base offered them a contract to see whether they could improve the proximity masking system. The result was the Microprojector. The key to the design was a 16-element lens system that produced an extremely focused light source. The resulting system could produce 2.5 μm features, 100 millionths of an inch, equal to the best contact aligners.
Although the system was effective, meeting the goals set by the Air Force, it was not practical. With a large number of lenses, dispersion was a significant problem, which they addressed by filtering out everything but a single band of UV only 200-angstrom wide (the G-line), throwing away the majority of the light coming from the 1,000 W lamp. This made the exposure times even longer than existing proximity designs.
Another significant problem was that the filters removed the visible light as well as UV, which made it impossible for the operators to view the chips during the alignment process. To solve this problem, they added an image intensifier system that produced a visible image from the UV that could be used during alignment, but this added to the unit's cost.
New concept
Harold Hemstreet, manager of what was then the Electro-Optical Division, felt that Perkin-Elmer could improve on the Microprojector. He called on Abe Offner, the company's main optical designer, to come up with a solution. Offner decided to explore systems that would focus the light using mirrors instead of lenses, thus avoiding the problem of dispersion. Mirrors suffer from another problem, aberration, which makes it difficult to focus near the edges of the mirror. Combined with the desire to move to the larger 3-inch wafers, a mirror would be a difficult solution in spite of its advantages.
Offner's solution was to use only a small portion of the mirror system to image the mask, a section where the focus was guaranteed to be correct. This was along a thin ring running about halfway out from the center of the primary mirror. That meant only this sliver of the mask's image was properly focussed. This could be used if the resulting light was magnified to the size of the mask, but Rod Scott suggested that it instead be used by scanning the sliver of light across the mask.
Scanning requires the light to shine on the photoresist for the same time as it would for the entire wafer in a contact aligner, so this implied that a scanner would be much slower to operate, as it imaged only a small portion at a time. However, because the mirror was achromatic, the entire output of the lamp could be used, rather than just a small window of frequencies. In the end, the two effects offset each other, and the new system's imaging time was as good as contact systems.
John Bossung built a proof-of-concept system that copied a mask onto a photographic slide. This won another $100,000 contract from the Air Force to produce a working example.
Practical design
The $100,000 would not be enough to bring such a system to commercial production, so Hemstreet had to persuade management to fund development. At the time, another division was asking for funds to develop a laser letterpress, a high-speed currency printing system, and Hemstreet had to argue they should be funded instead of that project. When the board of directors asked about the potential market, he suggested that the company might sell 50 of the systems, which was laughed at as no one could imagine a requirement for 50 such machines. Nevertheless, Hemstreet managed to win approval for the project.
In May 1971 a production team was formed, led by Jere Buckley, a mechanical designer, and Dave Markle, an optical engineer. Offner's original design required the mask and wafer to be scanned horizontally in precisely the same motion as the mask passed over the active area of the mirror system. This appeared to be fantastically difficult to arrange with the required precision. They developed a new layout where both the mask and wafer were held on opposite ends of a C-shaped holder, at right angles to the main mirror. New mirrors reflected the light through right angles so vertical motion of the holder was translated into horizontal scanning over the main mirror, and a roof prism flipped the final image so that the mask and wafer did not produce mirror images. By making the C-shaped holder large enough, rotating the assembly produced a facsimile of horizontal scanning that was more than accurate enough for the desired resolution. A flexure bearing was used to provide super-smooth rotational motion. Perkin-Elmer boasted that one could throw a handful of sand into the mechanism and it would still work perfectly. There is no record of the scanner ever failing.
The basic mechanical design was completed by November 1971. The next step was to come up with a lamp that could efficiently light the curved section of the mirror. They called Ray Paquette at Advanced Radiation Corporation, and after working on it for about two hours he had produced a sample of a curved lamp. Offner then designed a new collimator that worked with the curved shape. Because almost all of the light from the lamp was being used, scanning took 10 to 12 seconds, a dramatic improvement over older systems. The next problem was how to align the mask, as the system focussed only UV light. This was solved by adding a dielectric coating that reflected the UV but not visible light. A separate lamp was used during the alignment process, with the light passing through the optics to the microscope that the operator used to align the mask.
The product was set to launch in the summer of 1973. In a pre-launch sales effort, the company ran a series of wafers for Texas Instruments, which they then used as their "golden wafers" to show to potential clients. They showed the wafers to Raytheon who rejected them, National Semiconductor who were impressed, and Fairchild Semiconductor who produced electron microscope images of the wafers which showed they had "horrible edges". By the time they returned to company headquarters in Norwalk, Raytheon had indicated that the problem might not be with the aligner itself, but with the photoresist layers. They sent one of their experienced operators to Perkin-Elmer and began sorting out the practical problems of fabrication that the company had not had to deal with previously.
Micralign 100
The first sale of what was now known as the Micralign 100 was in 1974 to Texas Instruments, which paid $98,000 for the machine, about three times the price of existing high-end contact aligners. Sales to Intel and Raytheon followed. Intel kept their system secret, and were able to introduce new products, notably memory devices, at prices no one else could touch. The secret finally leaked out when various Intel workers left the company.
The sales pitch to early customers was simple; they could use their existing glass master masks, or "reticles", without the need to print working masks at all. The masks would last 100,000 uses instead of 10. By the next year, the company was in full-out production and had a year-long backlog of orders. By 1976, they were selling 30 a month. The only issue found during initial use was that the longer exposures led to new issues with thermal expansion, which was cured by moving from conventional soda-lime glass to borosilicate glass for the masks.
The real advantage was not a reduction in mask costs, but improved yield. A 1975 report by a third-party research firm outlined the impressive advantages; because the contact problems with dirt and sticking emulsion were eliminated, yields had improved dramatically. For simple single-layer ICs like the 7400 series, yields improved from 75 percent with contact printing to 90 percent with the Micralign. Results were more dramatic for larger chips; a typical four-function calculator chip yielded 30 percent using contact printing, while the Micralign yielded 65 percent.
Microprocessors were only truly useful after the introduction of the Micralign. The Intel 8088 had yields of about 20% on older systems, improving to 60% on the Micralign. Other microprocessors were designed from the start specifically for fabrication on the Micralign. The Motorola 6800 was produced using contact aligners and sold for $295 in single units. Chuck Peddle found customers would not buy it at that cost and designed a low-cost replacement. When Motorola management refused to fund development, he left and moved to MOS Technology. Their MOS 6502 was designed specifically with the Micralign in mind, with a combination of high yield and a smaller feature set allowing them to hit their design cost of $5 per unit. They introduced the 6502 only a year after the 6800, selling it for $25 in singles, and sold the subsequent 6507 with their RIOT support IC to Atari for a total of $12 per pair.
Later generations
Several improvements were introduced into the line to adapt to changes in the IC market. One of the first, on the Model 110, was the addition of an automated wafer loader, which allowed the operators to rapidly mask many wafers in a row.
The Model 111 was a single-wafer model that replaced the 100, and could be adapted for use with 2-, 2.5- or 3-inch wafers, and optionally 4×4-, 3.5×3.5- or 3×3-inch masks. The Model 120 was a 111 with automatic wafer loading. The 130 worked with 100 mm wafers and 5×5-inch masks on a single wafer system, and the 140 added wafer loading to the 130. Any existing model could be adapted to other wafer and mask sizes, or add wafer loading, through conversion kits.
The second-generation Micralign was introduced in 1979. This offered higher resolutions and the ability to work with larger wafers, but also cost much more at $250,000. This higher price was offset by its ability to print more chips per wafer, due to the smaller feature sizes. The 1981 Model 500 increased throughput to 100 wafers an hour, helping to offset its $675,000 price.
By the early 1980s, Perkin-Elmer was firmly in control of the majority of the aligner market, in spite of concerted efforts on the parts of many companies to enter the space. Between 1976 and 1980, overall company sales tripled to $966 million, of which $104 million was from the Microlithography Division, making it the single largest division of the company, and by far the most profitable.
Exiting the market
While Perkin-Elmer was introducing the Micralign, several other companies were working on different solutions to the same basic problem of focussing a light across the ever-growing wafers. GCA, formerly Geophysical Corporation of America, had been working on a concept that focused on only a small part of the wafer at a time, magnifying the image of the mask about 10-to-1 so it could shine more light through a much larger mask and make up for the fact that it used only a single band of UV light. IBM had purchased one at about the same time the Micralign came to market, but gave up on the system and concluded it could never work.
By 1981, GCA had solved the problems in the stepper system. During that period, the chip industry had continually moved to denser features and more complex designs. The Micralign was running out of resolution, while the additional magnification in the GCA system allowed it to operate at finer feature sizes. With roughly the same speed that the Micralign ended sales of contact printers, GCA's stepper ended sales of the Micralign. Perkin-Elmer had simply not listened to its customers who were clamoring for higher resolution, and ignored the research and development of newer systems.
Instead of steppers, the Model 600 bet on deep ultraviolet (DUV) as a solution to the resolution problem. IBM used these to run a memory chip series, but no one else had an effective photoresist that worked in DUV, and few other customers purchased the system. Steppers were far slower than the Micralign and much more expensive, so sales started very slowly, but by the mid-1980s the stepper was rapidly taking over the market.
In an effort to stay in the market, in 1984 Perkin-Elmer purchased Censor, a stepper company from Liechtenstein. The product never made major inroads in the market, and in spite of GCA's bankruptcy in 1987, Perkin-Elmer decided to give up on the Microlithography Division and put it on the market in April 1989, along with their electron-beam lithography (EBL) division. The EBL work quickly sold, but the aligner division lingered. In 1990 it was purchased by the Silicon Valley Group (SVGL) in a multi-way deal involving IBM whose involvement was brokered by Nikon. SVGL was purchased by ASML Holding in 2001.
Notes
References
Citations
Bibliography
Lithography (microfabrication)
Machines | Micralign | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 4,190 | [
"Machines",
"Microtechnology",
"Physical systems",
"Mechanical engineering",
"Nanotechnology",
"Lithography (microfabrication)"
] |
57,979,975 | https://en.wikipedia.org/wiki/Abernethy%20and%20Co%20Stonemason%27s%20Lathe | The Abernethy and Co Stonemason's Lathe is a heritage-listed former stonemason's lathe on public display in Moruya, on the South Coast of New South Wales in Australia. It was built during 1881 by J. Abernethy & Co, . The property is owned by the Office of Environment and Heritage, an agency of the Government of New South Wales. It was added to the New South Wales State Heritage Register on 2 April 1999. It is currently maintained by the Moruya and District Historical Society, located at 85 Campbell Street, Moruya.
History
This twin bed lathe was made in 1881 at Aberdeen by J. Abernethy and Co. and brought to Sydney where it was used to turn columns for the Sydney GPO, The Queen Victoria Building and for the granite pedestal for the Queen Victoria statue in Queen's Square, Sydney. It was last in use at Loveridge and Hudson's Yard in Sydney in the 1960s.
It was given by Mr Ted Hudson to the Lachlan Vintage Village at Forbes and disposed of at auction in May 1987. It was subsequently repurchased by the Heritage Council of NSW and ownership transferred to the Minister for Planning under the NSW Heritage Act 1977. The lathe was reassembled at the Lachlan Vintage Village, Forbes, following recognition of its significance. In 1999, the lathe was transferred to the State Heritage Register under the Act.
The Heritage Council resolved at its 4 November 2009 meeting to recommend that the Minister:
acting as a corporation sole under section 102 of the NSW Heritage Act 1977, transfers title of the Abernethy Stonemason's Lathe from the corporation sole to Eurobodalla Shire Council.
The Abernethy Stonemason's Lathe is the only moveable item of its type known to have survived in Australia. The item is currently owned and managed by the Minister for Planning under the Heritage Act 1977. Eurobodalla Shire Council has approached the Heritage Branch, NSW Department of Planning, to obtain the item for public display at Moruya. Council has also indicated its willingness to obtain legal title to the lathe in perpetuity, from the Minister for Planning, to assist their active care and long term management of the item. The Heritage Branch has identified this action as being beneficial to the long-term care, display and public enjoyment of the lathe.
Eurobodalla Shire Council has obtained planning permission to display the item in a public outdoor setting adjacent to the Moruya Historical Society building near Campbell Street, Moruya. Transfer of the item will require later Heritage Council approval under section 57 and/or section 60 of the Act. Under the Heritage Act, transfer of an item of environmental heritage from the control of the Minister and the corporation to another party requires the recommendation of the Heritage Council of NSW under section 116(2). The Heritage Council agreed to recommend the disposal of the item at its meeting of 4 November 2009. The Minister wrote to Eurobodalla Council in March 2010 to formally transfer title.
Records indicate that the lathe was last used at Loveridge & Hudson's Yard in Sydney in the 1960s. It is unclear whether the Lathe was ever situated near the main source of granite at Moruya, or whether it was always located in Sydney. The Moruya Antique Tractor and Machinery Association (MATMA) suggests that a collection of historic photographs show the Lathe situated at Louttit's quarry, Moruya, at one stage.
For many years the lathe was in limbo; its last known use was at the stonemasonry firm of Loveridge and Hudson in Sydney during the 1960s.
Loveridge and Hudson was registered in 1882, just after the lathe's construction. Whether they ordered it or had it shipped to Australia is unknown. The company records are now lost.
The lathe is, however, credited with turning the stone columns for some of Sydney's most majestic buildings: the extension of the Sydney GPO in the 1880s, the Queen Victoria Building (1898), Circular Quay railway station (early 1900s), and the Martin Place Savings Bank (1925), amongst others. At this time, suggestions are that the lathe was based in Sydney.
It is unknown whether the lathe spent its entire working life in Sydney. There is some evidence that it had once been at Louttit's Quarry on the banks of the Moruya River. We do know that it turned Moruya granite and we know that by 1977 the Sydney company of Loveridge and Hudson had donated it to the then growing Lachlan Vintage Village in Forbes, an Old Sydney Town-type heritage village. The lathe was in Sydney at that time. Here its fortunes were a little more certain, until the village owners sold it to a local scrap metal business a decade later in 1987. Heritage enthusiasts including the Department of Public Works and the National Trust, lamented the lathe's loss.
Bob Carr, then Minister for Heritage, placed an Order over the historic monument that year under the Heritage Act. The Heritage Council, under its then Chair Justice Hope, stepped in and purchased it, and it was given a reprieve in the Village grounds.
With Lachlan Vintage Village's verging on collapse, the lathe was once again in jeopardy. Calls to remove it were made from 1991. The relocation of the lathe to Moruya in May 2010 was made possible by the coming together of key community members and the local Council to rescue this important part of New South Wales' industrial heritage. Local groups recognised the importance of this item to the local area and were instrumental in starting the dialogue to get the lathe relocated.
The role of the Heritage Council has been instrumental in the preservation of the lathe, having saved it from scrapping in the 1980s and later recommending to the Minister that it be transferred to Eurobodalla for final display and protection.
Description
A lathe for turning large stone columns. Assembled from cast iron components with milled gears and shafts, mounted on a bed with toothed rails for positioning the travelling end.
Condition
As at 9 September 2010, the lathe was substantially intact. Reassembled following confirmation of heritage status after initial dismantling. It is not presently in working order but weather protection had been provided by the Heritage Branch, NSW Department of Planning, when on display at Forbes's Lachlan Vintage Village. Relocated and reassembled in Moruya in May 2010 and undergoing conservation treatments.
Largely intact but not in working order. Reassembly may not have included functional connectors.
Modifications and dates
Reassembly completed in April 1993 by W. A. Knights Steel Fabricators & Erectors, Forbes. Moved to Forbes from the Sydney yard of Loveridge & Hudson, stonemasons.
Relocated from Forbes to Moruya in May 2010. Set up in a new display venue adjacent to the Moruya Historical Society in Campbell Street, Moruya, on land set aside by the Eurobodalla Shire Council for this purpose. A cover shed was established around the lathe. The Moruya Antique Tractor and Machinery Association Inc. reassembled the lathe and began conservation treatments, including fish oil applications, on behalf of the present owner, Eurobodalla Shire Council.
Heritage listing
As at 17 August 2010: a rare surviving piece of Victorian machinery that was in use for nearly a century, this stonemason's lathe demonstrates changes in technology and in the taste for the use of stone elements in public buildings. It is associated with many significant public buildings in Sydney of the late Victorian period including Sydney General Post Office, the Queen Victoria Building and the pedestal for Queen Victoria's statue in Queen's Square. It is rare for its size and demonstrates aspects of late 19th century toolmaking technology.
Abernethy and Co Stonemason's Lathe was listed on the New South Wales State Heritage Register on 2 April 1999 having satisfied the following criteria.
The place is important in demonstrating the course, or pattern, of cultural or natural history in New South Wales.
Associated with construction of major public works in the late Victorian period in Sydney including the Sydney General Post Office, The Queen Victoria Building and the Pedestal for Queen Victoria's Statue in Queen's Square.
The place has a strong or special association with a particular community or cultural group in New South Wales for social, cultural or spiritual reasons.
Indicative of the fluctuating preference for stone elements in public buildings.
The place has potential to yield information that will contribute to an understanding of the cultural or natural history of New South Wales.
An example of 19th century technology which survived in use for its original specialised purpose for nearly a century.
The place possesses uncommon, rare or endangered aspects of the cultural or natural history of New South Wales.
A rare surviving Victorian stonemason's lathe, possibly unique outside Europe.
See also
References
Attribution
New South Wales State Heritage Register
Eurobodalla Shire
Stonemasonry tools
Industrial machinery
Lathes
Articles incorporating text from the New South Wales State Heritage Register | Abernethy and Co Stonemason's Lathe | [
"Engineering"
] | 1,852 | [
"Industrial machinery"
] |
47,732,302 | https://en.wikipedia.org/wiki/Vanadium%20phosphates | Vanadium phosphates are inorganic compounds with the formula VOxPO4 as well related hydrates with the formula VOxPO4(H2O)n. Some of these compounds are used commercially as catalysts for oxidation reactions.
Vanadium(V) phosphates
A common vanadium phosphate is VOPO4•2H2O.
Seven polymorphs are known for anhydrous VOPO4, denoted αI, αII, β, γ, δ, ω, and ε. These materials are composed of the vanadyl group (VO) and phosphate (PO4^3−). They are yellow, diamagnetic solids, although when contaminated with vanadium(IV) derivatives, samples exhibit EPR signals and have a bluish cast. For these materials, vanadyl refers to both vanadium(V) oxo and vanadium(IV) oxo centers, although conventionally vanadyl is reserved for derivatives of VO^2+.
Preparation, reactions, and applications of VOPO4•2H2O
Heating a suspension of vanadium pentoxide and phosphoric acid gives VOPO4•2H2O, isolated as a bright yellow solid. According to X-ray crystallography, the V(V) centers are octahedral, with long, weak bonds to aquo ligands.
Reduction of this compound with alcohols gives the vanadium(IV) phosphates.
These compounds are catalysts for the oxidation of butane to maleic anhydride. A key step in the activation of these catalysts is the conversion of VO(HPO4)•0.5H2O to the pyrophosphate (VO)2(P2O7). This material (CAS#58834-75-6) is called vanadyl pyrophosphate as well as vanadium oxide pyrophosphate.
Vanadium(IV) phosphates
Several vanadium(IV) phosphates are known. These materials are typically blue. In these species, the phosphate anion is singly or doubly protonated. Examples include the hydrogenphosphates VOHPO4•4H2O and VO(HPO4)•0.5H2O, as well as the dihydrogen phosphate VO(H2PO4)2.
Vanadium(III) phosphates
Vanadium(III) phosphates lacking the oxo ligand have the formulas VPO4•H2O and VPO4•2H2O. The monohydrate is isostructural with MgSO4•H2O. It adopts the structure of the corresponding hydrated aluminium phosphate. Oxidation of VPO4•H2O yields the two-electron electroactive material ε-VOPO4.
Notes
References
Vanadium compounds
Catalysts
Phosphates | Vanadium phosphates | [
"Chemistry"
] | 603 | [
"Catalysis",
"Catalysts",
"Salts",
"Phosphates",
"Chemical kinetics"
] |
47,734,015 | https://en.wikipedia.org/wiki/Parakaryon | Parakaryon myojinensis, also known as the Myojin parakaryote, is a highly unusual species of single-celled organism known only from a single specimen, described in 2012. It has features of both prokaryotes and eukaryotes but is apparently distinct from either group, making it unique among organisms discovered thus far. It is the sole species in the genus Parakaryon.
Etymology
The generic name Parakaryon comes from Greek παρά (pará, "beside", "beyond", "near") and κάρυον (káryon, "nut", "kernel", "nucleus"), and reflects its distinction from eukaryotes and prokaryotes. The specific name myojinensis reflects the locality where the only sample was collected: from the bristle of a scale worm collected from hydrothermal vents at Myōjin Knoll (明神海丘), at depth in the Pacific Ocean, near Aogashima island, southeast of the Japanese archipelago. The authors explain the full binomial as "next to (eu)karyote from Myojin".
Structure
Parakaryon myojinensis has some structural features unique to eukaryotes, some features unique to prokaryotes, and some features different to both. The table below details these structures, with matching traits coloured beige.
Interpretations
Genuine species or artifact
Yamaguchi et al. proposed in their 2012 paper that there were three reasons why the specimen they named P. myojinensis was not simply a result of parasitic or predatory bacteria living within another prokaryote host, which they acknowledged is known from several examples:
"It is difficult to imagine that multiple bacteria of different species attacked a host at the same time." They referred to Figure 2d, showing the isolated forms of the inclusions, one large helix with three turns (volume 2.3 μm³) and two much smaller pieces (volumes 0.2 & 0.1 μm³).
"Secondly, because the cytoplasms of the host and the endosymbionts show orderly and electron-dense cellular structures, no digestion in either host or endosymbionts appears to have occurred."
"Lastly, if Parakaryon myojinensis originated due to a current interaction between predators and hosts, then there must be dense populations of predators and hosts, because predators need to find hosts quickly for survival once they are released from the previous host."
In 2016, Yamaguchi et al. detailed the discovery of helical bacteria on polychaetes collected from the same location, which they named "Myojin spiral bacteria". In 2020, Yamaguchi and two others published a new short paper on their studies of the microbiota of polychaetes from Myojin Knoll. The authors stated "Among them, we often observed bacteria that contained intracellular bacteria on ultrathin sections." They studied one such specimen and concluded that the "host" bacterium was dead and its cell wall broken. The smaller bacteria could have been feeding on the larger bacterium but they also suggest "The association of the bacteria with dead bacteria could also have been artificially caused by the centrifugation steps used for the preparation of specimens for electron microscopy." In this paper, all five mentions of P. myojinensis were as a valid taxon with no implication that it is an artifact.
Evolutionary significance
It is not clear whether P. myojinensis can or should be classified as a eukaryote or a prokaryote, the two categories to which all other cellular life belongs. Adding to the difficulties of classification, only one instance of this organism has been discovered to date, and so scientists have been unable to study it further. Its discoverers suggested that additional specimens would be needed for culturing and DNA sequencing to place the organism in a phylogenetic context.
British evolutionary biochemist Nick Lane hypothesized in a 2015 book that the existence of P. myojinensis could be the first known example of symbiogenesis outside eukaryotes, which could offer clues to the requirements for the development of complex life in general.
See also
Anatoma fujikurai, a species of sea snail discovered at the same location
References
Further reading
Species described in 2012
Incertae sedis
Monotypic genera
Biota of the Pacific Ocean
Japanese archipelago
Microorganisms
Marine organisms
Endosymbiotic events
Species known from a single specimen | Parakaryon | [
"Biology"
] | 927 | [
"Biota of the Pacific Ocean",
"Symbiosis",
"Endosymbiotic events",
"Species known from a single specimen",
"Taxonomy (biology)",
"Individual organisms",
"Incertae sedis",
"Microorganisms",
"Biota by sea or ocean"
] |
47,737,366 | https://en.wikipedia.org/wiki/Ti-6Al-2Sn-4Zr-2Mo | Ti-6Al-2Sn-4Zr-2Mo (UNS designation R54620), also known as Ti 6-2-4-2, is a near alpha titanium alloy known for its high strength and excellent corrosion resistance. It is often used in the aerospace industry for creating high-temperature jet engines and the automotive industry to create high performance automotive valves.
Chemistry
Markets
Aerospace
Automobile
Applications
High-temp jet engines
Gas turbine compressor components (Blades, Discs, Spacers and Seals)
High performance automotive valves
Sheet metal parts in afterburners and hot airframe sections
Aircraft brake parts (e.g. Boeing 787)
Specifications
AMS: 4919, 4975, 4976, 4979, T 9047
MIL-T : 9046, 9047
MIL-F: 81556, 82142
Werkstoff: 3.7145
EN: 3.71450
GE: B50TF22, B50TF21, C50TF7
PWA: 1220
DIN: 3.7164
UNS: R54620
References
Titanium alloys | Ti-6Al-2Sn-4Zr-2Mo | [
"Chemistry"
] | 233 | [
"Titanium alloys",
"Alloys"
] |
77,651,409 | https://en.wikipedia.org/wiki/High-pressure%20torsion | High-pressure torsion (HPT) is a severe plastic deformation technique used to refine the microstructure of materials by applying both high pressure and torsional strain. HPT involves compressing a material between two anvils while simultaneously rotating one of the anvils, inducing shear deformation. HPT is widely used in materials science to create ultrafine-grained and nanostructured metallic and non-metallic materials, control phase transformations, synthesize new materials or investigate mechanisms underlying some natural phenomena. This process leads to significant grain refinement, resulting in materials with enhanced mechanical properties such as increased tensile strength and hardness. It was introduced in 1935 by P.W. Bridgman, who developed early methods to apply extreme strain under high pressures in material processing.
HPT also has applications in producing metals with enhanced superplasticity, improving the toughness of alloys, and creating materials with unique properties like high wear resistance. Researchers use HPT to study fundamental aspects of deformation and phase transition under extreme conditions. Additionally, HPT is being explored for potential applications in the energy field. Progress in HPT science and technology opens new possibilities in the development of advanced materials with superior properties.
References
Metallurgy
Industrial processes | High-pressure torsion | [
"Chemistry",
"Materials_science",
"Engineering"
] | 254 | [
"Metallurgy",
"Materials science stubs",
"Materials science",
"nan"
] |
77,652,894 | https://en.wikipedia.org/wiki/Crystallography%20on%20stamps | The depiction of crystallography on stamps began in 1939 with the issue of a Danzig stamp commemorating Wilhelm Röntgen who discovered X-rays. Crystallographic stamps contribute to crystallography education and to the public understanding of science.
Crystallography on stamps was promoted as part of the International Year of Crystallography in 2014.
Scope
A crystallography stamp has one or more of the following characteristics:
It depicts a crystallographer, or a polymath who did significant work in the crystallography field
It depicts a crystallographic concept, such as quasicrystals, or a crystallographic object, such as a crystal prepared for X-ray diffraction
It depicts a crystallographic symbol or formula such as Bragg's law
It commemorates a crystallographic event, such as an international congress, or an international year in the crystallographic field
The following types of material are excluded (although they may also be collected by crystallography stamp enthusiasts):
Postal stationery, e.g. a postcard depicting a crystallographer with a non-crystallographic stamp affixed
Cinderella, local, private or personal issues, i.e. unofficial stamps
Non-postal stamps, e.g. revenue stamps
Stamps issued by non-existing/unrecognized countries and/or in excess of actual postal requirements
Examples
Crystallographers
Stamps depicting individual crystallographers are sometimes issued by countries to commemorate the birth or death anniversaries of their significant national crystallographers. For example, on August 6, 1996, the British postal service (Royal Mail) issued a stamp honouring Dorothy Hodgkin, a pioneer of protein crystallography (Great Britain's first female Nobel laureate, in Chemistry in 1964). Some countries have also issued stamps depicting internationally famous scientists associated with crystallography. For example, up to 2023, 55 stamps from 40 countries have been issued commemorating Wilhelm Röntgen, the discoverer of X-rays.
A number of crystallographers have been awarded the Nobel Prize and have subsequently appeared on stamps. The following Nobel prize-winning crystallographers (or their work) have been depicted on stamps: Charles Glover Barkla, Paul D. Boyer, Lawrence Bragg, William Henry Bragg, Georges Charpak, Emmanuelle Charpentier, Francis Crick, Robert Curl, Clinton Davisson, Peter Debye, Johann Deisenhofer, Louis de Broglie, Jennifer Doudna, Ben Feringa, Andre Geim, Herbert A. Hauptman, Dorothy Hodgkin, Jerome Karle, Martin Karplus, Aaron Klug, Brian Kobilka, Harry Kroto, Robert Lefkowitz, Michael Levitt, Hartmut Michel, Konstantin Novoselov, Ardem Patapoutian, Linus Pauling, Max Perutz, Venki Ramakrishnan, Wilhelm Röntgen, Jean-Pierre Sauvage, Dan Shechtman, Richard Smalley, Thomas A. Steitz, Fraser Stoddart, George Paget Thomson, Max von Laue, Arieh Warshel, James Watson, Maurice Wilkins, Ada Yonath.
Crystallographic concepts and objects
Stamps depicting a crystallographic concept or object are sometimes combined with a portrait of the crystallographer responsible for inventing the concept or object. Examples of crystallographic concepts and objects are shown in the gallery above: a 1958 Belgian stamp illustrating body-centred cubic structure, a 1963 Canadian stamp illustrating mining for minerals, and a 1978 Soviet stamp depicting crystallogenesis and commemorating Soviet-Polish cooperation in space flight.
Crystals and crystallographic symbols
Stamps depicting crystals or crystallographic symbols are shown in the gallery above: a 1968 Soviet stamp depicting a geologist and garnet crystals, a 1968 Soviet stamp depicting a rhenium dimeric anion with a quadruple Re-Re bond ([Re2Cl8]2−), and a 2006 Romanian stamp illustrating amethyst crystals.
International Year of Crystallography
The International Year of Crystallography (IYCr) took place in 2014. To promote crystallography the following countries issued stamps to commemorate the IYCr: Austria (personalised), Belgium, India, Israel, Liechtenstein, Mexico, Moldova (personalised), North Korea, Poland, Portugal, South Korea, Slovakia, Slovenia and Switzerland.
Publications
No book has yet been published exclusively in the area of crystallographic stamps; however, much crystallographic material is included in the book A philatelic ramble through chemistry by Edgar Heilbronner and Foil Miller.
Daniel Rabinovich is the current leading writer in the field, having published articles on the International Year of Crystallography, and 35 articles covering chemistry, crystallography and physics philatelic subjects in the journal Chemistry International from 2007 to 2013.
The Chemistry and Physics on Stamps Study Unit (CPOSSU) of the American Topical Association has published a members' journal Philatelia Chimica et Physica since 1979 and a number of articles cover crystallographic topics.
Listings of new issues of crystallographic stamps are included in the monthly Scott Stamp magazine and in Linn's Stamp News; they are also available online from October 2010 to date in the Science & Technology section.
References
Stamp collecting
Philately
Topical postage stamps
Crystallography | Crystallography on stamps | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,100 | [
"Crystallography",
"Condensed matter physics",
"Materials science"
] |
56,234,193 | https://en.wikipedia.org/wiki/Ulocladium%20botrytis | Ulocladium botrytis is an anamorphic filamentous fungus belonging to the phylum Ascomycota. Commonly found in soil and damp indoor environments, U. botrytis is a hyphomycetous mould found in many regions of the world. It is also occasionally misidentified as a species of the genera Alternaria or Pithomyces due to morphological similarities. Ulocladium botrytis is rarely pathogenic to humans but is associated with human allergic responses and is used in allergy tests. Ulocladium botrytis has been implicated in some cases of human fungal nail infection. The fungus was first discovered in 1851 by German mycologist Carl Gottlieb Traugott Preuss.
History and taxonomy
The genus Ulocladium was first discovered in 1851 by German mycologist, Preuss, in a small batch of his specimens. An abundant hyphomycetous growth of Ulocladium was found on a thin sliver of wood and was drawn and labeled by Preuss as Ulocladium botrytis in his manuscript. This sample was later acquired by the Botanisches Museum in Berlin. At the time, the name of the genus and the species type was published as a nomen nudum due to insufficient description. Furthermore, certain taxa of Ulocladium greatly resemble Alternaria species, resulting in occasional misidentifications. During the late 1900s, a mycologist named Curran described Alternaria maritima as a species new to Ireland. However, Curran's new claim was questioned when another mycologist, Kohlmeyer, initiated a movement to verify the classification of this fungus. After much study, it was found that Alternaria maritima was in fact Ulocladium botrytis. Although Ulocladium is now a genus of its own, it was once included in the genus Alternaria. Several recent DNA-based phylogenetic studies have presented convincing data which places Ulocladium species within the genus Alternaria; however, Ulocladium species do not produce certain compounds and metabolites produced by Alternaria species. Some modern sources believe that Ulocladium botrytis should be considered conspecific with Ulocladium atrum.
Growth and morphology
Ulocladium botrytis is a hyphomycetous mould that favors growth in damp indoor environments. Although it mainly uses nitrogen, other nutrient sources have been tested to determine that U. botrytis growth rate is dependent on the type of media provided. Ulocladium botrytis colonies are commonly velvety in texture and grow in an assortment of colors ranging from dark blackish brown to black. The hyphae are 3-4 μm in diameter and yellow to golden brown in colour with a smooth or slightly rough texture. Conidiophores are short and either erect and ascending, or contorted into various shapes. In addition, they are often bifurcated near the apex at sharp angles. Ulocladium botrytis conidiophores are typically light golden brown in color and smooth, with a length of up to 100 μm and a thickness of around 3-5 μm. The conidia themselves are typically ellipsoidal or obovoid in shape; spheroidal conidia are uncommon in this species. They are golden brown in color and frequently have a minute hilum and a warty, verrucose exterior ornamentation. Ulocladium botrytis conidia typically have three transverse septa and a longitudinal septum, but these septa rarely overlap to form a cross. This species never forms conidial chains and the conidia never have a beak.
Physiology
Ulocladium botrytis is an anamorphic fungus, thus it undergoes asexual reproduction. Although it is an asexual fungus, U. botrytis possesses the mating type locus, which consists of two dissimilar DNA sequences termed MAT1-1-1 and MAT1-2-1. These U. botrytis MAT genes are essential for controlling colony size and asexual traits such as conidial size and number in U. botrytis. The U. botrytis MAT genes have lost the ability to regulate sexual reproduction in U. botrytis; however, they have the ability to partially induce sexual reproduction in Cochliobolus heterostrophus, a heterothallic species, upon heterologous complementation.
Ulocladium botrytis has cellulolytic ability and contains a cellulose-degrading enzyme complex that can degrade recalcitrant plant litter under alkaline conditions, a trait that is uncommon in other cellulolytic systems. This fungus' ability to hydrolyze cellulose in the solid form is best at a pH of 6.0, as this pH allows maximal growth of U. botrytis under alkaline conditions. In contrast, its ability to hydrolyze liquid cellulose under alkaline conditions is best at a pH of 8.0. Additionally, a new tyrosine kinase (p56tck) inhibitor called ulocladol, with the molecular formula C16H14O7, was found in ethyl acetate extract from U. botrytis. Ulocladium botrytis also synthesizes extracellular keratinases and can grow in the presence of keratin. Moreover, this fungus can produce carboxymethyl cellulase and protease on Eichhornia crassipes wastes.
As a fungus, Ulocladium botrytis produces a diverse collection of chemical compounds and metabolites. It produces mixtures of volatile organic compounds that include terpenes, alcohols, ketones, and nitrogen-containing compounds. Furthermore, U. botrytis aids in decreasing aldehyde levels. Dodecane and 9,10,12,13-tetrahydroxyheneicosanoic acid were also found as metabolites of U. botrytis. Another U. botrytis metabolite is 1-hydroxy-6-methyl-8-(hydroxymethyl)xanthone, which has antimicrobial effects indicating its identification as an antifungal metabolite. Importantly, a major protein allergen of Alternaria alternata, termed Alt a 1, and an allergen homologous to it is expressed in the excretory-secretory materials of U. botrytis.
Habitat and ecology
The distribution of Ulocladium botrytis is fairly broad, wherein it has been found worldwide in areas of Europe, North America, Egypt, India, Pakistan, and Kuwait. It is often isolated from soil, where it is a common contaminant; however, U. botrytis also grows on rotten wood, paper, and other textiles or on dead herbaceous plants. It also heavily favors growth in damp indoor environments. This fungus has been found growing on deciduous alder trees (Alnus) which belong to the birch family Betulaceae. Trees in this family include the American green alder and the mountain alder. U. botrytis can also be found growing on the evergreen coniferous tree genus Pseudotsuga of the family Pinaceae; different trees include the Douglas fir and the big-cone spruce. In addition, this fungus can grow on the flowering plant genus Sphaeralcea of the mallow family Malvaceae; plants include the desert hollyhock and the prairie mallow. A previously conducted study also isolated a unique strain of Ulocladium botrytis, strain number 193A4, from the marine sponge Callyspongia vaginalis. Another independent study found seed-borne Ulocladium botrytis from pearl millet (Pennisetum typhoides).
Relationships with other organisms coexisting in the same ecosystem can be beneficial for some organisms, and this applies to U. botrytis. Ulocladium botrytis is capable of surviving in xerophilic ecosystems and alkaline-calcareous soils, both extreme habitats, when associating with the tree species Scutia buxifolia. The U. botrytis strain associated with this environment is called LPSC 813 and has great cellulolytic ability. Ulocladium botrytis has potential, albeit limited, to be used as a biocontrol agent against the parasitic herbaceous plant genus Orobanche that affects the yield of certain crops like tomatoes. Ulocladium botrytis is also capable of in vitro antagonism of root-disease pathogens such as Heterobasidion annosum, Phellinus weirii, and Armillaria ostoyae. Apart from U. botrytis, other Ulocladium species such as U. atrum and U. oudemansii also present biocontrol potential.
Impact on human health
Ulocladium botrytis is currently regarded as a source of home allergen sensitization and is used in skin-prick tests that test for mould allergens and work-related allergens. This is due to the production and detection of Alt a 1, the major allergen produced by Alternaria alternata, in U. botrytis. In addition, U. botrytis also releases another allergen, homologous to Alt a 1, that possesses the capacity to cause allergic responses in humans. The allergic symptoms caused by U. botrytis are compatible with rhinitis and asthma; however, U. botrytis was also found in patients with allergic fungal sinusitis. Importantly, Ulocladium botrytis is rarely pathogenic to humans but has been found to be associated with cases of onychomycosis, a fungal infection of the nail.
References
Ulocladium
Cereal diseases
Fungal plant pathogens and diseases
Fungi described in 1851
Fungus species | Ulocladium botrytis | [
"Biology"
] | 2,083 | [
"Fungi",
"Fungus species"
] |
63,201,494 | https://en.wikipedia.org/wiki/Geopolymer%20bonded%20wood%20composite | Geopolymer bonded wood composites (GWC) are similar to, and a green alternative to, cement bonded wood composites. These products are composed of a geopolymer binder and wood fibers or wood particles. Depending on the wood-to-geopolymer ratio in the material, the properties of the wood-geopolymer composite material vary.
Function
The main functions of wood in the composite material are weight reduction, reduction of thermal conductivity and the fixture function, whereas the main functions of geopolymer are bonding of wood particles, improvement of fire resistance, providing mechanical strength, improvement of humidity resistance and protection against fungal and insect damage.
They serve similar functions and purposes to all other mineral bonded wood composites. The fact that the binder agent (geopolymer) is mostly produced from industrial residue and waste puts these materials at a greater advantage over other mineral bonded wood composites. However, most of the work under this topic remains at the research and development phase. Some of the core difficulties in the production and commercialization of a standardized product are the variation in the sources of the aluminosilicate binder and the cost involved in activating the binder. Currently, metakaolin remains the key binder used to produce these products, with large variations in other sources of the binder such as slag, fly ash etc.
Uses
The inherent properties and the incorporation of wood fibers and particles in this composite have made it possible to produce GWC building materials that are lightweight and have a variety of uses due to their heat storage capacity, for example in areas of thermal insulation, fire and noise protection. The wood-geopolymer composite material in building walls can serve as a microclimate regulator, absorbing moisture when the air humidity is high and returning the moisture during periods of low air humidity, thus improving the hygrothermal comfort in the building.
Commercialization
Currently, there is no commercialization of these products. Research is ongoing on these composite materials to ascertain their properties and how best to utilize them.
References
External links
Geopolymers
Composite materials
Engineered wood | Geopolymer bonded wood composite | [
"Physics",
"Chemistry"
] | 438 | [
"Materials",
"Composite materials",
"Geopolymers",
"Matter"
] |
63,202,233 | https://en.wikipedia.org/wiki/Genome-wide%20CRISPR-Cas9%20knockout%20screens | Genome-wide CRISPR-Cas9 knockout screens aim to elucidate the relationship between genotype and phenotype by ablating gene expression on a genome-wide scale and studying the resulting phenotypic alterations. The approach utilises the CRISPR-Cas9 gene editing system, coupled with libraries of single guide RNAs (sgRNAs), which are designed to target every gene in the genome. Over recent years, the genome-wide CRISPR screen has emerged as a powerful tool for performing large-scale loss-of-function screens, with low noise, high knockout efficiency and minimal off-target effects.
History
Early studies in Caenorhabditis elegans and Drosophila melanogaster saw large-scale, systematic loss of function (LOF) screens performed through saturation mutagenesis, demonstrating the potential of this approach to characterise genetic pathways and identify genes with unique and essential functions. The saturation mutagenesis technique was later applied in other organisms, for example zebrafish and mice.
Targeted approaches for gene knockdown emerged in the 1980s with techniques such as homologous recombination, trans-cleaving ribozymes, and antisense technologies.
By the year 2000, RNA interference (RNAi) technology had emerged as a fast, simple, and inexpensive technique for targeted gene knockdown, and was routinely being used to study in vivo gene function in C. elegans. Indeed, in the span of only a few years following its discovery by Fire et al. (1998), almost all of the ~19,000 genes in C. elegans had been analysed using RNAi-based knockdown.
The production of RNAi libraries facilitated the application of this technology on a genome-wide scale, and RNAi-based methods became the predominant approach for genome-wide knockdown screens.
Nevertheless, RNAi-based approaches to genome-wide knockdown screens have their limitations. For one, the high off-target effects cause issues with false-positive observations. Additionally, because RNAi reduces gene expression at the post-transcriptional level by targeting RNA, RNAi-based screens only result in partial and short-term suppression of genes. Whilst partial knockdown may be desirable in certain situations, a technology with improved targeting efficiency and fewer off-target effects was needed.
Since initial identification as a prokaryotic adaptive immune system, the bacterial type II clustered regularly interspaced short palindrome repeats (CRISPR)/Cas9 system has become a simple and efficient tool for generating targeted LOF mutations. It has been successfully applied to edit human genomes, and has started to displace RNAi as the dominant tool in mammalian studies. In the context of genome-wide knockout screens, recent studies have demonstrated that CRISPR/Cas9 screens are able to achieve highly efficient and complete protein depletion, and overcome the off-target issues seen with RNAi screens. In summary, the recent emergence of CRISPR-Cas9 has dramatically increased our ability to perform large-scale LOF screens. The versatility and programmability of Cas9, coupled with the low noise, high knockout efficiency and minimal off-target effects, have made CRISPR the platform of choice for many researchers engaging in gene targeting and editing.
Methods
CRISPR/Cas9 Loss of function
The clustered regularly interspaced short palindrome repeats (CRISPR)/Cas9 system is a gene-editing technology that can introduce double-strand breaks (DSBs) at a target genomic locus. By using a single guide RNA (sgRNA), the endonuclease Cas9 can be delivered to a specific DNA sequence where it cleaves the nucleotide chain. The specificity of the sgRNA is determined by a 20-nt sequence, homologous to the genomic locus of interest, and the binding to Cas9 is mediated by a constant scaffold region of the sgRNA. The desired target site must be immediately followed (5’ to 3’) by a conserved 3 nucleotide protospacer adjacent motif (PAM). In order to repair the DSBs, the cell may use the highly error prone non-homologous end joining, or homologous recombination. By designing suitable sgRNAs, planned insertions or deletions can be introduced into the genome. In the context of genome-wide LOF screens, the aim is to cause gene disruption and knockout.
sgRNA libraries
Constructing a Library
To perform CRISPR knockouts on a genome-wide scale, collections of sgRNAs known as sgRNA libraries, or CRISPR knockout libraries, must be generated. The first step in creating a sgRNA library is to identify genomic regions of interest based on known sgRNA targeting rules. For example, sgRNAs are most efficient when targeting the coding regions of genes and not the 5’ and 3’ UTRs. Conserved exons present as attractive targets, and position relative to the transcription start site should be considered. Secondly, all the possible PAM sites are identified and selected for. On- and off-target activity should be analysed, as should GC content, and homopolymer stretches should be avoided. The most commonly used Cas9 endonuclease, derived from Streptococcus pyogenes, recognises a PAM sequence of NGG.
Furthermore, specific nucleotides appear to be favoured at specific locations. Guanine is strongly favoured over cytosine on position 20 right next to the PAM motif, and on position 16 cytosine is preferred over guanine. For the variable nucleotide in the NGG PAM motif, it has been shown that cytosine is preferred and thymine disfavoured. With such criteria taken into account, the sgRNA library is computationally designed around the selected PAM sites.
Multiple sgRNAs (at least 4–6) should be created against every single gene to limit false-positive detection, and negative control sgRNAs with no known targets should be included. The sgRNAs are then created by in situ synthesis, amplified by PCR, and cloned into a vector delivery system.
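To make these design rules concrete, the following Python sketch scans the forward strand of a sequence for 20-nt protospacers immediately upstream of SpCas9 NGG PAM sites and applies simple GC-content and homopolymer filters. The thresholds and the example sequence are illustrative assumptions, not values prescribed by any particular library-design pipeline, and real designs additionally score on- and off-target activity.

```python
import re

def find_candidate_sgrnas(seq, gc_min=0.35, gc_max=0.75, max_homopolymer=4):
    """Scan the forward strand of `seq` for 20-nt protospacers followed by an NGG PAM.

    Returns (position, protospacer, PAM) tuples. The GC and homopolymer
    thresholds are illustrative; real library design also scores predicted
    on-target activity and screens for off-target sites genome-wide.
    """
    seq = seq.upper()
    candidates = []
    for m in re.finditer(r"(?=([ACGT]{21}GG))", seq):  # 20-nt guide + N + GG (the PAM)
        window = m.group(1)
        guide, pam = window[:20], window[20:]
        gc = (guide.count("G") + guide.count("C")) / 20
        run = max_homopolymer + 1
        has_homopolymer = any(base * run in guide for base in "ACGT")
        if gc_min <= gc <= gc_max and not has_homopolymer:
            candidates.append((m.start(), guide, pam))
    return candidates

# Illustrative use on a made-up exon fragment:
example_exon = "ATGGCTGACCGTTACGGATCCAGGTTTACGGCAACTGGCGGAAGGTCACGG"
for pos, guide, pam in find_candidate_sgrnas(example_exon):
    print(pos, guide, pam)
```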
Existing libraries
Developing a new sgRNA library is a laborious and time-consuming process. In practice, researchers may select an existing library depending on their experimental purpose and cell lines of interest.
As of February 2020, the most widely used resources for genome-wide CRISPR knockout screens have been the two Genome-Scale CRISPR Knock-Out (GeCKO) libraries created by the Zhang lab. Available through Addgene, these lentiviral libraries respectively target human and mouse exons, and both are available as a one-vector system (where the sgRNAs and Cas9 are present on the same plasmid) or as a two-vector system (where the sgRNAs and Cas9 are present on separate plasmids). Each library is delivered as two half-libraries, allowing researchers to screen with 3 or 6 sgRNAs/gene.
Aside from GeCKO, a number of other CRISPR libraries have been generated and made available through Addgene. The Sabatini & Lander labs currently have 7 separate human and mouse libraries, including targeted sublibraries for distinct subpools such as kinases and ribosomal genes (Addgene #51043–51048). Further, improvements to the specificity of sgRNAs have resulted in ‘second generation’ libraries, such as the Brie (Addgene #73632) and Brunello (Addgene #73178) libraries generated by the Doench and Root labs, and the Toronto knockout (TKO) library (Addgene #1000000069) generated by the Moffat lab.
Lentiviral vectors
Targeted gene knockout using CRISPR/Cas9 requires the use of a delivery system to introduce the sgRNA and Cas9 into the cell. Although a number of different delivery systems are potentially available for CRISPR, genome-wide loss-of-function screens are predominantly carried out using third generation lentiviral vectors. These lentiviral vectors are able to efficiently transduce a broad range of cell types and stably integrate into the genome of dividing and non-dividing cells.
Third generation lentiviral particles are produced by co-transfecting 293T human embryonic kidney (HEK) cells with:
two packaging plasmids, one encoding Rev and the other Gag and Pol;
an interchangeable envelope plasmid that encodes for an envelope glycoprotein of another virus (most commonly the G protein of vesicular stomatitis virus (VSV-G));
one or two (depending on the applied library) transfer plasmids, encoding for Cas9 and sgRNA, as well as selection markers.
The lentiviral particle-containing supernatant is harvested, concentrated and subsequently used to infect the target cells. The exact protocol for lentiviral production will vary depending on the research aim and applied library. If a two vector-system is used, for example, cells are sequentially transduced with Cas9 and sgRNA in a two-step procedure. Although more complex, this has the advantage of a higher titre for the sgRNA library virus.
Phenotypic selection
In general, there are two different formats of genome-wide CRISPR knockout screens: arrayed and pooled. In an arrayed screen, each well contains a specific and known sgRNA targeting a specific gene. Since the sgRNA responsible for each phenotype is known based on well location, phenotypes can be identified and analysed without requiring genetic sequencing. This format allows for the measurement of more specific cellular phenotypes, perhaps by fluorescence or luminescence, and allows researchers to use more library types and delivery methods. For large-scale LOF screens, however, arrayed formats are considered low-efficiency, and expensive in terms of financial and material resources because cell populations have to be isolated and cultured individually.
In a pooled screen, cells grown in a single vessel are transduced in bulk with viral vectors collectively containing the entire sgRNA library. To ensure that the amount of cells infected by more than one sgRNA-containing particle is limited, a low multiplicity of infection (MOI) (typically 0.3-0.6) is used. Evidence so far has suggested that each sgRNA should be represented in a minimum of 200 cells. Transduced cells will be selected for, followed by positive or negative selection for the phenotype of interest, and genetic sequencing will be necessary to identify the integrated sgRNAs.
Next-generation sequencing & hit analysis
Following phenotypic selection, genomic DNA is extracted from the selected clones, alongside a control cell population. In the most common protocols for genome-wide knockouts, a 'Next-generation sequencing (NGS) library' is created by a two step polymerase chain reaction (PCR). The first step amplifies the sgRNA region, using primers specific to the lentiviral integration sequence, and the second step adds Illumina i5 and i7 sequences. NGS of the PCR products allows the recovered sgRNAs to be identified, and a quantification step can be used to determine the relative abundance of each sgRNA.
The final step in the screen is to computationally evaluate the significantly enriched or depleted sgRNAs, trace them back to their corresponding genes, and in turn determine which genes and pathways could be responsible for the observed phenotype. Several algorithms are currently available for this purpose, with the most popular being the Model-based Analysis of Genome-wide CRISPR/Cas9 Knockout (MAGeCK) method. Developed specifically for CRISPR/Cas9 knockout screens in 2014, MAGeCK demonstrated better performance compared with alternative algorithms at the time, and has since demonstrated robust results and high sensitivity across different experimental conditions. As of 2015, the MAGeCK algorithm has been extended to introduce quality control measurements, and account for the previously overlooked sgRNA knockout efficiency. A web-based visualisation tool (VISPR) was also integrated, allowing users to interactively explore the results, analysis, and quality controls.
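The core comparison behind hit calling — normalised sgRNA read counts in the selected population versus the control — can be illustrated with a minimal Python sketch. The counts, reads-per-million normalisation, and pseudocount below are simplified assumptions; dedicated tools such as MAGeCK apply more sophisticated statistical models on top of this idea.

```python
import math

def log2_fold_changes(selected_counts, control_counts, pseudocount=1.0):
    """Return per-sgRNA log2 fold changes of selected vs control read counts.

    Counts are first normalised to reads-per-million within each sample to
    correct for differences in sequencing depth; a pseudocount avoids
    division by zero for sgRNAs that drop out entirely.
    """
    sel_total = sum(selected_counts.values())
    ctl_total = sum(control_counts.values())
    lfc = {}
    for sgrna, ctl in control_counts.items():
        sel = selected_counts.get(sgrna, 0)
        sel_rpm = 1e6 * sel / sel_total
        ctl_rpm = 1e6 * ctl / ctl_total
        lfc[sgrna] = math.log2((sel_rpm + pseudocount) / (ctl_rpm + pseudocount))
    return lfc

# Hypothetical counts for three sgRNAs targeting the same gene:
control = {"GENE1_sg1": 520, "GENE1_sg2": 480, "GENE1_sg3": 610}
selected = {"GENE1_sg1": 40, "GENE1_sg2": 55, "GENE1_sg3": 70}
print(log2_fold_changes(selected, control))  # strongly negative values suggest depletion
```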
Applications
Cellular signaling mechanisms
Over recent years, the genome-wide CRISPR screen has emerged as a powerful tool for studying the intricate networks of cellular signaling. Cellular signaling is essential for a number of fundamental biological processes, including cell growth, proliferation, differentiation, and apoptosis.
One practical example is the identification of genes required for proliferative signaling in cancer cells. Cells are transduced with a CRISPR sgRNA library, and studied for growth over time. By comparing sgRNA abundance in selected cells to a control, one can identify which sgRNAs become depleted and in turn which genes may be responsible for the proliferation defect. Such screens have been used to identify cancer-essential genes in acute myeloid leukemia and neuroblastoma, and to describe tumor-specific differences between cancer cell lines.
Identifying synthetic lethal partners
Targeted cancer therapies are designed to target the specific genes, proteins, or environments contributing to tumor cell growth or survival. After a period of prolonged treatment with these therapies, however, tumor cells may develop resistance. Although the mechanisms behind cancer drug resistance are poorly understood, potential causes include: target alteration, drug degradation, apoptosis escape, and epigenetic alterations. Resistance is well-recognised and poses a serious problem in cancer management.
To overcome this problem, a synthetic lethal partner can be identified. Genome-wide LOF screens using CRISPR-Cas9 can be used to screen for synthetic lethal partners. For this, a wild-type cell line and a tumor cell line containing the resistance-causing mutation are transduced with a CRISPR sgRNA library. The two cell lines are cultivated, and any under-represented or dead cells are analyzed to identify potential synthetic lethal partner genes. A recent study by Hinze et al. (2019) used this method to identify a synthetic lethal interaction between the chemotherapy drug asparaginase and two genes in the Wnt signalling pathway NKD2 and LGR6.
Host dependency factors for viral infection
Due to their small genomes and limited number of encoded proteins, viruses exploit host proteins for entry, replication, and transmission. Identification of such host proteins, also termed host dependency factors (HDFs), is particularly important for identifying therapeutic targets. Over recent years, many groups have successfully used genome-wide CRISPR/Cas9 as a screening strategy for HDFs in viral infections.
One example is provided by Marceau et al. (2017), who aimed to dissect the host factors associated with dengue and hepatitis C (HCV) infection (two viruses in family Flaviviridae). ELAVL1, an RNA-binding protein encoded by the ELAVL1 gene, was found to be a critical receptor for HCV entry, and a remarkable divergence in host dependency factors was demonstrated between the two flaviviridae.
Further applications
Additional reported applications of genome-wide CRISPR screens include the study of: mitochondrial metabolism, bacterial toxin resistance, genetic drivers of metastasis, cancer drug resistance, West Nile virus-induced cell death, and immune cell gene networks.
Limitations
This section will specifically address genome-wide CRISPR screens. For a review of CRISPR limitations, see Lino et al. (2018).
The sgRNA library
Genome-wide CRISPR screens will ultimately be limited by the properties of the chosen sgRNA library. Each library will contain a different set of sgRNAs, and average coverage per gene may vary. Currently available libraries tend to be biased towards sgRNAs targeting early (5’) protein-coding exons, rather than those targeting the more functional protein domains. This problem was highlighted by Hinze et al. (2019), who noted that genes associated with asparaginase sensitivity failed to score in their genome-wide screen of asparaginase-resistant leukemia cells.
If an appropriate library is not available, creating and amplifying a new sgRNA library is a lengthy process which may take many months. Potential challenges include: (i) effective sgRNA design; (ii) ensuring comprehensive sgRNA coverage throughout the genome; (iii) lentiviral vector backbone design; (iv) producing sufficient amounts of high-quality lentivirus; (v) overcoming low transformation efficiency; (vi) proper scaling of the bacterial culture.
Maintaining cellular sgRNA coverage
One of the largest hurdles for genome-wide CRISPR screening is ensuring adequate coverage of the sgRNA library across the cell population. Evidence so far has suggested that each sgRNA should be represented and maintained in a minimum of 200-300 cells.
Considering that the standard protocol uses a multiplicity of infection of ~0.3 and a transduction efficiency of 30-40%, the number of cells required to produce and maintain suitable coverage becomes very large. By way of example, the most popular human sgRNA library is the GeCKO v2 library created by the Zhang lab; it contains 123,411 sgRNAs. Studies using this library commonly transduce more than 1×10^8 cells.
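The arithmetic behind these cell numbers can be sketched as below in Python; the coverage target and MOI are illustrative values drawn from the ranges quoted above rather than a fixed prescription.

```python
def cells_to_transduce(n_sgrnas, coverage=300, moi=0.3):
    """Cells that must be exposed to virus so each sgRNA is represented ~`coverage` times.

    At low MOI, roughly a fraction `moi` of the exposed cells receive a single
    sgRNA, so the exposed population must be about (n_sgrnas * coverage) / moi.
    """
    return n_sgrnas * coverage / moi

# Example with the GeCKO v2 human library (123,411 sgRNAs):
print(f"{cells_to_transduce(123_411):.2e}")  # ~1.2e+08 cells, consistent with reported scales
```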
As CRISPR continues to exhibit low noise and minimal off-target effects, an alternative strategy is to reduce the number of sgRNAs per gene for a primary screen. Less stringent cut-offs are used for hit selection, and additional sgRNAs are later used in a more specific secondary screen. This approach is demonstrated by Doench et al. (2016), who found that >92% of genes recovered using the standard protocol were also recovered using fewer sgRNAs per gene. They suggest that this strategy could be useful in studies where scale-up is prohibitively costly.
Lentiviral limitations
Lentiviral vectors have certain general limitations. For one, it is impossible to control where the viral genome integrates into the host genome, and this may affect important functions of the cell. Vannucci et al. provide an excellent review of viral vectors along with their general advantages and disadvantages. In the specific context of genome-wide CRISPR screens, producing and transducing the lentiviral particles is relatively laborious and time-consuming, taking about two weeks in total. Additionally, because the DNA integrates into the host genome, lentiviral delivery leads to long-term expression of Cas9, potentially leading to off-target effects.
Arrayed vs pooled screens
In an arrayed screen, each well contains a specific and known sgRNA targeting a specific gene. Arrayed screens therefore allow for detailed profiling of a single cell, but are limited by high costs and the labour required to isolate and culture the high number of individual cell populations. Conventional pooled CRISPR screens are relatively simple and cost effective to perform, but are limited to the study of the entire cell population. This means that rare phenotypes may be more difficult to identify, and only crude phenotypes can be selected for e.g. cell survival, proliferation, or reporter gene expression.
Culture media
The choice of culture medium might affect the physiological relevance of findings from cell culture experiments due to the differences in the nutrient composition and concentrations. A systematic bias in generated datasets was recently shown for CRISPR and RNAi gene silencing screens (especially for metabolic genes), and for metabolic profiling of cancer cell lines. For example, a stronger dependence on ASNS (asparagine synthetase) was found in cell lines cultured in DMEM, which lacks asparagine, compared to cell lines cultured in RPMI or F12 (containing asparagine). Avoiding such bias might be achieved by using a uniform medium for all screened cell lines, and ideally, using a growth medium that better represents the physiological levels of nutrients. Recently, such media types as Plasmax and Human Plasma Like Medium (HPLM) were developed.
Future Directions
CRISPR + single cell RNA-seq
Emerging technologies are aiming to combine pooled CRISPR screens with the detailed resolution of massively parallel single-cell RNA-sequencing (RNA-seq). Studies utilising “CRISP-seq”, “CROP-seq”, and “PERTURB-seq” have demonstrated rich genomic readouts, accurately identifying gene expression signatures for individual gene knockouts in a complex pool of cells. These methods have the added benefit of producing transcriptional profiles of the sgRNA-induced cells.
References
Genetics
Genome editing
Molecular biology | Genome-wide CRISPR-Cas9 knockout screens | [
"Chemistry",
"Engineering",
"Biology"
] | 4,233 | [
"Genetics techniques",
"Genome editing",
"Genetic engineering",
"Molecular biology",
"Biochemistry"
] |
63,204,538 | https://en.wikipedia.org/wiki/Hans%20R.%20Griem | Hans Rudolf Griem (October 7, 1928 – October 2, 2019) was a German-American physicist who specialized in experimental plasma physics and spectroscopy.
Early life and career
Griem received his doctorate from the University of Kiel in 1954 and in the same year accepted a Fulbright Fellowship at the University of Maryland to work on the physics of the upper atmosphere. He then returned to the University of Kiel for a two-year appointment dealing with high temperature physics. In 1957, he began working at the University of Maryland, first as an assistant professor in plasma physics before becoming an associate professor in 1961 and then full professor in 1963. He retired as professor emeritus in 1994.
From 1976 to 1994, Griem was a consultant at Los Alamos National Laboratory.
Honors and awards
In 1967, Griem was elected a fellow of the American Physical Society. In 1991, he received the James Clerk Maxwell Prize for Plasma Physics for "his numerous contributions to experimental plasma physics and spectroscopy, particularly in the area of improved diagnostic methods for high temperature plasmas, and for his books on plasma spectroscopy and spectral line broadening in plasmas that have become standard references in the field".
Griem also received a Guggenheim Fellowship, a Humboldt Award and the William F. Meggers Award of the Optical Society.
Books
References
1928 births
2019 deaths
Experimental physicists
20th-century American physicists
20th-century German physicists
Fellows of the American Physical Society
Plasma physicists | Hans R. Griem | [
"Physics"
] | 292 | [
"Plasma physicists",
"Experimental physics",
"Experimental physicists",
"Plasma physics"
] |
63,209,698 | https://en.wikipedia.org/wiki/Matrix%20factorization%20%28algebra%29 | In homological algebra, a branch of mathematics, a matrix factorization is a tool used to study infinitely long resolutions, generally over commutative rings.
Motivation
One of the problems with non-smooth algebras, such as Artin algebras, is that their derived categories are poorly behaved due to infinite projective resolutions. For example, in the ring $R = k[x]/(x^2)$ there is an infinite resolution of the $R$-module $k$ given by $\cdots \xrightarrow{\cdot x} R \xrightarrow{\cdot x} R \to k \to 0$. Instead of looking only at the derived category of the module category, David Eisenbud studied such resolutions by looking at their periodicity. In general, such resolutions are periodic with period 2 after finitely many objects in the resolution.
Definition
For a commutative ring $S$ and an element $f \in S$, a matrix factorization of $f$ is a pair of $n \times n$ matrices $(A, B)$ such that $AB = BA = f \cdot \mathrm{Id}_n$. This can be encoded more generally as a $\mathbb{Z}/2$-graded $S$-module $M$ with an endomorphism $d$ such that $d^2 = f \cdot \mathrm{id}_M$.
Examples
(1) For and there is a matrix factorization where for .
(2) If and , then there is a matrix factorization where
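As a small worked illustration of the definition (the polynomial and matrices here are chosen purely for illustration and are not taken from a specific source), the element $f = x^2 + y^2$ in $S = \mathbb{C}[[x, y]]$ admits the following 2-by-2 matrix factorization:

```latex
% A worked 2x2 matrix factorization of f = x^2 + y^2 over S = C[[x, y]].
A = \begin{pmatrix} x & y \\ -y & x \end{pmatrix}, \qquad
B = \begin{pmatrix} x & -y \\ y & x \end{pmatrix}, \qquad
AB = BA = \begin{pmatrix} x^2 + y^2 & 0 \\ 0 & x^2 + y^2 \end{pmatrix}
       = (x^2 + y^2)\,\mathrm{Id}_2 .
```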
Periodicity
definition
Main theorem
Given a regular local ring and an ideal generated by an -sequence, set and let
be a minimal -free resolution of the ground field. Then becomes periodic after at most steps. https://www.youtube.com/watch?v=2Jo5eCv9ZVY
Maximal Cohen-Macaulay modules
page 18 of eisenbud article
Categorical structure
Support of matrix factorizations
See also
Derived noncommutative algebraic geometry
Derived category
Homological algebra
Triangulated category
References
Further reading
Homological Algebra on a Complete Intersection with an Application to Group Representations
Geometric Study of the Category of Matrix Factorizations
https://web.math.princeton.edu/~takumim/takumim_Spr13_JP.pdf
https://arxiv.org/abs/1110.2918
Homological algebra | Matrix factorization (algebra) | [
"Mathematics"
] | 383 | [
"Fields of abstract algebra",
"Mathematical structures",
"Category theory",
"Homological algebra"
] |
63,210,668 | https://en.wikipedia.org/wiki/Genome%20skimming | Genome skimming is a sequencing approach that uses low-pass, shallow sequencing of a genome (up to 5%), to generate fragments of DNA, known as genome skims. These genome skims contain information about the high-copy fraction of the genome. The high-copy fraction of the genome consists of the ribosomal DNA, plastid genome (plastome), mitochondrial genome (mitogenome), and nuclear repeats such as microsatellites and transposable elements. It employs high-throughput, next generation sequencing technology to generate these skims. Although these skims are merely 'the tip of the genomic iceberg', phylogenomic analysis of them can still provide insights on evolutionary history and biodiversity at a lower cost and larger scale than traditional methods. Due to the small amount of DNA required for genome skimming, its methodology can be applied in fields other than genomics. Such tasks include determining the traceability of products in the food industry, enforcing international regulations regarding biodiversity and biological resources, and forensics.
Current Uses
In addition to the assembly of the smaller organellar genomes, genome skimming can also be used to uncover conserved ortholog sequences for phylogenomic studies. In phylogenomic studies of multicellular pathogens, genome skimming can be used to find effector genes, discover endosymbionts and characterize genomic variation.
High-copy DNA
Ribosomal DNA
The internal transcribed spacers (ITS) are non-coding regions within the 18S-5.8S-28S rDNA in eukaryotes and are one feature of rDNA that has been used in genome skimming studies. ITS are used to detect different species within a genus, due to their high inter-species variability. They have low individual variability, preventing the identification of distinct strains or individuals. They are also present in all eukaryotes, have a high evolution rate, and have been used in phylogenetic analysis between and across species.
When targeting nuclear rDNA, it is suggested that a minimum final sequencing depth of 100X is achieved, and sequences with less than 5X depth are masked.
Plastomes
The plastid genome, or plastome, has been used extensively in identification and evolutionary studies using genome skimming due to its high abundance within plants (~3-5% of cell DNA), small size, simple structure, and greater conservation of gene structure than nuclear or mitochondrial genes. Plastid studies have previously been limited by the number of regions that could be assessed in traditional approaches. Using genome skimming, the sequencing of the entire plastid genome, or plastome, can be done at a fraction of the cost and time required for typical sequencing approaches like Sanger sequencing. Plastomes have been suggested as a method to replace traditional DNA barcodes in plants, such as the rbcL and matK barcode genes. Compared to the typical DNA barcode, genome skimming produces plastomes at a tenth of the cost per base. Recent uses of genome skims of plastomes have allowed greater resolution of phylogenies, higher differentiation of specific groups within taxa, and more accurate estimates of biodiversity. Additionally, the plastome has been used to compare species within a genus to look at evolutionary changes and diversity within a group.
When targeting plastomes, it is suggested that a minimum final sequencing depth of 30X is achieved for single-copy regions to ensure high-quality assemblies. Single nucleotide polymorphisms (SNPs) with less than 20X depth should be masked.
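Masking low-depth positions in an assembled consensus is a simple operation; the Python sketch below shows one way to do it, with the depth threshold, mask character, and toy data being illustrative assumptions.

```python
def mask_low_depth(consensus, depths, min_depth=20, mask_char="N"):
    """Replace consensus bases whose per-base sequencing depth falls below `min_depth`.

    `consensus` is a string and `depths` a same-length list of read depths.
    """
    if len(consensus) != len(depths):
        raise ValueError("consensus and depth track must have equal length")
    return "".join(
        base if depth >= min_depth else mask_char
        for base, depth in zip(consensus, depths)
    )

# Toy example: positions covered by fewer than 20 reads are masked to 'N'.
print(mask_low_depth("ACGTACGT", [35, 40, 12, 28, 5, 50, 22, 19]))  # -> 'ACNTNCGN'
```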
Mitogenomes
The mitochondrial genome, or mitogenome, is used as a molecular marker in a great variety of studies because of its maternal inheritance, high copy-number in the cell, lack of recombination, and high mutation rate. It is often used for phylogenetic studies as it is very uniform across metazoan groups, with a circular, double-stranded DNA molecule structure of about 15 to 20 kilobases containing 37 genes: 2 ribosomal RNA genes, 13 protein-coding genes, and 22 transfer RNA genes. Mitochondrial barcode sequences, such as COI, NADH2, 16S rRNA, and 12S rRNA, can also be used for taxonomic identification. The increased publishing of complete mitogenomes allows for inference of robust phylogenies across many taxonomic groups, and it can capture events such as gene rearrangements and positioning of mobile genetic elements. Using genome skimming to assemble complete mitogenomes, the phylogenetic history and biodiversity of many organisms can be resolved.
When targeting mitogenomes, there are no specific suggestions for minimum final sequencing depth, as mitogenomes are more variable in size and more variable in complexity in plant species, increasing the difficulty of assembling repeated sequences. However, highly conserved coding sequences and nonrepetitive flanking regions can be assembled using reference-guided assembly. Sequences should be masked similarly to targeting plastomes and nuclear ribosomal DNA.
Nuclear repeats (satellites or transposable elements)
Nuclear repeats in the genome are an underused source of phylogenetic data. When the nuclear genome is sequenced at 5% of the genome, thousands of copies of the nuclear repeats will be present. Although the repeats sequenced will only be representative of those in the entire genome, it has been shown that these sequenced fractions accurately reflect genomic abundance. These repeats can be clustered de novo and their abundance is estimated. The distribution and occurrence of these repeat types can be phylogenetically informative and provide information about the evolutionary history of various species.
Low-copy DNA
Low-copy DNA can prove useful for evolutionary, developmental, and phylogenetic studies. It can be mined from high-copy fractions in a number of ways, such as developing primers from databases that contain conserved orthologous genes, single-copy conserved orthologous genes, and shared copy genes. Another method is looking for novel probes that target low-copy genes using transcriptomics via Hyb-Seq. While nuclear genomes assembled using genome skims are extremely fragmented, some low-copy and single-copy nuclear genes can be successfully assembled.
Low-quantity degraded DNA
Previous methods of trying to recover degraded DNA were based on Sanger sequencing, relied on large intact DNA templates, and were affected by contamination and method of preservation. Genome skimming, on the other hand, can be used to extract genetic information from preserved species in herbariums and museums, where the DNA is often very degraded and very little remains. Studies in plants show that DNA as old as 80 years, and as little as 500 pg of degraded DNA, can be used with genome skimming to infer genomic information. In herbaria, even with low yield and low-quality DNA, one study was still able to produce "high-quality complete chloroplast and ribosomal DNA sequences" at a large scale for downstream analyses.
In field studies, invertebrates are stored in ethanol which is usually discarded during DNA-based studies. Genome skimming has been shown to detect the low quantity of DNA from this ethanol-fraction and provide information about the biomass of the specimens in a fraction, the microbiota of outer tissue layers and the gut contents (like prey) released by the vomit reflex. Thus, genome skimming can provide an additional method of understanding ecology via low copy DNA.
Workflow
DNA extraction
DNA extraction protocols will vary depending on the source of the sample (i.e. plants, animals, etc.). The following DNA extraction protocols have been used in genome skimming:
Plants
Plant DNAzol Reagent
Qiagen DNeasy Plant Mini kit
Tiangen DNAsecure Plant kit
Invitrogen ChargeSwitch gDNA Plant kit
Other
Quick-DNA Plus Extraction kit
Cetyl Trimethylammonium Bromide (CTAB) method
Qiagen DNeasy Tissue Extraction kit
Qiagen DNeasy Blood and Tissue kit
Library preparation
Library preparation protocols will depend on a variety of factors: organism, tissue type, etc. In the case of preserved specimens, specific modifications to library preparation protocols may have to be made. The following library preparation protocols have been used in genome skimming:
Sequencing
Sequencing with short reads or long reads will depend on the target genome or genes. Microsatellites in nuclear repeats require longer reads. The following sequencing platforms have been used in genome skimming:
The Illumina MiSeq platform has been chosen by certain researchers because it offers relatively long read lengths among short-read platforms.
Assembly
After genome skimming, high-copy organellar DNA can be assembled with a reference guide or assembled de novo. High-copy nuclear repeats can be clustered de novo. Assemblers chosen will depend on the target genome and whether short or long reads are used. The following tools have been used to assemble genomes from genome skims:
Plastomes
Fast-Plast
NOVOPlasty
ORGanelle
Mitogenomes
Fast-Plast
NOVOPlasty
ORGanelle
MITObim
Other
Annotation
Annotation is used to identify genes in the genome assemblies. The annotation tool chosen will depend on the target genome and the target features of that genome. The following annotation tools have been used in genome skimming to annotate organellar genomes:
Plastomes
cpGAVAS
Dual Organellar GenoMe Annotator (DOGMA)
Mitogenomes
MITOS
MITOS2
Dual Organellar GenoMe Annotator (DOGMA)
tRNAs
ARWEN
tRNAscan-SE
rRNAs
RNAmmer
Other
BLAST
Geneious
ORF Finder
GeneWise
TransDecoder
EMBOSS Transeq
Phylogenetic reconstruction
The assembled sequences are globally aligned, and then phylogenetic trees are inferred using phylogenetic reconstruction software. The software chosen for phylogeny reconstruction will depend on whether a Maximum Likelihood (ML), Maximum Parsimony (MP), or Bayesian Inference (BI) method is appropriate. The following phylogenetic reconstruction programs have been used in genome skimming:
Maximum Likelihood (ML)
RAxML
RAxML-HPC
PhyML
Geneious
IG-TREE
Maximum Parsimony (MP)
PAUPRat
PAUP*
Bayesian Inference (BI)
MrBayes
BEAST
ExaBayes
PhyloBayes
Other
MEGA4
MEGA6
MEGA7
Tools and Pipelines
Various protocols, pipelines, and bioinformatic tools have been developed to help automate the downstream processes of genome skimming.
Hyb-Seq
Hyb-Seq is a new protocol for capturing low-copy nuclear genes that combines target enrichment and genome skimming. Target enrichment of the low-copy loci is achieved through designed enrichment probes for specific single-copy exons, but requires a nuclear draft genome and transcriptome of the targeted organism. The target-enriched libraries are then sequenced, and the resulting reads processed, assembled, and identified. Using off-target reads, rDNA cistrons and complete plastomes can also be assembled. Through this process, Hyb-Seq is able to produce genome-scale datasets for phylogenomics.
GetOrganelle
GetOrganelle is a toolkit that assembles organellar genomes uses genome skimming reads. Organelle-associated reads are recruited using a modified “baiting and iterative mapping” approach. The reads aligning to the target genome, using Bowtie2, are referred to as “seed reads”. The seed reads are used as “baits” to recruit more organelle-associated reads via multiple iterations of extension. The read extension algorithm uses a hashing approach, where the reads are cut into substrings of certain lengths, referred to as “words”. At each extension iteration, these “words” are added to a hash table, referred to as a “baits pool”, which dynamically increases in size with each iteration. Due to the low sequencing coverage of genome skims, non-target reads, even those with high sequence similarity to target reads, are largely not recruited. Using the final recruited organellar-associated reads, GetOrganelle conducts a de novo assembly, using SPAdes. The assembly graph is filtered and untangled, producing all possible paths of the graph, and therefore all configurations of the circular organellar genomes.
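The "baiting and iterative mapping" idea can be sketched in a few lines of Python: reads sharing k-mer "words" with the current bait pool are recruited, and their k-mers are added back to the pool for the next round. This is a simplified illustration of the general approach, not GetOrganelle's actual implementation; the k-mer length and iteration cap are arbitrary assumptions, and the real tool works on hashed words, handles paired reads, and performs a de novo assembly afterwards.

```python
def kmers(seq, k):
    """Return the set of all k-length substrings ('words') of a read."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def bait_reads(reads, seed_reads, k=21, max_iterations=10):
    """Iteratively recruit reads that share at least one k-mer with the bait pool.

    Starts from seed reads (e.g. reads aligning to a reference organelle
    genome) and extends the pool until no new reads are recruited or the
    iteration cap is reached.
    """
    bait_pool = set()
    for seed in seed_reads:
        bait_pool |= kmers(seed, k)
    recruited = set(seed_reads)
    for _ in range(max_iterations):
        newly_recruited = [
            r for r in reads
            if r not in recruited and kmers(r, k) & bait_pool
        ]
        if not newly_recruited:
            break
        for r in newly_recruited:
            recruited.add(r)
            bait_pool |= kmers(r, k)
    return recruited
```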
Skmer
Skmer is an assembly-free and alignment-free tool to compute genomic distances between the query and reference genome skims. Skmer uses a two-stage approach to compute these distances. First, it generates k-mer frequency profiles using a tool called JellyFish, and these k-mers are then converted into hashes. A random subset of these hashes is selected to form a so-called "sketch". For its second stage, Skmer uses Mash to estimate the Jaccard index of two of these sketches. The combination of these two stages is used to estimate the evolutionary distance.
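The second stage — converting a Jaccard estimate between two k-mer sketches into a per-base evolutionary distance — follows the Mash distance formula D = -(1/k) ln(2J / (1 + J)). The Python sketch below illustrates that calculation; it is not Skmer's actual code and deliberately ignores Skmer's corrections for sequencing error and low coverage.

```python
import math

def jaccard(sketch_a, sketch_b):
    """Jaccard index between two sets of (hashed) k-mers, i.e. the 'sketches'."""
    if not sketch_a and not sketch_b:
        return 1.0
    return len(sketch_a & sketch_b) / len(sketch_a | sketch_b)

def mash_distance(sketch_a, sketch_b, k=31):
    """Estimate a per-base evolutionary distance from the Jaccard index (Mash formula)."""
    j = jaccard(sketch_a, sketch_b)
    if j == 0:
        return 1.0  # conventionally capped when sketches share no k-mers
    return -math.log(2 * j / (1 + j)) / k

# Toy sketches (in practice these are subsets of k-mer hashes, not labelled strings):
a = {"h%d" % i for i in range(1000)}
b = {"h%d" % i for i in range(100, 1100)}
print(round(mash_distance(a, b), 4))  # small distance for highly overlapping sketches
```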
Geneious
Geneious is an integrative software platform that allows users to perform various steps in bioinformatic analysis such as assembly, alignment, and phylogenetics by incorporating other tools within a GUI based platform.
PhyloHerb
PhyloHerb is a bioinformatic pipeline written in Python. It uses a built-in database or a user-specified reference to extract orthologous sequences from plastid, mitochondrial, and nuclear ribosomal regions using a BLAST search.
In silico Genome skimming
Although genome skimming is usually chosen as a cost-effective method to sequence organellar genomes, genome skimming can be done in silico if (deep) whole-genome sequencing data has already been obtained. In silico genome skimming, i.e. randomly subsampling the reads of a whole-genome dataset, has been demonstrated to simplify organellar genome assembly. Since the organellar genomes will be high-copy in the cell, in silico genome skimming essentially filters out nuclear sequences, leaving a higher organellar to nuclear sequence ratio for assembly, reducing the complexity of the assembly paradigm. In silico genome skimming was first done as a proof-of-concept, optimizing the parameters for read type, read length, and sequencing coverage.
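Subsampling an existing whole-genome read set to mimic a genome skim is straightforward; the Python sketch below downsamples an in-memory list of reads to an approximate target fraction. The 5% fraction, fixed seed, and toy data are illustrative assumptions.

```python
import random

def subsample_reads(reads, fraction=0.05, seed=42):
    """Randomly retain roughly `fraction` of the input reads (an in silico genome skim).

    Organellar reads, being high-copy, remain well represented after
    subsampling, while nuclear reads are mostly discarded.
    """
    rng = random.Random(seed)
    return [r for r in reads if rng.random() < fraction]

# Example: keep ~5% of an in-memory list of reads.
reads = ["READ_%d" % i for i in range(100_000)]
skim = subsample_reads(reads)
print(len(skim))  # roughly 5,000
```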
Other Applications
Other than the current uses listed above, genome skimming has also been applied to other tasks, such as quantifying pollen mixtures, monitoring and conservation of certain populations. Genome skimming can also be used for variant calling, to examine single nucleotide polymorphisms across a species.
Advantages
Genome skimming is a cost-effective, rapid and reliable method to generate large shallow datasets, since several datasets (plastid, mitochondrial, nuclear) are generated per run. It is very simple to implement, requires less lab work and optimization, and does not require a priori knowledge of the organism nor its genome size. This provides a low-risk avenue for biological inquiry and hypothesis generation without a huge commitment of resources.
Genome skimming is an especially advantageous approach regarding cases where the genomic DNA may be old and degraded from chemical treatments, such as specimens from herbarium and museum collections, a largely untapped genomic resource. Genome skimming allows for the molecular characterization of rare or extinct species. The preservation processes in ethanol often damage the genomic DNA, which hinders the success of standard PCR protocols and other amplicon-based approaches. This presents an opportunity to sequence samples with very low DNA concentrations, without the need for DNA enrichment or amplification. Library preparation specific to genome skimming has been shown to work with as little as 37 ng of DNA (0.2 ng/ul), 135-fold less than recommended by Illumina.
Although genome skimming is mostly used to extract high-copy plastomes and mitogenomes, it can also provide partial sequences of low-copy nuclear sequences. These sequences may not be sufficiently complete for phylogenomic analysis, but can be sufficient for designing PCR primers and probes for hybridization-based approaches.
Genome skimming is not dependent on any specific primers and remains unaffected by gene rearrangements.
Limitations
Genome skimming scratches the surface of the genome, so it will not suffice for biological questions that require gene prediction and annotation. These downstream steps are required for deep and more meaningful analyses.
Although plastid genomic sequences are abundant in genome skims, the presence of mitochondrial and nuclear pseudogenes of plastid origin can potentially pose issues for plastome assemblies.
A combination of sequencing depth and read type, as well as genomic target (plastome, mitogenome, etc.), will influence the success of single-end and paired-end assemblies, so these parameters must be carefully chosen.
Scalability
Both the wet-lab and the bioinformatics parts of genome skimming have certain challenges with scalability. Although the cost of sequencing in genome skimming is affordable at $80 for 1 Gb in 2016, the library preparation for sequencing is still very expensive, at least ~$200 per sample (as of 2016). Additionally, most library preparation protocols have not been fully automated with robotics yet. On the bioinformatics side, large complex databases and automated workflows need to be designed to handle the large amounts of data resulting from genome skimming. The automation of the following processes need to be implemented:
Assembly of the standard barcodes
Assembly of organellar DNA (as well as nuclear ribosomal tandem repeats)
Annotation of the different assembled fragments
Removal of potential contaminant sequences
Estimation of sequencing coverage for single-copy genes
Extraction of reads corresponding to single-copy genes
Identification of unknown specimens from a small shotgun sequencing run or any DNA fragment
Identification of the different organisms from shotgun sequencing of environmental DNA (metagenomics)
Some of these scalability challenges have already been addressed, as shown above in the "Tools and Pipelines" section.
See also
References
Genomics
DNA sequencing methods | Genome skimming | [
"Biology"
] | 3,696 | [
"Genetics techniques",
"DNA sequencing methods",
"DNA sequencing"
] |
63,210,872 | https://en.wikipedia.org/wiki/MNase-seq | MNase-seq, short for micrococcal nuclease digestion with deep sequencing, is a molecular biological technique that was first pioneered in 2006 to measure nucleosome occupancy in the C. elegans genome, and was subsequently applied to the human genome in 2008. The term ‘MNase-seq’ itself, however, was not coined until a year later, in 2009. Briefly, this technique relies on the use of the non-specific endo-exonuclease micrococcal nuclease, an enzyme derived from the bacterium Staphylococcus aureus, to bind and cleave protein-unbound regions of DNA on chromatin. DNA bound to histones or other chromatin-bound proteins (e.g. transcription factors) may remain undigested. The uncut DNA is then purified from the proteins and sequenced through one or more of the various Next-Generation sequencing methods. MNase-seq is one of four classes of methods used for assessing the status of the epigenome through analysis of chromatin accessibility. The other three techniques are DNase-seq, FAIRE-seq, and ATAC-seq. While MNase-seq is primarily used to sequence regions of DNA bound by histones or other chromatin-bound proteins, the other three are commonly used for: mapping Deoxyribonuclease I hypersensitive sites (DHSs), sequencing the DNA unbound by chromatin proteins, or sequencing regions of loosely packaged chromatin through transposition of markers, respectively.
History
Micrococcal nuclease (MNase) was first discovered in S. aureus in 1956, crystallized as a protein in 1966, and characterized in 1967. MNase digestion of chromatin was key to early studies of chromatin structure, being used to determine that each nucleosomal unit of chromatin was composed of approximately 200bp of DNA. This, alongside Olins’ and Olins’ “beads on a string” model, confirmed Kornberg’s ideas regarding the basic chromatin structure. In additional studies, it was found that MNase could not degrade histone-bound DNA shorter than ~140bp and that DNase I and II could degrade the bound DNA to as low as 10bp. This ultimately elucidated that ~146bp of DNA wrap around the nucleosome core, that ~50bp of linker DNA connect each nucleosome, and that 10 continuous base-pairs of DNA tightly bind to the core of the nucleosome in intervals.
In addition to being used to study chromatin structure, micrococcal nuclease digestion had been used in oligonucleotide sequencing experiments since its characterization in 1967. MNase digestion was additionally used in several studies to analyze chromatin-free sequences, such as yeast (Saccharomyces cerevisiae) mitochondrial DNA as well as bacteriophage DNA through its preferential digestion of adenine and thymine-rich regions. In the early 1980s, MNase digestion was used to determine the nucleosomal phasing and associated DNA for chromosomes from mature SV40, fruit flies (Drosophila melanogaster), yeast, and monkeys, among others. The first study to use this digestion to study the relevance of chromatin accessibility to gene expression in humans was in 1985. In this study, nuclease was used to find the association of certain oncogenic sequences with chromatin and nuclear proteins. Studies utilizing MNase digestion to determine nucleosome positioning without sequencing or array information continued into the early 2000s.
With the advent of whole genome sequencing in the late 1990s and early 2000s, it became possible to compare purified DNA sequences to the eukaryotic genomes of S. cerevisiae, Caenorhabditis elegans, D. melanogaster, Arabidopsis thaliana, Mus musculus, and Homo sapiens. MNase digestion was first applied to genome-wide nucleosome occupancy studies in S. cerevisiae accompanied by analyses through microarrays to determine which DNA regions were enriched with MNase-resistant nucleosomes. MNase-based microarray analyses were often utilized at genome-wide scales for yeast and in limited genomic regions in humans to determine nucleosome positioning, which could be used as an inference for transcriptional inactivation.
In 2006, Next-Generation sequencing was first coupled with MNase digestion to explore nucleosome positioning and DNA sequence preferences in C. elegans. This was the first example of MNase-seq in any organism.
It was not until 2008, around the time Next-Generation sequencing was becoming more widely available, that MNase digestion was combined with high-throughput sequencing, namely Solexa/Illumina sequencing, to study nucleosomal positioning at a genome-wide scale in humans. A year later, the terms “MNase-Seq” and “MNase-ChIP”, for micrococcal nuclease digestion with chromatin immunoprecipitation, were finally coined. Since its initial application in 2006, MNase-seq has been utilized to deep sequence DNA associated with nucleosome occupancy and epigenomics across eukaryotes. As of February 2020, MNase-seq is still applied to assay accessibility in chromatin.
Description
Chromatin is dynamic, and the positioning of nucleosomes on DNA changes through the activity of various transcription factors and remodeling complexes, approximately reflecting transcriptional activity at these sites. DNA wrapped around nucleosomes is generally inaccessible to transcription factors. Hence, MNase-seq can be used to indirectly determine which regions of DNA are transcriptionally inaccessible by directly determining which regions are bound to nucleosomes.
In a typical MNase-seq experiment, eukaryotic cell nuclei are first isolated from a tissue of interest. Then, MNase-seq uses the endo-exonuclease micrococcal nuclease to bind and cleave protein-unbound regions of DNA in eukaryotic chromatin, first cleaving and resecting one strand, then cleaving the antiparallel strand as well. The chromatin can optionally be crosslinked with formaldehyde. MNase requires Ca2+ as a cofactor, typically at a final concentration of 1 mM. If a region of DNA is bound by the nucleosome core (i.e. histones) or other chromatin-bound proteins (e.g. transcription factors), then MNase is unable to bind and cleave the DNA. Nucleosomes or the DNA-protein complexes can be purified from the sample, and the bound DNA can subsequently be purified via gel electrophoresis and extraction. The purified DNA is typically ~150bp if purified from nucleosomes, or shorter if from another protein (e.g. transcription factors). This makes short-read, high-throughput sequencing ideal for MNase-seq, as reads for these technologies are highly accurate but can only cover a couple hundred continuous base-pairs in length. Once sequenced, the reads can be aligned to a reference genome to determine which DNA regions are bound by nucleosomes or proteins of interest, with tools such as Bowtie. The nucleosome positions elucidated through MNase-seq can then be used to predict genomic expression and regulation at the time of digestion.
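As a rough illustration of the downstream analysis, the sketch below counts fragment midpoints over a genomic region from aligned paired-end reads. It assumes the pysam library and a coordinate-sorted, indexed BAM file; the function name and the 120–180 bp mononucleosome size window are illustrative choices, not a prescribed part of the MNase-seq protocol.

```python
import numpy as np
import pysam

def nucleosome_midpoint_profile(bam_path, chrom, start, end,
                                min_frag=120, max_frag=180):
    """Count mononucleosome-sized fragment midpoints per base in a region.

    Peaks in the returned profile suggest positioned nucleosomes.
    """
    profile = np.zeros(end - start, dtype=int)
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(chrom, start, end):
            # count each properly paired fragment once, via its forward mate
            if not read.is_proper_pair or read.is_reverse:
                continue
            frag_len = read.template_length
            if min_frag <= frag_len <= max_frag:
                mid = read.reference_start + frag_len // 2
                if start <= mid < end:
                    profile[mid - start] += 1
    return profile

# profile = nucleosome_midpoint_profile("mnase.bam", "chr1", 1_000_000, 1_010_000)
```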
Extended Techniques
MNase-ChIP/CUT&RUN sequencing
Recently, MNase-seq has also been implemented in determining where transcription factors bind on the DNA. Classical ChIP-seq displays issues with resolution quality, stringency in experimental protocol, and DNA fragmentation. Classical ChIP-seq typically uses sonication to fragment chromatin, which biases heterochromatic regions due to the condensed and tight binding of chromatin regions to each other. Unlike histones, transcription factors only transiently bind DNA, and methods such as sonication, which require increased temperatures and detergents, can lead to the loss of the factor. CUT&RUN sequencing is a novel form of MNase-based immunoprecipitation. Briefly, it uses an MNase tagged with an antibody to specifically bind DNA-bound proteins that present the epitope recognized by that antibody. Digestion then occurs specifically at regions surrounding that transcription factor, allowing the complex to diffuse out of the nucleus and be obtained without significant background or the complications of sonication. The technique does not require high temperatures or high concentrations of detergent. Furthermore, MNase improves chromatin digestion due to its exonuclease and endonuclease activity. Cells are lysed in an SDS/Triton X-100 solution, the MNase-antibody complex is added, and finally the protein-DNA complex is isolated, with the DNA being subsequently purified and sequenced. The resulting soluble extract contains a 25-fold enrichment in fragments under 50bp. This increased enrichment results in cost-effective, high-resolution data.
Single-cell MNase-seq
Single-cell micrococcal nuclease sequencing (scMNase-seq) is a novel technique that is used to analyze nucleosome positioning and to infer chromatin accessibility with the use of only a single-cell input. First, cells are sorted into single aliquots using fluorescence-activated cell sorting (FACS). The cells are then lysed and digested with micrococcal nuclease. The isolated DNA is subjected to PCR amplification and then the desired sequence is isolated and analyzed. The use of MNase in single-cell assays results in increased detection of regions such as DNase I hypersensitive sites as well as transcription factor binding sites.
Comparison to other Chromatin Accessibility Assays
MNase-seq is one of four major methods (DNase-seq, MNase-seq, FAIRE-seq, and ATAC-seq) for more direct determination of chromatin accessibility and the subsequent consequences for gene expression. All four techniques are contrasted with ChIP-seq, which relies on the inference that certain marks on histone tails are indicative of gene activation or repression, not directly assessing nucleosome positioning, but instead being valuable for the assessment of histone modifier enzymatic function.
DNase-seq
As with MNase-seq, DNase-seq was developed by combining an existing DNA endonuclease with Next-Generation sequencing technology to assay chromatin accessibility. Both techniques have been used across several eukaryotes to ascertain information on nucleosome positioning in the respective organisms and both rely on the same principle of digesting open DNA to isolate ~140bp bands of DNA from nucleosomes or shorter bands if ascertaining transcription factor information. Both techniques have recently been optimized for single-cell sequencing, which corrects for one of the major disadvantages of both techniques; that being the requirement for high cell input.
At sufficient concentrations, DNase I is capable of digesting nucleosome-bound DNA to 10bp, whereas micrococcal nuclease cannot. Additionally, DNase-seq is used to identify DHSs, which are regions of DNA that are hypersensitive to DNase treatment and are often indicative of regulatory regions (e.g. promoters or enhancers). An equivalent effect is not found with MNase. As a result of this distinction, DNase-seq is primarily utilized to directly identify regulatory regions, whereas MNase-seq is used to identify transcription factor and nucleosomal occupancy to indirectly infer effects on gene expression.
FAIRE-seq
FAIRE-seq differs more from MNase-seq than does DNase-seq. FAIRE-seq was developed in 2007 and combined with Next-Generation sequencing three years later to study DHSs. FAIRE-seq relies on the use of formaldehyde to crosslink target proteins with DNA and then subsequent sonication and phenol-chloroform extraction to separate non-crosslinked DNA and crosslinked DNA. The non-crosslinked DNA is sequenced and analyzed, allowing for direct observation of open chromatin.
MNase-seq does not measure chromatin accessibility as directly as FAIRE-seq. However, unlike FAIRE-seq, it does not necessarily require crosslinking, nor does it rely on sonication, but it may require phenol and chloroform extraction. Two major disadvantages of FAIRE-seq, relative to the other three classes, are the minimum required input of 100,000 cells and the reliance on crosslinking. Crosslinking may bind other chromatin-bound proteins that transiently interact with DNA, hence limiting the amount of non-crosslinked DNA that can be recovered and assayed from the aqueous phase. Thus, the overall resolution obtained from FAIRE-seq can be relatively lower than that of DNase-seq or MNase-seq and with the 100,000 cell requirement, the single-cell equivalents of DNase-seq or MNase-seq make them far more appealing alternatives.
ATAC-seq
ATAC-seq is the most recently developed class of chromatin accessibility assays. ATAC-seq uses a hyperactive transposase to insert transposable markers with specific adapters, capable of binding primers for sequencing, into open regions of chromatin. PCR can then be used to amplify sequences adjacent to the inserted transposons, allowing for determination of open chromatin sequences without causing a shift in chromatin structure. ATAC-seq has been proven effective in humans, amongst other eukaryotes, including in frozen samples. As with DNase-seq and MNase-seq, a successful single-cell version of ATAC-seq has also been developed.
ATAC-seq has several advantages over MNase-seq in assessing chromatin accessibility. ATAC-seq does not rely on the variable digestion of the micrococcal nuclease, nor crosslinking or phenol-chloroform extraction. It generally maintains chromatin structure, so results from ATAC-seq can be used to directly assess chromatin accessibility, rather than indirectly via MNase-seq. ATAC-seq can also be completed within a few hours, whereas the other three techniques typically require overnight incubation periods. The two major disadvantages to ATAC-seq, in comparison to MNase-seq, are the requirement for higher sequencing coverage and the prevalence of mitochondrial contamination due to non-specific insertion of DNA into both mitochondrial DNA and nuclear DNA. Despite these minor disadvantages, use of ATAC-seq over the alternatives is becoming more prevalent.
References
Molecular biology techniques | MNase-seq | [
"Chemistry",
"Biology"
] | 3,229 | [
"Molecular biology techniques",
"Molecular biology"
] |
63,211,357 | https://en.wikipedia.org/wiki/Thermoneutral%20voltage | In electrochemistry, a thermoneutral voltage is a voltage drop across an electrochemical cell which is sufficient not only to drive the cell reaction, but also to provide the heat necessary to maintain a constant temperature. For a cell reaction in which $z$ electrons are transferred, the thermoneutral voltage is given by
$$V_{tn} = \frac{\Delta H}{zF}$$
where $\Delta H$ is the change in enthalpy and $F$ is the Faraday constant.
Explanation
For a cell reaction carried out at constant temperature and pressure, the thermodynamic voltage (the minimum voltage required to drive the reaction) is given by
$$E = \frac{\Delta G}{zF}$$
where $\Delta G$ is the Gibbs energy, $z$ is the number of electrons transferred, and $F$ is the Faraday constant. The standard thermodynamic voltage (i.e. at standard temperature and pressure) is
$$E^0 = \frac{\Delta G^0}{zF}$$
and the Nernst equation can be used to calculate the potential at other conditions.
The cell reaction is generally endothermic: i.e. it will extract heat from its environment. The Gibbs energy calculation generally assumes an infinite thermal reservoir to maintain a constant temperature, but in a practical case, the reaction will cool the electrode interface and slow the reaction occurring there.
If the cell voltage is increased above the thermodynamic voltage, the product of the excess voltage and the current will generate heat, and if the voltage is such that the heat generated matches the heat required by the reaction to maintain a constant temperature, that voltage is called the "thermoneutral voltage". The rate of delivery of heat is equal to $T\,\mathrm{d}S/\mathrm{d}t$, where T is the temperature (the standard temperature, in this case) and dS/dt is the rate of entropy production in the cell. At the thermoneutral voltage, this rate will be zero, which indicates that the thermoneutral voltage may be calculated from the enthalpy.
An example
For water at standard temperature (25 °C), the net cell reaction may be written:
$$\mathrm{H_2O(l) \rightarrow H_2(g) + \tfrac{1}{2}\,O_2(g)}$$
Using Gibbs potentials ($\Delta G^0 \approx 237.1$ kJ/mol for this reaction), the thermodynamic voltage at standard conditions is
1.229 Volt (2 electrons needed to form H2(g))
Just as the combustion of hydrogen and oxygen generates heat, the reverse reaction generating hydrogen and oxygen will absorb heat. The thermoneutral voltage is (using $\Delta H^0 \approx 285.8$ kJ/mol):
1.481 Volts.
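The arithmetic behind both figures can be written out explicitly; the values below are the standard-state Gibbs energy and enthalpy of the water-splitting reaction at 25 °C, with $z = 2$ and $F \approx 96485$ C/mol.

$$E^0 = \frac{\Delta G^0}{zF} = \frac{237.1\times 10^{3}\,\mathrm{J\,mol^{-1}}}{2 \times 96485\,\mathrm{C\,mol^{-1}}} \approx 1.229\ \mathrm{V},
\qquad
V_{tn} = \frac{\Delta H^0}{zF} = \frac{285.8\times 10^{3}\,\mathrm{J\,mol^{-1}}}{2 \times 96485\,\mathrm{C\,mol^{-1}}} \approx 1.481\ \mathrm{V}.$$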
References
Physical chemistry
Electrochemistry
Electrochemical equations | Thermoneutral voltage | [
"Physics",
"Chemistry",
"Mathematics"
] | 482 | [
"Applied and interdisciplinary physics",
"Mathematical objects",
"Equations",
"Electrochemistry",
"nan",
"Physical chemistry",
"Electrochemical equations"
] |
63,214,506 | https://en.wikipedia.org/wiki/Euler%27s%20Gem | Euler's Gem: The Polyhedron Formula and the Birth of Topology is a book on the formula for the Euler characteristic of convex polyhedra and its connections to the history of topology. It was written by David Richeson and published in 2008 by the Princeton University Press, with a paperback edition in 2012. It won the 2010 Euler Book Prize of the Mathematical Association of America.
Topics
The book is organized historically, and reviewer Robert Bradley divides the topics of the book into three parts. The first part discusses the earlier history of polyhedra, including the works of Pythagoras, Thales, Euclid, and Johannes Kepler, and the discovery by René Descartes of a polyhedral version of the Gauss–Bonnet theorem (later seen to be equivalent to Euler's formula). It surveys the life of Euler, his discovery in the early 1750s that the Euler characteristic (the number of vertices minus the number of edges plus the number of faces) is equal to 2 for all convex polyhedra, and his flawed attempts at a proof, and concludes with the first rigorous proof of this identity in 1794 by Adrien-Marie Legendre, based on Girard's theorem relating the angular excess of triangles in spherical trigonometry to their area.
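As a quick illustration of the identity at the heart of the book (not an example taken from the book itself), the following short Python check verifies $V - E + F = 2$ for the five Platonic solids, using their standard vertex, edge, and face counts.

```python
# (V, E, F) for the five Platonic solids
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

for name, (v, e, f) in platonic.items():
    assert v - e + f == 2
    print(f"{name}: {v} - {e} + {f} = {v - e + f}")
```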
Although polyhedra are geometric objects, Euler's Gem argues that Euler discovered his formula by being the first to view them topologically (as abstract incidence patterns of vertices, faces, and edges), rather than through their geometric distances and angles. (However, this argument is undermined by the book's discussion of similar ideas in the earlier works of Kepler and Descartes.) The birth of topology is conventionally marked by an earlier contribution of Euler, his 1736 work on the Seven Bridges of Königsberg, and the middle part of the book connects these two works through the theory of graphs. It proves Euler's formula in a topological rather than geometric form, for planar graphs, and discusses its uses in proving that these graphs have vertices of low degree, a key component in proofs of the four color theorem. It even makes connections to combinatorial game theory through the graph-based games of Sprouts and Brussels Sprouts and their analysis using Euler's formula.
In the third part of the book, Bradley moves on from the topology of the plane and the sphere to arbitrary topological surfaces. For any surface, the Euler characteristics of all subdivisions of the surface are equal, but they depend on the surface rather than always being 2. Here, the book describes the work of Bernhard Riemann, Max Dehn, and Poul Heegaard on the classification of manifolds, in which it was shown that the two-dimensional compact topological surfaces can be completely described by their Euler characteristics and their orientability. Other topics discussed in this part include knot theory and the Euler characteristic of Seifert surfaces, the Poincaré–Hopf theorem, the Brouwer fixed point theorem, Betti numbers, and Grigori Perelman's proof of the Poincaré conjecture.
An appendix includes instructions for creating paper and soap-bubble models of some of the examples from the book.
Audience and reception
Euler's Gem is aimed at a general audience interested in mathematical topics, with biographical sketches and portraits of the mathematicians it discusses, many diagrams and visual reasoning in place of rigorous proofs, and only a few simple equations. With no exercises, it is not a textbook. However, the later parts of the book may be heavy going for amateurs, requiring at least an undergraduate-level understanding of calculus and differential geometry. Reviewer Dustin L. Jones suggests that teachers would find its examples, intuitive explanations, and historical background material useful in the classroom.
Although reviewer Jeremy L. Martin complains that "the book's generalizations about mathematical history and aesthetics are a bit simplistic or even one-sided", points out a significant mathematical error in the book's conflation of polar duality with Poincaré duality, and views the book's attitude towards computer-assisted proof as "unnecessarily dismissive", he nevertheless concludes that the book's mathematical content "outweighs these occasional flaws". Dustin Jones evaluates the book as "a unique blend of history and mathematics ... engaging and enjoyable", and reviewer Bruce Roth calls it "well written and full of interesting ideas". Reviewer Janine Daems writes, "It was a pleasure reading this book, and I recommend it to everyone who is not afraid of mathematical arguments".
See also
List of books about polyhedra
References
Polyhedral combinatorics
Topological graph theory
Books about the history of mathematics
2008 non-fiction books | Euler's Gem | [
"Mathematics"
] | 976 | [
"Graph theory",
"Combinatorics",
"Polyhedral combinatorics",
"Topology",
"Mathematical relations",
"Topological graph theory"
] |
63,215,116 | https://en.wikipedia.org/wiki/Linda%20Zou | Linda Zou is a professor of Civil Infrastructure and Environmental Engineering at Khalifa University, Abu Dhabi, United Arab Emirates. Her work on nanotechnology to accelerate water condensation has received contributions from the National University of Singapore and the University of Belgrade. Zou has developed a new aerosol material for use in cloud seeding: salt crystals coated in titanium dioxide nanoparticles.
The technique developed by Prof. Zou was used in the January 2020 Cloud Seeding experiment in the UAE.
References
Living people
Year of birth missing (living people)
Environmental engineers
Nanotechnologists
Academic staff of Khalifa University | Linda Zou | [
"Chemistry",
"Materials_science",
"Engineering"
] | 128 | [
"Nanotechnology",
"Nanotechnologists",
"Environmental engineers",
"Environmental engineering"
] |
63,216,140 | https://en.wikipedia.org/wiki/Lamb%E2%80%93Chaplygin%20dipole | The Lamb–Chaplygin dipole model is a mathematical description for a particular inviscid and steady dipolar vortex flow. It is a non-trivial solution to the two-dimensional Euler equations. The model is named after Horace Lamb and Sergey Alexeyevich Chaplygin, who independently discovered this flow structure. This dipole is the two-dimensional analogue of Hill's spherical vortex.
The model
A two-dimensional (2D), solenoidal vector field $\mathbf{v}$ may be described by a scalar stream function $\psi$, via $\mathbf{v} = \nabla\psi \times \hat{\mathbf{e}}_z$, where $\hat{\mathbf{e}}_z$ is the right-handed unit vector perpendicular to the 2D plane. By definition, the stream function is related to the vorticity $\omega$ via a Poisson equation: $\nabla^2 \psi = -\omega$. The Lamb–Chaplygin model follows from demanding the following characteristics:
The dipole has a circular atmosphere/separatrix with radius $R$: $\psi\left(r = R\right) = 0$.
The dipole propagates through an otherwise irrotational fluid ($\omega = 0$) at translation velocity $U$.
The flow is steady in the co-moving frame of reference.
Inside the atmosphere, there is a linear relation between the vorticity and the stream function: $\omega = k^2 \psi$.
The solution in cylindrical coordinates $(r, \theta)$, in the co-moving frame of reference, reads:
$$\psi(r,\theta) = \begin{cases} \dfrac{-2 U J_1(kr)}{k J_0(kR)} \sin\theta, & r < R, \\ U \left( \dfrac{R^2}{r} - r \right) \sin\theta, & r \ge R, \end{cases}$$
where $J_0$ and $J_1$ are the zeroth and first Bessel functions of the first kind, respectively. Further, the value of $k$ is such that $kR = 3.8317\ldots$, the first non-trivial zero of the first Bessel function of the first kind.
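A minimal numerical sketch of this solution is given below, using SciPy's Bessel functions. Sign conventions for the stream function vary between references, so the prefactors here should be read as one common choice rather than the definitive form.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

def lamb_chaplygin_psi(r, theta, U=1.0, R=1.0):
    """Stream function of the Lamb-Chaplygin dipole in the co-moving frame.

    r < R : psi = -2 U J1(k r) / (k J0(k R)) * sin(theta)
    r >= R: psi = U (R**2 / r - r) * sin(theta)
    with k R = 3.8317..., the first non-trivial zero of J1.
    """
    k = jn_zeros(1, 1)[0] / R   # first non-trivial zero of J1, scaled by R
    if r < R:
        return -2.0 * U * j1(k * r) / (k * j0(k * R)) * np.sin(theta)
    return U * (R**2 / r - r) * np.sin(theta)

# psi -> 0 from either side of the separatrix r = R
print(lamb_chaplygin_psi(0.999, np.pi / 2), lamb_chaplygin_psi(1.001, np.pi / 2))
```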
Usage and considerations
Since the seminal work of P. Orlandi, the Lamb–Chaplygin vortex model has been a popular choice for numerical studies on vortex-environment interactions. The fact that it does not deform makes it a prime candidate for consistent flow initialization. A less favorable property is that the second derivative of the flow field at the dipole's edge is not continuous. Further, it serves as a framework for stability analysis on dipolar-vortex structures.
References
Fluid dynamics | Lamb–Chaplygin dipole | [
"Chemistry",
"Engineering"
] | 384 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
54,697,412 | https://en.wikipedia.org/wiki/Simone%20Severini | Simone Severini is an Italian-born British computer scientist. He is currently Professor of Physics of Information at University College London, and Director of Quantum Computing at Amazon Web Services in Seattle.
Work
Severini worked in quantum information science and complex systems. Together with Adan Cabello and Andreas Winter, he defined a graph-theoretic framework for studying quantum contextuality, and together with Tomasz Konopka, Fotini Markopoulou, and Lee Smolin, he introduced a random graph model of spacetime called quantum graphity. In network theory, he co-introduced the Braunstein–Ghosh–Severini entropy, with applications to quantum gravity.
He served as an editor of Philosophical Transactions of the Royal Society A. In 2015 he was the technical co-founder and one of the first scientific advisors of Cambridge Quantum Computing, with Béla Bollobás, Imre Leader, and Fernando Brandão. He co-founded Phasecraft in 2018 with Toby Cubitt, Ashley Montanaro, and John Morton.
Publications
References
Year of birth missing (living people)
Living people
British physicists
Quantum physicists
Academics of University College London | Simone Severini | [
"Physics"
] | 236 | [
"Quantum physicists",
"Quantum mechanics"
] |
54,700,471 | https://en.wikipedia.org/wiki/Fink%20truss | The Fink truss is a commonly used truss in residential homes and bridge architecture. It originated as a bridge truss although its current use in bridges is rare.
History
The Fink Truss Bridge was patented by Albert Fink in 1854.
Albert Fink designed his truss bridges for several American railroads especially the Baltimore and Ohio and the Louisville and Nashville. The 1865 Annual Report of the President and Directors of the Louisville and Nashville Railroad Company lists 29 Fink Truss bridges out of a total of 66 bridges on the railroad.
The first Fink Truss bridge was built by the Baltimore and Ohio Railroad in 1852 to span the Monongahela River at Fairmont, Virginia (now West Virginia). It consisted of three spans, each 205 feet long. It was the longest iron railroad bridge in the United States at the time.
Several other Fink trusses held world records for their time including the Green River Bridge (c. 1858) carrying the Louisville and Nashville Railroad over its namesake river near Munfordville, Kentucky, and the first bridge to span the Ohio River which included a 396-foot span built between 1868 and 1870. Although the design is no longer used for major structures, it was widely used from 1854 through 1875.
Design
It is identified by the presence of multiple diagonal members projecting down from the top of the end posts at a variety of angles. These diagonal members extend to the bottom of each of the vertical members of the truss with the longest diagonal extending to the center vertical member. Many Fink trusses do not include a lower chord (the lowest horizontal member). This gives the bridge an unfinished saw-toothed appearance when viewed from the side or below, and makes the design very easy to identify. If the bridge deck is carried along the bottom of the truss (called a through truss) or if a lightweight lower chord is present, identification is made solely by the multiple diagonal members emanating from the end post tops.
An Inverted Fink Truss has a bottom chord without a top chord.
Notable examples
Only two Fink Truss bridges remain intact in the United States. Neither bridge is in its original location.
The Zoarville Station Bridge consists of one of the original three spans of a through truss of Fink design built in 1868 by Smith, Latrobe and Company of Baltimore, Maryland. It originally carried Factory Street over the Tuscarawas River in Tuscarawas County, Ohio. In 1905 one span of the structure was relocated to Conotton Creek where it is now a pedestrian only crossing. It is listed on the National Register of Historic Places, documented by the Historic American Engineering Record and carries the Zoar Valley Trail, the intrastate Buckeye Trail, and the interstate North Country Trail.
A 56 foot long single span deck truss of Fink design was built in 1870 to carry trains of the Atlantic, Mississippi and Ohio Railroad (later Norfolk and Western Railway, now Norfolk Southern Railway). The original location of this structure is unknown. In 1893 it was relocated to carry Old Forest Road over the Norfolk and Western in Lynchburg, Virginia, and in 1985 the structure was again relocated to Riverside Park in the City of Lynchburg to preserve the historic structure for future generations. It now carries pedestrians only.
A third bridge, the Fink-Type Truss Bridge, survived in Clinton Township, New Jersey until it was destroyed by a traffic accident in 1978.
Current use
Fink design trusses are used today for pedestrian bridges and as roof trusses in building construction in an inverted (upside down) form where the lower chord is present and a central upward projecting vertical member and attached diagonals provide the bases for roofing.
References
Bridge design
Truss bridges | Fink truss | [
"Engineering"
] | 743 | [
"Structural engineering",
"Bridge design",
"Architecture"
] |
54,709,463 | https://en.wikipedia.org/wiki/Periodic%20counter-current%20chromatography | Periodic counter-current chromatography (PCC) is a method for running affinity chromatography in a quasi-continuous manner. Today, the process is mainly employed for the purification of antibodies in the biopharmaceutical industry as well as in research and development. When purifying antibodies, protein A is used as affinity matrix. However, periodic counter-current processes can be applied to any affinity type chromatography.
Basic principle
In conventional affinity chromatography, a single chromatography column is loaded with feed material up to the point before target material (product) cannot be retained by the affinity material anymore.
The resin with the adsorbed product on it is then washed to remove impurities. Finally, the pure product is eluted with a different buffer. Notably, if too much feed material is loaded onto the column, the product can break through and product is consequently lost. Therefore, it is very important to only partially load the column to maximize the yield.
Periodic counter-current chromatography puts this problem aside by utilizing more than one column. PCC processes can be run with any number of columns, starting from two. The following paragraph will explain a two-column version of PCC, but other protocols with more columns rely on the same principles (see below).
A diagram depicting the individual process steps is shown on the right. In Step 1, the so-called sequential loading phase, columns 1 and 2 are interconnected. Column 1 is fully loaded with sample (red) while its breakthrough is captured on column 2. In Step 2, column 1 is washed, eluted, cleaned and re-equilibrated while loading separately continues on column 2. In Step 3, after regeneration of column 1, the columns are again inter-connected and column 2 is fully loaded while its breakthrough is captured on column 1. Finally, in Step 4 column 2 is washed, eluted, cleaned and re-equilibrated while loading continues independently on column 1. This cyclic process is repeated in a continuous way.
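The cycle can be summarized compactly in code. The snippet below simply tabulates the role of each column in the four steps described above; it is an illustrative summary, not part of any manufacturer's control software.

```python
# Roles of the two columns during one complete PCC cycle
cycle = [
    # step, column 1,                                   column 2
    (1, "load feed (interconnected, breakthrough ->)", "capture breakthrough"),
    (2, "wash / elute / clean / re-equilibrate",       "load feed (disconnected)"),
    (3, "capture breakthrough",                        "load feed (interconnected, breakthrough ->)"),
    (4, "load feed (disconnected)",                    "wash / elute / clean / re-equilibrate"),
]

for step, col1, col2 in cycle:
    print(f"Step {step}: column 1 = {col1} | column 2 = {col2}")
```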
Several variations of periodic counter-current chromatography with more than two columns exist. In these cases, additional columns are either placed within the feed stream during loading, having the same effect as using longer columns. Alternatively, additional columns can be kept in an unoccupied stand-by mode during loading. This mode offers additional assurance that the main process is not influenced by washing and cleaning protocols, albeit in practice this is rarely required. On the other hand, the underutilized columns reduce the theoretical maximum productivity for such processes. Generally, the advantages and disadvantages of different multi-column protocols are the subject of debate. However, without a doubt, compared to single column batch processes, periodic counter-current processes provide significantly increased productivity.
Dynamic process control
On the time scale of continuous chromatography runs, it is fairly common to observe changes in important process parameters, such as column health, buffer quality, feed titer (concentration) or feed composition. Such changes result in an altered maximum column capacity, relative to the amount of loaded feed material. In order to achieve a steady quality and yield for each process cycle, the timing of the individual process steps therefore has to be adjusted. Manual changes are in principle conceivable, but rather impractical. More commonly, dynamic process control algorithms monitor the process parameters and apply changes as needed automatically.
There are two different operating modes for dynamic process controllers in use today (see Figure on the right).
The first one, called DeltaUV, monitors the difference between two signals from detectors situated before and after the first column. During initial loading, there is a large difference between the two signals, but it is diminishing as the impurities make their way through the column. Once the column is fully saturated with impurities and only additional product is being held back, the difference between the signals reaches a constant value. As long as the product is completely being captured on the column, the difference between the signals will remain constant. As soon as some of the product breaks through the column (compare above), the difference diminishes. Thus, the timing and amount of product breakthrough can be determined.
The second possibility, called AutomAb, requires only the signal of a single detector situated behind the first column. During initial loading, the signal increases, as more and more impurities make their way through the column. When the column is saturated with impurities and as long as the product is completely being captured on the column, the signal then remains constant. As soon as some of the product breaks through the column (compare above), the signal increases again. Thus, the timing and amount of product breakthrough can again be determined.
Both iterations work equally well in theory. In practice, the requirement for two synced signals and the exposure of one detector to unpurified feed material make the DeltaUV approach less reliable than AutomAb.
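The control logic of the single-detector (AutomAb-style) mode can be sketched as follows. This is a schematic illustration only: the function name, window sizes, and thresholds are invented for the example, it assumes a trace that starts near zero, and it does not describe the actual commercial algorithm.

```python
import numpy as np

def detect_breakthrough(signal, window=20, min_level=0.1,
                        flat_tol=0.01, rel_rise=0.05):
    """Return the index at which product breakthrough is detected.

    The post-column trace rises while impurities break through, plateaus
    while product is still fully captured, and rises again once product
    starts to escape; column switching is triggered on that second rise.
    """
    signal = np.asarray(signal, dtype=float)
    plateau_level = None
    for i in range(window, len(signal)):
        recent = signal[i - window:i]
        mean = recent.mean()
        slope = (recent[-1] - recent[0]) / window
        if plateau_level is None:
            # impurity plateau: the trace has risen above min_level and flattened
            if mean > min_level and abs(slope) < flat_tol * mean:
                plateau_level = mean
        elif mean > plateau_level * (1.0 + rel_rise):
            return i  # product breakthrough: switch columns here
    return None
```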
Commercial situation
As of 2017, GE Healthcare holds patents around three-column periodic counter-current chromatography: this technology is used in their Äkta PCC instrument. Likewise, ChromaCon holds patents for an optimized two-column version (CaptureSMB). CaptureSMB is used in ChromaCon's Contichrom CUBE and under license in YMC's Ecoprime Twin systems. Additional manufacturers of systems capable of periodic counter-current chromatography include Novasep and Pall.
References
Chromatography | Periodic counter-current chromatography | [
"Chemistry"
] | 1,126 | [
"Chromatography",
"Separation processes"
] |
54,709,700 | https://en.wikipedia.org/wiki/Oil%20purification | Oil purification (transformer, turbine, industrial, etc.) removes oil contaminants in order to prolong oil service life.
Contaminants of industrial oils
Contaminants and various impurities get into industrial oils during storage and operation. The most common contaminants are:
water;
solid particles (like soot and dirt);
gases;
asphalt-resinous paraffin deposits;
acids;
oil sludge;
organometallic compounds;
unsaturated hydrocarbons;
polyaromatic hydrocarbons;
additive remains;
products of oil decomposition.
Methods of oil purification
Industrial oils are purified through sedimentation, filtration, centrifugation, vacuum treatment and adsorption purification.
Sedimentation is precipitation of solid particles and water to the bottom of oil tanks under gravity. The main drawback of this process is the long time it takes.
Filtration is a partial removal of solid particles through a filter medium. Oil filtration systems generally use multistage filtration with coarse and fine filters.
Centrifugation is separation of oil and water, or oil and solid particles by centrifugal forces.
Vacuum treatment degasses and dehydrates industrial oil. This method is well suited for removing dispersed and dissolved water, as well as dissolved gases.
Adsorption purification, in contrast to the methods mentioned above, does not remove solid particles and gases, but it shows good results at removing water, oil sludge and aging products. This process uses adsorbents of natural or artificial origin: bleaching clays, synthetic aluminosilicates, silica gels, zeolites, etc.
The difference between purification and regeneration of industrial oil
Often the terms "oil purification" and "oil regeneration" are used synonymously, although in fact they are not the same. Oil purification removes contaminants from oil. It can be used independently or as a part of oil regeneration. Oil regeneration also removes aging products (with the help of adsorbents) and stabilizes oil with additives. Regenerated oil is free of carcinogenic products of oil aging and is stabilized with additives.
References
Oils
Recycling | Oil purification | [
"Chemistry"
] | 451 | [
"Oils",
"Carbohydrates"
] |
76,224,343 | https://en.wikipedia.org/wiki/28%20nm%20process | The "28 nm" lithography process is a half-node semiconductor manufacturing process based on a die shrink of the "32 nm" lithography process. It appeared in production in 2010.
Since at least 1997, "process nodes" have been named purely on a marketing basis, and have no direct relation to the dimensions on the integrated circuit; none of the gate length, metal pitch, or gate pitch on a "28nm" device is twenty-eight nanometers.
Taiwan Semiconductor Manufacturing Company has offered "28 nm" production using high-K metal gate process technology.
GlobalFoundries offers a "28nm" foundry process called the "28SLPe" ("28nm Super Low Power") foundry process, which uses high-K metal gate technology.
According to a 2016 presentation by Sophie Wilson, 28nm has the lowest cost per logic gate. Cost per gate had decreased as processes shrunk until reaching 28nm, and has slowly risen since then.
Design
"28nm" requires twice the number of design rules for ensuring reliability in manufacturing as "80nm".
Shipped devices
AMD's Radeon HD 7970 uses a graphics processing unit manufactured using a "28nm" process.
Some models of the PS3 use a RSX 'Reality Synthesizer' chip manufactured using a "28nm" process.
FPGAs produced with "28 nm" process technology include models of the Xilinx Artix 7 FPGAs and Altera Cyclone V FPGAs.
References
Application-specific integrated circuits
International Technology Roadmap for Semiconductors lithography nodes | 28 nm process | [
"Technology",
"Engineering"
] | 331 | [
"Application-specific integrated circuits",
"Computer engineering"
] |
76,230,111 | https://en.wikipedia.org/wiki/Ippolit%20S.%20Gromeka | Ippolit Stepanovich Gromeka (or Hippolyte Stepanovich Gromeka) was a 19th century Russian scientist who made significant contributions to the science of fluid mechanics.
Biography
Ippolit was born on 27 January 1851 in Berdychiv to Stepan Stepanovich Gromeka, a well-known publicist and governor of Siedlce (1867–1875), and Yekaterina Fyodorovna Shcherbatska. He grew up in Siedlce and earned a gold medal at the Siedlce high school. He completed his bachelor's degree at the Imperial Moscow University in 1873 and worked as a teacher at the university for two years. He then worked as a teacher in Moscow High School until 1879, and in Belsk high school from 1879. In 1879, he also completed his Master's degree with a dissertation on capillary phenomena. In 1880, he became an assistant professor at Kazan University. In 1881, he obtained his PhD with a dissertation on Some cases of the motion of an incompressible fluid. He became a professor in 1882.
In the winter of 1888-1889, Gromeka fell from a sleigh while hunting and severely bruised his chest. Due to this injury, he died on 13 October 1889 in Kutaisi at the age of only 38. One of his brothers, Mikhail Stepanovich Gromeka, was a well-known literary critic, who died in 1883.
Research
During his short research career of just over 10 years, Gromeka produced many important contributions to the field of fluid mechanics through 11 works, starting with his Master's thesis on capillary phenomena and ending with his last work, in 1889, on the effect of temperature distribution on sound waves. He provided an original and modern description of capillarity phenomena, settling for the first time the discrepancy that was prevalent between Young's and Laplace's theories. He pioneered the study of Beltrami flows in his PhD thesis in 1882 and, because of this, he is referred to as the father of helical flows. He also studied unsteady flows in tubes and wave motion in elastic tubes, among other topics.
His scientific works were published in Russian in 1952. A special issue of the journal Fluids in honour of Gromeka was published in 2024.
Published works
Gromeka's published works are
Gromeka. I.S. Essay on the Theory of Capillary Phenomena. Theory of Surface Fluid Adhesion (Master’s Thesis). Mat. Sb. 1879, 9, 435–500.
Gromeka. I.S. Some Cases of Incompressible Fluid Flow. Ph.D. Thesis, Kazan University, Kazan, Russia, 1882; pp. 1–107.
Gromeka. I.S. On the Theory of Fluid Motion in Narrow Cylindrical Tubes; Scientific notes of Kazan University; Kazan University: Kazan, Russia, 1882; pp. 1–32.
Gromeka. I.S. On the Velocity of Propagation of Wave-Like Motion of Fluids in Elastic Tubes; Scientific notes of Kazan University; Kazan University: Kazan, Russia, 1883; pp. 1–19.
Gromeka. I.S. On the Vortex Motions of a Liquid on a Sphere; Scientific Notes of Kazan University; Kazan University: Kazan, Russia, 1885; pp. 1–35.
Gromeka. I.S. On the motion of liquid drops. Bull. De La Société Mathématique De Kasan Kasan 1886, 5, 8–47.
Gromeka. I.S. Some cases of equilibrium of a perfect gas. Bull. De La Société Mathématique De Kasan Kasan 1886, 5, 66–82.
Gromeka. I.S. Lectures on the Mechanics of Liquid Bodies; Kazan University Press: Kazan, Russia, 1887; pp. 1–174.
Gromeka. I.S. On infinite values of integrals of second-order linear differential equations. Bull. De La Société Mathématique De Kasan Kasan 1887, 6, 14–40.
Gromeka. I.S. On the Effect of Temperature on Small Variations in Air Masses; Scientific Notes of Kazan University; Kazan University: Kazan, Russia, 1888; pp. 1–40.
Gromeka. I.S. Influence of the Uneven Distribution of the Temperature on the Propagation of Sound. Mat. Sb. 1889, 14, 283–302.
References
1851 births
1889 deaths
People from Berdychiv
Fluid dynamicists
19th-century mathematicians from the Russian Empire
Inventors from the Russian Empire
Russian physicists | Ippolit S. Gromeka | [
"Chemistry"
] | 954 | [
"Fluid dynamicists",
"Fluid dynamics"
] |
76,235,042 | https://en.wikipedia.org/wiki/Mid-Holocene%20hemlock%20decline | The mid-Holocene hemlock decline was an abrupt decrease in Eastern Hemlock (Tsuga canadensis) populations noticeable in fossil pollen records across the tree's range. It has been estimated to have occurred approximately 5,500 calibrated radiocarbon years before 1950 AD. The decline has been linked to insect activity and to climate factors. Post-decline pollen records indicate changes in other tree species' populations after the event and an eventual recovery of hemlock populations over a period of about 1000-2000 years at some sites.
Causes
Some earlier studies on this event link it to insect outbreaks (e.g. hemlock looper), while more recent research has argued for climate changes as the driving factors in this decline. Evidence used to point towards an insect outbreak includes the sudden nature of the event and the debated assertion that similar trends were not shown in other species. Fossil evidence used to support the insect pathogen argument includes the presence of fossil hemlock looper and spruce budworm head capsules, and a higher-than-normal prevalence of macrofossil hemlock needles with evidence of feeding by the hemlock looper. Arguments for climate changes as the driving factor of this event include linking the decline in hemlock fossil pollen to trends from other tree species and to lake-level reconstructions from sediment cores and ground-penetrating radar that indicate a change to drier conditions. These climate changes may have been associated with shifts in atmospheric and ocean circulation. While its causes have been debated, this event may be used to provide insight into how modern forests may respond to pathogen outbreaks or to anthropogenic climate change.
Post-decline dynamics
Increases in the fossil pollen of other tree species such as birch have been found at some sites following the decline in hemlock pollen. In some areas, hemlock fossil pollen indicates a recovery of the population that took place over the period from about 1000-2000 years after the decline, while in other areas, fossil pollen indicates that the hemlock population never fully recovered or that forest composition was forever altered following the event. A continuation of drought conditions may have delayed hemlock recovery in some areas.
References
Wikipedia Student Program
Paleoecology | Mid-Holocene hemlock decline | [
"Biology"
] | 437 | [
"Evolution of the biosphere",
"Paleoecology"
] |
59,615,541 | https://en.wikipedia.org/wiki/Institute%20of%20Acoustics%2C%20Chinese%20Academy%20of%20Sciences | The Institute of Acoustics (IOA, ) of the Chinese Academy of Sciences (CAS) was established in 1964 by the Chinese government in the context of China's national defense needs for acoustic research, under the auspices of Marshal Nie Rongzhen.
By the end of 2017, the IOA counted more than 700 researchers focusing on the study of basic and applied acoustics, in the following fields:
Underwater acoustics and underwater acoustical detection;
Environmental acoustics and noise control technologies;
Ultrasonics and acoustical micro-electromechanical system technologies;
Communication acoustics, language/speech information processing; Integration of acoustics with digital systems, and network new media technologies.
Seven academicians of the Chinese Academy of Sciences have been elected from the IOA, they were/are: Wang Dezhao, Ma Dayou, Ying Chongfu, , Hou Chaohuan, Li Qihu, Wang Chenghao.
IOA is the de facto sponsor of the Acoustical Society of China (ASC), a nongovernmental organization officially affiliated with the China Association for Science and Technology.
In 2012, the ASC co-hosted a joint meeting with the Acoustical Society of America in Hong Kong. In 2014, the IOA hosted the International Congress on Sound and Vibration in Beijing. In 2015, the IOA co-hosted, with the French Acoustics Society, the 9th International Conference on Auditorium Acoustics in Paris. According to the IOA official web site, it will co-host an International Congress on Ultrasonics with the ASC in 2021.
The IOA publishes 7 academic journals, among others, Acta Acustica (incl. English version) and Applied Acoustics.
References
Research institutes of the Chinese Academy of Sciences
Acoustics
1964 establishments in China
Physics research institutes | Institute of Acoustics, Chinese Academy of Sciences | [
"Physics"
] | 366 | [
"Classical mechanics",
"Acoustics"
] |
59,617,028 | https://en.wikipedia.org/wiki/The%20spider%20and%20the%20fly%20problem | The spider and the fly problem is a recreational mathematics problem with an unintuitive solution, asking for a shortest path or geodesic between two points on the surface of a cuboid. It was originally posed by Henry Dudeney.
Problem
In the typical version of the puzzle, an otherwise empty cuboid room 30 feet long, 12 feet wide and 12 feet high contains a spider and a fly. The spider is 1 foot below the ceiling and horizontally centred on one 12′×12′ wall. The fly is 1 foot above the floor and horizontally centred on the opposite wall. The problem is to find the minimum distance the spider must crawl along the walls, ceiling and/or floor to reach the fly, which remains stationary.
Solutions
A naive solution is for the spider to remain horizontally centred, and crawl up to the ceiling, across it and down to the fly, giving a distance of 42 feet. Instead, the shortest path, 40 feet long, spirals around five of the six faces of the cuboid. Alternatively, it can be described by unfolding the cuboid into a net and finding a shortest path (a line segment) on the resulting unfolded system of six rectangles in the plane. Different nets produce different segments with different lengths, and the question becomes one of finding a net whose segment length is minimum. Another path, of intermediate length $\sqrt{1658} \approx 40.7$ feet, crosses diagonally through four faces instead of five.
For a room of length l, width w and height h, with the spider a distance b below the ceiling and the fly a distance a above the floor, the length of the spiral path is $\sqrt{(l+a+b)^2 + (w+h)^2}$ while the naive solution has length $l + b + (h - a)$. Depending on the dimensions of the cuboid, and on the initial positions of the spider and fly, one or another of these paths, or of four other paths, may be the optimal solution. However, there is no rectangular cuboid, and two points on the cuboid, for which the shortest path passes through all six faces of the cuboid.
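For the specific room in the puzzle, the three candidate lengths discussed above can be checked directly; the small Python sketch below simply evaluates them (the unfoldings themselves are assumed, not computed).

```python
from math import hypot, sqrt

l, w, h = 30, 12, 12   # room length, width, height (feet)
a, b = 1, 1            # fly above the floor, spider below the ceiling

naive  = b + l + (h - a)            # up, straight across the ceiling, down
four   = sqrt(37**2 + 17**2)        # diagonal route over four faces
spiral = hypot(l + a + b, w + h)    # five-face unfolding

print(naive, round(four, 2), spiral)   # 42, 40.72, 40.0
```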
A different lateral thinking solution, beyond the stated rules of the puzzle, involves the spider attaching dragline silk to the wall to lower itself to the floor, and crawling 30 feet across it and 1 foot up the opposite wall, giving a crawl distance of 31 feet. Similarly, it can climb to the ceiling, cross it, then attach the silk to lower itself 11 feet, also a 31-foot crawl.
History
The problem was originally posed by Henry Dudeney in the English newspaper Weekly Dispatch on 14 June 1903 and collected in The Canterbury Puzzles (1907). Martin Gardner calls it "Dudeney's best-known brain-teaser".
A version of the problem was recorded by Adolf Hurwitz in his diary in 1908. Hurwitz stated that he heard it from L. Gustave du Pasquier, who in turn had heard it from Richard von Mises.
References
Recreational mathematics
Geodesic (mathematics) | The spider and the fly problem | [
"Mathematics"
] | 589 | [
"Recreational mathematics"
] |
59,621,106 | https://en.wikipedia.org/wiki/Ulrich%20M%C3%BCller | Ulrich Müller (born 6 July 1940 in Bogotá) is a German chemist who is known for his works on solid-state chemistry and the application of crystallographic group theory to crystal chemistry. He is the author of several textbooks on chemistry, solid-state chemistry, and crystallography.
Life
Müller studied chemistry at the University of Stuttgart from 1959 to 1963. He worked on his dissertation at Purdue University and the University of Stuttgart. He finished it in 1966 in the group of Kurt Dehnicke. From 1967 to 1970, he worked in the group of Hartmut Bärnighausen at the University of Marburg. In 1972, he finished his habilitation. From 1972 to 1975, Müller was a professor of inorganic chemistry at the University of Marburg. From 1975 to 1977, he was a guest professor at the University of Costa Rica. Then, several professorships in inorganic chemistry followed: University of Marburg from 1977 to 1992, University of Kassel from 1992 to 1999, and University of Marburg from 2000 to 2005. Since 2005, he has been an emeritus professor.
Research
His research focused on the following topics:
application of crystallographic group theory in crystal chemistry to investigate structural relationships of crystalline solids and to predict possible structure types for inorganic compounds
synthesis of thio, polysulfido, and polyselenido complexes
structural analysis of crystalline solids with X-ray diffraction
Awards
He was awarded the Literaturpreis des Fonds der chemischen Industrie for his textbook "Anorganische Strukturchemie" (engl. Inorganic Structural Chemistry).
Publications
References
Living people
20th-century German chemists
Crystallographers
1940 births
Academic staff of the University of Marburg
University of Stuttgart alumni
Solid state chemists | Ulrich Müller | [
"Chemistry"
] | 359 | [
"Solid state chemists"
] |
59,624,966 | https://en.wikipedia.org/wiki/Changchun%20Institute%20of%20Optics%2C%20Fine%20Mechanics%20and%20Physics | The Changchun Institute of Optics, Fine Mechanics and Physics (CIOMP; ), of the Chinese Academy of Sciences (CAS), is a state research institution in Changchun, Jilin, China.
It was founded in 1952 as the Institute of Instrumentation of the CAS, by a group of scientists led by Wang Daheng. It was later renamed as the Changchun Institute of Optics and Fine Mechanics. The current name was adopted in 1999 when the institute was merged with the Changchun Institute of Physics, headed by Xu Xurong.
Under the leadership of Wang Daheng, the institute played a crucial role in the development of China's strategic weapons, developing high-precision optics for missile guidance systems. It made major breakthroughs for the submarine-launched ballistic missile program.
The institute focuses on luminescence, applied optics, optical engineering, and precision mechanics and instruments. It is involved in a number of technology ventures based out of the nearby CAS Changchun Optoelectronics Industrial Park with total assets worth US$403 million.
The institute offers undergraduate, master’s and doctoral education programs.
The institute developed the Bilibili Video Satellite, launched in September 2020.
CGSTL
The institute includes the Chang Guang Satellite Technology Corporation (Charming Globe or CGSTL), a commercial offshoot of the institute which manufactures remote sensing satellite buses and unmanned aerial vehicles (drones). Chang Guang Satellite Technology owns Jilin-1 satellite constellation. In September 2024, it launched six Jilin Kuanfu satellites.
It already has 31 satellites in orbit and plans to have its constellation reach 138 satellites over the next four years.
See also
Jilin-1
References
External links
Research institutes of the Chinese Academy of Sciences
Education in Changchun
1952 establishments in China
Optics institutions
Mechanics
Physics research institutes
Educational institutions established in 1952 | Changchun Institute of Optics, Fine Mechanics and Physics | [
"Physics",
"Engineering"
] | 377 | [
"Mechanics",
"Mechanical engineering"
] |
70,449,007 | https://en.wikipedia.org/wiki/IEEE%20Computer%20Society%20Charles%20Babbage%20Award | In 1989, the International Parallel and Distributed Processing Symposium established the Charles Babbage Award to be given each year to a conference participant in recognition of exceptional contributions to the field. In almost all cases, the award is given to one of the invited keynote speakers at the conference. The selection was made by the steering committee chairs, upon recommendation from the Program Chair and General Chair who have been responsible for the technical program of the conference, including inviting the speakers. It is presented immediately following the selected speaker's presentation at the conference, and he or she is given a plaque that specifies the nature of their special contribution to the field that is being recognized by IPDPS.
In 2019, the management of the IEEE CS Babbage Award was transferred to the IEEE Computer Society's Awards Committee.
Past recipients:
1989 - Irving S. Reed
1990 - H.T. Kung
1991 - Harold S. Stone
1992 - David Kuck
1993 - K. Mani Chandy
1994 - Arvind
1995 - Richard Karp
1997 - Frances E. Allen
1998 - Jim Gray
1999 - K. Mani Chandy
2000 - Michael O. Rabin
2001 - Thomson Leighton
2002 - Steve Wallach
2003 - Michel Cosnard
2004 - Christos Papadimitriou
2005 - Yale N. Patt
2006 - Bill Dally
2007 - Mike Flynn
2008 - Joel Saltz
2009 - Wen-Mei Hwu
2010 - Burton Smith
2011 - Jack Dongarra
2012 - Chris Johnson
2013 - James Demmel
2014 - Peter Kogge
2015 - Alan Edelman
2017 - Mateo Valero. "For contributions to parallel computation through brilliant technical work, mentoring PhD students, and building an incredibly productive European research environment."
2019 - Ian Foster. "For his outstanding contributions in the areas of parallel computing languages, algorithms, and technologies for scalable distributed applications."
2020 - Yves Robert. "For contributions to parallel algorithms and scheduling techniques."
2021 - Guy Blelloch. "For contributions to parallel programming, parallel algorithms, and the interface between them."
2022 - Dhabaleswar K. (DK) Panda. "For contributions to high performance and scalable communication in parallel and high-end computing systems."
2023 - Keshav K Pingali. "For contributions to programmability of high-performance parallel computing on irregular algorithms and graph algorithms."
2024 - Franck Cappello. "For pioneering contributions and inspiring leadership in distributed computing, high-performance computing, resilience, and data reduction."
See also
List of computer science awards
List of awards named after people
References
External links
IEEE Computer Society Charles Babbage Award
Awards established in 1989
Computer science awards
IEEE society and council awards | IEEE Computer Society Charles Babbage Award | [
"Technology"
] | 548 | [
"Science and technology awards",
"Computer science",
"Computer science awards"
] |
70,449,997 | https://en.wikipedia.org/wiki/Focal%20spot%20blooming | Focal spot blooming is the unwanted change in the focal spot size of an X-ray tube during change in exposure.
Cause
Focal spot blooming is caused by increased mAs (the tube current–time product). When high exposure settings are used, the electron beam from the cathode fails to focus on a single point because of electrostatic repulsion between the electrons.
References
Radiology
X-ray instrumentation | Focal spot blooming | [
"Technology",
"Engineering"
] | 74 | [
"X-ray instrumentation",
"Measuring instruments"
] |
70,450,437 | https://en.wikipedia.org/wiki/Audi/Bentley%2090%C2%B0%20twin-turbocharged%20V8%20racing%20engine | The Audi/Bentley 90° twin-turbocharged V8 racing engine is a 3.6-liter and 4.0-liter, twin-turbocharged, four-stroke, 90-degree, V8 racing engine, used in the Audi R8C, Audi R8R, Audi R8 and Bentley Speed 8 Le Mans Prototype race cars, between 1999 and 2005.
Audi R8C/R8R engine
The R8C and R8R both use 3.6-liter, twin-turbocharged V8 engines, producing between , and between of torque, while using two air restrictors, and pushing of absolute boost pressure. While the R8R has a large number of vents placed on the nose, most of the intakes and air exits on the R8C are placed on the sides.
The R8R was estimated to boast around from its V8 engine, allowing it to hit in 1999 at Le Mans (the original claims were that the car could go ).
Audi R8 engine
The R8 is powered by a 3.6 L Audi V8 with Fuel Stratified Injection (FSI), which is a variation on the concept of gasoline direct injection developed by VW; it maximizes both power and fuel economy at the same time. FSI technology can be found in products available to the public, across all brands in the Volkswagen Group.
The power supplied by the R8, officially listed at about in 2000, 2001, and 2002, in 2003 and 2004, and in 2005, is sent to the rear wheels via a Ricardo six-speed sequential transmission with an electropneumatic paddle shift. Unofficially, the works team Audi R8 for Le Mans (2000, 2001, and 2002) is said to have had around instead of the quoted 610 hp. The numbers were quoted at speed, and were due to the car making 50 extra horsepower due to twin ram-air intakes at speeds over . Official torque numbers were quoted for this version of the engine at at 6500 rpm (2004/2005), but the 2002/2003-spec engine produced more torque; with at 5500 rpm, with boost pressure set at absolute. The equation for horsepower (torque divided by 5250, multiplied by rpm) for these numbers produces a horsepower rating of at the same 6500 rpm (516/5250*6500=638).
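Written as a formula (with the rounded divisor 5250 used in the text; the exact constant is 33,000/(2π) ≈ 5252), the parenthetical calculation for 516 lb·ft at 6500 rpm works out as follows:

```latex
P\ [\mathrm{hp}] \approx \frac{T\ [\mathrm{lb{\cdot}ft}] \times N\ [\mathrm{rpm}]}{5250},
\qquad
\frac{516 \times 6500}{5250} \approx 638.9\ \mathrm{hp}
```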
Restrictor changes for 2003 brought the power down to 550 bhp for anyone still racing with the R8, but the maximum torque hardly changed.
For 2005, the ACO still felt that the R8 needed to be kept in check, so it reduced the restrictor size on the R8's engine, due to the car not meeting new hybrid regulations, and stipulated that the car carry ballast weight in an attempt to make the races more competitive. The R8 was restricted even further, to only 520 bhp.
Bentley Speed 8/EXP Speed 8 engine
The engine from the Audi R8, a 3.6-liter V8, with (Honeywell Turbo Technologies) turbocharger, was used as the initial powerplant for the Bentley in 2001. It produced and over of torque, via two intake restrictors, with boost pressure limited to by regulations.
Following its initial year of competition, the Audi-sourced V8 was modified to better suit the EXP Speed 8. This saw the engine expand to 4.0 liters, producing between , and of torque, using two intake restrictor plates, with boost pressure still being limited to by regulations. This would ultimately lead to Bentley redesigning the car for 2003, leading to the change of name to simply Speed. Without the intake restrictor plates (completely unrestricted), and with boost pressure set at around , the 4.0-liter engine is reportedly capable of producing up to , and about of torque.
Applications
Audi R8C
Audi R8R
Audi R8
Bentley Speed 8
References
Volkswagen Group
V8 engines
Audi engines
Bentley engines
Volkswagen Group engines
Gasoline engines by model
Engines by model
Piston engines
Internal combustion engine | Audi/Bentley 90° twin-turbocharged V8 racing engine | [
"Technology",
"Engineering"
] | 815 | [
"Internal combustion engine",
"Engines",
"Engines by model",
"Piston engines",
"Combustion engineering"
] |
70,451,466 | https://en.wikipedia.org/wiki/LRSVM%20Tamnava | LRSVM Tamnava (, named for Tamnava, a river in Serbia) is a modular multiple rocket launcher developed by Yugoimport SDPR. Vehicle is based on Kamaz 6560 8x8 truck chassis, but chassis from other manufacturers can also be used.
Development
The new 122/262 mm dual MLRS was revealed by Yugoimport SDPR in 2020, with the intended purpose of strengthening the artillery capabilities of the Serbian Armed Forces.
The LRSVM 262/122 mm is designed as a modular system: it can use launch containers with 262 mm caliber missiles as well as 122 mm Grad missiles. The system is fully automated, equipped with GPS and INS, and can operate completely autonomously, including the execution of a programmed combat mission. The vehicle can also carry two spare 122 mm launcher containers. Loading and unloading of the containers is done by a crane mounted on the platform. There is also the option of installing reusable launch tubes.
In the combined configuration, Tamnava's modular containers carry both 122 mm and 262 mm missiles, with two 122 mm launch modules (24 missiles) and two 262 mm modules (6 missiles). When it uses only 122 mm missiles, the system has 48 missiles at its disposal.
Operators
Future operators
Cyprus – the Cypriot National Guard bought one battery of six launchers.
Serbia – at the 2021 Partner military fair it was announced that the system should enter service with the Serbian Army in the short term.
Potential operators
Greece – in 2022 the Hellenic Army was presented with the MLRS Tamnava as it seeks to replace its RM-70s.
See also
References
Rocket artillery
Self-propelled artillery of Serbia
Military Technical Institute Belgrade
Multiple rocket launchers of Serbia
Modular rocket launchers
Military vehicles introduced in the 2010s | LRSVM Tamnava | [
"Engineering"
] | 363 | [
"Modular design",
"Modular rocket launchers"
] |
70,452,570 | https://en.wikipedia.org/wiki/Xenon%20dioxydifluoride | Xenon dioxydifluoride is an inorganic chemical compound with the formula XeO2F2. At room temperature it exists as a metastable solid, which decomposes slowly into xenon difluoride, but the cause of this decomposition is unknown.
Preparation
Xenon dioxydifluoride is prepared by reacting xenon trioxide with xenon oxytetrafluoride.
References
Nonmetal halides
Oxyfluorides
Xenon(VI) compounds | Xenon dioxydifluoride | [
"Chemistry"
] | 111 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
51,910,723 | https://en.wikipedia.org/wiki/Conservation%20banking | Conservation banking is an environmental market-based method designed to offset adverse effects, generally, to species of concern, are threatened, or endangered and protected under the United States Endangered Species Act (ESA) through the creation of conservation banks. Conservation banking can be viewed as a method of mitigation that allows permitting agencies to target various natural resources typically of value or concern, and it is generally contemplated as a protection technique to be implemented before the valued resource or species will need to be mitigated. The ESA prohibits the "taking" of fish and wildlife species which are officially listed as endangered or threatened in their populations. However, under section 7(a)(2) for Federal Agencies, and under section 10(a) for private parties, a take may be permissible for unavoidable impacts if there are conservation mitigation measures for the affected species or habitat. Purchasing “credits” through a conservation bank is one such mitigation measure to remedy the loss.
Conservation banks are permanently protected parcels of land with inherent abilities to harbor, preserve, and manage the survival of endangered and threatened species, along with their critical habitat. This allows the acquisition and protection of the parcels of land prior to future loss or disturbance to valued resources. Banks are often considered the more ecologically efficient option for mitigation because they generally incorporate larger tracts of land that enable higher-quality habitat and range connectivity, thereby creating a stronger chance of survival and sustainability for the species. Rather than have developments offset their effects by conserving small areas of habitat, conservation banking allows pooling multiple mitigation resources into a larger reserve. The intention of conservation banking is to create no net loss of the intended resources. Conservation banking may be used by various entities as a method of species and habitat protection, as long as it is approved by the permitting agency.
Background
Mitigation
Mitigation is the preservation of natural resources in order to offset unavoidable impacts to similar resources. Conservation banking mitigation is specific to species and their habitat which are protected under the Endangered Species Act. There are two other forms of mitigation besides conservation banking: in-lieu fee and permittee-responsible programs. In-lieu fee programs allow a permittee to contribute money into a United States Fish and Wildlife Service (USFWS) approved fund in lieu of implementing their own mitigation. To date, in-lieu fee programs have only applied to wetlands. The sponsor of the fund then implements an appropriate mitigation project when enough money has been collected through the fund. In these situations, the fund sponsor is fully liable for the success of the mitigation. The second alternative form of mitigation is the permittee-responsible program, under which the permittee takes on implementation and assumes liability for their own mitigation project to offset effects.
Benefits
There is generally greater security associated with a conservation bank. This is due to the stringent performance standards imposed on bank owners by the USFWS, which also requires them to have adequate funding into perpetuity and to have long-term management plans. Purchasing of credits by the easement holder from the landowner creates a legal contract, known as a conservation easement. The conservation easement binds the landowner to uphold the requirements of the conservation bank. Another advantage is that purchasing credits from a conservation bank ensures that species and/or habitat protection is already in place before the impact occurs. In addition, the shift of liability for habitat and species mitigation success to the conservation bank owner is a benefit to the developer or permittee.
History
Conservation banking is derived from wetland mitigation banks that were created in the early 1990s. Through Federal agency efforts, mitigation banks were created to focus on preserving wetlands, streams, and other aquatic habitats or resources and offered compensatory mitigation credits to offset unavoidable effects on the habitats or resources under Section 404 of the Clean Water Act. After the “Federal Guidance for the Establishment, Use, and Operation of Mitigation Banks (60 FR 58605–58614)” was published in 1995, California contemporaneously led efforts to create conservation banks so as to further increase regional conservation, as growing development threatened species and their habitat. Approval of conservation banks for various federally listed species by the USFWS, in conjunction with other Federal agencies, began throughout the early 1990s. Collectively, the nation’s 130 conservation banks amount to over 160,000 acres of permanently protected land.
Endangered Species Act Connection
Under the Endangered Species Act of 1973, endangered or threatened species and their respective critical habitat and geographical range are protected, with efforts made to restore the species and habitat to well-being. Under ESA Section 10(b), takings are permitted only if the taking is incidental to an otherwise lawful activity, and effects must be minimized and mitigated to the maximum extent practicable. For projects and development that will damage an at-risk species’ habitat, such as by reducing, modifying, or degrading it, permittees are required to mitigate the impact. Conservation banks act as a mechanism for compensation when a species or habitat is affected during development by providing credits that can be purchased by permittees to offset their negative impact.
Function
Conservation banking is a market program that increases the bank owner or landowner's stewardship of and incentive for permanently protecting their land by providing them a set number of habitat or species credits that the respective owners are able to sell. In order to satisfy species or habitat conservation requirements, these conservation credits can be sold to projects or developments that result in unavoidable and adverse impacts to species. Essentially, conservation banks offset the cost of mitigating the loss of or damage to a species and/or its habitat.
Traditionally, preservation of some habitat area of an at-risk species was required of a project permittee during development. This could result in habitat that became isolated and small, with reduced connectivity or functionality, and was more costly to maintain. Comparatively, conservation banks are more cost-effective as they are able to maintain larger blocks of land with greater functionality for a species, such as by allowing habitat connectivity. For purchasers, this is also time-effective, allowing them to forgo the responsibility of handling on- or off-site mitigation measures that can run into administrative delays due to the USFWS review and approval process. After the public or private party purchases credits, a bank transfer occurs between the project party and the banker. The banker is then perpetually bound to conserve and manage the conservation bank.
Creation Process
In California, a multi-agency process oversees the review and approval of conservation banks by the Inter-agency Review Team, which can be composed of all or some of the following agencies: typically the U.S. Fish and Wildlife Service, the National Oceanic and Atmospheric Administration's National Marine Fisheries Service, and the California Department of Fish and Wildlife. After review and approval, the Inter-agency Review Team and the conservation bank sponsor sign a legally binding conservation bank enabling instrument, which details the responsibilities of each party and includes a management plan, an endowment funding agreement, and other documents detailing the operations of the conservation bank.
Unless the lands have been previously listed or designated for other conservation purposes, private, Tribal, state, and local government lands are all considered eligible to be conservation banks. Agricultural lands, such as those used for farming, ranching, timber operations, or croplands, may be suitable for the establishment of a conservation bank if the special-status species habitat on-site is intact or restored; however, agricultural and forestry activities may need to be modified, reduced, or stopped entirely if necessary to protect the conservation values of the land.
Management Plan
Establishment of a bank requires a management plan that outlines necessary management activities and endowment funds. The intention of the plan is to describe the long-term management activities of the conservation bank. The plan describes restricted and allowed activities and provides guidance on all monitoring and reporting requirements. The minimal requirements of a management plan include:
A full geophysical description of the site, including the area, geographical setting, neighboring land uses, and any relevant cultural or historic features located in the site.
Identifies the biological resources within the bank, such as a vegetation map.
Describes restricted and permitted activities that can occur on-site
The objectives and biological goals of the conservation bank are described.
All management activities are fully described for the conservation bank in order to meet the objectives and biological goals, such as necessary ecological restoration of its habitats, incorporation of public use and access, and budgeting requirements.
Necessary monitoring schedules, including special management plan activities.
If necessary, outlines how future management will occur, such as decision trees or similar.
Creation of the bank must also include plans for remedial action in case bank owners are unable to fulfill their agreements. Remedial actions can include forfeiting the property to a third party to uphold the requirements of the bank or posting a bond valued equivalent to the property. Typically acts of nature, including earthquakes, floods, or fires, are excluded from liability of the bank owner.
Credits
Credits are essentially the currency of the conservation value associated with the habitat and/or species which may be affected by development. It is the ecological value of a species or habitat. The permitting agency is responsible for determining the credits available at any given bank, based on the number of species and the habitat characteristic for those species, on the land owned by the bank. They then allocate the appropriate number of credits to the bank owner, who can then establish the price through negotiation with agencies. Pricing of conservation credits are variable based on the type of species impacted through a developmental take. Additionally, the market forces of supply and demand largely dictate the price of any given credit, and the value may fluctuate based on many other economic factors such as land value, competition, and speculation about development in a certain habitat area. Current data suggests that conservation credits range in price from a low of $1500 per mitigation of a Gopher Tortoise to as much as $325,000 for vernal pool preservation.
Locations currently used
There are currently fourteen states and Saipan, the largest of the Northern Mariana Islands, with approved USFWS conservation banks. These states include Arizona, California, Colorado, Florida, Kansas, Maryland, Mississippi, Oklahoma, Oregon, South Carolina, Texas, Utah, Washington, and Wyoming.
Nationally, some species with the largest respective habitat coverage include: American burying beetle, California tiger salamander, California red-legged frog, calippe silverspot butterfly, Florida panther, golden-cheeked warbler, lesser prairie chicken, Utah prairie dog, valley elderberry longhorn beetle, vernal pool fairy shrimp, vernal pool tadpole shrimp.
In 1995, California was the first state to create a conservation bank and continues to be the national leader in number of conservation banks, with over 30 established banks. Species benefited in these banks include the burrowing owl, coastal sage scrub, delta smelt, California giant garter snake, longfin smelt, California salmonids, San Bernardino kangaroo rat, San Joaquin kit fox, Santa Ana River Woollystar, Swainson's Hawk, and valley elderberry longhorn beetle. Examples of Californian habitats include ephemeral drainages, riparian zones, vernal pools, and wetlands.
Future outlook
Two pieces of recent legislation were created which will likely affect the future of conservation banking. A draft of the Endangered Species Act Compensatory Mitigation Policy was proposed by the U.S. Fish and Wildlife Service in September 2016 with the intention of creating a mechanism for the US Department of the Interior to comply with Executive Order (80 FR 68743), which directs Federal agencies that manage natural resources “to avoid and then minimize harmful effects to land, water, wildlife, and other ecological resources (natural resources) caused by land- or water-disturbing activities…” This policy would provide guidance to the USFWS about planning and implementation of compensatory mitigation strategies. If adopted, the policy would require a shift from project-by-project compensatory mitigation approaches to broader, landscape-oriented approaches such as conservation banking.
In addition, the California legislature passed Assembly Bill 2087, which will enable large conservation goals to be achieved through the creation of advance mitigation credits associated with FWS Regional Conservation Investment Strategies (RCIS). This is important for the future of conservation banking because the bill allows for consideration of mitigation for impacts to wildlife and habitat in conservation strategy planning and decision making.
See also
Endangered Species Act
US Fish and Wildlife Service
Mitigation banking
Environmental mitigation
Biodiversity banking
Biodiversity offsetting
References
Banking
Environmental law
Environmental mitigation | Conservation banking | [
"Chemistry",
"Engineering"
] | 2,594 | [
"Environmental mitigation",
"Environmental engineering"
] |
51,919,012 | https://en.wikipedia.org/wiki/Gilmour%20Space%20Technologies | Gilmour Space Technologies is a venture-funded Australian aerospace company that is developing hybrid-propellant rocket engines and associated technologies to support the deployment of a low-cost launch vehicle.
Founded in 2012, Gilmour Space aims to provide space launch services to the small-satellite market using Australian-built Eris orbital rockets launched from its private spaceport in North Queensland. The company also intends to offer a ride-sharing service, in addition to a modular G-Sat small satellite bus/platform.
The maiden flight of Eris Block 1, which was unveiled by Prime Minister Anthony Albanese as Australia's first sovereign orbital rocket, is planned for no earlier than January of 2025 from the Bowen Orbital Spaceport in Abbot Point, Bowen.
Gilmour Space has long-term ambitions to develop a range of Eris-class launch vehicles capable of carrying larger satellites/payloads into low Earth orbits, and eventually provide space access for crewed orbital missions.
Founding
Gilmour Space was founded in Singapore (2012; closed 2019) and Australia (2013) by former banker, Adam Gilmour, and his brother James Gilmour.
The company's first project in 2013 was to design and manufacture high-fidelity spaceflight simulators and replicas for a number of space-related exhibits and the Spaceflight Academy Gold Coast. It began its rocket development program in 2015; and within 18 months, successfully launched Australia and Singapore's first privately developed hybrid test rocket using proprietary 3D printed fuel.
Since then, the company has been developing larger rockets, including the One Vision suborbital rocket and Eris orbital launch vehicle (more below).
Investors
As a leading New Space pioneer in Australia, Gilmour Space is backed by some of the country's largest investors, including Blackbird Ventures (which led its Series A fund raise) and Main Sequence Ventures (which led its Series B raise); as well as international investors like Fine Structure Ventures (Series C) and 500 Startups. Other investors include Queensland Investment Corporation and Australian superannuation funds Hostplus, HESTA and NGS Super.
Launch vehicles
RASTA test rocket
RASTA (Reusable Ascent SeparaTion Article) was a sub-orbital sounding rocket launched by Gilmour Space on 22 July 2016, propelled by a proprietary hybrid rocket engine. It performed nominally during the test flight and reached an apogee of 5 km. RASTA was the first launch vehicle flown by Gilmour Space and was the world's first demonstration of a rocket launch using 3D printed fuel.
One Vision suborbital rocket
One Vision was a sub-orbital sounding rocket designed to test Gilmour Space's new mobile launch platform and their hybrid rocket engines. On 29 July 2019, One Vision was prepared and fuelled for its maiden test flight, however, during the countdown to launch, the vehicle suffered an anomaly, resulting in a premature end to the mission. The anomaly was caused by a pressure regulator in the oxidiser tank that had failed to maintain required pressure, causing damage to the tank. According to the company, after a detailed investigation into the anomaly, 15 key recommendations were implemented into the design of Eris. As part of the One Vision launch campaign, the company also designed and built a mobile rocket launch platform (as there were no commercial Australian launch sites at the time), which was successfully tested during the campaign.
Eris-1 (Orbital rocket)
Gilmour Space is currently developing its Eris Block 1 rocket, a three-stage small-lift launch vehicle designed to launch up to 300 kg of payload to low Earth orbit. The vehicle is known to have four of Gilmour's Sirius hybrid rocket motors propelling the first stage, another Sirius motor in its second stage, and a new Phoenix liquid rocket engine in its third stage. Eris has a height of 25m and a diameter of 2m for the first stage, which tapers at the interstage of the first and second stage to 1.5m. The payload fairing has two diameter configurations, being 1.5m and 1.2m.
Eris' maiden launch is targeted for early 2025, pending final approvals from the Federal Government and Australian Space Agency. It will be the first Australian orbital rocket to launch from Australia, and the first orbital launch attempt from Australia in over 50 years. Moreover, if successful, Eris could be the world's first hybrid rocket to achieve orbit.
Gilmour Space has revealed it is developing an Eris Block 2 vehicle capable of lifting up to 1,000 kg to low Earth orbit, which is expected to enter commercial service in 2026. The company has also unveiled future plans for an Eris Heavy variant, which would be capable of lifting 4,000 kg payloads into orbit. If built, Eris Heavy would be classified as a medium-lift launch vehicle, potentially capable of carrying human-rated spacecraft.
Eris first went vertical on the launchpad on 11 April 2024 in preparation for launch, and successfully conducted its first full wet dress rehearsal on 30 September 2024. Gilmour Space was granted a launch permit for Eris by the Australian Space Agency on 4 November. The second and final wet test was conducted in early December. As of 13 January 2025, Gilmour is still waiting on a launch permit from the Civil Aviation Safety Authority (CASA). The company is targeting a prospective launch date in February.
Engine static tests
Since starting its rocket program in 2015, Gilmour Space has conducted hundreds of engine static test firings, most recently:
Bowen Orbital Spaceport (BOS)
In May 2021, results from an environmental and technical study conducted by the Queensland government for Abbot Point, Bowen gave Gilmour Space the green light to begin work on an orbital launch facility located in the Abbot Point Development Area.
Since then, the company has engaged with the indigenous Juru people and local businesses to construct the Bowen Orbital Spaceport. When approved, this privately operated site will provide Gilmour Space with launch access to 20° to 65° low- to mid-inclination equatorial orbits.
Following final approvals from the Federal Government and Australian Space Agency, BOS became Australia's first commercial orbital spaceport on the 5th of March 2024, with its maiden launch of Eris-1 (also Australia's first orbital launch vehicle) originally planned for later in 2024. Following delays with the issuance of launch permits, this has since been pushed back to early 2025.
Others
In February 2018 (since lapsed), Gilmour Space signed a reimbursable Space Act Agreement with NASA to collaborate on various research, technology development and educational initiatives, including the testing of its MARS rover at Kennedy Space Center.
In December 2019, Gilmour Space signed a statement of strategic intent with the Australian Space Agency as a demonstration of its commitment to launch Australia to space.
In June 2022, it was confirmed that Gilmour Space had been awarded a federal Modern Manufacturing Initiative Collaboration grant worth $52 million to establish the Australian Space Manufacturing Network (ASMN).
In mid 2024, construction was completed on a common testing and manufacturing facility in Yatala, Queensland, which will also serve as Gilmour Space’s new headquarters. The facility is a large warehouse with an annexe for corporate offices, and is located within the Stockland Distribution Centre South.
References
Space technology
Private spaceflight companies
Aerospace companies of Australia
Companies based on the Gold Coast, Queensland
Space programme of Australia | Gilmour Space Technologies | [
"Astronomy"
] | 1,512 | [
"Space technology",
"Outer space"
] |
51,924,657 | https://en.wikipedia.org/wiki/Prince%20%28cipher%29 | Prince is a block cipher targeting low latency, unrolled hardware implementations. It is based on the so-called FX construction. Its most notable feature is the alpha reflection: the decryption is the encryption with a related key which is very cheap to compute. Unlike most other "lightweight" ciphers, it has a small number of rounds and the layers constituting a round have low logic depth. As a result, fully unrolled implementation are able to reach much higher frequencies than AES or PRESENT. According to the authors, for the same time constraints and technologies, PRINCE uses 6–7 times less area than PRESENT-80 and 14–15 times less area than AES-128.
Overview
The block size is 64 bits and the key size is 128 bits. The key is split into two 64-bit keys k0 and k1. The input is XORed with k0, then processed by a core function using k1. The output of the core function is XORed with k0' to produce the final output (k0' is a value derived from k0). Decryption is done by exchanging k0 and k0' and by feeding the core function with k1 XORed with a constant denoted alpha.
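A minimal structural sketch of this whitening scheme and the alpha reflection is given below. The core round function is a caller-supplied placeholder, the k0' derivation and the alpha constant are taken from the published specification, and the decryption shortcut is valid only because the real PRINCE core satisfies the reflection property (running the core with k1 XOR alpha inverts it).

```python
MASK64 = (1 << 64) - 1
ALPHA = 0xC0AC29B7C97C50DD  # reflection constant from the PRINCE specification


def derive_k0_prime(k0: int) -> int:
    """k0' = (k0 rotated right by 1) XOR (k0 shifted right by 63)."""
    rot = ((k0 >> 1) | ((k0 & 1) << 63)) & MASK64
    return (rot ^ (k0 >> 63)) & MASK64


def prince_encrypt(block: int, k0: int, k1: int, core) -> int:
    """FX construction: pre-whiten with k0, run the core under k1, post-whiten with k0'."""
    return core(block ^ k0, k1) ^ derive_k0_prime(k0)


def prince_decrypt(block: int, k0: int, k1: int, core) -> int:
    """Alpha reflection: decryption reuses the encryption circuit with
    k0 and k0' swapped and the core key replaced by k1 XOR ALPHA.
    This only round-trips when `core` actually has the reflection property."""
    return core(block ^ derive_k0_prime(k0), k1 ^ ALPHA) ^ k0
```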
The core function contains 5 "forward" rounds, a middle round, and 5 "backward" rounds, for 11 rounds in total. The original paper mentions 12 rounds without explicitly depicting them; if the middle round is counted as two rounds (as it contains two nonlinear layers), then the total number of rounds is 12.
A forward round starts with a round constant XORed into the state, then a nonlinear layer S, and finally a linear layer M. The "backward" rounds are exactly the inverse of the "forward" rounds except for the round constants.
The nonlinear layer S is based on a single 4-bit S-box which can be chosen among the affine equivalents of 8 specified S-boxes.
The linear layer M consists of multiplication by a 64×64 matrix M' and a shift-row operation similar to the one in AES but operating on 4-bit nibbles rather than bytes.
M' is constructed from 16×16 matrices M0 and M1 in such a way that the multiplication by M' can be computed by four smaller multiplications, two using M0 and two using M1.
The middle round consists of the S layer, followed by the M' layer, followed by the inverse S layer.
Cryptanalysis
To encourage cryptanalysis of the Prince cipher, the organizations behind it created the PRINCE Challenge.
The paper "Security analysis of PRINCE" presents several attacks on full and round reduced variants, in particular, an attack of complexity 2125.1 and a related key attack requiring 233 data.
A generic time–memory–data tradeoff for FX constructions has been published, with an application to Prince. The paper argues that the FX construction is a fine solution to improve the security of a widely deployed cipher (like DES-X did for DES) but that it is a questionable choice for new designs. It presents a tweak to the Prince cipher to strengthen it against this particular kind of attack.
A biclique cryptanalysis attack has been published on the full cipher. It is somewhat in line with the estimation of the designers, since it reduces the key search space by 2^1.28 (the original paper mentions a factor of 2).
The paper "Reflection Cryptanalysis of PRINCE-Like Ciphers" focuses on the alpha reflection and establishes choice criteria for the alpha constant. It shows that a poorly chosen alpha would lead to efficient attacks on the full cipher; but the value randomly chosen by the designers is not among the weak ones.
Several meet-in-the-middle attacks have been published on round reduced versions.
An attack in the multi-user setting can find the keys of 2 users among a set of 2^32 users in time 2^65.
An attack on 10 rounds with overall complexity of 118.56 bits has been published.
An attack on 7 rounds with time complexity of 2^57 operations has been published.
A differential fault attack has been published, using 7 faulty ciphertexts under a random 4-bit nibble fault model.
The paper "New approaches for round-reduced PRINCE cipher cryptanalysis" presents boomerang attack and known-plaintext attack on reduced round versions up to 6 rounds.
In 2015, a few additional attacks were published, but they are not freely available.
Most practical attacks on reduced round versions
References
External links
http://eprint.iacr.org/2012/529.pdf original paper: "PRINCE – A Low-latency Block Cipher for Pervasive Computing Applications"
https://www.emsec.rub.de/research/research_startseite/prince-challenge The Prince challenge home page
https://github.com/sebastien-riou/prince-c-ref Software Implementations in C
https://github.com/weedegee/prince Software Implementations in Python
https://github.com/huljar/prince-vhdl Hardware Implementation in VHDL
Block ciphers
Cryptography | Prince (cipher) | [
"Mathematics",
"Engineering"
] | 991 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
51,924,664 | https://en.wikipedia.org/wiki/Florida%20Building%20Code | The Florida Building Code (FBC) is a set of standards designed by the Florida Building Commission for the construction of buildings in the US state of Florida. Many regulations and guidelines distributed are important benchmarks regarding hurricane protection. Miami-Dade County was the first in Florida to certify hurricane-resistant standards for structures which the Florida Building Code subsequently enacted across all requirements for hurricane-resistant buildings. Many other states reference the requirements set in the Florida Building codes, or have developed their own requirements for hurricanes.
The Florida Building Code is also based upon the International Building Code (IBC) used in the United States.
Hurricane guidelines
The 2010 edition of the Florida Building Code introduced significant changes to wind load design, in particular the presentation of the wind speed maps.
Miami-Dade and Broward Counties are both included in the High-Velocity Hurricane Zone (HVHZ), which carries more stringent requirements. Other counties, such as Palm Beach County, do not require the same HVHZ building standards for compliance with the Florida Building Code.
Both Miami-Dade County and the State of Florida maintain web-searchable databases of products approved for use as hurricane protection. These typically include not only actual test results from certified independent testing laboratories but also "Product Approval Drawings" or "Installation Instructions", which provide specifications for hurricane shutter assembly and installation.
Impact resistance of building materials is measured and tested via TAS 201, 202, and 203.
See also
International Building Code (IBC)
References
Analysis of Laws Relating to Florida Coastal Zone Management. University of Florida, Center for Governmental Responsibility. 1976.
State Consumer Action: Summary '74. pp. 133–134.
Tropical cyclone preparedness
Building codes
Standards of the United States | Florida Building Code | [
"Engineering"
] | 349 | [
"Building engineering",
"Building codes"
] |
51,925,110 | https://en.wikipedia.org/wiki/Blastocladia%20bonaerensis | Blastocladia bonaerensis is a species of aquatic fungus from Argentina.
References
External links
Mycobank entry
Fungi described in 2006
Blastocladiomycota
Fungus species | Blastocladia bonaerensis | [
"Biology"
] | 38 | [
"Fungus stubs",
"Fungi",
"Fungus species"
] |
51,926,348 | https://en.wikipedia.org/wiki/Fig%20Tree%20Formation | The Fig Tree Formation, also called Fig Tree Group, is a stromatolite-containing geological formation in South Africa. The rock contains fossils of microscopic life forms of about 3.26 billion years old. Identified organisms include the bacterium Eobacterium isolatum and the algae-like Archaeosphaeroides barbertonensis. The fossils in the Fig Tree Formation are considered some of the oldest known organisms on Earth, and provide evidence that life may have existed much earlier than previously thought. The formation is composed of shales, turbiditic greywackes, volcaniclastic sandstones, chert, turbiditic siltstone, conglomerate, breccias, mudstones, and iron-rich shales.
Meteorite Impact
This formation also contains evidence of the biggest known meteorite impact on Earth.
See also
Archean life in the Barberton Greenstone Belt
Warrawoona Group
References
Further reading
Byerly G.R., Lower D.R. & Walsh M.M. (1986). Stromatolites from the 3300–3500-Myr Swaziland Supergroup, Barberton Mountain Land, South Africa. Nature, 319: 489–491.
Geologic formations of South Africa
Archean Africa
Sandstone formations
Shale formations
Conglomerate formations
Siltstone formations
Mudstone formations
Chert
Fossiliferous stratigraphic units of Africa
Paleontology in South Africa
Origin of life | Fig Tree Formation | [
"Biology"
] | 294 | [
"Biological hypotheses",
"Origin of life"
] |
51,927,680 | https://en.wikipedia.org/wiki/Transition-metal%20allyl%20complex | Transition-metal allyl complexes are coordination complexes with allyl and its derivatives as ligands. Allyl is the radical with the connectivity CH2CHCH2, although as a ligand it is usually viewed as an allyl anion CH2=CH−CH2−, which is usually described as two equivalent resonance structures.
Examples
The allyl ligand is common in organometallic chemistry. Usually, allyl ligands bind to metals via all three carbon atoms, the η3-binding mode. The η3-allyl group is classified as an LX-type ligand in the Green LXZ ligand classification scheme, serving as a 3e– donor using neutral electron counting and a 4e– donor using ionic electron counting.
Scope
Commonly, allyl ligands occur in mixed ligand complexes. Examples include (η3-allyl)Mn(CO)4 and CpPd(allyl).
Substituents on the allyl group are also common, e.g. 2-methallyl.
Homoleptic complexes
bis(allyl)nickel
bis(allyl)palladium
bis(allyl)platinum
tris(allyl)chromium
tris(allyl)rhodium
tris(allyl)iridium
Chelating bis(allyl) complexes
1,3-Dienes such as butadiene and isoprene dimerize in the coordination spheres of some metals, giving chelating bis(allyl) complexes. Such complexes also arise from ring-opening of divinylcyclobutane. Chelating bis(allyl) complexes are intermediates in the metal-catalyzed dimerization of butadiene to give vinylcyclohexene and cycloocta-1,5-diene.
Allyl σ ligands
Complexes with η1-allyl ligands (classified as X-type ligands) are also known. One example is CpFe(CO)2(η1-C3H5), in which only the methylene group is attached to the Fe centre (i.e., it has the connectivity [Fe]–CH2–CH=CH2). As is the case for many other η1-allyl complexes, the monohapticity of the allyl ligand in this species is enforced by the 18-electron rule, since CpFe(CO)2(η1-C3H5) is already an 18-electron complex, while an η3-allyl ligand would result in an electron count of 20 and violate the 18-electron rule. Such complexes can convert to the η3-allyl derivatives by dissociation of a neutral (two-electron) ligand L. For CpFe(CO)2(η1-C3H5), dissociation of L = CO occurs under photochemical conditions:
CpFe(CO)2(η1-C3H5) → CpFe(CO)(η3-C3H5) + CO
Synthetic methods
Allyl complexes are often generated by oxidative addition of allylic halides to low-valent metal complexes. This route is used to prepare (allyl)2Ni2Cl2:
2 Ni(CO)4 + 2 ClCH2CH=CH2 → Ni2(μ-Cl)2(η3-C3H5)2 + 8 CO
A similar oxidative addition involves the reaction of allyl bromide with diiron nonacarbonyl. The oxidative addition route has also been used to prepare Mo(II) allyl complexes.
Other methods of synthesis involve addition of nucleophiles to η4-diene complexes and hydride abstraction from alkene complexes. For example, palladium(II) chloride attacks alkenes to give first an alkene complex, then abstracts hydrogen to give a dichlorohydridopalladium allyl complex, which then eliminates hydrogen chloride:
PdCl2 + >C=CHCH< → Cl2Pd–(η2-(>CCHCH<)) → Cl2Pd(H)(η3-(>CCHC<)) → ClPd(η3-(>CCHC<)) + HCl
One allyl complex can transfer an allyl ligand to another complex. An anionic metal complex can displace a halide to give an allyl complex. However, if the metal center is coordinated to 6 or more other ligands, the allyl may end up "trapped" as a σ (η1-) ligand. In such circumstances, heating or irradiation can dislodge another ligand to free up space for the alkene–metal bond.
In principle, salt metathesis reactions can install an allyl ligand from allylmagnesium bromide or a related allyllithium reagent. However, the carbanion salt precursors require careful synthesis, as allyl halides readily undergo Wurtz coupling. Mercury and tin allyl halides appear to avoid this side reaction.
Benzyl complexes
Benzyl and allyl ligands often exhibit similar chemical properties. Benzyl ligands commonly adopt either η1 or η3 bonding modes. The interconversion reactions parallel those of η1- or η3-allyl ligands:
CpFe(CO)2(η1-CH2Ph) → CpFe(CO)(η3-CH2Ph) + CO
In all bonding modes, the benzylic carbon atom is more strongly attached to the metal, as indicated by M–C bond distances, which differ by ca. 0.2 Å in η3-bonded complexes. X-ray crystallography demonstrates that the benzyl ligands in tetrabenzylzirconium are highly flexible. One polymorph features four η2-benzyl ligands, whereas another polymorph has two η1- and two η2-benzyl ligands.
Applications
Allyl complexes are often discussed in academic research, but few have commercial applications. A popular allyl complex is allyl palladium chloride.
The reactivity of allyl ligands depends on the overall complex, although the influence of the metal center can be roughly summarized as
(more reactive) Fe ≫ Pd > Mo > W (less reactive)
Such complexes are usually electrophilic (i.e., they react with nucleophiles), but nickel allyl complexes are usually nucleophilic (reacting with electrophiles). In the former case, the addition may occur at unusual positions, which can be useful in organic synthesis.
References
Organometallic chemistry
Transition metals
Allyl complexes
Coordination chemistry | Transition-metal allyl complex | [
"Chemistry"
] | 1,383 | [
"Organometallic chemistry",
"Coordination chemistry"
] |
71,905,774 | https://en.wikipedia.org/wiki/Journal%20of%20Materials%20Processing%20Technology | Journal of Materials Processing Technology is a peer-reviewed scientific journal covering research on all aspects of processing techniques used in manufacturing components from various materials. It is published by Elsevier and the editor-in-chief is J. Cao (Northwestern University).
Abstracting and indexing
The journal is abstracted and indexed in Scopus, Science Citation Index Expanded, Metadex, and Inspec. The journal has a 2021 impact factor of 6.162.
References
External links
Materials science journals
English-language journals
Elsevier academic journals | Journal of Materials Processing Technology | [
"Materials_science",
"Engineering"
] | 108 | [
"Materials science journals",
"Materials science"
] |
71,906,373 | https://en.wikipedia.org/wiki/Journal%20of%20the%20Mechanics%20and%20Physics%20of%20Solids | The Journal of the Mechanics and Physics of Solids is a monthly peer-reviewed scientific journal covering research, theory, and practice concerning the properties of materials. The journal was established in 1952 by Rodney Hill and is published by Elsevier. As of October 2022, the editor-in-chief is Huajian Gao (Nanyang Technological University). According to the Journal Citation Reports, the journal has a 2022 impact factor of 5.3.
Abstracting and indexing
The journal is abstracted and indexed in:
See also
Solid mechanics
Materials science
References
External links
Materials science journals
Elsevier academic journals
Monthly journals
Academic journals established in 1952
English-language journals | Journal of the Mechanics and Physics of Solids | [
"Materials_science",
"Engineering"
] | 132 | [
"Materials science journals",
"Materials science"
] |
64,563,432 | https://en.wikipedia.org/wiki/EM%20algorithm%20and%20GMM%20model | In statistics, EM (expectation maximization) algorithm handles latent variables, while GMM is the Gaussian mixture model.
Background
As a motivating example, consider red blood cell hemoglobin concentration and red blood cell volume data for two groups of people: an anemia group and a control group (i.e. people without anemia). As expected, people with anemia have lower red blood cell volume and lower red blood cell hemoglobin concentration than those without anemia.
Each observation x is a random vector containing these two measurements, and from medical studies it is known that such measurements are normally distributed within each group.
The group to which x belongs is denoted z, taking one value when x belongs to the anemia group and the other when x belongs to the control group. z follows a categorical distribution, whose parameters are non-negative and sum to one. See Categorical distribution.
The following procedure can be used to estimate the model parameters.
Maximum likelihood estimation can be applied. When the group label z of each observation is known, the log-likelihood function separates into per-group terms, and it can be maximized by taking partial derivatives with respect to each parameter and setting them to zero, obtaining closed-form estimates.
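In conventional notation for a mixture of Gaussians (the symbols here are illustrative and may differ from those used originally), the resulting estimators are simply each group's empirical proportion, mean, and covariance:

```latex
\hat{\phi}_j = \frac{1}{m}\sum_{i=1}^{m}\mathbf{1}\{z^{(i)}=j\},\qquad
\hat{\mu}_j = \frac{\sum_{i=1}^{m}\mathbf{1}\{z^{(i)}=j\}\,x^{(i)}}{\sum_{i=1}^{m}\mathbf{1}\{z^{(i)}=j\}},\qquad
\hat{\Sigma}_j = \frac{\sum_{i=1}^{m}\mathbf{1}\{z^{(i)}=j\}\,(x^{(i)}-\hat{\mu}_j)(x^{(i)}-\hat{\mu}_j)^{\top}}{\sum_{i=1}^{m}\mathbf{1}\{z^{(i)}=j\}}
```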
If z is known, estimating the parameters with maximum likelihood is quite simple. But if z is unknown, it is much more complicated.
In the unlabeled scenario, z is a latent variable (i.e. not observed), and the expectation–maximization algorithm is needed to estimate z as well as the other parameters. Generally, this problem is set up as a GMM, since the data in each group are normally distributed.
In machine learning, the latent variable z is considered a latent pattern lying beneath the data, which the observer cannot see directly. x is the known data, while the mixing proportions, means, and covariances are the parameters of the model. With the EM algorithm, the underlying pattern in the data can be found, along with estimates of the parameters. The wide applicability of this setting in machine learning is what makes the EM algorithm so important.
EM algorithm in GMM
The EM algorithm consists of two steps: the E-step and the M-step. First, the model parameters and the latent assignments can be randomly initialized. In the E-step, the algorithm tries to guess the value of z based on the current parameters, while in the M-step, the algorithm updates the model parameters based on the guess of z from the E-step. These two steps are repeated until convergence is reached.
The algorithm in GMM is:
Repeat until convergence:
1. (E-step) For each data point i and component j, set the responsibility, i.e. the posterior probability that point i belongs to component j, as written out below.
2. (M-step) Update the parameters using those responsibilities, as written out below.
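Written out in the same conventional notation (again an illustrative rendering rather than the article's original formulas), the two steps are:

```latex
\text{E-step:}\quad w_j^{(i)} := p\big(z^{(i)}=j \mid x^{(i)};\,\phi,\mu,\Sigma\big)\quad\text{for all } i, j \\[6pt]
\text{M-step:}\quad
\phi_j := \frac{1}{m}\sum_{i=1}^{m} w_j^{(i)},\quad
\mu_j := \frac{\sum_{i=1}^{m} w_j^{(i)}\,x^{(i)}}{\sum_{i=1}^{m} w_j^{(i)}},\quad
\Sigma_j := \frac{\sum_{i=1}^{m} w_j^{(i)}\,(x^{(i)}-\mu_j)(x^{(i)}-\mu_j)^{\top}}{\sum_{i=1}^{m} w_j^{(i)}}
```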
With Bayes' rule and the GMM assumptions (Gaussian class-conditional densities and categorical mixing proportions), the E-step responsibility can be written out explicitly.
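In the same illustrative notation, the responsibility expands to:

```latex
w_j^{(i)}
= \frac{p\big(x^{(i)}\mid z^{(i)}=j\big)\,p\big(z^{(i)}=j\big)}
       {\sum_{l} p\big(x^{(i)}\mid z^{(i)}=l\big)\,p\big(z^{(i)}=l\big)}
= \frac{\phi_j\,\mathcal{N}\big(x^{(i)};\,\mu_j,\Sigma_j\big)}
       {\sum_{l}\phi_l\,\mathcal{N}\big(x^{(i)};\,\mu_l,\Sigma_l\big)}
```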
In this way, the algorithm alternates between the E-step and the M-step, starting from the randomly initialized parameters, until the estimates converge.
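The loop can be sketched compactly in NumPy. The function and variable names below are illustrative (they do not come from any particular library); the two-group blood-cell example above corresponds to calling it with K = 2.

```python
import numpy as np


def gaussian_pdf(X, mean, cov):
    """Multivariate normal density evaluated for each row of X."""
    d = X.shape[1]
    diff = X - mean
    inv = np.linalg.inv(cov)
    norm = np.sqrt(((2.0 * np.pi) ** d) * np.linalg.det(cov))
    expo = -0.5 * np.einsum("ij,jk,ik->i", diff, inv, diff)
    return np.exp(expo) / norm


def em_gmm(X, K=2, n_iter=100, seed=0):
    """Minimal EM loop for a K-component Gaussian mixture (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    phi = np.full(K, 1.0 / K)                     # mixing proportions
    mu = X[rng.choice(n, size=K, replace=False)]  # means initialised from data points
    sigma = np.array([np.cov(X, rowvar=False) for _ in range(K)])

    for _ in range(n_iter):
        # E-step: responsibilities w[i, j] = P(z_i = j | x_i; phi, mu, sigma)
        w = np.column_stack(
            [phi[j] * gaussian_pdf(X, mu[j], sigma[j]) for j in range(K)]
        )
        w /= w.sum(axis=1, keepdims=True)

        # M-step: re-estimate proportions, means and covariances
        Nk = w.sum(axis=0)
        phi = Nk / n
        mu = (w.T @ X) / Nk[:, None]
        for j in range(K):
            diff = X - mu[j]
            sigma[j] = (w[:, j, None] * diff).T @ diff / Nk[j]
    return phi, mu, sigma

# Example use (hypothetical data): phi, mu, sigma = em_gmm(measurements, K=2)
```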
References
Machine learning
Regression models | EM algorithm and GMM model | [
"Engineering"
] | 571 | [
"Artificial intelligence engineering",
"Machine learning"
] |
64,570,030 | https://en.wikipedia.org/wiki/HAT-P-18 | HAT-P-18 is a K-type main-sequence star about 530 light-years away. The star is very old and has a concentration of heavy elements similar to solar abundance. A survey in 2015 detected very strong starspot activity on HAT-P-18.
Planetary system
In 2010 a transiting hot Saturn-sized planet was detected. Its equilibrium temperature is 841 K.
In 2014, observations utilizing the Rossiter–McLaughlin effect showed that the exoplanet HAT-P-18b is on a retrograde orbit, with an angle between the orbital plane of the planet and the parent star's equatorial plane equal to 132°.
Transit-timing variation measurements in 2015 did not detect additional planets in the system.
In 2016, the transmission optical spectra of the planet indicated that the atmosphere lacks detectable clouds or hazes and is blue in color due to Rayleigh scattering of light. The atmosphere seems to gradually evaporate, but at a slow rate – less than 2% of the planetary mass is lost per billion years. By contrast, spectra taken in 2022 showed extensive hazes and clear evidence of water vapour, along with a tail of escaping helium.
The dayside temperature of HAT-P-18b was measured in 2019 to be 1004 K.
References
Hercules (constellation)
K-type main-sequence stars
Planetary systems with one confirmed planet
Planetary transit variables
J17052315+3300450 | HAT-P-18 | [
"Astronomy"
] | 295 | [
"Hercules (constellation)",
"Constellations"
] |
64,570,365 | https://en.wikipedia.org/wiki/Steam%20contamination | Steam contamination is generally described as the decrease in the quality of steam commonly used in thermal power stations, the chemical industry, etc. It is frequently measured by the amount of sodium, silicon dioxide, and carbon dioxide dissolved in steam and expressed in μg/Kg.
References
Steam power | Steam contamination | [
"Physics"
] | 58 | [
"Power (physics)",
"Steam power",
"Physical quantities"
] |
53,437,321 | https://en.wikipedia.org/wiki/Pristimantis%20attenboroughi | Pristimantis attenboroughi, also known as Attenborough's rubber frog, is a species of frog in the family Strabomantidae. It is endemic to the Peruvian Andes and has been recorded in and near the Pui–Pui Protection Forest. It is the first amphibian named after David Attenborough. It was discovered by Edgar Lehr and Rudolf von May during a period of two years of studying the forests of Peru. The species description was based on 34 specimens caught at elevations of above sea level.
Description
Adult males measure and adult females in snout–vent length. The snout is short and rounded. No tympanum is present. The finger and toe tips are narrow and rounded, without circumferential grooves; neither lateral fringes nor webbing is present. The dorsal coloration ranges from pale gray to reddish brown to brownish olive. There are scattered flecks and sometimes an X-shaped scapular mark. Most specimens have dark grayish-brown canthal and supratympanic stripes. Juveniles are paler in coloration, yellowish to reddish brown, bearing contrasting dark brown flecks and distinct canthal and supratympanic stripes.
Reproduction occurs by direct development, that is, there is no free-living tadpole stage. The average egg diameter is .
Habitat and conservation
Pristimantis attenboroughi is known from upper montane forests and high Andean grasslands at above sea level where specimens were found living inside moss pads. A female was found guarding a clutch of 20 eggs inside moss.
Although this species could qualify as "endangered" or "vulnerable" because of its small range, the International Union for Conservation of Nature (IUCN) assessed it in 2018 as "near threatened". The category was chosen because the overall population is believed to be stable, the species is common, and much of the known range is within a protected area.
References
attenboroughi
Frogs of South America
Amphibians of the Andes
Amphibians of Peru
Endemic fauna of Peru
Endangered species
Amphibians described in 2017
David Attenborough
Taxa named by Edgar Lehr | Pristimantis attenboroughi | [
"Biology"
] | 434 | [
"Biota by conservation status",
"Endangered species"
] |
53,437,370 | https://en.wikipedia.org/wiki/Hazard%20elimination | Hazard elimination is a hazard control strategy based on completely removing a material or process causing a hazard. Elimination is the most effective of the five members of the hierarchy of hazard controls in protecting workers, and where possible should be implemented before all other control methods. Many jurisdictions require that an employer eliminate hazards if it is possible, before considering other types of hazard control.
Elimination is most effective early in the design process, when it may be inexpensive and simple to implement. It is more difficult to implement for an existing process, when major changes in equipment and procedures may be required. Elimination can fail as a strategy if the hazardous process or material is reintroduced at a later stage in the design or production phases.
The complete elimination of hazards is a major component to the philosophy of Prevention through Design, which promotes the practice of eliminating hazards at the earliest design stages of a project. Complete elimination of a hazard is often the most difficult control to achieve, but addressing it at the start of a project allows designers and planners to make large changes much more easily without the need to retrofit or redo work.
Understanding the five main hazard areas is a major part of assessing risks on a jobsite. The five main hazard areas are materials, environmental hazards, equipment hazards, people hazards, and system hazards. Materials can bring the hazards of inhalation, absorption, and ingestion. Equipment hazards relate to taking the proper precautions with machinery and tools. People can create hazards by becoming distracted, taking shortcuts, using machinery when impaired, and through general fatigue. System hazards concern making sure employees are properly trained for their jobs and ensuring that proper safety precautions are in place.
Typical examples
Removing the use of a hazardous chemical is an example of elimination. Some substances are difficult or impossible to eliminate because they have unique properties necessary to the process, but it may be possible to instead substitute less hazardous versions of the substance. Elimination also applies to equipment as well. For example, noisy equipment can be removed from a room used for other purposes, or an unnecessary blade can be removed from a machine. Prompt repair of damaged equipment eliminates hazards stemming from their malfunction.
Elimination also applies to processes. For example, the risk of falls can be eliminated by eliminating the process of working in a high area, by using extending tools from the ground instead of climbing, or moving a piece to be worked on to ground level. The need for workers to enter a hazardous area such as a grain elevator can be eliminated by installing equipment that performs the task automatically. Eliminating an inspection that requires opening a package containing a hazardous material reduces the inhalation hazard to the inspector.
Complications of Hazard Elimination
Understanding the risks of a workplace environment is one of the most important ways to remain safe on a worksite, and hazard elimination is the safest way to avoid serious injuries or fatalities. Assessing the risks of a workplace environment should be done at the design or development stage, because taking an entire hazard out of a project can change its whole trajectory.[12]
For example, removing hazardous materials before any work happens in a workplace environment is the ideal case, because the hazard is completely removed from the situation before anyone has to work around it. Working backwards to fix the problem after work has begun can create challenges, such as construction starting on a site before anyone realizes that hazardous material needs to be removed, forcing a costly effort to go back and fix the problem.
Deciding whether hazard elimination is the right solution for a project may require weighing multiple factors. Some examples include whether elimination is appropriate for the severity of the hazard, and whether the approach is effective, reliable, and durable. Determining whether the elimination of the hazard can be done in a timely and economically beneficial manner is one of the most important parts of the decision, because that is the motivation behind many projects.
Eliminating hazards around highways is a major issue due to the level of traffic. The Highway Safety Programs and Projects address major traffic concerns and give special priority to the safety of everyone on the road. Removing potential safety issues and addressing safety concerns is a costly undertaking; the average price of hazard elimination is around $400,000 to $1,000,000.
References
Industrial hygiene
Safety engineering
Risk analysis | Hazard elimination | [
"Engineering"
] | 857 | [
"Safety engineering",
"Systems engineering"
] |
53,440,268 | https://en.wikipedia.org/wiki/Lithium%20bis%28trifluoromethanesulfonyl%29imide | Lithium bis(trifluoromethanesulfonyl)imide, often simply referred to as LiTFSI, is a hydrophilic salt with the chemical formula LiC2F6NO4S2. It is commonly used as a Li-ion source in electrolytes for Li-ion batteries, as a safer alternative to the commonly used lithium hexafluorophosphate. It is made up of one Li cation and a bistriflimide anion.
Because of its very high solubility in water (> 21 m), LiTFSI has been used as lithium salt in water-in-salt electrolytes for aqueous lithium-ion batteries.
References
Lithium salts
Lithium-ion batteries
Organolithium compounds
Trifluoromethyl compounds | Lithium bis(trifluoromethanesulfonyl)imide | [
"Chemistry"
] | 163 | [
"Organolithium compounds",
"Lithium salts",
"Salts",
"Organic compounds",
"Reagents for organic chemistry",
"Organic compound stubs",
"Organic chemistry stubs"
] |
53,441,095 | https://en.wikipedia.org/wiki/Achiasmate%20meiosis | Achiasmate meiosis refers to meiosis without chiasmata, which are structures that are necessary for recombination to occur and that usually aid in the segregation of non-sister homologs. The pachytene stage of prophase I typically results in the formation of chiasmata between homologous non-sister chromatids in the tetrad chromosomes that form. The formation of a chiasma is also referred to as crossing over. When two homologous chromatids cross over, they form a chiasma at the point of their intersection. However, it has been found that there are cases where one or more pairs of homologous chromosomes do not form chiasmata during pachynema. Without a chiasma, no recombination between homologs can occur.
The traditional line of thinking was that without at least one chiasma between homologs, they could not be properly segregated during metaphase because there would be no tension between the homologs for the microtubules to pull against. This tension between the homologs is typically what allows the chromosomes to align along an axis of the cell (the metaphase plate) and to then properly segregate to opposite sides of the cell. Despite this, achiasmate homologs are still found to line up with the chiasmate chromosomes at the metaphase plate.
Chromosomal segregation strategies
Chiasmata play a crucial role in correctly segregating the chromosomes during meiosis I to maintain correct ploidy; when chiasmata fail to form, it typically results in aneuploidy and nonviable gametes. However, some species have been found to employ alternative methods to segregate chromosomes. They all involve linking the homologs together with some structure. These structures provide the same needed tension that chiasmata usually provide.
Synaptonemal complex and centromere interaction
One segregation strategy is to create a centromere-centromere interaction between achiasmate homologous chromosomes. Residual proteins from the synaptonemal complex (SC) ‘stick’ between the homologs' centromeres after diplotene, when the SC typically dissociates, allowing the homologs to achieve biorientation and attach correctly to the microtubules during anaphase I. This has been observed in budding yeast, Drosophila melanogaster, and mouse spermatocytes.
Heterochromatin
Heterochromatin is a tightly packed form of DNA. Threads of heterochromatin have been observed in Drosophila melanogaster, connecting achiasmate homologs and allowing them to be pulled back and forth by spindles as a connected pair.
Known achiasmatic species
Saccharomycodes ludwigii
While multiple species of budding yeast have been found to have residual SC proteins that connect the centromeres together when needed, nearly all of said species are chiasmatic and have been simply used as convenient model organisms. However, Saccharomycodes ludwigii also displays centromere-centromere interactions with SC proteins and is also almost entirely achiasmatic. It employs the breeding strategy of automixis (commonly used by many budding yeasts), in addition to a nearly complete lack of genetic mixing via crossovers, to gain the genetic/evolutionary advantages of cloning (asexual reproduction) while maintaining the heterozygosity typically afforded by sexual reproduction. S. ludwigii also creates strong connections between the tetrads produced by meiosis to promote breeding (automixis) within the tetrad. This breeding strategy may have evolved “through mutual selection between suppression of meiotic recombination and frequent intratetrad mating", which would have helped the trait spread to fixation.
Drosophila melanogaster
In Drosophila melanogaster, both oocytes and spermatocytes display achiasmy. In oocytes, neither the 4th nor the sex-determining chromosomes form chiasmata; in spermatocytes, no chiasmata form on any of the chromosomes. Heterochromatin threads have been observed in D. melanogaster oocytes. Unusually, D. melanogaster lack SCs altogether, so SC proteins likely do not play a role in this species' segregation strategy.
Amazon Molly
Amazon Mollies (Poecilia formosa) reproduce without recombination via gynogenesis. They mate with males of other species and the sperm triggers the development of their eggs, but the Amazon Mollies create diploid eggs that carry copies of only their own genes. There is no crossing over during their meiosis, indicating that they have achiasmate meiosis. It is theorized that this failure during the meiotic cycle is what creates the diploid eggs, and that sister chromatids are likely separated during meiosis instead of the homologs in this species. If sister chromatids are being separated instead of homologs, then proper segregation of homologs has failed in this species.
Insects
True bugs (order Heteroptera) include both achiasmate and chiasmate species with respect to spermatogenesis. The infraorder Cimicomorpha, specifically its families Anthocoridae, Microphysidae, Cimicidae, Miridae, and Nabidae, is achiasmate. Additionally, achiasmy has been reported in the infraorder Leptopodomorpha and in the family Micronectidae of the infraorder Nepomorpha. A deeper understanding of how meiosis proceeds in these achiasmate species is still under investigation.
Evolution
It is thought that achiasmatic meiosis is polyphyletic, as there is no distinct pattern to its occurrence, nor to the methods through which it occurs. It appears to instead be multiple instances of secondary loss of meiotic recombination that resulted in either the evolution of new segregation processes, or a shift to an existing backup system for segregation. Current evidence suggests the latter, that there are existing mechanisms to segregate homologs without chiasmata, as these mechanisms (heterochromatin and centromere-centromere interaction) have been observed in chiasmate species.
References
Cytogenetics
Classical genetics
Meiosis | Achiasmate meiosis | [
"Biology"
] | 1,326 | [
"Molecular genetics",
"Meiosis",
"Cellular processes"
] |
53,443,453 | https://en.wikipedia.org/wiki/Adiabatic%20quantum%20motor | An adiabatic quantum motor is a mechanical device, typically nanometric, driven by a flux of quantum particles and able to perform cyclic motions.
The adjective “adiabatic” in this context refers to the limit when the dynamics of the mechanical degrees of freedom is slow compared with the dwell time of the particles passing through the device. In this regime, it is commonly assumed that the mechanical degrees of freedom behave classically.
This class of devices works essentially as quantum pumps operated in reverse. While in a quantum pump, the periodic movement of some parameters pumps quantum particles from one reservoir to another, in a quantum motor a DC current of particles induces the cyclic motion of the device. One key feature of these motors is that quantum interferences can be used to increase their efficiency by enhancing the reflection coefficient of the scattered particles. Although there are several proposals for the realization of adiabatic quantum motors, none of them have been verified experimentally.
Adiabatic quantum motors
Anderson's adiabatic quantum motor
Thouless motor
Adiabatic quantum motors based on quantum dots
Nanomagnet coupled to a quantum spin Hall edge (formally equivalent to a Thouless motor)
Adiabatic quantum motors driven by temperature gradients
References
External links
Quantum models | Adiabatic quantum motor | [
"Physics"
] | 251 | [
"Quantum models",
"Quantum mechanics"
] |
53,443,624 | https://en.wikipedia.org/wiki/Primocryst | Primocryst is a term for the earliest-formed crystals in a magma, which are in contact with each other. These may also be referred to as cumulus crystals.
References
Crystals | Primocryst | [
"Chemistry",
"Materials_science"
] | 39 | [
"Crystallography stubs",
"Materials science stubs",
"Crystallography",
"Crystals"
] |
53,443,652 | https://en.wikipedia.org/wiki/Infrared%20photodissociation%20spectroscopy | Infrared photodissociation (IRPD) spectroscopy uses infrared radiation to break bonds (photodissociation) in molecules, often ions, within a mass spectrometer. In combination with post-ionization, this technique can also be used for neutral species. IRPD spectroscopy has been shown to use electron ionization, corona discharge, and electrospray ionization to obtain spectra of volatile and nonvolatile compounds. Ionized gases trapped in a mass spectrometer can be studied without the need for a solvent, unlike conventional infrared spectroscopy.
History
Scientists began to wonder about the energetics of cluster formation early in the 20th century. Henry Eyring developed the activated-complex theory describing the kinetics of reactions. Interest in studying the weak interactions of molecules and ions (e.g. van der Waals interactions) in clusters encouraged gas-phase spectroscopy; in 1962 D.H. Rank studied weak interactions in the gas phase using traditional infrared spectroscopy. D.S. Bomse used IRPD with an ICR to study isotopic compounds in 1980 at the California Institute of Technology. Spectroscopy of weakly bound clusters was limited by low cluster concentration and the variety of accessible cluster states. Cluster states vary in part due to frequent collisions with other species; to reduce collisions in the gas phase, IRPD forms clusters in low-pressure ion traps (e.g. FT-ICR). Complexes of nitrogen and water were among the first studied with the aid of a mass spectrometer, by A. Good at the University of Alberta in the 1960s.
Instrumentation
Photodissociation is used to detect the electromagnetic activity of ions, compounds, and clusters when spectroscopy cannot be directly applied. Low analyte concentration can be one factor inhibiting spectroscopy, especially in the gas phase. Mass spectrometers such as time-of-flight and ion cyclotron resonance instruments have been used to study hydrated ion clusters. Instruments are able to use ESI to effectively form hydrated ion clusters. Laser ablation and corona discharge have also been used to form ion clusters. Complexes are directed through a mass spectrometer, where they are irradiated with infrared light, for example from an Nd:YAG laser.
Application
Infrared photodissociation spectroscopy is a powerful means of studying the bond energies of coordination complexes. IRPD can measure varying bond energies of compounds, including dative bonds and coordination energies of molecular clusters. Structural information about analytes can be acquired by using mass selectivity and interpreting fragmentation. The spectroscopic information usually resembles that of linear infrared spectra and can be used to obtain detailed structural information on gas-phase species; in the case of metal complexes, insights into ligand coordination, bond activations and successive reactions can be obtained.
References
photodissociation spectroscopy
Mass spectrometry
Chemical bond properties
Spectroscopy | Infrared photodissociation spectroscopy | [
"Physics",
"Chemistry"
] | 559 | [
"Matter",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Electromagnetic spectrum",
"Mass spectrometry",
"Spectroscopy",
"Infrared",
"Chemical bond properties"
] |
67,532,073 | https://en.wikipedia.org/wiki/Tetranucleotide%20hypothesis | The tetranucleotide hypothesis of Phoebus Levene proposed that DNA was composed of repeating sequences of four nucleotides. It was very influential for three decades and was developed by Levene at least into the 1910s; the diagram at the right illustrates the view of Levene and Tipson. In 1940, at the time of Levene's death, Bass wrote in his obituary
As a result of Levene’s work we have an exact concept of the structures of these huge molecules, probably the most complex biological materials whose architectural picture has been reconstructed.
In that form there is an implication that the four bases are present in equal amounts in DNA, and small variations in the experimental values were assumed to be the result of experimental error.
However, Erwin Chargaff showed that the four frequencies were not equal, with variations consistent between different studies. Specifically, according to his rules the correct relationship is G = C ≠ A = T. The equalities G = C and A = T suggested that these bases were paired, this pairing being the basis of the DNA structure that is now known to be correct. Conversely the inequalities G ≠ A etc. meant that DNA could not have a systematic repetition of a fundamental unit, as required by the tetranucleotide hypothesis. Thus there was no reason why the sequence could not store information.
In later years some authorities considered the tetranucleotide hypothesis to have been harmful to the development of molecular biology. Bentley Glass, for example, called it a "scientific catastrophe". More recently, Hargittai saw it in a more positive light, and Frixione and Ruiz-Zamarripa wrote as follows:
[Levene's] work was to culminate in Levene and Tipson’s 1935 report showing accurately for the first time the actual molecular structure of DNA, as well as a nearly correct depiction of the RNA structure. This achievement merits the distinction of this paper as a Classic in molecular biology literature.
References
Biochemistry
Molecular biology
DNA
History of genetics
Nucleic acids
Protein structure
Obsolete scientific theories
Obsolete theories in chemistry | Tetranucleotide hypothesis | [
"Chemistry",
"Biology"
] | 430 | [
"Biomolecules by chemical classification",
"Biochemistry",
"Structural biology",
"Molecular biology",
"nan",
"Protein structure",
"Nucleic acids"
] |
67,537,580 | https://en.wikipedia.org/wiki/Fire%20Safety%20Act%202021 | The Fire Safety Act 2021 (c. 24) is an act of the Parliament of the United Kingdom which arose out of the 2017 Grenfell Tower fire and relates to fire safety in buildings in England and Wales with two or more domestic residences, making changes to the Regulatory Reform (Fire Safety) Order 2005 (the "Fire Safety Order"). It was sponsored by the Home Office.
The bill received royal assent on 29 April 2021.
The Fire Protection Association welcomed the legislation, noting that it "makes good a long-standing issue about who is responsible for the fire safety of communal doors, external walls and anything attached, such as balconies", but also regretting that provision was not made to exclude a requirement for leaseholders to pay for remedial works to remove dangerous cladding from their buildings.
Contents
The Act includes the following provisions:
those responsible for the fire safety of a building will be required to share information about its external walls with the local fire and rescue service
building owners or managers will have to inspect flat entrance doors annually and lifts monthly, notifying the local fire and rescue service if there are any faults with the lifts
residents of buildings with two or more flats will need to have access to evacuation and fire safety instructions provided by the building owner or manager
a public register of fire risk assessments will be established
future changes to be made to identify which premises are covered by the Fire Safety Order, without further primary legislation.
References
Fire prevention law
Fire protection
Acts of the Parliament of the United Kingdom concerning England and Wales
Fire and rescue in the United Kingdom
Housing legislation in the United Kingdom
Local government legislation in England and Wales
United Kingdom Acts of Parliament 2021 | Fire Safety Act 2021 | [
"Engineering"
] | 333 | [
"Building engineering",
"Fire protection"
] |
67,541,524 | https://en.wikipedia.org/wiki/Lanthanum%20hafnate | Lanthanum hafnate (La2Hf2O7) or lanthanum hafnium oxide is a mixed oxide of lanthanum and hafnium.
Properties
Lanthanum hafnate is a colorless ceramic material with the La and Hf atoms arranged in a cubic lattice. The arrangement is a disordered fluorite-like structure below , above which it transitions to a pyrochlore phase; an amorphous phase also exists below .
The compound decomposes into its constituent oxides at 18 GPa.
Luminescence
Oxygen vacancies in the base material give luminescence spanning across the visible light spectrum, with a peak near 460 nm. The luminescent properties can be fine-tuned by doping with various rare earth and group 4 metals; for example, nanoparticles exhibit a red photoluminescence or radioluminescence near 612 nm when exposed to ultraviolet or X-ray radiation.
Synthesis
Bulk ceramics can obtained by combusting the elements in powder form, and then pressing and sintering the powder at 180 MPa and for 6 hours:
4 La + 4 Hf + 7 O2 → 2 La2Hf2O7.
It may also be made by precipitating hafnium and lanthanum hydroxides from solution and then calcinating in air at for 3 hours:
2 La(OH)3 + 2 Hf(OH)4 → La2Hf2O7 + 7 H2O.
References
Lanthanum compounds
Hafnium compounds
Oxides
Phosphors and scintillators
Ceramic materials | Lanthanum hafnate | [
"Chemistry",
"Engineering"
] | 298 | [
"Luminescence",
"Oxides",
"Salts",
"Phosphors and scintillators",
"Ceramic materials",
"Ceramic engineering"
] |
67,541,582 | https://en.wikipedia.org/wiki/Transfer%20constant | Transfer constants are low-frequency gains (or in general ratios of the output to input variables) evaluated under different combinations of shorting and opening of reactive elements in the circuit (i.e., capacitors and inductors). They are used in general time- and transfer constant (TTC) analysis to determine the numerator terms and the zeros in the transfer function. The transfer constants are calculated under similar zero- and infinite-value conditions of reactive elements used in the Cochran-Grabel (CG) method to calculate time constants, but calculating the low-frequency transfer functions from a defined input source to the output terminal, instead of the resistance seen by the reactive elements.
Transfer constants are denoted H^(ij…k), where the superscripts i, j, …, k are the indexes of the elements that are infinite-valued (short-circuited capacitors and open-circuited inductors) in the calculation of that transfer constant, while the remaining elements are zero-valued.
The zeroth-order transfer constant H^0 denotes the ratio of the output to the input when all elements are zero-valued (hence the superscript 0). H^0 often corresponds to the dc gain of the system.
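As an editorial illustration (not from the cited literature): for a circuit with a single reactive element, the transfer function reduces to H(s) = (H^0 + τ·H^1·s)/(1 + τ·s), where τ is the zero-value time constant of that element. The Python sketch below applies this to an assumed RC low-pass; the circuit choice and component values are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the source): reconstructing a
# first-order transfer function from transfer constants and a zero-value
# time constant, for an assumed RC low-pass (series R, shunt C, output on C).
import math

R, C = 1e3, 1e-6          # 1 kOhm, 1 uF (assumed component values)

H0 = 1.0                  # transfer constant with C zero-valued (open circuit)
H1 = 0.0                  # transfer constant with C infinite-valued (short circuit)
tau = R * C               # zero-value time constant seen by C

a0, a1, b1 = H0, tau * H1, tau   # first-order TTC coefficients

def H(s):
    """H(s) = (a0 + a1*s) / (1 + b1*s)."""
    return (a0 + a1 * s) / (1 + b1 * s)

f_pole = 1 / (2 * math.pi * tau)              # ~159 Hz for these values
print(abs(H(2j * math.pi * 10)), f_pole)      # near-unity gain well below the pole
print(abs(H(2j * math.pi * 1e5)))             # strong roll-off well above it
```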
References
Electronics
Electronic circuits | Transfer constant | [
"Engineering"
] | 243 | [
"Electronic engineering",
"Electronic circuits"
] |
73,317,084 | https://en.wikipedia.org/wiki/HD%20187474 | HD 187474, also known as HR 7552 and V3961 Sagittarii, is a star about 315 light years from the Earth, in the constellation Sagittarius. It is a 5th magnitude star, so it will be faintly visible to the naked eye of an observer far from city lights. It is a variable star, whose brightness varies slightly from magnitude 5.28 to 5.34. HD 187474 is classified as an Alpha2 Canum Venaticorum variable star, but it has a rotation period of 2345 days - more than an order of magnitude longer than is typical for that class. HD 187474 is an Ap star.
In 1958, Horace Babcock announced that HD 187474 has a magnetic field, the strength of which he estimated to be 1867 gauss. He found the star to be remarkable, because it was the only A-type star he had found that seemed to have a magnetic field which did not vary in time. However this perceived consistency turned out to be the result of Babcock's observations covering only a small portion of the star's unexpectedly long variability period.
HD 187474 is a single-lined spectroscopic binary. This was discovered by Sylvia Burd at Palomar Observatory, and the result was communicated to S. Leeman by Babcock. Leeman published the finding in 1964, along with orbital elements derived by Burd. The initial estimate of the orbital period was 700 days.
The variability of HD 187474 was apparently discovered by Babcock sometime prior to 1976, the year when it was given the variable star designation V3961 Sagittarii, but the result was never published by him.
Several groups have tried to determine the strength and geometry of HD 187474's magnetic field. In 1987 Pierre Didelon derived a surface field strength of kilogauss, from observations of Zeeman splitting of spectral lines. In the year 2000, John Landstreet and Gautier Mathys found that the variation of the measured magnetic field as the star rotated was far from sinusoidal. They obtained an acceptable fit to the data with a model of the field which contained colinear dipole, quadrupole and octupole terms. Two years later, Stefano Bagnulo et al. modeled the field as a dipole inclined 80° with respect to the rotation axis plus a nonlinear quadrupole term. The next year, V. R. Khalack et al. modeled the field as a set of virtual magnetic charges, with the constraint that the total magnetic charge must be zero. In 2005, Yu. V. Glagolevskij modeled the field as a dipole displaced from the star's center, and inclined relative to the rotation axis by 24°.
References
Sagittarius (constellation)
Sagittarii, V3961
Alpha2 Canum Venaticorum variables
187474
097749
7552
Ap stars
A-type main-sequence stars | HD 187474 | [
"Astronomy"
] | 619 | [
"Sagittarius (constellation)",
"Constellations"
] |
73,319,222 | https://en.wikipedia.org/wiki/List%20of%20endemic%20plants%20in%20the%20Mariana%20Islands | Micronesia is a biodiversity hotspot with an exceptionally high richness of endemic plant species, 10 times higher than that of Hawaii. The Mariana Islands form an archipelago in the northwest of the Micronesian region.
In 2012, Craig M. Costion and David H. Lorence compiled a list of Micronesian endemic plants, and assessed that the Mariana Islands had 22 endemic plant species (16 species in the southern Mariana Islands, of which 11 were isolated to Guam, and 5 species in the northern Mariana Islands). They concluded that there was an approximately 3% rate of endemism in the Mariana Islands (endemic species per km2), which is comparable to the rates in Hawaii (4%) and Tonga (2%) but lower than the 14% rate of endemism among all Micronesian islands. However, the number of known Marianas endemics has greatly expanded since then with new discoveries, taxonomic revisions, and improvements in digitized databases.
Plants endemic to the Mariana Islands
The following list includes plants that have an endemic range only within the Mariana Islands. "Mariana Islands" is defined in accordance with the World Geographical Scheme for Recording Plant Distributions (WGSRPD), level 3 code "MRN," and includes the following geopolitical territories:
Guam
Commonwealth of the Northern Marianas Islands (CNMI)
Only species, subspecies, varieties, and forms that are recognized as "accepted" by the Plants of the World Online database are included in this list; synonyms are not included. Some synonyms are listed next to the accepted name if they are still in common use in recent botanical literature. Subspecies are abbreviated "subs.," varieties "var.," and forms "f." after the species name.
The following list of Marianas endemic plants was compiled based on a list generated by Plants of the World Online in May 2023. This list may not be complete and may not reflect recent changes in taxonomy. An updated list of accepted species for a geographic area can be generated on the Plants of the World Online.
Plants are listed in order by approximate size of their native range, beginning with those with the most restricted native ranges.
Plants endemic to the Pacific islands, including the Mariana Islands
The following list includes plants that are native to the Marianas but have a broader native range in Micronesia or the Pacific islands. Plants with native ranges into the Asian continent are not included.
Only species or subspecies that are recognized as "accepted" by the Plants of the World Online database are included in this list (synonyms are not included). Some synonyms are listed with the accepted name if they are still in common use in recent botanical literature. This list is not complete and may not reflect recent changes in taxonomy. An updated and complete list can be generated on the Plants of the World Online.
Plants are listed roughly in order by size of their native range, beginning with plants with the most restricted native ranges.
Plants formerly considered or currently proposed to be endemic to the Mariana Islands
The following list includes select species or variants that have previously been considered endemic to the Mariana Islands, and may still be listed as such in certain publications, but are not considered to be valid species according to the Plants of the World Online database. Also included here are species that have more recently been proposed to be distinct species or subspecies, but have not yet been accepted.
See also
List of threatened, endangered and extinct species in the Mariana Islands
References
Mariana Islands | List of endemic plants in the Mariana Islands | [
"Biology"
] | 697 | [
"Lists of biota",
"Lists of plants",
"Plants"
] |
66,073,161 | https://en.wikipedia.org/wiki/Deuterium%E2%80%93tritium%20fusion | Deuterium–tritium fusion (DTF) is a type of nuclear fusion in which one deuterium (²H) nucleus (deuteron) fuses with one tritium (³H) nucleus (triton), giving one helium-4 nucleus, one free neutron, and 17.6 MeV of total energy carried by both the neutron and the helium nucleus. It is the best known fusion reaction for fusion power and thermonuclear weapons.
Tritium, one of the reactants for DTF, is radioactive. In fusion reactors, a 'breeding blanket' made of lithium is placed on the walls of the reactor, as lithium, when exposed to energetic neutrons, will produce tritium.
Concept
In DTF, one deuteron fuses with one triton, yielding one helium-4 nucleus, a free neutron, and 17.6 MeV, which derives from a mass difference of about 0.02 u. The amount of energy obtained is described by the mass–energy equivalence E = mc². 80% of the energy (14.1 MeV) becomes kinetic energy of the neutron, which travels at about 1/6 the speed of light.
The mass difference between ²H + ³H and neutron + ⁴He is described by the semi-empirical mass formula, which relates mass defects to binding energy in a nucleus.
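As a numerical check of the figures above, the 17.6 MeV release follows from applying E = mc² to the mass difference. The following minimal Python sketch uses standard tabulated atomic masses, which are not given in the text.

```python
# Minimal sketch: D-T fusion energy release from the mass defect via E = m*c^2.
# Masses in unified atomic mass units (standard tabulated values, an assumption here).
m_D  = 2.014102   # deuterium
m_T  = 3.016049   # tritium
m_He = 4.002602   # helium-4
m_n  = 1.008665   # free neutron

u_to_MeV = 931.494                 # energy equivalent of 1 u

dm = (m_D + m_T) - (m_He + m_n)    # mass defect, roughly 0.019 u
Q  = dm * u_to_MeV                 # total energy release, ~17.6 MeV

E_neutron = 0.8 * Q                # ~14.1 MeV carried by the neutron
print(round(dm, 4), round(Q, 1), round(E_neutron, 1))
```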
Discovery
Evidence of DTF was first detected at the University of Michigan in 1938 by Arthur J. Ruhlig. His experiment detected the signature of neutrons with energy greater than 15 MeV in secondary reactions of ³H created in ²H(d,p)³H reactions of a 0.5 MeV incident deuteron beam on a heavy (deuterated) phosphoric acid target, ²H₃PO₄. This discovery was largely unrecognized until recently.
Reactant sourcing
About 1 in every 6700 hydrogen atoms in seawater is deuterium, making it easy to acquire.
Tritium, however, is a radioisotope and cannot be sourced naturally. This can be circumvented by exposing lithium to energetic neutrons, which produces tritons. Also, DTF itself emits a free neutron, which can be used to bombard lithium. A 'breeding blanket', made of lithium, is often placed along the walls of fusion reactors so that free neutrons created by DTF react with it to produce more ³H. This process is called tritium breeding.
Fusion reactors
DTF is planned to be used in ITER and many other proposed fusion reactors. It has many advantages over other types of fusion, as it has a relatively low minimum temperature, on the order of 10⁸ kelvin.
Spin polarization
Spin-polarized D-T fuel can increase tritium burn efficiency (TBE) by an order of magnitude or more without compromising output. TBE increases nonlinearly with decreasing tritium fraction, while power density increases roughly linearly with D-T cross section. In a 481 MW ARC-like tokamak with unpolarized 53:47 D-T fuel, the minimum tritium inventory was 0.69 kg. Spin-polarizing the fuel with a 63:37 D-T mix reduces the required tritium to 0.03 kg. With advancements in helium divertor pumping efficiency, TBE values of approximately 10%–40% could be achieved using low-tritium-fraction spin-polarized fuel with minimal power loss. This lowers tritium startup inventory requirements.
See also
Commonwealth Fusion Systems
Deuterium fusion
Fusion power#Deuterium, tritium
References
Nuclear fusion reactions
Hydrogen technologies
Tritium fusion
Tritium | Deuterium–tritium fusion | [
"Chemistry"
] | 730 | [
"Nuclear fusion",
"Nuclear fusion reactions"
] |
66,073,823 | https://en.wikipedia.org/wiki/Gas-diffusion%20electrocrystallization | Gas-diffusion electrocrystallization (GDEx) is an electrochemical process consisting of the reactive precipitation of metal ions in solution (or dispersion) with intermediaries produced by the electrochemical reduction of gases (such as oxygen) at gas diffusion electrodes. It can serve for the recovery of metals or metalloids as solid precipitates or for the synthesis of libraries of nanoparticles.
History
The gas-diffusion electrocrystallization process was invented in 2014 by Xochitl Dominguez Benetton at the Flemish Institute for Technological Research, in Belgium. The patent for the process granted in Europe was filed in 2015 and its expiration is anticipated in 2036.
Process
Gas-diffusion electrocrystallization is a process electrochemically driven at porous gas-diffusion electrodes, in which a triple phase boundary is established between a liquid solution, an oxidizing gas, and an electrically conducting electrode. The liquid solution containing dissolved metal ions (e.g., CuCl2, ZnCl2) flows through an electrochemical cell equipped with a gas diffusion electrode, making contact with its electrically conducting part (typically a porous layer). The oxidizing gas (e.g., pure O2, O2 in air, CO2, etc.) percolates through a hydrophobic layer on the gas diffusion electrode, which acts as a cathode. After the gas diffuses to the electrically conducting layer acting as an electrocatalyst (e.g., hydrophilic activated carbon), the gas is electrochemically reduced. For instance, by imposing specific cathodic polarization conditions (e.g., −0.145 VSHE), O2 is reduced to H2O2 in a two-electron (2 e–) transfer process and to H2O in a four-electron (4 e–) transfer process. OH– ions are also produced in the process. As this happens, abrupt local pH and local electrolyte redox potential changes arise within the cathode porosity. As the hydroxyl ions spread to the bulk electrolyte, systematic pH increases become consistently manifest in the electrolyte bulk. In due course, low amounts of H2O2 are generated. In steady state, a reaction front is fully developed throughout the hydrodynamic boundary layer. This creates local saturation conditions at the electrochemical interface, where metal ions precipitate in metastable or stable phases depending on the operational variables. When oxygen is the oxidizing gas, the mechanism for gas-diffusion electrocrystallization has been explained as an oxidation-assisted alkaline precipitation using gas-diffusion electrodes.
Honors
In 2020, the gas-diffusion electrocrystallization process was presented as a great EU-funded innovation by the Innovation Radar of the European Commission, for its application on the secondary recovery of platinum group metals.
References
Electrochemistry | Gas-diffusion electrocrystallization | [
"Chemistry"
] | 608 | [
"Electrochemistry"
] |
61,067,687 | https://en.wikipedia.org/wiki/Buckmaster%20equation | In mathematics, the Buckmaster equation is a second-order nonlinear partial differential equation, named after John D. Buckmaster, who derived the equation in 1977. The equation models the surface of a thin sheet of viscous liquid. The equation was derived earlier by S. H. Smith and by P Smith, but these earlier derivations focused on the steady version of the equation.
The Buckmaster equation is
u_t = (u⁴)_xx + λ (u³)_x,
where λ is a known parameter and the subscripts denote partial derivatives with respect to time t and position x.
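As an editorial illustration, the following explicit finite-difference sketch in Python integrates the equation given above on a periodic grid. The grid size, the value of λ, the time step and the initial profile are all illustrative assumptions, and an explicit scheme of this kind needs a small time step for stability.

```python
# Minimal sketch: explicit finite-difference stepping of the Buckmaster equation
#   u_t = (u^4)_xx + lam * (u^3)_x
# on a periodic grid. All numerical values are illustrative assumptions.
import numpy as np

N, L, lam = 200, 1.0, 0.5
dx = L / N
dt = 1e-6                               # explicit scheme requires a small time step
x = np.linspace(0.0, L, N, endpoint=False)
u = 1.0 + 0.1 * np.sin(2 * np.pi * x)   # assumed initial film-thickness profile

for _ in range(1000):
    u4, u3 = u**4, u**3
    lap = (np.roll(u4, -1) - 2 * u4 + np.roll(u4, 1)) / dx**2   # (u^4)_xx
    adv = (np.roll(u3, -1) - np.roll(u3, 1)) / (2 * dx)         # (u^3)_x
    u = u + dt * (lap + lam * adv)

print(u.min(), u.max())                 # the profile smooths out over time
```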
References
Differential equations
Fluid dynamics | Buckmaster equation | [
"Chemistry",
"Mathematics",
"Engineering"
] | 94 | [
"Applied mathematics",
"Chemical engineering",
"Mathematical objects",
"Differential equations",
"Equations",
"Applied mathematics stubs",
"Piping",
"Fluid dynamics"
] |
56,245,450 | https://en.wikipedia.org/wiki/Council%20architect | A council architect or municipal architect (properly titled county architect, borough architect, city architect or district architect) is an architect employed by a local authority. The name of the position varies depending on the type of local authority and is similar to that of county surveyor or chief engineer used by some authorities. Council architects are employed in the United Kingdom but also used in Malta and Ireland.
History
The role was once widespread with many counties, cities and other local authorities employing their own architect to design public works. Council architects acted as designer, client and regulator for their authority, and having significant buying power, they were able to influence suppliers to accommodate their requirements. They worked closely with the council planning department, with whom they were often co-located. In 1953, the London County Council (LCC) employed more than 1,500 people within its architects department.
The LCC architects were key innovators, with the guaranteed salary and relative anonymity allowing them to develop experimental designs without risk to income or the stigma of failure. The LCC architects department also provided research funding, including for the Survey of London, and had in-house testing and development teams. The smaller scale firms in private practice at the time could not provide such luxuries.
Current role
The trend in recent decades has been for councils to close their architects departments. As of 2015, there were 237 council architects in England, 159 in Scotland and 24 in Wales. The biggest employers are Hampshire (44), Glasgow (18), the Highland Council (13) and Lancashire (11). Despite their predecessors having one of the largest and most active architects departments in the country, no London borough now employs more than five council architects.
Once closed, a local authority is highly unlikely to revive an architects department and will instead rely on outsourcing to private firms. One exception is the London Borough of Croydon, which re-established a council architect position in 2015. Hampshire County Architects remains the largest council architects department, and is recognized as a leader in its field, winning several awards for its school designs since the 1980s.
References
Architecture occupations
Government occupations | Council architect | [
"Engineering"
] | 425 | [
"Architecture occupations",
"Architecture"
] |
56,248,002 | https://en.wikipedia.org/wiki/Integrated%20Electronics%20Piezo-Electric | Integrated Electronics Piezo-Electric (IEPE) characterises a technical standard for piezoelectric sensors which contain built-in impedance conversion electronics. IEPE sensors are used to measure acceleration, force or pressure. Measurement microphones also apply the IEPE standard.
Other proprietary names for the same principle are ICP, CCLD, IsoTron or DeltaTron.
The electronics of the IEPE sensor (typically implemented as a FET circuit) converts the high-impedance signal of the piezoelectric material into a voltage signal with a low impedance of typically 100 Ω. A low impedance signal is advantageous because it can be transmitted across long cable lengths without a loss of signal quality. In addition, special low noise cables, which are otherwise required for use with piezoelectric sensors, are no longer necessary.
The sensor circuit is supplied with constant current. A distinguishing feature of the IEPE principle is that the power supply and the sensor signal are transmitted via one shielded wire.
Most IEPE sensors work at a constant current between 2 and 20 mA. A common value is 4 mA. The higher the constant current the longer the possible cable length. Cables of several hundred meters length can be used without a loss of signal quality. Supplying the IEPE sensor with constant current, results in a positive bias voltage, typically between 8 and 12 volts, at the output. The actual measuring signal of the sensor is added to this bias voltage. The supply or compliance voltage of the constant current source should be 24 to 30 V which is about two times the bias voltage. This ensures maximum amplitudes in positive and negative direction.
A typical IEPE sensor supply with 4 mA constant current and 25 V compliance voltage has a power consumption of 100 mW. This can be a drawback in battery powered systems. For such applications low-power IEPE sensors exist which can be operated at only 0.1 mA constant current from a 12 V supply. This may save up to 90 % power.
Many measuring instruments designed for piezoelectric sensors or measurement microphones have an IEPE constant current source integrated at the input.
In measuring instruments with IEPE input the bias voltage is often used for sensor detection.
If the signal lies close to the constant current supply voltage, there is no sensor present or the cable path has been interrupted. A signal close to the saturation voltage, indicates a short circuit in the sensor or cable. In between these two limits a functional sensor has been detected.
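The bias-voltage check described above can be expressed as a simple classification. The following minimal Python sketch illustrates it; the exact threshold margins are illustrative assumptions, since only the qualitative criteria are given in the text.

```python
# Minimal sketch: IEPE sensor detection from the measured DC bias voltage.
# The threshold margins below are illustrative assumptions, not values from the source.
def check_iepe_input(bias_v, compliance_v=25.0, saturation_v=0.5, margin_v=1.0):
    """Classify the state of an IEPE input from its DC bias voltage."""
    if bias_v >= compliance_v - margin_v:
        return "open circuit: no sensor connected or cable interrupted"
    if bias_v <= saturation_v + margin_v:
        return "short circuit in the sensor or cable"
    return "sensor detected (bias typically 8-12 V)"

print(check_iepe_input(24.8))   # near the supply voltage -> open circuit
print(check_iepe_input(0.2))    # near the saturation voltage -> short circuit
print(check_iepe_input(10.5))   # in between -> functional sensor
```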
The bias voltage is cut off by a coupling capacitor at the instrument input and only the AC signal is processed further.
Piezoelectric sensors without IEPE electronics, i.e. with charge output, remain reserved for applications where the lowest frequencies, high operating temperatures, an extremely large dynamic range, very energy-efficient operation or an extremely small design are required.
References
External links
IEPE principle, Metra
Sensors
Accelerometers | Integrated Electronics Piezo-Electric | [
"Physics",
"Technology",
"Engineering"
] | 587 | [
"Accelerometers",
"Physical quantities",
"Acceleration",
"Measuring instruments",
"Sensors"
] |
56,248,459 | https://en.wikipedia.org/wiki/Bituminite | Bituminite is an autochthonous maceral that is part of the liptinite group in lignite and that occurs in petroleum source rocks, originating from organic matter such as algae which has undergone alteration or degradation through natural processes such as burial. It occurs as a fine-grained groundmass, laminae or elongated structures that appear as veinlets within horizontal sections of lignite and bituminous coals, and also occurs in sedimentary rocks. In sedimentary rocks it is typically found surrounding alginite and parallel to bedding planes. Bituminite is not considered to be bitumen because its properties differ from those of most bitumens. It is described as having no definite shape or form when present in bedding and can be identified using different kinds of visible and fluorescent light. There are three types of bituminite: type I, type II and type III, of which type I is the most common. The presence of bituminite in oil shales, other oil source rocks and some coals is an important factor when assessing potential petroleum-source rocks.
Physical properties
The internal structure of bituminite varies from deposit to deposit. It may be homogeneous, streaky, fluidal or finely granular. These properties of internal structure, however, are only visible when particles are irradiated with blue or violet light.
Bituminite is commonly found in the form of irregular, discoidal particles that are typically 100–200 μm in diameter. When observed under transmitted light with oil immersion, the color of bituminite is orange to reddish brown. Under reflected light, bituminite is dark brown to dark grey and sometimes black in color. Bituminite has a density of approximately 1.2–1.3 g/cm3, which was determined by gradient centrifuging. Bituminite has a very low polishing hardness. It usually smears during the polishing process because it is unconsolidated and very soft.
Occurrence
Bituminite is found in oxic to anoxic lacustrine and marine environments commonly associated with other maceral minerals such as alginite and liptodetrinite. The organic material undergoes diagenesis, forming an amorphous matrix. Framboidal pyrite is a common feature associated with bituminite. This is caused by bacterial reworking of the digestible organic matter. Due to the reworking of organic matter the particles/grains of bituminite often appear diffuse and blurred. Though grains of bituminite are often too blurred to distinguish, its optical properties widely vary, and are therefore used to determine the bituminite type.
It is very common to have all three types of bituminite in organic-rich sedimentary rocks; however, the modal percentage of each bituminite type varies. Typically, type I bituminite is much larger than the other types and shows negative alteration when irradiated with blue light. Type II is identified by its yellowish or reddish-brown fluorescence and its occasional oil expulsions when irradiated. Type III, the rarest kind of bituminite, appears dark grey under reflected white light but lacks fluorescence. It is also distinguished by its fine granular structure and its association with faunal relics.
History
Bituminite was a general term given to rocks which are rich in bitumen. The term was also used informally to describe irregularly shaped macerals until 1975 when the ICCP clearly defined the term.
Applications and uses
Bituminite is the main source for low-temperature coal tar, which is used in industry, medicine and construction. The value of bituminite increases with grade. At high grade, i.e. high maturity, bituminite has a high hydrogen-to-carbon content. A bituminite with a high hydrogen/carbon ratio indicates a good hydrocarbon source. However, low grades of bituminite vary depending on type, meaning that the hydrogen/carbon ratios are variable.
Bituminite can also be used as potential indicators in the petroleum industry. Research has shown that type I bituminite in modes upwards of 10% of the total organic matter, is indicative of a potential petroleum-source rock.
References
Sedimentology
Organic minerals
Petrology | Bituminite | [
"Chemistry"
] | 895 | [
"Organic compounds",
"Organic minerals"
] |
56,251,626 | https://en.wikipedia.org/wiki/Ministry%20of%20Energy%20and%20Petroleum%20%28Sudan%29 | The Ministry of Energy and Petroleum (MOP), previously known as the Ministry of Oil and Gas, was the governmental body in the Sudan responsible for developing and implementing the government policy for exploiting the oil and gas resources in Sudan, as of 2017.
References
External links
Ministry of Energy and Petroleum
Oil and Gas
Economy of Sudan
Energy ministries | Ministry of Energy and Petroleum (Sudan) | [
"Engineering"
] | 73 | [
"Energy organizations",
"Energy ministries"
] |
56,255,646 | https://en.wikipedia.org/wiki/Transcriptome%20instability | Transcriptome instability is a genome-wide, pre-mRNA splicing-related characteristic of certain cancers. In general, pre-mRNA splicing is dysregulated in a high proportion of cancerous cells. For certain types of cancer, such as colorectal and prostate cancer, the number of splicing errors has been shown to vary greatly between individual cancers, a phenomenon referred to as transcriptome instability. Transcriptome instability correlates significantly with reduced expression levels of splicing factor genes. Mutation of DNMT3A contributes to the development of hematologic malignancies, and DNMT3A-mutated cell lines exhibit transcriptome instability as compared to their isogenic wildtype counterparts.
References
Gene expression
Cancer | Transcriptome instability | [
"Chemistry",
"Biology"
] | 155 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
56,256,054 | https://en.wikipedia.org/wiki/Testbed%20aircraft | A testbed aircraft is an aeroplane, helicopter or other kind of aircraft intended for flight research or testing the aircraft concepts or on-board equipment. These could be specially designed or modified from serial production aircraft.
Use of testbed aircraft
For example, in the development of new aircraft engines, the engines are fitted to a testbed aircraft for flight testing before certification. New instrument wiring and equipment, a fuel system and piping, structural alterations to the wings, and other adjustments are needed for this adaptation.
The Folland Fo.108 (nicknamed the "Folland Frightful") was a dedicated engine testbed aircraft in service from 1940. The aircraft had a mid-fuselage cabin for test instrumentation and observers. Twelve were built and provided to British aero-engine companies. A large number of aircraft-testbeds have been produced and tested since 1941 in the USSR and Russia by the Gromov Flight Research Institute.
AlliedSignal, Honeywell Aerospace, Pratt & Whitney, and other aerospace companies used Boeing jetliners as flying testbed aircraft.
See also
Index of aviation articles
List of experimental aircraft
List of aerospace flight test centres
Development mule
Iron bird (aviation)
References
Aerospace engineering
Experimental aircraft
Aviation industry
Civil aviation
Military aviation
Aircraft operations
History of aviation | Testbed aircraft | [
"Engineering"
] | 256 | [
"Aerospace engineering"
] |
63,216,947 | https://en.wikipedia.org/wiki/X-ray%20emission%20spectroscopy | X-ray emission spectroscopy (XES) is a form of X-ray spectroscopy in which a core electron is excited by an incident x-ray photon and then this excited state decays by emitting an x-ray photon to fill the core hole. The energy of the emitted photon is the energy difference between the involved electronic levels. The analysis of the energy dependence of the emitted photons is the aim of the X-ray emission spectroscopy.
There are several types of XES, which can be categorized as non-resonant XES (XES), including Kβ measurements, valence-to-core (VtC/V2C) measurements, and Kα measurements, or as resonant XES (RXES or RIXS), including XXAS+XES 2D-measurements, high-resolution XAS, 2p3d RIXS, and Mössbauer-XES-combined measurements. In addition, soft X-ray emission spectroscopy (SXES) is used in determining the electronic structure of materials.
History
The first XES experiments were published by Lindh and Lundquist in 1924.
In these early studies, the authors utilized the electron beam of an X-ray tube to excite core electrons and obtain the Kβ-line spectra of sulfur and other elements. Three years later, Coster and Druyvesteyn performed the first experiments using photon excitation. Their work demonstrated that the electron beams produce artifacts, thus motivating the use of X-ray photons for creating the core hole. Subsequent experiments were carried out with commercial X-ray spectrometers, as well as with high-resolution spectrometers.
While these early studies provided fundamental insights into the electronic configuration of small molecules, XES only came into broader use with the availability of high intensity X-ray beams at synchrotron radiation facilities, which enabled the measurement of (chemically) dilute samples.
In addition to the experimental advances, it is also the progress in quantum chemical computations, which makes XES an intriguing tool for the study of the electronic structure of chemical compounds.
Henry Moseley, a British physicist, was the first to discover a relation between the K-lines and the atomic numbers of the probed elements. This was the birth of modern X-ray spectroscopy. Later these lines could be used in elemental analysis to determine the contents of a sample.
William Lawrence Bragg later found a relation between the energy of a photon and its diffraction within a crystal. The formula he established says that an X-ray photon of a certain energy is diffracted at a precisely defined angle within a crystal.
Equipment
Analyzers
A special kind of monochromator is needed to diffract the radiation produced by X-ray sources. This is because X-rays have a refractive index n ≈ 1. Bragg came up with the equation that describes X-ray/neutron diffraction when those particles pass through a crystal lattice (see X-ray diffraction).
For this purpose "perfect crystals" have been produced in many shapes, depending on the geometry and energy range of the instrument. Although they are called perfect, there are miscuts within the crystal structure which lead to offsets of the Rowland plane.
These offsets can be corrected by turning the crystal while looking at a specific energy (for example, the Kα line of copper at 8027.83 eV).
When the intensity of the signal is maximized, the photons diffracted by the crystal hit the detector in the Rowland plane. There will now be a slight offset in the horizontal plane of the instrument which can be corrected by increasing or decreasing the detector angle.
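As an illustration of the energy–angle relation used when tuning such an analyzer crystal, the following Python sketch evaluates Bragg's law n·λ = 2·d·sin(θ) for the copper line mentioned above. The choice of a Si(220) analyzer reflection and its d-spacing are illustrative assumptions, not taken from the text.

```python
# Minimal sketch: Bragg angle for a given photon energy, n*lambda = 2*d*sin(theta).
# The Si(220) reflection (d ~ 1.920 Angstrom) is an illustrative assumption.
import math

HC_EV_ANGSTROM = 12398.4      # h*c in eV*Angstrom

def bragg_angle_deg(energy_ev, d_angstrom, order=1):
    wavelength = HC_EV_ANGSTROM / energy_ev          # photon wavelength in Angstrom
    s = order * wavelength / (2.0 * d_angstrom)
    if s > 1.0:
        raise ValueError("reflection not reachable at this photon energy")
    return math.degrees(math.asin(s))

# Copper K-alpha line quoted in the text, on an assumed Si(220) analyzer crystal:
print(bragg_angle_deg(8027.83, 1.9201))              # roughly 24 degrees
```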
In the Von Hamos geometry, a cylindrically bent crystal disperses the radiation along its flat surface's plane and focuses it along its axis of curvature onto a line like feature.
The spatially distributed signal is recorded with a position sensitive detector at the crystal's focusing axis providing the overall spectrum. Alternative wavelength dispersive concepts have been proposed and implemented based on Johansson geometry having the source positioned inside the Rowland circle, whereas an instrument based on Johann geometry has its source placed on the Rowland circle.
X-ray sources
X-ray sources are produced for many different purposes, yet not every X-ray source can be used for spectroscopy. Commonly used sources for medical applications generally generate very "noisy" source spectra, because the cathode material does not need to be very pure for those measurements. These lines must be eliminated as much as possible to obtain good resolution in all energy ranges used.
For this purpose, normal X-ray tubes with highly pure tungsten, molybdenum, palladium, etc. are made. Except for the copper they are embedded in, they produce a relatively "white" spectrum. Another way of producing X-rays is with particle accelerators, where the X-rays arise from vectorial changes of the electrons' direction in magnetic fields. Every time a moving charge changes direction it has to give off radiation of corresponding energy. In X-ray tubes this directional change is the electron hitting the metal target (anode); in synchrotrons it is the outer magnetic field accelerating the electron onto a circular path.
There are many different kind of X-ray tubes and operators have to choose accurately depending on what it is, that should be measured.
Modern spectroscopy and the importance of Kβ lines in the 21st century
Today, XES is less used for elemental analysis; instead, measurements of Kβ-line spectra are gaining importance, as the relation between these lines and the electronic structure of the ionized atom becomes better understood.
If a 1s core electron gets excited into the continuum (out of the atom's energy levels),
electrons of higher-energy orbitals need to lose energy and "fall" into the 1s hole that was created, in order to fulfill Hund's rule. (Fig. 2)
Those electron transfers happen with distinct probabilities. (See Siegbahn notation)
Scientists noted that after ionization of a chemically bonded 3d transition metal atom, the Kβ-line intensities and energies shift with the oxidation state of the metal and with the species of ligand(s). This gave way to a new method in structural analysis:
by high-resolution scans of these lines, the exact energy level and structural configuration of a chemical compound can be determined.
This is because there are only two major electron transfer mechanisms, if we ignore every transfer not affecting valence electrons.
If we include the fact that chemical compounds of 3d-transition metals can either be high-spin or low-spin we get 2 mechanisms for each spin configuration.
These two spin configurations determine the general shape of the Kβ1,3 and Kβ′ mainlines, as seen in figures one and two, while the structural configuration of electrons within the compound causes different intensities, broadening and tailing of these lines.
Although this is quite a lot of information, this data has to be combined with absorption measurements of the so-called "pre-edge" region.
Those measurements are called XANES (X-ray absorption near edge structure).
In synchrotron facilities those measurements can be done at the same time, yet the experimental setup is quite complex and needs exact and finely tuned crystal monochromators to diffract the tangential beam coming from the electron storage ring. The method is called HERFD, which stands for High Energy Resolution Fluorescence Detection. The collection method is unique in that, after a collection of all wavelengths coming from "the source", called I0,
the beam is then shone onto the sample holder with a detector behind it for the XANES part of the measurement. The sample itself starts to emit X-rays and after those photons have been monochromatized they are collected, too.
Most setups use at least three crystal monochromators. I0 is used in absorption measurements as part of the Beer–Lambert law, I1 = I0·e^(−μ(E)d), i.e. the extinction μ(E)d = ln(I0/I1),
where I1 is the intensity of transmitted photons. The received values for the extinction are wavelength-specific, which therefore creates a spectrum of the absorption.
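A minimal Python sketch of this bookkeeping; the intensity values used are illustrative assumptions, not measured data from the text.

```python
# Minimal sketch: energy-point-wise extinction from the Beer-Lambert law,
#   I1 = I0 * exp(-mu*d)  =>  mu*d = ln(I0 / I1).
# The intensity values below are illustrative assumptions.
import math

I0 = [1.00e6, 1.02e6, 0.99e6]   # incident intensity at three energy points
I1 = [7.4e5, 3.1e5, 2.9e5]      # transmitted intensity at the same points

extinction = [math.log(i0 / i1) for i0, i1 in zip(I0, I1)]  # mu(E)*d per point
print(extinction)
```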
The spectrum produced from the combined data shows a clear advantage in that background radiation is almost completely eliminated while still giving an extremely well-resolved view of features on a given absorption edge. (Fig. 4)
In the field of development of new catalysts for more efficient energy storage, production and usage in the form of hydrogen fuel cells and new battery materials, research on the Kβ lines is essential nowadays.
The exact shape of specific oxidation states of metals is mostly known, yet newly produced chemical compounds with the potential
of becoming a reasonable catalyst for electrolysis, for example, are measured every day.
Several countries encourage many different facilities all over the globe in this special field of science in the hope for clean, responsible and cheap energy.
Soft x-ray emission spectroscopy
Soft X-ray emission spectroscopy or (SXES) is an experimental technique for determining the electronic structure of materials.
Uses
X-ray emission spectroscopy (XES) provides a means of probing the partial occupied density of electronic states of a material. XES is element-specific and site-specific, making it a powerful tool for determining detailed electronic properties of materials.
Forms
Emission spectroscopy can take the form of either resonant inelastic X-ray emission spectroscopy (RIXS) or non-resonant X-ray emission spectroscopy (NXES). Both spectroscopies involve the photonic promotion of a core level electron, and the measurement of the fluorescence that occurs as the electron relaxes into a lower-energy state. The differences between resonant and non-resonant excitation arise from the state of the atom before fluorescence occurs.
In resonant excitation, the core electron is promoted to a bound state in the conduction band. Non-resonant excitation occurs when the incoming radiation promotes a core electron to the continuum. When a core hole is created in this way, it is possible for it to be refilled through one of several different decay paths. Because the core hole is refilled from the sample's high-energy free states, the decay and emission processes must be treated as separate dipole transitions. This is in contrast with RIXS, where the events are coupled, and must be treated as a single scattering process.
Properties
Soft X-rays have different optical properties than visible light and therefore experiments must take place in ultra high vacuum, where the photon beam is manipulated using special mirrors and diffraction gratings.
Gratings diffract each energy or wavelength present in the incoming radiation in a different direction. Grating monochromators allow the user to select the specific photon energy they wish to use to excite the sample. Diffraction gratings are also used in the spectrometer to analyze the photon energy of the radiation emitted by the sample.
See also
X-ray absorption spectroscopy
References
External links
Soft X-ray Emission Spectroscopy - Description at beamteam.usask.ca
Emission spectroscopy
X-ray spectroscopy
Synchrotron-related techniques
1924 introductions | X-ray emission spectroscopy | [
"Physics",
"Chemistry"
] | 2,266 | [
"Emission spectroscopy",
"X-ray spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
63,219,935 | https://en.wikipedia.org/wiki/Levitation%20based%20inertial%20sensing | Levitation based inertial sensing is a new and rapidly growing technique for measuring linear acceleration, rotation and orientation of a body. Inertial sensors based on this technique, such as accelerometers and gyroscopes, enable ultra-sensitive inertial sensing. For example, the world's best accelerometer used in the LISA Pathfinder in-flight experiment is based on a levitation system which reaches a sensitivity of and noise of .
History
The pioneering work related to the microparticle levitation was performed by Artur Ashkin in 1970. He demonstrated optical trapping of dielectric microspheres for the first time, forming an optical levitation system, by using a focused laser beam in air and liquid. This new technology was later named "optical tweezer" and applied in biochemistry and biophysics. Later, significant scientific progress on optically levitated systems was made, for example the cooling of the center of mass motion of a micro- or nanoparticle in the millikelvin regime. Very recently a research group published a paper showing motional quantum ground state cooling of a levitated nanoparticle. In addition, levitation based on electrostatic and magnetic approaches have also been proposed and realized.
Levitation systems have shown high force sensitivities in the range. For example, an optically levitated dielectric particle has been shown to exhibit force sensitivities beyond ~ . Thus, levitation systems show promise for ultra-sensitive force sensing, such as detection of short-range interactions. By levitating micro- or mesoparticles with a relatively large mass, this system can be employed as a high-performance inertial sensor, demonstrating nano-g sensitivity.
Method
One possible working principle behind a levitation based inertial sensing system is the following. By levitating a micro-object in vacuum and after a cool-down process, the center of mass motion of the micro-object can be controlled and coupled to the kinematic states of the system. Once the system's kinematic state changes (in other words, the system undergoes linear or rotational acceleration), the center of mass motion of the levitated micro-object is affected and yields a signal. This signal is related to the changes of the system's kinematic states and can be read out.
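As a rough illustration of this transduction principle, the trapped micro-object can be modeled as a harmonic oscillator whose equilibrium position shifts under a constant acceleration of the platform. The sketch below makes that simplifying assumption, and the trap frequency and acceleration values are placeholders, not parameters of any published sensor:

import numpy as np

# Assumed trap parameters for a levitated microparticle (illustrative only).
f0 = 100.0               # trap resonance frequency in Hz
omega0 = 2 * np.pi * f0  # angular resonance frequency in rad/s

# A constant platform acceleration a shifts the equilibrium position by
# dx = a / omega0**2 (from m * omega0**2 * dx = m * a; the mass cancels).
a = 9.81e-9              # acceleration of 1 nano-g, in m/s^2
dx = a / omega0**2       # resulting static displacement in metres
print(f"displacement for 1 ng of acceleration: {dx:.2e} m")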
Regarding levitation techniques, there are generally three different approaches: optical, electrostatic and magnetic.
Applications
The sub-attonewton force sensitivity of levitation based systems could show promise for applications in many different fields, such as Casimir force sensing, gravitational wave detection and inertial sensing. For inertial sensing, levitation based systems could be used to make high-performance accelerometers and gyroscopes employed in inertial measurement units (IMUs) and inertial navigation systems (INSs). These are used in such applications as drone navigation in tunnels and mines, guidance of unmanned aerial vehicles (UAVs), or stabilization of micro-satellites. Levitation based inertial sensors that have sufficient sensitivity and low noise () for measurements in the seismic band ( to ) can be used in the field of seismometry, in which current inertial sensors cannot meet the requirements.
There are already some commercial products on the market. One example is the iOSG Superconducting gravity sensor, which is based on magnetic levitation and shows a noise of .
Advantages
The future trends in inertial sensing require that inertial sensors have lower cost, higher performance, and smaller size. Levitation based inertial sensing systems have already shown high performance. For example, the accelerometer used in the LISA Pathfinder in-flight experiment has a sensitivity of and noise of .
References
Levitation
Sensors | Levitation based inertial sensing | [
"Physics",
"Technology",
"Engineering"
] | 784 | [
"Physical phenomena",
"Measuring instruments",
"Levitation",
"Motion (physics)",
"Sensors"
] |
63,221,711 | https://en.wikipedia.org/wiki/Cyclooctadiene%20iridium%20methoxide%20dimer | Cyclooctadiene iridium methoxide dimer is an organoiridium compound with the formula Ir2(OCH3)2(C8H12)2, where C8H12 is the diene 1,5-cyclooctadiene. It is a yellow solid that is soluble in organic solvents. The complex is used as a precursor to other iridium complexes, some of which are used in homogeneous catalysis.
The compound is prepared by treating cyclooctadiene iridium chloride dimer with sodium methoxide. In terms of its molecular structure, the iridium centers are square planar as is typical for a d8 complex. The Ir2O2 core is folded.
References
Homogeneous catalysis
Cyclooctadiene complexes
Organoiridium compounds | Cyclooctadiene iridium methoxide dimer | [
"Chemistry"
] | 168 | [
"Catalysis",
"Homogeneous catalysis"
] |
63,223,403 | https://en.wikipedia.org/wiki/SnRNA-seq | snRNA-seq, also known as single nucleus RNA sequencing, single nuclei RNA sequencing or sNuc-seq, is an RNA sequencing method for profiling gene expression in cells which are difficult to isolate, such as those from tissues that are archived or which are hard to dissociate. It is an alternative to single cell RNA seq (scRNA-seq), as it analyzes nuclei instead of intact cells.
snRNA-seq minimizes the occurrence of spurious gene expression, as the localization of fully mature ribosomes to the cytoplasm means that any mRNAs of transcription factors that are expressed after the dissociation process cannot be translated, and thus their downstream targets cannot be transcribed. Additionally, snRNA-seq technology enables the discovery of new cell types which would otherwise be difficult to isolate.
Methods and technology
The basic snRNA-seq method requires 4 main steps: tissue processing, nuclei isolation, cell sorting, and sequencing. In order to isolate and sequence RNA inside the nucleus, snRNA-seq involves using a quick and mild nuclear dissociation protocol. This protocol allows for minimization of technical issues that can affect studies, especially those concerned with immediate early gene (IEG) behavior.
The resulting dissociated cells are suspended and the suspension gently lysed, allowing the cell nuclei to be separated from their cytoplasmic lysates using centrifugation. These separated nuclei/cells are sorted using fluorescence-activated cell sorting (FACS) into individual wells, and amplified using microfluidics machinery. Sequencing occurs as normal and the data can be analyzed as appropriate for its use.
This basic snRNA-seq methodology is capable of profiling RNA from tissues that are preserved or cannot be dissociated, but it does not have high throughput capability due to its reliance on nuclei sorting by FACS. This technique cannot be scaled easily to profiling large numbers of nuclei or samples. Massively parallel scRNA-seq methods exist and can be readily scaled but their requirement of a single cell suspension as input is not ideal and eliminates some of the flexibility that is available with the snRNA-seq method in regards to the types of tissues and cells that can be examined. In response, the DroNc-Seq method of massively parallel snRNA-seq with droplet technology was developed by researchers from the Broad Institute of MIT and Harvard. In this technique, nuclei that have been isolated from their fixed or frozen tissue are encapsulated in droplets with uniquely barcoded beads that are coated with oligonucleotides containing a 30-base terminal deoxythymine (dT) stretch. This coating captures the polyadenylated mRNA content produced when the nuclei are lysed inside the droplets. The captured mRNA is reverse transcribed into cDNA after emulsion breakage. Sequencing this cDNA produces the transcriptomes of all the single nuclei being looked at and these can be used for many purposes, including identification of unique cell types.
The sequencing tools and equipment used in scRNA-seq can be used with modifications for snRNA-seq experiments. Illumina outlines a workflow for the basic snRNA-seq method which can be performed with existing equipment. DroNc-Seq can be accomplished with microfluidic platforms which are meant for the Drop-seq scRNA-seq method. However, Dolomite Bio has adapted one of their instruments, the automated Nadia platform for scRNA-seq, to be used natively for DroNc-Seq as well. This instrument could simplify the generation of single nuclei sequencing libraries, as it is being used for its intended purpose.
In regard to data analysis after sequencing, a computational pipeline known as dropSeqPipe was developed by the McCarroll Lab at Harvard. Although the pipeline was originally developed for use with Drop-seq scRNA-seq data, it can be used with DroNc-Seq data as it also utilizes droplet technology.
Difference between snRNA-seq and scRNA-seq
snRNA-seq uses isolated nuclei instead of the entire cells to profile gene expression. That is to say, scRNA-seq measures both cytoplasmic and nuclear transcripts, while snRNA-seq mainly measures nuclear transcripts (though some transcripts might be attached to the rough endoplasmic reticulum and partially preserved in nuclear preps). This allows for snRNA-seq to process only the nucleus and not the entire cell. For this reason, compared to scRNA-seq, snRNA-Seq is more appropriate to profile gene expression in cells that are difficult to isolate (e.g. adipocytes, neurons), as well as preserved tissues.
Additionally, the nuclei required for snRNA-seq can be obtained quickly and easily from fresh, lightly fixed, or frozen tissues, whereas isolating single cells for single-cell RNA-seq (scRNA-seq) involves extended incubations and processing. This gives researchers the ability to obtain transcriptomes which are not as perturbed during isolation.
Application
In neuroscience, neurons have an interconnected nature which makes it extremely hard to isolate intact single neurons. As snRNA-seq has emerged as an alternative method of assessing a cell's transcriptome through the isolation of single nuclei, it has been possible to conduct single-neuron studies from postmortem human brain tissue. snRNA-seq has also enabled the first single neuron analysis of immediate early gene (IEG) expression associated with memory formation in the mouse hippocampus. In 2019, Dmitry et al. used the method on cortical tissue from ASD patients to identify ASD-associated transcriptomic changes in specific cell types, which was the first cell-type-specific transcriptome assessment in brains affected by ASD.
Outside of neuroscience, snRNA-seq has also been used in other research areas. In 2019, Haojia et al. compared both scRNA-seq and snRNA-seq in a genomic study of the kidney. They found snRNA-seq accomplishes an equivalent gene detection rate to that of scRNA-seq in adult kidney with several significant advantages (including compatibility with frozen samples, reduced dissociation bias and so on). In 2019, Joshi et al. used snRNA-seq in a human lung biology study in which they found snRNA-seq allowed unbiased identification of cell types from frozen healthy and fibrotic lung tissues. Adult mammalian heart tissue can be extremely hard to dissociate without damaging cells, which does not allow for easy sequencing of the tissue. However, in 2020, German scientists presented the first report of sequencing an adult mammalian heart by using snRNA-seq and were able to provide practical cell-type distributions within the heart.
Pros and cons of snRNA-seq
Pros
In scRNA-seq, the dissociation process may impair some sensitive cells, and cells in certain tissues (e.g. collagenous matrix) can be extremely hard to dissociate. Such issues can be avoided in snRNA-seq because only single nuclei need to be isolated instead of entire single cells.
Unlike scRNA-seq, snRNA-seq has quick and mild nuclei dissociation protocols that forestall technical issues arising from heating and protease digestion.
snRNA-seq works very well for preserved/frozen tissues.
Cons
Sequencing RNA in the cytoplasm (gene isoforms, RNA in mitochondria and chloroplast etc.) is not possible, as snRNA-seq mostly measures nuclear transcripts.
References
RNA sequencing
Molecular biology techniques | SnRNA-seq | [
"Chemistry",
"Biology"
] | 1,647 | [
"Genetics techniques",
"RNA sequencing",
"Molecular biology techniques",
"Molecular biology"
] |
54,720,086 | https://en.wikipedia.org/wiki/Vortex%20flowmeter | A vortex flowmeter is a type of flowmeter used for measuring fluid flow rates in an enclosed conduit.
Composition of vortex flowmeter
A vortex flowmeter has the following components: A flow sensor operable to sense pressure variations due to vortex-shedding of a fluid in a passage and to convert the pressure variations to a flow sensor signal, in the form of an electrical signal; and a signal processor operable to receive the flow sensor signal and to generate an output signal corresponding to the pressure variations due to vortex-shedding of the fluid in the passage.
Working principle
When the medium flows past the bluff body at a sufficient speed, alternating vortices are shed from the two sides of the bluff body, forming what is called a von Kármán vortex street. Since the two sides of the vortex generator shed vortices alternately, pressure pulsations are produced on both sides of the generator, which subject the detector to an alternating stress. Under this alternating stress, the piezoelectric element encapsulated in the detection probe body generates an alternating charge signal with the same frequency as the vortex shedding. The frequency of these pulses is directly proportional to the flow rate. After being amplified by the pre-amplifier, the signal is sent to the intelligent flow totalizer to be processed.
Within a certain range of Reynolds numbers (2×10^4 to 7×10^6), the relationship among the vortex shedding frequency, the fluid velocity, and the width of the face of the vortex generator can be expressed by the following equation:
f = St·v/d, where f is the shedding frequency of the von Kármán vortices, St is the Strouhal number, v is the flow velocity, and d is the width of the triangular cylinder facing the flow.
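For illustration, the same relationship can be inverted to estimate the flow velocity from a measured shedding frequency. The sketch below uses assumed placeholder values for the Strouhal number, bluff-body width and frequency rather than data from any specific meter:

# Karman vortex shedding relation: f = St * v / d, so v = f * d / St.
St = 0.2        # assumed Strouhal number (roughly constant over the valid Reynolds range)
d = 0.02        # width of the bluff body facing the flow, in metres (placeholder)
f = 150.0       # measured vortex shedding frequency in Hz (placeholder)

v = f * d / St  # estimated flow velocity in m/s
print(f"estimated flow velocity: {v:.1f} m/s")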
Industrial applications
The vortex flowmeter is a broad-spectrum flow meter which can be used for metering, measurement and control of most steam, gas and liquid flow for a very unique medium versatility, high stability and high reliability with no moving parts, simple structure and low failure rate. The vortex flowmeter is relatively economical because of its simple flow measurement system and ease of maintenance. It is widely used in heavy industrial applications, power facilities, and energy industries, particularly in steam processes.
See also
References
Flow meters | Vortex flowmeter | [
"Chemistry",
"Technology",
"Engineering"
] | 433 | [
"Measuring instruments",
"Flow meters",
"Fluid dynamics"
] |
54,723,517 | https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein%20condensation%20of%20polaritons | Bose–Einstein condensation of polaritons is a growing field in semiconductor optics research, which exhibits spontaneous coherence similar to a laser, but through a different mechanism. A continuous transition from polariton condensation to lasing can be made similar to that of the crossover from a Bose–Einstein condensate to a BCS state in the context of Fermi gases. Polariton condensation is sometimes called “lasing without inversion”.
Overview
Polaritons are bosonic quasiparticles which can be thought of as dressed photons. In an optical cavity, photons have an effective mass, and when the optical resonance in a cavity is brought near in energy to an electronic resonance (typically an exciton) in a medium inside the cavity, the photons become strongly interacting, and repel each other. They therefore act like atoms which can approach equilibrium due to their collisions with each other, and can undergo Bose-Einstein condensation (BEC) at high density or low temperature. The Bose condensate of polaritons then emits coherent light like a laser. Because the mechanism for the onset of coherence is the interactions between the polaritons, and not the optical gain that comes from inversion, the threshold density can be quite low.
History
The theory of polariton BEC was first proposed by Atac Imamoglu and coauthors including Yoshihisa Yamamoto. These authors claimed observation of this effect in a subsequent paper, but this was eventually shown to be standard lasing. In later work in collaboration with the research group of Jacqueline Bloch, the structure was redesigned to include several quantum wells inside the cavity to prevent saturation of the exciton resonance, and in 2002 evidence for nonequilibrium condensation was reported which included photon-photon correlations consistent with spontaneous coherence. Later experimental groups have used essentially the same design. In 2006, the group of Benoit Deveaud and coauthors reported the first widely accepted claim of nonequilibrium Bose–Einstein condensation of polaritons based on measurement of the momentum distribution of the polaritons. Although the system was not in equilibrium, a clear peak in the ground state of the system was seen, a canonical prediction of BEC. Both of these experiments created a polariton gas in an uncontrolled free expansion. In 2007, the experimental group of David Snoke demonstrated nonequilibrium Bose–Einstein condensation of polaritons in a trap, similar to the way atoms are confined in traps for Bose–Einstein condensation experiments. The observation of polariton condensation in a trap was significant because the polaritons were displaced from the laser excitation spot, so that the effect could not be attributed to a simple nonlinear effect of the laser light. Jacqueline Bloch and coworkers observed polariton condensation in 2009, after which many other experimentalists reproduced the effect (for reviews see the bibliography). Evidence for polariton superfluidity was reported by Alberto Amo and coworkers, based on the suppressed scattering of the polaritons during their motion. This effect has been seen more recently at room temperature, which is the first evidence of room temperature superfluidity, albeit in a highly nonequilibrium system.
Equilibrium polariton condensation
The first clear demonstration of Bose–Einstein condensation of polaritons in equilibrium was reported by a collaboration of David Snoke, Keith Nelson, and coworkers, using high quality structures fabricated by Loren Pfeiffer and Ken West at Princeton. Prior to this result, polariton condensates were always observed out of equilibrium. All of the above studies used optical pumping to create the condensate. Electrical injection, which enables a polariton laser which could be a practical device, was shown in 2013 by two groups.
Nonequilibrium condensation
Polariton condensates are an example, and the most well studied example, of Bose-Einstein condensation of quasiparticles. Because most of the experimental work on polariton condensates used structures with very short polariton lifetime, a large body of theory has addressed the properties of nonequilibrium condensation and superfluidity. In particular, Jonathan Keeling and Iacopo Carusotto and C. Ciuti have shown that although a condensate with dissipation is not a “true” superfluid, it still has a critical velocity for onset of superfluid effects.
See also
Bose-Einstein condensation of quasiparticles
References
Further reading
Universal Themes of Bose-Einstein Condensation, published by Cambridge University Press (2017).
John Robert Schrieffer, Theory of Superconductivity (1964).
Bose–Einstein Condensation, published by Cambridge University Press (1996).
Bose–Einstein condensates
Quasiparticles | Bose–Einstein condensation of polaritons | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,026 | [
"Bose–Einstein condensates",
"Matter",
"Phases of matter",
"Condensed matter physics",
"Quasiparticles",
"Subatomic particles"
] |
77,657,810 | https://en.wikipedia.org/wiki/%282R%2C3R%29-Hydroxybupropion | (2R,3R)-Hydroxybupropion, or simply (R,R)-hydroxybupropion, is the major metabolite of the antidepressant, smoking cessation, and appetite suppressant medication bupropion. It is the (2R,3R)-enantiomer of hydroxybupropion, which in humans occurs as a mixture of (2R,3R)-hydroxybupropion and (2S,3S)-hydroxybupropion (radafaxine). Hydroxybupropion is formed from bupropion mainly by the cytochrome P450 enzyme CYP2B6. Levels of (2R,3R)-hydroxybupropion are dramatically higher than those of bupropion and its other metabolites during bupropion therapy.
Exposure with bupropion
Bupropion is substantially converted into metabolites during first-pass metabolism with oral administration and levels of its metabolites are much higher than those of bupropion itself. Exposure to (2R,3R)-hydroxybupropion is 29-fold higher than to (R)-bupropion and exposure to (2S,3S)-hydroxybupropion is 3.7-fold higher than to (S)-bupropion. Other metabolites that circulate at higher concentrations than those of bupropion include threohydrobupropion and to a lesser extent erythrohydrobupropion.
The metabolism of bupropion and its metabolites is stereoselective. During bupropion therapy, exposure to (R)-bupropion is 2- to 6-fold higher than to (S)-bupropion and exposure to (2R,3R)-hydroxybupropion is 20- to 65-fold higher than to (2S,3S)-hydroxybupropion. Hence, (2R,3R)-hydroxybupropion is a major metabolite of bupropion and (2S,3S)-hydroxybupropion is a minor metabolite.
In contrast to humans, only low levels of hydroxybupropion or (2R,3R)-hydroxybupropion occur with bupropion in rats. This highlights substantial species differences in the pharmacokinetics of bupropion between animals and humans. These differences in turn may account for differences in the pharmacodynamic effects of bupropion between species.
Pharmacology
Pharmacodynamics
(2R,3R)-Hydroxybupropion is much less pharmacologically active as a monoamine reuptake inhibitor than bupropion or (2S,3S)-hydroxybupropion. Conversely, its potency as a negative allosteric modulator of nicotinic acetylcholine receptors is variable but overall more similar to that of bupropion and (2S,3S)-hydroxybupropion.
Additional studies have characterized the affinities (Ki) of bupropion and the hydroxybupropion enantiomers at the monoamine transporters as well as affinities and potencies (IC50) using non-human proteins. In contrast to bupropion and (2S,3S)-hydroxybupropion, racemic hydroxybupropion, using rat proteins, has been found to act as a selective norepinephrine reuptake inhibitor (IC50 = 1,700nM) with no apparent inhibition of dopamine reuptake (IC50 > 10,000nM). Normally, activity with racemic mixtures is expected to be closer to that of the active enantiomer than to the inactive enantiomer. The reasons for the discrepancy in the case of racemic hydroxybupropion are unclear. In any case, it was suggested that (2R,3R)-hydroxybupropion might be acting as a negative allosteric modulator of the binding of (2S,3S)-hydroxybupropion to the dopamine transporter.
Bupropion and (2S,3S)-hydroxybupropion are substantially more potent than (2R,3R)-hydroxybupropion in various rodent behavioral tests, such as the forced swim test (an assay of antidepressant-like activity). However, sufficient doses of bupropion, (2S,3S)-hydroxybupropion, and (2R,3R)-hydroxybupropion all produce full methamphetamine-like effects in monkeys (1 mg/kg, 3 mg/kg, and 10 mg/kg, respectively). Bupropion produces nicotine-like effects in rodents and (2S,3S)-hydroxybupropion partially substitutes for nicotine. In contrast, (2R,3R)-hydroxybupropion does not substitute for nicotine and dose-dependently antagonizes the effects of nicotine by up to 50%.
(2R,3R)-Hydroxybupropion is a strong CYP2D6 inhibitor similarly to bupropion. (2R,3R)-Hydroxybupropion alone has been estimated to account for approximately 65% of the total in vivo CYP2D6 inhibition of bupropion, whereas threohydrobupropion accounted for 21% and erythrohydrobupropion accounted for 9% (with 5% remaining or unaccounted for).
Pharmacokinetics
Hydroxybupropion, including both (2R,3R)-hydroxybupropion and (2S,3S)-hydroxybupropion, is mainly formed from bupropion by the cytochrome P450 enzyme CYP2B6. However, CYP2C19, CYP3A4, CYP1A2, and CYP2E1 appear to play a minor role.
CYP2B6 is highly polymorphic and is subject to high interindividual variability of approximately 100-fold. This may result in large interindividual differences in the metabolism of bupropion into hydroxybupropion and the effects of bupropion. However, clearance of bupropion is not affected in different CYP2B6 metabolizer phenotypes. This suggests that other enzymes compensate in the metabolism of bupropion in the context of reduced CYP2B6 function. The moderate CYP2B6 inducer rifampicin increased the clearance of (2R,3R)-hydroxybupropion and decreased its exposure and half-life by approximately 50%.
The elimination half-life of (2R,3R)-hydroxybupropion is 19 to 26 hours.
Chemistry
Hydroxybupropion has two chiral centers. As a result, there are four possible enantiomers of the compound. However, only (2R,3R)-hydroxybupropion and (2S,3S)-hydroxybupropion are formed in humans. (2R,3S)- and (2S,3R)-Hydroxybupropion do not occur in humans presumably due to steric hindrance precluding their formation.
References
Antidepressants
Cathinones
3-Chlorophenyl compounds
CYP2D6 inhibitors
Enantiopure drugs
Human drug metabolites
Morpholines
Nicotinic antagonists
Norepinephrine–dopamine reuptake inhibitors
Smoking cessation
Stimulants
Tertiary alcohols | (2R,3R)-Hydroxybupropion | [
"Chemistry"
] | 1,732 | [
"Chemicals in medicine",
"Stereochemistry",
"Human drug metabolites",
"Enantiopure drugs"
] |
77,670,060 | https://en.wikipedia.org/wiki/Enterosoma%20genetic%20code | The Enterosoma genetic code (tentative code number 34) translates AGG to methionine, as determined by the codon assignment software Codetta; it was further shown that this recoding is associated with a special tRNA with the appropriate anticodon and tRNA identity elements. The code is found in a small clade of species within the Enterosoma genus, according to the GTDB taxonomy system release 220. Codetta called the Enterosoma code for the following genome assemblies: GCA_002431755.1, GCA_002439645.1, GCA_002436825.1, GCA_002451385.1, GCA_002297105.1, GCA_002297045.1, GCA_002404995.1, and GCA_900549915.1.
See also
Genetic codes: list of alternative codons
List of genetic codes
References
Genetics | Enterosoma genetic code | [
"Biology"
] | 210 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
76,250,051 | https://en.wikipedia.org/wiki/Applied%20Mathematical%20Modelling | Applied Mathematical Modelling is a scientific journal published by Elsevier, focusing on applied mathematics with an emphasis on mathematical modeling in engineering, environmental processes, manufacturing, and industrial systems. The journal was established as a quarterly journal in 1976 by IPC Science and Technology Press with Christopher J. Rawlins as managing editor. The journal is currently published by Elsevier on a monthly basis, and is edited by Johann Sienz (Swansea University).
Abstracting and indexing
The journal is indexed and abstracted in the following bibliographic databases:
References
Elsevier academic journals
Applied mathematics journals
Monthly journals | Applied Mathematical Modelling | [
"Mathematics"
] | 121 | [
"Applied mathematics",
"Applied mathematics journals"
] |
70,463,339 | https://en.wikipedia.org/wiki/Genome%20India%20Project | Genome India Project (GIP) is a research initiative led by the Bangalore-based Indian Institute of Science's Centre for Brain Research and involves over 20 universities across the country in an effort to gather samples, compile data, conduct research, and create an ‘Indian reference genome' grid.
Background
The initiative is funded by the Department of Biotechnology (DBT) to sequence at least 10,000 Indian genomes in phase 1. The goal of the research is to develop predictive diagnostic indicators for several high-priority diseases and other uncommon and genetic disorders. In phase 2, the project will collect genetic samples from patients in three broad categories: cardiovascular diseases, mental illness, and cancer.
Participating institutions
The list includes:
All India Institute of Medical Sciences, Jodhpur
Centre for Cellular and Molecular Biology
Centre for DNA Fingerprinting and Diagnostics
Institute of Genomics and Integrative Biology
Gujarat Biotechnology Research Centre, Gandhinagar
Indian Institute of Information Technology, Allahabad
Indian Institute of Science Education and Research, Pune
Indian Institute of Technology, Madras
Indian Institute of Technology, Delhi
Indian Institute of Technology Jodhpur
Institute of Bioresources And Sustainable Development, Imphal
Institute of Life Sciences, Bhubhaneswar
Mizoram University
National Centre for Biological Sciences
National Institute of Biomedical Genomics
National Institute of Mental Health and Neurosciences
Rajiv Gandhi Centre for Biotechnology
Sher-i-Kashmir Institute of Medical Sciences
References
Human genome projects
Genetics organizations | Genome India Project | [
"Biology"
] | 289 | [
"Human genome projects",
"Genome projects"
] |
70,463,551 | https://en.wikipedia.org/wiki/Subpsoromic%20acid | Subpsoromic acid is a depsidone with the molecular formula C17H12O8 which has been isolated from the lichen Ocellularia praestans.
References
Lichen products
Oxygen heterocycles
Carboxylic acids
Dioxepines
Heterocyclic compounds with 3 rings
Methoxy compounds
Lactones | Subpsoromic acid | [
"Chemistry"
] | 72 | [
"Carboxylic acids",
"Natural products",
"Functional groups",
"Lichen products"
] |
74,706,370 | https://en.wikipedia.org/wiki/Substance-based%20medical%20device | A substance-based medical device is a medical device composed of substances or combinations of substances. Such devices are typically differentiated from medication (drugs) in that they do not have a pharmacological, immunological or metabolic mode of action but achieve their therapeutic effect through primarily physical means.
Examples
Examples of substance based medical devices include products for gastrointestinal relief like medicinal clay or simeticone-based products, as well as unmedicated nasal sprays, certain eye drops, dermal formulations, oral cough treatments, and other products for self-medication that often are available without a prescription.
Regulation
Substance-based medical devices encompass a varied array of products that fall under the purview of Regulation (EU) 2017/745 (MDR). Based on their intended purpose, they are classified according to rule 21 ("Devices composed of substances that are introduced via a body orifice or applied to the skin") of Annex VIII of the MDR.
References
Regulation of medical devices
Medical devices
Health care | Substance-based medical device | [
"Biology"
] | 205 | [
"Medical devices",
"Medical technology"
] |
74,707,919 | https://en.wikipedia.org/wiki/Raspberry%20Shake | Raspberry Shake is a Panama-based company that designs and manufactures personal seismic and infrasonic sensors, utilizing Raspberry Pi hardware.
History
Raspberry Shake was developed in the Chiriquí province under the Western Seismic Observatory of Panama, which creates hardware and software for measuring tectonic phenomena.
While the origins of Raspberry Shake can be traced back to the Western Seismic Observatory of Panama, it evolved into an independent company in 2020, when the trademark was registered.
In the years 2015 and 2016, Raspberry Shake began its initial forays into the development of seismic detection software and hardware with the creation of Raspberry Shake 1D. By the end of 2017, hardware and software improvements were added, resulting in the Raspberry Shake 3D Sensor, which brought the capability to capture waves vertically and horizontally. Through continuous development, the Raspberry Shake 4D sensor was launched in July 2017, featuring integrated accelerometers directly on the board.
In early 2018, the Raspberry Boom sensor focused on infrasonic detection was developed; that same year, technologies were combined with those of the Raspberry Shake 1D sensor to launch the Raspberry Shake & Boom, opening up possibilities for seismic and infrasonic detection in a single device.
Technology
The Raspberry Shake is a device that pairs with the Raspberry Pi to function as a personal seismograph. It incorporates a geophone which converts ground movements into electrical signals. An additional board amplifies and digitizes this signal, which is then processed by the Raspberry Pi.
The Raspberry Shake utilizes software similar to that used by the United States Geological Survey (USGS). As technology, particularly mini-computers like the Raspberry Pi, has evolved, the company introduced additional devices, including the sensor "Raspberry Shake 1D" with different detection capabilities.
References
Technology companies
Seismic magnitude scales
Seismology measurement
Earthquake engineering | Raspberry Shake | [
"Engineering"
] | 394 | [
"Earthquake engineering",
"Civil engineering",
"Structural engineering"
] |
74,710,664 | https://en.wikipedia.org/wiki/Duan%20Huiling | Duan Huiling () is a Chinese mechanical engineer, specializing in interface mechanics, the interactions between fluids and solids, and the surface properties and elasticity of nanoscale structures. She is dean of engineering at Peking University.
Education and career
Duan studied mechanical engineering at Northeast Petroleum University, graduating with a bachelor's degree in 1995 and earning a master's degree in 1998. From 1998 to 2001 she was a lecturer at the university. After returning to graduate study at Peking University, she completed a Ph.D. in solid mechanics in 2005.
She was a postdoctoral researcher at Cardiff University in the United Kingdom, supported by a Royal Society Postdoctoral Fellowship, and at the Institute of Nanotechnology of the Karlsruhe Research Center (FZK-INT, now part of the Karlsruhe Institute of Technology) in Germany, supported by an Alexander von Humboldt Fellowship. In 2007 she returned to China as an associate professor at Peking University, subsequently becoming full professor and, in 2014, Chang Jiang Chair Professor.
At Peking University, she chaired the Department of Mechanics and Engineering Science from 2013 to 2018, and was named dean of the College of Engineering in 2020.
Recognition
Duan was the 2009 recipient of the Sia Nemat-Nasser Early Career Award of the American Society of Mechanical Engineers, which elected her as a Fellow in 2020.
She is also a recipient of the Distinguished Young Scholars Award of the National Natural Science Foundation of China (2012), and was named a National Outstanding Young Female Scientist of China (2014) and National Outstanding Young Scholar of China (2015).
References
External links
Home page
Year of birth missing (living people)
Living people
Chinese mechanical engineers
Chinese women engineers
Nanotechnologists
Northeast Petroleum University alumni
Peking University alumni
Academic staff of Peking University
Fellows of the American Society of Mechanical Engineers | Duan Huiling | [
"Materials_science"
] | 360 | [
"Nanotechnology",
"Nanotechnologists"
] |
74,711,248 | https://en.wikipedia.org/wiki/Lithium%20holmium%20fluoride | Lithium holmium fluoride is a ternary salt with chemical formula LiHoF4. At temperatures below 1.53 K, it is ferromagnetic and described by the Ising model, but the interaction coefficients arise through superexchange. Above that temperature, it becomes paramagnetic. Even at 0 K, it exhibits a quantum phase transition, aligning with an external magnetic field.
References
Further reading
Types of magnets
Fluorides | Lithium holmium fluoride | [
"Chemistry"
] | 90 | [
"Salts",
"Fluorides",
"Inorganic compounds",
"Inorganic compound stubs"
] |
74,714,251 | https://en.wikipedia.org/wiki/X-ray%20diffraction%20computed%20tomography | X-ray diffraction computed tomography is an experimental technique that combines X-ray diffraction with the computed tomography data acquisition approach. X-ray diffraction (XRD) computed tomography (CT) was first introduced in 1987 by Harding et al. using a laboratory diffractometer and a monochromatic X-ray pencil beam. The first implementation of the technique at synchrotron facilities was performed in 1998 by Kleuker et al.
X-ray diffraction computed tomography can be divided into two main categories depending on how the XRD data are treated; specifically, the XRD data can be treated either as powder diffraction or as single crystal diffraction data, and this depends on the sample properties. If the sample contains small and randomly oriented crystals, then it generates smooth powder diffraction "rings" when using a 2D area detector. If the sample contains large crystals, then it generates "spotty" 2D diffraction patterns. The latter can also be performed using a letterbox, cone or parallel X-ray beam and yields 2D or 3D images corresponding to maps of the crystallites or "grains" present in the sample and their properties, such as stress or strain. There exist several variations of this approach including 3DXRD, X-ray diffraction contrast tomography (DCT) and high energy X-ray diffraction microscopy (HEDM).
X-ray diffraction computed tomography, often abbreviated as XRD-CT, typically refers to the technique invented by Harding et al. which assumes that the acquired data are powder diffraction data. For this reason, it has also been mentioned as powder diffraction computed tomography and diffraction scattering computed tomography (DSCT), however they both refer to the same method.
Data acquisition
XRD-CT employs a monochromatic pencil beam scanning approach and captures the diffraction signal in transmission geometry, producing a diffraction projection dataset. In this setup, the sample moves along an axis perpendicular to the beam's direction. It is illuminated with a monochromatic finely collimated or focused "pencil" X-ray beam. A 2D area detector then records the scattered X-rays, optimizing for best counting statistics and speed. Typically, the translational scan's size surpasses the sample's diameter, ensuring its full coverage at all assessed angles. The size of the translation step is commonly aligned with the X-ray beam's horizontal size. In a perfect scenario for any pencil-beam scanning tomographic method, the number of measured angles should match the number of translation steps multiplied by π/2, adhering to the Nyquist sampling theorem. However, this number can often be reduced in practice to be equal to the number of translation steps without substantially compromising the quality of reconstructed images. The usual angular range spans from 0 to π.
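The sampling criterion above can be illustrated with a small scan-planning calculation. This is only a sketch with made-up numbers for the sample diameter and beam size; it is not taken from a real experiment:

import math

sample_diameter = 1.0e-3  # 1 mm sample (placeholder)
beam_size = 20e-6         # 20 micrometre pencil beam, also used as the translation step

n_translations = math.ceil(sample_diameter / beam_size)   # translation steps per line scan
n_angles_ideal = math.ceil(n_translations * math.pi / 2)  # Nyquist-type criterion
n_angles_practical = n_translations                       # often sufficient in practice

print(n_translations, n_angles_ideal, n_angles_practical)  # 50 79 50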
Data reconstruction
In most studies, the predominant data reconstruction approach is the 'reverse analysis' introduced by Bleuet et al. where each sinogram is treated independently yielding a new CT image. Most often the filtered back projection reconstruction algorithm is employed to reconstruct the XRD-CT images. The outcome is an image in which every pixel, or more accurately voxel, equates to a local diffraction pattern. The reconstructed data can also be seen as a stack of 2D square images, where each image corresponds to an X-ray scattering angle.
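A minimal sketch of this 'reverse analysis' is shown below. It uses the generic filtered back projection routine from scikit-image rather than any code published with the original works, and the data array is a random placeholder standing in for azimuthally integrated diffraction patterns:

import numpy as np
from skimage.transform import iradon

# Placeholder XRD-CT dataset with shape (n_translations, n_angles, n_q), where
# n_q is the number of points in each azimuthally integrated diffraction pattern.
n_trans, n_angles, n_q = 50, 50, 200
data = np.random.rand(n_trans, n_angles, n_q)
angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)

# Treat every scattering-angle (q) channel as an independent sinogram and
# reconstruct it with filtered back projection, giving one image per channel.
images = np.stack(
    [iradon(data[:, :, q], theta=angles, filter_name="ramp", circle=True)
     for q in range(n_q)],
    axis=-1,
)
# Each voxel images[y, x, :] is now a local diffraction pattern.
print(images.shape)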
Reconstruction artefacts
XRD-CT makes the following assumptions:
The sample is small and there are no significant parallax artefacts in the acquired diffraction data; when this assumption is not valid, the reconstructed patterns contain a wide range of artefacts, such as inaccurate peak positions and peak shapes, and even artificial peak splitting
The acquired XRD data are powder diffraction-like and do not contain spotty data
The sample is not strongly absorbing the X-rays and there are no significant self-absorption problems in the acquired data
The chemistry of the sample is not changing significantly during the XRD-CT scan
In practice, one or more of these assumptions are not valid and the data suffer from artefacts. There are strategies to remove or significantly suppress all of these artefacts:
Rather than employing the filtered back projection reconstruction algorithm to reconstruct the XRD-CT images, it is possible to use another reconstruction approach, termed "Direct Least Squares Reconstruction" (DLSR), to perform peak fitting and tomographic reconstruction simultaneously; this approach takes into account the geometry of the experimental setup and yields parallax artefact-free reconstructed images. Performing a 0 to 2π XRD-CT scan instead of 0 to π can lead to reconstructed patterns with accurate peak positions but not accurate peak shapes.
Spotty 2D XRD data acquired during the XRD-CT scan lead to streak or line artefacts in the reconstructed XRD-CT data; it is possible to remove or suppress these artefacts by applying filters during the azimuthal integration of the raw 2D diffraction patterns
The data can be corrected for self-absorption artefacts using an X-ray absorption-contrast CT scan of the same sample.
If the solid-state chemistry of the sample is changing during the XRD-CT scan, then other data acquisition approaches can be employed that can improve the temporal resolution of the method, such as the interlaced approach
Data analysis
Analyzing the local diffraction patterns can range from basic single-peak sequential batch fitting to a comprehensive one-step full-profile analysis, known as 'Rietveld-CT' (Wragg et al., 2015). The latter method stands out for its efficiency over the typical sequential method since it shares global parameters across all local models. Examples of these parameters include zero error and instrumental broadening, which enhance the refinement process's stability. To elaborate, each voxel in the reconstructed images is made up of a local model (like multi-phase scale factors, lattice parameters, and crystallite sizes) tailored to match the corresponding local diffraction pattern. This implies that only the overarching parameters are consistent across local models. However, the application of Rietveld-CT has been limited to small images, specifically those of 60 × 60 voxels, with the feasibility for larger images hinging on the computer memory available. Most often, though, full profile analysis of the local diffraction patterns is performed on a pixel-by-pixel or line-by-line basis using conventional XRD data analysis methods, such as LeBail, Pawley and Rietveld. All these methods employ fitting based on the reconstructed diffraction patterns. Another approach, which is also computationally expensive, is the DLSR, which performs the tomographic data reconstruction and peak fitting in a single step. Regardless of the chosen analytical method, the final output comprises images filled with localized physico-chemical information. Each physico-chemical image corresponds to one of the refined parameters present in the local models, which might include maps that correspond to scale factors, lattice parameters, and crystallite sizes.
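As an illustrative sketch of the simplest of these strategies, the snippet below batch-fits a single Gaussian peak to the pattern in every voxel of a synthetic reconstructed volume using scipy. The peak model, data and starting values are invented placeholders, and this is not a substitute for full LeBail, Pawley or Rietveld refinement:

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, centre, width, background):
    return amplitude * np.exp(-0.5 * ((x - centre) / width) ** 2) + background

# Synthetic reconstructed XRD-CT volume: (ny, nx, n_q) with one artificial peak.
ny, nx, n_q = 8, 8, 200
q = np.linspace(1.0, 5.0, n_q)
volume = 0.05 * np.random.rand(ny, nx, n_q) + gaussian(q, 1.0, 3.0, 0.1, 0.0)

scale_map = np.zeros((ny, nx))   # map of peak intensity (akin to a scale factor)
centre_map = np.zeros((ny, nx))  # map of peak position (related to a lattice parameter)
for iy in range(ny):
    for ix in range(nx):
        popt, _ = curve_fit(gaussian, q, volume[iy, ix, :], p0=(1.0, 3.0, 0.1, 0.0))
        scale_map[iy, ix], centre_map[iy, ix] = popt[0], popt[1]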
See also
X-ray diffraction
computed tomography
powder diffraction
3DXRD
synchrotron
References
Laboratory techniques in condensed matter physics
X-ray crystallography
X-ray computed tomography
1987 introductions | X-ray diffraction computed tomography | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,480 | [
"X-ray crystallography",
"Crystallography",
"Condensed matter physics",
"Laboratory techniques in condensed matter physics"
] |
74,718,016 | https://en.wikipedia.org/wiki/Carcinomyces%20polyporinus | Carcinomyces polyporinus is a species of fungus in the class Tremellomycetes. It is a parasite, growing in the hymenia of various poroid fungi, particularly species of Postia. Microscopically, it resembles a species of Tremella, but DNA research indicates that it belongs in a different family, the Carcinomycetaceae. It was first described from Scotland by British mycologist Derek Reid. It has also been recorded in continental Europe and North America.
References
Tremellomycetes
Taxa named by Derek Reid
Fungi described in 1970
Fungus species | Carcinomyces polyporinus | [
"Biology"
] | 121 | [
"Fungi",
"Fungus species"
] |
62,313,886 | https://en.wikipedia.org/wiki/MOWChIP-seq | MOWChIP-seq (Microfluidic Oscillatory Washing–based Chromatin ImmunoPrecipitation followed by sequencing) is a microfluidic technology used in molecular biology for profiling genome-wide histone modifications and other molecular bindings using as few as 30-100 cells per assay. MOWChIP-seq is a special type of ChIP-seq assay designed for low-input and high-throughput assays. The overall process of MOWChIP-seq is similar to that of a conventional ChIP-seq assay except that the chromatin immunoprecipitation (ChIP) and washing steps occur in a small microfluidic chamber. MOWChIP-seq takes advantage of the capability of microfluidics for manipulating micrometer-sized beads. In the process, a packed bed of beads is formed to drastically increase the adsorption efficiency of chromatin fragments. An automated oscillatory washing is then used to remove nonspecific binding and impurities from the bead surface. The initial version of the MOWChIP device contained only one microfluidic chamber. In a more recent demonstration, a semi-automated MOWChIP device for running 8 parallel assays was presented.
Applications
MOWChIP-seq is an enhanced, low-input form of ChIP-seq and thus applies to all molecular biology questions that can be probed using ChIP-seq. This includes analysis of histone modifications, RNA pol II binding, and transcription factor binding. Published MOWChIP-seq results include studies of various histone marks (H3K4me3, H3K27ac, H3K27me3, H3K9me3, H3K36me3, and H3K79me2).
Workflow of MOWChIP-seq
MOWChIP-seq requires a microfluidic system for running the ChIP and washing steps in a semi-automated fashion. The preparation of chromatin fragments from cells or nuclei, and of the sequencing library from ChIP DNA, is largely the same as in conventional ChIP-seq assays.
Data analysis
MOWChIP-seq produces ChIP-seq data with high quality that is comparable to those produced using large quantity of cells. Thus the data analysis is mostly identical to the analytical processes used in common ChIP-seq data analysis.
References
Biomedical engineering
Microfluidics | MOWChIP-seq | [
"Materials_science",
"Engineering",
"Biology"
] | 505 | [
"Biological engineering",
"Microfluidics",
"Microtechnology",
"Biomedical engineering",
"Medical technology"
] |
62,318,997 | https://en.wikipedia.org/wiki/H3K79me2 | H3K79me2 is an epigenetic modification to the DNA packaging protein Histone H3. It is a mark that indicates the di-methylation at the 79th lysine residue of the histone H3 protein. H3K79me2 is detected in the transcribed regions of active genes.
Nomenclature
H3K79me2 indicates dimethylation of lysine 79 on the histone H3 protein subunit: H3 denotes the histone H3 family, K the amino acid lysine, 79 the position of the lysine residue, me the methyl modification, and 2 the number of methyl groups added.
Lysine methylation
Lysine residues can be progressively methylated from mono- to tri-methylation; the di-methylated form is the modification present in H3K79me2.
Histone modifications
The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as Histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal end of these histones contribute to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of the post-translational modifications, such as the one seen in H3K36me3.
Epigenetic implications
The post-translational modification of histone tails by either histone modifying complexes or chromatin remodelling complexes are interpreted by the cell and lead to complex, combinatorial transcriptional output. It is thought that a Histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states which define genomic regions by grouping the interactions of different proteins and/or histone modifications together.
Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterised by different banding. Different developmental stages were profiled in Drosophila as well, an emphasis was placed on histone modification relevance. A look in to the data obtained led to the definition of chromatin states based on histone modifications.
The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell specific gene regulation.
Three forms of H3K79 methylation (H3K79me1; H3K79me2; H3K79me3) are catalyzed by DOT1 in yeast or DOT1L in mammals. H3K79 methylation participates in the DNA damage response and has multiple roles in nucleotide excision repair and sister chromatid recombinational repair.
H3K79 dimethylation has been detected in the transcribed regions of active genes.
Methods
The histone mark H3K79me2 can be detected in a variety of ways:
1. Chromatin Immunoprecipitation Sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region.
2. Micrococcal Nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well positioned nucleosomes. Use of the micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well positioned nucleosomes are seen to have enrichment of sequences.
3. Assay for transposase accessible chromatin sequencing (ATAC-seq) is used to look in to regions that are nucleosome free (open chromatin). It uses hyperactive Tn5 transposon to highlight nucleosome localisation.
See also
Histone code
Histone methylation
Histone methyltransferase
Methyllysine
References
Epigenetics
Post-translational modification | H3K79me2 | [
"Chemistry"
] | 1,010 | [
"Post-translational modification",
"Gene expression",
"Biochemical reactions"
] |
64,572,373 | https://en.wikipedia.org/wiki/Phase-space%20wavefunctions | Phase-space representation of quantum state vectors is a formulation of quantum mechanics elaborating the phase-space formulation with a Hilbert space. It "is obtained within the framework of the relative-state formulation. For this purpose, the Hilbert space of a quantum system is enlarged by introducing an auxiliary quantum system. Relative-position state and relative-momentum state are defined in the extended Hilbert space of the composite quantum system and expressions of basic operators such as canonical position and momentum operators, acting on these states, are obtained." Thus, it is possible to assign a meaning to the wave function in phase space, , as a quasiamplitude, associated to a quasiprobability distribution.
The first wave-function approach of quantum mechanics in phase space was introduced by Torres-Vega and Frederick in 1990 (also see). It is based on a generalised Husimi distribution.
In 2004 Oliveira et al. developed a new wave-function formalism in phase space where the wave-function is associated to the Wigner quasiprobability distribution by means of the Moyal product. An advantage might be that off-diagonal Wigner functions used in superpositions are treated in an intuitive way; also, gauge theories are treated in an operator form.
Phase space operators
Instead of thinking in terms of multiplication of functions using the star product, we can shift to thinking in terms of operators acting on functions in phase space.
Where for the Torres-Vega and Frederick approach the phase space operators are
with
and
And Oliveira's approach the phase space operators are
with
In the general case
and
with , where , , and are constants.
These operators satisfy the uncertainty principle:
Symplectic Hilbert space
To associate the Hilbert space, , with the phase space , we will consider the set of complex functions of integrable square, in , such that
Then we can write , with
where is the dual vector of . This symplectic Hilbert space is denoted by .
An association with the Schrödinger wavefunction can be made by
,
letting , we have
.
Then .
Torres-Vega–Frederick representation
With the operators of position and momentum a Schrödinger picture is developed in phase space
The Torres-Vega–Frederick distribution is
Oliveira representation
Thus, it is now, with aid of the star product possible to construct a Schrödinger picture in phase space for
deriving both side by , we have
therefore, the above equation plays the same role as the Schrödinger equation in usual quantum mechanics.
To show that , we take the 'Schrödinger equation' in phase space and 'star-multiply' by the right for
where is the classical Hamiltonian of the system. And taking the complex conjugate
subtracting both equations we get
which is the time evolution of Wigner function, for this reason is sometimes called quasiamplitude of probability. The -genvalue is given by the time independent equation
.
Star-multiplying for on the right, we obtain
Therefore, the static Wigner distribution function is a -genfunction of the -genvalue equation, a result well known in the usual phase-space formulation of quantum mechanics.
In the case where , worked in the beginning of the section, the Oliveira approach and phase-space formulation are indistinguishable, at least for pure states.
Equivalence of representations
As stated before, the first wave-function formulation of quantum mechanics was developed by Torres-Vega and Frederick; its phase-space operators are given by
and
These operators are obtained by transforming the operators and (developed in the same article) as
and
where .
This representation is sometimes associated with the Husimi distribution and it was shown to coincide with the totality of coherent-state representations for the Heisenberg–Weyl group.
The Wigner quasiamplitude, , and Torres-Vega–Frederick wave-function, , are related by
where and .
See also
Wigner quasiprobability distribution
Husimi Q representation
Quasiprobability distribution
Phase-space formulation
References
Quantum mechanics | Phase-space wavefunctions | [
"Physics"
] | 821 | [
"Theoretical physics",
"Quantum mechanics"
] |
64,574,512 | https://en.wikipedia.org/wiki/Rita%20Casadio | Rita Casadio is an Adjunct Professor of Biochemistry/Biophysics in the Department of Pharmacy and Biotechnology at the University of Bologna.
Career
She earned her degree in physics at the University of Bologna, Italy. In 1987, she began her academic career as an assistant professor of biophysics at the University of Bologna, later becoming a full professor of biochemistry/biophysics in 2001. Her research primarily focuses on membrane and protein biophysics, as well as computer modeling of biological processes, including protein folding, stability and interactions.
She has authored more than 500 scientific papers and held key roles in various editorial and organizational positions within the field of bioinformatics.
Her work in machine learning has been used for protein structure prediction and methods from her group have been highly ranked in international competitions, such as the Critical Assessment of protein Structure Prediction (CASP) and the Critical Assessment of Function Annotation (CAFA).
Awards and honours
She was elected a Fellow of the International Society for Computational Biology (ISCB) in 2020 for outstanding contributions to the fields of computational biology and bioinformatics.
See also
ELIXIR
References
External links
Living people
Italian bioinformaticians
Academic staff of the University of Bologna
Year of birth missing (living people)
21st-century women scientists
Women biochemists
Computational biologists
Women computational biologists | Rita Casadio | [
"Chemistry"
] | 266 | [
"Biochemists",
"Women biochemists"
] |
64,577,573 | https://en.wikipedia.org/wiki/MACHO%20catalyst | In homogeneous catalysis, MACHO catalysts are metal complexes containing MACHO ligands, which are of the type HN(CH2CH2PR2)2, where R is typically phenyl or isopropyl. Complexes with ruthenium(II) and iridium(III) have received much attention for their ability to hydrogenate polar bonds such as those in esters and even carbon dioxide. The catalysts appear to operate via intermediates where the amine proton and the hydride ligand both interact with the substrate. The Ru-MACHO catalyst has been commercialized for the synthesis of 1,2-propanediol from bio-derived methyl lactate.
See also
1,5-Diaza-3,7-diphosphacyclooctanes, phosphine amine ligands used in hydrogen evolution
Noyori asymmetric hydrogenation, another family of amine-phosphine catalysts
Shvo catalyst, a related bifunctional catalyst for hydrogen transfer
References
Homogeneous catalysis
Chelating agents
Diphosphines
Hydrido complexes
Chloro complexes | MACHO catalyst | [
"Chemistry"
] | 232 | [
"Catalysis",
"Chelating agents",
"Homogeneous catalysis",
"Process chemicals"
] |
64,583,610 | https://en.wikipedia.org/wiki/Reversible%20solid%20oxide%20cell | A reversible solid oxide cell (rSOC) is a solid-state electrochemical device that is operated alternatively as a solid oxide fuel cell (SOFC) and a solid oxide electrolysis cell (SOEC). Similarly to SOFCs, rSOCs are made of a dense electrolyte sandwiched between two porous electrodes. Their operating temperature ranges from 600°C to 900°C, hence they benefit from enhanced kinetics of the reactions and increased efficiency with respect to low-temperature electrochemical technologies.
When utilized as a fuel cell, the reversible solid oxide cell is capable of oxidizing one or more gaseous fuels to produce electricity and heat. When used as an electrolysis cell, the same device can consume electricity and heat to convert the products of the oxidation reaction back into valuable fuels. These gaseous fuels can be pressurized and stored for later use. For this reason, rSOCs have recently been receiving increased attention due to their potential as an energy storage solution on the seasonal scale.
Technology description
Cell structure and working principle
Reversible solid oxide cells (rSOCs), as solid oxide fuel cells, are made of four main components: the electrolyte, the fuel and oxygen electrodes, and the interconnects.
The electrodes are porous layers that favor the diffusion of reactants inside their structure and catalyze the electrochemical reactions. In single-mode technologies like SOFCs and SOECs, the electrodes serve a single purpose, hence they are referred to by their specific names: the anode is where the oxidation reaction occurs, while the cathode is where the reduction reaction takes place. In reversible solid oxide cells, on the other hand, both modalities can occur alternately in the same device. For this reason, the generic names of fuel electrode and oxygen electrode are preferred instead. At the fuel electrode, the reactions involving fuel oxidation (SOFC modality) or reduction of the products to regenerate the fuel (SOEC modality) take place. At the oxygen electrode, oxygen reduction (SOFC modality) or oxidation of oxygen ions to form oxygen gas (SOEC modality) takes place.
State-of-the-art materials for rSOCs are those used for SOFCs. The most common fuel electrodes are made of a mixture of nickel, which serves as the electronic conductor, and yttria-stabilized zirconia (YSZ), a ceramic material characterized by high conductivity to oxygen ions at elevated temperature. The most popular oxygen electrode materials are lanthanum strontium cobalt ferrite (LSCF) and lanthanum strontium cobaltite (LSC), perovskite materials able to catalyze the oxygen reduction and oxide ion oxidation reactions.
The electrolyte is a solid-state layer placed between the two electrodes. It is an electronic insulator and is impermeable to gas flow but permeable to the flow of oxygen ions. Hence, the main properties of this component are high ionic conductivity and low electronic conductivity. When the rSOC is operated in SOFC mode, oxygen ions flow from the oxygen electrode to the fuel electrode, where the fuel oxidation occurs. In SOEC mode, the reactants are reduced at the fuel electrode with the production of oxygen ions, which flow towards the oxygen electrode. The most widespread material for electrolytes is YSZ.
The interconnects are usually made of metallic materials. They provide or collect the electrons involved in the electrochemical reactions. In addition, they are shaped internally with gas channels to distribute the reactants over the cell surface.
Polarization curve
The most common tool to characterize the performance of a reversible solid oxide cell is the polarization curve. In this chart, the current density is related to the operating voltage of the cell. The usual convention is positive current density for fuel cell operation and negative current density for electrolysis operation. When the rSOC electrical circuit is not closed and no current is extracted or supplied to the cell, the operating voltage is the so-called open circuit voltage (OCV). If the gas compositions at the fuel electrode and the oxygen electrode are the same for both modalities, the polarization curves for the SOEC and SOFC modes have the same OCV. When some current density is extracted or supplied to the cell, the operating voltage starts to diverge from the OCV. This divergence is due to the polarization losses, which depend on three main phenomena:
the activation losses, predominant at very low current densities;
the ohmic losses, increasing linearly with the current density;
the concentration losses, occurring at very high current density, when the reactants inside the electrode get depleted.
The sum of the polarization losses takes the name of overpotential.
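A minimal numerical sketch of such a polarization curve is given below. The functional forms (a symmetric Butler–Volmer activation term, a linear ohmic term and a logarithmic concentration term) and all parameter values are illustrative assumptions rather than data for any specific cell; the sign convention is the one described above, with positive current density in SOFC mode and negative current density in SOEC mode.

```python
import numpy as np

# Illustrative (assumed) parameters for a generic cell at about 750 °C.
F, R, T = 96485.0, 8.314, 1023.0   # Faraday constant (C/mol), gas constant (J/mol/K), temperature (K)
OCV   = 1.00    # open circuit voltage, V (depends on gas composition)
j0    = 0.2     # exchange current density, A/cm^2 (activation losses)
ASR   = 0.3     # area-specific resistance, ohm*cm^2 (ohmic losses)
j_lim = 2.5     # limiting current density, A/cm^2 (concentration losses)
n     = 2       # electrons transferred per electrochemical reaction

def cell_voltage(j):
    """Operating voltage (V) at current density j (A/cm^2).
    Convention: j > 0 in SOFC (fuel cell) mode, j < 0 in SOEC (electrolysis) mode."""
    eta_act  = (2 * R * T / (n * F)) * np.arcsinh(j / (2 * j0))     # activation overpotential
    eta_ohm  = ASR * j                                              # ohmic overpotential
    eta_conc = -(R * T / (n * F)) * np.log(1 - abs(j) / j_lim)      # concentration overpotential
    # Losses lower the voltage below the OCV in SOFC mode and raise it above the OCV in SOEC mode.
    return OCV - eta_act - eta_ohm - np.sign(j) * eta_conc

for j in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(f"j = {j:+.1f} A/cm^2  ->  V = {cell_voltage(j):.3f} V")
```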
Other than the open circuit voltage, another fundamental theoretical voltage can be defined. The thermoneutral voltage depends on the enthalpy of the overall reaction taking place in the rSOC and the number of charges that are transferred within the electrochemical reactions. Its relationship with the operating voltage gives information about the heat demand or generation inside the cell.
During the electrolysis operation:
if the operating voltage is lower than the thermoneutral voltage, the reaction is endothermic;
if the operating voltage is higher than the thermoneutral voltage, the reaction is exothermic.
The fuel cell operation, instead, is always exothermic.
Chemistry
Various chemistries can be considered when dealing with reversible solid oxide cells, which in turn can influence their operating conditions and overall efficiency.
Hydrogen
When hydrogen and steam are considered as reactants, the overall reaction takes this form:
H2 + 1/2 O2 <=> H2O
where the forward reaction occurs during SOFC mode, and the backward reaction during SOEC mode. On the fuel electrode, hydrogen oxidation (forward reaction) takes place in SOFC mode and water reduction (backward reaction) takes place in SOEC mode:
H2 + O^2- <=> H2O + 2e-
On the oxygen electrode, oxygen reduction (forward reaction) occurs in SOFC mode and oxide ions oxidation (backward reaction) occurs in SOEC mode:
O2 +2e- <=> O^2-
The thermoneutral voltage for steam electrolysis is equal to 1.29 V.
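As a rough consistency check, the thermoneutral voltage follows from the reaction enthalpy and the two electrons transferred per water molecule; assuming a steam-splitting enthalpy of about 249 kJ/mol at typical operating temperatures:

```latex
V_{tn} = \frac{\Delta H}{zF}
\approx \frac{249\,000\ \mathrm{J\,mol^{-1}}}{2 \times 96\,485\ \mathrm{C\,mol^{-1}}}
\approx 1.29\ \mathrm{V}.
```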
Carbonaceous reactants
Unlike low-temperature electrochemical technologies, rSOCs can also process carbon-containing species with reduced risk of catalyst poisoning. Methane can be internally reformed on the Ni particles to produce hydrogen, similarly to what happens in steam reforming reactors. Subsequently, the produced hydrogen can undergo electro-oxidation. Moreover, when working in SOEC modality, water and carbon dioxide can be co-electrolyzed to generate hydrogen and carbon monoxide, forming syngas mixtures of various compositions.
The reactions taking place on the oxygen electrode are the same as those considered for the hydrogen/steam case. Although characterized by much slower kinetics than the reactions involving hydrogen and steam, the direct electro-oxidation of carbon monoxide (forward reaction) or the direct electro-reduction of carbon dioxide (backward reaction) can be considered as well:
CO + 1/2 O2 <=> CO2
The thermoneutral voltage of the CO2 electrolysis is equal to 1.48 V.
One useful way to depict the cycling between SOFC and SOEC mode of the rSOC operation with carbonaceous reactants is the C-H-O ternary diagram. Each point in the diagram represents a gas mixture with a different number of carbon, hydrogen or oxygen atoms. When dealing with the operation on reversible solid oxide cells, three distinct regions can be distinguished in the graph. For different operating conditions (i.e., different temperature and pressure), distinct boundary lines between these regions can be drawn. The three regions are:
the carbon deposition region: gas mixtures lying in this region are characterized by compositions that are prone to carbon deposition on the fuel electrode;
the fully oxidized region: this region is characterized by gas mixtures that are fully oxidized, hence they cannot be used as fuels in the rSOC;
the operating region: this region is characterized by gas mixtures that are suitable for the rSOC operation.
In the operating region, the fuel mixture and the exhaust mixture can be depicted. These two points are connected by a line which runs through points characterized by a constant H/C ratio. In fact, during rSOC operation in both modalities, the gases on the fuel electrode exchange only oxygen atoms with the oxygen electrode, while hydrogen and carbon are confined inside the fuel electrode. During SOFC operation, the composition of the gas in the fuel electrode moves towards the boundary line of the fully oxidized region, increasing its oxygen content. During SOEC operation, on the other hand, the gas mixture evolves away from the fully oxidized region towards the carbon deposition region, reducing its oxygen content.
Ammonia
An alternative and promising chemistry for rSOCs is the one involving ammonia conversion to hydrogen and nitrogen. Ammonia has great potential as a hydrogen carrier, due to its higher volumetric density with respect to hydrogen itself, and it can be directly fed to SOFCs. It has been demonstrated that ammonia-fed SOFCs operate through successive ammonia decomposition and hydrogen oxidation:
2NH3 -> N2 + 3H2
H2 + O^2- -> H2O + 2e-
Ammonia decomposition has been demonstrated to be slightly more efficient than simple hydrogen oxidation, confirming the great potential of ammonia as a fuel in addition to an energy carrier.
Unfortunately, ammonia cannot be directly synthesized on the fuel electrode of a rSOC, because the equilibrium reaction
N2 + 3H2 <=> 2NH3
is completely shifted towards the left at working temperatures above 600°C. For this reason, for clean ammonia production, hydrogen production via electrolysis must be coupled with nitrogen production from air through hydrogen oxidation and subsequent water separation.
rSOC systems for energy storage
Reversible solid oxide cells are receiving increased attention as energy storage solutions on the weekly or monthly scale. Other technologies for large-scale electrical storage, such as pumped-storage hydroelectricity and compressed air energy storage, are characterized by geographical limitations. On the other hand, Li-ion batteries suffer from limited discharge capabilities. In this regard, hydrogen storage is a promising alternative, since the produced fuel can be compressed and stored for months. Among hydrogen technologies, rSOCs are strong candidates for producing hydrogen and converting it back into electricity. Due to their high operating temperature, they are characterized by higher efficiency compared to technologies like PEM fuel cells or PEM electrolyzers. Moreover, the possibility to operate both the fuel oxidation and the electrolysis on the same device is beneficial for the capacity factor of the system, helping to reduce its specific investment cost.
Roundtrip efficiency
When dealing with rSOCs, the most important parameter to consider is the roundtrip efficiency, which is a measure of the efficiency of the system considering both the charge (SOEC) and discharge (SOFC) processes. The roundtrip efficiency for the single cell can be defined as:
η_RT = (Q_SOFC · V_SOFC) / (Q_SOEC · V_SOEC)
where Q is the charge supplied or consumed during the reactions, and V is the operating voltage. If the assumption of no current or reactant leakage is made, the charges exchanged during the two modes can be assumed to be equal. Then, the roundtrip efficiency can be written as:
η_RT = V_SOFC / V_SOEC
To maximize the roundtrip efficiency, the two operating voltages must be as close as possible. This condition can be achieved by operating the rSOC at low current densities in both modalities. In SOFC mode this is easily achievable, while in SOEC mode a voltage that is too low leads to endothermic operation. If the operating voltage in SOEC mode is lower than the thermoneutral voltage, additional heat sources at high temperature are needed to sustain the reaction. These could come from industrial waste heat or from nuclear reactors. If such sources are not easily accessible, though, electrical heating is necessary. This can be supplied by external heaters or by operating the cell at a voltage higher than the thermoneutral one. Both solutions, though, would inevitably lower the roundtrip efficiency of the rSOC. For this reason, in reversible operation, the thermoneutral voltage poses significant limitations on achieving high roundtrip efficiencies.
On the other hand, the thermoneutral voltage is greatly affected by the reaction chemistry. It has been demonstrated that increasing the yield of methane in the electrolysis operation can substantially decrease the thermoneutral voltage and the heat demand of the reaction. For conventional electrolyzers (operating at atmospheric pressure and 750°C), the methane content in the products is very low. It can be increased effectively by lowering the operating temperature to 600°C and increasing the operating pressure up to 10 bar. For example, the thermoneutral voltage is equal to 1.27 V at 750°C and 1 bar, while it becomes equal to 1.07 V at 600°C and 10 bar. Under these conditions, the rSOC can even be operated in exothermic mode at reduced voltages, making it possible to produce additional heat at high temperature. This result is very helpful in the design of high-efficiency rSOC systems for energy storage purposes.
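The sketch below illustrates how the cell-level roundtrip efficiency responds to the SOEC operating point when the SOEC voltage is held at the thermoneutral value; the SOFC operating voltage of 0.85 V is an assumed, illustrative figure, while the two SOEC voltages are the thermoneutral values quoted above.

```python
def roundtrip_efficiency(v_sofc, v_soec):
    """Cell-level roundtrip efficiency, assuming the same charge is exchanged
    in the charge (SOEC) and discharge (SOFC) phases."""
    return v_sofc / v_soec

v_sofc = 0.85                    # assumed SOFC operating voltage, V
for v_soec in (1.27, 1.07):      # thermoneutral voltages at 750 °C / 1 bar and 600 °C / 10 bar
    print(f"V_SOEC = {v_soec:.2f} V  ->  roundtrip efficiency = {roundtrip_efficiency(v_sofc, v_soec):.2f}")
```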
System configurations
Single reversible solid oxide cells can be arranged in series to form stacks. Stacks can then be arranged in modules to reach power capabilities of the order of kilowatts or megawatts.
One of the most challenging aspects in designing large rSOC systems for energy storage purposes is the thermal integration. When the rSOC is operated in electrolysis mode, thermal power is needed for the operation of the system. Thermal power must be provided at two different temperature levels. Heat is needed for water evaporation, and additional heat at high temperature may be needed if the SOEC modality is endothermic. The latter requirement can be avoided if the rSOC is operated with an exothermic reaction in SOEC modality, with a negative effect on the roundtrip efficiency. On the other hand, when the rSOC is operated in fuel cell mode, the reaction is characterized by a high exothermicity. A number of works in the scientific literature have proposed the exploitation of a Thermal energy storage (TES) to ease the thermal integration of the system.
Excess heat from the SOFC operation can be recovered and stored in a TES, and later used for the SOEC operation. Thermal energy storage typologies and heat transfer fluids that have been considered for this purpose are those used for Concentrated solar power (CSP) technologies. Diathermic oil can be used to store heat at relatively low temperature (for instance, 180°C) and exploited for water evaporation. Alternatively, phase-change materials characterized by high fusion points can be used to store heat at high temperature and enable the endothermic operation in the electrolysis mode. In this case, usually, rSOCs operate at different temperature levels in the two modalities (for example, 850°C in SOFC mode and 800°C in SOEC mode).
If carbonaceous chemistries are employed, the beneficial effect of methane synthesis inside the cell can be exploited to reduce the heat requirement of the electrolysis mode. In this regard, systems operating at high pressure and lower temperature (20 bar and 650°C) have been proposed to reduce or even eliminate the thermal power requirement of the rSOC system. Alternatively, the production of methane can be favored in external reactors. The methanation reaction is exothermic and favored at low temperature.
CO + 3H2 <=> CH4 + H2O
The syngas produced by the co-electrolysis can undergo a further reaction in one or multiple methanation reactors to produce methane and generate low-temperature heat for water evaporation. In addition, the formation of methane in such systems may be beneficial for reducing the size of the tanks used for storing the fuels. In fact, methane is characterized by a higher volumetric energy density than hydrogen in the gaseous form.
When computing the roundtrip efficiencies of rSOC systems, the definition must take into account the net electric consumption (or additional electric production) of the other components inside the system. The set of these components is referred to as the balance of plant (BOP), and may include pumps, compressors, expanders or fans needed for fluid circulation and processing inside the system. Therefore, the system roundtrip efficiency can be defined as:
where:
is the electric power produced in SOFC mode;
is the electric power consumed in SOEC mode;
is the net electric power consumption (negative) or production (positive) by the BOP in FC mode;
is the net electric power consumption (negative) or production (positive) by the BOP in EC mode.
The roundtrip efficiencies achievable with rSOC systems operating with steam and hydrogen can reach values in the order of 60%. On the other hand, systems exploiting the beneficial effects of methane formation, either inside the rSOC or in external reactors, can reach roundtrip efficiencies in the order of 70% and beyond.
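A minimal sketch of how such a system-level figure can be evaluated over one charge/discharge cycle is given below, assuming that the energy exchanged in each phase is the product of a constant power and the phase duration and using the sign convention from the list above (BOP power negative when consumed, positive when produced); all numbers are illustrative assumptions.

```python
def system_roundtrip_efficiency(p_sofc, p_soec, p_bop_fc, p_bop_ec, t_fc, t_ec):
    """System roundtrip efficiency over one cycle.
    p_sofc: electric power produced by the stack in SOFC mode (kW)
    p_soec: electric power consumed by the stack in SOEC mode (kW)
    p_bop_fc, p_bop_ec: net BOP power in each mode (negative = consumption)
    t_fc, t_ec: durations of the discharge and charge phases (h)."""
    energy_out = (p_sofc + p_bop_fc) * t_fc   # net energy delivered during discharge
    energy_in  = (p_soec - p_bop_ec) * t_ec   # net energy drawn during charge
    return energy_out / energy_in

# Illustrative cycle: 100 kW discharge with 5 kW of BOP consumption,
# 120 kW charge with 8 kW of BOP consumption, equal phase durations.
print(system_roundtrip_efficiency(100.0, 120.0, -5.0, -8.0, 1.0, 1.0))  # ~0.74
```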
See also
High temperature electrolysis
Solid oxide fuel cell
Solid oxide electrolyzer cell
Power-to-gas
Energy storage
Grid energy storage
Thermal energy storage
Hydrogen technologies
References
Electrochemistry
Energy conversion
Electrolysis
Hydrogen economy
Hydrogen technologies
Energy storage | Reversible solid oxide cell | [
"Chemistry"
] | 3,583 | [
"Electrochemistry",
"Electrolysis"
] |
51,934,343 | https://en.wikipedia.org/wiki/Photopharmacology |
History
Photopharmacology is an emerging multidisciplinary field that combines photochemistry and pharmacology. Built upon the ability of light to change the pharmacokinetics and pharmacodynamics of bioactive molecules, it aims at regulating the activity of drugs in vivo by using light. The light-based modulation is achieved by incorporating molecular photoswitches such as azobenzene and diarylethenes or photocages such as o-nitrobenzyl, coumarin, and BODIPY compounds into the pharmacophore. This selective activation of the biomolecules helps prevent or minimize off-target activity and systemic side effects. Moreover, light being the regulatory element offers additional advantages such as the ability to be delivered with high spatiotemporal precision, low to negligible toxicity, and the ability to be controlled both qualitatively and quantitatively by tuning its wavelength and intensity.
Though photopharmacology is a relatively new field, the concept of using light in therapeutic applications came into practice a few decades ago. Photodynamic therapy (PDT) is a well-established clinically practiced protocol in which photosensitizers are used to produce singlet oxygen for destroying diseased or damaged cells or tissues. Optogenetics is another method that relies on light for dynamically controlling biological functions especially brain and neural. Though this approach has proven useful as a research tool, its clinical implementation is limited by the requirement for genetic manipulation. Mainly, these two techniques laid the foundation for photopharmacology. Today, it is a rapidly evolving field with diverse applications in both basic research and clinical medicine which has the potential to overcome some of the challenges limiting the range of applications of the other light-guided therapies.
Figure 1. Schematic representation of the mechanism of (a) photopharmacology (b) photodynamic therapy, and (c) optogenetics.
The discovery of natural photoreceptors such as rhodopsins in the eye inspired the biomedical and pharmacology research community to engineer light-sensitive proteins for therapeutic applications. The development of synthetic photoswitchable molecules is the most significant milestone in the history of light-delivery systems. Scientists are continuing with their efforts to explore new photoswitches and delivery strategies with enhanced performance to target different biological molecules such as ion channels, nucleic acid, and enzyme receptors. Photopharmacology research progressed from in vitro to in vivo studies in a significantly short period of time yielding promising results in both forms. Clinical trials are underway to assess the safety and efficacy of these photopharmacological therapies further and validate their potential as an innovative drug delivery approach.
Mechanism of action
Molecular photoswitches are utilized in the field of photopharmacology, where the energetics of a molecule can be reversibly controlled with light to achieve spatial and temporal resolution of a particular effect. Photoswitches may function by undergoing photoisomerization through which light is used to conformationally adapt a molecule to a biological site, or through an environmental effect where an external factor such as a solvent effect or hydrogen bonding can selectively allow or quench an emissive state within a molecule. To visualize photophysical processes, a useful depiction is the Jablonski diagram. This is a diagram which depicts electronic and vibrational energy levels within a molecule as vertical levels and shows the possible relaxation pathways from excited states. Typically, the ground state is referred to as S0, and is drawn at the bottom of the figure with nearby vibrational excitations just above it. An absorption will promote an electron into the S1 state at any vibrational energy level, or into a higher order excited state if the absorbed energy has enough magnitude. The excited state can then undergo internal conversion which is the electronic relaxation to a lower state with the same vibrational energetics or vibrational relaxation within a state. This may be followed by an intersystem crossing wherein the electron undergoes a spin flip, or a radiative or nonradiative decay back to the ground state.
One example of an organic compound that undergoes photoisomerization is azobenzene. The structure is two phenyl rings joined by an N=N double bond, and it is the simplest aryl azo compound. Azobenzene and its derivatives have two accessible absorbance bands: the S1 state from an n-π* transition, which can be excited into using blue light, and the S2 state from a π-π* transition, which can be excited into using ultraviolet light. Azobenzene and its derivatives have two isomers, trans and cis. The trans isomer, having the phenyl rings on opposite sides of the azo double bond, is the thermally preferred isomer, as there is less stereoelectronic distortion and more delocalization present. However, excitation of the trans isomer to the S2 state facilitates a shift to the cis isomer. The S1 absorption is associated with a conversion back to the trans isomer. In this way, azobenzene and its derivatives can act as reversible stores of energy by maintaining a strained configuration in the cis isomer. Modifications of the substituents on azobenzene allow the energetics of these absorptions to be tuned, and if they are engineered such that the two absorption bands overlap, a single wavelength of light can be used to flip between them. There are a number of similar photoswitches which isomerize between E and Z configurations across an azo group (for instance, azobenzene and azopyrazole) or an ethylene bridge (for instance, stilbene and hemithioindigo).
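As an illustration of how the interconversion can be reasoned about quantitatively, the sketch below models a generic two-state photoswitch with simple first-order kinetics under continuous illumination; the rate constants are assumed values for illustration only and do not correspond to any particular azobenzene derivative or light source.

```python
import numpy as np

# Assumed, illustrative rate constants (they depend on wavelength, light
# intensity, quantum yields and the specific derivative).
k_tc    = 0.30    # trans -> cis photoisomerization rate, 1/s
k_ct    = 0.05    # cis -> trans photoisomerization rate at the same wavelength, 1/s
k_therm = 1e-4    # thermal cis -> trans relaxation, 1/s

def cis_fraction(t, cis0=0.0):
    """Fraction of molecules in the cis state after illuminating for time t (s),
    from the analytic solution of d[cis]/dt = k_tc*(1 - cis) - (k_ct + k_therm)*cis."""
    k_back = k_ct + k_therm
    k_tot = k_tc + k_back
    pss = k_tc / k_tot                      # photostationary-state cis fraction
    return pss + (cis0 - pss) * np.exp(-k_tot * t)

for t in (0, 5, 20, 60):
    print(f"t = {t:3d} s  ->  cis fraction = {cis_fraction(t):.2f}")
```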
Alternatively, photoswitches may themselves be emissive and exhibit environmental control over their properties. One such example is a class of ruthenium polypyridyl coordination complexes. Typically they contain two bidentate bipyridine or phenanthroline ligands and an extended phenanthroline-phenazine bidentate ligand such as dipyrido[3,2-a:2′,3′-c]phenazine (dppz). These complexes have an accessible metal-to-ligand charge transfer excited state (1MLCT) which undergoes rapid intersystem crossing to a 3MLCT state due to the strong spin-orbit coupling of the ruthenium center. These excited states are localized on the phenazine nitrogens of the extended ligand, and emission occurs from the 3MLCT state. Hydrogen bonding interactions, such as the presence of water around these nitrogen atoms, stabilize a non-emissive state localized on the phenazine, quenching the emission process. Thus, by controlling whether an aqueous or otherwise protic polar solvent is present, emissive behaviors can be “turned on/off”, and alternation between “bright states” and “dark states” is facilitated. This light switch behavior makes these and similar complexes of recent interest in photopharmacological applications such as photodynamic therapy.
Molecules
As previously mentioned, photopharmacology relies on the use of molecular photoswitches being incorporated into the structure of biologically active molecules which allows their potency to be controlled optically. They are introduced into the structure of bioactive compounds via insertion, extension, or bioisosteric replacement. These incorporations can be supported by structural considerations of the molecule or SAR (structure-activity relationship) analysis to determine the optimal position. Some examples of photoswitchable molecules commonly used in photopharmacology are azobenzenes, diarylethenes, and photocages.
Azobenzenes
Azobenzenes are a class of photoswitchable molecules and are used in photopharmalogical applications for their reversible photoisomerization, as described in the previous section. An example of a photoswitchable molecule that uses azobenzene is phototrexate. Phototrexate is an inhibitor of human dihydrofolate reductase and is an analogue of methotrexate, a chemotherapy agent. When in its photoactive cis form, phototrexate has been shown to be a potent antifolate and is relatively inactive when in the trans form. The azologization, or incorporation of azobenzene, of methotrexate allows for control of cytotoxic activity and is considered a step forward in developing targeted anticancer drugs with localized efficacy.
Diarylethenes
Diarylethene photoswitches have reversible cyclization and cycloreversion reactions that are photoinduced. They are a class of compounds that have aromatic functional groups bonded to each end of a carbon-carbon double bond. An example of this class of molecule that is used in photopharmacology is stilbene. Under the influence of light, stilbene switches between its two isomers (E and Z).
Figure 4. Figure showing stilbene isomerizations under light from E to Z.
Diarylethenes have been shown to have some advantages over the more researched azobenzene switches, such as thermal irreversibility, high photoswitching efficiency, favorable cellular stability, and low toxicity. Diarylethenes also show promise in fields other than photopharmacology, including optical data storage, optoelectronic devices, supramolecular self-assembly and anti-counterfeiting.
Photocages
A class of substances known as photocages contain photosensitive groups, also known as photoremovable protecting groups (PPGs), from which target substances are released upon exposure to specific wavelengths of light. The photosensitive groups physically and chemically protect the target from being released until the molecule undergoes a photoreaction. Due to these interactions with light, they are commonly used molecules in photopharmacology. More recently, they have played an important role in photoactivated chemotherapy (PACT). In PACT, photocages utilize a photoremovable protecting group that protects a cytotoxic drug until the bond is cleaved by interaction with light and the cytotoxic drug is released. Some well-known photocages include o-nitrobenzyl derivatives, coumarin derivatives, BODIPY, xanthene derivatives, quinone derivatives and diarylethene derivatives. However, there are limitations to using photocages in clinical applications, as there are not many PPGs that can be used in vivo. This is because PPG-payload conjugates need to have acceptable solubility and biological inertness for biocompatibility, and uncaging needs to be efficient above 600 nm.
Figure 5. Example of a photocage release system activated by NIR.
Application
Photopharmacology, the use of light to control the activity of drugs, has emerged as a promising approach to drug delivery and therapy. By harnessing the power of light, researchers can achieve precise control over drug release and activation, offering new possibilities for targeted and personalized treatments. This subsection explores the application of photopharmacology in drug delivery, focusing on recent advancements and potential clinical applications.
In one study, researchers designed HDAC inhibitors which can be activated or deactivated with light, providing precise therapeutic control. This approach could reduce the side effects of traditional chemotherapy by targeting inhibitors to specific body areas, potentially leading to more effective and personalized cancer treatments.
In another study, the researchers developed a strategy to attach a photoswitchable group to a common antibiotic; ciprofloxacin. By attaching the photoswitchable group, researchers can control the activity of ciprofloxacin with light. This approach could potentially lead to new ways of treating bacterial infections, with the ability to switch the antibiotic's activity on and off as needed.
In another study, an in vitro protocol to test different light wavelengths on human cancer cell lines was developed, finding that blue light most effectively inhibited cell growth. This suggests that photopharmacology could offer new cancer treatment options by targeting specific light wavelengths to modulate drug activity in tumor cells.
Another application of photopharmacology is developing a luminescent photoCORM grafted on carboxymethyl chitosan, which, when exposed to light, releases carbon monoxide (CO) to induce apoptotic death in colorectal cancer cells, demonstrating precise control over CO release for targeted cancer therapy.
Researchers developed a toolbox of photoswitchable antagonists that can interact with GPCRs, a class of proteins involved in various cellular processes. By using light to switch the activity of these antagonists, researchers can control the interaction between the antagonists and GPCRs in real time. This approach allows for precise modulation of GPCR activity, which could lead to new insights into cellular signaling pathways and potential therapeutic applications.
In another application by using light to control the assembly of nanopores, researchers can potentially regulate the flow of ions or molecules through these nanopores. This approach could have applications in various fields, including sensing, drug delivery, and nanotechnology.
Another paper reports on the use of photopharmacology to control drug activity; multifunctional fibers in the study deliver light and drugs to specific body areas. Implanted fibers activate light-responsive drugs, altering their structure, and offering precise drug delivery for conditions needing exact timing or dosage.
In another study, ligands were designed to switch their binding mode to G-quadruplex DNA upon exposure to visible light. This method could potentially modulate the activity of G-quadruplex DNA, crucial in gene expression and telomere maintenance, offering new therapeutic avenues, particularly in cancer treatment. The study underscores photopharmacology's promise in targeting specific DNA structures, suggesting G-quadruplex DNA as a viable target for future photopharmacological interventions.
Another study developed photoactivatable antibody-photoCORM conjugates targeting human ovarian cancer cells, releasing CO upon light exposure to diminish cell viability. This approach offers precise cancer cell targeting while minimizing harm to healthy tissue, showcasing the potential of photopharmacology in cancer therapy.
In another paper, a photoactivatable compound that binds to and modulates the activity of the CRY1 protein, regulating the mammalian circadian clock, was developed. By using light to control the compound's activity, researchers can potentially treat circadian rhythm disorders and related health conditions by modulating the function of CRY1.
In another application researchers used photopharmacology to control drug release and focus on a drug interacting with tubulin, visualizing its release in real time with time-resolved serial crystallography. This technique offers insights into drug-tubulin interactions and demonstrates the potential for designing drugs with precise actions.
Future directions
The future of photopharmacology holds immense promise. It has the potential to revolutionize conventional drug therapy, offering new avenues for precision medicine, the treatment of neurological disorders, and applications in oncology and ophthalmology. Additionally, it holds promise for the field of regenerative medicine, where photoswitches can be used to modulate the activity of signaling pathways for targeted tissue repair and regeneration.
Photopharmacology will continue to grow and expand with the new discoveries and advances happening in related fields such as synthetic chemistry, biology, nanotechnology, pharmacology, and bioengineering. While the potential of photopharmacology is vast, some challenges must be addressed to make it a clinical reality. One challenge is the development of stable and biocompatible photoswitches that are selective for their target receptors without cross-activity. It is particularly important that these photoswitches have absorbance bands falling within the wavelength range of 650 nm to 900 nm. Hence, optimal molecular design of photoswitches is required to achieve the characteristics mentioned above and the desired level of performance. At present, photopharmacology uses a rational drug design approach based on studying the structure-activity relationship; however, phenotypic screening for photoswitchable drugs could also be beneficial.
In order to achieve good spatiotemporal control over drug activity, there should be a significant difference between the activities of the isomers. However, understanding of the structural changes underlying the biological effects induced by photoswitching is limited. This scarcity of knowledge is also a challenge for the growth of this field, as it hampers the optimization of the activity and potency of the isomers to obtain the expected outcomes during applications.
The biggest challenge in photopharmacology is finding appropriate and effective ways to deliver light to deep tissues in the body while avoiding issues such as scattering and absorption. Various strategies have been attempted in this regard, one being the development of photoswitchable ligands that respond to deep-tissue-penetrating wavelengths like red or infrared light. Moreover, some recent preclinical studies have spurred the development of wireless, compact or injectable, remotely controllable devices capable of delivering light to neural tissues with minimal damage. There are novel optofluidic systems that can simultaneously regulate both drug delivery and light activity at specific sites. Although external delivery of light is the preferred method, internal exogenous light sources such as luminescent compounds could deliver light directly at the site of action. This could avoid the issues related to light penetration and also enhance the degree of selectivity. In addition, this creates the opportunity to use photopharmacology as a theranostic approach that combines targeted drug delivery and molecular imaging.
References
Chemistry
Medical treatments
Medicinal chemistry
Pharmacology | Photopharmacology | [
"Chemistry",
"Biology"
] | 3,673 | [
"Biochemistry",
"Pharmacology",
"nan",
"Medicinal chemistry"
] |
63,231,005 | https://en.wikipedia.org/wiki/Convexity%20%28algebraic%20geometry%29 | In algebraic geometry, convexity is a restrictive technical condition for algebraic varieties originally introduced to analyze Kontsevich moduli spaces in quantum cohomology. These moduli spaces are smooth orbifolds whenever the target space is convex. A variety is called convex if the pullback of the tangent bundle to a stable rational curve has globally generated sections. Geometrically this implies the curve is free to move around infinitesimally without any obstruction. Convexity is generally phrased as the technical condition H^1(C, f^*T_X) = 0,
since Serre's vanishing theorem guarantees this sheaf has globally generated sections. Intuitively this means that on a neighborhood of a point, with a vector field in that neighborhood, the local parallel transport can be extended globally. This generalizes the idea of convexity in Euclidean geometry, where given two points in a convex set , all of the points are contained in that set. There is a vector field in a neighborhood of transporting to each point . Since the vector bundle of is trivial, hence globally generated, there is a vector field on such that the equality holds on restriction.
Examples
There are many examples of convex spaces, including the following.
Spaces with trivial rational curves
If the only maps from a rational curve to are constant maps, then the pullback of the tangent sheaf is the free sheaf where . These sheaves have vanishing higher cohomology, and hence such varieties are always convex. In particular, Abelian varieties have this property since the Albanese variety of a rational curve is trivial, and every map from a variety to an Abelian variety factors through the Albanese.
Projective spaces
Projective spaces are examples of homogeneous spaces, but their convexity can also be proved using a sheaf cohomology computation. Recall the Euler sequence relates the tangent space through a short exact sequence
If we only need to consider degree embeddings, there is a short exact sequence
giving the long exact sequence
since the first two -terms are zero, which follows from being of genus , and the second calculation follows from the Riemann–Roch theorem, we have convexity of . Then, any nodal map can be reduced to this case by considering one of the components of .
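A sketch of the computation this paragraph refers to, in standard notation and writing f : P^1 → P^n for a map of degree d, is the following.

```latex
% Euler sequence on P^n and its pullback along f
0 \to \mathcal{O}_{\mathbb{P}^n} \to \mathcal{O}_{\mathbb{P}^n}(1)^{\oplus (n+1)} \to T_{\mathbb{P}^n} \to 0
\\[4pt]
0 \to \mathcal{O}_{\mathbb{P}^1} \to \mathcal{O}_{\mathbb{P}^1}(d)^{\oplus (n+1)} \to f^{*}T_{\mathbb{P}^n} \to 0
\\[4pt]
% In the long exact sequence, H^1(P^1, O(d)) = 0 for d >= 0 and H^2 vanishes on a curve, so
H^{1}\!\bigl(\mathbb{P}^1,\ \mathcal{O}_{\mathbb{P}^1}(d)^{\oplus (n+1)}\bigr) = 0
\;\Longrightarrow\;
H^{1}\!\bigl(\mathbb{P}^1,\ f^{*}T_{\mathbb{P}^n}\bigr) = 0 .
```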
Homogeneous spaces
Another large class of examples are homogeneous spaces where is a parabolic subgroup of . These have globally generated sections since acts transitively on , meaning it can take a basis at to a basis at any other point . Then, the pullback is always globally generated. This class of examples includes Grassmannians, projective spaces, and flag varieties.
Product spaces
Also, products of convex spaces are still convex. This follows from the Künneth theorem in coherent sheaf cohomology.
Projective bundles over curves
One more non-trivial class of examples of convex varieties is projective bundles for an algebraic vector bundle over a smooth algebraic curve.
Applications
There are many useful technical advantages of considering moduli spaces of stable curves mapping to convex spaces. That is, the Kontsevich moduli spaces have nice geometric and deformation-theoretic properties.
Deformation theory
The deformations of in the Hilbert scheme of graphs have tangent space
where is the point in the scheme representing the map. Convexity of gives the dimension formula below. In addition, convexity implies all infinitesimal deformations are unobstructed.
Structure
These spaces are normal projective varieties of pure dimension
which are locally the quotient of a smooth variety by a finite group. Also, the open subvariety parameterizing non-singular maps is a smooth fine moduli space. In particular, this implies the stacks are orbifolds.
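For reference, the dimension alluded to above can be recovered from the Riemann–Roch theorem applied to the pullback of the tangent bundle; in the notation of Fulton and Pandharipande, with n marked points and curve class β, and using convexity so that the first cohomology vanishes, the expected statement is:

```latex
\dim \overline{M}_{0,n}(X,\beta)
= h^{0}\!\bigl(C, f^{*}T_X\bigr) + n - 3
= \dim X + \int_{\beta} c_1(T_X) + n - 3 .
```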
Boundary divisors
The moduli spaces have nice boundary divisors for convex varieties given by
for a partition of and the point lying along the intersection of two rational curves .
See also
Stable curve
Moduli space
Gromov–Witten invariant
Quantum cohomology
Moduli of curves
References
External links
Gromov-Witten Classes, Quantum Cohomology, and Enumerative Geometry
Notes on Stable Maps and Quantum Cohomology
Algebraic geometry | Convexity (algebraic geometry) | [
"Mathematics"
] | 824 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
63,236,535 | https://en.wikipedia.org/wiki/Accelerator%20neutrino | An accelerator neutrino is a human-generated neutrino or antineutrino obtained using particle accelerators, in which a beam of protons is accelerated and collided with a fixed target, producing mesons (mainly pions), which then decay into neutrinos. Depending on the energy of the accelerated protons and on whether the mesons decay in flight or at rest, it is possible to generate neutrinos of different flavour, energy and angular distribution. Accelerator neutrinos are used to study neutrino interactions and neutrino oscillations, taking advantage of the high intensity of neutrino beams, as well as the possibility to control and understand their type and kinematic properties to a much greater extent than for neutrinos from other sources.
Muon neutrino beam production
The process of the muon neutrino or muon antineutrino beam production consists of the following steps:
Acceleration of a primary proton beam in a particle accelerator.
Proton beam collision with a fixed target. In such a collision secondary particles, mainly pions and kaons, are produced.
Focusing, by a set of magnetic horns, the secondary particles with a selected charge: positive to produce the muon neutrino beam, negative to produce the muon anti-neutrino beam.
Decay of the secondary particles in flight in a long (of the order of hundreds of meters) decay tunnel. Charged pions decay in more than 99.98% of cases into a muon and the corresponding neutrino, in accordance with the conservation of electric charge and lepton number:
π+ → μ+ + νμ, π− → μ− + ν̄μ
It is usually intended to have a pure beam, containing only one type of neutrino: either νμ or ν̄μ. Thus, the length of the decay tunnel is optimised to maximise the number of pion decays and simultaneously minimise the number of muon decays, in which undesirable types of neutrinos are produced:
μ+ → e+ + νe + ν̄μ, μ− → e− + ν̄e + νμ
In most kaon decays the appropriate type of neutrino (muon neutrinos for positive kaons and muon antineutrinos for negative kaons) is produced:
K+ → μ+ + νμ, K− → μ− + ν̄μ, (63.56% of decays),
K+ → π0 + μ+ + νμ, K− → π0 + μ− + ν̄μ, (3.35% of decays),
however, the decay into electron (anti)neutrinos is also a significant fraction:
K+ → π0 + e+ + νe, K− → π0 + e− + ν̄e, (5.07% of decays).
Absorption of the remaining hadrons and charged leptons in a beam dump (usually a block of graphite) and in the ground. At the same time, the neutrinos travel on unimpeded, close to the direction of their parent particles.
Neutrino beam kinematic properties
Neutrinos do not have an electric charge, so they cannot be focused or accelerated using electric and magnetic fields, and thus it is not possible to create a parallel, mono-energetic beam of neutrinos, as is done for charged-particle beams in accelerators. To some extent, it is possible to control the direction and energy of the neutrinos by properly selecting the energy of the primary proton beam and focusing the secondary pions and kaons, because the neutrinos take over part of their kinetic energy and move in a direction close to that of the parent particles.
Off-axis beam
A method that allows the energy distribution of the produced neutrinos to be narrowed further is the use of a so-called off-axis beam. The accelerator neutrino beam is a wide beam with no sharp boundaries, because the neutrinos in it do not move in parallel but have a certain angular distribution. However, the farther from the axis (centre) of the beam, the smaller the number of neutrinos, but the energy distribution also changes: the energy spectrum becomes narrower and its maximum shifts towards lower energies. The off-axis angle, and thus the neutrino energy spectrum, can be optimised to maximize the neutrino oscillation probability or to select the energy range in which the desired type of neutrino interaction is dominant.
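The narrowing can be illustrated with the standard two-body decay approximation for pions decaying in flight, in which the neutrino energy at a small laboratory angle θ from the pion direction is approximately E_ν ≈ (1 − m_μ²/m_π²)·E_π / (1 + γ²θ²), with γ = E_π/m_π. The script below is only a sketch of this kinematic effect; the 2.5° angle is chosen as a T2K-like example.

```python
import numpy as np

m_pi, m_mu = 0.13957, 0.10566        # pion and muon masses, GeV
a = 1 - (m_mu / m_pi) ** 2           # ~0.43: fraction of the pion energy carried by the neutrino at theta = 0

def e_nu(e_pi, theta):
    """Approximate neutrino energy (GeV) from a pion of energy e_pi (GeV)
    decaying in flight, observed at lab angle theta (rad) from the pion direction."""
    gamma = e_pi / m_pi
    return a * e_pi / (1 + (gamma * theta) ** 2)

theta_off_axis = np.radians(2.5)     # T2K-like off-axis angle
for e_pi in (2.0, 4.0, 6.0, 8.0):
    on_axis  = e_nu(e_pi, 0.0)
    off_axis = e_nu(e_pi, theta_off_axis)
    print(f"E_pi = {e_pi:.0f} GeV: on-axis E_nu = {on_axis:.2f} GeV, off-axis E_nu = {off_axis:.2f} GeV")
```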
The first experiment in which an off-axis neutrino beam was used was the T2K experiment.
Monitored and tagged neutrino beams
A high level of control of neutrinos at the source can be achieved by monitoring the production of charged leptons (positrons, muons) in the decay tunnel of the neutrino beam. Facilities that employ this method are called monitored neutrino beams. If the lepton rate is sufficiently small, modern particle detectors can time-tag the charged lepton produced in the decay tunnel and associate this lepton to the neutrino observed in the neutrino detector. This idea, which dates back to the 1960s, has been developed in the framework of the tagged neutrino beam concept, but it has not been demonstrated yet. Monitored neutrino beams produce neutrinos in a narrow energy range and, therefore, can employ the off-axis technique to predict the neutrino energy by measuring the interaction vertex, that is, the distance of the neutrino interaction from the nominal beam axis. An energy resolution in the 10-20% range was demonstrated in 2021 by the ENUBET Collaboration.
Neutrino beams in physics experiments
Below is the list of muon (anti)neutrino beams used in past or current physics experiments:
CERN Neutrinos to Gran Sasso (CNGS) beam produced by Super Proton Synchrotron at CERN used in OPERA and ICARUS experiments.
Booster Neutrino Beam (BNB) produced by the Booster synchrotron at Fermilab used in SciBooNE, MiniBooNE and MicroBooNE experiments.
Neutrinos at the Main Injector (NuMI) beam produced by the Main Injector synchrotron at Fermilab used in MINOS, MINERνA and NOνA experiments.
K2K neutrino beam produced by a 12 GeV proton synchrotron at KEK in Tsukuba used in K2K experiment.
T2K neutrino beam produced by the Main Ring synchrotron at J-PARC in Tokai used in T2K experiment.
Notes
Further reading
External links
Accelerator neutrinos - Fermilab
Accelerator physics
Neutrinos | Accelerator neutrino | [
"Physics"
] | 1,322 | [
"Applied and interdisciplinary physics",
"Accelerator physics",
"Experimental physics"
] |