| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
73,196,215 | https://en.wikipedia.org/wiki/Keratin-associated%20protein | Keratin-associated proteins (KRTAPs, KAPs) and keratins are the major components of hair and nails. The content of KRTAPs in hair varies considerably between species, ranging from less than 3% in human hair to 30–40% in echidna quill. Both keratin and KRTAPs are extensively cross-linked in hair through disulfide bonds via numerous cysteine residues in keratins. Given the economic importance of wool, the KRTAP family has been studied intensively in sheep.
Genetics
The KRTAP family of genes is unique to mammals. The family has evolved rapidly, with about 188 genes in the mouse genome, 175 in the sloth, 122 in humans, but only 35 in dolphins (where only 9 genes are functional). In humans, there are 101 intact KRTAP genes and 21 (non-functional) pseudogenes. There are two major groups of KRTAP genes: high/ultrahigh cysteine (HS-KRTAP) and high glycine-tyrosine (HGT-KRTAP), which are thought to have originated independently based on their distinct amino acid compositions.
Human KRTAP loci
The KRTAP locus on human chromosome 17 includes the following 40 genes (in this order on the chromosome; lower-case "p" indicates pseudogenes): KRTAP3-3, KRTAP3-2, KRTAP3p1, KRTAP3-1, KRTAP1-5, KRTAP1-4, KRTAP1-3, KRTAP1-1, KRTAP2-1, KRTAP2-2, KRTAP2-3, KRTAP2-4, KRTAP4p2, KRTAP4-7, KRTAP4-8, KRTAP4p1, KRTAP4-9, KRTAP4-11, KRTAP4-12, KRTAP4-6, KRTAP4-5, KRTAP4-4, KRTAP4-3, KRTAP4-2, KRTAP4-1, KRTAP4p3, KRTAP9-1, KRTAP9-9, KRTAP9-2, KRTAP9-3, KRTAP9-8, KRTAP9-4, KRTAP9-5, KRTAP9-6, KRTAP9-12, KRTAP9-7, KRTAP9-10, KRTAP29-1, KRTAP16-1, and KRTAP17-1.
Similarly, the KRTAP locus on human chromosome 21 contains the following genes: KRTAP24-1, KRTAP25-1, KRTAP26-1, KRTAP27-1, KRTAP13-6, KRTAP13p2, KRTAP13-2, KRTAP13-1, KRTAP13-3, KRTAP13-4, KRTAP13p1, KRTAP13-5, KRTAP19-1, KRTAP19-2, KRTAP19-3, KRTAP19-4, KRTAP19-5, KRTAP19p1, KRTAP19p2, KRTAP19p3, KRTAP19-6, KRTAP19p5, KRTAP19-7, KRTAP6-3, KRTAP6-2, KRTAP19-9, KRTAP6-1, KRTAP20-1, KRTAP20-2, KRTAP19p4, KRTAP21p1, KRTAP21-2, KRTAP21-1, KRTAP8p1, KRTAP8p2, KRTAP8-1, KRTAP7p1, KRTAP11-1, KRTAP19-8, KRTAP10-1, KRTAP10-2, KRTAP10-3, KRTAP10-4, KRTAP10-5, KRTAP10-6, KRTAP10-7, KRTAP10-8, KRTAP10-9, KRTAP10-10, KRTAP10-11, KRTAP12-4, KRTAP12-3, KRTAP12-2, KRTAP12-1, KRTAP12p1, KRTAP10-12, KRTAP10p1.
The other KRTAP genes form similar, but smaller clusters on chromosomes 2 and 11.
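The naming convention used in these gene lists (a family number, an optional lower-case "p" marking pseudogenes, then a member number) can be parsed mechanically. A minimal sketch; the helper function is hypothetical, written only to illustrate the convention:

```python
import re

def parse_krtap(name):
    """Split a KRTAP gene symbol into (family, member, is_pseudogene).
    Pseudogenes are marked by a lower-case 'p' after the family number,
    e.g. KRTAP4p2; functional genes use a hyphen, e.g. KRTAP4-7."""
    m = re.fullmatch(r"KRTAP(\d+)(p?)-?(\d*)", name)
    if not m:
        raise ValueError(f"not a KRTAP symbol: {name}")
    family = int(m.group(1))
    pseudo = m.group(2) == "p"
    member = int(m.group(3)) if m.group(3) else None
    return family, member, pseudo
```

Applied to a locus list, this makes it easy to group genes by family or to count the pseudogenes (the lower-case "p" entries) mentioned above.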
It has been proposed to rename the proteins from KRTAP to KAP, with the numbering scheme remaining the same; the gene names themselves would remain unchanged (KRTAPx-x etc.).
See also
Human chromosome 17
Keratin
Keratin-associated protein 5-6
References
External links
KRTAP proteins in Uniprot
The KRTAP locus on human chromosome 17 (NCBI genome viewer)
hair
proteins | Keratin-associated protein | [
"Chemistry",
"Biology"
] | 1,089 | [
"Biomolecules by chemical classification",
"Organ systems",
"Molecular biology",
"Proteins",
"Hair"
] |
73,196,367 | https://en.wikipedia.org/wiki/Celloscope%20automated%20cell%20counter | Celloscope automated cell counter was developed in the 1950s for enumeration of erythrocytes, leukocytes, and thrombocytes in blood samples. Together with the Coulter counter, the Celloscope analyzer can be considered one of the predecessors of today's automated hematology analyzers, as the principle of the electrical impedance method is still utilized in cell counters installed in clinical laboratories around the world.
History
The Celloscope was developed for the Swedish company AB Lars Ljungberg & Co under the direction of engineer Erik Öhlin at Linson Instrument AB. In an interview published in Clinical Biochemistry in the Nordics, a membership magazine of the Nordic Association for Clinical Chemistry, Lars Ljungberg explained that he and his coworkers had been considering different solutions for counting blood cells for some time when they came across a method presented by the American Navy, in which particles could be counted as they passed through a capillary hole carrying a weak direct current. The Celloscope method exploits the fact that blood cells are not conductive and therefore cause interruptions (pulses) in the current, which can then be counted. What Ljungberg and his coworkers did not know was that Wallace H. Coulter in Chicago had applied for and received a patent on the particle-counting principle in 1953.
When presented at a German trade show in September 1957, the Celloscope counter was examined by Dr. George Brecher, the first author of one of the NIH evaluations of the Coulter counter. In a letter to Coulter, Brecher described what he thought was a close functional copy of the Coulter counter, yet with simpler electronics and an integrated sample stand, making for a smaller and less costly instrument for clinical use.
When the Celloscope was introduced to the market in the early 1960s, Coulter Electronics Inc filed a lawsuit against AB Lars Ljungberg & Co for alleged infringement of the American patent. After long negotiations, the companies agreed that AB Lars Ljungberg & Co would compensate Coulter for the sales made in the USA and the European countries where he held the patent, and that it would remain free to sell its analyzer in other regions.
Method principle
Before the introduction of the first automated cell counters, hematologists had to rely on manual cell counts under the microscope. The Celloscope method for automated counting of blood cells was described in an article by Öhlin in 1958. In the described method, cells in a saline (conductive) solution pass through a capillary whose length and diameter are comparable to the size of blood cells. At the same time, an electric current passes through the capillary, and each cell gives rise to an electric pulse through the increase in resistance it causes in the electric circuit. The number of pulses is recorded and corresponds to the number of cells in a certain volume. Diluting the blood sample enough that the distance between cells passing through the capillary is greater than the dimensions of the cells and the capillary ensures that each cell is counted individually. As cells are counted in an absolute volume of the suspension, the number of cells per mm3 of whole blood can be calculated using the dilution factor.
The automated Celloscope method improved accuracy compared with manual examination by microscopy, while decreasing the manual work for the operator. It allowed 50,000 cells to be counted in about 45 seconds.
The Celloscope counter was also equipped with a discriminator, or electrical threshold, which allows only pulses above a certain size to be counted, enabling different blood cell types to be counted. For example, at a threshold of 3 μm, all cells are counted; a re-count at a threshold of 4 μm then allows the number of cells between 3 and 4 μm to be calculated from the difference between the two counts.
Diluting the blood sample 1/80 000 in a physiological saline solution allows enumeration of the erythrocytes, as the number of leukocytes does not affect the result by more than about 1/1 000 000. For the leukocyte count, cells in the same sample are hemolyzed with saponin or cetrimide so that only the nuclei of the leukocytes are counted. For the platelet count, a smaller capillary diameter is used.
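The arithmetic described above, recovering the whole-blood concentration from the dilution factor and sizing cells with two discriminator settings, can be sketched in a few lines. The function names and example numbers here are illustrative only, not taken from the Celloscope literature:

```python
def cells_per_mm3(pulse_count, counted_volume_mm3, dilution_factor):
    """Whole-blood concentration from a pulse count made in a known
    volume of the diluted suspension: count per diluted volume,
    scaled back up by the dilution factor."""
    return pulse_count / counted_volume_mm3 * dilution_factor

def count_between_thresholds(count_at_lower, count_at_upper):
    """Cells in a size window from two discriminator runs: the lower
    threshold counts all cells above it, the upper threshold excludes
    the window, so the difference is the window count."""
    return count_at_lower - count_at_upper
```

For example, 6,250 pulses counted in 100 mm3 of a 1/80,000 dilution corresponds to 5,000,000 cells per mm3 of whole blood, a typical erythrocyte count.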
To identify cell morphologies and variants that the counter cannot detect, microscopy remains an essential complement to the automated cell count method.
Successor cell counters
The impedance method used in the Celloscope analyzer has been further developed to also allow counting of leukocyte subgroups. In addition to the cell counts, modern hematology analyzers are also capable of reporting parameters related to cell size and hemoglobin concentration, as well as a range of calculated parameters, for a complete blood count (CBC). These analyzers were initially intended for hospital laboratories, as they required skilled staff and a high sample load to justify their relatively high cost. However, with the increasing need for decentralized healthcare, demand for simpler analyzers emerged and prompted the development of benchtop cell counters that could be used in a near-patient clinical setting with a minimum of training. In 1969, Erik Öhlin founded Swelab Instrument AB (today, Boule Medical AB), and later, the Swelab AutoCounter AC-series was launched to meet the needs of smaller clinical laboratories.
The cell counters of that time used LED screens for result review. In 1982, Medonic AB, another Swedish company with a focus on hematology, was founded. The founders, Ingemar Berndtsson and Abraham Bottema, both had long experience in hematology, clinical chemistry, and blood banking engineering. In 1985, Medonic AB launched the Cellanalyzer CA 480 system, its first in-house-developed cell counter with a built-in display that also showed the cell histograms. When computers began to be incorporated into the analyzers, other brands, like the Swelab analyzers, also came with a display.
Both targeting smaller clinical laboratories, Swelab Instrument AB and Medonic AB were competitors on the decentralized hematology testing market. In the late 1990s, both companies were acquired by Boule Diagnostics AB. The company has kept the parallel brands, and the analyzers are still manufactured at its facilities in Stockholm, Sweden and supplied under the Swelab and Medonic trademarks for the decentralized hematology testing market.
When Coulter was acquired by Beckman, former Coulter employees Dr. Harold R Crews, Andrew C Swanson, and Donald Grantham founded Clinical Diagnostic Solutions, Inc. (CDS) in 1997, focusing on the development and production of generic reagents and control material. In 2004, CDS was acquired by Boule. Through this acquisition, Boule gained in-house development and production of both the instruments and the consumables included in a complete hematology system.
References
External links
Boule web site
Scientific instruments
Laboratory equipment | Celloscope automated cell counter | [
"Technology",
"Engineering"
] | 1,469 | [
"Scientific instruments",
"Measuring instruments"
] |
73,196,457 | https://en.wikipedia.org/wiki/DESY%20%28particle%20accelerator%29 | The particle accelerator DESY (acronym for Deutsches Elektronen-Synchrotron or German Electron Synchrotron) was the first particle accelerator of the DESY research centre in Hamburg and the one that gave the research centre its name. The DESY synchrotron was used for research in particle physics from 1964 to 1978 and served as a pre-accelerator for other accelerator facilities at DESY.
Construction of the synchrotron started in 1960. With a circumference of 300 m, it was the world's largest facility of its kind and accelerated electrons to 7.4 GeV. The first electrons circulated in the accelerator on 25 February 1964, and research activities into elementary particles at the DESY synchrotron started in May 1964. In the experiments carried out at DESY, the electron beams were directed at fixed targets.
Research at the DESY particle accelerator
DESY first attracted international attention in 1966 with its confirmation of the theory of quantum electrodynamics. A world first, the production of proton–antiproton pairs using high-energy radiation, was also achieved at the DESY accelerator in 1966. Additionally, protons were probed very accurately, showing that they do not have a solid core. In the following decade, DESY established itself as a center of expertise for developing and operating particle accelerator facilities.
Before 1964, no continuous soft X-ray radiation sources existed. In that year, research began using the synchrotron radiation that occurs as a side effect of electron acceleration in the DESY ring.
Synchrotron radiation was first used for absorption spectroscopy at the synchrotron in 1967. The European Molecular Biology Laboratory (EMBL) made use of this new technology's potential and in 1972 established a permanent branch at DESY with the aim of analyzing the structure of biological molecules through synchrotron radiation.
Pre-accelerator and test beam facility
The particle physics experiments at the original DESY synchrotron ran until 1978. After that, it was rebuilt and upgraded several times, serving as a pre-accelerator for DESY's larger accelerator facilities starting in 1973 for the storage ring DORIS, and from 1978 mainly for PETRA.
After a fundamental modification to become the proton synchrotron DESY III, the facility went back into operation in 1987 together with the newly built electron synchrotron DESY II as a pre-accelerator for HERA. With the shutdown of HERA in 2007, the proton synchrotron DESY III was also decommissioned after 43 years of operation.
Today, the DESY II electron synchrotron still serves as a pre-accelerator for PETRA III and as a test beam facility with three beamlines used by research groups worldwide to test detector components.
References
External links
DESY
Particle accelerators
Particle physics facilities
Synchrotron radiation facilities
Buildings and structures in Altona, Hamburg | DESY (particle accelerator) | [
"Materials_science"
] | 584 | [
"Materials testing",
"Synchrotron radiation facilities"
] |
73,196,545 | https://en.wikipedia.org/wiki/Dan%20Linstedt | Daniel Linstedt is an American data architect and inventor of the data modeling method data vault for data warehouses and business intelligence. He developed the model in the 1990s and published the first version in the early 2000s. In 2012, Data Vault 2.0 was announced and it was released in 2013. In addition to data modeling, the data vault method incorporates process design, database tuning and performance improvements for ETL/ELT, Capability Maturity Model Integration (CMMI) and agile software development.
Dan holds a Bachelor of Science in computer science from California State University, Chico. Since 2020, he has been the chief executive officer (CEO) of DataVaultAlliance Holdings LLC.
Selected works
The Business of Data Vault Modeling (2010-11-19) by author Daniel Linstedt and co-authors Kent Graziano and Hans Hultgren
Super Charge Your Data Warehouse: Invaluable Data Modeling Rules to Implement Your Data Vault (Data Warehouse Architecture Book 1) (2012-05-20) by Dan Linstedt and Kent Graziano
Building a Scalable Data Warehouse with Data Vault 2.0 (2015-09-15) by Daniel Linstedt and Michael Olschimke
See also
Data architecture
Enterprise architecture
References
Living people
Year of birth missing (living people)
People in information technology | Dan Linstedt | [
"Technology"
] | 263 | [
"People in information technology",
"Information technology"
] |
73,196,815 | https://en.wikipedia.org/wiki/Resilience%20%28mathematics%29 | In mathematical modeling, resilience refers to the ability of a dynamical system to recover from perturbations and return to its original stable steady state. It is a measure of the stability and robustness of a system in the face of changes or disturbances. If a system is not resilient enough, it is more susceptible to perturbations and can more easily undergo a critical transition. A common analogy used to explain the concept of resilience of an equilibrium is one of a ball in a valley. A resilient steady state corresponds to a ball in a deep valley, so any push or perturbation will very quickly lead the ball to return to the resting point where it started. On the other hand, a less resilient steady state corresponds to a ball in a shallow valley, so the ball will take a much longer time to return to the equilibrium after a perturbation.
The concept of resilience is particularly useful in systems that exhibit tipping points, whose study has a long history that can be traced back to catastrophe theory. While this theory was initially overhyped and fell out of favor, its mathematical foundation remains strong and is now recognized as relevant to many different systems.
History
In 1973, Canadian ecologist C. S. Holling proposed a definition of resilience in the context of ecological systems. According to Holling, resilience is "a measure of the persistence of systems and of their ability to absorb change and disturbance and still maintain the same relationships between populations or state variables". Holling distinguished two types of resilience: engineering resilience and ecological resilience. Engineering resilience refers to the ability of a system to return to its original state after a disturbance, such as a bridge that can be repaired after an earthquake. Ecological resilience, on the other hand, refers to the ability of a system to maintain its identity and function despite a disturbance, such as a forest that can regenerate after a wildfire while maintaining its biodiversity and ecosystem services. With time, the once well-defined and unambiguous concept of resilience has experienced a gradual erosion of its clarity, becoming more vague and closer to an umbrella term than a specific concrete measure.
Definition
Mathematically, resilience can be approximated by the inverse of the return time to an equilibrium,

$$\text{resilience} \approx \frac{1}{T_R} = \left|\operatorname{Re}(\lambda_{\max})\right|,$$

where $\lambda_{\max}$ is the maximum eigenvalue of the Jacobian matrix $J$ of the system linearized about the equilibrium.
The larger this value is, the faster the system returns to the original stable steady state; in other words, the faster perturbations decay.
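For a concrete two-variable example, the decay rate of perturbations can be computed directly from the Jacobian of the linearized system. The sketch below is a hypothetical helper using the closed-form eigenvalues of a 2×2 matrix; it assumes the steady state is stable (all eigenvalue real parts negative):

```python
import cmath

def resilience_2x2(a, b, c, d):
    """Resilience of a stable steady state whose linearization has
    Jacobian [[a, b], [c, d]]: the absolute real part of the dominant
    (slowest-decaying) eigenvalue, found via the quadratic formula."""
    tr = a + d
    det = a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    lam1 = (tr + disc) / 2
    lam2 = (tr - disc) / 2
    lam_max = max(lam1, lam2, key=lambda z: z.real)  # closest to zero
    return abs(lam_max.real)
```

A diagonal Jacobian with eigenvalues -2 and -5 gives resilience 2: the slow mode at rate 2 dominates the return to equilibrium, even though the other mode decays faster.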
Applications and examples
In ecology, resilience might refer to the ability of an ecosystem to recover from disturbances such as fires, droughts, or the introduction of invasive species. A resilient ecosystem would be one that is able to adapt to these changes and continue functioning, while a less resilient ecosystem might experience irreversible damage or collapse. In practice, the definition of resilience has remained vague, which has slowed the proper application of its insights to ecosystem management.
In epidemiology, resilience may refer to the ability of a healthy community to recover from the introduction of infected individuals. That is, a resilient system is more likely to remain at the disease-free equilibrium after the invasion of a new infection. Some stable systems exhibit critical slowing down where, as they approach a basic reproduction number of 1, their resilience decreases, hence taking a longer time to return to the disease-free steady state.
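The critical slowing down described above can be made concrete with the simplest SIS model, whose disease-free state obeys the standard linearization dI/dt ≈ γ(R0 − 1)I for small numbers of infected individuals. A minimal sketch under that assumption (the function name is illustrative):

```python
def sis_resilience(gamma, r0):
    """Resilience (decay rate of perturbations) of the disease-free
    equilibrium in the linearized SIS model dI/dt = gamma*(r0 - 1)*I.
    Valid for r0 < 1, where the equilibrium is stable; the rate
    vanishes as r0 approaches 1 (critical slowing down)."""
    if r0 >= 1:
        raise ValueError("disease-free equilibrium is unstable for r0 >= 1")
    return gamma * (1 - r0)
```

With recovery rate γ = 1, resilience falls from 0.5 at R0 = 0.5 to 0.01 at R0 = 0.99, so a small outbreak takes roughly fifty times longer to die out near the tipping point.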
Resilience is an important concept in the study of complex systems, where there are many interacting components that can affect each other in unpredictable ways. Mathematical models can be used to explore the resilience of such systems and to identify strategies for improving their resilience in the face of environmental or other changes. For example, when modelling networks it is often important to be able to quantify network resilience, or network robustness, to the loss of nodes. Scale-free networks are particularly resilient since most of their nodes have few links. This means that if some nodes are randomly removed, it is more likely that the nodes with fewer connections are taken out, thus preserving the key properties of the network.
See also
Engineering resilience
Ecological resilience
Critical transition
Bifurcation theory
References
Mathematical modeling
Mathematical terminology
Applied mathematics
Conceptual modelling
Knowledge representation | Resilience (mathematics) | [
"Mathematics"
] | 891 | [
"Applied mathematics",
"Mathematical modeling",
"nan"
] |
73,196,880 | https://en.wikipedia.org/wiki/David%20R.%20Shonnard | David R. Shonnard is an American engineer. He is a Richard and Bonnie Robbins Chair in Sustainable Use of Materials and former director of the Michigan Technological University Sustainable Futures Institute. He has expertise in systems analysis for sustainability, environmental life cycle assessments of renewable energy technologies, and chemical recycling of waste plastics for a circular economy.
Biography
Shonnard earned a Ph.D. from the University of California at Davis and had appointments at Lawrence Livermore National Laboratory and the University of California at Berkeley prior to joining Michigan Technological University in 1993. He has served on advisory committees for the DOE, UDSA, and the REMADE Institute in areas of biomass research and development and materials circular economy. He is co-author of two green engineering and sustainable engineering textbooks and has published over 200 works appearing in peer-reviewed research journals, technical reports, and conference proceedings.
Research interests
Shonnard has broad research interests that include diffusion and adsorption of pollutants in soils, atmospheric transport of hazardous compounds, environmental risk assessment, in-situ subsurface remediation, environmentally-conscious design of chemical processes, advanced biofuels reaction engineering, and applications of pyrolysis to waste plastics recycling. Sponsors of his research program include federal (NSF, DOE, DARPA, USDA, FAA), state (MI MTRAC), and numerous industrial firms. He holds patents in enzymatic and chemical conversion technologies and is founder of a company, SuPyRec, to commercialize chemical recycling of waste plastics.
Select publications
References
Living people
Year of birth missing (living people)
Place of birth missing (living people)
Chemical engineering academics
University of California, Davis alumni
University of Nevada, Reno alumni
Michigan Technological University faculty
American company founders | David R. Shonnard | [
"Chemistry"
] | 351 | [
"Chemical engineering academics",
"Chemical engineers"
] |
73,196,911 | https://en.wikipedia.org/wiki/2023%20Plumpang%20oil%20depot%20fire | On 3 March 2023, a fire followed by an explosion occurred at a Pertamina oil depot in Plumpang, Koja, Jakarta. The fire spread to nearby residential areas, and at least thirty three people were killed.
The depot
Pertamina's oil storage depot at Plumpang is often considered the most important fuel depot in Indonesia, having been established in 1974 and having a capacity of nearly 300 million liters of fuel. By itself, the depot handles around 20 percent of the country's fuel supply, primarily serving the Greater Jakarta area. It had previously experienced a fire in 2009, when one person was killed.
Residential areas were initially quite distant from the fuel depot, with a plot of land owned by Pertamina separating them. However, starting in 1998, locals gradually encroached on the Pertamina plot, building houses illegally until thousands of residents were living in close proximity to the fuel depot. By the time of the fire, some houses were just one meter away from the depot's fencing, and the Pertamina plot had become a dense residential area. Following the 2009 fire, Pertamina had intended the minimum distance between the depot and residential areas to be 300 meters.
Fire and explosion
Sometime around 8 PM local time (UTC+7), a fire broke out at the fuel depot. According to Pertamina's spokesperson, the fire originated from the depot's receiving pipeline; it was suspected that a lightning strike had sparked it. Firefighters received a report of the fire at 20:11. The fire spread to a number of nearby houses, and local witnesses reported a loud explosion. Dozens of local houses burned, although by around 23:00 most of the residential fires had been extinguished.
Two firetrucks and ten firefighters were initially deployed, gradually increasing to 52 firetrucks and 260 firefighters. By around 22:30, the fire had been contained, and the blaze was extinguished by around midnight.
Aftermath
Dozens of burn victims were evacuated to nearby hospitals. By 23:00 local time, 17 people had been confirmed killed, including two children, with over fifty injured. This was revised down to 13 killed as of the following day, with three children among the dead. A further four bodies were found in the ruins on Saturday morning, raising the death toll to 17, and it was updated to 19 the following day, with another three teenagers still missing. Around 600 people were evacuated overnight to the North Jakarta mayoral office and nearby public buildings. By 24 March, as more victims in hospitals succumbed to their injuries, the death toll had risen to 33.
Fueling operations resumed at the depot by 04:00 local time the following day. Pertamina utilized several other fuel depots in Greater Jakarta to make up for the disruption at Plumpang. State-owned enterprises minister Erick Thohir stated his intent to relocate the residents living close to the facility. President Joko Widodo visited an evacuee camp on 5 March and floated the options of either relocating nearby residents or moving the Pertamina depot to a reclaimed island.
References
Plumpang
Plumpang
2020s in Jakarta
Plumpang
Building and structure fires in Indonesia
Explosions in Indonesia
Petroleum in Indonesia
Disasters in Java
Plumpang
Pertamina
Plumpang
Plumpang | 2023 Plumpang oil depot fire | [
"Chemistry"
] | 669 | [
"Industrial fires and explosions",
"Explosions"
] |
73,197,025 | https://en.wikipedia.org/wiki/Cerium%20stearate | Cerium stearate is a metal-organic compound, a salt of cerium and stearic acid with the chemical formula . The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid.
Synthesis
Cerium stearate is synthesized from the reaction of cerium oxide with stearic acid in an inert atmosphere at temperatures between 100 and 200 °C. It can also be obtained by the reaction of cerium nitrate and potassium stearate.
Physical properties
The compound forms a white powder which is insoluble in water.
Uses
The compound is used in a variety of industrial and laboratory applications: as a lubricant, antioxidant, and antifoaming agent. Other uses include as a catalyst in the synthesis of polymers and as a stabilizer in the production of plastics.
References
Stearates
Cerium(III) compounds | Cerium stearate | [
"Chemistry"
] | 183 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
73,197,668 | https://en.wikipedia.org/wiki/Libroadrunner | libRoadRunner is a C/C++ software library that supports simulation of SBML based models.. It uses LLVM to generate extremely high-performance code and is the fastest SBML-based simulator currently available. Its main purpose is for use as a reusable library that can be hosted by other applications, particularly on large compute clusters for doing parameter optimization where performance is critical. It also has a set of Python bindings that allow it to be easily used from Python as well as a set of bindings for Julia.
libRoadRunner is often paired with Tellurium, which adds additional functionality such as Antimony scripting.
Capabilities
Time-course simulation using the CVODE, RK45, and Euler solvers of ordinary differential equations, which can report the system's variable concentrations and reaction rates over time.
Steady-state calculations using non-linear solvers such as KINSOL and NLEQ2
Stochastic simulation using the standard Gillespie algorithm.
Supports both steady-state and time-dependent metabolic control analysis, including calculating the elasticities with respect to the variable metabolites by algebraic or numerical differentiation of the rate equations, as well as the flux and concentration control coefficients by means of matrix inversion and perturbation methods.
libRoadRunner also computes the structural matrices (e.g. the K- and L-matrices) of a stoichiometric model.
The stability of a system can be investigated by way of the system's eigenvalues.
Data and results can be plotted via matplotlib, or saved in text files.
libroadrunner supports the import and export of standard SBML.
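The standard Gillespie (direct-method) algorithm listed above can be sketched for a single irreversible reaction A → B with mass-action propensity k·A. This is a from-scratch illustration of the algorithm itself, not libRoadRunner's implementation:

```python
import random

def gillespie_ab(k, a0, t_end, rng):
    """Direct-method Gillespie simulation of the irreversible reaction
    A -> B.  With one reaction channel the propensity is k*A, the
    waiting time to the next firing is exponential with that rate,
    and each firing converts one A into one B."""
    t, a, b = 0.0, a0, 0
    trajectory = [(t, a, b)]
    while a > 0:
        t += rng.expovariate(k * a)  # draw time to the next event
        if t > t_end:
            break
        a, b = a - 1, b + 1
        trajectory.append((t, a, b))
    return trajectory
```

With several reaction channels, the direct method additionally draws which reaction fires, with probability proportional to each channel's propensity; libRoadRunner handles the general case for arbitrary SBML models.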
Applications
libRoadRunner has been widely used in the systems biology community for research in systems biology modeling, as well as serving as a host for other simulation platforms.
Software applications that use libRoadRunner
CompuCell3D
CRNT4SBML
DIVIPAC
massPy
pyBioNetFit
PhysiCell
pyViPR
runBiosimulations
SBMLSim
Tellurium (simulation tool)
Tissue Forge (multi-cellular simulator)
TOPAS-Tissue
Research applications
libRoadRunner has been used in a wide variety of research projects. The following is a small selection of those studies:
Tickman et al. describe developing multi-layer CRISPRa/i circuits for genetic programs using Tellurium/libRoadRunner as the computational application.
Salazar-Cavazos et al. used pyBioNetFit/libRoadRunner to investigate multisite EGFR phosphorylation.
Douilhet et al. used Tellurium/libRoadRunner to investigate the use of genetic algorithms with rank selection optimization.
Schmiester et al. used pyBioNetFit/libRoadRunner to investigate gradient-based parameter estimation using qualitative data.
Yang et al. used CompuCell3D/libRoadRunner to model transcription factor cooperation in mouse liver.
Notability
libRoadRunner was the first SBML simulator to use just-in-time compilation via LLVM.
It is the only SBML simulator that exploits AUTO2000 for bifurcation analysis.
A number of reviews and commentaries have been written that discuss libroadrunner:
Maggioli et al. conduct a speed comparison of various SBML simulators and conclude that libRoadRunner is the fastest SBML simulator currently available to researchers.
Koster et al. discuss the speed advantages of libRoadRunner for solving differential equations compared to solving stochastic systems.
Development
Development of libRoadRunner is primarily funded through research grants from the National Institutes of Health.
See also
List of systems biology modeling software
References
External links
GitHub page
Systems biology
Ordinary differential equations
Software using the Apache license | Libroadrunner | [
"Biology"
] | 774 | [
"Systems biology"
] |
73,198,875 | https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Kaplansky%20theorem | The Erdős–Kaplansky theorem is a theorem from functional analysis. The theorem makes a fundamental statement about the dimension of the dual spaces of infinite-dimensional vector spaces; in particular, it shows that the algebraic dual space is not isomorphic to the vector space itself. A more general formulation allows to compute the exact dimension of any function space.
The theorem is named after Paul Erdős and Irving Kaplansky.
Statement
Let $V$ be an infinite-dimensional vector space over a field $K$ and let $B$ be some basis of it. Then for the dual space $V^*$,

$$\dim V^* = \operatorname{card}(K^B) = |K|^{|B|}.$$

By Cantor's theorem, this cardinal is strictly larger than the dimension of $V$. More generally, if $X$ is an arbitrary infinite set, the dimension of the space $K^X$ of all functions $X \to K$ is given by:

$$\dim_K K^X = |K|^{|X|}.$$

When $X$ is finite, it is a standard result that $\dim_K K^X = |X|$. Together, these give a full characterization of the dimension of this space.
References
Functional analysis
Paul Erdős | Erdős–Kaplansky theorem | [
"Mathematics"
] | 179 | [
"Theorems in mathematical analysis",
"Theorems in functional analysis"
] |
73,199,826 | https://en.wikipedia.org/wiki/PSR%20J1930%E2%80%931852 | PSR J1930–1852 is a binary pulsar system, composed of a pulsar and a neutron star and orbiting around their common center of mass. Located away from Earth in the constellation Sagittarius, it is the most distantly-separated double neutron star system known.
See also
Hulse–Taylor binary, first pulsar in a binary system discovered
PSR J0737−3039, first double pulsar binary system discovered
PSR J1946+2052, double neutron star system with the shortest known orbital period
Notes
References
Pulsars
Double neutron star systems
Sagittarius (constellation)
Astronomical objects discovered in 2015 | PSR J1930–1852 | [
"Astronomy"
] | 138 | [
"Sagittarius (constellation)",
"Constellations"
] |
73,200,138 | https://en.wikipedia.org/wiki/EPIC-Seq | EPIC-seq (short for Epigenetic Expression Inference by Cell-free DNA Sequencing) is a high-throughput method that specifically targets gene promoters using cell-free DNA (cfDNA) sequencing. By employing non-invasive techniques such as blood sampling, it infers the expression levels of targeted genes. It consists of both wet and dry lab stages.
EPIC-seq involves deep sequencing of transcription start sites (TSSs). It hypothesizes that deep sequencing of these TSSs, combined with the use of fragmentomic features (chromatin fragmentation patterns and properties), allows higher-resolution analyses than its alternatives.
The method has been shown effective for gene-level expression inference, molecular subtyping of diffuse large B cell lymphoma (DLBCL), histological classification of non-small-cell lung cancer (NSCLC), evaluation of the results of immunotherapy agents, and assessment of individual genes' prognostic importance. EPIC-seq uses machine learning to deduce the RNA expression of the genes and proposes two new metrics: promoter fragmentation entropy (PFE), an adjusted Shannon index of entropy, and the nucleosome-depleted region (NDR) score, the depth of sequencing in NDR regions. PFE showed superior performance compared to earlier metrics for fragmentomic features.
Additionally, EPIC-seq has been mentioned as a possible solution for detecting tissue damage and esophagus cancer using methylation profiles of cfDNAs, profiling of donor liver molecular networks, and inflammatory bowel disease (IBD) detection.
Background
Historical Usage of cfDNA and fragmentomic features
cfDNA, consisting of cell-death-related, chromatin-fragmented DNA molecules contained in blood plasma, has been used in previous research to detect transplant tissue rejection, for prenatal fetal aneuploidy testing, tumour profiling, and early cancer detection. Nevertheless, prevalent liquid biopsy methods for cfDNA profiling depend on detecting germline or somatic genetic variations, which may be absent even in patients bearing a high disease burden and in cancers with high tumour mutation rates.
Historically, the usage of fragmentomic features of cfDNA samples was shown to be another way to approach the problems mentioned. Fragmentomic features have demonstrated the capability to inform the tissue-of-origin classification of cfDNA molecules, which can help segregate tumour-related somatic mutations. However, current methods that use fragmentomic features, such as shallow whole genome sequencing (WGS) of cfDNA, do not fully cover all tissues' effects and provide too low a sequencing depth and breadth to infer low-level (for example, gene-level) properties. Hence, these methods require a high tumour burden from the patients.
Circulating Tumor DNA profiling
Circulating tumour DNA (ctDNA) molecules are tumour-derived cell-free DNA (cfDNA) circulating in the bloodstream and are not associated with cells. CtDNA primarily arises from chromatin fragmentation accompanying tumour cell death and can be extracted by liquid biopsy. CtDNA analysis has been implemented for noninvasive identification of tumour genetic characteristics and early recognition of various cancer forms. The majority of current ctDNA analysis depends on genetic differences in germline or somatic cells to diagnose diseases and detect tumour cells at an early stage. While looking at genetic variations of ctDNA can be beneficial, not all ctDNAs contain genetic mutations. EPIC-seq utilizes epigenetic features of ctDNA to infer the tissue of origin of these unmutated molecules, which is helpful for cancer classification.
Fragmentomic Features for Tissue-of-origin classification
The majority of circulating cfDNA molecules are fragments linked to nucleosomes, so they represent unique chromatin arrangements found in the nuclear genomes of the cells they originate from. In particular, open chromatin areas are more exposed to endonuclease cleavage, whereas genomic regions linked to nucleosomal complexes are often shielded from endonuclease activity. Several studies have identified specific chromatin fragmentomic characteristics that aid in informing tissue origins through cfDNA profiling. These features include:
Reduced sequencing coverage depth
Disruption of nucleosome positioning near transcription start sites (TSSs)
Length of cfDNA fragments
Principles of EPIC-seq
Currently, the majority of circulating tumour DNA (ctDNA) fragmentomic techniques lack the ability to achieve gene-level resolution and are effective mainly in inferring expression at elevated ctDNA levels. Consequently, they are primarily applicable to patients with notably advanced tumour burdens typically seen in late-stage cancer.
To address this limitation, EPIC-seq employs hybrid capture-based targeted deep sequencing of regions flanking transcription start sites (TSS) in cfDNA. This approach allows for the acquisition of ctDNA fragmentation features crucial for predicting gene expression, such as Promoter Fragmentation Entropy (PFE) and the Nucleosome-Depleted Region (NDR) score. These key fragmentomic features can capture gene-level associations with expression levels throughout the genome, enabling the construction of a predictive model for transcriptional output. This allows for high-resolution monitoring of cfDNA fragmentation and gene-level analysis.
Promoter Fragmentation entropy
EPIC-seq hypothesizes that cfDNA fragments originating from active promoters, which are less shielded by nucleosomes and thus more susceptible to endonuclease cleavage, will display more erratic cleavage patterns compared to fragments from inactive promoters, which are better protected by nucleosomes. PFE is a variation of the Shannon index, a quantitative measure for estimating diversity. In the context of EPIC-seq, PFE calculates the diversity of cfDNA fragment lengths where both ends of the fragment are situated within the 2 kb flanking region of each gene's TSS. The higher the PFE of a gene's TSS, the more likely the gene is highly expressed.
Nucleosome Depleted region
Actively expressed genes have open chromatin at their TSS regions, so they are less shielded by nucleosomes and therefore more susceptible to endonuclease cleavage. Consequently, the depth of cfDNA originating from the TSS of active genes tends to be shallower compared to that of inactive genes. NDR quantifies the normalized depth within each 2-kilobase window surrounding each TSS. The lower the NDR of a gene's TSS, the more likely the gene is highly expressed.
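As an illustrative sketch (not the published implementation; the coverage array and the normalization against the flanking region are made up for this example), the NDR idea of comparing depth at the TSS against its surroundings can be expressed as:

```python
import numpy as np

def ndr_score(depth, tss_index, window=2000, flank=5000):
    """Toy NDR-style score: mean coverage depth in the 2 kb window around a
    TSS, normalized by the mean depth of a wider surrounding region. Lower
    values suggest an open, nucleosome-depleted promoter of an active gene."""
    half = window // 2
    local = depth[tss_index - half : tss_index + half].mean()
    background = depth[max(0, tss_index - flank) : tss_index + flank].mean()
    return local / background

# Simulated coverage: uniform 100x depth with a dip at an active TSS.
depth = np.full(20000, 100.0)
depth[9000:11000] = 40.0          # nucleosome-depleted region around the TSS
print(round(ndr_score(depth, 10000), 3))  # 40/88 ≈ 0.455, i.e. depleted
```

A score near 1 indicates no depletion relative to the surroundings, so the gene is likely inactive under this toy model.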
Methods
Wet Lab workflow
1. Collection and Processing of plasma
Peripheral blood samples were obtained and processed to isolate plasma following standard protocols. Upon centrifugation, plasma specimens were preserved at −80 °C, awaiting the extraction of cfDNA. The extraction of cfDNA from plasma volumes ranging from 2 to 16 ml was carried out using established laboratory procedures. Following isolation, the concentration of cfDNA was determined using fluorescence-based quantification methods.
2. Sequencing Library preparation
A typical amount of 32 ng of cfDNA was utilized for library preparation. DNA input was adjusted to mitigate the effects of high molecular-weight DNA contamination. The library preparation process encompassed end repair, A-tailing, and adapter ligation, which also incorporated molecular barcodes into each read. These procedures were conducted according to standardized ligation-based library preparation protocols, with overnight ligation performed at 4 °C. Following this, shotgun cfDNA libraries underwent hybrid capture targeting specific genomic regions, as detailed below.
3. Custom Capture Panels sequencing
Custom capture panels tailored to specific cancer types or personalized selectors were utilized in EPIC-seq. The capture panels targeted transcription start site regions of genes of interest. Enrichment for EPIC-seq was performed following established laboratory protocols. Subsequently, hybridization captures were pooled, and the pooled samples underwent sequencing using short read sequencing.
Dry Lab workflow
Since EPIC-seq contains certain computational parts after the wet-lab portion for further processing, the following steps are summarized based on the developers' steps provided in the original paper.
4. Demultiplexing and Error correction
If multiplexed paired-end sequencing is used, then demultiplexing needs to be done to sort reads from different samples into different files. After demultiplexing, error correction and read pair elimination based on unique identifier and barcode matching of pairs can be done. The developers adapted the demultiplexing and error correction steps from the CAPP-seq demultiplexing pipeline.
5. Outer Sequence Removal and trimming
For the preservation of shorter fragment reads, barcode removal and adapter trimming need to be done. After read preprocessing, the reads should be aligned to the human reference genome. The original EPIC-seq used hg19, but an updated human reference genome can be used for better results. One should be careful with the aligner's options, since some aligners can interfere with the inclusion of shorter reads paired with longer ones. For deduplication, the attached customized molecular barcodes should be exploited. These barcodes include endogenous and exogenous unique molecular identifiers (UMIs) and are handy for distinguishing Polymerase Chain Reaction (PCR) duplicates from genuinely distinct molecules, and hence for PCR duplicate removal. This step is especially important for oncologic applications, since low-abundance mutations can be masked by PCR duplicates.
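A minimal sketch of UMI-based duplicate removal as described above (the read records and the key choice are illustrative; real pipelines also tolerate sequencing errors within the UMIs):

```python
def dedup_by_umi(reads):
    """Collapse PCR duplicates: reads sharing the same (UMI, start position)
    are assumed to come from one original molecule and are kept once.
    This is a toy sketch, not the CAPP-seq/EPIC-seq implementation."""
    seen = {}
    for read in reads:
        key = (read["umi"], read["start"])
        seen.setdefault(key, read)   # keep the first read per molecule
    return list(seen.values())

reads = [
    {"umi": "ACGT", "start": 1000, "seq": "..."},
    {"umi": "ACGT", "start": 1000, "seq": "..."},   # PCR duplicate
    {"umi": "TTAG", "start": 1000, "seq": "..."},   # same locus, distinct molecule
]
print(len(dedup_by_umi(reads)))  # 2
```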
6. Read Normalization and quality control
If the data for different samples are going to be contrasted with each other, one can perform downsampling on the reads to achieve comparability. The sequencing coverage depth reported as sufficient for reasonable analysis results is greater than 500-fold, so any sample whose mean sequencing depth does not exceed this number can be dropped for more accurate outcomes. Also, EPIC-seq uses an expected cfDNA fragment length range of 140–185 bp, based on chromatosomal length; samples with outlier fragment length distributions can be dropped for higher correlation results. As the last quality control step, mapping quality should be considered. A looser threshold can be imposed on EPIC-seq reads, compared to WGS, because the TSS selection criteria imposed during the design phase make the reads more unique to EPIC-seq.
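The depth and fragment-length filters above can be sketched as follows (the sample records are invented; only the >500-fold depth threshold and the 140–185 bp band come from the text):

```python
# Hypothetical per-sample QC summary: (sample_id, mean_depth, median_fragment_len).
samples = [
    ("S1", 812.0, 167),   # passes both checks
    ("S2", 310.0, 166),   # fails the >500-fold mean-depth threshold
    ("S3", 950.0, 210),   # fragment lengths outside the 140-185 bp band
]

MIN_DEPTH = 500.0          # minimum mean coverage for reliable analysis
FRAG_RANGE = (140, 185)    # expected chromatosomal cfDNA fragment-length band

def passes_qc(mean_depth, median_len):
    return mean_depth > MIN_DEPTH and FRAG_RANGE[0] <= median_len <= FRAG_RANGE[1]

kept = [sid for sid, d, l in samples if passes_qc(d, l)]
print(kept)  # ['S1']
```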
Fragmentomic Feature Analysis
7. Shannon's entropy
For the measurement of the diversity of fragmentomic features, the PFE metric, derived from Shannon's index of entropy, was developed. By default, 201 bins covering fragment lengths of 100 to 300 bp are used for density estimation by the maximum likelihood method. The probability p_i of observing a fragment of size i is computed by dividing the number of fragments of size i by the total number of fragments. Shannon's entropy is then calculated with the formula H = −Σ_i p_i log(p_i).
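A toy version of the fragment-length entropy computation, using maximum-likelihood bin probabilities as described above (the input lengths are invented):

```python
import math
from collections import Counter

def fragment_length_entropy(lengths, lo=100, hi=300):
    """Shannon entropy of a cfDNA fragment-length distribution over the
    default 100-300 bp range (201 one-bp bins), with maximum-likelihood
    probabilities p_i = count_i / total. Illustrative sketch only."""
    counts = Counter(l for l in lengths if lo <= l <= hi)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# A uniform mix of lengths is maximally diverse; a single length gives 0.
print(fragment_length_entropy([150, 160, 170, 180]))  # ln(4) ≈ 1.386
```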
8. Dirichlet-Multinomial model
Next, to guard against different sequencing depths across runs and other factors that can distort the fragment length distribution, Bayesian normalization via the Dirichlet-multinomial model should be done. For every sample, based on the fragment lengths observed in that sample, a fragment length distribution is generated by multinomial maximum likelihood estimation. Two intervals of 250 base pairs are used, located from −1,000 to −750 bp and from +750 to +1,000 bp relative to the centre of the TSS. These intervals are chosen relatively far from the TSS to prevent gene expression from influencing the generated distribution. Then, the fragment length densities from that distribution are sampled for each of the 201 fragment sizes and used as parameters for generating a Dirichlet distribution.
The initial parameter for the Dirichlet distribution is set to 20. From the obtained Dirichlet distribution, 2,000 fragments are sampled, and Shannon's entropy is calculated for them. The resulting entropies are subsequently compared with the Shannon entropy values of five randomly selected background gene sets.
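A rough sketch of the Dirichlet-multinomial smoothing step (the uniform background density and the random seed are illustrative; the real pipeline estimates the background density from the regions flanking the TSS):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical background fragment-length densities over the 201 bins
# (100-300 bp); a uniform density is used here purely for illustration.
background_probs = np.full(201, 1.0 / 201)

# Dirichlet concentration: the background densities scaled by the initial
# parameter of 20 described above.
alpha = 20.0 * background_probs

# Draw a smoothed length distribution, resample 2000 fragments from it,
# and compute the Shannon entropy of the resampled lengths.
probs = rng.dirichlet(alpha)
counts = rng.multinomial(2000, probs)
p = counts[counts > 0] / 2000.0
entropy = -(p * np.log(p)).sum()
print(round(entropy, 3))
```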
9. PFE calculation
PFE is calculated as the probability of the gene-specific entropy being higher than α times each background set entropy individually, where α is sampled from a Gamma distribution with shape 1 and rate 0.5. As the last step, the expected value, under the Dirichlet distribution generated in the previous step, of the sum of these gene-specific entropy probabilities across backgrounds is reported as the PFE.
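A Monte-Carlo sketch of the PFE idea, comparing gene entropies against Gamma-scaled background entropies (the function name, input entropy values, and sampling scheme are illustrative simplifications of the procedure described above):

```python
import numpy as np

rng = np.random.default_rng(1)

def pfe_sketch(gene_entropies, background_entropies, n_draws=10000):
    """Estimate P(gene entropy > alpha * background entropy) by sampling,
    with alpha ~ Gamma(shape=1, rate=0.5). Illustrative only; the published
    method takes an expectation under the Dirichlet model instead."""
    alphas = rng.gamma(shape=1.0, scale=2.0, size=n_draws)  # rate 0.5 -> scale 2
    gene = rng.choice(gene_entropies, size=n_draws)
    background = rng.choice(background_entropies, size=n_draws)
    return float(np.mean(gene > alphas * background))

# An active promoter (high, variable entropy) vs. quieter background regions.
score = pfe_sketch(gene_entropies=[4.8, 5.0, 5.1], background_entropies=[2.0, 2.2])
print(round(score, 2))
```

Higher scores indicate fragment-length diversity well above background, which under the EPIC-seq hypothesis points to an actively expressed gene.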
10. NDR calculation
NDR is the normalized measure of sequencing depth within the 2,000-base-pair windows around each TSS, computed on reads that were downsampled (to 2,000-fold by default) during the read preprocessing and quality control steps.
11. Machine Learning for Expression prediction
Using deep WGS data of cfDNA from a carcinoma-of-unknown-primary patient with a very low quantified ctDNA concentration, the developers trained a machine learning model using bootstrapping. The results of RNA sequencing on PBMC runs for 5 different individuals were recorded, and the average of 3 of these individuals' expression levels was used as a reference for gene expression. The genes were clustered into 10 clusters based on reference gene expression to increase the resolution at the core promoters. Then, genes used as background values for the PFE calculation were removed. Next, all the fragments in extended TSS regions (regions centred on the TSS and 2,000 base pairs long) were pooled, and the PFE and NDR scores were calculated for the pooled fragments. Further normalization of these scores was done based on their 95th percentile.
Using these two features, they bootstrapped 600 expression prediction models developed for WGS data and combined them in a weighted fashion. Among those models, there are 200 univariable standalone NDR models, 200 univariable standalone PFE models, and 200 integrated NDR-PFE models.
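A heavily simplified sketch of such a weighted ensemble (the random linear "models" stand in for the bootstrapped regressors; only the 200/200/200 structure and the two input features come from the text):

```python
import numpy as np

rng = np.random.default_rng(7)

def make_models(n, use_pfe, use_ndr):
    """Generate n toy linear scorers over the (PFE, inverted NDR) features;
    zeroing a column makes the model univariable."""
    w = rng.uniform(0.5, 1.5, size=(n, 2))
    w[:, 0] *= use_pfe
    w[:, 1] *= use_ndr
    return w

models = np.vstack([
    make_models(200, 1, 0),   # univariable standalone PFE models
    make_models(200, 0, 1),   # univariable standalone NDR models
    make_models(200, 1, 1),   # integrated PFE+NDR models
])
weights = np.full(len(models), 1.0 / len(models))  # uniform stand-in weights

def predict(pfe, ndr):
    # NDR is inverted so that lower depth (more open chromatin) scores higher.
    features = np.array([pfe, 1.0 - ndr])
    return float(weights @ (models @ features))

# High PFE / low NDR should score above low PFE / high NDR.
print(predict(pfe=0.9, ndr=0.2) > predict(pfe=0.2, ndr=0.8))  # True
```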
Advantages
High throughput
EPIC-seq inherits the advantages of high-throughput sequencing: fast sequencing times, high scalability, higher sequencing depths, lower costs, and low error rates. Another advantage of EPIC-seq is that it is non-invasive, which eliminates the risks of invasive methods performed on sensitive tissues and allows scientists to study tissues that would be too dangerous or difficult to biopsy.
Independence from High Tumour Burden requirement
As mentioned in the introduction, EPIC-seq avoids two major limitations of its predecessors: the dependency of common liquid biopsy methods on germline or somatic variants, which may be absent even in high-disease-burden patients, and the insufficient range of cfDNA tissue coverage, genomic breadth, and genomic depth of methods like shallow WGS, which leads to low-resolution gene expression inference and, again, requires a high tumour burden. EPIC-seq uses fragmentomic features instead of variant calling, so it is not bound by the existence of variants. Also, since it performs targeted sequencing instead of whole-genome sequencing, it allows scientists to increase the sequencing depth and hence obtain better resolution. Moreover, it provides more sensitive and comprehensive tissue-of-origin information.
Different Prediction sensitivities
Furthermore, the method showed consistent performance in cancer identification, classification, and treatment effect problems like NSCLC and DLBCL identification, histological classification of subtypes of NSCLC, molecular classification of subtypes of DLBCL, DLBCL COO detection, programmed death-ligand 1 immune-checkpoint inhibition response prediction against advanced NSCLC cases, and prognostic value detection of individual genes.
Generalizability
When whole-exome sequencing (WES) was done alongside EPIC-seq, it detected a correlation between the biological signal and active genes' exonic regions; this shows that EPIC-seq can be generalized to the expression of genes of interest rather than only cancer genes.
Robustness on cfDNA levels
In general, EPIC-seq analysis results showed a significant correlation between the inspected biological effect and the developed score. For classification tasks, Area Under the ROC (receiver operating characteristic) Curve (AUC) scores were over 90% with sufficient statistical significance. Also, for these tasks, cfDNA levels did not change the performance unfavourably even when the levels were below 1%, so the method shows good robustness against cfDNA levels as well. Finally, EPIC-seq did not show any significant changes under different pre-analytical factors, indicating that the method is robust under different circumstances that can be caused by the instruments and tools used before the analysis.
Limitations
While EPIC-seq offers significant potential in various biomedical applications, it also has limitations that warrant consideration in its implementation and interpretation.
Dependency on Known Cancer-Associated genes
One limitation of EPIC-seq is its reliance on prior knowledge of genes associated with specific cancers. The effectiveness of the EPIC-seq model hinges on the availability of comprehensive gene expression profiles for the targeted cancer types. This dependency may restrict its applicability to cancers with well-characterized gene expression patterns, limiting its utility in cancers with less understood molecular signatures.
Limited applicability to specific cancer types
EPIC-seq may be more effective in cancers with prominent genes or well-defined molecular subtypes. Consequently, its utility may be limited in cancers with less distinct genetic profiles or those characterized by significant interpatient variability. This restricts its generalizability across different cancer types and necessitates cautious interpretation of results in diverse oncological contexts.
Limited Performance in Early-stage cancer
EPIC-seq may exhibit enhanced performance in detecting late-stage cancer due to higher levels of ctDNA and more pronounced genetic alterations. For example, EPIC-seq's sensitivity for detecting NSCLC diminishes significantly in patients with low tumor-DNA burden (below 1%), with detection rates decreased by approximately 34%.
Applications
Noninvasive cancer detection
EPIC-seq has demonstrated remarkable potential in noninvasive cancer detection, notably in the diagnosis of lung cancer, the leading cause of cancer-related mortality. Using EPIC-seq, researchers have achieved high accuracy in distinguishing between NSCLC patients, DLBCL patients and healthy individuals.
Noninvasive Classification of Cancer subtypes
EPIC-seq enables the subclassification of NSCLC into histological subtypes such as lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC). EPIC-seq can also aid with the classification of cell-of-origin (COO) subtypes in DLBCL. By analyzing epigenetic and transcriptional signatures, EPIC-seq-derived classifiers provide valuable insights into tumor heterogeneity and molecular subtyping, providing valuable insights for tailored treatment strategies.
Therapeutic Response prediction
In addition to diagnosis and classification, EPIC-seq holds promise in predicting patient response to various cancer therapies, including immune-checkpoint inhibition (ICI). By analyzing changes in gene expression patterns captured through EPIC-seq, researchers can forecast patient response to PD-(L)1 blockade therapy, which can provide great help in personalized cancer treatment. EPIC-seq-derived indices have shown significant correlation with treatment response, offering potential prognostic markers for therapy outcome prediction.
Immunotranscriptomic profiling of Classical Hodgkin Lymphoma
EPIC-seq has been shown to be effective for inferring the epigenetic expression of classical Hodgkin lymphoma (cHL) subtypes. The expression of Hodgkin and Reed/Sternberg cells and their corresponding T cells was inferred with EPIC-seq. Bulk single-cell RNA sequencing results show significant correlation with EPIC-seq profiles of these cell types.
Possible use cases
Research in different areas mentions possible use cases of EPIC-seq. The Integrated Analysis toolkit for whole-genome-wide features of cfDNA (INAC) compiles different tools, including EPIC-seq's PFE and NDR scores, to provide comprehensive in silico analysis of cfDNA, exemplified by disease state and clinical outcome inference, transcriptome modeling, and copy number profiling. EPIC-seq is also mentioned as a potential application in clinical IBD cases. It can be used for surveillance of IBD in high-risk groups and of precancerous development caused by IBD. It is also named as a possibly superior method for clinical IBD gut damage detection, compared to current methods.
Alternatives
As EPIC-seq studies epigenetic markers to infer gene expression, one can study epigenetic sequencing methods like ChIP-seq, ATAC-seq, MeDIP-seq, and Bisulfite-Free DNA Methylation sequencing in combination with methods for profiling RNA expression such as RNA-seq and scRNA-seq.
Considering the method is mainly developed for early cancer detection or subgrouping, liquid biopsy methods, such as Twist cfDNA Pan-Cancer Reference Standard, can be used as an alternative. Different liquid biopsy methods focus on cell-free tumour markers, tumour methylation markers, exomes, proteins, lipids, carbohydrates, electrolytes, metabolites, RNA, extracellular vesicles, circulating tumour cells, and tumour-educated platelets for early identification of cancer non-invasively. Some of the proposed liquid biopsy methods provide a comprehensive detection of cancer types, such as ATR-FTIR spectroscopy and CancerSEEK, while others, like Dxcover and SelectMdx operate on more specific (even single) cancer targets.
EPIC-seq utilizes fragmentomic features to infer expression levels of genes. Several studies also employ fragmentomic features to infer cancer existence, infer cell death, and detect other clinical conditions such as transplant failure.
ctDNA by Fragment Size analysis
This method uses in vivo and in silico ctDNA fragment length selection to enrich the variant proportion in the plasma. The size selection criteria are based on the fragment length properties of blood ctDNA, so the method may not generalize well to other non-invasive sampling methods. Furthermore, it employs supervised machine learning methods like Random Forest and Logistic Regression on shallow WGS to classify cancer patients and healthy ones. The method can be used for different cancer types.
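The in silico part of the size selection can be sketched as a simple length filter (the 90–150 bp band and the fragment records are illustrative, not the published cutoffs):

```python
# ctDNA fragments tend to be shorter than background cfDNA, so restricting
# analysis to a short-fragment band enriches the tumour-derived signal.
def size_select(fragments, lo=90, hi=150):
    return [f for f in fragments if lo <= f["length"] <= hi]

fragments = [
    {"length": 120, "tumour": True},
    {"length": 166, "tumour": False},   # typical mononucleosomal cfDNA
    {"length": 140, "tumour": True},
    {"length": 180, "tumour": False},
]
selected = size_select(fragments)
tumour_fraction = sum(f["tumour"] for f in selected) / len(selected)
print(tumour_fraction)  # 1.0 in this toy example
```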
Plasma DNA End-Motif profiling
This method identifies 4-bp end motifs at each strand's 5' end in bisulfite sequencing reads of plasma cfDNA. Hierarchical clustering of the motifs is done to detect any under- or overrepresentation of these motifs due to the existence of cancer. The method incorporates Support Vector Machines and Logistic Regression to distinguish cancer patients from healthy ones. The method has also been applied to transplant patients, with clustering and multidimensional scaling (MDS) analysis showing its applicability. The same analyses also showed that this method applies to prenatal testing. The method is also informative about cell types of origin.
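Counting 4-bp 5' end motifs can be sketched as follows (the reads are invented; real pipelines operate on aligned bisulfite-converted reads):

```python
from collections import Counter

def end_motif_profile(reads):
    """Count the 4-bp motif at each read's 5' end (an illustrative sketch
    of end-motif profiling)."""
    return Counter(read[:4] for read in reads if len(read) >= 4)

reads = ["CCCATTGG", "CCCAGGTA", "TTAGCCGA", "CCCATCGA"]
profile = end_motif_profile(reads)
print(profile.most_common(1))  # [('CCCA', 3)]
```

Motif frequency vectors like this one are what the clustering and SVM/logistic-regression classifiers described above operate on.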
Orientation-aware Plasma cell-free DNA Fragmentation analysis
Using sequencing depth inconsistencies in open chromatin regions and signals derived from upstream/downstream orientation-sensitive sequencing read densities, this method infers the tissue of origin of cfDNA fragments obtained from bisulfite sequencing. The method uses a mathematical formulation to generate signals for orientation-aware cfDNA fragmentation based on the empirical peak periods and positions of the upstream/downstream ends of the reads. The method has been shown to be useful for inferring tissue of origin, pregnancy identification, cancer detection, and transplant monitoring. It also provides information on how much each tissue of origin contributes to the cfDNA reads.
DNA Evaluation of Fragments for early interception
The method analyzes shallow WGS reads in windows while considering cfDNA fragment length and coverage. The genome-wide pattern of cfDNA fragmentation features is then fed to a gradient tree-boosting machine learning model to predict cancer status. Machine learning classifiers are also used to predict the tissue of origin. Overall, the method can be used to identify whether a patient has cancer. Even though the method does not specifically classify cancer types during prediction, it is used for the detection of different cancers.
In vivo Nucleosome footprinting
The method produces genome-wide mappings of in vivo nucleosome occupancy to detect the tissue of origin of cfDNA molecules. It uses aligned read endpoint positions, which are expected to lie close to nucleosome core particle (NCP) sites. The Windowed Protection Score (WPS) is proposed to quantify cfDNA density close to NCPs: the number of cfDNA fragments completely spanning a 120-base-pair window centred at a given location, minus the number of fragments with an endpoint within that window. Then, peaks in the WPS are called heuristically to identify footprints. The cells contributing to cfDNA are then predicted from the footprints. These footprints can be used for identifying non-malignant epigenetic or genetic sites, like transcription factor binding sites, and for detecting malignancy-related biomarkers based on the extent of tissue damage and cell death.
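The WPS definition above can be sketched directly (fragment coordinates are invented; a real implementation slides this window along the genome):

```python
def wps(fragments, pos, window=120):
    """Windowed Protection Score sketch: fragments fully spanning the 120-bp
    window centred at `pos`, minus fragments with an endpoint inside it.
    Positive values suggest a protected (nucleosome-occupied) position."""
    lo, hi = pos - window // 2, pos + window // 2
    spanning = sum(1 for s, e in fragments if s <= lo and e >= hi)
    endpoint_in = sum(1 for s, e in fragments
                      if lo <= s <= hi or lo <= e <= hi)
    return spanning - endpoint_in

# Fragments as (start, end): two protect a nucleosome at 1000, one ends nearby.
fragments = [(900, 1080), (920, 1090), (980, 1040)]
print(wps(fragments, 1000))  # 2 spanning - 1 endpoint-in-window = 1
```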
ctDNA Nucleosome Pattern Employment for Transcriptional Regulation profiling
The method has mainly been developed for detecting the various phenotypes of metastatic castration-resistant prostate cancer. It requires the use of patient-derived xenografts for enrichment of ctDNA in blood for further analysis. After WGS, the method utilizes the tool Griffin for inspection of local promoter coverage, nucleosome positioning, fragment size analysis, and composite transcription factor binding sites plus open chromatin sites of ctDNA reads. It also checks histone modifications and applies dimensionality reduction to the found sites to identify putative promoter, enhancer, and gene-repressive heterochromatic marks. To interrogate chromatin phasing (the distance between open chromatin regions), the method uses TritonNP, newly developed software that uses Fourier transforms and band-pass filters. XGBoost is utilized for classification of cancer subtype, using the features detected in the previous steps.
cfDNA Methylation, Copy Number, and Fragmentation Analysis for early detection of multiple cancer types
The method is proposed as an assay that employs both cfDNA whole genome methylation sequencing and fragmentomic feature information for multicancer classification. Copy number ratios calculated for healthy and cancerous tissues are used as identifiers of cancer type and cancer existence. As in EPIC-seq, the method also utilizes fragment lengths: the ratio of short fragments over long fragments is used as an identifier score. Using the single-base or region-level methylation percentages of detected cancer methylation markers for each cancer type, copy number ratios, and short/long fragment ratios, the method employs a custom Support Vector Machines algorithm to classify the cancer type if one exists. This method reports the cancer detection and tissue of origin of 4 cancer types. However, it requires detection of specific methylation sites/regions of interest for each cancer type.
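The short/long fragment ratio feature can be sketched as follows (the 150 bp cutoff and the lengths are illustrative choices, not the assay's published bands):

```python
def short_long_ratio(lengths, cutoff=150):
    """Ratio of short over long cfDNA fragments at an illustrative cutoff;
    tumour-derived samples tend to shift this ratio upward."""
    short = sum(1 for l in lengths if l <= cutoff)
    long = sum(1 for l in lengths if l > cutoff)
    return short / long if long else float("inf")

print(short_long_ratio([120, 140, 150, 166, 170]))  # 3 short / 2 long = 1.5
```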
References
Biochemistry methods
Molecular biology | EPIC-Seq | [
"Chemistry",
"Biology"
] | 5,551 | [
"Biochemistry methods",
"Biochemistry",
"Molecular biology"
] |
73,201,639 | https://en.wikipedia.org/wiki/Plant%E2%80%93animal%20interaction | Plant-animal interactions are important pathways for the transfer of energy within ecosystems, where both advantageous and unfavorable interactions support ecosystem health. Plant-animal interactions can take on important ecological functions and manifest in a variety of combinations of favorable and unfavorable associations, for example predation, frugivory and herbivory, parasitism, and mutualism. Without mutualistic relationships, some plants may not be able to complete their life cycles, and the animals may starve due to resource deficiency.
Evolution
The earliest vascular plants formed about 425 million years ago, during the Silurian period of the Paleozoic era. Nearly every feeding method an animal might employ to consume plants had already been well developed by the time the first herbivorous insects started consuming ferns during the Carboniferous period. In the earliest known antagonistic relationships with plants, insects consumed plant pollen and spores. Insects have been known to consume nectar and pollinate flowers since 300 million years ago. In the Mesozoic, between 200 and 150 million years ago, insects' feeding patterns started to diversify. The evolution of plant defenses to reduce cost and increase resistance to herbivores is a crucial component of the Optimal Defense Hypothesis. To deal with the plants' adaptations, animals likewise evolved counter-adaptations. Over the history of their shared evolution, plants and animals have significantly diverged, in large part because of productive co-evolutionary processes that emerged from antagonistic interactions. Mutualistic interactions between plants and insects have developed and disintegrated over the course of the evolution of angiosperms.
Relationship
Defoliation or root removal caused by herbivory can control or reduce the overall phytomass, but it can also promote species diversity and have an impact on plant dispersion, which in turn controls ecological stability. In mutualistic relationships between pollinators and plants, the former receives food from the latter and in exchange acts as a plant propagation agent and a gene-transfer vector. The intricate web of species-specificity, habitat choice, and coevolution between plants and their pollinators has already been clarified by studies examining the feeding behaviors of pollinators and their interactive role in maintaining ecosystems. True mutualisms also promote development and provide pathogen protection. Plant growth and development are aided by mutualistic interactions between animals and plants, such as those between nematodes and insects.
Types
Predation
Predation is a biological interaction where one organism, the predator, kills and eats another organism, its prey. There are carnivorous plants as well as herbivores and carnivores that consume plants and animals, respectively. Because the soil in which carnivorous plants grow has extremely low nutritional content, these plants eat insects to obtain the extra nitrogen they need. Through photosynthesis, these plants continue to receive energy from the sun.
Parasitism
Parasitism is a close relationship between species, where one organism, the parasite, lives on or inside another organism, the host, causing it some harm, and is adapted structurally to this way of life. Plant parasites are a common term for sap-sucking insects like aphids.
Commensalism
Commensalism is the term used to describe a situation in which one organism gains and the other is neither harmed nor benefited. For instance, epiphytes on tree trunks in rain forests are aided by the trees because they provide a surface for their growth. Unless the epiphytes' weight becomes so great that the tree branches break, the epiphytes don't seem to have any effect on the trees.
Mutualism
When both species gain from their interaction, mutualism develops. The mutualistic link between pollinators and plants illustrates this very well. In this instance, the animal pollinator (bee, butterfly, beetle, hummingbird, etc.) receives nourishment (usually nectar or pollen) in exchange for carrying the plants' pollen from flower to flower. Another common form of mutualism is seed dispersal, an alliance between the plant and the animal that disperses its seeds. The tasty fruit that encases the seeds is consumed by numerous animals. The seeds are subsequently deposited in a new spot some distance from the parent plant, frequently with feces that also serve as a small amount of fertilizer.
In every ecosystem, there are interactions of this nature between species.
References
Ecology
Ethology
Plant reproduction | Plant–animal interaction | [
"Biology"
] | 909 | [
"Behavior",
"Plant reproduction",
"Plants",
"Reproduction",
"Ecology",
"Behavioural sciences",
"Ethology"
] |
73,201,844 | https://en.wikipedia.org/wiki/HD%2028843 | HD 28843, also known as HR 1441 and DZ Eridani, is a star about 550 light years from the Earth, in the constellation Eridanus. It is a 5th magnitude star, so it will be faintly visible to the naked eye of an observer far from city lights. It is a variable star, whose brightness varies slightly from 5.70 to 5.84 during its 1.374 day rotation period. It is a member of the μ Tauri Association, a group of young stars within the larger Cassiopeia-Taurus Structure.
In 1969 Mercedes Jaschek et al. determined that HD 28843 is a helium-weak star, based on its B-V color index being bluer (more negative) than would be expected for a star with its spectral type. In 1977, Robert Davis reported that the star has an overabundance of silicon. It is classified as a chemically peculiar star.
Henning Jorgensen et al. reported that HD 28843 was a "suspected variable star" in 1971. The variability of the star was firmly established in 1977 by Holger Pedersen and Bjarne Thomsen, during a spectroscopic and photometric study of helium weak and helium strong stars. They determined its period to be days. In 1978 the star was given the variable star designation DZ Eridani.
Ermanno Borra et al. reported in 1983 the detection of the magnetic field of HD 28843, and estimated its strength to be a few hundred gauss. Later data from the International Ultraviolet Explorer implied a field strength of 250 gauss.
M. Farthmann et al. reported in 1994 that high spectral resolution observations of the 4471Å spectral line of neutral helium can be explained if HD 28843 has two helium-enriched circular "caps" separated by a region with a dramatically lower helium abundance.
References
Eridanus (constellation)
021192
028843
Eridani, DZ
SX Arietis variables
B-type giants
Helium-weak stars | HD 28843 | [
"Astronomy"
] | 418 | [
"Eridanus (constellation)",
"Constellations"
] |
73,201,873 | https://en.wikipedia.org/wiki/PSR%20J1946%2B2052 | PSR J1946+2052 is a short-period binary pulsar system located away from Earth in the constellation Vulpecula. The system consists of a pulsar and a neutron star orbiting around their common center of mass every 1.88 hours, which is the shortest orbital period among all known double neutron star systems . The general theory of relativity predicts their orbits are gradually decaying due to emitting gravitational waves, which will eventually lead to a neutron star merger and a kilonova in 46 million years.
The PSR J1946+2052 system was discovered by radio astronomers on 19 July 2017, during a survey for pulsars with the Arecibo Observatory's radio telescope at Arecibo, Puerto Rico. The primary component of PSR J1946+2052 system, the pulsar, has a rotation period of 17 milliseconds and an estimated mass below 1.31 solar masses. The invisible neutron star companion likely has a lower mass of at least 1.18 solar masses, which amounts to a total system mass of approximately 2.50 solar masses, making PSR J1946+2052 potentially the lowest-mass double neutron star system known .
Discovery
The PSR J1946+2052 system was discovered by radio astronomers on 19 July 2017, during the PALFA Survey for pulsars in the Milky Way's galactic plane with the Arecibo Observatory's radio telescope at Arecibo, Puerto Rico. A search in archival imagery shows that PSR J1946+2052 was not detected in infrared to gamma-ray wavelengths.
Location and distance
PSR J1946+2052 is located in the northern celestial hemisphere in the constellation Vulpecula. Its equatorial coordinates based on the J2000 epoch are RA and Dec ; these are indicated in its pulsar identifier PSR J1946+2052. In galactic coordinates, it lies in the Milky Way's galactic plane with a galactic latitude 1.98° south and a galactic longitude 57.66° east from the Galactic Center. The time delay between different frequencies of PSR J1946+2052's radio pulses indicates a dispersion measure of , which suggests a distance between from Earth, depending on the electron number density in the interstellar medium between the pulsar system and Earth. It is unlikely in the near future that PSR J1946+2052's distance could be determined more precisely with direct methods such as very-long-baseline interferometry or hydrogen line absorption, as it is too faint and distant.
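The dispersion-measure distance estimate described above works roughly as follows; a minimal sketch in which the DM value is an assumption (the text's measured figure was elided) and the mean electron density is a commonly used round number, not a fitted model:

```python
def dm_distance_pc(dm_pc_cm3, n_e_cm3=0.03):
    """Crude distance from dispersion measure: DM ≈ n_e * d, so d ≈ DM / n_e."""
    return dm_pc_cm3 / n_e_cm3

# Purely illustrative: 100 pc cm^-3 is an assumed DM, not the pulsar's measured value
print(round(dm_distance_pc(100)))  # 3333 pc for a mean electron density of 0.03 cm^-3
```

In practice, astronomers replace the constant mean density with a full Galactic electron-density model, which is why the quoted distance carries a wide range.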
Origin
Double neutron star systems such as PSR J1946+2052 are thought to have formed from the asynchronous evolution of two high-mass stars in a wide binary system. The higher-mass star first evolves and explodes in a supernova, leaving a neutron star remnant in an eccentric mutual orbit with the surviving companion star. As the companion star evolves into a supergiant and expands beyond its Roche lobe, it begins transferring mass to the neutron star, which energetically accretes the material and spins up to a rotation period of a few milliseconds, becoming a recycled millisecond pulsar and an X-ray binary. The aging companion star eventually engulfs the pulsar in a gaseous common envelope and their mutual orbit begins to circularize and shrink due to drag forces within the envelope. The pulsar continues accreting and strips the companion star of its hydrogen envelope, turning it into a helium star. The helium star eventually explodes in an ultra-stripped supernova with minimal ejecta, resulting in a low momentum kick that leaves the resulting neutron star pair bound in a low-eccentricity orbit around each other. The second-born neutron star from this supernova is expected to pulsate for only a few million years before its rotation slows down sufficiently for its pulsation mechanism to turn off. On the other hand, the first-born pulsar is expected to continue pulsating for billions of years thanks to the high angular momentum it had acquired from accretion.
Physical characteristics
The total mass of the PSR J1946+2052 system is , which is determined from the components' mutual orbital period using Kepler's third law. This is potentially the lowest mass measured for a double neutron star system , though it could be tied with PSR J1411+2551 () within uncertainty bounds. Although the individual component masses have not been measured directly, the binary mass function constrains them to be and for the pulsar and companion, respectively. A more detailed analysis of the Einstein delay (gravitational time dilation and Doppler shift effects) in the pulsar's pulsation timing would enable a more precise measurement of both components' masses.
Pulsar
The pulsar is the only electromagnetically detectable component of the PSR J1946+2052 system. It pulsates in radio wavelengths 59 times per second, corresponding to a rotation period of 17 milliseconds. Due to the generation of electromagnetic radiation by its rotating magnetic field, the pulsar is gradually losing rotational kinetic energy at a spin-down luminosity of ( or ) and its rotation period is increasing at a rate of seconds per second. This is a relatively low spin-down rate for a neutron star, which suggests the pulsar must have a weakened surface magnetic field strength of . This weakened magnetic field is thought to be the result of the pulsar having accreted matter from a past companion star, which accumulated onto the pulsar's surface and buried its original surface magnetic field. This indicates that the pulsar is the first-born stellar remnant of the PSR J1946+2052 system. The pulsar is estimated to have a characteristic age of 290 million years, assuming it only experienced constant spin-down to its present rotation period. However, this is likely not the pulsar's true age because it underwent rotational spin-up through accretion in the past.
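The characteristic age quoted above follows from the standard spin-down relation τ = P / (2Ṗ); a sketch in which the period derivative is an assumed value (the text's figure was elided), chosen to be typical of a mildly recycled pulsar:

```python
SECONDS_PER_YEAR = 3.156e7

def characteristic_age_yr(period_s, period_derivative):
    """Spin-down (characteristic) age: tau = P / (2 * Pdot), converted to years."""
    return period_s / (2 * period_derivative) / SECONDS_PER_YEAR

# P = 17 ms is from the text; Pdot = 9.3e-19 s/s is an assumption for illustration
print(round(characteristic_age_yr(0.017, 9.3e-19) / 1e6))  # ≈ 290 Myr
```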
Companion
The companion was discovered from the periodic Doppler shifting in the pulsar's pulsation frequency, due to the orbital motion of the pulsar induced by the companion. It is inferred to be a neutron star, consistent with its high mass and expected evolutionary history. Having formed last in an ultra-stripped supernova of its progenitor star, the companion should be younger than the pulsar and likely did not undergo rotational spin-up through accretion. The companion does not exhibit radio pulsations, either because its electromagnetic beams do not point towards Earth or its pulsation mechanism has turned off. Since it could not be detected directly, very little is known about the companion's properties.
Orbit
The components orbit about their common center of mass, or barycenter, in a period of 1.88 hours. This is the shortest orbital period among all known double neutron star systems . In 2017, the system was losing () of energy to gravitational wave emissions and its orbital period was decreasing at an instantaneous rate of seconds per second. The orbital decay rate will progressively increase in magnitude as the components spiral closer to each other, and will lead to a neutron star merger in 46 million years. Integrating the components' orbital decay backwards in time shows that their mutual orbit had an eccentricity less than 0.14 and a period less than before the companion's progenitor star went supernova. The orbiting neutron stars of the PSR J1946+2052 system experience relativistic apsidal precession at a very high rate of degrees per year, making it also the fastest-precessing double neutron star system known.
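The quoted merger time can be sanity-checked with the Peters (1964) gravitational-wave inspiral formula for a circular orbit, combined with Kepler's third law; the component masses are assumed equal at half the ~2.50 solar-mass total, which is an assumption for illustration:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg
YEAR = 3.156e7     # s

def orbital_separation(m_total_kg, period_s):
    """Kepler's third law solved for the semi-major axis."""
    return (G * m_total_kg * period_s**2 / (4 * math.pi**2)) ** (1 / 3)

def merger_time_circular(m1, m2, a):
    """Peters (1964) inspiral time for a circular orbit: 5 c^5 a^4 / (256 G^3 m1 m2 M)."""
    return 5 * C**5 * a**4 / (256 * G**3 * m1 * m2 * (m1 + m2))

# Two assumed-equal ~1.25 M_sun neutron stars with the text's 1.88 h orbital period
m1 = m2 = 1.25 * M_SUN
a = orbital_separation(m1 + m2, 1.88 * 3600)
print(round(merger_time_circular(m1, m2, a) / YEAR / 1e6))  # ≈ 46 Myr
```

The circular-orbit approximation is reasonable here because the system's eccentricity is low; the small residual eccentricity shortens the true inspiral slightly.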
Merger
Within milliseconds after merging, atomic nuclei inside the neutron stars undergo rapid neutron capture, producing copious neutrinos and numerous elements heavier than iron in the process. For low-mass neutron star mergers like PSR J1946+2052, these heavy elements are predicted to predominantly consist of lighter nuclides with atomic mass numbers A < 130, with small amounts of lanthanides and heavier nuclides accounting for 0.2%–0.4% of the remaining nucleosynthesized mass. Up to of these heavy elements are predicted to be ejected outward at about 0.1 times the speed of light, due to the extreme angular momentum and heating involved in the merger. These heavy elements would then be intensely irradiated by the merger-produced neutrinos about 0.1 seconds later, becoming accelerated to speeds over 0.3 times the speed of light. Over time, these ejected heavy elements undergo radioactive decay and produce electromagnetic radiation from infrared to ultraviolet wavelengths, generating a kilonova.
The elemental abundances of most known stars do not match those predicted for low-mass neutron star mergers, suggesting that such mergers must be rare occurrences. The low total mass of the PSR J1946+2052 merger will likely form a strongly magnetized supramassive neutron star remnant that slightly exceeds the Tolman–Oppenheimer–Volkoff limit. This supramassive neutron star would be unstable as the centrifugal forces that prevent its gravitational collapse will diminish due to the loss of angular momentum to gravitational waves and other magnetohydrodynamic processes. The supramassive neutron star would survive for only a few seconds before collapsing into a black hole.
See also
GW170817, first neutron star merger discovered
Hulse–Taylor binary, first pulsar in a binary system discovered
PSR J0737−3039, first double pulsar binary system discovered
PSR J1930–1852, double neutron star system with the longest known orbital period
Notes
References
External links
Pulsars
Double neutron star systems
Vulpecula
20170719 | PSR J1946+2052 | [
"Astronomy"
] | 2,045 | [
"Vulpecula",
"Constellations"
] |
73,202,981 | https://en.wikipedia.org/wiki/Pleurotus%20novae-zelandiae | Pleurotus novae-zelandiae is a species of fungus in the genus Pleurotus first described by Miles Joseph Berkeley in 1855, endemic to New Zealand.
Description
General
The cap is hygrophanous, subgelatinous, white, fan-shaped, reniform, 6–8 cm. broad, 3–4 cm. long;
The stem is obsolete but the mushroom is attached by a narrowed base which forms a little round disc,
The gills are broad, distant, thin, interstices veiny.
Neither Greta Stevenson (1964) nor Egon Horak (1971) could trace material of P. novae-zelandiae, and Berkeley's 1855 description would indicate it to be a species of Marasmiellus or Resupinatus. Nevertheless, it remains an accepted species.
Distribution, habitat & ecology
This mushroom is saprobic on dead wood and is present on the North Island of New Zealand.
References
Pleurotaceae
Fungi of New Zealand
Fungi described in 1855
Fungus species | Pleurotus novae-zelandiae | [
"Biology"
] | 215 | [
"Fungi",
"Fungus species"
] |
73,203,014 | https://en.wikipedia.org/wiki/Combined%20cycle%20powered%20railway%20locomotive | A combined cycle powered locomotive is a patented idea to use two primary movers, a gas turbine with a steam turbine to gain the efficiency of a combined cycle power plant or a combined gas and steam engine. Steam locomotives were tested in the past but were not ideal for low speeds, and gas turbine locomotives (GTELs) were used by Union Pacific until the 1970s. Combined cycle power uses the heat from the gas turbine to make steam from the water to turn a steam turbine, instead of that heat getting exhausted out and wasted. Engine efficiency for combined cycle can achieve 60% compared to diesel motors' 45% efficiency.
The gas and steam turbines would each turn a separate generator, with a clutch between the steam turbine and its generator because steam power is not easily adjustable. Compressed hydrogen would be stored in one fuel tank and water for the steam in another, and the Rankine cycle could condense most of the steam back to water, returning it to the water tank to repeat the cycle. Current diesel-electric locomotives with a cab, such as the GE Evolution Series, could still serve as the lead cab, pusher, and distributed power, with the combined cycle powered locomotive as a slug.
See also
Hydrogen economy
Hydrogen fuel cell train
Turbine-electric powertrain
Schnabel car
References
External links
Researching an Air-Steam Combined-cycle Locomotive
Steam locomotives
Gas turbine locomotives
Steam vehicles
Freight transport
Locomotive engines | Combined cycle powered railway locomotive | [
"Technology"
] | 286 | [
"Locomotive engines",
"Engines"
] |
73,204,286 | https://en.wikipedia.org/wiki/Digital%20ecology | Digital ecology is a science about the interdependence of digital systems and the natural environment. This field of study looks at the methods in which digital technologies are changing the way how people interact with the environment, as well as how these technologies affects the environment itself. It is a branch of ecology that promotes green practices to fight digital pollution. Currently the total carbon footprint of the internet, our electronic devices, and supporting elements accounts for about 3.7% of global greenhouse gas emissions (including about 1.4 per cent of overall global carbon dioxide emissions).
Digital ecology can also denote the use of technology in the study of ecological systems and processes, examining how technological developments aid in the collection, analysis and management of ecological data. Important fields in this aspect of digital ecology include the development of drone technology for wildlife monitoring.
Digital ecology is a complex and multifaceted field that requires a holistic approach to understanding the relationship between digital technologies and the natural world. With the increasing reliance on digital technologies, it is important to consider the environmental consequences of these technologies and work towards more sustainable solutions.
Negative impact on the environment
One of the main areas of focus in digital ecology is the impact of electronic waste, or e-waste. As more and more devices become obsolete and are replaced with newer models, the amount of e-waste being produced is increasing at an alarming rate. This e-waste often ends up in landfills, where it can leach harmful chemicals into the soil and water supply.
Another aspect of digital ecology is the energy consumption of digital technologies and the digital pollution it causes. The production and use of digital devices require significant amounts of energy, and as the demand for these devices increases, so does the amount of energy required to meet it. The total carbon footprint of the internet, our electronic devices, and supporting elements adds up to about 3.7% of global greenhouse gas emissions, about as much as the airline industry, and the number keeps rising. This increase in energy consumption has a negative impact on the environment, as it contributes to climate change and air pollution. Research has shown that if the internet were a country, it would be the seventh-largest polluter in the world, by some estimates.
Digital pollution
Digital pollution is a crucial aspect of digital ecology; it is the main problem the field seeks to counter. Digital pollution refers to the negative impact of digital technology and electronic waste on the environment and human health. This can include emissions from electronic devices, toxic chemicals in electronic waste, and the proliferation of e-waste in landfills.
Technology users contribute to digital pollution on a daily basis, which include:
Data clouds and servers - storing large amounts of data on clouds and servers, and transferring that data, contributes to air pollution through the energy used to power and cool data centers and to send the data. The energy consumption of data centers results in emissions of greenhouse gases, such as harmful carbon dioxide. The energy is generated by power plants, which often burn fossil fuels such as coal, oil, and natural gas, releasing harmful pollutants into the air. In addition, the air conditioning systems used to cool the servers also consume a significant amount of energy and release heat into the environment. This is driven primarily by large databases, but people's day-to-day activities are also not insignificant. An example of this would be storing and accessing emails. An average year of emailing emits about 136 kilograms of CO2. One inbox consumes enough energy to run a hot shower for about 4 minutes, which is equal to illuminating 40 lightbulbs for an hour and to the pollution from driving a car 212 metres. Additionally, data centers consume large amounts of energy to power and cool the servers that store and process spam email data, resulting in emissions of greenhouse gases and other pollutants.
Production of electronic devices - has a detrimental effect on the environment for several reasons: it increases the demand for raw materials, rare earth minerals, energy, water, and other resources, and contributes to environmental degradation through the associated pollution and waste. Manufacturing is the most polluting phase, accounting for up to 80% of a device's carbon footprint over its lifetime. Buying new equipment every six years rather than every four reduces carbon dioxide production by 190 kg.
Energy-inefficient electronics - they consume more electricity, which in turn requires more energy to be generated, often from power plants that burn fossil fuels. Additionally, inefficient electronics also generate more heat, which requires more energy for cooling, further increasing the total energy consumption and associated emissions. When these electronic devices reach the end of their lifespan and are disposed of, they can also become electronic waste, which can release toxic chemicals into the environment.
Unnecessary charging of electronic devices - can contribute to air pollution through the increased energy consumption. Unnecessary charging of devices, such as smartphones or laptops, increases the overall energy demand and contributes to emissions from power plants. Charging smartphones generates more carbon dioxide than charging laptops.
Electronic waste - electronic devices eventually reach the end of their lifespan and become waste. The improper disposal of electronic waste release toxic chemicals into the environment, contaminating soil and water, and harming wildlife.
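The replacement-cycle arithmetic above can be sketched numerically; the 12-year horizon is an assumption, and the 190 kg figure is read here as a per-device manufacturing footprint purely for illustration:

```python
import math

MANUFACTURING_CO2_KG = 190  # assumed per-device footprint, for illustration

def devices_purchased(horizon_years, replacement_interval_years):
    """How many devices a user buys over a given horizon."""
    return math.ceil(horizon_years / replacement_interval_years)

# Over an assumed 12-year horizon: replacing every 4 years -> 3 devices;
# every 6 years -> 2 devices; the difference is one device's footprint.
saved_kg = (devices_purchased(12, 4) - devices_purchased(12, 6)) * MANUFACTURING_CO2_KG
print(saved_kg)  # 190
```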
Positive impact on the environment
Despite the environmental impact of electronic devices and data centers, digital technologies positively impact the environment in a variety of ways:
Energy efficiency: Digital technologies can help increase energy efficiency through the use of smart energy systems, such as smart grid systems and energy-efficient devices.
Reduced waste: The use of digital technologies can reduce waste by reducing the need for paper and other physical materials.
Enhanced education: Digital technologies can enhance education by providing access to information and resources, promoting sustainable practices and environmental awareness.
References
Bibliography
"Digital Ecology: The Complete Guide". June 9, 2022
"Digital ecology", Philonomist. November 10, 2021
"Sustainable Web Design", Tom Greenwood
Technology
Systems ecology
Environmental sociology | Digital ecology | [
"Environmental_science"
] | 1,202 | [
"Environmental sociology",
"Environmental social science",
"Systems ecology"
] |
73,205,092 | https://en.wikipedia.org/wiki/HD%20187420/187421 | HD 187420 (HR 7548; 71 G. Telescopii) and HD 187421 (HR 7549; 72 G. Telescopii), are the components of a binary star located in the southern constellation Telescopium. Gaia DR3 parallax measurements place the stars at a distance of 407 and 414 light years respectively. The two are separated by , and they are approaching the Solar System with heliocentric radial velocities of and −21.5 km/s respectively.
The system
HD 187420 is the primary of the system. It has an apparent magnitude of 5.71, making it faintly visible to the naked eye as a yellowish-orange-hued star. However, its brightness is diminished by 0.17 magnitudes due to interstellar dust. Meanwhile, the secondary HD 187421 has an apparent magnitude of 6.37, placing it near the limit for naked eye visibility. It too suffers from extinction, which makes it 0.25 magnitudes dimmer. The stars have absolute magnitudes of −0.33 and +2.69 respectively. HD 187421 is located 23.5" away from HD 187420 along a position angle of 148° as of 2016. They were first observed as a double star in 1826 by astronomer James Dunlop.
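Magnitudes of extinction translate into a fraction of light lost through the logarithmic magnitude scale; a short sketch using the 0.17 mag figure from the text:

```python
def flux_ratio(delta_mag):
    """Each magnitude of dimming is a flux factor of 100**(1/5) ≈ 2.512."""
    return 10 ** (-0.4 * delta_mag)

# 0.17 mag of extinction toward HD 187420 (per the text)
print(round(1 - flux_ratio(0.17), 3))  # 0.145 -> about 15% of the light is absorbed
```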
HD 187420
HD 187420 has a stellar classification of G8/K0 III, indicating that it is an evolved star with the characteristics of a G8 and K0 giant star. It has 3 times the mass of the Sun but at the age of 377 million years, it has expanded to 11.6 times the radius of the Sun. It radiates 88.3 times the luminosity of the Sun from its photosphere at an effective temperature of . HD 187420 is metal deficient at [Fe/H] = −0.17 and spins modestly with a projected rotational velocity of .
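The elided effective temperature is tied to the quoted luminosity and radius by the Stefan–Boltzmann law; a sketch in solar units, where the solar effective temperature is the IAU nominal value:

```python
T_SUN = 5772.0  # K (IAU nominal solar effective temperature)

def effective_temperature(lum_solar, radius_solar):
    """Stefan-Boltzmann in solar units: L/Lsun = (R/Rsun)**2 * (T/Tsun)**4."""
    return T_SUN * (lum_solar / radius_solar**2) ** 0.25

# HD 187420: L = 88.3 Lsun and R = 11.6 Rsun, per the text
print(round(effective_temperature(88.3, 11.6)))  # ≈ 5195 K, plausible for a G8/K0 giant
```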
HD 187421
HD 187421 is an A-type star with the characteristics of an A1 and A3 main sequence star, which corresponds to a classification of A1/3 V. It has 2.31 times the mass of the Sun and 2.72 times the Sun's radius. It radiates 37.1 times the luminosity of the Sun from its photosphere at an effective temperature of , giving it a white hue. HD 187421 is particularly metal enriched at [Fe/H] = +0.2 and is estimated to be 560 million years old. Like many hot stars it spins rapidly, having a projected rotational velocity of .
References
G-type giants
K-type giants
A-type main-sequence stars
Binary stars
Telescopium
Telescopii, 71 72
CD-55 08312 08313
187420 187421
097816 097819
7548 7549 | HD 187420/187421 | [
"Astronomy"
] | 588 | [
"Telescopium",
"Constellations"
] |
73,205,527 | https://en.wikipedia.org/wiki/Pleurotus%20velatus | Pleurotus velatus is a species of fungus in the genus Pleurotus first described in 1995, endemic to New Zealand.
Description
The cap is 30–10 x 20–80 mm and convex, with a dark grey surface. It is at first covered with fine grey, floccose squamules, and smoothens with age, drying orange-brown. The stem is cylindric, solid, strongly eccentric, up to 20 mm by 12 mm, pale brown, becoming more yellow towards the gills, and tomentose towards the base. The gills are decurrent and white, drying greyish orange. A partial veil is well developed, and does not leave a recognisable ring.
The spore print is unknown. Spores are 6.5–10 x 3.5–4.5 μm (mean, 8.75 x 3.75 μm) in extent, oblong to oblong-cylindric and smooth.
The species is similar to Pleurotus dryinus from the same subgenus (Lentodiopsis), but has smaller spores and a darker pileus.
Distribution and habitat
This mushroom is saprobic on dead wood in coastal forests of New Zealand.
References
Pleurotaceae
Fungi of New Zealand
Fungi described in 1995
Fungus species | Pleurotus velatus | [
"Biology"
] | 263 | [
"Fungi",
"Fungus species"
] |
73,205,573 | https://en.wikipedia.org/wiki/Safi%20Asfia | Safi Asfia (; 1916–2008) was an Iranian mining engineer, technocrat and politician who held several cabinet posts during the reign of Shah Mohammad Reza Pahlavi. He was arrested in 1979 when an Islamic revolution took place in Iran and was imprisoned for five years. Following his release Asfia did not leave Iran and was involved in computer programming.
Early life and education
Asfia was born in Tehran in 1916. Following his graduation from high school he was sent by the state to France for higher education in 1932. He attended Ecole Polytechnique in 1934 and Ecole des Mines de Paris in 1936.
Career
Following his graduation Asfia joined the University of Tehran, where he taught mathematics and geology. At age 23 he was promoted to a professorship. In 1955 he began to work as an adviser at the Planning Organization and became its director in 1961. While in office he dealt extensively with Iran's early nuclear program.
Asfia was appointed deputy prime minister in July 1962 to the first cabinet of Asadollah Alam. Asfia was appointed to the same post in 1966. He also served as the director of Planning Organization until 1969 when he was replaced by Mehdi Samii in the post.
Asfia was named as the minister of state for economic and development affairs to the cabinet formed by Jamshid Amouzegar on 7 August 1977. Asfia continued to serve in different cabinet posts until 1979 when the Shah was removed from power. He also served as a board member of the royal organization of social welfare headed by Ashraf Pahlavi.
Later years
Asfia was arrested during the regime change in 1979 and was imprisoned for five years. After his release he took up computer programming and joined the Zirakzadeh Science Foundation as one of its board members. The foundation was established by Ahmad Zirakzadeh, one of his polytechnic fellows, and dealt with the creation of science centers for children and youngsters. Asfia developed computer software for astronomers, used for calculations involving astronomical objects.
Personal life and death
Asfia was married and had three daughters. His wife died in October 2007. Their eldest daughter died in 2001. As of 2008 one of his daughters was living in Tehran and the other one in Paris. He died in Tehran on 11 April 2008.
Awards
Asfia was the recipient of two French awards: the Legion of Honour (rank of Commander) and the Ordre national du Mérite (Grand Cross).
References
20th-century Iranian engineers
1916 births
2008 deaths
Deputy prime ministers of Iran
Mining engineers
Politicians from Tehran
Academic staff of the University of Tehran
Commanders of the Legion of Honour
École Polytechnique alumni
Iranian computer programmers
People of Pahlavi Iran
Iranian prisoners and detainees | Safi Asfia | [
"Engineering"
] | 555 | [
"Mining engineering",
"Mining engineers"
] |
73,207,565 | https://en.wikipedia.org/wiki/Lulu%20Qian | Lulu Qian is a Chinese-American biochemist who is a professor at the California Institute of Technology. Her research uses DNA-like molecules to build artificial machines.
Early life and education
Qian is from China. She completed her bachelor's degree in biomedical engineering at Southeast University in Nanjing. Qian moved to Shanghai for her doctoral research, where she worked at Shanghai Jiao Tong University on biochemistry. She then moved to the California Institute of Technology as a postdoctoral fellow. At Caltech, she worked alongside Erik Winfree on biochemical circuits. She used a reversible strand displacement process to create a simple DNA-based building block for a biochemical logic circuit.
Research and career
Qian joined the faculty at Caltech in 2013. She was promoted to professor in 2019. Her research considers molecular robotics and the self-assembly of nanostructures from DNA. These molecular robots can explore biologically relevant surfaces at the nanoscale, picking up molecules and transporting them to specific locations. In 2011, she created the world's largest DNA circuit, which included over seventy DNA molecules.
Qian has also created complex DNA origami. She created two-dimensional images from DNA origami tiles. She used DNA to create an artificial neural network. The network consisted of a DNA gate architecture that can be scaled up into multi-layer circuits.
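The DNA-based neural networks mentioned above implement neuron-like computations in chemistry; a hedged sketch of the kind of unit involved, a linear-threshold gate, with the weights and threshold below purely illustrative assumptions rather than parameters from Qian's papers:

```python
def threshold_unit(inputs, weights, threshold):
    """Fires when the weighted sum of input concentrations crosses a threshold,
    as a thresholding species can enforce in a strand-displacement circuit."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Toy two-input AND-like gate (illustrative values only)
print(threshold_unit([1, 1], [0.5, 0.5], 0.8))  # 1
print(threshold_unit([1, 0], [0.5, 0.5], 0.8))  # 0
```

Stacking such units, with one gate's output strands serving as another's inputs, is the software analogue of the multi-layer circuits the DNA gate architecture supports.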
Awards and honors
2019 Foresight Institute Feynman Prize in nanotechnology
2023 Caltech Richard P. Feynman Prize for excellence in teaching
Selected publications
References
Year of birth missing (living people)
Living people
Chinese emigrants to the United States
Southeast University alumni
Shanghai Jiao Tong University alumni
California Institute of Technology faculty
Women biochemists
21st-century American chemists
21st-century American women scientists
American biochemists | Lulu Qian | [
"Chemistry"
] | 360 | [
"Biochemists",
"Women biochemists"
] |
63,087,259 | https://en.wikipedia.org/wiki/Northern%20European%20Enclosure%20Dam | The Northern European Enclosure Dam (NEED) is a proposed solution to the problem of rising ocean levels in Northern Europe. It would be a megaproject, involving the construction of two massive dams in the English Channel and the North Sea; the former between France and England, and the latter between Scotland and Norway. The concept was conceived by the oceanographers Sjoerd Groeskamp and Joakim Kjellsson.
, the scheme remains a thought experiment intended to portray engineered solutions to the effects of climate change as too "extreme" to be pursued. The scheme's authors describe it as "more of a warning than a solution".
Groeskamp estimates that the NEED will cost 250 to 500 billion euros and will take 50 to 100 years to complete. Groeskamp, an oceanographer, has not revealed how he determined the cost projection or construction timetable.
Channel Dam
The southern enclosure (NEED South) would be a single dam across the Channel between The Lizard, Cornwall, England in the north and Plouescat, Ploudalmézeau, Brittany, France in the south. The stipulated length is , with an average depth of about and a maximum depth of .
North Sea Dam
The northern enclosure (NEED North) would be a multiple section dam at the perimeter of the northern rim of the North Sea. The detailed engineering is not stated, although some form of continuous structure could provide for overland infrastructure—road and/or railway between Great Britain and Norway.
Scotland–Shetland
The western section of the North Sea Dam would island-hop from mainland Scotland in the southwest, through the Orkney Islands, to Shetland in the northeast, with a total length stipulated at 145 km.
The first stretch originates at Duncansby Head, Caithness, on mainland Scotland, and crosses the Pentland Firth to Brough Ness, the southern tip of South Ronaldsay in the Orkney Islands. Although the strait is only 10 km wide, the sea floor reaches depths of 100 m.
The stretch through southern Orkney continues to Burray, over the narrow Skerry Sound to Mainland and passing the 4 km wide and 30 m deep Shapinsay Sound to Shapinsay. From its northern tip Ness of Ork, the 6 km wide and 110 m deep Stronsay Firth is crossed to War Ness, the southern tip of Eday. The stretch through northern Orkney continues eastward over the 2 km wide and 22 m deep Eday Sound to Sanday. From its northern tip Tofts Ness, the 4 km wide and 20 m deep North Ronaldsay Firth is crossed to Strom Ness on North Ronaldsay, the northernmost of the Orkney Islands.
The crossing of the Fair Isle Channel, between Dennis Head, North Ronaldsay and Scat Ness, the southern tip of Mainland, Shetland, would be the first open-water section of the North Sea Dam, with a distance of some 80 km and water depths down to 110 m. A dam across the shortest distance between the two archipelagos would leave Fair Isle inside the enclosure.
Shetland–Western Norway
The eastern section of the North Sea Dam would offer the greatest engineering challenge of the whole NEED project, stipulated at a length of 331 km through open water, with sea floor depths exceeding 300 m in the Norwegian trench.
The dam would originate from the eastern shores of Mainland, Shetland, just north of Lerwick, heading east to Bressay and Isle of Noss to allow for the shortest ocean crossing towards Sotra island in Hordaland on the western coast of Norway.
The ocean floor east of Shetland drops to depths below 100 m just 1 km offshore, then continues fairly flat through Bressay Ground and the Viking Bank for some 210 km until the steep decline of the Norwegian trench. In this latter part of the crossing, the sea floor reaches some 321 m in depth within 5 km of the western Norwegian coast, where it rises steeply.
Norwegian internal waters
For the NEED to work, Norwegian internal waters have to be enclosed as well, as the Sotra terminus of the North Sea Dam is an island. With the shortest distance for the crossing from Shetland, the dam would reach the western shore of Sotra, just off Telavåg. The final part of the enclosure would therefore have to be the crossing of the narrow but deep sound separating Sotra from mainland Norway close to Bergen.
Several alternatives are viable, one being a route crossing the 1 km wide and some 140 m deep Lerøyosen sound from southern Sotra, over the islands of Lerøyna, Bjelkarøyna and Storakinna, to the apparent mainland south of Hjellestad, a total distance of 5 km. As the latter actually lies on an island, the very narrow Ådlandstraumen would also have to be closed in order to make a complete enclosure.
A more northerly route from Sotra would cross the 650 m wide and 90 m deep Kobbaleia sound, the islands of Tyssøyna and Alvøyna, and finally the 1 km wide and 100 m deep Raunefjorden sound to the mainland just off Flesland, Bergen International Airport.
There is also a possibility that Norway would include an enclosure of Bergen, which has faced many floods in recent years, and in that case the first part of the enclosure dam would be between Sotra and Askøy, and the second between Askøy and mainland Norway, crossing Byfjorden north of the city center of Bergen.
See also
Atlantropa
References
Megaprojects
Thought experiments
Macro-engineering
Climate change adaptation
Proposed dams
Sea level
Plotting algorithms for the Mandelbrot set
There are many programs and algorithms used to plot the Mandelbrot set and other fractals, some of which are described in fractal-generating software. These programs use a variety of algorithms to determine the color of individual pixels efficiently.
Escape time algorithm
The simplest algorithm for generating a representation of the Mandelbrot set is known as the "escape time" algorithm. A repeating calculation is performed for each x, y point in the plot area and based on the behavior of that calculation, a color is chosen for that pixel.
Unoptimized naïve escape time algorithm
In both the unoptimized and optimized escape time algorithms, the x and y locations of each point are used as starting values in a repeating, or iterating, calculation (described in detail below). The result of each iteration is used as the starting values for the next. The values are checked during each iteration to see whether they have reached a critical "escape" condition, or "bailout". If that condition is reached, the calculation is stopped, the pixel is drawn, and the next x, y point is examined. For some starting values, escape occurs quickly, after only a small number of iterations. For starting values very close to but not in the set, it may take hundreds or thousands of iterations to escape. For values within the Mandelbrot set, escape will never occur. The programmer or user must choose how many iterations–or how much "depth"–they wish to examine. The higher the maximal number of iterations, the more detail and subtlety emerge in the final image, but the longer it will take to calculate the fractal image.
Escape conditions can be simple or complex. Because no complex number with a real or imaginary part greater than 2 can be part of the set, a common bailout is to escape when either coefficient exceeds 2. A more computationally complex method that detects escapes sooner is to compute the distance from the origin using the Pythagorean theorem, i.e., to determine the absolute value, or modulus, of the complex number. If this value exceeds 2, or equivalently, when the sum of the squares of the real and imaginary parts exceeds 4, the point has reached escape. More computationally intensive rendering variations include the Buddhabrot method, which finds escaping points and plots their iterated coordinates.
The color of each point represents how quickly the values reached the escape point. Often black is used to show values that fail to escape before the iteration limit, and gradually brighter colors are used for points that escape. This gives a visual representation of how many cycles were required before reaching the escape condition.
To render such an image, the region of the complex plane we are considering is subdivided into a certain number of pixels. To color any such pixel, let be the midpoint of that pixel. We now iterate the critical point 0 under , checking at each step whether the orbit point has modulus larger than 2. When this is the case, we know that does not belong to the Mandelbrot set, and we color our pixel according to the number of iterations used to find out. Otherwise, we keep iterating up to a fixed number of steps, after which we decide that our parameter is "probably" in the Mandelbrot set, or at least very close to it, and color the pixel black.
In pseudocode, this algorithm would look as follows. The algorithm does not use complex numbers and manually simulates complex-number operations using two real numbers, for those who do not have a complex data type. The program may be simplified if the programming language includes complex-data-type operations.
for each pixel (Px, Py) on the screen do
    x0 := scaled x coordinate of pixel (scaled to lie in the Mandelbrot X scale (-2.00, 0.47))
    y0 := scaled y coordinate of pixel (scaled to lie in the Mandelbrot Y scale (-1.12, 1.12))
    x := 0.0
    y := 0.0
    iteration := 0
    max_iteration := 1000
    while (x*x + y*y ≤ 2*2 AND iteration < max_iteration) do
        xtemp := x*x - y*y + x0
        y := 2*x*y + y0
        x := xtemp
        iteration := iteration + 1
    color := palette[iteration]
    plot(Px, Py, color)
Here, relating the pseudocode to c = x0 + iy0 and z = x + iy:
z² = x² − y² + i·2xy
and so, as can be seen in the pseudocode in the computation of x and y:
x = Re(z² + c) = x² − y² + x0 and y = Im(z² + c) = 2xy + y0
To get colorful images of the set, the assignment of a color to each value of the number of executed iterations can be made using one of a variety of functions (linear, exponential, etc.). One practical way, without slowing down calculations, is to use the number of executed iterations as an entry to a palette initialized at startup. If the color table has, for instance, 500 entries, then the color selection is n mod 500, where n is the number of iterations.
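As a concrete illustration, here is a minimal, runnable Python version of the escape-time pseudocode above, together with the modulo palette lookup just described (the 500-entry palette size is the article's example; the function names are illustrative):

```python
def escape_time(x0, y0, max_iteration=1000):
    """Iterate z -> z^2 + c for c = x0 + i*y0 until |z|^2 > 4
    or the iteration cap is reached; return the iteration count."""
    x = y = 0.0
    iteration = 0
    while x * x + y * y <= 4.0 and iteration < max_iteration:
        x, y = x * x - y * y + x0, 2 * x * y + y0
        iteration += 1
    return iteration

def palette_index(n, palette_size=500):
    """Map an iteration count to a palette entry via n mod palette_size."""
    return n % palette_size
```

Points inside the set (e.g. c = 0 or c = −1) simply hit the iteration cap; escaping points return the iteration at which the bailout condition was met.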
Optimized escape time algorithms
The code in the previous section uses an unoptimized inner while loop for clarity. In the unoptimized version, one must perform five multiplications per iteration. To reduce the number of multiplications the following code for the inner while loop may be used instead:
x2 := 0
y2 := 0
w := 0

while (x2 + y2 ≤ 4 and iteration < max_iteration) do
    x := x2 - y2 + x0
    y := w - x2 - y2 + y0
    x2 := x * x
    y2 := y * y
    w := (x + y) * (x + y)
    iteration := iteration + 1
The above code works via some algebraic simplification of the complex multiplication:
(x + iy)² = x² − y² + i·2xy, where 2xy = (x + y)² − x² − y²
Using the above identity, the number of multiplications can be reduced to three instead of five.
The above inner while loop can be further optimized by expanding w to
w = (x + y)² = x² + 2xy + y²
Substituting w into y := w − x2 − y2 + y0 yields y := 2·x·y + y0
and hence calculating w is no longer needed.
The further optimized pseudocode for the above is:
x := 0
y := 0
x2 := 0
y2 := 0

while (x2 + y2 ≤ 4 and iteration < max_iteration) do
    y := 2 * x * y + y0
    x := x2 - y2 + x0
    x2 := x * x
    y2 := y * y
    iteration := iteration + 1
Note that in the above pseudocode, 2 * x * y seems to increase the number of multiplications by 1, but since 2 is the multiplier the code can be optimized via (x + x) * y.
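As a runnable check, the further-optimized loop can be written in Python as follows (the naive and optimized versions produce identical iteration counts; the function name is illustrative):

```python
def escape_time_fast(x0, y0, max_iteration=1000):
    """Escape-time loop with three multiplications per iteration
    (x*x, y*y, 2*x*y; the doubling can be done as x + x),
    matching the optimized pseudocode."""
    x = y = x2 = y2 = 0.0
    iteration = 0
    while x2 + y2 <= 4.0 and iteration < max_iteration:
        y = 2 * x * y + y0   # uses the old x, so y must be updated first
        x = x2 - y2 + x0
        x2 = x * x
        y2 = y * y
        iteration += 1
    return iteration
```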
Derivative Bailout or "derbail"
It is common to check the magnitude of z after every iteration, but there is another method that can converge faster and reveal structure within Julia sets.
Instead of checking if the magnitude of z after every iteration is larger than a given value, we can instead check if the sum of the derivatives of z up to the current iteration step is larger than a given bailout value:
|dz_1/dc + dz_2/dc + … + dz_n/dc|² ≥ dbail
The structures revealed by this method can show enhanced detail when very large dbail values are used.
It is possible to find derivatives automatically by leveraging Automatic differentiation and computing the iterations using Dual numbers.
Rendering fractals with the derbail technique can often require a large number of samples per pixel, as precision issues in areas of fine detail can result in noisy images even with hundreds or thousands of samples.
Python code:
def mand_der(c0: complex, limit: int = 1024):
    """Iterate z -> z^2 + c0, bailing out when the squared magnitude of
    the running sum of derivatives dz/dc exceeds the derbail threshold."""
    def abs_square(c: complex) -> float:
        return c.real ** 2 + c.imag ** 2

    dbail = 1e6
    z = c0
    dz = 1 + 0j           # dz/dc, with z_1 = c
    dz_sum = 0 + 0j
    for n in range(1, limit):
        dz = 2 * z * dz + 1   # update the derivative using the previous z
        z = z * z + c0
        dz_sum += dz
        if abs_square(dz_sum) >= dbail:
            return n
    return 0              # did not bail out (treated as inside)
Coloring algorithms
In addition to plotting the set, a variety of algorithms have been developed to
efficiently color the set in an aesthetically pleasing way
show structures of the data (scientific visualisation)
Histogram coloring
A more complex coloring method involves using a histogram which pairs each pixel with said pixel's maximum iteration count before escape/bailout. This method will equally distribute colors to the same overall area, and, importantly, is independent of the maximum number of iterations chosen.
This algorithm has four passes. The first pass involves calculating the iteration counts associated with each pixel (but without any pixels being plotted). These are stored in an array: IterationCounts[x][y], where x and y are the x and y coordinates of said pixel on the screen respectively.
The first step of the second pass is to create an array of size n, which is the maximum iteration count: NumIterationsPerPixel. Next, one must iterate over the array of pixel-iteration count pairs, IterationCounts[][], and retrieve each pixel's saved iteration count, i, via e.g. i = IterationCounts[x][y]. After each pixel's iteration count i is retrieved, it is necessary to index the NumIterationsPerPixel by i and increment the indexed value (which is initially zero) -- e.g. NumIterationsPerPixel[i] = NumIterationsPerPixel[i] + 1 .
for (x = 0; x < width; x++) do
    for (y = 0; y < height; y++) do
        i := IterationCounts[x][y]
        NumIterationsPerPixel[i]++
The third pass iterates through the NumIterationsPerPixel array and adds up all the stored values, saving them in total. The value at each array index represents the number of pixels that reached that iteration count before bailout.
total := 0
for (i = 0; i < max_iterations; i++) do
    total += NumIterationsPerPixel[i]
After this, the fourth pass begins: for each pixel, its iteration count i is retrieved from the IterationCounts array, and the counts from 0 to i in the NumIterationsPerPixel array are added to a running sum for that pixel. This value is then normalized by dividing the sum by the total value computed earlier.
hue[][] := 0.0
for (x = 0; x < width; x++) do
    for (y = 0; y < height; y++) do
        iteration := IterationCounts[x][y]
        for (i = 0; i <= iteration; i++) do
            hue[x][y] += NumIterationsPerPixel[i] / total /* Must be floating-point division. */

...

color = palette[hue[m, n]]

...
Finally, the computed value is used, e.g. as an index to a color palette.
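The four passes can be condensed into a short Python sketch; the function and variable names mirror the pseudocode above, and the per-pixel iteration counts are assumed to have been computed already:

```python
def histogram_hues(iteration_counts, max_iterations):
    """Given a 2-D list of per-pixel iteration counts, return per-pixel
    hues in [0, 1] distributed according to the iteration histogram."""
    # Pass 2: histogram of iteration counts.
    num_iterations_per_pixel = [0] * (max_iterations + 1)
    for row in iteration_counts:
        for i in row:
            num_iterations_per_pixel[i] += 1
    # Pass 3: total number of counted pixels.
    total = sum(num_iterations_per_pixel)
    # Pass 4: cumulative count up to each pixel's iteration, normalized.
    hue = [[0.0] * len(iteration_counts[0]) for _ in iteration_counts]
    for x, row in enumerate(iteration_counts):
        for y, it in enumerate(row):
            hue[x][y] = sum(num_iterations_per_pixel[:it + 1]) / total
    return hue
```

Because each hue is a cumulative fraction of all pixels, the resulting coloring is independent of the chosen iteration cap, as described above.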
This method may be combined with the smooth coloring method below for more aesthetically pleasing images.
Continuous (smooth) coloring
The escape time algorithm is popular for its simplicity. However, it creates bands of color, which, as a type of aliasing, can detract from an image's aesthetic value. This can be improved using an algorithm known as "normalized iteration count", which provides a smooth transition of colors between iterations. The algorithm associates a real number ν with each value of z by using the connection of the iteration number with the potential function. This function is given by
φ(z) = lim_{n→∞} (log|z_n|) / P^n
where z_n is the value after n iterations and P is the power to which z is raised in the Mandelbrot set equation (z_{n+1} = z_n^P + c, P is generally 2).
If we choose a large bailout radius N (e.g., 10^100), we have that
(log|z_n|) / P^n = (log N) / P^ν
for some real number ν, and this is
ν = n − log_P((log|z_n|) / (log N)),
and as n is the first iteration number such that |z_n| > N, the number we subtract from n is in the interval [0, 1).
For the coloring we must have a cyclic scale of colors (constructed mathematically, for instance) and containing H colors numbered from 0 to H − 1 (H = 500, for instance). We multiply the real number by a fixed real number determining the density of the colors in the picture, take the integral part of this number modulo H, and use it to look up the corresponding color in the color table.
For example, modifying the above pseudocode and also using the concept of linear interpolation would yield
for each pixel (Px, Py) on the screen do
    x0 := scaled x coordinate of pixel (scaled to lie in the Mandelbrot X scale (-2.5, 1))
    y0 := scaled y coordinate of pixel (scaled to lie in the Mandelbrot Y scale (-1, 1))
    x := 0.0
    y := 0.0
    iteration := 0
    max_iteration := 1000
    // Here N = 2^8 is chosen as a reasonable bailout radius.

    while x*x + y*y ≤ (1 << 16) and iteration < max_iteration do
        xtemp := x*x - y*y + x0
        y := 2*x*y + y0
        x := xtemp
        iteration := iteration + 1

    // Used to avoid floating point issues with points inside the set.
    if iteration < max_iteration then
        // sqrt of inner term removed using log simplification rules.
        log_zn := log(x*x + y*y) / 2
        nu := log(log_zn / log(2)) / log(2)
        // Rearranging the potential function.
        // Dividing log_zn by log(2) instead of log(N = 1<<8)
        // because we want the entire palette to range from the
        // center to radius 2, NOT our bailout radius.
        iteration := iteration + 1 - nu

    color1 := palette[floor(iteration)]
    color2 := palette[floor(iteration) + 1]
    // iteration % 1 = fractional part of iteration.
    color := linear_interpolate(color1, color2, iteration % 1)
    plot(Px, Py, color)
Exponentially mapped and cyclic iterations
Typically when we render a fractal, the range of where colors from a given palette appear along the fractal is static. If we desire to offset the location from the border of the fractal, or adjust their palette to cycle in a specific way, there are a few simple changes we can make when taking the final iteration count before passing it along to choose an item from our palette.
When we have obtained the iteration count, we can make the range of colors non-linear. Raising a value normalized to the range [0,1] to a power S maps a linear range to an exponential range, which in our case can nudge the appearance of colors along the outside of the fractal, and allow us to bring out other colors, or push in the entire palette closer to the border.
v = (i / max_i)^S · N
where i is our iteration count after bailout, max_i is our iteration limit, S is the exponent we are raising iters to, and N is the number of items in our palette. This scales the iter count non-linearly and scales the palette to cycle approximately proportionally to the zoom.
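A minimal Python sketch of this mapping, assuming the formula reads as described (normalize, raise to the power S, scale to the palette size N, wrap cyclically); the defaults for S and N are illustrative:

```python
def exp_cyclic_index(i, max_i, S=2.0, N=500):
    """Map an iteration count i to a palette index non-linearly:
    normalize to [0, 1], raise to the power S, scale to the palette
    size N, and wrap cyclically."""
    v = (i / max_i) ** S * N
    return int(v) % N
```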
We can then plug v into whatever algorithm we desire for generating a color.
Passing iterations into a color directly
One thing we may want to consider is avoiding having to deal with a palette or color blending at all. There are actually a handful of methods we can leverage to generate smooth, consistent coloring by constructing the color on the spot.
v refers to a normalized exponentially mapped cyclic iter count
f(v) refers to the sRGB transfer function
A naive method for generating a color in this way is by directly scaling v to 255 and passing it into RGB as such
rgb = [v * 255, v * 255, v * 255]
One flaw with this is that RGB is non-linear due to gamma; consider linear sRGB instead. Converting between linear values and sRGB applies a companding (transfer) function to each channel, which accounts for gamma and allows colors to be summed and sampled correctly:
srgb = [f(v) * 255, f(v) * 255, f(v) * 255]
HSV coloring
HSV Coloring can be accomplished by mapping iter count from [0,max_iter) to [0,360), taking it to the power of 1.5, and then modulo 360.
We can then simply pass the exponentially mapped iteration count into the value channel and return
hsv = [powf((i / max) * 360, 1.5) % 360, 100, (i / max) * 100]
This method applies to HSL as well, except we pass a saturation of 50% instead.
hsl = [powf((i / max) * 360, 1.5) % 360, 50, (i / max) * 100]
LCH coloring
One of the most perceptually uniform coloring methods involves passing in the processed iter count into LCH. If we utilize the exponentially mapped and cyclic method above, we can take the result of that into the Luma and Chroma channels. We can also exponentially map the iter count and scale it to 360, and pass this modulo 360 into the hue.
One issue we wish to avoid here is out-of-gamut colors. This can be achieved with a little trick based on the change in in-gamut colors relative to luma and chroma. As we increase luma, we need to decrease chroma to stay within gamut.
s = iters/max_i;
v = 1.0 - powf(cos(pi * s), 2.0);
LCH = [75 - (75 * v), 28 + (75 - (75 * v)), powf(360 * s, 1.5) % 360];
Advanced plotting algorithms
In addition to the simple and slow escape time algorithms already discussed, there are many other more advanced algorithms that can be used to speed up the plotting process.
Distance estimates
One can compute the distance from point c (in exterior or interior) to the nearest point on the boundary of the Mandelbrot set.
Exterior distance estimation
The proof of the connectedness of the Mandelbrot set in fact gives a formula for the uniformizing map of the complement of the Mandelbrot set (and the derivative of this map). By the Koebe quarter theorem, one can then estimate the distance between the midpoint of our pixel and the Mandelbrot set up to a factor of 4.
In other words, provided that the maximal number of iterations is sufficiently high, one obtains a picture of the Mandelbrot set with the following properties:
Every pixel that contains a point of the Mandelbrot set is colored black.
Every pixel that is colored black is close to the Mandelbrot set.
The upper bound b for the distance estimate of a pixel c (a complex number) from the Mandelbrot set is given by
b = lim_{n→∞} 2 · |P_c^n(c)| · ln|P_c^n(c)| / |P_c^n′(c)|
where
P_c(z) = z² + c stands for the complex quadratic polynomial
P_c^n(c) stands for n iterations of P_c(z), starting with z = c: P_c^0(c) = c, P_c^{n+1}(c) = P_c^n(c)² + c;
P_c^n′(c) is the derivative of P_c^n(c) with respect to c. This derivative can be found by starting with P_c^0′(c) = 1 and then P_c^{n+1}′(c) = 2·P_c^n(c)·P_c^n′(c) + 1. This can easily be verified by using the chain rule for the derivative.
The idea behind this formula is simple: When the equipotential lines for the potential function lie close, the number is large, and conversely, therefore the equipotential lines for the function should lie approximately regularly.
From a mathematician's point of view, this formula only works in the limit where n goes to infinity, but very reasonable estimates can be found with just a few additional iterations after the main loop exits.
Once b is found, by the Koebe 1/4-theorem, we know that there is no point of the Mandelbrot set with distance from c smaller than b/4.
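A sketch of the exterior distance estimate in Python, assuming the common limit form b = 2·|z_n|·ln|z_n| / |dz_n/dc| and a large escape radius; the function name and thresholds are illustrative:

```python
import math

def exterior_distance(cx, cy, max_iteration=1000, bailout=1e10):
    """Estimate the distance from c = cx + i*cy to the Mandelbrot set.
    Returns None for points that never escape (likely inside the set)."""
    c = complex(cx, cy)
    z = 0j
    dz = 0j            # d(z_n)/dc, with z_0 = 0
    for _ in range(max_iteration):
        dz = 2 * z * dz + 1   # derivative recurrence, using the previous z
        z = z * z + c
        if abs(z) > bailout:  # a large bailout radius improves the estimate
            return 2 * abs(z) * math.log(abs(z)) / abs(dz)
    return None
```

By the Koebe quarter theorem the true distance is at least a quarter of the returned bound, so the estimate is useful even though it only converges in the limit.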
The distance estimation can be used for drawing of the boundary of the Mandelbrot set, see the article Julia set. In this approach, pixels that are sufficiently close to M are drawn using a different color. This creates drawings where the thin "filaments" of the Mandelbrot set can be easily seen. This technique is used to good effect in the B&W images of Mandelbrot sets in the books "The Beauty of Fractals" and "The Science of Fractal Images".
Distance estimation can also be used to render 3D images of Mandelbrot and Julia sets.
Interior distance estimation
It is also possible to estimate the distance of a limitly periodic (i.e., hyperbolic) point to the boundary of the Mandelbrot set. The upper bound b for the distance estimate is given by
where
is the period,
is the point to be estimated,
is the complex quadratic polynomial
is the -fold iteration of , starting with
is any of the points that make the attractor of the iterations of starting with ; satisfies ,
, , and are various derivatives of , evaluated at .
Analogous to the exterior case, once b is found, we know that all points within the distance of b/4 from c are inside the Mandelbrot set.
There are two practical problems with the interior distance estimate: first, we need to find precisely, and second, we need to find precisely.
The problem with is that the convergence to by iterating requires, theoretically, an infinite number of operations.
The problem with any given is that, sometimes, due to rounding errors, a period is falsely identified to be an integer multiple of the real period (e.g., a period of 86 is detected, while the real period is only 43=86/2). In such case, the distance is overestimated, i.e., the reported radius could contain points outside the Mandelbrot set.
Cardioid / bulb checking
One way to improve calculations is to find out beforehand whether the given point lies within the cardioid or in the period-2 bulb. Before passing the complex value through the escape time algorithm, first check that:
p = sqrt((x − 1/4)² + y²),
x ≤ p − 2p² + 1/4,
(x + 1)² + y² ≤ 1/16,
where x represents the real value of the point and y the imaginary value. The first two equations determine that the point is within the cardioid, the last the period-2 bulb.
The cardioid test can equivalently be performed without the square root:
q = (x − 1/4)² + y²,
q·(q + (x − 1/4)) ≤ y²/4.
3rd- and higher-order buds do not have equivalent tests, because they are not perfectly circular. However, it is possible to find whether the points are within circles inscribed within these higher-order bulbs, preventing many, though not all, of the points in the bulb from being iterated.
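A square-root-free Python version of these two tests, using the standard conditions q·(q + x − 1/4) ≤ y²/4 with q = (x − 1/4)² + y² for the cardioid, and (x + 1)² + y² ≤ 1/16 for the period-2 bulb:

```python
def in_cardioid_or_bulb(x, y):
    """True if the point x + iy lies in the main cardioid or the
    period-2 bulb, so the escape-time loop can be skipped entirely."""
    q = (x - 0.25) ** 2 + y * y
    if q * (q + (x - 0.25)) <= 0.25 * y * y:
        return True                            # main cardioid
    return (x + 1.0) ** 2 + y * y <= 0.0625    # period-2 bulb
```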
Periodicity checking
To prevent having to do huge numbers of iterations for points inside the set, one can perform periodicity checking, which checks whether a point reached in iterating a pixel has been reached before. If so, the pixel cannot diverge and must be in the set. Periodicity checking is a trade-off, as the need to remember points costs data management instructions and memory, but saves computational instructions. However, checking against only one previous iteration can detect many periods with little performance overhead. For example, within the while loop of the pseudocode above, make the following modifications:
xold := 0
yold := 0
period := 0
while (x*x + y*y ≤ 2*2 and iteration < max_iteration) do
    xtemp := x*x - y*y + x0
    y := 2*x*y + y0
    x := xtemp
    iteration := iteration + 1

    if x ≈ xold and y ≈ yold then
        iteration := max_iteration /* Set to max for the color plotting */
        break /* We are inside the Mandelbrot set, leave the while loop */

    period := period + 1
    if period > 20 then
        period := 0
        xold := x
        yold := y
The above code stores away a new x and y value on every 20th iteration, thus it can detect periods that are up to 20 points long.
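A runnable Python version of the loop with this single-point periodicity check (an exact floating-point revisit is used for the comparison here; in practice an approximate comparison with a small tolerance, as the ≈ in the pseudocode suggests, is also common):

```python
def escape_time_periodic(x0, y0, max_iteration=1000):
    """Escape-time loop that refreshes a reference point every 20
    iterations; revisiting it proves the orbit is periodic (in the set)."""
    x = y = xold = yold = 0.0
    period = 0
    iteration = 0
    while x * x + y * y <= 4.0 and iteration < max_iteration:
        x, y = x * x - y * y + x0, 2 * x * y + y0
        iteration += 1
        if x == xold and y == yold:
            return max_iteration   # periodic orbit: inside the set
        period += 1
        if period > 20:
            period = 0
            xold, yold = x, y      # store a new reference point
    return iteration
```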
Border tracing / edge checking
Because the Mandelbrot set is full (it contains no holes), any point enclosed by a closed shape whose borders lie entirely within the Mandelbrot set must itself be in the Mandelbrot set. Border tracing works by following the lemniscates of the various iteration levels (colored bands) all around the set, and then filling the entire band at once. This also provides a speed increase because large numbers of points can now be skipped.
In the animation shown, points outside the set are colored with a 1000-iteration escape time algorithm. Tracing the set border and filling it, rather than iterating the interior points, reduces the total number of iterations by 93.16%. With a higher iteration limit the benefit would be even greater.
Rectangle checking
Rectangle checking is an older and simpler method for plotting the Mandelbrot set. The basic idea of rectangle checking is that if every pixel on a rectangle's border shares the same iteration count, then the rectangle can be safely filled using that count. There are several variations of the rectangle checking method, but all of them are slower than border tracing because they end up calculating more pixels. One variant calculates only the corner pixels of each rectangle; however, this damages pictures more often than calculating the entire border, so it only works reasonably well if small boxes of around 6x6 pixels are used, with no recursing in from bigger boxes. (Fractint method.)

The simplest rectangle checking method checks the borders of equally sized rectangles, resembling a grid pattern. (Mariani's algorithm.)

A faster and slightly more advanced variant first calculates a bigger box, say 25x25 pixels. If the entire box border has the same color, the box is simply filled with that color. If not, the box is split into four boxes of 13x13 pixels, reusing the already calculated pixels as the outer border and sharing the inner "cross" pixels between the inner boxes. Again, boxes with only one border color are filled in, those that are not are split into four 7x7-pixel boxes, and then those that "fail" into 4x4 boxes. (Mariani-Silver algorithm.)
Even faster is to split the boxes in half instead of into four boxes. Then it might be optimal to use boxes with a 1.4:1 aspect ratio, so they can be split like how A3 papers are folded into A4 and A5 papers. (The DIN approach.)
As with border tracing, rectangle checking only works on areas with one discrete color. But even if the outer area uses smooth/continuous coloring, rectangle checking will still speed up the costly inner area of the Mandelbrot set, unless the inner area also uses some smooth coloring method, for instance interior distance estimation.
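A compact sketch of the recursive Mariani-Silver idea in Python; `count` is any per-pixel iteration-count function, and the box-size cutoff is an illustrative choice:

```python
def mariani_silver(count, x0, y0, x1, y1, out):
    """Recursively fill the pixel rectangle [x0,x1) x [y0,y1): if every
    border pixel has the same iteration count, fill the interior with it;
    otherwise split into quadrants and recurse.  Results go into `out`,
    a dict mapping (x, y) -> iteration count."""
    # Compute the border pixels only.
    border = [(x, y) for x in range(x0, x1) for y in (y0, y1 - 1)] + \
             [(x, y) for x in (x0, x1 - 1) for y in range(y0 + 1, y1 - 1)]
    values = {p: count(*p) for p in border}
    out.update(values)
    first = next(iter(values.values()))
    if all(v == first for v in values.values()):
        for x in range(x0, x1):          # uniform border: fill the box
            for y in range(y0, y1):
                out[(x, y)] = first
    elif x1 - x0 > 2 and y1 - y0 > 2:    # mixed border: subdivide
        xm, ym = (x0 + x1) // 2, (y0 + y1) // 2
        for ax, ay, bx, by in ((x0, y0, xm, ym), (xm, y0, x1, ym),
                               (x0, ym, xm, y1), (xm, ym, x1, y1)):
            mariani_silver(count, ax, ay, bx, by, out)
    else:
        for x in range(x0, x1):          # too small: compute directly
            for y in range(y0, y1):
                out[(x, y)] = count(x, y)
```

In a real renderer `count` would be memoized so shared borders are not recomputed, and the fill step would assume the border lies on a single lemniscate, as described above.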
Symmetry utilization
The horizontal symmetry of the Mandelbrot set allows for portions of the rendering process to be skipped upon the presence of the real axis in the final image. However, regardless of the portion that gets mirrored, the same number of points will be rendered.
Julia sets have symmetry around the origin: quadrants 1 and 3 are symmetric with each other, as are quadrants 2 and 4. Supporting symmetry for both Mandelbrot and Julia sets requires handling symmetry differently for the two different types of graphs.
Multithreading
Escape-time rendering of Mandelbrot and Julia sets lends itself extremely well to parallel processing. On multi-core machines the area to be plotted can be divided into a series of rectangular areas which can then be provided as a set of tasks to be rendered by a pool of rendering threads. This is an embarrassingly parallel computing problem. (Note that one gets the best speed-up by first excluding symmetric areas of the plot, and then dividing the remaining unique regions into rectangular areas.)
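A minimal sketch of strip-based parallel rendering using Python's standard thread pool (for CPU-bound pure-Python loops a process pool would parallelize better; threads are used here only to keep the sketch self-contained):

```python
from concurrent.futures import ThreadPoolExecutor

def escape_time(x0, y0, max_iteration=256):
    """Plain escape-time count for one point."""
    x = y = 0.0
    n = 0
    while x * x + y * y <= 4.0 and n < max_iteration:
        x, y = x * x - y * y + x0, 2 * x * y + y0
        n += 1
    return n

def render_rows(y_range, width=64, height=64):
    """Render a strip of rows; each strip is an independent task."""
    return [[escape_time(-2.0 + 2.5 * px / width, -1.25 + 2.5 * py / height)
             for px in range(width)] for py in y_range]

def render_parallel(width=64, height=64, workers=4):
    """Split the image into contiguous row strips, render them in a
    pool of workers, and reassemble the strips in order."""
    bounds = [range(k * height // workers, (k + 1) * height // workers)
              for k in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        strips = list(pool.map(lambda r: render_rows(r, width, height),
                               bounds))
    return [row for strip in strips for row in strip]
```

Because each strip is computed independently and `map` preserves order, the parallel result is identical to a serial render of the same rows.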
Perturbation theory and series approximation
Very highly magnified images require more than the standard 64–128 or so bits of precision that most hardware floating-point units provide, requiring renderers to use slow "BigNum" or "arbitrary-precision" math libraries to calculate. However, this can be sped up by the exploitation of perturbation theory. Given
z_{n+1} = z_n² + c
as the iteration, and a small epsilon and delta, it is the case that
(z_n + ε_n)² + (c + δ) = z_n² + 2·z_n·ε_n + ε_n² + c + δ
or
z_{n+1} + 2·z_n·ε_n + ε_n² + δ,
so if one defines
ε_{n+1} = 2·z_n·ε_n + ε_n² + δ,
one can calculate a single point (e.g. the center of an image) using high-precision arithmetic (z), giving a reference orbit, and then compute many points around it in terms of various initial offsets delta plus the above iteration for epsilon, where epsilon-zero is set to 0. For most iterations, epsilon does not need more than 16 significant figures, and consequently hardware floating-point may be used to get a mostly accurate image. There will often be some areas where the orbits of points diverge enough from the reference orbit that extra precision is needed on those points, or else additional local high-precision-calculated reference orbits are needed. By measuring the orbit distance between the reference point and the point calculated with low precision, it can be detected that it is not possible to calculate the point correctly, and the calculation can be stopped. These incorrect points can later be re-calculated e.g. from another closer reference point.
Further, it is possible to approximate the starting values for the low-precision points with a truncated Taylor series, which often enables a significant amount of iterations to be skipped.
Renderers implementing these techniques are publicly available and offer speedups for highly magnified images by around two orders of magnitude.
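The scheme can be sketched in Python using the standard-library Decimal type as a stand-in for an arbitrary-precision library: the reference orbit is computed once at high precision, then each nearby point is iterated in hardware floats via ε_{n+1} = 2·z_n·ε_n + ε_n² + δ. Names and the precision are illustrative:

```python
from decimal import Decimal, getcontext

def reference_orbit(cx, cy, n, digits=50):
    """Iterate the reference point at high precision; return the orbit
    rounded down to hardware-float complex numbers."""
    getcontext().prec = digits
    zx, zy = Decimal(0), Decimal(0)
    cx, cy = Decimal(cx), Decimal(cy)
    orbit = []
    for _ in range(n):
        orbit.append(complex(float(zx), float(zy)))
        zx, zy = zx * zx - zy * zy + cx, 2 * zx * zy + cy
    return orbit

def perturbed_escape(orbit, delta, max_iteration=None):
    """Iterate eps_{n+1} = 2*z_n*eps_n + eps_n**2 + delta in hardware
    floats against the precomputed reference orbit z_n; return the
    escape iteration of the perturbed point z_n + eps_n."""
    max_iteration = max_iteration or len(orbit)
    eps = 0j
    for n in range(max_iteration):
        z = orbit[n] + eps
        if abs(z) > 2.0:
            return n
        eps = 2 * orbit[n] * eps + eps * eps + delta
    return max_iteration
```

A production renderer would also detect glitched points whose perturbed orbit drifts too far from the reference, as discussed above, and rebase them onto closer reference orbits.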
An alternate explanation of the above:
For the central point X in the disc and its iterations X_n, and an arbitrary point x in the disc and its iterations x_n, it is possible to define the following iterative relationship:
x′_{n+1} = 2·X_n·x′_n + x′_n² + δ
with x′_n = x_n − X_n and δ = x − X. Successive iterations of x′_n can be found using the following:
x′_1 = δ
x′_{n+1} = 2·X_n·x′_n + x′_n² + δ
Now from the original definition:
x_{n+1} = x_n² + x, X_{n+1} = X_n² + X,
It follows that:
x_n = X_n + x′_n
As the iterative relationship relates an arbitrary point to the central point by a very small change δ, most of the iterations of x′_n are also small and can be calculated using floating point hardware.
However, for every arbitrary point in the disc it is possible to calculate a value for a given x′_n without having to iterate through the sequence from x′_1, by expressing x′_n as a power series of δ:
x′_n = A_n·δ + B_n·δ² + C_n·δ³ + …
with A_1 = 1, B_1 = 0, C_1 = 0.
Now given the iteration equation x′_{n+1} = 2·X_n·x′_n + x′_n² + δ, it is possible to calculate the coefficients of the power series for each n:
A_{n+1} = 2·X_n·A_n + 1
B_{n+1} = 2·X_n·B_n + A_n²
C_{n+1} = 2·X_n·C_n + 2·A_n·B_n
Therefore, it follows that:
x′_n = A_n·δ + B_n·δ² + C_n·δ³ + …
The coefficients in the power series can be calculated as iterative series using only values from the central point's iterations X_n, and do not change for any arbitrary point in the disc. If δ is very small, x′_n should be calculable to sufficient accuracy using only a few terms of the power series. As the Mandelbrot escape contours are 'continuous' over the complex plane, if a point's escape time has been calculated, then the escape time of that point's neighbours should be similar. Interpolation of the neighbouring points should provide a good estimation of where to start in the series.
Further, separate interpolation of both real axis points and imaginary axis points should provide both an upper and lower bound for the point being calculated. If both results are the same (i.e. both escape or do not escape) then the difference can be used to recurse until both an upper and lower bound can be established. If floating point hardware can be used to iterate the series, then there exists a relation between how many iterations can be achieved in the time it takes to use BigNum software to compute a given x′_n. If the difference between the bounds is greater than the number of iterations, it is possible to perform a binary search using BigNum software, successively halving the gap until it becomes more time efficient to find the escape value using floating point hardware.
References
Fractals
Articles with example pseudocode
Complex dynamics
Graphics software
Computer graphics
Algorithms
Articles containing video clips | Plotting algorithms for the Mandelbrot set | [
"Mathematics"
] | 6,845 | [
"Functions and mappings",
"Complex dynamics",
"Mathematical analysis",
"Applied mathematics",
"Algorithms",
"Mathematical logic",
"Mathematical objects",
"Fractals",
"Mathematical relations",
"Dynamical systems"
] |
63,087,503 | https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Z%20Flip | The Samsung Galaxy Z Flip (sold as Samsung Galaxy Flip in certain territories) is an Android-based foldable smartphone developed by Samsung Electronics as part of the Samsung Galaxy Z series. Its existence was first revealed in an advertisement during the 2020 Academy Awards. Unveiled alongside the Galaxy S20 on February 11, 2020, it was released on February 14, 2020. Unlike the Galaxy Z Fold, the device folds horizontally and uses a hybrid glass coating branded as "Infinity Flex Display". It is available in three colors for the LTE version (Mirror Purple, Mirror Black, and Mirror Gold) and two colors for the 5G version (Mystic Bronze and Mystic Gray). The 5G version was also made available in a limited-edition "Mystic White" color.
Specifications
Design
The Galaxy Z Flip is constructed with an aluminum frame, and "ultra-thin glass" with a plastic layer similar to the Galaxy Fold, manufactured by Samsung with materials from Schott AG, which is "produced using an intensifying process to enhance its flexibility and durability", and injected with a "special material up to an undisclosed depth to achieve a consistent hardness"; conventional Gorilla Glass is used for the back panels. The Z Flip is the first foldable smartphone to use a glass display, while previous foldable phones such as the Motorola Razr and the Galaxy Fold have used plastic displays. Using a glass display results in a more durable screen, and reduces the screen crease in the folding point. The hinge mechanism is strengthened with nylon fibers designed to keep dust out; Samsung rated the fold mechanism as supporting up to 200,000 uses. The device comes in 3 colors for the LTE version which are Mirror Purple, Mirror Black and Mirror Gold. It also comes in 2 colors for the 5G version which are Mystic Bronze and Mystic Gray. However, the color availability may vary depending on country or carrier. The Z Flip is also available in a Limited Edition Thom Browne model, featuring a red, white, and blue stripe on a gray base.
Hardware
The device uses a clamshell design to conceal a 6.7" 21:9 Dynamic AMOLED display which supports HDR10+. The screen has a circular cutout at the top of the display for the front-facing camera. The exterior features a small 1.1" external display adjacent to the camera module, which can display the time, date and battery status, interact with notifications, answer phone calls and act as a viewfinder. The Qualcomm Snapdragon 855+ SoC and Adreno 640 GPU are utilized, with 8 GB of LPDDR4X RAM and 256 GB of non-expandable UFS 3.0 storage. It uses two batteries which have a total capacity of 3300 mAh, and can be recharged over USB-C at up to 15W wired or wirelessly via Qi. The power button is embedded in the frame and doubles as the fingerprint sensor, with the volume rocker located above. A dual camera setup on the rear has a 12 MP primary sensor and a 12 MP ultrawide sensor. The front-facing camera has a 10 MP sensor.
Software
The Z Flip is pre-installed with Android 10 and Samsung's One UI 2 skin. Split-screen functionality, called "Flex mode" is supported with certain apps like YouTube and Google Duo.
Reception
The Z Flip was met with mixed to positive reviews at launch. It was praised for its flagship hardware, form factor, software/UI, display, and camera, but criticized for the price, size of the cover display, and perceived overall fragility. Sascha Segan of PC Magazine gave the Z Flip a 3/5, stating that "the Samsung Galaxy Z Flip is the first folding phone to really work, but it's still a costly and potentially fragile fashion object rather than a mainstream hit".
Jessica Dolcourt of CNET gave the Z Flip a 7.9/10, calling it "a cohesive device that's easy to pick up and use right away". Dolcourt called Flex Mode "the most unique, interesting and effective feature by far", while noting that battery life was just average and most multimedia was incompatible with the device's aspect ratio, resulting in pillarboxing. Chris Velazco of Engadget gave it a 78, praising the form factor, performance and cameras while criticizing the cover display and overall fragility.
Dieter Bohn of The Verge gave the Z Flip a 6/10, concluding that "as with previous folding phones it is more of an expensive experiment than a real product anybody should buy". Bohn praised the performance and hinge design, but was critical of the price and cameras, noting that the screen’s plastic covering was still susceptible to scratches. Samuel Gibbs of The Guardian praised the phone's durability, reporting that "the screen looks and works just as great today as it did fresh out of the box" despite being unfolded several dozen times each day for four months.
iFixit gave the device a repairability score of 2/10.
Gallery
See also
Huawei Mate X
Xiaomi Mi MIX Alpha
Motorola Razr (2020)
References
External links
– Official Samsung Galaxy Z Flip5 User Manual
Mobile phones introduced in 2020
Android (operating system) devices
Mobile phones with multiple rear cameras
Foldable smartphones
Mobile phones with 4K video recording
Flip phones
Discontinued flagship smartphones
Discontinued Samsung Galaxy smartphones
Samsung smartphones | Samsung Galaxy Z Flip | [
"Technology"
] | 1,118 | [
"Crossover devices",
"Foldable smartphones",
"Discontinued flagship smartphones",
"Flagship smartphones"
] |
63,087,895 | https://en.wikipedia.org/wiki/Energy-based%20model | An energy-based model (EBM) (also called Canonical Ensemble Learning or Learning via Canonical Ensemble – CEL and LCE, respectively) is an application of canonical ensemble formulation from statistical physics for learning from data. The approach prominently appears in generative artificial intelligence.
EBMs provide a unified framework for many probabilistic and non-probabilistic approaches to such learning, particularly for training graphical and other structured models.
An EBM learns the characteristics of a target dataset and generates a similar but larger dataset. EBMs detect the latent variables of a dataset and generate new datasets with a similar distribution.
Energy-based generative neural networks are a class of generative models that aim to learn explicit probability distributions of data in the form of energy-based models, whose energy functions are parameterized by modern deep neural networks.
Boltzmann machines are a special form of energy-based models with a specific parametrization of the energy.
Description
For a given input $x$, the model describes an energy $E_\theta(x)$ such that the Boltzmann distribution
$P_\theta(x) = \frac{\exp(-\beta E_\theta(x))}{Z(\theta)}$
is a probability (density), and typically $\beta = 1$.
Since the normalization constant
$Z(\theta) = \int_x \exp(-\beta E_\theta(x)) \, dx$
(also known as the partition function) depends on all the Boltzmann factors of all possible inputs $x$, it cannot be easily computed or reliably estimated during training simply using standard maximum likelihood estimation.
However, for maximizing the likelihood during training, the gradient of the log-likelihood of a single training example $x$ is given by using the chain rule:
$\partial_\theta \log P_\theta(x) = \mathbb{E}_{x' \sim P_\theta}\left[\partial_\theta E_\theta(x')\right] - \partial_\theta E_\theta(x)$
The expectation in the above formula for the gradient can be approximately estimated by drawing samples from the distribution using Markov chain Monte Carlo (MCMC).
Early energy-based models, such as the 2003 Boltzmann machine by Hinton, estimated this expectation via blocked Gibbs sampling. Newer approaches make use of more efficient Stochastic Gradient Langevin Dynamics (LD), drawing samples using:
$x_{k+1} = x_k - \frac{\alpha}{2} \frac{\partial E_\theta(x_k)}{\partial x_k} + \sqrt{\alpha}\, \omega_k$,
where $\omega_k \sim \mathcal{N}(0, I)$. A replay buffer of past values $x_k$ is used with LD to initialize the optimization module.
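As a concrete toy illustration of the Langevin update, the sketch below samples from a hand-written 1-D energy $E(x) = x^2/2$, whose Boltzmann distribution is the standard Gaussian. In an actual EBM the gradient would come from backpropagating a neural network's energy with respect to its input; all names here are hypothetical.

```python
import math
import random

# Toy Langevin-dynamics sampler for E(x) = x^2/2 (standard Gaussian target).
# In an EBM, grad_E(x) would be dE_theta/dx obtained from the network.

def grad_E(x):
    return x  # derivative of x^2/2

def langevin_sample(x0, steps, alpha=0.1):
    """x_{k+1} = x_k - (alpha/2)*dE/dx + sqrt(alpha)*w_k,  w_k ~ N(0, 1)."""
    x = x0
    for _ in range(steps):
        x = x - 0.5 * alpha * grad_E(x) + math.sqrt(alpha) * random.gauss(0, 1)
    return x

random.seed(0)
samples = [langevin_sample(0.0, steps=200) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# mean is near 0 and var near 1, up to bias from the finite step size
```

A practical implementation would draw starting points from a replay buffer of earlier samples instead of always starting from scratch, as described above.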
The parameters of the neural network are therefore trained in a generative manner via MCMC-based maximum likelihood estimation:
the learning process follows an "analysis by synthesis" scheme, where within each learning iteration, the algorithm samples the synthesized examples from the current model by a gradient-based MCMC method (e.g., Langevin dynamics or Hybrid Monte Carlo), and then updates the parameters based on the difference between the training examples and the synthesized ones (see the log-likelihood gradient above). This process can be interpreted as an alternating mode seeking and mode shifting process, and also has an adversarial interpretation.
Essentially, the model learns a function that associates low energies to correct values, and higher energies to incorrect values.
After training, given a converged energy model $E_\theta$, the Metropolis–Hastings algorithm can be used to draw new samples. The acceptance probability is given by:
$P_{acc}(x_i \to x^*) = \min\left(1, \frac{P_\theta(x^*)\, q(x_i \mid x^*)}{P_\theta(x_i)\, q(x^* \mid x_i)}\right)$
where $q$ is the proposal distribution.
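The sketch below draws Metropolis–Hastings samples from the same kind of toy energy ($E(x) = x^2/2$, hypothetical names). With a symmetric random-walk proposal the proposal terms cancel, so only energy differences are needed and the intractable partition function never has to be computed.

```python
import math
import random

# Toy Metropolis-Hastings sampler for p(x) proportional to exp(-E(x)).
# Only energy differences appear, so the partition function Z cancels.

def E(x):
    return 0.5 * x * x  # stand-in for a trained model's energy

def mh_chain(x0, steps, step_size=1.0):
    x, out = x0, []
    for _ in range(steps):
        x_star = x + random.gauss(0, step_size)        # symmetric proposal
        accept = min(1.0, math.exp(E(x) - E(x_star)))  # q-ratio cancels
        if random.random() < accept:
            x = x_star
        out.append(x)
    return out

random.seed(1)
chain = mh_chain(0.0, 50000)
burned = chain[5000:]                                  # discard burn-in
mean = sum(burned) / len(burned)
var = sum((s - mean) ** 2 for s in burned) / len(burned)
```

For this energy the stationary distribution is the standard Gaussian, so the chain's long-run mean and variance approach 0 and 1.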
History
The term "energy-based models" was first coined in a 2003 JMLR paper where the authors defined a generalisation of independent components analysis to the overcomplete setting using EBMs.
Other early work on EBMs proposed models that represented energy as a composition of latent and observable variables.
Characteristics
EBMs demonstrate useful properties:
Simplicity and stability–The EBM is the only object that needs to be designed and trained. Separate networks need not be trained to ensure balance.
Adaptive computation time–An EBM can generate sharp, diverse samples or (more quickly) coarse, less diverse samples. Given infinite time, this procedure produces true samples.
Flexibility–In Variational Autoencoders (VAE) and flow-based models, the generator learns a map from a continuous space to a (possibly) discontinuous space containing different data modes. EBMs can learn to assign low energies to disjoint regions (multiple modes).
Adaptive generation–EBM generators are implicitly defined by the probability distribution, and automatically adapt as the distribution changes (without training), allowing EBMs to address domains where generator training is impractical, as well as minimizing mode collapse and avoiding spurious modes from out-of-distribution samples.
Compositionality–Individual models are unnormalized probability distributions, allowing models to be combined through product of experts or other hierarchical techniques.
Experimental results
On image datasets such as CIFAR-10 and ImageNet 32x32, an EBM model generated high-quality images relatively quickly. It supported combining features learned from one type of image for generating other types of images. It was able to generalize using out-of-distribution datasets, outperforming flow-based and autoregressive models. EBM was relatively resistant to adversarial perturbations, behaving better than models explicitly trained against them with training for classification.
Applications
Target applications include natural language processing, robotics and computer vision.
The first energy-based generative neural network is the generative ConvNet proposed in 2016 for image patterns, where the neural network is a convolutional neural network. The model has been generalized to various domains to learn distributions of videos, and 3D voxels. They are made more effective in their variants. They have proven useful for data generation (e.g., image synthesis, video synthesis,
3D shape synthesis, etc.), data recovery (e.g., recovering videos with missing pixels or image frames, 3D super-resolution, etc), data reconstruction (e.g., image reconstruction and linear interpolation ).
Alternatives
EBMs compete with techniques such as variational autoencoders (VAEs), generative adversarial networks (GANs) or normalizing flows.
Extensions
Joint energy-based models
Joint energy-based models (JEM), proposed in 2020 by Grathwohl et al., allow any classifier with softmax output to be interpreted as an energy-based model. The key observation is that such a classifier is trained to predict the conditional probability
$p_\theta(y \mid x) = \frac{e^{f_\theta(x)[y]}}{\sum_{y'} e^{f_\theta(x)[y']}}$
where $f_\theta(x)[y]$ is the y-th index of the logits $f_\theta(x)$ corresponding to class y.
Without any change to the logits it was proposed to reinterpret the logits to describe a joint probability density:
$p_\theta(x, y) = \frac{e^{f_\theta(x)[y]}}{Z(\theta)}$,
with unknown partition function $Z(\theta)$ and energy $E_\theta(x, y) = -f_\theta(x)[y]$.
By marginalization, we obtain the unnormalized density
$p_\theta(x) = \sum_y p_\theta(x, y) = \frac{\sum_y e^{f_\theta(x)[y]}}{Z(\theta)}$,
therefore,
$E_\theta(x) = -\log \sum_y e^{f_\theta(x)[y]}$,
so that any classifier can be used to define an energy function $E_\theta(x)$.
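A small numeric sketch of this reinterpretation (hypothetical helper names): starting from an arbitrary logit vector, the joint energy is the negated logit, the marginal energy is the negative log-sum-exp of the logits, and the classifier's softmax probabilities are recovered unchanged from the two energies.

```python
import math

# Sketch of the JEM reinterpretation of classifier logits (hypothetical names).

def logsumexp(v):
    m = max(v)
    return m + math.log(sum(math.exp(a - m) for a in v))

def softmax(logits):
    z = logsumexp(logits)
    return [math.exp(a - z) for a in logits]

def joint_energy(logits, y):
    return -logits[y]            # E(x, y) = -f(x)[y]

def marginal_energy(logits):
    return -logsumexp(logits)    # E(x) = -log sum_y exp(f(x)[y])

logits = [2.0, 1.0, -1.0]        # arbitrary example logits
probs = softmax(logits)
# p(y|x) = exp(-E(x, y)) / exp(-E(x)), independent of the unknown Z:
recovered = [math.exp(marginal_energy(logits) - joint_energy(logits, y))
             for y in range(len(logits))]
# recovered matches probs exactly
```

Because the unknown $Z(\theta)$ cancels in the ratio, the classifier's predictions are untouched while the marginal energy becomes available for generative training.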
See also
Empirical likelihood
Posterior predictive distribution
Contrastive learning
Literature
Implicit Generation and Generalization in Energy-Based Models Yilun Du, Igor Mordatch https://arxiv.org/abs/1903.08689
Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One, Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, Kevin Swersky https://arxiv.org/abs/1912.03263
References
External links
Statistical models
Machine learning
Statistical mechanics
Hamiltonian mechanics | Energy-based model | [
"Physics",
"Mathematics",
"Engineering"
] | 1,424 | [
"Machine learning",
"Theoretical physics",
"Classical mechanics",
"Hamiltonian mechanics",
"Artificial intelligence engineering",
"Statistical mechanics",
"Dynamical systems"
] |
63,087,914 | https://en.wikipedia.org/wiki/Closing%20the%20Gap%3A%20The%20Quest%20to%20Understand%20Prime%20Numbers | Closing the Gap: The Quest to Understand Prime Numbers is a book on prime numbers and prime gaps by Vicky Neale, published in 2017 by the Oxford University Press (). The Basic Library List Committee of the Mathematical Association of America has suggested that it be included in undergraduate mathematics libraries.
Topics
The main topic of the book is the conjecture that there exist infinitely many twin primes, dating back at least to Alphonse de Polignac (who conjectured more generally in 1849 that every even number appears infinitely often as the difference between two primes), and the significant progress made recently by Yitang Zhang and others on this problem. Zhang did not solve the twin prime conjecture, but in 2013 he announced a proof that there exists an even number that is the difference between infinitely many pairs of primes. Zhang's original proof shows only that this even number is less than 70 million, but subsequent work by others, including the highly collaborative efforts of the Polymath Project, reduced this bound to 246, or even, assuming the truth of the Elliott–Halberstam conjecture, to 6.
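The flavour of the problem is easy to demonstrate computationally. The sketch below (a toy, unrelated to the proofs discussed in the book) sieves the primes below 100 and lists the twin prime pairs among them, i.e. consecutive primes at distance 2.

```python
# Toy illustration: sieve small primes and list twin prime pairs (gap 2).
# Zhang-type results concern a fixed even gap occurring infinitely often;
# this merely inspects the primes below a small bound.

def primes_below(n):
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

ps = primes_below(100)
twins = [(p, q) for p, q in zip(ps, ps[1:]) if q - p == 2]
# twins → [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43),
#          (59, 61), (71, 73)]
```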
The book is structured with chapters that alternate between giving the chronological development of the twin prime problem, and providing mathematical background on related topics in number theory; reviewer Michael N. Fried describes this unusual structure as a rondo with the chronological sequence as its refrain and the mathematical parts as its verses. The mathematical topics covered in these chapters include Goldbach's conjecture that every even number is the sum of two primes, sums of squares and Waring's problem on representation by sums of powers, the Hardy–Littlewood circle method for comparing the area of a circle to the number of integer points in the circle and solving analogous problems in analytic number theory, the arithmetic of quaternions, Fermat’s Last Theorem, the fundamental theorem of arithmetic on the existence and uniqueness of prime factorizations, almost primes, Sophie Germain primes, Pythagorean triples, and Szemerédi's theorem and its connections to primes in arithmetic progression.
Beyond its mathematical content, another theme of the book involves understanding the processes that mathematicians use to develop their mathematics, and "what it means to do research in mathematics", ranging from the stereotypical "single mathematician working on his own" exemplified by Zhang, to the global networked collaboration of the Polymath Project.
Audience and reception
The book is written for a general audience untrained in mathematics, and in many cases finds clever and accessible ways of explaining mathematical concepts using visual intuition, although in other cases she uses complicated formulas and algebra that could be intimidating. The book could also be of interest to mathematics students and professional mathematicians, and reviewer Michael N. Fried suggests that it could be helpful to mathematics educators in deepening their knowledge of mathematics, providing creative visual demonstrations of mathematical concepts, and inspiring collaborative techniques in education.
Reviewer Mark Hunacek writes that Neale's "prose is clear but not patronizing, precise but accessible. The result is a very enjoyable book". Fried calls it "consistently entertaining and enlightening", and reviewer Marianne Freiberger calls it "among the clearest popular accounts of maths I've read".
References
Prime numbers
Mathematics books
2017 non-fiction books
Oxford University Press books | Closing the Gap: The Quest to Understand Prime Numbers | [
"Mathematics"
] | 664 | [
"Prime numbers",
"Mathematical objects",
"Numbers",
"Number theory"
] |
63,088,083 | https://en.wikipedia.org/wiki/HD%20191939 | HD 191939 is a single yellow (G-type) main-sequence star, located approximately 174 light-years away in the constellation of Draco, taking its primary name from its Henry Draper Catalogue designation.
Characteristics
HD 191939 is a Sun-like G-type main-sequence star, likely older than the Sun and relatively depleted in metals.
Planetary system
In 2020, an analysis of data from the TESS mission, carried out by a team of astronomers led by Mariona Badenas-Agusti, confirmed the existence of three gaseous planets, all smaller than Neptune, in orbit around HD 191939. Another non-transiting gas giant planet designated HD 191939 e was detected in 2021, along with a substellar object on a highly uncertain, 9 to 46 year orbit. In 2022, a sixth planet, with a mass comparable to Uranus, was discovered in the system's habitable zone. The 2021 study also suggested the possible presence of an additional non-transiting planet with a period of 17.7 days, but the 2022 study did not support this.
See also
List of extrasolar planets
List of exoplanets discovered in 2020 (HD 191939 b, c and d)
List of exoplanets discovered in 2021 (HD 191939 e)
List of exoplanets discovered in 2022 (HD 191939 f & g)
References
191939
99175
1339
G-type main-sequence stars
HD, 191939
Draco (constellation)
Planetary systems with six confirmed planets
Planetary transit variables | HD 191939 | [
"Astronomy"
] | 311 | [
"Constellations",
"Draco (constellation)"
] |
63,088,283 | https://en.wikipedia.org/wiki/PANSAT | PANSAT (Petite Amateur Navy Satellite, also known as OSCAR 34) was an amateur radio satellite. It was launched by Space Shuttle Discovery during the STS-95 mission as part of the third International Extreme Ultraviolet Hitchhiker (IEH-3) mission, on 30 October 1998 from Kennedy Space Center, Florida.
The satellite was built by students from the Naval Postgraduate School in Monterey, California. It offered the possibility of packet radio transmission in BPSK or Direct-Sequence Spread Spectrum in the 70 cm band. The satellite was configured in a sphere-like shape, featuring 26 sides used for solar cell and antenna placement. The spacecraft supplied direct-sequence, spread-spectrum modulation with an operating center frequency of 436.5 MHz, a bit rate of 9600 bit/s and 9 MB of message storage.
References
Satellites orbiting Earth
Amateur radio satellites
Spacecraft launched by the Space Shuttle
Spacecraft launched in 1998 | PANSAT | [
"Astronomy"
] | 182 | [
"Astronomy stubs",
"Spacecraft stubs"
] |
63,089,932 | https://en.wikipedia.org/wiki/Gribov%20Medal | The Gribov Medal is a prize awarded every two years since 2001 by the European Physical Society for work in theoretical elementary particle physics or quantum field theory. It is awarded to younger physicists (age under 35) and is named after Vladimir Naumovich Gribov.
Prize Winners
External links
Official Website
The Gribov Medal Prizes (EPS Website)
References
Physics awards
Early career awards
Awards of the European Physical Society | Gribov Medal | [
"Technology"
] | 85 | [
"Science and technology awards",
"Physics awards"
] |
63,090,080 | https://en.wikipedia.org/wiki/Using%20the%20Borsuk%E2%80%93Ulam%20Theorem | Using the Borsuk–Ulam Theorem: Lectures on Topological Methods in Combinatorics and Geometry is a graduate-level mathematics textbook in topological combinatorics. It describes the use of results in topology, and in particular the Borsuk–Ulam theorem, to prove theorems in combinatorics and discrete geometry. It was written by Czech mathematician Jiří Matoušek, and published in 2003 by Springer-Verlag in their Universitext series ().
Topics
The topic of the book is part of a relatively new field of mathematics crossing between topology and combinatorics, now called topological combinatorics. The starting point of the field, and one of the central inspirations for the book, was a proof that László Lovász published in 1978 of a 1955 conjecture by Martin Kneser, according to which the Kneser graphs $KG_{2n+k,n}$ have no graph coloring with $k+1$ colors. Lovász used the Borsuk–Ulam theorem in his proof, and Matoušek gathers many related results, published subsequently, to show that this connection between topology and combinatorics is not just a proof trick but an area.
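Kneser's conjecture can be checked by brute force in its smallest interesting case. The sketch below (illustrative only) builds the Kneser graph $KG_{5,2}$, better known as the Petersen graph, and confirms by exhaustive search that it admits no proper 2-coloring but does admit a 3-coloring, matching the chromatic number $5 - 2 \cdot 2 + 2 = 3$.

```python
from itertools import combinations, product

# Brute-force check on KG(5,2), the Petersen graph: vertices are the
# 2-element subsets of {0,...,4}; edges join disjoint subsets.

verts = list(combinations(range(5), 2))            # 10 vertices
edges = [(u, v) for u, v in combinations(verts, 2)
         if set(u).isdisjoint(v)]                   # 15 edges

def colorable(k):
    """Exhaustively test for a proper k-coloring (fine for 10 vertices)."""
    for assignment in product(range(k), repeat=len(verts)):
        color = dict(zip(verts, assignment))
        if all(color[u] != color[v] for u, v in edges):
            return True
    return False
# colorable(2) is False and colorable(3) is True: chromatic number 3
```

Exhaustive search only works for tiny cases like this one; Lovász's topological argument is what settles the general family.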
The book has six chapters. After two chapters reviewing the basic notions of algebraic topology, and proving the Borsuk–Ulam theorem, the applications to combinatorics and geometry begin in the third chapter, with topics including the ham sandwich theorem, the necklace splitting problem, Gale's lemma on points in hemispheres, and several results on colorings of Kneser graphs. After another chapter on more advanced topics in equivariant topology, two more chapters of applications follow, separated according to whether the equivariance is modulo two or using a more complicated group action. Topics in these chapters include the van Kampen–Flores theorem on embeddability of skeletons of simplices into lower-dimensional Euclidean spaces, and topological and multicolored variants of Radon's theorem and Tverberg's theorem on partitions into subsets with intersecting convex hulls.
Audience and reception
The book is written at a graduate level, and has exercises making it suitable as a graduate textbook. Some knowledge of topology would be helpful for readers but is not necessary. Reviewer Mihaela Poplicher writes that it is not easy to read, but is "very well written, very interesting, and very informative". And reviewer Imre Bárány writes that "The book is well written, and the style is lucid and pleasant, with plenty of illustrative examples."
Matoušek intended this material to become part of a broader textbook on topological combinatorics, to be written jointly with him, Anders Björner, and Günter M. Ziegler. However, this was not completed before Matoušek's untimely death in 2015.
References
Combinatorics
Algebraic topology
Mathematics textbooks
2003 non-fiction books | Using the Borsuk–Ulam Theorem | [
"Mathematics"
] | 584 | [
"Discrete mathematics",
"Algebraic topology",
"Combinatorics",
"Fields of abstract algebra",
"Topology"
] |
63,090,329 | https://en.wikipedia.org/wiki/Brequinar | Brequinar (DuP-785) is a drug that acts as a potent and selective inhibitor of the enzyme dihydroorotate dehydrogenase. It blocks synthesis of pyrimidine based nucleotides in the body and so inhibits cell growth. Brequinar was invented by DuPont Pharmaceuticals in the 1980s. In 2001, Bristol-Myers Squibb acquired DuPont, and in 2017, Clear Creek Bio acquired the rights to brequinar from BMS.
Brequinar has been investigated as an immunosuppressant for preventing rejection after organ transplant and also as an anti-cancer drug, but was not accepted for medical use in either application largely due to its narrow therapeutic dose range and severe side effects when dosed inappropriately. It has been researched both as part of a potential combination therapy for some cancers, or alternatively as an antiparasitic, or antiviral drug. Clear Creek Bio is currently developing brequinar as a potential treatment for COVID-19.
Inhibition of dihydroorotate dehydrogenase activity by brequinar may represent an efficient approach to eliminating undifferentiated cells, enabling safer therapies based on differentiated cells derived from pluripotent stem cells (PSCs). Brequinar has also been shown to completely inhibit vaccinia virus in a cell-based assay.
See also
Leflunomide - Clinically used DHODH inhibitor
Methotrexate - the most widely used pyrimidine synthesis inhibitor
References
Antiviral drugs
Quinolines
Biphenyls
Carboxylic acids
Organofluorides | Brequinar | [
"Chemistry",
"Biology"
] | 331 | [
"Antiviral drugs",
"Carboxylic acids",
"Biocides",
"Functional groups"
] |
63,091,229 | https://en.wikipedia.org/wiki/Lucy%20Collinson | Lucy Collinson is a microbiologist, electron microscopist and scientist working at the Francis Crick Institute in London. She is the head of electron microscopy.
Early career
In 1998, Collinson completed a PhD in molecular microbiology at Barts and The London School of Medicine and Dentistry.
References
External links
Year of birth missing (living people)
Living people
21st-century British scientists
Alumni of Barts and The London School of Medicine and Dentistry
British microbiologists
Electron microscopy
Microscopists
Place of birth missing (living people) | Lucy Collinson | [
"Chemistry"
] | 110 | [
"Electron",
"Electron microscopy",
"Microscopists",
"Microscopy"
] |
63,092,260 | https://en.wikipedia.org/wiki/Operation%20Encore | Operation Encore was the Allied offensive timed for FebruaryMarch 1945, to break through the Gothic Line. This was initiated at the army instead of corps level. This comprised an assault of the 10th Mountain Division and the Brazilian Expeditionary Force to secure the high ground dominating where it crossed the Apennine Mountains (18 February25 February 1945), followed by a limited offensive that ended with the capture of the crossroads at Castel d'Aiano (3 March5 March 1945) Once these objectives were achieved, the Fifth Army could penetrate the northern Apennines to reach the Po valley as part of the Spring 1945 offensive in Italy.
Background
Following the capture of Rome on 4 June 1944, the Allied forces proceeded north in two groups: the British Eighth Army (Lieutenant-General Oliver Leese) advancing along the coastal plain of the Adriatic, and the U.S. Fifth Army (Lieutenant General Mark Clark) to the west through the central Apennine Mountains. Before them stood the carefully prepared German defenses of the Gothic Line. General Clark's plan had initially been to drive through the Apennines at two points: the main body of II Corps would advance north along the highway that connects Florence to Bologna by way of the Futa Pass. When these troops encountered the expected enemy resistance, the 34th Division would launch a strong diversionary attack west of the Futa Pass, while the rest of II Corps would bypass Futa Pass to the east and attack the lightly defended Il Giogo Pass near the boundary of the German Fourteenth and Tenth Armies. This attack began on 10 September 1944.
However, the Apennines were a formidable terrain and despite reduced numbers and limited supplies the Germans proved to be stubborn foes in their well-prepared defensive positions. While the American divisions managed to advance past both the Futa and Il Giogo passes, it was at a high cost. Between 10 September and 26 October, II Corps' four divisions had suffered over 15,000 casualties. On 27 October General Sir Henry Maitland Wilson, the Supreme Allied Commander in the Mediterranean, ordered a halt to these offensives.
The Allies made one last attempt to break through the Apennines, using units of the recently arrived Brazilian Expeditionary Force (BEF) and the 92nd Infantry Division. To the west of Futa Pass, the Strada statale 64 passed Mount Belvedere on its route to Bologna; control of Mount Belvedere would give the Allies control of the highway, allowing a breakthrough into the Po Valley. From 24 November through 12 December three unsuccessful assaults were made to capture the mountain, but despite Allied efforts each time they secured the peak of the mountain, German artillery drove them off the heights.
A few weeks later the US 10th Mountain Division, the only American mountain infantry unit, which had been stateside impatiently waiting to participate in the fighting, arrived in Italy. The 10th arrived at Naples piecemeal starting 22 December 1944, with the last units landing on 13 January 1945. From Naples they made their way by ship or by rail in forty-and-eights to Livorno, then by trucks to Pisa. From Pisa the men proceeded to which became divisional headquarters, and proceeded to prepare for battle.
Part 1: Battle of Mount Belvedere
Previous attempts in November 1944 to secure Mount Belvedere had ended in failure. The specialist 10th Mountain Division, under General George Price Hays, was assigned to secure it and nearby mountains. They were supported on their right by the BEF. Because the 10th Mountain was light on organic artillery, Major General Willis Crittenberger, commander of the IV Corps, lent them two field artillery battalions, a Chemical Mortar battalion, two tank destroyer battalions, and one tank battalion.
The first move was to capture the range of mountains known to the Americans as Riva Ridge, rising west of Mt. Belvedere. On these mountains the Germans had positioned observation posts enabling their artillery to target any Allied units on the western and southern slopes of that mountain. On the night of 18/19 February 1945 the first battalion of the 86th Mountain Infantry Regiment, augmented by Company F from the second battalion, scaled the cliffs of Riva Ridge. Had the Germans learned of this night climb, American command estimated, as many as 90% of the regiment would have become casualties. Fortunately for the battalion, they reached the top undetected by the enemy. Compounding the Germans' difficulties, at the time of the ascent the men of the 1044th Grenadier Regiment were in the process of being relieved by the men of the 232nd Fusilier Battalion, which contributed to the success of the capture of Riva Ridge.
With Riva Ridge secure, the following night (19/20 February) the other two regiments of the division began their frontal assault on German lines: the 87th Mountain Infantry Regiment advanced on the western slopes of Mount Belvedere, while the 85th Mountain Infantry Regiment advanced up the southern slopes. To achieve maximum surprise, no preliminary artillery barrage was made. The third battalion of the 86th Mountain Infantry advanced to the right of the 87th Mountain Infantry. Despite minefields and emplaced machine gun positions, the 87th Mountain captured two of the three villages on the western slopes, as well as the Valpiana Ridge, by dawn. Meanwhile the 85th Mountain Infantry had fought their way to the top of Mount Belvedere between 3:30a and 5:30a 20 February, and one battalion of the 85th Mountain Infantry reached the summit of neighboring Mount Gorgolesco by 3:00a.
The last peak of the chain of mountains remained to be taken. For this part of the battle, General Hays brought in the artillery assets, while British Spitfires and American P-47s were coordinated by ground spotters dubbed "Rover Joe". However, the Germans, supported by the recently arrived 741st Jäger Regiment of the 114th Jäger Division, launched a fierce counterattack on Mount Belvedere on 21 February that stopped the American advance down the reverse slope but failed to gain any ground. The second battalion of the 85th Mountain Infantry began their advance along the ridgeline from Mt. Gorgolesco towards Mount della Torraccio, only to be stopped by effective use of artillery some 400 yards short of their objective. By the afternoon of 22 February the battalion was down to 400 effectives; General Hays relieved their commander, and sent the third battalion of the 86th Mountain Infantry in their place. Following an artillery barrage of the German positions on Mount della Torraccio, on 24 February the third battalion moved forward to seize the summit of the mountain in hand-to-hand combat by 9:00am. The German Mittenwald Mountain Training Battalion initiated counterattacks on the afternoon of that day that continued into the night, but failed to dislodge the men of the 86th Mountain Infantry.
Meanwhile the BEF proceeded to assault , to the southeast of Mount della Torraccio, only to find the Germans had withdrawn from it prior to their advance.
American casualties from the Battle of Mount Belvedere were 192 killed in action, 730 wounded, and 1 captured. Brazilian casualties were 22 dead and 137 wounded. German casualties are unknown, but American records note over 400 were taken prisoner.
Part 2: Battle of Castel d'Aiano
The second phase of Operation Encore began five days later. The American 10th Mountain would advance north, with the 86th Mountain Infantry on the left and the 87th Mountain Infantry on the right, to occupy four peaks that would serve as their line of departure. From west to east they were Mount Grande d'Aiano, Mount della Spe, Mount della Castellana, and Mount Valbura. The BEF would advance to the northeast to occupy Vergato which lay on Strada statale 64 Porrettana; occupying this town would sever both the German lines of supply and communications back to the Po valley.
Lieutenant General Lucian Truscott, commander of the Fifth Army, had wanted to begin the advance on 1 March, but inclement weather prevented air support until 3 March, when the attack started. The attack took the Germans by surprise: elements of the 114th Jäger Division were relieving the 232nd Infantry Division at the time of the attack. The 86th Mountain Infantry quickly seized Mount Terminale, then proceeded to the town of Iola di Montese, where they encountered stiff resistance; the town was captured with the help of the 751st Tank Battalion. In the fighting for Iola, Technical Sergeant Torger Tokle, a champion skier and the best-known member of the division, was killed. On the right the 87th Mountain Infantry captured the town of Pietra Colora on the first day. Amongst the prisoners was the headquarters staff of the third battalion, 721st Regiment, 114th Jäger Division.
The following day the 86th Mountain Infantry captured the town of Sassomolare, and shortly before 3:30pm was in control of their final objective, Mount Grande d'Aiano. Meanwhile on the right the 87th Mountain Infantry captured Mounts Acidola and della Croce, and occupied Castel d'Aiano. The 85th Mountain Infantry came into action, capturing Mounts della Spe and della Castellana despite heavy German artillery barrages. Meanwhile the BEF advanced west of Highway 64. The German commanders were forced to pull men from the 232nd Division which had been relieved the night before to stem the American advance; one German POW complained, "I don't mind being taken prisoner, but I surely hate losing out on my rest."
The American success made German Field Marshal Albert Kesselring concerned that this was the beginning of a major offensive aimed at capturing Bologna. He decided he could not take a chance and committed his strategic reserve, the 29th Panzer Grenadier Division. Its first unit, the 15th Panzer Grenadier Regiment, arrived at this point and initiated a series of counterattacks against the soldiers dug in on Mount della Spe. Despite the ferocity of the German counterattacks, the men of the 86th Mountain Infantry beat them back in determined hand-to-hand fighting, and the Germans settled on harassing them with artillery fire.
Worried that Kesselring might grow concerned enough to develop defensive positions astride Strada statale 64, on 5 March General Truscott ordered the units to halt in place. The 10th Mountain extended their control over two additional features to the east, Mount Valbura and a second Mount Belvedere. Meanwhile, between 10 and 16 March the BEF shifted their position to the left of the 10th Mountain, transferring its headquarters to the valley of the Panaro. The Allies now held a six-mile front favorable for their advance down the valley of the Reno river and along the Strada statale 64.
American casualties from the Battle of Castel d'Aiano were 146 killed in action, 512 wounded, and 3 captured. Brazilian casualties were 68 in total, of which at least 3 were killed in action and 32 wounded.
See also
Battle of Monte Castello
References
Further reading
Heinrich Boucsein, Bomber, Jabos, Partisanen (Berg am Starnberger See: Kurt-Vowinckel, 2000)
Italian campaign (World War II)
World War II defensive lines
World War II operations and battles of the Italian Campaign
Battles and operations of World War II involving Germany
1945 in Italy
Conflicts in 1945
Battles of World War II involving the United States
Battles of World War II involving Brazil | Operation Encore | [
"Engineering"
] | 2,325 | [
"World War II defensive lines",
"Fortification lines"
] |
63,092,514 | https://en.wikipedia.org/wiki/SUSPUP%20and%20SUSPPUP | SUSPUP (serum sodium to urinary sodium to serum potassium to urinary potassium) and SUSPPUP (serum sodium to urinary sodium to (serum potassium)² to urinary potassium) are calculated structure parameters of the renin–angiotensin–aldosterone system (RAAS). They have been developed to support screening for primary or secondary aldosteronism.
Physiological principle
The steroid hormone aldosterone stimulates the reabsorption of sodium and the excretion of potassium in the distal tubules and the collecting tubes of the kidneys. Calculating SUSPUP and/or SUSPPUP helps to determine the intensity of mineralocorticoid signalling, which may be helpful in the differential diagnosis of hypertension and hypokalaemia.
Preconditions of testing
Sodium and potassium concentrations have to be determined in serum and spot urine samples that have been obtained simultaneously or within a short time interval of each other.
Calculation
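Reading the parameter names as double ratios of the simultaneously determined serum (S) and spot-urine (U) electrolyte concentrations, the two indices take the form:

```latex
\mathrm{SUSPUP} \;=\; \frac{\mathrm{Na^{S}}/\mathrm{Na^{U}}}{\mathrm{K^{S}}/\mathrm{K^{U}}},
\qquad
\mathrm{SUSPPUP} \;=\; \frac{\mathrm{Na^{S}}/\mathrm{Na^{U}}}{\left(\mathrm{K^{S}}\right)^{2}/\mathrm{K^{U}}}
```

With all concentrations in mmol/l, SUSPUP is dimensionless, while SUSPPUP carries units of l/mmol because of the squared serum potassium term.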
Interpretation
Reference ranges are 3.6–22.6 for SUSPUP and 0.6–5.3 for SUSPPUP.
Increased values support the hypothesis of increased mineralocorticoid stimulation of the distal tubules and collecting tubes, i.e. in cases of hyperaldosteronism. While these parameters have a high sensitivity for screening purposes, their specificity may be inferior to that of the aldosterone-to-renin ratio (ARR) and of potassium concentrations.
Both parameters may also be elevated in syndrome of inappropriate ADH secretion (SIADH), probably reflecting a compensatory mechanism, where the organism tries to maintain serum sodium concentrations by means of increased renin and/or aldosterone secretion.
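For illustration, both indices can be computed directly from a paired serum/spot-urine electrolyte panel. This sketch simply follows the parameter names (serum-to-urine sodium ratio divided by the serum-to-urine potassium ratio, with serum potassium squared for SUSPPUP); the function names and sample values are hypothetical:

```python
def suspup(na_serum, na_urine, k_serum, k_urine):
    """Serum/urine sodium ratio over serum/urine potassium ratio (dimensionless)."""
    return (na_serum / na_urine) / (k_serum / k_urine)

def susppup(na_serum, na_urine, k_serum, k_urine):
    """As SUSPUP, but with serum potassium squared (units: l/mmol)."""
    return (na_serum / na_urine) / (k_serum ** 2 / k_urine)

# Hypothetical paired sample, all concentrations in mmol/l.
s = suspup(na_serum=140, na_urine=100, k_serum=4.0, k_urine=50)
p = susppup(na_serum=140, na_urine=100, k_serum=4.0, k_urine=50)
print(f"SUSPUP  = {s:.1f}")   # lands inside the 3.6-22.6 reference range
print(f"SUSPPUP = {p:.1f}")   # lands inside the 0.6-5.3 reference range
```

Values above the reference ranges would point toward increased mineralocorticoid activity, in line with the interpretation given above.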
References
Adrenal gland disorders
Human homeostasis
Static endocrine function tests | SUSPUP and SUSPPUP | [
"Biology"
] | 369 | [
"Human homeostasis",
"Homeostasis"
] |
63,093,546 | https://en.wikipedia.org/wiki/NGC%20950 | NGC 950 is a barred spiral galaxy in the constellation Cetus. It is approximately 205 million light-years away from the Solar System and has a diameter of about 85,000 light-years. The object was discovered in 1886 by American astronomer and mathematician Ormond Stone.
See also
List of NGC objects (1–1000)
References
Barred spiral galaxies
0950
Cetus
009461 | NGC 950 | [
"Astronomy"
] | 79 | [
"Cetus",
"Constellations"
] |
63,093,607 | https://en.wikipedia.org/wiki/NGC%20960 | NGC 960 is a spiral galaxy in the constellation Cetus. The galaxy was discovered in 1886 by Francis Preserved Leavenworth.
See also
List of NGC objects (1–1000)
References
External links
Intermediate spiral galaxies
0960
Cetus
009621 | NGC 960 | [
"Astronomy"
] | 52 | [
"Cetus",
"Constellations"
] |
63,093,632 | https://en.wikipedia.org/wiki/NGC%20970 | NGC 970 is an interacting galaxy pair in the constellation Triangulum. It is estimated to be 471 million light-years from the Milky Way and has a diameter of approximately 100,000 ly. The object was discovered on September 14, 1850, by Bindon Blood Stoney.
See also
List of NGC objects (1–1000)
Notes
References
970
Interacting galaxies
Triangulum
Spiral galaxies
009786 | NGC 970 | [
"Astronomy"
] | 87 | [
"Triangulum",
"Constellations"
] |
63,093,934 | https://en.wikipedia.org/wiki/Redondoviridae | Redondoviruses (members of the Redondoviridae) are a family of human-associated DNA viruses. Their name derives from the inferred circular structure of the viral genome ("redondo" means round in Spanish). Redondoviruses have been identified in DNA sequence-based surveys of samples from humans, primarily samples from the oral cavity and upper airway.
Virology
Taxonomy
Redondoviruses have been assigned to a new family, the Redondoviridae, by the International Committee on Taxonomy of Viruses (ICTV).
Classification
The family Redondoviridae is divided into two species, Brisavirus and Vientovirus. The names derive from the words for breeze and wind in Spanish ("brisa" and "viento"), denoting the association with the human airway. Multiple strains have been proposed on the basis of viral genome structure.
The redondoviruses are members of the Circular Rep-Containing Single Stranded (CRESS) DNA Virus group.
Phylum: Cressdnaviricota
Class: Arfiviricetes (Ar from arginine; fi from finger; describes a feature of the Rep protein conserved among viruses in this class)
Order: Recrevirales (Re from redondoviruses; cre from CRESS)
Genome
The redondovirus genome is circular, and by analogy to other CRESS viruses likely single stranded. Genomes range in size from about 3.0 to 3.1 kilobases. The genome encodes three inferred proteins:
A Rep protein that likely initiates rolling-circle DNA replication.
A Cap protein that likely self-assembles to yield icosahedral particles.
An ORF3 protein of unknown function. ORF3 is entirely encoded within the Cap coding region in a different reading frame.
Epidemiology
Distribution
Redondovirus genomes have been reported primarily from human samples surveyed using metagenomic DNA sequencing. They have been found primarily in oral and airway specimens. In some human populations, oral samples can show up to 80% redondovirus positivity.
Analysis of a variety of human-derived sample types showed a strong positive correlation of redondovirus DNA and DNA of the oral amoeba Entamoeba gingivalis. Follow-up studies showed that a xenic culture containing Entamoeba gingivalis and feeder bacteria was also positive for redondovirus DNA and RNA. Analysis using intracellular cross-linking (Hi-C) showed crosslinking of redondovirus DNA to Entamoeba DNA, supporting Entamoeba gingivalis as the host.
Disease associations
It is unknown whether redondoviruses cause human disease. Some CRESS viruses are known pathogens, such as porcine circovirus type 2.
Redondoviruses have been reported to be associated with periodontitis. In one study, redondovirus levels fell with successful treatment. Abundance of redondovirus genomes has also been found to be high in some intensive care unit patients, and in patients with severe COVID-19. At present the basis of these disease associations is unclear.
References
External links
ICTV Report: Redondoviridae
DNA viruses
Virus families | Redondoviridae | [
"Biology"
] | 661 | [
"Viruses",
"DNA viruses"
] |
63,094,759 | https://en.wikipedia.org/wiki/Poincar%C3%A9%20and%20the%20Three-Body%20Problem | Poincaré and the Three-Body Problem is a monograph in the history of mathematics on the work of Henri Poincaré on the three-body problem in celestial mechanics. It was written by June Barrow-Green, as a revision of her 1993 doctoral dissertation, and published in 1997 by the American Mathematical Society and London Mathematical Society as Volume 11 in their shared History of Mathematics series (). The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries.
Topics
The three-body problem concerns the motion of three bodies interacting under Newton's law of universal gravitation, and the existence of orbits for those three bodies that remain stable over long periods of time. This problem has been of great interest mathematically since Newton's formulation of the laws of gravity, in particular with respect to the joint motion of the sun, earth, and moon. The centerpiece of Poincaré and the Three-Body Problem is a memoir on this problem by Henri Poincaré, entitled Sur le problème des trois corps et les équations de la dynamique [On the problem of the three bodies and the equations of dynamics]. This memoir won the King Oscar Prize in 1889, commemorating the 60th birthday of Oscar II of Sweden, and was scheduled to be published in Acta Mathematica on the king's birthday, until Lars Edvard Phragmén and Poincaré determined that there were serious errors in the paper. Poincaré called for the paper to be withdrawn, spending more than the prize money to do so. In 1890 it was finally published in revised form, and over the next ten years Poincaré expanded it into a monograph, Les méthodes nouvelles de la mécanique céleste [New methods in celestial mechanics]. Poincaré's work led to the discovery of chaos theory, set up a long-running separation between mathematicians and dynamical astronomers over the convergence of series, and became the initial claim to fame for Poincaré himself. The detailed story behind these events, long forgotten, was brought back to life in a sequence of publications by multiple authors in the early and mid 1990s, including Barrow-Green's dissertation, a journal publication based on the dissertation, and this book.
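In modern vector notation, the three-body problem asks for the motion of three point masses attracting one another under Newtonian gravity:

```latex
m_i \, \ddot{\mathbf{r}}_i \;=\; \sum_{\substack{j=1 \\ j \neq i}}^{3}
\frac{G \, m_i m_j \, (\mathbf{r}_j - \mathbf{r}_i)}{\lVert \mathbf{r}_j - \mathbf{r}_i \rVert^{3}},
\qquad i = 1, 2, 3,
```

where the $\mathbf{r}_i$ are the position vectors of the bodies and $G$ is the gravitational constant; the stability question concerns the long-term behavior of solutions of this system.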
The first chapter of Poincaré and the Three-Body Problem introduces the problem and its second chapter surveys early work on this problem, in which some particular solutions were found by Newton, Jacob Bernoulli, Daniel Bernoulli, Leonhard Euler, Joseph-Louis Lagrange, Pierre-Simon Laplace, Alexis Clairaut, Charles-Eugène Delaunay, Hugo Gyldén, Anders Lindstedt, George William Hill, and others. The third chapter surveys the early work of Poincaré, which includes work on differential equations, series expansions, and some special solutions of the three-body problem, and the fourth chapter surveys the history of the founding of Acta Mathematica by Gösta Mittag-Leffler and of the prize competition announced by Mittag-Leffler in 1885, which Barrow-Green suggests may have been deliberately set with Poincaré's interests in mind and which Poincaré's memoir would win.
The fifth chapter concerns Poincaré's memoir itself; it includes a detailed comparison of the significant differences between the withdrawn and published versions, and overviews the new mathematical content it contained, including not only the possibility of chaotic orbits but also homoclinic orbits and the use of integrals to construct invariants of systems. After a chapter on Poincaré's expanded monograph and his other later work on the three-body problem, the remainder of the book discusses the influence of Poincaré's work on later mathematicians. This includes contributions on the singularities of solutions by Paul Painlevé, Edvard Hugo von Zeipel, Tullio Levi-Civita, Jean Chazy, Richard McGehee, Donald G. Saari, and Zhihong Xia; on the stability of solutions by Aleksandr Lyapunov; on numerical results by George Darwin, Forest Ray Moulton, and Bengt Strömgren; on power series by Giulio Bisconcini and Karl F. Sundman; on the KAM theory by Andrey Kolmogorov, Vladimir Arnold, and Jürgen Moser; and additional contributions by George David Birkhoff, Jacques Hadamard, V. K. Melnikov, and Marston Morse. However, much of modern chaos theory is left out of the story "as amply dealt with elsewhere", and the work of Qiudong Wang generalizing Sundman's convergent series from three bodies to arbitrary numbers of bodies is also omitted. An epilogue considers the impact of modern computer power on the numerical study of Poincaré's theories.
Audience and reception
This book is aimed at specialists in the history of mathematics, but can be read by any student of mathematics familiar with differential equations, although the central part of the book, analyzing Poincaré's work, may be too light on mathematical detail to be readily understandable without reference to other material.
Reviewer Ll. G. Chambers writes "This is a superb piece of work and it throws new light on one of the most fundamental topics of mechanics."
Reviewer Jean Mawhin calls it "the definitive work about the chaotic story of the King Oscar Prize" and "pleasantly accessible"; reviewer R. Duda calls it "clearly organized, well written, richly documented", and both Mawhin and Duda call it a "valuable addition" to the literature. And reviewer Albert C. Lewis writes that it "provides insights into higher mathematics that justify its being on every university mathematics student's reading list". Although reviewer Florin Diacu (himself a noted researcher on the n-body problem) complains that Wang was omitted, that Barrow-Green "sometimes fails to see connections ... within Poincaré's own work" and that some of her translations are inaccurate, he also recommends the book.
References
Astronomical dynamical systems
Books about the history of mathematics
1997 non-fiction books | Poincaré and the Three-Body Problem | [
"Astronomy",
"Mathematics"
] | 1,249 | [
"Astronomical objects",
"Astronomical dynamical systems",
"Dynamical systems"
] |
63,094,949 | https://en.wikipedia.org/wiki/Radial%20plane | A radial plane is an anatomical plane that is used to describe a virtual slice along a radius of a somewhat cylindrically shaped body part. The radial planes need not be perfectly drawn to overlap on an exact intersection point, particularly when the body part being sectioned is not a perfect cylinder, such as in the case of the maxilla and mandible.
Usefulness
The radial plane can be useful because certain anatomical elements repeat in a circumferential manner (such as around the curvature of the dental arch, i.e. the jaw), and to speak of these entities using parallel planes becomes cumbersome and inaccurate.
For instance, the segment of bone on the outer circumference of each individual tooth is referred to as the facial plate of bone. Because the facial plate of bone is anterior to the incisors (in the front of the mouth) but lateral to the premolars and molars (in the back of the mouth), to visualize the facial plate of bone on various teeth will require sagittal slices for the former but coronal slices for the latter. To achieve greater uniformity and diminished confusion, simply speaking of radial slices provides a satisfactory solution for all teeth in both (upper and lower) arches.
Previous to the advent of this terminology, this plane was referred to as the axial plane relative to the body of the jawbone. It was believed that the jawbone was straightened out as though it were a straight tube, and then transverse (axial) sections were made of that tube.
References
Anatomical planes | Radial plane | [
"Mathematics"
] | 313 | [
"Planes (geometry)",
"Anatomical planes"
] |
63,095,115 | https://en.wikipedia.org/wiki/Jenara%20Vicenta%20Arnal%20Yarza | Jenara Vicenta Arnal Yarza (September 19, 1902 – May 27, 1960) was the first woman to hold a Ph.D. in chemistry (Chemical Sciences) in Spain. She was noted for her work in electrochemistry and her research into the formation of fluorine from potassium bifluoride. In later years, she was recognized for her contribution to the pedagogy of teaching science on the elementary and secondary levels, with a focus on the practical uses of chemistry in daily life. She was awarded a national honor, the Orden Civil de Alfonso X el Sabio.
Early life and education
Born into a humble family, Arnal Yarza's father was Luis Arnal Foz, a laborer from Zaragoza who later repaired pianos. Her mother, Vicenta Yarza Marquina, of Brea (Zaragoza), was a housewife. After the death of her parents, she had the responsibility of taking care of her two younger siblings. Her sister Pilar was a pianist who studied in Paris and gave concerts in the Teatro Real de Madrid. Her brother Pablo died young, but had a short career as a professor of Physics and Chemistry at the Consejo Superior de Investigaciones Científicas (CSIC).
Jenara's vocation led her to her teacher training studies at the Escuela de Zaragoza and to a degree in Elementary Education (primary school teaching) on December 3, 1921. Her desire for learning impelled her to continue her studies at the School of Sciences at the University of Zaragoza in the realm of Chemical Sciences, first as a non-matriculated student in 1922–23. Later she continued her studies as a matriculated student, and received high grades and honors in all of her classes. She received her graduate degree from the University of Zaragoza on March 12, 1927.
She defended her doctoral thesis on October 6, 1929, and obtained her Ph.D. in chemistry from the Faculty of Sciences of the University of Zaragoza on December 13, 1929. Her doctoral thesis was titled Estudio potenciométrico del ácido hipocloroso y de sus sales ("Potentiometric study of hypochlorous acid and its salts"). Thus, Arnal Yarza became the first woman to obtain a doctorate in Chemical Sciences in Spain, later followed by other researchers.
Career in chemistry
After completing her studies, in 1926 she began work as a researcher in theoretical Chemistry in the laboratories of the Faculty of the University of Zaragoza. Her research would later take her to other public and private research centers, such as the Escuela Industrial of Zaragoza, the Escuela Superior de Trabajo of Madrid, the Anstalt für Anorganische Chemie of the University of Basel (as a fellowship recipient of the Junta para Ampliación de Estudios e Investigaciones Científicas), and the National Institute of Physics and Chemistry (Instituto Nacional de Física y Química) of Madrid in the electrochemistry department (continuing and expanding upon the works she began in Switzerland and Germany, where she had gone to research electrochemistry as a fellow of the JAE). During her tenure at the INFQ, she published 11 articles about electrochemical research, and in particular, electrolytic analysis.
In 1929, Dr. Arnal Yarza became a member of the Spanish Society for Physics and Chemistry (Sociedad Española de Física y Química) for her distinguished research career in Spain and abroad.
While she worked at the laboratories of the Anstalt für anorganische Chemie in Basel, she studied under Friedrich Fichter, professor of inorganic chemistry and then vice-president of the International Union of Chemistry. Together they worked on the chemical oxidation of various metals, but specifically the creation of fluorine and of persulfates of zinc and lanthanum from the electrolysis of molten potassium bifluoride. They published the results of their work in 1931 in the notable Swiss periodical Helvetica Chimica Acta. Arnal Yarza also researched chemical oxidation produced by the action of fluorine in gaseous states. She spent some time studying at the Technische Hochschule in Dresden thanks to a two-semester extension of her original scholarship from 1932.
After the Spanish Civil War began in Madrid, in 1937 Arnal Yarza left Spain and resided for a time in France. She later returned to the Spanish "National Zone". Throughout the war she was able to continue her research work without being sanctioned.
During the Spanish Civil War and the early years of Franco's dictatorship, very few women, all unmarried, were allowed to participate in scientific research. Although Jenara did not return to full-time research after the war, she continued to be interested in science while teaching secondary school, and completed various works for the Consejo Superior de Investigaciones Científicas (CSIC) while serving at the teaching institute Instituto de Pedagogía San José de Calasanz. She collaborated in writing for the Boletín Bibliográfico del CSIC journal, most notably in publications dedicated to primary school teachers in the Auxiliary Library of Education (Biblioteca Auxiliar de Educación).
She was the second woman to serve as the director of a department of physics and chemistry at a Spanish secondary school from 1930 onward.
In May 1947 Arnal Yarza obtained authorization to travel to London to attend the First Centennial of the Royal Society and the XI International Congress of Pure and Applied Chemistry. In December, the General Office of Secondary Education (Dirección General de Enseñanzas Medias) gave her permission to go on a trip to Japan as a delegate of the (Foreign) Exchange Section of CSIC. Upon her return to Spain, Arnal Yarza gave conferences and facilitated the exchange of publications by CSIC with Japanese universities and centers of advanced research. Later, she would return to Japan under the auspices of the CSIC for two years where she would advance her studies in chemistry.
In July 1953, she made a trip to attend the XIII International Congress of Pure and Applied Chemistry in Stockholm and Uppsala. That same year she began her last trip to Europe for research purposes, to attend the meeting of the International Committee of Electrochemical Thermodynamics and Kinetics in Vienna from September 28 to October 5, 1953.
Jenara Vicenta Arnal Yarza died suddenly on May 27, 1960, of a cerebral hemorrhage due to thrombosis. After her death, the Ministry of Education awarded her the distinguished honor of the Orden Civil de Alfonso X el Sabio.
Career in teaching
Arnal Yarza began her teaching career in 1926, working as an assistant instructor in practical classes with the aim of becoming a professor of Analytical Chemistry in the Faculty of Sciences of the University of Zaragoza, and worked there until 1927. At the time she was in charge of the first course of Inorganic Chemistry, as the director was on leave. In the same year she obtained a contract as a temporary assistant to the professor of Electrochemistry and Advanced Physics in the same faculty, which she held until 1930. On April 9, 1930, after passing the requirements for professorship, she became the eleventh Spanish woman to receive the title of professor and the second professor of sciences, after Ángela García de la Puerta. Thus she began her career in secondary education.
Her first secondary teaching post was at the Instituto Nacional Femenino Infanta Cristina, a girls' school in Barcelona, from 1930 until its closure in 1931, where she served as acting professor. In 1933 she was transferred to the Institute of Secondary Education in Calatayud. Later she was a professor of Physics and Chemistry in the Institute of Bilbao, from which she finally transferred to Madrid where she was assigned to the Instituto Velázquez from 1935 to 1936.
When the Spanish Civil War broke out, the Republican government maintained her as a government employee, earning two-thirds of her salary, at the same time that there was a reduction of personnel at the Ministry of Public Instruction. Arnal Yarza did not have any political inclinations towards either of the two sides. This stance allowed her to avoid reprisals and permitted her to leave Madrid and, after a time in France, to enter the National Zone, where she presented herself before the Commission of Culture and Teaching of the Junta Técnica del Estado, which reinstated her to her position in Bilbao. In 1940, she was readmitted as a professor for the Beatriz Galindo Institute in Madrid without any sanctions, and there she was able to continue her duties in the directorial team of the center until her untimely death in 1960.
As an educator, Jenara distinguished herself with her pedagogical approach to the teaching of Natural Sciences, Physics and Chemistry. She believed that the teaching of basic sciences fomented cultural development in students, by providing knowledge of the natural world while developing their mental discipline via observation, experimentation and the interpretation of results. She believed that the method of teaching science should be adapted to the cognitive level of the student, so that for elementary students of 5–12 years of age, the focus should be on experiencing science via observation, experimentation and discovery. For older students of 12–15 years of age, she emphasized lessons that contained practical applications for science as part of professional development or for recreation. She detailed these approaches in a 1933 monograph edition of the journal Bordón, which was dedicated to the teaching of Natural Sciences.
Publications
In 1930, Arnal Yarza completed research in collaboration with Rius Miró and Ángela García de la Puerta on the subject of electrolytic oxidation of chlorides. In addition, she published distinguished works in Helvetica Chimica Acta and Transactions of the American Chemical Society. Also with Rius Miró, she published "Estudio del potencial del electrodo de cloro y sus aplicaciones al análisis" in the Anales de la Sociedad Española de Física y Química in 1933, and "La oxidación electrolítica" in 1935.
She also authored additional educational publications:
Física y Química de la vida diaria ("Physics and Chemistry in daily life") (1954 and 1959)
Los primeros pasos en el laboratorio de Física y Química ("First steps in Physics and Chemistry labs") (1956)
Química en Acción ("Chemistry in action") (1959).
In addition, Jenara collaborated with Inés García Escalera, professor of the Institute of Secondary Education of Alcalá de Henares, on two books:
Lecciones de cosas ("Lessons about things") (1958)
El mundo del saber (ciencias y letras) ("The world of knowledge (sciences and letters)") (1968 and 1970), later re-edited in 1982.
She also translated specialized books about the history of science, such as Historia de la Química ("The History of Chemistry") and Historia de la Física ("The History of Physics").
Legacy
One of her former students from her time in Japan, then serving as the Spanish ambassador, created the Vicenta Arnal Prize in her honor for graduating students of the Instituto Beatriz Galindo, and in recent years his son has continued to award the prize.
In March 2019, the city of Zaragoza proposed changing the name of a street from Calle Rudesindo Nasarre Ariño to Calle Jenara Vicenta Arnal Yarza in her honor, in an effort to recognize the contributions of four notable women from that city and to comply with the Historical Memory Law. The plan was cancelled in September of that year.
References
Bibliography
Cien años de Política Científica en España. María Jesús Santesmases y Ana Romero de Pablos. Fundación BBVA 2008. 424 pages. (Spanish)
De analfabetas científicas a catedráticas de Física y Química de Instituto en España: el esfuerzo de un grupo de mujeres para alcanzar un reconocimiento profesional y científico. Delgado Martínez, Mª Ángeles y López Martínez, J. Damián. Revista de Educación. 2004. Number 333. pp. 255–268. (Spanish)
Pioneras españolas en las ciencias. Las mujeres del Instituto Nacional de Física y Química. Carmen Magallón Portolés. Editorial: Consejo Superior de Investigaciones Científicas. 2004. 408 pages. (Spanish)
Women in Their Element: Selected Women's Contributions To The Periodic System. Lykknes, Annette and Van Tiggelen, Brigitte. World Scientific Publishing Co. 2019. 556 pages.
1902 births
1960 deaths
Spanish chemists
Spanish women chemists
Spanish women scientists
University of Zaragoza alumni
Academic staff of the University of Zaragoza
People from Zaragoza
Electrochemists | Jenara Vicenta Arnal Yarza | [
"Chemistry"
] | 2,689 | [
"Electrochemistry",
"Electrochemists"
] |
63,095,268 | https://en.wikipedia.org/wiki/1%2C6-Hexanediol%20diacrylate | 1,6-Hexanediol diacrylate (HDDA or HDODA) is a difunctional acrylate ester monomer used in the manufacture of polymers. It is particularly useful for use in ultraviolet light cure applications. Furthermore, it is also used in adhesives, sealants, alkyd coatings, elastomers, photopolymers, and inks for improved adhesion, hardness, abrasion and heat resistance. Like other acrylate monomers it is usually supplied with a radical inhibitor such as hydroquinone added.
Preparation
The material is prepared by acid-catalyzed esterification of 1,6-hexanediol with acrylic acid.
Other uses
As the molecule has acrylic functionality, it is capable of undergoing the Michael reaction with an amine. This allows its use in epoxy chemistry, where it speeds up the cure time considerably.
See also
TMPTA (trimethylolpropane triacrylate), a triacrylate crosslinker
Pentaerythritol tetraacrylate, a tetraacrylate crosslinker
References
External links
Product Stewardship Information
Acrylate esters
Monomers | 1,6-Hexanediol diacrylate | [
"Chemistry",
"Materials_science"
] | 256 | [
"Monomers",
"Polymer chemistry"
] |
63,096,410 | https://en.wikipedia.org/wiki/FRB%20180916.J0158%2B65 | FRB 20180916B (previously known as FRB 180916.J0158+65, and less formally known as FRB 180916 or "R3") is a repeating fast radio burst (FRB) discovered in 2018 by astronomers at the Canadian Hydrogen Intensity Mapping Experiment (CHIME) Telescope. According to a study published in the 9 January 2020 issue of the journal Nature, CHIME astronomers, in cooperation with the radio telescopes of the European VLBI Network (EVN) and the optical telescope Gemini North on Mauna Kea, Hawaii, were able to pinpoint the source of FRB 180916 to a location within a Milky Way-like galaxy named SDSS J015800.28+654253.0. This places the source at redshift 0.0337, approximately 457 million light-years from the Solar System.
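The quoted distance can be roughly cross-checked from the redshift. The sketch below uses the low-redshift Hubble-law approximation d ≈ cz/H0; the value of H0 is an assumption here, and the published distance depends on the cosmology adopted in the discovery paper.

```python
# Rough cross-check of the distance to FRB 20180916B from its redshift,
# using the low-z Hubble-law approximation d ~ c*z / H0.
# H0 below is an assumed value, not taken from the discovery paper.

C_KM_S = 299_792.458        # speed of light, km/s
H0 = 70.0                   # assumed Hubble constant, km/s/Mpc
MPC_TO_MLY = 3.2616         # million light-years per megaparsec

def distance_mly(z: float, h0: float = H0) -> float:
    """Low-redshift Hubble-law distance in millions of light-years."""
    d_mpc = C_KM_S * z / h0   # recession velocity over H0 gives distance in Mpc
    return d_mpc * MPC_TO_MLY

print(round(distance_mly(0.0337)))  # ~471 Mly with H0 = 70
```

With H0 = 70 km/s/Mpc this gives roughly 471 million light-years; a slightly larger H0 (≈73) brings the estimate closer to the quoted 457 million light-years.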
Periodicity
Prior to the publication of the study in Nature, only two types of FRBs had been observed: non-repeaters and repeaters. Non-repeaters are 'one-off' FRBs, possibly associated with catastrophic stellar events. In contrast, repeaters are not one-off, but instead manifest recurring unpredictable, sporadic, and irregular radiation bursts; their sources are less well understood. FRB 180916 seems to represent a third and new type of FRB that may be termed a periodic repeater. The radiation activity of FRB 180916 repeats over a period of 16.35 ± 0.18 days. Broadly, FRB 180916 emits bursts of radiation for approximately four days, followed by an inactive period of about 12 days, then the cycle repeats. Additional follow-up studies of the repeating FRB by the Swift XRT and UVOT instruments were reported on 4 February 2020; by the Sardinia Radio Telescope (SRT) and Medicina Northern Cross Radio Telescope (MNC) on 17 February 2020; and by the Galileo telescope in Asiago, also on 17 February 2020. Further observations were made by the Chandra X-ray Observatory on 3 and 18 December 2019, with no significant X-ray emissions detected at the FRB 180916 location or from the host galaxy SDSS J015800.28+654253.0. On 6 April 2020, follow-up studies by the Global MASTER-Net were reported on The Astronomer's Telegram and, on 4 June 2020, further follow-up studies were reported with the Giant Metrewave Radio Telescope (uGMRT). On 7 June 2020, astronomers from Jodrell Bank Observatory reported possible evidence that FRB 121102 exhibits the same radio burst behavior ("radio bursts observed in a window lasting approximately 90 days followed by a silent period of 67 days") every 157 days, suggesting that the bursts may be associated with "the orbital motion of a massive star, a neutron star or a black hole". This period is nearly "10 times longer than the 16-day periodicity" exhibited by FRB 180916. In March 2021, another burst from the FRB was reported.
On 25 August 2021, further observations were reported.
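The cycle described above — a 16.35-day period with roughly 4 active days followed by about 12 quiet days — can be sketched as a simple phase calculation. The reference epoch (cycle start) below is hypothetical, chosen purely to illustrate the arithmetic; real activity-window predictions require the epoch published by the CHIME/FRB collaboration.

```python
# Sketch of the ~16.35-day activity cycle of FRB 180916.
# EPOCH is a hypothetical cycle-start date, NOT the published ephemeris.
from datetime import datetime, timedelta

PERIOD_DAYS = 16.35
ACTIVE_DAYS = 4.0
EPOCH = datetime(2018, 9, 16)   # hypothetical cycle-start epoch

def phase(t: datetime) -> float:
    """Phase in [0, 1) within the 16.35-day cycle, measured from EPOCH."""
    days_since = (t - EPOCH).total_seconds() / 86400.0
    return (days_since % PERIOD_DAYS) / PERIOD_DAYS

def is_active(t: datetime) -> bool:
    """True if t falls inside the ~4-day active window of the cycle."""
    return phase(t) < ACTIVE_DAYS / PERIOD_DAYS

print(is_active(EPOCH + timedelta(days=2)))    # True: 2 days into the active window
print(is_active(EPOCH + timedelta(days=10)))   # False: inside the ~12 quiet days
```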
Structure
The 4-day radiation burst is not homogeneous but is instead characterized by a pattern of sub-bursts. The pattern of radiation activity within the four-day bursts is never exactly repeated. However, there is enough similarity (i.e. alignment of the sub-bursts from period to period) to suggest that they form part of an original repeating pattern with internal structure of some complexity. In March 2021, astronomers reported that the area producing pulses of FRB 180916 is about in scale, based on studies at extremely short timescales.
References
Notes
Fast radio bursts
Cassiopeia (constellation) | FRB 180916.J0158+65 | [
"Physics",
"Astronomy"
] | 760 | [
"Physical phenomena",
"Cassiopeia (constellation)",
"Astronomical events",
"Fast radio bursts",
"Constellations",
"Stellar phenomena"
] |
63,096,878 | https://en.wikipedia.org/wiki/Livestreamed%20news | Livestreamed news refers to live video streams of television news provided via streaming television or streaming media by various television networks and television news outlets from various countries. The majority of live news streams are produced as world news broadcasts by major television networks or major news channels; however, some live news streams are produced by individual local television channels as well.
A live news stream is distinct from news broadcasts transmitted via conventional broadcast television, since it is delivered neither through cable television services nor over the air. Live streams are instead provided through smart TVs, the networks' own websites, internet television platforms such as YouTube, video on demand services, subscription video on demand websites such as Hulu, mobile apps, or digital media players designed to play streaming television, such as the Roku media player.
For some twenty-four-hour news channels, the content shown via the streaming news service and via the broadcast television channels may be identical; however, for regular commercial networks, the streamed content may be quite different from what is being broadcast: the broadcast channels may regularly carry standard television entertainment, while the streaming service is devoted to news only. One example is the American broadcaster ABC Television Network and its online ABC News Live streaming service.
News sources by region
North American news outlets
Various networks and news outlets in North America have provided official live video streams of news for most or all of the day, as described below.
The ABC Television Network has provided a live streaming service of world news, known as "ABC News Live," for eighteen hours per day, since 2018. This is available via ABC's official platform on Hulu, as well as the network's official YouTube channel.
In 2014, the CBS Television Network launched a live streaming news service, entitled "CBSN." This livestream is transmitted 15 hours per day. It is available via the CBS website, its official YouTube channel, various mobile apps, various TV services such as Apple TV and Amazon Fire TV.
In May 2019, the NBC Television Network launched a livestream news service, "NBC News Now," with eight hours of news per day. It is available via the NBC website, mobile apps, and various TV services.
The Canadian Broadcasting Corporation streams its programs through several online platforms, including its website, its official YouTube channel, its official mobile app, and several streaming tv services.
Bloomberg Television provides a livestream through its website, its official YouTube channel, and various mobile apps and streaming television services. Bloomberg Television offers some off-air news updates via social media, including Facebook and Twitter. Rebroadcasts of news and other special programs are additionally aired on the station's official YouTube channel, "Bloomberg Television". On mobile devices, Bloomberg Television released an app for the iPad and Apple TV. It is also available for free viewing on the Pluto TV streaming service.
Yahoo! Finance has a live continuous video stream of world news. This can mainly be viewed on its official channel on YouTube.
European news outlets
Various news outlets in various European countries provide live 24/7 news streams.
Euronews provides several live streaming channels via YouTube, in different languages for different countries.
Sky News, United Kingdom. In 2009, Sky News began to provide 24-hour news streaming via its website, and via its YouTube channel.
Deutsche Welle News, Germany. In 2015, Deutsche Welle began to provide livestreamed TV news on a 24-hour basis. This is available via its website and its YouTube channel. Carsten von Nahmen, the head of DW News and Current Affairs, said that the network's goal was to compete for audiences in markets around the world, and to find relevance with audiences in Africa and Asia.
France 24, France. In 2006, France 24 launched a 24-hour streaming world news channel in English. The video stream is currently provided via its YouTube channel, along with its French, Arabic and Spanish channels. The stated mission of the channels is to "provide a global public service and a common editorial stance".
GB News broadcasts live through its official website and YouTube channel.
Asian and Middle Eastern news outlets
Al Jazeera provides a livestream via its own website or at its channel on YouTube. Al Jazeera English HD launched in the United Kingdom on Freeview on 26 November 2013, and began streaming in HD on YouTube in 2015.
CCTV of China provides a livestream of world news via its website, and via its official channel on YouTube.
CNA news channel is based in Singapore, and provides a live newstream via its official online portal, and its social media presence through Facebook, Instagram, YouTube and Twitter as well as apps for tablets and mobile devices to allow viewing at any time.
i24NEWS started broadcasting in Israel on 29 August 2018. It is offered in France (including the French West Indies), Belgium, Luxembourg, Switzerland, Italy, Spain, Portugal, Poland and across the African continent. The channel is also streamed live on its website in all three languages, and on its official YouTube channel.
Indus News of Pakistan provides an English-language livestream of world news, via its official channel on YouTube.
NHK World-Japan provides a livestream of world news via its website and via mobile apps.
TRT, the official national Turkish broadcaster, provides a livestream of its English-language news channel TRT World via its website, and via YouTube, since 2017.
Indian news outlets
Republic TV, provides online streaming of world and domestic news in India.
NDTV, has 24/7 online news on its website.
The World is One News (WION) provides live streaming news, via its website and via its YouTube channel.
Oceanian news outlets
The ABC News channel, based in Australia, can be streamed online at the ABC's website and on YouTube. Its livestream on YouTube is available internationally; however, its ABC iView stream is only available in Australia. Unlike other programming on iView, it is not currently offered as unmetered content by any internet service providers. The ABC News channel stream is available in medium and high bandwidth varieties on the iView site.
History and development
European channels
France 24 launched on 6 December 2006, initially available online as a web stream, followed by satellite distribution a day later, covering France and the rest of Europe, the Middle East, Africa and the United States (specifically airing in New York State and the District of Columbia) using two channels: one in English and the other in French. Since April 2007, the channel has increased its reach, airing programmes in Arabic for viewers in the Maghreb, North Africa and the Middle East, and since 2017 in Spanish for viewers in Latin America.
On 19 June 2013, Sky News International was added to Apple TV for users in the UK, Ireland, and the United States. Viewers can watch clips or live streaming of the channel at no charge. On 24 July 2013, it was added to the Roku streaming player. Sky News International is available on news.sky.com to viewers around the world. On 30 September 2014, Sky News began live streaming the channel on YouTube. The free streaming service Pluto TV also offers a live feed of Sky News to American users on channel 135.
North American channels
CBSN
CBSN was created by the CBS Television Network in 2014, and was the first streaming news service from one of the three major broadcasting networks in the United States to stream throughout the day, enabling viewers to watch live news coverage on connected TVs, mobile devices, and other devices.
Rumors that CBS News was preparing a 24-hour online news service were first reported by BuzzFeed in October 2013, and later confirmed by a CBS spokesperson who stated that the company was seeking "partners" for the service. Initial reports suggested that the service would consist of a linear, multi-platform streaming channel, featuring video content from other CBS News productions, along with other online-exclusive content; The New York Times likened the rumored format to an all-news radio station, combining pre-recorded video content with regular, live news updates. On 15 May 2014, CBS Corporation CEO Leslie Moonves confirmed in an interview with Bloomberg Television that the company was working on the service. Describing it as an "exciting alternative to cable news", he went on to say that "there is so much information that we get every day that doesn’t fit into a 22-minute newscast at 6:30 or CBS This Morning."
In October 2014, Capital New York reported that CBS had recently filed for trademarks on the name CBSN as a potential name for the service. It also reported that the content would take place in an informal newsroom setting, and that its interface would consist of a video player with a playlist on a sidebar, and feature social network integration. On 5 November 2014, during a Re/code conference in Dublin, CBS Interactive President Jim Lanzone announced that the service would officially launch on 6 November 2014. CBS News President David Rhodes explained that CBSN was not designed to compete directly with traditional pay-television news outlets, but to "create something that is native for connected devices", such as smartphones, tablets, and digital media players.
There was also an emphasis placed on targeting younger viewers, particularly those who are in places with little or no access to television, or those who do not subscribe to pay television at all. As opposed to CNNGo, a similarly-formatted TV Everywhere service introduced by CNN prior to the launch of CBSN, CBSN is available at no charge and does not require users to authenticate with a subscription to a pay television provider. Rhodes argued that requiring authentication would hamper the service's viewership. CBSN uses commercial breaks similar to a conventional television channel; Amazon.com and Microsoft were among the service's initial advertisers.
See also
Television news
Live stream
Video streaming
Streaming media
Streaming television
Content delivery network
Digital television
Interactive television
Internet radio
Home theatre PC
Push technology
Smart TV
Multicast
P2PTV
Protection of Broadcasts and Broadcasting Organizations Treaty
Specific platforms
Comparison of digital media players
Comparison of streaming media systems
Comparison of video hosting services
List of free television software
List of online video platforms
List of smart TV platforms
List of streaming media systems
References
External links
Broadcaster websites
Official channel for ABC News, at YouTube website.
Reference articles
Video Format for Live Streaming
Free Live TV News to Stream Right Now: Watch CBS, NBC, Bloomberg on your mobile device or TV. by Ty Pendlebury, March 7, 2022 cnet.com.
Coverage of individual channels
CBS just launched a 24-hour streaming news channel, By Jacob Kastrenakes, Nov 6, 2014, theverge.com.
Television news
Live streaming services
Streaming media systems
Smart TV
Internet broadcasting
Streaming television
Streaming
Digital media players | Livestreamed news | [
"Technology"
] | 2,223 | [
"Computer systems",
"Streaming media systems",
"Telecommunications systems",
"Multimedia",
"Smart TV"
] |
63,096,966 | https://en.wikipedia.org/wiki/Heinrich%20Rohrer%20Medal | Heinrich Rohrer Medals are a series of awards presented to celebrate the late Nobel laureate Heinrich Rohrer for his work in the fields of nanoscience and nanotechnology, and specifically for co-creating the scanning tunneling microscope. Medals are awarded triennially by the Surface Science Society of Japan together with IBM Research – Zurich, the Swiss Embassy in Japan, and Ms. Rohrer. The Grand Medal is for a researcher who has made "distinguished achievements in the field of nanoscience and nanotechnology based on surface science", and can be awarded to several individuals. The Rising Medal is presented to up to three researchers of no more than 37 years of age, each for a different topic; it is given for their outstanding efforts, with the expectation that they will continue to actively work in their respective fields. Medals come with a framed certificate and a cash prize of JPY 1,000,000 for the Grand Medal and JPY 300,000 for the Rising Medal.
Awards were presented in 2014 and 2017, and the next medals are scheduled to be presented in November 2020 at the 9th International Symposium on Surface Science (ISSS9) in Takamatsu, Japan, where the 2020 laureates are requested to give award lectures.
Laureates
Grand Medal
Rising Medal
See also
Feynman Prize in Nanotechnology
Kavli Prize
List of physics awards
References
Awards established in 2013
Science and technology awards | Heinrich Rohrer Medal | [
"Technology"
] | 287 | [
"Science and technology awards"
] |
63,097,242 | https://en.wikipedia.org/wiki/Anthrax%20weaponization | Anthrax weaponization is the development and deployment of the bacterium Bacillus anthracis or, more commonly, its spore (referred to as anthrax), as a biological weapon. As a biological weapon, anthrax has been used in biowarfare and bioterrorism since 1914. However, in 1975, the Biological Weapons Convention prohibited the "development, production and stockpiling" of biological weapons. It has since been used in bioterrorism.
Anthrax spores can cause infection from inhalation, skin contact, ingestion or injection and when untreated can lead to death. Likely delivery methods of weaponized anthrax include aerial dispersal or dispersal through livestock. Notable bioterrorism uses include the 2001 anthrax attacks and an incident in 1993 by the Aum Shinrikyo group in Japan.
Biological overview
Concentrated anthrax spores, and not necessarily the bacterium Bacillus anthracis itself, pose the biggest risk to humans as a biological weapon. When airborne, anthrax spores are not easily detectable and, being several microns in diameter, are able to reach deep into the lungs when inhaled. Once in the lungs, the spores replicate in the blood, travel to the lymph nodes, and produce toxins which lead to death. Post-exposure symptoms resemble a flu-like illness followed by a fulminant phase of severe acute respiratory distress, shock and, ultimately, death.
Potential threats
Anthrax spores can be dispersed via multiple methods and infect humans with ease. The symptoms present like a common cold or flu, and may take weeks to appear. An anthrax attack on a large city may have the destructive capacity of a nuclear weapon.
Population
A mathematical model of a simulated large-scale airborne anthrax attack on a large city (1 kg of anthrax spores in a city of 10 million people) was created, taking into account the dispersion of spores, the age-dependent dose-response, the dynamics of disease progression, and the timing and organization of medical intervention. With the most efficient medical response, this model predicts more than deaths, a figure that increases by a factor of 7 with slower antibiotic distribution.
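A minimal, purely illustrative sketch of the kind of calculation such a model combines: an exponential dose-response curve (a common simple form) together with a crude treatment-coverage factor. None of these parameter values (dose scale, case fatality, treatment efficacy, doses) come from the cited study; all are hypothetical, chosen only to show how slower antibiotic distribution scales expected deaths.

```python
import math

# Illustrative sketch only -- NOT the published model; all numbers are assumptions.

K_SPORES = 8_000.0   # hypothetical dose scale for the exponential dose-response

def p_infection(dose_spores: float, k: float = K_SPORES) -> float:
    """Exponential dose-response: probability of infection for an inhaled dose."""
    return 1.0 - math.exp(-dose_spores / k)

def expected_deaths(exposed: int, dose: float, treated_fraction: float,
                    treatment_efficacy: float = 0.9) -> float:
    """Crude expected deaths: infections times the untreated-or-failed fraction."""
    fatality_if_untreated = 0.9                      # assumed case fatality
    saved = treated_fraction * treatment_efficacy    # fraction rescued by antibiotics
    return exposed * p_infection(dose) * fatality_if_untreated * (1.0 - saved)

# Slower antibiotic distribution (lower treated fraction) scales deaths up:
fast = expected_deaths(1_000_000, dose=5_000, treated_fraction=0.95)
slow = expected_deaths(1_000_000, dose=5_000, treated_fraction=0.40)
print(f"{slow / fast:.1f}x more deaths with slow distribution")  # 4.4x here
```

Even this toy version reproduces the qualitative conclusion of the cited model: the death toll is dominated by how quickly prophylaxis reaches the exposed population.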
Economy
Beyond the initial threat to individuals, there are the costs of economic disruption, decontamination and treatment from such an event. Over 100 million dollars was spent to decontaminate postal plants after the 2001 anthrax attacks, where the contamination is thought to have been less than 1 gram of anthrax spores in the facilities. Decontaminating the Hart Senate Office Building after the same attacks cost approximately 23 million dollars, with approximately 2 grams of anthrax spores present.
Containment treatment and avoidance
Detection of airborne anthrax requires 24–48 hours; rapid detection in the atmosphere is not yet technologically effective. The system put in place by the United States on 22 January 2003 to assist in detecting an airborne anthrax attack is the U.S. BioWatch Surveillance Network, which is able to detect airborne anthrax within 24–48 hours, though with some false positives and false negatives, leading to severe lag in detection and critical time lost for prevention and treatment.
Vaccination against anthrax is available, requiring 6 shots over an 18-month period and annual booster shots for full immunity. Vaccination of military personnel and first responders is vital to sustain a post-attack response. Complete vaccination of an entire population could be achieved over a period of years, resulting in a reduction of anthrax risk comparable to the reduction of nuclear-weapon risk by anti-ballistic missile systems.
Once exposure occurs and before the fulminant stage, treatment with ciprofloxacin (400 mg) or doxycycline (100 mg) intravenously twice daily, together with two other antibiotics (clindamycin, vancomycin, imipenem, meropenem, chloramphenicol, penicillin, rifampicin or clarithromycin), plus close clinical observation for a 60–100 day period, is recommended.
Methods of dispersal
Aerial
The passive dispersal of anthrax spores aerially has occurred from rooftops (Aum Shinrikyo) and from aircraft (Operation Cherry Blossoms at Night), or could potentially be achieved, as suggested in 2002 by United States President George W. Bush, with "a small container and one terrorist".
Missile
An intercontinental ballistic missile warhead containing anthrax may be able to effectively disperse anthrax spores. North Korea is believed to be conducting tests on anthrax-filled warheads, which may be deployed on Hwasong-15 missiles and could be used to contaminate areas, such as military bases, in a time of war for periods of months. Concerns about the reentry temperature and pressure experienced by an anthrax-filled warhead can be overcome by thermal insulation of the payload.
Bomb
Similar to a warhead, an anthrax-filled bomb, such as the E61 anthrax bomblet or another N-bomb cluster munition filled with anthrax spores, could allow an area to remain contaminated for months or decades. In the case of Gruinard Island, testing of N-bomb cluster munitions containing anthrax spores contaminated the island from 1942 until a decontamination effort in 1986.
Livestock
Anthrax spores are not only able to be used as a weapon to directly infect humans; they are also able to target livestock, which may lead to transmission of anthrax between animals and humans. In this approach, the infected livestock inevitably become a dispersal mechanism for anthrax themselves, while the livestock are also lost. Infection can also be achieved by direct feeding, such as the "cattle cakes" containing anthrax spores that were kept on hand by the Royal Air Force for aerial dispersal during the Second World War, to be used in retaliation for any biological warfare by Nazi Germany.
1978–1984 Rhodesian anthrax epidemic
The largest anthrax epidemic in the last 200 years occurred in Rhodesia (now Zimbabwe) in the 1980s; there may be evidence of deliberate anthrax releases by Rhodesian and South African forces, and the episode is the progenitor of South Africa's biological weapons program (Project Coast). This epidemic was responsible for cattle and 200 human fatalities.
History of diplomacy
1925 Geneva Protocol
As a response to the biological and chemical atrocities of the First World War, the Geneva Protocol was created. It prohibited the use of "Asphyxiating, Poisonous, or other Gases or Bacteriological Methods of Warfare". However, this treaty did not prohibit the production or research of biological agents, and reservations were made to allow the use of biological weapons in retaliation.
1969 Nixon terminates United States biological weapons program
An executive order by United States president Richard Nixon in 1969 terminated the United States' biological weapons program. This led to the destruction of the biological weapons arsenal and the termination of research and production of biological weapons. The change also freed resources for the research and creation of defensive measures such as "vaccines, treatments and diagnostic tests" against biological weapons.
1972–1975 Biological Weapons Convention
The Biological Weapons Convention is a treaty that prohibits the "Development, Production and Stockpiling of Biological and Toxin Weapons", including anthrax, and requires the destruction of those which already exist. The treaty was created based on proposals by Great Britain and the Warsaw Pact nations; it was opened for signature in April 1972 and went into force in 1975. Over 100 nations signed the treaty, including the Soviet Union, the United Kingdom, the United States of America, Brazil and Iraq.
History of use and development
1914–1918 First use as an act of aggression
During the first World War, evidence suggests that the German army used anthrax to infect the livestock of Allied Nations, resulting in the death of many livestock intended for trade between allied forces.
1932–1945 Japanese testing and attacks
In 1932, Japan began testing anthrax as a weapon by infecting prisoners held in Manchuria as part of its biological weapons program. During this program, the Japanese used aircraft to attack at least 11 Chinese cities by spraying homes directly with anthrax.
1942 International biological weapons Programs
As a response to possible attacks from Germany, the United States, Great Britain and Canada started biological weapons programs. Experimentation with anthrax bombs occurred in Mississippi, Utah and on Gruinard Island in Scotland.
Gruinard Island
In 1942 and 1943, N-bomb cluster munitions containing anthrax spores were detonated over Gruinard Island as part of a joint research program between the United States, Canada and Great Britain. Eighty sheep were placed on the island prior to the dispersal of aerosolized anthrax; all of them died. Interest in Gruinard Island returned in the early 1980s, when a survey discovered that anthrax contamination persisted in the environment, showing the long-term effects of anthrax used as a biological weapon. In 1986, Great Britain decontaminated the island with a mixture of formaldehyde and seawater; it was passed as safe by a group of scientists led by the secretary of the Agricultural and Food Research Council in 1988, after 40 sheep had been raised on the island for several months without symptoms of anthrax infection.
Vigo Ordnance Plant
In 1944, the US converted the Vigo Ordnance Plant, Terre Haute, Indiana, to mass produce biological agents for the U.S. bio-weapons program. Specifically, the intention was to use the plant to produce anthrax bombs at industrial scale. Although the Vigo plant never actually produced bio-weapons before the end of World War II, based on preliminary studies performed at Camp Detrick (now Fort Detrick), it did produce 8000 pounds of the anthrax simulant, Bacillus atrophaeus (then termed Bacillus globigii), which was used in weapons development testing.
1950 United States biological weapons program expansion
Programs were expanded during the Korean War in order to protect US troops against biological agents, where a program was added for the development of vaccines and other treatments.
1979 Accidental outbreak in Sverdlovsk USSR
In April and May 1979, an anthrax outbreak was reported in the city of Sverdlovsk (population 1.2 million). 96 cases of anthrax infection were reported, of which 79 were gastrointestinal and 17 were cutaneous; 64 of the 96 infected people died within a period of weeks. Soviet reports in 1979 denied the manufacture of biological weapons and claimed that the anthrax outbreak originated from livestock, but in 1992 the president of Russia, Boris Yeltsin, confirmed that the outbreak originated from a Soviet military microbiological facility within 4 kilometers of the city and resulted from improper installation of air filters at the facility.
1993 Aum Shinrikyo unsuccessful attempt in Tokyo
In 1993, the Aum Shinrikyo cult released anthrax spores from the roof of an eight-story building in downtown Tokyo. Upon investigation, the spores used were found to be from the 'Sterne strain' of anthrax, an attenuated bacterium used to vaccinate animals.
1995 Iraq biological weapons program
In 1995, UNSCOM inspectors discovered that Iraq had a biological warfare program, despite the 1991 agreement ending the Gulf War, which required that all programs involving weapons of mass destruction be accounted for and ended.
In July 1995, defectors who had run Iraq's biological warfare program confirmed documents showing that the program had produced a large variety of biological weapons, including anthrax, which could be delivered by missiles, bombs and aerosols. It was also discovered that an arsenal of these weapons existed in 1991.
2001 Anthrax letters
After the attacks of September 11 on the United States, letters containing a powdered form of anthrax were delivered to two U.S. Senators' offices and several media agencies. Delivery of these letters contaminated the postal facilities and buildings they passed through.
The powdered anthrax was able to disperse into the air without being detected and eventually be inhaled. 43 people tested positive for anthrax exposure and 22 cases of anthrax illness were diagnosed, of which 11 were inhalational anthrax and 11 were cutaneous anthrax. Five people from this group died.
See also
Smallpox
Plague
Botulism
Viral hemorrhagic fever
References
Anthrax
Bioterrorism | Anthrax weaponization | [
"Biology"
] | 2,589 | [
"Bioterrorism",
"Biological warfare"
] |
63,097,322 | https://en.wikipedia.org/wiki/Alfv%C3%A9n%20resonator | An Alfvén resonator or Ionosphere Alfvén resonator is a spectral resonance structure found within geomagnetic fields in the frequency range of 0.1–10 Hz. First reported in 1989, they are ionospheric short-period geomagnetic variations primarily seen as nighttime phenomena and rarely observed during the day. The nighttime preference is due to lower electrical conductivity in the ionospheric dynamo region, which enables the feedback instability.
See also
Earth–ionosphere waveguide
References
Physical chemistry
Ionosphere | Alfvén resonator | [
"Physics",
"Chemistry",
"Astronomy"
] | 109 | [
"Applied and interdisciplinary physics",
"Plasma physics",
"Astronomy stubs",
"Astrophysics",
"Astrophysics stubs",
"Plasma physics stubs",
"Physical chemistry"
] |
63,097,582 | https://en.wikipedia.org/wiki/NGC%203686 | NGC 3686 is a spiral galaxy that forms with three other spiral galaxies, NGCs 3681, 3684, and 3691, a quartet of galaxies in the Leo constellation. It was discovered on 14 March 1784 by William Herschel. It is a member of the NGC 3607 Group of galaxies, which is a member of the Leo II Groups, a series of galaxies and galaxy clusters strung out from the right edge of the Virgo Supercluster.
References
External links
Leo (constellation)
3686
17840314
Barred spiral galaxies
035268 | NGC 3686 | [
"Astronomy"
] | 118 | [
"Leo (constellation)",
"Constellations"
] |
63,097,990 | https://en.wikipedia.org/wiki/Poole%27s%20multiple%20sequence%20model | Poole's multiple sequence model is a communication theory approach developed by Marshall Scott Poole in 1983. The model focuses on decision-making processes in groups, and rejects other widely held communication theories in favor of less linear decision-making processes. The multiple sequence model suggests that group activity needs a changing development of communication. This model has three specific parts: developing strands, emphasis on task accomplishment, and tracks of group activity.
Overview
Poole's multiple sequence model states that different groups make decisions through the applications of different sequences. This model rejects the idea that decision making occurs in separate, succinct phases, as other rational phase models suggest. Rather, Poole theorized that decision making occurs in clusters of linking communication. The multiple sequence model defines different contingency variables such as group composition, task structure, and conflict management approaches, which all affect group decision-making. This model consists of 36 clusters for coding group communication and four cluster-sets: proposal growth, conflict, socio-emotional interests, and expressions of uncertainty. By coding group decision making processes, Poole identified a set of decision paths that are usually used by groups during decision making processes.
This theory also consists of various tracks that define different stages of interpersonal communication, problem solving, and decision making that occur in group communication. These tracks are the task track, relation track, and topic track. The task track begins with an understanding period. This is when a group decides how they will solve a problem. The relation track focuses on interpersonal relationships between group members. This track suggests that as group members spend more time together, they will form deeper relationships which aid in group communication. The topic track focuses on issues that may arise among groups which affect group communication. Task is defined in two dimensions: difficulty, which is the amount of effort required to complete the task, and coordination requirements, which is the degree to which integrated action of group members is required to complete the task. In addition to these tracks, the multiple sequence model also contains break points, which are the points within group communication at which groups shift from one task to another. Identifying breakpoints allows the researcher to identify critical incidents or turning points in group activity.
Development
The development of the multiple sequence model stemmed from Poole finding stage models to be too linear and based on systematized logic. Poole believed that decisions are based on many different activities and forms of communication, differing from the previous stage models other theorists were following. Poole's research and development of this model came from researching why the phasic model did not work. He looked at three tracks of group activity, many different breakpoints that signify changes in the development, and a model for task accomplishment.
References
Communication theory
Human communication
Linguistic theories and hypotheses | Poole's multiple sequence model | [
"Biology"
] | 551 | [
"Human communication",
"Behavior",
"Human behavior"
] |
63,098,009 | https://en.wikipedia.org/wiki/Yaravirus | Yaravirus is an amoebic virus (a virus that reproduces in amoeba) discovered in the waters of Lake Pampulha in Minas Gerais, Brazil, in 2020. The virus was found to be significantly smaller than any known amoebic virus, and is notable in that 90% of its genome appears to have no homology to previously sequenced protein sequences in other organisms. The organism was named after the Brazilian mythological figure, Iara.
One author described the virus as one that "simply makes no sense", and as "an extreme example", noting that "of Yaravirus's 74 genes, 68 are unlike any ever found in any virus". With respect to efforts by scientists to develop a megataxonomy of viruses, Yaravirus was described as "lonely and unclassifiable". Another analysis describes the virus as "either highly reduced and divergent NCLDVs or, more probably, the first non-NCLDV isolated from Acanthamoeba species", also noting "an ATPase most similar to the mimivirus homologue" and a major capsid protein phylogeny that is "not compatible with that of the ATPase phylogeny", suggesting that the virus originated through a horizontal gene transfer.
References
Further reading
Brazilian scientists announced the discovery of a new amoebic "Yaravirus" in Lake Pampulha. (bioRxiv)(Science magazine)
Unaccepted virus taxa
Biota of Brazil
Taxa described in 2020
Bamfordvirae
Virus genera | Yaravirus | [
"Biology"
] | 327 | [
"Viruses",
"Controversial taxa",
"Virus stubs",
"Unaccepted virus taxa",
"Biota of Brazil",
"Biota by country",
"Biological hypotheses"
] |
63,098,501 | https://en.wikipedia.org/wiki/Qbox | Qbox is an open-source software package for atomic-scale simulations of molecules, liquids and solids. It implements first principles (or ab initio) molecular dynamics, a simulation method in which inter-atomic forces are derived from quantum mechanics. Qbox is released under a GNU General Public License (GPL) with documentation provided at http://qboxcode.org. It is available as a FreeBSD port.
Main features
Born–Oppenheimer molecular dynamics in the microcanonical (NVE) or canonical (NVT) ensemble
Car-Parrinello molecular dynamics
Constrained molecular dynamics for thermodynamic integration
Efficient computation of maximally localized Wannier functions
Local, semi-local and hybrid density functional approximations (LDA, PBE, SCAN, PBE0, B3LYP, HSE06, ...)
Electronic structure in the presence of a constant electric field
Computation of the electronic polarizability
Electronic response to arbitrary external potentials
Infrared and Raman spectroscopy
Methods and approximations
Qbox computes molecular dynamics trajectories of atoms using Newton's equations of motion, with forces derived from electronic structure calculations performed using Density Functional Theory. Simulations can be performed either within the Born–Oppenheimer approximation or using Car–Parrinello molecular dynamics. The electronic ground state is computed at each time step by solving the Kohn–Sham equations. Various levels of Density Functional Theory approximations can be used, including the local-density approximation (LDA), the generalized gradient approximation (GGA), or hybrid functionals that incorporate a fraction of Hartree–Fock exchange energy. Electronic wave functions are expanded in a plane-wave basis set. The electron–ion interaction is represented by pseudopotentials.
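The overall time-stepping loop described above can be illustrated generically. The sketch below is not Qbox code: it is a plain velocity-Verlet integrator for a single degree of freedom, with a toy harmonic force standing in for the Kohn–Sham ground-state calculation that an ab initio code performs at every step.

```python
def velocity_verlet(x, v, mass, force, dt, n_steps):
    """Generic velocity-Verlet integrator for one degree of freedom.
    In ab initio MD, force(x) would be a full electronic-structure
    (Kohn-Sham ground state) calculation, not a cheap function."""
    f = force(x)
    traj = [x]
    for _ in range(n_steps):
        v += 0.5 * dt * f / mass   # half-kick with old force
        x += dt * v                # drift
        f = force(x)               # in Qbox: solve Kohn-Sham equations here
        v += 0.5 * dt * f / mass   # half-kick with new force
        traj.append(x)
    return traj

# toy stand-in force: a harmonic well (a real DFT force is far costlier)
traj = velocity_verlet(x=0.0, v=1.0, mass=1.0,
                       force=lambda x: -x, dt=0.05, n_steps=200)
print(max(abs(x) for x in traj))  # close to the analytic amplitude 1.0
```

With unit mass and spring constant, the exact trajectory is x(t) = sin(t) with amplitude 1, so the near-conservation of that amplitude is a quick sanity check on the integrator.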
Examples of use
Electronic properties of nanoparticles
Electronic properties of aqueous solutions
Free energy landscape of molecules
Infrared and Raman spectra of hydrogen at high pressure
Properties of solid-liquid interfaces
Code architecture and implementation
Qbox is written in C++ and implements parallelism using both the message passing interface (MPI) and the OpenMP application programming interface. It makes use of the BLAS, LAPACK, ScaLAPACK, FFTW and Apache Xerces libraries. Qbox was designed for operation on massively parallel computers such as the IBM Blue Gene supercomputer, or the Cray XC40 supercomputer.
In 2006 it was used to establish a performance record on the BlueGene/L computer installed at the Lawrence Livermore National Laboratory.
Interface with other simulation software
The functionality of Qbox can be enhanced by coupling it with other simulation software using a client-server paradigm. Examples of Qbox coupled operation include:
Free energy computations: Coupled with the Software Suite for Advanced Ensemble Simulations (SSAGES).
Quasiparticle energy computations: Coupled with the WEST many-body perturbation software package.
Path integral quantum simulations: Coupled with the i-PI universal force engine.
See also
List of quantum chemistry and solid-state physics software
Density Functional Theory
References
External links
Computational chemistry software
Physics software
Free physics software | Qbox | [
"Physics",
"Chemistry"
] | 632 | [
"Computational chemistry software",
"Chemistry software",
"Computational physics",
"Computational chemistry",
"Physics software"
] |
47,618,107 | https://en.wikipedia.org/wiki/Doering%E2%80%93LaFlamme%20allene%20synthesis | In organic chemistry, the Doering–LaFlamme allene synthesis is a reaction of alkenes that converts them to allenes by insertion of a carbon atom. This name reaction is named for William von Eggers Doering and a co-worker, who first reported it.
The reaction is a two-stage process, in which first the alkene is reacted with dichlorocarbene or dibromocarbene to form a dihalocyclopropane. This intermediate is then reacted with a reducing metal, such as sodium or magnesium, or with an organolithium reagent. Either approach results in metal-halogen exchange to convert the gem-dihalogenated carbon to a 1-metallo-1-halocyclopropane. This species undergoes α-elimination of metal halide and ring-opening via an electrocyclic reaction (at least formally) to give the allene. Several different mechanisms for the electrocyclic rearrangement have been studied.
In a study in which an enantioenriched substituted cyclopropyl Grignard reagent was prepared, the reaction was shown to give the allene with very high levels of enantiospecificity, suggesting a concerted mechanism. Similarly, in a computational study of the bromolithiocyclopropane, a concerted mechanism was found to be favored. A discrete cyclopropylidene carbene was found to be unlikely, although early ejection of LiBr (roughly simultaneous to C–C bond scission and before formation of the orthogonal pi bonds of the allene) was suggested.
References
Name reactions | Doering–LaFlamme allene synthesis | [
"Chemistry"
] | 353 | [
"Name reactions"
] |
47,618,351 | https://en.wikipedia.org/wiki/Nihon%20Hidankyo | The Japan Confederation of A- and H-Bomb Sufferers Organizations, often shortened to Nihon Hidankyo, is a group that represents survivors (known as hibakusha) of the atomic bombings of Hiroshima and Nagasaki. It was formed in 1956.
Nihon Hidankyō lobbies both the Japanese government for improved support of the victims and governments worldwide for the abolition of nuclear weapons. Their activities included recording thousands of witness accounts, issuing resolutions and public appeals, and sending annual delegations to various international organisations, including the United Nations, to advocate for global nuclear disarmament.
The organisation was awarded the 2024 Nobel Peace Prize "for its efforts to achieve a world free of nuclear weapons and for demonstrating through witness testimony that nuclear weapons must never be used again".
History
Nihon Hidankyo is a nation-wide organisation formed by survivor groups of atomic bomb victims from Hiroshima and Nagasaki in each prefecture. The fallout from Castle Bravo, a thermonuclear weapon test conducted at Bikini Atoll by the United States in 1954, caused acute radiation syndrome in residents of neighbouring atolls and 23 crew members of the Japanese fishing vessel Daigo Fukuryū Maru. This led to the formation of the Japan Council against Atomic and Hydrogen Bombs in Hiroshima the following year. Inspired and supported by this movement, atomic bomb survivors established Nihon Hidankyo on 10 August 1956, at the second annual conference of the council in Nagasaki.
However, the movement's solidarity was jeopardised when the council became actively involved in the anti-U.S.-Japan Security Treaty movement alongside the left-leaning Japan Socialist Party in 1959. A large number of supporters withdrew from the council, and with the support of the conservative Liberal Democrats, a new organisation, led by Masatoshi Matsushita, leader of the staunchly anti-communist Democratic Socialist Party, was established. In 1961, when the Soviet Union resumed nuclear tests, the communist wing of the council refused to denounce them, which led to severe internal tension. This led to a further split in the movement, with a Japan Socialist Party-backed group that denounces nuclear tests by any nation breaking away as a new council. These tensions within anti-nuclear movements caused some prefectural Hidankyos to split at the local level as well, such as in Hiroshima, where there are both Socialist Party-backed and Communist Party-backed Hidankyos with the same name. The nationwide organisation itself decided not to align with any political movements in 1965, after they became highly politicised.
Activities
As of October 2024, Nihon Hidankyo's activities include:
Advocacy for the abolition of nuclear weapons and demands for state compensations
Petitioning actions towards the Japanese government, the United Nations and other governments
Elimination and removal of nuclear weapons, establishment of an international treaty for nuclear disarmament, holding of international conferences, enactment of non-nuclear laws and enhancement of hibakusha support measures
Raising awareness of the realities of the atomic bombings both domestically and internationally
Research, study, publication, exhibitions and gatherings on atomic bomb damage
Consultation and support activities for hibakusha
Key figures
Current officials
Co-chairs
Terumi Tanaka: Exposed to radiation 3.2 km away from the Nagasaki hypocentre at the age of 13; assumed office on 14 June 2017
Shigemitsu Tanaka: Exposed to radiation 6 km away from the Nagasaki hypocentre at the age of 4; assumed office on 14 June 2018
Toshiyuki Mimaki: Exposed to radiation at his home in Hiroshima at the age of 3; assumed office on 9 June 2022
Secretary general
Sueichi Kido: Exposed to radiation in Nagasaki at the age of 5; assumed office on 7 June 2017
Assistant secretaries general
Toshiko Hamanaka
Jiro Hamasumi
Michiko Kodama
Masako Wada
Former officials
Ichiro Moritaki - Founding chairperson of Nihon Hidankyo. Founding chairperson of the Hiroshima Prefectural Hidankyo, a committee head of the Atomic Water Association, and the third chairperson of the Japan National Conference on the Prohibition of Atomic and Hydrogen Bombs. Died January 25, 1994.
Sumiteru Taniguchi: Severely injured by the Nagasaki bomb 1.8 km away from the hypocentre at the age of 16; co-chairperson until his death on 20 August 2017
Takeshi Ito: Born in Hiroshima City. Ito was exposed to the atomic bomb during his third year at the former Hiroshima Prefectural First Middle School; co-chairperson until his death on 3 March 2000.
Sunao Tsuboi: Severely injured by the Hiroshima bomb 1.5 km away from the hypocentre at the age of 20; co-chairperson until his death on 24 October 2021
Mikiso Iwasa: Severely injured by the Hiroshima bomb at his home 1.2 km away from the hypocentre at the age of 16; co-chairperson until his death on 7 September 2020
Honors
2003: Seán MacBride Peace Prize
2010: Award for Social Activism from the World Summit of Nobel Peace Laureates
2024: Nobel Peace Prize
Before being awarded the Nobel Peace Prize, Nihon Hidankyo was also nominated in 1985, 1994 and 2015 by the Swiss-based International Peace Bureau.
See also
Anti-nuclear movement
Anti-nuclear power movement in Japan
International Campaign to Abolish Nuclear Weapons
Treaty on the Prohibition of Nuclear Weapons
References
External links
Nihon Hidankyō
HIBAKUSHA – Atomic Bomb Survivors
United Nations Secretary-General Ban Ki-moon's Statements
Anti-nuclear organizations
Anti–nuclear weapons movement
Hibakusha
Organizations established in 1956
1956 establishments in Japan
Organizations awarded Nobel Peace Prizes
Japanese Nobel laureates
Nuclear history of Japan | Nihon Hidankyo | [
"Engineering"
] | 1,150 | [
"Nuclear organizations",
"Anti-nuclear organizations"
] |
47,618,620 | https://en.wikipedia.org/wiki/Silicon%20organic%20water%20repellent | Organosilicon water repellent:
an aqueous solution of a siliconate
The water-repelling liquid is applied:
To give the surface of materials excellent water resistance - the surface does not absorb water;
To make the material frost- and corrosion-resistant;
To reduce soiling of the surface;
In addition, the treated surface does not change its appearance and remains air-permeable - the material does not sweat and retains the ability to release water vapour.
The liquid is a low-viscosity methylhydrosiloxane polymer, colorless to light yellow.
It is readily soluble in aromatic and chlorinated hydrocarbons, and undergoes gelation in the presence of amines, amino alcohols, strong acids and alkalis.
It is insoluble in lower alcohols and water.
The positive effects of applying methylhydrosiloxane:
Improved water resistance of various building materials - water remains on the surface in the form of droplets and does not penetrate the material;
Increased frost resistance and improved thermal insulation of materials;
Does not prevent air exchange - the structure releases water vapour and does not accumulate moisture;
Protects against UV and infrared radiation;
Preserves the appearance of the material;
Extends the service life of materials;
Prevents the growth of mosses and lichens on the surface.
A water emulsion of organosilicon methylhydrosiloxane with emulsifier, biocide and stabilizer additives
Solids content in the emulsion SE 50-94M is 50%. The color is from white to light gray.
Application:
The oligomethylhydrosiloxane emulsion has properties and characteristics similar to those of methylhydrosiloxane, and is likewise used to impart water repellency to various materials.
However, because oligomethylhydrosiloxane is a water emulsion, it can also be applied as an additive in the production of solutions and mixtures, i.e. by the volumetric method:
for concrete, asbestos, gypsum, ceramic, porcelain
in the production of waterproof papers and leather;
in the production of water-resistant fabrics;
by volumetric method in the manufacture of paving tiles, slabs, curbs, fences of different silicate materials;
as a plasticizer in the preparation of plaster, lime and cement solutions;
as an air-entraining admixture in the preparation of cement solutions.
The liquid is a mixture of tetraethoxysilane and polyethoxysiloxanes.
Application
Metal manufacture: binding agent in the manufacture of ceramic molds for precision core-mold casting; manufacture of rods exposed to high temperatures; manufacture of non-stick paints;
Textile industry: felt-proofing of woolen cloths; reduction of carpet shrinkage; anti-rot and anti-dust protection of carpets; impregnating compound for filter cloths;
Construction engineering: hydrophobization of construction materials, treatment of coated surfaces; porosity decreasing impregnation of concrete; manufacture of acid-resistant cement;
Glasswork and ceramics: antireflection treatment of optical glass; application of light-diffusing coats to electric light bulbs; binding agent for ceramic mixtures resistant to strongly corrosive media; manufacture of fireproof materials withstanding temperatures of about 1750 °C and stresses above 127 kg/cm²;
Coating industry: paint additives forming quick-drying, thermostable and water-resistant coats with constant gloss.
Chemistry
Commercially available siliconates include potassium methyl siliconate (CAS 31795-24-1, CH5KO3Si) and sodium methyl siliconate (CAS 16589-43-8, CH5NaO3Si). These are supplied as a concentrate in water with an active content of between 30 and 40% by weight. This solution is further diluted in water prior to application by spraying, dipping or rolling onto a mineral building material, such as brickwork, to make the surface water repellent. The dilution is clear and stable, with a high pH of 13 to 14. When applied to a surface, the siliconate reacts with carbon dioxide in the air to form an insoluble water-resistant treatment within 24 hours.
CH5KO3Si + silanol functional substrate OHSi → CH4O3Si + KOH
The methyl group has now attached itself to the substrate.
2KOH + CO2 → K2CO3 + H2O
The salts formed by this reaction are often the cause of white efflorescence when too much of the solution is applied to the surface.
References
See also
Hydrophobe
Amphiphile
Froth flotation
Hydrophile
Hydrophobic effect
Hydrophobicity scales
Superhydrophobe
Superhydrophobic coating
Chemical properties
Intermolecular forces
Articles containing video clips | Silicon organic water repellent | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,066 | [
"Molecular physics",
"Materials science",
"nan",
"Intermolecular forces"
] |
47,620,324 | https://en.wikipedia.org/wiki/L%282%2C1%29-coloring | L(2, 1)-coloring or L(2,1)-labeling is a particular case of L(h, k)-coloring. In an L(2, 1)-coloring of a graph, G, the vertices of G are assigned color numbers in such a way that adjacent vertices get labels that differ by at least two, and the vertices that are at a distance of two from each other get labels that differ by at least one.
An L(2,1)-coloring is a proper coloring, since adjacent vertices are assigned distinct colors. However, rather than counting the number of distinct colors used in an L(2,1)-coloring, research has centered on the L(2,1)-labeling number, the smallest integer λ such that a given graph has an L(2,1)-coloring using color numbers from 0 to λ. The L(2,1)-coloring problem was introduced in 1992 by Jerrold Griggs and Roger Yeh, motivated by channel allocation schemes for radio communication. They proved that for cycles, such as the 6-cycle shown, the L(2,1)-labeling number is four, and that for graphs of maximum degree Δ it is at most Δ² + 2Δ.
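The definition lends itself to a direct brute-force check on small graphs. A minimal sketch (the function and variable names are illustrative, not from any library) that recovers the labeling number four for the 6-cycle mentioned above:

```python
from itertools import product

def l21_number(adj, max_lam=10):
    """Smallest lambda such that the graph has an L(2,1)-coloring with
    labels 0..lambda: adjacent vertices differ by >= 2 and vertices at
    distance two get distinct labels.  Brute force; small graphs only."""
    n = len(adj)
    edges = [(u, v) for u in range(n) for v in adj[u] if u < v]
    # distance-2 pairs: non-adjacent vertices sharing a common neighbour
    dist2 = {(min(u, w), max(u, w))
             for u in range(n) for v in adj[u] for w in adj[v]
             if w != u and w not in adj[u]}
    for lam in range(max_lam + 1):
        for lab in product(range(lam + 1), repeat=n):
            if all(abs(lab[u] - lab[v]) >= 2 for u, v in edges) and \
               all(lab[u] != lab[w] for u, w in dist2):
                return lam
    return None

# 6-cycle: vertex i is adjacent to i-1 and i+1 (mod 6)
c6 = [{(i - 1) % 6, (i + 1) % 6} for i in range(6)]
print(l21_number(c6))  # 4, matching the Griggs-Yeh result for cycles
```

The search tries every label assignment for increasing λ, so it is only practical for a handful of vertices, but it makes the two distance constraints concrete.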
References
Graph coloring | L(2,1)-coloring | [
"Mathematics"
] | 255 | [
"Graph theory stubs",
"Graph coloring",
"Mathematical relations",
"Graph theory"
] |
47,620,343 | https://en.wikipedia.org/wiki/Maucha%20diagram | A Maucha diagram, or Maucha symbol, is a graphical representation of the major cations and anions in a chemical sample. R. Maucha published the symbol in 1932.
It is mainly used by biologists and chemists for quickly recognising samples by their chemical composition. The symbol is similar in concept to the Stiff diagram. It conveys similar ionic information to the Piper diagram, though in a more compact format that is suitable as a map symbol or for showing changes with time. The Maucha diagram is a special case of the Radar chart and overcomes some of the limitations of the Pie chart by having equal angles for all variables and consistently showing each variable in the same position.
The star shape comprises eight kite-shaped polygons, the area of each of which is proportional to the concentration of an ion in milliequivalents per litre. The anions carbonate, bicarbonate, chloride and sulphate are on the left, while the cations potassium, sodium, calcium and magnesium are on the right. The total ionic concentration adds up to the area of the background circle, the total anion concentration adds up to the left semicircle and the total cation concentration adds up to the right semicircle. A method for drawing the diagram in R is available on GitHub.
Broch and Yake modified Maucha's original fixed-size diagram by scaling for concentration.
Further scaling using the logarithm of the ionic concentration enables the plotting of a wide range of concentrations on a single map.
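The area-proportional construction described above can be sketched numerically. The sketch below is a simplification that treats each of the eight ions as a plain 45° circular sector rather than Maucha's kite-shaped polygon, so the geometry (and the function name) is an illustrative assumption, not the published construction:

```python
import math

def maucha_radii(conc, R=1.0):
    """Radius of each 45-degree sector so that its area is proportional
    to the ion concentration (meq/L) and the eight sector areas together
    equal the area of the background circle of radius R."""
    if len(conc) != 8:
        raise ValueError("a Maucha symbol shows exactly 8 ions")
    total = sum(conc)
    # sector area = (pi/8) * r**2, set equal to (c/total) * pi * R**2
    return [R * math.sqrt(8 * c / total) for c in conc]

# equal concentrations of all eight ions give a plain circle of radius R
print(maucha_radii([1.0] * 8))
```

Dominant ions get radii larger than R, producing the spiky star shape, while the total plotted area always matches the background circle, as the description requires.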
References
Analytical chemistry
Diagrams | Maucha diagram | [
"Chemistry"
] | 323 | [
"nan"
] |
47,620,848 | https://en.wikipedia.org/wiki/V%20Antliae | V Antliae (V Ant) is a Mira variable star in the constellation Antlia. It varies in brightness between magnitudes 8.2 and 14.0 with a period of 303 days. Even at its brightest, it is far too faint to be seen with the naked eye.
V Antliae's variability was discovered by examining Harvard College Observatory photographic plates, and was announced by Henrietta S. Leavitt and Edward C. Pickering in 1913.
1612 MHz OH maser emission was first detected from this star in 1973.
The star's water vapor emission line at 22 GHz was first observed at Haystack Observatory in 1973.
References
Mira variables
Antlia
M-type giants
Antliae, V
050697
Emission-line stars
J10210911-3447188 | V Antliae | [
"Astronomy"
] | 164 | [
"Antlia",
"Constellations"
] |
47,621,038 | https://en.wikipedia.org/wiki/Malta%20Environment%20and%20Planning%20Authority | The Malta Environment and Planning Authority (MEPA, ) was the national agency responsible for the environment and planning in Malta. It was established to regulate the environment and planning on the Maltese islands of Malta, Gozo and other small islets of the Maltese archipelago. MEPA was bound to follow the regulations of the Environment Protection Act (2001) and the Development Planning Act (1992) of the Laws of Malta. The national agency was also responsible for the implementation of Directives, Decisions and Regulations under the EU Environmental Acquis as Malta is a member of the European Union, while considering other recommendations and opinions of the Union. The Authority employed over 420 government workers from a wide range of educational backgrounds, each engaged within their area of professional expertise.
On 4 April 2016, MEPA was dissolved and two new authorities were established to take its place: the Planning Authority and the Environment and Resources Authority.
Role
MEPA acted as the national representation under a number of international environmental conventions and multilateral agreements.
These included information supported by the Aarhus Convention:
On access to information;
Public participation in decision-making;
Access to justice in environmental matters.
Governance
The Agency was governed by a board of professionals, whose responsibility was to provide strategic guidance within the framework laid down by the laws of Malta. The board, comprising a maximum of 15 members, was led by the Executive Chairman, Perit Vincent Cassar. Members of the board included two representative members of the Parliament of Malta, who were knowledgeable and experienced in matters relating to the environment and development, such as commercial, industrial and social affairs. A number of appointed boards and committees provided strategic guidance or expert advice to the directorates to ensure that the organization fulfilled its functions and responsibilities efficiently and effectively, in line with legal obligations.
Responsibility
MEPA's operational functions and responsibilities were carried out by the work of four main structures, namely:
The Chairman’s Office was responsible for providing the framework within which the MEPA Board, together with the Commissions and Committees, operated.
The secretariat was the point of reference for issuing and communicating the Board's and Commissions' decisions and in this context was a primary point of contact for ministries, departments and agencies as well as the general public.
The Communications Office and Complaints office were an integral part of the function of this office.
Aim
The Chief Executive Officer was responsible for implementing the Authority's aims and for supervising and controlling the Directorates. The CEO and other directors were responsible for developing the necessary strategies. The Planning Directorate processed environment and planning applications. It was responsible for enforcement, policy development and plan making, transport planning, research and other functions.
Enforcement
The Enforcement Directorate was responsible for both development control and environmental protection, and for supporting the Authority in enforcement campaigns, including direct action, surveillance and other actions necessary to ensure compliance with building development permits and to protect the environment, helping to achieve sustainable environmental improvement. The Environment Protection Directorate advised the Government on environmental standards and policies, drew up plans, provided a licensing regime to safeguard and monitor the environment, and controlled activities having an environmental impact. The Directorate for Corporate Services was responsible for human resources, information technology, mapping and land surveying, support services and finance. A number of boards and committees provided strategic guidance for the Directorates to ensure the organization fulfilled its functions and responsibilities efficiently and effectively, in line with its legal obligations.
Grading of property
Here is an incomplete list of properties graded by MEPA, by category. The properties below are named according to the official names used by MEPA and not as they are known among the public.
Grade 1:
Buildings at this grade have great historical and architectural value and may not be altered
Palazzo Dorell
Palazzo Parisio (Valletta)
San Anton Palace
Villa Bologna
Villa Francia
Lija Belvedere Tower
La Borsa
Ponsonby's Column
Grade 2:
Buildings at this grade have historical and architectural value and may undergo moderate alterations
Palazzo Dragonara
Palazzo Nasciaro
Grade 3:
Buildings at this grade are not considered important and may be demolished
References
Environment of Malta
Defunct government agencies of Malta
Urban planning
2016 disestablishments in Malta
Government agencies disestablished in 2016 | Malta Environment and Planning Authority | [
"Engineering"
] | 830 | [
"Urban planning",
"Architecture"
] |
47,621,173 | https://en.wikipedia.org/wiki/Piezoelectric%20speaker | A piezoelectric speaker (also known as a piezo bender due to its mode of operation, and sometimes colloquially called a "piezo", buzzer, crystal loudspeaker or beep speaker) is a loudspeaker that uses the piezoelectric effect for generating sound. The initial mechanical motion is created by applying a voltage to a piezoelectric material, and this motion is typically converted into audible sound using diaphragms and resonators. The prefix piezo- is Greek for 'press' or 'squeeze'.
Compared to other speaker designs piezoelectric speakers are relatively easy to drive; for example they can be connected directly to TTL outputs, although more complex drivers can give greater sound intensity. Typically they operate well in the range of 1-5 kHz and up to 100 kHz in ultrasound applications.
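Because a piezo element only needs an alternating voltage, a bare logic output toggled at audio rate is enough to drive one. A minimal sketch of such a square-wave drive signal as a sample buffer (the 2 kHz tone and 8 kHz sample rate are arbitrary illustrative choices, not values from the article):

```python
def square_wave(freq_hz, sample_rate_hz, n_samples):
    """Logic-level square wave (0/1) - the kind of signal a TTL output
    or microcontroller pin presents to a piezo element."""
    half_period = sample_rate_hz / (2 * freq_hz)  # samples per half cycle
    return [int((i // half_period) % 2) for i in range(n_samples)]

# 2 kHz beep sampled at 8 kHz: two samples high, two low, repeating
print(square_wave(2000, 8000, 8))  # [0, 0, 1, 1, 0, 0, 1, 1]
```

In practice a current-limiting resistor across the element is often added to damp its capacitance, but the drive waveform itself really is this simple.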
Usage
Piezoelectric speakers are frequently used to generate sound in digital quartz watches and other electronic devices, and are sometimes used as tweeters in less-expensive speaker systems, such as computer speakers and portable radios. They are also used for producing ultrasound in sonar systems.
Piezoelectric speakers have several advantages over conventional loudspeakers: they are resistant to overloads that would normally destroy most high frequency drivers, and they can be used without a crossover due to their electrical properties. There are also disadvantages: some amplifiers can oscillate when driving capacitive loads like most piezoelectrics, which results in distortion or damage to the amplifier. Additionally, their frequency response, in most cases, is inferior to that of other technologies, especially with regards to bass and midrange. This is why they are generally used in applications where volume and high pitch are more important than sound quality.
Piezoelectric speakers can have extended high frequency output, and this is useful in some specialized circumstances; for instance, sonar applications in which piezoelectric variants are used as both output devices (generating underwater sound) and as input devices (acting as the sensing components of underwater microphones). They have advantages in these applications, not the least of which is simple and solid state construction that resists seawater better than a ribbon or cone based device would.
See also
Buzzer
References
Loudspeakers
Energy conversion
Electrical phenomena | Piezoelectric speaker | [
"Physics"
] | 468 | [
"Physical phenomena",
"Electrical phenomena"
] |
47,621,338 | https://en.wikipedia.org/wiki/Paintbrush | A paintbrush is a brush used to apply paint or ink. A paintbrush is usually made by clamping bristles to a handle with a ferrule. They are available in various sizes, shapes, and materials. Thicker ones are used for filling in, and thinner ones are used for details. They may be subdivided into decorators' brushes used for painting and decorating and artists' brushes used for visual art.
History
Paintbrushes were used by humans as early as the Paleolithic era, around 2.5 million years ago, to apply pigment.
Old painting kits, estimated to be around 100,000 years old, were discovered in a cave in what is now modern South Africa.
Ancient Egyptian paintbrushes were made of split palm leaves and were used to decorate their makers' surroundings. The oldest brushes ever found were made of animal hair.
Parts
Bristles: Transfer paint onto the substrate surface
Ferrule: Retains the bristles and attaches them to the handle
Handle: The intended interface between the user and the tool
Trade
Brushes for use in non-artistic trade painting are geared to applying an even coat of paint to relatively large areas. The following are the globally recognized handle styles of trade painters' brushes:
Gourd handle: Ergonomic design that reduces stress on the wrist and hand whilst painting.
Short handle: The shorter handle provides greater precision when painting small spaces such as corners, trims & detail areas.
Flat beavertail handle: This shape is rounded and slightly flattened to fit perfectly into the palm of the hand whilst painting.
Square handle: Square shaped handle with bevelled corners is featured mainly in trim or sash brushes and is comfortable to hold when painting.
Rat tail handle: This handle is longer and thinner than the standard, making it easy to hold and giving greater control.
Long handle: Rounded and thin, a long handle is easy to hold like a pencil, giving great control and precision when cutting in and painting tricky spaces.
Decorating
The sizes of brushes used for painting and decorating.
Decorating sizes
Decorators' brush sizes are given in millimeters (mm) or inches (in), which refers to the width of the head. Common sizes are:
Metric (mm): 10 • 20 • 40 • 50 • 60 • 70 • 80 • 90 • 100.
Customary (inches): • • • • • • • 1 • • • 2 • • 3 • • 4.
Decorating shapes
Angled: For painting edges, bristle length viewed from the wide face of the brush uniformly decreases from one end of the brush to the other
Flat: For painting flat surfaces, bristle length viewed from the wide face of the brush does not change
Tapered: Improves control, the bristle length viewed from the narrow face of the brush is longer in the center and tapers toward the edges
Striker: Large round (cylindrical) brush for painting difficult areas on exteriors
Decorating bristles
Bristles may be natural or synthetic. If the filaments are synthetic, they may be made of polyester, nylon or a blend of nylon and polyester.
Filaments can be hollow or solid and can be tapered or untapered. Brushes with tapered filaments give a smoother finish.
Synthetic filaments last longer than natural bristles. Natural bristles are preferred for oil-based paints and varnishes, while synthetic brushes are better for water-based paints as the bristles do not expand when wetted.
A decorator judges the quality of a brush based on several factors: filament retention, paint pickup, steadiness of paint release, brush marks, drag and precision painting. A chiseled brush permits the painter to cut into tighter corners and paint more precisely.
Brush handles may be made of wood or plastic while ferrules are metal (usually nickel-plated steel).
Art
Short handled brushes are usually used for flat or slightly tilted work surfaces such as watercolor painting and ink painting, while long handled brushes are held horizontally while working on a vertical canvas such as for oil paint or acrylic paint.
Art shapes
The styles of brush tip seen most commonly are:
Round: pointed tip, long closely arranged bristles for detail.
Flat: for spreading paint quickly and evenly over a surface. They have longer hairs than their bright counterparts.
Bright: flat brushes with short, stiff bristles, good for driving paint into the weave of a canvas in thinner paint applications, as well as thicker painting styles like impasto work.
Filbert: flat brushes with domed ends. They allow good coverage and the ability to perform some detail work.
Fan: for blending broad areas of paint.
Angle: like the filbert, these are versatile and can be applied in both general painting application as well as some detail work.
Mop: a larger format brush with a rounded edge for broad, soft paint application, as well as for applying thinner glazes over existing drying layers of paint without damaging the lower layers
Rigger: round brushes with longish hairs, traditionally used for painting the rigging in pictures of ships. They are useful for fine lines and are versatile for both oils and watercolors.
Stippler and deer-foot stippler: short, stubby rounds
Liner: elongated rounds
Dagger: looks like angle with longish hairs, used for one stroke painting like painting long leaves.
Scripts: highly elongated rounds
Egbert: a filbert with extra long hair, used for oil painting
Some other styles of brush include:
Sumi: Similar in style to certain watercolor brushes, also with a generally thick wooden or metal handle and a broad soft hair brush that when wetted should form a fine tip. Also spelled Sumi-e (墨絵, Ink wash painting).
Hake (刷毛): An Asian style of brush with a large broad wooden handle and an extremely fine soft hair used in counterpoint to traditional Sumi brushes for covering large areas. Often made of goat hair.
Spotter: Round brushes with just a few short bristles. These brushes are commonly used in spotting photographic prints.
Stencil: A round brush with a flat top used on stencils to ensure the bristles don't get underneath. Also used to create texture.
Art sizes
Artists' brushes are usually given numbered sizes, although there is no exact standard for their physical dimensions. From smallest to largest, the sizes are: 20/0, 12/0, 10/0, 7/0, 6/0, 5/0, 4/0 (also written 0000), 000, 00, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 16, 18, 20, 22, 24, 25, 26, 28, 30, 2 inch, 4 inch, 6 inch, and 8 inch. Brushes as fine as 30/0 are manufactured by major companies, but are not a common size. Sizes 000 to 20 are most common.
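The "n/0" notation denotes sizes below 0 (so 4/0 is the same size as 0000), which means the whole run can be placed on a single numeric axis for sorting. A small sketch of one possible key function (the numeric mapping is an illustrative convention, not a standard):

```python
def size_key(label: str) -> float:
    """Map an artists' brush size label onto one sortable numeric axis.
    'n/0' sizes sit below 0 (e.g. '4/0', also written '0000');
    plain numbered sizes sit above. Illustrative convention only."""
    if label.endswith("/0"):
        return -float(label[:-2])           # "20/0" -> -20.0, the smallest
    if set(label) == {"0"} and len(label) > 1:
        return -float(len(label))           # "000" is 3/0 -> -3.0
    return float(label)

sizes = ["2", "000", "10", "20/0", "0", "4/0"]
print(sorted(sizes, key=size_key))  # smallest to largest
```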
Art bristles
Bristles may be natural—either soft hair or hog bristle—or synthetic.
Types include:
watercolor brushes which are usually made of sable, synthetic sable or nylon;
oil painting brushes which are usually made of sable or bristle;
acrylic brushes which are almost entirely nylon or synthetic.
Turpentine or thinners used in oil painting can destroy some types of synthetic brushes. However, innovations in synthetic bristle technology have produced solvent-resistant synthetic bristles suitable for use in all media. Natural hairs such as squirrel, badger or sable are used by watercolorists due to their superior ability to absorb and hold water.
Soft hair brushes The best of these are made from kolinsky sable, other red sables, or miniver (Russian squirrel winter coat; tail) hair. Sabeline is ox hair dyed red to look like red sable and sometimes blended with it. Camel hair is a generic term for a cheaper and lower quality alternative, usually ox. It can be other species, or a blend of species, but never includes camel. Pony, goat, mongoose and badger hair are also used.
Hog bristle Often called China bristle or Chungking bristle. This is stiffer and stronger than soft hair. It may be bleached or unbleached.
Synthetic bristles These are made of special multi-diameter extruded nylon filament, Taklon or polyester. These are becoming ever more popular with the development of new water based paints.
Art handles
Artists' brush handles are commonly wooden but can also be made of molded plastic. Many mass-produced handles are made of unfinished raw wood; better quality handles are of seasoned hardwood. The wood is sealed and lacquered to give the handle a high-gloss, waterproof finish that reduces soiling and swelling. Many brush companies offer long or short brush handle sizes.
Metal ferrules may be of aluminum, nickel, copper, or nickel-plated steel. Quill ferrules are also found: these give a different "feel" to the brush, and are a staple of French-style watercolour brushes.
References
External links
Painting materials
Hand tools
Brushes | Paintbrush | [
"Engineering"
] | 1,875 | [
"Human–machine interaction",
"Hand tools"
] |
47,621,406 | https://en.wikipedia.org/wiki/Gomphidius%20largus | Gomphidius largus is a fungus native to North America.
References
External links
Boletales
Fungi of North America
Fungus species | Gomphidius largus | [
"Biology"
] | 29 | [
"Fungi",
"Fungus species"
] |
47,622,013 | https://en.wikipedia.org/wiki/SAO%20206462 | SAO 206462 is a young binary star, surrounded by a circumstellar disc of gas and clearly defined spiral arms. It is situated about 440 light-years away from Earth in the constellation Lupus. The presence of these spiral arms seems to be related to the existence of planets inside the disk of gas surrounding the star. The disk's diameter is about twice the size of the orbit of Pluto.
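For a sense of angular scale: Pluto's orbit is roughly 80 AU across, so a disk twice that size at 440 light-years subtends about an arcsecond on the sky. A back-of-envelope sketch (rounded constants; the Pluto orbital radius is an assumed approximation):

```python
LY_PER_PARSEC = 3.2616        # light-years per parsec
PLUTO_ORBIT_RADIUS_AU = 39.5  # Pluto's mean orbital radius, approximate

distance_pc = 440 / LY_PER_PARSEC                   # ~135 pc
disk_diameter_au = 2 * (2 * PLUTO_ORBIT_RADIUS_AU)  # "twice the orbit of Pluto"
# Small-angle rule: 1 AU at a distance of 1 pc subtends 1 arcsecond.
angular_size_arcsec = disk_diameter_au / distance_pc
print(f"disk spans about {angular_size_arcsec:.1f} arcsec")
```

At roughly an arcsecond across, such a disk is within reach of Hubble and large ground-based telescopes, consistent with the imaging campaign described in the article.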
Discovery
The discovery of this object was presented in October 2011 by Carol Grady, an astronomer with Eureka Scientific based at NASA's Goddard Space Flight Center. The image was the first of this class to exhibit a high degree of clarity, and was made using several space telescopes (Hubble, FUSE, Spitzer) and ground-based telescopes (the Gemini Observatory and the Subaru Telescope, situated in Hawaii), through an international research program on young stars and stars with planets. A number of astronomers from different observatories collaborated.
Planetary system
The pair of spiral arms around SAO 206462 have a rotation rate of degrees per year, which are thought to be caused by a dynamically driving protoplanet within the disk, at a distance of and an orbital period of . This planet would be challenging to detect using direct imaging due to the presence of dust particles obscuring it, but could be detected and confirmed via high-resolution spectroscopic observations.
Another planet candidate around SAO 206462 has been detected using observations of the JWST's NIRCam imaging instrument, with a low signal-to-noise ratio, a mass of and a separation of 300 astronomical units. It has been dubbed CC1 (Companion candidate 1). Objects more massive than at a distance of 120 AU have been ruled out by the observations.
References
Lupus (constellation)
Astronomical objects discovered in 2011
Astrophysics
F-type main-sequence stars
Durchmusterung objects | SAO 206462 | [
"Physics",
"Astronomy"
] | 380 | [
"Lupus (constellation)",
"Astronomical sub-disciplines",
"Astrophysics",
"Constellations"
] |
47,622,186 | https://en.wikipedia.org/wiki/Hemileccinum%20impolitum | Hemileccinum impolitum is a basidiomycete fungus of the family Boletaceae, native to Europe. It is commonly referred to as the iodine bolete, because its fruit bodies tend to emit an iodine-like odour when cut, more detectable in the stem base or overripe specimens.
Like other members of the family, H. impolitum has tubes and pores instead of gills in the hymenial surface of its fruit bodies. It is widely distributed in temperate and southern Europe, where it grows in mycorrhizal symbiosis with broad-leaved trees, particularly oak (Quercus).
Taxonomy and phylogeny
The iodine bolete was first described by Elias Magnus Fries, an eminent mycologist of the 19th century, who placed the fungus in genus Boletus. The Latin epithet impolitum (meaning "rough"), likely refers to the cap of the species, which is initially felty and covered in a finely filamentous coating when viewed under a magnifying glass. The species' taxonomic position had long remained uncertain and various authors had placed it in different genera in the past, including the now abandoned genera Tubiporus and Versipellis.
Based on preliminary analysis of the 28S ribosomal RNA locus, mycologists Manfred Binder and Helmut Besl placed the species in Xerocomus in 2000. However, in 2008 Josef Šutara transferred the fungus to the new genus Hemileccinum, based on its distinctive morphology. More elaborate phylogenetic studies by Wu and colleagues in 2014 confirmed that the iodine bolete does not belong in Boletus, Xerocomus or Leccinum, since collections identified as this species occupied a distinct phylogenetic lineage within the subfamily of Xerocomoideae, closely related to Corneroboletus. Subsequent contributions by R. Halling and colleagues, and M. Loizides and colleagues, have since confirmed the monophyly of the genus, which presently includes just two European species: H. impolitum and H. depilatum.
Description
The cap diameter usually ranges between , but can reach . It is at first hemispherical, gradually becoming convex as the fungus expands and finally flat in fully mature specimens, sometimes with a slightly uplifted margin. The colour ranges from light tan, pale brown, chestnut-brown, grey, ochraceous-brown, greyish-brown or olivaceous-brown and the cap of young fruit bodies is initially covered in a velvety, finely filamentous silvery-grey coating that disappears in age.
The cap cuticle turns violet with a drop of ammonium hydroxide.
The stem is cylindrical, clavate or ventricose, high by wide, cream to pale yellow, but typically lemon-yellow at the apex and usually narrowing at the base. It has no reticulation (net), but is covered in tiny pustules (scabrosities) below the apex, sometimes browning with age.
The tubes are pale yellow to lemon-yellow and usually do not discolour when cut, but may rarely stain faintly greenish-brown. The pores are small and rounded, lemon-yellow to chrome-yellow, not discolouring or rarely staining greenish-brown where handled or injured. The flesh is thick, soft, pale yellow to whitish, usually remaining the same colour when cut, or rarely becoming faintly pinkish-brown above the tubes and at the stem base. It has a sour smell somewhat reminiscent of iodine, more pronounced at the stem base.
The spore print is olivaceous-brown.
The spores are fusiform (spindle-shaped) or fusiform-ellipsoid, measuring 10–16 × 4–6 μm. Although under an optical microscope they appear perfectly smooth, when viewed with a scanning electron microscope (SEM) fine warts and tiny “pin-pricks” are visible on their surface. The cap cuticle is a trichodermium, composed of cylindrical smooth hyphae with clavate terminal cells that later collapse in mature specimens.
Similar species
Hemileccinum depilatum is the sister-species of H. impolitum and morphologically very similar, differing by its wrinkled or "hammered" cap surface, and its association with hornbeam (Carpinus) or hop-hornbeam (Ostrya). Microscopically it is distinguished by the structure of its cap cuticle, which is a palisadoderm composed of spherical and shortly cylindrical cells.
Leccinellum lepidum can also look very similar, but typically has a viscid cap with a wrinkled or "hammered" surface not turning violet with a drop of aqueous ammonia, while its flesh slowly turns violaceous-grey and finally greyish-black when exposed to the air. Microscopically it has longer spores, often reaching 20 μm in length.
Xerocomus subtomentosus lacks scabrosities on the stem surface, while its pores are larger, angular and stain bluish when bruised. When longitudinally cut, its flesh is pinkish-brown in the lower part of the stem and sometimes discolours faintly bluish in the cap.
Distribution and habitat
Hemileccinum impolitum is ecologically versatile, forming ectomycorrhizal associations with several species of oak (Quercus), but occasionally also with beech (Fagus) and chestnut (Castanea). It does not appear to be substrate-specific and has been reported from both calcareous and acidic soil. In the United Kingdom, it is occasionally found in southern England, although finds in other parts of the country have also been reported.
Molecular phylogenetic testing has so far verified its presence in Estonia, France, Germany, Portugal, Spain, Sweden, and the Mediterranean islands of Cyprus and Sardinia.
Edibility
The iodine bolete is described as edible by some authors, and inedible by others, probably because of the peculiar odour of this species. However, it is hardly ever found in large numbers, therefore care and discretion should be exercised when intending to eat this mushroom.
References
impolitum
Fungi of Europe
Edible fungi
Fungus species
Taxa named by Elias Magnus Fries
Fungi described in 1838 | Hemileccinum impolitum | [
"Biology"
] | 1,299 | [
"Fungi",
"Fungus species"
] |
47,622,276 | https://en.wikipedia.org/wiki/Knotted%20polymers | Single Chain Cyclized/Knotted Polymers are a new class of polymer architecture with a general structure consisting of multiple intramolecular cyclization units within a single polymer chain. Such a structure was synthesized via the controlled polymerization of multivinyl monomers, which was first reported in Dr. Wenxin Wang's research lab. These multiple intramolecular cyclized/knotted units mimic the characteristics of complex knots found in proteins and DNA which provide some elasticity to these structures. Of note, 85% of elasticity in natural rubber is due to knot-like structures within its molecular chain.
An intramolecular cyclization reaction is where the growing polymer chain reacts with a vinyl functional group on its own chain, rather than with another growing chain in the reaction system. In this way the growing polymer chain covalently links to itself in a fashion similar to that of a knot in a piece of string. As such, single chain cyclized/knotted polymers consist of many of these links (intramolecularly cyclized), as opposed to other polymer architectures including branched and crosslinked polymers that are formed by two or more polymer chains in combination.
Linear polymers can also fold into knotted topologies via non-covalent linkages. Knots and slipknots have been identified in naturally evolved polymers such as proteins as well. Circuit topology and knot theory formalise and classify such molecular conformations.
Synthesis
Deactivation enhanced ATRP
A simple modification to atom transfer radical polymerization (ATRP) was introduced in 2007 to kinetically control the polymerization by increasing the ratio of inactive copper(II) catalyst to active copper(I) catalyst. The modification to this strategy is termed deactivation enhanced ATRP, whereby different ratios of copper(II)/copper(I) are added. Alternatively a copper(II) catalyst may be used in the presence of small amounts of a reducing agent such as ascorbic acid to produce low percentages of copper(I) in situ and to control the ratio of copper(II)/copper(I). Deactivation enhanced ATRP features the decrease of the instantaneous kinetic chain length ν, defined as

ν = kp[M] / (kdeact[Cu(II)]),

meaning that on average ν monomer units are added to a propagating chain end during each activation/deactivation cycle. The resulting chain growth rate is slowed down to allow sufficient control over the reaction, thus greatly increasing the percentage of multi-vinyl monomers in the reaction system (even up to 100 percent (homopolymerization)).
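The instantaneous kinetic chain length is commonly estimated in the ATRP literature as ν ≈ kp[M]/(kdeact[Cu(II)]). A small numerical sketch of that estimate (all rate constants and concentrations below are illustrative assumptions, not values from the source), showing how raising the Cu(II) concentration shrinks ν:

```python
def kinetic_chain_length(k_p: float, monomer_m: float,
                         k_deact: float, cu2_m: float) -> float:
    """Average monomer units added per activation/deactivation cycle,
    using the common textbook estimate nu = kp[M] / (kdeact[Cu(II)]).
    All inputs here are illustrative, assumed values."""
    return (k_p * monomer_m) / (k_deact * cu2_m)

k_p = 1.0e3      # L mol^-1 s^-1, assumed propagation rate constant
k_deact = 1.0e7  # L mol^-1 s^-1, assumed deactivation rate constant
monomer = 1.0    # mol/L
for cu2 in (1e-4, 1e-3, 1e-2):  # increasing deactivator concentration
    print(f"[Cu(II)] = {cu2:.0e} M -> nu = {kinetic_chain_length(k_p, monomer, k_deact, cu2):.3f}")
```

A tenfold increase in deactivator cuts ν tenfold, which is the kinetic handle the strategy uses to keep chain growth slow and spatially confined.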
Polymerization process
Typically, single chain cyclized/knotted polymers are synthesized by deactivation enhanced ATRP of multivinyl monomers via kinetically controlled strategy. There are several main reactions during this polymerization process: initiation, activation, deactivation, chain propagation, intramolecular cyclization and intermolecular crosslinking. The polymerization process is explained in Figure 2.
In a similar way to normal ATRP, the polymerization is started by initiation to produce a free radical, followed by chain propagation and reversible activation/deactivation equilibrium. Unlike the polymerization of single vinyl monomers, for the polymerization of multivinyl monomers, the chain propagation occurs between the active centres and one of the vinyl groups from the free monomers. Therefore, multiple unreacted pendent vinyl groups are introduced into the linear primary polymer chains, resulting in a high local/spatial vinyl concentration. As the chain grows, the propagating centre reacts with their own pendent vinyl groups to form intramolecular cyclized rings (i.e. intramolecular cyclization). The unique alternating chain propagation/intramolecular cyclization process eventually leads to the single chain cyclized/knotted polymer architecture.
Intramolecular cyclization or intermolecular crosslinking
It is worth noting that due to the multiple reactive sites of the multivinyl monomers, plenty of unreacted pendent vinyl groups are introduced to linear primary polymer chains. These pendent vinyl groups have the potential to react with propagating active centres either from their own polymer chain or others. Therefore, both intramolecular cyclization and intermolecular crosslinking might occur in this process.
Using the deactivation enhanced strategy, a relatively small instantaneous kinetic chain length limits the number of vinyl groups that can be added to a propagating chain end during each activation/deactivation cycle and thus keeps the polymer chains growing in a limited space. In this way, unlike what happens in free radical polymerization (FRP), the formation of huge polymer chains and large-scale combinations at early reaction stages is avoided. Therefore, a small instantaneous kinetic chain length is the prerequisite for further manipulation of intramolecular cyclization or intermolecular crosslinking. Based on the small instantaneous kinetic chain length, regulation of different chain dimensions and concentrations would lead to distinct reaction types. A low ratio of initiator to monomer would result in the formation of longer chains but at a lower chain concentration. This scenario increases the chances of intramolecular cyclization due to the high local/spatial vinyl concentration within the growth boundary. Although the opportunity for intermolecular reactions can increase as the polymer chains grow, the likelihood of this occurring at the early stage of reactions is minimal due to the low chain concentration, which is why single chain cyclized/knotted polymers can form. In contrast, a high initiator concentration not only diminishes the chain dimension during the linear-growth phase, thus suppressing the intramolecular cyclization, but it also increases the chain concentration within the system so that pendent vinyl groups in one chain are more likely to fall into the growth boundary of another chain. Once the monomers are converted to short chains, the intermolecular combination increases and allows the formation of hyperbranched structures with a high density of branching and vinyl functional groups.
Note
The monomer concentration is important for the synthesis of single chain cyclized/knotted polymers, but the kinetic chain length is the key determining factor for synthesis.
Applications
Single chain cyclized polymers consist of multiple cyclized rings which afford them some unique properties, including high density, low intrinsic viscosity, low translational friction coefficients, high glass transition temperatures, and excellent elasticity of the formed network. In particular, an abundance of internal space makes the single chain cyclized polymers ideal candidates as efficient cargo-carriers.
Gene delivery
It is well established that the macromolecular structure of nonviral gene delivery vectors alters their transfection efficacy and cytotoxicity. The cyclized structure has been proven to reduce cytotoxicity and increase circulation time for drug and gene delivery applications. The unique structure of cyclizing chains provides the single chain cyclized polymers a different method of interaction between the polymer and plasmid DNA, and results in a general trend of higher transfection capabilities than branched polymers. Moreover, due to the nature of the single chain structure, this cyclized polymer can “untie” to a linear chain under reducing conditions. Transfection profiles on astrocytes comparing the cyclized polymer with 25 kDa PEI, SuperFect® and Lipofectamine® 2000 showed greater efficiency whilst maintaining neural cell viability above 80% four days post-transfection.
See also
Polymer architecture
Branching (polymer chemistry)
Molecular knot
Knotted protein
References
Polymer chemistry | Knotted polymers | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,555 | [
"Materials science",
"Polymer chemistry"
] |
47,622,595 | https://en.wikipedia.org/wiki/Waistland | Waistland: The R/evolutionary Science Behind Our Weight and Fitness Crisis is a book by Harvard psychologist Deirdre Barrett published by W. W. Norton & Company in 2007. The book examines the obesity and fitness crisis from an evolutionary standpoint. Barrett argues that our bodies, our metabolisms, and our feeding instincts evolved during humanity’s hunter-gatherer phase. We're programmed to forage for sugar and saturated fats because these were once found only in hard-to-come-by fruit and game. Now, these same foods are everywhere—in vending machines, fast food joints, restaurants, grocery stores, and school cafeterias—and they're nearly impossible to avoid. She relates this to the focus of another of her books, "supernormal stimuli"—the concept of artificial creations that appeal more to our instincts than the natural objects they mimic—arguing that supernormal stimuli for appetite have led to the obesity epidemic. The book opens with a vignette about how
zoos post signs saying "Don't Feed the Animals." People respect these orders, allowing veterinarians to prescribe just the right balanced diet for the lions, koalas, and snakes. Meanwhile, everyone stops for chips, sodas, and hot dogs on the way out of the zoo. The book explores solutions from behavior modification to willpower to change diet and exercise habits. One of the main messages of the book is that big changes in diet are actually easier than small ones, that the addictive nature of junk food means that, after a few days, eating no cookies or chips is easier than eating fewer cookies or chips.
Reviews
"An elegantly written and eye-opening analysis of what makes us fat." - Steven Pinker
"At the start of this sensible book about the "weight and fitness crisis" in America, Harvard psychologist Deirdre Barrett says the answer lies in the study of evolution. As animals, we are genetically almost identical to our Stone Age ancestors. We live in advanced societies, with supermarkets and cars and lifts, but we are built to be hunter-gatherers. We are programmed to seek out fat, sugar, starch and salt, because, in the Stone Age, these things were hard to come by. When they turn up in abundance, our bodies, for the most part, can't say no.
. . . to put it simply, human beings are evolving much more slowly than the food we eat. And the food is tricking us. We think it's what we need, but it's just what we want. What can we do? Eat sensibly and exercise, of course. One thing we have to do, though, is not to "listen to your body" – because it craves food that, in abundance, is bad for it. . . This is a clear, well-written and thoughtful guide to the fat crisis." - The Telegraph (UK)
"In a new book, Waistland, Deirdre Barrett points out that KFC or McDonald's would be happy to sell us tofu burgers and carrot strips, but our inner hunter-gatherer wants fat, and sugar, and salt - cravings that made sense for almost all of human existence, excepting those past few thousand years. And we would all be in great shape, she argues, if we could just replicate the hunter-gatherer lifestyle." - Talk of the Nation : NPR
References
Criticism of fast food
Books about evolutionary psychology
Dieting books
Obesity
Low-carbohydrate diets
2007 non-fiction books
Waist | Waistland | [
"Chemistry"
] | 734 | [
"Carbohydrates",
"Low-carbohydrate diets"
] |
47,622,833 | https://en.wikipedia.org/wiki/Gomphidius%20maculatus | Gomphidius maculatus is an edible mushroom in the family Gomphidiaceae that is found in Europe and North America. It was first described scientifically by naturalist Giovanni Antonio Scopoli in 1772. Elias Magnus Fries transferred it to the genus Gomphidius in 1838, giving it the name by which it is known today. The specific epithet maculatus is derived from the Latin word for "spotted".
References
External links
Boletales
Edible fungi
Fungi described in 1772
Fungi of Europe
Fungi of North America
Fungus species | Gomphidius maculatus | [
"Biology"
] | 109 | [
"Fungi",
"Fungus species"
] |
47,623,232 | https://en.wikipedia.org/wiki/Dibromine%20monoxide | Dibromine monoxide is the chemical compound composed of bromine and oxygen with the formula Br2O. It is a dark brown solid which is stable below −40 °C and is used in bromination reactions. It is similar to dichlorine monoxide, the monoxide of its halogen neighbor one period higher on the periodic table. The molecule is bent, with C2v molecular symmetry. The Br−O bond length is 1.85 Å and the Br−O−Br bond angle is 112°, similar to dichlorine monoxide.
Reactions
Dibromine monoxide can be prepared by reacting bromine vapor or a solution of bromine in carbon tetrachloride with mercury(II) oxide at low temperatures:
2 Br2 + 2 HgO → HgBr2·HgO + Br2O
It can also be formed by thermal decomposition of bromine dioxide or by passing an electrical current through a 1:5 mixture of bromine and oxygen gases.
References
Bromine(I) compounds
Oxides
Nonmetal halides | Dibromine monoxide | [
"Chemistry"
] | 218 | [
"Inorganic compounds",
"Oxides",
"Inorganic compound stubs",
"Salts"
] |
47,623,288 | https://en.wikipedia.org/wiki/Recirculating%20aquaculture%20system | Recirculating aquaculture systems (RAS) are used in home aquaria and for fish production where water exchange is limited and the use of biofiltration is required to reduce ammonia toxicity. Other types of filtration and environmental control are often also necessary to maintain clean water and provide a suitable habitat for fish. The main benefit of RAS is the ability to reduce the need for fresh, clean water while still maintaining a healthy environment for fish. To be operated economically, commercial RAS must have high fish stocking densities, and many researchers are currently conducting studies to determine if RAS is a viable form of intensive aquaculture.
RAS water treatment processes
A series of treatment processes is utilized to maintain water quality in intensive fish farming operations. These steps are often done in order or sometimes in tandem. After leaving the vessel holding the fish, the water is first treated for solids before entering a biofilter to convert ammonia; next, degassing and oxygenation occur, often followed by heating/cooling and sterilization. Each of these processes can be completed by using a variety of different methods and equipment, but regardless all must take place to ensure a healthy environment that maximizes fish growth and health.
Biofiltration
All RAS rely on biofiltration to convert ammonia (NH4+ and NH3) excreted by the fish into nitrate. Ammonia is a waste product of fish metabolism and high concentrations (>0.02 mg/L) are toxic to most finfish. Nitrifying bacteria are chemoautotrophs that convert ammonia into nitrite (NO2−) then nitrate (NO3−). These include bacteria of the genera Nitrobacter, Nitrococcus, Nitrospira, and Nitrospina. Although nitrite is usually converted to nitrate as quickly as it is produced, a lack of biological oxidation of the nitrite will result in elevated nitrite levels that can be toxic to the fish. High nitrite levels are also indicative of impending biofilter failure. Nitrate is the end-product of nitrification, and is the least toxic of the nitrogen compounds, with 96-hour exposure LC50 values in freshwater in excess of 1,000 mg/L.
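Toxicity depends on how much of the total ammonia is present as un-ionized NH3 rather than NH4+, a split governed by pH and temperature. A sketch using the widely cited Emerson et al. (1975) pKa approximation (the choice of formula is an assumption here, not taken from the source):

```python
def unionized_ammonia_fraction(ph: float, temp_c: float) -> float:
    """Fraction of total ammonia present as toxic un-ionized NH3,
    using the Emerson et al. (1975) pKa approximation."""
    pka = 0.09018 + 2729.92 / (273.15 + temp_c)
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# At 20 degC, only ~0.4% of total ammonia is NH3 at pH 7, but ~4% at pH 8.
for ph in (7.0, 8.0):
    print(f"pH {ph}: {100 * unionized_ammonia_fraction(ph, 20.0):.2f}% NH3")
```

The steep pH dependence is why the same total-ammonia reading can be harmless in one tank and dangerous in another.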
A biofilter provides a substrate for the bacterial community, which results in thick biofilm growing within the filter. Water is pumped through the filter, and ammonia is utilized by the bacteria for energy. In recirculating systems, daily water exchanges are commonly used to control nitrogen levels. Stable environmental conditions and regular maintenance are required to ensure the biofilter is operating efficiently.
Solids removal
In addition to treating the liquid waste excreted by fish, the solid waste must also be treated; this is done by concentrating and flushing the solids out of the system. Removing solids reduces bacterial growth, oxygen demand, and the proliferation of disease. The simplest method for removing solids is the creation of a settling basin, where the relative velocity of the water is slow and particles can settle to the bottom of the tank, from which they are either flushed out or vacuumed out manually using a siphon. However, this method is not viable for RAS operations where a small footprint is desired. Typical RAS solids removal involves a sand filter or particle filter in which solids become lodged and can be periodically backflushed out of the filter. Another common method is a mechanical drum filter, where water is run over a rotating drum screen that is periodically cleaned by pressurized spray nozzles, and the resulting slurry is treated or sent down the drain. To remove extremely fine particles or colloidal solids, a protein fractionator may be used, with or without the addition of ozone (O3).
Oxygenation
Reoxygenating the system water is a crucial part of achieving high production densities. Fish require oxygen to metabolize food and grow, as do the bacterial communities in the biofilter. Dissolved oxygen levels can be increased through two methods: aeration and oxygenation. In aeration, air is pumped through an air stone or similar device that creates small bubbles in the water column; this provides a high surface area across which oxygen can dissolve into the water. Because of slow gas dissolution rates and the high air pressure needed to create small bubbles, this method is generally considered inefficient, and the water is instead oxygenated by pumping in pure oxygen. Various methods are used to ensure that during oxygenation all of the oxygen dissolves into the water column. The oxygen demand of a given system must be carefully calculated and met with either oxygenation or aeration equipment.
pH control
In all RAS, pH must be carefully monitored and controlled. The first step of nitrification in the biofilter consumes alkalinity and lowers the pH of the system. Keeping the pH in a suitable range (5.0–9.0 for freshwater systems) is crucial to maintaining the health of both the fish and the biofilter. pH is typically controlled by adding alkalinity in the form of lime (CaCO3) or sodium hydroxide (NaOH). A low pH leads to high levels of dissolved carbon dioxide (CO2), which can prove toxic to fish. pH can also be controlled by degassing CO2 in a packed column or with an aerator; this is necessary in intensive systems, especially where oxygenation rather than aeration is used in tanks to maintain O2 levels.
Temperature control
All fish species have a preferred temperature range, outside of which they experience negative health effects and, eventually, death. Warm-water species such as tilapia and barramundi prefer water of 24 °C or warmer, whereas cold-water species such as trout and salmon prefer water temperatures below 16 °C. Temperature also plays an important role in dissolved oxygen (DO) concentrations, with higher water temperatures having lower values for DO saturation. Temperature is controlled through the use of submerged heaters, heat pumps, chillers, and heat exchangers; all four may be used to keep a system operating at the optimal temperature for maximizing fish production.
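The temperature/DO-saturation trade-off can be illustrated with a small interpolation over commonly tabulated freshwater saturation values. The table entries below are approximate textbook values at sea level, and the function is an illustrative sketch, not a standard solubility model:

```python
# Approximate freshwater DO saturation (mg/L, sea level) at a few
# reference temperatures; values are rounded textbook figures.
DO_SAT = {0: 14.6, 10: 11.3, 20: 9.1, 30: 7.6}

def do_saturation(temp_c: float) -> float:
    """Linearly interpolate DO saturation for 0-30 degrees C."""
    temps = sorted(DO_SAT)
    if not temps[0] <= temp_c <= temps[-1]:
        raise ValueError("temperature outside tabulated range")
    for lo, hi in zip(temps, temps[1:]):
        if temp_c <= hi:
            frac = (temp_c - lo) / (hi - lo)
            return DO_SAT[lo] + frac * (DO_SAT[hi] - DO_SAT[lo])

print(round(do_saturation(16), 2))  # cold-water trout range
print(round(do_saturation(28), 2))  # warm-water tilapia range
```

The downward trend with temperature is why warm-water systems at high stocking density lean on pure-oxygen injection rather than aeration alone.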
Biosecurity
Disease outbreaks occur more readily at the high fish stocking densities typically employed in intensive RAS. Outbreaks can be reduced by operating multiple independent systems within the same building and by preventing water-to-water contact between systems, including cleaning equipment and controlling personnel that move between systems. The use of an ultraviolet (UV) or ozone water treatment system also reduces the number of free-floating viruses and bacteria in the system water. These treatment systems reduce the disease loading on stressed fish and thus the chance of an outbreak.
Advantages
Reduced water requirements as compared to raceway or pond aquaculture systems.
Reduced land needs due to the high stocking density.
Site selection flexibility and independence from a large, clean water source.
Reduction in wastewater effluent volume.
Increased biosecurity and ease in treating disease outbreaks.
Ability to closely monitor and control environmental conditions to maximize production efficiency. Similarly, independence from weather and variable environmental conditions.
Disadvantages
High upfront investment in materials and infrastructure.
High operating costs, mostly due to electricity and system maintenance.
A need for highly trained staff to monitor and operate the system.
Higher greenhouse gas emissions than non-recirculating aquaculture.
Special types of RAS
Aquaponics
Combining plants and fish in a RAS is referred to as aquaponics. In this type of system the ammonia produced by the fish is not only converted to nitrate but is also removed from the water by the plants. The fish effectively fertilize the plants, creating a closed-loop system in which very little waste is generated and inputs are minimized. Aquaponics also provides the advantage of being able to harvest and sell multiple crops. Contradictory views exist on the suitability and safety of RAS effluents for sustaining plant growth under aquaponic conditions; however, the nutrients locked in RAS wastewater and sludge have been reported to be sufficient and safe to sustain plant growth, so conversions, or rather "upgrades", of operational RAS farms into semi-commercial aquaponic ventures need not be deterred by nutrient-insufficiency or nutrient-safety arguments.
Aquariums
Home aquaria and inland commercial aquariums are a form of RAS in which the water quality is very carefully controlled and the stocking density of fish is relatively low. In these systems the goal is to display the fish rather than to produce food. However, biofilters and other forms of water treatment are still used to reduce the need to exchange water and to maintain water clarity. Just as in traditional RAS, water must be removed periodically to prevent nitrate and other toxic chemicals from building up in the system. Coastal aquariums often have high rates of water exchange and are typically not operated as RAS because of their proximity to a large body of clean water.
See also
Controlled-environment agriculture
References
External links
Recirculating Aquaculture System Design Manual
Recirculating Aquaculture Considerations, Design, and Management
Engineering Design of a Water Reuse System
Recirculating aquaculture systems (RAS) in fish farming
Aquaculture
Aquaponics
Environmental engineering
Water treatment | Recirculating aquaculture system | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,863 | [
"Water treatment",
"Chemical engineering",
"Water pollution",
"Civil engineering",
"Environmental engineering",
"Water technology"
] |
47,623,483 | https://en.wikipedia.org/wiki/Dibromine%20pentoxide | Dibromine pentoxide is the chemical compound composed of bromine and oxygen with the formula Br2O5. It is a colorless solid that is stable below −20 °C. It has the structure O2Br−O−BrO2; the Br−O−Br linkage is bent, with a bond angle of 121.2°. Each BrO3 group is pyramidal, with the bromine atom at the apex.
Preparation
Dibromine pentoxide can be prepared by reacting a solution of bromine in dichloromethane with ozone at low temperatures and recrystallized from propionitrile.
References
Bromine(V) compounds
Oxides | Dibromine pentoxide | [
"Chemistry"
] | 136 | [
"Inorganic compounds",
"Oxides",
"Inorganic compound stubs",
"Salts"
] |
47,623,626 | https://en.wikipedia.org/wiki/Multi%20expression%20programming | Multi Expression Programming (MEP) is an evolutionary algorithm for generating mathematical functions that describe a given set of data. MEP is a Genetic Programming variant encoding multiple solutions in the same chromosome. The MEP representation is not fixed (multiple representations have been tested). In the simplest variant, MEP chromosomes are linear strings of instructions, a representation inspired by Three-address code. MEP's strength lies in its ability to encode multiple solutions to a problem in the same chromosome; in this way, larger zones of the search space can be explored. For most problems this advantage comes with no running-time penalty compared with genetic programming variants that encode a single solution in a chromosome.
Representation
MEP chromosomes are arrays of instructions represented in Three-address code format.
Each instruction contains a variable, a constant, or a function. If the instruction is a function, then the arguments (given as instruction's addresses) are also present.
Example of MEP program
Here is a simple MEP chromosome (labels on the left side are not a part of the chromosome):
1: a
2: b
3: + 1, 2
4: c
5: d
6: + 4, 5
7: * 3, 5
Fitness computation
When the chromosome is evaluated it is unclear which instruction will provide the output of the program. In many cases, a set of programs is obtained, some of them being completely unrelated (they do not have common instructions).
For the above chromosome, here is the list of possible programs obtained during decoding:
E1 = a,
E2 = b,
E4 = c,
E5 = d,
E3 = a + b,
E6 = c + d,
E7 = (a + b) * d.
Each instruction is evaluated as a possible output of the program.
The fitness (or error) is computed in a standard manner. For instance, in the case of symbolic regression, the fitness is the sum of differences (in absolute value) between the expected output (called target) and the actual output.
Fitness assignment process
Which expression will represent the chromosome? Which one will give the fitness of the chromosome?
In MEP, the best of them (the one with the lowest error) represents the chromosome. This differs from other GP techniques: in Linear genetic programming the last instruction gives the output, while in Cartesian genetic programming the gene providing the output is evolved like all other genes.
Note that, for many problems, this evaluation has the same complexity as in the case of encoding a single solution in each chromosome. Thus, there is no penalty in running time compared to other techniques.
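The decoding and fitness-assignment scheme above can be sketched in Python. This is an illustrative toy implementation of evaluating every instruction as a candidate output (the gene encoding and operator set are simplified assumptions, not the format used by MEPX or libmep):

```python
def evaluate(chromosome, data, targets):
    """Evaluate every instruction as a candidate program output and
    return (best_error, index_of_best_instruction).

    chromosome: list of genes; a gene is either a variable name
    ('a', 'b', ...) or a tuple (op, i, j) where i, j are 0-based
    addresses of earlier instructions (the article uses 1-based).
    data: list of dicts mapping variable names to values.
    targets: expected output for each data row.
    """
    ops = {'+': lambda x, y: x + y, '*': lambda x, y: x * y}
    errors = [0.0] * len(chromosome)
    for row, target in zip(data, targets):
        values = []
        for gene in chromosome:
            if isinstance(gene, tuple):      # function gene
                op, i, j = gene
                values.append(ops[op](values[i], values[j]))
            else:                            # terminal gene (variable)
                values.append(row[gene])
        # Each instruction is a possible output; accumulate its
        # absolute error against the target (symbolic regression).
        for k, v in enumerate(values):
            errors[k] += abs(v - target)
    best = min(range(len(chromosome)), key=lambda k: errors[k])
    return errors[best], best

# The chromosome from the article; E7 decodes to (a + b) * d.
chromosome = ['a', 'b', ('+', 0, 1), 'c', 'd', ('+', 3, 4), ('*', 2, 4)]
data = [{'a': 1, 'b': 2, 'c': 5, 'd': 3}, {'a': 2, 'b': 1, 'c': 0, 'd': 4}]
targets = [(1 + 2) * 3, (2 + 1) * 4]   # data generated by (a + b) * d
err, idx = evaluate(chromosome, data, targets)
print(err, idx)   # instruction 7 (index 6) matches with zero error
```

Note that all seven candidate outputs are scored in a single pass over the chromosome, which is the source of the "no running-time penalty" claim above.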
Software
MEPX
MEPX is a cross-platform (Windows, macOS, and Linux Ubuntu) free software for the automatic generation of computer programs. It can be used for data analysis, particularly for solving symbolic regression, statistical classification and time-series problems.
libmep
Libmep is a free and open source library implementing Multi Expression Programming technique. It is written in C++.
hmep
hmep is a newer open source library implementing the Multi Expression Programming technique in the Haskell programming language.
See also
Genetic programming
Cartesian genetic programming
Gene expression programming
Grammatical evolution
Linear genetic programming
Notes
External links
Multi Expression Programming website
Multi Expression Programming source code
Machine learning algorithms
Regression and curve fitting software
Software that uses wxWidgets | Multi expression programming | [
"Biology"
] | 681 | [
"Genetics techniques",
"Genetic programming"
] |
47,623,658 | https://en.wikipedia.org/wiki/Dibromine%20trioxide | Dibromine trioxide is the chemical compound composed of bromine and oxygen with the formula Br2O3. It is an orange solid that is stable below −40 °C. It has the structure Br−O−BrO2 (bromine bromate). It was discovered in 1993. The bond angle of Br−O−Br is 111.7°, the bond angle of O−Br=O is 103.1°, and the bond angle of O=Br=O is 107.6°. The Br−OBrO2 bond length is 1.845 Å, the O−BrO2 bond length is 1.855 Å and the Br=O bond length is 1.612 Å.
Reactions
Dibromine trioxide can be prepared by reacting a solution of bromine in dichloromethane with ozone at low temperatures.
It disproportionates in alkali solutions to Br− and BrO3−.
References
Bromine(V) compounds
Sesquioxides
Bromine(I) compounds
Mixed valence compounds | Dibromine trioxide | [
"Chemistry"
] | 216 | [
"Mixed valence compounds",
"Inorganic compounds",
"Inorganic compound stubs"
] |
47,623,807 | https://en.wikipedia.org/wiki/Pittsburgh%20Sleep%20Quality%20Index | The Pittsburgh Sleep Quality Index (PSQI) is a self-report questionnaire that assesses sleep quality over a 1-month time interval. The measure consists of 19 individual items, creating 7 components that produce one global score, and takes 5–10 minutes to complete. Developed by researchers at the University of Pittsburgh, the PSQI is intended to be a standardized sleep questionnaire for clinicians and researchers to use with ease and is used for multiple populations. The questionnaire has been used in many settings, including research and clinical activities, and has been used in the diagnosis of sleep disorders. Clinical studies have found the PSQI to be reliable and valid in the assessment of sleep problems to some degree, but more so with self-reported sleep problems and depression-related symptoms than actigraphic measures.
Development and history
The PSQI was developed in 1989, by Buysse and his colleagues, to create a standardized measure designed to gather consistent information about the subjective nature of people's sleep habits and provide a clear index that both clinicians and patients can use. It gained popularity as a measure that could be used in research that looks at how sleep might be associated with sleep disorders, depression, and bipolar disorder.
Scoring and interpretation
Consisting of 19 items, the PSQI measures several different aspects of sleep, offering seven component scores and one composite score. The component scores consist of subjective sleep quality, sleep latency (i.e., how long it takes to fall asleep), sleep duration, habitual sleep efficiency (i.e., the percentage of time in bed that one is asleep), sleep disturbances, use of sleeping medication, and daytime dysfunction.
Each item is weighted on a 0–3 interval scale. The global PSQI score is then calculated by totaling the seven component scores, providing an overall score ranging from 0 to 21, where lower scores denote a healthier sleep quality.
Traditionally, the items from the PSQI have been summed to create a total score to measure overall sleep quality. Statistical analyses also support looking at three factors, which include sleep efficiency (using sleep duration and sleep efficiency variables), perceived sleep quality (using subjective sleep quality, sleep latency, and sleep medication variables), and daily disturbances (using sleep disturbances and daytime dysfunctions variables).
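The global scoring rule can be summarized in a short Python sketch. The component names follow the article, but the function only performs the final summation; deriving the seven component scores from the 19 items follows the official scoring instructions, which are not reproduced here:

```python
# Seven PSQI components, each scored on a 0-3 scale; the global
# score is their sum (0-21), with lower totals denoting healthier
# sleep.  Dictionary keys are illustrative identifiers.
COMPONENTS = (
    "subjective_sleep_quality", "sleep_latency", "sleep_duration",
    "habitual_sleep_efficiency", "sleep_disturbances",
    "sleep_medication_use", "daytime_dysfunction",
)

def psqi_global_score(components: dict) -> int:
    """Sum the seven 0-3 component scores into the 0-21 global score."""
    missing = [c for c in COMPONENTS if c not in components]
    if missing:
        raise ValueError(f"missing components: {missing}")
    for name in COMPONENTS:
        if components[name] not in (0, 1, 2, 3):
            raise ValueError(f"{name} must be an integer in 0-3")
    return sum(components[name] for name in COMPONENTS)

scores = dict(zip(COMPONENTS, (1, 2, 1, 0, 1, 0, 2)))
print(psqi_global_score(scores))  # 7 -- lower totals denote healthier sleep
```

The three-factor alternative mentioned above regroups the same seven components rather than changing how each is scored.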
Reliability
*Table from Youngstrom et al., extending Hunsley & Mash, 2008
Validity
*Table from Youngstrom et al., extending Hunsley & Mash, 2008
Impact
The PSQI is now used by researchers working with people from adolescence to late life. The PSQI is recommended in independent reviews because it has accumulated a substantial amount of research evidence. In addition to the measure's promising reliability and validity, its brevity and accessibility as a free measure give it great potential for clinical practice. To date, it has been translated into 56 languages. The Bengali version of the PSQI is abbreviated BPSQI, where "B" stands for Bengali.
Limitations
The PSQI has the same problems as other self-report inventories in that scores can be easily exaggerated or minimized by the person completing them. Like all questionnaires, the way the instrument is administered can affect the final score. The PSQI is a relatively new measure, and its psychometric properties have consequently not yet been fully investigated.
See also
Epworth Sleepiness Scale
References
External links
PDF version of PSQI
Tayside children's sleep questionnaire (TCSQ)
Sleep medicine
Mental disorders screening and assessment tools | Pittsburgh Sleep Quality Index | [
"Biology"
] | 715 | [
"Behavior",
"Sleep",
"Sleep medicine"
] |
47,624,883 | https://en.wikipedia.org/wiki/Law%20on%20the%20fight%20against%20terrorism | The Law on the fight against terrorism (), abbreviated LCT, is a 2006 French counter-terrorism legislation designed to improve state security and strengthen border control. The legislation was passed on 23 January 2006 under the leadership of Nicolas Sarkozy, then the Minister of the Interior. Notably the law increased punitive measures for criminal association and gave the government more power to access personal information online.
Background
After the 2005 London bombings perpetrated by Islamic extremists, Sarkozy pushed to strengthen counter-terrorism measures in France. Sarkozy introduced the bill in the French Senate on 28 October 2005, saying that while France had never yielded to terrorist intimidation and never would, the rise in global terrorism necessitated change in policy.
Legislation
The legislation amended several previous criminal codes, including the first French counter-terrorism law, introduced in 1986. The 2006 act particularly increased the breadth of government surveillance without judicial control.
Police may access an individual's computer files without a warrant to prevent a terrorist act.
Internet service providers and Internet cafes are required to retain login and connection data for one year and to provide this data to authorities if requested.
Authorities may receive telephone and cell phone usage details, without the permission of a judge.
The time a person can be held without charges was increased from four to six days in cases of "serious risk of imminent terrorist action in France or abroad."
Increased CCTV surveillance in public
Identity checks, including on board international trains, are strengthened.
The Prime Minister or a person qualified in the Interior Ministry may authorize listening devices to record conversations.
Criticisms
The law was criticized for encroaching on personal freedoms and liberties, in particular, accessing phone and Internet data without a signed warrant from a judicial authority. "Internet surveillance has now escaped from any legal proceedings to be placed under the direct control of the state," criticized Le Monde.
See also
French criminal law
Terrorism in France
Fiche "S"
References
2006 in France
2006 in law
Government databases in France
Information privacy
Terrorism laws
Counterterrorism in France
French criminal law | Law on the fight against terrorism | [
"Engineering"
] | 414 | [
"Cybersecurity engineering",
"Information privacy"
] |
47,626,347 | https://en.wikipedia.org/wiki/Setipiprant | Setipiprant (INN; developmental code names ACT-129968, KYTH-105) is an investigational drug developed for the treatment of asthma and scalp hair loss. It was originally developed by Actelion and acts as a selective, orally available antagonist of the prostaglandin D2 receptor 2 (DP2). The drug is being developed as a novel treatment for male pattern baldness by Allergan.
Medical uses
Scalp hair loss
Acting through DP2, PGD2 can inhibit hair growth, suggesting that this receptor is a potential target for baldness treatment. A phase 2A study to evaluate the safety, tolerability, and efficacy of oral setipiprant relative to a placebo in 18- to 49-year-old males with androgenetic alopecia was completed in May 2018 and did not find statistically significant improvement.
Allergic conditions
Setipiprant proved to be well tolerated and reasonably effective in reducing allergen-induced airway responses in asthmatic patient clinical trials. However, the drug, while supporting the concept that DP2 contributes to asthmatic disease, did not show sufficient advantage over existing drugs and was discontinued from further development for this application.
Adverse effects
Data from phase II and III clinical trials did not reveal any severe adverse effects of setipiprant. The authors were unable to identify any pattern of adverse effects differing from placebo, including subjective reporting of symptoms and objective laboratory monitoring.
Interactions
While setipiprant mildly induces the drug metabolizing enzyme CYP3A4 in vitro, the interaction appears to not be clinically relevant.
Pharmacology
Mechanism of action
Allergic conditions
Setipiprant binds to the DP2 receptor with a dissociation constant of 6 nM, representing potent antagonism of the receptor. The DP2 receptor, also called the CRTh2 receptor, is a G-protein-coupled receptor (GPCR) that is expressed on certain inflammatory cells, such as eosinophils, basophils, and certain lymphocytes. For its mechanism of action in the treatment of allergic conditions, setipiprant's DP2 antagonism prevents the action of prostaglandin D2 (PGD2) on these receptors. The DP2 receptor mediates the activation of type 2 helper T (Th2) cells, eosinophils, and basophils in the lungs, which are white blood cells implicated in producing the inflammatory response that characterizes allergic conditions. Activation of DP2 on Th2 cells by PGD2 induces the secretion of inflammatory cytokines (interleukin (IL) 4, IL-5, and IL-13), which cause an increase of eosinophils in the blood, remodeling of lung tissue, and hypersensitivity of lung tissue to allergens.
Setipiprant does not antagonize the thromboxane receptor (TP). The bronchoconstricting properties of PGD2 are not inhibited by setipiprant, since these are mediated by the TP receptor. As a point of contrast, ramatroban is a selective TP antagonist and DP2 receptor antagonist.
Setipiprant does not appreciably inhibit the activity of the enzyme cyclooxygenase 1 (COX-1), which is responsible for the synthesis of prostaglandins (including PGD2).
Scalp hair loss
Prostaglandin D2 synthase (PTGDS) is an enzyme that produces PGD2. In men with androgenic alopecia, the enzyme PTGDS is elevated in the bald scalp tissue, as well as its product PGD2. PGD2 inhibits the growth of hair follicles through its activity on the DP2 receptor, but not the DP1 receptor. Theoretically, setipiprant's DP2 receptor antagonism may counteract the activity of PGD2 in hair follicles, thereby stimulating hair growth.
Pharmacokinetics
The oral bioavailability of setipiprant is 44% in rats and 55% in dogs, which suggests that it should be orally bioavailable in humans. The half-life of setipiprant in humans is about 11 hours. The maximum concentration in plasma (Cmax) is 6.04 and 6.44 mcg/mL for setipiprant tablets and capsules respectively, with an area under the curve of 31.88 and 31.50 mcg×hours/mL for setipiprant tablets and capsules respectively. Cmax was reached between 1.8–4 hours after oral administration. The tablet and capsule formulations are bioequivalent.
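The reported ~11-hour half-life implies simple first-order elimination arithmetic, sketched below. Single-compartment exponential decay is an assumption made for illustration, not a published pharmacokinetic model of setipiprant:

```python
import math

# First-order elimination: C(t) = C0 * exp(-k * t), with the rate
# constant derived from the reported ~11-hour half-life.
HALF_LIFE_H = 11.0
k = math.log(2) / HALF_LIFE_H          # elimination rate constant, 1/h

def fraction_remaining(hours: float) -> float:
    """Fraction of the peak plasma concentration left after `hours`."""
    return math.exp(-k * hours)

print(round(fraction_remaining(11.0), 2))   # one half-life -> 0.5
print(round(fraction_remaining(24.0), 2))   # about a fifth left after a day
```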
Chemistry
Setipiprant appears as a light-yellow to yellow solid. Based on general guidelines, the powder form is considered stable for 2 years at 4 °C and for 3 years at −20 °C. When dissolved in a solvent, setipiprant is stable for 1 month at −20 °C and 6 months at −80 °C. It is considered soluble in DMSO at concentrations ≥ 36 mg/mL.
History
Setipiprant was initially researched by Actelion as a treatment for allergies and inflammatory disorders, particularly asthma, but despite being well tolerated in clinical trials and showing reasonable efficacy against allergen-induced airway responses in asthmatic patients, it failed to show sufficient advantages over existing drugs and was discontinued from further development in this application.
However, following the discovery in 2012 that the prostaglandin D2 receptor (DP/PGD2) is expressed at high levels in the scalp of men affected by male pattern baldness, the rights to setipiprant were acquired by Kythera to develop the drug as a novel treatment for baldness. The favorable pharmacokinetics and relative lack of side effects seen in earlier clinical trials mean that fresh clinical trials for this new application can be conducted fairly quickly. Setipiprant is currently under development by Allergan for the prevention of androgenic alopecia, following Allergan's acquisition of Kythera.
See also
Prostaglandin DP2 receptor
Fevipiprant
Ramatroban
References
External links
Setipiprant - AdisInsight
Prostaglandins
Receptor antagonists
1-Naphthyl compounds | Setipiprant | [
"Chemistry"
] | 1,333 | [
"Neurochemistry",
"Receptor antagonists"
] |
47,626,903 | https://en.wikipedia.org/wiki/Norman%20A.%20Ough | Norman Arthur Ough (10 November 1898 – 3 August 1965) was a marine model maker whose models of Royal Navy warships are regarded as among the very finest of warship models.
Family and early life
Ough was born in Leytonstone, London. His father, Arthur Ough (1863–1946), was an architect, surveyor and civil engineer. At the age of two Ough accompanied his parents to Hong Kong, where his father was employed as an architect for the University of Hong Kong and the Kowloon-Canton Railway, remaining there for four years. He was educated at Highfield School, Liphook, Hampshire and Bootham School in York.
Later life
Ough was a conscientious objector during the First and Second World Wars. From the mid-1930s he lived in a flat at 98 Charing Cross Road, London. He never married, and there is much anecdotal evidence that he lived a frugal, even impoverished, lifestyle in which model-making was an all-absorbing pursuit, to the extent that he was twice hospitalised for failing to eat adequately because of his concentration on his work.
Models
Many of Ough's models are on display or held in store in museums including the Imperial War Museum, the National Maritime Museum and the Royal United Services Museum. One of his earlier models was of the battleship HMS Queen Elizabeth, which he made for Lord Howe, who presented it to Earl Beatty. There followed commissions for his models from many museums. At one time he was employed by Earl Mountbatten to make models of ships on which he had served, who remarked in a reply dated 20 July 1979 to a letter received from a visitor to his Broadlands estate "How interesting that the great model maker, Norman Ough, was a cousin of yours... I was told by the maker of the model of HMS Hampshire, also on display, that other model makers considered Norman Ough, the greatest master of his craft of this century."
As of September 2017, these models were located at the collections and research facility at No. 1 Smithery, Chatham Historic Dockyard.
In an article written for an edition of the magazine Model Maker about his model of HMS Dorsetshire in No. 14 Dry Dock, Portsmouth, which is widely regarded as among his very best, Ough writes about the benefit of his early training as an artist in achieving the model's realism:
Plans
In preparation for his models, Ough drew meticulous plans of the ships, their weapons, fittings and boats, many of which are regarded as the most authoritative drawings of their subjects in existence. For years these plans were marketed through the David MacGregor Plans Service and after Ough's death in 1965 his plans became the sole property of David MacGregor. On MacGregor's death in 2003 the combined collection was bequeathed to the SS Great Britain Trust.
Film industry work
Ough was commissioned to construct models for effects in several films including Convoy (1940), Sailors Three (1940), Spare a Copper (1940), Ships with Wings (1941), The Big Blockade (1942), San Demetrio London (1943) and Scott of the Antarctic (1948).
References
People from Leytonstone
Model engineers
Model makers
1898 births
1965 deaths | Norman A. Ough | [
"Physics"
] | 664 | [
"Model makers"
] |
47,628,477 | https://en.wikipedia.org/wiki/Identifiability%20analysis | Identifiability analysis is a group of methods found in mathematical statistics that are used to determine how well the parameters of a model are estimated by the quantity and quality of experimental data. Therefore, these methods explore not only identifiability of a model, but also the relation of the model to particular experimental data or, more generally, the data collection process.
Introduction
Assuming a model is fit to experimental data, the goodness of fit does not reveal how reliable the parameter estimates are. Nor is the goodness of fit sufficient to prove that the model was chosen correctly. For example, if the experimental data are noisy or there are too few data points, the estimated parameter values could vary drastically without significantly influencing the goodness of fit. To address these issues, identifiability analysis can be applied as an important step toward ensuring a correct choice of model and a sufficient amount of experimental data. The purpose of this analysis is either to provide quantified evidence of a correct model choice and of the adequacy of the acquired experimental data, or to detect non-identifiable and sloppy parameters, which helps in planning experiments and in building and improving the model at an early stage.
Structural and practical identifiability analysis
Structural identifiability analysis is a particular type of analysis in which the model structure itself is investigated for non-identifiability. Recognized non-identifiabilities may be removed analytically, for example by substituting the non-identifiable parameters with combinations of them. A model overloaded with independent parameters may, when applied to a finite experimental dataset, provide a good fit at the price of making the fitting results insensitive to changes in parameter values, therefore leaving the parameter values undetermined. Structural methods are also referred to as a priori methods, because the non-identifiability analysis can be performed before the fitting score functions are calculated, by comparing the number of degrees of freedom (statistics) of the model with the number of independent experimental conditions that can be varied.
Practical identifiability analysis is performed by exploring the fit of an existing model to experimental data. Once a fit has been obtained, parameter identifiability can be analyzed either locally near a given point (usually near the parameter values that provided the best fit) or globally over an extended parameter space. A common example of practical identifiability analysis is the profile likelihood method.
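The profile likelihood idea can be sketched for a toy exponential model: fix one parameter on a grid, re-optimize the remaining parameters at each grid point, and inspect the resulting cost profile (a flat profile signals practical non-identifiability). The model, data, and values below are illustrative, and a least-squares cost stands in for the negative log-likelihood:

```python
import math

# Model: y = a * exp(-b * t).  For each fixed value of b, the
# nuisance parameter a is re-optimized (here in closed form, since
# the model is linear in a) and the best achievable cost recorded.

t = [0.1 * i for i in range(21)]
y = [2.0 * math.exp(-1.0 * ti) for ti in t]   # noise-free synthetic data

def profiled_cost(b):
    """Best least-squares cost with b fixed and a optimized analytically."""
    e = [math.exp(-b * ti) for ti in t]
    a = sum(yi * ei for yi, ei in zip(y, e)) / sum(ei * ei for ei in e)
    return sum((a * ei - yi) ** 2 for ei, yi in zip(e, y))

grid = [0.5 + 0.1 * k for k in range(11)]      # candidate values of b
profile = [profiled_cost(b) for b in grid]
best = min(range(len(grid)), key=lambda k: profile[k])
print(round(grid[best], 1))     # minimum near the true b = 1.0
print(profile[0] > profile[best] and profile[-1] > profile[best])  # curved profile
```

A sharply curved profile like this one indicates that b is practically identifiable from the data; confidence intervals are typically read off where the profile crosses a likelihood-ratio threshold.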
See also
Notes
References
Lavielle, M.; Aarons, L. (2015), "What do we mean by identifiability in mixed effects models?", Journal of Pharmacokinetics and Pharmacodynamics, 43: 111–122; .
Stanhope, S.; Rubin, J. E.; Swigon D. (2014), "Identifiability of linear and linear-in-parameters dynamical systems from a single trajectory", SIAM Journal on Applied Dynamical Systems, 13: 1792–1815; .
Numerical analysis
Interpolation
Regression analysis | Identifiability analysis | [
"Mathematics"
] | 637 | [
"Computational mathematics",
"Mathematical relations",
"Approximations",
"Numerical analysis"
] |
47,628,929 | https://en.wikipedia.org/wiki/Pixie%20cup | Pixie cup may refer to:
Fungi
Geopyxis carbonaria, fungus also known as a "pixie cup"
Geopyxis vulcanalis, fungus commonly known as the "vulcan pixie cup"
Scutellinia scutellata, fungus also known as the "eyelash pixie cup"
Lichen
A number of species of the lichens Cladonia (cup lichen)
Cladonia asahinae, lichen commonly known as "pixie cup lichen"
See also
Elf Cup (disambiguation)
Fairy Cup (disambiguation) | Pixie cup | [
"Biology"
] | 121 | [
"Set index articles on fungus common names",
"Set index articles on organisms"
] |
68,802,240 | https://en.wikipedia.org/wiki/Germyl | Germyl, trihydridogermanate(1−), trihydrogermanide, trihydridogermyl or, according to the IUPAC Red Book, germanide, is an anion containing germanium bonded to three hydrogens, with formula GeH3−. Germyl is the IUPAC term for the −GeH3 group. For less electropositive elements the bond can be considered covalent rather than the ionic bonding that "germanide" indicates. Germyl is the conjugate base of germane, formed when germane loses a proton.
The first germyl compound to be discovered was sodium germyl. Germane was reacted with sodium dissolved in liquid ammonia to produce sodium germyl. Other alkali metal germyl compounds are known. There are also numerous transition metal complexes that contain germyl as a ligand.
Formation
Alkali metal germyl compounds have been made by reacting germane with the alkali metal dissolved in liquid ammonia or another non-reactive solvent.
Transition metal complexes can be made by using lithium aluminium hydride to reduce a trichlorogermyl complex (GeCl3−), which in turn can be made from the transition metal complex chloride and .
Salt elimination can be used in a reaction with monochlorogermane and a sodium salt of a transition metal anion:
.
In the gas phase, the germyl anion can be made from germane by capturing an electron with more than 8 eV of energy:
The germyl radical can be produced and immobilised in molecular form by exposing germane to vacuum ultraviolet light in a solid argon matrix. On heating, digermane is formed:
Properties
Germyl compounds react with water, so water cannot be used as a solvent. Liquids that have been used as solvents include liquid ammonia, ethylamine, diglyme, and hexamethylphosphoramide. The choice of solvent depends on the desired temperature, whether alkali metals are to be dissolved, whether the solvent needs to be distilled, and whether it reacts with the solute.
The bond between the metal ion and the germyl ion may be purely ionic, but may also be bonded via two bridging hydrogen atoms.
The energy required to remove a hydrogen atom from germane to form the neutral radical is . GeH4 → GeH3• + H•. The electron affinity of the radical is 1.6 eV: GeH3• + e− → GeH3−.
The gas-phase acidity of germane has been characterised by both ΔG and ΔH values for the deprotonation reaction.
Both the anion and radical have C3v symmetry, and are shaped as a triangular pyramid with germanium at the top, and three hydrogen atoms at the bottom. In the radical, the H-Ge-H angle is 110°. In the anion the H-Ge-H angle is about 93°.
Reactions
Germyl compounds gradually decompose at room temperature, releasing hydrogen and forming a metal germanide.
Germyl compounds react with alkyl halides to substitute the germyl (−GeH3) group for the halogen. With aromatic halide compounds, dihalomethanes, or neopentyl halides, they replace the halogen with hydrogen. Organogermanium compounds that can be produced include methyl germane, dimethyl germane, digermyl methane, digermyl ethane, and digermyl propane.
The germyl ion reacts with water to yield germane:
Sodium germyl reacts with oxygen to form an orthogermanate:
This loses water at room temperature.
K[(η5-C5H5)Mn(CO)2GeH3] reacts with acid to yield [(η5-C5H5)Mn(CO)2]2Ge, which contains a Mn=Ge=Mn linkage.
List
Related
Germylidyne, with formula ≡GeH, has a triple bond to the metal atom.
Germylidene, with base formula =GeH2, has a double bond to the central metal.
References
Germanium(II) compounds
Metal hydrides
Anions | Germyl | [
"Physics",
"Chemistry"
] | 851 | [
"Matter",
"Inorganic compounds",
"Anions",
"Reducing agents",
"Metal hydrides",
"Ions"
] |
68,802,763 | https://en.wikipedia.org/wiki/DRUMS | DRUMS (Debris Removal Unprecedented Micro-Satellite) is an experimental spacecraft that will test proximity operations near space debris. The microsatellite carries two pieces of 'mock space debris' which, once deployed, will be used as targets for demonstrating approach and contact.
Overview
DRUMS was developed by Japanese company Kawasaki Heavy Industries (KHI), which will also operate the satellite following its launch. DRUMS will be operated from a ground station inside KHI's Gifu Works facility, and an antenna for communicating with the satellite was finished in October 2019. KHI characterizes DRUMS as a demonstration for future missions to remove launch vehicle upper stages from orbit, along with potential applications for on-orbit satellite servicing. DRUMS was launched on 9 November 2021 by an Epsilon launch vehicle. A half size model of DRUMS was displayed at the 2019 G20 Osaka summit.
Mission
Once in orbit, DRUMS will deploy two nonfunctional objects, which will act as targets for DRUMS's space debris approach test. After distancing itself from the target, DRUMS will then begin to approach it using on-board optical sensors. The microsatellite has nitrogen gas propulsion for maneuvering, along with lighting it will use to illuminate the target while inside Earth's shadow. Once it has arrived near the target, DRUMS will extend a boom, which will be used to physically contact the target. DRUMS's camera will record the overall sequence of the test.
See also
ClearSpace-1
RemoveDEBRIS
References
External links
DRUMS
Space debris
Satellites of Japan
Spacecraft launched in 2021
2021 in Japan | DRUMS | [
"Technology"
] | 313 | [
"Space debris"
] |
68,805,849 | https://en.wikipedia.org/wiki/The%20Apportionment%20of%20Human%20Diversity | "The Apportionment of Human Diversity" is a 1972 paper on racial categorisation by American evolutionary biologist Richard Lewontin. In it, Lewontin presented an analysis of genetic diversity amongst people from different conventionally-defined races. His main finding, that there is more genetic variation within these populations than between them, is considered a landmark in the study of human genetic variation and contributed to the abandonment of race as a scientific concept.
Background
By the 1960s, anthropologists such as Frank B. Livingstone had concluded that "there are no races, there are only clines" – smooth gradients of genetic variation in a species across its geographic range. Lewontin's mentor Theodosius Dobzhansky challenged this, arguing that there are discrete human populations that can be distinguished by differences in the frequency of genetic traits, which he called races. At that time the debate was largely semantic, stemming from their different ideas about what race is and how it would be manifested in human genetics. The evidence that was available to Livingstone and Dobzhansky was mostly limited to qualitative observations of phenotypes thought to express genetic variation (e.g. skin colour). This changed over the course of the 1960s, as new techniques began to produce direct evidence for genetic variation in humans at a molecular level. By 1972, when Dobzhansky invited Lewontin to contribute to his edited volume of Evolutionary Biology, Lewontin felt that there was sufficient data to look at the problem anew, from a "firm quantitative basis".
Lewontin had been interested in using quantitative methods to assess taxonomic categories for some time before 1972. Over a decade earlier, palaeontologist George Gaylord Simpson had invited him to co-author a second edition of his textbook Quantitative Zoology (1960), and Lewontin added a chapter on the analysis of variance. In it, he illustrated how this approach could be used to distinguish geographically distinct races with the example of Drosophila persimilis, a species of fruit fly. Though the method was similar to that he would later apply to human genetic variation, he reached the opposite conclusion: there was much greater genetic variance between geographic populations than between individual fruit flies, so there was a reasonable basis for distinguishing taxonomic races. Foreshadowing his later work on human genetic variation, he also emphasised that, because there will always be measurable differences between any two populations, it is the degree of difference compared to other axes of variation that will determine whether a grouping is biologically significant. "The Apportionment of Human Diversity" was published in a volume dedicated to Simpson, perhaps prompting Lewontin to recall this previous work.
Findings
Lewontin performed a statistical analysis of the fixation index (FST) in populations drawn from seven classically-defined "races" (Caucasian, African, Mongoloid, South Asian Aborigines, Amerinds, Oceanians, and Australian Aborigines). At that time, direct sequence data from the human genome was not sufficiently available, so he instead used 17 indirect markers, including blood group proteins. Lewontin found that, of the total genetic variation between humans (i.e., the 0.1% of DNA that varies between individuals), 85.4% is found within populations, 8.3% is found between populations within a "race", and only 6.3% is accounted for by the racial classification. Numerous later studies have confirmed his findings. Based on this analysis, Lewontin concluded, "Since such racial classification is now seen to be of virtually no genetic or taxonomic significance either, no justification can be offered for its continuance."
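The apportionment can be illustrated with a toy fixation-index calculation for a single biallelic locus. The three populations and their allele frequencies below are hypothetical numbers chosen to mimic the qualitative finding, not Lewontin's 17-marker data.

```python
import numpy as np

# Illustrative allele frequencies for one biallelic locus in three
# hypothetical populations (NOT Lewontin's actual data).
p = np.array([0.55, 0.60, 0.50])   # frequency of one allele per population
w = np.full(p.size, 1.0 / p.size)  # equal population weights

# Expected heterozygosity within each population and in the pooled total.
h_s = np.sum(w * 2 * p * (1 - p))  # mean within-population heterozygosity
p_bar = np.sum(w * p)              # pooled allele frequency
h_t = 2 * p_bar * (1 - p_bar)      # total heterozygosity

fst = (h_t - h_s) / h_t            # fraction of diversity between populations
within = h_s / h_t                 # fraction found within populations
# Small between-group differences in allele frequency give a small FST,
# i.e. most of the variation lies within populations, as Lewontin found.
```

By construction the two fractions sum to one, so a small FST directly implies that the within-population share dominates.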
Legacy
Many subsequent studies confirmed Lewontin's main finding.
The paper was not frequently cited in the years following its publication.
Fifty years after its publication, the paper was found to be frequently referenced in social media. In particular, Twitter users associated with far-right politics commonly used the term "Lewontin's fallacy" (referencing A. W. F. Edwards' 2003 critique of Lewontin, "Human Genetic Diversity: Lewontin's Fallacy") as a rhetorical device to dismiss scientific arguments against biological race. Commenting on the enduring significance afforded to Lewontin's paper in far-right and white nationalist discourse, geneticists Jedidiah Carlson and Kelley Harris proposed that "rejection of Lewontin's interpretation has become a tenet of white nationalist ideology".
In 2022, a special issue of the journal Philosophical Transactions of the Royal Society B: Biological Sciences was published with the theme "Celebrating 50 years since Lewontin's apportionment of human diversity", and a section of the book Remapping Race in a Global Context was devoted to discussing Lewontin's paper and defending it against Edwards' critique.
References
Biology papers
Human population genetics
Race (human categorization)
Biology controversies
Taxonomy (biology) | The Apportionment of Human Diversity | [
"Biology"
] | 1,024 | [
"Taxonomy (biology)"
] |
68,807,012 | https://en.wikipedia.org/wiki/Protein%20aggregation%20predictors | Protein aggregation predictors are computational methods that use protein sequence and/or protein structure to predict protein aggregation. The table below shows the main features of software for the prediction of protein aggregation.
Table
See also
PhasAGE toolbox
Amyloid
Protein aggregation
References
Protein structure
Structural bioinformatics software
Proteomics
Neurodegenerative disorders | Protein aggregation predictors | [
"Chemistry"
] | 63 | [
"Protein structure",
"Structural biology"
] |
68,807,529 | https://en.wikipedia.org/wiki/Br%C3%BCderschaft | The ( in German) or () is a drinking ritual, or a rite of passage, to consolidate friendship. Two people simultaneously drink a glass of the same alcoholic beverage each, with their arms intertwined at the elbows. A "brotherly kiss" is customary after emptying the glasses, which then seals the ritual. Thereafter they are considered good friends and address each other informally.
A symbolic act that establishes a closer bond between two—usually male—individuals, it has been associated with an end of formality between them and addressing each other (the informal you in German) at least since Jus Potandi oder ZechRecht, a legal-parodic text published in 1616. In the 17th and 18th centuries, the expression was also common, allegedly based on the assumption that "drinking together would bind and oblige".
References
Notes
Citations
External links
Etiquette
Drinking culture
German traditions | Brüderschaft | [
"Biology"
] | 185 | [
"Etiquette",
"Behavior",
"Human behavior"
] |
68,809,157 | https://en.wikipedia.org/wiki/Ghostwriter%20%28hacker%20group%29 | Ghostwriter, also known as UNC1151 and Storm-0257 by Microsoft, is a hacker group allegedly originating from Belarus. According to the cybersecurity firm Mandiant, the group has spread disinformation critical of NATO since at least 2016.
History
The name Ghostwriter comes from the group's first attacks, whereby they would steal credentials of journalists or publishers and publish fake articles using those credentials. Hence, the group effectively became unwanted ghostwriters for those with stolen credentials. UNC1151 is an internal company name by Mandiant given to uncategorized groups of "cyber intrusion activity."
The European Union has blamed this group for hacking German government officials.
The EU's foreign policy chief Josep Borrell has threatened Russia with sanctions.
According to Serhiy Demedyuk, deputy secretary of the national security and defense council of Ukraine, the group was responsible for defacement of Ukrainian government websites in January 2022.
In February 2022 The Register reported that a Ukrainian CERT had announced that the group was targeting "private ‘i.ua’ and ‘meta.ua’ [email] accounts of Ukrainian military personnel and related individuals" as part of a phishing attack during the invasion of Ukraine. Mandiant said that two domains mentioned by the CERT, i[.]ua-passport[.]space and id[.]bigmir[.]space were known command and control domains of the group. Mandiant also said "We are able to tie the infrastructure reported by CERT.UA to UNC1151, but have not seen the phishing messages directly. However, UNC1151 has targeted Ukraine and especially its military extensively over the past two years, so this activity matches their historical pattern."
Characteristics and techniques
The group has executed spear-phishing campaigns against members of legitimate press to infiltrate the content management systems of those organizations. Then, the group uses the system to publish their own fake stories.
References
Hacker groups
Hacking in the 2020s | Ghostwriter (hacker group) | [
"Technology"
] | 414 | [
"Computer security stubs",
"Computing stubs"
] |
68,810,468 | https://en.wikipedia.org/wiki/Xenonectriella%20subimperspicua | Xenonectriella subimperspicua is a species of lichenicolous fungus in the family Nectriaceae. It has been recorded from South America, Europe, and New Zealand.
Taxonomy
The fungus was first formally described by Carlo Luigi Spegazzini in 1898 as a member of genus Nectria. Spegazzini collected the type specimen from South America, where it was growing on Punctelia constantimontium. In 1984, Rolf Santesson proposed to transfer the taxon to genus Nectriella. Rosalind Lowen transferred it and several other lichenicolous species to Pronectria in 1990. Finally, Javier Etayo transferred the species to the genus Xenonectriella in 2017, giving it the binomial name by which it is currently known.
Hosts
One of its hosts is the common foliose lichen species Punctelia borreri. Infection by X. subimperspicua creates discoloured or bleached areas on the thallus of the host; the perithecia of the fungus then become more readily visible. Two varieties of Xenonectriella subimperspicua have been defined: var. subimperspicua mostly parasitizes Parmelia and Punctelia but has also been recorded on Physcia, while var. degenerans parasitizes Parmotrema.
References
Nectriaceae
Fungi described in 1898
Fungi of Europe
Fungi of New Zealand
Fungi of South America
Lichenicolous fungi
Taxa named by Carlo Luigi Spegazzini
Fungus species | Xenonectriella subimperspicua | [
"Biology"
] | 323 | [
"Fungi",
"Fungus species"
] |
68,810,760 | https://en.wikipedia.org/wiki/Sistema%20700 | Sistema 700 was a personal professional microcomputer, introduced by the Brazilian computer company Prológica in 1981.
General information
The machine was based on the Intertec Superbrain and had similar characteristics: built around the 8-bit Zilog Z80A microprocessor running at 4 MHz, it had a 64 KiB RAM configuration and two 5-1/4" floppy disk drives with capacity for up to 320 KiB of storage.
Its operating system was DOS-700, a version adapted by Prologica's software engineering department from the CP/M-80.
It achieved relative commercial success in financial, database and engineering applications. Due to the compatibility with the popular CP/M system, various applications like Fortran ANS, BASIC compiler, COBOL ANSI 74 compiler, Algol, Pascal, PL/I, MUMPS/M, RPG, Faturol C could be used. Other applications like word processors (WordStar), spreadsheets (CalcStar) and databases (DataStar and dBase II) were also compatible. User applications could be programmed in BASIC, COBOL-80 and Fortran.
Models
Sistema 700 (1981 - vapourware)
Initial model announced in 1981, but never went into production.
Super Sistema 700 (1981)
Final version with graphite-colored cabinet and rounded contours.
Data Storage
Data storage was done on audio cassettes. Audio cables were supplied with the computer for connection to a regular tape recorder.
Accessories
P-720 Printer.
Bibliography
Micro Computador - Curso Básico. Rio de Janeiro: Rio Gráfica, 1984, v. 1, pp. 49–50.
References
Prológica computers
Computer-related introductions in 1981
Goods manufactured in Brazil
Personal computers | Sistema 700 | [
"Technology"
] | 363 | [
"Computing stubs",
"Computer hardware stubs"
] |
68,811,200 | https://en.wikipedia.org/wiki/NE-Z8000 | The NE-Z8000 is a Brazilian homebuilt computer clone of the Sinclair ZX81, introduced in late 1982 by Prológica's subsidiary, the monthly magazine Nova Eletrônica.
General Information
The NE-Z8000 computer is based around a Z80A CPU clocked at 3.6 MHz with 1 KB of RAM (expandable to 16 KB). The 8 KB ROM comes with a built-in Sinclair BASIC interpreter.
The machine has four plugs on the back (9V DC, EAR, MIC and TV), and an exposed part of the circuit board where you can connect extra equipment.
The video connector cable is about 120 cm long and connects the TV plug to a regular PAL-M television, outputting a monochrome image on VHF channel 2 or 3. The EAR and MIC plugs allow connecting a cassette tape recorder for data storage, supporting a rate of 300 baud.
It has no power switch; to turn it on, you simply plug it into its power supply, which provides the 9 V DC the machine requires.
The NE-Z8000 is considered rare; in 2013 it could fetch as much as R$1000 at auction.
Bibliography
Nova Eletrônica. São Paulo: Editele, 1982, Edição Nº 70, pp. 122.
References
Prológica
Computer-related introductions in 1982
Early microcomputers
Goods manufactured in Brazil
Sinclair ZX81 clones | NE-Z8000 | [
"Technology"
] | 304 | [
"Computing stubs",
"Computer hardware stubs"
] |
65,954,033 | https://en.wikipedia.org/wiki/The%20Blockhouse%20of%20Boston | The Blockhouse of Boston was a pioneering art and design cooperative of alumni from the Massachusetts College of Art in Boston, Massachusetts that opened its doors in 1947. Blockhouse artisans, primarily the then-recent art school graduate Janet Doub Erickson, designed and produced original textiles including draperies, wall hangings, table linens, costume treatments and other art. The co-op specialized in linoleum blockprints — also known as linocuts — and screen printing. Blockhouse was known for original use of New England themes and motifs intermingled with bold ethnic designs at times inspired by pre-Columbian art and sometimes with modernist motifs. As a journalist described some of Blockhouse principal designer Janet Doub Erickson's inspirations in a 1952 profile, "she goes to New Guinea for her motif, 'Checkerboard,' to China for her "Quan-Yin" design, to Guatemala for "Mayan Stele," and to a Northwest Indian reservation for "Totemotif."
Quite often, however, she just stayed home, looking for inspiration in the architecture and history of Boston and surrounding towns in New England.
Origins, organization, impact, and legacy
Origins
Founded in 1947 by twelve students and alumni of the Massachusetts College of Art, Blockhouse sought to “provide artists the opportunity to establish a dignified and mutually profitable relationship with the buying public.”
The founders described Blockhouse's mission as follows: "Blockhouse hand-printed fabrics are the product of a group of artists searching for a new and socially useful outlet for the expression of their talents. We hope that our designs conceived in freshness of vision and executed with technical skill, will contribute to and stimulate interest in contemporary design as it develops toward a universal idiom."
Organization
Originally located in a gallery at the Oceanside Hotel and Casino on Lexington Avenue in Magnolia, Massachusetts, and then on Cambridge Street in Boston, the cooperative moved to occupy a floor of 10 Arlington Avenue overlooking Boston Common as it became more successful.
In the beginning, the Blockhouse had two small apartments, one for male members and another for females, where the artists could live dormitory-style at virtually no cost and work in the studio on the premises. The original members paid five dollars each to self-fund the cooperative's initial expense renting a space.
As reported in the Boston Globe, "all chores were shared. No one drew a salary. To earn money a member had to design and print. When an article was sold 70 percent of the proceeds went to the designer, the rest to the Block-house fund. Prices were set low for handiwork - as little as $5 a yard for drape material - in order to reach as wide a market as possible."
Blockhouse artists were responsible for every step in production of their designs. This traditional handicraft method, while slowing and limiting production, assured them control to carry their ideas undistorted into the final pieces. In addition to acting as a center for artists, the Blockhouse also taught classes in silk screen and block printing, ceramics, sketching and painting in watercolor and oil.
Over time, the Blockhouse evolved away from its utopian beginnings to become a more commercially focused enterprise. Of the founders, only partners Janet Doub Erickson and Paul Coombs remained active until Blockhouse's closing in 1955 and Janet Doub Erickson's subsequent departure for Mexico to pursue other artistic projects.
Impact
In addition to its innovative designs, which repeatedly won its designers awards and national recognition, Blockhouse's significance was bolstered by its use of post-war marketing techniques to move artistically innovative work into the broader New England and national marketplace through the synthesis of traditional techniques, diverse designs, and modern guerrilla marketing tactics. From 1947 to 1955, when it closed its doors, the work of Blockhouse was featured in Life, Vogue, The New Yorker, The New York Times, Harper's Bazaar, The Christian Science Monitor, Women's Wear Daily, the Boston Globe and numerous other regional publications.
Designs from the Blockhouse collection were reproduced in commercial volumes by Wesley Simpson, Inc., Stoffel and Company, Strauss & Mueller, J.H. Thorp, Arundell Clarke, M. Lowenstein Sons, Century Sportswear and The Boka Company. Blockhouse textiles penetrated the larger culture through their popularity with commercial advertisers.
The Blockhouse also sought to penetrate the citadels of high culture. Blockhouse works were featured in exhibitions at Harvard's Fogg Museum, Institute of Contemporary Art, Boston, and the Boston Museum of Fine Arts. Blockhouse work also appeared at the Addison Gallery of American Art, the Wadsworth Atheneum, the Farnsworth Art Museum, the Dallas Museum of Fine Arts, and other galleries across the country.
The United States State Department included Blockhouse textiles in international exhibitions that toured in Europe and Israel during the nineteen-fifties.
Legacy
After Blockhouse disbanded members scattered about New England and other areas of the United States, producing art and teaching Blockhouse-style textile and artistic design through the country. Surviving Blockhouse textiles are mainly in the hands of private collectors and galleries.
Notable members
Blockhouse was founded and led by Paul Coombs and Janet Doub Erickson, both recent graduates of the Massachusetts College of Art. Coombs was a veteran of the two world wars who became interested in art while recovering in the hospital from an injury sustained in the Pacific Rim. Considerably older than his partners, he focused on the commercialization of Blockhouse designs and managed the business side of Blockhouse, although he also contributed original designs.
Other founding members included Elaine Biganess and David Berger.
Janet Doub Erickson was founding partner, chief designer, and head of production. She was credited with producing ninety percent of the Blockhouse's designs. Among honors, awards, and recognitions over her professional life, at Blockhouse she was the second young Boston artist chosen for recognition by the Institute of Contemporary Art and was profiled in a 1951 issue of Life. She would go on to author popular books on blockprinting, including Printmaking Without A Press (Reinhold 1966) and Block Printing on Textiles (Watson-Guptill 1961). She taught block printing in Massachusetts, Connecticut, New York, California, and elsewhere over her long career after Blockhouse. Her enthusiastic promotion of block printing was influential in its post-war artistic renaissance. Later in life she wrote on textile design and vernacular architecture and published another book of her line drawings of Boston during the Blockhouse period.
Eight other artists joined Blockhouse but were less active in design, production, and commercialization.
References
Mass Art
Visual arts education
Art movements
American printmakers
Graphic design
Design companies established in 1947
Design companies disestablished in 1955
Design companies of the United States
Design history
Designing Women
American graphic designers
Massachusetts College of Art and Design alumni
Textile design
American textile designers | The Blockhouse of Boston | [
"Engineering"
] | 1,405 | [
"Design history",
"Textile design",
"Design"
] |
65,955,525 | https://en.wikipedia.org/wiki/KjPn%208 | KjPn 8 is a bipolar planetary nebula which was discovered by M.A. Kazaryan and Eh. S. Parsamyan in 1971 and independently by Luboš Kohoutek in 1972.
Very little was published about this nebula until 1995, when it was realized that KjPn 8 sits in the center of a very large filamentary nebula, 14 by 4 arc minutes in size. This is the largest known bipolar structure associated with a planetary nebula. Narrow band images centered at Hα and forbidden line transitions of nitrogen, sulphur, and oxygen reveal pairs of bow shocks at differing position angles, indicating the presence of episodic ejection of material along a precessing jet, similar to what is seen in Fleming 1, but much larger (in angular extent).
The physical size of this extended nebula is approximately 4.1 by 1.2 parsecs, much larger than a typical planetary nebula, while the core nebula known prior to 1995 is only about 0.2 parsecs in diameter.
The envelope of KjPn 8 is expanding rapidly enough to allow the proper motion of features in the nebula to be measured. In 1997 John Meaburn compared images of the nebula taken in 1954 (as part of the Palomar Sky Survey) and 1991. He measured a proper motion of 34±3 milliarcseconds per year for two knots in the nebula. Combining this proper motion with an expansion velocity derived from spectral line profile widths allowed Meaburn to derive a distance to the nebula of 1600±230 parsecs, and a kinematic age of 3400±300 years.
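This expansion-parallax argument can be sketched as follows: a proper motion μ (in arcsec/yr) at distance d (in pc) corresponds to a transverse velocity of 4.74 μ d km/s, so equating that with the expansion velocity gives the distance. The expansion velocity used below is an assumed value chosen to be consistent with the quoted distance; the measured figure is not given in this article.

```python
# Kinematic distance estimate for an expanding nebula:
#   v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]  =>  d = v_t / (4.74 * mu)
mu_arcsec_per_yr = 0.034    # 34 milliarcseconds per year (Meaburn 1997)
v_expansion_km_s = 258.0    # ASSUMED expansion velocity, for illustration

distance_pc = v_expansion_km_s / (4.74 * mu_arcsec_per_yr)
# distance_pc comes out near 1600 pc, matching the published estimate.
```

The factor 4.74 converts 1 AU per year into km/s, which is why it appears whenever a proper motion in arcsec/yr is combined with a distance in parsecs.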
Microwave emission from carbon monoxide reveals the presence of a dense disk of molecular gas 30 arcseconds in diameter expanding at about 7 km/sec, with a mass ≥ 0.03 M⊙. The disk is aligned with the youngest and fastest bipolar jet, which has an expansion velocity of about 300 km/sec. The central star has begun to ionize the central region of this disk.
Hubble Space Telescope observations suggest that KjPn 8 might be a very rare object, formed by a binary system in which both stars had similar masses, which reached the end of the Asymptotic Giant Branch phase within 10 to 20 thousand years of each other, and entered the planetary nebula formation stage nearly simultaneously.
References
Planetary nebulae
Cassiopeia (constellation) | KjPn 8 | [
"Astronomy"
] | 496 | [
"Cassiopeia (constellation)",
"Constellations"
] |
65,955,532 | https://en.wikipedia.org/wiki/Alex%20James%20%28mathematician%29 | Alex James is a British and New Zealand applied mathematician and mathematical biologist whose research involves the mathematical modelling of wildlife behaviour, gender disparities in academia, and the epidemiology of COVID-19. She is a professor in the school of mathematics and statistics at the University of Canterbury in New Zealand.
Education and career
After studying mathematics at Newcastle University in England, James earned a master's degree at University College London, and completed a PhD at the University of Leeds, working there with John Brindley on combustion engineering and catalytic converters.
She became a lecturer at Sheffield Hallam University in 2001, and moved to the University of Canterbury in 2004.
Recognition
James was named a Fellow of the New Zealand Mathematical Society (NZMS) in 2015, and won the 2018 NZMS Research Award. She was on the team that won the Prime Minister's Science prize in 2020 and won the University of Canterbury Research medal jointly in 2021. She was awarded the NZIAM EO Tuck medal in 2024.
References
External links
Home page
Year of birth missing (living people)
Living people
British mathematicians
British biologists
21st-century British women mathematicians
British women biologists
New Zealand mathematicians
New Zealand biologists
21st-century New Zealand women scientists
Applied mathematicians
Theoretical biologists
Alumni of Newcastle University
Alumni of University College London
Alumni of the University of Leeds
Academics of Sheffield Hallam University
Academic staff of the University of Canterbury
New Zealand women mathematicians | Alex James (mathematician) | [
"Mathematics",
"Biology"
] | 286 | [
"Bioinformatics",
"Applied mathematics",
"Applied mathematicians",
"Theoretical biologists"
] |
65,956,019 | https://en.wikipedia.org/wiki/Agroecology%20and%20Sustainable%20Food%20Systems | Agroecology and Sustainable Food Systems is a peer-reviewed scientific journal covering sustainable agriculture. It was established in 1990 as the Journal of Sustainable Agriculture, obtaining its current title in 2013. It is published by Taylor & Francis and the editor-in-chief is Stephen R. Gliessman (University of California, Santa Cruz).
Abstracting and indexing
The journal is abstracted and indexed in the Science Citation Index Expanded and Scopus.
References
External links
Taylor & Francis academic journals
English-language journals
Academic journals established in 1990
Sustainable agriculture
Agricultural journals | Agroecology and Sustainable Food Systems | [
"Environmental_science"
] | 113 | [
"Environmental science journals",
"Environmental science journal stubs"
] |
65,960,521 | https://en.wikipedia.org/wiki/Eastern%20Micronesia%20tropical%20moist%20forests | The Eastern Micronesia tropical moist forests is a tropical and subtropical moist broadleaf forests ecoregion in Micronesia. It includes the Marshall Islands, Banaba and the Gilbert Islands in Kiribati, Nauru, and Wake Island, a possession of the United States.
Geography
The islands are mostly atolls, low islands of coralline sand ringing a central lagoon, or raised platforms of coralline limestone.
There are 30 atolls in the Marshall Islands, made up of more than 1000 islands. They form two parallel island chains that run southwest to northeast, extending 1,300 km from east to west and 1,150 km north to south. The western Ralik, or "sunset" chain extends from Eniwetok to Ebon, and the eastern Ratak or "sunrise" chain from Taongi and Rongelap to Mili.
The Gilbert Islands lie southeast of the Marshall Islands, and include 16 atolls and coral islands.
Nauru and Banaba are low islands composed of uplifted coralline limestone lying west of the Gilbert Islands. Nauru is an independent country, and Banaba is part of Kiribati.
Wake Island is an isolated atoll north of the Marshall Islands.
Climate
The climate of the islands is tropical, with little seasonal temperature variation.
The central part of the ecoregion lies in the trade wind belt, and receives the highest rainfall, up to 3000 mm annually. The northern islands, including the northern Marshalls and Wake island, are drier, as are the southern Gilberts. May through November are generally the wettest months. The Marshalls experience typhoons.
Flora
The predominant vegetation on wetter islands is tropical moist forest. Forests nearer the shore are of short to medium stature, with the trees Heliotropium foertherianum, Guettarda speciosa, Pisonia grandis, Pandanus tectorius, Allophylus timorensis, Cordia subcordata, Hernandia nymphaeifolia, and Thespesia populnea. In the island interiors, mature forests of Ochrosia oppositifolia can grow in pure stands on the more humid islands. Pisonia grandis also occurs in monospecific stands, and can grow up to 30 meters tall with trunks more than two meters in diameter. The mature Ochrosia and Pisonia forests form a dense canopy, and little grows in the shady understory.
On drier islands and in areas exposed to salt spray, vegetation includes low grasses and beach creepers, coastal scrub, and low-canopied mixed forests.
The flora is mostly of widespread coastal Indo-Pacific species, with relatively few endemic species.
Fauna
Native vertebrates are chiefly seabirds, which form large colonies on many islands. The Insular flying fox (Pteropus tonganus) and Pacific sheath-tailed bat (Emballonura semicaudata) are the only native non-marine mammals.
The Nauru reed warbler (Acrocephalus rehsei) is endemic to Nauru.
Protected areas
8.2% of the ecoregion is in protected areas.
References
External links
Eastern Micronesia tropical moist forests (DOPA)
Eastern Micronesia tropical moist forests (Encyclopedia of Earth)
Ecoregions of Kiribati
Biota of the Marshall Islands
Environment of the Marshall Islands
Geography of the Marshall Islands
Environment of Nauru
Geography of Nauru
Ecoregions of the United States
Oceanian ecoregions
Tropical and subtropical moist broadleaf forests | Eastern Micronesia tropical moist forests | [
"Biology"
] | 714 | [
"Biota by country",
"Biota of the Marshall Islands"
] |
65,960,900 | https://en.wikipedia.org/wiki/Vaccine%20Damage%20Payments%20Act%201979 | The Vaccine Damage Payments Act 1979 (c. 17) is an act of the Parliament of the United Kingdom that provides for compensation payments for injuries caused by vaccination.
It was introduced following concerns over the pertussis vaccine.
References
United Kingdom Acts of Parliament 1979
Health and safety in the United Kingdom
Safety codes
Occupational safety and health law
Vaccines
Public health in the United Kingdom
2017 in British law
Acts of the Parliament of the United Kingdom concerning healthcare | Vaccine Damage Payments Act 1979 | [
"Biology"
] | 90 | [
"Vaccination",
"Vaccines"
] |
65,961,732 | https://en.wikipedia.org/wiki/Substrate%20inhibition%20in%20bioreactors | Substrate inhibition in bioreactors occurs when the concentration of substrate (such as glucose, salts, or phenols) exceeds the optimal parameters and reduces the growth rate of the cells within the bioreactor. This is often confused with substrate limitation, which describes environments in which cell growth is limited by low substrate concentration. Substrate-limited conditions can be modeled with the Monod equation; however, the Monod equation is no longer suitable in substrate-inhibiting conditions. A deviation of the Monod equation, such as the Haldane (Andrews) equation, is more suitable for substrate-inhibiting conditions. These cell growth models are analogous to equations that describe enzyme kinetics, although, unlike enzyme kinetics parameters, cell growth parameters are generally estimated empirically.
General Principles
Cell growth in bioreactors depends on a wide range of environmental and physiological conditions, such as substrate concentration. With regard to bioreactor cell growth, substrate refers to the nutrients that the cells consume, contained within the bioreactor medium. Cell growth can be either substrate limited or substrate inhibited, depending on whether the substrate concentration is too low or too high, respectively. The Monod equation accurately describes limiting conditions, but substrate inhibition models are more complex.
Substrate inhibition occurs when the rate of microbial growth lessens due to a high concentration of substrate. Inhibition at higher substrate concentrations is usually caused by osmotic issues, viscosity, or inefficient oxygen transport. By slowly adding substrate into the medium, fed-batch bioreactor systems can help alleviate substrate inhibition. Substrate inhibition is also closely related to enzyme kinetics, which is commonly modeled by the Michaelis–Menten equation. If an enzyme that is part of a rate-limiting step of microbial growth is substrate inhibited, then cell growth will be inhibited in the same manner. However, the mechanisms are often more complex, and parameters for a model equation need to be estimated from experimental data. Additionally, information on inhibitory effects caused by mixtures of compounds is limited because most studies have been performed with single-substrate systems.
Types of Inhibition
Enzyme Kinetics Overview
One of the most well-known equations describing single-substrate enzyme kinetics is the Michaelis–Menten equation. This equation relates the initial rate of reaction to the concentration of substrate present, and deviations from the model can be used to predict competitive inhibition and non-competitive inhibition. The model takes the form of the following equation:
$v_0 = \dfrac{V_{max}[S]}{K_m + [S]}$ (Michaelis–Menten equation)
Where
$K_m$ is the Michaelis constant
$v_0$ is the initial reaction rate
$V_{max}$ is the maximum reaction rate
If the inhibitor is different from the substrate, then competitive inhibition will increase Km while Vmax remains the same, and non-competitive inhibition will decrease Vmax while Km remains the same. However, under substrate-inhibiting effects, where two molecules of the same substrate bind to the active and inhibitory sites, the reaction rate will reach a peak value before decreasing. The reaction rate will either decrease to zero under complete inhibition, or it will decrease to a non-zero asymptote under partial inhibition. This can be described by the Haldane (or Andrews) equation, which is a common deviation of the Michaelis–Menten equation, and takes the following form:
$v_0 = \dfrac{V_{max}[S]}{K_m + [S] + [S]^2/K_I}$ (Haldane equation for single-substrate inhibition of enzymatic reaction rate)
Where
$K_I$ is the inhibition constant
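As a concrete illustration, the two rate laws can be compared numerically. The following Python sketch (all parameter values are arbitrary and purely illustrative) shows that the Michaelis–Menten rate rises monotonically toward $V_{max}$, while the Haldane rate peaks near $[S] = \sqrt{K_m K_I}$ and then declines:

```python
import numpy as np

def michaelis_menten(s, vmax, km):
    """Initial reaction rate v0 as a function of substrate concentration [S]."""
    return vmax * s / (km + s)

def haldane(s, vmax, km, ki):
    """Haldane rate law: substrate-inhibited deviation of Michaelis-Menten."""
    return vmax * s / (km + s + s**2 / ki)

# Illustrative (hypothetical) kinetic parameters
vmax, km, ki = 1.0, 0.5, 2.0
s = np.linspace(0.01, 20, 2000)

v_mm = michaelis_menten(s, vmax, km)
v_h = haldane(s, vmax, km, ki)

# Michaelis-Menten increases monotonically toward vmax; the Haldane rate
# peaks at S = sqrt(Km * KI) and then decreases at high substrate levels.
s_peak = s[np.argmax(v_h)]
print(f"Haldane rate peaks near S = {s_peak:.2f} (theory: {np.sqrt(km * ki):.2f})")
```

Setting $dv_0/d[S] = 0$ in the Haldane expression gives the peak location $[S] = \sqrt{K_m K_I}$, which the numerical maximum reproduces.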
Cell Growth in Bioreactors
Bioreactor cell growth kinetics are analogous to the equations presented in enzyme kinetics. Under non-inhibiting single-substrate conditions, the specific growth rate of biomass can be modeled by the well-known Monod equation. The Monod equation models the growth of organisms under substrate-limiting conditions, and its parameters are determined through experimental observation. The Monod equation is based on a single substrate-consuming enzyme system that follows the Michaelis–Menten equation. The Monod equation takes the following familiar form:
$\mu = \dfrac{\mu_{max} S}{K_s + S}$ (Monod equation)
Where:
$K_s$ is the saturation constant
$\mu$ is the specific growth rate
$\mu_{max}$ is the maximum specific growth rate
Under single-substrate inhibiting conditions, the Monod equation is no longer suitable, and the most common Monod derivative is once again in the form of the Haldane equation. As in enzyme kinetics, the growth rate will initially increase as substrate is increased before reaching a peak and decreasing at high substrate concentrations. Reasons for substrate inhibition in bioreactor cell growth include osmotic issues, viscosity, and inefficient oxygen transport due to overly concentrated substrate in the bioreactor medium. Substrates that are known to cause inhibition include glucose, NaCl, and phenols, among others. Substrate inhibition is also a concern in wastewater treatment, where among the most studied biodegradation substrates are the toxic phenols. Due to their toxicity, there is considerable interest in the bioremediation of phenols, and it is well known that phenol inhibition can be modeled by the following Haldane equation:
$\mu = \dfrac{\mu_{max} S}{K_s + S + S^2/K_I}$ (Haldane equation for single-substrate inhibition of cell growth)
Where:
$K_I$ is the inhibition constant
Several equations have been developed to describe substrate inhibition. The two equations listed below are referred to as the non-competitive and competitive substrate inhibition models, respectively, by Shuler and Kargi in Bioprocess Engineering: Basic Concepts. Note that the Haldane equation above is a special case of the following non-competitive substrate inhibition model, where $K_I \gg K_s$.
$\mu = \dfrac{\mu_{max} S}{(K_s + S)(1 + S/K_I)}$ (non-competitive single-substrate inhibition)
$\mu = \dfrac{\mu_{max} S}{K_s(1 + S/K_I) + S}$ (competitive single-substrate inhibition)
These equations also have enzymatic counterparts, where the equations commonly describe the interactions between substrate and inhibitors at the active and inhibitory sites. The concept of competitive and non-competitive substrate inhibition is more well defined in enzyme kinetics, but these analogous equations also apply to cell growth models.
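The limiting behavior can be checked numerically: expanding the non-competitive denominator gives $K_s + S + S^2/K_I + K_s S/K_I$, and when $K_I \gg K_s$ the cross term $K_s S/K_I$ becomes negligible, recovering the Haldane form. The following Python sketch uses illustrative parameter values (not data from any particular organism) to compare the models:

```python
def mu_noncompetitive(s, mu_max, ks, ki):
    # Non-competitive substrate inhibition, written with S in the numerator
    return mu_max * s / ((ks + s) * (1 + s / ki))

def mu_competitive(s, mu_max, ks, ki):
    # Competitive substrate inhibition
    return mu_max * s / (ks * (1 + s / ki) + s)

def mu_haldane(s, mu_max, ks, ki):
    # Haldane form (limiting case of the non-competitive model)
    return mu_max * s / (ks + s + s**2 / ki)

# Hypothetical parameters chosen so that KI >> Ks
mu_max, ks, ki = 0.8, 0.05, 50.0
for s in (0.1, 1.0, 10.0):
    nc = mu_noncompetitive(s, mu_max, ks, ki)
    h = mu_haldane(s, mu_max, ks, ki)
    print(f"S={s:5.1f}  non-competitive={nc:.4f}  Haldane={h:.4f}")
```

With these values the two models agree to within a fraction of a percent across the substrate range, confirming the special-case relationship stated above.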
Overcoming Substrate Inhibition in Bioreactors
Substrate inhibition can be characterized by a high substrate concentration and a decreased growth rate, resulting in decreased bioreactor outputs. The most common solution is to change the growth from a batch process to a fed-batch process. Other methods to overcome substrate inhibition include adding another substrate type to develop alternative metabolic pathways, immobilizing the cells, or increasing the biomass concentration.
Utilizing Fed-Batch
A fed-batch process is the most common way to decrease the effects of substrate inhibition. Fed-batch processes are characterized by the continuous addition of bioreactor media (which includes the substrate) into the inoculum (cellular solution). The addition of media will increase the overall volume within the reactor along with substrate and other growth materials. A fed-batch process will also have an output flow rate of the substrate/cell/product mixture which can be collected to retrieve the desired product. Fed-batch is a good way to overcome substrate inhibition because the amount of substrate can be changed at various points in the growth process. This allows for the bioreactor technician to provide the cells with the amount of substrate they need rather than providing them too much or too little.
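The fed-batch idea can be sketched with a simple mass balance: substrate enters with the feed, is diluted into a growing volume, and is consumed by the biomass. The following Python snippet is a minimal forward-Euler illustration with hypothetical parameters and simple Monod growth kinetics; it is a sketch, not a validated process model:

```python
# Minimal Euler-step sketch of a fed-batch mass balance (hypothetical values).
mu_max, ks = 0.4, 0.1              # 1/h, g/L (Monod parameters)
y_xs = 0.5                         # g biomass formed per g substrate consumed
feed_rate, s_feed = 0.05, 100.0    # feed flow (L/h) and its substrate level (g/L)
v, x, s = 1.0, 0.1, 0.5            # initial volume (L), biomass, substrate (g/L)

dt = 0.01
for _ in range(int(24 / dt)):      # simulate 24 hours
    mu = mu_max * s / (ks + s)     # Monod growth on the current substrate level
    d = feed_rate / v              # dilution rate F/V
    x += dt * (mu * x - d * x)               # biomass: growth minus dilution
    s += dt * (d * (s_feed - s) - mu * x / y_xs)  # substrate: feed minus uptake
    s = max(s, 0.0)                # crude clamp against Euler overshoot
    v += dt * feed_rate            # volume grows with the feed

print(f"After 24 h: V={v:.2f} L, X={x:.2f} g/L, S={s:.3f} g/L")
```

Because substrate is metered in slowly, the residual substrate concentration settles near the low, growth-limiting range rather than accumulating to inhibitory levels, which is the point of the fed-batch strategy.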
Other methods
Other methods to overcome substrate inhibition include the use of Two Phase Partitioning Bioreactors, the immobilization of cells, and increasing the biomass concentration in the bioreactor.
Two Phase Partitioning Bioreactors are able to reduce the aqueous-phase substrate concentration by storing substrate in an alternative phase, from which it can be re-released to the biomass based on metabolic demand. The cell immobilization method works by encapsulating the cells in a material that makes the removal of inhibitory compounds easier, reducing inhibition by creating a matrix around the cells that can act as a protective barrier against the inhibitory effects of toxic materials. The method of increasing cell concentration is done by supporting the cellular material on a scaffold to create a biofilm. Biofilms allow for extremely high cell concentrations while preventing the overgrowth of inhibitory substrates.
Impact On Product Production
The impact on product production depends on how the product is created. Substrate inhibition will affect products produced by enzymatic reactions differently than growth-associated product formation. Substrate inhibition of enzymatic product production will inhibit the enzyme's activity, which will lower the reaction rate and reduce the rate of product formation. However, if a product is being produced by cells, then substrate inhibition will reduce product formation by limiting the growth of cells.
Growth Associated Products
There are multiple relationships that may exist between the rate of product formation, the specific rate of substrate consumption, and the specific growth rate. The following equations demonstrate the relationship between cell growth and product production for growth-associated production. The parameters $q_p$ (specific rate of product formation) and $\mu$ (specific growth rate) are defined below.
$q_p = \dfrac{1}{X}\dfrac{dP}{dt}$ (specific rate of product formation)
$\mu = \dfrac{1}{X}\dfrac{dX}{dt}$ (specific growth rate)
Where $X$ is the cell concentration, and $P$ is the product concentration.
The product formation and cell growth are both directly linked to the amount of substrate consumed through the yield coefficients $Y_{P/S}$ and $Y_{X/S}$, respectively. These coefficients can be combined to define a yield coefficient, $Y_{P/X} = Y_{P/S}/Y_{X/S}$, that relates the product production to cell growth.
This yield coefficient can be further used to directly relate the rate of change of product to the rate of change of cell growth:
$\dfrac{dP}{dt} = Y_{P/X}\dfrac{dX}{dt}$
Rearranging this equation gives the following relationship between the specific rate of product formation and the specific growth rate of the cells for growth-associated products:
$q_p = Y_{P/X}\,\mu$
The above relationships demonstrate that for a growth-associated product, the specific growth rate is directly proportional to the specific rate of product formation. Furthermore, substrate inhibition limits the specific growth rate, which reduces the final biomass concentration. Increasing the substrate concentration may also increase the viscosity of the media, lower the rate of oxygen diffusion, and affect the osmolarity of the system. These effects can be detrimental to cell growth and, by extension, the yield of product.
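The yield-coefficient chain can be checked on a toy trajectory: if product is formed strictly in proportion to new biomass (growth-associated, so each increment of biomass brings $Y_{P/X}$ times as much product), then the final product equals $Y_{P/X}$ times the biomass formed, and the specific rates satisfy the proportionality derived above. A small Python sketch with hypothetical yield values:

```python
# Hypothetical yield coefficients for a growth-associated product.
y_xs = 0.5            # g biomass per g substrate
y_ps = 0.2            # g product per g substrate
y_px = y_ps / y_xs    # g product per g biomass (combined yield coefficient)

# Simulate exponential growth with product formed in proportion to growth.
mu = 0.3              # 1/h, assumed constant over the interval
dt, x, p = 0.01, 1.0, 0.0
for _ in range(1000):          # 10 hours of growth
    dx = mu * x * dt
    x += dx
    p += y_px * dx             # growth-associated: dP equals Y_P/X times dX

q_p = y_px * mu                # specific rate of product formation
print(f"Y_P/X = {y_px}, q_p = {q_p:.3f} 1/h, final X = {x:.2f}, P = {p:.2f}")
```

By construction the accumulated product is exactly $Y_{P/X}$ times the biomass formed, so the simulation reproduces the rearranged relation between the two specific rates.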
References
Bioreactors | Substrate inhibition in bioreactors | [
"Chemistry",
"Engineering",
"Biology"
] | 1,963 | [
"Bioreactors",
"Biological engineering",
"Chemical reactors",
"Biochemical engineering",
"Microbiology equipment"
] |
65,961,765 | https://en.wikipedia.org/wiki/Mikhail%20Kats | Mikhail A. Kats (born 1986) is an American optics/photonics researcher and applied physicist. He is Jack St. Clair Kilby associate professor of electrical and computer engineering at the University of Wisconsin–Madison. During his studies at Harvard University as a graduate student, Kats developed new nanophotonic and plasmonic technologies.
Early life and education
Kats was born in Saint Petersburg, Russia, but his family moved to Kansas when he was a child. He grew up in Overland Park, Kansas where he attended Harmony Middle School and Blue Valley Northwest High School with Arash Ferdowsi. Kats earned his Bachelor of Science degree in engineering physics in 2008 from Cornell University before enrolling at Harvard University for his graduate degrees. He had originally planned to study computer science at Cornell but was drawn to applied physics, particularly optics and photonics. Kats also pursued undergraduate research in Cornell’s Semiconductor Optoelectronics Group led by Farhan Rana.
While completing his master's degree and PhD, Kats developed new nanophotonic and plasmonic technologies under the direction of Federico Capasso. Together with Nanfang Yu, Patrice Genevet, Zeno Gaburro, and several other members of Capasso's research group, Kats developed optical metasurfaces based on resonant plasmonic antennas that resulted in the generalization of the laws of reflection and refraction and the development of flat lenses with thicknesses of tens of nanometers. Kats and colleagues also demonstrated that absorbing films tens of atoms thick can display thin-film interference effects. In the same year, Kats also co-invented a device which reacts to temperature changes by reflecting dramatically more or less infrared light, making it well suited for use in a range of infrared optical devices.
Career
After spending one year as a post-doctoral fellow at Harvard, Kats started a faculty position at the University of Wisconsin–Madison (UW–Madison). When reflecting upon his choice, Kats said he was drawn to the university because of its "high level of research in electrical engineering and physics, its ample supply of excellent students to aid him in his research and its wide range of research topics and interdisciplinary research opportunities." Initially, Kats continued the research he began at Harvard at UW–Madison and led an international team of researchers to develop a way to precisely engineer the temperatures at which vanadium dioxide would undergo its phase transition. In recognition of his accomplishments studying the development of nanoscopic optical devices, Forbes magazine listed him amongst their ’30 under 30′ in 2016.
As an assistant professor of electrical and computer engineering, Kats received funding from the U.S. Office of Naval Research Young Investigator Program to "develop solutions to seemingly intractable modern problems." With this support, he began studying materials that can rapidly switch from transparent to opaque in order to develop optical diodes. Kats also co-designed multiband filters for eyeglasses to trick the eye into effectively having another type of cone cell in an effort to see more distinct colours. The following year, Kats received a CAREER award from the National Science Foundation for his efforts to tweak how substances emit light as they change temperatures. He was also named by Clarivate as a 2018 highly cited researcher and identified by the American Society for Engineering Education as a "highly promising investigator under the age of 40." In 2019, Kats received tenure from the university and earned the Vilas Faculty Early Career Investigator Award. In his research laboratory, Kats developed a solid object that for the first time decouples temperature and thermal light emission over a certain temperature range.
During the COVID-19 pandemic in North America, Kats was the recipient of the 2020 Institute of Electrical and Electronics Engineers (IEEE) Nanotechnology Council Early Career Award in Nanotechnology. He received the award for "his investigation of fundamental problems in optics and photonics with the aim of creating next-gen optical components that emit, modulate, and detect light across the visible and infrared spectrum." Alongside his research associate Yuzhe Xiao, Kats developed depth thermography, a technique to remotely determine the temperature beneath the surface of certain materials.
References
External links
Living people
Scientists from Saint Petersburg
University of Wisconsin–Madison faculty
Cornell University alumni
Harvard University alumni
Russian emigrants to the United States
21st-century American physicists
1986 births
Optical physicists
Optical engineers
Metamaterials scientists | Mikhail Kats | [
"Materials_science"
] | 914 | [
"Metamaterials scientists",
"Metamaterials"
] |
65,962,600 | https://en.wikipedia.org/wiki/Palau%20tropical%20moist%20forests | The Palau tropical moist forests is a tropical and subtropical moist broadleaf forests ecoregion in Micronesia. It encompasses the nation of Palau.
Geography
The Palau Islands are an archipelago approximately 200 km in length, located 800 km north of the equator and 800 km east of the Philippines. Kayangel is the northernmost island in the archipelago, and the island of Angaur is the southernmost. Babeldaob, or Babelthaup, is the largest island (376 km2) in the archipelago, and is 70% of the ecoregion's land area. Much of the archipelago is enclosed in a barrier reef.
Some islands, including Babeldaob and Koror, have a core of weathered volcanic rock. Much of the archipelago is composed of uplifted marine limestone, which has eroded into dramatic karstic landscapes.
The ecoregion also includes some outlying atolls stretching southwest of the Palau Islands, including The Sonsorol islands (Fanna, Sonsorol, Pulo Anna, and Merir), Tobi, and Helen.
Climate
The ecoregion has a humid tropical climate. The mean annual temperature in the capital city of Koror is 27 °C, and the mean annual rainfall averages 3,730 mm. Rainfall is plentiful year-round, with more during the May through November summer rainy season, and less between February and April.
Flora
The natural vegetation consists mostly of tropical moist broadleaf forests. The forests include eight main types – upland forest on the high volcanic islands, swamp forest, mangrove forest, atoll forest, casuarina forest, limestone forest, plantation forest, and palm forest.
The forests on much of Babeldaob have been cleared and replaced with grassland.
Fauna
The ecoregion has 13 endemic species of birds – the Palau ground dove (Alopecoenas canifrons), Palau fruit dove (Ptilinopus pelewensis), Palau nightjar (Caprimulgus phalaena), Palau swiftlet (Aerodramus pelewensis), Palau scops owl (Otus podarginis), Palau kingfisher (Todiramphus pelewensis), morningbird (Pachycephala tenebrosa), Palau cicadabird (Edolisoma monacha), Palau fantail (Rhipidura lepida), Palau flycatcher (Myiagra erythrops), Palau bush warbler (Horornis annae), giant white-eye (Megazosterops palauensis), and dusky white-eye (Zosterops finschii).
The Palau scrubfowl (Megapodius laperouse senex) is an endemic subspecies of Micronesian scrubfowl, which also inhabits the Marianas.
Protected areas
24.7% of the ecoregion is in protected areas.
References
External links
Palau tropical moist forests (DOPA)
Palau tropical moist forests (Encyclopedia of Earth)
Biota of Palau
Geography of Palau
Oceanian ecoregions
Tropical and subtropical moist broadleaf forests
Endemic Bird Areas | Palau tropical moist forests | [
"Biology"
] | 644 | [
"Biota by country",
"Biota of Palau"
] |
65,964,227 | https://en.wikipedia.org/wiki/Mercury%20oxycyanide | Mercury oxycyanide is a chemical compound, an organomercury derivative. It is both explosive and highly toxic, producing symptoms of both mercury and cyanide poisoning following exposure.
See also
Cacodyl cyanide
Mercury(II) cyanide
References
Organomercury compounds
Nitriles | Mercury oxycyanide | [
"Chemistry"
] | 66 | [
"Nitriles",
"Functional groups"
] |
62,187,512 | https://en.wikipedia.org/wiki/AirPods%20Pro | AirPods Pro are wireless Bluetooth in-ear headphones designed by Apple, initially introduced on October 30, 2019. They are Apple's mid-range wireless headphones, available alongside the base-level AirPods and the highest-end AirPods Max.
The first-generation AirPods Pro use the H1 chip, also found in the second-generation base-level AirPods. Notable additions include active noise cancellation, transparency mode, automated frequency profile adjustment, IPX4 water resistance, a charging case supporting wireless charging, and interchangeable silicone ear tips.
In September 2022, Apple announced the second-generation AirPods Pro. The newer iteration incorporates the H2 chip, Bluetooth 5.3 connectivity, improved sound quality and noise cancellation, extended battery life, volume-adjustment gestures, support for Find My tracking, compatibility with Apple Watch chargers, and extra-small ear tips. A revision in 2023 added IP54 dust resistance, support for lossless audio in conjunction with the Apple Vision Pro, and a USB-C charging case.
Models
First generation
Apple announced AirPods Pro on October 28, 2019, and released them two days later on October 30, 2019. They include features of standard AirPods, such as a microphone. They also have active noise cancellation to reduce exterior background noise, accelerometers and optical sensors that can detect presses on the stem and in-ear placement, and automatic pausing when they are taken out of the ears. Control by tapping is replaced by pressing a force sensor on the stems. They are rated IPX4 for water resistance.
The AirPods Pro use the H1 chip also found in the second and third-generation AirPods, that supports hands-free "Hey Siri". They have active noise cancellation, accomplished by microphones detecting outside sound and speakers producing precisely opposite "anti-noise". Active noise cancellation can be turned off or switched to "transparency mode" that helps users hear surroundings. Noise cancellation modes can also be switched in iOS or by pinching the stems of the AirPods using the force sensor.
The H1 chip is embedded in a unique system in a package (SiP) module enclosing several other components, such as the audio processor and accelerometers.
Battery life is rated to be equal to the second-generation AirPods at five hours, but noise cancellation or transparency mode reduce it to 4.5 hours due to the extra processing. The charging case advertises the same 24 hours of total listening time as the standard AirPods case. It also features Qi standard wireless charging compatibility. In October 2021, Apple updated the bundled charging case with MagSafe. Like AirPods, AirPods Pro have received criticism for their battery life.
AirPods Pro come with three sizes of silicone tips, including the attached medium set. There is a software test in iOS called the Ear Tip Fit Test that "checks the fit of your AirPods ear tips to determine which size provides the best seal and acoustic performance" to ensure a correct fit, as well as a feature called "Adaptive EQ" which automatically adjusts the frequency contour, claimed to better match the wearer's ear shape. Starting in early 2020, Apple started selling tip replacements for AirPods Pro on their website.
With iOS 14 and iPadOS 14, Apple added a spatial audio mode designed to simulate 5.1 surround sound. Supported apps include the Apple TV app, Disney+, HBO Max and Netflix. Spatial audio requires an iPhone, iPad or Apple TV with an Apple A10 processor or newer.
iOS 14 also added the ability to apply headphone accommodations to transparency mode, allowing the AirPods Pro to act as rudimentary hearing aids. In October 2021, a new Conversation Boost mode was added as a customization of the regular Transparency mode. It boosts voices above background noise and music.
Second generation
The second-generation AirPods Pro were announced at an Apple media event on September 7, 2022, and were released on September 23, 2022. They use an updated H2 chip with Bluetooth 5.3 connectivity, and feature improved sound quality and noise cancellation and longer battery life. They also include extra-small ear tips and support swiping up and down on the stem to adjust volume. Ear tips are physically compatible with the first-generation AirPods Pro, as they use the same connector, but Apple notes the second-generation ear tips use a less dense mesh and recommends against intermixing them for acoustical consistency.
The charging case includes the Apple U1 chip that supports Find My tracking, and includes a speaker for locating and status updates. In addition to Lightning, Qi and MagSafe chargers, it is also compatible with Apple Watch chargers. A lanyard loop was also added to the side of the case.
In September 2023, Apple updated the second-generation AirPods Pro with improved IP54 dust resistance, an updated H2 chip that supports the 5 GHz band for lossless audio with the Apple Vision Pro, and a charging case with a USB-C port instead of a Lightning port.
iOS 17 added "Adaptive Audio," which dynamically blends Transparency and Active Noise Cancellation to tailor the noise control experience as a user moves between changing environments; "Press" to answer/mute/end a call; "Personalized Volume," which uses machine learning to adjust volume based on user preferences over time and surroundings; and "Conversation Awareness," which automatically lowers the volume if the user starts talking to someone nearby. In iOS 18, a user can nod or shake their head to respond when talking to Siri, and Voice Isolation during phone calls was also introduced. In September 2024, the United States Food and Drug Administration authorized the use of hearing aid software by Apple in the AirPods Pro.
Compatibility
Support for AirPods Pro was added in iOS 13.2, watchOS 6.1, tvOS 13.2, and macOS Catalina 10.15.1. They are compatible with any device that supports Bluetooth, including Windows and Android devices, although certain features such as automatic switching between devices and single-AirPod listening are only available on Apple devices using its iCloud service.
See also
Apple headphones
EarPods
AirPods
AirPods Max
Google Pixel Buds
Samsung Galaxy Buds
References
External links
– official site
Apple Inc. peripherals
iPhone accessories
Headphones
Products introduced in 2019 | AirPods Pro | [
"Technology"
] | 1,290 | [
"IPhone accessories",
"Components"
] |
62,187,817 | https://en.wikipedia.org/wiki/Schw%C3%A4bisch%20Gm%C3%BCnd%20Prize | The Schwäbisch Gmünd Prize for Young Scientists is an annual award given by the European Academy of Surface Technology (EAST) to an early career researcher active in Europe on the grounds of originality, creativity and excellence in surface technology. The prize aims to promote science, research and education in the field of surface technology as part of EAST efforts to promote friendship and integration within the European scientific and technological community.
The award is named in honor of the town of Schwäbisch Gmünd and its long tradition of craftsmanship of precious metals. The town has also been the location of EAST headquarters since its foundation in 1989. The prize is presented in a public lecture during an event sponsored by or co-organised by EAST, such as electrochemistry, corrosion or surface finishing related conferences.
Recipients of the Schwäbisch Gmünd Prize
To date, five early career researchers have received the prize:
2017 - J. Zhang
2018 - N. T. Nguyen
2019 - L. F. Arenas
2020 - M. Leimbach
2021 - K. Eiler
See also
List of engineering awards
Electroplating
References
External links
The Schwäbisch Gmünd Prize for Young Scientists
The European Academy of Surface Technology
The Research Institute for Precious Metals and Metal Chemistry
The International Union for Surface Finishing
European science and technology awards
Early career awards
Research awards
Awards established in 2017
2017 establishments in Europe | Schwäbisch Gmünd Prize | [
"Technology"
] | 278 | [
"Science and technology awards",
"Research awards"
] |
62,188,806 | https://en.wikipedia.org/wiki/GAMA%20Platform | GAMA (GIS Agent-based Modeling Architecture) is a simulation platform with a complete modelling and simulation integrated development environment (IDE) for writing and experimenting spatially explicit agent-based models.
About
The GAMA Platform is agent-based modeling software that was originally (2007–2010) developed by the Vietnamese-French research team MSI (located at IFI, Hanoi, and part of the IRD–SU International Research Unit UMMISCO). It is now developed by an international consortium of academic and industrial partners led by UMMISCO, including INRAE, the University of Toulouse 1, the University of Rouen, the University of Orsay, the University of Can Tho (Vietnam), the National University of Hanoi, EDF R&D, CEA LISC, and the MIT Media Lab.
GAMA was designed to allow domain experts without a programming background to model phenomena from their field of expertise.
The GAMA environment enables exploration of emergent phenomena. It comes with a models library including examples from several domains, such as economics, biology, physics, chemistry, psychology, and system dynamics.
The GAMA simulation panel allows exploration by modifying switches, sliders, choosers, inputs, and other user interface elements that the modeler chooses to make available.
Technical foundation
GAMA Platform is free and open-source software, released under a GNU General Public License (GPL3). It is written in Java and runs on the Java virtual machine (JVM). All core components and extensions are written in Java, but end users do not need to work in Java at all if they use a published build of the platform; instead, they would write all models using GAML (described below).
Multiple application domains
GAMA was developed with a very general approach and can be used for many application domains. It is mostly applied in domains such as transport, urban planning, disaster response, epidemiology, analysis of multirobot systems, and the environment, with special emphasis on analyses that use GIS data.
High-level Agent-based language
GAML (GAma Modeling Language) is the dedicated language used in GAMA. It is an agent-based language that allows models to be built with several modeling paradigms.
This high-level language was inspired by Smalltalk and Java, and GAMA has been designed to be usable by non-computer scientists.
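As an illustrative sketch of GAML's declarative, agent-based style (a hypothetical minimal model, not taken from the official library):

```gaml
model HelloAgents

global {
    int nb_people <- 10;
    init {
        create people number: nb_people;
    }
}

species people skills: [moving] {
    // each simulation step, move randomly using the built-in moving skill
    reflex wander {
        do wander;
    }
    aspect default {
        draw circle(1) color: #blue;
    }
}

experiment main type: gui {
    output {
        display map {
            species people aspect: default;
        }
    }
}
```

A model declares a `global` block for shared state and initialization, one or more `species` whose `reflex` statements define agent behavior, and an `experiment` that exposes the simulation and its displays in the IDE.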
User interface
Modelers may use many visual representations for the same model, in order to highlight a certain aspect of a simulation. These include 2D/3D displays, with basic control of lighting, textures, and cameras. Standard charts such as series plots may also be constructed.
Project examples
The developers maintain a community-sourced list of scientific projects that use GAMA.
Some of the larger efforts include:
Hoan Kiem Air: Agent-based modeling and simulation of urban traffic management and air pollution through a tangible interface.
Proxymix: A visualization tool for exploring the influence of spatial configuration on human collaboration.
CityScope Champs-Elysées: An interactive platform to improve decision-making related to the revitalization of the Champs Élysées.
ESCAPE: A Multi-modal Urban Traffic Agent-Based Framework to Study Individual Response to Catastrophic Events.
COMOKIT: Generic model of public policies to contain the spread of COVID-19 epidemics in a city, validated on the basis of different case studies.
Users
Several academic institutions teach modeling and simulation courses based on GAMA. It is taught in the Urban Simulation class at the Potsdam University of Applied Sciences, and at the University of Salzburg. It is also used and taught annually at the Multi-platform International Summer School on Agent-Based Modelling & Simulation.
See also
Agent-based model
Comparison of agent-based modeling software
NetLogo
Repast (modeling toolkit)
MASON (Java)
References
Agent-based model
Pedagogic integrated development environments
Simulation programming languages
Agent-based programming languages
Java platform
Simulation software