=== Knowledge distillation === Knowledge distillation, or model distillation, is the process of transferring knowledge from a large model to a smaller one. The idea of using the output of one neural network to train another was first studied as the teacher-student network configuration. In 1992, several papers studied the statistical mechanics of the teacher-student configuration, where both networks are committee machines or both are parity machines. Another early example of network distillation was also published in 1992, in the field of recurrent neural networks (RNNs). The problem was sequence prediction, and it was solved by two RNNs: one (the "automatizer") predicted the sequence, and the other (the "chunker") predicted the automatizer's errors. Simultaneously, the automatizer learned to predict the internal states of the chunker. Once the automatizer could predict the chunker's internal states well, it began correcting the errors itself, eventually making the chunker redundant and leaving a single RNN. A related methodology was model compression or pruning, in which a trained network is reduced in size. It was inspired by neurobiological studies showing that the human brain is resistant to damage, and was studied in the 1980s via methods such as Biased Weight Decay and Optimal Brain Damage. == Hardware-based designs == The development of metal–oxide–semiconductor (MOS) very-large-scale integration (VLSI), combining millions or billions of MOS transistors onto a single chip in the form of complementary MOS (CMOS) technology, enabled the development of practical artificial neural networks in the 1980s. Computational devices were created in CMOS for both biophysical simulation and neuromorphic computing inspired by the structure and function of the human brain.
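The teacher-student idea behind knowledge distillation can be illustrated with a minimal loss computation: the student is trained to match the teacher's temperature-softened output distribution. This is a generic sketch, not the method of any particular 1992 paper; the temperature value and logits below are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    # Convert logits to a probability distribution, softened by the temperature.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    p = softmax(teacher_logits, temperature)  # teacher "soft targets"
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student matching the teacher exactly incurs zero loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # → 0.0
```

Minimizing this loss over the student's parameters transfers the teacher's behaviour, including the relative probabilities it assigns to wrong answers.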
Nanodevices for very large scale principal components analyses and convolution may create a new class of neural computing because they are fundamentally analog rather than digital (even though the first implementations may use digital devices). ==
Notes == == References == == External links == "Lecun 2019-7-11 ACM Tech Talk". Google Docs. Retrieved 2020-02-13.
|
{
"page_id": 61541925,
"source": null,
"title": "History of artificial neural networks"
}
|
The following is a partial list of the "G" codes for Medical Subject Headings (MeSH), as defined by the United States National Library of Medicine (NLM). This list continues the information at List of MeSH codes (G04). Codes following these are found at List of MeSH codes (G06). For other MeSH codes, see List of MeSH codes. The source for this content is the set of 2006 MeSH Trees from the NLM. == MeSH G05 – genetic processes == === MeSH G05.090 – breeding === MeSH G05.090.390 – hybridization, genetic MeSH G05.090.403 – inbreeding MeSH G05.090.403.180 – consanguinity === MeSH G05.105 – cell division === MeSH G05.105.220 – cell nucleus division MeSH G05.105.220.500 – anaphase MeSH G05.105.220.625 – chromosome segregation MeSH G05.105.220.625.620 – nondisjunction, genetic MeSH G05.105.220.687 – meiosis MeSH G05.105.220.687.500 – meiotic prophase i MeSH G05.105.220.687.500.299 – chromosome pairing MeSH G05.105.220.687.500.299.500 – synaptonemal complex MeSH G05.105.220.687.500.600 – pachytene stage MeSH G05.105.220.750 – metaphase MeSH G05.105.220.781 – mitosis MeSH G05.105.220.781.050 – anaphase MeSH G05.105.220.781.625 – metaphase MeSH G05.105.220.781.812 – prometaphase MeSH G05.105.220.781.906 – prophase MeSH G05.105.220.781.953 – telophase MeSH G05.105.220.812 – prometaphase MeSH G05.105.220.875 – prophase MeSH G05.105.220.875.500 – meiotic prophase i MeSH G05.105.220.875.500.299 – chromosome pairing MeSH G05.105.220.875.500.299.500 – synaptonemal complex MeSH G05.105.220.875.500.600 – pachytene stage MeSH G05.105.220.937 – telophase === MeSH G05.180 – dna damage === MeSH G05.180.099 – chromosome breakage MeSH G05.180.185 – dna fragmentation === MeSH G05.190 – dna methylation === === MeSH G05.192 – dna packaging === MeSH G05.192.095 – chromatin assembly and disassembly === MeSH G05.195 – dna repair === MeSH G05.195.830 – sos response (genetics) === MeSH G05.200 – dna replication === MeSH G05.200.760 – dna replication timing MeSH G05.200.880 – s phase === MeSH 
G05.265 – evolution === MeSH G05.265.250 – evolution, molecular MeSH G05.265.350 – genetic speciation === MeSH G05.310 –
gene expression === MeSH G05.310.670 – protein biosynthesis MeSH G05.310.700 – transcription, genetic MeSH G05.310.700.500 – reverse transcription === MeSH G05.315 – gene expression regulation === MeSH G05.315.095 – chromatin assembly and disassembly MeSH G05.315.125 – dosage compensation, genetic MeSH G05.315.125.970 – x chromosome inactivation MeSH G05.315.200 – down-regulation MeSH G05.315.203 – epigenesis, genetic MeSH G05.315.207 – epistasis, genetic MeSH G05.315.215 – frameshifting, ribosomal MeSH G05.315.250 – gene amplification MeSH G05.315.290 – gene expression regulation, archaeal MeSH G05.315.300 – gene expression regulation, bacterial MeSH G05.315.310 – gene expression regulation, developmental MeSH G05.315.320 – gene expression regulation, enzymologic MeSH G05.315.320.200 – enzyme induction MeSH G05.315.320.300 – enzyme repression MeSH G05.315.330 – gene expression regulation, fungal MeSH G05.315.370 – gene expression regulation, neoplastic MeSH G05.315.370.500 – gene expression regulation, leukemic MeSH G05.315.375 – gene expression regulation, plant MeSH G05.315.385 – gene expression regulation, viral MeSH G05.315.410 – gene silencing MeSH G05.315.410.790 – rna interference MeSH G05.315.425 – genomic imprinting MeSH G05.315.670 – protein modification, translational MeSH G05.315.670.600 – protein processing, post-translational MeSH G05.315.670.600.400 – protein isoprenylation MeSH G05.315.670.600.700 – protein splicing MeSH G05.315.700 – rna processing, post-transcriptional MeSH G05.315.700.225 – rna 3' end processing MeSH G05.315.700.225.710 – polyadenylation MeSH G05.315.700.250 – rna editing MeSH G05.315.700.700 – rna splicing MeSH G05.315.700.700.100 – alternative splicing MeSH G05.315.700.700.750 – trans-splicing MeSH G05.315.800 – trans-activation (genetics) MeSH G05.315.850 – up-regulation === MeSH G05.330 – gene rearrangement === MeSH G05.330.401 – gene rearrangement, b-lymphocyte MeSH G05.330.401.501 – gene rearrangement, b-lymphocyte, heavy 
chain MeSH G05.330.401.501.450 – immunoglobulin class switching MeSH G05.330.401.601 – gene rearrangement, b-lymphocyte, light chain MeSH G05.330.801 – gene rearrangement, t-lymphocyte MeSH G05.330.801.111 – gene rearrangement, alpha-chain t-cell antigen receptor MeSH G05.330.801.211 – gene rearrangement, beta-chain t-cell antigen receptor MeSH G05.330.801.261 – gene rearrangement, delta-chain t-cell antigen receptor MeSH G05.330.801.311 – gene rearrangement, gamma-chain t-cell antigen
receptor === MeSH G05.380 – heredity === === MeSH G05.600 – mutagenesis === MeSH G05.600.220 – dna repeat expansion MeSH G05.600.220.865 – trinucleotide repeat expansion MeSH G05.600.315 – gene amplification MeSH G05.600.320 – gene duplication MeSH G05.600.420 – inversion, chromosome MeSH G05.600.550 – mutagenesis, insertional MeSH G05.600.620 – nondisjunction, genetic MeSH G05.600.800 – sequence deletion MeSH G05.600.800.180 – chromosome deletion MeSH G05.600.800.320 – gene deletion MeSH G05.600.810 – somatic hypermutation, immunoglobulin MeSH G05.600.835 – suppression, genetic MeSH G05.600.860 – translocation, genetic === MeSH G05.760 – recombination, genetic === MeSH G05.760.200 – conjugation, genetic MeSH G05.760.210 – crossing over, genetic MeSH G05.760.380 – gene conversion MeSH G05.760.385 – gene fusion MeSH G05.760.385.500 – oncogene fusion MeSH G05.760.390 – gene transfer, horizontal MeSH G05.760.840 – sister chromatid exchange MeSH G05.760.850 – transduction, genetic MeSH G05.760.860 – transfection MeSH G05.760.860.500 – transformation, bacterial MeSH G05.760.865 – transformation, genetic MeSH G05.760.865.820 – transformation, bacterial === MeSH G05.800 – selection (genetics) === === MeSH G05.865 – sex determination (genetics) === === MeSH G05.930 – virus integration === MeSH G05.930.500 – lysogeny The list continues at List of MeSH codes (G06).
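The dot-separated MeSH tree numbers listed above encode the hierarchy directly: each code's parent is obtained by dropping its final segment. A minimal sketch (the helper name is an illustrative assumption):

```python
# MeSH tree numbers encode hierarchy in dot-separated segments: the parent
# of "G05.105.220.625" is "G05.105.220"; a top-level code has no parent.
def mesh_parent(tree_number):
    parts = tree_number.split(".")
    return ".".join(parts[:-1]) if len(parts) > 1 else None

print(mesh_parent("G05.105.220.625"))  # → G05.105.220
print(mesh_parent("G05"))              # → None
```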
|
{
"page_id": 5115429,
"source": null,
"title": "List of MeSH codes (G05)"
}
|
In organosulfur chemistry, thiadiazine is a heterocyclic compound containing a six-membered ring composed of three carbon atoms, one sulfur atom, and two nitrogen atoms. It exists in several isomeric forms, each characterized by a different arrangement of the sulfur and nitrogen atoms in the ring. Common isomers include 1,2,4-thiadiazine, 1,2,6-thiadiazine, and 1,3,4-thiadiazine. Thiadiazines have attracted significant interest in organic and medicinal chemistry research due to their diverse potential biological activities, including antimicrobial, anti-inflammatory, and muscle relaxant properties. Their potential applications in treating conditions such as Huntington's disease, rheumatoid arthritis, and type 2 diabetes have also been explored. == References ==
|
{
"page_id": 77925925,
"source": null,
"title": "Thiadiazine"
}
|
The molecular formula C33H34N4O6 (molar mass: 582.64 g/mol) may refer to: Azelnidipine, a dihydropyridine calcium channel blocker Biliverdin, a green tetrapyrrolic bile pigment and a product of heme catabolism
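The quoted molar mass can be checked from standard atomic weights; the small discrepancy with the quoted 582.64 g/mol depends on which atomic-weight values are used. A minimal sketch:

```python
# Molar mass of C33H34N4O6 from standard atomic weights (g/mol).
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
FORMULA = {"C": 33, "H": 34, "N": 4, "O": 6}

molar_mass = sum(ATOMIC_WEIGHTS[el] * n for el, n in FORMULA.items())
print(round(molar_mass, 2))  # → 582.66
```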
|
{
"page_id": 24120875,
"source": null,
"title": "C33H34N4O6"
}
|
In chemistry, the Halcon process refers to technology for the production of propylene oxide by oxidation of propylene with tert-butyl hydroperoxide. The reaction requires metal catalysts, which typically contain molybdenum: (CH3)3COOH + CH2=CHCH3 → (CH3)3COH + CH2OCHCH3 The byproduct tert-butanol is recycled or converted to other useful compounds. The process once operated at a scale of more than 2 billion kg/y. The lighter analogue of propylene oxide, ethylene oxide, is produced by silver-catalyzed reaction of ethylene with oxygen. Attempts to apply this relatively simple technology to the conversion of propylene to propylene oxide fail; instead, combustion predominates. The problem is attributed to the sensitivity of allylic C–H bonds. == Mechanism == The oxidation is thought to proceed by formation of Mo(η2-O2-tert-Bu) complexes. The peroxy O center is rendered highly electrophilic, leading to attack on the alkene. == History == The Halcon process was developed by Halcon International. == References ==
|
{
"page_id": 68226603,
"source": null,
"title": "Halcon process"
}
|
The Shanghai Stem Cell Institute is an institute in Shanghai, People's Republic of China dedicated to stem cell research. == The institute == The institute, located within Shanghai Jiao Tong University under the School of Medicine faculty, is entirely funded by the government of the People's Republic of China. In 2007, the first Shanghai International Symposium on Stem Cell Research took place at Shanghai Jiao Tong University. == IPS cell breakthrough == On July 24, 2009, Chinese researchers from the Shanghai Stem Cell Institute, led by Professor Fanyi Zeng, published a breakthrough in stem cell research: they successfully reprogrammed adult cells so that they could differentiate into any body cell, as standard embryonic stem cells do. The cells in question are known as "induced pluripotent stem cells" (IPS cells). The IPS cells were obtained by genetically reprogramming the skin cells of mice to act like embryonic stem cells, which were then able to differentiate into all forms of body tissue. The researchers used the IPS cells to create every type of cell in a mouse, producing entire mouse pups with the technique; this was the first time the technique had been used to make an entire mouse. The breakthrough, published in the journals Nature and Cell Stem Cell and developed independently by two teams in China, may reduce reliance on stem cells obtained from human embryos. The oldest living mice created by the technique are nine months old and are reproducing, albeit showing signs of abnormalities. "This gives us hope for future therapeutic intervention using patients' own re-programmed cells in our far future," according to Professor Zeng Fanyi. A total of 27 mice were successfully born from the first generation of mice created from the IPS
cells which were able to reproduce without any issues. == See also == Stem cell == References == == External links == Mice made from induced stem cells - Nature News
|
{
"page_id": 23727664,
"source": null,
"title": "Shanghai Stem Cell Institute"
}
|
Metal allergies inflame the skin after it has been in contact with metal. They are a form of allergic contact dermatitis. As of 2021, they are becoming more common, except in areas with regulatory countermeasures. People may become sensitized to certain metals by skin contact, usually by wearing or holding consumer products (including non-metal products, like textiles and leather treated with metals), or sometimes after exposure at work. Contact with damaged skin makes sensitization more likely. Medical implants may also cause allergic reactions. Diagnosis is by patch test, a method which does not work as well for metals as it does for some other allergens. Prevention and treatment consist of avoiding the metal allergen; there is no other treatment, as of 2021. It can be difficult to identify and avoid the allergen, because many metals are common in the environment, and some are biologically necessary to humans. Regulations have successfully reduced the rates of some metal allergies in Europe, but are not widespread. The social and economic costs of metal allergies are high. Metal allergies are type IV allergies; the metals act as haptens. The toxicity of some allergenic metals may contribute to the development of allergies. == Metals == Allergies to nickel, mercury, and chromium have long been recognised; gold, palladium, and cobalt have received attention more recently. There is often cross-sensitization, where a person allergic to one metal may become allergic to another, but monosensitization, reacting to just one metal, is also possible. For instance, many people allergic to nickel are also allergic to cobalt (a similar element often found in the same places as nickel) and palladium, but it is also possible to be allergic to only one of these metals. Nickel is one of the most common contact allergens. == Exposure routes == Most
cases of metal allergy are caused by consumer products containing metal; exposure at work can also cause metal allergies. The largest human exposure to metals is ingestion; while food or drink containing metals can cause an allergic reaction in people who already have an allergy, it is not clear, as of 2021, whether it can cause a new allergy. Some metal allergens are nutritionally necessary to humans. Airborne metals have been linked to higher rates of sensitization. It can be difficult to determine which allergen a person with contact dermatitis is reacting to, especially if the allergic reaction is systemic rather than occurring only where the allergen entered the body. Consumer products that have induced allergies include jewellery (both cheap and expensive; brand-name jewellery may release metal allergens), buttons, clothing fasteners (such as zippers, buckles, and hooks), dental restorations, mobile phones, and leather (from the tanning process). Metal hair fasteners may also leach allergens. The increase in consumer products, including consumer electronics, that use metal nanomaterials, mainly silver, titanium, zinc and aluminum, increases exposure. Tattoo inks contaminated with metal allergens have been known to cause severe reactions, sometimes years later, when the original ink is not available for testing. Implants and prosthetics, including dental repairs, are also an exposure route; dental work is the main way in which the general population is sensitized to palladium, and dental workers may develop occupational palladium allergies, though cross-sensitization may also be a common way in which people develop an allergy to this fairly rare metal. Medications containing metals could also potentially cause sensitization. === Skin === Exposure on damaged skin, such as chapped hands or a piercing, increases the risk of sensitization from a low-level exposure to the allergen. == Diagnosis == Diagnosis is by patch testing, a method first used in 1895. Patches
containing potential allergens are stuck on the skin, and the skin is monitored for inflammation. For metal allergens, patch test reproducibility is low, and the extent to which they predict implant failures is debated. If the person being tested already has a rash, it may be difficult to do a patch test. Patch testing may also worsen the allergy. It is also difficult to distinguish co-sensitivity from cross-sensitivity using a patch test. In-vitro tests, where a blood sample is examined for metal-sensitive T cells, are in development, but not widely used, partly due to cost. Many non-allergic people also have metal-specific T cells, and in some cases they seem to have more than some allergic individuals, which makes the test less useful. == Epidemiology == Metal allergies are rapidly becoming more common. Nickel is the most common contact allergen worldwide (of people with contact dermatitis, 11.4% in Europe, 8.8–25.7% in China, and 17.5% in North America are allergic to nickel). Nickel allergy, and contact allergies more generally, can develop at any age, but are most likely to develop in early adulthood. This may be due to patterns in exposure, changes in the immune system with age, or both. == Prevention and care == Preventing and treating contact allergies largely involves avoiding the allergen, which may be difficult when it is a common metal. There are no other treatments for metal allergies, as of 2021. === Environmental regulation === In the Netherlands, regulations introduced in the 1990s that limit the release of nickel from consumer products proved effective: Dutch women are now significantly less likely to develop nickel allergies. Sweden followed in 1994, and later regulations were made Europe-wide. These limits cover objects inserted into piercings (0.2 μg/cm²/week) and those in direct or prolonged contact with the skin
(0.5 μg/cm²/week). They also set target values for nickel in ambient air (20 ng/m³); increases in nickel concentrations in ambient air, even when absolute levels are quite low, have been linked to increased rates of sensitization in human populations. Nickel allergy rates in Europe have decreased, though it remains the most common contact allergy. Regulation is generally inadequate given the social and economic harm caused by metal allergies. Regulation encouraged the use of metals other than nickel, which caused more cases of allergies to those metals. Nickel remains the most common allergen, but cobalt is the second most common, and in 2020 the EU introduced a temporary generic concentration limit (GCL) of 0.1% on cobalt. Limits on nickel and cobalt in textiles (130 mg/kg nickel, 110 mg/kg cobalt) and leather (70 mg/kg nickel, 60 mg/kg cobalt) were proposed in 2020 by France and Sweden. There is no allergen regulation of palladium in Europe as of October 2021. == See also == Nickel allergy Contact dermatitis Alloy (mixtures of metals) Fast fashion == References ==
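The release limits described here are rates per unit skin-contact area per week, so a measured release can be checked by normalizing to that basis. A minimal sketch with illustrative sample values (the function name and inputs are assumptions, not from any regulation text):

```python
# EU limit for nickel release from articles in prolonged skin contact,
# expressed as micrograms per square centimetre per week.
LIMIT_UG_PER_CM2_WEEK = 0.5

def compliant(released_ug, area_cm2, days):
    # Normalize a measured release to a weekly per-area rate and compare.
    weekly_rate = released_ug / area_cm2 / (days / 7)
    return weekly_rate <= LIMIT_UG_PER_CM2_WEEK

print(compliant(released_ug=1.2, area_cm2=4.0, days=7))  # 0.3 µg/cm²/week → True
print(compliant(released_ug=6.0, area_cm2=4.0, days=7))  # 1.5 µg/cm²/week → False
```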
|
{
"page_id": 69209653,
"source": null,
"title": "Metal allergy"
}
|
Tetanolysin is a toxin produced by Clostridium tetani bacteria. Its function is unknown, but it is believed to contribute to the pathogenesis of tetanus. The other C. tetani toxin, tetanospasmin, is more definitively linked to tetanus. Tetanolysin is sensitive to oxygen. It belongs to a family of protein toxins known as thiol-activated cytolysins, which bind to cholesterol. It is related to streptolysin O and the θ-toxin of Clostridium perfringens. Cytolysins form pores in the cytoplasmic membrane that allow the passage of ions and other molecules into the cell. The molecular weight of tetanolysin is around 55,000 daltons. == References == == Further reading == Alouf, J. (1997) pp. 7–10 in Guidebook to Protein Toxins and Their Use in Cell Biology, Ed. Rappuoli, R. and Montecucco, C. (Oxford University Press). Ahnert-Hilger, G., Pahner, I., and Höltje, M. (1999) Pore-forming Toxins as Cell Biological and Pharmacological Tools. In press. Conti, A., Brando, C., DeBell, K.E., Alava, M.A., Hoffman, T., Bonvini, E. (1993) J. Biol. Chem. 268, 783–791. Raya, S.A., Trembovler, V., Shohami, E. and Lazarovici, P. (1993) Nat. Toxins 1, 263–270.
|
{
"page_id": 6950454,
"source": null,
"title": "Tetanolysin"
}
|
Francesca M. Kerton is a green chemist and Professor of Chemistry at Memorial University of Newfoundland, Canada. == Early life == Kerton completed her B.Sc. (Hons) in chemistry with environmental science at the University of Kent. She then completed her D.Phil. (1995–1999) at the University of Sussex. == Academic career == Following a postdoctoral fellowship at the University of British Columbia (1999–2000), Kerton was appointed as a junior lecturer at the University of York (2000–2002). She was awarded a Royal Society (UK) University Research Fellowship (2002–2004). She was appointed as an assistant professor in the Department of Chemistry at the Memorial University of Newfoundland in 2005, where she founded the Green Chemistry and Catalysis Group. She was promoted to associate professor in 2010 and to professor in 2015. == Research == Kerton has authored more than 80 scientific papers related to green chemistry, organometallic chemistry, catalysis, and polymer chemistry. Kerton and her research group have contributed to the development of processes to convert waste from fish and shellfish processing plants into chemical feedstocks. Her laboratory has also developed polymerization catalysts using earth-abundant metals. == Published work == Kerton is the co-author of the book Alternative Solvents for Green Chemistry, published by the Royal Society of Chemistry. She has also authored Fuels, Chemicals and Materials from the Oceans and Aquatic Sources, published by Wiley. == Honours and awards == Kerton was selected as one of three Canadian women for the 25 Women in Science 2024 cohort, whose research focuses on environmental sustainability. This recognition aims to increase the visibility of women researchers and highlights Kerton's research targeting new environmentally friendly technologies to transform food waste towards achieving a circular economy.
In 2023, Kerton was the recipient of the Kalev Pugi Award from the Society of Chemical Industry
(SCI) Canada Group. Kerton received the 2016 Dean's Distinguished Scholar Medal at Memorial University. In 2019, Kerton was recognized for her research with the Canadian Green Chemistry and Engineering Award (Individual). She was made a Fellow of the Royal Society of Chemistry in 2016. == Other contributions == Kerton has served on the interdisciplinary adjudication committee for the Canada Research Chairs program and as an evaluator of fellowship proposals for the Association of Commonwealth Universities Blue Charter. She is an associate editor of the journal RSC Sustainability and a member of the editorial advisory board for the journal Reaction Chemistry & Engineering, both published by the Royal Society of Chemistry. She is a member of the IUPAC committee for Chemistry Research Applied to World Needs (CHEMRAWN) and has chaired the committee since January 2020. She was co-chair of the 27th Annual Green Chemistry and Engineering Conference in 2023. == References == == External links == Kerton Faculty Website Interview with Green Chemistry blog
|
{
"page_id": 61279800,
"source": null,
"title": "Francesca M. Kerton"
}
|
Single-cell genome and epigenome by transposases sequencing (scGET-seq) is a DNA sequencing method for profiling open and closed chromatin. In contrast to single-cell assay for transposase-accessible chromatin with sequencing (scATAC-seq), which only targets active euchromatin, scGET-seq is also capable of probing inactive heterochromatin. This is achieved through the use of TnH, which is created by linking the chromodomain (CD) of heterochromatin protein-1-alpha (HP-1α) to the Tn5 transposase. TnH is then able to target histone 3 lysine 9 trimethylation (H3K9me3), a marker for heterochromatin. Akin to RNA velocity, which uses the ratio of spliced to unspliced RNA to infer the kinetics of changes in gene expression over the course of cellular development, the ratio of TnH to Tn5 signals obtained from scGET-seq can be used to calculate chromatin velocity, which measures the dynamics of chromatin accessibility over the course of cellular developmental pathways. == History == Transcriptional regulation is tightly linked to chromatin states. Chromatin that is open, or permissive to transcription, makes up only 2–3% of the genome but encompasses 94.4% of transcription factor binding sites. Conversely, more tightly packed DNA, or heterochromatin, is responsible for genome organization and stability. Chromatin density also changes over the course of cellular differentiation processes, but there is a lack of high-throughput sequencing methods for directly assaying heterochromatin. Many genomic-related diseases such as cancer are highly linked to changes in the epigenome. Cancers in particular are characterized by single-cell heterogeneity, which can drive metastasis and treatment resistance. The mechanisms that underlie these processes are still largely unknown, although the advent of single-cell technologies, including single-cell epigenomics, has contributed greatly to their elucidation.
In 2015, ATAC-seq, which uses the Tn5 transposase to fragment and tag accessible chromatin, or euchromatin, for sequencing, became feasible at single-cell resolution. scGET-seq builds upon
this technology by also providing information on heterochromatin, giving a more comprehensive view of chromatin structure and dynamics within each cell. == Methods == === Sample preparation === Sample preparation for scGET-seq starts with obtaining a suspension of nuclei from cells using a method appropriate for the starting material. The next step is to produce the TnH transposase. Tn5 is a transposase that cuts and ligates adapters to genomic regions unbound by nucleosomes (open chromatin). HP-1α is a member of the HP1 family and is able to recognize and specifically bind to H3K9me3. Its chromodomain uses an induced-fit mechanism for recognizing this chromatin modification. Linking the first 112 amino acids of HP-1α containing the chromodomain to Tn5 using a three poly-tyrosine-glycine-serine (TGS) linker produces the TnH transposase, which is capable of targeting heterochromatin marked by H3K9me3. Library preparation is done using a modified protocol for single-cell ATAC-seq, where the nuclei suspension is sequentially incubated with the Tn5 transposase first, and then TnH. === Data analysis === The goals of the data analysis are: To identify and characterize distinct cell populations using clustering To profile chromatin accessibility across the genome To predict copy-number variants and single-nucleotide variants ==== Pre-processing ==== Post-sequencing, reads need to be demultiplexed and mapped to the appropriate reference genome. Duplicate reads are identified and removed. "Peaks", or regions in the DNA enriched in the number of reads mapped, are identified. Quality control is performed, and cells with low numbers of reads or few detected features are filtered out. Four count matrices (matrices where each column is a cell and each row is a feature) are generated: Tn5-dhs, Tn5-complement, TnH-dhs and TnH-complement, representing signal from accessible and compacted chromatin.
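The Tn5 and TnH count matrices described above can be compared per cell to gauge the balance of open versus compacted chromatin, which is the intuition behind chromatin velocity. This toy sketch is not the published scGET-seq pipeline; the matrix values and the pseudocount are illustrative assumptions.

```python
import numpy as np

# Toy count matrices: rows = genomic regions, columns = cells.
tn5 = np.array([[8, 1], [5, 2]])   # accessible-chromatin (euchromatin) signal
tnh = np.array([[2, 6], [1, 7]])   # H3K9me3 heterochromatin signal

# Per-cell log-ratio of open to compacted signal; a pseudocount avoids log(0).
ratio = np.log2((tn5.sum(axis=0) + 1) / (tnh.sum(axis=0) + 1))
print(ratio)  # cell 0 skews open (positive), cell 1 skews compacted (negative)
```

In the real method, per-region Tn5/TnH dynamics across many cells feed into the chromatin-velocity calculation rather than a single per-cell summary.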
==== Analysis ==== ===== Dimension reduction, visualization and clustering ===== Each of the matrices is
filtered of shared regions and then normalized and log2 transformed. Linear dimension reduction is done using principal component analysis (PCA). Groups of cells are identified using a k-NN graph and the Leiden algorithm. Finally, the four matrices are combined using matrix factorization and UMAP reduction. ===== Cell identification annotation ===== There are two approaches to cell identity annotation: annotation based on feature annotation of ATAC peaks, and annotation based on integration with reference scRNA-seq data. == Applications == === Current === By using the ratio of Tn5 to TnH signals, quantitative values describing how quickly and in what direction chromatin remodelling is taking place can be calculated (chromatin velocity). By isolating regions that are most dynamic and identifying which transcription factors bind there, chromatin velocity can be used to infer the dynamic epigenetic processes happening within a given cell and the contributions of various transcription factors to those processes. === Future === Chromatin remodelling precedes changes in gene expression, and studying it enhances the understanding of trajectories and mechanisms of cellular changes. Thus, platforms and tools for integration of multimodal data are areas of active research. Incorporating temporal and directionality elements through integration of chromatin velocity with RNA velocity has been proposed to reveal even more information about differentiation pathways. == Limitations == scGET-seq has some of the same limitations as scATAC-seq. Both processes require nuclei samples from viable cells, and high cellular viability. Low cellular viability leads to high background DNA contamination that does not accurately represent authentic biological signals.
Additionally, the sparsity and noisy nature of scATAC-seq and scGET-seq data makes analysis challenging, and there is no consensus yet on how best to manage this data. Another limitation is that scGET-seq still requires validation of SNV results by bulk genome sequencing. Even though there is a high correlation of mutations
between bulk exome sequencing and scGET-seq results, scGET-seq fails to capture all exome SNVs. == References ==
|
{
"page_id": 70192699,
"source": null,
"title": "ScGET-seq"
}
|
In bioethics, the ethics of organ transplantation refers to the ethical concerns raised by organ transplantation procedures. Both the source and method of obtaining the organ to transplant are major ethical issues to consider, as well as the notion of distributive justice. == Sources == Organ harvesting from live people is one of the most frequently discussed debate topics in organ transplantation. The World Health Organization argues that transplantation promotes health, but the notion of "transplantation tourism" has the potential to violate human rights or exploit the poor, to have unintended health consequences, and to provide unequal access to services, all of which ultimately may cause harm. Thus, the WHO called for a ban on compensated organ transplanting and asked member states to protect the most vulnerable from transplant tourism and organ trade. However, once disincentives become necessary, reintroducing incentives, such as improving life conditions for organ donors after donation, becomes difficult. Even when framed as a "gift of life", in the context of developing countries this might be coercive. The practice of coercion could be considered exploitative of the poor population, violating basic human rights according to Articles 3 and 4 of the Universal Declaration of Human Rights. For example, in the history of major transplant countries, organs from executed prisoners were used to develop transplantation techniques. This practice was condemned by bioethicists and was gradually abandoned and replaced by donation systems. A powerfully argued opposing view holds that properly and effectively regulated trade (markets) in organs could ensure that the seller is fully informed of all the consequences of donation, that such trade is a mutually beneficial transaction between two consenting adults, and that prohibiting it could itself be interpreted as violating Articles 3 and 29 of the Universal Declaration of Human Rights. Even within developed countries there is concern that enthusiasm for increasing the supply
|
{
"page_id": 37883452,
"source": null,
"title": "Ethics of organ transplantation"
}
|
of organs may trample on respect for the right to life. The question is made even more complicated by the fact that the "irreversibility" criterion for legal death cannot be adequately defined and can easily change with changing technology. As controversies over the boundary between life and death grow, debate ensues over when to terminate end-of-life care and begin organ harvesting. Controversy also arises over how to presume consent to organ donation on behalf of the dead. In practice, most countries have legislation allowing for implied consent, asking people to opt out of organ donation rather than opt in, but allowing family refusals. There is less debate over animal sources, as laboratory animals have historically been used to develop organ transplantation technologies for prolonging human life, such as using animal organs in xenotransplantation on humans. Nevertheless, animal rights activists object to what they see as trading off the rights of animals to live their own lives (deontology) for the rationalized end of replacing an organ or a dialysis machine in a sufferer, particularly since many organ donations fail. Further, organ harvesting in bear farms abuses animals. Religious groups and ethical vegetarians may object on purity grounds to transplantation as violating natural boundaries. Researchers are currently exploring the prospects of using 3D printing or stem cells to produce organs, but some such research projects have been criticised for their use of human embryos obtained through abortions, as in the controversies over Planned Parenthood's sale of fetal organs and tissues for research. == Distribution == The scarcity of replacement organs, which currently come only from living or dead donors rather than factories and are insufficient for demand, results in a growing waiting list of patients and ethical issues in allocation. In 1994, E. H. Kluge objected to the equal-access principle based on his argument that people
|
{
"page_id": 37883452,
"source": null,
"title": "Ethics of organ transplantation"
}
|
whose needs are beyond their control should be preferred over people who choose a poor lifestyle. Donor matching intended to optimize life-years gained is also subject to debate, as people value their organs and the remainder of their lives differently. In practice, organ and tissue banks often choose patients in ways that secure their revenue, whereas “altruistic” clinics may not have the income necessary to fund their own needs, let alone to support research and development to improve the quality and availability of care. People with intellectual disabilities have historically been excluded from organ transplantation waitlists. A 1993 study by Levenson and Olbrisch found that transplant centers were more likely to exclude people with intellectual disabilities from certain types of organ transplants (i.e., heart, liver, and kidney), and more likely to exclude those with more severe intellectual disability than those with more moderate intellectual disability. Current commentary on the ethics of organ distribution opposes absolute exclusion and encourages an individualized interdisciplinary assessment. == References == == Further reading == Wilkinson, Martin; Wilkinson, Stephen (2019), "The donation of human organs", in Zalta, Edward N. (ed.), Stanford Encyclopedia of Philosophy, Stanford, California: The Metaphysics Research Lab, Center for the Study of Language and Information, Stanford University. Wilkinson, Stephen (2016), "The sale of human organs", in Zalta, Edward N. (ed.), Stanford Encyclopedia of Philosophy, Stanford, California: The Metaphysics Research Lab, Center for the Study of Language and Information, Stanford University. Kolnsberg, Heather R. (2003). "An economic study: Should we sell human organs?". International Journal of Social Economics. 30 (10): 1049–1069. doi:10.1108/03068290310492850.
|
{
"page_id": 37883452,
"source": null,
"title": "Ethics of organ transplantation"
}
|
This page provides supplementary chemical data on bismuth(III) oxide. == Material Safety Data Sheet == MSDS from Fischer Scientific == Structure and properties == == Thermodynamic properties == == Spectral data == == References ==
|
{
"page_id": 3018300,
"source": null,
"title": "Bismuth(III) oxide (data page)"
}
|
In physical cosmology, baryogenesis (also known as baryosynthesis) is the physical process that is hypothesized to have taken place during the early universe to produce baryonic asymmetry: the observation that only matter (baryons) and not antimatter (antibaryons) is detected in the universe, other than in cosmic ray collisions. Since it is assumed in cosmology that the particles we see were created by the same physics we measure today, and since in particle physics experiments today matter and antimatter are always produced symmetrically, the dominance of matter over antimatter is unexplained. A number of theoretical mechanisms have been proposed to account for this discrepancy, namely by identifying conditions that favour symmetry breaking and the creation of normal matter (as opposed to antimatter). This imbalance has to be exceptionally small, on the order of 1 in every 1,630,000,000 (≈2×10⁹) particles a small fraction of a second after the Big Bang. After most of the matter and antimatter was annihilated, what remained was all the baryonic matter in the current universe, along with a much greater number of photons. Experiments reported in 2010 at Fermilab, however, seem to show that this imbalance is much greater than previously assumed. These experiments involved a series of particle collisions and found that the amount of generated matter was approximately 1% larger than the amount of generated antimatter. The reason for this discrepancy is not yet known. Most grand unified theories explicitly break the baryon number symmetry, which would account for this discrepancy, typically invoking reactions mediated by very massive X bosons (X) or massive Higgs bosons (H0). The rate at which these events occur is governed largely by the mass of the intermediate X or H0 particles, so by assuming these reactions are responsible for the majority of the baryon number seen today, a maximum mass can be calculated above
|
{
"page_id": 462396,
"source": null,
"title": "Baryogenesis"
}
|
which the rate would be too slow to explain the presence of matter today. These estimates predict that a large volume of material would occasionally exhibit a spontaneous proton decay, which has not been observed. Therefore, the imbalance between matter and antimatter remains a mystery. Baryogenesis theories are based on different descriptions of the interaction between fundamental particles. Two main theories are electroweak baryogenesis, which would occur during the electroweak phase transition, and GUT baryogenesis, which would occur during or shortly after the grand unification epoch. Quantum field theory and statistical physics are used to describe such possible mechanisms. Baryogenesis is followed by primordial nucleosynthesis, when atomic nuclei began to form. == Background == The majority of ordinary matter in the universe is found in atomic nuclei, which are made of neutrons and protons. There is no evidence of primordial antimatter. In the universe, about 1 in 10,000 protons are antiprotons, consistent with ongoing production by cosmic rays. The possibility of domains of antimatter in other parts of the universe is inconsistent with the lack of a measurable gamma-radiation background. Furthermore, accurate predictions of Big Bang nucleosynthesis depend upon the value of the baryon asymmetry factor (see § Relation to Big Bang nucleosynthesis). The match between the predictions and observations of the nucleosynthesis model constrains the value of this baryon asymmetry factor.
In particular, if the model is computed with equal amounts of baryons and antibaryons, they annihilate each other so completely that not enough baryons are left to create nucleons. There are two main interpretations of this disparity: either the universe began with a small preference for matter (a total baryonic number of the universe different from zero), or the universe was originally perfectly symmetric, but somehow a set of particle physics phenomena contributed to a small imbalance
|
{
"page_id": 462396,
"source": null,
"title": "Baryogenesis"
}
|
in favour of matter over time. The goal of cosmological theories of baryogenesis is to explain the baryon asymmetry factor using the quantum field theory of elementary particles. == Sakharov conditions == In 1967, Andrei Sakharov proposed a set of three necessary conditions that a baryon-generating interaction must satisfy to produce matter and antimatter at different rates. These conditions were inspired by the then-recent discoveries of the cosmic microwave background and of CP-violation in the neutral kaon system. The three necessary "Sakharov conditions" are: Baryon number (B) violation. C-symmetry and CP-symmetry violation. Interactions out of thermal equilibrium. Baryon number violation is a necessary condition to produce an excess of baryons over anti-baryons. But C-symmetry violation is also needed so that interactions which produce more baryons than anti-baryons are not counterbalanced by interactions which produce more anti-baryons than baryons. CP-symmetry violation is similarly required, because otherwise equal numbers of left-handed baryons and right-handed anti-baryons would be produced, as well as equal numbers of left-handed anti-baryons and right-handed baryons. Finally, the last condition, known as the out-of-equilibrium decay scenario, states that the rate of a reaction which generates baryon asymmetry must be less than the rate of expansion of the universe. This ensures that the particles and their corresponding antiparticles do not reach thermal equilibrium, since rapid expansion decreases the occurrence of pair annihilation. The interactions must be out of thermal equilibrium at the time the baryon-number- and C/CP-symmetry-violating decays occur in order to generate the asymmetry. == In the Standard Model == The Standard Model can incorporate baryogenesis, though the amount of net baryons (and leptons) thus created may not be sufficient to account for the present baryon asymmetry.
One excess quark per billion quark–antiquark pairs was required in the early universe in order to provide
|
{
"page_id": 462396,
"source": null,
"title": "Baryogenesis"
}
|
all the observed matter in the universe. This insufficiency has not yet been explained, theoretically or otherwise. Baryogenesis within the Standard Model requires the electroweak symmetry breaking to be a first-order cosmological phase transition, since otherwise sphalerons wipe out any baryon asymmetry that happened up to the phase transition. Beyond this, the remaining amount of baryon non-conserving interactions is negligible. The phase transition domain wall breaks the P-symmetry spontaneously, allowing for CP-symmetry violating interactions to break C-symmetry on both its sides. Quarks tend to accumulate on the broken phase side of the domain wall, while anti-quarks tend to accumulate on its unbroken phase side. Due to CP-symmetry violating electroweak interactions, some amplitudes involving quarks are not equal to the corresponding amplitudes involving anti-quarks, but rather have opposite phase (see CKM matrix and Kaon); since time reversal takes an amplitude to its complex conjugate, CPT-symmetry is conserved in this entire process. Though some of their amplitudes have opposite phases, both quarks and anti-quarks have positive energy, and hence acquire the same phase as they move in space-time. This phase also depends on their mass, which is identical but depends both on flavor and on the Higgs VEV which changes along the domain wall. Thus certain sums of amplitudes for quarks have different absolute values compared to those of anti-quarks. In all, quarks and anti-quarks may have different reflection and transmission probabilities through the domain wall, and it turns out that more quarks coming from the unbroken phase are transmitted compared to anti-quarks. Thus there is a net baryonic flux through the domain wall. Due to sphaleron transitions, which are abundant in the unbroken phase, the net anti-baryonic content of the unbroken phase is wiped out as anti-baryons are transformed into leptons. However, sphalerons are rare enough in the broken phase as
|
{
"page_id": 462396,
"source": null,
"title": "Baryogenesis"
}
|
not to wipe out the excess of baryons there. In total, there is a net creation of baryons (as well as leptons). In this scenario, non-perturbative electroweak interactions (i.e. the sphaleron) are responsible for the B-violation, the perturbative electroweak Lagrangian is responsible for the CP-violation, and the domain wall is responsible for the lack of thermal equilibrium and for the P-violation; together with the CP-violation, it also creates a C-violation on each of its sides. == Relation to Big Bang nucleosynthesis == The central question of baryogenesis is what causes the preference for matter over antimatter in the universe, as well as the magnitude of this asymmetry. An important quantifier is the asymmetry parameter, given by {\displaystyle \eta ={\frac {n_{\text{B}}-n_{\bar {\text{B}}}}{n_{\gamma }}},} where nB and nB̄ refer to the number density of baryons and antibaryons respectively and nγ is the number density of cosmic background radiation photons. According to the Big Bang model, matter decoupled from the cosmic background radiation (CBR) at a temperature of roughly 3000 kelvin, corresponding to an average kinetic energy of 3000 K / (10.08×10³ K/eV) ≈ 0.3 eV. After the decoupling, the total number of CBR photons remains constant; therefore, due to space-time expansion, the photon density decreases. The photon density at equilibrium temperature T is given by {\displaystyle {\begin{aligned}n_{\gamma }&={\frac {1}{\pi ^{2}}}{\left({\frac {k_{\text{B}}T}{\hbar c}}\right)}^{3}\int _{0}^{\infty }{\frac {x^{2}}{e^{x}-1}}dx\\[2pt]&={\frac {2\zeta (3)}{\pi ^{2}}}{\left({\frac {k_{\text{B}}T}{\hbar c}}\right)}^{3}\\[2pt]&\approx 20.3\left({\frac {T}{\mathrm {1\,K} }}\right)^{3}{\text{cm}}^{-3},\end{aligned}}} with kB as the Boltzmann constant,
|
{
"page_id": 462396,
"source": null,
"title": "Baryogenesis"
}
|
ħ as the Planck constant divided by 2π, c as the speed of light in vacuum, and ζ(3) as Apéry's constant. At the current CBR photon temperature of 2.725 K, this corresponds to a photon density nγ of around 411 CBR photons per cubic centimeter. Therefore, the asymmetry parameter η, as defined above, is not the "best" parameter. Instead, the preferred asymmetry parameter uses the entropy density s, {\displaystyle \eta _{s}={\frac {n_{\text{B}}-n_{\bar {\text{B}}}}{s}}} because the entropy density of the universe remained reasonably constant throughout most of its evolution. The entropy density is {\displaystyle s\ {\stackrel {\mathrm {def} }{=}}\ {\frac {\mathrm {entropy} }{\mathrm {volume} }}={\frac {p+\rho }{T}}={\frac {2\pi ^{2}}{45}}g_{\text{⁎}}(T)T^{3},} with p and ρ as the pressure and density from the energy density tensor Tμν, and g⁎ as the effective number of degrees of freedom for "massless" particles at temperature T (in so far as mc² ≪ kBT holds), {\displaystyle g_{\text{⁎}}(T)=\sum _{i=\mathrm {bosons} }g_{i}{\left({\frac {T_{i}}{T}}\right)}^{3}+{\frac {7}{8}}\sum _{j=\mathrm {fermions} }g_{j}{\left({\frac {T_{j}}{T}}\right)}^{3},} for bosons and fermions with gi and gj degrees of freedom at temperatures Ti and Tj respectively. At the present epoch, s = 7.04 nγ. == Other models == === B-meson decay === Another possible explanation for the cause of baryogenesis is the decay reaction of B-mesogenesis. This phenomenon
|
{
"page_id": 462396,
"source": null,
"title": "Baryogenesis"
}
|
suggests that in the early universe, particles such as the B-meson decay into a visible Standard Model baryon as well as a dark antibaryon that is invisible to current observation techniques. === Asymmetric Dark Matter === The asymmetric dark matter proposal investigates mechanisms that would explain the abundance of dark matter, and the corresponding lack of dark antimatter, as a consequence of the same mechanism that would explain baryogenesis. == See also == Affleck–Dine mechanism Anthropic principle Big Bang Chronology of the universe CP violation Leptogenesis (physics) Lepton == References == === Articles === === Textbooks === E. W. Kolb & M. S. Turner (1994). The Early Universe. Perseus Publishing. ISBN 978-0-201-62674-2. === Preprints === A. D. Dolgov (1997). "Baryogenesis, 30 Years After". Surveys in High Energy Physics. 13 (1–3): 83–117. arXiv:hep-ph/9707419. Bibcode:1998SHEP...13...83D. doi:10.1080/01422419808240874. S2CID 119499400.
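The photon-density formula and asymmetry parameter from the nucleosynthesis section above can be checked numerically. The sketch below evaluates nγ at the present CBR temperature; the physical constants are standard CODATA values, and the η value used for the baryon-density estimate is an assumed figure from CMB fits, not a number taken from this article.

```python
import math

# Physical constants (CODATA values; treated here as given)
k_B = 1.380649e-23          # Boltzmann constant, J/K
hbar = 1.054571817e-34      # reduced Planck constant, J*s
c = 2.99792458e8            # speed of light, m/s
zeta3 = 1.2020569031595943  # Apery's constant, zeta(3)

def photon_density(T):
    """CBR photon number density (per cm^3) at temperature T in kelvin."""
    n_per_m3 = (2 * zeta3 / math.pi**2) * (k_B * T / (hbar * c)) ** 3
    return n_per_m3 * 1e-6  # convert m^-3 -> cm^-3

# Coefficient of the ~20.3 * (T / 1 K)^3 cm^-3 approximation
print(f"coefficient: {photon_density(1.0):.2f} cm^-3 K^-3")

n_gamma = photon_density(2.725)
print(f"n_gamma at 2.725 K: {n_gamma:.1f} photons/cm^3")  # consistent with the ~411 quoted above

eta = 6.1e-10  # assumed baryon asymmetry (typical CMB-fit value; an assumption here)
print(f"n_B ~ eta * n_gamma = {eta * n_gamma:.1e} baryons/cm^3")
```

Evaluating the integral analytically via ζ(3) rather than numerically keeps the sketch self-contained; the small spread around 411 photons/cm³ reflects rounding of the 20.3 coefficient.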
|
{
"page_id": 462396,
"source": null,
"title": "Baryogenesis"
}
|
Safety in numbers is the hypothesis that, by being part of a large physical group or mass, an individual is less likely to be the victim of a mishap, accident, attack, or other bad event. Some related theories also argue (and can show statistically) that mass behaviour (by becoming more predictable and "known" to other people) can reduce accident risks, such as in traffic safety – in this case, the safety effect creates an actual reduction of danger, rather than just a redistribution over a larger group. == In biology == The mathematical biologist W.D. Hamilton proposed his selfish herd theory in 1971 to explain why animals seek central positions in a group. Each individual can reduce its own domain of danger by situating itself with neighbours all around, so it moves towards the centre of the group. The effect was tested in brown fur seal predation by great white sharks. Using decoy seals, the distance between decoys was varied to produce different domains of danger. As predicted, the seals with a greater domain of danger had an increased risk of shark attack. Antipredator adaptations include behaviour such as the flocking of birds, herding of sheep and schooling of fish. Similarly, Adelie penguins wait to jump into the water until a large enough group has assembled, reducing each individual's risk of seal predation. This behavior is also seen in masting and predator satiation where the predators are overwhelmed with an abundance of prey during a period of time resulting in more of the prey surviving. == In road traffic safety == In 1949 R. J. Smeed reported that per capita road fatality rates tended to be lower in countries with higher rates of motor vehicle ownership. This observation led to Smeed's Law. In 2003 Peter L. Jacobsen compared rates of
|
{
"page_id": 921151,
"source": null,
"title": "Safety in numbers"
}
|
walking and cycling, in a range of countries, with rates of collisions between motorists and cyclists or walkers. He found an inverse relationship that was hypothesised to be explained by a concept described as 'behavioural adaptation', whereby drivers who are exposed to greater numbers of cyclists on the road begin to drive more safely around them. Though an attractive concept for cycling advocates, it has not been empirically validated. Other combined modelling and empirical evidence suggests that while changes in driver behaviour might still be one way that collision risk per cyclist declines with greater numbers, the effect can be easily produced through simple spatial processes (traffic design) akin to the biological herding processes described above. Without considering hypotheses 1 or 3, Jacobsen concluded that "A motorist is less likely to collide with a person walking and bicycling if more people walk or bicycle." He described this theory as "safety in numbers." Safety in numbers is also used to describe the evidence that the number of pedestrians or cyclists correlates inversely with the risk of a motorist colliding with a pedestrian or cyclist. This non-linear relationship was first shown at intersections. It has been confirmed by ecologic data from cities in California and Denmark, and European countries, and time-series data for the United Kingdom and the Netherlands. The number of pedestrians or bicyclists injured increases at a slower rate than would be expected based on their numbers. That is, more people walk or cycle where the risk to the individual pedestrian or bicyclist is lower. A 2002 study into whether pedestrian risk decreased with pedestrian flow, using 1983-86 data from signalized intersections in a town in Canada, found that in some circumstances pedestrian flow increased where the risk per pedestrian decreased. After cycling was promoted in Finland, there was a
|
{
"page_id": 921151,
"source": null,
"title": "Safety in numbers"
}
|
75% drop in cyclist deaths and the number of trips increased by 72%. In England, between 2000 and 2008, serious bicycle injuries declined by 12%. Over the same period, the number of bicycle trips made in London doubled. Motor vehicle traffic decreased by 16%, bicycle use increased by 28% and cyclist injuries decreased by 20% in the first year of operation of the London Congestion Charge. In January 2008, the number of cyclists in London being treated in hospitals for serious injuries had increased by 100% in six years. Over the same period, the number of cyclists had increased by 84%. In York, comparing the periods 1991–93 and 1996–98, the number of bicyclists killed and seriously injured fell by 59%. The share of trips made by bicycle rose from 15% to 18%. In Germany, between 1975 and 2001, the total number of bicycle trips made in Berlin almost quadrupled. Between 1990 and 2007, the share of trips made by bicycle increased from 5% to 10%. Between 1992 and 2006, the number of serious bicycle injuries declined by 38%. In Germany as a whole, between 1975 and 1998, cyclist fatalities fell by 66% and the percentage of trips made by bicycle rose from 8% to 12%. In America, during the period 1999–2007, the absolute number of cyclists killed or seriously injured decreased by 29% and the amount of cycling in New York City increased by 98%. In Portland, Oregon, between 1990 and 2000, the percentage of workers who commuted to work by bicycle rose from 1.1% to 1.8%. By 2008, the proportion had risen to 6.0%; while the number of workers increased by only 36% between 1990 and 2008, the number of workers commuting by bicycle increased by 608%. Between 1992 and 2008, the number of bicyclists crossing four
|
{
"page_id": 921151,
"source": null,
"title": "Safety in numbers"
}
|
bridges into downtown increased by 369%. During that same period, the number of reported crashes increased by only 14%. In Copenhagen, Denmark, between 1995 and 2006, the number of cyclists killed or seriously injured fell by 60%. During the same period, cycling increased by 44% and the percentage of people cycling to work increased from 31% to 36%. In the Netherlands, between 1980 and 2005, cyclist fatalities decreased by 58% while cycling increased by 45%. Over seven years in the 1980s, hospital admissions of cyclists in Western Australia declined by 5% while cycling increased by 82%. == See also == Bike bus Critical Mass Predator satiation Walking bus == References == == External links == Media related to Safety in numbers at Wikimedia Commons
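The sublinear injury–exposure relationship described above ("the number of pedestrians or bicyclists injured increases at a slower rate than would be expected based on their numbers") is often summarized as a power law. The sketch below illustrates the arithmetic; the exponent 0.4 is an assumed value of the kind reported in Jacobsen-style fits, not a figure from this article.

```python
# Illustrative "safety in numbers" model: injuries I grow sublinearly with
# exposure E (number of cyclists), I = k * E**b with 0 < b < 1.
# b = 0.4 is an assumed exponent for illustration only.

def injuries(exposure, k=1.0, b=0.4):
    """Expected injury count for a given exposure level."""
    return k * exposure ** b

def risk_per_cyclist(exposure, k=1.0, b=0.4):
    """Per-individual risk, k * E**(b - 1): falls as exposure rises."""
    return injuries(exposure, k, b) / exposure

base, doubled = 1000.0, 2000.0
# Doubling cycling raises total injuries by only 2**0.4 - 1, i.e. ~32%...
print(injuries(doubled) / injuries(base))
# ...so the risk faced by each individual cyclist falls by ~34% (2**-0.6).
print(risk_per_cyclist(doubled) / risk_per_cyclist(base))
```

This reproduces the qualitative pattern in the city data above: total casualties can rise modestly, or even fall, while cycling volumes grow much faster.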
|
{
"page_id": 921151,
"source": null,
"title": "Safety in numbers"
}
|
The ouzo effect (OO-zoh), also known as the louche effect (LOOSH) and spontaneous emulsification, is the phenomenon of the formation of a milky oil-in-water emulsion when water is added to ouzo and other anise-flavored liqueurs and spirits, such as pastis, rakı, arak, sambuca and absinthe. Such emulsions occur with only minimal mixing and are highly stable. == Observation and explanation == First, a strongly hydrophobic essential oil such as trans-anethole is dissolved in a water-miscible solvent such as ethanol, and the ethanol itself forms a solution (a homogeneous mixture) with water. If the concentration of ethanol is then lowered by the addition of more water, the hydrophobic substance precipitates from the solution and forms an emulsion with the remaining ethanol–water mixture. The tiny droplets of the substance in the emulsion scatter light and thus make the mixture appear white. Oil-in-water emulsions are not normally stable: oil droplets coalesce until complete phase separation is achieved at macroscopic levels. The addition of a small amount of surfactant, or the application of high shear rates (strong stirring), can stabilize the oil droplets. In a water-rich ouzo mixture, however, droplet coalescence is dramatically slowed without mechanical agitation, dispersing agents, or surfactants; the mixture forms a stable, homogeneous fluid dispersion by liquid–liquid nucleation. The size of the droplets, when measured by small-angle neutron scattering, was found to be on the order of a micron. Using dynamic light scattering, Sitnikova et al. showed that the droplets of oil in the emulsion grow by Ostwald ripening, and that the droplets do not coalesce. The Ostwald ripening rate is observed to diminish with increasing ethanol concentration until the droplets stabilize in size at an average diameter of 3 microns. Based on thermodynamic considerations of the multi-component mixture, the emulsion derives its stability from trapping between the binodal and spinodal curves in the phase
|
{
"page_id": 15470144,
"source": null,
"title": "Ouzo effect"
}
|
diagram. However, the microscopic mechanisms responsible for the observed slowing of Ostwald ripening rates at increasing ethanol concentrations are not yet fully understood. == Applications == Emulsions have many commercial uses. A large range of prepared food products, detergents, and body-care products take the form of emulsions that are required to be stable over a long period of time. The ouzo effect is seen as a potential mechanism for generating surfactant-free emulsions without the need for high-shear stabilisation techniques that are costly in large-scale production processes. A variety of dispersions, such as pseudolatexes, silicone emulsions, and biodegradable polymeric nanocapsules, have been synthesized using the ouzo effect, though, as stated previously, the exact mechanism of the effect remains unclear. Nanoparticles formed using the ouzo effect are thought to be kinetically stabilized, as opposed to the thermodynamically stabilized micelles formed using a surfactant, due to the fast solidification of the polymer during the preparation process. == See also == Interface and colloid science Miniemulsion Anise-flavored liqueurs Spinodal == References == == External links == Media related to Ouzo effect at Wikimedia Commons
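The Ostwald ripening described above follows, in the classical LSW picture, a cube-law growth of the mean droplet radius. The sketch below illustrates that kinetics; the rate constant values and initial radius are illustrative assumptions, not measurements from the article.

```python
# Sketch of Ostwald ripening kinetics (classical LSW theory): the mean
# droplet radius grows as r(t)**3 = r0**3 + K*t, where K is a rate constant
# that, per the observations above, decreases with ethanol concentration.
# All numeric values here are illustrative assumptions.

def mean_radius(t, r0=0.5e-6, K=1e-22):
    """Mean droplet radius (m) after time t (s) under cube-law ripening."""
    return (r0 ** 3 + K * t) ** (1.0 / 3.0)

# A smaller K (slower ripening, as at higher ethanol content) keeps
# droplets small for longer:
for K in (1e-22, 1e-23):
    r_hour = mean_radius(3600.0, K=K)
    print(f"K = {K:.0e} m^3/s: r(1 h) = {r_hour * 1e6:.2f} um")
```

The cube law captures why growth decelerates: each doubling of radius requires eight times the transported volume, consistent with droplets appearing to "stabilize" near a few microns on practical timescales.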
|
{
"page_id": 15470144,
"source": null,
"title": "Ouzo effect"
}
|
The interorbital region of the skull is located between the eyes, anterior to the braincase. The form of the interorbital region may exhibit significant variation between taxonomic groups. In oryzomyine rodents, for example, the width, form, and presence of beading in the interorbital region vary among species. In birds and many other animals whose eyes are set on the side of the skull, the interorbital region normally consists of a thin interorbital septum only. This may be pierced by a hole of larger or smaller size, connecting the eye sockets. == References == Weksler, M. 2006. Phylogenetic relationships of oryzomyine rodents (Muroidea: Sigmodontinae): separate and combined analyses of morphological and molecular data. Bulletin of the American Museum of Natural History 296:1–149.
|
{
"page_id": 24907331,
"source": null,
"title": "Interorbital region"
}
|
The cavity method is a mathematical method presented by Marc Mézard, Giorgio Parisi and Miguel Angel Virasoro in 1987 to derive and solve some mean-field-type models in statistical physics, specially adapted to disordered systems. The method has been used to compute properties of ground states in many condensed matter and optimization problems. Initially invented to deal with the Sherrington–Kirkpatrick model of spin glasses, the cavity method has shown wider applicability. It can be regarded as a generalization of the Bethe–Peierls iterative method from tree-like graphs to the case of a graph with loops that are not too short. The cavity method can solve many problems also solvable using the replica trick, but has the advantage of being more intuitive and less mathematically subtle than replica-based methods. The cavity method proceeds by perturbing a large system with the addition of a non-thermodynamic number of additional constituents and approximating the response of the entire system perturbatively. The application of the resulting approximation, along with an assumption that certain observables are self-averaging, yields a self-consistency equation for the statistics of the added constituents. The added constituents are then considered to be the mean-field variables. The cavity method has proved useful in solving optimization problems such as k-satisfiability and graph coloring. It has yielded not only ground-state energy predictions in the average case but has also inspired algorithmic methods. == See also == The cavity method originated in the context of statistical physics, but is also closely related to methods from other areas such as belief propagation. == References == == Further reading == Braunstein, A.; Mézard, M.; Zecchina, R. (2005). "Survey propagation: An algorithm for satisfiability". Random Structures and Algorithms. 27 (2): 201–226. arXiv:cs.CC/0212002. doi:10.1002/rsa.20057. ISSN 1042-9832. S2CID 6601396. Mézard, M.; Parisi, G. (2001). 
"The Bethe lattice spin glass revisited". The
"The Bethe lattice spin glass revisited". The
|
{
"page_id": 4263491,
"source": null,
"title": "Cavity method"
}
|
European Physical Journal B. 20 (2): 217–233. arXiv:cond-mat/0009418. Bibcode:2001EPJB...20..217M. doi:10.1007/PL00011099. ISSN 1434-6028. S2CID 59494448. Mézard, Marc; Parisi, Giorgio (2003). "The Cavity Method at Zero Temperature". Journal of Statistical Physics. 111 (1/2): 1–34. arXiv:cond-mat/0207121. Bibcode:2003JSP...111....1M. doi:10.1023/A:1022221005097. ISSN 0022-4715. S2CID 116942750. Krz̧akała, Florent; Montanari, Andrea; Ricci-Tersenghi, Federico; Semerjian, Guilhem; Zdeborová, Lenka (2007). "Gibbs states and the set of solutions of random constraint satisfaction problems". Proceedings of the National Academy of Sciences of the United States of America. 104 (2): 10318–10323. arXiv:cond-mat/0612365. Bibcode:2007PNAS..10410318K. doi:10.1073/pnas.0703685104. ISSN 0027-8424. PMC 1965511. PMID 17567754. S2CID 10018706. Advani, Madhu; Bunin, Guy; Mehta, Pankaj (2018). "Statistical physics of community ecology: a cavity solution to MacArthur's consumer resource model". Journal of Statistical Physics. 2018 (3): 033406. Bibcode:2018JSMTE..03.3406A. doi:10.1088/1742-5468/aab04e. PMC 6329381. PMID 30636966.
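The self-consistency equations that the cavity method yields can be made concrete on the simplest textbook instance: a ferromagnetic Ising model on a c-regular (locally tree-like) graph. The replica-symmetric cavity field h satisfies h = (c − 1)·u(h) with u(h) = (1/β)·atanh(tanh(βJ)·tanh(βh)). The Python sketch below solves this by fixed-point iteration; it is a standard Bethe–Peierls illustration of the method, not an implementation from the references above, and the parameter values are assumptions.

```python
import math

# Replica-symmetric cavity (Bethe-Peierls) self-consistency for a
# ferromagnetic Ising model on a c-regular graph. Each spin receives
# cavity messages from c - 1 neighbours:
#   h = (c - 1) * u(h),   u(h) = (1/beta) * atanh(tanh(beta*J) * tanh(beta*h))

def solve_cavity_field(beta, J=1.0, c=3, h0=0.1, iters=2000):
    """Fixed-point iteration of the cavity self-consistency equation."""
    h = h0
    for _ in range(iters):
        h = (c - 1) / beta * math.atanh(math.tanh(beta * J) * math.tanh(beta * h))
    return h

def magnetization(beta, J=1.0, c=3):
    """Bethe magnetization: the full local field sums over all c neighbours."""
    h = solve_cavity_field(beta, J, c)
    u = math.atanh(math.tanh(beta * J) * math.tanh(beta * h)) / beta
    return math.tanh(beta * c * u)

# The ferromagnetic transition on a c-regular tree sits at
# tanh(beta_c * J) = 1/(c - 1); for c = 3, beta_c = atanh(1/2) ~ 0.549.
print(magnetization(beta=0.4))  # above T_c: the cavity field iterates to ~0
print(magnetization(beta=1.0))  # below T_c: a non-trivial fixed point appears
```

The iteration is exactly the homogeneous limit of belief propagation mentioned in the See also section: each message update assumes the incoming neighbours are conditionally independent, which is what the locally tree-like structure guarantees.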
|
{
"page_id": 4263491,
"source": null,
"title": "Cavity method"
}
|
A pancake machine is an electrically powered machine that automatically produces cooked pancakes. The earliest known pancake machine is believed to have been invented in the United States in 1928. Several types of pancake machines exist that operate in various manners, for both commercial and home use. Some are fully automatic in operation, while others are semi-automatic. Some companies mass-produce pancake machines, and some machines have been homemade. == History == In 1928, a man in Portland, Oregon, invented an electric pancake machine in which batter was dropped from a storage cylinder onto a revolving flattop grill heated by electricity. The amount of batter dropped was controlled using measured amounts of compressed air, which pushed batter out of the storage cylinder. As the batter revolved on the hot grill, the pancake was flipped halfway through the cooking process by a shelf atop the grill. After being flipped, the completed pancake was ejected from the machine upon contact with a gate. In 1955 in the United States, an automatic pancake machine was developed by Vendo, which used a specially formulated pancake batter mix manufactured by the Quaker Oats Company's Aunt Jemima branch. The Vendo machine could produce pancakes "in less than three minutes". It was a semi-automatic machine that performed all of the cooking functions except for the pouring of the pancake batter. == Types and uses == Various types of pancake machines exist, such as those that run pancake batter through a heated conveyor inside a box unit, and those that automatically drop pancake batter onto a flattop grill. Some pancake machines, such as one developed by Crepe-Coer, cook both sides of a pancake simultaneously. Semi-automatic pancake machines also exist, which require some human interaction to function,
|
{
"page_id": 47451719,
"source": null,
"title": "Pancake machine"
}
|
such as the pouring of batter. Commercial pancake machines may be used in the foodservice industry, in cafeterias and by restaurants, and can serve to reduce the waste of stale pancake batter. Some hotels have pancake machines that guests are allowed to operate. They are also used in other environments in a self-service manner, such as in upscale airport lounges and hotels. Homemade versions of pancake machines have been constructed. An example of a homemade pancake machine is one constructed in 1977 by Ken Whitsett of the Ocala Kiwanis Club in Ocala, Florida, which was used for the organization's annual pancake day. The Kiwanis machine utilized a hopper filled with pancake batter that was manually dropped onto a revolving griddle. The pancakes were manually flipped and plated when cooking was completed. It required four people for its operation, and could produce between 750 and 1,000 pancakes per hour. Commercial pancake machines are typically used in the commercial foodservice and hospitality industries. Popcake is an international company that produces Popcake-brand pancake machines, which can produce 200 pancakes per hour. Individual pancakes are produced in seconds by this machine. The machine was designed for use in commercial establishments such as cafeterias and convenience stores. The Popcake machine was invented by Marek Szymanski in Australia, and as of July 2014 approximately 7,000 of them were in use worldwide. This brand has features that allow users to adjust the size, quantity and doneness level of the pancakes produced. In March 2015 in the U.S., the PancakeBot pancake machine received over $141,000 on Kickstarter, exceeding its $50,000 funding goal. PancakeBot can produce custom pancakes in various designs, which is performed by the use of pancake batter in a bottle that is moved by a programmable machine arm atop the griddle. The machine utilizes custom
|
{
"page_id": 47451719,
"source": null,
"title": "Pancake machine"
}
|
software to accomplish this. == See also == Food processing French fries vending machine Let's Pizza List of cooking appliances Waffle iron == References == == External links == Media related to Pancake machines at Wikimedia Commons
|
{
"page_id": 47451719,
"source": null,
"title": "Pancake machine"
}
|
Cherenkov radiation is electromagnetic radiation emitted when a charged particle (such as an electron) passes through a dielectric medium (such as distilled water) at a speed greater than the phase velocity (speed of propagation of a wavefront in a medium) of light in that medium. A classic example of Cherenkov radiation is the characteristic blue glow of an underwater nuclear reactor. Its cause is similar to the cause of a sonic boom, the sharp sound heard when faster-than-sound movement occurs. The phenomenon is named after Soviet physicist Pavel Cherenkov. == History == The radiation is named after the Soviet scientist Pavel Cherenkov, the 1958 Nobel Prize winner, who was the first to detect it experimentally under the supervision of Sergey Vavilov at the Lebedev Institute in 1934. Therefore, it is also known as Vavilov–Cherenkov radiation. Cherenkov saw a faint bluish light around a radioactive preparation in water during experiments. His doctoral thesis was on the luminescence of uranium salt solutions excited by gamma rays, rather than by the less energetic visible light commonly used. He discovered the anisotropy of the radiation and came to the conclusion that the bluish glow was not a fluorescent phenomenon. A theory of this effect was later developed in 1937 within the framework of Einstein's special relativity theory by Cherenkov's colleagues Igor Tamm and Ilya Frank, who also shared the 1958 Nobel Prize. Cherenkov radiation as conical wavefronts had been theoretically predicted by the English polymath Oliver Heaviside in papers published between 1888 and 1889, and by Arnold Sommerfeld in 1904, but both predictions were quickly dismissed after relativity theory ruled out superluminal particles, and were not revisited until the 1970s. Marie Curie observed a pale blue light in a highly concentrated radium solution in 1910, but did not investigate its source. In 1926, the French radiotherapist
|
{
"page_id": 24383048,
"source": null,
"title": "Cherenkov radiation"
}
|
Lucien Mallet described the luminous radiation of radium irradiating water having a continuous spectrum. In 2019, a team of researchers from Dartmouth's and Dartmouth-Hitchcock's Norris Cotton Cancer Center discovered Cherenkov light being generated in the vitreous humor of patients undergoing radiotherapy. The light was observed using a camera imaging system called a CDose, which is specially designed to view light emissions from biological systems. For decades, patients had reported phenomena such as "flashes of bright or blue light" when receiving radiation treatments for brain cancer, but the effects had never been experimentally observed. == Physical origin == === Basics === While the speed of light in vacuum is a universal constant (c = 299,792,458 m/s), the speed in a material may be significantly less, as it is perceived to be slowed by the medium. For example, in water it is only 0.75c. Matter can accelerate to a velocity higher than this (although still less than c, the speed of light in vacuum) during nuclear reactions and in particle accelerators. Cherenkov radiation results when a charged particle, most commonly an electron, travels through a dielectric (can be polarized electrically) medium with a speed greater than light's speed in that medium. The effect can be intuitively described in the following way. From classical physics, it is known that accelerating charged particles emit EM waves and via Huygens' principle these waves will form spherical wavefronts which propagate with the phase velocity of that medium (i.e. the speed of light in that medium, c/n, for refractive index n). When any charged particle passes through a medium, the particles of the medium will polarize around it in response. The charged particle excites the molecules in the polarizable medium and on returning to their ground
|
{
"page_id": 24383048,
"source": null,
"title": "Cherenkov radiation"
}
|
state, the molecules re-emit the energy given to them to achieve excitation as photons. These photons form the spherical wavefronts which can be seen originating from the moving particle. If v_p < c/n, that is, if the velocity of the charged particle is less than the speed of light in the medium, then the polarization field which forms around the moving particle is usually symmetric. The corresponding emitted wavefronts may be bunched up, but they do not coincide or cross, and there are therefore no interference effects to consider. In the reverse situation, i.e. v_p > c/n, the polarization field is asymmetric along the direction of motion of the particle, as the particles of the medium do not have enough time to recover to their "normal" randomized states. This results in overlapping waveforms (as in the animation) and constructive interference leads to an observed cone-like light signal at a characteristic angle: Cherenkov light. A common analogy is the sonic boom of a supersonic aircraft. The sound waves generated by the aircraft travel at the speed of sound, which is slower than the aircraft, and cannot propagate forward from the aircraft, instead forming a conical shock front. In a similar way, a charged particle can generate a "shock wave" of visible light as it travels through an insulator. The velocity that must be exceeded is the phase velocity of light rather than the group velocity of light. The phase velocity can be altered dramatically by using a periodic medium, and in that case one can even achieve Cherenkov radiation with no minimum particle velocity, a phenomenon known as the Smith–Purcell effect. In a more complex periodic medium, such as a photonic crystal, one can also obtain a variety
|
{
"page_id": 24383048,
"source": null,
"title": "Cherenkov radiation"
}
|
of other anomalous Cherenkov effects, such as radiation in a backwards direction (see below), whereas ordinary Cherenkov radiation forms an acute angle with the particle velocity. In their original work on the theoretical foundations of Cherenkov radiation, Tamm and Frank wrote, "This peculiar radiation can evidently not be explained by any common mechanism such as the interaction of the fast electron with individual atom or as radiative scattering of electrons on atomic nuclei. On the other hand, the phenomenon can be explained both qualitatively and quantitatively if one takes into account the fact that an electron moving in a medium does radiate light even if it is moving uniformly provided that its velocity is greater than the velocity of light in the medium." === Emission angle === In the figure on the geometry, the particle (red arrow) travels in a medium with speed v_p such that c/n < v_p < c, where c is the speed of light in vacuum and n is the refractive index of the medium. If the medium is water, the condition is 0.75c < v_p < c, since n ≈ 1.33 for water at 20 °C. We define the ratio between the speed of the particle and the speed of light as β = v_p/c. The emitted light waves (denoted by blue arrows) travel at speed v_em = c/n. The left corner of the triangle represents the location of the superluminal particle at some initial moment (t = 0). The right corner of the triangle is the location of the particle at some later time t. In the given time t, the
|
{
"page_id": 24383048,
"source": null,
"title": "Cherenkov radiation"
}
|
particle travels the distance x_p = v_p t = βct, whereas the emitted electromagnetic waves are constricted to travel the distance x_em = v_em t = (c/n)t. So the emission angle results in cos θ = 1/(nβ). Note that since this ratio is independent of time, one can take arbitrary times and achieve similar triangles. The angle stays the same, meaning that subsequent waves generated between the initial time t = 0 and final time t will form similar triangles with coinciding right endpoints to the one shown. === Arbitrary emission angle === Cherenkov radiation can also radiate in an arbitrary direction using properly engineered one-dimensional metamaterials. The latter is designed to introduce a gradient of phase retardation along the trajectory of the fast-travelling particle (dφ/dx), reversing or steering Cherenkov emission at arbitrary angles given by the generalized relation: cos θ = 1/(nβ) + (n/k₀)·(dφ/dx). === Reverse Cherenkov effect === A reverse Cherenkov effect can be experienced using materials called negative-index metamaterials (materials with a subwavelength microstructure that gives them an effective "average" property very different from their constituent materials, in this case having negative permittivity and negative permeability). This means that, when a charged particle (usually electrons) passes through a medium at a speed greater than the phase velocity of light in that medium, that particle emits trailing radiation from its progress through the medium rather than in front of it (as is the case in normal materials with both permittivity and permeability positive).
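The geometric relation above can be checked numerically. The following is a minimal sketch (the function name is illustrative, not from any library) that computes the Cherenkov angle from cos θ = 1/(nβ) and enforces the threshold condition β > 1/n:

```python
import math

def cherenkov_angle(beta, n):
    """Cherenkov emission angle theta (radians) from cos(theta) = 1/(n*beta).

    Emission occurs only above threshold, i.e. when the particle is faster
    than the phase velocity c/n in the medium (beta > 1/n).
    """
    if beta * n <= 1.0:
        raise ValueError("below Cherenkov threshold: beta must exceed 1/n")
    return math.acos(1.0 / (n * beta))

# Water at 20 degrees C: n ~ 1.33, so the threshold speed is c/1.33 ~ 0.75c.
n_water = 1.33
theta = cherenkov_angle(0.99, n_water)  # a highly relativistic particle
print(f"Cherenkov angle in water: {math.degrees(theta):.1f} degrees")
```

For β approaching 1 in water the angle approaches its maximum, arccos(1/1.33) ≈ 41°, matching the statement later in the article that the angle takes on a maximum as the particle speed approaches the speed of light.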
|
{
"page_id": 24383048,
"source": null,
"title": "Cherenkov radiation"
}
|
One can also obtain such reverse-cone Cherenkov radiation in non-metamaterial periodic media where the periodic structure is on the same scale as the wavelength, so it cannot be treated as an effectively homogeneous metamaterial. === In vacuum === The Cherenkov effect can occur in vacuum. In a slow-wave structure, like in a traveling-wave tube (TWT), the phase velocity decreases and the velocity of charged particles can exceed the phase velocity while remaining lower than c. In such a system, this effect can be derived from conservation of energy and momentum, where the momentum of a photon should be p = ℏβ (where β is the phase constant) rather than the de Broglie relation p = ℏk. This type of radiation (VCR) is used to generate high-power microwaves. === Collective Cherenkov === Radiation with the same properties as typical Cherenkov radiation can be created by structures of electric current that travel faster than light. By manipulating density profiles in plasma acceleration setups, structures of up to nanocoulombs of charge are created and may travel faster than the speed of light and emit optical shocks at the Cherenkov angle. The electrons are still subluminal; hence the electrons that compose the structure at a time t = t0 are different from the electrons in the structure at a time t > t0. == Characteristics == The frequency spectrum of Cherenkov radiation by a particle is given by the Frank–Tamm formula: d²E/(dx dω) = (q²/4π) μ(ω) ω (1 − c²/(v² n²(ω))). The Frank–Tamm formula describes the amount of energy
|
{
"page_id": 24383048,
"source": null,
"title": "Cherenkov radiation"
}
|
E emitted from Cherenkov radiation, per unit length traveled x and per frequency ω; μ(ω) is the permeability and n(ω) is the index of refraction of the material the charged particle moves through, q is the electric charge of the particle, v is the speed of the particle, and c is the speed of light in vacuum. Unlike fluorescence or emission spectra that have characteristic spectral peaks, Cherenkov radiation is continuous. Around the visible spectrum, the relative intensity per unit frequency is approximately proportional to the frequency. That is, higher frequencies (shorter wavelengths) are more intense in Cherenkov radiation. This is why visible Cherenkov radiation is observed to be brilliant blue. In fact, most Cherenkov radiation is in the ultraviolet spectrum—it is only with sufficiently accelerated charges that it even becomes visible; the sensitivity of the human eye peaks at green, and is very low in the violet portion of the spectrum. There is a cut-off frequency above which the equation cos θ = 1/(nβ) can no longer be satisfied. The refractive index n varies with frequency (and hence with wavelength) in such a way that the intensity cannot continue to increase at ever shorter wavelengths, even for very relativistic particles (where v/c is close to 1). At X-ray frequencies, the refractive index becomes less than 1 (note that in media, the phase velocity may exceed c without violating relativity) and hence no X-ray emission (or shorter wavelength emissions such as gamma rays) would be observed. However, X-rays can be generated at special frequencies just below the frequencies corresponding to
|
{
"page_id": 24383048,
"source": null,
"title": "Cherenkov radiation"
}
|
core electronic transitions in a material, as the index of refraction is often greater than 1 just below a resonant frequency (see Kramers–Kronig relation and Anomalous dispersion). As in sonic booms and bow shocks, the angle of the shock cone is directly related to the velocity of the disruption. The Cherenkov angle is zero at the threshold velocity for the emission of Cherenkov radiation. The angle takes on a maximum as the particle speed approaches the speed of light. Hence, observed angles of incidence can be used to compute the direction and speed of a Cherenkov radiation-producing charge. Cherenkov radiation can be generated in the eye by charged particles hitting the vitreous humour, giving the impression of flashes, as in cosmic ray visual phenomena and possibly some observations of criticality accidents. == Uses == === Detection of labelled biomolecules === Cherenkov radiation is widely used to facilitate the detection of small amounts and low concentrations of biomolecules. Radioactive atoms such as phosphorus-32 are readily introduced into biomolecules by enzymatic and synthetic means and subsequently may be easily detected in small quantities for the purpose of elucidating biological pathways and in characterizing the interaction of biological molecules such as affinity constants and dissociation rates. === Medical imaging of radioisotopes and external beam radiotherapy === More recently, Cherenkov light has been used to image substances in the body. These discoveries have led to intense interest around the idea of using this light signal to quantify and/or detect radiation in the body, either from internal sources such as injected radiopharmaceuticals or from external beam radiotherapy in oncology. Radioisotopes such as the positron emitters 18F and 13N or beta emitters 32P or 90Y have measurable Cherenkov emission and isotopes 18F and 131I have been imaged in humans for diagnostic value demonstration. External beam radiation
|
{
"page_id": 24383048,
"source": null,
"title": "Cherenkov radiation"
}
|
therapy has been shown to induce a substantial amount of Cherenkov light in the tissue being treated, due to electron beams or photon beams with energy in the 6 MV to 18 MV range. The secondary electrons induced by these high-energy x-rays result in the Cherenkov light emission, where the detected signal can be imaged at the entry and exit surfaces of the tissue. The Cherenkov light emitted from a patient's tissue during radiation therapy is a very low-level light signal but can be detected by specially designed cameras that synchronize their acquisition to the linear accelerator pulses. The ability to see this signal shows the shape of the radiation beam as it is incident upon the tissue in real time. === Nuclear reactors === Cherenkov radiation is used to detect high-energy charged particles. In open pool reactors, beta particles (high-energy electrons) are released as the fission products decay. The glow continues after the chain reaction stops, dimming as the shorter-lived products decay. Similarly, Cherenkov radiation can characterize the remaining radioactivity of spent fuel rods. This phenomenon is used to verify the presence of spent nuclear fuel in spent fuel pools for nuclear safeguards purposes. === Astrophysics experiments === When a high-energy (TeV) gamma photon or cosmic ray interacts with the Earth's atmosphere, it may produce an electron–positron pair with enormous velocities. The Cherenkov radiation emitted in the atmosphere by these charged particles is used to determine the direction and energy of the cosmic ray or gamma ray, which is used for example in the Imaging Atmospheric Cherenkov Technique (IACT), by experiments such as VERITAS, H.E.S.S., and MAGIC. Cherenkov radiation emitted in tanks filled with water by those charged particles reaching earth is used for the same goal by the Extensive Air Shower experiment HAWC, the Pierre Auger Observatory and
|
{
"page_id": 24383048,
"source": null,
"title": "Cherenkov radiation"
}
|
other projects. Similar methods are used in very large neutrino detectors, such as the Super-Kamiokande, the Sudbury Neutrino Observatory (SNO) and IceCube. Other projects operated in the past applying related techniques, such as STACEE, a former solar tower refurbished to work as a non-imaging Cherenkov observatory, which was located in New Mexico. Astrophysics observatories using the Cherenkov technique to measure air showers are key to determining the properties of astronomical objects that emit very-high-energy gamma rays, such as supernova remnants and blazars. === Particle physics experiments === Cherenkov radiation is commonly used in experimental particle physics for particle identification. One could measure (or put limits on) the velocity of an electrically charged elementary particle by the properties of the Cherenkov light it emits in a certain medium. If the momentum of the particle is measured independently, one could compute the mass of the particle by its momentum and velocity (see four-momentum), and hence identify the particle. The simplest type of particle identification device based on a Cherenkov radiation technique is the threshold counter, which answers whether the velocity of a charged particle is lower or higher than a certain value (v₀ = c/n, where c is the speed of light and n is the refractive index of the medium) by looking at whether this particle emits Cherenkov light in a certain medium. Knowing particle momentum, one can separate particles lighter than a certain threshold from those heavier than the threshold. The most advanced type of a detector is the RICH, or ring-imaging Cherenkov detector, developed in the 1980s. In a RICH detector, a cone of Cherenkov light is produced when a high-speed charged particle traverses a suitable medium, often called radiator. This light cone is detected on a position
|
{
"page_id": 24383048,
"source": null,
"title": "Cherenkov radiation"
}
|
sensitive planar photon detector, which allows reconstructing a ring or disc, whose radius is a measure for the Cherenkov emission angle. Both focusing and proximity-focusing detectors are in use. In a focusing RICH detector, the photons are collected by a spherical mirror and focused onto the photon detector placed at the focal plane. The result is a circle with a radius independent of the emission point along the particle track. This scheme is suitable for low refractive index radiators—i.e. gases—due to the larger radiator length needed to create enough photons. In the more compact proximity-focusing design, a thin radiator volume emits a cone of Cherenkov light which traverses a small distance—the proximity gap—and is detected on the photon detector plane. The image is a ring of light whose radius is defined by the Cherenkov emission angle and the proximity gap. The ring thickness is determined by the thickness of the radiator. An example of a proximity gap RICH detector is the High Momentum Particle Identification Detector (HMPID), a detector currently under construction for ALICE (A Large Ion Collider Experiment), one of the six experiments at the LHC (Large Hadron Collider) at CERN. == See also == Askaryan radiation, similar radiation produced by fast uncharged particles Blue noise Bremsstrahlung, radiation produced when charged particles are decelerated by other charged particles Faster-than-light, about conjectural propagation of information or matter faster than the speed of light Frank–Tamm formula, giving the spectrum of Cherenkov radiation Light echo List of light sources Non-radiation condition Radioluminescence Tachyon Transition radiation == Citations == == Sources == == External links == Nuclear Reactor start up on YouTube Nuclear Reactor starting up (alternate link) on YouTube Radović, Andrija (2002). "Cherenkov's Particles as Magnetons" (PDF). Journal of Theoretics. 4 (4): 1–5. Archived from the original (PDF) on 2016-03-04. Retrieved 2015-09-30.
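The particle-identification logic described in the particle physics section can be sketched as a round-trip calculation: recover β from the measured Cherenkov angle via β = 1/(n cos θ), then combine it with an independently measured momentum to get the mass, m = p√(1 − β²)/β in natural units (c = 1). All function names and numbers below are illustrative assumptions, not code from any actual detector:

```python
import math

def beta_from_angle(theta, n):
    """Invert cos(theta) = 1/(n*beta) to recover the particle's speed ratio beta."""
    return 1.0 / (n * math.cos(theta))

def mass_from_momentum(p_gev, beta):
    """Mass (GeV/c^2) from momentum p (GeV/c) and beta, via m = p*sqrt(1 - beta^2)/beta."""
    return p_gev * math.sqrt(1.0 - beta ** 2) / beta

# Illustrative example: a 1 GeV/c particle in a radiator with n = 1.05,
# observed at the angle a charged pion (m ~ 0.1396 GeV/c^2) would produce.
n = 1.05
m_pion = 0.1396
p = 1.0
beta_true = p / math.sqrt(p ** 2 + m_pion ** 2)      # beta from relativistic kinematics
theta = math.acos(1.0 / (n * beta_true))             # the angle the detector would see
m_reconstructed = mass_from_momentum(p, beta_from_angle(theta, n))
print(f"reconstructed mass: {m_reconstructed:.4f} GeV/c^2")  # recovers the pion mass
```

This is the essence of how a RICH measurement identifies particles: at fixed momentum, a heavier particle (e.g. a kaon) has a smaller β and hence a smaller ring radius than a pion.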
|
{
"page_id": 24383048,
"source": null,
"title": "Cherenkov radiation"
}
|
In cell biology, a deuterosome is a protein structure within a multiciliated cell (such as an epithelial cell of the respiratory tract) that produces multiple centrioles. Most cells in the human body possess one primary cilium, a relatively small protrusion of the cell membrane that looks like a stick or a finger under the electron microscope. The primary cilium is typically used by the cell as a sensory organelle, or antenna. Some cells, however, have numerous cilia, which they use to generate directed fluid flow. Examples include: epithelial cells of the respiratory tract, in which multiple cilia are used for mucus clearance; the oviduct, in which cilia help the egg migrate to the uterus; and others. Each cilium has a basal body formed from a centriole, to which it is anchored and from which it starts to grow after each cell division, when a new daughter cell is formed. Centrioles typically replicate once during cell division, thus allowing for only one cilium per daughter cell. Multiciliated cells, on the other hand, need to produce more than 100 centrioles in order to grow multiple cilia. This problem is solved by the existence of the deuterosome, a structure thought to be formed from amorphous filamentous material and able to make many centrioles at once. The first evidence for the existence of the deuterosome came from electron microscopy work in various multiciliated tissues. It was found that both centriole duplication and de novo generation of centrioles occur in such cells. The generation of new centrioles which will serve as basal bodies for multiple cilia is due to a cytoplasmic structure, which was termed the "deuterosome" by Sorokin. == References ==
|
{
"page_id": 49024588,
"source": null,
"title": "Deuterosome"
}
|
The World Checklist of Selected Plant Families (usually abbreviated to WCSP) was an "international collaborative programme that provides the latest peer reviewed and published opinions on the accepted scientific names and synonyms of selected plant families." Maintained by the Royal Botanic Gardens, Kew, it was available online, allowing searches for the names of families, genera and species, as well as the ability to create checklists. The project traced its history to work done in the 1990s by Kew researcher Rafaël Govaerts on a checklist of the genus Quercus. Influenced by the Global Strategy for Plant Conservation, the project expanded. As of January 2013, 173 families of seed plants were included. Coverage of monocotyledon families was completed and other families were being added. There is a complementary project called the International Plant Names Index (IPNI), in which Kew is also involved. The IPNI aims to provide details of publication and does not aim to determine which are accepted species names. After a delay of about a year, newly published names were automatically added from the IPNI to the WCSP. The WCSP was also one of the underlying databases for The Plant List, created by Kew and the Missouri Botanical Garden, which was unveiled in 2010, and subsequently superseded by World Flora Online. At the end of October 2022, the WCSP website, together with the World Checklist of Vascular Plants (WCVP) website, was closed and the data was transferred to the Plants of the World Online (POWO) database. == See also == Australian Plant Name Index Convention on Biological Diversity Plants of the World Online The Plant List Tropicos Wikispecies == References == == External links == Official website
|
{
"page_id": 38145616,
"source": null,
"title": "World Checklist of Selected Plant Families"
}
|
A strangelet is a hypothetical particle consisting of a bound state of roughly equal numbers of up, down, and strange quarks. An equivalent description is that a strangelet is a small fragment of strange matter, small enough to be considered a particle. The size of an object composed of strange matter could, theoretically, range from a few femtometers across (with the mass of a light nucleus) to arbitrarily large. Once the size becomes macroscopic (on the order of metres across), such an object is usually called a strange star. The term "strangelet" originates with Edward Farhi and Robert Jaffe in 1984. It has been theorized that strangelets can convert matter to strange matter on contact. Strangelets have also been suggested as a dark matter candidate. == Theoretical possibility == === Strange matter hypothesis === The known particles with strange quarks are unstable. Because the strange quark is heavier than the up and down quarks, it can spontaneously decay, via the weak interaction, into an up quark. Consequently, particles containing strange quarks, such as the lambda particle, always lose their strangeness, by decaying into lighter particles containing only up and down quarks. However, condensed states with a larger number of quarks might not suffer from this instability. That possible stability against decay is the "strange matter hypothesis", proposed separately by Arnold Bodmer and Edward Witten. According to this hypothesis, when a large enough number of quarks are concentrated together, the lowest energy state is one which has roughly equal numbers of up, down, and strange quarks, namely a strangelet. This stability would occur because of the Pauli exclusion principle; having three types of quarks, rather than two as in normal nuclear matter, allows more quarks to be placed in lower energy levels. === Relationship with nuclei === A nucleus
|
{
"page_id": 20647505,
"source": null,
"title": "Strangelet"
}
|
is a collection of a number of up and down quarks (in some nuclei a fairly large number), confined into triplets (neutrons and protons). According to the strange matter hypothesis, strangelets are more stable than nuclei, so nuclei are expected to decay into strangelets. But this process may be extremely slow because there is a large energy barrier to overcome: as the weak interaction starts making a nucleus into a strangelet, the first few strange quarks form strange baryons, such as the Lambda, which are heavy. Only if many conversions occur almost simultaneously will the number of strange quarks reach the critical proportion required to achieve a lower energy state. This is very unlikely to happen, so even if the strange matter hypothesis were correct, nuclei would never be seen to decay to strangelets because their lifetime would be longer than the age of the universe. === Size === The stability of strangelets depends on their size, because of surface tension at the interface between quark matter and vacuum (which affects small strangelets more than big ones). The surface tension of strange matter is unknown. If it is smaller than a critical value (a few MeV per square femtometer) then large strangelets are unstable and will tend to fission into smaller strangelets (strange stars would still be stabilized by gravity). If it is larger than the critical value, then strangelets become more stable as they get bigger. Another relevant effect is the screening of charges, which allows small strangelets to be charged, with a neutralizing cloud of electrons/positrons around them, but requires large strangelets, like any large piece of matter, to be electrically neutral in their interior. The charge screening distance tends to be of the order of a few femtometers, so only the outer few femtometers of a strangelet can carry charge. == Natural
|
{
"page_id": 20647505,
"source": null,
"title": "Strangelet"
}
|
or artificial occurrence == Although nuclei do not decay to strangelets, there are other ways to create strangelets, so if the strange matter hypothesis is correct there should be strangelets in the universe. There are at least three ways they might be created in nature: Cosmogonically, i.e. in the early universe when the QCD confinement phase transition occurred. It is possible that strangelets were created along with the neutrons and protons that form ordinary matter. High-energy processes. The universe is full of very high-energy particles (cosmic rays). It is possible that when these collide with each other or with neutron stars they may provide enough energy to overcome the energy barrier and create strangelets from nuclear matter. Some identified exotic cosmic ray events, such as "Price's event"—i.e., those with very low charge-to-mass ratios (as the s-quark itself possesses charge commensurate with the more-familiar d-quark, but is much more massive)—could have already registered strangelets. Cosmic ray impacts. In addition to head-on collisions of cosmic rays, ultra high energy cosmic rays impacting on Earth's atmosphere may create strangelets. These scenarios offer possibilities for observing strangelets. If strangelets can be produced in high-energy collisions, then they might be produced by heavy-ion colliders. Similarly, if there are strangelets flying around the universe, then occasionally a strangelet should hit Earth, where it may appear as an exotic type of cosmic ray; alternatively, a stable strangelet could end up incorporated into the bulk of the Earth's matter, acquiring an electron shell proportional to its charge and hence appearing as an anomalously heavy isotope of the appropriate element—though searches for such anomalous "isotopes" have, so far, been unsuccessful. 
=== Accelerator production === At heavy ion accelerators like the Relativistic Heavy Ion Collider (RHIC), nuclei are collided at relativistic speeds, creating strange and antistrange quarks that could conceivably
lead to strangelet production. The experimental signature of a strangelet would be its very high ratio of mass to charge, which would cause its trajectory in a magnetic field to be very nearly, but not quite, straight. The STAR collaboration has searched for strangelets produced at the RHIC, but none were found. The Large Hadron Collider (LHC) is even less likely to produce strangelets, but searches are planned for the LHC ALICE detector. === Space-based detection === The Alpha Magnetic Spectrometer (AMS), an instrument that is mounted on the International Space Station, could detect strangelets. === Possible seismic detection === In May 2002, a group of researchers at Southern Methodist University reported the possibility that strangelets may have been responsible for seismic events recorded on October 22 and November 24 in 1993. The authors later retracted their claim, after finding that the clock of one of the seismic stations had a large error during the relevant period. It has been suggested that the International Monitoring System, to be set up to verify the Comprehensive Nuclear-Test-Ban Treaty (CTBT) after its entry into force, may be useful as a sort of "strangelet observatory" using the entire Earth as its detector. The IMS will be designed to detect anomalous seismic disturbances down to 1 kiloton of TNT (4.2 TJ) energy release or less, and could be able to track strangelets passing through Earth in real time if properly exploited. === Impacts on Solar System bodies === It has been suggested that strangelets of subplanetary (i.e. heavy meteorite) mass would puncture planets and other Solar System objects, leading to impact craters which show characteristic features. == Potential propagation == If the strange matter hypothesis is correct, and if a stable negatively-charged strangelet with a surface tension larger than the aforementioned critical value exists, then
a larger strangelet would be more stable than a smaller one. One speculation that has resulted from the idea is that a strangelet coming into contact with a lump of ordinary matter could over time convert the ordinary matter to strange matter. This is not a concern for strangelets in cosmic rays because they are produced far from Earth and have had time to decay to their ground state, which is predicted by most models to be positively charged, so they are electrostatically repelled by nuclei, and would rarely merge with them. On the other hand, high-energy collisions could produce negatively charged strangelet states, which could live long enough to interact with the nuclei of ordinary matter. The danger of catalyzed conversion by strangelets produced in heavy-ion colliders has received some media attention, and concerns of this type were raised at the commencement of the RHIC experiment at Brookhaven, which could potentially have created strangelets. A detailed analysis concluded that the RHIC collisions were comparable to ones which naturally occur as cosmic rays traverse the Solar System, so we would already have seen such a disaster if it were possible. RHIC has been operating since 2000 without incident. Similar concerns have been raised about the operation of the LHC at CERN but such fears are dismissed as far-fetched by scientists. In the case of a neutron star, the conversion scenario may be more plausible. A neutron star is in a sense a giant nucleus (20 km across), held together by gravity, but it is electrically neutral and would not electrostatically repel strangelets. If a strangelet hit a neutron star, it might catalyze quarks near its surface to form into more strange matter, potentially continuing until the entire star became a strange star. == Debate about the strange matter hypothesis ==
The strange matter hypothesis remains unproven. No direct search for strangelets in cosmic rays or particle accelerators has yet detected one. If objects such as neutron stars could be shown to have a surface made of strange matter, this would indicate that strange matter is stable at zero pressure, which would vindicate the strange matter hypothesis. However, there is no strong evidence for strange matter surfaces on neutron stars. Another argument against the hypothesis is that if it were true, essentially all neutron stars should be made of strange matter, and otherwise none should be. Even if there were only a few strange stars initially, violent events such as collisions would soon create many fragments of strange matter flying around the universe. Because collision with a single strangelet would convert a neutron star to strange matter, all but a few of the most recently formed neutron stars should by now have already been converted to strange matter. This argument is still debated, but if it is correct then showing that one old neutron star has a conventional nuclear matter crust would disprove the strange matter hypothesis. Because of its importance for the strange matter hypothesis, there is an ongoing effort to determine whether the surfaces of neutron stars are made of strange matter or nuclear matter. The evidence currently favors nuclear matter. This comes from the phenomenology of X-ray bursts, which is well explained in terms of a nuclear matter crust, and from measurement of seismic vibrations in magnetars. == In fiction == An episode of Odyssey 5 featured an attempt to destroy the planet by intentionally creating negatively charged strangelets in a particle accelerator. The BBC docudrama End Day features a scenario where a particle accelerator in New York City explodes, creating a strangelet and
starting a catastrophic chain reaction which destroys Earth. The story A Matter Most Strange in the collection Indistinguishable from Magic by Robert L. Forward deals with the making of a strangelet in a particle accelerator. Impact, published in 2010 and written by Douglas Preston, deals with an alien machine that creates strangelets. The machine's strangelets impact the Earth and Moon and pass through. The novel Phobos, published in 2011 and written by Steve Alten as the third and final part of his Domain trilogy, presents a fictional story where strangelets are unintentionally created at the LHC and escape from it to destroy the Earth. In the 1992 black-comedy novel Humans by Donald E. Westlake, an irritated God sends an angel to Earth to bring about Armageddon by means of using a strangelet created in a particle accelerator to convert the Earth into a quark star. In the 2010 film Quantum Apocalypse, a strangelet approaches the Earth from space. In the novel The Quantum Thief by Hannu Rajaniemi and the rest of the trilogy, strangelets are mostly used as weapons, but during an early project to terraform Mars, one was used to convert Phobos into an additional "sun". == See also == Grey goo Ice-nine Hyperon == Further reading == Holden, Joshua (May 17, 1998). "The Story of Strangelets". Rutgers. Archived from the original on January 7, 2010. Retrieved April 1, 2010. Fridolin Weber (2005). "Strange Quark Matter and Compact Stars". Progress in Particle and Nuclear Physics. 54 (1): 193–288. arXiv:astro-ph/0407155. Bibcode:2005PrPNP..54..193W. doi:10.1016/j.ppnp.2004.07.001. S2CID 15002134. Jes Madsen (1999). "Physics and astrophysics of strange quark matter". Hadrons in Dense Matter and Hadrosynthesis. Lecture Notes in Physics. Vol. 516. pp. 162–203. arXiv:astro-ph/9809032. doi:10.1007/BFb0107314. ISBN 978-3-540-65209-0. S2CID 16566509. == References == == External links == "The Most Dangerous Stuff in the Universe – Strange
Stars Explained" (Video). Kurzgesagt. 14 April 2019. Archived from the original on 2021-12-15. Retrieved 15 April 2019 – via YouTube.
The molecular formula C6H4N2O2 (molar mass: 136.11 g/mol) may refer to:
Bimane or syn-Bimane
anti-Bimane (PubChem: 54280186)
Dinitrosobenzene
Diiminobenzoquinone
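The quoted molar mass can be checked by summing standard atomic weights, as in this small sketch (weights taken from the IUPAC table, rounded to three decimals):

```python
# Check of the molar mass quoted above for C6H4N2O2.
weights = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
formula = {"C": 6, "H": 4, "N": 2, "O": 2}

molar_mass = sum(weights[el] * n for el, n in formula.items())
print(round(molar_mass, 2))  # 136.11 g/mol, matching the text
```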
|
{
"page_id": 31657554,
"source": null,
"title": "C6H4N2O2"
}
|
Umbrella species are species selected for making conservation-related decisions, typically because protecting these species indirectly protects the many other species that make up the ecological community of their habitat (the umbrella effect). Species conservation can be subjective because it is hard to determine the status of many species. The umbrella species is often either a flagship species whose conservation benefits other species or a keystone species which may be targeted for conservation due to its impact on an ecosystem. Umbrella species can be used to help select the locations of potential reserves, find the minimum size of these conservation areas or reserves, and to determine the composition, structure, and processes of ecosystems. == Definitions == Two commonly used definitions are:
"A wide-ranging species whose requirements include those of many other species"
A species with large area requirements for which protection of the species offers protection to other species that share the same habitat
Other descriptions include "traditional umbrella species, relatively large-bodied and wide-ranging species of higher vertebrates". Animals may also be considered umbrella species if they are charismatic. The hope is that species that appeal to popular audiences, such as pandas, will attract support for habitat conservation in general. == In land use management == In the two decades after its inception, the use of umbrella species as a conservation tool has been highly debated. The term was first used by Bruce Wilcox in 1984, who defined an umbrella species as one whose minimum area requirements are at least as comprehensive as those of the rest of the community for which protection is sought through the establishment and management of a protected area. Some scientists have found that the use of an umbrella species approach can provide a more streamlined way to manage ecological communities. Others have proposed that umbrella species
|
{
"page_id": 11406931,
"source": null,
"title": "Umbrella species"
}
|
in combination with other tools will more effectively protect other species in land management reserves than using umbrella species alone. Individual invertebrate species can be good umbrella species because they can protect older, unique ecosystems. There have been cases where umbrella species have protected a large amount of area which has been beneficial to surrounding species. Dunk, Zielinski and Welsh (2006) reported that the reserves in Northern California (the Klamath-Siskiyou forests), set aside for the northern spotted owl, also protect mollusks and salamanders within that habitat. They found that the reserves set aside for the northern spotted owl "serve as a reasonable coarse-filter umbrella species for the taxa evaluated", which were mollusks and salamanders. Gilby and colleagues (2017) found that using threatened species as umbrellas or "surrogates" for management targets could improve conservation outcomes in coastal areas. == Wildlife corridors == The concept of an umbrella species is further utilized to create wildlife corridors with what are termed focal species. These focal species are chosen for a number of reasons and fall into several types, generally measured by their potential for an umbrella effect. By carefully choosing species based on this criterion, a linked or networked habitat can be created from single-species corridors. These criteria are determined with the assistance of geographic information systems on the larger scale. Regardless of the location or scale of conservation, the umbrella effect is a measurement of a species' impact on others and is an important part of determining an approach. == In the Endangered Species Act (US) == The bay checkerspot butterfly has been on the Endangered Species List since 1987. Launer and Murphy (1994) tried to determine whether this butterfly could be considered an umbrella species in protecting the native grassland it inhabits. They discovered that the Endangered Species Act has a
loophole excluding federally protected plants on private property. However, the California Environmental Quality Act reinforces state conservation regulations. Using the Endangered Species Act to protect so-termed umbrella species and their habitats can be controversial because such protections are not as well enforced in some states as in others (such as California) to protect overall biodiversity. == Examples ==
Northern spotted owls and old-growth forest: molluscs and salamanders are within the protective boundaries of the northern spotted owl.
Bay checkerspot butterfly and grasslands
Red-cockaded woodpeckers and Southeastern pine grasslands
Amur tigers in the Russian Far East, considered umbrella/keystone species due to their impact on the deer and boar in their ecosystem
Right whales
Sharks
Giant pandas and mountain ranges in China
Jaguars and herpetofauna
Canebrake rattlesnakes and other species: protecting a species like the canebrake has practical applications, as protection measures would have broad environmental value because of an umbrella effect. That is, protecting the rattlesnakes would ensure protection of other wildlife species that use the same habitats but are less sensitive to development or require fewer resources.
== See also == == References == == Further reading == Caro, Tim (2010). Conservation by proxy: indicator, umbrella, keystone, flagship, and other surrogate species. Washington, DC: Island Press. ISBN 9781597261920. == External links == NOAA The Endangered Species Act of 1973 U.S. Fish and Wildlife Service Bay checkerspot butterfly Northern Spotted Owl
LAR1 ('Lichen-Associated Rhizobiales 1') refers to a specific bacterial lineage in the order Hyphomicrobiales (formerly Rhizobiales) that has most frequently been found directly in association with lichens. This lineage is currently known to associate with lichens that have a green-algal photosynthetic partner (as opposed to a cyanobacterial partner) and a fungal partner in the Lecanoromycetes (though other groups of fungi have not yet been examined). This lineage has been documented in association with all green-algal lichens specifically tested (all from North America), and was also found in a sequence library derived from Antarctic lichens. The specific ecological niche occupied by this lineage indicates that it may rely on certain nutrients that are abundant in green-algal lichen thalli but are rarer in other environments. == Nitrogen fixing == The LAR1 lineage is currently defined based on sequences of the 16S rRNA gene alone, since it remains uncultured in the laboratory. In spite of its resistance to being cultured, at least one potentially significant metabolic function can be inferred through circumstantial evidence: nitrogen fixation. Since nitrogen is required for growth by all biological systems, but is generally biologically inaccessible due to its high activation energy, many eukaryotes have established relationships with specialized bacteria that are capable of nitrogen fixation (converting dinitrogen gas into a molecular form which is easily assimilated). Many lichens grow in extremely nutrient-poor environments and may rely on nitrogen-fixing bacteria to provide them with enough molecular nitrogen to survive. It has been documented by numerous researchers that microbes associated with green-algal lichens have the potential to fix nitrogen in abundance. However, nearly all of these studies have relied solely on culture-based methods, which may provide an inaccurate picture of what the most abundant or important nitrogen-fixers are. 
Independent studies on lichens have used culture-free techniques to detect the presence
|
{
"page_id": 25366100,
"source": null,
"title": "Lar1"
}
|
of nifH, the primary gene involved in nitrogen fixation, and have uncovered sequences that share the same phylogenetic affinities as the LAR1 lineage. However, the diversity of bacteria found in environmental samples, the frequency with which horizontal gene transfer occurs in bacteria, and the lack of physiological studies make a definitive statement regarding the metabolic activity of this uncultured lineage impossible at this point. == References ==
Biogenic silica (bSi), also referred to as opal, biogenic opal, or amorphous opaline silica, forms one of the most widespread biogenic minerals. For example, microscopic particles of silica called phytoliths can be found in grasses and other plants. Silica is an amorphous metalloid oxide formed by complex inorganic polymerization processes. This is opposed to the other major biogenic minerals, comprising carbonate and phosphate, which occur in nature as crystalline iono-covalent solids (e.g. salts) whose precipitation is dictated by solubility equilibria. Chemically, bSi is hydrated silica (SiO2·nH2O), which is essential to many plants and animals. Diatoms in both fresh and salt water extract dissolved silica from the water to use as a component of their cell walls. Likewise, some holoplanktonic protozoa (Radiolaria), some sponges, and some plants (leaf phytoliths) use silicon as a structural material. Silicon is known to be required by chicks and rats for growth and skeletal development. Silicon is in human connective tissues, bones, teeth, skin, eyes, glands, and organs. == Silica in marine environments == Silicate, or silicic acid (H4SiO4), is an important nutrient in the ocean. Unlike the other major nutrients such as phosphate, nitrate, or ammonium, which are needed by almost all marine plankton, silicate is an essential chemical requirement for very specific biota, including diatoms, radiolaria, silicoflagellates, and siliceous sponges. These organisms extract dissolved silicate from open ocean surface waters for the buildup of their particulate silica (SiO2), or opaline, skeletal structures (i.e. the biota's hard parts). Some of the most common siliceous structures observed at the cell surface of silica-secreting organisms include: spicules, scales, solid plates, granules, frustules, and other elaborate geometric forms, depending on the species considered. 
=== Marine sources of silica === Five major sources of dissolved silica to the marine environment can be distinguished: Riverine influx of dissolved silica to
|
{
"page_id": 4001363,
"source": null,
"title": "Biogenic silica"
}
|
the oceans: 4.2 ± 0.8 × 10¹⁴ g SiO2 yr⁻¹
Submarine volcanism and associated hydrothermal emanations: 1.9 ± 1.0 × 10¹⁴ g SiO2 yr⁻¹
Glacial weathering: 2 × 10¹² g SiO2 yr⁻¹
Low-temperature submarine weathering of oceanic basalts
Some silica may also escape from silica-enriched pore waters of pelagic sediments on the seafloor
Once the organism has perished, part of the siliceous skeletal material dissolves, as it settles through the water column, enriching the deep waters with dissolved silica. Some of the siliceous scales can also be preserved over time as microfossils in deep-sea sediments, providing a window into modern and ancient plankton/protists communities. This biologic process has operated, since at least early Paleozoic time, to regulate the balance of silica in the ocean. Radiolarians (Cambrian/Ordovician-Holocene), diatoms (Cretaceous-Holocene), and silicoflagellates (Cretaceous-Holocene) form the ocean's main contributors to the global silica biogenic cycle throughout geologic time. Diatoms account for 43% of the ocean primary production, and are responsible for the bulk of silica extraction from ocean waters in the modern ocean, and during much of the past fifty million years. In contrast, oceans of Jurassic and older ages were characterized by radiolarians as the major silica-utilizing phyla. Nowadays, radiolarians are the second (after diatoms) major producers of suspended amorphous silica in ocean waters. Their distribution ranges from the Arctic to the Antarctic, being most abundant in the equatorial zone. In equatorial Pacific waters, for example, about 16,000 specimens per cubic meter can be observed. === Silica cycle === The silicon cycle has gained increasing scientific attention over the past decade for several reasons: Firstly, the modern marine silica cycle is widely believed to be dominated by diatoms for the fixation and export of particulate matter (including organic carbon) from the euphotic zone to the deep ocean, via a process known as
the biological pump. As a result, diatoms and other silica-secreting organisms play a crucial role in the global carbon cycle, and can affect atmospheric CO2 concentrations on a variety of time scales by sequestering CO2 in the ocean. This connection between biogenic silica and organic carbon, together with the significantly higher preservation potential of biogenic siliceous compounds compared to organic carbon, makes opal accumulation records very interesting for paleoceanography and paleoclimatology. Secondly, biogenic silica accumulation on the sea floor contains a lot of information about where in the ocean export production has occurred on time scales ranging from hundreds to millions of years. For this reason, opal deposition records provide valuable information regarding large-scale oceanographic reorganizations in the geological past, as well as paleoproductivity. Thirdly, the mean oceanic residence time for silicate is approximately 10,000–15,000 yr. This relatively short residence time makes oceanic silicate concentrations and fluxes sensitive to glacial/interglacial perturbations, and thus an excellent proxy for evaluating climate changes. Increasingly, isotope ratios of oxygen (¹⁸O:¹⁶O) and silicon (³⁰Si:²⁸Si) are analysed from biogenic silica preserved in lake and marine sediments to derive records of past climate change and nutrient cycling (De La Rocha, 2006; Leng and Barker, 2006). This is a particularly valuable approach considering the role of diatoms in global carbon cycling. In addition, isotope analyses from BSi are useful for tracing past climate changes in regions such as the Southern Ocean, where few biogenic carbonates are preserved. === Marine silica sinks === ==== Siliceous ooze ==== The remains of diatoms and other silica-utilizing organisms are found as opal sediments within pelagic deep-sea deposits.
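The residence-time figure above can be turned into a back-of-envelope consistency check using the steady-state relation tau = inventory / input flux. This sketch assumes the total dissolved-silica input of 6.1 × 10¹⁴ g SiO2 yr⁻¹ quoted later in the budget section; the implied ocean inventory is a derived illustration, not a sourced number:

```python
# Steady-state residence time: tau = inventory / input flux, so
# inventory = tau * input_flux.
input_flux = 6.1e14   # g SiO2 per year (riverine + hydrothermal, from the text)
tau = 12500.0         # yr, midpoint of the 10,000-15,000 yr range quoted above

inventory = tau * input_flux
print(f"implied dissolved-silica inventory: {inventory:.1e} g")  # ~7.6e18 g
```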
Pelagic sediments, containing significant quantities of siliceous biogenic remains, are commonly referred to as siliceous ooze. Siliceous oozes are particularly abundant in the modern ocean at high latitudes in the northern and
southern hemispheres. A striking feature of siliceous ooze distribution is a ca. 200 km wide belt stretching across the Southern Ocean. Some equatorial regions of upwelling, where nutrients are abundant and productivity is high, are also characterized by local siliceous ooze. Siliceous oozes are composed primarily of the remains of diatoms and radiolarians, but may also include other siliceous organisms, such as silicoflagellates and sponge spicules. Diatom ooze occurs mainly in high-latitude areas and along some continental margins, whereas radiolarian oozes are more characteristic of equatorial areas. Siliceous oozes are modified and transformed during burial into bedded cherts. ==== Southern Ocean sediments ==== Southern Ocean sediments are a major sink for biogenic silica (50-75% of the oceanic total of 4.5 × 10¹⁴ g SiO2 yr⁻¹; DeMaster, 1981), but only a minor sink for organic carbon (<1% of the oceanic 2 × 10¹⁴ g of organic C yr⁻¹). These relatively high rates of biogenic silica accumulation in the Southern Ocean sediments (predominantly beneath the Polar Front) relative to organic carbon (60:1 on a weight basis) result from the preferential preservation of biogenic silica in the Antarctic water column. In contrast to what was previously thought, these high rates of biogenic silica accumulation are not the result of high rates of primary production. Biological production in the Southern Ocean is strongly limited by the low levels of irradiance coupled with deep mixed layers and/or by limited amounts of micronutrients, such as iron. This preferential preservation of biogenic silica relative to organic carbon is evident in the steadily increasing ratio of silica/organic C as a function of depth in the water column. About thirty-five percent of the biogenic silica produced in the euphotic zone survives dissolution within the surface layer, whereas only 4% of the organic carbon escapes microbial degradation in these near-surface
waters. Consequently, considerable decoupling of organic C and silica occurs during settling through the water column. The accumulation of biogenic silica in the seabed represents 12% of the surface production, whereas the seabed organic-carbon accumulation rate accounts for only <0.5% of the surface production. As a result, polar sediments account for most of the ocean's biogenic silica accumulation, but only a small amount of the sedimentary organic-carbon flux. ==== Effect of oceanic circulation on silica sinks ==== Large-scale oceanic circulation has a direct impact on opal deposition. The Pacific (characterized by nutrient-poor surface waters and nutrient-rich deep waters) and Atlantic Ocean circulations favor the production/preservation of silica and carbonate respectively. For instance, Si/N and Si/P ratios increase from the Atlantic to the Pacific and Southern Ocean, favoring opal versus carbonate producers. Consequently, the modern configuration of large-scale oceanic circulation has resulted in the localization of the major opal burial zones in the equatorial Pacific, in the eastern boundary current upwelling systems, and, by far the most important, the Southern Ocean. ===== Pacific and Southern Oceans ===== Waters of the modern Pacific and Southern Oceans typically show an increase in Si/N ratio at intermediate depth, which results in an increase in opal export (~ increase in opal production). In the Southern Ocean and North Pacific, this relationship between opal export and Si/N ratio switches from linear to exponential for Si/N ratios greater than 2. This gradual increase in the importance of silicate (Si) relative to nitrogen (N) has tremendous consequences for ocean biological production. The change in nutrient ratios helps select diatoms as the main producers over other (e.g., calcifying) organisms. For example, microcosm experiments have demonstrated that diatoms are DSi supercompetitors and dominate other producers above 2 μM DSi.
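The competition threshold described in the microcosm experiments can be illustrated with Monod-type (Michaelis-Menten) growth curves. This is a toy sketch, not a sourced model: the Vmax and Ks parameters below are assumptions chosen only so that the crossover sits at the ~2 μM DSi threshold quoted above.

```python
# Toy Monod competition: growth rate = vmax * S / (ks + S).
# Parameters are illustrative assumptions; the text supplies only the ~2 uM
# DSi threshold above which diatoms dominate.
def monod(S, vmax, ks):
    """Monod growth rate at substrate concentration S (uM)."""
    return vmax * S / (ks + S)

def winner(dsi_um):
    diatom = monod(dsi_um, vmax=2.0, ks=4.0)  # assumed diatom parameters
    other = monod(dsi_um, vmax=1.0, ks=1.0)   # assumed competitor parameters
    return "diatoms" if diatom > other else "other producers"

print(winner(5.0))  # diatoms
print(winner(1.0))  # other producers
```

With these values the two curves cross at exactly 2 μM, so diatoms out-grow the calcifying competitors above the threshold and lose below it, mirroring the qualitative behavior described in the text.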
Consequently, opal vs. carbonate export will be favored, resulting
in increasing opal production. The Southern Ocean and the North Pacific also display maximum biogenic silicate/Corganic flux ratios, and thus show an enrichment in biogenic silicate relative to the Corganic export flux. This combined increase in opal preservation and export makes the Southern Ocean the most important sink for DSi today. ===== Atlantic Ocean ===== In the Atlantic Ocean, intermediate and deep waters are characterized by a lower DSi content compared to the modern Pacific and Southern Oceans. This interbasin difference in DSi has the effect of decreasing the preservation potential of opal in the Atlantic compared to its Pacific and Southern Ocean counterparts. Atlantic DSi-depleted waters tend to produce less silicified organisms, which has a strong influence on the preservation of their frustules. This mechanism is best illustrated when comparing the Peru and northwest Africa upwelling systems. The dissolution/production ratio is much higher in the Atlantic upwelling than in the Pacific upwelling. This is due to the fact that coastal upwelling source waters are much richer in DSi off Peru than off NW Africa. === Marine biogenic silica budget === Rivers and submarine hydrothermal emanations supply 6.1 × 10¹⁴ g SiO2 yr⁻¹ to the marine environment. Approximately two-thirds of this silica input is stored in continental margin and deep-sea deposits. Siliceous deep-sea sediments located beneath the Antarctic Convergence (convergence zone) host some 25% of the silica supplied to the oceans (i.e. 1.6 × 10¹⁴ g SiO2 yr⁻¹) and consequently form one of Earth's major silica sinks. The highest biogenic silica accumulation rates in this area are observed in the South Atlantic, with values as large as 53 cm kyr⁻¹ during the last 18,000 yr. Further, extensive biogenic silica accumulation has been recorded in the deep-sea sediments of the Bering Sea, Sea of Okhotsk, and Subarctic North
Pacific. Total biogenic silica accumulation rates in these regions amount to nearly 0.6 × 10¹⁴ g SiO2 yr⁻¹, which is equivalent to 10% of the dissolved silica input to the oceans. Continental margin upwelling areas, such as the Gulf of California and the Peru and Chile coasts, exhibit some of the highest biogenic silica accumulation rates in the world. For example, biogenic silica accumulation rates of 69 g SiO2/cm2/kyr have been reported for the Gulf of California. Due to the laterally confined character of these rapid biogenic silica accumulation zones, upwelling areas account for only approximately 5% of the dissolved silica supplied to the oceans. Finally, extremely low biogenic silica accumulation rates have been observed in the extensive deep-sea deposits of the Atlantic, Indian and Pacific Oceans, rendering these oceans insignificant for the global marine silica budget. ==== Biogenic silica production ==== The mean daily BSi production rate strongly depends on the region:
Coastal upwelling: 46 mmol m⁻² d⁻¹
Subarctic Pacific: 18 mmol m⁻² d⁻¹
Southern Ocean: 3–38 mmol m⁻² d⁻¹
Mid-ocean gyres: 0.2–1.6 mmol m⁻² d⁻¹
Likewise, the integrated annual BSi production strongly depends on the region:
Coastal upwelling: 3 × 10¹² mol yr⁻¹
Subarctic Pacific: 8 × 10¹² mol yr⁻¹
Southern Ocean: 17–37 × 10¹² mol yr⁻¹
Mid-ocean gyres: 26 × 10¹² mol yr⁻¹
BSi production is controlled by:
Dissolved silica availability; note, however, that the half-saturation constant Kμ for silicon-limited growth is lower than the Ks for silicon uptake.
Light availability: there is no direct light requirement; silicon uptake occurs down to twice the depth of photosynthesis, and continues at night, but cells must be actively growing.
Micronutrient availability.
==== Biogenic silica dissolution ==== BSi dissolution is controlled by:
Thermodynamics of solubility: temperature (a roughly 50-fold increase from 0 to 25 °C).
Sinking rate: food web structure (grazers, fecal pellets, discarded feeding structures); aggregation, giving rapid transport.
Bacterial degradation of organic matrix (Bidle and Azam, 1999). ====
Biogenic silica preservation ==== BSi preservation is measured by:
Sedimentation rates, mainly via sediment traps (Honjo);
Benthic remineralization ("recycling") rates, via benthic flux chambers (Berelson);
BSi concentration in sediments, via chemical leaching in alkaline solution (site-specific; lithogenic vs. biogenic Si must be differentiated) and X-ray diffraction.
BSi preservation is controlled by:
Sedimentation rate;
Porewater dissolved silica concentration: saturation at 1,100 μmol/L;
Surface coatings: dissolved Al modifies the solubility of deposited biogenic silica particles; dissolved silica can also precipitate with Al as clay or Al-Si coatings.
== Opaline silica on Mars == In the Gusev crater of Mars, the Mars Exploration Rover Spirit inadvertently discovered opaline silica. One of its wheels had earlier become immobilized and thus was effectively trenching the Martian regolith as it dragged behind the traversing rover. Later analysis showed that the silica was evidence for hydrothermal conditions. == See also == Marine biogenic calcification Protist shell == References == Brzezinski, M. A. (1985). "The Si:C:N ratio of marine diatoms: Interspecific variability and the effect of some environmental variables." Journal of Phycology 21(3): 347-357. De La Rocha, C.L. (2006). "Opal based proxies of paleoenvironmental conditions." Global Biogeochemical Cycles 20. doi:10.1029/2005GB002664. Dugdale, R. C. and F. P. Wilkerson (1998). "Silicate regulation of new production in the equatorial Pacific upwelling." Nature 391(6664): 270. Dugdale, R. C., F. P. Wilkerson, et al. (1995). "The role of the silicate pump in driving new production." Deep-Sea Research I 42(5): 697-719. Leng, M.J. and Barker, P.A. (2006). "A review of the oxygen isotope composition of lacustrine diatom silica for palaeoclimate reconstruction." Earth-Science Reviews 75:5-27. Ragueneau, O., P. Treguer, et al. (2000).
"A review of the Si cycle in the modern ocean: recent progress and missing gaps in the application of biogenic opal as a paleoproductivity proxy." Global and Planetary Change 26: 317-365. Takeda, S. (1998). "Influence of iron
|
{
"page_id": 4001363,
"source": null,
"title": "Biogenic silica"
}
|
availability on nutrient consumption ratio of diatoms in oceanic waters." Nature 393: 774-777. Werner, D. (1977). The Biology of Diatoms. Berkeley and Los Angeles, University of California Press.
|
{
"page_id": 4001363,
"source": null,
"title": "Biogenic silica"
}
|
In genetics, DNase I hypersensitive sites (DHSs) are regions of chromatin that are sensitive to cleavage by the DNase I enzyme. In these regions of the genome, chromatin has lost its condensed structure, exposing the DNA and making it accessible. This increases the availability of the DNA to degradation by enzymes such as DNase I. These accessible chromatin zones are functionally related to transcriptional activity, since this remodeled state is necessary for the binding of proteins such as transcription factors. Since their discovery over 30 years ago, DHSs have been used as markers of regulatory DNA regions, and have been shown to map many types of cis-regulatory elements, including promoters, enhancers, insulators, silencers and locus control regions. A high-throughput measure of these regions is available through DNase-Seq. == Massive analysis == The ENCODE project proposes to map all of the DHSs in the human genome with the intention of cataloging human regulatory DNA. Because DHSs mark transcriptionally active regions of the genome and show cell-type selectivity, 125 different human cell types were profiled. Using massive parallel sequencing, the DHS profiles of every cell type were obtained. Analysis of the data identified almost 2.9 million distinct DHSs: 34% were specific to a single cell type, and only a small minority (3,692) were detected in all cell types. It was also found that only 5% of DHSs fell within TSS (transcription start site) regions; the remaining 95% were distal DHSs, distributed roughly evenly between intronic and intergenic regions. The data give an idea of the great complexity of gene-expression regulation in the human genome and of the number of elements that control it. High-resolution mapping of DHSs has also been reported in the model plant Arabidopsis thaliana: a total of 38,290
|
{
"page_id": 38669910,
"source": null,
"title": "DNase I hypersensitive site"
}
|
and 41,193 DHSs in leaf and flower tissues have been identified, respectively. == Regulatory DNA tools == The study of DHS profiles combined with other techniques allows analysis of regulatory DNA in humans: Transcription factors: Using the ChIP-Seq technique, the DNA binding sites of certain groups of transcription factors are determined and compared with DHS profiles. The results confirm a high correlation, showing that the coordinated binding of certain factors is implicated in the remodeling and accessibility of chromatin. DNA methylation patterns: CpG methylation has been closely linked with transcriptional silencing; it causes a rearrangement of the chromatin, condensing it and inactivating it transcriptionally. Methylated CpGs falling within DHSs impede the association of transcription factors with DNA, reducing the accessibility of chromatin. The data argue that methylation patterning paralleling cell-selective chromatin accessibility results from passive deposition after the vacation of transcription factors from regulatory DNA. Promoter chromatin signature: The H3K4me3 modification is associated with transcriptional activity. It occurs on nucleosomes adjacent to the transcription start site (TSS), relaxing the chromatin structure, and is therefore used as a marker of promoters when mapping these elements in the human genome. Promoter/enhancer connections: Distal cis-regulatory elements, such as enhancers, modulate the activity of promoters; a distal element is actively synchronized with its promoter in the cell lines in which expression of the controlled gene is active. DHS profiles were therefore searched for correlations between DHSs to identify promoter/enhancer connections, making it possible to create a map of candidate enhancers controlling specific genes. 
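The cross-cell-type correlation idea behind this promoter/enhancer pairing can be sketched as follows. All signal values, DHS names, and the 0.7 correlation threshold below are illustrative assumptions for the sketch, not ENCODE data or thresholds:

```python
import numpy as np

# Hypothetical DNase I signal (normalized tag density) for one promoter DHS
# and three candidate distal DHSs, measured across 8 cell types.
promoter = np.array([9.1, 0.4, 7.8, 8.5, 0.2, 6.9, 0.3, 8.0])
candidates = {
    "distal_DHS_A": np.array([8.7, 0.6, 7.1, 8.0, 0.5, 6.2, 0.4, 7.5]),
    "distal_DHS_B": np.array([0.3, 5.9, 0.8, 0.5, 6.4, 0.2, 5.8, 0.6]),
    "distal_DHS_C": np.array([4.1, 4.0, 3.9, 4.2, 4.1, 4.0, 4.2, 3.8]),
}

def pearson(x, y):
    """Pearson correlation of two 1-D signal vectors."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    return float(np.mean(zx * zy))

# A distal DHS is linked to the promoter when its accessibility pattern
# across cell types correlates above the (assumed) threshold.
links = {name: pearson(promoter, sig) for name, sig in candidates.items()}
predicted = [name for name, r in links.items() if r > 0.7]
# distal_DHS_A co-varies with the promoter and is linked;
# distal_DHS_B is anti-correlated and distal_DHS_C is flat, so neither is.
```

Real pipelines operate genome-wide, restrict candidates to a distance window around each promoter, and assess significance, but the pairing signal itself is this cross-cell-type correlation.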
The data obtained were validated with the chromosome conformation capture carbon copy (5C) technique. This technique is based on the physical association that exists between a promoter and its enhancers, determining the regions of
|
{
"page_id": 38669910,
"source": null,
"title": "DNase I hypersensitive site"
}
|
chromatin that come into contact in promoter/enhancer connections. It was confirmed that the majority of promoters were associated with more than one enhancer, indicating the existence of a complicated network of regulation for the immense majority of genes. Surprisingly, approximately half of the enhancers were found to be associated with more than one promoter. This discovery shows that the human cis-regulatory system is much more complicated than initially thought. The number of distal cis-regulatory elements connected to a promoter gives a quantitative measure of the regulatory complexity of a gene. By this measure, the human genes with the most interactions with distal DHSs, and hence the most complex regulation, corresponded to genes with functions in the immune system. This indicates that the complex of cellular and environmental signals processed by the immune system is directly encoded in the cis-regulatory architecture of its constituent genes. == Database == ENCODE Project: Regulatory Elements DB Plant DHSs: PlantDHS == References ==
|
{
"page_id": 38669910,
"source": null,
"title": "DNase I hypersensitive site"
}
|
The FSBI Medal is an international fish biology and/or fisheries science prize awarded annually for exceptional advances by a scientist in the earlier stages of his or her career. Medallists have made a significant contribution to the field of fish biology through their achievements in scientific research. The medal is only awarded if a candidate of sufficient quality is nominated. The medal was established by the Fisheries Society of the British Isles (FSBI) to recognize distinction in the field of fish biology and fisheries science, and to raise the profile of the discipline and of the Society in the wider scientific community. Medals are awarded to individuals who have made an outstanding contribution to fish biology and/or fisheries. The FSBI Medal is traditionally awarded in July at the Fisheries Society of the British Isles Annual International Conference. == Medallists == Source: FSBI 2023 - Rajeev Raghavan 2022 - Amy Deacon 2021 - Christos Ioannou 2020 - Julien Cucherousset 2019 - Shaun Killen 2018 - Aaron McNeil 2017 - Nick Graham 2016 - Stephen Simpson 2015 - Kathryn Elmer 2014 - Darren Croft 2013 - Katherine Sloman 2012 - Robert Arlinghaus 2011 - Ashley Ward 2010 - Iain Barber 2009 - John Pinnegar 2008 - Steven J. Cooke 2007 - David W. Sims 2006 - Victoria Braithwaite 2005 - Jason Link 2004 - Michel Kaiser 2003 - Jens Krause 2002 - Etienne Baras 2001 - Simon Jennings 2000 - John Reynolds 1999 - Neil Metcalfe == See also == List of biology awards == References ==
|
{
"page_id": 45616726,
"source": null,
"title": "FSBI Medal"
}
|
Branch migration is the process by which base pairs on homologous DNA strands are consecutively exchanged at a Holliday junction, moving the branch point up or down the DNA sequence. Branch migration is the second step of genetic recombination, following the exchange of two single strands of DNA between two homologous chromosomes. The process is random, and the branch point can be displaced in either direction on the strand, influencing the degree to which the genetic material is exchanged. Branch migration can also be seen in DNA repair and replication, when filling in gaps in the sequence, and when a foreign piece of DNA invades the strand. == Mechanism == The mechanism for branch migration differs between prokaryotes and eukaryotes. === Prokaryotes === The mechanism for prokaryotic branch migration has been studied many times in Escherichia coli. In E. coli, the proteins RuvA and RuvB come together and form a complex that facilitates the process in a number of ways. RuvA is a tetramer and binds to the DNA at the Holliday junction when it is in the open X form. The protein binds such that the DNA entering and departing the junction is still free to rotate and slide through. RuvA has a domain with acidic amino acid residues that interfere with the base pairs in the centre of the junction. This forces the base pairs apart so that they can re-anneal with base pairs on the homologous strands. In order for migration to occur, RuvA must be associated with RuvB and ATP. RuvB has the ability to hydrolyze ATP, driving the movement of the branch point. RuvB is a hexamer with helicase activity, and also binds the DNA. As ATP is hydrolyzed, RuvB rotates the recombined strands while pulling them out of the junction,
|
{
"page_id": 20975192,
"source": null,
"title": "Branch migration"
}
|
but does not separate the strands as a helicase would. The final step in branch migration is called resolution and requires the protein RuvC. The protein is a dimer, and will bind to the Holliday junction when it takes on the stacked X form. The protein has endonuclease activity, and cleaves both strands at exactly the same time. The cleavage is symmetrical, and gives two recombined DNA molecules with single-stranded breaks. === Eukaryotes === The eukaryotic mechanism is much more complex, involving different and additional proteins, but follows the same general path. Rad54, a highly conserved eukaryotic protein, is reported to oligomerize on Holliday junctions to promote branch migration. === Archaea === A helicase (designated Saci-0814) isolated from the thermophilic crenarchaeon Sulfolobus acidocaldarius dissociated DNA Holliday junction structures and showed branch migration activity in vitro. In an S. acidocaldarius strain deleted for Saci-0814, the homologous recombination frequency was reduced five-fold compared to the parental strain, indicating that Saci-0814 is involved in homologous recombination in vivo. Based on this evidence, it appears that Saci-0814 functions as a branch migration helicase in homologous recombination in S. acidocaldarius. Homologous recombination appears to be an important adaptation in hyperthermophiles, such as S. acidocaldarius, for efficiently repairing DNA damage. Helicase Saci-0814 is classified as an aLhr1 (archaeal long helicase related 1) under superfamily 2 helicases, and its homologs are conserved among the archaea. == Control == The rate of branch migration is dependent on the amount of divalent ions, specifically magnesium ions (Mg2+), present during recombination. The ions determine which structure the Holliday junction will adopt, as they play a stabilizing role. When the ions are absent, the backbones repel each other and the junction takes on the open X structure. 
In this condition, migration is optimal and the junction will be
|
{
"page_id": 20975192,
"source": null,
"title": "Branch migration"
}
|
free to move up and down the strands. When the ions are present, they neutralize the negatively charged backbone. This allows the strands to move closer together and the junction adopts the stacked X structure. It is during this state that resolution will be optimal, allowing RuvC to bind to the junction. == References ==
|
{
"page_id": 20975192,
"source": null,
"title": "Branch migration"
}
|
A bacterivore is an organism which obtains energy and nutrients primarily or entirely from the consumption of bacteria. The term is most commonly used to describe free-living, heterotrophic, microscopic organisms such as nematodes as well as many species of amoeba and numerous other types of protozoans, but some macroscopic invertebrates are also bacterivores, including sponges, polychaetes, and certain molluscs and arthropods. Many bacterivorous organisms are adapted for generalist predation on any species of bacteria, but not all bacteria are easily digested; the spores of some species, such as Clostridium perfringens, will never be prey because of their cellular attributes. == In microbiology == Bacterivores can sometimes be a problem in microbiology studies. For instance, when scientists seek to assess microorganisms in samples from the environment (such as freshwater), the samples are often contaminated with microscopic bacterivores, which interfere with the culturing of bacteria for study. Adding cycloheximide can inhibit the growth of bacterivores without affecting some bacterial species, but it has also been shown to inhibit the growth of some anaerobic prokaryotes. == Examples of bacterivores == Caenorhabditis elegans Ceriodaphnia quadrangula Diaphanosoma brachyura Vorticella Paramecium Paratrimastix pyriformis Many species of protozoa Many benthic meiofauna, e.g. gastrotrichs Springtails Many sponges, e.g. Aplysina aerophoba Many crustaceans Many polychaetes, e.g. feather duster worms Some marine molluscs == See also == Microbivory == References == Davies, Cheryl M. et al.: Survival of Fecal Microorganisms in Marine and Freshwater Sediments, 1995, PDF
|
{
"page_id": 6688339,
"source": null,
"title": "Bacterivore"
}
|
Paul Felix Neményi (June 5, 1895 – March 1, 1952) was a Hungarian mathematician and physicist who specialized in continuum mechanics. He was known for using what he called the inverse or semi-inverse approach, which applied vector field analysis, to obtain numerous exact solutions of the nonlinear equations of gas dynamics, many of them representing rotational flows of nonuniform total energy. His work applied geometrical solutions to fluid dynamics. In continuum mechanics, "Neményi's theorem" proves that, given any net of isothermal curves, there exists a five-parameter family of plane stress systems for which these curves are stress trajectories. Neményi's five-constant theory for the determination of stress trajectories in plane elastic systems was subsequently proven by later mathematicians. He was the father of the statistician Peter Nemenyi and the putative father of former World Chess Champion Bobby Fischer. == Biography == === Family === Neményi was born to a wealthy Hungarian-Jewish family on June 5, 1895, in Fiume (Rijeka) in the Kingdom of Hungary. His grandfather was Siegmund Neumann, who magyarized the family name to Neményi in 1871; part of the family became Christians. Paul's father Dezső Neményi was one of the directors at Rijeka Refinery (now INA d.d.). His mother was Julianna Goldberger de Buda (or Buday = von Buda), born in 1868 in Budapest, at least the fifth consecutive generation of Goldbergers born there. Neményi attended elementary and high school in Fiume (Rijeka). He graduated from high school in Budapest. Neményi's uncle was Dr. Ambrus Neményi, born in Pécel, c. 20 km east of Budapest. Paul Neményi's aunt was Berta Koppély (whose parents were Adolf Koppély (1809–1883) and Rózsa von Hatvany-Deutsch). His family's art collection included works by Klimt, Kandinsky and Matisse. Hungary at the time was producing a generation of geniuses in the exact sciences, who would
|
{
"page_id": 1642078,
"source": null,
"title": "Paul Neményi"
}
|
be collectively known as Martians, that included Theodore von Kármán (b. 1881), George de Hevesy (b. 1885), Leó Szilárd (b. 1898), Dennis Gabor (b. 1900), Eugene Wigner (b. 1902), John von Neumann (b. 1903), Edward Teller (b. 1908), and Paul Erdős (b. 1913). == Family tree == === Mathematical career === A child prodigy in mathematics, at the age of 17, Neményi won the Hungarian national mathematics competition. Neményi obtained his doctorate in mathematics in Berlin in 1922 and was appointed a lecturer in fluid dynamics at the Technische Hochschule in Charlottenburg (now Technische Universität Berlin). In the early 1930s, he published a textbook on mathematical mechanics that became required reading in German universities. Stripped of his position when the Nazis came to power, he also had to leave Hungary, where anti-Semitic laws had been enacted, and found work for a time in Copenhagen. In Germany, Neményi belonged to a Socialist party called the ISK, which believed that truth could be arrived at through neo-Kantian Socratic principles. He was an animal-rights supporter and refused to wear anything made of wool. In 1930, Neményi entrusted his three-year-old first son, Peter Nemenyi, to be looked after by the socialist vegetarian community, visiting him once a year. He arrived in the US at the outbreak of World War II. He briefly held a number of teaching positions in succession and took part in hydraulic research at the State University of Iowa. In 1941 he was appointed instructor at the University of Colorado (other sources claim Colorado State University), and in 1944 at the State College of Washington. Theodore von Kármán wrote of Neményi: "When he came to this country, he went to scientific meetings in an open shirt without a tie and was very much disappointed as I advised him to
|
{
"page_id": 1642078,
"source": null,
"title": "Paul Neményi"
}
|
dress as anyone else. He told me that he thought this was a country of freedom, and the man is only judged according to his internal values and not his external appearance." In 1947 Neményi was appointed a physicist with the Naval Ordnance Laboratory, White Oak, Maryland. He was head of the Theoretical Mechanics Section at the laboratory and one of the country's principal authorities on elasticity and fluid dynamics. At the US Navy Research Laboratory, Neményi became mentor to Jerald Ericksen, whom he put to work on the study of water bells. Neményi pioneered what he called the inverse or semi-inverse approach, which applied vector field analysis, to obtain numerous exact solutions of the nonlinear equations of gas dynamics, many of them representing rotational flows of nonuniform total energy. In continuum mechanics, "Neményi's theorem" proves that, given any net of isothermal curves, there exists a five-parameter family of plane stress systems for which these curves are stress trajectories. In his exposition, The Main Concepts and Ideas of Fluid Dynamics in their Historical Development, Neményi was highly critical of Isaac Newton's inadequate understanding of fluid dynamics. I. Bernard Cohen argues that Neményi pays insufficient attention to Newton's empirical experiments. However, Cohen notes that Neményi provides the "most thorough and incisive analyses in print of Newton's work on fluids, written by an obvious master of science. For example, Neményi is the only author I have encountered who has shown the weakness of Newton's "proof" at the end of Book 2, that vortices contradict the laws of astronomy." Neményi's scientific knowledge extended well beyond the subjects of his researches. He has been described as having "extreme[ly] versatile interests and erudition". Neményi's interest and ability encompassed several nonscientific fields. He collected children's art and sometimes lectured upon it. In 1951, he
|
{
"page_id": 1642078,
"source": null,
"title": "Paul Neményi"
}
|
published a critique of the entire Encyclopædia Britannica, and suggested improvements for such diverse sections as psychology and psychoanalysis. Neményi was also deeply interested in the philosophy of mathematics and mathematical education. Clifford Truesdell writes that it was Neményi who first taught him "that mechanics was something deep and beautiful, beyond the ken of schools of "applied mathematics" and "applied mechanics"". Paul Neményi died on March 1, 1952, at the age of 56. He was survived officially by one son: Peter Nemenyi, then a student of mathematics at Princeton University. == Supposed fatherhood of Bobby Fischer == In 2002 Neményi was identified as the probable biological father of world chess champion Bobby Fischer, not the man named on Fischer's birth certificate (Hans Gerhardt Fischer). Additional details on their relationship were reported in 2009. In A Psychobiography of Bobby Fischer, Joseph G. Ponterotto enumerates nine clusters of evidence that indicate that Neményi was Bobby Fischer's father: Regina Fischer and Hans Gerhardt Fischer had no confirmed contact after 1939. Paul Neményi was in contact with Regina Fischer both before and after Bobby's birth, and occasionally came to visit Bobby. Regina told Jewish Family Services that she gave birth to a boy by Neményi in 1943. Neményi told a social worker that they had agreed to put the child up for adoption, but that Regina had later refused. Paul Neményi used Jewish Family Services to deliver money to Regina and Bobby and told the agency that he was concerned for Bobby's welfare. In a letter to the psychiatrist Harold Kline on March 13, 1952, Peter Nemenyi wrote, "I take it you know that Paul was Bobby Fischer’s father." After Paul Neményi's death, Regina Fischer wrote to Peter Nemenyi to ask whether Paul had left any money for Bobby. In a letter to Allen W.
|
{
"page_id": 1642078,
"source": null,
"title": "Paul Neményi"
}
|
Dulles on May 22, 1959, J. Edgar Hoover wrote, "Investigation has established that Robert James Fischer’s father was one Paul Felix Nemenyi." A court document signed by Regina Fischer following Paul Neményi's death states that Bobby "was born to the decedent out of wedlock". Paul Neményi and Bobby Fischer physically resembled each other. == Selected list of publications == Ludin, Adolf; Neményi, Paul (1930). Die nordischen Wasserkräfte: Ausbau und wirtschaftliche Ausnutzung. Berlin: Julius Springer. Neményi, Paul (1933). Wasserbauliche Strömungslehre. Barth Verlag. Neményi, Paul; Netser, Bennie N. (1940). "Relation of the Statistical Theory of Turbulence to Hydraulics". Proceedings of the American Society of Civil Engineers. 66: 967–979. Neményi, Paul; Prim, R. (1948). "Some Geometric Properties of Plane Gas Flow". Journal of Mathematics and Physics. 27 (2): 130–135. doi:10.1002/sapm1948271130. Neményi, Paul (1962). "The Main Concepts and Ideas of Fluid Dynamics in their Historical Development". Archive for History of Exact Sciences. 2: 52–86. doi:10.1007/BF00325161. S2CID 120618333. Posthumous publication, edited by Clifford Truesdell. == Obituaries == Truesdell, Clifford (1952). "Paul Felix Neményi: 1895–1952". Science. 116 (3009): 215–216. Bibcode:1952Sci...116..215T. doi:10.1126/science.116.3009.215. JSTOR 1680060. PMID 17770928. Truesdell, Clifford (1953). "Obituaries". Journal of the Washington Academy of Sciences. 43 (2): 62–64. == References == == External links == Paul Neményi publications indexed by the Scopus bibliographic database. (subscription required)
|
{
"page_id": 1642078,
"source": null,
"title": "Paul Neményi"
}
|
Thermodynamic equilibrium is a notion of thermodynamics with axiomatic status referring to an internal state of a single thermodynamic system, or a relation between several thermodynamic systems connected by more or less permeable or impermeable walls. In thermodynamic equilibrium, there are no net macroscopic flows of matter or of energy within a system or between systems. In a system that is in its own state of internal thermodynamic equilibrium, not only is there an absence of macroscopic change, but there is an “absence of any tendency toward change on a macroscopic scale.” Systems in mutual thermodynamic equilibrium are simultaneously in mutual thermal, mechanical, chemical, and radiative equilibria. Systems can be in one kind of mutual equilibrium while not in others. In thermodynamic equilibrium, all kinds of equilibrium hold at once and indefinitely, unless disturbed by a thermodynamic operation. In a macroscopic equilibrium, perfectly or almost perfectly balanced microscopic exchanges occur; this is the physical explanation of the notion of macroscopic equilibrium. A thermodynamic system in a state of internal thermodynamic equilibrium has a spatially uniform temperature. Its intensive properties, other than temperature, may be driven to spatial inhomogeneity by an unchanging long-range force field imposed on it by its surroundings. In systems that are in a state of non-equilibrium there are, by contrast, net flows of matter or energy. If such changes can be triggered to occur in a system in which they are not already occurring, the system is said to be in a "meta-stable equilibrium". Though not a widely named "law," it is an axiom of thermodynamics that there exist states of thermodynamic equilibrium. The second law of thermodynamics states that when an isolated body of material starts from an equilibrium state, in which portions of it are held at different states by more or less permeable or
|
{
"page_id": 265823,
"source": null,
"title": "Thermodynamic equilibrium"
}
|
impermeable partitions, and a thermodynamic operation removes or makes the partitions more permeable, then it spontaneously reaches its own new state of internal thermodynamic equilibrium and this is accompanied by an increase in the sum of the entropies of the portions. == Overview == Classical thermodynamics deals with states of dynamic equilibrium. The state of a system at thermodynamic equilibrium is the one for which some thermodynamic potential is minimized (in the absence of an applied voltage), or for which the entropy (S) is maximized, for specified conditions. One such potential is the Helmholtz free energy (A), for a closed system at constant volume and temperature (controlled by a heat bath): A = U − TS. Another potential, the Gibbs free energy (G), is minimized at thermodynamic equilibrium in a closed system at constant temperature and pressure, both controlled by the surroundings: G = U − TS + PV, where T denotes the absolute thermodynamic temperature, P the pressure, S the entropy, V the volume, and U the internal energy of the system. In other words, ΔG = 0 is a necessary condition for chemical equilibrium under these conditions (in the absence of an applied voltage). Thermodynamic equilibrium is the unique stable stationary state that is approached or eventually reached as the system interacts with its surroundings over a long time. The above-mentioned potentials are mathematically constructed to be the thermodynamic quantities that are minimized under the particular conditions in the specified surroundings. == Conditions == For a completely isolated system, S is maximum at thermodynamic equilibrium. For a closed system at controlled constant temperature and volume, A is minimum at thermodynamic equilibrium. For a closed system at controlled constant temperature and pressure without an applied voltage, G is
|
{
"page_id": 265823,
"source": null,
"title": "Thermodynamic equilibrium"
}
|
minimum at thermodynamic equilibrium. The various types of equilibria are achieved as follows: Two systems are in thermal equilibrium when their temperatures are the same. Two systems are in mechanical equilibrium when their pressures are the same. Two systems are in diffusive equilibrium when their chemical potentials are the same. All forces are balanced and there is no significant external driving force. == Relation of exchange equilibrium between systems == Often the surroundings of a thermodynamic system may also be regarded as another thermodynamic system. In this view, one may consider the system and its surroundings as two systems in mutual contact, with long-range forces also linking them. The enclosure of the system is the surface of contiguity or boundary between the two systems. In the thermodynamic formalism, that surface is regarded as having specific properties of permeability. For example, the surface of contiguity may be supposed to be permeable only to heat, allowing energy to transfer only as heat. Then the two systems are said to be in thermal equilibrium when the long-range forces are unchanging in time and the transfer of energy as heat between them has slowed and eventually stopped permanently; this is an example of a contact equilibrium. Other kinds of contact equilibrium are defined by other kinds of specific permeability. When two systems are in contact equilibrium with respect to a particular kind of permeability, they have common values of the intensive variable that belongs to that particular kind of permeability. Examples of such intensive variables are temperature, pressure, and chemical potential. A contact equilibrium may be regarded also as an exchange equilibrium. There is a zero balance of rate of transfer of some quantity between the two systems in contact equilibrium. For example, for a wall permeable only to heat, the rates of diffusion of
|
{
"page_id": 265823,
"source": null,
"title": "Thermodynamic equilibrium"
}
|
internal energy as heat between the two systems are equal and opposite. An adiabatic wall between the two systems is 'permeable' only to energy transferred as work; at mechanical equilibrium the rates of transfer of energy as work between them are equal and opposite. If the wall is a simple wall, then the rates of transfer of volume across it are also equal and opposite; and the pressures on either side of it are equal. If the adiabatic wall is more complicated, with a sort of leverage, having an area-ratio, then the pressures of the two systems in exchange equilibrium are in the inverse ratio of the volume exchange ratio; this keeps the zero balance of rates of transfer as work. A radiative exchange can occur between two otherwise separate systems. Radiative exchange equilibrium prevails when the two systems have the same temperature. == Thermodynamic state of internal equilibrium of a system == A collection of matter may be entirely isolated from its surroundings. If it has been left undisturbed for an indefinitely long time, classical thermodynamics postulates that it is in a state in which no changes occur within it, and there are no flows within it. This is a thermodynamic state of internal equilibrium. (This postulate is sometimes, but not often, called the "minus first" law of thermodynamics. One textbook calls it the "zeroth law", remarking that the authors think this more befitting that title than its more customary definition, which apparently was suggested by Fowler.) Such states are a principal concern in what is known as classical or equilibrium thermodynamics, for they are the only states of the system that are regarded as well defined in that subject. A system in contact equilibrium with another system can by a thermodynamic operation be isolated, and upon the event
|
{
"page_id": 265823,
"source": null,
"title": "Thermodynamic equilibrium"
}
|
of isolation, no change occurs in it. A system in a relation of contact equilibrium with another system may thus also be regarded as being in its own state of internal thermodynamic equilibrium. == Multiple contact equilibrium == The thermodynamic formalism allows that a system may have contact with several other systems at once, which may or may not also have mutual contact, the contacts having respectively different permeabilities. If these systems are all jointly isolated from the rest of the world, those of them that are in contact then reach respective contact equilibria with one another. If several systems are free of adiabatic walls between each other, but are jointly isolated from the rest of the world, then they reach a state of multiple contact equilibrium, and they have a common temperature, a total internal energy, and a total entropy. Amongst intensive variables, this is a unique property of temperature. It holds even in the presence of long-range forces. (That is, there is no "force" that can maintain temperature discrepancies.) For example, in a system in thermodynamic equilibrium in a vertical gravitational field, the pressure on the top wall is less than that on the bottom wall, but the temperature is the same everywhere. A thermodynamic operation may occur as an event restricted to the walls that are within the surroundings, directly affecting neither the walls of contact of the system of interest with its surroundings, nor its interior, and occurring within a definitely limited time. For example, an immovable adiabatic wall may be placed or removed within the surroundings. Consequent upon such an operation restricted to the surroundings, the system may be for a time driven away from its own initial internal state of thermodynamic equilibrium. Then, according to the second law of thermodynamics, the whole undergoes changes
and eventually reaches a new and final equilibrium with the surroundings. Following Planck, this consequent train of events is called a natural thermodynamic process. It is allowed in equilibrium thermodynamics just because the initial and final states are of thermodynamic equilibrium, even though during the process there is transient departure from thermodynamic equilibrium, when neither the system nor its surroundings are in well-defined states of internal equilibrium. A natural process proceeds at a finite rate for the main part of its course. It is thereby radically different from a fictive quasi-static 'process' that proceeds infinitely slowly throughout its course, and is fictively 'reversible'. Classical thermodynamics allows that even though a process may take a very long time to settle to thermodynamic equilibrium, if the main part of its course is at a finite rate, then it is considered to be natural, and to be subject to the second law of thermodynamics, and thereby irreversible.

Engineered machines and artificial devices and manipulations are permitted within the surroundings. The allowance of such operations and devices in the surroundings but not in the system is the reason why Kelvin in one of his statements of the second law of thermodynamics spoke of "inanimate" agency; a system in thermodynamic equilibrium is inanimate. Otherwise, a thermodynamic operation may directly affect a wall of the system. It is often convenient to suppose that some of the surrounding subsystems are so much larger than the system that the process can affect the intensive variables only of the surrounding subsystems, and they are then called reservoirs for relevant intensive variables.

== Local and global equilibrium ==

It can be useful to distinguish between global and local thermodynamic equilibrium. In thermodynamics, exchanges within a system and between the system and the outside are controlled by intensive parameters. As
an example, temperature controls heat exchanges. Global thermodynamic equilibrium (GTE) means that those intensive parameters are homogeneous throughout the whole system, while local thermodynamic equilibrium (LTE) means that those intensive parameters are varying in space and time, but are varying so slowly that, for any point, one can assume thermodynamic equilibrium in some neighborhood about that point.

If the description of the system requires variations in the intensive parameters that are too large, the very assumptions upon which the definitions of these intensive parameters are based will break down, and the system will be in neither global nor local equilibrium. For example, it takes a certain number of collisions for a particle to equilibrate to its surroundings. If the average distance it has moved during these collisions removes it from the neighborhood it is equilibrating to, it will never equilibrate, and there will be no LTE. Temperature is, by definition, proportional to the average internal energy of an equilibrated neighborhood. Since there is no equilibrated neighborhood, the concept of temperature does not hold, and the temperature becomes undefined.

This local equilibrium may apply only to a certain subset of particles in the system. For example, LTE is usually applied only to massive particles. In a radiating gas, the photons being emitted and absorbed by the gas do not need to be in a thermodynamic equilibrium with each other or with the massive particles of the gas for LTE to exist. In some cases, it is not considered necessary for free electrons to be in equilibrium with the much more massive atoms or molecules for LTE to exist. As an example, LTE will exist in a glass of water that contains a melting ice cube. The temperature inside the glass can be defined at any point, but it is colder near the
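The mean-free-path argument for local thermodynamic equilibrium can be sketched numerically: LTE is a reasonable approximation when the distance a particle travels between equilibrating collisions is small compared with the length scale over which the intensive parameters vary. The function names, the 0.01 cutoff, and the example numbers below are illustrative assumptions, not part of the source.

```python
# Sketch of the LTE criterion: compare the mean free path with the
# length scale of the temperature gradient. The ratio is a Knudsen-like
# number; LTE is plausible only when it is much less than 1.

def knudsen_number(mean_free_path_m: float, gradient_scale_m: float) -> float:
    """Ratio of the mean free path to the scale over which temperature varies."""
    return mean_free_path_m / gradient_scale_m

def lte_holds(mean_free_path_m: float, gradient_scale_m: float,
              threshold: float = 0.01) -> bool:
    """True when collisions happen well inside one equilibrating neighborhood.

    The 0.01 threshold is an arbitrary illustrative cutoff for "much less than 1".
    """
    return knudsen_number(mean_free_path_m, gradient_scale_m) < threshold

# Air at sea level: mean free path ~68 nm, temperature varying over ~1 m.
print(lte_holds(68e-9, 1.0))   # True: LTE is a good approximation
# A dilute gas: mean free path ~0.1 m against the same ~1 m gradient scale.
print(lte_holds(0.1, 1.0))     # False: particles leave their neighborhood
```

In the second case a particle drifts out of the neighborhood it is equilibrating to before enough collisions occur, which is exactly the situation the text describes as having no defined local temperature.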