adaptation to maintain efficiency in their use of energy while growing. == Examples of extant piscivores == Aquatic genet, Yellow anaconda, Komodo dragon, Green anaconda, Flat-headed cat, Bulldog bat, Sea lion, Fur seal, Orca, Silver arowana, Otter shrew, European otter, North American river otter, American mink, Fishing cat, Amazon river dolphin, Giant otter, Bottlenose dolphin, Harbor seal, Osprey, Merganser, Penguin, Bald eagle, Arapaima, Fish-eating bat, Gharial, Yellow-bellied sea snake, African tigerfish, Barracuda, Giant trevally, Alligator gar, Asian water monitor, Lemon shark, Fishing spider, Tiger, Piranha, Bluefish, Lionfish, Kidako moray. == Extinct and prehistoric piscivores == Numerous extinct and prehistoric animals are hypothesized to have been primarily piscivorous due to anatomy and/or ecology. Furthermore, some have been confirmed to be piscivorous through fossil evidence. This list includes specialist piscivores, such as Laganosuchus, as well as generalist predators, such as Baryonyx and Spinosaurus, found to have or assumed to have eaten fish.
Baryonyx (an opportunistic predator with a crocodile-like skull; scales of the lepidotid fish Scheenstia have been found in a skeleton where the stomach should be); Spinosaurus (a close relative of Baryonyx, hypothesized to have preyed on fish because of giant coelacanthids found in the same environment, and because of anatomical features, including a pressure-sensitive snout that could have detected the movements of swimming prey); Laganosuchus (its flattened head suggests that it passively waited for fish to swim near its mouth in order to engulf them); Pteranodon (remains of fish have been found in the beaks and stomach cavities of some specimens); Elasmosaurus (its long neck, stereoscopically positioned eyes, and long teeth are thought to be adaptations for stalking and trapping fish and other schooling animals); Thyrsocles (a fossil specimen was found with its stomach stuffed with the extinct herring Xyne grex); Xiphactinus (a 4-meter-long specimen was found with a perfectly preserved skeleton of its relative, Gillicus, in its stomach); Diplomystus (a small relative of the herring; numerous fossils are known of individuals that died while trying to swallow other fishes, including smaller individuals of the same species); Ornithocheirus (hypothesized to be piscivorous from the anatomy of its jaws and dentition); Titanoboa (multiple cranial and biochemical characteristics suggest it was primarily piscivorous). == References ==
|
{
"page_id": 6755173,
"source": null,
"title": "Piscivore"
}
|
Mycoscience is a peer-reviewed scientific journal covering all aspects of basic and applied research on fungi, including lichens, yeasts, oomycetes, and slime moulds. It is the official journal of the Mycological Society of Japan. The journal was founded in 1956 as Transactions of the Mycological Society of Japan (1956–1993) and has been titled Mycoscience since 1994. == Editor-in-Chief == The following persons have been editor-in-chief of the journal: 1956–1969, Rokuya Imazeki; 1970–1971, Minoru Hamada; 1972–1973, Hiroharu Indo; 1974–1975, Keisuke Tsubaki; 1976, Minoru Hamada; 1976, Kiyoo Aoshima; 1977–1978, Akinori Ueyama; 1979–1980, Syunichi Udagawa; 1981–1984, Shinichi Hatanaka; 1985–1988, Tatsuo Yokoyama; 1989–1990, Yukio Harada; 1991–1992, Kishio Hatai; 1995–1996, Kazuko Nishimura; 1997–1998, Takao Horikoshi; 1999–2000, Masatoshi Saikawa; 2001–2004, Makoto Kakishima; 2005–2006, Akira Nakagiri; 2007–2008, Gen Okada; 2009–2010, Takashi Yaguchi; 2011–2012, Yoshitaka Ono; 2013–2014, Gen Okada; 2015–2016, Takayuki Aoki; 2017–2018, Tsutomu Hattori; 2019–2020, Eiji Tanaka; 2021–2022, Yutaka Tamai; 2023–present, Kiminori Shimizu. According to the Journal Citation Reports, Mycoscience has an impact factor of 1.4. == Abstracting and indexing == Mycoscience is abstracted and indexed in: == References == == External links == Official website Publication site
|
{
"page_id": 33821541,
"source": null,
"title": "Mycoscience"
}
|
In mathematics, physics and chemistry, a space group is the symmetry group of a repeating pattern in space, usually in three dimensions. The elements of a space group (its symmetry operations) are the rigid transformations of the pattern that leave it unchanged. In three dimensions, space groups are classified into 219 distinct types, or 230 types if chiral copies are considered distinct. Space groups are discrete cocompact groups of isometries of an oriented Euclidean space in any number of dimensions. In dimensions other than 3, they are sometimes called Bieberbach groups. In crystallography, space groups are also called the crystallographic or Fedorov groups, and represent a description of the symmetry of the crystal. A definitive source regarding 3-dimensional space groups is the International Tables for Crystallography (Hahn 2002). == History == Space groups in 2 dimensions are the 17 wallpaper groups, which have been known for several centuries, though the proof that the list was complete was only given in 1891, after the much more difficult classification of space groups had largely been completed. In 1879 the German mathematician Leonhard Sohncke listed the 65 space groups (called Sohncke groups) whose elements preserve chirality. More accurately, he listed 66 groups, but both the Russian mathematician and crystallographer Evgraf Fedorov and the German mathematician Arthur Moritz Schoenflies noticed that two of them were really the same. The space groups in three dimensions were first enumerated in 1891 by Fedorov (whose list had two omissions (I43d and Fdd2) and one duplication (Fmm2)), and shortly afterwards in 1891 were independently enumerated by Schönflies (whose list had four omissions (I43d, Pc, Cc, ?) and one duplication (P421m)). The correct list of 230 space groups was found by 1892 during correspondence between Fedorov and Schönflies. William Barlow (1894) later enumerated the groups with a different
method, but omitted four groups (Fdd2, I42d, P421d, and P421c) even though he already had the correct list of 230 groups from Fedorov and Schönflies; the common claim that Barlow was unaware of their work is incorrect. Burckhardt (1967) describes the history of the discovery of the space groups in detail. == Elements == The space groups in three dimensions are made from combinations of the 32 crystallographic point groups with the 14 Bravais lattices, each of the latter belonging to one of 7 lattice systems. What this means is that the action of any element of a given space group can be expressed as the action of an element of the appropriate point group followed optionally by a translation. A space group is thus some combination of the translational symmetry of a unit cell (including lattice centering), the point group symmetry operations of reflection, rotation and improper rotation (also called rotoinversion), and the screw axis and glide plane symmetry operations. The combination of all these symmetry operations results in a total of 230 different space groups describing all possible crystal symmetries. The number of replicates of the asymmetric unit in a unit cell is thus the number of lattice points in the cell times the order of the point group. This ranges from 1 in the case of space group P1 to 192 for a space group like Fm3m, the NaCl structure. === Elements fixing a point === The elements of the space group fixing a point of space are the identity element, reflections, rotations and improper rotations, including inversion points. === Translations === The translations form a normal abelian subgroup of rank 3, called the Bravais lattice (so named after French physicist Auguste Bravais). There are 14 possible types of Bravais lattice. The quotient of the space group
by the Bravais lattice is a finite group which is one of the 32 possible point groups. === Glide planes === A glide plane is a reflection in a plane, followed by a translation parallel with that plane. This is noted by a, b, or c, depending on which axis the glide is along. There is also the n glide, which is a glide along half of a face diagonal, and the d glide, which is a fourth of the way along either a face or space diagonal of the unit cell. The latter is called the diamond glide plane as it features in the diamond structure. In 17 space groups, due to the centering of the cell, the glides occur in two perpendicular directions simultaneously, i.e. the same glide plane can be called b or c, a or b, a or c. For example, group Abm2 could also be called Acm2, and group Ccca could be called Cccb. In 1992, it was suggested to use the symbol e for such planes. The symbols for five space groups have been modified accordingly. === Screw axes === A screw axis is a rotation about an axis, followed by a translation along the direction of the axis. These are noted by a number, n, describing the degree of rotation, where the number is how many operations must be applied to complete a full rotation (e.g., 3 means a rotation of one third of the way around the axis each time). The degree of translation is then added as a subscript showing how far along the axis the translation is, as a portion of the parallel lattice vector. So, 21 is a twofold rotation followed by a translation of
1/2 of the lattice vector. === General formula === The general formula for the action of an element of a space group is y = M·x + D, where M is its matrix, D is its vector, and where the element transforms point x into point y. In general, D = D(lattice) + D(M), where D(M) is a unique function of M that is zero when M is the identity. The matrices M form a point group that is a basis of the space group; the lattice must be symmetric under that point group, but the crystal structure itself may not be symmetric under that point group as applied to any particular point (that is, without a translation). For example, the diamond cubic structure does not have any point where the cubic point group applies. The lattice dimension can be less than the overall dimension, resulting in a "subperiodic" space group. For (overall dimension, lattice dimension): (1,1): One-dimensional line groups (2,1): Two-dimensional line groups: frieze groups (2,2): Wallpaper groups (3,1): Three-dimensional line groups; with the 3D crystallographic point groups, the rod groups (3,2): Layer groups (3,3): The space groups discussed in this article === Chirality === The 65 "Sohncke" space groups, not containing any mirrors, inversion points, improper rotations or glide planes, yield chiral crystals, not identical to their mirror image; whereas space groups that do include at least one of those give achiral crystals. Achiral molecules sometimes form chiral crystals, but chiral molecules always form chiral crystals, in one of the space groups that permit this. Among the 65 Sohncke groups are 22 that come in 11 enantiomorphic pairs. === Combinations === Only certain combinations of symmetry elements are possible in a space group. Translations are always present, and the space group P1 has only translations and the
identity element. The presence of mirrors implies glide planes as well, and the presence of rotation axes implies screw axes as well, but the converses are not true. An inversion and a mirror imply two-fold screw axes, and so on. == Notation == There are at least ten methods of naming space groups. Some of these methods can assign several different names to the same space group, so altogether there are many thousands of different names. Number The International Union of Crystallography publishes tables of all space group types, and assigns each a unique number from 1 to 230. The numbering is arbitrary, except that groups with the same crystal system or point group are given consecutive numbers. International symbol notation Hermann–Mauguin notation The Hermann–Mauguin (or international) notation describes the lattice and some generators for the group. It has a shortened form called the international short symbol, which is the one most commonly used in crystallography, and usually consists of a set of four symbols. The first describes the centering of the Bravais lattice (P, A, C, I, R or F). The next three describe the most prominent symmetry operation visible when projected along one of the high-symmetry directions of the crystal. These symbols are the same as used in point groups, with the addition of glide planes and screw axes, described above. By way of example, the space group of quartz is P3121, showing that it exhibits primitive centering of the motif (i.e., once per unit cell), with a threefold screw axis and a twofold rotation axis. Note that it does not explicitly contain the crystal system, although this is unique to each space group (in the case of P3121, it is trigonal). In the international short symbol the first symbol (31 in this example) denotes the symmetry
along the major axis (c-axis in trigonal cases), the second (2 in this case) along axes of secondary importance (a and b) and the third symbol the symmetry in another direction. In the trigonal case there also exists a space group P3112. In this space group the twofold axes are not along the a and b axes but in a direction rotated by 30°. The international symbols and international short symbols for some of the space groups were changed slightly between 1935 and 2002, so several space groups have 4 different international symbols in use. The viewing directions of the 7 crystal systems are shown as follows. Hall notation Space group notation with an explicit origin. Rotation, translation and axis-direction symbols are clearly separated and inversion centers are explicitly defined. The construction and format of the notation make it particularly suited to computer generation of symmetry information. For example, group number 3 has three Hall symbols: P 2y (P 1 2 1), P 2 (P 1 1 2), P 2x (P 2 1 1). Schönflies notation The space groups with a given point group are numbered by 1, 2, 3, ... (in the same order as their international number) and this number is added as a superscript to the Schönflies symbol for the point group. For example, groups 3 to 5, whose point group is C2, have Schönflies symbols C2^1, C2^2, C2^3. Fedorov notation Shubnikov symbol Strukturbericht designation A related notation for crystal structures, giving a letter and index: A for elements (monatomic), B for AB compounds, C for AB2 compounds, D for AmBn compounds, E, F, ..., K for more complex compounds, L for alloys, O for organic compounds, S for silicates. Some structure designations share the same space group. For example, space group 225 is A1, B1, and C1. Space group 221 is Ah,
and B2. However, crystallographers would not use Strukturbericht notation to describe the space group; rather, it is used to describe a specific crystal structure (e.g. space group + atomic arrangement (motif)). Orbifold notation (2D) Fibrifold notation (3D) As the name suggests, the orbifold notation describes the orbifold, given by the quotient of Euclidean space by the space group, rather than generators of the space group. It was introduced by Conway and Thurston, and is not used much outside mathematics. Some of the space groups have several different fibrifolds associated with them, so have several different fibrifold symbols. Coxeter notation Spatial and point symmetry groups, represented as modifications of the pure reflectional Coxeter groups. Geometric notation A geometric algebra notation. == Classification systems == There are (at least) 10 different ways to classify space groups into classes. The relations between some of these are described in the following table. Each classification system is a refinement of the ones below it. To understand an explanation given here it may be necessary to understand the next one down. Conway, Delgado Friedrichs, and Huson et al. (2001) gave another classification of the space groups, called a fibrifold notation, according to the fibrifold structures on the corresponding orbifold. They divided the 219 affine space groups into reducible and irreducible groups. The reducible groups fall into 17 classes corresponding to the 17 wallpaper groups, and the remaining 35 irreducible groups are the same as the cubic groups and are classified separately. == In other dimensions == === Bieberbach's theorems === In n dimensions, an affine space group, or Bieberbach group, is a discrete subgroup of isometries of n-dimensional Euclidean space with a compact fundamental domain. Bieberbach (1911, 1912) proved that the subgroup of translations of any such group contains n linearly independent translations, and is a
free abelian subgroup of finite index, and is also the unique maximal normal abelian subgroup. He also showed that in any dimension n there are only a finite number of possibilities for the isomorphism class of the underlying group of a space group, and moreover the action of the group on Euclidean space is unique up to conjugation by affine transformations. This answers part of Hilbert's eighteenth problem. Zassenhaus (1948) showed that conversely any group that is the extension of Z^n by a finite group acting faithfully is an affine space group. Combining these results shows that classifying space groups in n dimensions up to conjugation by affine transformations is essentially the same as classifying isomorphism classes for groups that are extensions of Z^n by a finite group acting faithfully. It is essential in Bieberbach's theorems to assume that the group acts as isometries; the theorems do not generalize to discrete cocompact groups of affine transformations of Euclidean space. A counter-example is given by the 3-dimensional Heisenberg group of the integers acting by translations on the Heisenberg group of the reals, identified with 3-dimensional Euclidean space. This is a discrete cocompact group of affine transformations of space, but does not contain a subgroup Z^3. === Classification in small dimensions === This table gives the number of space group types in small dimensions, including the numbers of various classes of space group. The numbers of enantiomorphic pairs are given in parentheses. === Magnetic groups and time reversal === In addition to crystallographic space groups there are also magnetic space groups (also called two-color (black and white) crystallographic groups or Shubnikov groups). These symmetries contain an element known as time reversal. They treat time as an additional dimension, and the group elements can include time reversal as reflection in it. They are
of importance in magnetic structures that contain ordered unpaired spins, i.e. ferro-, ferri- or antiferromagnetic structures as studied by neutron diffraction. The time reversal element flips a magnetic spin while leaving all other structure the same and it can be combined with a number of other symmetry elements. Including time reversal there are 1651 magnetic space groups in 3D (Kim 1999, p.428). It has also been possible to construct magnetic versions for other overall and lattice dimensions (Daniel Litvin's papers, (Litvin 2008), (Litvin 2005)). Frieze groups are magnetic 1D line groups and layer groups are magnetic wallpaper groups, and the axial 3D point groups are magnetic 2D point groups. Number of original and magnetic groups by (overall, lattice) dimension:(Palistrant 2012)(Souvignier 2006) == Table of space groups in 2 dimensions (wallpaper groups) == Table of the wallpaper groups using the classification of the 2-dimensional space groups: For each geometric class, the possible arithmetic classes are None: no reflection lines Along: reflection lines along lattice directions Between: reflection lines halfway in between lattice directions Both: reflection lines both along and between lattice directions == Table of space groups in 3 dimensions == Note: An e plane is a double glide plane, one having glides in two different directions. They are found in seven orthorhombic, five tetragonal and five cubic space groups, all with centered lattice. The use of the symbol e became official with Hahn (2002). The lattice system can be found as follows. If the crystal system is not trigonal then the lattice system is of the same type. If the crystal system is trigonal, then the lattice system is hexagonal unless the space group is one of the seven in the rhombohedral lattice system consisting of the 7 trigonal space groups in the table above whose name begins with R.
(The term rhombohedral system is also sometimes used as an alternative name for the whole trigonal system.) The hexagonal lattice system is larger than the hexagonal crystal system, and consists of the hexagonal crystal system together with the 18 groups of the trigonal crystal system other than the seven whose names begin with R. The Bravais lattice of the space group is determined by the lattice system together with the initial letter of its name, which for the non-rhombohedral groups is P, I, F, A or C, standing for the primitive, body-centered, face-centered, A-face-centered or C-face-centered lattices. There are seven rhombohedral space groups, with initial letter R. == Derivation of the crystal class from the space group == Leave out the Bravais type. Convert all symmetry elements with translational components into their respective symmetry elements without translation symmetry (glide planes are converted into simple mirror planes; screw axes are converted into simple axes of rotation). Axes of rotation, rotoinversion axes and mirror planes remain unchanged. == References == == External links == International Union of Crystallography Point Groups and Bravais Lattices Archived 2012-07-16 at the Wayback Machine Bilbao Crystallographic Server Space Group Info (old) Space Group Info (new) Crystal Lattice Structures: Index by Space Group Full list of 230 crystallographic space groups Interactive 3D visualization of all 230 crystallographic space groups Archived 2021-04-18 at the Wayback Machine Huson, Daniel H. (1999), The Fibrifold Notation and Classification for 3D Space Groups (PDF) The Geometry Center: 2.1 Formulas for Symmetries in Cartesian Coordinates (two dimensions) The Geometry Center: 10.1 Formulas for Symmetries in Cartesian Coordinates (three dimensions)
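The crystal-class derivation described above is mechanical enough to sketch in code. This is a simplified illustration, not a standard tool: the underscore convention for screw-axis subscripts (e.g. "P2_1/c" for P2₁/c) and the function name are assumptions made here for readability.

```python
import re

# Glide-plane letters; by the rules above they all become mirror planes "m".
GLIDES = set("abcden")

def point_group(space_group: str) -> str:
    """Derive the crystal class (point group) from a space-group symbol.

    Screw-axis subscripts are assumed to be written with "_", e.g. "P2_1/c".
    """
    symbol = space_group[1:]                    # 1. leave out the Bravais type
    symbol = re.sub(r"(\d)_\d", r"\1", symbol)  # 2. screw axis n_m -> rotation n
    # 3. glide planes -> simple mirror planes; rotation and rotoinversion
    #    axes and mirror planes remain unchanged
    return "".join("m" if ch in GLIDES else ch for ch in symbol)

print(point_group("P2_1/c"))    # -> 2/m
print(point_group("Fdd2"))      # -> mm2
print(point_group("P6_3/mmc"))  # -> 6/mmm
```

The sketch works character by character, so it returns the full rather than the conventional short point-group symbol in some cases; it is meant only to make the three derivation rules concrete.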
|
{
"page_id": 463721,
"source": null,
"title": "Space group"
}
|
Selenium yeast is a feed additive for livestock, used to increase the selenium content in their fodder. It is a form of selenium currently approved for human consumption in the EU and Britain. Inorganic forms of selenium are used in feeds (namely sodium selenate and sodium selenite, which appear to work in roughly the same manner). Since these products can be patented, producers can demand premium prices. It is produced by fermenting Saccharomyces cerevisiae (baker's yeast) in a selenium-rich medium. There is considerable variability in products described as Se-yeast and the selenium compounds found within. Many manufacturers and products on the market are simply mixtures of largely inorganic selenium and some yeast. Selenium is found in different forms based upon the food in which it is found. For instance, the form found in mustard and garlic is different from the form found in wheat or corn. In some products, the added selenium is structurally substituted for sulfur in the amino acid methionine, thus forming an organic chemical called selenomethionine via the same pathways and enzymes. Owing to its similarity to sulfur-containing methionine, selenomethionine is mistaken for an amino acid by the yeast's anabolism and incorporated into its proteins. It has been claimed that selenomethionine makes a better source of dietary selenium in animal nutrition, since it is an organic chemical compound sometimes found in some common crops such as wheat. == Animal feed additive == Large amounts of selenium are toxic; however, it is physiologically necessary for animals in extremely small amounts. Many other uncharacterized selenium-containing organic chemicals are also produced by a method similar to that of selenomethionine; some have recently been characterized but remain relatively unknown, such as S-seleno-methyl-glutathione and glutathione-S-selenoglutathione. Due to this, the European Union has questioned the safety and potential toxicity of this food supplement for humans, and it may not be used as an additive after 2002. G. N. Schrauzer, who has written two papers about selenomethionine, claims it should be considered an essential amino acid and that the product is completely safe. The European Food Safety Authority does allow the use of selenomethionine as a feed additive for animals. Because organic forms of selenium appear to be excreted from the body more slowly than inorganic forms, products enriched with organic selenium might detrimentally bioaccumulate in the body. Because selenium-enriched foods contain much more selenium than natural foods, selenium toxicity is a potential problem, and such foods must be treated with caution. The EU allows up to 300 micrograms of selenium per day, but one long-term study of selenium supplementation showed no evidence of toxicity at a dose as high as 800 micrograms per day. An organic selenium-containing chemical found in selenium yeast has been shown to differ in bioavailability and metabolism compared with common inorganic forms of dietary selenium. Dietary supplementation using selenium yeast is ineffective in the production of antioxidants in bovine milk compared to inorganic selenium (sodium selenate). One study examined whether increased selenium in the diet of mutant mice (via a selenium yeast product) caused a higher production of selenium-containing enzymes, which have an antioxidant effect; the effect was modest. Selenium supplementation in yeast form has been shown to increase pig selenium-containing antioxidant enzymes, broiler growth and meat quality, the shelf life of turkey and rooster semen, and possibly cattle fertility. Selenium supplementation in animal feeds may be profitable for agribusinesses. It may be possible to market selenium-fortified foods to consumers as functional foods, such as selenium-enriched eggs, meat, or milk. == Sel-Plex® == A patented cultivar of yeast (Saccharomyces cerevisiae 'CNCM I-3060') marketed as Sel-Plex® has been approved for use in animal
fodder: U.S. Food and Drug Administration approval for use as a supplement to feed for chickens, turkeys, swine, goats, sheep, horses, dogs, bison, and beef and dairy cows; Organic Materials Review Institute approval for use as a feed supplement for all animal species. As of 2006, the European Food Safety Authority's Scientific Panel on Additives and Products or Substances used in Animal Feed allows the use of Sel-Plex® in animal fodder for poultry, swine, and bovines, as the selenium is not significantly bio-accumulated by the human consumer. Only a small amount should be used when blending animal feeds; at ten times the authorized maximum selenium intake, production drops. Appropriate measures to minimize inhalation exposure to the product should be taken. == Analytical chemistry == Total selenium in selenium yeast can be reliably determined using open acid digestion to extract selenium from the yeast matrix, followed by flame atomic absorption spectrometry. Determination of the selenium species selenomethionine can be achieved via proteolytic digestion of selenium yeast followed by high-performance liquid chromatography with inductively coupled plasma mass spectrometry. == See also == Nutritional muscular dystrophy == References ==
|
{
"page_id": 30806891,
"source": null,
"title": "Selenium yeast"
}
|
The term spiral separator can refer either to a device for separating slurry components by density (wet spiral separators) or to a device for sorting particles by shape (dry spiral separators). == Wet spiral separators == Spiral separators of the wet type, also called spiral concentrators, are devices to separate solid components in a slurry, based upon a combination of the solid particle density and the particle's hydrodynamic properties (e.g. drag). The device consists of a tower, around which is wound a sluice; slots or channels placed in the base of the sluice extract solid particles that have come out of suspension. As larger and heavier particles sink to the bottom of the sluice faster and experience more drag from the bottom, they travel slower and so move towards the center of the spiral. Conversely, light particles stay towards the outside of the spiral, with the water, and quickly reach the bottom. At the bottom, a "cut" is made with a set of adjustable bars, channels, or slots, separating the low- and high-density parts. == Efficiency == Typical spiral concentrators will use a slurry of about 20%-40% solids by weight, with a particle size somewhere between 0.75-1.5 mm (17-340 mesh), though somewhat larger particle sizes are sometimes used. The spiral separator is less efficient at particle sizes of 0.1-0.074 mm, however. For efficient separation, the density difference between the heavy minerals and the light minerals in the feedstock should be at least 1 g/cm3; and because the separation is dependent upon size and density, spiral separators are most effective at purifying ore if its particles are of uniform size and shape. A spiral separator may process a couple of tons per hour of ore per flight, and multiple flights may be stacked in the same
space as one, to improve capacity. Many things can be done to improve the separation efficiency, including: changing the rate of material feed; changing the grain size of the material; changing the slurry mass percentage; adjusting the cutter bar positions; running the output of one spiral separator (often a third, intermediate cut) through a second; adding washwater inlets along the length of the spiral, to aid in separating light minerals; adding multiple outlets along the length, to improve the ability of the spiral to remove heavy contaminants; and adding ridges on the sluice at an angle to the direction of flow. == Dry spiral separators == Dry spiral separators, capable of distinguishing round particles from nonrounds, are used to sort the feed by shape. The device consists of a tower, around which is wound an inwardly inclined flight. A catchment funnel is placed around this inner flight. Round particles roll at a higher speed than other objects, and so are flung off the inner flight and into the collection funnel. Shapes which are not round enough are collected at the bottom of the flight. Separators of this type may be used for removing weed seeds from the intended harvest, or to remove deformed lead shot. == See also == Sieve Screw conveyor Cyclone (separator) Mineral processing Mechanical screening == References == == Further reading == US 922804, Wright, Douglas, "Spiral Separators" (design of a take-off point for the extraction of dense material separated from a helical spiral separator), published 7 Jul 1978, issued 19 Feb 1980. == External links == Screw Conveyor
Random features (RF) are a technique used in machine learning to approximate kernel methods, introduced by Ali Rahimi and Ben Recht in their 2007 paper "Random Features for Large-Scale Kernel Machines", and later extended. RF uses a Monte Carlo approximation to kernel functions by randomly sampled feature maps. It is used for datasets that are too large for traditional kernel methods such as support vector machines, kernel ridge regression, and Gaussian processes. == Mathematics == === Kernel method === Given a feature map ϕ : R d → V {\textstyle \phi :\mathbb {R} ^{d}\to V} , where V {\textstyle V} is a Hilbert space (more specifically, a reproducing kernel Hilbert space), the kernel trick replaces inner products in feature space ⟨ ϕ ( x i ) , ϕ ( x j ) ⟩ V {\displaystyle \langle \phi (x_{i}),\phi (x_{j})\rangle _{V}} by a kernel function k ( x i , x j ) : R d × R d → R {\displaystyle k(x_{i},x_{j}):\mathbb {R} ^{d}\times \mathbb {R} ^{d}\to \mathbb {R} } Kernel methods replace linear operations in high-dimensional space by operations on the kernel matrix: K X := [ k ( x i , x j ) ] i , j ∈ 1 : N {\displaystyle K_{X}:=[k(x_{i},x_{j})]_{i,j\in 1:N}} where N {\textstyle N} is the number of data points. === Random kernel method === The problem with kernel methods is that the kernel matrix K X {\textstyle K_{X}} has size N × N {\textstyle N\times N} . This becomes computationally infeasible when N {\textstyle N} reaches the order of a million. The random kernel method replaces the kernel function k {\textstyle k} by an inner product in low-dimensional feature space R D {\textstyle \mathbb {R} ^{D}} : k ( x , y ) ≈ ⟨ z ( x ) , z ( y
|
{
"page_id": 77992820,
"source": null,
"title": "Random feature"
}
|
) ⟩ {\displaystyle k(x,y)\approx \langle z(x),z(y)\rangle } where z {\textstyle z} is a randomly sampled feature map z : R d → R D {\textstyle z:\mathbb {R} ^{d}\to \mathbb {R} ^{D}} . This converts kernel linear regression into linear regression in feature space, kernel SVM into SVM in feature space, etc. Since we have K X ≈ Z X T Z X {\displaystyle K_{X}\approx Z_{X}^{T}Z_{X}} where Z X = [ z ( x 1 ) , … , z ( x N ) ] {\displaystyle Z_{X}=[z(x_{1}),\dots ,z(x_{N})]} , these methods no longer involve matrices of size O ( N 2 ) {\textstyle O(N^{2})} , but only random feature matrices of size O ( D N ) {\textstyle O(DN)} . == Random Fourier feature == === Radial basis function kernel === The radial basis function (RBF) kernel on two samples x i , x j ∈ R d {\displaystyle x_{i},x_{j}\in \mathbb {R} ^{d}} is defined as k ( x i , x j ) = exp ( − ‖ x i − x j ‖ 2 2 σ 2 ) {\displaystyle k(x_{i},x_{j})=\exp \left(-{\frac {\|x_{i}-x_{j}\|^{2}}{2\sigma ^{2}}}\right)} where ‖ x i − x j ‖ 2 {\displaystyle \|x_{i}-x_{j}\|^{2}} is the squared Euclidean distance and σ {\displaystyle \sigma } is a free parameter defining the shape of the kernel. It can be approximated by a random Fourier feature map z : R d → R 2 D {\displaystyle z:\mathbb {R} ^{d}\to \mathbb {R} ^{2D}} : z ( x ) := 1 D [ cos ⟨ ω 1 , x ⟩ , sin ⟨ ω 1 , x ⟩ , … , cos ⟨ ω D , x ⟩ , sin ⟨ ω D , x ⟩ ] T {\displaystyle z(x):={\frac {1}{\sqrt {D}}}[\cos \langle \omega _{1},x\rangle ,\sin \langle \omega _{1},x\rangle
,\ldots ,\cos \langle \omega _{D},x\rangle ,\sin \langle \omega _{D},x\rangle ]^{T}} where ω 1 , . . . , ω D {\displaystyle \omega _{1},...,\omega _{D}} are IID samples from the multidimensional normal distribution N ( 0 , σ − 2 I ) {\displaystyle N(0,\sigma ^{-2}I)} . Since cos , sin {\displaystyle \cos ,\sin } are bounded, there is a stronger convergence guarantee by Hoeffding's inequality. === Random Fourier features === By Bochner's theorem, the above construction can be generalized to an arbitrary positive-definite shift-invariant kernel k ( x , y ) = k ( x − y ) {\displaystyle k(x,y)=k(x-y)} . Define its Fourier transform p ( ω ) = 1 2 π ∫ R d e − j ⟨ ω , Δ ⟩ k ( Δ ) d Δ {\displaystyle p(\omega )={\frac {1}{2\pi }}\int _{\mathbb {R} ^{d}}e^{-j\langle \omega ,\Delta \rangle }k(\Delta )d\Delta } then ω 1 , . . . , ω D {\displaystyle \omega _{1},...,\omega _{D}} are sampled IID from the probability distribution with probability density p {\displaystyle p} . This applies to other kernels such as the Laplace kernel and the Cauchy kernel. === Neural network interpretation === Given a random Fourier feature map z {\displaystyle z} , training the features on a dataset by featurized linear regression is equivalent to fitting complex parameters θ 1 , … , θ D ∈ C {\displaystyle \theta _{1},\dots ,\theta _{D}\in \mathbb {C} } such that f θ ( x ) = R e ( ∑ k θ k e i ⟨ ω k , x ⟩ ) {\displaystyle f_{\theta }(x)=\mathrm {Re} \left(\sum _{k}\theta _{k}e^{i\langle \omega _{k},x\rangle }\right)} which is a neural network with a single hidden layer, with activation function t ↦ e i t {\displaystyle t\mapsto e^{it}} , zero bias, and the parameters in the first layer frozen.
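As a concrete illustration, the random Fourier feature map for the RBF kernel can be sketched in a few lines of NumPy. This is only a sketch under stated assumptions (the function names and the choice of D are illustrative, not from the paper): it draws ω_i from N(0, σ⁻²I), stacks the cosine and sine features, and checks that the inner product of two feature vectors approximates the exact kernel value.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Exact RBF kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma**2))

def random_fourier_features(X, D, sigma=1.0, seed=0):
    # z(x) = (1/sqrt(D)) [cos<w_i, x>, sin<w_i, x>]_{i=1..D}, with
    # w_i drawn IID from N(0, sigma^{-2} I), so E[<z(x), z(y)>] = k(x, y).
    # (Cosines and sines are stored as two contiguous blocks here; since the
    # features come in matched pairs, ordering does not affect inner products.)
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(D, d))  # omega_1, ..., omega_D
    proj = X @ W.T                                  # <omega_i, x_j> for all i, j
    return np.hstack([np.cos(proj), np.sin(proj)]) / np.sqrt(D)

X = np.random.default_rng(42).normal(size=(2, 5))  # two sample points in R^5
Z = random_fourier_features(X, D=5000)
approx = Z[0] @ Z[1]            # Monte Carlo estimate of k(x_0, x_1)
exact = rbf_kernel(X[0], X[1])  # agreement is O(1/sqrt(D))
```

With D = 5000 the estimate typically agrees with the exact kernel to about two decimal places, consistent with the Monte Carlo error scaling.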
In the overparameterized case, when 2 D ≥ N {\displaystyle 2D\geq N} , the network linearly interpolates the dataset { ( x i , y i ) } i ∈ 1 : N {\displaystyle \{(x_{i},y_{i})\}_{i\in 1:N}} , and the network parameters are the least-norm solution: θ ^ = arg min θ ∈ C D , f θ ( x k ) = y k ∀ k ∈ 1 : N ‖ θ ‖ {\displaystyle {\hat {\theta }}=\arg \min _{\theta \in \mathbb {C} ^{D},f_{\theta }(x_{k})=y_{k}\forall k\in 1:N}\|\theta \|} In the limit D → ∞ {\displaystyle D\to \infty } , the L2 norm ‖ θ ^ ‖ → ‖ f K ‖ H {\displaystyle \|{\hat {\theta }}\|\to \|f_{K}\|_{H}} where f K {\displaystyle f_{K}} is the interpolating function obtained by the kernel regression with the original kernel, and ‖ ⋅ ‖ H {\displaystyle \|\cdot \|_{H}} is the norm in the reproducing kernel Hilbert space for the kernel. == Other examples == === Random binning features === A random binning feature map partitions the input space using randomly shifted grids at randomly chosen resolutions and assigns to an input point a binary bit string that corresponds to the bins in which it falls. The grids are constructed so that the probability that two points x i , x j ∈ R d {\displaystyle x_{i},x_{j}\in \mathbb {R} ^{d}} are assigned to the same bin is proportional to K ( x i , x j ) {\displaystyle K(x_{i},x_{j})} . The inner product between a pair of transformed points is proportional to the number of times the two points are binned together, and is therefore an unbiased estimate of K ( x i , x j ) {\displaystyle K(x_{i},x_{j})} . Since this mapping is not smooth and uses the proximity between input points, Random Binning
Features works well for approximating kernels that depend only on the L 1 {\displaystyle L_{1}} distance between datapoints. === Orthogonal random features === Orthogonal random features use a random orthogonal matrix instead of a random Fourier matrix. == Historical context == At NIPS 2006, deep learning had just become competitive with linear models like PCA and linear SVMs for large datasets, and people speculated about whether it could compete with kernel SVMs. However, there was no practical way to train kernel SVMs on large datasets, and the two authors developed the random feature method to do so. It was then found that the O ( 1 / D ) {\displaystyle O(1/D)} variance bound did not match practice: the variance bound predicts that approximation to within 0.01 {\displaystyle 0.01} requires D ∼ 10 4 {\displaystyle D\sim 10^{4}} , but in practice only D ∼ 10 2 {\displaystyle D\sim 10^{2}} was required. Attempting to discover what caused this led to the subsequent two papers. == See also == Kernel method Support vector machine Fourier transform Monte Carlo method == References == == External links == Random Walks - Random Fourier features
The Hayashi track is a luminosity–temperature relationship obeyed by infant stars of less than 3 M☉ in the pre-main-sequence phase (PMS phase) of stellar evolution. It is named after Japanese astrophysicist Chushiro Hayashi. On the Hertzsprung–Russell diagram, which plots luminosity against temperature, the track is a nearly vertical curve. After a protostar ends its phase of rapid contraction and becomes a T Tauri star, it is extremely luminous. The star continues to contract, but much more slowly. While slowly contracting, the star follows the Hayashi track downwards, becoming several times less luminous but staying at roughly the same surface temperature, until either a radiative zone develops, at which point the star starts following the Henyey track, or nuclear fusion begins, marking its entry onto the main sequence. The shape and position of the Hayashi track on the Hertzsprung–Russell diagram depend on the star's mass and chemical composition. For solar-mass stars, the track lies at a temperature of roughly 4000 K. Stars on the track are nearly fully convective and have their opacity dominated by H− ions. Stars less than 0.5 M☉ are fully convective even on the main sequence, but their opacity begins to be dominated by Kramers' opacity law after nuclear fusion begins, thus moving them off the Hayashi track. Stars between 0.5 and 3 M☉ develop a radiative zone prior to reaching the main sequence. Stars between 3 and 10 M☉ are fully radiative at the beginning of the pre-main-sequence. Even heavier stars are born onto the main sequence, with no PMS evolution. At the end of a low- or intermediate-mass star's life, the star follows an analogue of the Hayashi track, but in reverse—it increases in luminosity, expands, and stays at roughly the same temperature, eventually becoming a red giant. == History == In 1961, Professor Chushiro
|
{
"page_id": 5116788,
"source": null,
"title": "Hayashi track"
}
|
Hayashi published two papers that led to the concept of the pre-main-sequence and form the basis of the modern understanding of early stellar evolution. Hayashi realized that the existing model, in which stars are assumed to be in radiative equilibrium with no substantial convection zone, cannot explain the shape of the red-giant branch. He therefore revised the model to include the effects of thick convection zones on a star's interior. A few years prior, Osterbrock proposed deep convection zones with efficient convection, analyzing them using the opacity of H− ions (the dominant opacity source in cool atmospheres) at temperatures below 5000 K. However, the earliest numerical models of Sun-like stars did not follow up on this work and continued to assume radiative equilibrium. In his 1961 papers, Hayashi showed that the convective envelope of a star is determined by E = 4 π G 3 / 2 ( μ H k ) 5 / 2 M 1 / 2 R 3 / 2 P T 5 / 2 , {\displaystyle E=4\pi G^{3/2}\left({\frac {\mu H}{k}}\right)^{5/2}{\frac {M^{1/2}R^{3/2}P}{T^{5/2}}},} where E is a dimensionless quantity (not the energy). Modelling stars as polytropes with index 3/2 (in other words, assuming they follow a pressure-density relationship of P = K ρ 5 / 3 {\displaystyle P=K\rho ^{5/3}} ), he found that E = 45 is the maximum for a quasistatic star. If a star is not contracting rapidly, E = 45 defines a curve on the HR diagram, to the right of which the star cannot exist. He then computed the evolutionary tracks and isochrones (luminosity–temperature distributions of stars at a given age) for a variety of stellar masses and noted that NGC 2264, a very young star cluster, fits the isochrones well. In particular, he calculated much lower ages for solar-type stars in NGC 2264 and predicted that
these stars were rapidly contracting T Tauri stars. In 1962, Hayashi published a 183-page review of stellar evolution, discussing the evolution of stars born in the forbidden region. These stars rapidly contract due to gravity before settling to a quasistatic, fully convective state on the Hayashi tracks. In 1965, numerical models by Iben and Ezer & Cameron realistically simulated pre-main-sequence evolution, including the Henyey track that stars follow after leaving the Hayashi track. These standard PMS tracks can still be found in textbooks on stellar evolution. == Forbidden zone == The forbidden zone is the region on the HR diagram to the right of the Hayashi track where no star can be in hydrostatic equilibrium, even those that are partially or fully radiative. Newborn protostars start out in this zone, but are not in hydrostatic equilibrium and will rapidly move towards the Hayashi track. Because stars emit light via black-body radiation, the power per unit surface area they emit is given by the Stefan–Boltzmann law: j ⋆ = σ T 4 . {\displaystyle j^{\star }=\sigma T^{4}.} The star's luminosity is therefore given by L = 4 π R 2 σ T 4 . {\displaystyle L=4\pi R^{2}\sigma T^{4}.} For a given L, a lower temperature implies a larger radius, and vice versa. Thus, the Hayashi track separates the HR diagram into two regions: the allowed region to the left, with high temperatures and smaller radii for each luminosity, and the forbidden region to the right, with lower temperatures and correspondingly larger radii. The Hayashi limit can refer to either the lower bound in temperature or the upper bound on radius defined by the Hayashi track. The region to the right is forbidden because it can be shown that a star in the region must have a temperature gradient of d ln
T d ln P > 0.4 , {\displaystyle {\frac {d\ln T}{d\ln P}}>0.4,} where d ln T / d ln P = 0.4 {\displaystyle d\ln T/d\ln P=0.4} for a monatomic ideal gas undergoing adiabatic expansion or contraction. A temperature gradient greater than 0.4 is therefore called superadiabatic. Consider a star with a superadiabatic gradient. Imagine a parcel of gas that starts at radial position r, but moves upwards to r + dr in a sufficiently short time that it exchanges negligible heat with its surroundings—in other words, the process is adiabatic. The pressure of the surroundings, as well as that of the parcel, decreases by some amount dP. The parcel's temperature changes by d T = 0.4 T d P / P {\displaystyle dT=0.4\,T\,dP/P} . The temperature of the surroundings also decreases, but by some amount dT′ that is greater than dT. The parcel therefore ends up being hotter than its surroundings. Since the ideal gas law can be written P = ρ R T / μ {\displaystyle P=\rho RT/\mu } , a higher temperature implies a lower density at the same pressure. The parcel is therefore also less dense than its surroundings. This will cause it to rise even more, and the parcel will become even less dense than its new surroundings. Clearly, this situation is not stable. In fact, a superadiabatic gradient causes convection. Convection tends to lower the temperature gradient because the rising parcel of gas will eventually be dispersed, dumping its excess thermal and kinetic energy into its surroundings and heating them. In stars, the convection process is known to be highly efficient, with a typical d ln T / d ln P {\displaystyle d\ln T/d\ln P} that only exceeds the adiabatic gradient by 1 part in 10 million.
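The Stefan–Boltzmann relation quoted above fixes the geometry of the two regions: at a fixed luminosity, a cooler surface forces a larger radius. A quick numerical check (a sketch; the constants are standard IAU/CODATA values):

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # nominal solar luminosity, W
R_SUN = 6.957e8          # nominal solar radius, m

def radius_from_luminosity(L, T_eff):
    # Invert L = 4*pi*R^2*sigma*T^4 for the radius R.
    return math.sqrt(L / (4 * math.pi * SIGMA * T_eff**4))

# A star of solar luminosity on the Hayashi track at ~4000 K must be roughly
# twice the Sun's radius; at the Sun's ~5772 K it is one solar radius.
r_hayashi = radius_from_luminosity(L_SUN, 4000.0) / R_SUN   # ~2.1
r_sun = radius_from_luminosity(L_SUN, 5772.0) / R_SUN       # ~1.0
```

This is the sense in which the Hayashi limit is simultaneously a lower bound on temperature and an upper bound on radius.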
If a star is placed in the forbidden zone, with a temperature gradient much greater than 0.4, it will experience rapid convection that brings the gradient down. Since this convection will drastically change the star's pressure and temperature distribution, the star is not in hydrostatic equilibrium, and will contract until it is. A star far to the left of the Hayashi track has a temperature gradient smaller than adiabatic. This means that if a parcel of gas rises a tiny bit, it will be more dense than its surroundings and sink back to where it came from. Convection therefore does not occur, and almost all energy output is carried radiatively. == Star formation == Stars form when small regions of a giant molecular cloud collapse under their own gravity, becoming protostars. The collapse releases gravitational energy, which heats up the protostar. This process occurs on the free-fall timescale, which is roughly 100,000 years for solar-mass protostars, and ends when the protostar reaches approximately 4000 K. This temperature is known as the Hayashi boundary, and the protostar is now on the Hayashi track. It is then known as a T Tauri star and continues to contract, but much more slowly. As it contracts, it decreases in luminosity because less surface area becomes available for emitting light. The Hayashi track gives the resulting change in temperature, which will be minimal compared to the change in luminosity because the Hayashi track is nearly vertical. In other words, on the HR diagram, a T Tauri star starts out on the Hayashi track with a high luminosity and moves downward along the track as time passes. The Hayashi track describes a fully convective star. This is a good approximation for very young pre-main-sequence stars because they are still cool and highly opaque,
so that radiative transport is insufficient to carry away the generated energy and convection must occur. Stars less massive than 0.5 M☉ remain fully convective, and therefore remain on the Hayashi track, throughout their pre-main-sequence stage, joining the main sequence at the bottom of the Hayashi track. Stars heavier than 0.5 M☉ have higher interior temperatures, which decreases their central opacity and allows radiation to carry away large amounts of energy. This allows a radiative zone to develop around the star's core. The star is then no longer on the Hayashi track, and experiences a period of rapidly increasing temperature at nearly constant luminosity. This is called the Henyey track, and ends when temperatures are high enough to ignite hydrogen fusion in the core. The star is then on the main sequence. Lower-mass stars follow the Hayashi track until the track intersects with the main sequence, at which point hydrogen fusion begins and the star joins the main sequence. Even lower-mass 'stars' never achieve the conditions necessary to fuse hydrogen and become brown dwarfs. == Derivation == The exact shape and position of the Hayashi track can only be computed numerically using computer models. Nevertheless, we can make an extremely crude analytical argument that captures most of the track's properties. The following derivation loosely follows that of Kippenhahn, Weigert, and Weiss in Stellar Structure and Evolution. In our simple model, a star is assumed to consist of a fully convective interior surrounded by a fully radiative atmosphere. The convective interior is assumed to be an ideal monatomic gas with a perfectly adiabatic temperature gradient: d ln T d ln P = 0.4. {\displaystyle {\frac {d\ln T}{d\ln P}}=0.4.} This quantity is sometimes labelled ∇ {\displaystyle \nabla } . The following adiabatic equation therefore holds true for the entire interior:
P 1 − γ T γ = C , {\displaystyle P^{1-\gamma }T^{\gamma }=C,} where γ {\displaystyle \gamma } is the adiabatic gamma, which is 5/3 for an ideal monatomic gas. The ideal gas law says: P = N k T / V = ρ k T μ H = ( k ρ μ H ) γ C , {\displaystyle {\begin{aligned}P&=NkT/V\\[1ex]&={\frac {\rho kT}{\mu H}}\\[1ex]&=\left({\frac {k\rho }{\mu H}}\right)^{\gamma }C,\end{aligned}}} where μ {\displaystyle \mu } is the molecular weight per particle, and H is (to a very good approximation) the mass of a hydrogen atom. This equation represents a polytrope of index 1.5, since a polytrope is defined by P = K ρ 1 + 1 / n {\displaystyle P=K\rho ^{1+1/n}} , where n = 1.5 {\displaystyle n=1.5} is the polytropic index. Applying the equation to the center of the star gives P c = ( k ρ c μ H ) γ C . {\displaystyle P_{c}=\left({\frac {k\rho _{c}}{\mu H}}\right)^{\gamma }C.} We can solve for C: C = ( μ H ρ c k ) γ P c . {\displaystyle C=\left({\frac {\mu H}{\rho _{c}k}}\right)^{\gamma }P_{c}.} But for any polytrope, P c = W n G M 2 / R 4 {\displaystyle P_{c}=W_{n}GM^{2}/R^{4}} and ρ c = K n ρ avg {\displaystyle \rho _{c}=K_{n}\rho _{\text{avg}}} . W n , K n {\displaystyle W_{n},K_{n}} are constants independent of pressure and density, and the average density is defined as ρ avg ≡ M 4 3 π R 3 . {\displaystyle \rho _{\text{avg}}\equiv {\frac {M}{{\frac {4}{3}}\pi R^{3}}}.} Plugging these two equations into the equation for C, we have C ∼ M 2 − γ R 3 γ − 4 , {\displaystyle C\sim M^{2-\gamma }R^{3\gamma -4},} where all multiplicative constants have been ignored. Recall that our original definition of C was P 1 − γ T
γ = C . {\displaystyle P^{1-\gamma }T^{\gamma }=C.} Therefore, for any star of mass M and radius R, we have P 1 − γ T γ ∼ M 2 − γ R 3 γ − 4 , {\displaystyle P^{1-\gamma }T^{\gamma }\sim M^{2-\gamma }R^{3\gamma -4},} We need another relationship between P, T, M, and R in order to eliminate P. This relationship will come from the atmosphere model. The atmosphere is assumed to be thin, with average opacity k. By definition, the opacity relates the optical depth to the density through d τ d r = k ρ , {\displaystyle {\frac {d\tau }{dr}}=k\rho ,} so the optical depth of the stellar surface, also called the photosphere, is τ = ∫ R ∞ k ρ d r = k ∫ R ∞ ρ d r , {\displaystyle \tau =\int _{R}^{\infty }k\rho \,dr=k\int _{R}^{\infty }\rho \,dr,} where R is the stellar radius, also known as the position of the photosphere. The pressure at the surface is P 0 = ∫ R ∞ g ρ d r = G M R 2 ∫ R ∞ ρ d r = G M τ k R 2 . {\displaystyle {\begin{aligned}P_{0}&=\int _{R}^{\infty }g\rho \,dr\\&={\frac {GM}{R^{2}}}\int _{R}^{\infty }\rho \,dr\\&={\frac {GM\tau }{kR^{2}}}.\end{aligned}}} The optical depth at the photosphere turns out to be τ = 2 / 3 {\displaystyle \tau =2/3} . By definition, the temperature of the photosphere is T = T eff {\displaystyle T=T_{\text{eff}}} , where effective temperature is given by L = 4 π R 2 σ T eff 4 {\displaystyle L=4\pi R^{2}\sigma T_{\text{eff}}^{4}} . Therefore, the pressure is P 0 = G M R 2 2 3 k . {\displaystyle P_{0}={\frac {GM}{R^{2}}}{\frac {2}{3k}}.} We can approximate the opacity to be k = k 0 P a T b , {\displaystyle k=k_{0}P^{a}T^{b},} where a = 1, b = 3.
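The elimination that follows can be checked mechanically. The sketch below (illustrative code, not from the source) rewrites the three relations as log-linear equations — the interior polytrope relation, the photospheric pressure with the power-law opacity, and the Stefan–Boltzmann law — and solves them with exact rational arithmetic for the exponents A and B in ln T_eff = A ln L + B ln M + const:

```python
from fractions import Fraction as F

def hayashi_exponents(a, b, g=F(5, 3)):
    # Work in logarithms, dropping additive constants:
    #   m = ln M, l = ln L, t = ln T_eff, r = ln R, p = ln P.
    # (1) interior polytrope:   (1-g)p + g*t = (2-g)m + (3g-4)r
    # (2) photosphere pressure: (a+1)p = m - 2r - b*t   (from P0 ~ GM/(R^2 k))
    # (3) Stefan-Boltzmann:     r = l/2 - 2t            (from L ~ R^2 T^4)
    a, b, g = F(a), F(b), F(g)
    # Substitute (3) into (1) and (2), writing p as a combination of (m, l, t):
    pm1 = (2 - g) / (1 - g)                  # p from eq. (1)
    pl1 = (3 * g - 4) / (2 * (1 - g))
    pt1 = -(2 * (3 * g - 4) + g) / (1 - g)
    pm2 = F(1) / (a + 1)                     # p from eq. (2)
    pl2 = F(-1) / (a + 1)
    pt2 = (4 - b) / (a + 1)
    # Equate the two expressions for p and solve for t = A*l + B*m:
    A = (pl1 - pl2) / (pt2 - pt1)
    B = (pm1 - pm2) / (pt2 - pt1)
    return A, B

A, B = hayashi_exponents(1, 3)               # H-minus opacity: k ~ k0 * P * T^3
A_kr, B_kr = hayashi_exponents(1, F(-9, 2))  # Kramers opacity: k ~ k0 * P * T^-4.5
```

With a = 1 and b = 3 this returns A = 1/20 and B = 1/5, reproducing the values A = 0.05 and B = 0.2 given later in the derivation; for Kramers opacity (b = -4.5) it gives A = 1/5, matching the value quoted for hot atmospheres.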
Plugging this into the pressure equation, we get P 0 ∝ ( M R 2 T eff b ) 1 a + 1 , {\displaystyle P_{0}\propto \left({\frac {M}{R^{2}T_{\text{eff}}^{b}}}\right)^{\frac {1}{a+1}},} Finally, we need to eliminate R and introduce L, the luminosity. This can be done with the equation L = 4 π R 2 σ T eff 4 , {\displaystyle L=4\pi R^{2}\sigma T_{\text{eff}}^{4},} Equations 1 and 2 can now be combined by setting T = T eff {\displaystyle T=T_{\text{eff}}} and P = P 0 {\displaystyle P=P_{0}} in equation 1, then eliminating P 0 {\displaystyle P_{0}} . R can be eliminated using equation 3. After some algebra and setting γ = 5 / 3 {\displaystyle \gamma =5/3} , we get ln T eff = A ln L + B ln M + const , {\displaystyle \ln T_{\text{eff}}=A\ln L+B\ln M+{\text{const}},} where A = 0.75 a − 0.25 5.5 a + b + 1.5 , B = 0.5 a + 1.5 5.5 a + b + 1.5 . {\displaystyle {\begin{aligned}A&={\frac {0.75a-0.25}{5.5a+b+1.5}},\\B&={\frac {0.5a+1.5}{5.5a+b+1.5}}.\end{aligned}}} In cool stellar atmospheres (T < 5000 K), like those of newborn stars, the dominant source of opacity is the H− ion, for which a ≈ 1 {\displaystyle a\approx 1} and b ≈ 3 {\displaystyle b\approx 3} , giving A = 0.05 {\displaystyle A=0.05} and B = 0.2 {\displaystyle B=0.2} . Since A is much smaller than 1, the Hayashi track is extremely steep: if the luminosity changes by a factor of 2, the temperature changes by only about 4%. The fact that B is positive indicates that the Hayashi track shifts left on the HR diagram, towards higher temperatures, as mass increases. Although this model is extremely crude, these qualitative observations are fully supported by numerical simulations. At high temperatures, the atmosphere's opacity begins to be dominated by
Kramers' opacity law instead of the H− ion, with a = 1 and b = −4.5. In that case, A = 0.2 in our crude model, far higher than 0.05, and the star is no longer on the Hayashi track. In Stellar Interiors, Hansen, Kawaler, and Trimble go through a similar derivation without neglecting multiplicative constants and arrive at T eff = ( 2600 K ) μ 13 / 51 ( M M ⊙ ) 7 / 51 ( L L ⊙ ) 1 / 102 , {\displaystyle T_{\text{eff}}=(2600~{\text{K}})\mu ^{13/51}\left({\frac {M}{M_{\odot }}}\right)^{7/51}\left({\frac {L}{L_{\odot }}}\right)^{1/102},} where μ {\displaystyle \mu } is the molecular weight per particle. The authors note that the coefficient of 2600 K is too low—it should be around 4000 K—but this equation nevertheless shows that temperature is nearly independent of luminosity. == Numerical results == The diagram at the top of this article shows numerically computed stellar evolution tracks for various masses. The vertical portion of each track is the Hayashi track. The endpoints of each track lie on the main sequence. The horizontal segments for higher-mass stars show the Henyey track. It is approximately true that ∂ ln T eff ∂ ln M ≈ 0.1. {\displaystyle {\frac {\partial \ln T_{\text{eff}}}{\partial \ln M}}\approx 0.1.} The diagram to the right shows how Hayashi tracks change with changes in chemical composition. Z is the star's metallicity, the mass fraction not accounted for by hydrogen or helium. For any given hydrogen mass fraction, increasing Z leads to increasing molecular weight. The dependence of temperature on molecular weight is extremely steep—it is approximately ∂ ln T eff ∂ ln μ ≈ − 26. {\displaystyle {\frac {\partial \ln T_{\text{eff}}}{\partial \ln \mu }}\approx -26.} Decreasing Z by a factor of 10 shifts the track left, towards higher temperatures, changing ln T eff
{\displaystyle \ln T_{\text{eff}}} by about 0.05. Chemical composition affects the Hayashi track in a few ways. The track depends strongly on the atmosphere's opacity, and this opacity is dominated by the H− ion. The abundance of the H− ion is proportional to the density of free electrons, which, in turn, is higher if there are more metals because metals are easier to ionize than hydrogen or helium. == Observational status == Observational evidence of the Hayashi track comes from color–magnitude plots—the observational equivalent of HR diagrams—of young star clusters. For Hayashi, NGC 2264 provided the first evidence of a population of contracting stars. In 2012, data from NGC 2264 was re-analyzed to account for dust reddening and extinction. The resulting color–magnitude plot is shown at right. In the upper diagram, the isochrones are curves along which stars of a certain age are expected to lie, assuming that all stars evolve along the Hayashi track. An isochrone is created by taking stars of every conceivable mass, evolving them forwards to the same age, and plotting all of them on the color–magnitude diagram. Most of the stars in NGC 2264 are already on the main sequence (black line), but a substantial population lies between the isochrones for 3.2 million and 5 million years, indicating that the cluster is 3.2–5 million years old and a large population of T Tauri stars is still on their respective Hayashi tracks. Similar results have been obtained for NGC 6530, IC 5146, and NGC 6611. The lower diagram shows Hayashi tracks for various masses, along with T Tauri observations collected from a variety of sources. Note the bold curve to the right, representing a stellar birthline. Even though some Hayashi tracks theoretically extend above the birthline, few stars are above it. In effect, stars are "born" onto
the birthline before evolving downwards along their respective Hayashi tracks. The birthline exists because stars form from overdense cores of giant molecular clouds in an inside-out manner. That is, a small central region first collapses in on itself while the outer shell is still nearly static. The outer envelope then accretes onto the central protostar. Before the accretion is over, the protostar is hidden from view, and therefore not plotted on the color-magnitude diagram. When the envelope finishes accreting, the star is revealed and appears on the birthline. == See also == Historical brightest stars List of brightest stars List of most luminous stars List of nearest bright stars Stellar birthline Stellar isochrone == References ==
The Los Alamos Primer is a printed version of the first five lectures on the principles of nuclear weapons given to new arrivals at the top-secret Los Alamos laboratory during the Manhattan Project. The five lectures were given by physicist Robert Serber in April 1943. The notes from the lectures which became the Primer were written by Edward Condon. == History == The Los Alamos Primer was composed from five lectures given by the physicist Robert Serber to the newcomers at the Los Alamos Laboratory in April 1943, at the start of the Manhattan Project. The aim of the project was to build the first nuclear bomb, and these lectures were a very concise introduction to the principles of nuclear weapon design. Serber was a postdoctoral student of J. Robert Oppenheimer, the leader of the Los Alamos Laboratory, and worked with him on the project from the very start. The five lectures were given on April 5, 7, 9, 12, and 14, 1943; according to Serber, between 30 and 50 people attended them. Notes were taken by Edward Condon; the Primer is just 24 pages long. Only 36 copies were printed at the time. Serber later described the lectures: Previously the people working at the separate universities had no idea of the whole story. They only knew what part they were working on. So somebody had to give them the picture of what it was all about and what the bomb was like, what was known about the theory, and some idea why they needed the various experimental numbers. In July 1942, Oppenheimer held a "conference" at his office at Berkeley. No records were preserved, but the Primer arose from all the aspects of bomb design discussed there. == Content == The Primer, though only 24 pages long, consists of 22 sections,
|
{
"page_id": 4592501,
"source": null,
"title": "Los Alamos Primer"
}
|
divided into chapters: "Preliminaries", "Neutrons and the fission process", "Critical mass and efficiency", "Detonation, pre-detonation, and fizzles", and "Conclusion". The first paragraph states the intention of the Los Alamos Laboratory during World War II: The object of the project is to produce a practical military weapon in the form of a bomb in which the energy is released by a fast neutron chain reaction in one or more of the materials known to show nuclear fission. The Primer contained the basic physical principles of nuclear fission, as they were known at the time, and their implications for nuclear weapon design. It suggested possible ways to assemble a critical mass of uranium-235 or plutonium, the simplest being the shooting of a "cylindrical plug" into a sphere of "active material" with a "tamper"—dense material which would reflect neutrons inward and keep the reacting mass together to increase its efficiency (this model, the Primer said, "avoids fancy shapes"). It also explored designs involving spheroids, a primitive form of "implosion" (suggested by Richard C. Tolman), and explored the speculative possibility of "autocatalytic methods" which would increase the efficiency of the bomb as it exploded. According to Rhodes, Serber discussed fission cross sections, the energy spectrum of secondary neutrons, the average number of secondary neutrons per fission (measured by then to be about 2.2), the neutron capture process in U238 that led to plutonium and why ordinary uranium is safe (it would have to be enriched to at least 7 percent U235, the young theoretician pointed out, 'to make an explosive reaction possible'). The calculations Serber reported indicated a critical mass of metallic U235 tamped with a thick shell of ordinary uranium of 15 kilograms: 33 pounds. For plutonium similarly tamped the critical mass might be 5 kilograms: 11 pounds. Tamper always increased efficiency: it reflected
neutrons back into the core and its inertia...slowed the core's expansion and helped keep the core surface from blowing away. So there might be a third basic component to their atomic bomb besides nuclear core and confining tamper: an initiator - a Ra + Be source or, better, a Po + Be source, with the radium or polonium attached perhaps to one piece of the core and the beryllium to the other, to smash together and spray neutrons when the parts mated to start the chain reaction. The immediate work of experiment, Serber concluded, would be measuring the neutron properties of various materials and mastering the ordnance problem - the problem, that is, of assembling a critical mass and firing the bomb. The Primer was designated the first official Los Alamos technical report (LA-1), and though its information about the physics of fission and weapon design was soon rendered obsolete, it is still considered a foundational document in the history of nuclear weapons. Its contents would be of little use today to someone attempting to build a nuclear weapon, a fact acknowledged by its complete declassification in 1965. In 1992, an edited version of the Primer with many annotations and explanations by Serber was published with an introduction by Richard Rhodes, who had previously written The Making of the Atomic Bomb. The 1992 edition also contains the Frisch–Peierls memorandum, written in 1940 in England. == Reception == The physicist Freeman Dyson, who knew Serber, Oppenheimer, and other participants of the Manhattan Project, called the Primer a "legendary document in the literature of nuclear weapons". He praised "Serber's clear thinking", but harshly criticized the Primer's publication, writing "I still wish that it had been allowed to languish in obscurity for another century or two." Acknowledging that it was declassified in
1965, and that it could be of no technical use to a bomb designer even decades ago, Dyson still argued that such a publication can be dangerous: There is nothing here that would have been technically useful to a Russian bomb designer in 1950 or to an Iraqi bomb designer in 1990. But the primer contains much more than technical information. It conveys a powerful message that bomb designing is fun. The primer succeeds all too well in recreating the Los Alamos mystique, the picture of this brilliant group of city slickers suddenly dumped into the remotest corner of the Wild West and having the best time of their lives building bombs. It helps to perpetuate the myth. ... This is what I mean by seduction - the myth, unfortunately containing an element of truth, that building bombs is a wild, consciousness-raising adventure. Dyson compared bomb-building with LSD synthesis: "Nuclear weapons and LSD are both highly addictive. Both have been manufactured extensively by bright young people seduced by a myth and searching for adventure. Both have destroyed many lives and are likely to destroy many more if the myths are not dispelled. ... Books that present either LSD or nuclear bombs as a romantic adventure can be a danger to public health and safety." His article, titled "Dragon's Teeth", reflects another analogy he uses in his criticism: We are here confronting an ethical dilemma that is at least 350 years old, the same dilemma that John Milton confronted in his historic battle for the freedom of the press in 17th-century England. Milton, in his famous appeal "Areopagitica", addressed to the English parliament in 1644, conceded to his enemies the point that books "are as lively and as vigorously productive as those fabulous dragon's teeth, and being sown up and down, may chance to spring
up armed men." He conceded that the risks of letting books go free into the world could be lethal as well as irreversible. He argued that the risks must still be accepted, because the censorship of books was the greater evil. He lost the argument, and in his day the censors prevailed. In our day, the censors have lost their grip, but the ethical dilemma remains. Books have not lost their power to spring up armed men, to seduce and to destroy. The fact that this primer was declassified 26 years ago does not mean that we can spread it over the world without some responsibility for the consequences. Dyson concludes his review by writing: "With luck, this charming little book will be read only by elderly physicists and historians, people who can appreciate its elegance without being seduced by its magic." Other reviews, however, were more favorable. John F. Ahearne writes that the book "remains mathematical", and that young scientists can benefit from "the insight to be gained from reading Serber's lucid descriptions of how to analyze complex events by using first approximations. Serber was speaking in many cases to a group of experimentalists who, as one of the experimental group leaders is noted as saying, found "a qualitative argument was more convincing than any amount of fancy theory." A good physicist should be able to get an approximate answer to any complex question using what he or she carries around in the head, the well known "back of the envelope" calculation." Paul W. Henriksen praised the book, writing that "one will be even more impressed with the magnitude of the effort to build an atomic bomb to try to end World War II". He notes that the "annotated version is fascinating in several respects. It is a
rare instance in which one of the contributors to a historical event has gone back and explained his work, its importance, and the mistakes that were made at the same time." He also notes that the book is "one of the few books to deal at all with the technical side of the bomb project." Matthew Hersch writes that the book has "power to amaze", and that "The Los Alamos Primer is a work bound to be read differently by different generations ... [it] is a rich text that peers into a moment of innovation that had global consequences." Frank A. Settle also finds the primer to be unique in style and context, and sees it as "a significant contribution to the technical and scientific history of this important period." == Publication history == Serber, Robert (1992). The Los Alamos Primer: The First Lectures on How to Build an Atomic Bomb. Berkeley: University of California Press. ISBN 0-520-07576-5. Serber, Robert; Rhodes, Richard (2020). The Los Alamos Primer: The First Lectures on How to Build an Atomic Bomb, Updated with a New Introduction by Richard Rhodes (1st ed.). University of California Press. ISBN 978-0-520-34417-4. JSTOR j.ctvw1d5pf. == Literature == === Articles === Reed, B. C. (1 September 2017). "Revisiting The Los Alamos Primer". Physics Today. 70 (9): 42–49. doi:10.1063/PT.3.3692. Reed, B. C. (1 November 2016). "A physicist's guide to The Los Alamos Primer". Physica Scripta. 91 (11): 113002. doi:10.1088/0031-8949/91/11/113002. === Editions === Serber, R. (2020). Rhodes, R. (ed.). The Los Alamos Primer: The First Lectures on How to Build an Atomic Bomb (Updated). University of California Press. ISBN 978-0-520-37433-1. Serber, R. (1992). Rhodes, R. (ed.). The Los Alamos Primer: The First Lectures on How to Build an Atomic Bomb. University of California Press. ISBN 978-0-520-07576-4. === Original === Serber, R., Condon,
E. U. (1943), The Los Alamos Primer (PDF), retrieved 1 January 2024 == References == == External links == LANL (2023-07-19). "The Los Alamos Primer". Retrieved 2024-01-01.
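The chain-reaction arithmetic summarized above (about 2.2 secondary neutrons per fission) can be illustrated with a back-of-the-envelope script in the Primer's own spirit. The per-fission energy, the ~10 ns generation time, and the full-fission assumption below are round illustrative textbook values, not figures taken from the Primer itself:

```python
import math

# Back-of-the-envelope chain-reaction arithmetic with illustrative values.
AVOGADRO = 6.022e23
ATOMS_PER_KG_U235 = AVOGADRO * 1000.0 / 235.0    # atoms in 1 kg of U-235
NU = 2.2                                         # secondary neutrons per fission (per the text)
ENERGY_PER_FISSION_J = 180e6 * 1.602e-19         # assumed ~180 MeV usable energy, in joules
GENERATION_TIME_S = 1e-8                         # assumed ~10 ns per neutron generation

# Generations for one neutron to multiply into enough fissions to consume
# 1 kg: nu**k ~ atoms, so k = ln(atoms) / ln(nu).
k = math.log(ATOMS_PER_KG_U235) / math.log(NU)
total_energy_j = ATOMS_PER_KG_U235 * ENERGY_PER_FISSION_J
kilotons = total_energy_j / 4.184e12             # 1 kt TNT = 4.184e12 J

print(f"generations: {k:.0f}")                   # ~71: most energy comes in the last few
print(f"time: {k * GENERATION_TIME_S * 1e6:.2f} microseconds")
print(f"yield of 1 kg fully fissioned: {kilotons:.0f} kt TNT")
```

The point of the sketch is the one Serber's audience needed: the whole reaction runs to completion in under a microsecond, which is why assembly speed (the "ordnance problem") dominated the engineering.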
In particle physics, the crypton is a hypothetical superheavy particle, thought to exist in a hidden sector of string theory. It has been proposed as a candidate particle to explain the dark matter content of the universe. Cryptons arising in the hidden sector of a superstring-derived flipped SU(5) GUT model have been shown to be metastable, with a lifetime exceeding the age of the universe. Their slow decays may provide a source of ultra-high-energy cosmic rays (UHECRs). == References == John Ellis; Jorge L. Lopez; D.V. Nanopoulos (1990). "Confinement of fractional charges yields integer-charged relics in string models". Physics Letters B. 247 (2–3): 257–264. Bibcode:1990PhLB..247..257E. doi:10.1016/0370-2693(90)90893-B. hdl:1969.1/181059. Karim Benakli; John Ellis; D.V. Nanopoulos (1999). "Natural Candidates for Superheavy Dark Matter in String and M Theory". Physical Review D. 59 (4): 047301. arXiv:hep-ph/9803333. Bibcode:1999PhRvD..59d7301B. doi:10.1103/PhysRevD.59.047301. S2CID 119382848. John Ellis; V.E. Mayes; D.V. Nanopoulos (2004). "Flipped Cryptons and the Ultra-high Energy Cosmic Rays". Physical Review D. 70 (7): 075015. arXiv:hep-ph/0403144. Bibcode:2004PhRvD..70g5015E. doi:10.1103/PhysRevD.70.075015. S2CID 119360169. John Ellis; V.E. Mayes; D.V. Nanopoulos (2006). "Ultrahigh Energy Cosmic Ray Particle Spectra from Crypton Decays". Physical Review D. 74 (11): 115003. arXiv:astro-ph/0512303. Bibcode:2006PhRvD..74k5003E. doi:10.1103/PhysRevD.74.115003. S2CID 119382442.
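The compatibility of "lifetime exceeding the age of the universe" with "slow decays may provide a source" follows from first-order decay statistics: for t much less than τ the decayed fraction 1 − exp(−t/τ) ≈ t/τ is tiny but nonzero. A minimal sketch, where the assumed lifetime is an arbitrary illustrative value, not one from the cited papers:

```python
import math

# For a metastable particle with lifetime tau >> age of the universe,
# a small but nonzero fraction has decayed by today: ~ t/tau.
AGE_OF_UNIVERSE_S = 4.35e17          # ~13.8 billion years in seconds
tau = 1e3 * AGE_OF_UNIVERSE_S        # assumed lifetime: 1000x the age of the universe

x = AGE_OF_UNIVERSE_S / tau          # = 1e-3
fraction_decayed = 1.0 - math.exp(-x)
print(fraction_decayed)              # ~1e-3: one in a thousand particles has decayed so far
```

For a cosmologically abundant relic, even such a tiny decayed fraction corresponds to a steady present-day flux of decay products.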
|
{
"page_id": 1839990,
"source": null,
"title": "Crypton (particle)"
}
|
The Pathogen-Host Interactions database (PHI-base) is a biological database that contains manually curated information on genes experimentally proven to affect the outcome of pathogen-host interactions. The database has been maintained by researchers at Rothamsted Research and external collaborators since 2005. PHI-base has been part of the UK node of ELIXIR, the European life-science infrastructure for biological information, since 2016. == Background == The Pathogen-Host Interactions database was developed to utilise the growing number of verified genes that mediate an organism's ability to cause disease and/or trigger host responses. The web-accessible database catalogues experimentally verified pathogenicity, virulence, and effector genes from bacterial, fungal, and oomycete pathogens which infect animal, plant, and fungal hosts. PHI-base was the first online resource devoted to the identification and presentation of information on fungal and oomycete pathogenicity genes and their host interactions. PHI-base is a resource for the discovery of candidate targets in medically and agronomically important fungal and oomycete pathogens for intervention with synthetic chemistries and natural products (fungicides). Each entry in PHI-base is curated by domain experts and supported by strong experimental evidence (gene disruption experiments) as well as literature references in which the experiments are described. Each gene in PHI-base is presented with its nucleotide and deduced amino acid sequence as well as a detailed structured description of the predicted protein's function during the host infection process. To facilitate data interoperability, genes are annotated using controlled vocabularies (Gene Ontology terms, EC Numbers, etc.) and linked to other external data sources such as UniProt, EMBL, and the NCBI taxonomy services.
== Current developments == Version 4.17 (May 2024) of PHI-base provides information on 9,973 genes from 296 pathogens and 249 hosts, covering 22,415 interactions, as well as efficacy information on ~20 drugs and their target sequences in the pathogen. PHI-base currently
|
{
"page_id": 8721272,
"source": null,
"title": "PHI-base"
}
|
focuses on plant pathogenic and human pathogenic organisms including fungi, oomycetes, and bacteria. The entire contents of the database can be downloaded in a tab-delimited format. Since the launch of version 4, PHI-base has also been searchable using the PHIB-BLAST search tool, which uses the BLAST algorithm to compare a user's sequence against the sequences available from PHI-base. The database providers recently announced the launch of PHI-base 5, a new gene-centric version of PHI-base, through a press release on the Rothamsted Research website. A summary of the improvements made is also available. In 2016 the plant portion of PHI-base was used to establish a Semantic PHI-base search tool. PHI-base has been aligned with Ensembl Genomes since 2011, FungiDB since 2016, and Global Biotic Interactions (GloBI) since 2018. All new PHI-base releases are integrated by these independent databases. PHI-base is a resource for many applications, including: the discovery of conserved genes in medically and agronomically important pathogens, which may be potential targets for chemical intervention; comparative genome analyses; annotation of newly sequenced pathogen genomes; functional interpretation of RNA sequencing and microarray experiments; and the rapid cross-checking of phenotypic differences between pathogenic species when writing articles for peer review. PHI-base has been cited in over 900 peer-reviewed articles. Since 2015, the website has linked to an online literature curation tool called PHI-Canto, enabling community-driven literature curation for various pathogenic species. PHI-Canto's community curation framework offers not only a curation tool but also a phenotype ontology and controlled vocabularies, using unified terms and rules drawn from biology experiments.
The central concept of this framework is the introduction of a 'Metagenotype', which allows the annotation and assignment of phenotypes to specific pathogen mutant-host interactions. PHI-Canto extends the single species curation tool developed for PomBase
(https://www.pombase.org), the model organism database for fission yeast. == Funding == PHI-base is a National Capability funded by the Biotechnology and Biological Sciences Research Council (BBSRC), a UK research council. == References == == External links == PHI-base
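Because the whole database is downloadable in tab-delimited form, simple local analyses need only the standard library. A minimal filtering sketch follows; the column names ("Gene", "Pathogen_species", "Host_species", "Mutant_Phenotype") and the phenotype wording are assumptions for illustration, not the actual PHI-base schema, so check the header of a real release first:

```python
import csv
import io

# Hypothetical PHI-base-style tab-delimited export (column names assumed).
sample_tsv = (
    "Gene\tPathogen_species\tHost_species\tMutant_Phenotype\n"
    "ABC1\tFusarium graminearum\tTriticum aestivum\treduced virulence\n"
    "XYZ2\tMagnaporthe oryzae\tOryza sativa\tunaffected pathogenicity\n"
)

# DictReader maps each row to a dict keyed by the header line.
reader = csv.DictReader(io.StringIO(sample_tsv), delimiter="\t")
reduced = [row["Gene"] for row in reader
           if row["Mutant_Phenotype"] == "reduced virulence"]
print(reduced)  # genes whose disruption reduced virulence
```

For a real download, replace the `io.StringIO` wrapper with an open file handle on the release's TSV file.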
The Double Helix: A Personal Account of the Discovery of the Structure of DNA is an autobiographical account of the discovery of the double helix structure of DNA written by James D. Watson and published in 1968. It has earned both critical and public praise, along with continuing controversy about credit for the Nobel award and attitudes towards female scientists at the time of the discovery. == Significance == Watson is a U.S. molecular biologist, geneticist and zoologist, best known as one of the co-discoverers of the structure of DNA in 1953 with Francis Crick. In 1998, the Modern Library placed The Double Helix at number 7 on its list of the 100 best nonfiction books of the 20th century. In 2012, The Double Helix was named as one of the 88 "Books That Shaped America" by the Library of Congress. Though an important book about an immensely important subject, it was and remains a controversial account. Though it was originally slated to be published by Harvard University Press, Watson's home university, Harvard dropped the arrangement after protestations from Francis Crick and Maurice Wilkins, co-discoverers of the structure of DNA, and it was published instead by Atheneum in the United States and Weidenfeld & Nicolson in the UK. The intimate first-person memoir about scientific discovery was unusual for its time. The book has been hailed for its highly personal view of scientific work, though it has also been criticised for caring only about the glory of priority and for suggesting the author was willing to appropriate data from others surreptitiously in order to obtain it. It has also been criticized as being disagreeably sexist towards Rosalind Franklin, another participant in the discovery, who was deceased by the time Watson's book was written. The events described in the book were dramatized in
|
{
"page_id": 922489,
"source": null,
"title": "The Double Helix"
}
|
a BBC television program Life Story (known as The Race for the Double Helix in the U.S.). == Criticism == A 1980 Norton Critical Edition of The Double Helix, edited by Gunther Stent, analyzed the events surrounding its initial publication. It presents a selection of both positive and negative reviews of the book, by such figures as Philip Morrison, Richard Lewontin, Alex Comfort, Jacob Bronowski, and more in-depth analyses by Peter Medawar, Robert K. Merton, and Andre Lwoff. Erwin Chargaff declined permission to reprint his unsympathetic review from the March 29, 1968, issue of Science, but letters in response from Max Perutz, Maurice Wilkins, and Watson are printed. Also included are retrospectives from a 1974 edition of Nature written by Francis Crick and Linus Pauling, and an analysis of Franklin's work by her student Aaron Klug. The Norton edition concludes with the 1953 papers on DNA structure as published in Nature. In the book Rosalind Franklin and DNA, author Anne Sayre is very critical of Watson's account. She claims that Watson's book did not give a balanced description of Rosalind Franklin and the nature of her interactions with Maurice Wilkins at King's College, London. Sayre's book raises doubts about the ethics of how Watson and Crick used some of Franklin's results and whether adequate credit was given to her. Watson had very limited contact with Franklin during the time she worked on DNA. By providing more information about Franklin's life than was included in Watson's book, it was possible for Sayre to provide a different perspective on the role Franklin played in Watson and Crick's discovery of the double helix structure of DNA. (See: King's College London DNA Controversy.) In the book's preface, Watson explains that he is describing his impressions at the time of the events, and not at
the time he wrote the book. In the epilogue Watson writes: "Since my initial impressions about [Franklin], both scientific and personal (as recorded in the early pages of this book) were often wrong I want to say something here about her achievements." He goes on to describe her superb work and, despite this, the enormous barriers she faced as a woman in the field of science. He also acknowledged that it took years to overcome their bickering before he could appreciate Franklin's generosity and integrity. == An annotated and illustrated edition == An annotated and illustrated version of the book, edited by Alex Gann and Jan Witkowski, was published in November 2012 by Simon & Schuster in association with Cold Spring Harbor Laboratory Press. The new edition coincided with the fiftieth anniversary of the award of the 1962 Nobel Prize in Physiology or Medicine to Francis Crick, James D. Watson and Maurice Wilkins. It contains over three hundred annotations on the events and characters portrayed, with facsimile letters and contemporary photographs, many previously unpublished. Their sources include newly discovered correspondence from Crick, the papers of Franklin, Pauling, and Wilkins, and they include a chapter dropped from the original edition that described Watson's holiday in the Italian Alps in 1952. The edition was favorably reviewed in The New York Times by Nicholas Wade, who commented that "anyone seeking to understand modern biology and genomics could do much worse than start with the discovery of the structure of DNA, on which almost everything else is based." This edition includes several appendices, including letters by Crick and Watson giving the first account of the discovery, a previously unpublished chapter, an account of the controversy surrounding the publication, and the unsympathetic review by the late Erwin Chargaff from the March 29, 1968, issue of Science,
which he previously declined permission to reprint in the 1980 Norton Critical Edition of The Double Helix edited by Gunther Stent. The book does not include the four press cuttings from the News Chronicle, Varsity and The New York Times (two cuttings) of May and June 1953 regarding the discovery of the structure of DNA, and Crick's letter of 13 April 1967 is incomplete. == Film adaptation == In 1987, the memoir was adapted as a 107-minute television docudrama called Life Story for the BBC, airing on Horizon, the long-running British documentary television series on BBC Two that covers science and philosophy. The script was written by William Nicholson, and it was produced and directed by Mick Jackson. Jeff Goldblum starred as Watson, with Tim Pigott-Smith as Francis Crick, Juliet Stevenson as Rosalind Franklin, and Alan Howard as Maurice Wilkins. The film won several awards in the UK and U.S., including the 1988 BAFTA TV Award as the Best Single Drama. == Notes == == References == James D. Watson, The Double Helix: A Personal Account of the Discovery of the Structure of DNA (1968), Atheneum, 1980, ISBN 0-689-70602-2, OCLC 6197022 James D. Watson, The Annotated and Illustrated Double Helix, edited by Alexander Gann and Jan Witkowski (2012) Simon & Schuster, ISBN 978-1-4767-1549-0. James D. Watson, The Double Helix: A Personal Account of the Discovery of the Structure of DNA (1980 Norton Critical Edition), editor Gunther Stent, W.W. Norton, ISBN 0-393-95075-1. Maddox, Brenda (2002). Rosalind Franklin: the dark lady of DNA. HarperCollins. ISBN 0-393-32044-8. Sayre, Anne. Rosalind Franklin and DNA (1975), New York: W.W. Norton and Company, ISBN 0-393-32044-8 Wilkins, Maurice, The Third Man of the Double Helix: The Autobiography (2003), Oxford U Press, ISBN 0-19-860665-6 == External links == [1] Interview with editors of the Annotated and Illustrated edition, 2012
Photos of the first edition of The Double Helix A Reader's Guide to The Double Helix, 2009 by Kenneth R. Miller, a biology professor at Brown University Resource Page for The Double Helix used in Biology 20, The Foundations of Living Systems, a course at Brown University [2] 'DNA Pioneer James Watson Reveals Helix Story Was Almost Never Told,' Robin McKie, The Observer, 8 December 2012
A thermal mass refrigerator is a refrigerator fitted with thermal mass as well as insulation to decrease the energy use of the refrigerator. A particularly popular thermal mass refrigerator was conceived by Michael Reynolds and detailed in his 1993 book Earthship Volume 3. This refrigerator was a DIY refrigerator designed around a (Sun-Frost) DC refrigeration unit run on PV panels. == Design == The thermal mass used in Michael Reynolds' design is a combination of a liquid (i.e. water or beer) together with concrete mass. Concrete's temperature can be decreased quickly, while a liquid such as beer or water, with its higher heat capacity, requires more energy to change temperature and so holds the cold for longer. In Michael Reynolds' design, the liquid is added in the form of beer cans placed in the back of the refrigerator. Besides the use of a large quantity of thermal mass, he also made sure that the inside of the box could be in direct contact with the outside air, so as to allow the unit to be cooled without electricity or compressor assistance during winter. This is done by making the top of the unit openable via a skylight. A pipe connects the bottom to the outside as well, so as to allow natural circulation of air. The inflow of outside air from the top can be closed off by closing the skylight as well as by closing the removable insulated damper. == See also == Kimchi refrigerator == References ==
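The liquid-versus-concrete reasoning comes down to specific heat capacity, Q = m · c · ΔT. A minimal sketch with standard handbook heat capacities; the 10 kg mass and 5 K allowed temperature swing are arbitrary illustrative assumptions:

```python
# Heat absorbed before the interior warms by DELTA_T: Q = m * c * dT.
C_WATER = 4186.0     # J/(kg*K), liquid water (beer is close to this)
C_CONCRETE = 880.0   # J/(kg*K), typical concrete
MASS_KG = 10.0       # assumed mass of each thermal store
DELTA_T = 5.0        # assumed allowed temperature rise inside the box, K

q_water = MASS_KG * C_WATER * DELTA_T
q_concrete = MASS_KG * C_CONCRETE * DELTA_T
print(q_water / q_concrete)  # ~4.8: water buffers nearly 5x more heat per kg
```

This is why the design pairs concrete (quick to chill) with cans of liquid (slow to warm): the liquid dominates the cold storage per kilogram.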
|
{
"page_id": 41620348,
"source": null,
"title": "Thermal mass refrigerator"
}
|
This is a list of investigational sleep drugs, or drugs for the treatment of sleep disorders that are currently under development for clinical use but are not yet approved. Chemical/generic names are listed first, with developmental code names, synonyms, and brand names in parentheses. This list was last comprehensively updated sometime between June 2017 and August 2021. It is likely to become outdated with time. == Insomnia == === GABAA receptor potentiators === Lorediplon (GF-015535-00) – GABAA receptor positive allosteric modulator [1] Zuranolone (SAGE-217) – GABAA receptor positive allosteric modulator [2] === Orexin receptor antagonists === Seltorexant (MIN-202, JNJ-42847922, JNJ-922) – selective OX2 receptor antagonist [3] Vornorexant (ORN-0829, TS-142) – dual OX1 and OX2 receptor antagonist [4] === Melatonin receptor agonists === Piromelatine (Neu-P11) – melatonin receptor agonist and 5-HT1A and 5-HT1D receptor agonist [5] === Nociceptin receptor agonists === Sunobinop (IMB-115, IT-1315, S-117957, V-117957) – nociceptin receptor agonist [6][7] == Hypersomnia/narcolepsy == === Orexin receptor agonists === ALKS-2680 – selective OX2 receptor agonist Danavorexton (TAK-925) – selective OX2 receptor agonist via IV infusion[8] E-2086 – selective OX2 receptor agonist Firazorexton (TAK-994) – OX2 receptor agonist pill (orally available)[9] Oveporexton (TAK-861) – selective OX2 receptor agonist (orally available) [10] Suntinorexton – selective OX2 receptor agonist (orally available) [11] === Monoaminergics === Flecainide/modafinil (THN-102) – modafinil (dopamine reuptake inhibitor) and flecainide (antiarrhythmic) combination [12] Mazindol controlled release (NLS-1001, NLS-1) – norepinephrine-predominant serotonin–norepinephrine–dopamine reuptake inhibitor (SNDRI)[13] Reboxetine (AXS-12) – norepinephrine reuptake inhibitor (NRI) [14] === GHB receptor agonists === === H3 receptor inverse agonists === Samelisant (SUVN-G3031) – H3 receptor antagonist [15] === GABAA 
receptor inhibitors === Pentylenetetrazole (BTD-001) – GABAA receptor negative allosteric modulator [16] == See also == List of investigational drugs == References ==
|
{
"page_id": 54203260,
"source": null,
"title": "List of investigational sleep drugs"
}
|
David Herbert Samuel, 3rd Viscount Samuel OBE (Hebrew: דוד הרברט סמואל; 8 July 1922 – 7 October 2014) was an Anglo-Israeli chemist and neurobiologist. Samuel was the son of Edwin Samuel, 2nd Viscount Samuel, and the grandson of the British-Jewish diplomat Herbert Samuel, 1st Viscount Samuel. He was a second cousin of Rosalind Franklin. He was educated at Balliol College, Oxford, and served in the Royal Artillery in the Second World War in India, Burma and Sumatra, reaching the rank of captain. He then commenced a distinguished academic career, and was one of the founding fathers of the faculty of chemistry at the Weizmann Institute of Science. Late in his career, his scientific interests moved from physical chemistry (particularly chemical kinetics) to neurobiology. He was the author of over 300 publications as well as the book Memory: How We Use It, Lose It and Can Improve It. From 14 November 1978 until 11 November 1999 he was a member of the British House of Lords, although his only appearance in the official record, Hansard, was taking his oath in 1995. This made him the only Israeli who ever served in this capacity. == References ==
|
{
"page_id": 1839997,
"source": null,
"title": "David Samuel, 3rd Viscount Samuel"
}
|
In thermodynamics, the limit of local stability against phase separation with respect to small fluctuations is clearly defined by the condition that the second derivative of the Gibbs free energy with respect to composition is zero: d²G/dx² = 0. The locus of these points (the inflection points within a G-x or G-c curve, Gibbs free energy as a function of composition) is known as the spinodal curve. For compositions within this curve, infinitesimally small fluctuations in composition and density will lead to phase separation via spinodal decomposition. Outside of the curve, the solution will be at least metastable with respect to fluctuations. In other words, outside the spinodal curve some careful process may obtain a single phase system. Inside it, only processes far from thermodynamic equilibrium, such as physical vapor deposition, will enable one to prepare single phase compositions. The locus of coexisting compositions, defined by the common tangent construction, is known as the binodal or coexistence curve, which denotes the minimum-energy equilibrium state of the system. Increasing temperature results in a decreasing difference between mixing entropy and mixing enthalpy, and thus the coexisting compositions come closer. The binodal curve forms the basis for the miscibility gap in a phase diagram. The free energy of mixing changes with temperature and concentration, and the binodal and spinodal meet at the critical or consolute temperature and composition. == Criterion == For binary solutions, the thermodynamic criterion which defines the spinodal curve is that the second derivative of free energy with respect to density or some composition variable is zero. == Critical point == Extrema of the spinodal in a temperature vs composition plot coincide with those of the binodal curve, and are known as critical points. The spinodal itself can be thought of as
|
{
"page_id": 25105276,
"source": null,
"title": "Spinodal"
}
|
a line of pseudocritical points, with the correlation function taking a scaling form with non-classical critical exponents. Strictly speaking, a spinodal is defined as a mean field theoretic object. As such, the spinodal does not exist in real systems, but one can extrapolate to infer the existence of a pseudospinodal that exhibits critical-like behavior such as critical slowing down. === Isothermal liquid-liquid equilibria === In the case of ternary isothermal liquid-liquid equilibria, the spinodal curve (obtained from the Hessian matrix) and the corresponding critical point can be used to help the experimental data correlation process. == References ==
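The criterion d²G/dx² = 0 can be made concrete with the symmetric regular-solution model, a standard textbook free energy not discussed in the article itself; the interaction parameter Ω below is an arbitrary illustrative choice:

```python
# Spinodal of the symmetric regular-solution model, a minimal sketch.
# G_mix(x) = R*T*(x ln x + (1-x) ln(1-x)) + OMEGA*x*(1-x), so
# d2G/dx2 = R*T*(1/x + 1/(1-x)) - 2*OMEGA, and setting it to zero gives
# the spinodal temperature T_s(x) = 2*OMEGA*x*(1-x)/R, which peaks at the
# critical (consolute) point x = 1/2, T_c = OMEGA/(2R).
R = 8.314                 # J/(mol*K)
OMEGA = 2.0 * R * 300.0   # assumed interaction parameter, chosen so T_c = 300 K

def d2G_dx2(x, T):
    """Second composition derivative of the mixing free energy."""
    return R * T * (1.0 / x + 1.0 / (1.0 - x)) - 2.0 * OMEGA

def spinodal_T(x):
    """Temperature below which composition x is locally unstable."""
    return 2.0 * OMEGA * x * (1.0 - x) / R

T_c = spinodal_T(0.5)
print(T_c)                       # 300.0
print(d2G_dx2(0.5, 290.0) < 0)   # True: inside the spinodal, locally unstable
print(d2G_dx2(0.5, 310.0) > 0)   # True: outside, at least metastable
```

The sign of d²G/dx² plays exactly the role described above: negative inside the spinodal (decomposition proceeds from infinitesimal fluctuations), positive outside (a nucleation barrier protects the single phase).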
Schmaltz herring is herring caught just before spawning, when the fat (schmaltz in Yiddish) in the fish is at a maximum. Colloquially, schmaltz herring refers to this fish pickled in brine: see pickled herring. == References == == External links == "Herring". The Food Lovers' Companion. Retrieved March 30, 2005.
|
{
"page_id": 1184645,
"source": null,
"title": "Schmaltz herring"
}
|
RecLOH is a term in genetics that is an abbreviation for "Recombinational Loss of Heterozygosity". It is a type of mutation that occurs in DNA through recombination. From a pair of equivalent ("homologous") but slightly different (heterozygous) genes, a pair of identical genes results. In this case there is a non-reciprocal exchange of genetic material between the chromosomes, in contrast to chromosomal crossover, because genetic information is lost. == For Y chromosome == In genetic genealogy, the term is used particularly concerning similar-seeming events in Y chromosome DNA. This type of mutation happens within one chromosome, and does not involve a reciprocal transfer. Rather, one homologous segment "writes over" the other. The mechanism is presumed to be different from RecLOH events in autosomal chromosomes, since the target is the very same chromosome instead of the homologous one. During the mutation one of these copies overwrites the other. Thus the differences between the two are lost. Because differences are lost, heterozygosity is lost. Recombination on the Y chromosome does not only take place during meiosis, but virtually at every mitosis when the Y chromosome condenses, because it does not require pairing between chromosomes. The recombination frequency even exceeds the frameshift mutation frequency (slipped-strand mispairing) of (average fast) Y-STRs; however, many recombination products may lead to infertile germ cells and "daughter out". Recombination events (RecLOH) can be observed if Y-STR databases are searched for twin alleles at 3 or more duplicated markers on the same palindrome (hairpin). E.g. DYS459, DYS464 and DYS724 (CDY) are located on the same palindrome P1. A high proportion of 9-9, 15-15-17-17, 36-36 combinations and similar twin allelic patterns will be found. PCR typing technologies have been developed (e.g. DYS464X) that are able to verify that in most cases there really are two alleles of each, so we can
|
{
"page_id": 3150728,
"source": null,
"title": "RecLOH"
}
|
be sure that there is no gene deletion. Family genealogies have shown many times that parallel changes on all markers located on the same palindrome are frequently observed, and that the result of those changes is always twin alleles. So a 9–10, 15-16-17-17, 36-38 haplotype can change in one recombination event to the one mentioned above, because all three markers (DYS459, DYS464 and DYS724) are affected by one and the same RecLOH event. == See also == Null allele Paternal mtDNA transmission List of genetic genealogy topics == References == Krahn, Thomas (2005). "Recombinational Loss of Heterozygosity (recLOH)". DNA-Fingerprint, Germany. Archived from the original on 2010-11-29. Retrieved 2006-07-11. == External links == RecLOH explained
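The palindrome-wide change described above can be sketched as a toy simulation. The marker names come from the article, but the assignment of specific alleles to the two palindrome arms below is an illustrative assumption, not data from the text:

```python
# Toy model of a RecLOH event on palindrome P1. Each marker carries one
# or more repeat-count alleles on each arm of the palindrome; a single
# RecLOH event overwrites one arm with the other, turning every marker
# on that palindrome into "twin" alleles at once.
# NOTE: the arm assignment of alleles is assumed for illustration.

palindrome_p1 = {                    # {marker: (arm_a_alleles, arm_b_alleles)}
    "DYS459":       ((9,),      (10,)),
    "DYS464":       ((15, 17),  (16, 17)),
    "DYS724 (CDY)": ((36,),     (38,)),
}

def recloh(palindrome, surviving_arm=0):
    """Copy the surviving arm over the other arm for ALL markers at once."""
    return {marker: (arms[surviving_arm], arms[surviving_arm])
            for marker, arms in palindrome.items()}

after = recloh(palindrome_p1)
for marker, (arm_a, arm_b) in after.items():
    print(marker, "-".join(str(x) for x in sorted(arm_a + arm_b)))
```

Starting from the 9-10, 15-16-17-17, 36-38 haplotype of the example, one simulated event yields the twin-allelic 9-9, 15-15-17-17, 36-36 pattern described in the text.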
|
{
"page_id": 3150728,
"source": null,
"title": "RecLOH"
}
|
Carboxypeptidase G may refer to one of the following: Glutamate carboxypeptidase, an enzyme Gamma-glutamyl hydrolase, an enzyme
|
{
"page_id": 39261065,
"source": null,
"title": "Carboxypeptidase G"
}
|
The beak is part of the shell of a bivalve mollusk, i.e. part of the shell of a saltwater or freshwater clam. The beak is the basal projection of the oldest part of the valve of the adult animal. The beak usually, but not always, coincides with the umbo, the highest and most prominent point on the valve. Because, by definition, all bivalves have two valves, the shell of a bivalve has two umbones and two beaks. In many species of bivalves the beaks point towards one another. However, in some species the beaks point posteriorly, in which case they are referred to as opisthogyrate; in others the beaks point forward and are described as prosogyrate. If the beak is not eroded or worn down at all, it may still be capped with the prodissoconch, which is the larval shell of the animal. == References ==
|
{
"page_id": 43455370,
"source": null,
"title": "Beak (bivalve)"
}
|
Universal conductance fluctuations (UCF) in mesoscopic physics is a phenomenon encountered in electrical transport experiments in mesoscopic samples. The measured electrical conductance varies from sample to sample, mainly due to inhomogeneous scattering sites. The fluctuations originate from coherence effects of the electronic wavefunctions, and thus the phase-coherence length l ϕ {\displaystyle \textstyle l_{\phi }} needs to be larger than the momentum relaxation length l m {\displaystyle \textstyle l_{m}} . UCF is more pronounced when electrical transport is in the weak localization regime, l ϕ < l c {\displaystyle \textstyle l_{\phi }<l_{c}} , where l c = M ⋅ l m {\displaystyle l_{c}=M\cdot l_{m}} , M {\displaystyle \textstyle M} is the number of conduction channels and l m {\displaystyle \textstyle l_{m}} is the momentum relaxation length, or mean free path, due to scattering events such as phonon scattering. For weakly localized samples the fluctuation in conductance is equal to the fundamental conductance G o = 2 e 2 / h {\displaystyle \textstyle G_{o}=2e^{2}/h} regardless of the number of channels. Many factors influence the amplitude of UCF. At zero temperature without decoherence, the UCF is influenced mainly by two factors, the symmetry and the shape of the sample. Recently, a third key factor, the anisotropy of the Fermi surface, was also found to fundamentally influence the amplitude of UCF. == See also == Speckle patterns, the optical analogues of conductance fluctuation patterns. == References == === General references === Akkermans and Montambaux, Mesoscopic Physics of Electrons and Photons, Cambridge University Press (2007) Supriyo Datta, Electronic Transport in Mesoscopic Systems, Cambridge University Press (1995) R. Saito, G. Dresselhaus and M. S. Dresselhaus, Physical Properties of Carbon Nanotubes, Imperial College Press (1998) Lee, P.; Stone, A. (1985). "Universal Conductance Fluctuations in Metals". Physical Review Letters. 55 (15): 1622–1625. Bibcode:1985PhRvL..55.1622L.
doi:10.1103/PhysRevLett.55.1622. PMID 10031872. Boris Altshuler (1985), Pis'ma Zh. Eksp. Teor. Fiz. 41: 530 [JETP Lett.
|
{
"page_id": 13439882,
"source": null,
"title": "Universal conductance fluctuations"
}
|
41: 648] .
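As a quick numerical check, the conductance quantum G_o = 2e^2/h that sets the fluctuation scale above can be evaluated from the exact SI defining values of the elementary charge and the Planck constant:

```python
# Evaluate the conductance quantum G_o = 2 e^2 / h.
# e and h are exact by definition in the 2019 SI.
e = 1.602176634e-19   # elementary charge, C (exact)
h = 6.62607015e-34    # Planck constant, J*s (exact)

G0 = 2 * e**2 / h     # conductance quantum, in siemens
print(f"G_o = {G0:.6e} S")   # about 7.748e-5 S, i.e. (12.9 kOhm)^-1
```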
|
{
"page_id": 13439882,
"source": null,
"title": "Universal conductance fluctuations"
}
|
Photo-activated localization microscopy (PALM or FPALM) and stochastic optical reconstruction microscopy (STORM) are widefield (as opposed to point scanning techniques such as laser scanning confocal microscopy) fluorescence microscopy imaging methods that allow obtaining images with a resolution beyond the diffraction limit. The methods were proposed in 2006 in the wake of a general emergence of optical super-resolution microscopy methods, and were featured as Methods of the Year for 2008 by the Nature Methods journal. The development of PALM as a targeted biophysical imaging method was largely prompted by the discovery of new species and the engineering of mutants of fluorescent proteins displaying a controllable photochromism, such as photo-activatable GFP. However, the concomitant development of STORM, sharing the same fundamental principle, originally made use of paired cyanine dyes. One molecule of the pair (called activator), when excited near its absorption maximum, serves to reactivate the other molecule (called reporter) to the fluorescent state. A growing number of dyes are used for PALM, STORM and related techniques, both organic fluorophores and fluorescent proteins. Some are compatible with live cell imaging, while others allow faster acquisition or denser labeling. The choice of a particular fluorophore ultimately depends on the application and on its underlying photophysical properties.
== Principle == Conventional fluorescence microscopy is performed by selectively staining the sample with fluorescent molecules, either linked to antibodies as in immunohistochemistry or using fluorescent proteins genetically fused to the genes of interest. Typically, the more concentrated the fluorophores, the better the contrast of the fluorescence image. A single fluorophore can
|
{
"page_id": 36770702,
"source": null,
"title": "Photoactivated localization microscopy"
}
|
be visualized under a microscope (or even under the naked eye) if the number of photons emitted is sufficiently high and the background is low enough. The two-dimensional image of a point source observed under a microscope is an extended spot, corresponding to the Airy disk (a section of the point spread function) of the imaging system. The ability to identify as two individual entities two closely spaced fluorophores is limited by the diffraction of light. This is quantified by Abbe’s criterion, stating that the minimal distance d {\displaystyle d} that allows resolving two point sources is given by d = λ 2 N A {\displaystyle d={\frac {\lambda }{2NA}}} where λ {\displaystyle \lambda } is the wavelength of the fluorescent emission and NA is the numerical aperture of the microscope. The theoretical resolution limit at the shortest practical excitation wavelength is around 150 nm in the lateral dimension and approaching 400 nm in the axial dimension (using an objective with a numerical aperture of 1.40 and an excitation wavelength of 400 nm). However, if the emission from the two neighboring fluorescent molecules is made distinguishable, i.e. the photons coming from each of the two can be identified, then it is possible to overcome the diffraction limit. Once a set of photons from a specific molecule is collected, it forms a diffraction-limited spot in the image plane of the microscope. The center of this spot can be found by fitting the observed emission profile to a known geometrical function, typically a Gaussian function in two dimensions. The error that is made in localizing the center of a point emitter scales to a first approximation as the inverse square root of the number of emitted photons, and if enough photons are collected it is easy to obtain a
|
{
"page_id": 36770702,
"source": null,
"title": "Photoactivated localization microscopy"
}
|
localization error much smaller than the original point spread function. The two steps of identification and localization of individual fluorescent molecules in a dense environment where many are present are at the basis of PALM, STORM and their development. Although many approaches to molecular identification exist, the light-induced photochromism of selected fluorophores emerged as the most promising approach to distinguish neighboring molecules by separating their fluorescent emission in time. By turning on stochastically sparse subsets of fluorophores with light of a specific wavelength, individual molecules can then be excited and imaged according to their spectra. To avoid the accumulation of active fluorophores in the sample, which would eventually degrade the image back to a diffraction-limited one, the spontaneously occurring phenomenon of photobleaching is exploited in PALM, whereas reversible switching between a fluorescent on-state and a dark off-state of a dye is exploited in STORM. In summary, PALM and STORM are based on collecting under a fluorescence microscope a large number of images, each containing just a few active, isolated fluorophores. The imaging sequence allows for the many emission cycles necessary to stochastically activate each fluorophore from a non-emissive (or less emissive) state to a bright state, and back to a non-emissive or bleached state. During each cycle, the density of activated molecules is kept low enough that the molecular images of individual fluorophores do not typically overlap. === Localization of individual fluorophores === In each image of the sequence, the position of a fluorophore is calculated with a precision typically better than the diffraction limit - in the typical range of a few to tens of nm - and the resulting information on the positions of the centers of all the localized molecules is used to build up the super-resolution PALM or STORM image. The localization precision σ {\displaystyle \sigma } can
|
{
"page_id": 36770702,
"source": null,
"title": "Photoactivated localization microscopy"
}
|
be calculated according to the formula: σ = ( s i 2 + a 2 12 N ) ⋅ ( 16 9 + 8 π s i 2 b 2 a 2 N 2 ) {\displaystyle \sigma ={\sqrt {\left({\frac {s_{i}^{2}+{\frac {a^{2}}{12}}}{N}}\right)\cdot \left({\frac {16}{9}}+{\frac {8\pi s_{i}^{2}b^{2}}{a^{2}N^{2}}}\right)}}} where N is the number of collected photons, a is the pixel size of the imaging detector, b 2 {\displaystyle b^{2}} is the average background signal and s i {\displaystyle s_{i}} is the standard deviation of the point spread function. The requirement of localizing multiple fluorophores simultaneously over an extended area is the reason why these methods are wide-field, employing as a detector a CCD, an EMCCD or a CMOS camera. The requirement for an enhanced signal-to-noise ratio to maximize localization precision explains the frequent combination of this concept with widefield fluorescence microscopes allowing optical sectioning, such as total internal reflection fluorescence (TIRF) microscopes and light sheet fluorescence microscopes. == Super-resolution image == The resolution of the final image is limited by the precision of each localization and the number of localizations, instead of by diffraction. The super-resolution image is therefore a pointillistic representation of the coordinates of all the localized molecules. The super-resolution image is commonly rendered by representing each molecule in the image plane as a two-dimensional Gaussian with amplitude proportional to the number of photons collected, and standard deviation depending on the localization precision. == Applications == === Multicolor PALM/STORM === The peculiar photophysical properties of the fluorophores employed in PALM/STORM super-resolution imaging pose both constraints and opportunities for multicolor imaging.
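The localization-precision formula above can be evaluated numerically; this is a minimal sketch in which the symbols follow the text, while the example parameter values (photon count, pixel size, background, PSF width) are illustrative assumptions:

```python
import math

# Localization precision per the formula quoted in the text:
#   sigma = sqrt( ((s^2 + a^2/12)/N) * (16/9 + 8*pi*s^2*b^2 / (a^2*N^2)) )
# N: collected photons, a: detector pixel size, b2: average background
# signal, s: standard deviation of the point spread function.
def localization_precision(N, a, b2, s):
    """Return sigma in the same length units as a and s."""
    return math.sqrt(((s**2 + a**2 / 12) / N)
                     * (16 / 9 + 8 * math.pi * s**2 * b2 / (a**2 * N**2)))

# Illustrative numbers: 1000 photons, 100 nm pixels, background 10,
# 150 nm PSF standard deviation.
sigma = localization_precision(N=1000, a=100.0, b2=10.0, s=150.0)
print(f"sigma ~ {sigma:.1f} nm")
```

With these assumed values the precision comes out at a few nanometers, consistent with the "few to tens of nm" range quoted in the text, and it improves as more photons are collected.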
Three strategies have emerged so far: excitation of spectrally separated fluorophores using an emission beamsplitter, use of multiple activators/reporters in STORM mode and ratiometric imaging of spectrally close fluorophores. === 3D in PALM
|
{
"page_id": 36770702,
"source": null,
"title": "Photoactivated localization microscopy"
}
|
and STORM === Although originally developed as 2D (x,y) imaging methods, PALM and STORM have quickly developed into 3D (x,y,z) capable techniques. To determine the axial position of a single fluorophore in the sample, the following approaches are currently being used: modification of the point spread function to introduce z-dependent features in the 2D (x,y) image (the most common approach is to introduce astigmatism in the PSF); multiplane detection, where the axial position is determined by comparing two images of the same PSF defocused one with respect to the other; interferometric determination of the axial position of the emitter using two opposed objectives and multiple detectors; use of temporal focusing to confine the excitation/activation; use of light sheet excitation/activation to confine it to a layer a few hundred nanometers thick, arbitrarily positioned along the z-axis within the sample. === Live cell imaging === The requirement for multiple cycles of activation, excitation and de-activation/bleaching would typically imply extended periods of time to form a PALM/STORM image, and therefore operation on a fixed sample. Works performing PALM/STORM on live cells have been published as early as 2007. The ability to perform live super-resolution imaging using these techniques ultimately depends on the technical limitations of collecting enough photons from a single emitter in a very short time. This depends both on the photophysical limitations of the probe and on the sensitivity of the detector employed. Relatively slow (seconds to tens of seconds) processes such as the modification in the organization of focal adhesions have been investigated by means of PALM, whereas STORM has allowed imaging of faster processes such as membrane diffusion of clathrin-coated pits or mitochondrial fission/fusion processes.
A promising application of live cell PALM is the use of photoactivation to perform high-density single-particle tracking (sptPALM), overcoming the traditional
|
{
"page_id": 36770702,
"source": null,
"title": "Photoactivated localization microscopy"
}
|
limitation of single particle tracking to work with systems displaying a very low concentration of fluorophores. === Nanophotonic interactions === While traditional PALM and STORM measurements are used to determine the physical structure of a sample, with the intensities of fluorescent events determining the certainty of the localization, these intensities can also be used to map fluorophore interactions with nanophotonic structures. This has been performed on both metallic (plasmonic) structures, such as gold nanorods, and semiconducting structures, such as silicon nanowires. These approaches can be used either for fluorophores functionalized on the surface of the sample of interest (as for the plasmonic particle studies mentioned here), or randomly adsorbed onto the substrate surrounding the sample, allowing full 2D mapping of fluorophore-nanostructure interactions at all positions relative to the structure. These studies have found that, in addition to the standard uncertainty of localization due to the point spread function fitting, self-interference with light scattered by nanoparticles can lead to distortions or displacements of the imaged point spread functions, complicating the analysis of such measurements. It may be possible to limit these effects, however, for example by incorporating metasurface masks that control the angular distribution of light admitted into the measurement system. == Differences from STORM == PALM and STORM share a common fundamental principle, and numerous developments have tended to make the two techniques even more intertwined. Still, they differ in several technical details and in a fundamental point. On the technical side, PALM is performed on a biological specimen using fluorophores expressed exogenously in the form of genetic fusion constructs to a photoactivatable fluorescent protein. STORM instead uses immunolabeling of endogenous molecules in the sample with antibodies tagged with organic fluorophores.
In both cases the fluorophores are driven between an active-ON and an inactive-OFF state by light. In PALM, however, photoactivation
|
{
"page_id": 36770702,
"source": null,
"title": "Photoactivated localization microscopy"
}
|
and photobleaching confine the life of the fluorophore to a limited interval of time, during which a continuous emission of the fluorophore, without any fluorescence intermittency, is desirable. In STORM, the stochastic photoblinking of the organic fluorophores (typically brighter than fluorescent proteins) was originally exploited to separate neighboring dyes: the more robust the blinking, the higher the probability of distinguishing two neighboring fluorophores. Concerning the fundamental point, several research works have explored the potential of PALM to perform a quantitation of the number of fluorophores (and therefore proteins of interest) present in a sample by counting the activated fluorophores. The approach used to treat the fluorescence dynamics of the label used in the experiments will determine the final appearance of the super-resolution image, and the possibility of determining an unambiguous correspondence between a localization event and a protein in the sample. == Multimedia == == References == == External links == Superresolution Microscopy within Zeiss educational page in Microscopy and Digital Imaging Fundamental Concepts in Super Resolution within Nikon educational resources for Microscopy Education Eric Betzig and Harald Hess talk: Developing PALM Microscopy Xiaowei Zhuang talk: Super-Resolution Microscopy Archived 2015-03-24 at the Wayback Machine Light Microscopy: An ongoing contemporary revolution (Introductory Review)
|
{
"page_id": 36770702,
"source": null,
"title": "Photoactivated localization microscopy"
}
|
In nuclear and materials physics, stopping power is the retarding force acting on charged particles, typically alpha and beta particles, due to interaction with matter, resulting in loss of particle kinetic energy. Stopping power is also interpreted as the rate at which a material absorbs the kinetic energy of a charged particle. Its application is important in a wide range of areas such as radiation protection, ion implantation and nuclear medicine. == Definition and Bragg curve == Both charged and uncharged particles lose energy while passing through matter. Positive ions are considered in most cases below. The stopping power depends on the type and energy of the radiation and on the properties of the material it passes. Since the production of an ion pair (usually a positive ion and a (negative) electron) requires a fixed amount of energy (for example, 33.97 eV in dry air), the number of ionizations per path length is proportional to the stopping power. The stopping power of the material is numerically equal to the loss of energy E per unit path length, x: S ( E ) = − d E / d x {\displaystyle S(E)=-dE/dx} The minus sign makes S positive. The force usually increases toward the end of range and reaches a maximum, the Bragg peak, shortly before the energy drops to zero. The curve that describes the force as a function of the material depth is called the Bragg curve. This is of great practical importance for radiation therapy. The equation above defines the linear stopping power, which in the international system is expressed in N but is usually indicated in other units like MeV/mm or similar. If a substance is compared in gaseous and solid form, then the linear stopping powers of the two states are very different just
|
{
"page_id": 5051282,
"source": null,
"title": "Stopping power (particle radiation)"
}
|
because of the different density. One therefore often divides the force by the density of the material to obtain the mass stopping power which in the international system is expressed in m4/s2 but is usually found in units like MeV/(mg/cm2) or similar. The mass stopping power then depends only very little on the density of the material. The picture shows how the stopping power of 5.49 MeV alpha particles increases while the particle traverses air, until it reaches the maximum. This particular energy corresponds to that of the alpha particle radiation from naturally radioactive gas radon (222Rn) which is present in the air in minute amounts. The mean range can be calculated by integrating the reciprocal stopping power over energy: Δ x = ∫ 0 E 0 1 S ( E ) d E {\displaystyle \Delta x=\int _{0}^{E_{0}}{\frac {1}{S(E)}}\,dE} where: E0 is the initial kinetic energy of the particle Δx is the "continuous slowing down approximation (CSDA)" range and S(E) is the linear stopping power. The deposited energy can be obtained by integrating the stopping power over the entire path length of the ion while it moves in the material. == Electronic, nuclear and radiative stopping == Electronic stopping refers to the slowing down of a projectile ion due to the inelastic collisions between bound electrons in the medium and the ion moving through it. The term inelastic is used to signify that energy is lost during the process (the collisions may result both in excitations of bound electrons of the medium, and in excitations of the electron cloud of the ion as well). Linear electronic stopping power is identical to unrestricted linear energy transfer. Instead of energy transfer, some models consider the electronic stopping power as momentum transfer between electron gas and energetic ion. This is consistent with the
|
{
"page_id": 5051282,
"source": null,
"title": "Stopping power (particle radiation)"
}
|
result of Bethe in the high energy range. Since the number of collisions an ion experiences with electrons is large, and since the charge state of the ion while traversing the medium may change frequently, it is very difficult to describe all possible interactions for all possible ion charge states. Instead, the electronic stopping power is often given as a simple function of energy F e ( E ) {\displaystyle F_{e}(E)} which is an average taken over all energy loss processes for different charge states. It can be determined theoretically to an accuracy of a few % in the energy range above several hundred keV per nucleon, the best-known treatment being the Bethe formula. At energies lower than about 100 keV per nucleon, it becomes more difficult to determine the electronic stopping using analytical models. Recently, real-time time-dependent density functional theory (TDDFT) has been successfully used to accurately determine the electronic stopping for various ion-target systems over a wide range of energies, including the low-energy regime. Graphical presentations of experimental values of the electronic stopping power for many ions in many substances have been given by Paul. The accuracy of various stopping tables has been determined using statistical comparisons. Nuclear stopping power refers to the elastic collisions between the projectile ion and atoms in the sample (the established designation "nuclear" may be confusing, since nuclear stopping is not due to nuclear forces; it is meant to note that this type of stopping involves the interaction of the ion with the nuclei in the target). If one knows the form of the repulsive potential energy V ( r ) {\displaystyle V(r)} between two atoms (see below), it is possible to calculate the nuclear stopping power F n ( E ) {\displaystyle F_{n}(E)} . This is done by
|
{
"page_id": 5051282,
"source": null,
"title": "Stopping power (particle radiation)"
}
|
determining the energy loss in binary collisions T ( E , b ) {\displaystyle T(E,b)} of two atoms interacting with the energy V ( r ) {\displaystyle V(r)} as a function of impact parameter b {\displaystyle b} : F n ( E ) = 2 π ∫ 0 ∞ T ( E , b ) b d b {\displaystyle F_{n}(E)=2\pi \int _{0}^{\infty }T(E,b)bdb} In the stopping power figure shown above for aluminum ions in aluminum, nuclear stopping is negligible except at the lowest energy. Nuclear stopping increases when the mass of the ion increases. In the figure shown on the right, nuclear stopping is larger than electronic stopping at low energy. For very light ions slowing down in heavy materials, the nuclear stopping is weaker than the electronic at all energies. Especially in the field of radiation damage in detectors, the term "non-ionizing energy loss" (NIEL) is used as a term opposite to the linear energy transfer (LET), see e.g. Refs. Since by definition nuclear stopping power does not involve electronic excitations, NIEL and nuclear stopping can be considered to be the same quantity in the absence of nuclear reactions. The total non-relativistic stopping power is therefore the sum of two terms: F ( E ) = F e ( E ) + F n ( E ) {\displaystyle F(E)=F_{e}(E)+F_{n}(E)} . Several semi-empirical stopping power formulas have been devised. The model given by Ziegler, Biersack and Littmark (the so-called "ZBL" stopping, see next section), implemented in different versions of the TRIM/SRIM codes, is used most often today. Radiative stopping power, which is due to the emission of bremsstrahlung in the electric fields of the particles in the material traversed, must be considered at extremely high ion energies. For electron projectiles, radiative stopping is always important. At high ion energies, there
|
{
"page_id": 5051282,
"source": null,
"title": "Stopping power (particle radiation)"
}
|
may also be energy losses due to nuclear reactions, but such processes are not normally described by stopping power. Close to the surface of a solid target material, both nuclear and electronic stopping may lead to sputtering. == The slowing-down process in solids == In the beginning of the slowing-down process at high energies, the ion is slowed mainly by electronic stopping, and it moves almost in a straight path. When the ion has slowed sufficiently, the collisions with nuclei (the nuclear stopping) become more and more probable, finally dominating the slowing down. When atoms of the solid receive significant recoil energies from being struck by the ion, they are removed from their lattice positions and produce a cascade of further collisions in the material. These collision cascades are the main cause of damage production during ion implantation in metals and semiconductors. When the energies of all atoms in the system have fallen below the threshold displacement energy, the production of new damage ceases, and the concept of nuclear stopping is no longer meaningful. The total amount of energy deposited by the nuclear collisions to atoms in the materials is called the nuclear deposited energy. The inset in the figure shows a typical range distribution of ions deposited in the solid. The case shown here might, for instance, be the slowing down of a 1 MeV silicon ion in silicon. The mean range for a 1 MeV ion is typically in the micrometer range. === Repulsive interatomic potentials === At very small distances between the nuclei the repulsive interaction can be regarded as essentially Coulombic. At greater distances, the electron clouds screen the nuclei from each other. Thus the repulsive potential can be described by multiplying the Coulombic repulsion between nuclei with a screening function φ(r/a), V ( r )
|
{
"page_id": 5051282,
"source": null,
"title": "Stopping power (particle radiation)"
}
|
= 1 4 π ε 0 Z 1 Z 2 e 2 r φ ( r / a ) {\displaystyle V(r)={1 \over 4\pi \varepsilon _{0}}{Z_{1}Z_{2}e^{2} \over r}\varphi (r/a)} where φ(r/a) → 1 when r → 0. Here Z 1 {\displaystyle Z_{1}} and Z 2 {\displaystyle Z_{2}} are the charges of the interacting nuclei, and r the distance between them; a is the so-called screening parameter. A large number of different repulsive potentials and screening functions have been proposed over the years, some determined semi-empirically, others from theoretical calculations. A much used repulsive potential is the one given by Ziegler, Biersack and Littmark, the so-called ZBL repulsive potential. It has been constructed by fitting a universal screening function to theoretically obtained potentials calculated for a large variety of atom pairs. The ZBL screening parameter and function have the forms a = a u = 0.8854 a 0 Z 1 0.23 + Z 2 0.23 {\displaystyle a=a_{u}={0.8854a_{0} \over Z_{1}^{0.23}+Z_{2}^{0.23}}} and φ ( x ) = 0.1818 e − 3.2 x + 0.5099 e − 0.9423 x + 0.2802 e − 0.4029 x + 0.02817 e − 0.2016 x {\displaystyle \varphi (x)=0.1818e^{-3.2x}+0.5099e^{-0.9423x}+0.2802e^{-0.4029x}+0.02817e^{-0.2016x}} where x = r/au, and a0 is the Bohr atomic radius = 0.529 Å. The standard deviation of the fit of the universal ZBL repulsive potential to the theoretically calculated pair-specific potentials is 18% above 2 eV. Even more accurate repulsive potentials can be obtained from self-consistent total energy calculations using density-functional theory or other quantum chemical methods such as Hartree-Fock methods. Nordlund, Lehtola and Hobler have compared potentials obtained with these methods for all atom pairs from Z=1 (hydrogen) to Z=92 (uranium) and shown that pair-specific quantum chemical calculations can give repulsive potentials that are accurate to within ~1% above 30 eV. Moreover, they fit pair-specific "NLH"
|
{
"page_id": 5051282,
"source": null,
"title": "Stopping power (particle radiation)"
}
|
screening parameters for all atom pairs in the range Z1, Z2 <= 92 in the functional form φ ( r ) = a 1 − b 1 r + a 2 − b 2 r + a 3 − b 3 r {\displaystyle \varphi (r)=a_{1}^{-b_{1}r}+a_{2}^{-b_{2}r}+a_{3}^{-b_{3}r}} where r is directly the interatomic distance, i.e. the screening parameter a=1. The pair-specific parameters are available in the supplemental material of the reference. === Channeling === In crystalline materials the ion may in some instances get "channeled", i.e., get focused into a channel between crystal planes where it experiences almost no collisions with nuclei. Also, the electronic stopping power may be weaker in the channel. Thus the nuclear and electronic stopping depend not only on the material type and density but also on its microscopic structure and cross-section. === Computer simulations of ion slowing down === Computer simulation methods to calculate the motion of ions in a medium have been developed since the 1960s, and are now the dominant way of treating stopping power theoretically. The basic idea in them is to follow the movement of the ion in the medium by simulating the collisions with nuclei in the medium. The electronic stopping power is usually taken into account as a frictional force slowing down the ion. Conventional methods used to calculate ion ranges are based on the binary collision approximation (BCA). In these methods the movement of ions in the implanted sample is treated as a succession of individual collisions between the recoil ion and atoms in the sample. For each individual collision the classical scattering integral is solved by numerical integration. The impact parameter p in the scattering integral is determined either from a stochastic distribution or in a way that takes into account the crystal structure of the sample. The former
|
{
"page_id": 5051282,
"source": null,
"title": "Stopping power (particle radiation)"
}
|
method is suitable only in simulations of implantation into amorphous materials, as it does not account for channeling. The best known BCA simulation program is TRIM/SRIM (acronym for TRansport of Ions in Matter; in more recent versions called Stopping and Range of Ions in Matter), which is based on the ZBL electronic stopping and interatomic potential. It has a very easy-to-use user interface, and has default parameters for all ions in all materials up to an ion energy of 1 GeV, which has made it immensely popular. However, it doesn't take account of the crystal structure, which severely limits its usefulness in many cases. Several BCA programs overcome this difficulty; some fairly well known ones are MARLOWE, BCCRYS and crystal-TRIM. Although the BCA methods have been successfully used in describing many physical processes, they face some obstacles in describing the slowing-down process of energetic ions realistically. The basic assumption that collisions are binary results in severe problems when one tries to take multiple interactions into account. Also, in simulating crystalline materials the selection process of the next colliding lattice atom and the impact parameter p always involve several parameters which may not have perfectly well defined values, which may affect the results by 10–20% even for quite reasonable-seeming choices of the parameter values. The best reliability in BCA is obtained by including multiple collisions in the calculations, which is not easy to do correctly. However, at least MARLOWE does this. A fundamentally more straightforward way to model multiple atomic collisions is provided by molecular dynamics (MD) simulations, in which the time evolution of a system of atoms is calculated by solving the equations of motion numerically. Special MD methods have been devised in which the number of interactions and atoms involved in MD simulations has been reduced in order to make them efficient
|
{
"page_id": 5051282,
"source": null,
"title": "Stopping power (particle radiation)"
}
|
enough for calculating ion ranges. MD simulations thus automatically describe the nuclear stopping power. The electronic stopping power can be readily included in molecular dynamics simulations, either as a frictional force or in a more advanced manner by also following the heating of the electronic systems and coupling the electronic and atomic degrees of freedom. == Minimum ionizing particle == Beyond the maximum, stopping power decreases approximately like 1/v² with increasing particle velocity v, but after a minimum, it increases again. A minimum ionizing particle (MIP) is a particle whose mean energy loss rate through matter is close to the minimum. In many practical cases, relativistic particles (e.g., cosmic-ray muons) are minimum ionizing particles. An important property of all minimum ionizing particles is that βγ ≃ 3 holds approximately, where β and γ are the usual relativistic kinematic quantities. Moreover, all MIPs have almost the same mass stopping power in matter, with a value of approximately −dE/dx ≃ 2 MeV cm²/g. == See also == Radiation length Attenuation length Collision cascade Radiation material science == References == == Further reading == (Lindhard 1963) J. Lindhard, M. Scharff, and H. E. Shiøtt. Range concepts and heavy ion ranges. Mat. Fys. Medd. Dan. Vid. Selsk., 33(14):1, 1963. (Smith 1997) R. Smith (ed.), Atomic & ion collisions in solids and at surfaces: theory, simulation and applications, Cambridge University Press, Cambridge, UK, 1997. == External links == Stopping power and energy loss straggling calculations in solids by MELF-GOS model Archived 2010-09-25 at the Wayback Machine A Web-based module for Range and Stopping Power in Nucleonica Passage of charged particles through matter Stopping-Power and Range Tables
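The −dE/dx ≃ 2 MeV cm²/g rule of thumb for MIPs lends itself to a quick back-of-the-envelope calculation. A hedged sketch follows; the function name and the water example are assumptions for illustration, not from the text.

```python
# Rough mean energy deposited by a minimum ionizing particle (MIP) in a slab,
# using the rule of thumb -dE/dx ~ 2 MeV cm^2/g quoted above. The material
# values in the example are illustrative assumptions.

MIP_DEDX = 2.0  # MeV cm^2/g, approximate mass stopping power at the minimum

def mip_energy_loss(density_g_cm3, thickness_cm):
    """Mean energy (MeV) a MIP deposits crossing a slab of given density and thickness."""
    areal_density = density_g_cm3 * thickness_cm  # g/cm^2
    return MIP_DEDX * areal_density

# Example: a cosmic-ray muon crossing 1 m of water (density ~1.0 g/cm^3)
# deposits roughly 2 * 1.0 * 100 = 200 MeV.
print(mip_energy_loss(1.0, 100.0))
```

Because the rule is stated per unit areal density (g/cm²), the same formula applies to any material once its density is known, which is why stopping power is conventionally tabulated in these mass units.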
|
{
"page_id": 5051282,
"source": null,
"title": "Stopping power (particle radiation)"
}
|
for Electrons, Protons, and Helium Ions Stopping Power: Graphs and Data Penetration of charged particles through matter; lecture notes by E. Bonderup Archived 2019-05-28 at the Wayback Machine
|
{
"page_id": 5051282,
"source": null,
"title": "Stopping power (particle radiation)"
}
|
Human interactions with insects encompass both a wide variety of uses, whether practical, such as for food, textiles, and dyestuffs, or symbolic, as in art, music, and literature, and negative interactions, including damage to crops and extensive efforts to control insect pests. Academically, the interaction of insects and society has been treated in part as cultural entomology, dealing mostly with "advanced" societies, and in part as ethnoentomology, dealing mostly with "primitive" societies, though the distinction is weak and not based on theory. Both academic disciplines explore the parallels, connections and influence of insects on human populations, and vice versa. They are rooted in anthropology and natural history, as well as entomology, the study of insects. Other cultural uses of insects, such as biomimicry, do not necessarily lie within these academic disciplines. More generally, people put insects to a wide range of uses, both practical and symbolic. On the other hand, attitudes to insects are often negative, and extensive efforts are made to kill them. The widespread use of insecticides has failed to exterminate any insect pest, but has caused resistance to commonly-used chemicals in a thousand insect species. Practical uses include use as food, in medicine, for the valuable textile silk, for dyestuffs such as carmine, in science, where the fruit fly is an important model organism in genetics, and in warfare, where insects were successfully used in the Second World War to spread disease in enemy populations. One insect, the honey bee, provides honey, pollen, royal jelly, propolis and an anti-inflammatory peptide, melittin; its larvae too are eaten in some societies. Medical uses of insects include maggot therapy for wound debridement. Over a thousand protein families have been identified in the saliva of blood-feeding insects; these may provide useful drugs such as anticoagulants, vasodilators, antihistamines and anaesthetics. Symbolic uses include
|
{
"page_id": 37360538,
"source": null,
"title": "Human interactions with insects"
}
|
roles in art, in music (with many songs featuring insects), in film, in literature, in religion, and in mythology. Insect costumes are used in theatrical productions and worn for parties and carnivals. == Cultural entomology and ethnoentomology == Ethnoentomology developed from the 19th century with early works by authors such as Alfred Russel Wallace (1852) and Henry Walter Bates (1862). Hans Zinsser's classic Rats, Lice and History (1935) showed that insects were an important force in human history. Writers like William Morton Wheeler, Maurice Maeterlinck, and Jean Henri Fabre described insect life and communicated their meaning to people "with imagination and brilliance". Frederick Simon Bodenheimer's Insects as Human Food (1951) drew attention to the scope and potential of entomophagy, and showed a positive aspect of insects. Food is the most studied topic in ethnoentomology, followed by medicine and beekeeping. In 1968, Erwin Schimitschek claimed cultural entomology as a branch of insect studies, in a review of the roles insects played in folklore and culture including religion, food, medicine and the arts. In 1984, Charles Hogue covered the field in English and from 1994 to 1997, Hogue's The Cultural Entomology Digest served as a forum on the field. Hogue argued that "Humans spend their intellectual energies in three basic areas of activity: surviving, using practical learning (the application of technology); seeking pure knowledge through inductive mental processes (science); and pursuing enlightenment to taste a pleasure by aesthetic exercises that may be referred to as the 'humanities.' 
Entomology has long been concerned with survival (economic entomology) and scientific study (academic entomology), but the branch of investigation that addresses the influence of insects (and other terrestrial Arthropoda, including arachnids and myriapods) in literature, language, music, the arts, interpretive history, religion, and recreation has only become recognized as a distinct field" through Schimitschek's work.
|
{
"page_id": 37360538,
"source": null,
"title": "Human interactions with insects"
}
|
Hogue set out the boundaries of the field by saying: "The narrative history of the science of entomology is not part of cultural entomology, while the influence of insects on general history would be considered cultural entomology." He added: "Because the term "cultural" is narrowly defined, some aspects normally included in studies of human societies are excluded." Darrell Addison Posey, noting that the boundary between cultural entomology and ethnoentomology is difficult to draw, cites Hogue as limiting cultural entomology to the influence of insects on "the essence of humanity as expressed in the arts and humanities". Posey notes further that cultural entomology is usually restricted to the study of "advanced", industrialised, and literate societies, whereas ethnoentomology studies "the entomological concerns of 'primitive' or 'noncivilized' societies". Posey states that the division is artificial and carries an unjustified us/them bias. Brian Morris similarly criticises the way that anthropologists treat non-Western attitudes to nature as monadic and spiritualist, and contrasts this "in gnostic fashion" with a simplistic treatment of a Western, often 17th-century, mechanistic attitude. Morris considers this "quite unhelpful, if not misleading", and offers instead his own research into the multiple ways that the people of Malawi relate to insects and other animals: "pragmatic, intellectual, realist, practical, aesthetic, symbolic and sacramental." == Benefits and costs == === Insect ecosystem services === The Millennium Ecosystem Assessment (MEA) report 2005 defines ecosystem services as benefits people obtain from ecosystems, and distinguishes four categories, namely provisioning, regulating, supporting, and cultural. Only a few arthropod species (such as honeybees, ants, mosquitoes, and spiders) are well understood for their influence on humans; insects as a whole, however, provide a broad range of ecological goods and services. 
The Xerces Society calculates the economic impact of four ecological services rendered by insects: pollination, recreation (i.e. "the importance of bugs to
The Xerces Society calculates the economic impact of four ecological services rendered by insects: pollination, recreation (i.e. "the importance of bugs to
|
{
"page_id": 37360538,
"source": null,
"title": "Human interactions with insects"
}
|
hunting, fishing, and wildlife observation, including bird-watching"), dung burial, and pest control. The value has been estimated at $153 billion worldwide. As the ant expert E. O. Wilson observed: "If all mankind were to disappear, the world would regenerate back to the rich state of equilibrium that existed ten thousand years ago. If insects were to vanish, the environment would collapse into chaos." A Nova segment on the American Public Broadcasting Service framed the relationship with insects in an urban context: "We humans like to think that we run the world. But even in the heart of our great cities, a rival superpower thrives ... These tiny creatures live all around us in vast numbers, though we hardly even notice them. But in many ways, it is they who really run the show." The Washington Post stated: "We are flying blind in many aspects of preserving the environment, and that's why we are so surprised when a species like the honeybee starts to crash, or an insect we don't want, the Asian tiger mosquito or the fire ant, appears in our midst. In other words: Start thinking about the bugs." === Pests and propaganda === Human attitudes toward insects are often negative, reinforced by sensationalism in the media. This has produced a society that attempts to eliminate insects from daily life. For example, nearly 75 million pounds of broad-spectrum insecticides are manufactured and sold each year for use in American homes and gardens. Annual revenues from insecticide sales to homeowners exceeded $450 million in 2004. Of the roughly one million insect species described so far, no more than 1,000 can be regarded as serious pests, and fewer than 10,000 (about 1%) are even occasional pests. Yet not one species of insect has been permanently eradicated through the use
|
{
"page_id": 37360538,
"source": null,
"title": "Human interactions with insects"
}
|
of pesticides. Instead, at least 1,000 species have developed field resistance to pesticides, and extensive harm has been done to beneficial insects including pollinators such as bees. During the Cold War, the Warsaw Pact countries launched a widespread war against the potato beetle, blaming the introduction of the species from America on the CIA, demonising the species in propaganda posters, and urging children to gather the beetles and kill them. == Practical uses == === As food === Entomophagy is the eating of insects. Many insects are considered a culinary delicacy in some societies around the world, and Frederick Simon Bodenheimer's Insects as Human Food (1951) drew attention to the scope and potential of entomophagy, but the practice is uncommon and even taboo in other societies. Sometimes insects are considered suitable only for the poor in the third world, but in 1975 Victor Meyer-Rochow suggested that insects could help ease global future food shortages and advocated a change in western attitudes towards cultures in which insects were appreciated as a food item. P. J. Gullan and P. S. Cranston felt that the remedy for this may be marketing of insect dishes as suitably exotic and costly to make them acceptable. They also note that some societies in sub-Saharan Africa prefer caterpillars to beef, while Chakravorty et al. (2011) point out that food insects (highly appreciated in North-East India) are more expensive than meat. The economics, i.e., the costs involved in collecting food insects and the money earned through their sale, have been studied in a Laotian setting by Meyer-Rochow et al. (2008). In Mexico, ant larvae and corixid water boatman eggs are sought out as a form of caviar by gastronomes. In Guangdong, water beetles fetch a high enough price for these insects to be farmed. Especially
|
{
"page_id": 37360538,
"source": null,
"title": "Human interactions with insects"
}
|
high prices are fetched in Thailand for the giant water bug Lethocerus indicus. Insects used in food include honey bee larvae and pupae, mopani worms, silkworms, Maguey worms, Witchetty grubs, crickets, grasshoppers and locusts. In Thailand, there are 20,000 farmers rearing crickets, producing some 7,500 tons per year. === In medicine === Insects have been used medicinally in cultures around the world, often according to the Doctrine of Signatures. Thus, the femurs of grasshoppers, which were said to resemble the human liver, were used to treat liver ailments by the indigenous peoples of Mexico. The doctrine was applied in both Traditional Chinese Medicine (TCM) and Ayurveda. TCM uses arthropods for various purposes; for example, centipedes are used to treat tetanus, seizures, and convulsions, while the Chinese Black Mountain Ant, Polyrhachis vicina, is used as a cure-all, especially by the elderly, and extracts have been examined as a possible anti-cancer agent. Ayurveda uses insects such as termites for conditions such as ulcers, rheumatic diseases, anaemia, and pain. The Jatropha leaf miner's larvae are boiled and used to induce lactation, reduce fever, and soothe the gastrointestinal tract. In contrast, the traditional insect medicine of Africa is local and unformalised. The indigenous peoples of Central America used a wide variety of insects medicinally. Mayans used army ant soldiers as living sutures. The venom of the red harvester ant was used to cure rheumatism, arthritis, and poliomyelitis via the immune reaction produced by its sting. Boiled silkworm pupae were taken to treat apoplexy, aphasia, bronchitis, pneumonia, convulsions, haemorrhages, and frequent urination. Honey bee products are used medicinally in apitherapy across Asia, Europe, Africa, Australia, and the Americas, despite the fact that the honey bee was not introduced to the Americas until the colonization by Spain and Portugal. They are by far the most
|
{
"page_id": 37360538,
"source": null,
"title": "Human interactions with insects"
}
|
common medical insect product both historically and currently, and the most frequently referenced of these is honey. It can be applied to skin to treat excessive scar tissue, rashes, and burns, and as an eye poultice to treat infection. Honey is taken for digestive problems and as a general health restorative. It is taken hot to treat colds, cough, throat infections, laryngitis, tuberculosis, and lung diseases. Apitoxin (honey bee venom) is applied via direct stings to relieve arthritis, rheumatism, polyneuritis, and asthma. Propolis, a resinous, waxy mixture collected by honeybees and used as a hive insulator and sealant, is often consumed by menopausal women because of its high hormone content, and it is said to have antibiotic, anesthetic, and anti-inflammatory properties. Royal jelly is used to treat anaemia, gastrointestinal ulcers, arteriosclerosis, hypo- and hypertension, and inhibition of sexual libido. Finally, bee bread, or bee pollen, is eaten as a general health restorative, and is said to help treat both internal and external infections. One of the major peptides in bee venom, melittin, has the potential to treat inflammation in sufferers of rheumatoid arthritis and multiple sclerosis. The rise of antibiotic-resistant infections has sparked pharmaceutical research for new resources, including into arthropods. Maggot therapy uses blowfly larvae to perform wound-cleaning debridement. Cantharidin, the blister-causing oil found in several families of beetles loosely described by the common name Spanish fly, has been used as an aphrodisiac in some societies. Blood-feeding arthropods such as ticks, horseflies, and mosquitoes inject multiple bioactive compounds into their prey. These animals have long been used by practitioners of Eastern Medicine to prevent blood clot formation or thrombosis, suggesting possible applications in scientific medicine. 
Over 1280 protein families have been associated with the saliva of blood feeding organisms, including inhibitors of platelet aggregation, ADP, arachidonic acid, thrombin, PAF,
Over 1280 protein families have been associated with the saliva of blood feeding organisms, including inhibitors of platelet aggregation, ADP, arachidonic acid, thrombin, PAF,
|
{
"page_id": 37360538,
"source": null,
"title": "Human interactions with insects"
}
|
anticoagulants, vasodilators, vasoconstrictors, antihistamines, sodium channel blockers, complement inhibitors, pore formers, inhibitors of angiogenesis, anaesthetics, AMPs and microbial pattern recognition molecules, and parasite enhancers/activators. === In science and technology === Insects play an important role in biological research. Because of its small size, short generation time and high fecundity, the common fruit fly Drosophila melanogaster was selected as a model organism for studies of the genetics of higher eukaryotes. D. melanogaster has been an essential part of studies into principles like genetic linkage, interactions between genes, chromosomal genetics, evolutionary developmental biology, animal behaviour and evolution. Because genetic systems are well conserved among eukaryotes, understanding basic cellular processes like DNA replication or transcription in fruit flies helps scientists to understand those processes in other eukaryotes, including humans. The genome of D. melanogaster was sequenced in 2000, reflecting the fruit fly's important role in biological research. About 70% of the fly genome is similar to the human genome, reflecting the shared evolutionary origin of flies and humans. Some hemipterans are used to produce dyestuffs such as carmine (also called cochineal). The scale insect Dactylopius coccus produces the brilliant red-coloured carminic acid to deter predators. Up to 100,000 scale insects are needed to make a kilogram (2.2 lbs) of cochineal dye. A similarly enormous number of lac bugs are needed to make a kilogram of shellac, a brush-on colourant and wood finish. Additional uses of this traditional product include the waxing of citrus fruits to extend their shelf-life, and the coating of pills to moisture-proof them, provide slow release, or mask the taste of bitter ingredients. Kermes is a red dye from the dried bodies of the females of a scale insect in the genus Kermes, primarily Kermes vermilio. 
Kermes are native to the Mediterranean region, living on the sap of the
Kermes are native to the Mediterranean region, living on the sap of the
|
{
"page_id": 37360538,
"source": null,
"title": "Human interactions with insects"
}
|
kermes oak. They were used as a red dye by the ancient Greeks and Romans. The kermes dye is a rich red, and has good colour fastness in silk and wool. Insect attributes are sometimes mimicked in architecture, as at the Eastgate Centre, Harare, which uses passive cooling, storing heat in the morning and releasing it in the warm parts of the day. The target of this piece of biomimicry is the structure of the mounds of termites such as Macrotermes michaelseni which effectively cool the nests of these social insects. The properties of the Namib desert beetle's exoskeleton, in particular its wing-cases (elytra) which have bumps with hydrophilic (water-attracting) tips and hydrophobic (water-shedding) sides, have been mimicked in a film coating designed for the British Ministry of Defence, to capture water in arid regions. === In textiles === Silkworms, the caterpillars and pupae of the moth Bombyx mori, have been reared to produce silk in China from the Neolithic Yangshao period onwards, c. 5000 BC. Production spread to India by 140 AD. The caterpillars are fed on mulberry leaves. The cocoon, produced after the fourth moult, is covered with a continuous filament of the silk protein, fibroin, gummed together with sericin. In the traditional process, the gum is removed by soaking in hot water, and the silk is then unwound from the cocoon and reeled. Filaments are spun together to make silk thread. Commerce in silk between China and countries to its west began in ancient times, with silk known from an Egyptian mummy of 1070 BC, and later to the ancient Greeks and Romans. The Silk Road leading west from China was opened in the 2nd century AD, helping to drive trade in silk and other goods. === In warfare === The use of insects for warfare may
|
{
"page_id": 37360538,
"source": null,
"title": "Human interactions with insects"
}
|
have been attempted in the Middle Ages or earlier, but was first systematically researched by several nations during the 20th century. It was put into practice by the Japanese army's Unit 731 in attacks on China during the Second World War, killing almost 500,000 Chinese people with fleas infected with plague and flies infected with cholera. Also in the Second World War, the French and Germans explored the use of Colorado beetles to destroy enemy potato crops. During the Cold War, the US Army considered using yellow fever mosquitoes to attack Soviet cities. == Symbolic uses == === In mythology and folklore === Insects have appeared in mythology around the world from ancient times. Among the insect groups featuring in myths are the bee, butterfly, cicada, fly, dragonfly, praying mantis and scarab beetle. Scarab beetles held religious and cultural symbolism in Old Egypt, Greece and some shamanistic Old World cultures. The ancient Chinese regarded cicadas as symbols of rebirth or immortality. In the Homeric Hymn to Aphrodite, the goddess Aphrodite retells the legend of how Eos, the goddess of the dawn, requested Zeus to let her lover Tithonus live forever as an immortal. Zeus granted her request, but, because Eos forgot to ask him to also make Tithonus ageless, Tithonus never died, but he did grow old. Eventually, he became so tiny and shriveled that he turned into the first cicada. In an ancient Sumerian poem, a fly helps the goddess Inanna when her husband Dumuzid is being chased by galla demons. Flies also appear on Old Babylonian seals as symbols of Nergal, the god of death and fly-shaped lapis lazuli beads were often worn by many different cultures in ancient Mesopotamia, along with other kinds of fly-jewellery. The Akkadian Epic of Gilgamesh contains allusions to dragonflies, signifying the impossibility
|
{
"page_id": 37360538,
"source": null,
"title": "Human interactions with insects"
}
|
of immortality. Amongst the Arrernte people of Australia, honey ants and witchety grubs served as personal clan totems. In the case of the San bushmen of the Kalahari, it is the praying mantis which holds much cultural significance, associated with creation and with zen-like patience in waiting. Insects feature in folklore around the world. In China, farmers traditionally regulated their crop planting according to the Awakening of the Insects, when temperature shifts and monsoon rains bring insects out of hibernation. Most "awakening" customs are related to eating snacks like pancakes, parched beans, pears, and fried corn, symbolizing harmful insects in the field. In the Great Lakes region of the United States, there is an annual Woollybear Festival that has been celebrated for over 40 years. The larvae of the species Pyrrharctia isabella (commonly known as the isabella tiger moth), with their 13 distinct segments of black and reddish brown, have the reputation in common folklore of being able to forecast the coming winter weather. There is a common misconception that cockroaches are serious vectors of disease, but while they can carry bacteria, they do not travel far, and have no bite or sting. Their shells contain a protein, arylphorin, implicated in asthma and other respiratory conditions. Among the deep-sea fishermen of Greenock in Scotland, there is a belief that if a fly falls into a glass from which a person has been drinking, or is about to drink, it is a sure omen of good luck to the drinker. Many people believe the urban myth that the daddy longlegs (Opiliones) has the most poisonous bite in the spider world, but that the fangs are too small to penetrate human skin. This is untrue on several counts. None of the known species of harvestmen have venom glands; their chelicerae are not hollowed
|
{
"page_id": 37360538,
"source": null,
"title": "Human interactions with insects"
}
|
fangs but grasping claws that are typically very small and definitely not strong enough to break human skin. In Japan, the emergence of fireflies and rhinoceros beetles signals the anticipated changing of the seasons. === In religion === In the Brazilian Amazon, members of the Tupí–Guaraní language family have been observed using Pachycondyla commutata ants during female rite-of-passage ceremonies, and prescribing the sting of Pseudomyrmex spp. for fevers and headaches. The red harvester ant Pogonomyrmex californicus has been widely used by natives of Southern California and Northern Mexico for hundreds of years in ceremonies conducted to help tribe members acquire spirit helpers through hallucination. During the ritual, young men are sent away from the tribe and consume large quantities of live, unmasticated ants under the supervision of an elderly member of the tribe. Ingestion of the ants is intended to induce a prolonged state of unconsciousness, where dream helpers appear and serve as allies to the dreamer for the rest of his life. === In art === Both the symbolic form and the actual body of insects have been used to adorn humans in ancient and modern times. A recurrent theme for ancient cultures in Europe and the Near East regarded the sacred image of a bee or human with insect features. Often referred to as the bee "goddess", these images were found in gems and stones. An onyx gem from Knossos (ancient Crete) dating to approximately 1500 BC illustrates a Bee goddess with bull horns above her head. In this instance, the figure is surrounded by dogs with wings, most likely representing Hecate and Artemis – gods of the underworld, similar to the Egyptian gods Akeu and Anubis. Beetlewing art is an ancient craft technique using iridescent beetle wings practiced traditionally in Thailand, Myanmar, India, China and Japan. Beetlewing pieces are
|
{
"page_id": 37360538,
"source": null,
"title": "Human interactions with insects"
}
|
used as an adornment to paintings, textiles and jewelry. Different species of metallic wood-boring beetle wings were used depending on the region, but traditionally the most valued were those from beetles belonging to the genus Sternocera. The practice comes from across Asia and Southeast Asia, especially Thailand, Myanmar, Japan, India and China. In Thailand beetlewings were preferred to decorate clothing (shawls and Sabai cloth) and jewellery in former court circles. The Canadian entomologist Charles Howard Curran's 1945 book, Insects of the Pacific World, noted that women from India and Sri Lanka kept 1½-inch (38 mm) long, iridescent greenish coppery beetles of the species Chrysochroa ocellata as pets. These living jewels were worn on festive occasions, probably with a small chain attached to one leg anchored to the clothing to prevent escape. Afterwards, the insects were bathed, fed, and housed in decorative cages. Living jewelled beetles have also been worn and kept as pets in Mexico. Butterflies have long inspired humans with their life cycle, color, and ornate patterns. The novelist Vladimir Nabokov was also a renowned butterfly expert. He described and illustrated many butterfly species, stating: "I discovered in nature the nonutilitarian delights that I sought in art. Both were a form of magic, both were games of intricate enchantment and deception." It was the aesthetic complexity of insects that led Nabokov to reject natural selection. The naturalist Ian MacRae writes of butterflies: the animal is at once awkward, flimsy, strange, bouncy in flight, yet beautiful and immensely sympathetic; it is painfully transient, albeit capable of extreme migrations and transformations. Images and phrases such as "kaleidoscopic instabilities," "oxymoron of similarities," "rebellious rainbows," "visible darkness" and "souls of stone" have much in common. They bring together the two terms of a conceptual contradiction, thereby facilitating the mixing of what should
|
{
"page_id": 37360538,
"source": null,
"title": "Human interactions with insects"
}
|
be discrete and mutually exclusive categories ... In positing such questions, butterfly science, an inexhaustible, complex, and finely nuanced field, becomes not unlike the human imagination, or the field of literature itself. In the natural history of the animal, we begin to sense its literary and artistic possibilities. The photographer Kjell Sandved spent 25 years documenting all 26 characters of the Latin alphabet using the wing patterns of butterflies and moths as The Butterfly Alphabet. In 2011, the artist Anna Collette created over 10,000 individual ceramic insects at Nottingham Castle, "Stirring the Swarm". Reviews of the exhibit offered a compelling narrative for cultural entomology: "the unexpected use of materials, dark overtones, and the straightforward impact of thousands of tiny multiples within the space. The exhibition was at once both exquisitely beautiful and deeply repulsive, and this strange duality was fascinating." === In literature and film === The Ancient Greek playwright Aeschylus has a gadfly pursue and torment Io, a maiden associated with the moon, watched constantly by the eyes of the herdsman Argus, associated with all the stars: "Io: Ah! Hah! Again the prick, the stab of gadfly-sting! O earth, earth, hide, the hollow shape—Argus—that evil thing—the hundred-eyed." William Shakespeare, inspired by Aeschylus, has Tom o'Bedlam in King Lear, "Whom the foul fiend hath led through fire and through flame, through ford and whirlpool, o'er bog and quagmire", driven mad by the constant pursuit. In Antony and Cleopatra, Shakespeare similarly likens Cleopatra's hasty departure from the Actium battlefield to that of a cow chased by a gadfly. H. G. Wells introduced giant wasps in his 1904 novel The Food of the Gods and How It Came to Earth, making use of the newly discovered growth hormones to lend plausibility to his science fiction. Lafcadio Hearn's essay Butterflies analyses the treatment
|
{
"page_id": 37360538,
"source": null,
"title": "Human interactions with insects"
}
|
of the butterfly in Japanese literature, both prose and poetry. He notes that these often allude to Chinese tales, such as that of the young woman whom the butterflies took to be a flower. He translates 22 Japanese haiku poems about butterflies, including one by the haiku master Matsuo Bashō, said to suggest happiness in springtime: "Wake up! Wake up!—I will make thee my comrade, thou sleeping butterfly." The novelist Vladimir Nabokov was the son of a professional lepidopterist, and was interested in butterflies himself. He wrote his novel Lolita while travelling on his annual butterfly-collection trips in the western United States. He eventually became a leading lepidopterist. This is reflected in his fiction, where for example The Gift devotes two whole chapters (of five) to the tale of a father and son on a butterfly expedition. Horror films involving insects, sometimes called "big bug movies", include the pioneering 1954 Them!, featuring giant ants mutated by radiation, and the 1957 The Deadly Mantis. The Far Side, a newspaper cartoon, has been used by professor Michael Burgett as a teaching tool in his entomology class; The Far Side and its author Gary Larson have been acknowledged by biologist Dale H. Clayton and his colleagues for "the enormous contribution" Larson has made to their field through his cartoons. === In music === Some popular and influential pieces of music have had insects as their subjects. The French Renaissance composer Josquin des Prez wrote a frottola entitled El Grillo (lit. 'The Cricket'). It is among the most frequently sung of his works. Nikolai Rimsky-Korsakov wrote the "Flight of the Bumblebee" in 1899–1900 as part of his opera The Tale of Tsar Saltan. The piece is one of the most recognizable pieces in classical composition. The bumblebee in the story is a prince who has
been transformed into an insect so that he can fly off to visit his father. The play upon which the opera was based, written by Alexander Pushkin, originally had two more insect themes: the Flight of the Mosquito and the Flight of the Fly. The Hungarian composer Béla Bartók explained in his diary that he was attempting to depict the desperate attempts of a fly caught in a cobweb to escape in his piece From the Diary of a Fly, for piano (Mikrokosmos Vol. 6/142). The jazz musician and philosophy professor David Rothenberg plays duets with singing insects including cicadas, crickets, and beetles. === In astronomy and cosmology === In astronomy, constellations named after arthropods include the zodiacal Scorpius, the scorpion, and Musca, the fly, also known as Apis, the bee, in the deep southern sky. Musca, the only recognised insect constellation, was named by Petrus Plancius in 1597. "The Bug Nebula", also called "The Butterfly Nebula", is a more recent discovery. Catalogued as NGC 6302, it is one of the brightest and most intensively studied planetary nebulae, its features drawing the attention of many researchers. It is located in the constellation Scorpius. It is perfectly bipolar, and until recently its central star was unobservable, clouded by gas, though estimated to be one of the hottest in the galaxy: about 200,000 degrees Fahrenheit, perhaps 35 times hotter than the Sun. The honey bee played a central role in the cosmology of the Mayan people. The stucco figure at the temples of Tulum known as "Ah Mucen Kab", the Diving Bee God, bears resemblance to the insect in the Codex Tro-Cortesianus identified as a bee. Such reliefs might have indicated towns and villages that produced honey. Modern Mayan authorities
say the figure also has a connection to modern cosmology. Mayan mythology expert Miguel Angel Vergara relates that the Mayans held a belief that bees came from Venus, the "Second Sun". The relief might be indicative of another "insect deity", that of Xux Ex, the Mayan "wasp star". The Maya embodied Venus in the form of the god Kukulkán (also known as, or related to, Gukumatz and Quetzalcoatl in other parts of Mexico); Quetzalcoatl is a Mesoamerican deity whose name in Nahuatl means "feathered serpent". The cult was the first Mesoamerican religion to transcend the old Classic Period linguistic and ethnic divisions, facilitating communication and peaceful trade among peoples of many different social and ethnic backgrounds. Although the cult was originally centered on the ancient city of Chichén Itzá in the modern Mexican state of Yucatán, it spread as far as the Guatemalan highlands. === In costumes === Bee and other insect costumes are worn in a variety of countries for parties, carnivals and other celebrations. Ovo is an insect-themed production by the world-renowned Canadian entertainment company Cirque du Soleil. The show looks at the world of insects and its biodiversity: the insects go about their daily lives until a mysterious egg appears in their midst, and they become awestruck by this iconic object that represents the enigma and cycles of their lives. The costuming was a fusion of arthropod body types blended with superhero armour. Liz Vandal, the lead costume designer, has a special affinity for the world of the insect: "When I was just a kid I put rocks down around the yard near the fruit trees and I lifted them regularly to watch the insects who had taken up residence underneath them. I petted caterpillars and let butterflies into the house. So when
I learned that OVO was inspired by insects, I immediately knew that I was in a perfect position to pay tribute to this majestic world with my costumes. All insects are beautiful and perfect; it is what they evoke for each of us that changes our perception of them." The Webby award-winning video series Green Porno was created to showcase the reproductive habits of insects. Jody Shapiro and Rick Gilbert were responsible for translating the research and concepts that Isabella Rossellini envisioned into the paper and paste costumes which directly contribute to the series' unique visual style. The film series was driven by the creation of costumes to translate scientific research into "something visual and how to make it comical." == See also == Arthropods in culture Human uses of birds Human uses of plants Human interactions with insects in southern Africa Insects in ethics Insect collecting == References == == Further reading == Hogue, James N. (2003). "Cultural Entomology". In Vincent H. Resh, Ring T. Cardé (ed.). Encyclopedia of Insects. Academic Press. pp. 273–281. ISBN 0-08-054605-6. Marren, Peter; Mabey, Richard (2010). Bugs Britannica. Chatto & Windus. ISBN 978-0-7011-8180-2. Meyer-Rochow, V. B.; et al. (2008). "More feared than revered: Insects and their impact on human societies (with specific data on the importance of entomophagy in a Laotian setting)". Entomologie Heute. 20: 3–25. Morris, Brian (2006) [2004]. Insects and Human Life. Berg. pp. 181–216. ISBN 978-1-84520-949-0. == External links == The Cultural Entomology Digest Ethnoentomology journal (in Czech)
|
{
"page_id": 37360538,
"source": null,
"title": "Human interactions with insects"
}
|
The molecular formula CH6N2 (molar mass: 46.07 g/mol, exact mass: 46.0531 u) may refer to: Methanediamine Monomethylhydrazine (mono-methyl hydrazine, MMH)
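The quoted molar and exact (monoisotopic) masses follow directly from standard atomic masses. A minimal sketch of the calculation, assuming standard IUPAC average atomic masses and the monoisotopic masses of the most abundant isotopes:

```python
# Recompute the masses quoted for CH6N2 from per-element atomic masses.
# Average atomic masses (g/mol) give the molar mass; monoisotopic masses
# (u) of the lightest stable isotopes give the exact mass.
AVG = {"C": 12.011, "H": 1.008, "N": 14.007}
MONO = {"C": 12.000, "H": 1.007825, "N": 14.003074}

formula = {"C": 1, "H": 6, "N": 2}  # CH6N2

molar_mass = sum(AVG[el] * n for el, n in formula.items())
exact_mass = sum(MONO[el] * n for el, n in formula.items())

print(round(molar_mass, 2))  # 46.07 g/mol
print(round(exact_mass, 4))  # 46.0531 u
```

The same element-by-element sum applies to any molecular formula; only the composition dictionary changes.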
|
{
"page_id": 60035999,
"source": null,
"title": "CH6N2"
}
|
Jerzy Władysław Jurka (June 4, 1950 – July 19, 2014) was a Polish–American computational and molecular biologist known for his pioneering work on repetitive DNA and transposable elements (TEs) in eukaryotic genomes. He served as the assistant director of research at the Linus Pauling Institute prior to founding and directing the Genetic Information Research Institute (GIRI) in Mountain View, California. == Early life and education == Jurka was born on June 4, 1950, in the village of Ponikiew, Poland. He obtained his M.Sc. in Chemistry from the Jagiellonian University and a D.Sc. in Molecular Biology from the University of Warsaw. After earning his doctorate, Jurka moved to the United States and conducted post-doctoral research at Harvard University. == Career and research == After completing his post-doctoral research, Jurka joined the Linus Pauling Institute in Palo Alto, California, where he eventually served as assistant director of research. During his tenure, he collaborated with several notable scientists, including Linus Pauling, George Irving Bell, Roy Britten, Temple Smith, and Emile Zuckerkandl. In 1994, he founded the Genetic Information Research Institute (GIRI), focusing on the computational analysis of genome sequences and the identification of transposable elements. Jurka’s team developed Repbase, a widely used reference database of eukaryotic repetitive elements that aids in DNA annotation and comparative genomics. His work on Alu elements, one of the most abundant short interspersed elements in primate genomes, provided insights into their classification, the mechanisms behind their proliferation, and their paternal transmission. Jurka’s research group discovered and characterized numerous TE families. In collaboration with Vladimir Kapitonov, Jurka identified Helitrons, a family of rolling-circle transposons that influence genomic evolution. 
In 2006, they reported the discovery of Polinton (also known as Maverick) transposons, self-synthesizing DNA elements found in diverse eukaryotes, providing important clues about the structure and evolution of complex genomes.
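Conceptually, a repeat library such as Repbase is used by scanning genomic sequence for matches to consensus elements and recording their coordinates. The toy sketch below illustrates this idea with exact string matching and entirely hypothetical consensus fragments; real annotation pipelines built on Repbase use mutation-tolerant sequence alignment, not literal search.

```python
# Toy illustration of library-based repeat annotation (not GIRI software):
# each known repeat family has a consensus sequence, and every occurrence
# in the target sequence is reported as (family, start, end).
repeat_library = {
    # Hypothetical consensus fragments, for illustration only.
    "toy_repeat_A": "GGCCGGG",
    "toy_repeat_B": "TTAGGG",
}

def annotate(sequence, library):
    """Return (name, start, end) for every exact library hit, sorted by start."""
    hits = []
    for name, consensus in library.items():
        start = sequence.find(consensus)
        while start != -1:
            hits.append((name, start, start + len(consensus)))
            start = sequence.find(consensus, start + 1)
    return sorted(hits, key=lambda h: h[1])

genome = "ACGTGGCCGGGTTTTAGGGTTAGGGACGT"
for name, start, end in annotate(genome, repeat_library):
    print(name, start, end)
```

Overlapping hits are allowed (the search resumes one position after each match), mirroring how adjacent or nested repeat copies are reported separately by real annotators.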
|
{
"page_id": 20845472,
"source": null,
"title": "Jerzy Jurka"
}
|