37,080,977
https://en.wikipedia.org/wiki/Bibliography%20of%20encyclopedias%3A%20astronomy%20and%20astronomers
This is a list of encyclopedias and encyclopedic/biographical dictionaries published on the subject of astronomy and astronomers in any language. Entries are in English except where noted. A Amils, Ricardo; Quintanilla, José Cernicharo; Cleaves, Henderson James (1 June 2011). Encyclopedia of Astrobiology. Springer. Angelo, Joseph A. (1 January 2009). Encyclopedia of Space and Astronomy. Infobase Publishing. Angelo, Joseph A. Encyclopedia of space exploration. Facts on File, 2000. Angelo, Joseph A. The Extraterrestrial Encyclopedia: Our Search for Life in Outer Space. Facts on File, rev. ed., 1991. Angelo, Joseph A., Jr., Facts on File. The Facts on File space and astronomy handbook. Facts on File, 2002. Asimov, Isaac. Isaac Asimov's Library of the Universe. Gareth Stevens, 1988. B Baker, David. Larousse Guide to Astronomy. Larousse, 1978. Bakich, Michael E. (10 July 2003). The Cambridge Encyclopedia of Amateur Astronomy. Cambridge University Press. C Cambridge Encyclopedia of Astronomy. Crown, 1977. Clapham, Frances M.; Taylor, Ron B. (1982). Rand McNally astronomy encyclopedia. Children's Press. D Daintith, John; Gould, William (September 2006). Collins Dictionary of Astronomy. Collins. Daintith, John; Gould, William; Illingworth, Valerie (1 January 2009). The Facts on File Dictionary of Astronomy. Infobase Publishing. E Encyclopedia of astronomy and astrophysics. Taylor & Francis. G Gatland, Kenneth. Illustrated Encyclopedia of Space Technology. Crown, 2nd ed., 1990. Gore, John Ellard (September 2010). An Astronomical Glossary: Or Dictionary of Terms Used in Astronomy (1893). Kessinger Publishing. H Handbuch der Physik. Springer-Verlag, 1956–1988. ISSN 0085-140X. Hetherington, Norriss S. Encyclopedia of Cosmology: Historical, Philosophical, and Scientific Foundations of Modern Cosmology. Garland, 1993. Hockey, Thomas A., Virginia Trimble, Thomas R. Williams, Katherine Bracher, eds. Biographical Encyclopedia of Astronomers. 
Springer, 2007. Hutton, Charles (1817). An astronomical dictionary: compiled from Hutton's Mathematical and philosophical dictionary: to which is prefixed an introduction containing a brief history of astronomy, and a familiar illustration of its elementary principles. Published and sold by Hezekiah Howe. I Illingworth, Valerie, John Owen, Edward Clark, Facts on File. The Facts on File dictionary of astronomy. Facts on File, 2000. Ince, Martin (2001). Dictionary of Astronomy. Peter Collin Publishing. K Kitchin, Christopher R. (2002). Illustrated Dictionary of Practical Astronomy. Springer. L Lang, Kenneth R. (19 June 2006). A Companion to Astronomy and Astrophysics: Chronology and Glossary with Data Tables. Springer. Lewis, Richard. Illustrated Encyclopedia of the Universe. Crown, 1983. Lusis, Andy. Astronomy and astronautics: An enthusiast's guide to books and periodicals. Facts on File, 1986. M Maran, Stephen P. The Astronomy and astrophysics encyclopedia. Van Nostrand Reinhold, 1992. Maran, Stephen P. (15 October 1991). The Astronomy and Astrophysics Encyclopedia. John Wiley & Sons. Mark, Hans. Encyclopedia of space science and technology. Wiley, 2003. Matzner, Richard A. Dictionary of geophysics, astrophysics, and astronomy. CRC Press, 2001. Meyers, Robert Allen (1989). Encyclopedia of Astronomy and Astrophysics. Academic Press. Meyers, Robert A., ed. Encyclopedia of Physical Science and Technology, 2nd ed., Academic Press, 1992. Mitton, Jacqueline (1998). The Penguin Dictionary of Astronomy. Penguin Books. Mitton, Jacqueline (29 January 2001). Cambridge Dictionary of Astronomy. Cambridge University Press. Moore, Patrick (15 August 2002). Astronomy Encyclopedia. Oxford University Press. Moore, Patrick. The International Encyclopedia of Astronomy. Crown, 1987. Moore, Patrick. Patrick Moore's A-Z of Astronomy. Norton, rev. ed., 1987. Muller, Paul (1968). Concise Encyclopedia of Astronomy. Collins. Murdin, Paul. 
Encyclopedia of astronomy and astrophysics. Institute of Physics Publ.; Nature Publ. Group, 2001. Murdin, Paul; Penston, Margaret (30 September 2004). The Firefly Encyclopedia of Astronomy. Firefly Books. Murdin, Paul; Penston, Margaret (1 September 2004). The Canopus Encyclopedia of Astronomy. Canopus. P Parker, Sybil, ed. McGraw-Hill Encyclopedia of Science and Technology. 7th ed., McGraw-Hill, 1992. Parker, Sybil P. and Jay M. Pasachoff (1993). McGraw-Hill encyclopedia of astronomy. McGraw-Hill. Porter, Roy, Marilyn Bailey Ogilvie. The biographical dictionary of scientists. Oxford University Press, 2000. R Ramamurthy, G. (2005). Biographical Dictionary of Great Astronomers. Sura Books. Ridpath, Ian (1 March 2012). A Dictionary of Astronomy. Oxford University Press. Ridpath, Ian. The illustrated encyclopedia of the universe. Watson-Guptill Publications, 2001. Ridpath, Ian (1 January 1980). The Illustrated Encyclopedia of Astronomy and Space. Crowell. Ridpath, Ian; Woodruff, John (13 September 1996). Cambridge Astronomy Dictionary. Cambridge University Press. Ronan, Colin A. (1979). Encyclopedia of astronomy: a comprehensive survey of our solar system, galaxy and beyond. Hamlyn. Room, Adrian. Dictionary of astronomical names. Routledge, 1988. Rudaux, Lucien; Vaucouleurs, Gérard Henri de (1959). Larousse Encyclopedia of Astronomy. Prometheus Press. Rycroft, Michael. The Cambridge Encyclopedia of Space. Cambridge, 1990. S Sachs, Margaret. UFO Encyclopedia. Putnam, 1981. Satterthwaite, Gilbert Elliott (1 January 1970). Encyclopedia of Astronomy. Hamlyn. Schweighauser, Charles A. Astronomy from A to Z: A Dictionary of Celestial Objects and Ideas. Sangamon State University, 1991. Spitz, Armand; Gaynor, Frank (1959). Dictionary of Astronomy and Astronautics. Philosophical Library. Stewart, John. Moons of the Solar System: An Illustrated Encyclopedia. McFarland, 1991. Story, Ronald. Encyclopedia of UFOs. Doubleday, 1980. 
T Trimble, Virginia; Williams, Thomas; Bracher, Katherine (20 November 2007). Biographical Encyclopedia of Astronomers. Springer. W Weigert, Alfred; Zimmermann, Helmut (1968). A Concise Encyclopedia of Astronomy. American Elsevier Pub. Co. Welch, Rosanne. Encyclopedia of women in aviation and space. ABC-CLIO, 1998. Woodruff, John (1 October 2003). Firefly Astronomy Dictionary. Firefly Books. Y Yenne, Bill. Encyclopedia of U.S. Spacecraft. Exeter Books, 1985. See also Bibliography of encyclopedias Citations References Guide to Reference. American Library Association. Retrieved 5 December 2014. (subscription required). Kister, Kenneth F. (1994). Kister's Best Encyclopedias (2nd ed.). Phoenix: Oryx. Astronomy Astronomy books
Bibliography of encyclopedias: astronomy and astronomers
Astronomy
1,579
2,208,748
https://en.wikipedia.org/wiki/Super%20black
Super black is a surface treatment developed at the National Physical Laboratory (NPL) in the United Kingdom. It absorbs approximately 99.6% of visible light at normal incidence, while conventional black paint absorbs about 97.5%. At other angles of incidence, super black is even more effective: at an angle of 45°, it absorbs 99.9% of light. Technology Super black is created by chemically etching a nickel–phosphorus alloy. It is applied in specialist optical instruments to reduce unwanted reflections. The disadvantage of this material is its low optical thickness, since it is only a surface treatment: infrared light with a wavelength longer than a few micrometers penetrates the dark layer and is reflected much more strongly. The reported reflectance rises from about 1% at 3 μm to 50% at 20 μm. In 2009, a competitor to the super black material, Vantablack, was developed based on carbon nanotubes. It has a relatively flat reflectance over a wide spectral range. In 2011, NASA and the US Army began funding research into the use of nanotube-based super black coatings in sensitive optics. Nanotube-based super black arrays and coatings have recently become commercially available. See also Vantablack Emissivity Black hole Black body References External links Materials science Optical materials Shades of black
Super black
Physics,Materials_science,Engineering
287
33,194,835
https://en.wikipedia.org/wiki/Logarithmic%20Schr%C3%B6dinger%20equation
In theoretical physics, the logarithmic Schrödinger equation (sometimes abbreviated as LNSE or LogSE) is one of the nonlinear modifications of Schrödinger's equation, first proposed by Gerald H. Rosen in its relativistic version (with the d'Alembertian instead of the Laplacian and a first-order time derivative) in 1969. It is a classical wave equation with applications to extensions of quantum mechanics, quantum optics, nuclear physics, transport and diffusion phenomena, open quantum systems and information theory, effective quantum gravity and physical vacuum models, and the theory of superfluidity and Bose–Einstein condensation. It is an example of an integrable model. The equation The logarithmic Schrödinger equation is a partial differential equation. In mathematics and mathematical physics one often uses its dimensionless form: i ∂ψ/∂t + Δψ + ψ ln|ψ|² = 0, for the complex-valued function ψ = ψ(x, t) of the particle's position vector x at time t, where Δψ is the Laplacian of ψ in Cartesian coordinates. The logarithmic term has been shown to be indispensable in ensuring that the speed of sound scales as the cubic root of pressure for helium-4 at very low temperatures. This logarithmic term is also needed for cold sodium atoms. In spite of the logarithmic term, it has been shown that, in the case of central potentials, even for non-zero angular momentum the LogSE retains certain symmetries similar to those found in its linear counterpart, making it potentially applicable to atomic and nuclear systems. The relativistic version of this equation can be obtained by replacing the derivative operator with the d'Alembertian, similarly to the Klein–Gordon equation. Soliton-like solutions known as Gaussons figure prominently as analytical solutions to this equation in a number of cases. See also Galaxy rotation curve Nonlinear Schrödinger equation Superfluid Helium-4 Superfluid vacuum theory References External links Theoretical physics Schrödinger equation
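The role of the Gaussian profile can be seen by direct substitution into the dimensionless one-dimensional equation; the amplitude A and phase frequency ω below are free parameters of the ansatz, not quantities from the article:

```latex
\[
  i\,\partial_t\psi + \Delta\psi + \psi\ln|\psi|^2 = 0 .
\]
Substituting the one-dimensional Gaussian ansatz $\psi(x,t)=A\,e^{-x^2/2}e^{i\omega t}$,
\[
  i\,\partial_t\psi = -\omega\,\psi,\qquad
  \partial_x^2\psi = (x^2-1)\,\psi,\qquad
  \ln|\psi|^2 = \ln A^2 - x^2 ,
\]
so that
\[
  i\,\partial_t\psi + \partial_x^2\psi + \psi\ln|\psi|^2
  = \bigl(-\omega + x^2 - 1 + \ln A^2 - x^2\bigr)\psi = 0
  \quad\text{for}\quad \omega = \ln A^2 - 1 .
\]
```

The x² terms cancel because the logarithm of a Gaussian is quadratic, which is why Gaussian-profile solitons (Gaussons) solve the equation exactly.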
Logarithmic Schrödinger equation
Physics
396
46,599,027
https://en.wikipedia.org/wiki/HIV%20Medicine
HIV Medicine is a bimonthly peer-reviewed medical journal covering HIV/AIDS research. It was established in 1999 and is published by Wiley-Blackwell on behalf of the British HIV Association, of which it is the official journal. It is also the official journal of the European AIDS Clinical Society and the Australasian Society for HIV Medicine. The editors-in-chief are Brian Gazzard (Chelsea and Westminster Hospital) and Jens Lundgren (University of Copenhagen). According to the Journal Citation Reports, the journal has a 2014 impact factor of 3.988, ranking it 18th out of 78 journals in the category "Infectious Diseases". References External links HIV/AIDS journals Wiley-Blackwell academic journals Academic journals associated with learned and professional societies Academic journals established in 1999 Bimonthly journals English-language journals
HIV Medicine
Biology
165
458,866
https://en.wikipedia.org/wiki/Solid%20mechanics
Solid mechanics (also known as mechanics of solids) is the branch of continuum mechanics that studies the behavior of solid materials, especially their motion and deformation under the action of forces, temperature changes, phase changes, and other external or internal agents. Solid mechanics is fundamental for civil, aerospace, nuclear, biomedical and mechanical engineering, for geology, and for many branches of physics and chemistry such as materials science. It has specific applications in many other areas, such as understanding the anatomy of living beings and the design of dental prostheses and surgical implants. One of the most common practical applications of solid mechanics is the Euler–Bernoulli beam equation. Solid mechanics extensively uses tensors to describe stresses, strains, and the relationship between them. Solid mechanics is a vast subject because of the wide range of solid materials available, such as steel, wood, concrete, biological materials, textiles, geological materials, and plastics. Fundamental aspects A solid is a material that can support a substantial amount of shearing force over a given time scale during a natural or industrial process or action. This is what distinguishes solids from fluids: fluids, like solids, support normal forces (forces directed perpendicular to the material plane on which they act; the normal force per unit area of that plane is the normal stress), but only solids sustain shear. Shearing forces, in contrast with normal forces, act parallel rather than perpendicular to the material plane, and the shearing force per unit area is called the shear stress. Therefore, solid mechanics examines the shear stress, deformation, and failure of solid materials and structures. 
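The normal/shear decomposition described above can be sketched numerically; the force, plane, and area values below are illustrative assumptions, not data from the article:

```python
import math

# Decompose a force acting on a material plane into normal and shear stress,
# illustrating the distinction drawn in the text between normal stress and
# shear stress. All numerical values are made up for illustration.

def stresses(force, normal, area):
    """Return (normal_stress, shear_stress) in Pa for a force vector [N]
    acting on a plane with unit normal `normal` and area `area` [m^2]."""
    f_n = sum(f * n for f, n in zip(force, normal))           # normal component [N]
    shear_vec = [f - f_n * n for f, n in zip(force, normal)]  # in-plane component [N]
    f_s = math.sqrt(sum(c * c for c in shear_vec))
    return f_n / area, f_s / area

# 1 kN applied at 30 degrees to the plane normal, on a 10 cm x 10 cm face:
F = [1000 * math.sin(math.radians(30)), 0.0, 1000 * math.cos(math.radians(30))]
sigma, tau = stresses(F, [0.0, 0.0, 1.0], 0.01)
print(f"normal stress = {sigma/1e3:.1f} kPa, shear stress = {tau/1e3:.1f} kPa")
```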
The most common topics covered in solid mechanics include:
stability of structures - examining whether structures can return to a given equilibrium after disturbance or partial/complete failure (see structural mechanics)
dynamical systems and chaos - dealing with mechanical systems highly sensitive to their given initial position
thermomechanics - analyzing materials with models derived from principles of thermodynamics
biomechanics - solid mechanics applied to biological materials, e.g. bones, heart tissue
geomechanics - solid mechanics applied to geological materials, e.g. ice, soil, rock
vibrations of solids and structures - examining vibration and wave propagation from vibrating particles and structures, vital in mechanical, civil, mining, aeronautical, maritime/marine and aerospace engineering
fracture and damage mechanics - dealing with crack-growth mechanics in solid materials
composite materials - solid mechanics applied to materials made up of more than one compound, e.g. reinforced plastics, reinforced concrete, fiberglass
variational formulations and computational mechanics - numerical solutions to the mathematical equations arising from various branches of solid mechanics, e.g. the finite element method (FEM)
experimental mechanics - design and analysis of experimental methods to examine the behavior of solid materials and structures
Relationship to continuum mechanics As shown in the following table, solid mechanics inhabits a central place within continuum mechanics. The field of rheology presents an overlap between solid and fluid mechanics. Response models A material has a rest shape, and its shape departs from the rest shape due to stress. The amount of departure from the rest shape is called deformation; the proportion of deformation to original size is called strain. 
If the applied stress is sufficiently low (or the imposed strain is small enough), almost all solid materials behave in such a way that the strain is directly proportional to the stress; the coefficient of proportionality is called the modulus of elasticity. This region of deformation is known as the linearly elastic region. It is most common for analysts in solid mechanics to use linear material models, due to ease of computation. However, real materials often exhibit non-linear behavior. As new materials are used and old ones are pushed to their limits, non-linear material models are becoming more common. These are the basic models that describe how a solid responds to an applied stress:
Elasticity - When an applied stress is removed, the material returns to its undeformed state. Linearly elastic materials, those that deform proportionally to the applied load, can be described by the linear elasticity equations such as Hooke's law.
Viscoelasticity - These materials behave elastically but also have damping: when the stress is applied and removed, work has to be done against the damping effects and is converted into heat within the material, resulting in a hysteresis loop in the stress–strain curve. This implies that the material response is time-dependent.
Plasticity - Materials that behave elastically generally do so when the applied stress is below a yield value. When the stress exceeds the yield stress, the material behaves plastically and does not return to its previous state; deformation that occurs after yield is permanent.
Viscoplasticity - Combines the theories of viscoelasticity and plasticity and applies to materials like gels and mud.
Thermoelasticity - There is coupling of mechanical with thermal responses. In general, thermoelasticity is concerned with elastic solids under conditions that are neither isothermal nor adiabatic. 
The simplest theory involves Fourier's law of heat conduction, as opposed to advanced theories with physically more realistic models. Timeline
1452–1519: Leonardo da Vinci made many contributions
1638: Galileo Galilei published the book Two New Sciences, in which he examined the failure of simple structures
1660: Hooke's law by Robert Hooke
1687: Isaac Newton published Philosophiae Naturalis Principia Mathematica, which contains Newton's laws of motion
1750: Euler–Bernoulli beam equation
1700–1782: Daniel Bernoulli introduced the principle of virtual work
1707–1783: Leonhard Euler developed the theory of buckling of columns
1826: Claude-Louis Navier published a treatise on the elastic behaviors of structures
1873: Carlo Alberto Castigliano presented his dissertation "Intorno ai sistemi elastici", which contains his theorem for computing displacement as the partial derivative of the strain energy. This theorem includes the method of least work as a special case
1874: Otto Mohr formalized the idea of a statically indeterminate structure
1922: Timoshenko corrected the Euler–Bernoulli beam equation
1936: Hardy Cross published the moment distribution method, an important innovation in the design of continuous frames
1941: Alexander Hrennikoff solved the discretization of plane elasticity problems using a lattice framework
1942: R. Courant divided a domain into finite subregions
1956: J. Turner, R. W. Clough, H. C. Martin, and L. J. Topp's paper "Stiffness and Deflection of Complex Structures" introduced the name "finite-element method" and is widely recognized as the first comprehensive treatment of the method as it is known today
See also Strength of materials - specific definitions and the relationships between stress and strain. Applied mechanics Materials science Continuum mechanics Fracture mechanics Impact (mechanics) Solid-state physics References Notes Bibliography
L.D. Landau, E.M. Lifshitz, Course of Theoretical Physics: Theory of Elasticity, Butterworth-Heinemann.
J.E. Marsden, T.J. Hughes, Mathematical Foundations of Elasticity, Dover.
P.C. Chou, N.J. Pagano, Elasticity: Tensor, Dyadic, and Engineering Approaches, Dover.
R.W. Ogden, Non-linear Elastic Deformation, Dover.
S. Timoshenko and J.N. Goodier, Theory of Elasticity, 3rd ed., New York, McGraw-Hill, 1970.
G.A. Holzapfel, Nonlinear Solid Mechanics: A Continuum Approach for Engineering, Wiley, 2000.
A.I. Lurie, Theory of Elasticity, Springer, 1999.
L.B. Freund, Dynamic Fracture Mechanics, Cambridge University Press, 1990.
R. Hill, The Mathematical Theory of Plasticity, Oxford University Press, 1950.
J. Lubliner, Plasticity Theory, Macmillan Publishing Company, 1990.
J. Ignaczak, M. Ostoja-Starzewski, Thermoelasticity with Finite Wave Speeds, Oxford University Press, 2010.
D. Bigoni, Nonlinear Solid Mechanics: Bifurcation Theory and Material Instability, Cambridge University Press, 2012.
Y.C. Fung, Pin Tong and Xiaohong Chen, Classical and Computational Solid Mechanics, 2nd ed., World Scientific Publishing, 2017.
Mechanics Continuum mechanics Rigid bodies mechanics
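The Euler–Bernoulli beam equation mentioned in the article admits a simple closed-form check for an end-loaded cantilever; the material and geometry below (a steel rectangular bar) are assumed for illustration:

```python
# Tip deflection of an end-loaded cantilever from the Euler-Bernoulli beam
# equation E*I*w'''' = q. For a point load F at the free end the classical
# closed form is delta = F*L^3 / (3*E*I). The numbers below are illustrative
# assumptions, not values from the article.

def rect_second_moment(b, h):
    """Second moment of area of a b x h rectangular cross-section [m^4]."""
    return b * h**3 / 12

def cantilever_tip_deflection(F, L, E, I):
    """Closed-form Euler-Bernoulli tip deflection [m] for an end load F."""
    return F * L**3 / (3 * E * I)

E = 200e9                            # Young's modulus of steel [Pa] (assumed)
I = rect_second_moment(0.02, 0.01)   # 20 mm x 10 mm bar
delta = cantilever_tip_deflection(100.0, 1.0, E, I)  # 100 N at the tip, 1 m span
print(f"tip deflection = {delta*1000:.2f} mm")
```

A deflection this large would strictly call for a non-linear model, which echoes the article's point about the limits of linear material models.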
Solid mechanics
Physics,Engineering
1,731
11,040,776
https://en.wikipedia.org/wiki/Sirius%20visualization%20software
Sirius is a molecular modelling and analysis system developed at San Diego Supercomputer Center. Sirius is designed to support advanced user requirements that go beyond simple display of small molecules and proteins. Sirius supports high quality interactive 3D graphics, structure building, displaying protein or DNA primary sequences, access to remote data sources, and visualizing molecular dynamics trajectories. It can be used for scientific visualization and analysis, and chemistry and biology instruction. This software is no longer supported as of 2011. Key features Sirius supports a variety of applications with a set of features, including: Building and editing chemical structures using a library of fragments Protein structure and sequence alignment Command line interpreter and scripting support fully compatible with extant RasMol scripts Full support for molecular dynamics trajectory visualizing BLAST search directly in Protein Data Bank and Uniprot databases Ability to move parts of the loaded data while freezing the rest Interactive calculation of hydrogen bonding, steric clashes, Ramachandran plots Support for all major structure and sequence formats Bundled POV-Ray for creating photorealistic images Integrated selection and coloring across individual visualizing components Sirius is based on molecular graphics code and data structures developed as a part of the Molecular Biology Toolkit. RasMol-compatible scripting Sirius features a command line interpreter that can be used to quickly manipulate structure appearance and orientation. The set of commands has been patterned after RasMol, so it's fully compatible with extant scripts. Added commands introduced in Sirius provide support for manipulating multiple structures loaded at the same time, and enable more flexible selection. Extant RasMol scripts can be imported and run within Sirius to produce high quality representations of encoded molecular scenes. 
Since RasMol uses a coordinate system that differs from that of Sirius, internal conversion is performed when RasMol scripts are imported, so that any orientation changes are shown correctly. Any manually entered commands, however, are executed according to the Sirius coordinate system. Sirius supports several predefined atom-residue sets and color schemes, allows editing of scripts through the Command Panel interface, and supports logical operators and parentheses for building complex selection commands. Visualizing molecular dynamics trajectories Sirius contains a full-featured molecular dynamics visualization component. It can read output files from AMBER and CHARMM simulations, including compressed and AMBER out files. RMSD changes along the trajectory can be calculated using user-defined atom subsets and displayed in an interactively updated graph. To reduce memory requirements, large multifile simulations may be loaded in a buffered mode. If a simulation involves changes in protein fold, Sirius can be set to track and recompute the displayed secondary structure features in real time, which provides a convenient way to observe transformations of the structure. The full trajectory or selected frames can be exported as QuickTime video or as a set of POV-Ray scene snapshots that can later be converted to a high-quality movie. Access and download Sirius is distributed freely from the project website to individuals affiliated with academic and non-profit organizations. Native desktop application installers are available for Windows, Linux, and macOS. See also Comparison of software for molecular mechanics modeling List of molecular graphics systems Molecule editor Molecular modelling Molecular graphics Molecular dynamics References External links Internet Archive of Official Website Molecular Biology Toolkit San Diego Supercomputer Center University of California San Diego Molecular modelling software
Sirius visualization software
Chemistry
670
27,993,748
https://en.wikipedia.org/wiki/Dorado%20in%20Chinese%20astronomy
The modern constellation Dorado is not included in the Three Enclosures and Twenty-Eight Mansions system of traditional Chinese uranography because its stars are too far south for observers in China to have known them before the introduction of Western star charts. Based on the work of Xu Guangqi and the German Jesuit missionary Johann Adam Schall von Bell in the late Ming Dynasty, this constellation has been classified among the 23 Southern Asterisms (近南極星區, Jìnnánjíxīngqū) under the names White Patches Attached (夾白, Jiābái) and Goldfish (金魚, Jīnyú). The name of the Western constellation in modern Chinese is 劍魚座 (jiàn yú zuò), meaning "the swordfish constellation". Stars The map of the Chinese constellations in the area of Dorado consists of: See also Chinese astronomy Traditional Chinese star names Chinese constellations References External links 香港太空館研究資源 中國星區、星官及星名英譯表 天象文學 台灣自然科學博物館天文教育資訊網 中國古天文 中國古代的星象系統 Astronomy in China Dorado
Dorado in Chinese astronomy
Astronomy
243
2,850,048
https://en.wikipedia.org/wiki/Sessile%20drop%20technique
In materials science, the sessile drop technique is a method used for the characterization of solid surface energies and, in some cases, aspects of liquid surface energies. The main premise of the method is that, by placing on the solid a droplet of a liquid with a known surface energy and measuring the contact angle it forms, the surface energy of the solid substrate can be calculated. The liquid used in such experiments is referred to as the probe liquid, and the use of several different probe liquids is required. Probe liquid The surface energy is measured in units of joules per square meter, which for liquids is equivalent to surface tension, measured in newtons per meter. The overall surface tension/energy of a liquid can be acquired through various methods using a tensiometer, or by the pendant drop method and the maximum bubble pressure method. The interfacial tension at the interface of the probe liquid and the solid surface can additionally be viewed as the result of different types of intermolecular forces. As such, surface energies can be subdivided according to the various interactions that cause them, such as the surface energy due to dispersive interactions (e.g. van der Waals forces) and that due to other interactions (e.g. hydrogen bonding, polar interactions, acid–base interactions). It is often useful for the sessile drop technique to use liquids that are known to be incapable of some of those interactions (see table 1). For example, the surface tension of all straight alkanes is entirely dispersive, and all of the other components are zero. This is algebraically useful, as it eliminates a variable in certain cases and makes these liquids essential testing materials. The overall surface energy, both for a solid and for a liquid, is traditionally assumed to be simply the sum of the components considered. 
For example, the equation describing the subdivision of surface energy into the contributions of dispersive interactions and polar interactions would be σS = σSD + σSP and σL = σLD + σLP, where σS is the total surface energy of the solid, σSD and σSP are respectively the dispersive and polar components of the solid surface energy, σL is the total surface tension/surface energy of the liquid, and σLD and σLP are respectively the dispersive and polar components of the surface tension. In addition to the tensiometer and pendant drop techniques, the sessile drop technique can in some cases be used to separate the known total surface energy of a liquid into its components. This is done by reversing the above idea through the introduction of a reference solid surface that is assumed to be incapable of polar interactions, such as polytetrafluoroethylene (PTFE). Contact angle The contact angle is defined as the angle made by the intersection of the liquid/solid interface and the liquid/air interface. It can alternatively be described as the angle between the solid sample's surface and the tangent to the droplet's ovate shape at the edge of the droplet. A high contact angle indicates a low solid surface energy or chemical affinity; this is also referred to as a low degree of wetting. A low contact angle indicates a high solid surface energy or chemical affinity, and a high or sometimes complete degree of wetting. For example, a contact angle of zero degrees occurs when the droplet has spread into a flat puddle; this is called complete wetting. Measuring contact angle Goniometer method The simplest way to measure the contact angle of a sessile drop is with a contact angle goniometer, which allows the user to measure the contact angle visually. A droplet is deposited by a syringe positioned above the sample surface, and a high-resolution camera captures the image from the profile or side view. 
The image can then be analyzed either by eye (with a protractor) or, more often, with image-analysis software. This type of measurement is referred to as a static contact angle measurement. The contact angle is affected not only by the surface chemistry but also by the surface roughness. The Young equation, which is the basis for the contact angle, assumes a homogeneous surface with no surface roughness. If surface roughness is present, the droplet can be in the Wenzel state (homogeneous wetting), the Cassie–Baxter state (heterogeneous wetting) or an intermediate state. The surface roughness amplifies the wetting behavior dictated by the surface chemistry. To measure the contact angle hysteresis, the volume of the sessile droplet is gradually increased; the maximum possible contact angle is referred to as the advancing contact angle. The receding contact angle can be measured by removing volume from the drop until dewetting occurs; the minimum possible contact angle is referred to as the receding contact angle. The contact angle hysteresis is the difference between the advancing and receding contact angles. Advantages and disadvantages The advantage of this method, aside from its relatively straightforward nature, is that with a large enough solid surface, multiple droplets can be deposited at various locations on the sample to determine heterogeneity. The reproducibility of particular values of the contact angle will reflect the heterogeneity of the surface's energy properties. Conversely, the disadvantage is that if the sample is only large enough for one droplet, then it will be difficult to determine heterogeneity, or consequently to assume homogeneity. This is particularly true because conventional, commercially available goniometers do not swivel the camera/backlight setup relative to the stage, and thus can only show the contact angle at two points: the right and the left edge of the droplet. 
In addition to this, the measurement is hampered by its inherent subjectivity, since the placement of the lines is determined either by the user looking at the pictures or by the image-analysis software's definition of the lines. Wilhelmy method An alternative method for measuring the contact angle is the Wilhelmy method, which employs a sensitive force meter to measure a force that can be translated into a value of the contact angle. In this method, a small plate-shaped sample of the solid in question, attached to the arm of the force meter, is vertically dipped into a pool of the probe liquid (in practice, with a stationary force meter the liquid is raised to meet the sample rather than the sample being lowered), and the force exerted on the sample by the liquid is measured by the force meter. This force is related to the contact angle by F = σl·cosθ − Fb, where F is the total force measured by the force meter, Fb is the buoyancy force due to the solid sample displacing the liquid, l is the wetted length, and σ is the known surface tension of the liquid. Advantages and disadvantages The advantage of this method is that it is fairly objective and the measurement yields data that are inherently averaged over the wetted length. Although this does not help determine heterogeneity, it does automatically give a more accurate average value. Its disadvantages, aside from being more complicated than the goniometer method, include the fact that a sample of an appropriate size must be produced with a uniform cross-section in the submersion direction, and the wetted length must be measured with some precision. In addition, this method is only appropriate if both sides of the sample are identical; otherwise the measured data will be the result of two completely different interactions. Strictly speaking, this is not a sessile drop technique, as a small submerging pool is used rather than a droplet. 
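As a sketch of the Wilhelmy calculation, assuming the standard relation in which the measured force equals the wetting force minus buoyancy (F = σl·cosθ − Fb); all numerical values below are illustrative assumptions, not measurements from the text:

```python
import math

# Back out the contact angle from a Wilhelmy force measurement, assuming
# F = sigma * l * cos(theta) - F_b, i.e. cos(theta) = (F + F_b) / (sigma * l).
# The numbers are illustrative, not from the article.

def wilhelmy_contact_angle(F, F_b, sigma, l):
    """Contact angle [deg] from measured force F [N], buoyancy F_b [N],
    liquid surface tension sigma [N/m] and wetted length l [m]."""
    cos_theta = (F + F_b) / (sigma * l)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("inconsistent inputs: |cos(theta)| > 1")
    return math.degrees(math.acos(cos_theta))

# Water (sigma ~ 72.8 mN/m) and a plate with a 40 mm wetted perimeter:
theta = wilhelmy_contact_angle(F=1.8e-3, F_b=0.2e-3, sigma=72.8e-3, l=0.04)
print(f"contact angle ~ {theta:.1f} deg")
```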
However, the calculations described in the following sections, which were derived to relate the sessile drop contact angle to the surface energy, apply just as well.

Determining surface energy

While surface energy is conventionally defined as the work required to build a unit area of a given surface, when it comes to its measurement by the sessile drop technique the surface energy is not quite as well defined. The values obtained through the sessile drop technique depend not only on the solid sample in question, but equally on the properties of the probe liquid being used, as well as on the particular theory relating the parameters mathematically to one another. Numerous such theories have been developed by various researchers. These methods differ in several regards, such as derivation and convention, but most importantly they differ in the number of components or parameters they are equipped to analyze. The simpler methods containing fewer components simplify the system by lumping the surface energy into one number, while more rigorous methods with more components are derived to distinguish between the various components of the surface energy. The total surface energy of solids and liquids depends on different types of molecular interactions, such as dispersive (van der Waals), polar, and acid/base interactions, and is considered to be the sum of these independent components. Some theories account for more of these phenomena than others. These distinctions should be considered when deciding which method is appropriate for the experiment at hand. A few commonly used theories follow.

One-component theories

The Zisman theory

The Zisman theory is the simplest commonly used theory, as it is a one-component theory, and it is best used for non-polar surfaces.
This means that polymer surfaces that have been subjected to heat treatment, corona treatment, or plasma cleaning, or polymers that contain heteroatoms, do not lend themselves to this particular theory, as they tend to be at least somewhat polar. The Zisman theory also tends to be more useful in practice for surfaces with lower energies. The Zisman theory simply defines the surface energy of the solid as being equal to the surface energy of the highest-surface-energy liquid that wets the solid completely. That is to say, the droplet will spread as much as possible, completely wetting the surface, for this liquid and any liquids with lower surface energies, but not for liquids with higher surface energies. Since this probe liquid could hypothetically be any liquid, including an imaginary one, the best way to determine the surface energy by the Zisman method is to acquire contact angle data points for several probe liquids on the solid surface in question, and then plot the cosine of each angle against the known surface energy of the corresponding probe liquid. By constructing this Zisman plot, one can extrapolate to the highest liquid surface energy, real or hypothetical, that would result in complete wetting of the sample with a contact angle of zero degrees.

Accuracy/precision

The line coefficient (Fig 5) suggests that this is a fairly accurate result; however, this is only the case for the pairing of that particular solid with those particular liquids. In other cases the fit may be much poorer (for example, replacing polyethylene with poly(methyl methacrylate) yields a significantly lower line coefficient for a plot using the same list of liquids).
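The Zisman extrapolation just described can be sketched numerically: fit a straight line to cos θ versus the probe liquids' surface tensions and solve for the tension at which cos θ = 1 (zero contact angle). The data below are synthetic and the function name is an assumption, not from the article:

```python
import math

def zisman_critical_surface_tension(liquid_tensions, contact_angles_deg):
    """Fit a line to cos(theta) vs. liquid surface tension, then
    extrapolate to cos(theta) = 1 (a contact angle of zero degrees)."""
    cos_t = [math.cos(math.radians(t)) for t in contact_angles_deg]
    n = len(liquid_tensions)
    mean_x = sum(liquid_tensions) / n
    mean_y = sum(cos_t) / n
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(liquid_tensions, cos_t))
    sxx = sum((x - mean_x) ** 2 for x in liquid_tensions)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return (1.0 - intercept) / slope

# Synthetic probe liquids lying exactly on cos(theta) = 1.4 - 0.02*sigma,
# which reaches cos(theta) = 1 at the critical tension sigma_c = 20 mJ/m^2.
sigmas = [30.0, 40.0, 50.0, 60.0]
thetas = [math.degrees(math.acos(1.4 - 0.02 * s)) for s in sigmas]
print(zisman_critical_surface_tension(sigmas, thetas))  # -> ~20.0
```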
This shortcoming is a result of the fact that the Zisman theory treats the surface energy as a single parameter, rather than accounting for the fact that, for example, polar interactions are much stronger than dispersive ones, so the degree to which one occurs versus the other greatly affects the necessary calculations. As such, it is a simple but not particularly robust theory. Since the premise of this procedure is to determine the hypothetical properties of a liquid, the precision of the result depends on the precision to which the surface energy values of the probe liquids are known.

Two-component theories

The Owens/Wendt theory

The Owens/Wendt theory (after D. K. Owens and R. C. Wendt) divides the surface energy into two components: surface energy due to dispersive interactions and surface energy due to polar interactions. This theory is derived from the combination of Young's relation, which relates the contact angle to the surface energies of the solid and liquid and to the interfacial tension, and Good's equation (after R. J. Good), which relates the interfacial tension to the polar and dispersive components of the surface energy. The resulting equation is

  σ_L(cos θ + 1) / (2√(σ_L^D)) = √(σ_S^P) · (√(σ_L^P) / √(σ_L^D)) + √(σ_S^D)

Note that this equation has the form of y = mx + b, with y = σ_L(cos θ + 1)/(2√(σ_L^D)), x = √(σ_L^P)/√(σ_L^D), m = √(σ_S^P), and b = √(σ_S^D). As such, the polar and dispersive components of the solid's surface energy are determined by the slope and intercept of the resulting graph. Of course, the problem at this point is that in order to make that graph, knowing the total surface energy of the probe liquid is not enough: it is also necessary to know how it breaks down into its polar and dispersive components. To do this, one can simply reverse the procedure by testing the probe liquid against a standard reference solid that is not capable of polar interactions, such as PTFE.
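The Owens/Wendt regression over several probe liquids can be sketched as follows, using the linearized form y = mx + b with y = σ_L(cos θ + 1)/(2√(σ_L^D)) and x = √(σ_L^P)/√(σ_L^D). The contact angles below are synthetic, generated for an assumed solid, and the function name is illustrative:

```python
import numpy as np

def owens_wendt_components(sigma_d_liq, sigma_p_liq, theta_deg):
    """Fit the linearized Owens/Wendt equation y = m*x + b over several
    probe liquids; the solid's (dispersive, polar) components are (b^2, m^2)."""
    d = np.asarray(sigma_d_liq, dtype=float)
    p = np.asarray(sigma_p_liq, dtype=float)
    total = d + p
    cos_t = np.cos(np.radians(theta_deg))
    y = total * (cos_t + 1.0) / (2.0 * np.sqrt(d))
    x = np.sqrt(p) / np.sqrt(d)
    m, b = np.polyfit(x, y, 1)
    return b ** 2, m ** 2

# Synthetic contact angles for a solid with sigma_S^D = 30, sigma_S^P = 10
# (mJ/m^2), using three probe liquids with (dispersive, polar) components:
liq_d = np.array([50.8, 21.8, 29.0])
liq_p = np.array([0.0, 51.0, 19.0])
cos_t = 2.0 * (np.sqrt(liq_d * 30.0) + np.sqrt(liq_p * 10.0)) / (liq_d + liq_p) - 1.0
theta = np.degrees(np.arccos(cos_t))
print(owens_wendt_components(liq_d, liq_p, theta))  # -> (~30.0, ~10.0)
```

A purely dispersive liquid contributes a point at x = 0, which pins down the intercept b = √(σ_S^D) directly.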
If the contact angle of a sessile drop of the probe liquid is measured on a PTFE surface, for which σ_S^P = 0, the principle equation reduces to

  σ_L(cos θ + 1) / 2 = √(σ_L^D) · √(σ_S^D)

Since the total surface tension of the liquid is already known, this equation determines the dispersive component, and the difference between the total and dispersive components gives the polar component.

Accuracy/precision

The accuracy and precision of this method is supported largely by the confidence level of the results for appropriate liquid/solid combinations (as seen, for example, in fig 6). The Owens/Wendt theory is typically applicable to surfaces with low charge and moderate polarity. Some good examples are polymers that contain heteroatoms, such as PVC, polyurethanes, polyamides, polyesters, polyacrylates, and polycarbonates.

The Fowkes theory

The Fowkes theory (after F. M. Fowkes) is derived in a slightly different way from the Owens/Wendt theory, although the Fowkes theory's principle equation is mathematically equivalent to that of Owens and Wendt:

  √(σ_L^D σ_S^D) + √(σ_L^P σ_S^P) = σ_L(cos θ + 1) / 2

Note that by dividing both sides of this equation by √(σ_L^D), the Owens/Wendt principle equation is recovered. As such, one of the options for the proper determination of the surface energy components is the same. In addition to that method, it is also possible to do tests using a liquid with no polar component to its surface energy, then a liquid that has both polar and dispersive components, and then solve the equations algebraically (see table 1). First, one performs the standard sessile drop contact angle measurement for the solid in question and a liquid with a polar component of zero (σ_L^P = 0, so σ_L = σ_L^D). The second step is to use a second probe liquid that has both a dispersive and a polar component to its surface energy, and then solve for the unknowns algebraically.
The Fowkes theory generally requires the use of only two probe liquids, as described above; the recommended ones are diiodomethane, which should have no polar component due to its molecular symmetry, and water, which is commonly known to be a very polar liquid.

Accuracy/precision

Though the principle equation is essentially identical to that of Owens and Wendt, the Fowkes theory in a larger sense has slightly different applications. Because it is derived from different principles than Owens/Wendt, the rest of the information the Fowkes theory is concerned with relates to adhesion. As such, it is more applicable to situations where adhesion occurs, and in general works better than the Owens/Wendt theory when dealing with higher surface energies. In addition, there is an extended Fowkes theory, rooted in the same principles but dividing the total surface energy into a sum of three rather than two components: surface energy due to dispersive interactions, polar interactions, and hydrogen bonding.

The Wu theory

The Wu theory (after Souheng Wu) is also essentially similar to the Owens/Wendt and Fowkes theories, in that it divides the surface energy into a polar and a dispersive component. The primary difference is that Wu uses the harmonic mean rather than the geometric mean of the known surface tensions, which leads to more involved mathematics.

Accuracy/precision

The Wu theory provides more accurate results than the other two-component theories, particularly for high surface energies. It does, however, suffer from one complication: because of the mathematics involved, the Wu theory yields two results for each component, one being the true result and one being simply a consequence of the mathematics. The challenge at this point lies in interpreting which is the true result.
Sometimes this is as simple as eliminating the result that makes no physical sense (such as a negative surface energy) or the result that is clearly incorrect by virtue of being many orders of magnitude larger or smaller than it should be. Sometimes the interpretation is trickier.

The Schultz theory

The Schultz theory (after D. L. Schultz) is applicable only to very high energy solids. Again, it is similar to the theories of Owens, Wendt, Fowkes, and Wu, but is designed for situations where the conventional measurement required for those theories is impossible. For solids with sufficiently high surface energy, most liquids wet the surface completely with a contact angle of zero degrees, so no useful data can be gathered. The Schultz procedure calls for depositing a sessile drop of the probe liquid on the solid surface in question while the system is submerged in another liquid, rather than in open air. As a result, the higher "atmospheric" pressure due to the surrounding liquid compresses the probe liquid droplet so that there is a measurable contact angle.

Accuracy/precision

This method is designed to work where the other methods provide no results at all. As such, it is indispensable, since it is the only way to apply the sessile drop technique to very high surface energy solids. Its major drawback is that it is far more complex, both mathematically and experimentally. The Schultz theory requires one to account for many more factors, as there is now the unusual interaction of the probe liquid phase with the surrounding liquid.

Three-component theories

The van Oss theory

The van Oss theory separates the surface energy of solids and liquids into three components.
It includes the dispersive surface energy, as before, and subdivides the polar component into the sum of two more specific components: the surface energy due to acidic interactions (σ+) and that due to basic interactions (σ−). The acid component theoretically describes a surface's propensity to have polar interactions with a second surface that can act as a base by donating electrons. Conversely, the base component describes the propensity of a surface to have polar interactions with another surface that acts as an acid by accepting electrons. The principle equation of this theory is

  σ_L(cos θ + 1) = 2[√(σ_L^D σ_S^D) + √(σ_L^+ σ_S^−) + √(σ_L^− σ_S^+)]

Again, the best way to deal with this theory, much like the two-component theories, is to use at least three liquids (more can be used to obtain additional results for statistical purposes): one with only a dispersive component in its surface energy (σ_L = σ_L^D), one with a dispersive and either an acidic or a basic component, and finally either a liquid with a dispersive and a basic or acidic component (whichever the second probe liquid did not have), or a liquid with all three components, and then linearizing the results. It is naturally more robust than other theories, particularly in cases where there is a great imbalance between the acid and base components of the polar surface energy. The van Oss theory is most suitable for testing the surface energies of inorganics, organometallics, and surfaces containing ions. The most significant difficulty in applying the van Oss theory is that there is little agreement regarding a set of reference solids that can be used to characterize the acid and base components of potential probe liquids. There are, however, some liquids that are generally agreed to have known dispersive/acid/base components to their surface energies. Two of them are listed in table 1.
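With three probe liquids, the van Oss relation becomes a linear system in the unknowns √(σ_S^D), √(σ_S^+) and √(σ_S^−), which can be solved directly. The sketch below uses synthetic liquid parameters and illustrative function names (requires NumPy):

```python
import numpy as np

def van_oss_components(liquids):
    """Solve the van Oss system for the solid's (dispersive, acid, base)
    surface energy components. Each probe liquid is given as
    (sigma_d, sigma_plus, sigma_minus, theta_deg); the unknown vector is
    [sqrt(sigma_S^D), sqrt(sigma_S^+), sqrt(sigma_S^-)]."""
    A, b = [], []
    for d, plus, minus, theta in liquids:
        total = d + 2.0 * np.sqrt(plus * minus)
        # sigma_L*(cos t + 1) = 2*[sqrt(d*sD) + sqrt(minus_L*s+) + sqrt(plus_L*s-)]
        A.append([2.0 * np.sqrt(d), 2.0 * np.sqrt(minus), 2.0 * np.sqrt(plus)])
        b.append(total * (np.cos(np.radians(theta)) + 1.0))
    x = np.linalg.solve(np.array(A), np.array(b))
    return tuple(x ** 2)

# Synthetic angles for a solid with (D, acid, base) = (20, 0.5, 10) mJ/m^2:
def _synth_theta(d, plus, minus, sD=20.0, s_plus=0.5, s_minus=10.0):
    total = d + 2.0 * np.sqrt(plus * minus)
    rhs = 2.0 * (np.sqrt(d * sD) + np.sqrt(plus * s_minus) + np.sqrt(minus * s_plus))
    return np.degrees(np.arccos(rhs / total - 1.0))

liqs = [(50.0, 0.0, 0.0, _synth_theta(50.0, 0.0, 0.0)),
        (22.0, 25.0, 25.0, _synth_theta(22.0, 25.0, 25.0)),
        (34.0, 4.0, 57.0, _synth_theta(34.0, 4.0, 57.0))]
print(van_oss_components(liqs))  # -> (~20.0, ~0.5, ~10.0)
```

Note that the acid component of the solid pairs with the base component of the liquid and vice versa, which is why the matrix columns are swapped relative to the liquid's (+, −) order.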
List of common probe liquids

Potential problems

The presence of surface-active elements such as oxygen and sulfur will have a large impact on the measurements obtained with this technique. Surface-active elements exist in larger concentrations at the surface than in the bulk of the liquid, meaning that the total levels of these elements must be carefully controlled at a very low level. For example, the presence of only 50 ppm of sulfur in liquid iron will reduce the surface tension by approximately 20%.

Practical applications

The sessile drop technique has various applications for both materials engineering and straight characterization. In general, it is useful in determining the surface tension of liquids through the use of reference solids, with a similar technique being the captive bubble method. There are various other specific applications, which can be subdivided according to which of the above theories is most likely to be applicable to the circumstances: The Zisman theory is mostly used for low energy surfaces and characterizes only the total surface energy. As such, it is probably most useful in cases that recall the conventional definition of surfaces, for example if a chemical engineer wants to know the energy associated with fabricating a surface. It may also be useful in cases where the surface energy has some effect on a spectroscopic technique being used on the solid in question. The two-component theories are most likely to be applicable to materials engineering questions about the practical interactions of liquids and solids. The Fowkes theory, since it is more suited to higher energy solid surfaces and is largely rooted in theories of adhesion, is likely suited to the characterization of interactions in which the solids and liquids have a high affinity for one another, such as, logically enough, adhesives and adhesive coatings.
The Owens/Wendt theory, which deals with low energy solid surfaces, would be helpful in characterizing the interactions where the solids and liquids do not have a strong affinity for one another – for example, the effectiveness of waterproofing. Polyurethanes and PVC are good examples of waterproof plastics. The Schultz theory is best used for the characterization of very high energy surfaces for which the other theories are ineffective, the most significant example being bare metals. The van Oss theory is most suitable for cases in which acid/base interaction is an important consideration. Examples include pigments, pharmaceuticals, and paper. Specifically, notable examples include both paper used for the regular purpose of printing, and the more specialized case of litmus paper, which is itself used to characterize acidity and basicity.

See also

Contact angle
Du Noüy ring method
Goniometer
Surface energy
Wilhelmy plate
Rise in core

References

Shimizu, R. N., & Demarquette, N. R. (2000). Evaluation of surface energy of solid polymers using different models. Journal of Applied Polymer Science, 76(12), 1831–1845.

External links

Contact angle
Materials testing
Surface science
Sessile drop technique
https://en.wikipedia.org/wiki/Oxymonad
The Oxymonads (or Oxymonadida) are a group of flagellated protists found exclusively in the intestines of animals, mostly termites and other wood-eating insects. Along with the similar parabasalid flagellates, they harbor the symbiotic bacteria that are responsible for breaking down cellulose. There is no evidence for the presence of mitochondria (not even anaerobic mitochondrion-like organelles such as hydrogenosomes or mitosomes) in oxymonads, and three species have been shown to completely lack any molecular markers of mitochondria. The group includes, for example, Dinenympha, Pyrsonympha, Oxymonas, Streblomastix, Monocercomonoides, and Blattamonas.

Characteristics

Most oxymonads are around 50 μm in size and have a single nucleus, associated with four flagella. Their basal bodies give rise to several long sheets of microtubules, which form an organelle called an axostyle, different in structure from the axostyles of parabasalids. The cell may use the axostyle to swim, as the sheets slide past one another and cause it to undulate. An associated fiber called the preaxostyle separates the flagella into two pairs. A few oxymonads have multiple nuclei, flagella, and axostyles.

Relationship to Trimastix and Paratrimastix

The free-living flagellates Trimastix and Paratrimastix are closely related to the oxymonads. They lack aerobic mitochondria and have four flagella separated by a preaxostyle, but unlike the oxymonads they have a feeding groove. This character places the oxymonads, Trimastix, and Paratrimastix among the Excavata, and in particular they may belong to the metamonads. Molecular phylogenetic studies indeed place Preaxostyla (oxymonads, Trimastix, and Paratrimastix) in Metamonada.

Taxonomy

Order Oxymonadida Grassé 1952 emend. Cavalier-Smith 2003
 Family Oxymonadidae Kirby 1928 [Oxymonadaceae; Oxymonadinae Cleveland 1934]
  Genus ?Metasaccinobaculus Freitas 1945
  Genus Barroella Zeliff 1944 [Kirbyella Zeliff 1930 non Kirkaldy 1906 non Bolivar 1909]
  Genus Microrhopalodina Grassé & Foa 1911 [Proboscidiella Kofoid & Swezy 1926]
  Genus Opisthomitus Grassé 1952 non Duboscq & Grassé 1934
  Genus Oxymonas Janicki 1915
  Genus Sauromonas Grassé & Hollande 1952
 Family Polymastigidae Bütschli 1884 [Polymastiginae Kirby 1931; Polymastigaceae; Streblomastigaceae; Streblomastigidae Kofoid & Swezy 1919]
  Genus ?Paranotila Cleveland 1966
  Genus ?Tubulimonoides Krishnamurthy & Sultana 1976
  Genus Blattamonas Treitli et al. 2018
  Genus Brachymonas (Grassé 1952) Treitli et al. 2018 non Hiraishi et al. 1995
  Genus Monocercomonoides Travis 1932
  Genus Polymastix Bütschli 1884 non Gruber 1884
  Genus Streblomastix Kofoid & Swezy 1920
 Family Pyrsonymphidae Grassé 1892 [Pyrsonymphaceae; Pyrsonymphinae Kirby 1937 nom. nud.; Dinenymphidae Grassé 1911; Dinenymphinae Cleveland et al. 1934; Dinenymphaceae]
  Genus Dinenympha Leidy 1877 [Pyrsonympha (Dinenympha) (Leidy 1877) Koidzumi 1921]
  Genus Pyrsonympha Leidy 1877 [Pyrsonema Kent 1881; Lophophora Comes 1910 non Coulter 1894 non Kraatz 1895 non Moeschler 1890]
 Family Saccinobaculidae Brugerolle & Lee 2002 ex Cavalier-Smith 2012 [Saccinobaculinae Cleveland et al. 1934]
  Genus Notila Cleveland 1950
  Genus Saccinobaculus Cleveland-Hall & Sanders & Collier 1934

References

Flagellates Metamonads Anaerobes
Oxymonad
https://en.wikipedia.org/wiki/Winnecke%20Catalogue%20of%20Double%20Stars
The Winnecke Catalogue of Double Stars is a list of seven "new" double stars published by the German astronomer August Winnecke in Astronomische Nachrichten in 1869. Winnecke later noted that three of the double stars he catalogued had been discovered earlier (30 Eridani, Bradley 757, and 44 Cygni). The stars are sometimes given Winnecke designations (e.g. Winnecke 4), sometimes abbreviated to WNC.

References

External links

Winnecke Objects from SEDS
A biography of August Winnecke from SEDS

Astronomical catalogues of stars Double stars
Winnecke Catalogue of Double Stars
https://en.wikipedia.org/wiki/Vredefort%20impact%20structure
The Vredefort impact structure is the largest verified impact structure on Earth. The crater, which has since been eroded away, has been estimated at across when it was formed. The remaining structure, comprising the deformed underlying bedrock, is located in the present-day Free State province of South Africa. It is named after the town of Vredefort, which is near its centre. The structure's central uplift is known as the Vredefort Dome. The impact structure was formed during the Paleoproterozoic Era, 2.023 billion (± 4 million) years ago. It is the second-oldest known impact structure on Earth, after Yarrabubba. In 2005, the Vredefort Dome was added to the list of UNESCO World Heritage Sites for its geologic interest.

Formation and structure

The asteroid that hit Vredefort is estimated to have been one of the largest ever to strike Earth since the Hadean Eon some four billion years ago, originally thought to have been approximately in diameter. As of 2022, the bolide was estimated at between in diameter and to have impacted with a vertical velocity of . The original impact structure is estimated to have had a diameter of at least , with the impact affecting the structure of the surrounding host rock in a circular region around in diameter. Other estimates have placed the original crater diameter closer to . The landscape has since been eroded to a depth of around since formation, obliterating the original crater. The remaining structure, the "Vredefort Dome", consists of a partial ring of hills in diameter, and is the remains of the central uplift created by the rebound of rock below the impact site after the collision. Estimates have placed the structure's age at 2.023 billion years (± 4 million years) or 2.019–2.020 billion years (± 2–3 million years), which places it in the Orosirian Period of the Paleoproterozoic Era. It is the second-oldest universally accepted impact structure on Earth.
In comparison, it is about 10% older than the Sudbury Basin impact (at 1.849 billion years), while the Yarrabubba impact structure is older than the Vredefort impact structure by about 0.2 billion years. Other purported older impact structures have either poorly constrained ages (the Dhala impact structure, India) or highly contentious impact evidence, as in the case of the circa 3.023-billion-year-old Maniitsoq structure, West Greenland, and the circa 2.4-billion-year-old Suavjärvi structure, Russia. Their classification as impact structures remains controversial and unsettled. The dome in the centre of the impact structure was originally thought to have been formed by a volcanic explosion, but in the mid-1990s evidence revealed it was the site of a huge bolide impact, as telltale shatter cones were discovered in the bed of the nearby Vaal River. This impact structure is one of the few multiple-ringed impact structures on Earth, although they are more common elsewhere in the Solar System. Perhaps the best-known example is Valhalla crater on Jupiter's moon Callisto; Earth's Moon has some as well. Geological processes, such as erosion and plate tectonics, have destroyed most multiple-ring impact structures on Earth. The impact distorted the Witwatersrand Basin, which was laid down over a period of 250 million years between 950 and 700 million years before the Vredefort impact. The overlying Ventersdorp lavas and the Transvaal Supergroup, which were laid down between 700 and 80 million years before the meteorite strike, were similarly distorted by the formation of the impact structure. The rocks form partial concentric rings around the impact structure's centre today, with the oldest, the Witwatersrand rocks, forming a semicircle from the centre. Since the Witwatersrand rocks consist of several layers of very hard, erosion-resistant sediments (e.g. quartzites and banded ironstones), they form the prominent arc of hills that can be seen to the northwest of the impact structure's centre in the satellite picture above. The Witwatersrand rocks are followed, in succession, by the Ventersdorp lavas at a distance of about from the centre, and by the Transvaal Supergroup, consisting of a narrow band of the Ghaap Dolomite rocks and the Pretoria Subgroup of rocks, which together form a band beyond that. From about halfway through the Pretoria Subgroup of rocks around the impact structure's centre, the order of the rocks is reversed. Moving outwards towards where the crater rim used to be, the Ghaap Dolomite group resurfaces at from the centre, followed by an arc of Ventersdorp lavas, beyond which, at between from the centre, the Witwatersrand rocks re-emerge to form an interrupted arc of outcrops today. The Johannesburg group is the most famous, because it was here that gold was discovered in 1886. It is thus possible that, had it not been for the Vredefort impact, this gold would never have been discovered. The centre of the Vredefort impact structure consists of a granite dome (where it is not covered by much younger rocks belonging to the Karoo Supergroup) which is an exposed part of the Kaapvaal craton, one of the oldest microcontinents, which formed on Earth 3.9 billion years ago. This central peak uplift, or dome, is typical of a complex impact structure, where the liquefied rocks splashed up in the wake of the meteor as it penetrated the surface.

Conservation

The Vredefort Dome World Heritage Site is currently subject to property development, and local owners have expressed concern regarding sewage dumping into the Vaal River and the impact structure. The granting of prospecting rights around the edges of the impact structure has led environmental interests to express fear of destructive mining.
Community

The Vredefort Dome in the centre of the impact structure is home to four towns: Parys, Vredefort, Koppies and Venterskroon. Parys is the largest and a tourist hub; both Vredefort and Koppies depend mainly on an agricultural economy. On 19 December 2011, a broadcasting licence was granted by ICASA to a community radio station to broadcast to the Afrikaans- and English-speaking members of the communities within the impact structure. The Afrikaans name Koepel Stereo (Dome Stereo) refers to the dome, and the station announces its broadcasts as KSFM. The station broadcasts on 94.9 MHz FM.

See also

List of impact structures on Earth
List of possible impact structures on Earth
Deniliquin multiple-ring feature

References

External links

Parys South Africa
Impact Cratering Research Group – University of the Witwatersrand
Earth Impact Database
Deep Impact – The Vredefort Dome
Satellite image of Vredefort impact structure from Google Maps
Impact Cratering: an overview of Mineralogical and Geochemical aspects – University of Vienna
Google Earth 3d .KMZ of 25 largest craters (requires Google Earth)

Extinction events Impact craters of South Africa Landforms of the Free State (province) Proterozoic impact craters World Heritage Sites in South Africa
Vredefort impact structure
https://en.wikipedia.org/wiki/Electric%20spark
An electric spark is an abrupt electrical discharge that occurs when a sufficiently high electric field creates an ionized, electrically conductive channel through a normally insulating medium, often air or another gas or gas mixture. Michael Faraday described this phenomenon as "the beautiful flash of light attending the discharge of common electricity". The rapid transition from a non-conducting to a conductive state produces a brief emission of light and a sharp crack or snapping sound. A spark is created when the applied electric field exceeds the dielectric breakdown strength of the intervening medium. For air, the breakdown strength is about 30 kV/cm at sea level. Experimentally, this figure varies with humidity, atmospheric pressure, the shape of the electrodes (needle and ground-plane, hemispherical, etc.) and the spacing between them, and even the type of waveform, whether sinusoidal or cosine-rectangular. In the early stages, free electrons in the gap (from cosmic rays or background radiation) are accelerated by the electric field, resulting in a Townsend avalanche. As they collide with air molecules, they create additional ions and newly freed electrons, which are also accelerated. At some point, thermal energy becomes a much greater source of ions. The exponentially increasing electrons and ions rapidly cause regions of the air in the gap to become electrically conductive in a process called dielectric breakdown. Once the gap breaks down, current flow is limited by the available charge (for an electrostatic discharge) or by the impedance of the external power supply. If the power supply continues to supply current, the spark will evolve into a continuous discharge called an electric arc. An electric spark can also occur within insulating liquids or solids, but with breakdown mechanisms different from those of sparks in gases. Sparks can be dangerous: they can cause fires and burn skin.
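As a rough illustration of the 30 kV/cm figure quoted above, the breakdown voltage of a uniform air gap can be estimated as field strength times gap length; this deliberately ignores the humidity, pressure and electrode-shape effects also noted above, and the function name is illustrative:

```python
def breakdown_voltage_kv(gap_cm, field_kv_per_cm=30.0):
    """Approximate sea-level breakdown voltage (kV) of a uniform air gap,
    using the ~30 kV/cm dielectric strength of air."""
    return gap_cm * field_kv_per_cm

# A 1 cm gap needs roughly 30 kV; a 1 mm gap roughly 3 kV.
print(breakdown_voltage_kv(1.0))  # -> 30.0
print(breakdown_voltage_kv(0.1))  # -> ~3.0
```

For very small gaps this linear estimate breaks down; Paschen's law (listed under See also) describes the full pressure-times-distance dependence.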
Lightning is an example of an electric spark in nature, while electric sparks, large or small, occur in or near many man-made objects, both by design and sometimes by accident.

History

In 1671, Leibniz discovered that sparks were associated with electrical phenomena. In 1708, Samuel Wall performed experiments with amber rubbed with cloth to produce sparks. In 1752, Thomas-François Dalibard, acting on an experiment proposed by Benjamin Franklin, arranged for a retired French dragoon named Coiffier in the village of Marly to collect lightning in a Leyden jar, thereby proving that lightning and electricity are the same. In Franklin's famous kite experiment, he successfully extracted sparks from a cloud during a thunderstorm.

Uses

Ignition sources

Electric sparks are used in the spark plugs of gasoline internal combustion engines to ignite fuel-air mixtures. The electric discharge in a spark plug occurs between an insulated central electrode and a grounded terminal on the base of the plug. The voltage for the spark is provided by an ignition coil or magneto that is connected to the spark plug by an insulated wire. Flame igniters use electric sparks to initiate combustion in some furnaces and gas stoves in place of a pilot flame. Auto reignition is a safety feature used in some flame igniters that senses the electrical conductivity of the flame to determine whether a burner flame is lit. This information is used to stop an ignition device from sparking after the flame is lit, or to restart the flame if it goes out.

Radio communications

A spark-gap transmitter uses an electric spark gap to generate radio-frequency electromagnetic radiation, and such devices served as transmitters for wireless communication. Spark-gap transmitters were widely used in the first three decades of radio, from 1887 to 1916. They were later supplanted by vacuum tube systems, and by 1940 they were no longer used for communication.
The wide use of spark-gap transmitters led to the nickname "sparks" for a ship's radio officer.

Metalworking

Electric sparks are used in several kinds of metalworking. Electric discharge machining (EDM), sometimes called spark machining, uses a spark discharge to remove material from a workpiece. Electrical discharge machining is used for hard metals or those that are difficult to machine with traditional techniques. Spark plasma sintering (SPS) is a sintering technique that passes a pulsed direct current through a conductive powder in a graphite die. SPS is faster than conventional hot isostatic pressing, where the heat is provided by external heating elements.

Chemical analysis

The light produced by electric sparks can be collected and used for a type of spectroscopy called spark emission spectroscopy. A high-energy pulsed laser can also be used to produce an electric spark. Laser-induced breakdown spectroscopy (LIBS) is a type of atomic emission spectroscopy that uses a high-pulse-energy laser to excite atoms in a sample; LIBS has also been called laser spark spectroscopy (LSS). Electric sparks can also be used to create ions for mass spectrometry. Spark discharge has also been applied in electrochemical sensing via the in-situ surface modification of disposable screen-printed carbon electrodes (SPEs) with various metal and carbon sources.

Hazards

Sparks can be hazardous to people, animals, and even inanimate objects. Electric sparks can ignite flammable materials, liquids, gases and vapors. Even inadvertent static discharges, or the small sparks that occur when switching on lights or other circuits, can be enough to ignite flammable vapors from sources such as gasoline, acetone or propane, or dust concentrations in the air, such as those found in flour mills or, more generally, in factories handling powders. Sparks often indicate the presence of a high voltage, or "potential field".
The higher the voltage, the farther a spark can jump across a gap, and with enough energy supplied this can lead to greater discharges such as a glow or an arc. When a person is charged with high-voltage static charges, or is in the presence of high-voltage electrical supplies, a spark can jump between a conductor and a person who is in close enough proximity, releasing much higher energies that can cause severe burns, shut down the heart and internal organs, or even develop into an arc flash. High-voltage sparks, even those with low energy such as from a stun gun, can overload the conductive pathways of the nervous system, causing involuntary muscle contractions, or interfere with vital nervous-system functions such as heart rhythm. When the energy is low enough, most of it may go simply into heating the air, so the spark never fully stabilizes into a glow or arc. However, sparks with very low energy still produce a "plasma tunnel" through the air, through which electricity can pass. This plasma is heated to temperatures often greater than the surface of the Sun, and can cause small, localized burns. Conductive liquids, gels or ointments are often used when applying electrodes to a person's body, preventing sparks from forming at the point of contact and damaging skin. Similarly, sparks can cause damage to metals and other conductors, ablating or pitting the surface, a phenomenon which is exploited in electric etching. Sparks also produce ozone which, in high enough concentrations, can cause respiratory discomfort or distress, itching, or tissue damage, and can be harmful to other materials such as certain plastics. See also Corona discharge Electrical breakdown Paschen's law Static electricity References External links Szikrakisülés (1)...(4) [Electric spark (1)...(4)]. Videos on the portal FizKapu. Electrical breakdown Plasma phenomena
Electric spark
Physics
1,537
375,665
https://en.wikipedia.org/wiki/Google%20%28verb%29
Owing to the dominance of the Google search engine, to google has become a transitive verb. The neologism commonly refers to searching for information on the World Wide Web, typically using the Google search engine. The American Dialect Society chose it as the "most useful word of 2002". It was added to the Oxford English Dictionary on June 15, 2006, and to the eleventh edition of the Merriam-Webster Collegiate Dictionary in July 2006. Etymology The first recorded usage of google was as a gerund, on July 8, 1998, by Google co-founder Larry Page himself, who wrote on a mailing list: "Have fun and keep googling!". Its earliest known use as an explicitly transitive verb on American television was in the "Help" episode of Buffy the Vampire Slayer (October 15, 2002), when Willow asked Buffy, "Have you googled her yet?". To prevent genericizing and potential loss of its trademark, Google has discouraged use of the word as a verb, particularly when used as a synonym for general web searching. On February 23, 2003, Google sent a cease and desist letter to Paul McFedries, creator of Word Spy, a website that tracks neologisms. In an article in The Washington Post, Frank Ahrens discussed the letter he received from a Google lawyer that demonstrated "appropriate" and "inappropriate" ways to use the verb "google". It was reported that, in response to this concern, lexicographers for the Merriam-Webster Collegiate Dictionary lowercased the actual entry of the word, google, while maintaining the capitalization of the search engine in their definition, "to use the Google search engine to seek online information" (a concern which did not deter the Oxford editors from preserving the history of both "cases"). On October 25, 2006, Google publicly requested that "You should please only use 'Google' when you're actually referring to Google Inc. and our services." The related adjective "ungoogleable" describes something that cannot be "googled", i.e.
it cannot be easily found using a web search engine, especially Google. In 2013, the Swedish Language Council attempted to include the Swedish version of the word in its list of new words, but Google objected to the definition not being specifically related to Google, and the council was forced to remove it immediately to avoid a legal confrontation with Google. See also grep Photoshop (verb), a similar neologism referring to digital photo editing References Google Verbs Internet terminology Internet search 1998 neologisms Internet properties established in 1998
Google (verb)
Technology
553
3,979,239
https://en.wikipedia.org/wiki/File%20Compare
In computing, fc (File Compare) is a command-line program in DOS, IBM OS/2 and Microsoft Windows operating systems that compares multiple files and outputs the differences between them. It is similar to the Unix commands comm, cmp and diff. History The fc command has been included in Microsoft operating systems since MS-DOS 2.11 (e.g. on the 1984/85 DEC Rainbow release) and is included in all versions of Microsoft Windows. fc has also been included in IBM OS/2 Version 2.0. DR DOS 6.0 includes an implementation of the command. The command is also available in FreeDOS. This implementation is licensed under the GPLv2+. Functionality fc can compare text files as well as binary files. The latest versions can compare ASCII or Unicode text. The results of comparisons are output to the standard output. The output of fc is intended primarily to be human-readable and may be difficult to use in other programs. See also Data comparison comp (command) List of DOS commands References Further reading External links fc | Microsoft Docs Open source FC implementation that comes with MS-DOS v2.0 DOS software External DOS commands File comparison tools Microsoft free software OS/2 commands Windows administration
File Compare
Technology
257
4,594,672
https://en.wikipedia.org/wiki/Computational%20problem
In theoretical computer science, a computational problem is one that asks for a solution in terms of an algorithm. For example, the problem of factoring, "Given a positive integer n, find a nontrivial prime factor of n.", is a computational problem that has a solution, as there are many known integer factorization algorithms. A computational problem can be viewed as a set of instances or cases together with a, possibly empty, set of solutions for every instance/case. The question then is whether there exists an algorithm that maps instances to solutions. For example, in the factoring problem, the instances are the integers n, and the solutions are prime numbers p that are the nontrivial prime factors of n. An example of a computational problem without a solution is the Halting problem. Computational problems are one of the main objects of study in theoretical computer science. One is often interested not only in the mere existence of an algorithm, but also in how efficient the algorithm can be. The field of computational complexity theory addresses such questions by determining the amount of resources (computational complexity) solving a given problem will require, and explaining why some problems are intractable or undecidable. Solvable computational problems belong to complexity classes that define broadly the resources (e.g. time, space/memory, energy, circuit depth) it takes to compute (solve) them with various abstract machines. Examples include the complexity class P, problems that consume polynomial time on deterministic classical machines; BPP, problems that consume polynomial time on probabilistic classical machines (e.g. computers with random number generators); and BQP, problems that consume polynomial time on probabilistic quantum machines. Both instances and solutions are represented by binary strings, namely elements of {0, 1}*. For example, natural numbers are usually represented as binary strings using binary encoding.
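The mapping from instances to solutions in the factoring example can be sketched in Python with naive trial division (an illustrative toy, not a practical factoring algorithm):

```python
def nontrivial_prime_factor(n: int):
    """Map an instance n to a solution: the smallest nontrivial prime
    factor of n, or None when n is prime and the solution set is empty."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d  # the smallest divisor greater than 1 is always prime
        d += 1
    return None


print(nontrivial_prime_factor(15))  # 3
print(nontrivial_prime_factor(13))  # None (13 is prime)
```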
This is important since the complexity is expressed as a function of the length of the input representation. Types Decision problem A decision problem is a computational problem where the answer for every instance is either yes or no. An example of a decision problem is primality testing: "Given a positive integer n, determine if n is prime." A decision problem is typically represented as the set of all instances for which the answer is yes. For example, primality testing can be represented as the infinite set L = {2, 3, 5, 7, 11, ...} Search problem In a search problem, the answers can be arbitrary strings. For example, factoring is a search problem where the instances are (string representations of) positive integers and the solutions are (string representations of) collections of primes. A search problem is represented as a relation consisting of all the instance-solution pairs, called a search relation. For example, factoring can be represented as the relation R = {(4, 2), (6, 2), (6, 3), (8, 2), (9, 3), (10, 2), (10, 5), ...} which consists of all pairs of numbers (n, p), where p is a nontrivial prime factor of n. Counting problem A counting problem asks for the number of solutions to a given search problem. For example, a counting problem associated with factoring is "Given a positive integer n, count the number of nontrivial prime factors of n." A counting problem can be represented by a function f from {0, 1}* to the nonnegative integers. For a search relation R, the counting problem associated with R is the function fR(x) = |{y: R(x, y)}|. Optimization problem An optimization problem asks for finding a "best possible" solution among the set of all possible solutions to a search problem. One example is the maximum independent set problem: "Given a graph G, find an independent set of G of maximum size." Optimization problems are represented by their objective function and their constraints.
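The decision, search and counting variants of the factoring example can be sketched on small instances (a toy Python illustration; the actual sets and relations are of course infinite):

```python
def is_prime(n: int) -> bool:
    """Decision problem: is n in L = {2, 3, 5, 7, 11, ...}?"""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))


def search_relation(n: int):
    """Search problem: all pairs (n, p) in R, where p is a
    nontrivial prime factor of n (p < n, so primes map to [])."""
    return [(n, p) for p in range(2, n) if n % p == 0 and is_prime(p)]


def count_solutions(n: int) -> int:
    """Counting problem f_R(n) = |{p : (n, p) in R}|."""
    return len(search_relation(n))


print(is_prime(7))          # True
print(search_relation(10))  # [(10, 2), (10, 5)]
print(count_solutions(10))  # 2
```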
Function problem In a function problem a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem, that is, it isn't just "yes" or "no". One of the most famous examples is the traveling salesman problem: "Given a list of cities and the distances between each pair of cities, find the shortest possible route that visits each city exactly once and returns to the origin city." It is an NP-hard problem in combinatorial optimization, important in operations research and theoretical computer science. Promise problem In computational complexity theory, it is usually implicitly assumed that any string in {0, 1}* represents an instance of the computational problem in question. However, sometimes not all strings {0, 1}* represent valid instances, and one specifies a proper subset of {0, 1}* as the set of "valid instances". Computational problems of this type are called promise problems. The following is an example of a (decision) promise problem: "Given a graph G, determine if every independent set in G has size at most 5, or G has an independent set of size at least 10." Here, the valid instances are those graphs whose maximum independent set size is either at most 5 or at least 10. Decision promise problems are usually represented as pairs of disjoint subsets (Lyes, Lno) of {0, 1}*. The valid instances are those in Lyes ∪ Lno. Lyes and Lno represent the instances whose answer is yes and no, respectively. Promise problems play an important role in several areas of computational complexity, including hardness of approximation, property testing, and interactive proof systems. See also Lateral computing, alternative approaches to solving problems computationally Model of computation Transcomputational problem Notes References . . . Theoretical computer science
Computational problem
Mathematics
1,193
46,808,694
https://en.wikipedia.org/wiki/TidalCycles
TidalCycles (also known as Tidal) is a live coding environment which is designed for improvising and composing music. Technically, it is a domain-specific language embedded in the functional programming language Haskell, and is focused on generating and manipulating audiovisual patterns. It was originally designed for heavily percussive and polyrhythmic grid-based music, but it now uses a flexible, functional reactive representation for patterns based on rational time. Tidal may therefore be applied to a wide range of musical styles, although its cyclic approach to time means that it affords use in repetitive styles such as algorave. Background TidalCycles was created by Alex McLean, who also coined the term algorave. Tidal's representation of rhythm is based on metrical cycles, inspired by Indian classical music, and supports polyrhythmic and polymetric structures. The programme doesn't produce sound itself, but drives the SuperCollider sound environment through the SuperDirt framework, or sends messages via MIDI or Open Sound Control. Tidal is also used widely in academic research, including representation in music AI, as a language in network music, and in electronic literature. Tidal is widely used at algorave algorithmic dance music events, and on high-profile music releases. It has been featured on BBC Radio 3's New Music Show. Since January 2022, an official port of Tidal's pattern engine has developed into the web-based live coding environment Strudel, created by Felix Roos and Alex McLean.
Artists using it Richard Devine Beatrice Dillon Lil Data digital selves MIRI KAT Daniel M Karlsson 65daysofstatic Benjamin Wynn Hsien-Yu Cheng References External links Official website Digital art Computer programming Live coding Algorave Functional programming Music technology 2009 establishments
TidalCycles
Technology,Engineering
401
25,440,338
https://en.wikipedia.org/wiki/Juncus%20gerardii
Juncus gerardii, commonly known as blackgrass, black needle rush or saltmarsh rush, is a perennial flowering plant in the rush family Juncaceae. Description Juncus gerardii forms loose swards of erect tufts from a dense and far-reaching matrix of black rhizomes. Stems are slender and wiry, growing to 25–75 cm tall. Leaves are narrow, channelled and with short auricles. Flowers are borne towards the tips of the branches, with a short primary bract. Tepals are dark brown and held around black capsules, which can give the capsules a striped appearance. Distribution Habitat Juncus gerardii occurs on coastal sites and intertidal zones, in salt marshes, wetland margins, disturbed habitats and wastelands. It tends to establish just above the high-tide line, as it prefers saline, waterlogged soils, but is intolerant of flooding. Natural global range Juncus gerardii is native to Europe (Mediterranean to Mongolia) and North America. In North America, it has spread to some unwanted locations, such as the Great Lakes region, where it causes several adverse environmental impacts, such as threatening the survival of native vegetation and hosting insects that can carry diseases. Introduced range Juncus gerardii has been introduced to a number of countries, including Greenland, New Zealand, Australia (Tasmania and Victoria), and Asia (Primorye and Magadan). New Zealand range Juncus gerardii was accidentally introduced to New Zealand, becoming naturalised in 1891. It is considered invasive, having been recorded in coastal wetlands and pastures, where it can form large swards that exclude native vegetation and reduce grazing potential. J. gerardii has also been recorded on saline soils as far inland as Alexandra, as well as on non-saline soils in Invercargill. Phenology Flowers and fruits from late spring to summer. Seedling recruitment occurs in exposed habitats where there is little light competition.
However, this species mainly spreads through asexual reproduction, forming clonal populations from the spreading rhizomes. References gerardii Halophytes Salt marsh plants Flora of Northern America Plants described in 1809
Juncus gerardii
Chemistry
438
30,559,753
https://en.wikipedia.org/wiki/Astronomical%20Society%20of%20India
The Astronomical Society of India (ASI) is an Indian society of professional astronomers and other professionals from related disciplines. It was founded in 1972, with Vainu Bappu being the founder President of the Society, and as of 2010 has a membership of approximately 1000. Its registered office is at the Astronomy Department, Osmania University, Hyderabad, India. Its primary objective is the promotion of Astronomy and related branches of science. It organises meetings, supports and tries to popularise Astronomy and related subjects and publishes the Bulletin of the Astronomical Society of India. Prof. Dipankar Banerjee, Director of Aryabhatta Research Institute of Observational Sciences, Nainital, is the Society's President. The Society makes a series of awards, the most prestigious of which is the Prof. M. K. Vainu Bappu Gold Medal awarded once every two years to "honour exceptional contributions to Astronomy and Astrophysics by young scientists anywhere in the world." Previous award winners include: 1986 Yasuo Fukui 1988 George Efstathiou and Shrinivas Kulkarni 1990 D. J. Saikia and Dipankar Bhattacharya 1992 Pawan Kumar 1994 Matthew Colless 1996 Sarbani Basu 1998 Peter Martinez 2000 Biswajit Paul and Alycia J. Weinberger 2002 Brian P. Schmidt 2004 R. Srianand and Ray Jayawardhana 2006 Banibrata Mukhopadhyay 2008 Niayesh Afshordi and Nissim Kanekar 2010 Marta Burgay and Parampreet Singh The Society also runs two prestigious lectures: the Modali Endowment Lecture and the R. C. Gupta Endowment Lecture. Previous Organisation A previous organisation of the same name existed between July 1910 and circa 1922. It was founded to promote astronomy following an appearance of Halley’s Comet. Initially there was strong support for such a society and by 30 September 1911 there were 239 members (192 original and a net 47 added during the first session). The society was run along similar lines to the British Astronomical Association. 
Sections were formed for general observation, meteors, the Earth’s Moon and variable stars; experts were appointed to advise on instrumental matters and photography, and a library was established. The society was based in Calcutta and nearby Barrackpore. Sidney Gerald Burrard and John Evershed were Vice Presidents. However, the organisation faded into obscurity following the departure from India of one of its principal members, Herbert Gerard Tomkins. Publications Bulletin of the Astronomical Society of India See also Akash Mitra Mandal List of astronomical societies References External links Astronomy organizations Scientific organizations established in 1972 Scientific organisations based in India Organisations based in Hyderabad, India 1972 establishments in Andhra Pradesh Astronomy in India
Astronomical Society of India
Astronomy
552
9,132,444
https://en.wikipedia.org/wiki/Rauhut%E2%80%93Currier%20reaction
The Rauhut–Currier reaction, also called the vinylogous Morita–Baylis–Hillman reaction, is an organic reaction describing (in its original scope) the dimerization or isomerization of electron-deficient alkenes such as enones by action of an organophosphine of the type R3P. In a more general description, the RC reaction is any coupling of one active alkene / latent enolate to a second Michael acceptor, creating a new C–C bond between the alpha-position of one activated alkene and the beta-position of a second alkene under the influence of a nucleophilic catalyst. The reaction mechanism is essentially that of the related and better-known Baylis–Hillman reaction (DABCO not phosphine, carbonyl not enone), but the Rauhut–Currier reaction actually predates it by several years. In comparison to the MBH reaction, the RC reaction suffers from limited substrate reactivity and poor regioselectivity. The original 1963 reaction described the dimerization of ethyl acrylate to the ethyl diester of 2-methylene-glutaric acid with tributylphosphine in acetonitrile: This reaction was also found to work for acrylonitrile. RC cross-couplings are known but suffer from lack of selectivity. Amines such as DABCO can also act as catalysts. The reactivity is improved in intramolecular RC reactions, for example in the isomerization of di-enones to form cyclopentenes: A similar reaction by asymmetric synthesis, organocatalyzed by a protected cysteine and potassium tert-butoxide, afforded a cyclohexene with 95% enantiomeric excess: In this reaction the phosphine is replaced by the thiol group of cysteine, but the reaction is otherwise the same. References Carbon-carbon bond forming reactions Name reactions
Rauhut–Currier reaction
Chemistry
411
68,455,602
https://en.wikipedia.org/wiki/KELT-6b
KELT-6b is an exoplanet orbiting the F-type subgiant KELT-6 approximately 791 light years away in the northern constellation Coma Berenices. It was discovered in 2013 using the transit method, and was announced in 2014. Discovery In 2014, the planet's parameters were measured. The paper states that KELT-6 has just entered the subgiant phase, and is no longer on the main sequence. In 2015, an additional planet, c, was discovered using the radial velocity method. Properties KELT-6b is a hot Saturn with 44.2% of Jupiter's mass, but it has been bloated to 1.3 times Jupiter's radius. Its density is half of Saturn's, and it has an equilibrium temperature of 1,313 K, but a hotter dayside temperature of 1,531 K. References Coma Berenices Exoplanets discovered in 2013 Exoplanets discovered by KELT
KELT-6b
Astronomy
201
67,102,966
https://en.wikipedia.org/wiki/Proprietary%20drug
Proprietary drugs are chemicals used for medicinal purposes which are formulated or manufactured under a name protected from competition through trademark or patent. The invented drug is usually still considered proprietary even after its patent has expired. When a patent expires, generic drugs may be developed and released legally. Some international and national governmental organizations have set up laws to enforce intellectual property rights protecting proprietary drugs, while others emphasize the importance of public health over such legal protections. Proprietary drugs affect the world in various aspects including medicine, public health and the economy. Not all proprietary drugs have generic replacements available. Biologics are often produced by in vivo preparation and direct extraction of substances from living organisms. Pharmaceutical companies are not extensively involved in developing ready-to-sell generic biologics due to the complexity of manufacture and hurdles in extraction processes. Besides vaccines, these chemicals of endogenous origin are prescribed to patients with severe conditions, such as asthma, rheumatoid arthritis, or cancer. Patients taking a particular brand of biologics cannot interchange between one brand and another without risking exposure to more side effects and/or suboptimal treatment. It is believed that generic biopharmaceutical products will not be released in the near future until all technical difficulties are overcome. The table below shows some examples of pharmaceutical companies and their past/current proprietary medications: Terminology Brand name drugs Broadly defined as drugs that are marketed under trade names and have patents, which can be a synonym of proprietary drugs in daily use. Strictly speaking, every drug with a trade name is a brand name drug, such as Panadol, a GSK-branded paracetamol.
Generic drugs Generic drugs are drugs that have the same active ingredient as a patent-expired drug, and are virtually bio-equivalent. The official names are often used to market these drugs, which are then called unbranded generic drugs, such as Panamax, a generic form of paracetamol. Off-patent drugs A term specifically used to describe past proprietary drugs by referring to their off-patent status. Regulations To support scientific investigation and protect intellectual property, patents are granted to companies and individuals who invented the drug. Most jurisdictions in the world have established corresponding legal frameworks. Global and regional governmental organizations differ in the advancement of, and approaches taken by, their intellectual property rights protection laws. Below are some examples for comparison: World Trade Organization (WTO) TRIPS Agreement The Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS Agreement), set up in 1994, established a standard on intellectual property rights; proprietary drugs, as a type of pharmaceutical and scientific invention, are covered by this agreement. Basic principles such as the minimal duration of a patent and part of the exclusive rights of patent owners are included by WTO member states in their respective national regulations. Developed countries were required to comply with the TRIPS Agreement within 5 years of it taking effect, and developing countries within 10 years. Doha Declaration In an attempt to alleviate the worldwide divide in accessibility of medical resources, members of the WTO endorsed the Doha Declaration on the TRIPS Agreement and Public Health in 2001. The basis of this Declaration is that "the TRIPS Agreement does not and should not prevent Members from taking measures to protect public health".
It allowed the participating members to set aside the restrictions of proprietary medicine patents when they are controlling a significant public health crisis, namely human immunodeficiency virus (HIV), malaria and tuberculosis. Thus, affordable generic medicine can be provided for the populations of developing countries in emergency situations. United States In the United States, proprietary drugs are associated with two statuses: patent and exclusivity. Patents are managed by the United States Patent and Trademark Office, granting inventors of new drugs rights for 20 years. Patenting is open to all drugs, regardless of their research or commercialization status. To enjoy the benefits brought by patenting, pharmaceutical companies are obliged to disclose all research data on the drug to the public for further progression. Exclusivity, given by the U.S. Food and Drug Administration (FDA), means a period of time in which no other competitor drugs can be approved. Commercialized and clinically used drugs are the targets of exclusivity. The length of exclusivity depends on the nature of the application, and ranges from one to seven years. In practice, exclusivity is granted for proprietary drugs that have been granted patents, but it is not mandatory. Legally, generic counterparts have to wait up to two decades for patent expiration before selling a copy. This system is said to aim for a balance between gaining public access to generic drugs and encouraging drug research and development. Litigation Despite the US having a legal system for drug patenting, litigation has taken place. In the past, it was common for drug manufacturers to challenge the validity of patents. In 2018, Mylan attempted to revoke the patent of Symbicort, owned by AstraZeneca, through the court. Now, pharmaceutical companies suing over deliberately infringing generic drugs has become more prominent.
AstraZeneca then took follow-up action against Mylan for premature submission of an Abbreviated New Drug Application (ANDA) for generic Symbicort and won the lawsuit. The introduction of biopharmaceuticals and the subsequent establishment of new drug laws may also bring more litigation. India According to patent law in India, drugs have been patentable since 2005. Registering a drug for patents in India is more limiting than in developed countries. Since India was removed from the least developed member states list of the WTO and is therefore no longer eligible for the waiver, it has modified its patent law to satisfy the TRIPS Agreement with its intellectual property rights legal system. An Indian patent lasts for 20 years. To ensure the public interest, a compulsory license can be issued by the government if a pharmaceutical company is suspected of violating public health principles. Litigation Introducing new drug patenting regulations after 35 years could possibly lead to today's disputes, such as the Novartis v. Union of India incident. Generic Gleevec, a formulation that can substitute for Glivec, had been distributed in the local market since 1993. Novartis filed a patent for Glivec in 1997. An exclusive marketing right was also granted to Novartis in 2003 and the application was approved in 2005. In 2013, the Supreme Court of India upheld the rejection of the patent application of Glivec by Novartis, ending the 10-year battle between the proprietary drug tycoon and the local patent law. Since 1993, Novartis had registered patents for Glivec and its active ingredients worldwide without defeat. However, the Supreme Court of India rejected the patent registration of Glivec in 2006 according to the Court's interpretation of the patent law and the TRIPS agreement. By then, local generic drugs in India were protected. This incident is referred to as a challenge to intellectual property laws.
Patent cliff Patent cliff refers to a dramatic dip in the revenue of a merchandise upon patent expiry. It is a prominent phenomenon in the proprietary drug industry due to the vast gap in prices between the proprietary drug and the generic drug. Since 2010, numerous pharmaceutical blockbusters have started to become off-patent. As seen in the figure below, the top five off-patent proprietary drugs before 2017 have a combined lifetime sale of around US$588.4 billion, which is enormous enough to surpass the GDP of the bottom 5% of countries in 2020. Figure 1: Five best-selling proprietary drugs which lost their patents before 2017 A proprietary drug is a substantial business protected by its respective patent. Proprietary drugs are usually sold at a higher price, to compensate for the clinical trial cost and sometimes for the manufacturing of new technology. For example, a widely used proprietary drug is on average 18 times more expensive than a common generic drug. Lyrica, a recently off-patent painkiller for the nervous system, had sales of US$5 billion in 2019, out of the US$51.8 billion annual sales of the corresponding company. However, once the proprietary drug becomes off-patent, there will soon be immense competition from the generic drugs produced by its business rivals. As cheaper pharmaceutical alternatives are launched, the surge in supply disrupts the market's supply-and-demand status. The declining dependence on the original proprietary drug causes its sales to decrease. On top of that, the original company usually tunes down its prices to improve competitiveness, resulting in a significant drop in revenue of the proprietary drug. Below is a graph representing the yearly revenue of Lipitor, a proprietary drug which lost its patent in 2011. As can be seen, a significant patent cliff happened from 2011 to 2012 (a 58.8% drop in yearly revenue), very possibly due to its newly off-patent status.
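The drop described above is a simple year-over-year percentage change. The figures in this sketch are hypothetical round numbers chosen only to reproduce a 58.8% drop; they are not Pfizer's actual reported revenue.

```python
def yoy_drop_pct(prev_revenue: float, curr_revenue: float) -> float:
    """Year-over-year percentage drop in revenue."""
    return (prev_revenue - curr_revenue) / prev_revenue * 100


# Hypothetical pre- and post-expiry annual revenues (million USD):
before_cliff = 9600.0   # last full year of patent protection
after_cliff = 3955.2    # first full year facing generic competition

print(round(yoy_drop_pct(before_cliff, after_cliff), 1))  # 58.8
```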
Figure 2: The yearly revenue of Lipitor by Pfizer (million US dollars) from 2004 to 2019 Benefits Promoting influx of income to the pharmaceutical research industry The proprietary drug market is protected by its patent. As a result, this market exclusivity allows proprietary drugs to be highly profitable and commercially successful. Usually, pharmaceutical research is a lengthy, highly demanding, rarely successful, costly and risky investment, often regarded as unattractive in the economic world. However, once a successful experimental drug candidate is registered as a proprietary drug product, the patent legally ensures long-term dominance in an exclusive market free of imitative generic drugs, generating a stable and considerable net income to cover the cost. The huge earnings of the proprietary drug can circulate back to fund future medical research, providing more resources and manpower for the research and development of other drug candidates, as well as attracting new investment to the pharmaceutical research industry due to its exclusive market potential. They encourage efforts on biopharmaceutical innovations and newborn medical breakthroughs. Guiding future medical progress through referencing of product knowledge As a requirement for successful patent registration, detailed pharmaceutical formulations of the proprietary drug are disclosed in its patent registration application, which promotes the spillover of research efforts across the medical world. In order to inspect the safety and efficacy of the proprietary drug candidate, pharmaceutical companies need to list all clinical trial data and formulation methods in as much detail as possible to prove the drug candidate's validity to the patent registration committee. Once the patent is awarded, these data will later be published in medical literature and public domains as common knowledge.
Researchers can capitalise on previous successes and build their own projects on top of the existing data. This cooperation helps uncover more unknown drug candidates without repeating earlier work, speeding up future medical advancement. Criticisms Hindering equitable access to medicine According to the World Health Organization (WHO), equitable access to medicines refers to an affordable and reasonable ability for patients to obtain the drugs they need to achieve health. WHO member states are expected to fulfil their moral responsibility to improve the delivery of, and access to, needed drugs. However, the monopolisation of some expensive proprietary drugs hinders poor patients' access to their best available medications, leading to suboptimal treatment of disease and a lower standard of health for these patients. This phenomenon is especially prominent in underdeveloped countries, which usually have a large proportion of underprivileged citizens. Some proprietary drugs (mainly speciality drugs) are criticised for price-gouging commercial tactics. To illustrate, the world's most expensive drug, Zolgensma, costs over US$2.1 million per course of treatment, which is generally considered unaffordable. Since Zolgensma is the only approved drug for treating spinal muscular atrophy in childhood, patients who cannot afford it will be physically disabled for the rest of their lives, creating inequity among patients of varying financial capacity. Abuse of the patent extension system Under the TRIPS Agreement, the patent term of a proprietary drug usually lasts for 20 years from the filing date. After that, approved generic drugs can enter the market legally in fair competition. However, in order to prolong their dominance of the market, pharmaceutical manufacturers (especially big pharma) may apply for patent extensions, or even new patent registrations, on various grounds.
These tactics include modifying the formulation or dosage form, or manoeuvring through the legal system. To illustrate, in 2018 alone AbbVie, a pharmaceutical giant, filed 247 patent extension applications in the USA seeking to extend the exclusivity of its proprietary drugs by up to 39 years; 137 of these applications succeeded in extending the patent. Abuse of the patent extension system leads to patent terms much longer than those stated in both local regulations and the TRIPS Agreement, providing a long period of competition-free market for the drug. It creates an unfair competitive environment in the pharmaceutical market: since generic drug companies are excluded from that particular market, they cannot release new pharmaceutical products for public use in the same field, resulting in an enduring monopolisation of the proprietary drug market by big pharma companies that are already stockpiling proprietary drugs. See also Doha Declaration Medicines Patent Pool Generic drug Generic brand Intellectual property Novartis v. Union of India & Others Patent Pharmaceutical industry Trademark TRIPS Agreement References Drugs
https://en.wikipedia.org/wiki/Mistral%20G-360-TS
The Mistral G-360-TS is a Swiss aircraft engine, designed and produced by Mistral Engines of Geneva for use in light aircraft. By March 2018 the engine was no longer advertised on the company website and seems to be out of production. Design and development The engine is a three-rotor, 3X3X displacement, liquid-cooled, gasoline Wankel engine design, with a mechanical gearbox reduction drive with a reduction ratio of 2.82:1. It employs dual electronic ignition systems and produces at 2250 rpm. Specifications (G-360-TS) See also References External links Mistral aircraft engines Pistonless rotary engine
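The reduction-drive arithmetic can be sketched directly. The convention assumed here is that a 2.82:1 ratio turns the output (propeller) shaft at 1/2.82 of crankshaft speed; the example crankshaft speed is purely hypothetical:

```python
def prop_rpm(engine_rpm: float, ratio: float = 2.82) -> float:
    """Output (propeller) shaft speed behind a reduction drive of the
    given ratio, assuming output = input / ratio."""
    return engine_rpm / ratio

# Hypothetical crankshaft speed, chosen only to illustrate the ratio:
print(prop_rpm(6345.0))  # about 2250 rpm at the propeller
```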
https://en.wikipedia.org/wiki/Design%20history%20file
A design history file is a compilation of documentation that describes the design history of a finished medical device. The design history file, or DHF, is part of regulation introduced in 1990 when the U.S. Congress passed the Safe Medical Devices Act, which established new standards for medical devices that can cause or contribute to the death, serious illness, or injury of a patient. Prior to this legislation, U.S. Food and Drug Administration (FDA) auditors were limited to examining the production and quality control records of the device. Requirements The regulation requires medical device manufacturers of Class II and Class III devices to implement design controls. These design controls consist of a development and control plan used to manage the development of a new product, and a design history file where these activities are documented. These controls are specifically intended to manage a medical device company's new product development activities. Research and development processes aimed at developing new underlying technologies are not subject to these regulations. The requirements for a DHF are documented in FDA regulation 21 CFR 820. Design controls Each manufacturer of either a class II or class III medical device (as well as a select group of class I devices) needs to establish and document procedures on the design and design requirements. These design controls include: Design input - Design inputs are typically the initial requirements that describe the medical device to be produced. Design output - Design outputs are the results of the design and engineering efforts. These are normally the final specifications for the device, including the manufacturing process and the incoming, in-process and finished-device inspection, measurement or test methods and criteria. The outputs are normally documented in models, drawings, engineering analysis and other documents. The output needs to be directly traceable to the input requirements.
Design verification and validation should demonstrate that the final output specifications conform to the input requirements and meet user needs and intended uses. Design review - The design review is a formal review of the medical device design by representatives of each design function participating in the design efforts as well as other interested parties (e.g. marketing, sales, manufacturing engineering, etc.). The design review must be documented in the DHF and include the review date, participants, design version/revision reviewed and review results. Design verification - Design verification is the process that confirms that the design output conforms to the design input. Design verification should demonstrate that the specifications are the correct specifications for the design. Design verification must be documented in the DHF and include the verification date, participants, design version/revision verified, verification method and verification results. Process validation - Process validation is the process in which the device design is validated using initial/low-volume production processes. The purpose of process validation is to confirm that the design functions according to design inputs when produced using normal production processes rather than prototype processes. The process validation must be documented in the DHF. Design validation - Each manufacturer shall establish and maintain procedures for validating the device design. Design validation shall be performed under defined operating conditions on initial production units, lots, or batches, or their equivalents. Design validation shall ensure that devices conform to defined user needs and intended uses and shall include testing of production units under actual or simulated use conditions. Design validation shall include software validation and risk analysis, where appropriate.
The results of the design validation, including identification of the design, method(s), the date, and the individual(s) performing the validation, shall be documented in the DHF. Design transfer - Design transfer is the process in which the device design is translated into production, distribution, and installation specifications. Design changes - Design change control is the process in which design changes are identified and documented. It is also known as engineering change or enterprise change control. Design history file - The DHF is a formal document that is prepared for each medical device. The DHF can be either a collection of the actual documents generated in the product development (PD) process or an index of documents and their storage location. Design and development files Sub-clause 7.3.10 of ISO 13485:2016 requires a manufacturer of a medical device to maintain (and control) a design and development file to document the design history of a medical device. This file shall also contain records of changes in design and development (per device type or family). It might contain, e.g., a design and development plan or test reports, and is thus comparable to the DHF of the FDA regulations. Similarly, Annex II §3.1 of the EU medical device regulation requires, as part of the technical documentation, information that allows the design stages applied to the device to be understood. See also Device Master Record Medical equipment management Technical file References External links CFR Title 21 Database Medical equipment Regulation of medical devices Regulation in the United States
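The "index of documents" form of the DHF described above lends itself to a simple automated traceability check of outputs against inputs. A hypothetical sketch, in which every identifier and requirement text is invented for illustration:

```python
# Hypothetical DHF index with traceability links from design outputs
# back to design inputs. All IDs and requirement texts are invented.
design_inputs = {
    "DI-1": "Pump shall deliver 0.1-10 mL/h",
    "DI-2": "Battery shall last at least 8 h",
}
design_outputs = {
    "DO-1": {"spec": "Flow accuracy +/-5%", "traces_to": ["DI-1"]},
    "DO-2": {"spec": "2200 mAh battery pack", "traces_to": ["DI-2"]},
}

def untraced_outputs(outputs, inputs):
    """Outputs that cannot be traced back to any known design input."""
    return [oid for oid, rec in outputs.items()
            if not any(t in inputs for t in rec["traces_to"])]

# An empty list means every output is directly traceable to an input,
# which is what the design-output requirement above asks for.
print(untraced_outputs(design_outputs, design_inputs))  # -> []
```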
https://en.wikipedia.org/wiki/Hiemalora
Hiemalora is a fossil of the Ediacaran biota, reaching around 3 cm in diameter, which superficially resembles a sea anemone. The genus has a sack-like body with faint radiating lines originally interpreted as tentacles, but discovery of a frond-like structure seemingly attached to some Hiemalora specimens has added weight to a competing interpretation: that it represents the holdfast of a larger organism. In 2020, a new study was published that described nine different specimens from the Indreelva Member, Digermulen Peninsula, Finnmark (Arctic Norway). The specimens described in the paper show high degrees of variation between morphologies, even among specimens thought to be of the same species. Some of the representative fossils from that paper show either multiple Aspidella-like structures on the same specimen or a Primocandelabrum-like cone visible in one of the fossils. All of the fossils in the publication were determined to most likely represent the species Hiemalora stellaris; however, one of the more poorly preserved specimens (D18-50) is thought to represent Hiemalora pleiomorphus, although this specimen does not show parallel ridges running along its poorly preserved central disc. One representative of H. stellaris might have been a holdfast with a Primocandelabrum frond attached to it, which may further support the theory of Hiemalora being a holdfast for Primocandelabrum. This interpretation would stand against its original classification in the medusoid Cnidaria; it would also consign a once-popular hypothesis placing Hiemalora in the chondrophores, on the basis of its tentacle structure, to the dustbin.
Studies testing the feasibility of the tentacle interpretation investigated the possibility that such fragile tentacles could be preserved, and concluded that it would be very improbable, especially as many Hiemalora-bearing beds also contain fossils such as Cyclomedusa, but do not preserve tentacles on these organisms. Hiemalora has been identified in a wide range of facies and locations globally. Etymology The genus was originally named Pinegia, but was renamed two years later when it was realised that a genus of Permian insect already bore the name. The revised name comes from Latin hiemalis ora, "winter coast". See also List of Ediacaran genera References External links Image Ediacaran life Incertae sedis Ediacaran Newfoundland and Labrador Fossil taxa described in 1982
https://en.wikipedia.org/wiki/C23H21NO
The molecular formula C23H21NO (molar mass: 327.42 g/mol, exact mass: 327.1623 u) may refer to: JWH-015 JWH-073 JWH-120 Molecular formulas
https://en.wikipedia.org/wiki/Global%20Change%20Biology
Global Change Biology is a biweekly peer-reviewed scientific journal covering research on the interface between biological systems and all aspects of environmental change that affect a substantial part of the globe including climate change, global warming, land use change, invasive species, urbanization, wildfire, and greenhouse gases. The editor-in-chief is Stephen P. Long, environmental plant physiologist, Fellow of the Royal Society and member of the National Academy of Sciences (University of Illinois and Lancaster University). This journal has a sister journal: GCB Bioenergy: Bioproducts for a Sustainable Bioeconomy. References External links Ecology journals Environmental science journals English-language journals Wiley-Blackwell academic journals Academic journals established in 1995 Biweekly journals Climate change journals
https://en.wikipedia.org/wiki/Butterfly%20effect
In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state. The term is closely associated with the work of the mathematician and meteorologist Edward Norton Lorenz. He noted that the butterfly effect is derived from the example of the details of a tornado (the exact time of formation, the exact path taken) being influenced by minor perturbations such as a distant butterfly flapping its wings several weeks earlier. Lorenz originally used a seagull causing a storm but was persuaded to make it more poetic with the use of a butterfly and tornado by 1972. He discovered the effect when he observed runs of his weather model with initial condition data that were rounded in a seemingly inconsequential manner. He noted that the weather model would fail to reproduce the results of runs with the unrounded initial condition data. A very small change in initial conditions had created a significantly different outcome. The idea that small causes may have large effects in weather was earlier acknowledged by the French mathematician and physicist Henri Poincaré. The American mathematician and philosopher Norbert Wiener also contributed to this theory. Lorenz's work placed the concept of instability of the Earth's atmosphere onto a quantitative base and linked the concept of instability to the properties of large classes of dynamic systems which are undergoing nonlinear dynamics and deterministic chaos. The concept of the butterfly effect has since been used outside the context of weather science as a broad term for any situation where a small change is supposed to be the cause of larger consequences. History In The Vocation of Man (1800), Johann Gottlieb Fichte says "you could not remove a single grain of sand from its place without thereby ... changing something throughout all parts of the immeasurable whole". 
Chaos theory and the sensitive dependence on initial conditions were described in numerous forms of literature. This is evidenced by the case of the three-body problem by Poincaré in 1890. He later proposed that such phenomena could be common, for example, in meteorology. In 1898, Jacques Hadamard noted general divergence of trajectories in spaces of negative curvature. Pierre Duhem discussed the possible general significance of this in 1908. In 1950, Alan Turing noted: "The displacement of a single electron by a billionth of a centimetre at one moment might make the difference between a man being killed by an avalanche a year later, or escaping." The idea that the death of one butterfly could eventually have a far-reaching ripple effect on subsequent historical events made its earliest known appearance in "A Sound of Thunder", a 1952 short story by Ray Bradbury. "A Sound of Thunder" features time travel. More precisely, though, almost the exact idea and the exact phrasing —of a tiny insect's wing affecting the entire atmosphere's winds— was published in a children's book which became extremely successful and well-known globally in 1962, the year before Lorenz published: In 1961, Lorenz was running a numerical computer model to redo a weather prediction from the middle of the previous run as a shortcut. He entered the initial condition 0.506 from the printout instead of entering the full precision 0.506127 value. The result was a completely different weather scenario. Lorenz wrote: In 1963, Lorenz published a theoretical study of this effect in a highly cited, seminal paper called Deterministic Nonperiodic Flow (the calculations were performed on a Royal McBee LGP-30 computer). Elsewhere he stated: Following proposals from colleagues, in later speeches and papers, Lorenz used the more poetic butterfly. 
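Lorenz's rounding experiment described above can be mimicked with a far simpler chaotic system. A sketch using the logistic map x → 4x(1 − x) as a stand-in for his weather model (the choice of map and the step count are assumptions made purely for illustration; Lorenz actually used a 12-variable convection model):

```python
# Toy re-creation of Lorenz's rounding experiment. The logistic map
# x -> 4x(1 - x) stands in for his weather model (an assumption for
# illustration only; Lorenz used a 12-variable convection model).
def diverged(x0, y0, steps=60, threshold=0.1):
    """True if runs started at x0 and y0 ever separate by > threshold."""
    x, y = x0, y0
    for _ in range(steps):
        x, y = 4 * x * (1 - x), 4 * y * (1 - y)
        if abs(x - y) > threshold:
            return True
    return False

# Full-precision versus printout-rounded initial condition:
print(diverged(0.506127, 0.506))  # -> True: the two runs part company
```

The tiny initial discrepancy of about 1.3 × 10⁻⁴ roughly doubles each step, so within a few dozen iterations the two runs bear no resemblance to each other.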
According to Lorenz, when he failed to provide a title for a talk he was to present at the 139th meeting of the American Association for the Advancement of Science in 1972, Philip Merilees concocted Does the flap of a butterfly's wings in Brazil set off a tornado in Texas? as a title. Although a butterfly flapping its wings has remained constant in the expression of this concept, the location of the butterfly, the consequences, and the location of the consequences have varied widely. The phrase refers to the effect of a butterfly's wings creating tiny changes in the atmosphere that may ultimately alter the path of a tornado or delay, accelerate, or even prevent the occurrence of a tornado in another location. The butterfly does not power or directly create the tornado, but the term is intended to imply that the flap of the butterfly's wings can cause the tornado: in the sense that the flap of the wings is a part of the initial conditions of an interconnected complex web; one set of conditions leads to a tornado, while the other set of conditions doesn't. The flapping wing creates a small change in the initial condition of the system, which cascades to large-scale alterations of events (compare: domino effect). Had the butterfly not flapped its wings, the trajectory of the system might have been vastly different—but it's also equally possible that the set of conditions without the butterfly flapping its wings is the set that leads to a tornado. The butterfly effect presents an obvious challenge to prediction, since initial conditions for a system such as the weather can never be known to complete accuracy. This problem motivated the development of ensemble forecasting, in which a number of forecasts are made from perturbed initial conditions. Some scientists have since argued that the weather system is not as sensitive to initial conditions as previously believed. 
David Orrell argues that the major contributor to weather forecast error is model error, with sensitivity to initial conditions playing a relatively small role. Stephen Wolfram also notes that the Lorenz equations are highly simplified and do not contain terms that represent viscous effects; he believes that these terms would tend to damp out small perturbations. Recent studies using generalized Lorenz models that included additional dissipative terms and nonlinearity suggested that a larger heating parameter is required for the onset of chaos. While the "butterfly effect" is often explained as being synonymous with sensitive dependence on initial conditions of the kind described by Lorenz in his 1963 paper (and previously observed by Poincaré), the butterfly metaphor was originally applied to work he published in 1969 which took the idea a step further. Lorenz proposed a mathematical model for how tiny motions in the atmosphere scale up to affect larger systems. He found that the systems in that model could only be predicted up to a specific point in the future, and beyond that, reducing the error in the initial conditions would not increase the predictability (as long as the error is not zero). This demonstrated that a deterministic system could be "observationally indistinguishable" from a non-deterministic one in terms of predictability. Recent re-examinations of this paper suggest that it offered a significant challenge to the idea that our universe is deterministic, comparable to the challenges offered by quantum physics. In the book entitled The Essence of Chaos published in 1993, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." This feature is the same as sensitive dependence of solutions on initial conditions (SDIC).
In the same book, Lorenz used the activity of skiing to develop an idealized skiing model revealing the sensitivity of time-varying paths to initial positions. A predictability horizon is determined before the onset of SDIC. Illustrations [Figure: The butterfly effect in the Lorenz attractor, shown for time 0 ≤ t ≤ 30 and for the z coordinate. The panels show two segments of the three-dimensional evolution of two trajectories (one in blue, and the other in yellow) for the same period of time in the Lorenz attractor, starting at two initial points that differ by only 10−5 in the x-coordinate. Initially, the two trajectories seem coincident, as indicated by the small difference between the z coordinates of the blue and yellow trajectories, but for t > 23 the difference is as large as the value of the trajectory. The final position of the cones indicates that the two trajectories are no longer coincident at t = 30. An animation of the Lorenz attractor shows the continuous evolution.] Theory and mathematical definition Recurrence, the approximate return of a system toward its initial conditions, together with sensitive dependence on initial conditions, are the two main ingredients for chaotic motion. They have the practical consequence of making complex systems, such as the weather, difficult to predict past a certain time range (approximately a week in the case of weather) since it is impossible to measure the starting atmospheric conditions completely accurately. A dynamical system displays sensitive dependence on initial conditions if points arbitrarily close together separate over time at an exponential rate. The definition is not topological, but essentially metrical.
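The divergence of two nearby Lorenz trajectories can be reproduced numerically. A minimal pure-Python sketch with the classical parameters σ = 10, ρ = 28, β = 8/3 and a fixed-step RK4 integrator; the starting point (1, 1, 1) and step size are arbitrary choices for illustration:

```python
# Two Lorenz-63 trajectories whose initial x-coordinates differ by 1e-5.
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt):
    add = lambda p, k, h: tuple(pi + h * ki for pi, ki in zip(p, k))
    k1 = lorenz(s)
    k2 = lorenz(add(s, k1, dt / 2))
    k3 = lorenz(add(s, k2, dt / 2))
    k4 = lorenz(add(s, k3, dt))
    return tuple(pi + dt / 6 * (a + 2 * b + 2 * c + d)
                 for pi, a, b, c, d in zip(s, k1, k2, k3, k4))

def max_separation(t_end=30.0, dt=0.002, delta=1e-5):
    """Largest Euclidean distance reached between the two trajectories."""
    a, b = (1.0, 1.0, 1.0), (1.0 + delta, 1.0, 1.0)
    peak = delta
    for _ in range(int(t_end / dt)):
        a, b = rk4_step(a, dt), rk4_step(b, dt)
        d = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
        peak = max(peak, d)
    return peak

# The tiny initial offset grows until it is comparable to the size of
# the attractor itself, as in the figures described above.
print(max_separation())
```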
Lorenz defined sensitive dependence as follows: The property characterizing an orbit (i.e., a solution) if most other orbits that pass close to it at some point do not remain close to it as time advances. If M is the state space for the map f^t, then f^t displays sensitive dependence to initial conditions if for any x in M and any δ > 0, there are y in M, with distance d(x, y) < δ, and a time τ such that d(f^τ(x), f^τ(y)) > e^(aτ) d(x, y) for some positive parameter a. The definition does not require that all points from a neighborhood separate from the base point x, but it requires one positive Lyapunov exponent. In addition to a positive Lyapunov exponent, boundedness is another major feature within chaotic systems. The simplest mathematical framework exhibiting sensitive dependence on initial conditions is provided by a particular parametrization of the logistic map: x_(n+1) = 4x_n(1 − x_n), 0 ≤ x_0 ≤ 1, which, unlike most chaotic maps, has a closed-form solution: x_n = sin²(2^n θπ), where the initial condition parameter θ is given by θ = (1/π) arcsin(√x_0). For rational θ, x_n maps into a periodic sequence after a finite number of iterations. But almost all θ are irrational, and, for irrational θ, x_n never repeats itself – it is non-periodic. This solution equation clearly demonstrates the two key features of chaos – stretching and folding: the factor 2^n shows the exponential growth of stretching, which results in sensitive dependence on initial conditions (the butterfly effect), while the squared sine function keeps x_n folded within the range [0, 1]. In physical systems In weather Overview The butterfly effect is most familiar in terms of weather; it can easily be demonstrated in standard weather prediction models, for example. The climate scientists James Annan and William Connolley explain that chaos is important in the development of weather prediction methods; models are sensitive to initial conditions.
They add the caveat: "Of course the existence of an unknown butterfly flapping its wings has no direct bearing on weather forecasts, since it will take far too long for such a small perturbation to grow to a significant size, and we have many more immediate uncertainties to worry about. So the direct impact of this phenomenon on weather prediction is often somewhat wrong." Differentiating types of butterfly effects The concept of the butterfly effect encompasses several phenomena. The two kinds of butterfly effects, including the sensitive dependence on initial conditions, and the ability of a tiny perturbation to create an organized circulation at large distances, are not exactly the same. In Palmer et al., a new type of butterfly effect is introduced, highlighting the potential impact of small-scale processes on finite predictability within the Lorenz 1969 model. Additionally, the identification of ill-conditioned aspects of the Lorenz 1969 model points to a practical form of finite predictability. These two distinct mechanisms suggesting finite predictability in the Lorenz 1969 model are collectively referred to as the third kind of butterfly effect. Later authors have considered Palmer et al.'s suggestions and have aimed to present their perspective without raising specific contentions. The third kind of butterfly effect with finite predictability, as discussed above, was primarily proposed based on a convergent geometric series, known as Lorenz's and Lilly's formulas. Ongoing discussions are addressing the validity of these two formulas for estimating predictability limits. A comparison of the two kinds of butterfly effects and the third kind of butterfly effect has been documented. In recent studies, it was reported that both meteorological and non-meteorological linear models have shown that instability plays a role in producing a butterfly effect, which is characterized by brief but significant exponential growth resulting from a small disturbance.
Recent debates on butterfly effects The first kind of butterfly effect (BE1), known as SDIC (Sensitive Dependence on Initial Conditions), is widely recognized and demonstrated through idealized chaotic models. However, opinions differ regarding the second kind of butterfly effect, specifically the impact of a butterfly flapping its wings on tornado formation, as indicated in two 2024 articles. In more recent discussions published by Physics Today, it is acknowledged that the second kind of butterfly effect (BE2) has never been rigorously verified using a realistic weather model. While the studies suggest that BE2 is unlikely in the real atmosphere, its invalidity in this context does not negate the applicability of BE1 in other areas, such as pandemics or historical events. For the third kind of butterfly effect, the limited predictability within the Lorenz 1969 model is explained by scale interactions in one article and by system ill-conditioning in another more recent study. Finite predictability in chaotic systems According to Lighthill (1986), the presence of SDIC (commonly known as the butterfly effect) implies that chaotic systems have a finite predictability limit. In a literature review, it was found that Lorenz's perspective on the predictability limit can be condensed into the following statement: (A). The Lorenz 1963 model qualitatively revealed the essence of a finite predictability within a chaotic system such as the atmosphere. However, it did not determine a precise limit for the predictability of the atmosphere. (B). In the 1960s, the two-week predictability limit was originally estimated based on a doubling time of five days in real-world models. Since then, this finding has been documented in Charney et al. (1966) and has become a consensus. Recently, a short video has been created to present Lorenz's perspective on predictability limit. 
A recent study refers to the two-week predictability limit, initially calculated in the 1960s with the Mintz-Arakawa model's five-day doubling time, as the "Predictability Limit Hypothesis." Inspired by Moore's Law, this term acknowledges the collaborative contributions of Lorenz, Mintz, and Arakawa under Charney's leadership. The hypothesis supports the investigation into extended-range predictions using both partial differential equation (PDE)-based physics methods and Artificial Intelligence (AI) techniques. Revised perspectives on chaotic and non-chaotic systems By revealing coexisting chaotic and non-chaotic attractors within Lorenz models, Shen and his colleagues proposed a revised view that "weather possesses chaos and order", in contrast to the conventional view of "weather is chaotic". As a result, sensitive dependence on initial conditions (SDIC) does not always appear. Namely, SDIC appears when two orbits (i.e., solutions) become the chaotic attractor; it does not appear when two orbits move toward the same point attractor. The above animation for double pendulum motion provides an analogy. For large angles of swing the motion of the pendulum is often chaotic. By comparison, for small angles of swing, motions are non-chaotic. Multistability is defined when a system (e.g., the double pendulum system) contains more than one bounded attractor that depends only on initial conditions. Multistability was illustrated using kayaking in the figure on the right, where the appearance of strong currents and a stagnant area suggests instability and local stability, respectively. As a result, when two kayaks move along strong currents, their paths display SDIC. On the other hand, when two kayaks move into a stagnant area, they become trapped, showing no typical SDIC (although a chaotic transient may occur). Such features of SDIC or no SDIC suggest two types of solutions and illustrate the nature of multistability.
By taking into consideration time-varying multistability that is associated with the modulation of large-scale processes (e.g., seasonal forcing) and aggregated feedback of small-scale processes (e.g., convection), the above revised view is refined as follows: "The atmosphere possesses chaos and order; it includes, as examples, emerging organized systems (such as tornadoes) and time varying forcing from recurrent seasons." In quantum mechanics The potential for sensitive dependence on initial conditions (the butterfly effect) has been studied in a number of cases in semiclassical and quantum physics, including atoms in strong fields and the anisotropic Kepler problem. Some authors have argued that extreme (exponential) dependence on initial conditions is not expected in pure quantum treatments; however, the sensitive dependence on initial conditions demonstrated in classical motion is included in the semiclassical treatments developed by Martin Gutzwiller and John B. Delos and co-workers. The random matrix theory and simulations with quantum computers prove that some versions of the butterfly effect in quantum mechanics do not exist. Other authors suggest that the butterfly effect can be observed in quantum systems. Zbyszek P. Karkuszewski et al. consider the time evolution of quantum systems which have slightly different Hamiltonians. They investigate the level of sensitivity of quantum systems to small changes in their given Hamiltonians. David Poulin et al. presented a quantum algorithm to measure fidelity decay, which "measures the rate at which identical initial states diverge when subjected to slightly different dynamics". They consider fidelity decay to be "the closest quantum analog to the (purely classical) butterfly effect". 
Whereas the classical butterfly effect considers the effect of a small change in the position and/or velocity of an object in a given Hamiltonian system, the quantum butterfly effect considers the effect of a small change in the Hamiltonian system with a given initial position and velocity. This quantum butterfly effect has been demonstrated experimentally. Quantum and semiclassical treatments of system sensitivity to initial conditions are known as quantum chaos.

In popular culture

The butterfly effect has appeared across media such as literature (for instance, A Sound of Thunder), film and television (such as The Simpsons), video games (such as Life Is Strange), webcomics (such as Homestuck), AI-driven large language models, and more.

See also
Avalanche effect
Behavioral cusp
Cascading failure
Catastrophe theory
Causality
Chain reaction
Clapotis
Determinism
Domino effect
Dynamical system
Fractal
Great Stirrup Controversy
Innovation butterfly
Kessler syndrome
Norton's dome
Numerical analysis
Point of divergence
Positive feedback
Potentiality and actuality
Representativeness heuristic
Ripple effect
Snowball effect
Traffic congestion
Tropical cyclogenesis
Unintended consequences

References

Further reading
James Gleick, Chaos: Making a New Science, New York: Viking, 1987. 368 pp.
Bradbury, Ray. "A Sound of Thunder." Collier's, 28 June 1952.

External links
Weather and Chaos: The Work of Edward N. Lorenz. A short documentary that explains the "butterfly effect" in the context of Lorenz's work.
The Chaos Hypertextbook. An introductory primer on chaos and fractals.
New England Complex Systems Institute - Concepts: Butterfly Effect
ChaosBook.org. Advanced graduate textbook on chaos (no fractals).

Causality Chaos theory Determinism Metaphors referring to insects Physical phenomena Stability theory
Butterfly effect
Physics,Mathematics
4,149
63,019,482
https://en.wikipedia.org/wiki/Absorption%20rate%20constant
The absorption rate constant Ka is a value used in pharmacokinetics to describe the rate at which a drug enters the systemic circulation. It is expressed in units of time−1. Ka is related to the absorption half-life (t1/2a) by the following equation: Ka = ln(2) / t1/2a. Ka values can typically only be found in research articles, in contrast to parameters like bioavailability and elimination half-life, which can often be found in drug and pharmacology handbooks. References Pharmacokinetic metrics
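The relation between Ka and the absorption half-life can be checked numerically; a minimal sketch in Python (the half-life value here is illustrative, not taken from any drug handbook):

```python
import math

def absorption_rate_constant(t_half_a):
    """Ka = ln(2) / t1/2a, where t1/2a is the absorption half-life."""
    return math.log(2) / t_half_a

# Illustrative value: an absorption half-life of 0.5 hours
ka = absorption_rate_constant(0.5)
print(ka)  # about 1.386 h^-1
```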
Absorption rate constant
Chemistry
125
7,609,013
https://en.wikipedia.org/wiki/Hysterical%20contagion
In social psychology, hysterical contagion occurs when people in a group show signs of a physical problem or illness, when in reality there are psychological and social forces at work. Hysterical contagion is a strong form of social contagion; the symptoms can include those associated with clinical hysteria. In 1977 Frieda L. Gehlen offered a revised theory of hysterical contagion which argues that what is actually contagious is the belief that showing certain characteristics will "entitle one to the secondary benefits of the sick role." It may be an unconscious decision on the part of the individual. This approach is believed to be more consistent with existing knowledge of the contagion process and with theoretical approaches to collective behavior.

June bug epidemic

The June bug epidemic serves as a classic example of hysterical contagion. In 1962 a mysterious disease broke out in the dressmaking department of a US textile factory. The symptoms included numbness, nausea, dizziness, and vomiting. Word quickly spread of a bug in the factory that would bite its victims and cause them to develop these symptoms. Soon sixty-two employees developed the mysterious illness, some of whom were hospitalized. The news media reported on the case. After research by company physicians and experts from the US Public Health Service Communicable Disease Center, it was concluded that the case was one of mass hysteria. While the researchers believed some workers had been bitten by an insect, anxiety was probably the cause of the symptoms. No evidence was ever found for a bug that could cause the flu-like symptoms described, nor did all affected workers show bites. The researchers concluded that the work environment was quite stressful: the plant had recently opened, was very busy, and its organization was poor. Further, most of the victims reported high levels of stress in their lives. Social forces seemed to be at work too.
Of the 62 employees who reported symptoms, 59 worked on the first shift, 58 worked in the same area, and 50 of the 62 cases occurred in the two consecutive days after the media supposedly "sensationalized" the event. Most of the employees who became sick took time off to recuperate. See also Mass psychogenic illness Conversion disorder Body-centred countertransference References Group processes Mass psychogenic illness Conformity Hysteria
Hysterical contagion
Biology
464
13,803,804
https://en.wikipedia.org/wiki/Calculation%20of%20glass%20properties
The calculation of glass properties (glass modeling) is used to predict glass properties of interest or glass behavior under certain conditions (e.g., during production) without experimental investigation, based on past data and experience, with the intention to save time, material, financial, and environmental resources, or to gain scientific insight. It was first practised at the end of the 19th century by A. Winkelmann and O. Schott. The combination of several glass models together with other relevant functions can be used for optimization and six sigma procedures. In the form of statistical analysis, glass modeling can aid with accreditation of new data, experimental procedures, and measurement institutions (glass laboratories).

History

Historically, the calculation of glass properties is directly related to the founding of glass science. At the end of the 19th century the physicist Ernst Abbe developed equations that allow calculating the design of optimized optical microscopes in Jena, Germany, stimulated by co-operation with the optical workshop of Carl Zeiss. Before Ernst Abbe's time the building of microscopes was mainly a work of art and experienced craftsmanship, resulting in very expensive optical microscopes with variable quality. Now Ernst Abbe knew exactly how to construct an excellent microscope, but unfortunately, the required lenses and prisms with specific ratios of refractive index and dispersion did not exist. Ernst Abbe was not able to find answers to his needs from glass artists and engineers; glass making was not based on science at this time. In 1879 the young glass engineer Otto Schott sent Abbe glass samples with a special composition (lithium silicate glass) that he had prepared himself and that he hoped would show special optical properties. Measurements by Ernst Abbe showed that Schott's glass samples did not have the desired properties, and they were also not as homogeneous as desired.
Nevertheless, Ernst Abbe invited Otto Schott to work on the problem further and to evaluate all possible glass components systematically. Finally, Schott succeeded in producing homogeneous glass samples, and he invented borosilicate glass with the optical properties Abbe needed. These inventions gave rise to the well-known companies Zeiss and Schott Glass (see also Timeline of microscope technology). Systematic glass research was born. In 1908, Eugene Sullivan founded glass research also in the United States (Corning, New York). At the beginning of glass research it was most important to know the relation between the glass composition and its properties. For this purpose Otto Schott introduced the additivity principle in several publications for calculation of glass properties. This principle implies that the relation between the glass composition and a specific property is linear in all glass component concentrations, assuming an ideal mixture, with Ci and bi representing specific glass component concentrations and related coefficients respectively in the equation below. The additivity principle is a simplification and only valid within narrow composition ranges, as seen in the displayed diagrams for the refractive index and the viscosity. Nevertheless, the application of the additivity principle led the way to many of Schott's inventions, including optical glasses, glasses with low thermal expansion for cooking and laboratory ware (Duran), and glasses with reduced freezing point depression for mercury thermometers. Subsequently, English and Gehlhoff et al. published similar additive glass property calculation models. Schott's additivity principle is still widely in use today in glass research and technology.

Additivity principle: property = Σi bi·Ci

Global models

Schott and many scientists and engineers afterwards applied the additivity principle to experimental data measured in their own laboratory within sufficiently narrow composition ranges (local glass models).
This is most convenient because disagreements between laboratories and non-linear glass component interactions do not need to be considered. In the course of several decades of systematic glass research thousands of glass compositions were studied, resulting in millions of published glass properties, collected in glass databases. This huge pool of experimental data was not investigated as a whole, until Bottinga, Kucuk, Priven, Choudhary, Mazurin, and Fluegel published their global glass models, using various approaches. In contrast to the models by Schott the global models consider many independent data sources, making the model estimates more reliable. In addition, global models can reveal and quantify non-additive influences of certain glass component combinations on the properties, such as the mixed-alkali effect as seen in the adjacent diagram, or the boron anomaly. Global models also reflect interesting developments of glass property measurement accuracy, e.g., a decreasing accuracy of experimental data in modern scientific literature for some glass properties, shown in the diagram. They can be used for accreditation of new data, experimental procedures, and measurement institutions (glass laboratories). In the following sections (except melting enthalpy) empirical modeling techniques are presented, which seem to be a successful way for handling huge amounts of experimental data. The resulting models are applied in contemporary engineering and research for the calculation of glass properties. Non-empirical (deductive) glass models exist. They are often not created to obtain reliable glass property predictions in the first place (except melting enthalpy), but to establish relations among several properties (e.g. atomic radius, atomic mass, chemical bond strength and angles, chemical valency, heat capacity) to gain scientific insight. 
In future, the investigation of property relations in deductive models may ultimately lead to reliable predictions for all desired properties, provided the property relations are well understood and all required experimental data are available.

Methods

Glass properties and glass behavior during production can be calculated through statistical analysis of glass databases such as GE-SYSTEM, SciGlass, and Interglad, sometimes combined with the finite element method. For estimating the melting enthalpy, thermodynamic databases are used.

Linear regression

If the desired glass property is not related to crystallization (e.g., liquidus temperature) or phase separation, linear regression can be applied using common polynomial functions up to the third degree. Below is an example equation of the second degree: property = b0 + Σi (bi·Ci + Σj bij·Ci·Cj). The C-values are the glass component concentrations like Na2O or CaO in percent or other fractions, the b-values are coefficients, and n is the total number of glass components. The glass main component silica (SiO2) is excluded from the equation because of over-parametrization, due to the constraint that all components sum up to 100%. Many terms in the equation can be neglected based on correlation and significance analysis. Systematic errors, such as those seen in the picture, are quantified by dummy variables. Further details and examples are available in an online tutorial by Fluegel.

Non-linear regression

The liquidus temperature has been modeled by non-linear regression using neural networks and disconnected peak functions. The disconnected peak functions approach is based on the observation that within one primary crystalline phase field linear regression can be applied, while at eutectic points sudden changes occur.

Glass melting enthalpy

The glass melting enthalpy reflects the amount of energy needed to convert the mix of raw materials (batch) into a glass melt.
It depends on the batch and glass compositions, on the efficiency of the furnace and heat regeneration systems, the average residence time of the glass in the furnace, and many other factors. A pioneering article about the subject was written by Carl Kröger in 1953.

Finite element method

For modeling of the glass flow in a glass melting furnace the finite element method is applied commercially, based on data or models for viscosity, density, thermal conductivity, heat capacity, absorption spectra, and other relevant properties of the glass melt. The finite element method may also be applied to glass forming processes.

Optimization

It is often required to optimize several glass properties simultaneously, including production costs. This can be performed, e.g., by simplex search, or in a spreadsheet as follows:
Listing of the desired properties;
Entering of models for the reliable calculation of properties based on the glass composition, including a formula for estimating the production costs;
Calculation of the squares of the differences (errors) between desired and calculated properties;
Reduction of the sum of square errors using the Solver option in Microsoft Excel with the glass components as variables.
Other software (e.g. Microcal Origin) can also be used to perform these optimizations. It is possible to weight the desired properties differently. Basic information about the principle can be found in an article by Huff et al. The combination of several glass models together with further relevant technological and financial functions can be used in six sigma optimization. See also Glass batch calculation References Glass engineering and science Glass physics Glass chemistry Glass Mathematical modeling
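As a sketch of the linear-regression approach described above, the following Python/NumPy snippet fits a second-degree model for a single component. The "measured" data are synthetic, generated from invented coefficients purely so the fit can be checked; no real glass data are used.

```python
import numpy as np

# Synthetic data: property values at five concentrations C of one
# glass component (e.g., Na2O in mol%), generated from known
# hypothetical coefficients so the fit can be verified.
C = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
prop = 1.45 + 0.004 * C + 0.0001 * C**2

# Design matrix for the model: property = b0 + b1*C + b11*C^2
X = np.column_stack([np.ones_like(C), C, C**2])
coeffs, *_ = np.linalg.lstsq(X, prop, rcond=None)
print(coeffs)  # recovers approximately [1.45, 0.004, 0.0001]
```

Real applications fit many components and cross terms at once, and weigh data from different sources, as in the global models discussed above.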
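The spreadsheet-style optimization can likewise be sketched in plain Python: minimize the sum of squared differences between calculated and desired properties over the composition. The property models, coefficients, and targets below are hypothetical, invented for illustration only.

```python
# Hypothetical linear property models for a two-component variation
# (concentrations in %); all coefficients are invented.
def calc_properties(na2o, cao):
    refractive_index = 1.458 + 0.002 * na2o + 0.003 * cao
    density = 2.20 + 0.02 * na2o + 0.03 * cao
    return refractive_index, density

target = (1.52, 2.75)  # desired property values

# Brute-force search over integer concentrations, minimizing the sum
# of squared errors (a spreadsheet would use the Solver instead).
best = None
for na2o in range(0, 26):
    for cao in range(0, 16):
        calc = calc_properties(na2o, cao)
        err = sum((c - t) ** 2 for c, t in zip(calc, target))
        if best is None or err < best[0]:
            best = (err, na2o, cao)

print(best)  # (sum of squared errors, Na2O %, CaO %)
```

Weighting the desired properties differently, as the text mentions, would amount to multiplying each squared error term by a weight before summing.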
Calculation of glass properties
Physics,Chemistry,Materials_science,Mathematics,Engineering
1,754
3,338,671
https://en.wikipedia.org/wiki/Definite%20clause%20grammar
A definite clause grammar (DCG) is a way of expressing grammars, for either natural or formal languages, in a logic programming language such as Prolog. It is closely related to the concept of attribute grammars / affix grammars. DCGs are usually associated with Prolog, but similar languages such as Mercury also include DCGs. They are called definite clause grammars because they represent a grammar as a set of definite clauses in first-order logic. The term DCG refers to the specific type of expression in Prolog and other similar languages; not all ways of expressing grammars using definite clauses are considered DCGs. However, all of the capabilities or properties of DCGs will be the same for any grammar that is represented with definite clauses in essentially the same way as in Prolog. The definite clauses of a DCG can be considered a set of axioms, where the validity of a sentence, and the fact that it has a certain parse tree, can be considered theorems that follow from these axioms. This has the advantage of making recognition and parsing of expressions in a language a general matter of proving statements, such as statements in a logic programming language.

History

The history of DCGs is closely tied to the history of Prolog, and the history of Prolog revolves around several researchers in both Marseille, France, and Edinburgh, Scotland. According to Robert Kowalski, an early developer of Prolog, the first Prolog system was developed in 1972 by Alain Colmerauer and Philippe Roussel. The first program written in the language was a large natural-language processing system. Fernando Pereira and David Warren at the University of Edinburgh were also involved in the early development of Prolog. Colmerauer had previously worked on a language processing system called Q-systems that was used to translate between English and French.
In 1978, Colmerauer wrote a paper about a way of representing grammars called metamorphosis grammars, which were part of the early version of Prolog called Marseille Prolog. In this paper, he gave a formal description of metamorphosis grammars and some examples of programs that use them. Fernando Pereira and David Warren, two other early architects of Prolog, coined the term "definite clause grammar" and created the notation for DCGs that is used in Prolog today. They gave credit for the idea to Colmerauer and Kowalski, and they note that DCGs are a special case of Colmerauer's metamorphosis grammars. They introduced the idea in an article called "Definite Clause Grammars for Language Analysis", where they describe DCGs as a "formalism ... in which grammars are expressed as clauses of first-order predicate logic" that "constitute effective programs of the programming language Prolog". Pereira, Warren, and other pioneers of Prolog later wrote about several other aspects of DCGs. Pereira and Warren wrote an article called "Parsing as Deduction", describing things such as how the Earley Deduction proof procedure is used for parsing. Pereira also collaborated with Stuart M. Shieber on a book called "Prolog and Natural Language Analysis", which was intended as a general introduction to computational linguistics using logic programming.

Example

A basic example of DCGs helps to illustrate what they are and what they look like.

sentence --> noun_phrase, verb_phrase.
noun_phrase --> det, noun.
verb_phrase --> verb, noun_phrase.
det --> [the].
det --> [a].
noun --> [cat].
noun --> [bat].
verb --> [eats].

This generates sentences such as "the cat eats the bat" and "a bat eats the cat". One can generate all of the valid expressions in the language generated by this grammar at a Prolog interpreter by typing sentence(X,[]). Similarly, one can test whether a sentence is valid in the language by typing something like sentence([the,bat,eats,the,bat],[]).
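For readers less familiar with Prolog, here is a rough Python analogue of this toy grammar. It is a sketch under one simplifying assumption: parsing is deterministic, without Prolog's backtracking, which suffices for this grammar. Each parser takes the token list and a start index and returns the index where it stopped, or None on failure:

```python
# Each "nonterminal" is a function: (tokens, i) -> next index or None.
def word(w):
    def parse(tokens, i):
        return i + 1 if i < len(tokens) and tokens[i] == w else None
    return parse

def seq(*parsers):          # sequencing, like "a, b" in a DCG rule body
    def parse(tokens, i):
        for p in parsers:
            i = p(tokens, i)
            if i is None:
                return None
        return i
    return parse

def alt(*parsers):          # alternatives, like multiple DCG clauses
    def parse(tokens, i):
        for p in parsers:
            j = p(tokens, i)
            if j is not None:
                return j
        return None
    return parse

det = alt(word("the"), word("a"))
noun = alt(word("cat"), word("bat"))
verb = word("eats")
noun_phrase = seq(det, noun)
verb_phrase = seq(verb, noun_phrase)
sentence = seq(noun_phrase, verb_phrase)

tokens = ["the", "bat", "eats", "a", "cat"]
print(sentence(tokens, 0) == len(tokens))  # True: the sentence is valid
```

Threading the position index through each call mirrors the pair of hidden list arguments that the DCG-to-clause translation adds in Prolog.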
Translation into definite clauses

DCG notation is just syntactic sugar for normal definite clauses in Prolog. For example, the previous example could be translated into the following:

sentence(A,Z) :- noun_phrase(A,B), verb_phrase(B,Z).
noun_phrase(A,Z) :- det(A,B), noun(B,Z).
verb_phrase(A,Z) :- verb(A,B), noun_phrase(B,Z).
det([the|X], X).
det([a|X], X).
noun([cat|X], X).
noun([bat|X], X).
verb([eats|X], X).

Difference lists

The arguments to each functor, such as (A,B) and (B,Z), are difference lists; difference lists are a way of representing a prefix of a list as the difference between its two suffixes (the bigger including the smaller one). Using Prolog's notation for lists, a singleton list prefix P = [H] can be seen as the difference between [H|X] and X, and thus represented with the pair ([H|X],X), for instance. Saying that P is the difference between A and B is the same as saying that append(P,B,A) holds. Or in the case of the previous example, append([H],X,[H|X]). Difference lists are used to represent lists with DCGs for reasons of efficiency. It is much more efficient to concatenate list differences (prefixes), in the circumstances that they can be used, because the concatenation of (A,B) and (B,Z) is just (A,Z). Indeed, append(P,B,A), append(Q,Z,B) entails append(P,Q,S), append(S,Z,A). This is the same as saying that list concatenation is associative:

A = P + B = P + (Q + Z) = (P + Q) + Z = S + Z = A

Non-context-free grammars

In pure Prolog, normal DCG rules with no extra arguments on the functors, such as the previous example, can only express context-free grammars; there is only one argument on the left side of the production. However, context-sensitive grammars can also be expressed with DCGs, by providing extra arguments, such as in the following example:

s --> a(N), b(N), c(N).
a(0) --> [].
a(M) --> [a], a(N), {M is N + 1}.
b(0) --> [].
b(M) --> [b], b(N), {M is N + 1}.
c(0) --> [].
c(M) --> [c], c(N), {M is N + 1}.
This set of DCG rules describes the grammar which generates the language that consists of strings of the form a^n b^n c^n.

s --> symbols(Sem,a), symbols(Sem,b), symbols(Sem,c).
symbols(end,_) --> [].
symbols(s(Sem),S) --> [S], symbols(Sem,S).

This set of DCG rules describes the same a^n b^n c^n grammar by structurally representing the count n as a term s(s(...end)).

Representing features

Various linguistic features can also be represented fairly concisely with DCGs by providing extra arguments to the functors. For example, consider the following set of DCG rules:

sentence --> pronoun(subject), verb_phrase.
verb_phrase --> verb, pronoun(object).
pronoun(subject) --> [he].
pronoun(subject) --> [she].
pronoun(object) --> [him].
pronoun(object) --> [her].
verb --> [likes].

This grammar allows sentences like "he likes her" and "he likes him", but not "her likes he" and "him likes him".

Parsing with DCGs

The main practical use of a DCG is to parse sentences of the given grammar, i.e. to construct a parse tree. This can be done by providing "extra arguments" to the functors in the DCG, as in the following rules:

sentence(s(NP,VP)) --> noun_phrase(NP), verb_phrase(VP).
noun_phrase(np(D,N)) --> det(D), noun(N).
verb_phrase(vp(V,NP)) --> verb(V), noun_phrase(NP).
det(d(the)) --> [the].
det(d(a)) --> [a].
noun(n(bat)) --> [bat].
noun(n(cat)) --> [cat].
verb(v(eats)) --> [eats].

One can now query the interpreter to yield a parse tree of any given sentence:

| ?- sentence(Parse_tree, [the,bat,eats,a,cat], []).
Parse_tree = s(np(d(the),n(bat)),vp(v(eats),np(d(a),n(cat)))) ? ;

Other uses

DCGs can serve as a convenient syntactic sugar to hide certain parameters in code in other places besides parsing applications. In the declaratively pure programming language Mercury, I/O must be represented by a pair of io.state arguments. DCG notation can be used to make using I/O more convenient, although state variable notation is usually preferred.
DCG notation is also used for parsing and similar tasks in Mercury, as it is in Prolog.

Extensions

Since DCGs were introduced by Pereira and Warren, several extensions have been proposed. Pereira himself proposed an extension called extraposition grammars (XGs). This formalism was intended in part to make it easier to express certain grammatical phenomena, such as left extraposition. Pereira states, "The difference between XG rules and DCG rules is then that the left-hand side of an XG rule may contain several symbols." This makes it easier to express rules for context-sensitive grammars. Peter Van Roy extended DCGs to allow multiple accumulators. Another, more recent, extension was made by researchers at NEC Corporation, called Multi-Modal Definite Clause Grammars (MM-DCGs), in 1995. Their extensions were intended to allow the recognition and parsing of expressions that include non-textual parts such as pictures. Another extension, called definite clause translation grammars (DCTGs), was described by Harvey Abramson in 1984. DCTG notation looks very similar to DCG notation; the major difference is that one uses ::= instead of --> in the rules. It was devised to handle grammatical attributes conveniently. The translation of DCTGs into normal Prolog clauses is like that of DCGs, but three arguments are added instead of two. See also Chomsky hierarchy Context-free grammar Natural language processing Phrase structure grammar Notes External links NLP with Prolog Context-free grammars and DCGs Definite Clause Grammars: Not Just for Parsing Anymore Definite Clause Grammars for Language Analysis Formal languages Logic programming Parsing
Definite clause grammar
Mathematics
2,488
26,879,790
https://en.wikipedia.org/wiki/IConnectHere
iConnectHere was the consumer division of Deltathree, which provided VoIP internet telephony service to consumers and businesses worldwide. The company's products were: broadband (Internet) phones, PC-to-Phone service, mobile dialers, calling cards, and local phone numbers.

History

Deltathree was founded in 1996 and on March 14, 1997, first demonstrated a direct telephone conversation over the Internet. By June 1999, Deltathree's PC-to-Phone and Phone-to-Phone services became commercially available. In September 2001 the iConnectHere brand and service was launched, with even lower rates than initially offered. On December 19, 2001, Deltathree announced that iConnectHere would offer its PC-to-Phone service to MSN Messenger and Windows Messenger users in 17 countries. In 2007 Deltathree launched a communications solution called JoIP jointly with Panasonic. JoIP was a service enabling owners of Panasonic's Globrange regular phones to make cheap international calls. In July 2010 Deltathree launched a communications solution called JoIP Mobile (mobile.joip.com). This VoIP mobile dialer could be downloaded to practically any smartphone: BlackBerry OS, Symbian OS, Android OS and iPhone; versions for further Windows and BlackBerry devices were announced as forthcoming. On August 1, 2017, Deltathree, LLC, provider of iConnectHere, discontinued service.

Client applications and devices

iConnectHere provided free client applications such as the PC-to-Phone Dialer and mobile dialers as part of its service; the application had 8 major releases. Additionally, iConnectHere offered a free broadband phone adapter from Linksys along with the Pay as you Go World Plans, with local receiving numbers all over the world, among other international and U.S. calling plans.
References External links iConnectHere web site mobile JoIP web site VoIP companies of the United States VoIP software Instant messaging Windows instant messaging clients Telecommunications companies established in 1996 Companies based in New York (state)
IConnectHere
Technology
439
873,363
https://en.wikipedia.org/wiki/Miriam%20Rothschild
Dame Miriam Louisa Rothschild (5 August 1908 – 20 January 2005) was a British natural scientist and author who made contributions to zoology, entomology, and botany.

Early life

Miriam Rothschild was born in 1908 at Ashton Wold, near Oundle in Northamptonshire, the daughter of Charles Rothschild of the Rothschild banking family of England and Rózsika Edle Rothschild (née von Wertheimstein), a Hungarian sportswoman of Austrian-Jewish descent. Her brother was Victor Rothschild, 3rd Baron Rothschild, and one of her sisters, (Kathleen Annie) Pannonica Rothschild (Baroness Nica de Koenigswarter), would later be a bebop jazz enthusiast and patroness of Thelonious Monk and Charlie Parker. Her father had described about 500 new species of flea, and her uncle Lionel Walter Rothschild had built a private natural history museum at Tring. By the age of four she had started collecting ladybird beetles and caterpillars and taking a tame quail to bed with her. World War I broke out on the eve of Miriam's sixth birthday in 1914, while the Rothschilds were holidaying in Austria-Hungary. They hurried home on the first westward train but, unable to pay, had to borrow money from a Hungarian passenger who commented, "This is the proudest moment of my life. Never did I think that I should be asked to lend money to a Rothschild!" Her father took his own life when she was 15, after which she became closer to her uncle. She was educated at home until the age of 17, when she demanded to go to school. She then attended evening classes in zoology at Chelsea College of Science and Technology and daytime classes in literature at Bedford College, London.

Personal life

During World War II, Rothschild was recruited to work at Bletchley Park on codebreaking with Alan Turing and was awarded a Defence Medal by the British government for her efforts. Additionally, she pressed the UK Government to admit more German Jews as refugees from Nazi Germany.
She arranged housing for 49 Jewish children, some of whom stayed at her home at Ashton Wold. The estate also served as a hospital for wounded military personnel, including her future husband, Captain George Lane. Lane, a Hungarian-born British soldier, had changed his name from Lanyi in case of enemy capture. They had six children, four biological: Mary Rozsiska (1945–2010), Charles Daniel (born 1948), Charlotte Teresa (born 1951) and Johanna Miriam (born 1951); and two adopted. The marriage was dissolved in 1957 but the pair remained on good terms. Rothschild was a vegetarian and had a close connection to her pets and wild animals that she befriended. Rothschild supported many social causes including animal welfare, free milk for children in schools, and gay rights by contributing to the Wolfenden Report which resulted in decriminalising "homosexual behaviour between consenting adults in private". Research During the 1930s Rothschild made a name for herself at the Marine Biological Station in Plymouth, studying the mollusc Nucula and its trematode parasites (Rothschild 1936, 1938a, 1938b). Rothschild was a leading authority on fleas. She was the first person to work out the flea's jumping mechanism. She also studied the flea's reproductive cycle and linked this, in rabbits, to the hormonal changes within the host. Her New Naturalist book on parasitism (Fleas, Flukes and Cuckoos) was a huge success. Its title can be explained as: external parasites (e.g. fleas), internal parasites (e.g. flukes) and others (the cuckoo is a 'brood parasite'). Along with Professor G. Harris, Rothschild determined that myxomatosis, a virus affecting tapeti and brush rabbits, was spread by fleas, not mosquitoes as previously understood. The Rothschild Collection of Fleas (founded by Charles Rothschild) is now part of the Natural History Museum collection and her six-volume catalogue of the collection (in collaboration with G. H. E. 
Hopkins and illustrated by Arthur Smith) took thirty years to complete. In addition to her work on fleas and other parasites, Rothschild studied insects in the order Lepidoptera. Specifically, she was interested in chemical ecology and mimicry. To learn more about mimicry and its role in Lepidopteran predation by birds, Rothschild adapted greenhouses on her Ashton Wold estate to serve as aviaries for owls and other potential predators. This led to further work to identify the compounds synthesized by insects such as Burnet moth and collaboration with Tadeusz Reichstein to show that a monarch butterfly's toxicity comes from milkweed, its larval host plant. It also resulted in work to demonstrate the importance of plant-derived carotenoids in insect coloration. Rothschild discovered that Large white cabbage butterfly caterpillars fed a diet without carotenoids did not match their background as they typically would and Monarch butterfly caterpillars' pupae had silver threads instead of gold. Another area of Lepidoptera research that Rothschild pursued was that of the production of antibiotics by butterflies. This work was initially inspired by observations Rothschild made during an anthrax outbreak in the 1930s, but did not begin in earnest until around 60 years later. Rothschild drafted a manuscript on the subject and the results were eventually published 12 years after her death. Rothschild was a member of the Oxford genetics school during the 1960s, where she met the ecological geneticist E.B. Ford. Rothschild authored books about her father (Rothschild's Reserves – time and fragile nature) and her uncle (Dear Lord Rothschild). She wrote about 350 papers on entomology, zoology and other subjects. Later in her career, Rothschild grew interested in hay meadow restoration. In response to a comment that it would take 1,000 years to reproduce a medieval meadow, she said "I could make a very good imitation in ten...it took me fifteen." 
She developed multiple seed mixes on her Ashton Wold estate, including one she called "Farmer's Nightmare". Another seed mix was used by Prince Charles, Prince of Wales, on his Highgrove Estate. Awards/honours In 1973, Rothschild was elected a Foreign Honorary Member of the American Academy of Arts and Sciences. She received honorary doctorates from eight universities, including Oxford and Cambridge, and was an Honorary Fellow of St Hugh's College, Oxford. She gave the Romanes Lecture for 1984–5 in Oxford. Rothschild was elected a Fellow of the Royal Society in 1985 and was granted the title of Dame Commander of the Order of the British Empire in 2000. Rothschild was a pioneer among women in entomology and became the first woman trustee of the Natural History Museum (1967–1975), the first woman president of the Royal Entomological Society (1993–1994), the first woman to serve on the Committee for Conservation of the National Trust, and the first woman member of the eight-member Entomological Club. In 1986 the John Galway Foster Human Rights Trust was established; in 2006 the name of the trust was expanded to The Miriam Rothschild & John Foster Human Rights Trust. This funds an annual lecture on human rights. She is also honoured by the endowed Professorship in Conservation Biology in her name at the University of Cambridge. Philanthropy Rothschild founded the 'Schizophrenia Research Fund' in 1962, in honour of her sister Liberty after Liberty was diagnosed and hospitalized with schizophrenia. The Schizophrenia Research Fund is an independent registered charity formed "to advance the better understanding, prevention, treatment and cure of all forms of mental illness and in particular of the illness known as Schizophrenia". In March 2006, following Miriam's death, the name of the Fund was changed in her memory to the 'Miriam Rothschild Schizophrenia Research Fund'. 
The pioneer of British Art Therapy, Edward Adamson and his partner and collaborator, John Timlin, were regular visitors to Ashton Wold. Between 1983 and 1997, the influential Adamson Collection of 6,000 paintings, drawings, sculptures and ceramics by people living with major mental disorder at Netherne Hospital, created with Adamson's encouragement in his progressive art studios at the hospital, was housed and displayed to the public in a medieval barn at Ashton. Rothschild was both a Trustee and, subsequently, Patron of the Adamson Collection Trust. The Adamson Collection is now almost all re-located to the Wellcome Library. All Adamson's papers, correspondence, photographs and other material are currently being organised as the 'Edward Adamson Archive', also at the Wellcome Library. Selected works Books Rothschild, Miriam and Clay, Theresa (1953) Fleas, Flukes and Cuckoos: a study of bird parasites. The New Naturalist series. London: Collins Hopkins, G. H. E. and Rothschild, Miriam (1953–81) An Illustrated Catalogue to the Rothschild Collection of Fleas 6 volumes (4to.) London: British Museum (Natural History) Rothschild, Miriam (1983) Dear Lord Rothschild: birds, butterflies and history. London: Hutchinson () Rothschild, Miriam and Farrell, Clive (1985) The Butterfly Gardener. London: Michael Joseph Rothschild, Miriam (1986) Animals and Man: the Romanes lecture for 1984–5 delivered in Oxford on 5 February 1985. Oxford: Clarendon Press Rothschild, Miriam et al. (1986) Colour Atlas of Insect Tissues via the Flea. London: Wolfe Rothschild, Miriam (1991) Butterfly Cooing Like a Dove. London: Doubleday Stebbing-Allen, George; Woodcock, Martin; Lings, Stephen and Rothschild, Miriam (1994) A Diversity of Birds: a personal voyage of discovery. London: Headstart () Rothschild, Miriam and Marren, Peter (1997) Rothschild's Reserves: time & fragile nature. 
London: Harley () Rothschild, Miriam; Garton, Kate; De Rothschild, Lionel & Lawson, Andrew (1997) The Rothschild Gardens: a family tribute to nature. London: Abrams Van Emden, Helmut F. and Rothschild, Miriam (eds.) (2004) Insect and Bird Interactions Andover, Hampshire: Intercept () Papers Rothschild, M. (1936) Gigantism and variation in Peringia ulvae Pennant 1777, caused by infection with larval trematodes. Journal of the Marine Biological Association of the United Kingdom 20, 537–46 Rothschild, M. (1938a) Further observations on the effect of trematode parasites on Peringia ulvae (Pennant) 1777. Novit. Zool. 41, 84–102 Rothschild, M. (1938b) Observations on the growth and trematode infections of Peringia ulvae (Pennant) 1777 in a pool in the Tamar saltings, Plymouth. Parasitology 33(4), 406–415. doi:10.1017/S0031182000024616 [many more] References External links Further reading The Women of Rothschild: The Untold Story of the World's Most Famous Dynasty, Natalie Livingstone (2021) 1908 births 2005 deaths 20th-century British women scientists Alumni of King's College London Bletchley Park women British women biologists Dames Commander of the Order of the British Empire Daughters of barons English Jews English entomologists English botanists English garden writers English malacologists English women philanthropists Fellows of the American Academy of Arts and Sciences Fellows of the Royal Entomological Society Female fellows of the Royal Society Jewish British scientists Jewish British philanthropists Jewish biologists Jewish women scientists English LGBTQ rights activists New Naturalist writers Chemical ecologists People from Oundle Miriam Rothschild Victoria Medal of Honour recipients Women entomologists 20th-century British zoologists Bletchley Park people
Miriam Rothschild
Chemistry
2,383
23,508,955
https://en.wikipedia.org/wiki/Galanin-like%20peptide
Galanin-like peptide (GALP) is a neuropeptide present in humans and other mammals. It is a 60-amino acid polypeptide produced in the arcuate nucleus of the hypothalamus and the posterior pituitary gland. It is involved in the regulation of appetite and may also have other roles such as in inflammation, sex behavior, and stress. Findings additionally suggest that GALP could play a role in energy metabolism due to its ability to maintain continual activation of the sympathetic nervous system (SNS) via thermogenesis, the production of heat within living organisms. In addition, the administration of GALP directly into the brain leads to a reduction in the secretion of thyroid-stimulating hormone (TSH), which indicates the involvement of GALP in the neuroendocrine regulation of the hypothalamic-pituitary-thyroid (HPT) axis and adds further evidence for the role of GALP in energy homeostasis. Notes Neuropeptides
Galanin-like peptide
Chemistry,Biology
217
43,291,147
https://en.wikipedia.org/wiki/RU%20Camelopardalis
RU Camelopardalis, or RU Cam, is a W Virginis variable (type II Cepheid) in the constellation of Camelopardalis. It is also a carbon star, which is very unusual for a Cepheid variable. History RU Cam was reported as a new variable star in 1907. It was quickly recognised as one of the Cepheid class of variable stars. The first detailed study of the spectrum of RU Cam showed that it changed during the brightness variations. From partway down the descending branch of the light curve to just after minimum brightness, the spectrum is class R with hydrogen absorption lines. The spectrum then develops hydrogen emission lines. For several days either side of maximum brightness, the spectrum becomes a relatively normal class K. RU Cam remained a somewhat unusual W Virginis variable until 1964, when the relatively regular pulsation of about 1 magnitude almost entirely stopped. Since then the pulsations have varied from cycle to cycle, with amplitudes changing from several tenths of a magnitude to almost zero. The light curve has a more sinusoidal shape than when it was pulsating at full amplitude and the period changes erratically between 17.4 and 26.6 days. Properties RU Camelopardalis is both a carbon star and a type II Cepheid variable star. This is unusual but not unique. At least five other relatively bright examples are known, two of which are of the BL Herculis sub-type. The atmosphere contains more carbon than oxygen but is not deficient in hydrogen. This can be explained as the result of triple-alpha helium burning being processed through a CNO cycle and convected to the surface. This process occurs in some of the more massive asymptotic giant branch (AGB) stars at the third dredge-up. W Virginis stars are typically metal-poor and enriched by s-process elements, but this is not the case for RU Cam, which has near-solar metallicity and no heavy metal enhancement. 
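The pre-1964 amplitude of about 1 magnitude, and the few-tenths amplitudes seen since, can be translated into brightness ratios with Pogson's relation; a minimal sketch (the specific amplitude values below are taken from the text above and are illustrative):

```python
def flux_ratio(delta_mag: float) -> float:
    """Pogson's relation: a magnitude difference delta_mag corresponds to
    a flux (brightness) ratio of 10**(0.4 * delta_mag)."""
    return 10 ** (0.4 * delta_mag)

# Full-amplitude pulsation (~1 magnitude) vs. the weakened post-1964
# behaviour (a few tenths of a magnitude):
full = flux_ratio(1.0)   # ~2.51x change in brightness over a cycle
weak = flux_ratio(0.3)   # ~1.32x
```

So the old full-amplitude cycles changed the star's brightness by a factor of about 2.5, while the irregular modern cycles change it by only tens of percent.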
W Virginis variables are thought to be AGB stars executing a blue loop due to a thermal pulse from the helium burning shell. These stars cross the instability strip and undergo very regular pulsations. RU Cam fits this model reasonably well despite its peculiarities. Its temperature of around 5,000 K and luminosity several hundred times the Sun's place it on or near the instability strip, and its mass is typical of AGB stars. The brightness variations of RU Cam are caused by pulsations which cause both the temperature and radius to vary. The temperature has been estimated to vary between 3,800 K and 5,650 K, with the radius also varying about its average value. Even prior to 1965, the colour variations suggested a smaller temperature range of 4,220 K to 5,240 K. The maximum temperature occurs at the same time as the minimum radius, and this is when the star is near its brightest. Evolution The evolution of a star executing a blue loop from the AGB is expected to be rapid. Period changes in RU Cam before 1965 suggest that it would cross the entire instability strip in 31,000 years. Any secular period changes since then have been masked by irregularities. It is predicted that the temperature of RU Cam is increasing and that it is approaching, or leaving, the blue edge of the instability strip, in which case the pulsations would stop completely. A blueward crossing is the first crossing of the instability strip and would be followed by a second crossing when the star cools back towards the AGB. References Camelopardalis Camelopardalis, RU 056167 W Virginis variables Carbon stars 035681 J07214412+6940147 BD+69 417
RU Camelopardalis
Astronomy
769
100,532
https://en.wikipedia.org/wiki/Ashlesha
Ashlesha (Sanskrit: आश्लेषा or Āśleṣā) (Tibetan: སྐར་མ་སྐག), also known as Ayilyam in Tamil and Malayalam (Tamil: ஆயில்யம், Malayalam: ആയില്യം, Āyilyaṃ), is the 9th of the 27 nakshatras in Hindu astrology. Ashlesha is also known as the Clinging Star or Nāga. It is associated with the constellation Hydra. It extends from 16:40 to 30:00 Cancer. The planetary lord is Mercury or Budha. Its presiding deities are the Nāgas. The nakshatra's symbol is a coiled serpent. It is a tikshna or sharp nakshatra. Its animal symbol is the male cat. See also List of Nakshatras References Nakshatra
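The quoted span (16:40 to 30:00 Cancer) follows from dividing the 360° ecliptic equally among the 27 nakshatras; a small sketch of that arithmetic (the 1-based indexing is my convention here, not from the article):

```python
SPAN = 360.0 / 27.0  # each nakshatra covers 13 deg 20 min of ecliptic longitude

def nakshatra_bounds(n: int) -> tuple:
    """Ecliptic longitude range in degrees of the n-th nakshatra (1-based)."""
    start = (n - 1) * SPAN
    return start, start + SPAN

start, end = nakshatra_bounds(9)   # Ashlesha, the 9th nakshatra
# start ~ 106.67 deg = 16 deg 40 min into Cancer (Cancer begins at 90 deg)
# end   = 120.0 deg  = 30 deg 00 min, i.e. the end of Cancer
```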
Ashlesha
Astronomy
165
66,522,474
https://en.wikipedia.org/wiki/Ascosphaera%20apis
Ascosphaera apis is a species of fungus belonging to the family Ascosphaeraceae. It was one of the first entomopathogen genomes to be sequenced. It has a cosmopolitan distribution. It causes chalkbrood disease in bees, which rarely kills infected colonies but can weaken them and lead to reduced honey yields and susceptibility to other pests and diseases. References Onygenales Fungus species
Ascosphaera apis
Biology
91
65,307,736
https://en.wikipedia.org/wiki/Terrestrial%20atmospheric%20lens
A Terrestrial Atmospheric Lens is a theoretical method of using the Earth as a large lens, exploiting a physical effect called atmospheric refraction. The sun's image appears about a half degree above its real position during sunset due to Earth's atmospheric refraction. In 1998, NASA astrophysicist Yu Wang from the Jet Propulsion Laboratory first proposed using the Earth as an atmospheric lens. Wang suggests in his paper: "If we could build a space telescope using the Earth's atmosphere as an objective lens, the aperture of such a space telescope would be the diameter of the Earth. Telescope resolution could be enhanced by up to seven orders of magnitude and would enable detailed images of planets in far away stellar systems." If built, the terrestrial atmospheric lens would become the largest telescope ever built. Its high resolution would allow direct imaging of nearby Earth-like planets with a level of detail never seen before. As of September 2020, the main observation targets are Proxima b, located 4.2 light years away, Tau Ceti e, 12 light years away, and Teegarden b, also located 12 light years away. The three planets are currently considered to be potentially habitable. However, using the Sun as a gravitational lens would produce images with higher resolution when imaging potentially habitable exoplanets. See also Gravitational lens Gravitational microlensing References Atmospheric optical phenomena Observational astronomy Refraction
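The "seven orders of magnitude" claim can be sanity-checked against the Rayleigh diffraction limit; a back-of-envelope sketch, assuming the effective aperture equals Earth's diameter and using the 2.4 m Hubble mirror as a comparison point (both choices are mine, not from the article):

```python
def rayleigh_limit(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution in radians:
    theta = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m

EARTH_DIAMETER = 1.2742e7    # m
HUBBLE_APERTURE = 2.4        # m, for comparison
lam = 550e-9                 # visible light, m

# Resolution gain of an Earth-diameter aperture over a 2.4 m mirror:
gain = rayleigh_limit(lam, HUBBLE_APERTURE) / rayleigh_limit(lam, EARTH_DIAMETER)
# gain ~ 5e6, i.e. roughly the seven orders of magnitude quoted above
```

The gain depends only on the aperture ratio, since the wavelength cancels; atmospheric aberrations would of course degrade this idealized figure.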
Terrestrial atmospheric lens
Physics,Astronomy
285
33,979,901
https://en.wikipedia.org/wiki/Intel%20P67
The Intel P67 is a mainstream chipset created by Intel. It was launched to market in January 2011; the first edition of this chipset had a faulty SATA controller and Intel had to issue a hardware fix to resolve this problem. This fix (Revision B3) was launched to market at the beginning of March 2011. Features Standard features: Supports processor overclocking (only available for unlocked processors: Core i5-2500K, Core i5-2550K, Core i7-2600K and 2700K) Supports memory overclocking 1× PCI Express 2.0 x16 lanes at 16 GB/s bandwidth 2× Serial ATA (SATA) 3.0 (6 Gbit/s) ports 4× Serial ATA (SATA) 2.0 (3 Gbit/s) ports 14× Universal Serial Bus (USB) 2.0 ports Dual-channel DDR3 memory Integrated Gigabit Ethernet MAC Optional features: SATA RAID support (0/1/10/5) through Intel Rapid Storage Technology 2× PCI Express 2.0 x8 lanes at 8 GB/s bandwidth each The P67 chipset is made to work in conjunction with Intel LGA 1155 CPUs and LGA 1156 CPUs. ASRock produced a motherboard in 2010 using the P67 chipset which supports Lynnfield and Clarkdale. References External links Intel P67 Express Chipset Overview HWbot submission of Clarkdale running on P67 chipset P67
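The 16 GB/s figure for the x16 link follows from PCIe 2.0's 5 GT/s per-lane signaling rate with 8b/10b encoding, counted across both directions; a quick sketch of that arithmetic:

```python
def pcie2_bandwidth_gb_s(lanes: int, duplex: bool = True) -> float:
    """PCIe 2.0 throughput: 5 GT/s per lane, 8b/10b encoding (80% efficient),
    giving 500 MB/s per lane per direction."""
    per_lane = 5.0 * (8 / 10) / 8   # GB/s per lane, one direction
    total = lanes * per_lane
    return total * 2 if duplex else total

pcie2_bandwidth_gb_s(16)                 # 16.0 -> the x16 figure above
pcie2_bandwidth_gb_s(8)                  # 8.0  -> the optional x8 lanes
pcie2_bandwidth_gb_s(16, duplex=False)   # 8.0 per direction
```

This shows the quoted figures count both transmit and receive directions, a common convention in chipset marketing material.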
Intel P67
Technology
320
23,431,978
https://en.wikipedia.org/wiki/HARMST
HARMST is an acronym for high aspect ratio microstructure technology, which describes fabrication technologies, used to create high-aspect-ratio microstructures with heights between tens of micrometers up to a centimeter, and aspect ratios greater than 10:1. Examples include the LIGA fabrication process, advanced silicon etch, and deep reactive ion etching. See also LIGA Micromechanical systems — high aspect ratio (HAR) micromachining References Materials science Microtechnology
HARMST
Physics,Materials_science,Engineering
103
67,611,062
https://en.wikipedia.org/wiki/NGC%205422
NGC 5422 is a lenticular galaxy located in the constellation Ursa Major. It was discovered on April 14, 1789, by the astronomer William Herschel. At a distance of about 100 million light-years (30 megaparsecs), NGC 5422 is located within the sparse NGC 5485 group, which is dominated by lenticular galaxies. It has only a single, thick, disk component. Like other galaxies in the group, it has no recent star formation, as its stellar disk is relatively old (about 10 billion years old). Its disk appears similar to the face-on galaxy NGC 6340, but appears edge-on. References External links Ursa Major 5422 Unbarred lenticular galaxies
NGC 5422
Astronomy
147
53,620,926
https://en.wikipedia.org/wiki/Saccharomycodes
Saccharomycodes is a genus of yeasts. They are helobiallly reproducing yeasts. The type species is Saccharomycodes ludwigii. The other species, Saccharomycodes sinensis, is known from a single strain that was isolated from soil from a forest on Mount Chienfang on Hainan in China. It is the sister genus of Hanseniaspora. Relationships with humans The species Saccharomycodes ludwigii is considered a "spoilage" yeast in the winemaking process and is commonly referred to as the "winemaker's nightmare". It has a high polluting capacity, beginning at one to two cells per liter. It has a high tolerance for sulfur dioxide, high sugar levels, and pressurized carbon dioxide and is difficult to eradicate from an already contaminated environment. It produces high levels of secondary metabolites, including isobutanol, amyl alcohol, and isoamyl alcohol. References Saccharomycetes Yeasts
Saccharomycodes
Biology
211
11,421,719
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORA17
In molecular biology, SNORA17 (also known as ACA17) is a member of the H/ACA class of small nucleolar RNA that guide the sites of modification of uridines to pseudouridines. Specifically, it is predicted to guide pseudouridylation of the 28S rRNA at positions U4659 and U4598. It shares the same host gene together with ACA43. There are many closely related sequences that do not appear to have conserved H and ACA boxes, and may be pseudogenes. References Further reading External links Small nuclear RNA
Small nucleolar RNA SNORA17
Chemistry
124
1,346,384
https://en.wikipedia.org/wiki/Deadweight%20tonnage
Deadweight tonnage (also known as deadweight; abbreviated to DWT, D.W.T., d.w.t., or dwt) or tons deadweight (DWT) is a measure of how much weight a ship can carry. It is the sum of the weights of cargo, fuel, fresh water, ballast water, provisions, passengers, and crew. DWT is often used to specify a ship's maximum permissible deadweight (i.e. when it is fully loaded so that its Plimsoll line is at water level), although it may also denote the actual DWT of a ship not loaded to capacity. Definition Deadweight tonnage is a measure of a vessel's weight carrying capacity, not including the empty weight of the ship. It is distinct from the displacement (weight of water displaced), which includes the ship's own weight, and from the volumetric measures of gross tonnage or net tonnage (and the legacy measures gross register tonnage and net register tonnage). Deadweight tonnage was historically expressed in long tons but is now usually given internationally in tonnes (metric tons). In modern international shipping conventions such as the International Convention for the Safety of Life at Sea and the International Convention for the Prevention of Pollution From Ships, deadweight is explicitly defined as the difference in tonnes between the displacement of a ship in water of a specific gravity of 1.025 (the average density of sea water) at the draft corresponding to the assigned summer freeboard and the light displacement (lightweight) of the ship. See also Notes References Nautical terminology Ship measurements Shipbuilding
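The convention's definition reduces to simple arithmetic: summer-draft displacement in seawater minus the light displacement. A sketch with hypothetical figures (the 50,000 m³ and 11,250 t values are invented for illustration):

```python
SEAWATER_SG = 1.025  # specific gravity named in the conventions (t per m^3)

def deadweight_tonnage(displaced_volume_m3: float, lightweight_t: float) -> float:
    """DWT = displacement at the summer-freeboard draft in seawater
    minus the light displacement (empty ship), both in tonnes."""
    displacement_t = displaced_volume_m3 * SEAWATER_SG
    return displacement_t - lightweight_t

# Hypothetical ship: 50,000 m^3 displaced at summer draft, 11,250 t empty.
deadweight_tonnage(50_000, 11_250)   # ~40,000 tonnes of cargo, fuel, crew, etc.
```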
Deadweight tonnage
Engineering
335
14,936,605
https://en.wikipedia.org/wiki/The%20Wonderful%20Stories%20of%20Professor%20Kitzel
The Wonderful Stories of Professor Kitzel is a 1972 educational animated series. Produced by Shamus Culhane for Krantz Films, the program combined film clips, animation, and commentary to teach the viewers about historic and cultural events. It was "hosted" by the eccentric scientist Professor Kitzel, whose voice was provided by Paul Soles, with occasional appearances by his grandfather or his parrot. Format The format of each short (5 minute) episode, of which one hundred and six were produced in all, was generally an opening discussion by the professor introducing the subject. He would then take the viewer to his time machine, pull a lever and the first series of drawings and commentary related to the subject would begin. Halfway through the story, the professor would interrupt the commentary to make some humorous remark, before returning to the narrative with an invitation to "Let's see what happened next." Each episode concluded with some humorous closing sequence. Distribution The series was offered in barter syndication by Bristol-Myers for their Pal Vitamins line from 1972 to 1976; after 1976, it was syndicated for cash by Worldvision Enterprises. Forerunner The format of the series and style of presentation was similar to an earlier production, Max, the 2000-Year-Old Mouse, which utilized the same production house and voice cast. Episodes Martin Frobisher The Crusades The Spartans Charlemagne and the Elephant Leonardo da Vinci Samuel F.B. 
Morse Profile of Japan Mayan Archaeology Charles Darwin (2) The Sahara Desert Charles Dickens Thomas Edison Buffalo Bill Cody Joan of Arc India (1) Pilgrims Montezuma and Cortez Peary at the Pole Edmund Hillary and Mount Everest The Mississippi Steamboat Reptiles The Rosetta Stone The South Pole Auguste Piccard Abba of Benin India (2) The Oracle of Delphi Northwest Indians Daniel Boone Jacques Cartier The Great London Fire The Masai Warriors Marco Polo The Wright Brothers New Amsterdam Athens and Sparta Beavers Romulus and Remus The Buffalo Herds Captain William Bligh Peter the Great Fur Trading George Washington Robert Perry Egypt The Vikings The Phoenicians Frederick Douglass Al Rashid Pioneers in Early America The Early Boat Builders Antoni van Leeuwenhoek The African Gold Coast Gorillas The Picard Brothers The Whaling Ships Montgolfier The Treasure Ships The Eskimos Prehistoric Man Mount Olympus Vasco de Gama James Watt The Middle Ages California Gold Rush Christopher Columbus Louis Blériot Peter the Hermit Pueblo Indians Kier and Drake Abraham Lincoln Guglielmo Marconi Benjamin Franklin Emperor Nero of Rome The Covered Wagons Easter Island The Cave Paintings of Altamira Louis Pasteur The Search for Ancient Troy Jacques Cousteau The Statue of Liberty John Cabot John Smith and Pocahontas The Middle Ages Thor Heyerdahl The Declaration of Independence Johannes Gutenberg The History of Rockets Galileo Galilei Early Man Ponce de León The Erie Canal Charles Darwin (1) The Duryea Brothers Samuel De Champlain The Customs of China Michelangelo Thomas Paine Charles Lindbergh Early Crete The Australian Aborigines Eskimo Life Pompeii and Mount Vesuvius Lewis Carroll The Mystery of Stonehenge See also Once Upon a Time... Man Histeria! 
References External links New York Times: "Wonderful World of Professor Kitzel" 1970s American animated television series 1972 Canadian television series debuts 1972 animated television series debuts 1970s Canadian animated television series American children's animated education television series Canadian children's animated education television series Television series by CBS Studios Cultural depictions of Charlemagne Cultural depictions of Leonardo da Vinci Cultural depictions of Charles Darwin Cultural depictions of Charles Dickens Cultural depictions of Thomas Edison Cultural depictions of Buffalo Bill Cultural depictions of Joan of Arc Cultural depictions of Hernán Cortés Cultural depictions of Marco Polo Works about HMS Bounty Cultural depictions of Peter the Great Cultural depictions of George Washington Cultural depictions of Frederick Douglass Cultural depictions of Vasco da Gama Cultural depictions of Christopher Columbus Cultural depictions of Abraham Lincoln Cultural depictions of Guglielmo Marconi Cultural depictions of Benjamin Franklin Cultural depictions of Louis Pasteur Cultural depictions of Pocahontas Cultural depictions of Johannes Gutenberg Cultural depictions of Galileo Galilei Cultural depictions of Michelangelo Cultural depictions of Thomas Paine Cultural depictions of Charles Lindbergh Cultural depictions of Jacques Cousteau
The Wonderful Stories of Professor Kitzel
Astronomy
851
16,875,298
https://en.wikipedia.org/wiki/Tension%20fabric%20building
Tension fabric buildings or tension fabric structures are constructed using a rigid frame—which can consist of timber, steel, rigid plastic, or aluminum—and a sturdy fabric outer membrane. Once the frame is erected, the fabric cover is stretched over the frame. The fabric cover is tensioned to provide the stable structural support of the building. The fabric is tensioned using multiple methods, varying by manufacturer, to create a tight fitting cover membrane. Compared to traditional or conventional buildings, tension fabric buildings may have lower operational costs due to the daylight that comes through the fabric roof when light-coloured fabrics are used. This natural lighting process is known as daylighting and can improve both energy use and life-cycle costs, as well as occupant health. Tension fabric structures may be more quickly installed than traditional structures as they use fewer materials and therefore usually require less ground works to install. Some tension fabric structures, particularly those with aluminium frames, may be easily relocated. Common applications Tension fabric buildings have gained popularity over the last few decades in industries using: indoor practice facilities, commercial structures, industrial buildings, manufacturing, warehousing, sand and salt storage for road maintenance departments, environmental management, aviation, airplane hangars, marine, government, military, remediation and emergency shelters, hay and feed storage, and horse riding arenas. These structures are suitable for quickly expanding existing facilities, by attaching the fabric structures to extend warehouses or workspaces. They can also be used as covered loading/unloading areas. Tension fabric buildings are often used for sports due to the natural light that permeates light-coloured fabrics. 
These buildings provide covered indoor spaces that allow teams to train under natural daylight when weather is inclement, combating a common problem in sports known as rainout. The light weight of the fabric roofs enables the construction of tension fabric structures with large clear spans, without supporting pillars or columns, contributing to the use of these buildings for applications that require large open spaces. One example is Phase 2 of the Sport Ireland National Indoor Arena project, which includes a tension fabric building to be used for Gaelic games, rugby and soccer. These buildings may also be used for holding livestock or as indoor riding arenas, due to the controlled interior climate and the very long spans that tension structures can cover. Construction Building sizes are usually standardized by the nature of being a pre-engineered building. Some manufacturers produce tension fabric buildings spanning up to 300 feet wide and to almost any length. Buildings can be designed to be portable, mounted on wheels or other rolling crane-type designs fitted to the base-plates, or lifted in modules by overhead cranes. Industrial strength fabric, which can have life expectancies of 20–30 years, has been used for many applications. Fabric life expectancy is affected by local environmental factors (e.g. sunlight, temperature, wind, air quality) and occupancy conditions (e.g. humidity, chemical vapours). The structural membranes available are made of PVC or polyethylene. Some fabrics are sufficiently translucent to allow sunlight to pass through, creating a naturally lit environment inside the building. Fabric selection influences project capital cost and maintenance. Building regulation In some jurisdictions tension fabric buildings may qualify as temporary structures which benefit from a shorter capital depreciation period, relative to a permanent structure, for tax purposes. 
Buildings classified as temporary structures may have significant limitations on occupancy, applied load, fire safety considerations and period of installation. Whilst a common application of tension fabric buildings is temporary use, they are not exempt from regulatory requirements, including compliance with building codes, occupancy classifications, aesthetics and building permits. Tension fabric buildings are required to meet the same building code safety requirements and applicable design standards as any other structure. They may also be permanent structures, with structural longevity varying according to manufacturer. See also Tensile structure Membrane structure Fabric structure References External links The History of Tension Fabric Structures Fabric Buildings for Oil & Gas Exploration Fabric Structures Association Structural system Tensile membrane structures Tensile architecture
Tension fabric building
Technology,Engineering
818
12,421,862
https://en.wikipedia.org/wiki/Nechisar%20nightjar
The Nechisar nightjar (Caprimulgus solala) is a species of nightjar in the family Caprimulgidae. It is endemic to Ethiopia. The species was first discovered in 1990 when researchers discovered a decomposing specimen in the Nechisar National Park. After bringing back a single wing from the specimen to the Natural History Museum in London, it was determined to be a previously unknown species. Its specific name, solala, means "only a wing". Its natural habitat is subtropical. It is probably endemic to Nechisar National Park. References External links BirdLife Species Factsheet. Caprimulgus Endemic birds of Ethiopia Birds described in 1995 Species known from a single specimen Taxonomy articles created by Polbot
Nechisar nightjar
Biology
152
72,155,492
https://en.wikipedia.org/wiki/Kinara%20%28company%29
Kinara is an American semiconductor company that develops AI processors for machine learning applications. History Kinara was founded in 2013 by Rehan Hameed, Wajahat Qadeer and Jason Copeland as CoreViz. The company was rebranded as Deep Vision, and raised $35M in a series B funding round led by Tiger Global Management. In 2022 Deep Vision rebranded as Kinara. The company has a development base in Hyderabad, India. Kinara has partnerships with NXP Semiconductors and Arcturus Networks. In December 2023 they began a collaboration with Ampere and Mirasys. Kinara is a member of the AI Platform Alliance which is led by Ampere and is made up of 8 other companies. In April 2023, Kinara announced a partnership with ENERZAi, a company that offers technology to compress AI models. Kinara has a partnership with Awiros, which showed its first results in January 2023 at the Consumer Technology Association’s CES annual trade show. Products In 2020, the company announced its first product, the Ara-1 Edge AI Processor. The product uses a polymorphic dataflow architecture. The Ara-2 was launched in December 2023 and is 5 to 8 times faster than its predecessor. References External links Kinara Semiconductor companies of the United States Technology companies based in the San Francisco Bay Area Computer companies of the United States Computer hardware companies
Kinara (company)
Technology
285
61,173,951
https://en.wikipedia.org/wiki/Sylvia%20M.%20Stoesser%20Lecturer%20in%20Chemistry
The Sylvia M. Stoesser Lecture series was established in 2000 by the Department of Chemistry at the University of Illinois. It is supported by alumna Yulan Tong and by Dow AgroSciences. It is named after the first woman chemist to work at Dow, Sylvia M. Stoesser. The lectureship is given every two years to "an individual who has made outstanding contributions to the chemical community and provides new perspectives in the chemical field outside academia." Lecturers References Women and science University of Illinois Urbana-Champaign 2000 establishments in Illinois Chemistry events Recurring events established in 2000 Biennial events Women in Illinois Chemistry education University and college lecture series
Sylvia M. Stoesser Lecturer in Chemistry
Technology
133
2,221,187
https://en.wikipedia.org/wiki/Solid%20solution
A solid solution, a term popularly used for metals, is a homogeneous mixture of two compounds in the solid state sharing a single crystal structure. Many examples can be found in metallurgy, geology, and solid-state chemistry. The word "solution" is used to describe the intimate mixing of components at the atomic level and distinguishes these homogeneous materials from physical mixtures of components. Two terms are mainly associated with solid solutions, solvent and solute, depending on the relative abundance of the atomic species. In general, if two compounds are isostructural then a solid solution will exist between the end members (also known as parents). For example, sodium chloride and potassium chloride have the same cubic crystal structure, so it is possible to make a pure compound with any ratio of sodium to potassium (Na1-xKx)Cl by dissolving that ratio of NaCl and KCl in water and then evaporating the solution. A member of this family is sold under the brand name Lo Salt which is (Na0.33K0.66)Cl, hence it contains 66% less sodium than normal table salt (NaCl). The pure minerals are called halite and sylvite; a physical mixture of the two is referred to as sylvinite. Because minerals are natural materials, they are prone to large variations in composition. In many cases specimens are members of a solid solution family and geologists find it more helpful to discuss the composition of the family than that of an individual specimen. Olivine is described by the formula (Mg, Fe)2SiO4, which is equivalent to (Mg1−xFex)2SiO4. The ratio of magnesium to iron varies between the two endmembers of the solid solution series: forsterite (Mg-endmember: Mg2SiO4) and fayalite (Fe-endmember: Fe2SiO4), but the exact ratio in a given olivine is not normally specified. With increasingly complex compositions the geological notation becomes significantly easier to manage than the chemical notation. 
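The family notation makes properties easy to interpolate between end members. For instance, a sketch of the molar mass of olivine (Mg1−xFex)2SiO4 across the forsterite-fayalite series, using standard atomic weights (the function name is mine):

```python
# Standard atomic weights (g/mol)
MG, FE, SI, O = 24.305, 55.845, 28.085, 15.999

def olivine_molar_mass(x_fe: float) -> float:
    """Molar mass of (Mg_1-x Fe_x)2 SiO4 as the Fe fraction x varies
    from forsterite (x = 0) to fayalite (x = 1)."""
    m_cation = (1 - x_fe) * MG + x_fe * FE   # average cation mass
    return 2 * m_cation + SI + 4 * O

olivine_molar_mass(0.0)   # forsterite Mg2SiO4, ~140.69 g/mol
olivine_molar_mass(1.0)   # fayalite  Fe2SiO4, ~203.77 g/mol
```

Any intermediate composition falls linearly between the two end-member values, which is exactly why geologists describe the family rather than each specimen.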
Nomenclature The IUPAC definition of a solid solution is a "solid in which components are compatible and form a unique phase". The definition "crystal containing a second constituent which fits into and is distributed in the lattice of the host crystal", given in some references, is not general and thus is not recommended. The expression is to be used to describe a solid phase containing more than one substance when, for convenience, one (or more) of the substances, called the solvent, is treated differently from the other substances, called solutes. One or several of the components can be macromolecules. Some of the other components can then act as plasticizers, i.e., as molecularly dispersed substances that decrease the glass-transition temperature at which the amorphous phase of a polymer is converted between glassy and rubbery states. In pharmaceutical preparations, the concept of solid solution is often applied to the case of mixtures of drug and polymer. The number of drug molecules that do behave as solvent (plasticizer) of polymers is small. Phase diagrams On a phase diagram, a solid solution is represented by an area, often labeled with the structure type, which covers the compositional and temperature/pressure ranges. Where the end members are not isostructural there are likely to be two solid solution ranges with different structures dictated by the parents. In this case the ranges may overlap and the materials in this region can have either structure, or there may be a miscibility gap in the solid state indicating that attempts to generate materials with this composition will result in mixtures. In areas on a phase diagram which are not covered by a solid solution there may be line phases: compounds with a known crystal structure and set stoichiometry. Where the crystalline phase consists of two (non-charged) organic molecules, the line phase is commonly known as a cocrystal. 
In metallurgy, alloys with a set composition are referred to as intermetallic compounds. A solid solution is likely to exist when the two elements (generally metals) involved are close together on the periodic table; an intermetallic compound generally results when the two metals involved are not near each other on the periodic table. Details The solute may incorporate into the solvent crystal lattice substitutionally, by replacing a solvent particle in the lattice, or interstitially, by fitting into the space between solvent particles. Both of these types of solid solution affect the properties of the material by distorting the crystal lattice and disrupting the physical and electrical homogeneity of the solvent material. Where the atomic radius of the solute atom is larger than that of the solvent atom it replaces, the crystal structure (unit cell) often expands to accommodate it; this means that the composition of a material in a solid solution can be calculated from the unit cell volume, a relationship known as Vegard's law. Some mixtures will readily form solid solutions over a range of concentrations, while other mixtures will not form solid solutions at all. The propensity for any two substances to form a solid solution is a complicated matter involving the chemical, crystallographic, and quantum properties of the substances in question. Substitutional solid solutions, in accordance with the Hume-Rothery rules, may form if the solute and solvent have: Similar atomic radii (15% or less difference) Same crystal structure Similar electronegativities Similar valency The phase diagram above displays an alloy of two metals which forms a solid solution at all relative concentrations of the two species. In this case, the pure phase of each element is of the same crystal structure, and the similar properties of the two elements allow for unbiased substitution through the full range of relative concentrations. 
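Vegard's law in its simplest form is a linear interpolation of the lattice parameter between the end members, which can be inverted to estimate composition. A sketch assuming NaCl-like and KCl-like end-member lattice parameters; the numbers are approximate literature-style values used only for illustration:

```python
# Vegard's law: a(x) = (1 - x) * a_A + x * a_B for an (A1-xBx) solid solution.
a_A = 5.640   # assumed lattice parameter of pure end member A, in angstroms
a_B = 6.293   # assumed lattice parameter of pure end member B, in angstroms

def composition_from_lattice(a_obs: float) -> float:
    """Invert Vegard's law to estimate the mole fraction x of B."""
    return (a_obs - a_A) / (a_B - a_A)

# a lattice parameter halfway between the end members implies x = 0.5
assert abs(composition_from_lattice((a_A + a_B) / 2) - 0.5) < 1e-9
```

Real systems can deviate from strict linearity, so this is an estimate; the same inversion applies to unit cell volume when volume, rather than lattice parameter, varies linearly.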
Solid solutions of pseudo-binary systems in complex systems with three or more components may require a more involved representation of the phase diagram, with more than one solvus curve drawn corresponding to different equilibrium chemical conditions. Solid solutions have important commercial and industrial applications, as such mixtures often have superior properties to pure materials. Many metal alloys are solid solutions. Even small amounts of solute can affect the electrical and physical properties of the solvent. The binary phase diagram in the above diagram shows the phases of a mixture of two substances in varying concentrations, and . The region labeled "" is a solid solution, with acting as the solute in a matrix of . On the other end of the concentration scale, the region labeled "" is also a solid solution, with acting as the solute in a matrix of . The large solid region in between the and solid solutions, labeled " + ", is not a solid solution. Instead, an examination of the microstructure of a mixture in this range would reveal two phases: solid solution -in- and solid solution -in- would form separate phases, perhaps lamellae or grains. Application In the phase diagram, at three particular concentrations (the unalloyed extreme left, the unalloyed extreme right, and the dip in the center, which is the eutectic composition), the material will be solid until heated to its melting point and then, after adding the heat of fusion, become liquid at that same temperature. At other proportions, the material will enter a mushy or pasty phase until it warms up to being completely melted. The mixture at the dip point of the diagram is called a eutectic alloy. Lead-tin mixtures formulated at that point (a 37/63 mixture) are useful when soldering electronic components, particularly if done manually, since the solid phase is quickly entered as the solder cools. 
In contrast, when lead-tin mixtures were used to solder seams in automobile bodies, a pasty state enabled a shape to be formed with a wooden paddle or tool, so a 70–30 lead to tin ratio was used. (Lead is being removed from such applications owing to its toxicity and the consequent difficulty of recycling devices and components that include lead.) Exsolution When a solid solution becomes unstable (due to a lower temperature, for example) exsolution occurs and the two phases separate into distinct microscopic to megascopic lamellae. This is mainly caused by differences in cation size: cations which have a large difference in radii are not likely to readily substitute for one another. Alkali feldspar minerals, for example, have the end members albite, NaAlSi3O8, and microcline, KAlSi3O8. At high temperatures Na+ and K+ readily substitute for each other, so the minerals form a solid solution, yet at low temperatures albite can accommodate only a small amount of K+, and the same applies for Na+ in microcline. This leads to exsolution, where the two separate into distinct phases. In the case of the alkali feldspar minerals, thin white albite layers alternate with typically pink microcline, resulting in a perthite texture. See also Solid solution strengthening Notes References External links DoITPoMS Teaching and Learning Package—"Solid Solutions" Materials science Mineralogy
Solid solution
Physics,Materials_science,Engineering
1,870
1,137,612
https://en.wikipedia.org/wiki/Generalized%20eigenvector
In linear algebra, a generalized eigenvector of an matrix is a vector which satisfies certain criteria which are more relaxed than those for an (ordinary) eigenvector. Let be an -dimensional vector space and let be the matrix representation of a linear map from to with respect to some ordered basis. There may not always exist a full set of linearly independent eigenvectors of that form a complete basis for . That is, the matrix may not be diagonalizable. This happens when the algebraic multiplicity of at least one eigenvalue is greater than its geometric multiplicity (the nullity of the matrix , or the dimension of its nullspace). In this case, is called a defective eigenvalue and is called a defective matrix. A generalized eigenvector corresponding to , together with the matrix generate a Jordan chain of linearly independent generalized eigenvectors which form a basis for an invariant subspace of . Using generalized eigenvectors, a set of linearly independent eigenvectors of can be extended, if necessary, to a complete basis for . This basis can be used to determine an "almost diagonal matrix" in Jordan normal form, similar to , which is useful in computing certain matrix functions of . The matrix is also useful in solving the system of linear differential equations where need not be diagonalizable. The dimension of the generalized eigenspace corresponding to a given eigenvalue is the algebraic multiplicity of . Overview and definition There are several equivalent ways to define an ordinary eigenvector. For our purposes, an eigenvector associated with an eigenvalue of an × matrix is a nonzero vector for which , where is the × identity matrix and is the zero vector of length . That is, is in the kernel of the transformation . If has linearly independent eigenvectors, then is similar to a diagonal matrix . That is, there exists an invertible matrix such that is diagonalizable through the similarity transformation . The matrix is called a spectral matrix for . 
The matrix is called a modal matrix for . Diagonalizable matrices are of particular interest since matrix functions of them can be computed easily. On the other hand, if does not have linearly independent eigenvectors associated with it, then is not diagonalizable. Definition: A vector is a generalized eigenvector of rank m of the matrix and corresponding to the eigenvalue if but Clearly, a generalized eigenvector of rank 1 is an ordinary eigenvector. Every × matrix has linearly independent generalized eigenvectors associated with it and can be shown to be similar to an "almost diagonal" matrix in Jordan normal form. That is, there exists an invertible matrix such that . The matrix in this case is called a generalized modal matrix for . If is an eigenvalue of algebraic multiplicity , then will have linearly independent generalized eigenvectors corresponding to . These results, in turn, provide a straightforward method for computing certain matrix functions of . Note: For an matrix over a field to be expressed in Jordan normal form, all eigenvalues of must be in . That is, the characteristic polynomial must factor completely into linear factors; must be an algebraically closed field. For example, if has real-valued elements, then it may be necessary for the eigenvalues and the components of the eigenvectors to have complex values. The set spanned by all generalized eigenvectors for a given forms the generalized eigenspace for . Examples Here are some examples to illustrate the concept of generalized eigenvectors. Some of the details will be described later. Example 1 This example is simple but clearly illustrates the point. This type of matrix is used frequently in textbooks. Suppose Then there is only one eigenvalue, , and its algebraic multiplicity is . Notice that this matrix is in Jordan normal form but is not diagonal. Hence, this matrix is not diagonalizable. 
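The two conditions in the definition above can be checked numerically. A minimal sketch, assuming a 2×2 Jordan-block matrix with eigenvalue 5 (an illustrative choice, not a value from the text):

```python
import numpy as np

lam = 5.0
A = np.array([[lam, 1.0],
              [0.0, lam]])          # a single 2x2 Jordan block
N = A - lam * np.eye(2)             # (A - lambda I)

v = np.array([0.0, 1.0])            # candidate generalized eigenvector of rank 2
assert np.allclose(N @ N @ v, 0)    # (A - lambda I)^2 v = 0 ...
assert not np.allclose(N @ v, 0)    # ... but (A - lambda I) v != 0

x = N @ v                           # applying (A - lambda I) yields an ordinary eigenvector
assert np.allclose(N @ x, 0) and not np.allclose(x, 0)
```

The pair (x, v) is exactly a Jordan chain of length two: v has rank 2, and x = (A − λI)v is the ordinary (rank-1) eigenvector at the bottom of the chain.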
Since there is one superdiagonal entry, there will be one generalized eigenvector of rank greater than 1 (or one could note that the vector space is of dimension 2, so there can be at most one generalized eigenvector of rank greater than 1). Alternatively, one could compute the dimension of the nullspace of to be , and thus there are generalized eigenvectors of rank greater than 1. The ordinary eigenvector is computed as usual (see the eigenvector page for examples). Using this eigenvector, we compute the generalized eigenvector by solving Writing out the values: This simplifies to The element has no restrictions. The generalized eigenvector of rank 2 is then , where a can have any scalar value. The choice of a = 0 is usually the simplest. Note that so that is a generalized eigenvector, because so that is an ordinary eigenvector, and that and are linearly independent and hence constitute a basis for the vector space . Example 2 This example is more complex than Example 1. Unfortunately, it is a little difficult to construct an interesting example of low order. The matrix has eigenvalues and with algebraic multiplicities and , but geometric multiplicities and . The generalized eigenspaces of are calculated below. is the ordinary eigenvector associated with . is a generalized eigenvector associated with . is the ordinary eigenvector associated with . and are generalized eigenvectors associated with . This results in a basis for each of the generalized eigenspaces of . Together the two chains of generalized eigenvectors span the space of all 5-dimensional column vectors. An "almost diagonal" matrix in Jordan normal form, similar to is obtained as follows: where is a generalized modal matrix for , the columns of are a canonical basis for , and . Jordan chains Definition: Let be a generalized eigenvector of rank m corresponding to the matrix and the eigenvalue . 
The chain generated by is a set of vectors given by where is always an ordinary eigenvector with a given eigenvalue . Thus, in general, The vector , given by (), is a generalized eigenvector of rank j corresponding to the eigenvalue . A chain is a linearly independent set of vectors. Canonical basis Definition: A set of n linearly independent generalized eigenvectors is a canonical basis if it is composed entirely of Jordan chains. Thus, once we have determined that a generalized eigenvector of rank m is in a canonical basis, it follows that the m − 1 vectors that are in the Jordan chain generated by are also in the canonical basis. Let be an eigenvalue of of algebraic multiplicity . First, find the ranks (matrix ranks) of the matrices . The integer is determined to be the first integer for which has rank (n being the number of rows or columns of , that is, is n × n). Now define The variable designates the number of linearly independent generalized eigenvectors of rank k corresponding to the eigenvalue that will appear in a canonical basis for . Note that . Computation of generalized eigenvectors In the preceding sections we have seen techniques for obtaining the linearly independent generalized eigenvectors of a canonical basis for the vector space associated with an matrix . These techniques can be combined into a procedure: Solve the characteristic equation of for eigenvalues and their algebraic multiplicities ; For each Determine ; Determine ; Determine for ; Determine each Jordan chain for ; Example 3 The matrix has an eigenvalue of algebraic multiplicity and an eigenvalue of algebraic multiplicity . We also have . For we have . The first integer for which has rank is . We now define Consequently, there will be three linearly independent generalized eigenvectors; one each of ranks 3, 2 and 1. 
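The rank bookkeeping described above can be reproduced numerically. A sketch with an assumed 4×4 matrix (a single 3×3 Jordan block for eigenvalue 4 plus a simple eigenvalue 5; the matrix is illustrative, not the one from the text):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0, 0.0],
              [0.0, 4.0, 1.0, 0.0],
              [0.0, 0.0, 4.0, 0.0],
              [0.0, 0.0, 0.0, 5.0]])
lam, mu, n = 4.0, 3, 4              # eigenvalue, its algebraic multiplicity, matrix size
N = A - lam * np.eye(n)

# ranks of (A - lambda I)^k for k = 0..n
ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(N, k)) for k in range(n + 1)]
m = next(k for k in range(1, n + 1) if ranks[k] == n - mu)  # first power with rank n - mu
rho = [ranks[k - 1] - ranks[k] for k in range(1, m + 1)]    # generalized eigenvectors per rank

assert m == 3 and rho == [1, 1, 1]  # one chain, with one vector each of ranks 3, 2 and 1
assert sum(rho) == mu               # the counts always total the algebraic multiplicity
```

The counts sum to the algebraic multiplicity because the rank differences telescope: rank(I) − rank(N^m) = n − (n − μ) = μ.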
Since corresponds to a single chain of three linearly independent generalized eigenvectors, we know that there is a generalized eigenvector of rank 3 corresponding to such that but Equations () and () represent linear systems that can be solved for . Let Then and Thus, in order to satisfy the conditions () and (), we must have and . No restrictions are placed on and . By choosing , we obtain as a generalized eigenvector of rank 3 corresponding to . Note that it is possible to obtain infinitely many other generalized eigenvectors of rank 3 by choosing different values of , and , with . Our first choice, however, is the simplest. Now using equations (), we obtain and as generalized eigenvectors of rank 2 and 1, respectively, where and The simple eigenvalue can be dealt with using standard techniques and has an ordinary eigenvector A canonical basis for is and are generalized eigenvectors associated with , while is the ordinary eigenvector associated with . This is a fairly simple example. In general, the numbers of linearly independent generalized eigenvectors of rank will not always be equal. That is, there may be several chains of different lengths corresponding to a particular eigenvalue. Generalized modal matrix Let be an n × n matrix. A generalized modal matrix for is an n × n matrix whose columns, considered as vectors, form a canonical basis for and appear in according to the following rules: All Jordan chains consisting of one vector (that is, one vector in length) appear in the first columns of . All vectors of one chain appear together in adjacent columns of . Each chain appears in in order of increasing rank (that is, the generalized eigenvector of rank 1 appears before the generalized eigenvector of rank 2 of the same chain, which appears before the generalized eigenvector of rank 3 of the same chain, etc.). 
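These ordering rules can be illustrated with a small assumed example: for a 2×2 defective matrix, placing a Jordan chain into the columns in order of increasing rank yields a generalized modal matrix M with M⁻¹AM in Jordan normal form (the matrix and chain below are illustrative, not from the text):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [-1.0, 1.0]])        # eigenvalue 2: algebraic multiplicity 2, geometric 1
lam = 2.0
N = A - lam * np.eye(2)

x = np.array([1.0, -1.0])          # ordinary eigenvector (rank 1): N @ x = 0
v = np.array([1.0, 0.0])           # generalized eigenvector of rank 2: N @ v = x

M = np.column_stack([x, v])        # chain placed in order of increasing rank
J = np.linalg.inv(M) @ A @ M
assert np.allclose(J, [[2.0, 1.0],
                       [0.0, 2.0]])  # Jordan normal form
```

Swapping the two columns would break the pattern: the 1 would land on the subdiagonal instead of the superdiagonal, which is why the rank ordering within each chain matters.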
Jordan normal form Let be an n-dimensional vector space; let be a linear map in , the set of all linear maps from into itself; and let be the matrix representation of with respect to some ordered basis. It can be shown that if the characteristic polynomial of factors into linear factors, so that has the form where are the distinct eigenvalues of , then each is the algebraic multiplicity of its corresponding eigenvalue and is similar to a matrix in Jordan normal form, where each appears consecutive times on the diagonal, and the entry directly above each (that is, on the superdiagonal) is either 0 or 1: in each block the entry above the first occurrence of each is always 0 (except in the first block); all other entries on the superdiagonal are 1. All other entries (that is, off the diagonal and superdiagonal) are 0. (But no ordering is imposed among the eigenvalues, or among the blocks for a given eigenvalue.) The matrix is as close as one can come to a diagonalization of . If is diagonalizable, then all entries above the diagonal are zero. Note that some textbooks have the ones on the subdiagonal, that is, immediately below the main diagonal instead of on the superdiagonal. The eigenvalues are still on the main diagonal. Every n × n matrix is similar to a matrix in Jordan normal form, obtained through the similarity transformation , where is a generalized modal matrix for . (See Note above.) Example 4 Find a matrix in Jordan normal form that is similar to Solution: The characteristic equation of is , hence, is an eigenvalue of algebraic multiplicity three. Following the procedures of the previous sections, we find that and Thus, and , which implies that a canonical basis for will contain one linearly independent generalized eigenvector of rank 2 and two linearly independent generalized eigenvectors of rank 1, or equivalently, one chain of two vectors and one chain of one vector . 
Designating , we find that and where is a generalized modal matrix for , the columns of are a canonical basis for , and . Note that since generalized eigenvectors themselves are not unique, and since some of the columns of both and may be interchanged, it follows that both and are not unique. Example 5 In Example 3, we found a canonical basis of linearly independent generalized eigenvectors for a matrix . A generalized modal matrix for is A matrix in Jordan normal form, similar to is so that . Applications Matrix functions Three of the most fundamental operations which can be performed on square matrices are matrix addition, multiplication by a scalar, and matrix multiplication. These are exactly those operations necessary for defining a polynomial function of an n × n matrix . If we recall from basic calculus that many functions can be written as a Maclaurin series, then we can define more general functions of matrices quite easily. If is diagonalizable, that is with then and the evaluation of the Maclaurin series for functions of is greatly simplified. For example, to obtain any power k of , we need only compute , premultiply by , and postmultiply the result by . Using generalized eigenvectors, we can obtain the Jordan normal form for and these results can be generalized to a straightforward method for computing functions of nondiagonalizable matrices. (See Matrix function#Jordan decomposition.) Differential equations Consider the problem of solving the system of linear ordinary differential equations where and If the matrix is a diagonal matrix so that for , then the system () reduces to a system of n equations which take the form In this case, the general solution is given by In the general case, we try to diagonalize and reduce the system () to a system like () as follows. If is diagonalizable, we have , where is a modal matrix for . 
Substituting , equation () takes the form , or where The solution of () is The solution of () is then obtained using the relation (). On the other hand, if is not diagonalizable, we choose to be a generalized modal matrix for , such that is the Jordan normal form of . The system has the form where the are the eigenvalues from the main diagonal of and the are the ones and zeros from the superdiagonal of . The system () is often more easily solved than (). We may solve the last equation in () for , obtaining . We then substitute this solution for into the next to last equation in () and solve for . Continuing this procedure, we work through () from the last equation to the first, solving the entire system for . The solution is then obtained using the relation (). Lemma: Given the following chain of generalized eigenvectors of length , these functions solve the system of equations, Proof: Define Then, as and , . On the other hand we have, and so as required. Notes References Linear algebra Matrix theory
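The system x' = Ax with a nondiagonalizable A can also be solved in closed form through the matrix exponential, since the nilpotent part of a Jordan block terminates the exponential series. A sketch with an assumed 2×2 Jordan block with eigenvalue −1 (illustrative values):

```python
import numpy as np

lam = -1.0
A = np.array([[lam, 1.0],
              [0.0, lam]])         # a single 2x2 Jordan block
N = A - lam * np.eye(2)            # nilpotent part: N @ N == 0

def exp_At(t):
    # e^{At} = e^{lam t} * (I + N t), because the power series in N terminates
    return np.exp(lam * t) * (np.eye(2) + N * t)

x0 = np.array([1.0, 1.0])
t = 0.5
x_t = exp_At(t) @ x0               # x(t) = e^{At} x0 solves x' = A x with x(0) = x0
assert np.allclose(x_t, np.exp(lam * t) * np.array([1.0 + t, 1.0]))
```

The factor (I + Nt) is exactly the polynomial-in-t behavior produced by back-substituting through the chain, matching the solution structure described above.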
Generalized eigenvector
Mathematics
3,050
78,597,479
https://en.wikipedia.org/wiki/Purma
A purma is a type of early successional forest, or secondary forest, in the Amazon Basin. In the Amazon, people convert forest to farms and plantations called chacras. If that farm should be abandoned, the local plants begin to recolonize the area. It is then called a purma. References Habitats Forests Forest ecology Reforestation
Purma
Biology
78
68,882,947
https://en.wikipedia.org/wiki/Daniele%20Cherniak
Daniele Cherniak is an American geochemist known for her work on using particle beams for geochemical analysis on small scales. She was elected a fellow of the American Geophysical Union in 2021. Education and career Cherniak grew up in Cohoes, New York and went to Keveny Memorial Academy. In 1983, Cherniak received her undergraduate degree from Union College and went on to earn her Ph.D. in physics at the University at Albany, SUNY in 1990. As of 2021, she is a research professor at Rensselaer Polytechnic Institute and works at the Ion Beam Lab at the University at Albany. Research Cherniak is known for her research on rock-forming minerals, specifically on atomic diffusion in these minerals. She established the use of ion implantation to place lead into minerals followed by the use of Rutherford backscattering spectrometry to obtain diffusion profiles, which she first applied to measurements in apatite and zircon, and has subsequently applied to other minerals. She has also examined the diffusion of rare-earth elements, tetravalent cations, and oxygen into zircon. Her work on argon showed that the degassing of Earth is slower than expected. Much of her work consists of collaborative projects with E. Bruce Watson. In 2020, she began a project working with scientists at Union College on a study of radioactive decay which aims both to improve the disposal of nuclear waste and to increase the precision of dating material that is billions of years old. Selected publications Awards and honors Fellow, Geochemical Society (2010) Fellow, American Geophysical Union (2021) Walt Westman Award, National Organization of Gay and Lesbian Scientists and Technical Professionals (2021) Personal life Cherniak started running cross country while in high school and continued to run while at Union College. Cherniak runs in ultramarathons and has earned team bronze medals in 1998 and 2000 in the IAU 100 km World Championships. 
Her local running club, Hudson Mohawk Road Runners Club, elected her to their hall of fame for her running accomplishments, the first woman to receive this honor. Cherniak also volunteers for the Spindle City Historic Society in Cohoes, New York and has been recognized for her work in historic preservation in the area, especially in the restoration of parts of the Erie Canal. References External links , August 15, 2021, interview Fellows of the American Geophysical Union Living people University at Albany, SUNY alumni Rensselaer Polytechnic Institute faculty American female ultramarathon runners American women physicists American physicists American geochemists 1961 births American LGBTQ scientists 20th-century American sportswomen
Daniele Cherniak
Chemistry
530
26,249,584
https://en.wikipedia.org/wiki/Rot-proof
Rot-proof or rot resistant is a condition of preservation or protection, achieved by a process or treatment of materials used in industrial manufacturing or production to prevent biodegradation and chemical decomposition. Decomposition is the process by which organic matter breaks down over time. It is commonly caused by fungus, mold or mildew. There are natural conditions where the environment is inhospitable to animals, bacteria and fungus, for example at high altitude and in the freezing subzero temperatures of the Arctic and Antarctic, which create a similar suspension of decay. The proofing of materials may also prevent dry rot and wet rot. See also Dust resistant Fireproofing Rustproofing Thermal resistant Toughness Waterproofing Chemical properties Materials science
Rot-proof
Physics,Chemistry,Materials_science,Engineering
142
24,140,211
https://en.wikipedia.org/wiki/Intersection%20type
In type theory, an intersection type can be allocated to values that can be assigned both the type and the type . This value can be given the intersection type in an intersection type system. Generally, if the ranges of values of two types overlap, then a value belonging to the intersection of the two ranges can be assigned the intersection type of these two types. Such a value can be safely passed as an argument to functions expecting either of the two types. For example, in Java the class implements both the and the interfaces. Therefore, an object of type can be safely passed to functions expecting an argument of type and to functions expecting an argument of type . Intersection types are composite data types. Similar to product types, they are used to assign several types to an object. However, product types are assigned to tuples, so that each tuple element is assigned a particular product type component. In comparison, underlying objects of intersection types are not necessarily composite. A restricted form of intersection types is refinement types. Intersection types are useful for describing overloaded functions. For example, if is the type of a function taking a number as an argument and returning a number, and is the type of a function taking a string as an argument and returning a string, then the intersection of these two types can be used to describe (overloaded) functions that do one or the other, based on what type of input they are given. Contemporary programming languages, including Ceylon, Flow, Java, Scala, TypeScript, and Whiley (see comparison of languages with intersection types), use intersection types to combine interface specifications and to express ad hoc polymorphism. Complementing parametric polymorphism, intersection types may be used to avoid class hierarchy pollution from cross-cutting concerns and reduce boilerplate code, as shown in the TypeScript example below. 
The type theoretic study of intersection types is referred to as the intersection type discipline. Remarkably, program termination can be precisely characterized using intersection types. TypeScript example TypeScript supports intersection types, improving expressiveness of the type system and reducing potential class hierarchy size, demonstrated as follows. The following program code defines the classes , , and that each have a method returning an object of either type , , or . Additionally, the functions and require arguments of type and , respectively.

class Egg { private kind: "Egg" }
class Milk { private kind: "Milk" }

// produces eggs
class Chicken {
  produce() { return new Egg(); }
}

// produces milk
class Cow {
  produce() { return new Milk(); }
}

// produces a random number
class RandomNumberGenerator {
  produce() { return Math.random(); }
}

// requires an egg
function eatEgg(egg: Egg) {
  return "I ate an egg.";
}

// requires milk
function drinkMilk(milk: Milk) {
  return "I drank some milk.";
}

The following program code defines the ad hoc polymorphic function that invokes the member function of the given object . The function has two type annotations, namely and , connected via the intersection type constructor . Specifically, when applied to an argument of type returns an object of type , and when applied to an argument of type returns an object of type . Ideally, should not be applicable to any object having (possibly by chance) a method.

// given a chicken, produces an egg; given a cow, produces milk
let animalToFood: ((_: Chicken) => Egg) & ((_: Cow) => Milk) =
    function (animal: any) {
        return animal.produce();
    };

Finally, the following program code demonstrates type safe use of the above definitions. 
var chicken = new Chicken();
var cow = new Cow();
var randomNumberGenerator = new RandomNumberGenerator();

console.log(chicken.produce()); // Egg { }
console.log(cow.produce()); // Milk { }
console.log(randomNumberGenerator.produce()); //0.2626353555444987

console.log(animalToFood(chicken)); // Egg { }
console.log(animalToFood(cow)); // Milk { }
//console.log(animalToFood(randomNumberGenerator)); // ERROR: Argument of type 'RandomNumberGenerator' is not assignable to parameter of type 'Cow'

console.log(eatEgg(animalToFood(chicken))); // I ate an egg.
//console.log(eatEgg(animalToFood(cow))); // ERROR: Argument of type 'Milk' is not assignable to parameter of type 'Egg'
console.log(drinkMilk(animalToFood(cow))); // I drank some milk.
//console.log(drinkMilk(animalToFood(chicken))); // ERROR: Argument of type 'Egg' is not assignable to parameter of type 'Milk'

The above program code has the following properties: Lines 1–3 create objects , , and of their respective type. Lines 5–7 print for the previously created objects the respective results (provided as comments) when invoking . Line 9 (resp. 10) demonstrates type safe use of the method applied to (resp. ). Line 11, if uncommented, would result in a type error at compile time. Although the implementation of could invoke the method of , the type annotation of disallows it. This is in accordance with the intended meaning of . Line 13 (resp. 15) demonstrates that applying to (resp. ) results in an object of type (resp. ). Line 14 (resp. 16) demonstrates that applying to (resp. ) does not result in an object of type (resp. ). Therefore, if uncommented, line 14 (resp. 16) would result in a type error at compile time. Comparison to inheritance The above minimalist example can be realized using inheritance, for instance by deriving the classes and from a base class . However, in a larger setting, this could be disadvantageous. 
Introducing new classes into a class hierarchy is not necessarily justified for cross-cutting concerns, or may be outright impossible, for example when using an external library. Conceivably, the above example could be extended with the following classes: a class that does not have a method; a class that has a method returning ; a class that has a method, which can be used only once, returning . This may require additional classes (or interfaces) specifying whether a produce method is available, whether the produce method returns food, and whether the produce method can be used repeatedly. Overall, this may pollute the class hierarchy. Comparison to duck typing The above minimalist example already shows that duck typing is less suited to realize the given scenario. While the class contains a method, the object should not be a valid argument for . The above example can be realized using duck typing, for instance by introducing a new field to the classes and signifying that objects of corresponding type are valid arguments for . However, this would not only increase the size of the respective classes (especially with the introduction of more methods similar to ), but is also a non-local approach with respect to . Comparison to function overloading The above example can be realized using function overloading, for instance by implementing two methods and . In TypeScript, such a solution is almost identical to the provided example. Other programming languages, such as Java, require distinct implementations of the overloaded method. This may lead to either code duplication or boilerplate code. Comparison to the visitor pattern The above example can be realized using the visitor pattern. It would require each animal class to implement an method accepting an object implementing the interface (adding non-local boilerplate code). The function would be realized as the method of an implementation of . 
Unfortunately, the connection between the input type (Chicken or Cow) and the result type (Egg or Milk) would be difficult to represent.

Limitations

On the one hand, intersection types can be used to locally annotate different types to a function without introducing new classes (or interfaces) to the class hierarchy. On the other hand, this approach requires all possible argument types and result types to be specified explicitly. If the behavior of a function can be specified precisely by either a unified interface, parametric polymorphism, or duck typing, then the verbose nature of intersection types is unfavorable. Therefore, intersection types should be considered complementary to existing specification methods.

Dependent intersection type

A dependent intersection type, denoted (x : σ) ∩ τ, is a dependent type in which the type τ may depend on the term variable x. In particular, if a term M has the dependent intersection type (x : σ) ∩ τ, then the term M has both the type σ and the type τ[x := M], where τ[x := M] is the type which results from replacing all occurrences of the term variable x in τ by the term M.

Scala example

Scala supports type declarations as object members. This allows a type of an object member to depend on the value of another member, which is called a path-dependent type. For example, the following program text defines a Scala trait Witness, which can be used to implement the singleton pattern.

trait Witness {
  type T
  val value: T {}
}

The above trait Witness declares the member T, which can be assigned a type as its value, and the member value, which can be assigned a value of type T. The following program text defines an object booleanWitness as instance of the above trait Witness. The object booleanWitness defines the type T as Boolean and the value value as true. For example, executing System.out.println(booleanWitness.value) prints true on the console.
object booleanWitness extends Witness {
  type T = Boolean
  val value = true
}

Let {a : τ} be the type (specifically, a record type) of objects having the member a of type τ. In the above example, the object booleanWitness can be assigned the dependent intersection type (x : {T : Type}) ∩ {value : x.T}. The reasoning is as follows. The object booleanWitness has the member T that is assigned the type Boolean as its value. Since Boolean is a type, the object booleanWitness has the type {T : Type}. Additionally, the object booleanWitness has the member value that is assigned the value true of type Boolean. Since the value of booleanWitness.T is Boolean, the object booleanWitness has the type {value : booleanWitness.T}. Overall, the object booleanWitness has the intersection type {T : Type} ∩ {value : booleanWitness.T}. Therefore, presenting self-reference as dependency, the object booleanWitness has the dependent intersection type (x : {T : Type}) ∩ {value : x.T}. Alternatively, the above minimalistic example can be described using dependent record types. In comparison to dependent intersection types, dependent record types constitute a strictly more specialized type theoretic concept.

Intersection of a type family

An intersection of a type family, denoted ⋂(x : σ) τ, is a dependent type in which the type τ may depend on the term variable x. In particular, if a term M has the type ⋂(x : σ) τ, then for each term N of type σ, the term M has the type τ[x := N]. This notion is also called implicit Pi type, observing that the argument N is not kept at term level.

Comparison of languages with intersection types

References

Type theory Type systems Data types Composite data types Polymorphism (computer science) TypeScript
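For comparison with the overloading alternative discussed above, the sketch below shows how the example's animalToFood could be written with TypeScript overload signatures. This is an illustrative sketch, not the article's original code; the Chicken/Cow/Egg/Milk class bodies are minimal stand-ins, and the kind fields are added only so the classes stay structurally distinct.

```typescript
// Minimal stand-ins for the classes used in the example above.
// The `kind` fields keep Egg/Milk (and hence Chicken/Cow) structurally distinct.
class Egg { readonly kind = "egg"; }
class Milk { readonly kind = "milk"; }
class Chicken { produce(): Egg { return new Egg(); } }
class Cow { produce(): Milk { return new Milk(); } }

// Overload signatures: as a callable type this is equivalent to the
// intersection ((_: Chicken) => Egg) & ((_: Cow) => Milk).
function animalToFood(animal: Chicken): Egg;
function animalToFood(animal: Cow): Milk;
function animalToFood(animal: Chicken | Cow): Egg | Milk {
  return animal.produce();
}

const egg: Egg = animalToFood(new Chicken()); // resolves to the Egg overload
const milk: Milk = animalToFood(new Cow());   // resolves to the Milk overload
```

As in the intersection-typed version, passing a RandomNumberGenerator would be rejected at compile time, since it matches neither overload signature.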
Intersection type
https://en.wikipedia.org/wiki/Bounded%20expansion
In graph theory, a family of graphs is said to have bounded expansion if all of its shallow minors are sparse graphs. Many natural families of sparse graphs have bounded expansion. A closely related but stronger property, polynomial expansion, is equivalent to the existence of separator theorems for these families. Families with these properties have efficient algorithms for problems including the subgraph isomorphism problem and model checking for the first order theory of graphs.

Definition and equivalent characterizations

A t-shallow minor of a graph G is defined to be a graph formed from G by contracting a collection of vertex-disjoint subgraphs of radius t, and deleting the remaining vertices of G. A family of graphs has bounded expansion if there exists a function f such that, in every t-shallow minor of a graph in the family, the ratio of edges to vertices is at most f(t). Equivalent definitions of classes of bounded expansion are that all shallow minors have chromatic number bounded by a function of t, or that the given family has a bounded value of a topological parameter. Such a parameter is a graph invariant that is monotone under taking subgraphs, such that the parameter value can change only in a controlled way when a graph is subdivided, and such that a bounded parameter value implies that a graph has bounded degeneracy.

Polynomial expansion and separator theorems

A stronger notion is polynomial expansion, meaning that the function f used to bound the edge density of shallow minors is a polynomial. If a hereditary graph family obeys a separator theorem, stating that any n-vertex graph in the family can be split into pieces with at most n/2 vertices by the removal of O(n^c) vertices for some constant c < 1, then that family necessarily has polynomial expansion. Conversely, graphs with polynomial expansion have sublinear separator theorems.
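The edge-to-vertex ratio that the definition bounds is straightforward to compute for a single graph. The sketch below is an illustrative helper (not from any referenced library) that checks one graph's density against a hypothetical bound f(t); a full bounded-expansion check would have to do this for every t-shallow minor of every graph in the family.

```typescript
type Edge = [number, number];

// Density m/n of a graph with n vertices and edge list `edges`.
function edgeDensity(n: number, edges: Edge[]): number {
  return edges.length / n;
}

// A family has bounded expansion if every t-shallow minor of each member
// has density at most f(t); here we check only one graph against one bound.
function withinExpansionBound(
  n: number,
  edges: Edge[],
  f: (t: number) => number,
  t: number
): boolean {
  return edgeDensity(n, edges) <= f(t);
}

// Example: the complete graph K4 (which is planar) has 6 edges on 4 vertices,
// so its density is 1.5. Planar graphs satisfy m <= 3n - 6, hence density < 3.
const k4: Edge[] = [[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]];
```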
Classes of graphs with bounded expansion

Because of the connection between separators and expansion, every minor-closed graph family, including the family of planar graphs, has polynomial expansion. The same is true for 1-planar graphs, and more generally the graphs that can be embedded onto surfaces of bounded genus with a bounded number of crossings per edge, as well as the biclique-free string graphs, since these all obey similar separator theorems to the planar graphs. In higher dimensional Euclidean spaces, intersection graphs of systems of balls with the property that any point of space is covered by a bounded number of balls also obey separator theorems that imply polynomial expansion. Although graphs of bounded book thickness do not have sublinear separators, they also have bounded expansion. Other graphs of bounded expansion include graphs of bounded degree, random graphs of bounded average degree in the Erdős–Rényi model, and graphs of bounded queue number.

Algorithmic applications

Instances of the subgraph isomorphism problem in which the goal is to find a target graph of bounded size, as a subgraph of a larger graph whose size is not bounded, may be solved in linear time when the larger graph belongs to a family of graphs of bounded expansion. The same is true for finding cliques of a fixed size, finding dominating sets of a fixed size, or more generally testing properties that can be expressed by a formula of bounded size in the first-order logic of graphs. For graphs of polynomial expansion, there exist polynomial-time approximation schemes for the set cover problem, maximum independent set problem, dominating set problem, and several other related graph optimization problems.

References

Graph minor theory
Bounded expansion
https://en.wikipedia.org/wiki/Spoofed%20URL
A spoofed URL involves one website masquerading as another, often leveraging vulnerabilities in web browser technology to facilitate a malicious computer attack. These attacks are particularly effective against computers that lack up-to-date security patches. Alternatively, some spoofed URLs are crafted for satirical purposes. In such an attack scenario, an unsuspecting computer user visits a website and observes a familiar URL, like http://www.wikipedia.org, in the address bar. However, unbeknownst to them, the information they input is being directed to a completely different location, usually monitored by an information thief. When a fraudulent website requests sensitive information, it's referred to as phishing. These fraudulent websites often entice users through emails or hyperlinks. In a different variation, a website might resemble the original but is, in reality, a parody. These instances are generally harmless and conspicuously distinct from the genuine sites, as they typically do not exploit web browser vulnerabilities. Another avenue for these exploits involves redirects within a hosts file, rerouting traffic from legitimate sites to an alternate IP associated with the spoofed URL.

Cyber security

Spoofing is the act of deception or hoaxing. URLs are the address of a resource (as a document or website) on the Internet that consists of a communications protocol followed by the name or address of a computer on the network and that often includes additional locating information (as directory and file names). Simply put, a spoofed URL is a web address that deceives by appearing to belong to an original, legitimate site despite not being one. In order to prevent falling victim to the prevalent scams stemming from spoofed URLs, major software companies have come forward and advised techniques to detect and prevent spoofed URLs.
Prevention

Spoofed URLs, a universal defining identity for phishing scams, pose a serious threat to end-users and commercial institutions. Email continues to be the favorite vehicle to perpetrate such scams, mainly due to its widespread use combined with the ability to easily spoof it. Several approaches, both generic and specialized, have been proposed to address this problem. However, phishing techniques, growing in ingenuity as well as sophistication, render these solutions weak. In order to prevent users from future victimization stemming from a spoofed URL, Internet vigilantes have published numerous tips to help users identify a spoof. The most common are: using authentication based on key exchange between the machines on your network, using an access control list to deny private IP addresses on your downstream interface, implementing filters of both inbound and outbound traffic, configuring routers and switches, if they support such configuration, to reject packets originating from outside the local network that claim to originate from within, and enabling encryption sessions in the router so that trusted hosts that are outside your network can securely communicate with your local hosts. Ultimately, protection comes from the individual user. Keeping up with new spoofing techniques or scams will readily allow one to identify a scam and, most importantly, keep information secure and personal.

Susceptible targets

PayPal, an e-commerce business, allows money transactions to be made through the Internet and is a common target for spoofed URLs. This forgery of a legitimate PayPal website allows hackers to gain personal and financial information and thus steal money through fraud. Along with spoof or fake emails that appear with generic greetings, misspellings, and a false sense of urgency, spoofed URLs are an easy way for hackers to violate one's PayPal privacy. For example, www.paypalsecure.com includes the PayPal name, but is a spoofed URL designed to deceive.
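The paypalsecure.com example suggests a simple heuristic: a hostname that mentions a brand name but is not the brand's real domain (or a subdomain of it) is suspect. The sketch below is an illustrative toy detector, not a production tool; the brand list and the matching rule are assumptions for demonstration only.

```typescript
// Known legitimate domains for a few brands (illustrative, incomplete list).
const legitimateDomains: Record<string, string> = {
  paypal: "paypal.com",
  wikipedia: "wikipedia.org",
};

// Flags hostnames that mention a brand but are neither the brand's domain
// nor one of its subdomains, e.g. "www.paypalsecure.com".
function looksSpoofed(hostname: string): boolean {
  const host = hostname.toLowerCase();
  for (const [brand, domain] of Object.entries(legitimateDomains)) {
    const mentionsBrand = host.includes(brand);
    const isLegitimate = host === domain || host.endsWith("." + domain);
    if (mentionsBrand && !isLegitimate) return true;
  }
  return false;
}
```

Real phishing detection is far more involved (homograph characters, redirects, certificate checks), but the same registrable-domain comparison is the core of many such heuristics.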
Remember to always log into PayPal through a new browser window and never log in through email. In the case that you do receive a suspected spoofed URL, forward the entire email to spoof@PayPal.com to help prevent the URL from tricking other PayPal users.

Common crimes

A major crime associated with spoofed URLs is identity theft. The thief will create a website very similar in appearance to that of a popular site, and then when a user accesses the spoofed URL, they can inadvertently give the thief their credit card and personal details. Spoofed URLs might use "too good to be true" prices to lure more and more shoppers looking for a good deal. Crimes like these happen quite often, and most frequently during the festive holidays and other heavy online shopping periods of the year. Another crime associated with spoofed URLs is setting up fake anti-malware software. An example of this is ransomware: fake anti-malware software that locks up files the computer needs to run and forces the user to pay a ransom to get the files back. If the user refuses to pay after a certain period of time, the ransomware will delete the files from the computer, essentially making the computer unusable. Ads for these programs usually appear on popular websites, such as dating sites or social media sites like Facebook and Twitter. They can also come in the form of attachments to emails. Phishing scams are another major way that users can get tricked into scams (see below).

Phishing

Phishing is a scam by which an e-mail user is duped into revealing personal or confidential information which the scammer can use illicitly. Phishing is the action of fraudsters sending an email to an individual, hoping to seek private information used for identity theft, by falsely asserting to be a reputable legal business. Phishing is performed through emails containing a spoofed URL, which links them to a website.
Since it usually appears in the form of an email, it is crucial to not rely just on the address in the "from" field in order to prevent phishing. Computer users should also look out for spelling mistakes within the website's URLs, as this is another common sign to look out for in a phishing email. The website whose URLs are in the e-mails requests individuals to enter personal information so businesses can update it in their system. This information often includes passwords, credit card numbers, Social Security numbers, and bank account numbers. In turn, the email recipients are giving these fake businesses information that the real businesses already have.

See also

Computer insecurity
Hosts file
IDN homograph attack
Internet fraud prevention
Social engineering (computer security)
Spoofing attack

References

URL Web security exploits Client-side web security exploits
Spoofed URL
https://en.wikipedia.org/wiki/List%20of%20big%20data%20companies
This is an alphabetical list of notable IT companies using the marketing term big data: A Alpine Data Labs, an analytics interface working with Apache Hadoop and big data AvocaData, a two sided marketplace allowing consumers to buy & sell data with ease. Azure Data Lake is a highly scalable data storage and analytics service. The service is hosted in Azure, Microsoft's public cloud B Big Data Partnership, a professional services company based in London Big Data Scoring, a cloud-based service that lets consumer lenders improve loan quality and acceptance rates through the use of big data BigPanda, a technology company headquartered in Mountain View, California Bright Computing, developer of software for deploying and managing high-performance (HPC) clusters, big data clusters, and OpenStack in data centers and in the cloud C Clarivate Analytics, a global company that owns and operates a collection of subscription-based services focused largely on analytics Cloudera, an American-based software company that provides Apache Hadoop-based software, support and services, and training to business customers Compuverde, an IT company with a focus on big data storage CVidya, a provider of big data analytics products for communications and digital service providers D Databricks, a company founded by the creators of Apache Spark Dataiku, a French computer software company DataStax Domo F Fluentd G Greenplum Groundhog Technologies H Hack/reduce Hazelcast Hortonworks HPCC Systems I IBM Imply Corporation M MapR MarkLogic Medio Medopad N NetApp O Oracle Cloud Platform P Palantir Technologies Pentaho, a data integration and business analytics company with an enterprise-class, open source-based platform for big data deployments Pitney Bowes Platfora Q Qumulo R Rocket Fuel Inc. 
S SAP SE, offers the SAP Data Hub to connect data bases and other products through acquisition of Altiscale SalesforceIQ ScyllaDB, developer of a database for big data Sense Networks Shanghai Data Exchange SK Telecom, developer of big data analytics platform Metatron Discovery Sojern Splunk Sumo Logic T Teradata ThetaRay TubeMogul V VoloMetrix Z Zaloni, deployment and vendor agnostic data lake management platform Zoomdata References Lists of software Lists of technology companies
List of big data companies
https://en.wikipedia.org/wiki/Line%20sampling
Line sampling is a method used in reliability engineering to compute small (i.e., rare event) failure probabilities encountered in engineering systems. The method is particularly suitable for high-dimensional reliability problems, in which the performance function exhibits moderate non-linearity with respect to the uncertain parameters. The method is suitable for analyzing black box systems, and unlike the importance sampling method of variance reduction, does not require detailed knowledge of the system. The basic idea behind line sampling is to refine estimates obtained from the first-order reliability method (FORM), which may be incorrect due to the non-linearity of the limit state function. Conceptually, this is achieved by averaging the result of different FORM simulations. In practice, this is made possible by identifying the importance direction α in the input parameter space, which points towards the region which most strongly contributes to the overall failure probability. The importance direction can be closely related to the center of mass of the failure region, or to the failure point with the highest probability density, which often falls at the closest point to the origin of the limit state function, when the random variables of the problem have been transformed into the standard normal space. Once the importance direction has been set to point towards the failure region, samples are randomly generated from the standard normal space and lines are drawn parallel to the importance direction in order to compute the distance to the limit state function, which enables the probability of failure to be estimated for each sample. These failure probabilities can then be averaged to obtain an improved estimate.

Mathematical approach

Firstly, the importance direction must be determined. This can be achieved by finding the design point, or the gradient of the limit state function.
A set of samples is generated using Monte Carlo simulation in the standard normal space. For each sample x^(i), the probability of failure along the line through x^(i) parallel to the importance direction is defined as:

p_f^(i) = ∫ I_F(x^(i) + β α) φ(β) dβ

where the indicator function I_F is equal to one for samples contributing to failure and is zero otherwise, α is the importance direction, φ is the probability density function of a Gaussian distribution, and β is a real number. In practice the roots of a nonlinear function must be found to estimate the partial probabilities of failure along each line. This is either done by interpolation of a few samples along the line, or by using the Newton–Raphson method. The global probability of failure is the mean of the probability of failure on the lines:

p_F = (1/N_L) Σ_{i=1}^{N_L} p_f^(i)

where N_L is the total number of lines used in the analysis and the p_f^(i) are the partial probabilities of failure estimated along all the lines. For problems in which the dependence of the performance function is only moderately non-linear with respect to the parameters modeled as random variables, setting the importance direction as the gradient vector of the performance function in the underlying standard normal space leads to highly efficient line sampling. In general it can be shown that the variance obtained by line sampling is always smaller than that obtained by conventional Monte Carlo simulation, and hence the line sampling algorithm converges more quickly. The rate of convergence is made quicker still by recent advancements which allow the importance direction to be repeatedly updated throughout the simulation, and this is known as adaptive line sampling.

Industrial application

The algorithm is particularly useful for performing reliability analysis on computationally expensive industrial black box models, since the limit state function can be non-linear and the number of samples required is lower than for other reliability analysis techniques such as subset simulation.
The algorithm can also be used to efficiently propagate epistemic uncertainty in the form of probability boxes, or random sets. A numerical implementation of the method is available in the open source software OpenCOSSAN.

See also

Rare event sampling
Curse of dimensionality
Quantitative risk assessment

References

Reliability analysis
Variance reduction
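The estimator described above can be illustrated on a toy problem. The sketch below is illustrative code, not a reference implementation: the linear limit state g(x) = 3 - x1, the 2D standard normal space, and the bisection bracket are assumptions chosen for demonstration. Each sample is projected orthogonal to the importance direction, the root of g along the line is found by bisection, and the partial failure probabilities Φ(-β*) are averaged. For this linear case the exact failure probability is Φ(-3) ≈ 0.00135, and every line returns it exactly.

```typescript
// Standard normal CDF via the Abramowitz–Stegun rational approximation of erf.
function stdNormCdf(x: number): number {
  const z = Math.abs(x) / Math.SQRT2;
  const t = 1 / (1 + 0.3275911 * z);
  const poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
    - 0.284496736) * t + 0.254829592) * t;
  const erf = 1 - poly * Math.exp(-z * z);
  return x >= 0 ? 0.5 * (1 + erf) : 0.5 * (1 - erf);
}

const alpha: [number, number] = [1, 0];               // importance direction (unit vector)
const g = (x: [number, number]): number => 3 - x[0];  // failure when g(x) < 0

// Box–Muller standard normal sampler.
function randn(): number {
  const u = Math.max(Math.random(), 1e-12);
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * Math.random());
}

// Root of h(beta) = g(xPerp + beta * alpha) by bisection; assumes g decreases
// along the line and the root lies in [-8, 8].
function rootAlongLine(xPerp: [number, number]): number {
  let lo = -8, hi = 8;
  for (let i = 0; i < 60; i++) {
    const mid = (lo + hi) / 2;
    const h = g([xPerp[0] + mid * alpha[0], xPerp[1] + mid * alpha[1]]);
    if (h > 0) lo = mid; else hi = mid;
  }
  return (lo + hi) / 2;
}

function lineSampling(numLines: number): number {
  let sum = 0;
  for (let i = 0; i < numLines; i++) {
    const x: [number, number] = [randn(), randn()];
    // Remove the component of the sample along the importance direction.
    const proj = x[0] * alpha[0] + x[1] * alpha[1];
    const xPerp: [number, number] = [x[0] - proj * alpha[0], x[1] - proj * alpha[1]];
    // Partial failure probability along this line: Phi(-beta*).
    sum += 1 - stdNormCdf(rootAlongLine(xPerp));
  }
  return sum / numLines;
}
```

Because every line here yields the same partial probability, the estimator has zero variance on this toy problem; for nonlinear limit states the partial probabilities differ from line to line and the averaging does real work.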
Line sampling
https://en.wikipedia.org/wiki/Comparative%20genomics
Comparative genomics is a branch of biological research that examines genome sequences across a spectrum of species, spanning from humans and mice to a diverse array of organisms from bacteria to chimpanzees. This large-scale holistic approach compares two or more genomes to discover the similarities and differences between the genomes and to study the biology of the individual genomes. Comparison of whole genome sequences provides a highly detailed view of how organisms are related to each other at the gene level. By comparing whole genome sequences, researchers gain insights into genetic relationships between organisms and study evolutionary changes. The major principle of comparative genomics is that common features of two organisms will often be encoded within the DNA that is evolutionarily conserved between them. Therefore, comparative genomics provides a powerful tool for studying evolutionary changes among organisms, helping to identify genes that are conserved or common among species, as well as genes that give unique characteristics to each organism. Moreover, these studies can be performed at different levels of the genomes to obtain multiple perspectives about the organisms. The comparative genomic analysis begins with a simple comparison of the general features of genomes such as genome size, number of genes, and chromosome number. Table 1 presents data on several fully sequenced model organisms, and highlights some striking findings. For instance, while the tiny flowering plant Arabidopsis thaliana has a smaller genome than that of the fruit fly Drosophila melanogaster (157 million base pairs vs. 165 million base pairs, respectively), it possesses nearly twice as many genes (25,000 vs. 13,000). In fact, A. thaliana has approximately the same number of genes as humans (25,000). Thus, a very early lesson learned in the genomic era is that genome size does not correlate with evolutionary status, nor is the number of genes proportionate to genome size.
In comparative genomics, synteny is the preserved order of genes on chromosomes of related species indicating their descent from a common ancestor. Synteny provides a framework in which the conservation of homologous genes and gene order is identified between genomes of different species. Synteny blocks are more formally defined as regions of chromosomes between genomes that share a common order of homologous genes derived from a common ancestor. Alternative names such as conserved synteny or collinearity have been used interchangeably. Comparisons of genome synteny between and within species have provided an opportunity to study evolutionary processes that lead to the diversity of chromosome number and structure in many lineages across the tree of life; early discoveries using such approaches include chromosomal conserved regions in nematodes and yeast, evolutionary history and phenotypic traits of extremely conserved Hox gene clusters across animals and the MADS-box gene family in plants, and karyotype evolution in mammals and plants. Furthermore, comparing two genomes not only reveals conserved domains or synteny but also aids in detecting copy number variations, single nucleotide polymorphisms (SNPs), indels, and other genomic structural variations. Having begun virtually as soon as the whole genomes of two organisms became available (that is, the genomes of the bacteria Haemophilus influenzae and Mycoplasma genitalium) in 1995, comparative genomics is now a standard component of the analysis of every new genome sequence. With the explosion in the number of genome projects due to the advancements in DNA sequencing technologies, particularly the next-generation sequencing methods in the late 2000s, this field has become more sophisticated, making it possible to deal with many genomes in a single study.
Comparative genomics has revealed high levels of similarity between closely related organisms, such as humans and chimpanzees, and, more surprisingly, similarity between seemingly distantly related organisms, such as humans and the yeast Saccharomyces cerevisiae. It has also shown the extreme diversity of the gene composition in different evolutionary lineages.

History

See also: History of genomics

Comparative genomics has a root in the comparison of virus genomes in the early 1980s. For example, small RNA viruses infecting animals (picornaviruses) and those infecting plants (cowpea mosaic virus) were compared and turned out to share significant sequence similarity and, in part, the order of their genes. In 1986, the first comparative genomic study at a larger scale was published, comparing the genomes of varicella-zoster virus and Epstein-Barr virus that contained more than 100 genes each. The first complete genome sequence of a cellular organism, that of Haemophilus influenzae Rd, was published in 1995. The second genome sequencing paper was of the small parasitic bacterium Mycoplasma genitalium published in the same year. Starting from this paper, reports on new genomes inevitably became comparative-genomic studies. Microbial genomes. The first high-resolution whole genome comparison system of microbial genomes of 10-15kbp was developed in 1998 by Art Delcher, Simon Kasif and Steven Salzberg and applied to the comparison of entire highly related microbial organisms with their collaborators at the Institute for Genomic Research (TIGR). The system is called MUMmer and was described in a publication in Nucleic Acids Research in 1999. The system helps researchers to identify large rearrangements, single base mutations, reversals, tandem repeat expansions and other polymorphisms. In bacteria, MUMmer enables the identification of polymorphisms that are responsible for virulence, pathogenicity, and antibiotic resistance.
The system was also applied to the Minimal Organism Project at TIGR and subsequently to many other comparative genomics projects. Eukaryote genomes. Saccharomyces cerevisiae, the baker's yeast, was the first eukaryote to have its complete genome sequence published in 1996. After the publication of the roundworm Caenorhabditis elegans genome in 1998, and together with the fruit fly Drosophila melanogaster genome in 2000, Gerald M. Rubin and his team published a paper titled "Comparative Genomics of the Eukaryotes", in which they compared the genomes of the eukaryotes D. melanogaster, C. elegans, and S. cerevisiae, as well as the prokaryote H. influenzae. At the same time, Bonnie Berger, Eric Lander, and their team published a paper on whole-genome comparison of human and mouse. With the publication of the large genomes of vertebrates in the 2000s, including human, the Japanese pufferfish Takifugu rubripes, and mouse, precomputed results of large genome comparisons have been released for downloading or for visualization in a genome browser. Instead of undertaking their own analyses, most biologists can access these large cross-species comparisons and avoid the impracticality caused by the size of the genomes. Next-generation sequencing methods, which were first introduced in 2007, have produced an enormous amount of genomic data and have allowed researchers to generate multiple (prokaryotic) draft genome sequences at once. These methods can also quickly uncover single-nucleotide polymorphisms, insertions and deletions by mapping unassembled reads against a well annotated reference genome, and thus provide a list of possible gene differences that may be the basis for any functional variation among strains.

Evolutionary principles

Evolution is a defining character of biology; evolutionary theory is the theoretical foundation of comparative genomics, and at the same time the results of comparative genomics have enriched and developed the theory of evolution to an unprecedented degree.
When two or more genome sequences are compared, one can deduce the evolutionary relationships of the sequences in a phylogenetic tree. Based on a variety of biological genome data and the study of vertical and horizontal evolution processes, one can understand vital parts of the gene structure and its regulatory function. Similarity of related genomes is the basis of comparative genomics. If two creatures have a recent common ancestor, the differences between the two species' genomes evolved from the ancestor's genome. The closer the relationship between two organisms, the higher the similarities between their genomes. If there is a close relationship between them, then their genomes will display collinear behaviour (synteny), namely some or all of the genetic sequences are conserved. Thus, the genome sequences can be used to identify gene function, by analyzing their homology (sequence similarity) to genes of known function. Orthologous sequences are related sequences in different species: a gene exists in the original species, the species divides into two species, and so the corresponding genes in the new species are orthologous to the sequence in the original species. Paralogous sequences arise by gene duplication: if a particular gene in the genome is copied, then the copy is paralogous to the original gene. A pair of orthologous sequences is called an orthologous pair (orthologs); a pair of paralogous sequences is called a paralogous pair (paralogs). Orthologous pairs usually have the same or similar function, which is not necessarily the case for paralogous pairs. In paralogous pairs, the sequences tend to evolve into having different functions. Comparative genomics exploits both similarities and differences in the proteins, RNA, and regulatory regions of different organisms to infer how selection has acted upon these elements.
Those elements that are responsible for similarities between different species should be conserved through time (stabilizing selection), while those elements responsible for differences among species should be divergent (positive selection). Finally, those elements that are unimportant to the evolutionary success of the organism will be unconserved (selection is neutral). One of the important goals of the field is the identification of the mechanisms of eukaryotic genome evolution. It is however often complicated by the multiplicity of events that have taken place throughout the history of individual lineages, leaving only distorted and superimposed traces in the genome of each living organism. For this reason comparative genomics studies of small model organisms (for example the model Caenorhabditis elegans and closely related Caenorhabditis briggsae) are of great importance to advance our understanding of general mechanisms of evolution. Role of CNVs in evolution Comparative genomics plays a crucial role in identifying copy number variations (CNVs) and understanding their significance in evolution. CNVs, which involve deletions or duplications of large segments of DNA, are recognized as a major source of genetic diversity, influencing gene structure, dosage, and regulation. While single nucleotide polymorphisms (SNPs) are more common, CNVs impact larger genomic regions and can have profound effects on phenotype and diversity. Recent studies suggest that CNVs constitute around 4.8–9.5% of the human genome and have a substantial functional and evolutionary impact. In mammals, CNVs contribute significantly to population diversity, influencing gene expression and various phenotypic traits. Comparative genomics analyses of human and chimpanzee genomes have revealed that CNVs may play a greater role in evolutionary change compared to single nucleotide changes. 
Research indicates that CNVs affect more nucleotides than individual base-pair changes, with about 2.7% of the genome affected by CNVs compared to 1.2% by SNPs. Moreover, while many CNVs are shared between humans and chimpanzees, a significant portion is unique to each species; much of this evidence came from investigators examining the raw sequence data of the human and chimpanzee genomes. Additionally, CNVs have been associated with genetic diseases in humans, highlighting their importance in human health. Despite this, many questions about CNVs remain unanswered, including their origin and contributions to evolutionary adaptation and disease. Ongoing research aims to address these questions using techniques like comparative genomic hybridization, which allows for a detailed examination of CNVs and their significance. Significance of comparative genomics Comparative genomics holds profound significance across various fields, including medical research, basic biology, and biodiversity conservation. For instance, in medical research, the ability to predict which genomic variants lead to changes in organism-level phenotypes, such as increased disease risk in humans, remains limited, in part because of the immense size of the genome, comprising about three billion nucleotides. To tackle this challenge, comparative genomics offers a solution by pinpointing nucleotide positions that have remained unchanged over millions of years of evolution. These conserved regions indicate potential sites where genetic alterations could have detrimental effects on an organism's fitness, thus guiding the search for disease-causing variants. Moreover, comparative genomics holds promise in unraveling the mechanisms of gene evolution, environmental adaptations, gender-specific differences, and population variations across vertebrate lineages.
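The idea of pinpointing nucleotide positions that have remained unchanged can be sketched on a toy multiple alignment; the species names and sequences below are invented for illustration only.

```python
# Toy scan for conserved positions across an alignment of three species.
# Columns identical in all sequences are candidate functional sites.
alignment = {
    "human": "ATGCCGTA",
    "mouse": "ATGACGTA",
    "fish":  "ATGTCGTA",
}
length = len(alignment["human"])
conserved = [i for i in range(length)
             if len({seq[i] for seq in alignment.values()}) == 1]
print(conserved)  # [0, 1, 2, 4, 5, 6, 7] — only position 3 varies
```

Real pipelines score conservation statistically over many genomes rather than requiring perfect identity, but the principle is the same.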
Furthermore, comparative studies enable the identification of genomic signatures of selection—regions in the genome that have undergone preferential increase and fixation in populations due to their functional significance in specific processes. For instance, in animal genetics, indigenous cattle exhibit superior disease resistance and environmental adaptability but lower productivity compared to exotic breeds. Through comparative genomic analyses, significant genomic signatures responsible for these unique traits can be identified. Using insights from these signatures, breeders can make informed decisions to enhance breeding strategies and promote breed development. Methods Computational approaches are necessary for genome comparisons, given the large amount of data encoded in genomes. Many tools are now publicly available, ranging from whole genome comparisons to gene expression analysis. This includes approaches from systems and control, information theory, string analysis and data mining. Computational approaches will remain critical for research and teaching, especially when information science and genome biology are taught in conjunction. Comparative genomics starts with basic comparisons of genome size and gene density. For instance, genome size is important for coding capacity and possibly for regulatory reasons. High gene density facilitates genome annotation and the analysis of environmental selection. By contrast, low gene density hampers the mapping of genetic disease, as in the human genome. Sequence alignment Alignments are used to capture information about similar sequences such as ancestry, common evolutionary descent, or common structure and function. Alignments can be done for both nucleotide and protein sequences. Alignments consist of local or global pairwise alignments, and multiple sequence alignments.
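As a minimal sketch of global pairwise alignment by dynamic programming, the following computes an alignment score; the match/mismatch/gap weights are illustrative defaults, not values prescribed by any particular tool.

```python
def needleman_wunsch_score(a: str, b: str, match=1, mismatch=-1, gap=-1) -> int:
    """Global alignment score via the classic dynamic-programming recurrence."""
    rows, cols = len(a) + 1, len(b) + 1
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap
    for j in range(1, cols):
        dp[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[-1][-1]

print(needleman_wunsch_score("GATTACA", "GCATGCU"))
```

Production aligners add traceback to recover the alignment itself, affine gap penalties, and heuristics for speed, but the quadratic table above is the core of the global approach.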
One way to find global alignments is to use a dynamic programming algorithm known as the Needleman–Wunsch algorithm, whereas the Smith–Waterman algorithm is used to find local alignments. With the exponential growth of sequence databases and the emergence of longer sequences, there is heightened interest in faster, approximate, or heuristic alignment procedures. Among these, the FASTA and BLAST algorithms are prominent for local pairwise alignment. Recent years have witnessed the development of programs tailored to aligning lengthy sequences, such as MUMmer (1999), BLASTZ (2003), and AVID (2003). While BLASTZ adopts a local approach, MUMmer and AVID are geared towards global alignment. To harness the benefits of both local and global alignment approaches, one effective strategy involves integrating them. Initially, a rapid variant of BLAST known as BLAT is employed to identify homologous "anchor" regions. These anchors are subsequently scrutinized to identify sets exhibiting conserved order and orientation. Such sets of anchors are then aligned using a global strategy. Additionally, ongoing efforts focus on optimizing existing algorithms to handle the vast amount of genome sequence data by enhancing their speed. Furthermore, MAVID is another noteworthy alignment program, designed specifically for aligning multiple genomes. Pairwise comparison: Pairwise comparison of genomic sequence data is widely utilized in comparative gene prediction. Many studies in comparative functional genomics lean on pairwise comparisons, wherein traits of each gene are compared with traits of other genes across species. This method yields many more comparisons than unique observations, making each comparison dependent on others. Multiple comparisons: The comparison of multiple genomes is a natural extension of pairwise inter-specific comparisons. Such comparisons typically aim to identify conserved regions across two phylogenetic scales: 1.
Deep comparisons, often referred to as phylogenetic footprinting, reveal conservation across higher taxonomic units like vertebrates. 2. Shallow comparisons, recently termed phylogenetic shadowing, probe conservation across a group of closely related species. Whole-genome alignment Whole-genome alignment (WGA) involves predicting evolutionary relationships at the nucleotide level between two or more genomes. It integrates elements of colinear sequence alignment and gene orthology prediction, presenting a greater challenge due to the vast size and intricate nature of whole genomes. Despite its complexity, numerous methods have emerged to tackle this problem because WGAs play a crucial role in various genome-wide analyses, such as phylogenetic inference, genome annotation, and function prediction. SyRI (Synteny and Rearrangement Identifier) is one such method; it utilizes whole-genome alignment and is designed to identify both structural and sequence differences between two whole-genome assemblies. Taking WGAs as input, SyRI initially scans for disparities in genome structures and subsequently identifies local sequence variations within both rearranged and non-rearranged (syntenic) regions. Phylogenetic reconstruction Another computational method for comparative genomics is phylogenetic reconstruction. It is used to describe evolutionary relationships in terms of common ancestors. The relationships are usually represented in a tree called a phylogenetic tree. Similarly, coalescent theory is a retrospective model used to trace the alleles of a gene in a population back to a single ancestral copy shared by all members of the population, known as the most recent common ancestor. Analysis based on coalescent theory tries to predict the amount of time between the introduction of a mutation and the emergence of a particular allele or gene distribution in a population. This time period is equal to how long ago the most recent common ancestor existed.
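Distance-based phylogenetic reconstruction can be sketched with a toy UPGMA-style clustering; the species and pairwise distances below are invented for illustration.

```python
from itertools import combinations

def upgma(labels, d):
    """Nested-tuple tree: repeatedly merge the two clusters with the
    smallest average leaf-to-leaf distance (UPGMA-style clustering)."""
    clusters = [(lab, (lab,)) for lab in labels]  # (subtree, leaves)

    def avg(c1, c2):
        return sum(d[a][b] for a in c1[1] for b in c2[1]) / (len(c1[1]) * len(c2[1]))

    while len(clusters) > 1:
        i, j = min(combinations(range(len(clusters)), 2),
                   key=lambda ij: avg(clusters[ij[0]], clusters[ij[1]]))
        c1, c2 = clusters[i], clusters[j]
        merged = ((c1[0], c2[0]), c1[1] + c2[1])
        clusters = [merged] + [c for k, c in enumerate(clusters) if k not in (i, j)]
    return clusters[0][0]

# Invented distances: human and chimp are closest, mouse is the outgroup.
d = {"human": {"chimp": 1, "mouse": 6},
     "chimp": {"human": 1, "mouse": 6},
     "mouse": {"human": 6, "chimp": 6}}
print(upgma(["human", "chimp", "mouse"], d))  # (('human', 'chimp'), 'mouse')
```

Real phylogenetic software uses corrected distances or likelihood models and also estimates branch lengths; this sketch only recovers the tree topology.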
The inheritance relationships are visualized in a form similar to a phylogenetic tree. Coalescence (or the gene genealogy) can be visualized using dendrograms. Genome maps An additional method in comparative genomics is genetic mapping. In genetic mapping, visualizing synteny is one way to see the preserved order of genes on chromosomes. It is usually used for chromosomes of related species, both of which result from a common ancestor. This and other methods can shed light on evolutionary history. A recent study used comparative genomics to reconstruct 16 ancestral karyotypes across the mammalian phylogeny. The computational reconstruction showed how chromosomes rearranged themselves during mammal evolution. It gave insight into the conservation of select regions often associated with the control of developmental processes. In addition, it helped to provide an understanding of chromosome evolution and genetic diseases associated with DNA rearrangements. Tools Computational tools for analyzing sequences and complete genomes are developing quickly due to the availability of large amounts of genomic data. At the same time, comparative analysis tools have progressed and improved. A key challenge in these analyses is visualizing the comparative results: sequence conservation is difficult to display, and examining the alignment of long genomic regions manually is highly inefficient. Internet-based genome browsers provide many useful tools for investigating genomic sequences by integrating all sequence-based biological information on genomic regions. They make extracting large amounts of relevant biological data easy and less time-consuming. UCSC Browser: This site contains the reference sequence and working draft assemblies for a large collection of genomes.
Ensembl: The Ensembl project produces genome databases for vertebrates and other eukaryotic species, and makes this information freely available online. MapView: The Map Viewer provides a wide variety of genome mapping and sequencing data. VISTA: VISTA is a comprehensive suite of programs and databases for comparative analysis of genomic sequences. It was built to visualize the results of comparative analysis based on DNA alignments. The presentation of comparative data generated by VISTA can easily suit both small- and large-scale data. BlueJay Genome Browser: A stand-alone visualization tool for the multi-scale viewing of annotated genomes and other genomic elements. SyRI: SyRI stands for Synteny and Rearrangement Identifier and is a versatile tool for comparative genomics, offering functionalities for synteny analysis and visualization, and aiding in the prediction of genomic differences between related genomes using whole-genome assemblies (WGA). Synmap2: Specifically designed for synteny mapping, Synmap2 efficiently compares genetic maps or assemblies, providing insights into genome evolution and rearrangements among related organisms. GSAlign: GSAlign facilitates accurate alignment of genomic sequences, particularly useful for large-scale comparative genomics studies, enabling researchers to identify similarities and differences across genomes. IGV (Integrative Genomics Viewer): A widely used tool for visualizing and analyzing genomic data, IGV supports comparative genomics by enabling users to explore alignments, variants, and annotations across multiple genomes. Manta: Manta is a rapid structural variant caller, crucial for comparative genomics as it detects genomic rearrangements such as insertions, deletions, inversions, and duplications, aiding in understanding genetic variation among populations or species.
CNVnator: CNVnator specializes in detecting copy number variations (CNVs), which are crucial in understanding genome evolution and population genetics, providing insights into genomic structural changes across different organisms. PIPMaker: PIPMaker facilitates the alignment and comparison of two genomic sequences, enabling the identification of conserved regions, duplications, and evolutionary breakpoints, aiding in comparative genomics analyses. GLASS (Genome-wide Location and Sequence Searcher): GLASS is a tool for identifying conserved regulatory elements across genomes, crucial for comparative genomics studies focusing on understanding gene regulation and evolution. PatternHunter: PatternHunter is a versatile tool for sequence analysis, offering functionalities for identifying conserved patterns, motifs, and repeats across genomic sequences, aiding in comparative genomics studies of gene families and regulatory elements. Mummer: Mummer is a suite of tools for whole-genome alignment and comparison, widely used in comparative genomics for identifying similarities, differences, and evolutionary events among genomes at various scales. An advantage of using online tools is that these websites are developed and updated constantly; new settings and content appear regularly, improving efficiency. Selected applications Agriculture Agriculture is a field that reaps the benefits of comparative genomics. Identifying the loci of advantageous genes is a key step in breeding crops that are optimized for greater yield, cost-efficiency, quality, and disease resistance. For example, one genome-wide association study conducted on 517 rice landraces revealed 80 loci associated with several categories of agronomic performance, such as grain weight, amylose content, and drought tolerance. Many of the loci were previously uncharacterized. Not only is this methodology powerful, it is also quick.
Previous methods of identifying loci associated with agronomic performance required several generations of carefully monitored breeding of parent strains, a time-consuming effort that is unnecessary for comparative genomic studies. Medicine Vaccine development The medical field also benefits from the study of comparative genomics. In an approach known as reverse vaccinology, researchers can discover candidate antigens for vaccine development by analyzing the genome of a pathogen or a family of pathogens. Applying a comparative genomics approach by analyzing the genomes of several related pathogens can lead to the development of vaccines that are multi-protective. A team of researchers employed such an approach to create a universal vaccine for Group B Streptococcus, a group of bacteria responsible for severe neonatal infection. Comparative genomics can also be used to generate specificity for vaccines against pathogens that are closely related to commensal microorganisms. For example, researchers used comparative genomic analysis of commensal and pathogenic strains of E. coli to identify pathogen-specific genes as a basis for finding antigens that result in immune response against pathogenic strains but not commensal ones. In May 2019, using the Global Genome Set, a team in the UK and Australia sequenced thousands of globally collected isolates of Group A Streptococcus, providing potential targets for developing a vaccine against the pathogen, also known as S. pyogenes. Personalized Medicine Personalized medicine, enabled by comparative genomics, represents a revolutionary approach in healthcare, tailoring medical treatment and disease prevention to the individual patient's genetic makeup. By analyzing genetic variations across populations and comparing them with an individual's genome, clinicians can identify specific genetic markers associated with disease susceptibility, drug metabolism, and treatment response.
By identifying genetic variants associated with drug metabolism pathways, drug targets, and adverse reactions, personalized medicine can optimize medication selection, dosage, and treatment regimens for individual patients. This approach minimizes the risk of adverse drug reactions, enhances treatment efficacy, and improves patient outcomes. Cancer Cancer genomics represents a cutting-edge field within oncology that leverages comparative genomics to revolutionize cancer diagnosis, treatment, and prevention strategies. Comparative genomics plays a crucial role in cancer research by identifying driver mutations and providing comprehensive analyses of mutations, copy number alterations, structural variants, gene expression, and DNA methylation profiles in large-scale studies across different cancer types. By analyzing the genomes of cancer cells and comparing them with healthy cells, researchers can uncover key genetic alterations driving tumorigenesis, tumor progression, and metastasis. This deep understanding of the genomic landscape of cancer has profound implications for precision oncology. Moreover, comparative genomics is instrumental in elucidating mechanisms of drug resistance—a major challenge in cancer treatment. Mouse models in immunology T cells (also known as T lymphocytes or thymocytes) are immune cells that grow from stem cells in the bone marrow. They help defend the body from infection and may aid in the fight against cancer. Because of their morphological, physiological, and genetic resemblance to humans, mice and rats have long been the preferred species for biomedical research animal models. Comparative medicine research is built on the ability to use information from one species to understand the same processes in another. By comparing human and mouse T cells and their effects on the immune system using comparative genomics, we can gain new insights into molecular pathways.
In order to comprehend its TCRs and their genes, Glusman conducted research on the sequencing of the human and mouse T cell receptor loci. TCR genes are well known and serve as a significant resource for supporting functional genomics and understanding how genes and intergenic regions of the genome contribute to biological processes. T-cell immune receptors are important in seeing the world of pathogens in the cellular immune system. One of the reasons for sequencing the human and mouse TCR loci was to match the orthologous gene family sequences and discover conserved areas using comparative genomics. These, it was thought, would reflect two sorts of biological information: (1) exons and (2) regulatory sequences. In fact, the majority of V, D, J, and C exons could be identified in this method. The variable regions are encoded by multiple unique DNA elements that are rearranged and connected during T cell receptor (TCR) differentiation: variable (V), diversity (D), and joining (J) elements for the β and δ polypeptides; and V and J elements for the α and γ polypeptides [Figure 1]. However, several short noncoding conserved blocks of the genome had also been shown. Both human and mouse motifs are largely clustered within 200 bp [Figure 2]; the known 3′ enhancers in the TCRα/δ locus were identified, and a conserved region of 100 bp in the mouse J intron was subsequently shown to have a regulatory function. Comparisons of the genomic sequences within each locus (the physical site of a specific gene on a chromosome) and across species allow for research on other mechanisms and other regulatory signals. Some suggest new hypotheses about the evolution of TCRs, to be tested (and improved) by comparison to the TCR gene complement of other vertebrate species. A comparative genomic investigation of humans and mice will obviously allow for the discovery and annotation of many other genes, as well as the identification of regulatory sequences in other species.
Research Comparative genomics also opens up new avenues in other areas of research. As DNA sequencing technology has become more accessible, the number of sequenced genomes has grown. With the increasing reservoir of available genomic data, the potency of comparative genomic inference has grown as well. A notable case of this increased potency is found in recent primate research. Comparative genomic methods have allowed researchers to gather information about genetic variation, differential gene expression, and evolutionary dynamics in primates that were indiscernible using previous data and methods. Great Ape Genome Project The Great Ape Genome Project used comparative genomic methods to investigate genetic variation with reference to the six great ape species, finding healthy levels of variation in their gene pool despite shrinking population size. Another study showed that patterns of DNA methylation, which are a known regulation mechanism for gene expression, differ in the prefrontal cortex of humans versus chimps, and implicated this difference in the evolutionary divergence of the two species. See also Data mining Molecular evolution Comparative anatomy Homology Sequence mining Alignment-free sequence analysis References Further reading External links Genomes OnLine Database (GOLD) Genome News Network JCVI Comprehensive Microbial Resource Pathema: A Clade Specific Bioinformatics Resource Center CBS Genome Atlas Database The UCSC Genome Browser The U.S. National Human Genome Research Institute Ensembl The Ensembl Genome Browser Genolevures, comparative genomics of the Hemiascomycetous yeasts Phylogenetically Inferred Groups (PhIGs), a recently developed method that incorporates phylogenetic signals in building gene clusters for use in comparative genomics. Metazome, a resource for the phylogenomic exploration and analysis of Metazoan gene families. IMG The Integrated Microbial Genomes system, for comparative genome analysis by the DOE-JGI.
Dcode.org Comparative Genomics Center. SUPERFAMILY Protein annotations for all completely sequenced organisms Comparative Genomics Blastology and Open Source: Needs and Deeds Alignment-free comparative Genomics tool Evolutionary biology Genomics Comparisons
Comparative genomics
Biology
6,277
7,757,614
https://en.wikipedia.org/wiki/Pyrolytic%20chromium%20carbide%20coating
Pyrolytic chromium carbide coating (PCC) is a technology for protection and reworking of rapidly wearing parts of manufacturing equipment working in extreme environmental conditions, using vacuum deposition technology. Coating mechanical parts can help with problems of corrosion, adhering, high-temperature and mechanical wear, thus reducing unplanned repairs and loss of production. The features of PCC coatings are: obtaining protective layers with high adhesion strength on parts and products made of various engineering materials, including metal and non-metal materials withstanding deposition conditions (up to 500 °C, 0.1 Pa); applying coatings on internal and external surfaces of long shafts and complex-geometry parts with dead holes, grooves and channels, providing uniform thickness and composition; coating finished surfaces with no through porosity at small thickness (3–5 micrometres). The PCC coating process runs in vacuum at 450 to 500 °C. Fields of application Protection of instruments and machinery parts whose surfaces are exposed to the simultaneous impact of corrosion, erosion, sealing, pickup, high temperature, abrasive and mechanical wear. External links Pyrolytic chromium carbide Coating obtained from Bahros chrome-organic fluid Wear-resistive properties of PCC Coating Coatings
Pyrolytic chromium carbide coating
Chemistry
252
27,277,885
https://en.wikipedia.org/wiki/Teloblast
A teloblast is a large cell in the embryos of clitellate annelids which asymmetrically divides to form many smaller cells known as blast cells. These blast cells further proliferate and differentiate to form the segmental tissues of the annelid. Teloblasts are well studied in leeches, though they are also present in the other major class of clitellates: the oligochaetes. Developmental role and morphology All teloblasts are specified from the D quadrant macromere after the second round of divisions post-fertilization. They are larger than the other cells that result from cleavage of macromere D'. There are five pairs of teloblasts, one on each side of the embryo. Four of the teloblasts (N, O, P, and Q) give rise to ectodermal tissue and one pair (M) gives rise to mesodermal tissue. The column of blast cells arising out of each teloblast is known as a bandlet. All five bandlets coalesce into one germinal band on each side of the embryo, extending out from the teloblast towards the head (in the rostral direction). There is a ventral plate of blast cells where the lateral columns meet. The teloblasts are located at the rear of the embryo. Teloblasts have two separate cytoplasmic domains: the teloplasm and the vitelloplasm. The teloplasm contains the nucleus, ribosomes, mitochondria, and other subcellular organelles. The vitelloplasm contains mostly yolk platelets. Only the teloplasm gets passed onto the daughter stem cells after cell division. The teloplasm also includes maternal RNA transcripts. O/P specification The O and P teloblasts are specified from two separate but identical precursors, which form an equivalence group. These two precursor cells are termed O/P cells for their ability to become either O or P teloblasts. Signals from the surrounding cells act to specify which fate the teloblasts and their progeny take on. Interactions with the q bandlet, however transient, can induce the p fate in the adjacent o/p bandlet. In some species (e.g.
Helobdella triserialis), the provisional epithelium covering the cells plays a role in inducing the O fate. In the absence of cell-cell interactions, the O/P precursors will become O teloblasts. O and P bandlets exhibit very different mitotic patterns (see figure) which are used to identify them in experimental manipulations. Experimental results in Tubifex hattai suggest that there is not an equivalence group for O and P in these worms, but instead the P lineage is committed at its birth from the O/P proteloblast stage, while the O lineage is induced by the P teloblast. In the absence of the P teloblast, the pluripotent O teloblast becomes P specified. In Helobdella, the O/P proteloblasts generate four blast cells with segmental progeny by asymmetric division before a symmetric division into O/P teloblasts. Helobdella austensis appears to have an additional M-lineage-sourced signal that promotes P lineage differentiation in addition to bone morphogenetic protein molecular signaling that is sourced from Q lineage cells and also helps specify P fate. Segmental fates The N and Q teloblasts contribute two blast cells per segment, one making up the anterior half of the segment, the second making up the posterior half of the segment. The O, P, and M lineages contribute one blast cell per segment, but the contributions from each blast cell span a segmental boundary. These segmental boundaries were discovered by injecting teloblasts with cell lineage tracers after a few blast cells had already been generated. During development, the N and Q bandlets, which eventually have 64 blast cells each, slide past the O, P, and M bandlets, which only have 32 cells. Thus, the segmental boundaries within each bandlet are already specified before all the bandlets come into complete register. References Developmental biology
Teloblast
Biology
894
19,553,114
https://en.wikipedia.org/wiki/Behavioral%20and%20Brain%20Functions
Behavioral and Brain Functions is a peer-reviewed open access scientific journal published by BioMed Central. It publishes articles on "all aspects of neurobiology where the unifying theme is behavior or behavioral dysfunction". It was established in 2005 with Terje Sagvolden as founding editor-in-chief, who was succeeded by Vivienne A. Russell (University of Cape Town). The current editor-in-chief is Wim Crusio (University of Bordeaux and Centre national de la recherche scientifique). Abstracting and indexing The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2022 impact factor of 5.1. References External links Neuroscience journals Behavioral neuroscience BioMed Central academic journals Academic journals established in 2005 English-language journals Creative Commons Attribution-licensed journals Continuous journals
Behavioral and Brain Functions
Biology
177
1,432,664
https://en.wikipedia.org/wiki/Choice%20function
Let X be a set of sets none of which are empty. Then a choice function (selector, selection) on X is a mathematical function f that is defined on X such that f assigns to each set S in X one of its elements, i.e. f(S) ∈ S. An example Let X = { {1,4,7}, {9}, {2,7} }. Then the function f defined by f({1, 4, 7}) = 7, f({9}) = 9 and f({2, 7}) = 2 is a choice function on X. History and importance Ernst Zermelo (1904) introduced choice functions as well as the axiom of choice (AC) and proved the well-ordering theorem, which states that every set can be well-ordered. AC states that every set of nonempty sets has a choice function. A weaker form of AC, the axiom of countable choice (ACω), states that every countable set of nonempty sets has a choice function. However, in the absence of either AC or ACω, some sets can still be shown to have a choice function. If X is a finite set of nonempty sets, then one can construct a choice function for X by picking one element from each member of X. This requires only finitely many choices, so neither AC nor ACω is needed. If every member of X is a nonempty set, and the union ∪X is well-ordered, then one may choose the least element of each member of X. In this case, it was possible to simultaneously well-order every member of X by making just one choice of a well-order of the union, so neither AC nor ACω was needed. (This example shows that the well-ordering theorem implies AC. The converse is also true, but less trivial.) Choice function of a multivalued map Given two sets A and B, let F be a multivalued map from A to B (equivalently, F is a function from A to the power set of B). A function f : A → B is said to be a selection of F if for every a in A, f(a) ∈ F(a). The existence of more regular choice functions, namely continuous or measurable selections, is important in the theory of differential inclusions, optimal control, and mathematical economics. See Selection theorem.
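For a finite collection of nonempty sets of integers, a concrete choice function needs no appeal to AC; mirroring the well-ordering argument above, one can simply take the least element of each member. A minimal sketch:

```python
# A choice function on a finite collection X of nonempty sets can be built
# explicitly, e.g. by taking the minimum of each member.
X = [{1, 4, 7}, {9}, {2, 7}]

def f(s):
    return min(s)  # the least element serves as the chosen one

print([f(s) for s in X])  # [1, 9, 2]
```

Any rule that returns an element of its argument would do equally well; the article's example function chooses 7, 9, and 2 instead.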
Bourbaki tau function Nicolas Bourbaki used epsilon calculus for their foundations: it has a symbol τ that can be interpreted as choosing an object (if one exists) that satisfies a given proposition. So if P is a predicate, then τx(P) is one particular object that satisfies P (if one exists; otherwise, it is an arbitrary object). Hence we may obtain quantifiers from the choice function, for example (∃x)P(x) was equivalent to P(τx(P)). However, Bourbaki's choice operator is stronger than usual: it is a global choice operator. That is, it implies the axiom of global choice. Hilbert realized this when introducing epsilon calculus. See also Axiom of countable choice Axiom of dependent choice Hausdorff paradox Hemicontinuity Notes References Basic concepts in set theory Axiom of choice
Choice function
Mathematics
627
26,261,505
https://en.wikipedia.org/wiki/The%20Zen%20of%20CSS%20Design
The Zen of CSS Design: Visual Enlightenment for the Web is a book by web designers Dave Shea and Molly E. Holzschlag, published in 2005. Content The book is based on 36 designs featured at the CSS Zen Garden resource, an online showcase of CSS-based design. The process that each designer took in coming up with the final design is examined in each case study. Reception It was reviewed favorably by freelance Web designer Karen Morrill-McClure of Digital Web Magazine: See also CSS Zen CSS Zen Garden Web design References External links CSS Zen Garden Cascading Style Sheets
The Zen of CSS Design
Technology
126
72,910,789
https://en.wikipedia.org/wiki/Absolutely%20maximally%20entangled%20state
The absolutely maximally entangled (AME) state is a concept in quantum information science, which has many applications in quantum error-correcting codes, the discrete AdS/CFT correspondence, the AdS/CMT correspondence, and more. It is the multipartite generalization of the bipartite maximally entangled state. Definition The bipartite maximally entangled state is the one for which the reduced density operators are maximally mixed, i.e., ρ = I/d, where d is the local dimension. Typical examples are Bell states. A multipartite state of a system S of n parties, each of local dimension d, is called absolutely maximally entangled if for any bipartition A|B of S with |A| ≤ |B|, the reduced density operator is maximally mixed, ρ_A = I/d^|A|. Property The AME state does not always exist: for some local dimensions and numbers of parties, there is no AME state. There is a list of AME states in low dimensions created by Huber and Wyderka. The existence of an AME state can be transformed into the existence of a solution for a specific quantum marginal problem. AME states can also be used to build a kind of quantum error-correcting code called a holographic error-correcting code. References Quantum information science Quantum states
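A quick numerical check of the bipartite case: tracing out one qubit of a Bell state leaves the maximally mixed state I/2. This is a plain-Python sketch of the partial trace for the two-qubit state (|00⟩ + |11⟩)/√2.

```python
import math

amp = 1 / math.sqrt(2)
# Bell state (|00> + |11>)/sqrt(2), stored as {(a, b): amplitude}
psi = {(0, 0): amp, (1, 1): amp}

# Reduced density matrix of qubit A: rho_A[a][a2] = sum_b psi(a,b) * psi(a2,b)
# (amplitudes are real here, so no conjugation is needed)
rho_A = [[sum(psi.get((a, b), 0.0) * psi.get((a2, b), 0.0) for b in (0, 1))
          for a2 in (0, 1)] for a in (0, 1)]
print(rho_A)  # approximately [[0.5, 0.0], [0.0, 0.5]], i.e. I/2
```

The AME condition generalizes this: the analogous partial trace must give I/d^|A| for every bipartition, not just for one.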
Absolutely maximally entangled state
Physics
243
52,011,926
https://en.wikipedia.org/wiki/Teissier%20affair
The Teissier affair was a controversy that occurred in France in 2001. French astrologer Élizabeth Teissier was awarded a doctorate in sociology by Paris Descartes University for a doctoral thesis in which she argued that astrology was being oppressed by science. Her work was contested by the scientific community within the context of the science wars, and compared to the Sokal hoax. Criticisms included an alleged failure to work within the field of sociology and a lack of the scientific rigour necessary for a doctoral thesis in any scientific field. The university and jury who awarded the degree were harshly criticised, though both they and Teissier had supporters and defenders. Teissier's doctorate On April 7, 2001, Elizabeth Teissier defended her thesis entitled Situation épistémologique de l'astrologie à travers l'ambivalence fascination-rejet dans les sociétés postmodernes ("The Epistemological Situation of Astrology in Relation to the Ambivalent Fascination/Rejection of Postmodern Societies") – accounts of the defense have been published. Her studies at the University of Paris Descartes were under the supervision of Michel Maffesoli, an Emeritus Professor of Sociology. The central idea of the thesis was described by The New York Times as being that astrology is being oppressed by science, which Teissier called "official science" and "monolithic thought". Teissier argued, however, that her work was devoid of bias and had "focused only on the misunderstanding that astrology as a multimillennial knowledge vehicle" provokes. Her prepared statement was enthusiastically received by her supporters, but there was also a declaration from the editor-in-chief of Science et Vie Junior that what was occurring was a "farce". At the end of the defense, the jury deliberated only briefly before Serge Moscovici admitted Teissier to her doctoral degree with the "very honourable" distinction.
Initial reaction Controversy erupted in the scientific community following the decision, and several sociologists also publicly challenged its legitimacy. The university was criticised for granting the degree, as was the jury, along with Teissier's statements in support of astrology as a science, though the university rejected accusations of "irresponsibility". A petition signed by over 370 sociologists was sent to Professor Pierre Daumard, the President of the university; he responded that Teissier had complied with all university requirements and that it was not his place to question the "guarantees of the scientific validity of the thesis" from the independent jury. Daumard also defended astrology as a legitimate subject for sociological study, given its impact on society, a point on which Teissier's critics agreed. These critics were themselves criticised for their "incendiary" complaints, which targeted her personally for her astrological beliefs rather than addressing the content of her thesis. Critics were also described as engaging in a witch-hunt whose true target was the academic reputation of Michel Maffesoli. Maffesoli addressed the controversy in an email on 23 April 2001, acknowledging that the thesis included some "slippages" but minimising the importance of these errors. Maffesoli added that there was a "manhunt" against him and more broadly against scientific and intellectual rigor in "diverse approaches to sociology", but still engaged with critics such as Christian Baudelot at an ASES-organised symposium on the Teissier affair. Maffesoli did state during the defense that he had tried to keep Teissier focused on the sociological impact of astrology rather than discussing its scientific legitimacy, while still maintaining that the thesis demonstrated sufficient sociological significance to justify awarding the doctorate.
AFIS analysis Once the thesis was available following the defense, the Association française pour l'information scientifique (AFIS) organised a group to critique the thesis; the analysis was published by a multi-disciplinary group (two experts in pseudoscience, including the editor of the AFIS journal Science et pseudo-sciences, three astrophysicists, two sociologists, and a philosopher) on 6 August 2001. They looked at the scientific, philosophical, and sociological aspects of Teissier's thesis, describing it as "not a thesis in sociology but actually pro-astrological advocacy". They concluded that Teissier's work did not meet the requirements of scientific rigor of doctoral research, regardless of the discipline in question. They described the jury as having accepted the thesis "in defiance of basic academic requirements of objectivity and intellectual honesty", in part because the AFIS group's multidisciplinary analysis shows that "no relevant standard (analytical rigor, objectivity, indication of sources, style of writing, etc.) had truly been fulfilled". They commented that all Teissier had achieved was to "demonstrate once again that [astrology] does not deserve the status of an intellectual discipline that can be taught in a university course". According to the journal Skepter, the "thesis pretends to provide irrefutable proof that astrology is a science, but the author has no idea what constitutes a scientific proof, she is muddleminded about basic astronomical and astrological facts, and the pièce de résistance of her argument consists of statements about Michel Gauquelin which can only be called lies." Examples of excerpts from the thesis which bear this out, according to Broch, include unsupported medical claims, fundamental errors in astronomy, and a lack of proper evidence.
Teissier was "completely appalled" that a "tiny group" would question the award of her doctorate and did not exclude the possibility of suing the AFIS, which had published the critique of her thesis, over its "intolerable attack" on academic freedom. Wider context Discussion of the circumstances of Teissier's doctorate occurred, and continues to occur, within the context of the science wars, a dispute which pitted humanities academics taking postmodernist perspectives against scientists taking positivist and rationalist approaches. In particular, comparisons have been made to the Sokal hoax, in that each case exemplifies the alleged support for pseudoscience and hostility to science within postmodernist circles. The emphatic language and personalised tone of the debate around Teissier's work were fuelled by the broader ongoing conflict, as was the targeting of Maffesoli and the description of the university as "heavily influenced by so-called post-modern ideologists" (emphasis in original). It also explains criticisms of the jury for its failure to seek input from scientists (a bone of contention in the science wars), and the unusually personalised tone of comments such as that Teissier, "very astutely, has taken advantage of the intellectual weakness and/or incompetence of ... the nincompoops who accepted to ratify such nonsense" (bold emphases from original omitted). References 2001 controversies Controversies in France Paris Descartes University Education controversies in France Theses Astrology 2001 in France History of sociology Scientific controversies
Teissier affair
Astronomy
1,448
17,511,256
https://en.wikipedia.org/wiki/SYZ%20conjecture
The SYZ conjecture is an attempt to understand the mirror symmetry conjecture, an issue in theoretical physics and mathematics. The original conjecture was proposed in a paper by Strominger, Yau, and Zaslow, entitled "Mirror Symmetry is T-duality". Along with the homological mirror symmetry conjecture, it is one of the most explored tools applied to understand mirror symmetry in mathematical terms. While homological mirror symmetry is based on homological algebra, the SYZ conjecture is a geometrical realization of mirror symmetry. Formulation In string theory, mirror symmetry relates type IIA and type IIB theories. It predicts that the effective field theory of type IIA and type IIB should be the same if the two theories are compactified on mirror pair manifolds. The SYZ conjecture uses this fact to realize mirror symmetry. It starts by considering BPS states of type IIA theories compactified on X, especially 0-branes that have moduli space X. It is known that all of the BPS states of type IIB theories compactified on Y are 3-branes. Therefore, mirror symmetry will map 0-branes of type IIA theories into a subset of 3-branes of type IIB theories. By considering supersymmetric conditions, it has been shown that these 3-branes should be special Lagrangian submanifolds. On the other hand, T-duality does the same transformation in this case, thus "mirror symmetry is T-duality". Mathematical statement The initial proposal of the SYZ conjecture by Strominger, Yau, and Zaslow was not given as a precise mathematical statement. One part of the mathematical resolution of the SYZ conjecture is to, in some sense, correctly formulate the statement of the conjecture itself. There is no agreed-upon precise statement of the conjecture within the mathematical literature, but there is a general statement that is expected to be close to the correct formulation of the conjecture, which is presented here.
This statement emphasizes the topological picture of mirror symmetry, but does not precisely characterise the relationship between the complex and symplectic structures of the mirror pairs, or make reference to the associated Riemannian metrics involved. SYZ Conjecture: Every 6-dimensional Calabi–Yau manifold X has a mirror 6-dimensional Calabi–Yau manifold Y such that there are continuous surjections f : X → B and g : Y → B to a compact topological manifold B of dimension 3, such that There exists a dense open subset B_0 ⊂ B on which the maps f and g are fibrations by nonsingular special Lagrangian 3-tori. Furthermore for every point b ∈ B_0, the torus fibres f^{-1}(b) and g^{-1}(b) should be dual to each other in some sense, analogous to duality of Abelian varieties. For each b outside B_0, the fibres f^{-1}(b) and g^{-1}(b) should be singular 3-dimensional special Lagrangian submanifolds of X and Y respectively. The situation in which B_0 = B, so that there is no singular locus, is called the semi-flat limit of the SYZ conjecture, and is often used as a model situation to describe torus fibrations. The SYZ conjecture can be shown to hold in some simple cases of semi-flat limits, for example given by Abelian varieties and K3 surfaces which are fibred by elliptic curves. It is expected that the correct formulation of the SYZ conjecture will differ somewhat from the statement above. For example the possible behaviour of the singular set is not well understood, and this set could be quite large in comparison to B. Mirror symmetry is also often phrased in terms of degenerating families of Calabi–Yau manifolds instead of for a single Calabi–Yau, and one might expect the SYZ conjecture to be reformulated more precisely in this language. Relation to homological mirror symmetry conjecture The SYZ mirror symmetry conjecture is one possible refinement of the original mirror symmetry conjecture relating Hodge numbers of mirror Calabi–Yau manifolds. The other is Kontsevich's homological mirror symmetry conjecture (HMS conjecture).
These two conjectures encode the predictions of mirror symmetry in different ways: homological mirror symmetry in an algebraic way, and the SYZ conjecture in a geometric way. There should be a relationship between these three interpretations of mirror symmetry, but it is not yet known whether they should be equivalent or whether one proposal is stronger than the other. Progress has been made toward showing under certain assumptions that homological mirror symmetry implies Hodge theoretic mirror symmetry. Nevertheless, in simple settings there are clear ways of relating the SYZ and HMS conjectures. The key feature of HMS is that the conjecture relates objects (either submanifolds or sheaves) on mirror geometric spaces, so the required input to try to understand or prove the HMS conjecture includes a mirror pair of geometric spaces. The SYZ conjecture predicts how these mirror pairs should arise, and so whenever an SYZ mirror pair is found, it is a good candidate to try and prove the HMS conjecture on this pair. To relate the SYZ and HMS conjectures, it is convenient to work in the semi-flat limit. The important geometric feature of a pair of Lagrangian torus fibrations which encodes mirror symmetry is the dual torus fibres of the fibration. Given a Lagrangian torus T, the dual torus is given by the Jacobian variety of T, denoted Jac(T). This is again a torus of the same dimension, and the duality is encoded in the fact that Jac(Jac(T)) ≅ T, so T and Jac(T) are indeed dual under this construction. The Jacobian variety has the important interpretation as the moduli space of line bundles on T. This duality and the interpretation of the dual torus as a moduli space of sheaves on the original torus is what allows one to interchange the data of submanifolds and subsheaves. There are two simple examples of this phenomenon: If p is a point which lies inside some torus fibre T of the special Lagrangian torus fibration, then since Jac(T) parametrises line bundles on T, and dually T ≅ Jac(Jac(T)) parametrises line bundles on Jac(T), the point p corresponds to a line bundle supported on the dual fibre Jac(T).
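The torus duality invoked here can be spelled out concretely. This is the standard lattice construction; the notation V, Λ below is ours, not the article's:

```latex
% Present an n-torus as a quotient of a real vector space V by a full-rank lattice \Lambda:
T = V / \Lambda, \qquad \dim_{\mathbb{R}} V = n.
% The dual torus parametrises flat U(1)-bundles on T,
% i.e. characters of \pi_1(T) \cong \Lambda:
\hat{T} = \operatorname{Hom}(\Lambda, U(1)) \cong V^* / \Lambda^*,
\qquad \Lambda^* = \{\, \xi \in V^* : \xi(\Lambda) \subseteq \mathbb{Z} \,\}.
% Applying the construction twice recovers T, which is the double-duality
% used in the text (for Lagrangian torus fibres, played by Jac(Jac(T)) = T):
\hat{\hat{T}} \cong V / \Lambda = T.
```

In particular a point of the dual torus is exactly a flat line bundle on the original torus, which is the dictionary between points and line bundles exploited in the examples above.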
If one chooses a Lagrangian section σ : B → X such that σ(B) is a Lagrangian submanifold of X, then precisely since σ chooses one point in each torus fibre of the SYZ fibration, this Lagrangian section is mirror dual to a choice of line bundle structure supported on each torus fibre of the mirror manifold Y, and consequently a line bundle on the total space of Y, the simplest example of a coherent sheaf appearing in the derived category of the mirror manifold. If the mirror torus fibrations are not in the semi-flat limit, then special care must be taken when crossing over the singular set of the base B. Another example of a Lagrangian submanifold is the torus fibre itself, and one sees that if the entire torus fibre is taken as the Lagrangian L, with the added data of a flat unitary line bundle over it, as is often necessary in homological mirror symmetry, then in the dual torus this corresponds to a single point which represents that line bundle over the torus. If one takes the skyscraper sheaf supported on that point in the dual torus, then we see torus fibres of the SYZ fibration get sent to skyscraper sheaves supported on points in the mirror torus fibre. These two examples produce the most extreme kinds of coherent sheaf, locally free sheaves (of rank 1) and torsion sheaves supported on points. By more careful construction one can build up more complicated examples of coherent sheaves, analogous to building a coherent sheaf using the torsion filtration. As a simple example, a Lagrangian multisection (a union of k Lagrangian sections) should be mirror dual to a rank k vector bundle on the mirror manifold, but one must take care to account for instanton corrections by counting holomorphic discs which are bounded by the multisection, in the sense of Gromov–Witten theory. In this way enumerative geometry becomes important for understanding how mirror symmetry interchanges dual objects.
By combining the geometry of mirror fibrations in the SYZ conjecture with a detailed understanding of enumerative invariants and the structure of the singular set of the base B, it is possible to use the geometry of the fibration to build the isomorphism of categories predicted by homological mirror symmetry, from the Lagrangian submanifolds of X to the coherent sheaves of Y. By repeating this same discussion in reverse using the duality of the torus fibrations, one similarly can understand coherent sheaves on X in terms of Lagrangian submanifolds of Y, and hope to get a complete understanding of how the HMS conjecture relates to the SYZ conjecture. References String theory Symmetry Duality theories Conjectures
SYZ conjecture
Physics,Astronomy,Mathematics
1,780
436,077
https://en.wikipedia.org/wiki/Diamond%20Light%20Source
Diamond Light Source (or Diamond) is the UK's national synchrotron light source science facility located at the Harwell Science and Innovation Campus in Oxfordshire. Its purpose is to produce intense beams of light whose special characteristics are useful in many areas of scientific research. In particular it can be used to investigate the structure and properties of a wide range of materials, from proteins (to provide information for designing new and better drugs) and engineering components (such as a fan blade from an aero-engine) to the conservation of archeological artifacts (for example Henry VIII's flagship the Mary Rose). There are more than 50 light sources across the world. With an energy of 3 GeV, Diamond is a medium energy synchrotron currently operating with 32 beamlines. Design, construction and finance The Diamond synchrotron is the largest UK-funded scientific facility to be built in the UK since the Nimrod proton synchrotron, which was sited at the Rutherford Appleton Laboratory in 1964. Nearby facilities include the ISIS Neutron and Muon Source, the Central Laser Facility, and the laboratories at Harwell and Culham (including the Joint European Torus (JET) project). It replaced the Synchrotron Radiation Source, a second-generation synchrotron at the Daresbury Laboratory in Cheshire. Diamond produced its first user beam towards the end of January 2007, and was formally opened by Queen Elizabeth II on 19 October 2007. Construction A design study during the 1990s was completed in 2001 by scientists at Daresbury, and construction began following the creation of the operating company, Diamond Light Source Ltd. The construction costs of £260m covered the synchrotron building, the accelerators inside it, the first seven experimental stations (beamlines) and the adjacent office block, Diamond House. Governance The facility is operated by Diamond Light Source Ltd, a joint venture company established in March 2002.
The company receives 86% of its funding from the UK Government via the Science and Technology Facilities Council (STFC) and 14% from the Wellcome Trust. Synchrotron Diamond generates synchrotron light at wavelengths ranging from X-rays to the far infrared. This is also known as synchrotron radiation and is the electromagnetic radiation emitted by charged particles travelling near the speed of light when their path deviates from a straight line. It is used in a huge variety of experiments to study the structure and behaviour of many different types of matter. The particles Diamond uses are electrons travelling at an energy of 3 GeV round a 561.6 m circumference storage ring. This is not a true circle, but a 48-sided polygon with a bending magnet at each vertex and straight sections in between. The bending magnets are dipole magnets whose magnetic field deflects the electrons so as to steer them around the ring. As Diamond is a third generation light source it also uses special arrays of magnets called insertion devices. These cause the electrons to undulate, and it is this sudden change of direction that causes them to emit an exceptionally bright beam of electromagnetic radiation, brighter than that emitted at a single bend through a bending magnet. This is the synchrotron light used for experiments. Some beamlines, however, use light solely from a bending magnet without the need of an insertion device. The electrons reach this high energy via a series of pre-accelerator stages before being injected into the 3 GeV storage ring: an electron gun (90 keV), a 100 MeV linear accelerator, and a 100 MeV – 3 GeV booster synchrotron (158 m in circumference). The Diamond synchrotron is housed in a silver toroidal building of 738 m in circumference, covering an area in excess of 43,300 square metres, or the area of over six football pitches.
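The quoted machine parameters (3 GeV electrons in a 561.6 m ring) fix how relativistic the beam is and how often it circulates. A back-of-the-envelope check using only numbers from the article and standard physical constants; the derived figures are our estimates, not official Diamond values:

```python
# Rough beam parameters for the Diamond storage ring, from the figures in the text.
E_beam_eV = 3e9            # beam energy: 3 GeV
m_e_eV = 510_998.95        # electron rest energy in eV (~0.511 MeV)
c = 299_792_458.0          # speed of light, m/s
circumference = 561.6      # storage ring circumference, m

gamma = E_beam_eV / m_e_eV           # Lorentz factor: roughly 5900
beta = (1.0 - 1.0 / gamma**2) ** 0.5 # v/c, indistinguishable from 1 in practice

# Revolution frequency: one lap of the ring at essentially the speed of light,
# on the order of half a megahertz.
f_rev = beta * c / circumference
```

This is why the text can treat the electrons as "travelling near the speed of light": at γ ≈ 5900 the fractional shortfall from c is about 1/(2γ²) ≈ 1.4 × 10⁻⁸.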
This contains the storage ring and a number of beamlines, with the linear accelerator and booster synchrotron housed in the centre of the ring. These beamlines are the experimental stations where the synchrotron light's interaction with matter is used for research purposes. Seven beamlines were available when Diamond became operational in 2007, with more coming online as construction continued. As of April 2019 there were 32 beamlines in operation. Diamond is intended ultimately to host about 33 beamlines, supporting the life, physical and environmental sciences. Diamond is also home to eleven electron microscopes. Nine of these are cryo-electron microscopes specialising in life sciences, including two provided for industry use in partnership with Thermo Fisher Scientific; the remaining two microscopes are dedicated to research of advanced materials. Case studies In September 2007, scientists from Cardiff University, led by Tim Wess, found that the Diamond synchrotron could be used to see hidden content of ancient documents by illumination without opening them (penetrating layers of parchment). In November 2010 data collected at Diamond by Imperial College London formed the basis for a paper in the journal Nature advancing the understanding of how HIV and other retroviruses infect human and animal cells. The findings may enable improvements in gene therapy to correct gene malfunctions. In June 2011 data from Diamond led to an article in the journal Nature detailing the 3D structure of the human Histamine H1 receptor protein. This led to the development of 'third generation' anti-histamines, drugs effective against some allergies without adverse side-effects. In December 2017, the UK established the Synchrotron Techniques for African Research and Technology (START) programme, with £3.7 million of funding from UK Research and Innovation over three years. START aimed to provide African researchers with access to synchrotron techniques, with a focus on energy materials and structural biology.
The step is crucial for the inception of the first African Light Source. In a study published in the Proceedings of the National Academy of Sciences in April 2018, a five-institution collaboration including scientists from Diamond used three of Diamond's macromolecular beamlines to discover details of how a bacterium used plastic as an energy source. High resolution data allowed the researchers to determine the workings of an enzyme that degraded the plastic PET. Subsequently, computational modelling was carried out to investigate and thus improve this mechanism. An article published in Nature in 2019 described how a worldwide multidisciplinary collaboration designed several ways to control metal nano-particles, including synthesis at a substantially reduced cost, for use as catalysts for the production of everyday goods. Research conducted at Diamond Light Source in 2020 helped determine the atomic structure of SARS‑CoV‑2, the virus responsible for COVID-19. In 2023, Diamond Light Source scanned the Herculaneum papyri, including scroll PHerc. Paris. 4, to facilitate non-invasive decipherment through machine learning. See also List of synchrotron radiation facilities Synchrotron Radiation Source (SRS) European Synchrotron Radiation Facility (ESRF) MAX IV BESSY DESY SOLEIL Canadian Light Source (CLS) Elettra Synchrotron The African Light Source (AfLS) References External links Diamond: Britain's answer to the Large Hadron Collider Guardian article describing the machine and its applications Physics research institutes Research institutes in Oxfordshire Science and Technology Facilities Council Synchrotron radiation facilities Vale of White Horse Wellcome Trust
Diamond Light Source
Materials_science
1,478
8,935,386
https://en.wikipedia.org/wiki/Joint%20Institute%20for%20Nuclear%20Astrophysics
The Joint Institute for Nuclear Astrophysics Center for the Evolution of the Elements (JINA-CEE) is a multi-institutional Physics Frontiers Center funded by the US National Science Foundation since 2014. From 2003 to 2014, JINA was a collaboration between Michigan State University, the University of Notre Dame, and the University of Chicago, and was directed by Michael Wiescher from the University of Notre Dame. Principal investigators were Hendrik Schatz, Timothy Beers and Jim Truran. JINA-CEE is a collaboration between Michigan State University, the University of Notre Dame, University of Washington and Arizona State University and a number of associated institutions, centers, and national laboratories in the US and across the world, with the goal to bring together nuclear experimenters, nuclear theorists, astrophysical modelers, astrophysics theorists, and observational astronomers to address the open scientific questions at the intersection of nuclear physics and astrophysics. JINA-CEE serves as an intellectual center and focal point for the field of nuclear astrophysics, and is intended to enable scientific work and exchange of data and information across field boundaries within its collaboration, and for the field as a whole through workshops, schools, and web-based tools and databases. It is led by director Hendrik Schatz with Michael Wiescher, Timothy Beers, Sanjay Reddy and Frank Timmes as principal investigators. Most JINA-CEE nuclear physics experiments are carried out at the Nuclear Science Laboratory at the University of Notre Dame, the National Superconducting Cyclotron Laboratory at Michigan State University and the ATLAS/CARIBOU facility at Argonne National Laboratory. JINA-CEE is heavily involved in observations with the Apache Point Observatory within the framework of extensions to the Sloan Digital Sky Survey, LAMOST in China, SkyMapper in Australia, and the Hubble Space Telescope.
Among many other observational data, JINA-CEE also makes heavy use of X-ray observational data from BeppoSAX, RXTE, Chandra, XMM-Newton, and INTEGRAL. JINA stimulated the development of similar centers in other countries, and collaborates with a number of multi-institutional nuclear astrophysics centers in Germany, including NAVI, EMMI and the Universe Cluster in Munich. REACLIB Database One of the many projects of JINA-CEE is the maintenance of an up-to-date nuclear reaction rate library called REACLIB. REACLIB contains over 75,000 thermonuclear reaction rates. Virtual Journals Nuclear astrophysics draws on many overlapping disciplines, spanning fields in astronomy, astrophysics and nuclear physics. In order to understand the origin of the elements, or the evolution and deaths of stars in galaxies, quite a broad base of knowledge is required. JINA-CEE created two virtual journals in order to meet the need for coverage of this broad-based information. The JINA Virtual Journal debuted in 2003, and reviews a broad realm of nuclear astrophysics; it was followed by the SEGUE Virtual Journal in 2006, focusing more on Galactic chemical and structural evolution. Each week, the editors search almost 40 refereed journals for newly published articles. Editors review the articles, flagging those that are relevant, and categorize them into their respective subjects (which are searchable by individual users). When the virtual journals are published, an email notification is sent to subscribers informing them of the newly available selections from the Virtual Journals. Education Education, outreach, and creating inclusive environments are high priorities for JINA-CEE. JINA-CEE has a multitude of educational and outreach programs aimed at attracting young people to science careers, research training, and disseminating research findings to the public. Educational programs target audiences ranging from K-12 to Graduate Students and Postdocs.
References External links Official JINA website JINA DIANA/SURF Nuclear Astrophysics Group website JINA FRIB Nuclear Astrophysics Group website JINA SDSS-II Nuclear Astrophysics Group website Full list of Associated and Participating Institutions JINA Educational programs Research institutes in the United States Astrophysics research institutes
Joint Institute for Nuclear Astrophysics
Physics
826
16,781,529
https://en.wikipedia.org/wiki/HIP%2014810%20b
HIP 14810 b is a massive hot Jupiter approximately 165 light-years away in the constellation of Aries. It has a mass 3.88 times that of Jupiter and orbits at 0.0692 AU. It was discovered by the N2K Consortium in 2006 and the discovery paper was published in 2007. Prior to this, a preliminary orbit had been published in the Catalog of Nearby Exoplanets. References External links Aries (constellation) Giant planets Exoplanets discovered in 2006 Exoplanets detected by radial velocity Hot Jupiters
HIP 14810 b
Astronomy
117
1,949,812
https://en.wikipedia.org/wiki/Size%20zero
Size zero or size 0 is a women's clothing size in the US catalog sizes system. Sizes 0 and 00 were introduced as average body sizes increased and clothing sizes shifted over time (a phenomenon referred to as vanity sizing or size inflation), which drove the adoption of lower numbers. For example, a 2011 size 0 is equivalent to a 2001 size 2, and is larger than a 1970 size 6 or 1958 size 8. Modern size 0 clothing, depending on brand and style, fits measurements of chest-stomach-hips from 30-22-32 inches (76-56-81 cm) to 33-25-35 inches (84-64-89 cm). Size 00 can be anywhere from 0.5 to 2 inches (1 to 5 cm) smaller than size 0. Size zero often refers to thin people (especially women and adolescent girls), or trends associated with them. Criticism The use of size 0 in advertisements and products of the clothing industry has been met with some media attention. In July 2009, Katie Green won a competition to represent Wonderbra. They referred her to the Premier Model Management agency for representation. Green reported that "one of the guys from the PR agency from Wonderbra" insisted that she lose weight, that it wasn't normal for models to be a (UK) size 8. "Unless I could drop down to that weight, they wouldn't be willing to get me more work." Green at first complied, but then quit the agency. She then, with Liberal Democrat MP Lembit Öpik, launched a campaign titled "Say No to Size Zero". They began a petition drive with the goal of putting an end to size zero and underweight models on the catwalk or working in the fashion industry. They set a goal of 20,000 signatures, which they planned to present to the UK Prime Minister and Parliament. They campaigned for legislation that would require regular health checkups for all models before undertaking any assignments.
Movement against size zero After the death of Luisel Ramos from anorexia in August 2006, Madrid Fashion Week banned size-zero models the following month, and the Milan fashion show took the same action shortly afterward, banning models with a body mass index (BMI) of 18 or below. As a result, five models were banned from taking part. In 2007, the British Fashion Council promoted the creation of a task force to establish guidelines for the fashion industry. They also urged fashion designers to use healthy models. In September 2019, Victoria Beckham received criticism on social media over thin models who appeared "ill" in her Fall fashion show. In 2015 she had "vowed that her agents were in touch with the agents of all the models she uses in an effort to make sure that the women are healthy". Representatives for Beckham did not respond to Fox News's request for comment. Italian fashion labels Prada, Versace and Armani have agreed to ban size-zero models from their catwalks. Under the new self-regulation code drawn up in Italy by the government and designers, all models in future shows will be "full-bodied", and larger sizes will be introduced at shows. Fashion designer Giorgio Armani has given support to the effort to eliminate ultra-thin models. "The time has now come for clarity. We all need to work together against anorexia." France banned size-zero models by law, stipulating that models needed a doctor's note attesting to their health in regard to their age, weight and body shape. The new charter brought forth by the iconic fashion companies takes the 2015 legislation further, committing their brands to banning models who are smaller than size 34 for women and 44 for men (for reference, a French size 34 is roughly equivalent to a size 0 in the US). Paris fashion week has also banned size-zero models from its catwalks.
Kering and LVMH, parent companies of brands such as Christian Dior, Louis Vuitton, Yves Saint Laurent and Givenchy, have created a new charter that bans the hiring of excessively thin models. The CEO of Kering, Francois-Henri Pinault, said of the decision: "We hope to inspire the entire industry to follow suit, thus making a real difference in the working conditions of fashion models industry-wide." Israel banned underweight models in March 2012. Its law stipulates that women and men hired as models must be certified by a physician as having a body mass index (BMI) of no less than 18.5. The legislation also requires the inclusion of an informational note in adverts using photos manipulated to make models look thinner. There were divergent views on the ban within the Israeli fashion industry. One modelling agent, who had helped promote the bill, suggested that the fall in typical dress sizes for models in the preceding 15–20 years amounted to "the difference between death and life". However, another described the law as "arbitrary" and "not appropriate for every model". See also EN 13402, a partially adopted centimeter-based standard for labelling clothes, which aims to make it easier to find and select fitting clothes, resulting in fewer returns Female body shape Body image Clothing sizes References 2000s fashion Clothing controversies Fashion aesthetics Sizes in clothing
Size zero
Physics,Mathematics
1,079
1,186,707
https://en.wikipedia.org/wiki/Bipyridine
Bipyridines are a family of organic compounds with the formula (C5H4N)2, consisting of two pyridyl (C5H4N) rings. Pyridine is an aromatic nitrogen-containing heterocycle. The bipyridines are all colourless solids, which are soluble in organic solvents and slightly soluble in water. Bipyridines, especially the 4,4′ isomer, are mainly of significance as precursors to pesticides. Six isomers of bipyridine exist, but two are prominent. 2,2′-Bipyridine, also known as bipyridyl, dipyridyl, and dipyridine, is a popular ligand in coordination chemistry. 2,2′-Bipyridine 2,2′-Bipyridine (2,2′-bipy) is a chelating ligand that forms complexes, of broad academic interest, with most transition metal ions. Many of these complexes have distinctive optical properties, and some are of interest for analysis. Its complexes are used in studies of electron and energy transfer, supramolecular and materials chemistry, and catalysis. 2,2′-Bipyridine is used in the manufacture of diquat. 4,4′-Bipyridine 4,4′-Bipyridine (4,4′-bipy) is mainly used as a precursor to the N,N′-dimethyl-4,4′-bipyridinium dication, commonly known as paraquat. This species is redox active, and its toxicity arises from its ability to interrupt biological electron transfer processes. Because of its structure, 4,4′-bipyridine can bridge between metal centres to give coordination polymers. 3,4′-Bipyridine The 3,4′-bipyridine derivatives inamrinone and milrinone are used occasionally for short-term treatment of congestive heart failure. They inhibit phosphodiesterase, thus increasing cAMP, exerting positive inotropy and causing vasodilation. Inamrinone causes thrombocytopenia. Milrinone decreases survival in heart failure. References Chelating agents Ligands
Bipyridine
Chemistry
465
27,434,193
https://en.wikipedia.org/wiki/Book%20illustration
The illustration of manuscript books was well established in ancient times, and the tradition of the illuminated manuscript thrived in the West until the invention of printing. Other parts of the world had comparable traditions, such as the Persian miniature. Modern book illustration comes from the 15th-century woodcut illustrations that were fairly rapidly included in early printed books, and later block books. Other techniques such as engraving, etching, lithography and various kinds of colour printing were to expand the possibilities and were exploited by such masters as Daumier, Doré or Gavarni. History Book illustration as we now know it evolved from early European woodblock printing. In the early 15th century, playing cards were created using block printing, which was the first use of prints in a sequenced and logical order. "The first known European block printings with a communications function were devotional prints of saints." As printing took off and books became common, printers began to use woodcuts to illustrate them. Hence, "centers for woodblock playing-card and religious-print production became centers for illustrated books." Printers of large early books often reused blocks several times, and also had detachable "plugs" of figures, or the attributes of saints, which they could rearrange within a larger image to make several variations. Luxury books were for a few decades often printed with blank spaces for manual illumination in the old way. Unlike later techniques, woodcut uses relief printing, just as metal moveable type does, so that pages including both text and illustration can be set up and printed together. However, the technique either gives rather crude results or was expensive if a high-quality block-cutter was used, and could only manage fine detail on atypically large pages.
It was not suitable for the level of detail required for maps, for example, and the 1477 Bolognese edition of Ptolemy's Cosmographia was both the first book to contain printed maps and the first to be illustrated by engravings (by Taddeo Crivelli) rather than woodcuts. However, hardly any further engraved illustrations were produced for several decades after about 1490, and instead a style of expensive books decorated in metalcut, mostly religious and produced in Paris, was a popular luxury product between about 1480 and 1540. In the middle of the 16th century, woodcut was gradually overtaken by the intaglio printing techniques of engraving and etching, which became dominant by about 1560–1590, first in Antwerp, then Germany, Switzerland and Italy, the important publishing centres. They remained so until the later 19th century. They required the illustrations to be printed separately, on a different type of printing press, so encouraging illustrations that took a whole page, which became the norm. Engraving and etching gave sharper definition and finer detail to the illustrations, and rapidly became dominant by the late 16th century, often with the two techniques mixed together in a single plate. A wide range of books were now illustrated, initially mostly on a few pages, but with the number of illustrations gradually rising over the period, and tending to use more etching than engraving. Particular kinds of books such as scientific and technical works, children's books, and atlases now became very heavily illustrated, and from the mid-18th century many of the new form of the novel had a small number of illustrations. Luxury books on geographical topics and natural history, and some children's books, had printed illustrations which were then coloured by hand, but in Europe none of the experimental techniques for true colour printing became widely used before the mid-19th century, when several different techniques became successful.
In East Asia colour printing with many different woodblocks was increasingly widely used; the fully developed technique in Japan was called nishiki-e, and was used in books as well as ukiyo-e prints. Lithography (invented by Alois Senefelder in 1798 and made public in 1818) allowed for more textual variety and accuracy, because the artist could now draw directly on the printing plate itself. New techniques developed in the nineteenth and twentieth centuries revolutionized book illustrations and put new resources at the disposal of artists and designers. In the early nineteenth century, the photogravure process allowed for photographs to be reproduced in books. In this process, light-sensitive gelatin was used to transfer the image to a metal plate, which would then be etched. Another process, chromolithography, which was developed in France in the mid-nineteenth century, permitted color printing. The process was extremely labor-intensive and expensive, though, as the artist had to prepare a separate plate for each color used. In the late twentieth century, the process known as offset lithography made color printing cheaper and less time-consuming for the artist. It used a chemical process to transfer a photographic negative to a rubber surface before printing. There were various artistic movements and their proponents in the nineteenth and twentieth centuries that took an interest in the enrichment of book design and illustration. For example, Aubrey Beardsley, a proponent of both Art Nouveau and Aestheticism, had a great influence over book illustrations. Beardsley specialized in erotica, and some of the best examples of his drawings were for the first English edition of Oscar Wilde's Salomé (1894).
Contamination of historic books In the 19th century, Paris green and similar arsenic pigments were often used on front and back covers, top, fore and bottom edges, title pages, book decorations, and in printed or manual colorations of illustrations of books. Since February 2024, several German libraries started to block public access to their stock of 19th century books to check for the degree of poisoning. See also Children's book illustration Illustration Extra-illustration Picture books Artist's book, works of art realized in the form of a book Bookbreaking Livre d'art, books in which the illustration holds a predominant place References Further reading Douglas Martin, The Telling Line Essays On Fifteen Contemporary Book Illustrators (1989) Jos Pennec, Émile Malo-Renault, graveur et illustrateur (1870–1938), Bulletin et Mémoires de la Société Archéologique et Histoire d'Ille-et-Vilaine (2004) Edward Hodnett, Five Centuries of English Book Illustration (1988) Maurice Sendak, Caldecott & Co.: Notes on Books and Pictures (1988) Joyce Irene Whalley and Tessa Rose Chester, A History of Children's Book Illustration (1988) Elaine Moss, Part of the Pattern (1986) [incl. interviews with illustrators] John Lewis, The Twentieth Century Book: Its Illustration and Design (new ed. 1984) H. Carpenter and M. Prichard, The Oxford Companion to Children's Literature (1984) Brigid Peppin and Lucy Micklethwaite, Dictionary of British Book Illustrators: The Twentieth Century (1983) Alan Ross, Colours of War: War Art 1939–45 (1983) Hugh Williamson, Methods of Book Design (3rd. ed., 1983) Edward Hodnett, Image and Text: Studies in the Illustration of English Literature (1982) Hans Adolf Halbey, Im weiten Feld der Buchkunst (1982) [on 20th century] John Harthan, The History of the Illustrated Book: The Western Tradition (1981) Pat Gilmore, Artists at Curwen (1977. 
Tate Gallery) William Feaver, When We Were Young: Two Centuries of Children's Book Illustration (1977) Illustrators [periodical] (1975 onwards) Images [annual] (1975 onwards) Donnerae MacCann and Olga Richard, The Children's First Books: A Critical Study of Pictures and Text (1973) The Francis Williams Bequest: An Exhibition of Illustrated Books, 1967–71 [National Book League] (1972) Frank Eyre, British Children's Books in the Twentieth Century (1971) Walter Herdeg, An International Survey of Children's Book Illustration = special issue of Graphis; 155 (1971) [& subsequent surveys] Diana Klemin, The Illustrated Book: Its Art and Craft (1970) David Bland, A History of Book Illustration (2nd ed. 1969) W. J. Strachan, The Artist and the Book in France (1969) Bettina Hurlimann, Picture-Book World (1968) Bettina Hurlimann, Three Centuries of Children's Books in Europe (1967) Adrian Wilson, The Design of Books (1967) Rigby Graham, Romantic Book Illustration in England, 1943–55 (1965. Private Libraries Association) Bob Gill and John Lewis, Illustration: Aspects and Directions (1964) Robin Jacques, Illustrators at Work (1963. Studio Books) David Bland, The Illustration of Books (3rd. ed. 1962) Lynton Lamb, Drawing for Illustration (1962) Anders Hedvall and Bror Zachrisson, 'Children and their books', in Penrose Annual; 56 (1962), p. 59–66 & plates [incl. children's reactions] John Ryder, Artists of a Certain Line: A Selection of Illustrators for Children's Books (1960) Lynton Lamb, 'The True Illustrator', in Motif; 2 (1959 February), p. 70–76 John Lewis, A Handbook of Type and Illustration (1956) John Lewis and John Brinkley, Graphic Design (1954) James Boswell, 'English book illustration today', in Graphis; 7/34 (1951), p. 42–57 British Book Illustration 1935–45 [exhibition catalogue, National Book League] (1949) John Piper, 'Book illustration and the painter-artist', in Penrose Annual; 43 (1949), p. 
52–54 Lynton Lamb, 'Predicaments of illustration', in Signature; new series, 4 (1947), p. 16-27 Bertha E. Mahoney, Illustrators of Children's Books 1744–1945 (1947) [and periodic supplements] External links Old book illustrations (all public domain) Site for the scholarly study of the history of book illustration run by not-for-profit society (IBIS) Illustration Illustrated books Book design
Book illustration
Engineering
2,073
996,678
https://en.wikipedia.org/wiki/MPU-401
The MPU-401, where MPU stands for MIDI Processing Unit, is an important but now obsolete interface for connecting MIDI-equipped electronic music hardware to personal computers. It was designed by Roland Corporation, which also co-authored the MIDI standard. Design Released around 1984, the original MPU-401 was an external breakout box providing MIDI IN/MIDI OUT/MIDI THRU/TAPE IN/TAPE OUT/MIDI SYNC connectors, for use with a separately-sold interface card/cartridge ("MPU-401 interface kit") inserted into a computer system. For this setup, the following "interface kits" were made: MIF-APL: For the Apple II MIF-C64: For the Commodore 64 MIF-FM7: For the Fujitsu FM-7 MIF-IPC: For the IBM PC/IBM XT. It turned out not to work reliably with 286 and faster processors. Early versions of the actual PCB had IF-MIDI/IBM as a silk screen. MIF-IPC-A: For the IBM AT, works with PC and XT as well. Xanadu MUSICOM IFM-PC: For the IBM PC / IBM XT / IBM AT. This was a third party MIDI card, incorporating the MIF-IPC(-A) and additional functionality that was coupled with the OEM Roland MPU-401 BOB. It also had a mini audio jack on the PCB. MIF-PC8: For the NEC PC-88 MIF-PC98: For the NEC PC-98 MIF-X1: For the Sharp X1 MIF-AMG: For the Amiga, from Musicsoft In 2014 hobbyists built clones of the MIF-IPC-A card for PCs. Variants Later, Roland would put most of the electronics originally found in the breakout box onto the interface card itself, thus reducing the size of the breakout box. Products released in this manner: MPU-401N: an external interface, specifically designed for use with the NEC PC-98 series notebook computers. This breakout-box unit features a special COMPUTER IN port for direct connection to the computer's 110-pin expansion bus. METRONOME OUT connector was added. Released in Japan only. MPU-IPC: for the IBM PC/IBM XT/IBM AT and compatibles (8 bit ISA). 
It had a 25-pin female connector for the breakout box, even though only nine pins were used, and only seven were functionally different: both 5V and ground use two pins each. MPU-IPC-T: for the IBM PC/IBM XT/IBM AT and compatibles (8-bit ISA). The MIDI SYNC connector was removed from this Taiwanese-manufactured model, and the previously hardcoded I/O address and IRQ could be set to different values with jumpers. The break-out box has three DIN connectors for MIDI (1xIN and 2xOUT) plus three 3.5mm mini jack connectors (TAPE IN, TAPE OUT and METRONOME OUT). MPU-IMC: for the IBM PS/2's Micro Channel architecture bus. In earlier models both the I/O address and the IRQ were hardcoded, with IRQ 2 causing serious problems with the hard disk, which also uses that IRQ; in later models the IRQ could be set with a jumper. It had a 9-pin female connector for the breakout box. Due to the incompatibility of IRQ 2/9 (and potentially I/O addresses) between the MPU-IMC and IBM PS/2 MCA models, certain games will not work with the MPU-401. S-MPU/AT (Super MPU): for the IBM AT and compatibles (16-bit ISA). It had a Mini-DIN female connector for the breakout box. The MIDI SYNC, TAPE IN, TAPE OUT and METRONOME OUT connectors were removed, but a second MIDI IN connector was added. An application to assign resources (plug and play) must be run to use the card in DOS. This application is not a TSR (it does not take up conventional memory). S-MPU-IIAT (Super MPU II): for IBM or compatible Plug and Play PC computers (16-bit ISA). It had a Mini-DIN female connector for the breakout box with two MIDI In connectors and two MIDI Out connectors. An application to assign resources (plug and play) must be run to use the card in DOS. This application is not a TSR (it does not take up precious conventional memory). S-MPU/FMT: For FM Towns LAPC-I: for the IBM PC and compatibles. Includes the Roland CM-32L sound source. A breakout box for this card, the MCB-1, was sold separately. LAPC-N: for the NEC PC-98.
Includes the Roland CM-32LN sound source. A breakout box for this card, the MCB-2, was sold separately. RAP-10: for the IBM AT and compatibles (16-bit ISA). General MIDI sound source only. MPU-401 UART mode only. A breakout box for this card, the MCB-10, was sold separately. SCP-55: for IBM and compatible laptops (PCMCIA). Includes the Roland SC-55 sound source. A breakout box for this card, the MCB-3, was sold separately. MPU-401 UART mode only. Still later, Roland would get rid of the breakout box completely and put all connectors on the back of the interface card itself. Products released in this manner: MPU-APL: for the Apple II. Single-card combination of the MIF-APL interface and MPU-401, featuring MIDI IN, OUT, and SYNC connectors. MPU-401AT: for IBM AT and "100% compatibles". Includes a connector for Wavetable daughterboards. MPU-PC98: for the NEC PC-98 MPU-PC98II: for the NEC PC-98 S-MPU/PC (Super MPU PC-98): for the NEC PC-98 S-MPU/2N (Super MPU II N): for the NEC PC-98 SCC-1: for the IBM PC and compatibles. Includes the Roland SC-55 sound source. GPPC-N & GPPC-NA: for the NEC PC-98. Includes the Roland SC-55 sound source. Clones By the late 1980s other manufacturers of PCBs had developed intelligent MPU-401 clones. Some of these, like Voyetra's, were equipped with Roland chips, whereas most had reverse-engineered ROMs (Midiman / Music Quest). Examples: Midiman MM-401 (8-bit, non-Roland chipset, also sold as part of the Midiman PC Desktop Music Kit) Midi System, Inc. MDR-401 (non-Roland chipset) Computer Music Supply CMS-401 (8-bit, non-Roland chipset) Music Quest PC MIDI Card / MQX-16s / MQX-32m (8- & 16-bit, non-Roland chipset) Voyetra V-400x / OP-400x (V-4000, V4001, 8-bit, Roland chipset) MIDI LAND DX-401 (non-Roland chipset) & MD-401 (non-Roland chipset) Data Soft DS-401 (non-Roland chipset) In 2015 hobbyists developed a clone of the 8-bit Music Quest PC MIDI Card.
In 2017/2018 hobbyists developed a revision of the 8-bit Music Quest PC MIDI Card clone that includes a wavetable header, analogous to the Roland MPU-401AT. Modes The MPU-401 can work in two modes, normal mode and UART mode. "Normal mode" provides the host system with an 8-track sequencer, MIDI clock output, SYNC 24 signal output, Tape Sync and a metronome; as a result of these features, it is often called "intelligent mode". Compare this to UART mode, which reduces the MPU-401 to simply relaying incoming and outgoing MIDI data bytes. As computers became more powerful, the features offered in "intelligent mode" became obsolete: implementing them in the host system's software was more efficient, and specific hardware was no longer required. As a result, UART mode became the dominant mode of operation. Early UART-only MPU-401-capable cards were still advertised as MPU-401 compatible. SoftMPU In the mid-2010s, a hobbyist software interface, SoftMPU, was written that upgrades UART (non-intelligent) MPU-401 interfaces to an intelligent MPU-401 interface; however, it only works under MS-DOS. It also does not work for all games; especially early Sierra games, such as Jones in the Fast Lane, will not work with SoftMPU. HardMPU In 2015, a PCB (HardMPU) was developed that incorporates SoftMPU as logic on hardware (so that the PC's CPU does not have to process intelligent MIDI). Currently HardMPU only supports playback, not recording. Contemporary interfaces Physical MIDI connections are increasingly being replaced with the USB interface and a USB-to-MIDI converter in order to drive musical peripherals that do not yet have their own USB ports. Often, peripherals are able to accept MIDI input through USB and convert it for the traditional DIN connectors. While MPU-401 support is no longer included in Windows Vista, a driver is available on Windows Update. As of 2011, the interface was still supported by Linux and Mac OS X.
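The difference between intelligent and UART operation comes down to a small command handshake on the interface's two I/O ports. The following Python sketch models that handshake against a mock device; the port semantics (data port at 0x330, status/command port at 0x331, DRR/DSR status bits, reset command 0xFF, enter-UART command 0x3F, acknowledge byte 0xFE) are the conventional MPU-401 defaults, while the `MockMPU` class and `enter_uart_mode` function are purely illustrative stand-ins for real port I/O.

```python
# Illustrative model of the MPU-401 UART-mode handshake (no real port I/O).
# Conventional defaults: data port 0x330, status/command port 0x331.
CMD_RESET = 0xFF   # reset to intelligent ("normal") mode
CMD_UART = 0x3F    # switch to dumb UART mode
ACK = 0xFE         # acknowledge byte returned on the data port
DRR = 0x40         # status bit 6 clear -> ready to accept a byte/command
DSR = 0x80         # status bit 7 clear -> a byte is waiting to be read

class MockMPU:
    """Stand-in for the card's two ports; always ready, ACKs every command."""
    def __init__(self):
        self.uart_mode = False
        self._pending = []          # bytes waiting on the data port

    def read_status(self):
        s = 0                       # DRR clear: this mock is always ready
        if not self._pending:
            s |= DSR                # DSR set: nothing to read yet
        return s

    def write_command(self, cmd):
        if cmd == CMD_RESET:
            self.uart_mode = False
        elif cmd == CMD_UART:
            self.uart_mode = True
        self._pending.append(ACK)   # the mock acknowledges every command

    def read_data(self):
        return self._pending.pop(0)

def enter_uart_mode(mpu, tries=1000):
    """Reset the interface, then switch it to UART mode, polling as a driver would."""
    for cmd in (CMD_RESET, CMD_UART):
        for _ in range(tries):      # wait until the card can take a command
            if not mpu.read_status() & DRR:
                break
        mpu.write_command(cmd)
        for _ in range(tries):      # wait for (and consume) the 0xFE acknowledge
            if not mpu.read_status() & DSR and mpu.read_data() == ACK:
                break

mpu = MockMPU()
enter_uart_mode(mpu)
print("UART mode:", mpu.uart_mode)  # UART mode: True
```

A real DOS-era driver would perform the same sequence with `inb`/`outb` on ports 0x331 (status/command) and 0x330 (data), which is exactly what SoftMPU intercepts.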
References External links 'Card Times' - Sound on Sound magazine, Nov 1996 SoftMPU Louis Ohland's PS/2 Archiveshere Computer hardware standards MIDI Obsolete technologies Music sequencers
MPU-401
Technology,Engineering
2,061
35,830,148
https://en.wikipedia.org/wiki/Condensation%20particle%20counter
A condensation particle counter or CPC is a particle counter that detects and counts aerosol particles by first enlarging them, using the particles as nucleation centers to create droplets in a supersaturated gas. Three techniques have been used to produce nucleation: Adiabatic expansion using an expansion chamber. This was the original technique used by John Aitken in 1888. Thermal diffusion. Mixing of hot and cold gases. The most commonly used (and most efficient) method is cooling by thermal diffusion. The most widely used working fluid is n-butanol; in recent years water has also come into use. Condensation particle counters are able to detect particles with dimensions of 2 nm and larger. This is of special importance because particles smaller than about 50 nm are generally undetectable with conventional optical techniques. Usually the supersaturation in the condensation chamber is about 100–200 %, even though heterogeneous nucleation (droplet growth on the surface of a suspended solid particle) can occur at supersaturation as small as 1 %. The greater vapour content is needed because the equilibrium vapour pressure over a convex surface is greater than over a plane surface, so a higher vapour content in the air is required to achieve actual supersaturation at the droplet surface. The required supersaturation grows (the equilibrium vapour pressure rises) as particle size decreases; the smallest diameter for which condensation can occur at the present saturation level is called the Kelvin diameter. The supersaturation level must, however, be small enough to prevent homogeneous nucleation (when vapour molecules collide so often that they form clusters stable enough to ensure further growth), which would produce false counts. This usually starts at about 300 % supersaturation. On the right, a diffusional thermal cooling CPC is shown in operation.
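The Kelvin diameter can be estimated from the Kelvin equation, d_K = 4σM / (ρRT ln S), where σ is the surface tension, M the molar mass and ρ the density of the working fluid, T the temperature and S the saturation ratio. A minimal Python sketch, using approximate room-temperature properties of water chosen purely for illustration:

```python
import math

def kelvin_diameter(saturation_ratio, surface_tension, molar_mass, density, temperature):
    """Smallest particle diameter (m) on which condensation can occur
    at saturation ratio S = p/p_sat, via the Kelvin equation."""
    R = 8.314  # universal gas constant, J/(mol K)
    return (4.0 * surface_tension * molar_mass
            / (density * R * temperature * math.log(saturation_ratio)))

# Approximate properties of water at 25 degrees C (illustrative values):
d = kelvin_diameter(saturation_ratio=2.0,      # i.e. 100 % supersaturation
                    surface_tension=0.072,     # N/m
                    molar_mass=0.018,          # kg/mol
                    density=997.0,             # kg/m^3
                    temperature=298.0)         # K
print(f"Kelvin diameter ~ {d * 1e9:.1f} nm")   # ~3 nm
```

The result, about 3 nm at 100 % supersaturation, is consistent with the few-nanometre detection limits quoted above; raising the saturation ratio shrinks the Kelvin diameter further.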
In order to ensure a high vapour content, the working liquid is in contact with a heated hollow block of porous material. The humidified air then enters the cooler, where nucleation occurs. The temperature difference between the heater and the cooler determines the supersaturation, which in turn determines the minimal size of particles that will be detected (the greater the difference, the smaller the particles that get counted). As proper nucleation conditions occur in the center of the flow, the incoming flow is sometimes divided: most of it is filtered to form a sheath flow, into which the rest of the flow, still containing particles, is injected via a capillary. The more uniform the obtained supersaturation, the sharper the minimal-size cutoff. During the heterogeneous nucleation process in the nucleation chamber, particles grow to 10–12 μm and so are conveniently detected by usual techniques, such as laser nephelometry (measurement of light pulses scattered by the grown droplets). References Meteorological instrumentation and equipment Counting instruments Particle detectors Aerosols Air pollution Aerosol measurement
Condensation particle counter
Chemistry,Mathematics,Technology,Engineering
611
2,377,007
https://en.wikipedia.org/wiki/V849%20Ophiuchi
V849 Ophiuchi or Nova Ophiuchi 1919 was a nova that erupted in 1919, in the constellation Ophiuchus, and reached a blue-band brightness of magnitude 7.2. Joanna C. S. Mackie discovered the star while she was examining Harvard College Observatory photographic plates. The earliest plate it was visible on was exposed on August 20, 1919, when the star was at magnitude 9.4. It reached magnitude 7.5 on September 13 of that year. In its quiescent state it has a visual magnitude of about 18.8. V849 Ophiuchi is classified as a "slow nova"; it took six months for it to fade by three magnitudes. All novae are binary stars, and V849 is an eclipsing binary. Its orbital period is 4.146128 hours. References External links https://web.archive.org/web/20050909054008/http://www.tsm.toyama.toyama.jp/curators/aroom/var/nova/1910.htm Novae Ophiuchus Ophiuchi, V849 167276
V849 Ophiuchi
Astronomy
241
16,604,809
https://en.wikipedia.org/wiki/List%20of%20interactive%20artists
This is a list of artists who work primarily in the medium of interactive art. B Artur Barrio Maurice Benayoun Timothy Binkley Maurizio Bolognini Geoff Bunn C Peter Campus Janet Cardiff Thomas Charvériat Marcelo Coelho Shane Cooper D Char Davies Liu Dao (artist collective) Mark Divo Juan Downey E Ernest Edmonds F Ken Feingold Alicia Framis Masaki Fujihata H Dominic Harris Heather Hart Jeppe Hein Desmond Paul Henry Lynn Hershman Leeson Hugo Heyrman Perry Hoberman I Toshio Iwai J Christopher Janney Miranda July K Eduardo Kac Sep Kamvar Knowbotic Research Meeli Kõiva Myron Krueger Aki Kuroda Ryota Kuwakubo L Marc Lee Golan Levin Jen Lewin LIA Zachary Lieberman Liu Dao Marita Liulia Rafael Lozano-Hemmer M Ali Miharbi George Bures Miller N Michael Naimark Mark Napier Graham Nicholls P Jim Pallas Simon Penny Liz Phillips R Ken Rinaldo Don Ritter (artist) Miroslaw Rogala David Rokeby Daan Roosegaarde Daniel Rozin S Tomás Saraceno Tino Sehgal Jeffrey Shaw Nathaniel Stern Scott Snibbe T Marc Tasman Rirkrit Tiravanija Timo Toots U Camille Utterback V Angelo Vermeulen W Theo Watson Z Ricardo Miranda Zuñiga See also Interactive art Interactive media References Bullivant, Lucy (2006). Responsive Environments: Architecture, Art and Design (V&A Contemporaries). London: Victoria and Albert Museum. Bullivant, Lucy (2005). 4dspace: Interactive Architecture (Architectural Design). London: John Wiley & Sons. Paul, Christiane (2003). Digital Art (World of Art series). London: Thames & Hudson. Wands, Bruce (2006). Art of the Digital Age. Thames and Hudson, pp. 89, 139. Weibel, Peter and Shaw, Jeffrey (2003). Future Cinema. MIT Press, pp. 472, 572–581. Wilson, Steve. Information Arts: Intersections of Art, Science, and Technology. Interactive artists, List of New media New media art
List of interactive artists
Technology
445
3,111,575
https://en.wikipedia.org/wiki/Drainage%20density
Drainage density is a quantity used to describe physical parameters of a drainage basin. First described by Robert E. Horton, drainage density is defined as the total length of channel in a drainage basin divided by the total area, represented by the following equation: Dd = L / A, where L is the total length of all channels in the basin and A is the basin area. The quantity represents the average length of channel per unit area of catchment and has units of length per area (e.g. km/km²), which is often reduced to inverse length (km⁻¹). Drainage density depends upon both climate and physical characteristics of the drainage basin. Soil permeability (infiltration difficulty) and underlying rock type affect the runoff in a watershed; impermeable ground or exposed bedrock will lead to an increase in surface water runoff and therefore to more frequent streams. Rugged regions or those with high relief will also have a higher drainage density than other drainage basins if the other characteristics of the basin are the same. When determining the total length of streams in a basin, both perennial and ephemeral streams should be considered. If only perennial streams were counted, a basin containing only ephemeral streams would have a calculated drainage density of zero. Ignoring ephemeral streams in the calculations does not account for the behavior of the basin during flood events and is therefore not completely representative of the drainage characteristics of the basin. Drainage density is indicative of infiltration and permeability of a drainage basin, as well as relating to the shape of the hydrograph. High drainage densities also mean a high bifurcation ratio. Inverse of drainage density as a physical quantity Drainage density can be used to approximate the average length of overland flow in a catchment.
Horton (1945) used the following equation to describe the average length of overland flow as a function of drainage density: Lo = 1 / (2 Dd), where Lo is the average length of overland flow, with units of length, and Dd is the drainage density of the catchment, expressed in units of inverse length. Considering the geometry of channels on the hillslope, Horton also proposed the refined equation: Lo = 1 / (2 Dd √(1 − (Sc/Sg)²)), where Sc is the channel slope and Sg is the average slope of the ground in the area. Elementary components of drainage basins A drainage basin can be defined by three elementary quantities: channels, the hillslope area associated with those channels, and the source areas. The channels are the well-defined segments that efficiently carry water through the catchment. Labeling these features as "channels" rather than "streams" indicates that there need not be a continuous flow of water for them to act as conduits of water. According to Arthur Strahler's stream ordering system, the channels are not defined to be any single order or range of orders; channels of lower orders combine to form higher-order channels. The associated hillslope areas are the hillslopes that slope directly into the channels. Precipitation that enters the system on the hillslope areas and is not lost to infiltration or evapotranspiration enters the channels. The source areas are concave regions of hillslope that are associated with a single channel. Precipitation entering a source area that is not lost to infiltration or evapotranspiration flows through the source area and enters the channel at the channel's head. Source areas are differentiated from the hillslope areas associated with channels in that source areas drain through the channel head, while the associated hillslope areas drain into the rest of the channel. According to Strahler's stream ordering system, all source areas drain into a primary channel, by the definition of a primary channel. Bras et al.
(1991) describe the conditions that are necessary for channel formation. Channel formation is a concept intimately tied to the formation and evolution of a drainage system, and it influences the drainage density of a catchment. The relation they propose determines the behavior of a given hillslope in response to a small perturbation. They propose a stability criterion relating the sediment flux F through the source area, the slope S of the source area, and the source area a itself. The sign of this criterion determines channel stability or instability. If it is greater than zero, the hillslope is stable, and small perturbations such as small erosive events do not develop into channels. Conversely, if it is less than zero, Bras et al. determine the hillslope to be unstable, and small erosive structures, such as rills, will tend to grow and form a channel, increasing the drainage density of the basin. In this sense, "unstable" is not used in the sense of the gradient of the hillslope being greater than the angle of repose and therefore susceptible to mass wasting; rather, fluvial erosive processes such as sheet flow or channel flow tend to incise and erode to form a single channel. Therefore, the characteristics of the source area, or potential source area, influence the drainage density and evolution of a drainage basin. Relation to water balance Drainage density is tied to the water balance equation: ΔS = R − ET + Gi − Go − Gs − Qw, where ΔS is the change in reservoir storage, R is precipitation, ET is evapotranspiration, Gi and Go are the respective groundwater fluxes into and out of the basin, Gs is the groundwater discharge into streams, and Qw is groundwater discharge from the basin through wells. Drainage density relates to the storage and runoff terms, and to the efficiency with which water is carried over the landscape.
Water is carried through channels much faster than over hillslopes, as saturated overland flow is slower due to being thinned out and obstructed by vegetation or pores in the ground. Consequently, a drainage basin with a relatively higher drainage density will be more efficiently drained than a lower density one. Because of the more extensive drainage system in a higher density basin, precipitation entering the basin will, on average, travel a shorter distance over the slower hillslopes before reaching the faster-flowing channels, and will exit the basin through the channels in less time. Conversely, precipitation entering a lower drainage density basin will take longer to exit the basin because it travels over the slower hillslopes for longer. In his 1963 paper on drainage density and streamflow, Charles Carlston found that baseflow into streams is inversely related to the square of the drainage density of the drainage basin:

$Q_b \propto D_d^{-2}$

This relation reflects the effect of drainage density on infiltration. As drainage density increases, baseflow discharge into a stream decreases for a given basin because there is less infiltration to contribute to baseflow. More of the water entering the drainage basin during a rainfall event exits quickly through streams rather than infiltrating and contributing to baseflow discharge. Gregory and Walling (1968) found that the average discharge through a drainage basin is proportional to the square of drainage density:

$Q \propto D_d^{2}$

This relation illustrates that a higher drainage density environment transports water more efficiently through the basin. In a relatively low drainage density environment, the lower average discharge predicted by this relation results from the surface runoff spending more time travelling over hillslopes, allowing more time for infiltration to occur. The increased infiltration results in decreased surface runoff, in accordance with the water balance equation. 
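These two proportionalities can be sketched numerically. The squared dependence of discharge is stated by Gregory and Walling; the inverse-square exponent for Carlston's baseflow relation is an assumption of this sketch:

```python
def relative_discharge(dd_ratio):
    """Gregory & Walling (1968): mean discharge scales as the square of
    drainage density, so scaling D_d by a factor r scales discharge by
    r**2, all other factors held equal."""
    return dd_ratio ** 2

def relative_baseflow(dd_ratio):
    """Carlston (1963) found baseflow inversely related to drainage
    density; an inverse-square dependence is assumed here."""
    return dd_ratio ** -2

# Doubling drainage density: discharge quadruples, baseflow falls to a quarter.
discharge_factor = relative_discharge(2.0)
baseflow_factor = relative_baseflow(2.0)
```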
These two equations agree with each other and follow the water balance equation. According to the equations, in a basin with high drainage density the contribution of surface runoff to stream discharge will be high, while that from baseflow will be low. Conversely, a stream in a low drainage density system will have a larger contribution from baseflow and a smaller contribution from overland flow. Relation to hydrographs The discharge through the central stream draining a catchment reflects the drainage density, which makes it a useful diagnostic for predicting the flooding behavior of a catchment following a storm event, since drainage density is intimately tied to the hydrograph. The material that overland flow travels over is one factor that influences the speed at which water can flow out of a catchment. Water flows significantly more slowly over hillslopes than through channels, which form to efficiently carry water and other flowing material. Horton's interpretation of half the inverse of drainage density as the average length of overland flow implies that overland flow in a high drainage density environment reaches a fast-flowing channel after a shorter distance. On the hydrograph, the peak is therefore higher and occurs over a shorter time span. This more compact and higher peak is often referred to as being "flashy". The timing of the hydrograph in relation to the peak of the hyetograph is influenced by the drainage density. The water that enters a high drainage density watershed during a storm will reach a channel relatively quickly and travel in the high-velocity channels to the outlet of the watershed in a relatively short time. Conversely, the water entering a low drainage density basin will, on average, have to travel a longer distance over the low-velocity hillslopes to reach the channels. As a result, the water will require more time to reach the exit of the catchment. 
The lag time between the peak of the hyetograph and the peak of the hydrograph is then inversely related to drainage density; as drainage density increases, water is more efficiently drained from the basin and the lag time decreases. Drainage density also gives the hydrograph a steeper falling limb following the storm event, through its impact on both overland flow and baseflow. The falling limb occurs after the peak of the hydrograph curve and is when overland flow is decreasing back to ambient levels. In higher drainage density systems, the overland flow reaches the channels more quickly, resulting in a narrower spread in the falling limb. Baseflow is the other contributor to the hydrograph. The peak of baseflow to the channels occurs after the quick-flow peak because groundwater flow is much slower than quick-flow, so the baseflow peak influences the shape of the falling limb. According to the proportionality put forth by Gregory and Walling, as drainage density increases, the contribution of baseflow to the falling limb of the hydrograph diminishes. During a storm event in a high drainage density basin, little water infiltrates into the ground, because water spends less time flowing over the surface of the catchment before exiting through the central channel. Because little water enters the ground as infiltration, baseflow contributes only a small part to the falling limb, which is thus quite steep. Conversely, a low drainage density system will have a shallower falling limb. According to Gregory and Walling's relation, the decrease in drainage density results in an increase in baseflow to the channels and a more gradual decrease in the hydrograph. 
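The "flashy" behavior described above can be illustrated with a linear-reservoir toy model. This is a sketch only: using the reservoir residence time as a stand-in for drainage density is an assumption, not a relation from the source:

```python
def route_pulse(residence_time, t_max=5.0, dt=0.01):
    """Route a unit pulse of effective rainfall through a linear
    reservoir (outflow Q = S / k).  A shorter residence time k,
    standing in here for a higher drainage density, gives a higher
    peak outflow and drains a larger fraction of the pulse by t_max."""
    storage = 1.0
    peak = 0.0
    for _ in range(int(t_max / dt)):
        q = storage / residence_time
        peak = max(peak, q)
        storage -= q * dt
    drained_fraction = 1.0 - storage
    return peak, drained_fraction

high_dd_peak, high_dd_drained = route_pulse(residence_time=0.5)  # flashy basin
low_dd_peak, low_dd_drained = route_pulse(residence_time=2.0)    # sluggish basin
```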
Formula for drainage density Montgomery and Dietrich (1989) determined an equation for drainage density by observing drainage basins in the Tennessee Valley, California. The equation expresses drainage density in terms of $w_s$, the mean source width; $\rho_w$, the density of water; $R_0$, the average precipitation rate; $W^*$, the width of the channel head; $\rho_s$, the saturated bulk density of the soil; $K_z$, the vertical saturated hydraulic conductivity; $\theta$, the slope at the channel head; and $\phi$, the angle of internal friction of the soil. The average precipitation term $R_0$ shows the dependence of drainage density on climate. With all other factors held constant, an increase in precipitation in the drainage basin results in an increase in drainage density. A decrease in precipitation, such as in an arid environment, results in a lower drainage density. The equation also shows the dependence on the physical characteristics and lithology of the drainage basin. Materials with low hydraulic conductivity, such as clay or solid rock, result in a higher drainage density system. Because of the low hydraulic conductivity, little water is lost to infiltration; that water instead exits the system as runoff and can contribute to erosion. In a basin with a higher vertical hydraulic conductivity, water infiltrates into the ground more effectively and does not contribute to saturated overland flow erosion, resulting in a less developed channel system and therefore lower drainage density. Relation to the mean annual flood Charles Carlston (1963) determined an equation expressing the mean annual flood runoff, $Q_{2.33}$, for a given drainage basin as a function of drainage density. Plotting data from 15 drainage basins, Carlston found a correlation between the two quantities and determined the following equation:

$Q_{2.33} = 1.3\,D_d^{2}$

where $Q_{2.33}$ is in units of cubic feet per second per square mile and $D_d$ is in units of inverse miles. 
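Carlston's regression is straightforward to evaluate. The 1.3 coefficient is commonly quoted for this relation but should be treated as an assumption of this sketch:

```python
def mean_annual_flood(drainage_density):
    """Carlston (1963): mean annual flood runoff as a function of
    drainage density, taken here as Q_2.33 = 1.3 * D_d**2, with Q in
    cubic feet per second per square mile and D_d in inverse miles.
    The 1.3 coefficient is an assumption of this sketch."""
    return 1.3 * drainage_density ** 2

# A basin with D_d = 10 mi^-1 yields a mean annual flood of ~130 cfs/mi^2;
# doubling D_d quadruples the predicted flood runoff.
q10 = mean_annual_flood(10.0)
q20 = mean_annual_flood(20.0)
```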
From that equation, it is concluded that a drainage basin will adjust itself through erosion such that the equation is satisfied. Effect of vegetation on drainage density The presence of vegetation in a drainage basin has multiple effects on the drainage density. Vegetation prevents landslides in the source areas of a basin that would otherwise result in channel formation, and it narrows the range of drainage density values found across different soil compositions. Vegetation stabilizes unstable source areas in a basin and prevents channel initiation. Plants stabilize the hillslope they grow on, protecting it against physical erosion processes such as rain splash, dry ravel, and freeze-thaw cycles. While there is significant variation between species, plant roots grow in underground networks that hold the soil in place. Because the soil is held in place, it is less prone to erosion from those physical processes. Hillslope diffusion has been found to decrease exponentially with vegetation cover. By stabilizing the hillslope in the source areas of a basin, vegetation makes channel initiation less likely: the erosional processes that may lead to channel initiation are prevented. The increased soil strength also protects against surface runoff erosion, which hinders channel evolution once it has begun. At the basin scale, there are fewer channels and the drainage density is lower than in an unvegetated system. The effect of vegetation on decreasing the drainage density is not unbounded, though. At high vegetative coverage, the effect of further increasing the coverage diminishes. This imposes an upper limit on the total reduction in drainage density that vegetation can produce. Vegetation also narrows the range of drainage density values for basins of various soil compositions. Unvegetated basins can have a large range of drainage densities, from low to high, because drainage density is related to the ease with which channels can form. 
According to Montgomery and Dietrich's equation, drainage density is a function of vertical hydraulic conductivity. Coarse-grained sediment like sand has a higher hydraulic conductivity and is predicted by the equation to form a relatively lower drainage density system than one formed in finer silt with its lower hydraulic conductivity. Forest fires play an indirect role in a basin's drainage density. Forest fires, whether naturally occurring or human-caused, destroy some or all of the existing vegetation, which removes the stability that the plants and their roots provide. Newly destabilized hillslope in the basin is then susceptible to channel formation processes, and the drainage density of the basin may increase until the vegetation recovers to its previous state. The type of plants and the associated depth and density of their roots determine how strongly the soil is held in place, as well as how effective a forest fire is at killing and removing the vegetation. Computer simulation experiments have indicated that drainage density will be higher in regions that have more frequent forest fires. Effect of climate change on drainage density Drainage density may also be influenced by climate change. Langbein and Schumm (1958) propose an equation for the rate of sediment discharge through a catchment as a function of precipitation rate:

$P = \frac{aR^{\alpha}}{1 + bR^{\gamma}}$

where P is sediment yield, R is the average effective rainfall, $\alpha \approx 2.3$, $\gamma \approx 3.33$, and a and b vary depending on units. The graph of this equation has a maximum between 10 and 14 inches of effective rainfall, with sharp declines on either side of the peak. At lower effective rainfalls, sediment discharge is lower because there is less rainfall to erode the hillslope. At effective rainfalls greater than 10-14 inches, the decrease in sediment yield is interpreted to be the result of increasing vegetation cover: increasing precipitation supports denser vegetation coverage, which prevents overland flow and other forms of physical erosion. This finding is consistent with Istanbulluoglu and Bras's findings on the effect of vegetation on erosion and channel formation. The Caineville Badlands The badlands of Caineville, Utah are often cited as a region of extremely high drainage density. The region features steep slopes, high relief, an arid climate, and a complete absence of vegetation. Because hillslope gradients are often greater than the angle of repose, the dominant erosional process in the Caineville badlands is mass wasting. There is no vegetation to stabilize the slopes, raise the effective angle of repose, and prevent mass wasting. 
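The Langbein and Schumm sediment-yield relation described above can be evaluated numerically to locate its peak. The coefficients a and b below are illustrative choices (the source notes they are unit-dependent), picked so the curve peaks in the quoted 10-14 inch range:

```python
def sediment_yield(rainfall, a=1.0, b=0.0005, alpha=2.3, gamma=3.33):
    """Langbein & Schumm (1958) form: P = a * R**alpha / (1 + b * R**gamma).
    The values of a and b here are illustrative only; they vary with
    the units used."""
    return a * rainfall ** alpha / (1.0 + b * rainfall ** gamma)

# Scan effective rainfall (inches) for the peak of the yield curve.
grid = [i / 10.0 for i in range(1, 400)]
peak_rainfall = max(grid, key=sediment_yield)
```

With these coefficients, the peak falls near 12.5 inches, with yield declining on either side, matching the qualitative shape described in the text.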
The regions below the angle of repose, however, are still generally steep, and hillslope diffusion, according to the following relation, remains a significant source of erosion:

$\frac{\partial z}{\partial t} = K_s \frac{\partial^2 z}{\partial x^2}$

where $K_s$ is a coefficient of diffusivity of the hillslope, z is the elevation of the hillslope, and x is horizontal distance. The range of drainage densities in the Caineville Badlands illustrates the complicated nature of drainage densities in low-precipitation environments. In a study of the region, Alan Howard (1996) found that increasing relief in different basins did not have a constant effect on the drainage density. For regions of relatively low relief, drainage density and relief are positively correlated. This holds until a threshold is reached at a higher relief ratio, beyond which further increases in relief are accompanied by a decrease in drainage density. Howard interpreted this as a result of an increase in the critical source area needed to support a channel: at a higher slope, erosion is faster and more efficiently funneled through fewer channels, and the smaller number of channels results in a smaller drainage density for the basin. A qualitative topographic map of a section of the Caineville Badlands shows the extensive drainage network in the arid environment. Relating to Montgomery and Dietrich's definition of the elementary parts of a drainage basin, the source area for each of the channels is relatively very small, resulting in a large number of channels forming. An image of the Caineville Badlands displays the lack of vegetation and the numerous channels. The Caineville Badlands are located in an arid environment, receiving an average of 125 mm of precipitation per year. This low precipitation contrasts with Montgomery and Dietrich's equation for drainage density, which predicts that drainage density should be low where rainfall is low. 
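The linear hillslope-diffusion relation can be integrated numerically. A minimal explicit finite-difference sketch (grid, coefficients, and the fixed-elevation boundary condition are illustrative assumptions):

```python
def diffuse(z, k_s, dx, dt):
    """One explicit time step of linear hillslope diffusion,
    dz/dt = K_s * d2z/dx2, with fixed-elevation boundaries.
    Stable for dt * k_s / dx**2 <= 0.5."""
    z_new = z[:]
    for i in range(1, len(z) - 1):
        z_new[i] = z[i] + dt * k_s * (z[i - 1] - 2.0 * z[i] + z[i + 1]) / dx ** 2
    return z_new

# A sharp ridge is smoothed: the crest lowers and the flanks fill in,
# while material is conserved away from the boundaries.
profile = [0.0, 0.0, 1.0, 0.0, 0.0]
smoothed = diffuse(profile, k_s=0.1, dx=1.0, dt=1.0)
```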
This behavior is more consistent with Langbein and Schumm's expression of erosion rate as a function of rainfall. According to that equation, erosion increases with precipitation up to the point where the precipitation can support stabilizing vegetation. The lack of vegetation in the Caineville Badlands implies that the rainfall rate of this region is below the critical amount at which vegetation can be supported. References External links Drainage Basin at the Learning Channel
Drainage density
https://en.wikipedia.org/wiki/Animal%20science
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses cover a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals. Education Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment, are also offered. Bachelor's degree At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs. 
Pre-veterinary emphasis Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis, such as Iowa State University, the University of Nebraska–Lincoln, and the University of Minnesota. This option provides knowledge of the biological and physical sciences, including nutrition, reproduction, physiology, and genetics. This can prepare students for graduate studies in animal science, veterinary school, or the pharmaceutical and animal science industries. Graduate studies In a Master of Science degree option, students take required courses in areas that support their main interest. These courses go beyond those normally required for a Bachelor of Science degree in the Animal Science major. In a Ph.D. degree program, students take courses related to their major that are more in-depth than those for the Master of Science degree, with an emphasis on research or teaching. Graduate studies in animal sciences are considered preparation for upper-level positions in production, management, education, research, or agri-services. Professional study in veterinary medicine, law, and business administration are among the programs most commonly chosen by graduates. Other areas of study include growth biology, physiology, nutrition, and production systems. Careers in Animal Science A variety of careers are available to someone with an animal science degree, including, but not limited to, academic researcher, animal nutritionist, animal physiotherapist, nature conservation officer, zookeeper, and zoologist. Areas of study Animal Behavior Animal behavior is the study of how animals interact with their environment, how they interact with each other socially, and how they may achieve understanding of their environment. Animal behavior is examined within the framework of its development, mechanism, adaptive value, and evolution. 
Animal Genetics Animal genetics is the study of an animal's genes and how they affect an animal's appearance, health, and function. The information gained from such studies is often applied to livestock breeding. Veterinary Medicine Veterinary medicine is a specialization within the field of medicine focusing on the diagnosis, prevention, control, and treatment of diseases that affect both wild and domesticated animals. There are three main medical positions within veterinary medicine: veterinarians, veterinary technicians, and veterinary assistants. See also American Registry of Professional Animal Scientists List of animal science degree-granting institutions Zoology, the study of all animals Veterinary science References External links "Career Information." American Society of Animal Science. ASAS, 2009. Web. 29 September 2011. http://www.asas.org "UNL Animal Science Department." University of Nebraska-Lincoln. UNL Institute of Agriculture and Natural Resources, 27 January 2015. "MSU Department of Animal Science." Michigan State University. Michigan State University Department of Animal Science, 28 December 2013. "Animal Industry Careers." Purdue University. Purdue University, 11 August 2005. Web. 5 October 2011. http://www.ansc.purdue.edu
Animal science
https://en.wikipedia.org/wiki/Seasoning%20%28cookware%29
Seasoning is the process of coating the surface of cookware with fat which is heated in order to produce a corrosion resistant layer of polymerized fat. It is required for raw cast-iron cookware and carbon steel, which otherwise rust rapidly in use, but is also used for many other types of cookware. An advantage of seasoning is that it helps prevent food sticking. Some cast-iron and carbon steel cookware is pre-seasoned by manufacturers to protect the pan from oxidation (rust), but will need to be further seasoned by the end-users for the cookware to become ready for best nonstick cooking results. To form a strong seasoning, the raw iron item is thoroughly cleaned, coated in a very thin layer of unsaturated fat or oil, and then heated until the bioplastic layer forms, and left to completely cool. Multiple layers are required for the best long-term results. Stainless steel and aluminium cookware do not require protection from corrosion, but seasoning reduces sticking, and can help with browning as the seasoning coating has high thermal emissivity. Other cookware surfaces are generally not seasoned. A seasoned surface is hydrophobic and highly attractive to oils and fats used for cooking. These form a layer that prevents foods, which typically contain water, from touching and cooking onto the hydrophilic metallic cooking surface underneath. These properties are useful when frying, roasting and baking. Methods of seasoning Food sticks easily to a bare metal cooking surface; it must either be oiled or seasoned before use. The coating known as seasoning is formed by a process of repeatedly layering extremely thin coats of oil on the cookware and oxidizing each layer with medium-high heat for a time. This process is known as "seasoning"; the color of the coating is commonly known as its "patina" - the base coat will darken with use. 
To season cookware (e.g., to season a new pan, or to replace damaged seasoning on an old pan), the following is a typical process: First the cookware is thoroughly cleaned to remove old seasoning, manufacturing residues, or a possible manufacturer-applied anti-corrosion coating, and to expose the bare metal. If it is not pre-seasoned, a new cast-iron skillet or dutch oven typically comes from the manufacturer with a protective coating of wax or shellac; otherwise it would rust. This needs to be removed before the cookware is used. An initial scouring with hot soapy water will usually remove the protective coating. Alternatively, for woks, it is common to burn off the coating over high heat (outside or under a vent hood) to expose the bare metal surface. For already-used cookware that is to be re-seasoned, the cleaning process can be more complex, involving rust removal and deep cleaning (with strong soap or lye, or by burning in a campfire or self-cleaning oven) to remove existing seasoning and build-up. The following steps are then repeated several times: applying a very thin layer of animal fat or cooking oil (ranging from vegetable oil to lard, including many common food-grade oils); polishing most of it off so that barely any remains, or alternatively using a seasoning paste; and heating the cookware to just below or just above the smoke point to generate a layer of seasoning. The precise details of the seasoning process differ from one source to another, and there is much disagreement regarding the correct oil to use. There is also no clear consensus about the best temperature and duration. Lodge Manufacturing uses a proprietary soybean blend in their base coats, as stated on their website, but states that all oils and fats can be used. Seasoning a cast-iron or carbon steel wok is a common process in Asia and Asian-American culture. 
While the vegetable oil method of seasoning is also used in Asia, a traditional seasoning process can also include the use of Chinese chives or scallions. Surface chemistry In conventional seasoning, the oil or fat is converted into a hard surface at or above the high temperatures used for cooking, analogous to the reaction of drying oils. When oils or fats are heated, multiple degradation reactions occur, including decomposition, autoxidation, thermal oxidation, polymerization, and cyclization. Often a cookware's seasoning is uneven; over time it will spread to cover the whole item. Heating the cookware (such as in a hot oven or on a stovetop) facilitates the oxidation of the iron; the fats and/or oils protect the metal from contact with the air during the reaction, which would otherwise cause rust to form. Some cast iron users advocate heating the cookware slightly before applying the fat or oil to ensure it is completely dry. The seasoned surface is hydrophobic and highly attractive to the oils and fats used for cooking (oleophilic). These form a layer that prevents foods, which typically contain water, from touching and cooking onto the hydrophilic metallic cooking surface underneath. The seasoned surface will deteriorate at the temperature where the coating breaks down. This is typically higher than the smoke point of the original oils and fats used to season the cookware. Thus old seasoning can be removed at a sufficiently high temperature (~500 °C), as found in oven self-cleaning cycles. High-temperature seasoning Some Chinese cookware is seasoned at around 450 °C, a much higher temperature than conventional seasoning. More akin to bluing, this type of seasoning mainly involves a chemical change of the iron pan itself rather than of the oil. When beef tallow is heated at this temperature, it evaporates on the iron surface and increases the partial pressure of O2 (oxygen gas) at the pot surface. 
This transport of oxygen encourages the formation of Fe3O4 nanoballs. The surface formed is, broadly speaking, hydrophobic and oleophilic, but is more versatile in that it temporarily turns hydrophilic on contact with high-water ingredients. Care Some food writers advise against using seasoned pans and Dutch ovens to cook foods containing tomatoes, vinegar, or other acidic ingredients, because these foods would eventually remove the protective layer created during the seasoning process. Tests conducted by America's Test Kitchen found that, while cooking a highly acidic tomato sauce for over 30 minutes produced a metallic taste, cooking acidic food in a well-seasoned pan for a short time is unlikely to have negative consequences. Cast iron pots are best suited to cooking food high in oil or fat, such as chicken, bacon, or sausage, or to deep frying. Cleaning (except prior to seasoning) is often carried out without the use of detergent. Some cookbook authors recommend only wiping seasoned cookware clean after each use, or using other cleaning methods such as a salt scrub or boiling water. The protective layer itself is not very susceptible to soaps, and many users do briefly use detergents and soaps. However, cast iron is very prone to rust, and the protective layer may have pinholes, so soaking for long periods is contraindicated, as the layer may start to flake off. Unlike commercial non-stick coatings such as Teflon, with which metal cooking utensils are not used because they damage the surface, seasoned surfaces tend to be self-reforming, so they allow the use of such utensils. These are, of course, much more effective at scraping off food than the softer utensils used with non-stick pans. Bluing In the process of bluing, an oxidizing chemical reaction on an iron surface selectively forms magnetite (Fe3O4), the black oxide of iron, as opposed to rust, the red oxide of iron (Fe2O3). 
Black oxide provides some protection against corrosion if also treated with a water-displacing oil to reduce wetting and galvanic action. Bluing is often used with carbon steel and cast iron pans in conjunction with seasoning. See also Non-stick pan References Works cited
Seasoning (cookware)
https://en.wikipedia.org/wiki/The%20Oil%20Drum
The Oil Drum was a website devoted to analysis and discussion of energy and its impact on society that described itself as an "energy, peak oil & sustainability research and news site". The Oil Drum was published by the Institute for the Study of Energy and Our Future, a Colorado non-profit corporation. The site was a resource for information on many energy and sustainability topics, including peak oil, and related concepts such as oil megaprojects, Hubbert linearization, and the Export Land Model. The Oil Drum had over 25 online contributors from all around the globe. In 2013, the site ceased publishing new articles. As of October 2016, the site continues to function as an archive. The Oil Drum was rated one of the top five sustainability blogs of 2007 by Nielsen Netratings, and was read by a diverse collection of public figures, including Roscoe Bartlett, Paul Krugman, James Howard Kunstler, Richard Rainwater, and Radiohead. In 2008, the site received the M. King Hubbert Award for Excellence in Energy Education from the U.S. chapter of the Association for the Study of Peak Oil and Gas (ASPO). The Oil Drum was started in March 2005 by Kyle Saunders (username "Prof. Goose"), a professor of political science at Colorado State University, and Dave Summers (username "Heading Out"), a professor of mining engineering at Missouri University of Science and Technology (then known as University of Missouri-Rolla). The site first rose to prominence following its coverage of the impact of Hurricanes Katrina and Rita on oil and gas production. The staff grew by dozens and became well known for rigorous, quantitative analysis of energy production and consumption. A notable example is former editor Stuart Staniford's analysis of the depletion of Saudi Arabia's Ghawar oil field (Depletion Levels in Ghawar). The site started out on the Blogger platform, moved to Scoop in August 2005, and to Drupal in December 2006. 
In 2013, The Oil Drum announced that it would stop publishing new content and would turn into an archive resource. Reasons cited for this change include server costs and a dwindling number of contributors of high-quality content. References External links The Oil Drum "The Oil Drum: $100 a Barrel Quickens the Beat" - Interview with The Oil Drum editor Nate Hagens, January 7, 2008. "The Oil Drum, peak oil and why some good blogs don’t last" - Retrospective look at The Oil Drum and the circumstances leading to its shutdown, August 29, 2013. Energy economics Economics websites Internet properties established in 2005 Internet properties disestablished in 2013 American environmental websites Science blogs
The Oil Drum
Environmental_science
548
30,862,748
https://en.wikipedia.org/wiki/Pre-exponential%20factor
In chemical kinetics, the pre-exponential factor or A factor is the pre-exponential constant in the Arrhenius equation, k = A exp(−Ea/(RT)), an empirical relationship between temperature and rate coefficient. It is usually designated by A when determined from experiment, while Z is usually reserved for collision frequency. The pre-exponential factor can be thought of as a measure of the frequency of properly oriented collisions. It is typically determined experimentally by measuring the rate constant at a particular temperature and fitting the data to the Arrhenius equation. The pre-exponential factor is generally not exactly constant, but rather depends on the specific reaction being studied and the temperature at which the reaction is occurring. The units of the pre-exponential factor A are identical to those of the rate constant and will vary depending on the order of the reaction. For a first-order reaction, it has units of s−1; for that reason, it is often called the frequency factor. According to collision theory, the frequency factor, A, depends on how often molecules collide when all concentrations are 1 mol/L and on whether the molecules are properly oriented when they collide. Values of A for some reactions can be found at Collision theory. According to transition state theory, A can be expressed in terms of the entropy of activation of the reaction. References IUPAC Gold Book definition of pre-exponential factor Chemical kinetics
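The fitting procedure described above can be sketched numerically: given an activation energy and a rate constant measured at one temperature, A follows directly from rearranging the Arrhenius equation. The numbers below are illustrative placeholders, not data from the text.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_k(A, Ea, T):
    """Rate constant from the Arrhenius equation k = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea / (R * T))

def fit_A(k, Ea, T):
    """Pre-exponential factor recovered from a measured k at temperature T."""
    return k * math.exp(Ea / (R * T))
```

Note that A carries the same units as k itself, so for a first-order reaction both are in s−1.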
Pre-exponential factor
Chemistry
282
40,355,292
https://en.wikipedia.org/wiki/Breguet%27s%20thermometer
Breguet's thermometer, also called a spiral thermometer, is a type of thermometer which uses the expansion of metal under heat to produce a measurement more sensitive, and with a higher range, than both mercury and air thermometers. Working on the principle of a bimetallic strip, it consists of a very slender strip of platinum soldered to a similar strip of silver, with a slip of gold soldered in between. The strips of soldered metals are curved into a helix (a). The upper extremity of the helix is fastened to a metallic support (c) and the lower extremity is connected to an index, which projects over a graduated circle (b). The expansion of silver with temperature is almost twice as great as that of platinum, with gold being somewhere in between. The result is that a temperature rise or fall will cause a corresponding twist in the spiral, moving the index. The slip of gold in between is there to prevent "sudden starts". References Thermometers
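The twist of the helix comes from the mismatch in thermal expansion between the two strips, which can be sketched as a differential strain. The expansion coefficients below are approximate handbook-order values chosen for illustration; they are not stated in the text.

```python
# Approximate linear thermal expansion coefficients (1/K), illustration only.
ALPHA_SILVER = 18.9e-6    # silver expands almost twice as much as platinum
ALPHA_PLATINUM = 8.8e-6

def differential_strain(delta_T):
    """Strain mismatch between the silver and platinum strips for a
    temperature change delta_T; this mismatch is what twists the helix."""
    return (ALPHA_SILVER - ALPHA_PLATINUM) * delta_T
```

A positive temperature change gives a positive mismatch, twisting the spiral one way; cooling reverses the sign and the twist.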
Breguet's thermometer
Technology,Engineering
213
39,383,425
https://en.wikipedia.org/wiki/Journal%20of%20Interpretation%20Research
The Journal of Interpretation Research is a biannual peer-reviewed academic journal covering research and discourse in the fields of environmental interpretation, heritage interpretation, and environmental education. It is published by the National Association for Interpretation. The editors-in-chief are Robert B. Powell (Clemson University, Clemson, SC, United States) and Marc J. Stern (Virginia Tech, Blacksburg, VA, United States). External links Environmental education History journals Cultural heritage Biannual journals Academic journals published by learned and professional societies Delayed open access journals Academic journals established in 1996 Education journals Environmental humanities journals Heritage interpretation Environmental social science journals
Journal of Interpretation Research
Environmental_science
124
468,924
https://en.wikipedia.org/wiki/For%20loop
In computer science, a for-loop or for loop is a control flow statement for specifying iteration. Specifically, a for-loop functions by running a section of code repeatedly until a certain condition has been satisfied. For-loops have two parts: a header and a body. The header defines the iteration and the body is the code executed once per iteration. The header often declares an explicit loop counter or loop variable, which allows the body to know which iteration is being executed. For-loops are typically used when the number of iterations is known before entering the loop. For-loops can be thought of as shorthands for while-loops which increment and test a loop variable. Various keywords are used to indicate the usage of a for loop: descendants of ALGOL use "for", while descendants of Fortran use "do". There are other possibilities; for example, COBOL uses PERFORM VARYING. The name for-loop comes from the word for, which is used as the reserved word (or keyword) in many programming languages to introduce a for-loop. The term in English dates to ALGOL 58 and was popularized in ALGOL 60. It is the direct translation of the earlier German für and was used in Superplan (1949–1951) by Heinz Rutishauser, who was involved in defining ALGOL 58 and ALGOL 60. The loop body is executed "for" the given values of the loop variable. This is more explicit in ALGOL versions of the for statement, where a list of possible values and increments can be specified. In Fortran and PL/I, the keyword DO is used for the same construct, which is accordingly named a do-loop; this is different from a do while loop.

FOR
A for-loop statement is available in most imperative programming languages. Even ignoring minor differences in syntax, there are many differences in how these statements work and the level of expressiveness they support.
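The shorthand claim above can be sketched concretely: the for-loop's header bundles the initialization, test, and increment that a while-loop spells out by hand. Function names here are illustrative, not from the text.

```python
def sum_squares_for(n):
    """Sum of squares 1..n using a for-loop."""
    total = 0
    for i in range(1, n + 1):   # header: declares and drives the loop counter i
        total += i * i          # body: executed once per iteration
    return total

def sum_squares_while(n):
    """The same computation as the while-loop the for-loop abbreviates."""
    total = 0
    i = 1                       # initialization
    while i <= n:               # test
        total += i * i          # body
        i += 1                  # increment
    return total
```

Both functions visit the same values of i in the same order, so they always agree.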
Generally, for-loops fall into one of four categories:

Traditional for-loops
The for-loop of languages like ALGOL, Simula, BASIC, Pascal, Modula, Oberon, Ada, MATLAB, OCaml, F#, and so on, requires a control variable with start- and end-values and looks something like this:

for i = first to last do statement
(* or just *)
for i = first..last do statement

Depending on the language, an explicit assignment sign may be used in place of the equal sign (and some languages require an explicit keyword even in the numerical case). An optional step-value (an increment or decrement ≠ 1) may also be included, although the exact syntaxes used for this differ somewhat between the languages. Some languages require a separate declaration of the control variable; some do not.

Another form was popularized by the C language. It requires three parts: the initialization (loop variant), the condition, and the advancement to the next iteration. All three parts are optional. This type of "semicolon loop" came from the B programming language and was originally invented by Stephen Johnson. In the initialization part, any variables needed are declared (and usually assigned values). If multiple variables are declared, they must all be of the same type. The condition part is evaluated before each iteration and exits the loop if it is false; if it is false before the first iteration, the loop body is never executed. If the condition is true, the lines of code inside the loop are executed. The advancement part is performed exactly once each time the loop body finishes; the loop then repeats if the condition evaluates to true. Here is an example of the C-style traditional for-loop in Java.

// Prints the numbers from 0 to 99 (and not 100), each followed by a space.
for (int i = 0; i < 100; i++) {
    System.out.print(i);
    System.out.print(' ');
}
System.out.println();

These loops are also sometimes named numeric for-loops when contrasted with foreach loops (see below).
Iterator-based for-loops
This type of for-loop is a generalization of the numeric range type of for-loop, as it allows for the enumeration of sets of items other than number sequences. It is usually characterized by the use of an implicit or explicit iterator, in which the loop variable takes on each of the values in a sequence or other data collection. A representative example in Python is:

for item in some_iterable_object:
    do_something()
    do_something_else()

where some_iterable_object is either a data collection that supports implicit iteration (like a list of employees' names) or may be an iterator itself. Some languages have this in addition to another for-loop syntax; notably, PHP has this type of loop under the name foreach, as well as a three-expression for-loop (see below) under the name for.

Vectorised for-loops
Some languages offer a for-loop that acts as if processing all iterations in parallel, such as the forall keyword in Fortran 95, which has the interpretation that all right-hand-side expressions are evaluated before any assignments are made, as distinct from the explicit iteration form. For example, in the for statement in the following pseudocode fragment, when calculating the new value for A(i), except for the first (with i = 2) the reference to A(i - 1) will obtain the new value that had been placed there in the previous step. In the for all version, however, each calculation refers only to the original, unaltered A.

for i := 2 : N - 1 do
    A(i) := [A(i - 1) + A(i) + A(i + 1)] / 3;
next i;

for all i := 2 : N - 1 do
    A(i) := [A(i - 1) + A(i) + A(i + 1)] / 3;

The difference may be significant. Some languages (such as PL/I and Fortran 95) also offer array assignment statements that enable many for-loops to be omitted. Thus pseudocode such as A := 0; would set all elements of array A to zero, no matter its size or dimensionality.
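The sequential-versus-parallel distinction above can be demonstrated with plain Python lists: in the sequential loop each step already sees the freshly overwritten neighbour, while a snapshot of the original array reproduces the "for all" semantics. Function names are illustrative.

```python
def smooth_sequential(a):
    """Sequential smoothing: A(i-1) has already been overwritten
    by the time A(i) is computed."""
    a = list(a)
    for i in range(1, len(a) - 1):
        a[i] = (a[i - 1] + a[i] + a[i + 1]) / 3  # reads the NEW a[i-1]
    return a

def smooth_forall(a):
    """'For all' smoothing: every right-hand side reads the original
    values, as if all iterations ran in parallel."""
    old = list(a)  # snapshot taken before any assignment
    a = list(a)
    for i in range(1, len(a) - 1):
        a[i] = (old[i - 1] + old[i] + old[i + 1]) / 3
    return a
```

On any input where a neighbour actually changes, the two results differ, which is exactly the significance the text points out.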
The example loop could be rendered as

A(2 : N - 1) := [A(1 : N - 2) + A(2 : N - 1) + A(3 : N)] / 3;

but whether that would be rendered in the style of the for-loop or the for-all-loop or something else may not be clearly described in the compiler manual.

Compound for-loops
Introduced with ALGOL 68 and followed by PL/I, this allows the iteration of a loop to be compounded with a test, as in

for i := 1 : N while A(i) > 0 do
    etc.

That is, a value is assigned to the loop variable i, and only if the while expression is true will the loop body be executed. If the result is false, the for-loop's execution stops short. Granted that the loop variable's value is defined after the termination of the loop, the above statement will find the first non-positive element in array A (and if there is no such element, its value will be N + 1); with suitable variations it can find the first non-blank character in a string, and so on.

Loop counters
In computer programming, a loop counter is a control variable that controls the iterations of a loop (a computer programming language construct). It is so named because most uses of this construct result in the variable taking on a range of integer values in some orderly sequence (for example, starting at 0 and ending at 10 in increments of 1). Loop counters change with each iteration of a loop, providing a unique value for each iteration. The loop counter is used to decide when the loop should terminate and when program flow should continue to the next instruction after the loop. A common identifier naming convention is for the loop counter to use the variable names i, j, and k (and so on if needed), where i would be the outermost loop, j the next inner loop, etc. The reverse order is also used by some programmers.
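The compound for-loop above (finding the first non-positive element of A, with the loop variable left at N + 1 if none exists) can be sketched in Python, which has no compound for-while form, by folding the range test and the while-part into one condition. The helper name is illustrative.

```python
def first_non_positive(a):
    """Emulates  for i := 1 : N while A(i) > 0 do ...
    Returns the 1-based index of the first non-positive element,
    or N + 1 if every element is positive."""
    n = len(a)
    i = 1
    while i <= n and a[i - 1] > 0:  # range test AND the compound while-test
        # loop body would go here
        i += 1
    return i  # defined after termination, as the text assumes
```

The return value mirrors the ALGOL 68 / PL/I behaviour the text describes: the loop variable's post-loop value carries the answer.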
This style is generally agreed to have originated from the early programming of Fortran, where variable names beginning with these letters were implicitly declared as having an integer type, and so were obvious choices for loop counters that were only temporarily required. The practice dates back further to mathematical notation, where indices for sums and multiplications are often i, j, etc. A variant convention is the use of duplicated letters for the index, ii, jj, and kk, as this allows easier searching and search-and-replacing than a single letter.

Example
An example of C code involving nested for loops, where the loop counter variables are i and j:

for (i = 0; i < 100; i++) {
    for (j = i; j < 10; j++) {
        some_function(i, j);
    }
}

Loops in C can also be used to print the reverse of a word:

for (i = 0; i < 6; i++) {
    scanf("%c", &a[i]);
}
for (i = 4; i >= 0; i--) {
    printf("%c", a[i]);
}

Here, if the input is apple, the output will be elppa.

Additional semantics and constructs
Use as infinite loops
The C-style for-loop is commonly the source of an infinite loop, since the fundamental steps of iteration are completely in the control of the programmer. When infinite loops are intended, this type of for-loop can be used (with empty expressions), such as:

for (;;)
    //loop body

This style is used instead of infinite while (1) loops to avoid a type-conversion warning in some C/C++ compilers. Some programmers prefer the more succinct for (;;) form over the semantically equivalent but more verbose while (true) form.

Early exit and continuation
Some languages may also provide other supporting statements, which when present can alter how the for-loop iteration proceeds. Common among these are the break and continue statements found in C and its derivatives. The break statement causes the innermost loop to be terminated immediately when executed. The continue statement moves at once to the next iteration without further progress through the loop body for the current iteration.
A for statement also terminates when a break, goto, or return statement within the statement body is executed.[Wells] Other languages may have similar statements or otherwise provide means to alter the for-loop progress; for example, in Fortran 95:

DO I = 1, N
    statements           ! Executed for all values of "I", up to a disaster if any.
    IF (no good) CYCLE   ! Skip this value of "I", and continue with the next.
    statements           ! Executed only where goodness prevails.
    IF (disaster) EXIT   ! Abandon the loop.
    statements           ! While good and no disaster.
END DO                   ! Should align with the "DO".

Some languages offer further facilities, such as naming the various loop statements so that with multiple nested loops there is no doubt as to which loop is involved. Fortran 95, for example:

X1: DO I = 1, N
    statements
    X2: DO J = 1, M
        statements
        IF (trouble) CYCLE X1
        statements
    END DO X2
    statements
END DO X1

Thus, when "trouble" is detected in the inner loop, CYCLE X1 (not X2) means that the skip will be to the next iteration for I, not J. The compiler will also check that each END DO has the appropriate label for its position: this is not just a documentation aid. The programmer must still code the problem correctly, but some possible blunders will be blocked.

Loop variable scope and semantics
Different languages specify different rules for what value the loop variable will hold on termination of its loop, and indeed some hold that it "becomes undefined". This permits a compiler to generate code that leaves any value in the loop variable, or perhaps even leaves it unchanged because the loop value was held in a register and never stored in memory. Actual behavior may even vary according to the compiler's optimization settings, as with the Honeywell Fortran66 compiler. In some languages (not C or C++), the loop variable is immutable within the scope of the loop body, with any attempt to modify its value being regarded as a semantic error.
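As a concrete instance of the rules above: Python specifies that the loop variable survives the loop with the last value it took, whereas, as the text notes, other languages leave it undefined or make it local to the loop. A quick sketch (function name illustrative):

```python
def loop_var_after_termination():
    """In Python the loop variable is an ordinary name in the enclosing
    scope, so it is well-defined after the loop ends."""
    for i in range(5):
        pass
    return i  # the last value taken by the loop variable
```

Relying on this is portable within Python but, per the discussion above, not across languages.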
Such modifications are sometimes a consequence of a programmer error, which can be very difficult to identify once made. However, only overt changes are likely to be detected by the compiler. Situations where the address of the loop variable is passed as an argument to a subroutine make checking very difficult, because the routine's behavior is in general unknowable to the compiler. Some examples in the style of Fortran:

DO I = 1, N
    I = 7                            ! Overt adjustment of the loop variable. Compiler complaint likely.
    Z = ADJUST(I)                    ! Function "ADJUST" might alter "I", to uncertain effect.
    normal statements                ! Memory might fade that "I" is the loop variable.
    PRINT (A(I), B(I), I = 1, N, 2)  ! Implicit for-loop to print odd elements of arrays A and B, reusing "I"...
    PRINT I                          ! What value will be presented?
END DO                               ! How many times will the loop be executed?

A common approach is to calculate the iteration count at the start of a loop (with careful attention to overflow, as in sixteen-bit integer arithmetic) and with each iteration decrement this count while also adjusting the value of I: double counting results. However, adjustments to the value of I within the loop will not change the number of iterations executed. Still another possibility is that the code generated may employ an auxiliary variable as the loop variable, possibly held in a machine register, whose value may or may not be copied to I on each iteration. Again, modifications of I would not affect the control of the loop, but now a disjunction is possible: within the loop, references to the value of I might be to the (possibly altered) current value of I or to the auxiliary variable (held safe from improper modification), and confusing results are guaranteed.
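Python's for-loop is iterator-based rather than count-based, but it exhibits the same property the text describes: reassigning the loop variable inside the body does not change how many iterations run, because the loop control does not read it back. A sketch (function name illustrative):

```python
def reassigned_counter_iterations():
    """Rebinding the loop variable in the body has no effect on loop
    control: the iterator, not the name i, drives the iteration."""
    iterations = 0
    for i in range(5):
        i = 99          # overt adjustment; ignored by the loop control
        iterations += 1
    return iterations
```

This corresponds to the "auxiliary variable held safe from improper modification" strategy: the iterator plays the role of the hidden control variable.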
For instance, within the loop a reference to element I of an array would likely employ the auxiliary variable (especially if it were held in a machine register), but if I is a parameter to some routine (for instance, a print-statement to reveal its value), it would likely be a reference to the proper variable I instead. It is best to avoid such possibilities.

Adjustment of bounds
Just as the index variable might be modified within a for-loop, so also may its bounds and direction, but to uncertain effect. A compiler may prevent such attempts; they may have no effect; or they might even work properly, though many would declare that to do so would be wrong. Consider a statement such as

for i := first : last : step do
    A(i) := A(i) / A(last);

If the approach to compiling such a loop were to evaluate first, last, and step, and to calculate an iteration count once only at the start, then if those items were simple variables and their values were somehow adjusted during the iterations, this would have no effect on the iteration count even if the element selected for division by A(last) changed.

List of value ranges
PL/I and ALGOL 68 allow loops in which the loop variable is iterated over a list of ranges of values instead of a single range. The following PL/I example will execute the loop with six values of i: 1, 7, 12, 13, 14, 15:

do i = 1, 7, 12 to 15;
    /*statements*/
end;

Equivalence with while-loops
A for-loop is generally equivalent to a while-loop:

factorial := 1
for counter from 2 to 5
    factorial := factorial * counter
counter := counter - 1
print counter + "! equals " + factorial

is equivalent to:

factorial := 1
counter := 1
while counter < 5
    counter := counter + 1
    factorial := factorial * counter
print counter + "! equals " + factorial

as demonstrated by the output of the variables.
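The factorial equivalence above can be checked with a runnable rendering: the for-loop form and the while-loop it expands to compute the same value. Function names are illustrative.

```python
def factorial_for(n):
    """Pseudocode 'for counter from 2 to n' rendered in Python."""
    factorial = 1
    for counter in range(2, n + 1):
        factorial *= counter
    return factorial

def factorial_while(n):
    """The equivalent while-loop: increment-then-test the counter by hand."""
    factorial = 1
    counter = 1
    while counter < n:
        counter += 1
        factorial *= counter
    return factorial
```

Both agree for every n, demonstrating the equivalence the pseudocode asserts.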
Timeline of the for-loop syntax in various programming languages
Given an action that must be repeated, for instance, five times, different languages' for-loops will be written differently. The syntax for a three-expression for-loop is nearly identical in all languages that have it, after accounting for different styles of block termination and so on.

1957: FORTRAN
Fortran's equivalent of the for loop is the DO loop, using the keyword do instead of for. The syntax of Fortran's DO loop is:

DO label counter = first, last, step
    statements
label statement

The following two examples behave equivalently to the three-argument for-loop in other languages, initializing the counter variable to 1, incrementing by 1 each iteration of the loop, and stopping at five (inclusive).

DO 9, ICOUNT = 1, 5, 1
    WRITE (6,8) ICOUNT
8   FORMAT( I2 )
9   CONTINUE

As of Fortran 90, a block-structured form was added to the language. With this, the end-of-loop label became optional:

do icounter = 1, 5
    write(*, '(i2)') icounter
end do

The step part may be omitted if the step is one. Example:

* DO loop example.
      PROGRAM MAIN
      INTEGER SUMSQ
      SUMSQ = 0
      DO 199 I = 1, 9999999
      IF (SUMSQ.GT.1000) GO TO 200
199   SUMSQ = SUMSQ + I**2
200   PRINT 206, SUMSQ
206   FORMAT( I2 )
      END

In Fortran 90, the GO TO may be avoided by using an EXIT statement.

* DO loop example.
program main
  implicit none
  integer :: sumsq
  integer :: i
  sumsq = 0
  do i = 1, 9999999
    if (sumsq > 1000) exit
    sumsq = sumsq + i**2
  end do
  print *, sumsq
end program

1958: ALGOL
ALGOL 58 introduced the for statement, using the same form as Superplan:

FOR Identifier = Base (Difference) Limit

For example, to print 0 to 10 incremented by 1:

FOR x = 0 (1) 10 BEGIN PRINT (FL) = x END

1960: COBOL
COBOL was formalized in late 1959 and has had many elaborations. It uses the PERFORM verb, which has many options. Originally all loops had to be out-of-line, with the iterated code occupying a separate paragraph. Ignoring the need for declaring and initializing variables, the COBOL equivalent of a for-loop would be:
PERFORM SQ-ROUTINE VARYING I FROM 1 BY 1 UNTIL I > 1000

SQ-ROUTINE.
    ADD I**2 TO SUM-SQ.

In the 1980s, the addition of in-line loops and structured programming statements such as END-PERFORM resulted in a for-loop with a more familiar structure:

PERFORM VARYING I FROM 1 BY 1 UNTIL I > 1000
    ADD I**2 TO SUM-SQ
END-PERFORM

If the PERFORM verb has the optional clause TEST AFTER, the resulting loop is slightly different: the loop body is executed at least once, before any test.

1964: BASIC
In BASIC, a loop is sometimes named a for-next loop.

10 REM THIS FOR LOOP PRINTS ODD NUMBERS FROM 1 TO 15
20 FOR I = 1 TO 15 STEP 2
30 PRINT I
40 NEXT I

The end-loop marker specifies the name of the index variable, which must correspond to the name of the index variable at the start of the for-loop. Some languages (PL/I, Fortran 95, and later) allow a statement label at the start of a for-loop that can be matched by the compiler against the same text on the corresponding end-loop statement. Fortran also allows the EXIT and CYCLE statements to name this text; in a nest of loops, this makes clear which loop is intended. However, in these languages the labels must be unique, so successive loops involving the same index variable cannot use the same text, nor can a label be the same as the name of a variable, such as the index variable for the loop.

1964: PL/I

do counter = 1 to 5 by 1; /* "by 1" is the default if not specified */
    /*statements*/;
end;

The LEAVE statement may be used to exit the loop. Loops can be labeled, and LEAVE may leave a specific labeled loop in a group of nested loops. Some PL/I dialects include the ITERATE statement to terminate the current loop iteration and begin the next.

1968: ALGOL 68
ALGOL 68 has what was considered the universal loop; the full syntax is:

FOR i FROM 1 BY 2 TO 3 WHILE i≠4 DO ~ OD

Further, the single iteration range could be replaced by a list of such ranges.
There are several unusual aspects of the construct:
only the DO ~ OD portion was compulsory, in which case the loop will iterate indefinitely;
thus the clause TO 100 DO ~ OD will iterate exactly 100 times.
The WHILE syntactic element allowed a programmer to break from a FOR loop early, as in:

INT sum sq := 0;
FOR i
WHILE
    print(("So far:", i, new line)); # Interposed for tracing purposes. #
    sum sq ≠ 70↑2 # This is the test for the WHILE #
DO
    sum sq +:= i↑2
OD

Subsequent extensions to the standard ALGOL 68 allowed the TO syntactic element to be replaced with UPTO and DOWNTO to achieve a small optimization. The same compilers also incorporated UNTIL, for late loop termination, and FOREACH, for working on arrays in parallel.

1970: Pascal

for Counter := 1 to 5 do
    (*statement*);

Decrementing (counting backwards) uses the keyword downto instead of to, as in:

for Counter := 5 downto 1 do
    (*statement*);

The numeric-range for-loop varies somewhat more.

1972: C, C++

for (initialization; condition; increment/decrement)
    statement

The statement is often a block statement; an example of this would be:

//Using for-loops to add numbers 1 - 5
int sum = 0;
for (int i = 1; i <= 5; ++i) {
    sum += i;
}

The ISO/IEC 9899:1999 publication (commonly known as C99) also allows initial declarations in for loops. All three sections in the for loop are optional, with an empty condition equivalent to true.

1972: Smalltalk

1 to: 5 do: [ :counter | "statements" ]

Contrary to other languages, in Smalltalk a for-loop is not a language construct but is defined in the class Number as a method with two parameters, the end value and a closure, using self as the start value.

1980: Ada

for Counter in 1 .. 5 loop
    -- statements
end loop;

The exit statement may be used to exit the loop. Loops can be labeled, and exit may leave a specifically labeled loop in a group of nested loops:

Counting:
for Counter in 1 .. 5 loop
    Triangle:
    for Secondary_Index in 2 .. Counter loop
        -- statements
        exit Counting;
        -- statements
    end loop Triangle;
end loop Counting;

1980: Maple
Maple has two forms of for-loop, one for iterating over a range of values and the other for iterating over the contents of a container. The value-range form is as follows:

for i from f by b to t while w do
    # loop body
od;

All parts except do and od are optional. The for i part, if present, must come first. The remaining parts (from f, by b, to t, while w) can appear in any order. Iterating over a container is done using this form of loop:

for e in c while w do
    # loop body
od;

The in c clause specifies the container, which may be a list, set, sum, product, unevaluated function, array, or object implementing an iterator. A for-loop may be terminated by od, end, or end do.

1982: Maxima CAS
In Maxima CAS, one can use also non-integer values:

for x:0.5 step 0.1 thru 0.9 do /* "Do something with x" */

1982: PostScript
The for-loop, written as initial increment limit proc for, initializes an internal variable, executes the body as long as the internal variable is not more than the limit (or not less, if the increment is negative), and, at the end of each iteration, increments the internal variable. Before each iteration, the value of the internal variable is pushed onto the stack.

1 1 6 {STATEMENTS} for

There is also a simple repeat loop. The repeat-loop, written as X proc repeat, repeats the body exactly X times.

5 { STATEMENTS } repeat

1983: Ada 83 and above

procedure Main is
    Sum_Sq : Integer := 0;
begin
    for I in 1 .. 9999999 loop
        if Sum_Sq <= 1000 then
            Sum_Sq := Sum_Sq + I**2;
        end if;
    end loop;
end;

1984: MATLAB

for n = 1:5
    % statements
end

After the loop, n would be 5 in this example. As i is used for the imaginary unit, its use as a loop variable is discouraged.
1987: Perl

for ($counter = 1; $counter <= 5; $counter++) { # implicitly or predefined variable
    # statements;
}
for (my $counter = 1; $counter <= 5; $counter++) { # variable private to the loop
    # statements;
}
for (1..5) { # variable implicitly called $_; 1..5 creates a list of these 5 elements
    # statements;
}
statement for 1..5; # almost the same (only 1 statement) with natural language order
for my $counter (1..5) { # variable private to the loop
    # statements;
}

"There's more than one way to do it" is a Perl programming motto.

1988: Mathematica
The construct corresponding to most other languages' for-loop is named Do in Mathematica.

Do[f[x], {x, 0, 1, 0.1}]

Mathematica also has a For construct that mimics the for-loop of C-like languages.

For[x = 0, x <= 1, x += 0.1,
    f[x]
]

1989: Bash

# first form
for i in 1 2 3 4 5
do
    # must have at least one command in a loop
    echo $i # just print the value of i
done

# second form
for (( i = 1; i <= 5; i++ ))
do
    # must have at least one command in a loop
    echo $i # just print the value of i
done

An empty loop (i.e., one with no commands between do and done) is a syntax error. If the above loops contained only comments, execution would result in the message "syntax error near unexpected token 'done'".

1990: Haskell
In Haskell 98, the function mapM_ maps a monadic function over a list, as in

mapM_ print [4, 3 .. 1]
-- prints
-- 4
-- 3
-- 2
-- 1

The function mapM collects each iteration result in a list:

result_list <- mapM (\ indx -> do { print indx; return (indx - 1) }) [1..4]
-- prints
-- 1
-- 2
-- 3
-- 4
-- result_list is [0,1,2,3]

Haskell 2010 adds the functions forM_ and forM, which are equivalent to mapM_ and mapM but with their arguments flipped:

forM_ [0..3] $ \ indx -> do
    print indx
-- prints
-- 0
-- 1
-- 2
-- 3

result_list <- forM ['a'..'d'] $ \ indx -> do
    print indx
    return indx
-- prints
-- 'a'
-- 'b'
-- 'c'
-- 'd'
-- result_list is ['a','b','c','d']

When compiled with optimization, none of the expressions above will create lists.
But, to save the space of the list if optimization is turned off, a forLoop_ function could be defined as

import Control.Monad as M

forLoop_ :: Monad m => a -> (a -> Bool) -> (a -> a) -> (a -> m ()) -> m ()
forLoop_ startIndx test next f = theLoop startIndx
  where
    theLoop indx = M.when (test indx) $ do
        f indx
        theLoop (next indx)

and used as

forLoop_ (0::Int) (< len) (+1) $ \indx -> do
    -- statements

1991: Oberon-2, Oberon-07, Component Pascal

FOR Counter := 1 TO 5 DO
    (* statement sequence *)
END

In the original Oberon language, the for-loop was omitted in favor of the more general Oberon loop construct. The for-loop was reintroduced in Oberon-2.

1991: Python
Python does not contain the classical for loop; rather, a foreach loop is used to iterate over the output of the built-in range() function, which returns an iterable sequence of integers.

for i in range(1, 6): # gives i values from 1 to 5 inclusive (but not 6)
    # statements
    print(i)
# if we want 6 we must do the following
for i in range(1, 6 + 1): # gives i values from 1 to 6
    # statements
    print(i)

Using range(6) would run the loop from 0 to 5.

1993: AppleScript

repeat with i from 1 to 5
    -- statements
    log i
end repeat

It can also iterate through a list of items, similar to what can be done with arrays in other languages:

set x to {1, "waffles", "bacon", 5.1, false}
repeat with i in x
    log i
end repeat

An exit repeat may also be used to exit a loop at any time. Unlike other languages, AppleScript currently has no command to continue to the next iteration of a loop.

1993: Lua

for i = start, stop, interval do
    -- statements
end

So, this code

for i = 1, 5, 2 do
    print(i)
end

will print:

1
3
5

For-loops can also loop through a table using ipairs() to iterate numerically through arrays and pairs() to iterate randomly through dictionaries.
Generic for-loop making use of closures:

for name, phone, address in contacts() do
    -- contacts() must be an iterator function
end

1995: ColdFusion Markup Language (CFML)
Script syntax
Simple index loop:

for (i = 1; i <= 5; i++) {
    // statements
}

Using an array:

for (i in [1,2,3,4,5]) {
    // statements
}

Using a list of string values:

loop index="i" list="1;2,3;4,5" delimiters=",;" {
    // statements
}

The above example is only available in the dialect of CFML used by Lucee and Railo.

Tag syntax
Simple index loop:

<cfloop index="i" from="1" to="5">
    <!--- statements --->
</cfloop>

Using an array:

<cfloop index="i" array="#[1,2,3,4,5]#">
    <!--- statements --->
</cfloop>

Using a "list" of string values:

<cfloop index="i" list="1;2,3;4,5" delimiters=",;">
    <!--- statements --->
</cfloop>

1995: Java

for (int i = 0; i < 5; i++) {
    //perform functions within the loop;
    //can use the statement 'break;' to exit early;
    //can use the statement 'continue;' to skip the current iteration
}

For the extended for-loop, see Foreach loop.

1995: JavaScript
JavaScript supports C-style "three-expression" loops. The break and continue statements are supported inside loops.

for (var i = 0; i < 5; i++) {
    // ...
}

Alternatively, it is possible to iterate over all keys of an array.

for (var key in array) { // also works for assoc. arrays
    // use array[key] ...
}

1995: PHP
This prints out a triangle of *

for ($i = 0; $i <= 5; $i++) {
    for ($j = 0; $j <= $i; $j++) {
        echo "*";
    }
    echo "<br />\n";
}

1995: Ruby

for counter in 1..5
    # statements
end

5.times do |counter| # counter iterates from 0 to 4
    # statements
end

1.upto(5) do |counter|
    # statements
end

Ruby has several possible syntaxes, including the above samples.

1996: OCaml
See expression syntax.
(* for_statement := "for" ident '=' expr ( "to" ∣ "downto" ) expr "do" expr "done" *)

for i = 1 to 5 do
  (* statements *)
done ;;

for j = 5 downto 0 do
  (* statements *)
done ;;

1998: ActionScript 3

for (var counter:uint = 1; counter <= 5; counter++) {
    //statement;
}

2008: Small Basic

For i = 1 To 10
  ' Statements
EndFor

2008: Nim

Nim has a foreach-type loop and various operations for creating iterators.

for i in 5 .. 10:
  # statements

2009: Go

for i := 0; i <= 10; i++ {
    // statements
}

2010: Rust

for i in 0..10 {
    // statements
}

2012: Julia

for j = 1:10
    # statements
end

See also Do while loop Foreach While loop Primitive recursive function General recursive function References Control flow Iteration in programming Programming language comparisons Articles with example Ada code Articles with example ALGOL 68 code Articles with example BASIC code Articles with example C code Articles with example C++ code Articles with example Fortran code Articles with example Haskell code Articles with example Java code Articles with example JavaScript code Articles with example Julia code Articles with example MATLAB/Octave code Articles with example OCaml code Articles with example Pascal code Articles with example Perl code Articles with example PHP code Articles with example Python (programming language) code Articles with example Ruby code Articles with example Rust code Articles with example Smalltalk code
For loop
Technology
7,383
15,382,697
https://en.wikipedia.org/wiki/Temahome
TemaHome is a furniture exporter based in Lisbon, Portugal. The company's main markets are Germany, Switzerland, Portugal, Spain, Denmark and the United States. The company exports to 40 countries. Company history Founded as Norema Portuguesa in 1981 as a joint venture of the Norwegian Norema SA and the Portuguese Mendes Godinho SA, it was meant to combine Norwegian high technology with affordable Portuguese costs to produce furniture. Between 1984 and 1994, the company manufactured a line of modular furniture for IKEA. In 1995, the Norwegian partners took complete ownership of the company and started to produce kitchen, bathroom and living-room furniture exclusively for the Norwegian market, where it maintained its own flagship stores. In 2000, TemaHome was created under the leadership of its new major shareholder, 3i. From this point on, the company was owned by four different stockholders: 3i (41.50%), the Spanish MCH (25.25%), the Portuguese ESCAPITAL (25.25%) and its management team (8%). Later, in 2006, a new ownership structure was defined by a management buy-in headed by a new management team, with ownership divided as follows: management team (30%), Lead Capital management fund (60%) and an individual investor, Miguel Calado (10%). By the end of 2007, the company employed over 170 workers, distributed between its Lisbon headquarters and a 16,500-square-metre production plant in the city of Tomar. TemaHome produces contemporary furniture and decorative accents combining modern lines with designs by Portuguese designers such as Filipe Alarcão. Product range The product range comprises three distinct lines: Essence, Style and Trends. Essence: basic essential furnishing, ready to assemble, similar to the IKEA style. Style: a more detailed, design-oriented line of products, superior in quality of materials. Trends: furnishings by international designers.
Design team The company has relied on Portuguese designer Filipe Alarcão to lead its team of in-house designers, which is composed of Maria Joao Maia, Délio Vicente and Nádia Soares. The company also works with external designers such as Miguel Vieira Baptista, Fernando Brizio, Jette Fyhn and Marco Sousa Santos. Awards 2009 Design Management Europe Award - Medium-sized Company 2007 Mobis award - Best contemporary manufacturer 2007 Leader company status - IAPMEI (Portugal institution for small and medium businesses) 2002 Portugal Best Small and Medium businesses award 2000 Portugal Best Small and Medium businesses award References Furniture companies of Portugal Companies based in Lisbon Design companies established in 1981 Portuguese companies established in 1981 Design companies Firms 3i Group companies Portuguese brands
Temahome
Engineering
586
60,677,420
https://en.wikipedia.org/wiki/NIMPLY%20gate
The NIMPLY gate is a digital logic gate that implements material nonimplication. Symbols A right-facing arrow with a line through it (↛) can be used to denote NIMPLY in algebraic expressions. Logically, it is equivalent to material nonimplication and to the logical expression A ∧ ¬B. Usage The NIMPLY gate is often used in synthetic biology and genetic circuits. See also IMPLY gate AND gate NOT gate NAND gate NOR gate XOR gate XNOR gate Boolean algebra (logic) Logic gates References Logic gates
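Since NIMPLY(A, B) reduces to A ∧ ¬B, the gate's behavior can be checked in a few lines of Python (a minimal sketch; the function name nimply is illustrative, not from the source):

```python
def nimply(a: bool, b: bool) -> bool:
    """Material nonimplication: true only when A is true and B is false."""
    return a and not b

# Full truth table of the NIMPLY gate (A AND NOT B)
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", nimply(a, b))
```

The only input row yielding True is A = True, B = False, which is exactly material nonimplication.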
NIMPLY gate
Technology
112
59,573,206
https://en.wikipedia.org/wiki/Hans%20Andrew%20Hansen
Hans Andrew Hansen is an American plant breeder, currently working for Walters Gardens in Zeeland, Michigan. Hansen is the former director of lab production and new plants for Shady Oaks Nursery in Minnesota, where he revolutionized tissue culture techniques for many genera including Hosta, Arisaema and variegated Agave. In 2009, Hansen moved to Michigan, where he became the Director of New Plants for Walters Gardens, one of North America's leading wholesale perennial growers. There he took over the perennial plant breeding program, which is now recognized as a leader in the industry. When Walters agreed to become the perennial supplier for the Proven Winners brand, Hansen's influence on perennials expanded further. He has made dramatic improvements in many plant genera, including Baptisia, Agastache, Clematis, Digiplexis, Helleborus, Heuchera, Heucherella, Hibiscus, Lagerstroemia, Mangave, Nepeta, Salvia, Sedum and Veronica. Hansen currently holds over 179 US plant patents. References External links Hansen's home garden in Michigan Plant breeding Scientists from Michigan Year of birth missing (living people) Living people American inventors
Hans Andrew Hansen
Chemistry
243
19,718,830
https://en.wikipedia.org/wiki/VHF%20Data%20Link
The VHF Data Link or VHF Digital Link (VDL) is a means of sending information between aircraft and ground stations (and in the case of VDL Mode 4, other aircraft). Aeronautical VHF data links use the band 117.975–137 MHz assigned by the International Telecommunication Union to Aeronautical mobile (R) service. There are ARINC standards for ACARS on VHF and other data links installed on approximately 14,000 aircraft and a range of ICAO standards defined by the Aeronautical Mobile Communications Panel (AMCP) in the 1990s. Mode 2 is the only VDL mode being implemented operationally to support Controller Pilot Data Link Communications (CPDLC). ICAO VDL Mode 1 The ICAO AMCP defined this Mode for validation purposes. It was the same as VDL Mode 2 except that it used the same VHF link as VHF ACARS so it could be implemented using analog radios before VHF Digital Radio implementation was completed. The ICAO AMCP completed validation of VDL Modes 1&2 in 1994, after which the Mode 1 was no longer needed and was deleted from the ICAO standards. ICAO VDL Mode 2 The ICAO VDL Mode 2 is the main version of VDL. It has been implemented in a Eurocontrol Link 2000+ program and is specified as the primary link in the EU Single European Sky rule adopted in January 2009 requiring all new aircraft flying in Europe after January 1, 2014 to be equipped with CPDLC. In advance of CPDLC implementation, VDL Mode 2 has already been implemented in approximately 2,000 aircraft to transport ACARS messages simplifying the addition of CPDLC. Networks of ground stations providing VDL Mode 2 service have been deployed by ARINC and SITA with varying levels of coverage. The ICAO standard for the VDL Mode 2 specifies three layers: the Subnetwork, Link, and Physical Layer. The Subnetwork Layer complies with the requirements of the ICAO Aeronautical Telecommunication Network (ATN) standard which specifies an end-to-end data protocol to be used over multiple air-ground and ground subnetworks including VDL. 
The VDL Mode 2 Link Layer is made up of two sublayers: a data link service and a media access control (MAC) sublayer. The data link protocol is based on the ISO standards used for dial-up HDLC access to X.25 networks. It provides aircraft with a positive link establishment to a ground station, and defines an addressing scheme for ground stations. The MAC protocol is a version of Carrier Sense Multiple Access (CSMA). The VDL Mode 2 Physical Layer specifies the use, in a 25 kHz-wide VHF channel, of a modulation scheme called differential 8-phase shift keying (D8PSK) with a symbol rate of 10,500 symbols per second. Since each D8PSK symbol carries 3 bits, the raw (uncoded) physical-layer bit rate is 31.5 kilobits per second. This required the implementation of VHF digital radios. ICAO VDL Mode 3 The ICAO standard for VDL Mode 3 defines a protocol, developed by the US FAA with support from MITRE, that provides aircraft with both data and digitized voice communications. The digitized voice support made the Mode 3 protocol much more complex than VDL Mode 2. The data and digitized voice packets go in Time Division Multiple Access (TDMA) slots assigned by ground stations. The FAA implemented a prototype system around 2003 but did not manage to convince airlines to install VDL Mode 3 avionics and in 2004 abandoned its implementation. ICAO VDL Mode 4 The ICAO standard for VDL Mode 4 specifies a protocol enabling aircraft to exchange data with ground stations and other aircraft. VDL Mode 4 uses a protocol (Self-organized Time Division Multiple Access, STDMA, invented by the Swede Håkan Lans in 1988) that allows it to be self-organizing, meaning no master ground station is required. This made it much simpler to implement than VDL Mode 3. In November 2001 this protocol was adopted by ICAO as a global standard. Its primary function was to provide a VHF-frequency physical layer for ADS-B transmissions.
However, it was overtaken as the link for ADS-B by the Mode S radar link operating in the 1,090 MHz band, which was selected as the primary link by the ICAO Air Navigation Conference in 2003. The VDL Mode 4 medium can also be used for air-ground exchanges. It is best used for short message transmissions between a large number of users, e.g. providing situational awareness, Digital Aeronautical Information Management (D-AIM), etc. European Air Traffic Management modernization trials have implemented ADS-B and air-ground exchanges using VDL Mode 4 systems. However, on air transport aircraft the operational implementations of ADS-B will use the Mode S link and those of CPDLC will use VDL Mode 2. Frequency use The European Frequency Management Manual of the International Civil Aviation Organization (ICAO) contains, among other things, the following regulations for the use of frequency channels for the VHF Data Link: ACARS: channels 131.525, 131.725 and 131.825 MHz. VDL Mode 2: 12 channels in the band 136.700–136.975 MHz with a spacing of 25 kHz are reserved for VDL Mode 2. Channel 136.975 serves as the Common Signalling Channel (CSC). VDL Mode 4: channel 136.925 can in some States alternatively be used for Mode 4. (This is however very rare.) Additional remarks: For VDL Mode 2 ground stations, a distinction is made between stations serving only airports (i.e. aircraft are on the ground, indicated as 'GND') and stations which serve airspaces (i.e. aircraft are airborne, indicated as 'AIR'). The Common Signalling Channel (CSC) is used for both AIR and GND. VDL Mode 2 ground stations which use adjacent channels need to be separated by at least 3 nautical miles (NM) in order to avoid interference. By this means the guard-band channels (unused channels in between channels used for VDL), which were used until 2022, can be avoided.
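The channel and bit-rate figures given above follow from simple arithmetic: a D8PSK symbol encodes 3 bits, and the 136.700–136.975 MHz band divides into 25 kHz slots. A short Python sketch (variable names are illustrative, not from the source) checks both numbers:

```python
# VDL Mode 2 physical layer: D8PSK has 8 phase states => 3 bits per symbol
bits_per_symbol = 3
symbol_rate = 10_500                  # symbols per second
bit_rate = symbol_rate * bits_per_symbol
print(bit_rate)                       # 31500 bit/s, i.e. 31.5 kbit/s

# VDL Mode 2 channel plan: 136.700-136.975 MHz at 25 kHz spacing
spacing_khz = 25
first_khz, last_khz = 136_700, 136_975
num_channels = (last_khz - first_khz) // spacing_khz + 1
print(num_channels)                   # 12 channels; 136.975 MHz is the CSC
```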
References External links Mode 2: Eurocontrol Link 2000+ Programme Eurocontrol Single European Sky Mode 4: GP&C Systems International AB Patent 5506587: Position Indicating System ("STDMA") Airbands Avionics Aircraft instruments
VHF Data Link
Technology,Engineering
1,296
38,903,404
https://en.wikipedia.org/wiki/Tricholoma%20subresplendens
Tricholoma subresplendens is a mushroom of the agaric genus Tricholoma. It was first described by American mycologist William Alphonso Murrill in 1914. See also List of North American Tricholoma References External links Fungi described in 1914 Fungi of North America subresplendens Taxa named by William Alphonso Murrill Fungus species
Tricholoma subresplendens
Biology
82
23,399,373
https://en.wikipedia.org/wiki/Do%20Not%20Fold%2C%20Spindle%20or%20Mutilate
Do Not Fold, Spindle or Mutilate is a 1971 American made-for-television mystery film directed by Ted Post, starring Myrna Loy, Helen Hayes, Mildred Natwick, Sylvia Sidney, John Beradino and Vince Edwards, with the screenplay adapted by John D. F. Black from a novel of the same name by Doris Miles Disney. It was broadcast as the ABC Movie of the Week on November 9, 1971. Plot Four middle-class Pasadena ladies in their late sixties habitually meet for lunch and exchange small talk with their waitress. They propose to create a fictitious young woman named Rebecca, and to submit her profile to a computer dating service. Several days after doing so they begin to receive letters from potential suitors, and derive additional amusement from reading them out loud. Concurrently, a young woman becomes alarmed by her date Mal's attempts to force himself upon her, and manages to escape into her home. His audible thoughts reveal that he has dangerous difficulty in relating to women. Mal turns his obsessive attentions to the fictitious "Rebecca", and not only sends a letter but tracks down the telephone number of "her" address. He calls and speaks to one of the old ladies, who impishly accepts a date with him at a local bar. In a spirit of fun, the four ladies wait at the bar to see what Mal looks like; however, when he arrives he mistakes a hooker, Brenda, for "Rebecca", and leaves with her. When they arrive at Brenda's apartment and she asks for money, an outraged Mal attacks and kills her. Once the ladies realize their actions have led to murder, they go to the police; however, they also investigate Mal themselves, which places them in grave danger... Cast Helen Hayes as Sophie Tate Curtis Myrna Loy as Evelyn Tryon Mildred Natwick as Shelby Saunders Sylvia Sidney as Elizabeth Gibson Vince Edwards as Mal Weston John Beradino as Detective Hallum Larry D.
Mann as Police Sergeant Lutz Barbara Davis as Brenda Paul Smith as Cutter Gary Vinson as Jonas Diane Shalet as Ruth Mellon Dodo Denney as Trudy Patrecia Wynand as Hostess Leonidas Ossetynski as Florist John Mitchum as Mr. Tubbs Margaret Wheeler as Mrs. Mellon Joe Haworth as Detective William Sumper as Man in Handcuffs Brief continuation in a similar form On December 16, 1972, 13 months after the ABC broadcast of Do Not Fold, Spindle or Mutilate on November 9, 1971, NBC reunited Hayes and Natwick in The Snoop Sisters, a two-hour television film about two aged sisters who write mysteries as well as solve crimes. Although they play different characters, the Snoop sisters' relationship clearly resembles the one-adventurous/one-sensible pairing of Do Not Fold's Helen Hayes and Myrna Loy, but with Natwick now cast as the level-headed sibling. Four additional 90-minute episodes of The Snoop Sisters were broadcast between December 1973 and March 1974. Reception In the 1989 edition of Leonard Maltin's TV Movies & Video Guide, the film was rated "Average", with the comment that the "way in which prank turns frightening could've been handled far, far better; otherwise, good performances." Steven H. Scheuer's Movies on TV and Videocassette (1986–87 edition) gave the movie 1½ stars (out of 4), with the opening sentence stating, "[T]his all-star comedy about murder tends to be a bit coy..." See also The Snoop Sisters List of television films produced for American Broadcasting Company References External links Do Not Fold, Spindle or Mutilate at CampBlood Homo Horror Features 1971 television films 1971 films 1970s mystery films ABC Movie of the Week Comedy mystery films Films about computing Films based on American novels Films based on mystery novels Films directed by Ted Post Films scored by Jerry Goldsmith Films set in Los Angeles 1970s American films American mystery television films
Do Not Fold, Spindle or Mutilate
Technology
809
46,352,440
https://en.wikipedia.org/wiki/Aby%20Warburg%20Prize
The Aby Warburg Prize (German Aby Warburg-Preis; formerly Aby M. Warburg-Preis) is a science prize of the city of Hamburg. It was established in 1979. Since 1980 it has been awarded by the Senate of the city for excellence in the humanities and social sciences. It is named after the Hamburg-born art historian Aby Warburg. The prize is worth 25,000 euros and is awarded every four years. Young scientists receive a scholarship of 10,000 euros. Award winners 1980 Jan Białostocki, art historian 1984 Meyer Schapiro, art historian 1988 Michael Baxandall, art historian 1992 Carlo Ginzburg, historian 1996 Claude Lévi-Strauss, anthropologist and ethnologist 2000 Natalie Zemon Davis, historian 2002 Rüdiger Campe, professor of German literature 2004 Horst Bredekamp, art historian 2008 Werner Hofmann, art historian 2012 Martin Warnke, art historian 2016 Sigrid Weigel, professor of German literature 2020 Georges Didi-Huberman, art historian 2024 Eva Illouz, sociologist See also List of social sciences awards References External links Kulturpreise: Aby Warburg-Preis Social sciences awards Awards established in 1979 1979 establishments in West Germany Humanities awards German awards Hamburg
Aby Warburg Prize
Technology
266
59,053,470
https://en.wikipedia.org/wiki/Roseimaritima
Roseimaritima is a genus of bacteria from the family Planctomycetaceae with three known species. Roseimaritima ulvae has been isolated from the green alga Ulva at Carreço in Portugal. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI). See also List of bacterial orders List of bacteria genera References Bacteria genera Monotypic bacteria genera Planctomycetota
Roseimaritima
Biology
108
27,145,862
https://en.wikipedia.org/wiki/Graph%20algebra%20%28social%20sciences%29
Graph algebra is a systems-centric modeling tool for the social sciences. It was first developed by Sprague, Przeworski, and Cortes as a hybridized version of engineering plots to describe social phenomena. Notes and references Algebra Social science methodology
Graph algebra (social sciences)
Mathematics
53
22,312,854
https://en.wikipedia.org/wiki/C19H28O3
{{DISPLAYTITLE:C19H28O3}} The molecular formula C19H28O3 may refer to: Hydroxy-DHEA 16-Hydroxydehydroepiandrosterone 7α-Hydroxy-DHEA 7β-Hydroxy-DHEA 15α-Hydroxy-DHEA Hydroxytestosterone 4-Hydroxytestosterone 11β-Hydroxytestosterone 11-Ketodihydrotestosterone 11-Ketoandrosterone Methylhydroxynandrolone
C19H28O3
Chemistry
119
41,571,562
https://en.wikipedia.org/wiki/Non-revenue%20track
Non-revenue track (or trackage), or a non-revenue route, is a section of track or transport route that is used neither to carry revenue-earning freight or goods nor for scheduled passenger services. The term refers mainly to sections of track or routes in public transport systems, such as rapid transit and tramway networks, but non-revenue track or routes can also be found in other transport systems. Non-revenue tracks may be used for revenue service during temporary reroutings. See also Dead mileage Network length (transport) References Public transport
Non-revenue track
Physics
117
5,156,882
https://en.wikipedia.org/wiki/Dimethoxymethane
Dimethoxymethane, also called methylal, is a colorless flammable liquid with a low boiling point, low viscosity and excellent dissolving power. It has a chloroform-like odor and a pungent taste. It is the dimethyl acetal of formaldehyde. Dimethoxymethane is soluble in three parts water and miscible with most common organic solvents. Synthesis and structure It can be manufactured by oxidation of methanol or by the reaction of formaldehyde with methanol. In aqueous acid, it is hydrolyzed back to formaldehyde and methanol. Due to the anomeric effect, dimethoxymethane has a preference toward the gauche conformation with respect to each of the C–O bonds, instead of the anti conformation. Since there are two C–O bonds, the most stable conformation is gauche-gauche, which is around 7 kcal/mol more stable than the anti-anti conformation, while the gauche-anti and anti-gauche conformations are intermediate in energy. Since it is one of the smallest molecules exhibiting this effect, which is of great interest in carbohydrate chemistry, dimethoxymethane is often used for theoretical studies of the anomeric effect. Applications Industrially, it is primarily used as a solvent and in the manufacture of perfumes, resins, adhesives, paint strippers and protective coatings. Another application is as a gasoline additive for increasing the octane number. Dimethoxymethane can also be used for blending with diesel. Reagent in organic synthesis Another useful application of dimethoxymethane is to protect alcohols as a methoxymethyl (MOM) ether in organic synthesis. Dimethoxymethane can be activated with phosphorus pentoxide in dichloromethane or chloroform. This method is preferred to the use of chloromethyl methyl ether (MOMCl). Phenols can also be MOM-protected using dimethoxymethane and p-toluenesulfonic acid.
Alternatively, MOMCl can be generated as a solution by treating dimethoxymethane with an acyl chloride in the presence of a Lewis acid catalyst like zinc bromide: MeOCH2OMe + RC(=O)Cl → MeOCH2Cl + RC(=O)OMe. Unlike the classical procedure, which uses formaldehyde and hydrogen chloride as starting materials, the highly carcinogenic side product bis(chloromethyl) ether is not generated. References External links Ether solvents Formals Methoxy compounds
Dimethoxymethane
Chemistry
565
31,465,993
https://en.wikipedia.org/wiki/List%20of%20rail%20accidents%20%281970%E2%80%931979%29
This is a list of rail accidents from 1970 to 1979. For a list of terrorist incidents involving trains, see List of terrorist incidents involving railway systems. 1970 February 1 – Argentina – Benavídez rail disaster: A Tucumán–Buenos Aires express train collided with a standing local train south of Benavídez railroad station north of Buenos Aires. 142 people were killed, 368 injured (though some sources state 236 killed). February 16 – Nigeria – A train crowded with Eid al-Kabir pilgrims derailed at Langalanga about 27 km southeast of Gusau, with several cars falling down an embankment. About 150 were killed; reportedly, 52 of the injured were killed in a truck crash on the way to hospital. March 22 – United States – Branford, Connecticut: A Penn Central freight train derailed on the Shore Line Division (now the Northeast Corridor) in Branford center. 25 of the 86 cars derailed, demolishing Branford Station (a passenger shed at the time) and tearing up a stretch of track. The cause of the accident was the breakage of an overheated axle on a car loaded with 83 tons (75 t) of steel, which dragged the derailed cars off with it. May 20 – United Kingdom – Audenshaw Junction rail accident: A Class 506 electric multiple unit derailed at Audenshaw Junction, Cheshire due to a set of points moving under it. Two people were killed and 13 were injured. The cause of the accident was irregular practices by a signalman. June 21 – United States – A Toledo, Peoria and Western Railway train derailed in downtown Crescent City, Illinois. A propane tank car ruptured and explosions caused fires that destroyed the city center. Over 60 firefighters and civilians were injured. July 15 – United Kingdom – A 4BEP electric multiple unit collided with a lorry on a level crossing in Kent, killing two people. August 9 – Spain – A train from coastal resorts to Bilbao collided with a freight train at Plentzia, killing 33 people and injuring about 200.
October 31 – India – A Mangalore Mail crashed into a stationary Cochin Mail at the Perambur station, Madras, killing 16 and injuring 108. December 6 – United States – 1970 Lehigh Valley Railroad derailment: Twenty-five cars of a Lehigh Valley Railroad freight train derailed in Le Roy, New York. A toxic chemical spill led to the scene becoming a United States Environmental Protection Agency Superfund site in 1999. Over 40 years later, the spill was briefly thought to have caused an illness outbreak in the town. December 31 – Iran – Two trains collided at Ardakan due to a signalman's error; a government source indicates 15 people killed, but journalists reported at least 70, with 130 injured. 1971 January 18 – Switzerland – Two commuter trains collided between Feldmeilen and Herrliberg, killing six people and injuring 17. February 9 – West Germany – Aitrang: The TEE 56 Bavaria, an SBB RAm TEE DMU traveling from Munich to Zürich, derailed while passing a curve shortly after Aitrang station. The maximum speed in the curve was 80 km/h; however, the train passed the curve at 130 km/h due to frozen water in the air brake. Shortly after the TEE derailed, a railbus hit the wreckage from the opposite direction. 28 people died, 42 were injured. February 14 – Yugoslavia – In a tunnel near Vranduk (now in Bosnia and Herzegovina), a fire broke out in the diesel-electric locomotive of a passenger train, which then spread to the other cars, killing at least 34. February 26 – United Kingdom – A passenger train made up of five 2HAP electric multiple units overran the buffers at a station in Kent and demolished the station building, killing one person and injuring ten. March 11 – Chile – Gualliguaica rail accident: A passenger train ran away on a downslope from Vicuña to Cresta de Gualliguaica and plunged into a ravine, killing 12.
May 4 – United Kingdom – On the Northern Line of the London Underground, a train entered the reversing siding at Tooting Broadway after offloading passengers and crashed into the end of the tunnel. The driver, who was killed, was apparently reading a book. May 27 – West Germany – Dahlerau train disaster: At Radevormwald, a special railbus service carrying schoolchildren and a freight train collided on the Wuppertal–Radevormwald single-track line near Dahlerau station. The local dispatcher claimed to have signalled a red light to the freight train, while the freight train engineer claimed to have seen a green one. Ultimately, the case was not resolved, as the dispatcher was killed in a car accident before legal hearings started. 41 people died and 25 were injured in the worst rail accident in West Germany during the Deutsche Bundesbahn era. The crash led to the phasing out of the Nachtbefehlsstab ("Nighttime Command Staff") and pressed DB to introduce radio communications on branch lines. June 10 – United States – 1971 Salem, Illinois, derailment: Amtrak train number 1, the southbound City of New Orleans, derailed due to a false flange on a flat wheel caused by a seized axle bearing. Eleven people died and over 150 were injured in Amtrak's first major incident. July 2 – United Kingdom – At Tattenhall Junction, the track shifted under a passing 10-car special school excursion train from Rhyl to Smethwick due to thermal stress and derailed the last three cars, killing two children and injuring 26 people. July 21 – West Germany – Rheinweiler: D 370 from Basel to Copenhagen passed a 75 km/h curve at about 140 km/h and derailed, destroying a detached house, killing 23 people and injuring 121. The suspected reason for the accident was a technical failure in the Class 103 engine's automatic cruise control mechanism, leading to the engine gaining too much speed. The cruise control was consequently disabled and restricted speed zones were equipped with PZB.
August 4 – Yugoslavia – An interurban train from Belgrade to Požarevac collided with a freight train between the Kasapovac and Lipe stations near Vrčin. 39 people were killed and 73 injured. August 28 – Switzerland – A train derailed in the Simplon Tunnel, killing five people. October 6 – United Kingdom – A 24-car freight train ran away on the downgrade from Beattock Summit toward Carlisle due to stopcocks in the air-brake line being closed. It collided with the train ahead, killing one crew member on that train. October 19 – United States – 20 cars of Missouri Pacific Railroad train No. 94 derailed in Houston; two tank cars loaded with vinyl chloride monomer were punctured, allowing the gas to escape and ignite; 45 minutes after the derailment a third tank car exploded and a fourth was "rocketed" some distance away; a fireman was killed and 50 were injured. October 26 – Japan – On the Kinki Nippon Railway, between Osaka and Nagoya, a head-on collision killed 23 people. 1972 January 9 – United Kingdom – An engineers' train overran signals and rear-ended an electric multiple unit in West Sussex. The train crew had failed to perform a brake check before departing and failed to discover that the isolation cocks between the two locomotives had not been opened. Fifteen people were injured. January 16 – Greece – Orfana rail disaster – A breakdown in communication between the stationmasters at Doxaras and Orfana caused an express train and a military relief train to collide in bad weather on the single-track line. The southbound diesel-hauled Acropolis Express and the northbound Number 121 Athens–Thessaloniki (known as posta) were allowed to proceed without first meeting at a passing loop. 21 people died and more than 40 were injured in one of the deadliest rail accidents in Greece. The stationmaster at Orfana, Nikolaos Gekas, was later sentenced to 5 years for his part in the disaster.
March 24 – United States – Gilchrest Road, New York crossing accident: a school bus was struck by a freight train at a level crossing in Rockland County, New York, near the New York City suburbs of Congers and Valley Cottage, killing five students. The bus driver was convicted of negligent homicide and sentenced to probation; the accident also led the U.S. to require school buses to stop at all grade crossings they encounter. March 31 – South Africa – A derailment on the approach to a bridge at Potgietersrus (now Mokopane), possibly due to sabotage, killed 38 people and injured 174. April 26 – India – A derailment north of Bangalore killed 21 and injured 37. May 8 – United Kingdom – Chester General rail crash: A freight train collided with empty coaching stock stopped at the station, as a result of a brake failure. The resulting fire caused significant damage to the station. June 3 – Poland – Ślesin (near Bydgoszcz): A Kołobrzeg–Warsaw train derailed on a fatigued rail, killing 12 people and injuring 26. June 4 – Bangladesh – A crowded passenger train from Khulna crashed into a stationary freight train at Jessore after the stationmaster threw the wrong switch, killing 76 people and injuring about 500. June 6 – Indonesia – In West Java, a train carrying pine wood rolled over near Cukanghaur train station, killing a crew member of the train and the station master, and injuring seven people. June 11 – United Kingdom – Eltham Well Hall rail crash: An excursion train took a bend at excessive speed and derailed at Eltham, London. The driver and five passengers were killed, and 126 people injured. The subsequent investigation established that the driver had been drinking. June 17 – France – After 110 years in service, the roof of a tunnel at Vierzy collapsed without warning. Passenger trains in both directions between Paris and Laon, both moving at speed, crashed into the rubble and each other. 108 were killed and 240 injured.
July 21 – Spain – A Madrid-to-Cádiz train collided near Jerez with a local train that failed to obey signals, killing 76 and injuring 103. August 8 – Pakistan – At Liaqatpur on the line between Lahore and Karachi, an express was misrouted onto a side track where a freight train was standing; there were between 38 and 65 deaths. September 29 – South Africa – All but the first-class cars of a 9-car passenger train from Cape Town to Bitterfontein derailed near Malmesbury due to excess speed; between 48 and about 100 people were killed. October 4 – Mexico – At Saltillo, a train carrying people from a festival at San Luis Potosí entered a downhill curve at about twice the speed limit and derailed nine cars, killing 208 people and injuring 700. The engineer was found to have been drinking and was nearly lynched. October 12 – United Kingdom – A freight train rear-ended an electric multiple unit in London due to an error by the freight driver, injuring twelve people. October 30 – United States – Chicago commuter rail crash: Two Illinois Central Railroad commuter trains collided after one train, having overshot a station stop, backed into the station, killing 45 people and injuring over 300. October 30 – East Germany – Schweinsburg-Culten: The driver of Ext 346 (Leipzig–Karlovy Vary) failed to notice a stop signal on a single-track stretch of line because of dense fog and collided with D 273 heading toward Berlin, killing 22 people and injuring 70. October 31 – Turkey – A passenger train and a train carrying oil collided at Eskişehir, starting a fire and causing several cars to go down a cliff; at least 30 were killed, and about 50 injured. November 6 – Japan – A fire broke out in the dining car of the Japanese National Railways' Kitaguni night train from Osaka to Aomori, in the Hokuriku Tunnel (between Tsuruga and Imajō on the Hokuriku Main Line). One crew member and 29 passengers were killed by carbon monoxide and 714 people were injured.
November 22 – Netherlands – Railway accident near Halfweg (1972): The locomotive of a work train derailed in North Holland.
December 16 – United Kingdom – Two electric multiple unit passenger trains collided at Copyhold Junction, West Sussex, after a driver misread signals, injuring 25 people.
1973
January 30 – Hungary – Kecskemét level crossing disaster: A bus disregarded crossing signals and booms and was crushed by a train, killing 37 people and injuring 18.
February 1 – Algeria – A derailment killed 35 people and injured 51.
February 6 – United States – Littlefield, Texas: A Santa Fe freight train crashed into a school bus, killing 7 children and injuring 16.
March 9 – United States – White Haven, Pennsylvania: A runaway train crashed into the Lehigh Valley Railroad engine house, damaging the southeast corner of the building.
March 18 – United States – East Palestine, Ohio: Amtrak's westbound Broadway Limited derailed in a heavy snowstorm, killing one Penn Central employee riding on a pass and injuring 19 passengers.
May 10 – Canada – Stettler, Alberta: A Canadian National Railways "Dayliner" struck a vehicle on the regular Edmonton-to-Drumheller line at an uncontrolled crossing, south and west of Stettler. All six teenagers in the car were killed.
July 10 – East Germany – Leipzig: The driver of a commuter train failed to notice a diversion, causing the train to derail and hit the signal box of Leipzig-Leutzsch railway station. Four people were killed and 25 injured.
August 27 – Poland – Radkowice (near Kielce): A Zakopane–Warsaw passenger train slammed into 20 cars that had broken away from a freight train, killing 16 people and injuring 24.
October 10 – United States – Bronx, New York: A commuter train from Brewster derailed on Penn Central's Harlem Division at 155th Street in the Mott Haven Yard. It crashed into a signal gantry, bringing it down on top of the train. There were two minor injuries.
December 17 – Brazil – An express passenger train collided head-on with a freight train on the outskirts of Salvador, Bahia, killing 18 people and injuring 40.
December 19 – United Kingdom – Ealing rail crash: An express passenger train derailed at Ealing Broadway station after a loose battery-box door on the locomotive hauling it struck point rodding, causing a set of points to move under the locomotive. Ten people were killed and 94 were injured.
1974
February 12 – United States – A Delaware and Hudson freight train derailed north of Oneonta, New York. 54 people (most of them firefighters) were injured when a punctured propane tank car and two other propane tank cars exploded. Several nearby homes were also damaged in the blast.
March 26 – Switzerland – A train derailed at Moutier, killing three people and injuring 13.
March 27 – Portuguese Mozambique – Magude train disaster: A passenger train collided with a freight train carrying petroleum products. 70 people were killed and 200 injured after the petroleum exploded, melting several passenger coaches.
July 19 – United States – Decatur, Illinois: A tank car containing isobutane collided with a Norfolk & Western boxcar, causing an explosion that killed seven people, injured 349, and caused $18 million in property damage.
August 12 – United States – Wake Forest, North Carolina: The Amtrak Silver Star derailed while navigating a curve, injuring 28.
August 13 – Ireland – Rosslare: Two passenger trains collided head-on at , injuring 15 people.
August 30 – Yugoslavia – Zagreb train disaster: An express train from Athens to Dortmund derailed at Zagreb railway station due to excessive speed, killing 152 passengers and injuring 90.
September 21 – United States – Houston, Texas: At Southern Pacific's Englewood Yard hump, two "jumbo" tank cars collided with an empty tank car, causing it to ride over the coupler of a loaded tank car and puncture the tank head. Butadiene spilled from the car and formed a vapor cloud, which dispersed over the area. After 2–3 minutes, the vapor exploded, killing one person and injuring 235.
October 21 – Ireland – Gormanston, County Meath: A passenger train ran away driverless and collided with another passenger train at ; a third passenger train was struck by the two wrecked trains. Two people were killed and 29 were injured.
October 31 – India – On an express train from Delhi to Calcutta, a fire started after a passenger's fireworks exploded at Mohanganj. Between 43 and 52 people were killed and about 60 injured.
1975
February 22 – Norway – Tretten train disaster: A passenger train from Oslo collided head-on with an express train from Trondheim, killing 27 people.
February 28 – United Kingdom – Moorgate tube crash: A driver failed to stop a London Underground train at Moorgate station, and it continued into the dead-end tunnel beyond, killing 43 people.
April 4 – Soviet Union – Žasliai railway disaster: A passenger train hit a tank car carrying fuel, derailed, and caught fire, killing 20 people and injuring 80. It remains the worst rail disaster in Lithuania.
May 22 – Morocco – A derailment near Kenitra killed at least 34 people.
June 6 – United Kingdom – Nuneaton rail crash: The driver of a London Euston-to-Glasgow train missed a temporary speed restriction at Nuneaton because the propane-burning lamps on the marker sign had run out of fuel. The train derailed, killing six people and injuring 38.
June 8 – West Germany – Two passenger trains collided head-on between Lenggries and Warngau due to errors by dispatchers at both stations. 41 people were killed (38 passengers, 2 drivers, 1 conductor) and 122 were injured.
June 12 – Canada – At Simcoe, Ontario, a freight train loaded with newly built cars derailed at a bridge over a road while passing through town. The locomotive plunged onto the pavement and burst into flames.
Two men in the cab died, a third was seriously injured, and blockage of the road effectively split the town in half.
July 22 – West Germany – A regional train passed a signal at danger and crashed head-on into a freight train that was crossing the tracks in Hamburg-Hausbruch, killing 11 passengers and seriously injuring 65. Investigations revealed that the distance between the signal and the place of danger was so short that automated braking would not have prevented the crash.
September 11 – United Kingdom – A diesel-electric multiple unit collided with an electric multiple unit at Bricklayers Arms Junction, London. One of the trains had passed a signal at danger, which had appeared to the driver to be showing a proceed aspect due to the reflection of sunlight. 62 people were injured.
September 29 – Argentina – Two passenger trains collided in Río Luján, killing 32 and injuring 100.
October 20 – Mexico – A Mexico City Metro train crashed into another at Metro Viaducto station. Between 31 and 39 people were killed, and between 71 and 119 were injured. It is considered the worst accident recorded on the system.
October 26 – United Kingdom – A passenger train came to a standstill at Lunan, Angus, due to the failure of the locomotive hauling it. Assistance was sent for, but an incorrect location was given, causing the rescue locomotive to rear-end the train at . One person was killed and 42 were injured.
December 12 – Canada – A Toronto Transit Commission bus, whose rear doors worked erratically due to a missing wire-retaining screw, was immobilized by its own safety features after the doors opened on a level crossing on St. Clair Avenue near Scarborough GO Station. Before all the passengers could be evacuated, a GO Train running express from Pickering to Toronto smashed into it, killing nine bus passengers and injuring about 20.
December 31 – Ireland – Near Gorey, County Wexford, a passenger train derailed on a bridge damaged by a vehicle crashing into it, killing five people and injuring 43.
1976
January 2 – United Kingdom – A light engine rear-ended a parcels train at Worcester Tunnel Junction, Worcestershire, killing both crew.
February – Switzerland – A head-on collision on the Yverdon–Ste-Croix line killed seven and injured 40.
February 7 – United States – 1976 Beckemeyer train accident: A Baltimore & Ohio freight train struck an overloaded pickup truck at an unprotected grade crossing in Beckemeyer, Illinois, killing 12 people, mostly children, and injuring three others.
May 4 – Netherlands – Schiedam train disaster: An international train collided with a local train, killing 24 people and injuring 11.
May 23 – South Korea – At a level crossing in Seoul, a train collided with a tanker truck carrying flammable liquid, killing 20 people.
July 23 – Switzerland – A Riviera Express train derailed at Brig, killing six people and injuring 32.
August 1 – Japan – Two trains collided head-on at Imabashi Station, injuring 210 people.
September 9 – South Africa – A local train to the black township of Daveyton rear-ended an express stopped for signals at Benoni; 31 people were killed, all on the local. The cause was never determined, although sabotage was ruled out.
September 20 – Yugoslavia – A passenger train and an express train (the Direct Orient Express) collided head-on on the Ljubljana–Postojna rail line, between the stations of Preserje and Notranje Gorice. 18 people were killed.
October 10 – Mexico – A two-car Chihuahua–Los Mochis passenger train collided head-on with a standing freight train and plunged to the bottom of an embankment near Copper Canyon, Chihuahua, killing 24 people and injuring 60. Some of the passengers were riding on the roof. Railroad employees reported seeing about 60 bodies; farm laborers' deaths were not counted by authorities.
November 3 – Poland – A Lublin–Wrocław express whose crew had fallen asleep rammed a standing passenger train at Julianka railroad station, Kielce, Świętokrzyskie, killing 25 people and injuring 79.
November 26 – United States – A rail defect (fissure) caused the derailment of several Burlington Northern cars carrying tanks of propane, butane, and fuel oil as the train was passing through the small town of Belt, Montana. Two people were killed and 22 injured.
1977
January 18 – Australia – Granville rail disaster: 83 people died after a train derailed and hit a bridge support, causing the bridge to collapse and crush part of the train. This is Australia's worst railway accident.
January 19 – India – Near Benares, a passenger train collided with a stationary train, killing 28 and injuring 78.
February 4 – United States – Chicago Loop derailment, Chicago: In the worst accident in the system's history, a Chicago Transit Authority elevated train motorman disregarded cab signals and rear-ended another train on the Loop curve at Wabash and Lake Streets during the evening rush hour. Eleven people were killed and over 180 injured as four cars of the rear train derailed and fell to the street below. The motorman was discovered to have marijuana in his possession, although it was never determined whether he was impaired in any way.
February 28 – Spain – A head-on collision of two crowded Catalan Railways suburban trains about from Barcelona killed 22 people.
May 30 – India – A flood-weakened bridge collapsed under a train about from Gauhati (now Guwahati). The locomotive and four cars fell into the river, and 85 people were killed.
June 27 – East Germany – Lebus train collision: Because of a dispatcher working under the influence of medication, at Booßen station, near Frankfurt (Oder), a holiday train from Zittau to Stralsund was diverted onto the branch line to Kietz, where it crashed, killing 28 people, including the crew of the holiday train; the dispatcher was jailed for five years.
July 9 – Poland – Psie Pole, near Wrocław: The Prague–Moscow express "Czech–Russian Friendship" collided with a locomotive that had passed a signal at danger; between 11 and 32 people were killed and between 15 and 40 injured.
September 5 – United Kingdom – Due to faulty wiring in a lineside relay cabinet, a mail train and a passenger train collided head-on at Farnley Junction, Leeds, West Yorkshire, killing two people and injuring fifteen.
September 8 – Egypt – As an 11-car Cairo-to-Aswan express passed Asyut at about , eight cars derailed. Newspapers reported 70 people killed, but official sources said only 25.
October 10 – India – Deluxe express passenger train 103 from Howrah to Amritsar rear-ended a freight train at Naini, killing at least 61 and injuring 151, 81 of them seriously.
November 12 – Mexico – A National Railroad passenger train collided with a gasoline truck at a grade crossing south of Ciudad Juárez, killing 37 people.
November 27 – East Germany – Bitterfeld: The boiler of a Class 01 steam engine exploded due to lack of water, killing nine and injuring 45.
1978
January 4 – Turkey – A head-on collision of two passenger trains at Esenköy killed at least 30 people and injured at least 100.
February 24 – United States – Waverly tank car explosion, Waverly, Tennessee: A Louisville and Nashville Railroad freight train derailed; one tank car containing liquefied petroleum gas exploded two days later, killing 16 people and injuring 43. Numerous buildings in downtown Waverly were destroyed or damaged by the blast and resulting fires.
February 25 – Argentina – A passenger train collided with a truck in Sa Pereira, Santa Fe, killing 55 and injuring 56.
April 15 – Italy – Due to a landslide, the locomotive of a Lecce–Milan train collided with a Bolzano–Rome train in Murazze di Vado, Bologna, and derailed, killing 48 people and injuring 76.
July 6 – United Kingdom – Taunton sleeping car fire, Taunton, England: A fire aboard a British Rail sleeping car travelling from Penzance to London Paddington killed 12 people. Investigation showed that the fire was caused by the careless placement of a plastic bag of linens against a heater in the car's vestibule.
July 16 – Australia – Queensland Rail (QR) train No. 242 (loaded with interstate fruit) lost control while travelling down the Cooroy–Eumundi Range at Cooroy, a 1:50 (2%) gradient. The locomotive and most of the wagons rolled and derailed on a right-hand curve. Some of the wagons landed on top of the derailed locomotive, which had crashed into a small embankment. The driver's assistant was killed and the driver was injured. The Cooroy–Eumundi range was later regraded to a less steep gradient.
September 10 – United States – Due to a hotbox, 15 cars of a Conrail freight train derailed at a grade crossing in Miamisburg, Ohio, demolishing a house and killing its three occupants. The ensuing investigation by the National Transportation Safety Board and the local police department resulted in the Montgomery County Coroner ruling the deaths homicides.
1979
January 4 – Turkey – An accident near Ankara killed 16 people.
January 9 – Turkey – A rear-end collision between two commuter trains near Ankara killed 30 people and injured about 100.
January 26 – Bangladesh – Near Chuadanga, a train derailed and overturned, killing at least 70 and injuring at least 300.
April 8 – United States – Louisville and Nashville Railroad freight train No. 403 derailed between Milligan, Florida, and Crestview, Florida. A punctured tank car then leaked anhydrous ammonia, injuring 14.
April 16 – United Kingdom – Paisley Gilmour Street accident: A head-on collision between two DMU trains after a starting signal was passed at danger in a case of "ding-ding, and away". Both drivers and five passengers were killed, and 67 passengers and a guard were injured. The rule change that led to the crash was reversed.
May 29 – Canada – A UAC TurboTrain operated by Via Rail on westbound service from Montréal to Toronto caught fire near Morrisburg, Ontario, after developing an oil leak. A third of the train was totally destroyed. This was the last major incident for the troubled TurboTrains, which were retired in 1982.
July 10 – Italy – Pompeii–Naples and Naples–Herculaneum commuter trains collided head-on in Cercola, near Mount Vesuvius, Naples, killing 14 people and injuring 70.
August 21 – Thailand – A head-on collision of freight and passenger trains at Taling Chan killed 52 people and injured about 200.
August 28 – Netherlands – Nijmegen train collision: Eight people died after two passenger trains collided head-on at Nijmegen.
September 13 – Yugoslavia – Stalać rail crash: At Stalać (now in Serbia), a goods train violated signals, possibly because the driver was asleep, and crashed head-on into a passenger train bound for Skopje, killing at least 60 people.
October 2 – United States – The Southwest Limited derailed at Lawrence, Kansas. Two people were killed and 69 were injured. The cause was excessive speed on a curve.
October 3 – Ireland – A passenger train and a freight train collided head-on at , County Wicklow, injuring 29 people.
October 12 – United States – Harvey, Illinois train collision: The Amtrak Shawnee collided head-on with a stopped Illinois Central Gulf freight train and derailed, killing two people and injuring 38.
October 22 – United Kingdom – Invergowrie rail accident: A semaphore signal failed to return completely to danger and was apparently misinterpreted as clear; the resulting accident killed five people.
October 30 – Djibouti – A freight train en route from Dire Dawa, Ethiopia, to Djibouti City, with passengers riding in its empty cars (unusually many because of Eid al-Adha), ran away due to brake failure and derailed at a bridge near Holhol station, which some cars smashed into. 63 people were killed and 90 injured, mostly women and children.
November 10 – Canada – Mississauga train derailment in Mississauga, Ontario: Tank cars containing propane and chlorine derailed due to a hot box, causing a propane fire that burned for days and released chlorine. More than 250,000 residents were evacuated from the city, the largest peacetime emergency evacuation in North American history until Hurricane Katrina in 2005.
November 16 – Ireland – , County Dublin: One passenger train collided with another, injuring 36 people.
December 3 – India – A derailment at Londa, Karnataka, killed 23 people and injured at least 12.
See also
List of road accidents – includes level crossing accidents
List of rail accidents in Canada
List of rail accidents in the United Kingdom
List of Russian rail accidents
Years in rail transport
External links
Railroad train wrecks 1907–2007
List of rail accidents (1970–1979)
HD 194012 (HR 7793; Gliese 789) is a star in the equatorial constellation Delphinus. It has an apparent magnitude of 6.15, making it visible to the naked eye under ideal conditions. The star is relatively close, at a distance of only 85 light years, but is receding with a heliocentric radial velocity of . HD 194012 has a stellar classification of F7 V, indicating that it is an ordinary F-type main-sequence star. It has 121% the mass of the Sun and is estimated to be a billion years old, spinning with a projected rotational velocity of . The star's diameter is 118% that of the Sun, and it shines with a luminosity of from its photosphere at an effective temperature of , giving it a yellow-white hue. HD 194012's metallicity is calculated to be 87% that of the Sun. A 2010 paper identified a candidate substellar companion away along a position angle of . HD 194012 has been examined for an infrared excess suggesting a debris disk, but none was found.
HD 194012