https://en.wikipedia.org/wiki/Pores%20of%20Kohn
The pores of Kohn (also known as interalveolar connections or alveolar pores) are discrete holes in the walls of adjacent alveoli. Cuboidal type II alveolar cells, which produce surfactant, usually form part of the aperture. Etymology The pores of Kohn take their name from the German physician and pathologist Hans Nathan Kohn (1866–1935), who first described them in 1893. Development They are absent in human newborns. They develop at 3–4 years of age, along with the canals of Lambert, during the thinning of the alveolar septa. Function The pores allow the passage of materials such as fluid and bacteria between alveoli, an important mechanism in the spread of infection in lobar pneumonia and in the spread of fibrin during the grey hepatisation phase of recovery from it. They also equalize the pressure in adjacent alveoli and, combined with increased distribution of surfactant, play an important role in preventing collapse of the lung. Unlike in adults, these inter-alveolar connections are poorly developed in children, which helps limit the spread of infection. This is thought to contribute to round pneumonia.
https://en.wikipedia.org/wiki/Short%20interspersed%20nuclear%20element
Short interspersed nuclear elements (SINEs) are non-autonomous, non-coding transposable elements (TEs) that are about 100 to 700 base pairs in length. They are a class of retrotransposons, DNA elements that amplify themselves throughout eukaryotic genomes, often through RNA intermediates. SINEs compose about 13% of the mammalian genome. The internal regions of SINEs originate from tRNA and remain highly conserved, suggesting positive pressure to preserve the structure and function of SINEs. While SINEs are present in many species of vertebrates and invertebrates, SINEs are often lineage specific, making them useful markers of divergent evolution between species. Copy number variation and mutations in the SINE sequence make it possible to construct phylogenies based on differences in SINEs between species. SINEs are also implicated in certain types of genetic disease in humans and other eukaryotes. In essence, short interspersed nuclear elements are genetic parasites which evolved very early in the history of eukaryotes to utilize protein machinery within the organism as well as to co-opt machinery from similarly parasitic genomic elements. The simplicity of these elements makes them remarkably successful at persisting and amplifying (through retrotransposition) within the genomes of eukaryotes. These "parasites", which have become ubiquitous in genomes, can be very deleterious to organisms, as discussed below. However, eukaryotes have been able to integrate short interspersed nuclear elements into different signaling, metabolic and regulatory pathways, and SINEs have become a great source of genetic variability. They seem to play a particularly important role in the regulation of gene expression and the creation of RNA genes. This regulation extends to chromatin re-organization and the regulation of genomic architecture. The different lineages, mutations, and activities among eukaryotes make short interspersed nuclear elements a useful tool in phylogenetic analy
https://en.wikipedia.org/wiki/Stagnation%20enthalpy
In thermodynamics and fluid mechanics, the stagnation enthalpy of a fluid is the static enthalpy of the fluid at a stagnation point. The stagnation enthalpy is also called total enthalpy. At a point where the flow does not stagnate, it corresponds to the static enthalpy the fluid would have if it were brought to rest from its velocity isentropically. That means all the kinetic energy is converted to internal energy without losses and is added to the local static enthalpy. When the potential energy of the fluid is negligible, the mass-specific stagnation enthalpy represents the total energy of a flowing fluid stream per unit mass. Stagnation enthalpy, or total enthalpy, is the sum of the static enthalpy (associated with the temperature and static pressure at that point) and the enthalpy associated with the dynamic pressure, or velocity. This can be expressed in a formula in various ways. Often it is expressed in specific quantities, where specific means mass-specific, to get an intensive quantity: h0 = h + v²/2, where: h0 is the mass-specific total enthalpy, in [J/kg]; h is the mass-specific static enthalpy, in [J/kg]; v is the fluid velocity at the point of interest, in [m/s]; v²/2 is the mass-specific kinetic energy, in [J/kg]. The volume-specific version of this equation (in units of energy per volume, [J/m^3]) is obtained by multiplying the equation by the fluid density ρ: ρh0 = ρh + ρv²/2, where: ρh0 is the volume-specific total enthalpy, in [J/m^3]; ρh is the volume-specific static enthalpy, in [J/m^3]; v is the fluid velocity at the point of interest, in [m/s]; ρ is the fluid density at the point of interest, in [kg/m^3]; ρv²/2 is the volume-specific kinetic energy, in [J/m^3]. The non-specific version of this equation, in which extensive quantities are used, is: H0 = H + mv²/2, where: H0 is the total enthalpy, in [J]; H is the static enthalpy, in [J]; m is the fluid mass, in [kg]; v is the fluid velocity at the point of interest, in [m/s]; mv²/2 is the kinetic energy, in [J]. The suffix '0' usually denotes the stagnation condition and is used as such here. Enthalpy is the energy associated with the temperature plus the ener
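A minimal numeric sketch of the mass-specific relation h0 = h + v²/2; the gas model and all numbers are assumptions for illustration, not values from the article:

```python
def stagnation_enthalpy(h_static, velocity):
    """Mass-specific total enthalpy h0 = h + v^2/2, in J/kg
    (potential energy neglected)."""
    return h_static + 0.5 * velocity ** 2

# Illustrative numbers (assumed): air modelled as an ideal gas with
# constant cp = 1005 J/(kg K) at 300 K, flowing at 100 m/s.
cp = 1005.0
h = cp * 300.0                       # static enthalpy relative to 0 K, J/kg
h0 = stagnation_enthalpy(h, 100.0)
print(h0 - h)                        # kinetic contribution: 5000.0 J/kg
```

Bringing the same stream to rest isentropically would therefore raise its enthalpy by 5 kJ/kg, equivalent to a temperature rise of about 5 K for this assumed cp.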
https://en.wikipedia.org/wiki/Alder%20Lake
Alder Lake is Intel's codename for the 12th generation of Intel Core processors, based on a hybrid architecture utilizing Golden Cove performance cores and Gracemont efficient cores. It is fabricated using Intel's Intel 7 process, previously referred to as Intel 10 nm Enhanced SuperFin (10ESF). The 10ESF process has a 10%–15% boost in performance over the 10SF used in the mobile Tiger Lake processors. Intel officially announced 12th Gen Intel Core CPUs on October 27, 2021. Intel officially announced 12th Gen Intel Core mobile CPUs and non-K series desktop CPUs on January 4, 2022. Intel officially announced the launch of the Alder Lake-P and -U series on February 23, 2022, and the Alder Lake-HX series on May 10, 2022. History Fabricated using Intel's Intel 7 process, which was previously referred to as Intel 10 nm Enhanced SuperFin (10ESF), Intel officially announced 12th Gen Intel Core CPUs on October 27, 2021. Intel then officially announced 12th Gen Intel Core mobile CPUs and non-K series desktop CPUs on January 4, 2022. It was further announced in January 2022 that Intel Alder Lake would use a hybrid architecture combining performance and efficiency cores, similar to ARM big.LITTLE. This was Intel's second hybrid architecture, after the mobile-only Lakefield released in June 2020. While the desktop Alder Lake processors were already on the market by January 2022, the mobile processors were not, although release was expected early that year. The starting cost was US$289 for the Core i5-12600K. Gracemont was the name given to the efficiency cores, while the Golden Cove performance cores were intended for demanding tasks such as gaming and video processing. First laptop tests were performed later that month, with PCMag positively reviewing the Core i9-12900HK, stating the H series represented "Intel's enthusiast line," with "the same hybrid designs" also in the P-series and U-series chips to come out later that year. In April 2022, press reported on "hints" that Intel was working on Alder Lake-X. Intel of
https://en.wikipedia.org/wiki/Upper%20critical%20solution%20temperature
The upper critical solution temperature (UCST) or upper consolute temperature is the critical temperature above which the components of a mixture are miscible in all proportions. The word upper indicates that the UCST is an upper bound to a temperature range of partial miscibility, or miscibility for certain compositions only. For example, hexane-nitrobenzene mixtures have a UCST of , so that these two substances are miscible in all proportions above but not at lower temperatures. Examples at higher temperatures are the aniline-water system at (at pressures high enough for liquid water to exist at that temperature), and the lead-zinc system at (a temperature where both metals are liquid). A solid state example is the palladium-hydrogen system which has a solid solution phase (H2 in Pd) in equilibrium with a hydride phase (PdHn) below the UCST at 300 °C. Above this temperature there is a single solid solution phase. In the phase diagram of the mixture components, the UCST is the shared maximum of the concave down spinodal and binodal (or coexistence) curves. The UCST is in general dependent on pressure. The phase separation at the UCST is in general driven by unfavorable energetics; in particular, interactions between components favor a partially demixed state. Polymer-solvent mixtures Some polymer solutions also have a lower critical solution temperature (LCST) or lower bound to a temperature range of partial miscibility. As shown in the diagram, for polymer solutions the LCST is higher than the UCST, so that there is a temperature interval of complete miscibility, with partial miscibility at both higher and lower temperatures. The UCST and LCST of polymer mixtures generally depend on polymer degree of polymerization and polydispersity. The seminal statistical mechanical model for the UCST of polymers is the Flory–Huggins solution theory. By adding soluble impurities the upper critical solution temperature increases and lower critical solution temperature
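The Flory–Huggins theory mentioned above gives a closed-form critical point for a monodisperse polymer of N segments in a small-molecule solvent: φc = 1/(1 + √N) and χc = ½(1 + 1/√N)². A short sketch of this standard textbook result (the function name is mine, not from the article):

```python
import math

def flory_huggins_critical(N):
    """Critical composition phi_c and interaction parameter chi_c for a
    monodisperse polymer of N segments in solvent (Flory-Huggins theory).
    Mixtures with chi below chi_c are miscible in all proportions."""
    phi_c = 1.0 / (1.0 + math.sqrt(N))
    chi_c = 0.5 * (1.0 + 1.0 / math.sqrt(N)) ** 2
    return phi_c, chi_c

# For N = 1 (a small-molecule mixture) the classic symmetric result follows:
print(flory_huggins_critical(1))      # (0.5, 2.0)
# For long chains chi_c approaches 1/2 and phi_c approaches 0:
print(flory_huggins_critical(10000))
```

Since χ typically decreases with temperature as roughly A + B/T, the temperature at which χ falls to χc corresponds to the UCST of the solution.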
https://en.wikipedia.org/wiki/Gunther%20Schmidt
Gunther Schmidt (born 1939, Rüdersdorf) is a German mathematician who also works in informatics. Life Schmidt began studying mathematics in 1957 at Göttingen University. His academic teachers were in particular Kurt Reidemeister, Wilhelm Klingenberg and Karl Stein. In 1960 he transferred to Ludwig-Maximilians-Universität München, where he studied functions of several complex variables with Karl Stein. Schmidt wrote a thesis on the analytic continuation of such functions. In 1962 Schmidt began work at TU München with students of Robert Sauer, in the beginning in labs and tutorials, later in mentoring and administration. Schmidt's interests turned toward programming when he collaborated with Hans Langmaack on rewriting and the braid group in 1969. Friedrich L. Bauer and Klaus Samelson were establishing software engineering at the university, and Schmidt joined their group in 1974. In 1977 he submitted his Habilitation "Programs as partial graphs". He became a professor in 1980. Shortly after that, he was appointed to hold the chair of the late Klaus Samelson for one and a half years. From 1988 until his retirement in 2004, he held a professorship at the Faculty for Computer Science of the Universität der Bundeswehr München. He was a classroom instructor for beginners' courses as well as special courses in mathematical logic, semantics of programming languages, construction of compilers, and algorithmic languages. Working with Thomas Ströhlein, he authored a textbook on relations and graphs, published in German in 1989 and in English in 1993 and again in 2012. In 2001 he became involved in a large project (17 nations) with the European Cooperation in Science and Technology: Schmidt was chairman of project COST 274 TARSKI (Theory and Application of Relational Structures as Knowledge Instruments). In 2014 a festschrift was organized to celebrate his 75th year. The calculus of relations had a relatively low profile among mathematical topics in the twentieth century, but Schm
https://en.wikipedia.org/wiki/Working%20set%20size
In computing, working set size is the amount of memory needed to compute the answer to a problem. In any computing scenario, but especially in high-performance computing where mistakes can be costly, this is a significant design criterion for a given supercomputer system, needed to ensure that the system performs as expected. When a program/algorithm computes the answer to a problem, it uses a set of data (input and intermediate data) to complete the work. For any given instance of the problem, the program has one such data set, which is called the working set. The working set size (WSS) is the size of this data set. The significance of this is that if the working set size is larger than the available memory in a virtual memory system, the memory manager must refer to the next level in the memory hierarchy (usually hard disk) to perform a swap operation, moving some memory contents from RAM to hard disk so the program can continue working on the problem. If this swapping goes on continuously, the program is slowed significantly. This phenomenon is known as thrashing.
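The working set can be sketched concretely as the set of distinct memory pages a program touches within a recent window of accesses (a Denning-style definition; the function, page size, and trace below are illustrative assumptions, not from the article):

```python
def working_set_size(page_accesses, window, page_size=4096):
    """Working set size in bytes over the trailing `window` accesses:
    number of distinct pages touched, times the page size."""
    recent = page_accesses[-window:]
    return len(set(recent)) * page_size

# Illustrative trace of page numbers touched by a program:
trace = [1, 2, 1, 3, 2, 2, 4, 1]
# Last 4 accesses touch pages {2, 4, 1}: 3 distinct pages of 4 KiB each.
print(working_set_size(trace, window=4))   # 12288
```

If this value exceeds the RAM available to the process, every pass over the working set forces swapping, which is the thrashing regime described above.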
https://en.wikipedia.org/wiki/Liouville%E2%80%93Arnold%20theorem
In dynamical systems theory, the Liouville–Arnold theorem states that if, in a Hamiltonian dynamical system with n degrees of freedom, there are also n independent, Poisson commuting first integrals of motion, and the energy level set is compact, then there exists a canonical transformation to action-angle coordinates in which the transformed Hamiltonian depends only upon the action coordinates and the angle coordinates evolve linearly in time. Thus the equations of motion for the system can be solved in quadratures if the simultaneous level-set conditions can be separated. The theorem is named after Joseph Liouville and Vladimir Arnold. History The theorem was proven in its original form by Liouville in 1853 for functions on ℝ²ⁿ with the canonical symplectic structure. It was generalized to the setting of symplectic manifolds by Arnold, who gave a proof in his textbook Mathematical Methods of Classical Mechanics, published in 1974. Statement Preliminary definitions Let (M, ω) be a 2n-dimensional symplectic manifold with symplectic structure ω. An integrable system on M is a set of n functions on M, labelled F₁, …, Fₙ, satisfying (Generic) linear independence: dF₁ ∧ ⋯ ∧ dFₙ ≠ 0 on a dense set Mutually Poisson commuting: the Poisson bracket {Fᵢ, Fⱼ} vanishes for any pair of values i, j. The Poisson bracket is the symplectic form evaluated on the Hamiltonian vector fields corresponding to each Fᵢ. In full, if X_H is the Hamiltonian vector field corresponding to a smooth function H, then for two smooth functions f, g, the Poisson bracket is {f, g} = ω(X_f, X_g). A point p ∈ M is a regular point if dF₁ ∧ ⋯ ∧ dFₙ does not vanish at p. The integrable system defines a function F = (F₁, …, Fₙ): M → ℝⁿ. Denote by L_c the level set of the functions Fᵢ at value c, or alternatively, L_c = F⁻¹(c). Now if M is given the additional structure of a distinguished function H, the Hamiltonian system (M, ω, H) is integrable if H can be completed to an integrable system, that is, there exists an integrable system F = (F₁ = H, F₂, …, Fₙ). Theorem If F = (F₁, …, Fₙ) is an integrable Hamiltonian system, and p is a regular point, the theorem characterizes the level set of the image of the regula
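A standard worked example (textbook material, not from the article) in which the theorem's conclusion can be verified directly is the one-dimensional harmonic oscillator, where n = 1 and the Hamiltonian itself is the required first integral:

```latex
% One degree of freedom: each energy level set H = E > 0 is a compact ellipse.
H(q,p) = \frac{p^2}{2m} + \frac{1}{2} m \omega^2 q^2

% Action variable: (1/2\pi) times the phase-space area enclosed by the orbit.
I = \frac{1}{2\pi} \oint p \, dq = \frac{E}{\omega}

% The transformed Hamiltonian depends on the action alone,
K(I) = \omega I ,

% so the conjugate angle evolves linearly in time, as the theorem asserts:
\dot{\theta} = \frac{\partial K}{\partial I} = \omega ,
\qquad \theta(t) = \omega t + \theta_0 .
```

The compact level sets here are the circles (ellipses) of constant energy, which are exactly the tori ℝ¹/ℤ¹ the theorem predicts for n = 1.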
https://en.wikipedia.org/wiki/Laser%20accelerometer
A laser accelerometer is an accelerometer that uses a laser to measure changes in velocity or direction. Mechanism It employs a frame with three orthogonal input axes and multiple proof masses. Each proof mass has a predetermined blanking surface. A flexible beam supports each proof mass. The flexible beam permits movement of the proof mass on its axis. A laser light source provides a light ray. The laser source has a transverse field characteristic with a central null intensity region. A mirror transmits a beam of light to a detector. The detector is positioned to be centered on the light ray and responds to the light's intensity to provide an intensity signal. The signal's magnitude is related to the intensity of the light ray. The proof mass blanking surface is centrally positioned within, and normal to, the light ray's null intensity region to provide increased blanking of the light ray in response to transverse movement of the mass on the input axis. In response to acceleration in the direction of the input axis, the proof mass deflects the beam and moves the blanking surface in a direction transverse to the light ray, partially blanking the light beam. A control responds to the intensity signal, applying a restoring force to return the proof mass to a central position, and provides an output signal proportional to the restoring force. Applications Accelerometers are added to many devices, including (smart) watches, phones and vehicles of all kinds. Accelerometers oriented vertically function as gravimeters, useful for mining. Other applications include medical diagnostics and satellite measurements for climate change studies. Lasers Basic lasers operate with a frequency range (line width) of some 500 mHz. The range is widened by small temperature changes and vibrations, and by imperfections in the laser cavity. The line width of a specialised scientific laser approaches 1 mHz. History 2021 An accelerometer was announced that used infrared light to measure t
https://en.wikipedia.org/wiki/Apparent%20molar%20property
In thermodynamics, an apparent molar property of a solution component in a mixture or solution is a quantity defined with the purpose of isolating the contribution of each component to the non-ideality of the mixture. It shows the change in the corresponding solution property (for example, volume) per mole of that component added, when all of that component is added to the solution. It is described as apparent because it appears to represent the molar property of that component in solution, provided that the properties of the other solution components are assumed to remain constant during the addition. However, this assumption is often not justified, since the values of apparent molar properties of a component may be quite different from its molar properties in the pure state. For instance, the volume of a solution containing two components identified as solvent and solute is given by V = n₀V°₀ + n₁Vφ, where V°₀ is the molar volume of the pure solvent before adding the solute (at the same temperature and pressure as the solution), n₀ is the number of moles of solvent, Vφ is the apparent molar volume of the solute, and n₁ is the number of moles of the solute in the solution. By dividing this relation by the molar amount of one component, a relation between the apparent molar property of a component and the mixing ratio of the components can be obtained. This equation serves as the definition of Vφ. The first term is equal to the volume of the same quantity of solvent with no solute, and the second term is the change of volume on addition of the solute. Vφ may then be considered as the molar volume of the solute if it is assumed that the molar volume of the solvent is unchanged by the addition of solute. However, this assumption must often be considered unrealistic, as shown in the examples below, so that Vφ is described only as an apparent value. An apparent molar quantity can be similarly defined for the component identified as solvent. Some authors have reported apparent molar volume
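Rearranging the defining relation, Vφ = (V − n₀V°₀)/n₁, lets the apparent molar volume be computed from a measured solution density; the sketch below uses illustrative (not measured) numbers loosely resembling NaCl in water, and all parameter names and values are my assumptions:

```python
def apparent_molar_volume(n_solvent, n_solute, M_solvent, M_solute,
                          rho_solution, Vm_solvent):
    """Apparent molar volume of the solute, Vphi = (V - n0*V0m)/n1,
    with the solution volume V obtained from total mass and density.
    SI units throughout: kg/mol, kg/m^3, m^3/mol."""
    V = (n_solvent * M_solvent + n_solute * M_solute) / rho_solution  # m^3
    return (V - n_solvent * Vm_solvent) / n_solute

# Illustrative (assumed) values: ~1 mol of salt in 55.5 mol of water.
Vphi = apparent_molar_volume(
    n_solvent=55.5, n_solute=1.0,
    M_solvent=0.018015, M_solute=0.05844,   # kg/mol
    rho_solution=1038.0,                    # kg/m^3, assumed solution density
    Vm_solvent=1.8069e-5)                   # m^3/mol, pure water near 25 C
print(Vphi * 1e6)                           # in cm^3/mol
```

Note that the result depends on the assumption that the solvent retains its pure-state molar volume, which is exactly the approximation the text above cautions about.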
https://en.wikipedia.org/wiki/Strong%20measure%20zero%20set
In mathematical analysis, a strong measure zero set is a subset A of the real line with the following property: for every sequence (εn) of positive reals there exists a sequence (In) of intervals such that |In| < εn for all n and A is contained in the union of the In. (Here |In| denotes the length of the interval In.) Every countable set is a strong measure zero set, and so is every union of countably many strong measure zero sets. Every strong measure zero set has Lebesgue measure 0. The Cantor set is an example of an uncountable set of Lebesgue measure 0 which is not of strong measure zero. Borel's conjecture states that every strong measure zero set is countable. It is now known that this statement is independent of ZFC (the Zermelo–Fraenkel axioms of set theory with the axiom of choice, which is the standard axiom system assumed in mathematics). This means that Borel's conjecture can neither be proven nor disproven in ZFC (assuming ZFC is consistent). Sierpiński proved in 1928 that the continuum hypothesis (which is now also known to be independent of ZFC) implies the existence of uncountable strong measure zero sets. In 1976 Laver used a method of forcing to construct a model of ZFC in which Borel's conjecture holds. These two results together establish the independence of Borel's conjecture. The following characterization of strong measure zero sets was proved in 1973: A set A ⊆ ℝ has strong measure zero if and only if A + M ≠ ℝ for every meagre set M ⊆ ℝ. This result establishes a connection to the notion of strongly meagre set, defined as follows: A set S ⊆ ℝ is strongly meagre if and only if S + N ≠ ℝ for every set N ⊆ ℝ of Lebesgue measure zero. The dual Borel conjecture states that every strongly meagre set is countable. This statement is also independent of ZFC.
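A one-line verification (standard, not from the article) of the claim that every countable set has strong measure zero: enumerate A = {a₁, a₂, …} and spend the n-th permitted length on the n-th point.

```latex
% Given any sequence (\varepsilon_n) of positive reals, cover the n-th point
% by an interval strictly shorter than \varepsilon_n:
I_n = \left( a_n - \tfrac{\varepsilon_n}{3},\; a_n + \tfrac{\varepsilon_n}{3} \right),
\qquad |I_n| = \tfrac{2\varepsilon_n}{3} < \varepsilon_n,
\qquad A \subseteq \bigcup_{n} I_n .
```

The same argument fails for the Cantor set: no enumeration exists, and one can choose (εn) shrinking fast enough that no sequence of such intervals covers it.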
https://en.wikipedia.org/wiki/Exclaimer
Exclaimer is a privately held UK-based information technology company owned by Insight Partners. It develops, sells and provides support for a suite of email utilities and cloud computing technologies designed for adding disclaimers, branding, and regulatory compliance to corporate email via personalized email signatures. Its products are designed to work with Microsoft Exchange Server, Office 365 and Google Workspace. The company's headquarters is located in Farnborough, United Kingdom. The company name, "Exclaimer", comes from a combination of "Exchange" and "disclaimer". History The company was founded in 2001 by Andrew Millington and Christopher Crawshay. Exclaimer became an incorporated company in October 2003. The first Exclaimer products allowed organizations to apply legally compliant disclaimers to corporate email sent by Microsoft Exchange Server and Microsoft Outlook. These were later enhanced to allow for more complex HTML to ensure consistent corporate branding. On 6 December 2016, Livingbridge invested £23 million in Exclaimer. The investment was advised by PricewaterhouseCoopers and was the last primary investment from Livingbridge 5, the firm's £360m fund raised in 2012. In 2020, Exclaimer appointed Heath Davies as CEO. On 7 December 2020, Insight Partners took a majority stake in Exclaimer, investing over $133 million with participation from Farview Equity Partners and existing investor Livingbridge. On 9 February 2021, Exclaimer completed the acquisition of Customer Thermometer, a customer satisfaction survey platform. On 11 October 2021, Exclaimer appointed Marco Costa as its new chief executive after Heath Davies stood down. Company Exclaimer is a Microsoft Gold Certified Partner and a Google Cloud Partner. The company was named in The Sunday Times WorldFirst SME Export Track 100 of the UK's small and medium-sized private companies with the fastest-growing international sales. It was also named in the Financial Times FT 1000:
https://en.wikipedia.org/wiki/Timoshenko%E2%80%93Ehrenfest%20beam%20theory
The Timoshenko–Ehrenfest beam theory was developed by Stephen Timoshenko and Paul Ehrenfest early in the 20th century. The model takes into account shear deformation and rotational bending effects, making it suitable for describing the behaviour of thick beams, sandwich composite beams, or beams subject to high-frequency excitation when the wavelength approaches the thickness of the beam. The resulting equation is of 4th order but, unlike Euler–Bernoulli beam theory, there is also a second-order partial derivative present. Physically, taking into account the added mechanisms of deformation effectively lowers the stiffness of the beam; the result is a larger deflection under a static load and lower predicted eigenfrequencies for a given set of boundary conditions. The latter effect is more noticeable for higher frequencies as the wavelength becomes shorter (in principle comparable to the height of the beam or shorter), and thus the distance between opposing shear forces decreases. The rotary inertia effect was introduced by Bresse and Rayleigh. If the shear modulus of the beam material approaches infinity (so the beam becomes rigid in shear), and if rotational inertia effects are neglected, Timoshenko beam theory converges towards ordinary beam theory. Quasistatic Timoshenko beam In static Timoshenko beam theory without axial effects, the displacements of the beam are assumed to be given by u_x(x,y,z) = −z·φ(x), u_y(x,y,z) = 0, u_z(x,y,z) = w(x), where (x,y,z) are the coordinates of a point in the beam, (u_x, u_y, u_z) are the components of the displacement vector in the three coordinate directions, φ is the angle of rotation of the normal to the mid-surface of the beam, and w is the displacement of the mid-surface in the z-direction. The governing equations are the following coupled system of ordinary differential equations: d²/dx²(EI dφ/dx) = q(x), dw/dx = φ − (1/(κAG)) d/dx(EI dφ/dx). The Timoshenko beam theory for the static case is equivalent to the Euler–Bernoulli theory when the last term above is neglected, an approximation that is valid when 3EI/(κL²AG) ≪ 1, where L is the length of the beam. A is the c
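As a hedged illustration of the shear correction, a standard textbook result (not taken from the article) for an end-loaded cantilever adds a shear term P·L/(κAG) to the Euler–Bernoulli tip deflection P·L³/(3EI); all numerical values below are assumptions:

```python
def cantilever_tip_deflection(P, L, E, I, kappa, A, G, shear=True):
    """Tip deflection of a cantilever with end load P.
    Euler-Bernoulli term P*L^3/(3*E*I); the Timoshenko correction
    adds the shear contribution P*L/(kappa*A*G)."""
    w = P * L ** 3 / (3.0 * E * I)
    if shear:
        w += P * L / (kappa * A * G)
    return w

# Illustrative steel beam (assumed values): 1 kN end load, 1 m span,
# 50 mm x 50 mm square section, kappa = 5/6 for a rectangle.
E, G = 210e9, 81e9
b = 0.05
A, I = b * b, b * b ** 3 / 12.0
w_eb = cantilever_tip_deflection(1e3, 1.0, E, I, 5 / 6, A, G, shear=False)
w_t = cantilever_tip_deflection(1e3, 1.0, E, I, 5 / 6, A, G, shear=True)
print(w_t / w_eb)   # only slightly above 1: shear matters little when slender
```

Shortening the assumed span toward the section depth makes the ratio grow, matching the text's point that shear deformation dominates for thick beams.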
https://en.wikipedia.org/wiki/Bragg%E2%80%93Gray%20cavity%20theory
Bragg-Gray cavity theory relates the radiation dose in a cavity volume of material g to the dose that would exist in a surrounding medium w in the absence of the cavity volume. It was developed in 1936 by British scientists Louis Harold Gray, William Henry Bragg, and William Lawrence Bragg. Most often, material g is assumed to be a gas; however, Bragg-Gray cavity theory applies to any cavity volume (gas, liquid, or solid) that meets the following Bragg-Gray conditions. The dimensions of the cavity containing g are small with respect to the range of the charged particles striking the cavity, so that the cavity does not perturb the charged particle field. That is, the cavity does not change the number, energy, or direction of the charged particles that would exist in w in the absence of the cavity. The absorbed dose in the cavity containing g is deposited entirely by charged particles crossing it. When the Bragg-Gray conditions are met, then D_w = D_g · (S̄/ρ)_w,g, where D_w is the dose to material w (SI unit gray), D_g is the dose to the cavity material g (SI unit gray), and (S̄/ρ)_w,g is the ratio of the mass-electronic stopping powers (also known as mass-collision stopping powers) of w and g averaged over the charged particle fluence crossing the cavity. In an ionization chamber, the dose to material g (typically a gas) is D_g = (Q/m_g)·(W̄/e), where Q is the ionization produced in the gas (SI unit coulomb), m_g is the mass of the gas (SI unit kg), and W̄/e is the mean energy required to produce an ion pair in g divided by the charge of an electron (SI units joules/coulomb). See also Ionizing radiation Ionization chamber Sources Khan, F. M. (2003). The physics of radiation therapy (3rd ed.). Lippincott Williams & Wilkins: Philadelphia. Attix, F.H. (1986). Introduction to Radiological Physics and Radiation Dosimetry, Wiley-Interscience: New York. Physics theorems
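The two relations above chain together to convert an ionization-chamber reading into dose to the surrounding medium. The sketch below uses assumed illustrative values (20 nC collected in roughly 0.6 cm³ of air, W̄/e = 33.97 J/C for dry air, and a water/air stopping-power ratio of about 1.133); none of these numbers come from the article:

```python
def dose_to_medium(Q, m_gas, W_over_e, sp_ratio):
    """Bragg-Gray chain: dose to cavity gas D_g = (Q/m_g) * (Wbar/e),
    then dose to the medium D_w = D_g * (mass stopping-power ratio w,g)."""
    D_gas = (Q / m_gas) * W_over_e   # gray
    return D_gas * sp_ratio          # gray

# Illustrative (assumed) values: 20 nC in ~0.6 cm^3 of air (about 7.2e-7 kg
# at ~1.2 kg/m^3), Wbar/e = 33.97 J/C for dry air, water/air ratio ~1.133.
print(dose_to_medium(20e-9, 7.2e-7, 33.97, 1.133))   # dose to water, Gy
```

In practice calibrated chambers fold the gas mass and perturbation factors into a calibration coefficient, but the proportionality structure is the one shown.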
https://en.wikipedia.org/wiki/Valerie%20Isham
Valerie Susan Isham (born 1947) is a British applied probabilist and former President of the Royal Statistical Society. Isham's research interests include point processes, spatial processes, spatio-temporal processes and population processes. Education and career Isham went to Imperial College London (B.Sc., Ph.D.), where she was a student of the statistician David Cox. She has been a professor of probability and statistics at University College London since 1992. Book Isham is the coauthor with Cox of the book Point Processes (Chapman & Hall, 1980). Recognition Isham was the president of the Royal Statistical Society for 2011–2012. She was awarded its Guy Medal in Bronze in 1990. In 2018 she received the Forder Lectureship from the London Mathematical Society and the New Zealand Mathematical Society.
https://en.wikipedia.org/wiki/Tarski%27s%20plank%20problem
In mathematics, Tarski's plank problem is a question about coverings of convex regions in n-dimensional Euclidean space by "planks": regions between two hyperplanes. Tarski asked if the sum of the widths of the planks must be at least the minimum width of the convex region. The question was answered affirmatively by Thøger Bang in 1951. Statement Given a convex body C in Rn and a hyperplane H, the width of C parallel to H, w(C,H), is the distance between the two supporting hyperplanes of C that are parallel to H. The smallest such distance (i.e. the infimum over all possible hyperplanes) is called the minimal width of C, w(C). The (closed) set of points P between two distinct, parallel hyperplanes in Rn is called a plank, and the distance between the two hyperplanes is called the width of the plank, w(P). Tarski conjectured that if a convex body C of minimal width w(C) was covered by a collection of planks, then the sum of the widths of those planks must be at least w(C). That is, if P1,…,Pm are planks such that C ⊆ P1 ∪ ⋯ ∪ Pm, then w(P1) + ⋯ + w(Pm) ≥ w(C). Bang proved this is indeed the case. Nomenclature The name of the problem, specifically for the sets of points between parallel hyperplanes, comes from the visualisation of the problem in R2. Here, hyperplanes are just straight lines and so planks become the space between two parallel lines. Thus the planks can be thought of as (infinitely long) planks of wood, and the question becomes how many planks does one need to completely cover a convex tabletop of minimal width w? Bang's theorem shows that, for example, a circular table of diameter d feet can't be covered by fewer than d planks of wood of width one foot each.
https://en.wikipedia.org/wiki/Integrated%20passive%20devices
Integrated passive devices (IPDs), also known as integrated passive components (IPCs) or embedded passive components (EPCs), are electronic components in which resistors (R), capacitors (C), inductors (L)/coils/chokes, microstriplines, impedance matching elements, baluns, or any combination of them are integrated in the same package or on the same substrate. Integrated passives are sometimes also called embedded passives, although the technical distinction between the two terms is unclear; in both cases the passives are realized between dielectric layers or on the same substrate. The earliest forms of IPDs are resistor, capacitor, resistor-capacitor (RC) or resistor-capacitor-coil/inductor (RCL) networks. Passive transformers can also be realised as integrated passive devices, for example by putting two coils on top of each other separated by a thin dielectric layer. Sometimes diodes (PN, PIN, Zener etc.) can be integrated on the same substrate with the integrated passives, specifically if the substrate is silicon or some other semiconductor such as gallium arsenide (GaAs). Description Integrated passive devices can be packaged, bare dies/chips, or even stacked (assembled on top of some other bare die/chip) in a third dimension (3D) with active integrated circuits or other IPDs in an electronic system assembly. Typical packages for integrated passives are SIL (single in-line), SIP or any other packages (like DIL, DIP, QFN, chip-scale package/CSP, wafer-level package/WLP etc.) used in electronic packaging. Integrated passives can also act as a module substrate, and therefore be part of a hybrid module, multi-chip module or chiplet module/implementation. The substrate for IPDs can be rigid, like ceramic (aluminium oxide/alumina), layered ceramic (low temperature co-fired ceramic/LTCC, high temperature co-fired ceramic/HTCC), glass, or silicon coated with some dielectric layer such as silicon dioxide. The substrate can also be flexible like laminate e. g. a packa
https://en.wikipedia.org/wiki/Head-twitch%20response
The head-twitch response (HTR) is a rapid side-to-side head movement that occurs in mice and rats after the serotonin 5-HT2A receptor is activated. The prefrontal cortex may be the neuroanatomical locus mediating the HTR. Many serotonergic hallucinogens, including lysergic acid diethylamide (LSD), induce the head-twitch response, and so the HTR is used as a behavioral model of hallucinogen effects. However, while there is generally a good correlation between compounds that induce head twitch in mice and compounds that are hallucinogenic in humans, it is unclear whether the head-twitch response is primarily caused by 5-HT2A receptors, 5-HT2C receptors, or both, though recent evidence shows that the HTR is mediated by the 5-HT2A receptor and modulated by the 5-HT2C receptor. Also, the effect can be non-specific, with head-twitch responses also produced by some drugs that do not act through 5-HT2 receptors, such as phencyclidine, yohimbine, atropine and cannabinoid receptor antagonists. Likewise, compounds such as 5-HTP, fenfluramine, 1-methylpsilocin, ergometrine, and 3,4-dimethoxyphenethylamine (DMPEA) can also produce head twitch and do stimulate serotonin receptors, but are not hallucinogenic in humans. This means that while the head-twitch response can be a useful indicator as to whether a compound is likely to display hallucinogenic activity in humans, the induction of a head-twitch response does not necessarily mean that a compound will be hallucinogenic, and caution should be exercised when interpreting such results.
https://en.wikipedia.org/wiki/Pasko%20Rakic
Pasko Rakic (born May 15, 1933) is a Yugoslav-born American neuroscientist, who presently works in the Yale School of Medicine Department of Neuroscience in New Haven, Connecticut. His main research interest is the development and evolution of the human brain. He was the founder and served as Chairman of the Department of Neurobiology at Yale, and was founder and Director of the Kavli Institute for Neuroscience. He is best known for elucidating the mechanisms involved in the development and evolution of the cerebral cortex. In 2008, Rakic shared the inaugural Kavli Prize in Neuroscience. He is currently the Dorys McConnell Duberg Professor of Neuroscience, leads an active research laboratory, and serves on advisory boards and scientific councils of a number of institutions and research foundations. Early life and education Rakic was born on May 15, 1933, in Ruma (formerly Kingdom of Yugoslavia). His father, Toma Rakić, was Croatian, originally from Pula (Istria, at that time part of Italy), but emigrated to Yugoslavia, where in the town of Novi Sad (Bačka) he studied to become an accountant and tax official. His mother, Juliana Todorić, of Serbian and Slovakian descent, was born in Dubrovnik (Dalmatia) and moved to Ruma, where they met and got married in 1929. Due to the nature of his father's job as Director of Regional Tax Services, the family moved to a different town every few years. Finally, their daughter, Vera, and son, Pasko, completed Gymnasium (high school) in the town of Sremska Mitrovica. Vera eventually graduated in mathematics from Belgrade University, and Pasko obtained his medical degree (MD) from the University of Belgrade School of Medicine, where he embarked on a career as a neurosurgeon. His research career began in 1962, with a Fulbright Fellowship at Harvard University in Boston, MA, where he met professor Paul Yakovlev, who introduced him to the joy of studying human brain development, which inspired him to abandon neurosurgery. In 1966
https://en.wikipedia.org/wiki/Butyrate%20esterase
α-Naphthyl butyrate esterase, also referred to as naphthyl butyrate esterase or butyrate esterase, is a histological stain specific for white blood cells of the monocytic proliferation line. It is used in the diagnosis of leukemia when staining touch preparation type slides of bone marrow. It is instrumental in the diagnosis of monocytic leukemias and the myelomonocytic variant of acute myelocytic leukemia.
https://en.wikipedia.org/wiki/CD4%2B/CD8%2B%20ratio
The CD4+/CD8+ ratio is the ratio of T helper cells (with the surface marker CD4) to cytotoxic T cells (with the surface marker CD8). Both CD4+ and CD8+ T cells contain several subsets. The CD4+/CD8+ ratio in the peripheral blood of healthy adults and mice is about 2:1, and an altered ratio can indicate diseases relating to immunodeficiency or autoimmunity. An inverted CD4+/CD8+ ratio (namely, less than 1/1) indicates an impaired immune system. Conversely, an increased CD4+/CD8+ ratio corresponds to increased immune function. Obesity and dysregulated lipid metabolism in the liver lead to loss of CD4+, but not CD8+ cells, contributing to the induction of liver cancer. Regulatory CD4+ cells decline with expanding visceral fat, whereas CD8+ T cells increase. Decreased ratio with infection A reduced CD4+/CD8+ ratio is associated with reduced resistance to infection. Patients with tuberculosis show a reduced CD4+/CD8+ ratio. HIV infection leads to low levels of CD4+ T cells (lowering the CD4+/CD8+ ratio) through a number of mechanisms, including killing of infected CD4+ cells. Acquired immunodeficiency syndrome (AIDS) is (by one definition) a CD4+ T cell count below 200 cells per µL. HIV progresses with declining numbers of CD4+ and expanding numbers of CD8+ cells (especially CD8+ memory cells), resulting in high morbidity and mortality. When CD4+ T cell numbers decline below a critical level, cell-mediated immunity is lost, and the body becomes progressively more susceptible to opportunistic infections. A declining CD4+/CD8+ ratio has been found to be a prognostic marker of HIV disease progression. COVID-19 In COVID-19, B cell, natural killer cell, and total lymphocyte counts decline, but both CD4+ and CD8+ cells decline to a far greater extent. Low CD4+ predicted greater likelihood of intensive care unit admission, and CD4+ cell count was the only parameter that predicted length of time for viral RNA clearance. Decreased ratio with aging A declining CD4+/CD8+ ratio i
https://en.wikipedia.org/wiki/Teechart
TeeChart is a charting library for programmers, developed and managed by Steema Software of Girona, Catalonia, Spain. It is available as commercial and non-commercial software. TeeChart has been included in most Delphi and C++Builder products since 1997, and TeeChart Standard currently is part of Embarcadero RAD Studio 11 Alexandria. The TeeChart Pro version is a commercial product that offers shareware releases for all of its formats, TeeChart Lite for .NET is a free charting component for the Microsoft Visual Studio .NET community, and TeeChart for PHP is an open-source library for PHP environments. The TeeChart Charting Library offers charts, maps and gauges in versions for Delphi VCL/FMX, ActiveX, C# for Microsoft Visual Studio .NET, Java and PHP. Full source code has always been available for all versions except the ActiveX version. TeeChart's user interface is translated into 38 languages. History The first version of TeeChart was authored in 1995 by David Berneda, co-founder of Steema, using the Borland Delphi Visual Component Library programming environment; TeeChart was first released as a shareware version and made available via Compuserve in the same year. It was written in the first version of Delphi VCL, as a 16-bit charting library named TeeChart version 1. The next version of TeeChart was released as a 32-bit library (Delphi 2 supported 32-bit compilation) but was badged as TeeChart VCL v3 to coincide with Borland's naming convention for inclusion on the toolbox palette of Borland Delphi v3 in 1997 and with C++ Builder v3 in 1998. It has been on the Delphi/C++ Builder toolbox palette ever since, and currently ships with Embarcadero RAD Studio 11 Alexandria. TeeChart's first ActiveX version, also named "version 3" to match the VCL version's nomenclature, was released in 1998. The version was optimised to work with Microsoft's Visual Studio v97 and v6.0 developer suites that include the Visual Basic and Microsoft Visual C++ programming languages. Support f
https://en.wikipedia.org/wiki/Harrington%20knot
The Harrington knot is a decorative heraldic knot, the badge of the Harrington family. It is in essence identical to the fret.
https://en.wikipedia.org/wiki/Standard%20enthalpy%20of%20reaction
The standard enthalpy of reaction (denoted ΔH°rxn) for a chemical reaction is the difference between total product and total reactant molar enthalpies, calculated for substances in their standard states. The value can be approximately interpreted in terms of the total of the chemical bond energies for bonds broken and bonds formed. For a generic chemical reaction, the standard enthalpy of reaction is related to the standard enthalpy of formation values of the reactants and products by the following equation: ΔH°rxn = Σ νp ΔH°f(products) − Σ νr ΔH°f(reactants). In this equation, νp and νr are the stoichiometric coefficients of each product and reactant. The standard enthalpy of formation, which has been determined for a vast number of substances, is the change of enthalpy during the formation of 1 mole of the substance from its constituent elements, with all substances in their standard states. Standard states can be defined at any temperature and pressure, so both the standard temperature and pressure must always be specified. Most values of standard thermochemical data are tabulated at either (25°C, 1 bar) or (25°C, 1 atm). For ions in aqueous solution, the standard state is often chosen such that the aqueous H+ ion at a concentration of exactly 1 mole/liter has a standard enthalpy of formation equal to zero, which makes possible the tabulation of standard enthalpies for cations and anions at the same standard concentration. This convention is consistent with the use of the standard hydrogen electrode in the field of electrochemistry. However, there are other common choices in certain fields, including a standard concentration for H+ of exactly 1 mole/(kg solvent) (widely used in chemical engineering) and 10−7 mole/L (used in the field of biochemistry). For this reason it is important to note which standard concentration value is being used when consulting tables of enthalpies of formation. Introduction Two initial thermodynamic systems, each isolated in their separate states of internal thermodynamic equilibrium, can, by a
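The products-minus-reactants sum can be sketched numerically. The formation enthalpies below are illustrative textbook values (kJ/mol at 25°C, 1 bar), and the dictionary and function names are ours, not from any particular library:

```python
# Illustrative standard enthalpies of formation, kJ/mol (textbook values).
H_F = {"CH4(g)": -74.8, "O2(g)": 0.0, "CO2(g)": -393.5, "H2O(l)": -285.8}

def reaction_enthalpy(reactants, products):
    """Each argument maps a species to its stoichiometric coefficient."""
    total = lambda side: sum(nu * H_F[sp] for sp, nu in side.items())
    return total(products) - total(reactants)

# Combustion of methane: CH4 + 2 O2 -> CO2 + 2 H2O(l)
dH = reaction_enthalpy({"CH4(g)": 1, "O2(g)": 2}, {"CO2(g)": 1, "H2O(l)": 2})
```

With these values the sum gives roughly −890 kJ/mol, i.e. a strongly exothermic reaction, as expected for combustion.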
https://en.wikipedia.org/wiki/Chronotropic
Chronotropic effects (from chrono-, meaning time, and tropos, "a turn") are those that change the heart rate. Chronotropic drugs may change the heart rate and rhythm by affecting the electrical conduction system of the heart and the nerves that influence it, such as by changing the rhythm produced by the sinoatrial node. Positive chronotropes increase heart rate; negative chronotropes decrease heart rate. A dromotrope affects atrioventricular node (AV node) conduction. A positive dromotrope increases AV nodal conduction, and a negative dromotrope decreases AV nodal conduction. A lusitrope is an agent that affects diastolic relaxation. Many positive inotropes affect preload and afterload. Positive chronotropes Most adrenergic agonists Atropine Dopamine Epinephrine Isoproterenol Milrinone Theophylline Negative chronotropes Chronotropic variables in systolic function can be appreciated on the left side as aortic valve open-to-close time, and on the right side as pulmonary valve open-to-close time; inverted as diastolic chronotropy, the variables are aortic valve close-to-open and pulmonic valve close-to-open time. Pharmaceutical manipulation of chronotropic properties was perhaps first appreciated with the introduction of digitalis, though it turns out that digitalis has an inotropic effect rather than a chronotropic effect. Beta blockers such as metoprolol Acetylcholine Digoxin Pacemaker current (i.e. HCN channel) inhibitors (e.g. ivabradine)
https://en.wikipedia.org/wiki/Elementary%20number
An elementary number is one formalization of the concept of a closed-form number. The elementary numbers form an algebraically closed field containing the roots of arbitrary expressions using field operations, exponentiation, and logarithms. The set of the elementary numbers is subdivided into the explicit elementary numbers and the implicit elementary numbers.
https://en.wikipedia.org/wiki/Mammalian%20embryogenesis
Mammalian embryogenesis is the process of cell division and cellular differentiation during early prenatal development which leads to the development of a mammalian embryo. Difference from embryogenesis of lower chordates Because placental mammals and marsupials nourish their developing embryos via the placenta, the ovum in these species does not contain significant amounts of yolk, and the yolk sac in the embryo is relatively small in size, in comparison with both the size of the embryo itself and the size of the yolk sac in embryos of comparable developmental age from lower chordates. The fact that an embryo in both placental mammals and marsupials undergoes the process of implantation, and forms the chorion with its chorionic villi, and later the placenta and umbilical cord, is also a difference from lower chordates. The difference between a mammalian embryo and an embryo of a lower chordate animal is evident starting from the blastula stage. For that reason, the developing mammalian embryo at this stage is called a blastocyst, not a blastula, which is a more generic term. There are also several other differences from embryogenesis in lower chordates. One such difference is that in mammalian embryos development of the central nervous system, and especially the brain, tends to begin at earlier stages of embryonic development and to yield a more structurally advanced brain at each stage, in comparison with lower chordates. The evolutionary reason for such a change was likely that the advanced and structurally complex brain characteristic of mammals requires more time to develop, but the maximum time spent in utero is limited by other factors, such as the relative size of the final fetus to the mother (the ability of the fetus to pass through the mother's genital tract to be born), limited resources for the mother to nourish herself and her fetus, etc. Thus, to develop such a complex and advanced brain in the end, the mammalian embryo needed to start this process earlier and to
https://en.wikipedia.org/wiki/SGI%20Indigo%C2%B2%20and%20Challenge%20M
The SGI Indigo2 (stylized as "Indigo2") and the SGI Challenge M are Unix workstations which were designed and sold by SGI from 1992 to 1997. The Indigo2, codenamed "Fullhouse", is a desktop workstation. The Challenge M is a server which differs from the Indigo2 only by a slightly differently colored and badged case, and the absence of graphics and sound hardware. Both systems are based on MIPS processors, with an EISA bus and SGI's proprietary GIO64 expansion bus via a riser card. The Indigo preceded the Indigo2, which was succeeded by the Octane. Overview Indigo2 desktop workstations have two models: the teal Indigo2 and the purple IMPACT. Both have identical-looking cases except for color and sub-model case badging. The CPU type, the amount of RAM, and the graphics capabilities depend on the model or sub-model variation. The Power Indigo2 is a version of the teal Indigo2, with the R8000 chipset and strong floating-point arithmetic performance. The later IMPACT Indigo2 workstation model has more computational and visualization power, especially due to the introduction of the R10000 series RISC CPU and IMPACT graphics. CPU All Indigo2 models use one of four distinct MIPS CPU variants: the 100 to 250 MHz MIPS R4000 and R4400, and the Quantum Effect Devices R4600 (IP22 mainboard); the 75 MHz MIPS R8000 (IP26 mainboard); and the 175 to 195 MHz R10000 (IP28 mainboard), which is featured in the last produced Indigo2 model, the IMPACT10000. Each microprocessor family differs in clock frequency, and in primary and secondary cache capacity. RAM All Indigo2 motherboards have 12 SIMM slots, for standard 36-bit parity 72-pin fast page mode SIMM memory modules seated in groups of four. The Indigo2 can be expanded to a thermal specification maximum of either 384 MB or 512 MB RAM. The design of the memory control logic in R10000 machines supports up to 1 GB RAM, but the thermal output of older generations of DRAM chips necessitated the 512 MB limit. With newer, higher-density and smaller s
https://en.wikipedia.org/wiki/Plant%20breeders%27%20rights
Plant breeders' rights (PBR), also known as plant variety rights (PVR), are rights granted in certain places to the breeder of a new variety of plant that give the breeder exclusive control over the propagating material (including seed, cuttings, divisions, tissue culture) and harvested material (cut flowers, fruit, foliage) of a new variety for a number of years. With these rights, the breeder can choose to become the exclusive marketer of the variety, or to license the variety to others. In order to qualify for these exclusive rights, a variety must be new, distinct, uniform, and stable. A variety is: new if it has not been commercialized for more than one year in the country of protection; distinct if it differs from all other known varieties by one or more important botanical characteristics, such as height, maturity, color, etc.; uniform if the plant characteristics are consistent from plant to plant within the variety; stable if the plant characteristics are genetically fixed and therefore remain the same from generation to generation, or after a cycle of reproduction in the case of hybrid varieties. The breeder must also give the variety an acceptable "denomination", which becomes its generic name and must be used by anyone who markets the variety. Typically, plant variety rights are granted by national offices after examination. Seed is submitted to the plant variety office, who grow it for one or more seasons, to check that it is distinct, stable, and uniform. If these tests are passed, exclusive rights are granted for a specified period (typically 20/25 years, or 25/30 years for trees and vines). Renewal fees (often, annual) are required to maintain the rights. Breeders can bring suit to enforce their rights and can recover damages. Plant breeders' rights contain exemptions that are not recognized under other legal doctrines such as patent law. Commonly, there is an exemption for farm-saved seed. Farmers may store this production in their own bins for
https://en.wikipedia.org/wiki/Metalanguage
In logic and linguistics, a metalanguage is a language used to describe another language, often called the object language. Expressions in a metalanguage are often distinguished from those in the object language by the use of italics, quotation marks, or writing on a separate line. The structure of sentences and phrases in a metalanguage can be described by a metasyntax. For example, to say that the word "noun" can be used as a noun in a sentence, one could write "noun" is a <noun>. Types of metalanguage There are a variety of recognized types of metalanguage, including embedded, ordered, and nested (or hierarchical) metalanguages. Embedded An embedded metalanguage is a language formally, naturally and firmly fixed in an object language. This idea is found in Douglas Hofstadter's book, Gödel, Escher, Bach, in a discussion of the relationship between formal languages and number theory: "... it is in the nature of any formalization of number theory that its metalanguage is embedded within it." It occurs in natural, or informal, languages, as well—such as in English, where words such as noun, verb, or even word describe features and concepts pertaining to the English language itself. Ordered An ordered metalanguage is analogous to an ordered logic. An example of an ordered metalanguage is the construction of one metalanguage to discuss an object language, followed by the creation of another metalanguage to discuss the first, etc. Nested A nested (or hierarchical) metalanguage is similar to an ordered metalanguage in that each level represents a greater degree of abstraction. However, a nested metalanguage differs from an ordered one in that each level includes the one below. The paradigmatic example of a nested metalanguage comes from the Linnean taxonomic system in biology. Each level in the system incorporates the one below it. The language used to discuss genus is also used to discuss species; the one used to discuss orders is also used to discuss gener
https://en.wikipedia.org/wiki/Crus%20of%20penis
The two crura of penis (one crus on each side) constitute the root of penis along with the bulb of penis. The two crura flank the bulb, one to each side. Each crus is attached at the angle between the perineal membrane and the ischiopubic ramus. The deep artery of the penis enters the anterior portion of the crus. Distally, each crus continues as one of the two corpora cavernosa of the body of penis. Anatomy Each crus represents the tapering, posterior fourth of a corpus cavernosum penis; the two corpora cavernosa are situated alongside each other along the length of the body of penis, while the two crura diverge laterally in the root of penis before attaching firmly onto either ischial ramus at their proximal end. Each crus begins proximally as a blunt-pointed process anterior to the tuberosity of the ischium, along the perineal surface of the conjoined (ischiopubic) ramus. Just proximal to the convergence of the two crura, they come into contact with the bulb of penis (the proximal expansion of the corpus spongiosum). Additional images See also Crus of clitoris
https://en.wikipedia.org/wiki/Regular%20paperfolding%20sequence
In mathematics the regular paperfolding sequence, also known as the dragon curve sequence, is an infinite sequence of 0s and 1s. It is obtained from the repeating partial sequence 1, ?, 0, ?, 1, ?, 0, ?, ... by filling in the question marks with another copy of the whole sequence. The first few terms of the resulting sequence are: 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, ... If a strip of paper is folded repeatedly in half in the same direction, n times, it will get 2^n − 1 folds, whose direction (left or right) is given by the pattern of 0's and 1's in the first 2^n − 1 terms of the regular paperfolding sequence. Opening out each fold to create a right-angled corner (or, equivalently, making a sequence of left and right turns through a regular grid, following the pattern of the paperfolding sequence) produces a sequence of polygonal chains that approaches the dragon curve fractal: Properties The value of any given term t_n in the regular paperfolding sequence, starting with n = 1, can be found recursively as follows. Divide n by two, as many times as possible, to get a factorization of the form n = m·2^k where m is an odd number. Then t_n = 1 if m ≡ 1 (mod 4), and t_n = 0 if m ≡ 3 (mod 4). Thus, for instance, t_12 = 0: dividing 12 by two, twice, leaves the odd number 3, which is congruent to 3 mod 4. As another example, t_13 = 1 because 13 is congruent to 1 mod 4. The paperfolding word 1101100111001001..., which is created by concatenating the terms of the regular paperfolding sequence, is a fixed point of the morphism or string substitution rules 11 → 1101 01 → 1001 10 → 1100 00 → 1000 as follows: 11 → 1101 → 11011001 → 1101100111001001 → 11011001110010011101100011001001 ... It can be seen from the morphism rules that the paperfolding word contains at most three consecutive 0s and at most three consecutive 1s. The paperfolding sequence also satisfies the symmetry relation t_n = 1 − t_{2^m − n} for 2^(m−1) < n < 2^m, which shows that the paperfolding word can be constructed as the limit of another iterated process as follows: 1; 1 1 0; 110 1 100; 1101100 1 1100100; 110110011100100 1 110110001100100. In each iteration of this process, a 1 is placed at the end of the previous iteration's string, then t
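The divide-out-twos rule described above fits in a few lines of code; this is a sketch of that recurrence (the function name is ours):

```python
def paperfold(n):
    """n-th term (n >= 1) of the regular paperfolding sequence."""
    while n % 2 == 0:      # strip factors of two: n = m * 2**k, m odd
        n //= 2
    return 1 if n % 4 == 1 else 0  # m ≡ 1 (mod 4) -> 1, m ≡ 3 (mod 4) -> 0

first_16 = [paperfold(n) for n in range(1, 17)]
word = "".join(map(str, first_16))  # start of the paperfolding word
```

Concatenating the first sixteen terms reproduces the prefix 1101100111001001 of the paperfolding word quoted in the text.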
https://en.wikipedia.org/wiki/Andrzej%20Mostowski
Andrzej Mostowski (1 November 1913 – 22 August 1975) was a Polish mathematician. He is perhaps best remembered for the Mostowski collapse lemma. Biography Born in Lemberg, Austria-Hungary, Mostowski entered University of Warsaw in 1931. He was influenced by Kuratowski, Lindenbaum, and Tarski. His Ph.D. came in 1939, officially directed by Kuratowski but in practice directed by Tarski who was a young lecturer at that time. He became an accountant after the German invasion of Poland but continued working in the Underground Warsaw University. After the Warsaw uprising of 1944, the Nazis tried to put him in a concentration camp. With the help of some Polish nurses, he escaped to a hospital, choosing to take bread with him rather than his notebook containing his research. Some of this research he reconstructed after the War, however much of it remained lost. His work was largely on recursion theory and undecidability. From 1946 until his death in Vancouver, British Columbia, Canada, he worked at the University of Warsaw. Much of his work, during that time, was on first order logic and model theory. His son Tadeusz is also a mathematician working on differential geometry. With Krzysztof Kurdyka and Adam Parusinski, Tadeusz Mostowski solved René Thom's gradient conjecture in 2000. See also List of Polish mathematicians Mostowski model Works Books 1968 & 1976: (with Kazimierz Kuratowski) Set Theory. With an Introduction to Descriptive Set Theory, Studies in Logic and Foundations of Mathematics #86, North Holland, 1952: Sentences Undecidable in Formalized Arithmetic: An Exposition of the Theory of Kurt Godel, North-Holland, Amsterdam, 1969: Constructible Sets with Applications, North-Holland, Amsterdam. Papers Andrzej Mostowski, "Über die Unabhängigkeit des Wohlordnungssatzes vom Ordnungsprinzip." Fundamenta Mathematicae Vol. 32, No.1, ss. 201-252, (1939). Andrzej Mostowski, "On definable sets of positive integers", Fundamenta Mathematicae Vol. 34, No. 1, s
https://en.wikipedia.org/wiki/Fourth%20principal%20meridian
The fourth principal meridian, set in 1815, is the principal meridian for land surveys in northwestern Illinois and west-central Illinois, and its 1831 extension is the principal meridian for land surveys in Wisconsin and northeastern Minnesota. It is part of the Public Land Survey System that covers most of the United States. The fourth principal meridian begins at a point on the west bank of the Illinois River in Schuyler County, Illinois. The fourth principal meridian's baseline, sometimes called the Beardstown baseline, runs west from this initial point. The meridian and this baseline governs surveys in Illinois that are west of both the Illinois River and the third principal meridian. The Illinois Department of Transportation 2003 Survey Manual gives the point as and notes that the meridian is an extension of the line north from the mouth of the Illinois River near Grafton, Illinois. Extended The meridian was extended north in 1831, through Wisconsin and northeastern Minnesota. The extension uses the Illinois–Wisconsin border as its baseline, and is the basis of surveys in all of Wisconsin, as well as that part of Minnesota: east of the Mississippi River, and some land west of the Mississippi River that includes the northern parts of Dakota and Scott Counties, and eastern Hennepin County, including all of Richfield, that part of Minneapolis south of 44th Ave. N., and the eastern half of the city of Bloomington, and all land east of the "third guide meridian, west of the fifth principal meridian," that is north of the Mississippi River from a point near the city of Aitkin. This line follows the western boundary of Aitkin County, then goes through the center of Itasca and Koochiching Counties. This includes a portion of land west of the Mississippi River in Aitkin and Itasca Counties. The initial point of the extended fourth principal meridian is located at . See also List of principal and guide meridians and base lines of the United States Public L
https://en.wikipedia.org/wiki/Yanliao%20Biota
The Yanliao Biota is the name given to an assembly of fossils preserved in northeastern China from the Middle to Late Jurassic. It includes fossils from the Tiaojishan Formation and Haifanggou Formation. This spans approximately 165 to 150 million years ago. Like the Jehol Biota, these deposits are composed of alternating layers of volcanic tuff and sediment, and are considered Lagerstätten. These are some of the best preserved Jurassic fossils in the world, and include many important dinosaur, mammal, salamander, insect and lizard specimens, as well as plants. History The first fossils of the Yanliao Biota were found around 1998 near the village of Daohugou in Inner Mongolia. The following year, the first two important specimens were discovered, and published in 2000. Since that time many more have been found in the same area, and in neighbouring provinces. The Yanliao Biota is made up of fossils from more than one locality, and the geology has been difficult to interpret (see below). It includes what was previously referred to as the Daohugou Biota, and some of it was thought to belong to the Jehol Biota. Location The Yanliao Biota comes from outcrops north of the Yan Mountains, in the northeast of the People's Republic of China. The most important site is near Daohugou Village in Inner Mongolia, but fossils and outcrops are also found in neighbouring Liaoning Province and Hebei Province. Geology The Daohugou locality lies in the Ningcheng Basin in the SE corner of Inner Mongolia. Daohugou village has fossil-bearing lacustrine (laid down in lakes) strata overlying Precambrian basement. Fossil preservation The formations that yield the fossils of the Yanliao Biota are known as Lagerstätten, meaning that they have exceptionally good conditions for fossil preservation. The fossils are not only numerous, but also very well preserved. For vertebrates, there are often whole skeletons with soft tissues like skin and fur, colour patterns, and stomach contents. In
https://en.wikipedia.org/wiki/Next%20%28restaurant%29
Next is a restaurant in Chicago. It opened April 6, 2011. The restaurant received media interest due to chef Grant Achatz's success at his first restaurant, Alinea, as well as its unique "ticketed" format: Next sells pre-priced tickets for specific dates and times in a similar fashion to the way theater, concert and sporting event tickets are sold. Property Next is located in Chicago's historic Fulton Market, just north of the West Loop's "Restaurant Row" on Randolph Street. Next's operation also includes two on-site bars: The Aviary, previously headed by Charles Joly, and presently headed by Micah Melton, and The Office, an invite-only speakeasy-format bar that seats 14 and is located behind an unmarked metal door in the basement of the building. Menus Rather than stick with one type of cuisine, Next completely changes its style every few months, focusing on a different time period, parts of the world, or various abstract themes for each "season" of its menu. While themes for the year are often released at the end of the previous season, menu development for each of the season's themes begins in final weeks of the previous menu. Executive Chef Ed Tinoco and Grant Achatz head this process. These are the past, present, and (known) future menus of Next Restaurant: Tickets Through the "Childhood" menu, Next sold tickets through their website in batches. Several tables would be opened up, and announcements were made on their Facebook and Twitter pages when tickets were available. The tickets sold rapidly. Next tickets are transferable, but not refundable or exchangeable. This has sparked the creation of a secondary market for the tickets, which has resulted in reports of people scalping the tickets for several times their face value. In an attempt to eliminate the secondary market on Next tickets, the sales model was changed in 2012 to follow a season ticket model, where in-advance tickets were only available if patrons purchased tickets for one meal from
https://en.wikipedia.org/wiki/Incremental%20capital-output%20ratio
The Incremental Capital-Output Ratio (ICOR) is the ratio of investment to growth, which is equal to the reciprocal of the marginal product of capital. The higher the ICOR, the lower the productivity of capital or the marginal efficiency of capital. The ICOR can be thought of as a measure of the inefficiency with which capital is used. In most countries the ICOR is in the neighborhood of 3. It is a topic discussed in economic growth. It can be expressed in the following formula, where K is the capital stock, Y is output (GDP), and I is net investment: ICOR = ΔK/ΔY = I/ΔY = (I/Y)/(ΔY/Y). According to this formula the incremental capital output ratio can be computed by dividing the investment share in GDP by the rate of growth of GDP. As an example, if the level of investment (as a share of GDP) in a developing country had been (approximately) 20% over a particular period, and if the growth rate of GDP had been (approximately) 5% per year during the same period, then the ICOR would be 20/5 = 4. ICOR, world, and determining variables Further reading van Rijckeghem, Willy, "The Secret of the Variable ICOR", The Economic Journal, December 1968, Vol. LXXVIII, pp. 984–85. Reinhart, Carmen M., "Comment" on Giancarlo Corsetti, Paolo Pesenti, and Nouriel Roubini: "Fundamental Determinants of the Asian Crisis: The Role of Financial Fragility and External Imbalances", in Takatoshi Ito and Anne Krueger, eds. Regional and Global Capital Flows: Macroeconomic Causes and Consequences (Chicago: University of Chicago Press for the NBER, 2001), 42–45.
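The worked example can be restated in code; this sketch simply applies the identity ICOR = (I/Y)/(ΔY/Y), with the 20% investment share and 5% growth figures taken from the text (the function name is ours):

```python
def icor(investment_share_pct, gdp_growth_pct):
    """Incremental capital-output ratio: the investment share of GDP
    divided by the GDP growth rate over the same period."""
    return investment_share_pct / gdp_growth_pct

# Developing-country example from the text: 20% investment, 5% growth.
ratio = icor(20, 5)
```

A higher ratio here means more investment was needed per unit of growth, i.e. capital was used less efficiently.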
https://en.wikipedia.org/wiki/Simultaneous%20perturbation%20stochastic%20approximation
Simultaneous perturbation stochastic approximation (SPSA) is an algorithmic method for optimizing systems with multiple unknown parameters. It is a type of stochastic approximation algorithm. As an optimization method, it is appropriately suited to large-scale population models, adaptive modeling, simulation optimization, and atmospheric modeling. Many examples are presented at the SPSA website http://www.jhuapl.edu/SPSA. A comprehensive book on the subject is Bhatnagar et al. (2013). An early paper on the subject is Spall (1987), and the foundational paper providing the key theory and justification is Spall (1992).

SPSA is a descent method capable of finding global minima, sharing this property with other methods such as simulated annealing. Its main feature is the gradient approximation that requires only two measurements of the objective function, regardless of the dimension of the optimization problem. Recall that we want to find the optimal control u* with loss function J(u):

u* = arg min_u J(u).

Both Finite Differences Stochastic Approximation (FDSA) and SPSA use the same iterative process:

u_{n+1} = u_n − a_n ĝ_n(u_n),

where u_n is the nth iterate, ĝ_n(u_n) is the estimate of the gradient of the objective function evaluated at u_n, and {a_n} is a positive number sequence converging to 0. If u_n is a p-dimensional vector, the ith component of the symmetric finite difference gradient estimator is:

FD: (ĝ_n(u_n))_i = [J(u_n + c_n e_i) − J(u_n − c_n e_i)] / (2 c_n),   1 ≤ i ≤ p,

where e_i is the unit vector with a 1 in the ith place, and c_n is a small positive number that decreases with n. With this method, 2p evaluations of J for each ĝ_n are needed. When p is large, this estimator loses efficiency.

Let now Δ_n be a random perturbation vector. The ith component of the stochastic perturbation gradient estimator is:

SP: (ĝ_n(u_n))_i = [J(u_n + c_n Δ_n) − J(u_n − c_n Δ_n)] / (2 c_n Δ_{n,i}).

Remark that FD perturbs only one direction at a time, while the SP estimator disturbs all directions at the same time (the numerator is identical in all p components). The number of loss function measurements needed in the SPSA method for each ĝ_n is always 2, independent of the dimension p. Thus, SPSA
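The two-measurement update at the heart of SPSA can be sketched in a few lines. The gain-sequence constants and the quadratic test function below are illustrative choices, not from the source; only the structure (Rademacher perturbations, two loss evaluations per step) follows the algorithm as described:

```python
import random

def spsa_minimize(loss, u0, n_iter=2000, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """Minimal SPSA sketch: two loss evaluations per iteration,
    simultaneous +/-1 (Rademacher) perturbation of all coordinates."""
    rng = random.Random(seed)
    u = list(u0)
    p = len(u)
    for n in range(1, n_iter + 1):
        a_n = a / n ** alpha               # step-size sequence -> 0
        c_n = c / n ** gamma               # perturbation size -> 0
        delta = [rng.choice((-1.0, 1.0)) for _ in range(p)]
        u_plus = [ui + c_n * di for ui, di in zip(u, delta)]
        u_minus = [ui - c_n * di for ui, di in zip(u, delta)]
        diff = loss(u_plus) - loss(u_minus)   # only 2 measurements of J
        g = [diff / (2.0 * c_n * di) for di in delta]
        u = [ui - a_n * gi for ui, gi in zip(u, g)]
    return u

# Toy example: J(u) = sum((u_i - 1)^2), minimized at (1, ..., 1).
J = lambda u: sum((x - 1.0) ** 2 for x in u)
u_star = spsa_minimize(J, [5.0, -3.0, 0.0])
print([round(x, 2) for x in u_star])
```

Note that regardless of the dimension of `u0`, each iteration calls `loss` exactly twice, which is the efficiency advantage over the finite-difference estimator's 2p calls.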
https://en.wikipedia.org/wiki/Lactivibrio
Lactivibrio is a genus of bacteria from the family Synergistaceae with one known species, Lactivibrio alcoholicus, which has been isolated from mesophilic granular sludge in Tokyo, Japan.
https://en.wikipedia.org/wiki/Suprachiasmatic%20nucleus
The suprachiasmatic nucleus or nuclei (SCN) is a small region of the brain in the hypothalamus, situated directly above the optic chiasm. The SCN is the principal circadian pacemaker in mammals, responsible for generating circadian rhythms. Reception of light inputs from photosensitive retinal ganglion cells allows the SCN to coordinate the subordinate cellular clocks of the body and entrain to the environment. The neuronal and hormonal activities it generates regulate many different body functions in an approximately 24-hour cycle.

The idea that the SCN is the main circadian pacemaker in mammals was proposed by Robert Moore, who conducted experiments using radioactive amino acids to find where the termination of the retinohypothalamic projection occurs in rodents. Early lesioning experiments in mouse, guinea pig, cat, and opossum established how removal of the SCN results in ablation of circadian rhythm in mammals. Moreover, the SCN interacts with many other regions of the brain. It contains several cell types and several different peptides (including vasopressin and vasoactive intestinal peptide) and neurotransmitters. Disruptions or damage to the SCN have been associated with different mood disorders and sleep disorders, suggesting the significance of the SCN in regulating circadian timing.
Neuroanatomy
The SCN is situated in the anterior part of the hypothalamus, immediately dorsal, or superior (hence supra), to the optic chiasm, bilateral to (on either side of) the third ventricle. It consists of two nuclei composed of approximately 10,000 neurons. The morphology of the SCN is species-dependent. Distribution of different cell phenotypes across specific SCN regions, such as the concentration of VP-IR neurons, can cause the shape of the SCN to change. The nucleus can be divided into ventrolateral and dorsolateral portions, also known as the core and shell, respectively. These regions differ in their expression of the clock genes: the core expresses them in respon
https://en.wikipedia.org/wiki/Fruit%20sector%20in%20Azerbaijan
The fruit sector in Azerbaijan is a developing industry. The sector covered 171,600 ha of land in 2016. Grape, apple, orange, pear and pomegranate are among the major crops in fruit production in Azerbaijan.
Statistics
Between 2000 and 2016, the area for fruit cultivation grew 2–3% per year. The area under fruit and berry cultivation has increased over time, while the area for grape production stayed more or less the same. The area for hazelnut cultivation has doubled. Most of the areas for cultivation of fruits and berries are privately owned. Agricultural enterprises cover 7% of the fruit and berry area. Approximately 30% of the grape production area is used by agricultural enterprises.
Imports and exports
In 2014, due to improvement of irrigation systems and provision of subsidies and incentives to farmers, the production of fruit and berries increased, and imports decreased.
Hazelnut
The Shaki-Zagatala economic region accounts for 74 percent of hazelnut production. In 2005–2015, hazelnut gardens in Azerbaijan increased by 12,358 hectares, while the average yield from one hectare dropped by 21%. The highest indicator in this period was registered in the Shaki-Zagatala economic region: hazelnut production increased by 5.6 percent in the region and the planting of gardens by 36.3 percent. Under the Presidential Decree of November 16, 2016 "On additional measures for the strengthening of state support for the development of silkworm and nutrition", more than 10 thousand hectares of new hazelnut gardens were laid. Taking into account the prospects for the development of bush fruits, a draft State Program on the development of nuts such as almond, walnut and chestnut in 2017–2026 is being developed. The project envisages further strengthening of the work carried out in this direction and an increase in the area of hazelnuts by 42 thousand hectares to 80 thousand hectares.
Citrus fruits
The basis for the development of citru
https://en.wikipedia.org/wiki/Alexander%20Merkurjev
Aleksandr Sergeyevich Merkurjev (, born September 25, 1955) is a Russian-American mathematician who has made major contributions to the field of algebra. Currently Merkurjev is a professor at the University of California, Los Angeles.
Work
Merkurjev's work focuses on algebraic groups, quadratic forms, Galois cohomology, algebraic K-theory and central simple algebras. In the early 1980s Merkurjev proved a fundamental result about the structure of central simple algebras of period dividing 2, which relates the 2-torsion of the Brauer group with Milnor K-theory. In subsequent work with Suslin this was extended to higher torsion as the Merkurjev–Suslin theorem. The full statement of the norm residue isomorphism theorem (also known as the Bloch–Kato conjecture) was proven by Voevodsky. In the late 1990s Merkurjev gave the most general approach to the notion of essential dimension, introduced by Buhler and Reichstein, and made fundamental contributions to that field. In particular Merkurjev determined the essential p-dimension of central simple algebras of degree (for a prime p) and, in joint work with Karpenko, the essential dimension of finite p-groups.
Awards
Merkurjev won the Young Mathematician Prize of the Petersburg Mathematical Society for his work on algebraic K-theory. In 1986 he was an invited speaker at the International Congress of Mathematicians in Berkeley, California, and his talk was entitled "Milnor K-theory and Galois cohomology". In 1995 he won the Humboldt Prize, an international prize awarded to renowned scholars. Merkurjev gave a plenary talk at the second European Congress of Mathematics in Budapest, Hungary in 1996. In 2012 he won the Cole Prize in Algebra for his work on the essential dimension of groups. In 2015 a special volume of Documenta Mathematica was published in honor of Merkurjev's sixtieth birthday.
Bibliography
Books
Max-Albert Knus, Alexander Merkurjev, Markus Rost, Jean-Pierre Tignol: The book of involutions, American Mathe
https://en.wikipedia.org/wiki/Myxococcaceae
Myxococcaceae is a family of gram-negative, rod-shaped bacteria. The family Myxococcaceae is encompassed within the myxobacteria ("slime bacteria"). The family is ubiquitously found in soil, marine, and freshwater environments. Production of compounds with medical uses by Myxococcaceae makes them useful in human health fields.
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI).
Morphology and Behavior
Cells can be motile, with gliding and swarming behavior. The vegetative cell shape in the Myxococcaceae family is a long rod, which varies in size between members. The most common fruiting body morphs are soft hump- and knob-shaped, with possible colors of yellow, peach, white, or orange depending on species. Myxococcaceae are spore-producing bacteria and are delineated by their spore shape. The myxospores are oval to round and are optically refractive. Quorum sensing (QS) behavior is limited in this family. However, there is evidence that some members of the family produce molecules that interrupt the QS of other microbes, behavior potentially useful in predation.
Relevance
Bacteria in the order Myxococcales have led to scientific discoveries including the first genome to be sequenced, the primary observation of plasmid replication, and the first discovery of bacteriophage. Members of the Myxococcaceae produce a wide range of secondary metabolites having useful functions and applications. Compounds with anti-microbial, anti-parasitic, and in rare cases, anti-HIV activities have been isolated from the Myxococcaceae.
See also
List of bacterial orders
List of bacteria genera
https://en.wikipedia.org/wiki/Polydimethylsiloxane
Polydimethylsiloxane (PDMS), also known as dimethylpolysiloxane or dimethicone, is a silicone polymer with a wide variety of uses, from cosmetics to industrial lubrication. It is particularly known for its unusual rheological (or flow) properties. PDMS is optically clear and, in general, inert, non-toxic, and non-flammable. It is one of several types of silicone oil (polymerized siloxane). Its applications range from contact lenses and medical devices to elastomers; it is also present in shampoos (as it makes hair shiny and slippery), food (antifoaming agent), caulk, lubricants and heat-resistant tiles.
Structure
The chemical formula of PDMS is CH3[Si(CH3)2O]nSi(CH3)3, where n is the number of repeating monomer units. Industrial synthesis can begin from dimethyldichlorosilane and water by the following net reaction:

n Si(CH3)2Cl2 + (n + 1) H2O → HO[Si(CH3)2O]nH + 2n HCl

The polymerization reaction evolves hydrochloric acid. For medical and domestic applications, a process was developed in which the chlorine atoms in the silane precursor were replaced with acetate groups. In this case, the polymerization produces acetic acid, which is less chemically aggressive than HCl. As a side-effect, the curing process is also much slower in this case. The acetate is used in consumer applications, such as silicone caulk and adhesives.
Branching and capping
Hydrolysis of Si(CH3)2Cl2 generates a polymer that is terminated with silanol groups (–Si(CH3)2OH). These reactive centers are typically "capped" by reaction with trimethylsilyl chloride:

2 Si(CH3)3Cl + HO[Si(CH3)2O]nH → (CH3)3SiO[Si(CH3)2O]nSi(CH3)3 + 2 HCl

Silane precursors with more acid-forming groups and fewer methyl groups, such as methyltrichlorosilane, can be used to introduce branches or cross-links in the polymer chain. Under ideal conditions, each molecule of such a compound becomes a branch point. This can be used to produce hard silicone resins. In a similar manner, precursors with three methyl groups can be used to limit molecular weight, since each such molecule has only one reactive site and so forms the end of a siloxane chain. Well-defined PDMS with a low polydispersity
https://en.wikipedia.org/wiki/Generalized%20arithmetic%20progression
In mathematics, a generalized arithmetic progression (or multiple arithmetic progression) is a generalization of an arithmetic progression equipped with multiple common differences – whereas an arithmetic progression is generated by a single common difference, a generalized arithmetic progression can be generated by multiple common differences. For example, the sequence 17, 20, 22, 25, 27, 30, 32, … is not an arithmetic progression, but is instead generated by starting with 17 and adding either 3 or 5, thus allowing multiple common differences to generate it. A semilinear set generalizes this idea to multiple dimensions – it is a set of vectors of integers, rather than a set of integers.
Finite generalized arithmetic progression
A finite generalized arithmetic progression, or sometimes just generalized arithmetic progression (GAP), of dimension d is defined to be a set of the form

{x0 + ℓ1x1 + ⋯ + ℓdxd : 0 ≤ ℓi < Li for 1 ≤ i ≤ d},

where x0, x1, …, xd are integers and L1, …, Ld are positive integers. The product L1L2⋯Ld is called the size of the generalized arithmetic progression; the cardinality of the set can differ from the size if some elements of the set have multiple representations. If the cardinality equals the size, the progression is called proper. Generalized arithmetic progressions can be thought of as a projection of a higher dimensional grid into the integers. This projection is injective if and only if the generalized arithmetic progression is proper.
Semilinear sets
Formally, an arithmetic progression of N^n is an infinite sequence of the form v, v + v′, v + 2v′, v + 3v′, …, where v and v′ are fixed vectors in N^n, called the initial vector and common difference respectively. A subset of N^n is said to be linear if it is of the form

{v0 + k1v1 + ⋯ + kmvm : k1, …, km ∈ N},

where m is some integer and v0, …, vm are fixed vectors in N^n. A subset of N^n is said to be semilinear if it is a finite union of linear sets. The semilinear sets are exactly the sets definable in Presburger arithmetic.
See also
Freiman's theorem
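The definition of a GAP and the distinction between its size and its cardinality can be made concrete with a few lines of code (the function name and the specific lengths are illustrative):

```python
from itertools import product

def gap(x0, diffs, lengths):
    """Enumerate the generalized arithmetic progression
    {x0 + l1*x1 + ... + ld*xd : 0 <= li < Li} as a multiset (list)."""
    return [x0 + sum(l * x for l, x in zip(ls, diffs))
            for ls in product(*(range(L) for L in lengths))]

# Dimension-2 GAP: start 17, common differences 3 and 5, lengths 2 and 4.
elems = gap(17, (3, 5), (2, 4))
size = 2 * 4                         # size = L1 * L2 = 8
proper = len(set(elems)) == size     # proper iff cardinality equals size
print(sorted(set(elems)), proper)
```

Here every combination of multiples yields a distinct element, so the cardinality equals the size and the progression is proper; choosing differences with many coincidences (e.g. 2 and 4) would make it improper.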
https://en.wikipedia.org/wiki/Eukaryotic%20chromosome%20structure
Eukaryotic chromosome structure refers to the levels of packaging from raw DNA molecules to the chromosomal structures seen during metaphase in mitosis or meiosis. Chromosomes contain long strands of DNA containing genetic information. Compared to prokaryotic chromosomes, eukaryotic chromosomes are much larger in size and are linear. Eukaryotic chromosomes are also stored in the cell nucleus, while the chromosomes of prokaryotic cells are not stored in a nucleus. Because of the larger amount of DNA, eukaryotic chromosomes require a higher level of packaging to condense the DNA molecules into the cell nucleus. This packaging includes the wrapping of DNA around proteins called histones to form condensed nucleosomes.
History
The double helix was discovered in 1953 by James Watson and Francis Crick. Other researchers made very important, but unconnected, findings about the composition of DNA; ultimately, it was Watson and Crick who put all of these findings together to come up with a model for DNA. The chemist Alexander Todd determined that the backbone of a DNA molecule contained repeating phosphate and deoxyribose sugar groups. The biochemist Erwin Chargaff found that adenine and thymine always paired while cytosine and guanine always paired. High-resolution X-ray images of DNA obtained by Maurice Wilkins and Rosalind Franklin suggested a helical, or corkscrew-like, shape. Some of the first scientists to recognize the structures now known as chromosomes were Schleiden, Virchow, and Bütschli. The term chromosome was coined by Heinrich Wilhelm Gottfried von Waldeyer-Hartz, referring to the term chromatin, which was introduced by Walther Flemming. Scientists also discovered that plant and animal cells have a central compartment called the nucleus. They soon realized chromosomes were found inside the nucleus and contained different information for many different traits.
Structure
In eukaryotes, such as humans, roughly 3.2 billion nucleotides are spread out
https://en.wikipedia.org/wiki/Maximal%20common%20divisor
In abstract algebra, particularly ring theory, maximal common divisors are an abstraction of the number theory concept of greatest common divisor (GCD). This definition is slightly more general than GCDs, and maximal common divisors may exist in rings in which GCDs do not.

Halter-Koch (1998) provides the following definition. An element d is a maximal common divisor of a subset S if the following criteria are met:
d | x for all x ∈ S (d is a common divisor of S);
whenever c | x for all x ∈ S and d | c, then c | d (no common divisor lies strictly above d in the divisibility order).
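Over the positive integers (a GCD domain), the two criteria can be checked by brute force, and the maximal common divisor coincides with the GCD; a small illustrative sketch (function names are mine):

```python
from math import gcd
from functools import reduce

def is_maximal_common_divisor(d, S):
    """Check the two criteria over the positive integers: d divides every
    element of S, and any common divisor c that is divisible by d must
    itself divide d (i.e., no common divisor sits strictly above d)."""
    if any(x % d != 0 for x in S):
        return False
    common = [c for c in range(1, max(S) + 1) if all(x % c == 0 for x in S)]
    return all(d % c == 0 for c in common if c % d == 0)

S = [12, 18, 30]
g = reduce(gcd, S)  # the GCD of S
print(g, is_maximal_common_divisor(g, S), is_maximal_common_divisor(3, S))
```

Here 3 divides every element of S but is not maximal, because the common divisor 6 lies strictly above it; 6 satisfies both criteria.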
https://en.wikipedia.org/wiki/Galia%20Dafni
Galia Devora Dafni is a mathematician specializing in harmonic analysis and function spaces. Educated in the US, she works in Canada as a professor of mathematics and statistics at Concordia University. She is also affiliated with the Centre de Recherches Mathématiques, where she is deputy director for publications and communication.
Education
Dafni lived in Texas as a teenager. After beginning her undergraduate studies at the University of Texas at Austin, Dafni transferred to Pennsylvania State University, where she earned a bachelor's degree in 1988 in mathematics and computer science, "with highest distinction and with honors in mathematics". She went to Princeton University for graduate study in mathematics, earning a master's degree in 1990 and completing her Ph.D. in 1993. Her doctoral dissertation, Hardy Spaces on Strongly Pseudoconvex Domains in and Domains of Finite Type in , was supervised by Elias M. Stein.
Career
After another year as an instructor at Princeton, Dafni continued through three postdoctoral positions: as Charles B. Morrey Jr. Assistant Professor of Mathematics at the University of California, Berkeley from 1994 to 1996, as Ralph Boas Assistant Professor of Mathematics at Northwestern University from 1996 to 1998, and as a postdoctoral fellow and research assistant professor at Concordia University from 1998 to 2000. Her move to Montreal and Concordia was motivated in part by a two-body problem with her husband, who also worked in Montreal. Finally, in 2000, she obtained a regular-rank assistant professorship at Concordia, supported by a 5-year NSERC University Faculty Award, through a program to support women in STEM. She obtained tenure there as an associate professor in 2005, and has since become a full professor.
Personal life
Dafni is married to Henri Darmon, a mathematician at another Montreal university, McGill University. They met in the early 1990s at Princeton, where Darmon was a postdoctoral researcher.
https://en.wikipedia.org/wiki/Sessility%20%28botany%29
In botany, sessility (meaning "sitting", used in the sense of "resting on the surface") is a characteristic of plant parts (such as flowers and leaves) that have no stalk. Plant parts can also be described as subsessile, that is, not completely sessile.

A sessile flower is one that lacks a pedicel (flower stalk). A flower that is not sessile is pedicellate. For example, the genus Trillium is partitioned into two subgenera: the sessile-flowered trilliums (Trillium subgen. Sessilia) and the pedicellate-flowered trilliums.

Sessile leaves lack petioles (leaf stalks). A leaf that is not sessile is petiolate. For example, the leaves of most monocotyledons lack petioles.

The term sessility is also used in mycology to describe a fungal fruit body that is attached to or seated directly on the surface of the substrate, lacking a supporting stipe or pedicel.
https://en.wikipedia.org/wiki/Warewulf
Warewulf is a computer cluster implementation toolkit that facilitates the process of installing a cluster and its long-term administration. It does this by changing the administration paradigm to make all of the slave-node file systems manageable from one point, and by automating the distribution of the node file system during node boot. It allows a central administration model for all slave nodes and includes the tools needed to build configuration files, monitor, and control the nodes. It is totally customizable and can be adapted to just about any type of cluster. From the software administration perspective it does not make much difference whether you are running 2 nodes or 500 nodes. The procedure is still the same, which is why Warewulf is scalable from the admin's perspective. Also, because it uses a standard chroot-able file system for every node, it is extremely configurable and lends itself to custom environments very easily.

While Warewulf was designed to be a high-performance computing (HPC) system, it is not an HPC system in itself. Warewulf is more along the lines of a distributed Linux distribution, or more specifically a system for replicating and managing small, lightweight Linux systems from one master. Using Warewulf, HPC packages such as LAM/MPI, MPICH, Sun Grid Engine, PVM, etc. can be easily deployed throughout the cluster. Warewulf solves the problem of slave-node management rather than being a strictly HPC-specific system (even though it was designed with HPC in mind). Because of this it is as flexible as a home-grown cluster, but administratively it scales very well. As a result of this flexibility and ease of customization, Warewulf has been used not only in production HPC implementations, but also in development systems like KASY0 (the first system to break the one-hundred-dollar-per-GFLOPS barrier), and in non-HPC systems such as web server cluster farms, intrusion detection clusters, and high-availability clusters.
See also
oneSIS – another diskless cluster p
https://en.wikipedia.org/wiki/Gene%20desert
Gene deserts are regions of the genome that are devoid of protein-coding genes. Gene deserts constitute an estimated 25% of the entire genome, leading to recent interest in their true functions. Originally believed to contain inessential and "junk" DNA due to their inability to encode proteins, gene deserts have since been linked to several vital regulatory functions, including distal enhancing and conservatory inheritance. Thus, an increasing number of risks that lead to several major diseases, including a handful of cancers, have been attributed to irregularities found in gene deserts. One of the most notable examples is the 8q24 gene region, which, when affected by certain single-nucleotide polymorphisms, leads to a myriad of diseases. The major identifying factors of gene deserts are their low GC content and their relatively high levels of repeats, which are not observed in coding regions. Recent studies have further categorized gene deserts into variable and stable forms; regions are categorized based on their behavior through recombination and their genetic contents. Although current knowledge of gene deserts is rather limited, ongoing research and improved techniques are beginning to open the doors for exploration of the various important effects of these noncoding regions.
History
Although the possibility of function in gene deserts was predicted as early as the 1960s, genetic identification tools were unable to uncover any specific characteristics of the long noncoding regions, other than that no coding occurred in those regions. Before the completion of the human genome in 2001 through the Human Genome Project, most of the early associative gene comparisons relied on the belief that essential housekeeping genes were clustered in the same areas of the genome for ease of access and tight regulation. This belief later constructed a hypothesis that gene deserts are therefore previous regulatory sequences that are highly linked (and hence do not u
https://en.wikipedia.org/wiki/Symmetric-key%20algorithm
Symmetric-key algorithms are algorithms for cryptography that use the same cryptographic keys for both the encryption of plaintext and the decryption of ciphertext. The keys may be identical, or there may be a simple transformation to go between the two keys. The keys, in practice, represent a shared secret between two or more parties that can be used to maintain a private information link. The requirement that both parties have access to the secret key is one of the main drawbacks of symmetric-key encryption, in comparison to public-key encryption (also known as asymmetric-key encryption). However, symmetric-key encryption algorithms are usually better for bulk encryption. With the exception of the one-time pad they have a smaller key size, which means less storage space and faster transmission. Due to this, asymmetric-key encryption is often used to exchange the secret key for symmetric-key encryption.
Types
Symmetric-key encryption can use either stream ciphers or block ciphers. Stream ciphers encrypt the digits (typically bytes), or letters (in substitution ciphers), of a message one at a time. An example is ChaCha20. Substitution ciphers are well-known ciphers, but can be easily decrypted using a frequency table. Block ciphers take a number of bits and encrypt them in a single unit, padding the plaintext to achieve a multiple of the block size. The Advanced Encryption Standard (AES) algorithm, approved by NIST in December 2001, uses 128-bit blocks.
Implementations
Examples of popular symmetric-key algorithms include Twofish, Serpent, AES (Rijndael), Camellia, Salsa20, ChaCha20, Blowfish, CAST5, Kuznyechik, RC4, DES, 3DES, Skipjack, Safer, and IDEA.
Use as a cryptographic primitive
Symmetric ciphers are commonly used to achieve other cryptographic primitives than just encryption. Encrypting a message does not guarantee that it will remain unchanged while encrypted. Hence, often a message authentication code is added to a ciphertext to ensure that changes to
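The defining property — the same key encrypts and decrypts — can be illustrated with a toy stream cipher that derives a keystream from a hash in counter mode and XORs it with the data. This is a pedagogical sketch only, not a real cipher; for actual use, a vetted algorithm such as AES or ChaCha20 should be used instead:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: hash key + nonce + counter repeatedly. Illustrative only."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Encryption and decryption are the same operation: XOR with keystream."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key, nonce = b"shared-secret-key", b"unique-nonce"
ct = xor_cipher(key, nonce, b"attack at dawn")   # encrypt
pt = xor_cipher(key, nonce, ct)                   # same call decrypts
print(pt)  # → b'attack at dawn'
```

Because XOR is its own inverse, applying the identical function with the identical key and nonce recovers the plaintext, which is exactly the symmetric-key property described above.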
https://en.wikipedia.org/wiki/Euphorbia%20obtusifolia
The scientific name Euphorbia obtusifolia has been used for at least three species of Euphorbia:
Euphorbia obtusifolia is a synonym of Euphorbia terracina, native from Macaronesia through Hungary and the Mediterranean to the Arabian Peninsula
Euphorbia obtusifolia is an illegitimate name that has been applied to:
Euphorbia lamarckii – of which it is a synonym; native to the western Canary Islands (Tenerife, La Gomera, La Palma and El Hierro); also known by the synonym Euphorbia broussonetii
Euphorbia regis-jubae – with which it has been confused; native to the eastern Canary Islands (Gran Canaria, Lanzarote and Fuerteventura), west Morocco and north-western Western Sahara
https://en.wikipedia.org/wiki/Cueva%20Lucero
Cueva Lucero () is a cave and archeological site in the Guayabal barrio of the Juana Díaz municipality, in Puerto Rico. The cave includes more than 100 petroglyphs and pictographs, "making it one of the best examples of aboriginal rock art in the Antilles." It has been known to archeologists since at least the early 1900s. Most of its images are zoomorphic. The site is known to locals, including rock-climbers and spelunkers, and there is some modern graffiti. The cave was listed on the National Register of Historic Places in 2008.
See also
National Register of Historic Places listings in Juana Díaz, Puerto Rico
https://en.wikipedia.org/wiki/Cache%20language%20model
A cache language model is a type of statistical language model. These occur in the natural language processing subfield of computer science and assign probabilities to given sequences of words by means of a probability distribution. Statistical language models are key components of speech recognition systems and of many machine translation systems: they tell such systems which possible output word sequences are probable and which are improbable. The particular characteristic of a cache language model is that it contains a cache component and assigns relatively high probabilities to words or word sequences that occur elsewhere in a given text. The primary, but by no means sole, use of cache language models is in speech recognition systems. To understand why it is a good idea for a statistical language model to contain a cache component one might consider someone who is dictating a letter about elephants to a speech recognition system. Standard (non-cache) N-gram language models will assign a very low probability to the word "elephant" because it is a very rare word in English. If the speech recognition system does not contain a cache component, the person dictating the letter may be annoyed: each time the word "elephant" is spoken another sequence of words with a higher probability according to the N-gram language model may be recognized (e.g., "tell a plan"). These erroneous sequences will have to be deleted manually and replaced in the text by "elephant" each time "elephant" is spoken. If the system has a cache language model, "elephant" will still probably be misrecognized the first time it is spoken and will have to be entered into the text manually; however, from this point on the system is aware that "elephant" is likely to occur again – the estimated probability of occurrence of "elephant" has been increased, making it more likely that if it is spoken it will be recognized correctly. Once "elephant" has occurred several times, the system is likely to recogni
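The mechanism described here — boosting the probability of recently seen words — is typically realized by interpolating a static model with a cache of recent words. A minimal unigram sketch (the corpus counts and the mixing weight λ are illustrative, not from the source):

```python
from collections import Counter

class CacheLM:
    """Toy unigram cache language model:
    P(w) = lam * P_static(w) + (1 - lam) * P_cache(w)."""
    def __init__(self, corpus_counts, lam=0.9):
        self.static = corpus_counts
        self.static_total = sum(corpus_counts.values())
        self.cache = Counter()
        self.lam = lam

    def observe(self, word):
        self.cache[word] += 1   # recently dictated words enter the cache

    def prob(self, word):
        p_static = self.static.get(word, 0) / self.static_total
        p_cache = (self.cache[word] / sum(self.cache.values())
                   if self.cache else 0.0)
        return self.lam * p_static + (1 - self.lam) * p_cache

# "elephant" is rare in the background corpus...
lm = CacheLM({"the": 700, "a": 200, "plan": 60, "tell": 30, "elephant": 10})
p_before = lm.prob("elephant")
lm.observe("elephant")          # ...but once dictated, its probability rises
p_after = lm.prob("elephant")
print(p_before < p_after)  # → True
```

After the first (manually corrected) occurrence of "elephant", the cache term lifts its estimated probability, making later occurrences more likely to be recognized correctly — the behavior the dictation example describes. Real systems apply the same idea to n-grams and often decay cache counts over time.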
https://en.wikipedia.org/wiki/Image%20sensor
An image sensor or imager is a sensor that detects and conveys information used to form an image. It does so by converting the variable attenuation of light waves (as they pass through or reflect off objects) into signals, small bursts of current that convey the information. The waves can be light or other electromagnetic radiation. Image sensors are used in electronic imaging devices of both analog and digital types, which include digital cameras, camera modules, camera phones, optical mouse devices, medical imaging equipment, night vision equipment such as thermal imaging devices, radar, sonar, and others. As technology changes, electronic and digital imaging tends to replace chemical and analog imaging.

The two main types of electronic image sensors are the charge-coupled device (CCD) and the active-pixel sensor (CMOS sensor). Both CCD and CMOS sensors are based on metal–oxide–semiconductor (MOS) technology, with CCDs based on MOS capacitors and CMOS sensors based on MOSFET (MOS field-effect transistor) amplifiers. Analog sensors for invisible radiation tend to involve vacuum tubes of various kinds, while digital sensors include flat-panel detectors.
CCD vs. CMOS sensors
The two main types of digital image sensors are the charge-coupled device (CCD) and the active-pixel sensor (CMOS sensor), fabricated in complementary MOS (CMOS) or N-type MOS (NMOS or Live MOS) technologies. Both CCD and CMOS sensors are based on MOS technology, with MOS capacitors being the building blocks of a CCD, and MOSFET amplifiers being the building blocks of a CMOS sensor.

Cameras integrated in small consumer products generally use CMOS sensors, which are usually cheaper and have lower power consumption in battery-powered devices than CCDs. CCD sensors are used for high-end broadcast-quality video cameras, and CMOS sensors dominate in still photography and consumer goods where overall cost is a major concern. Both types of sensor accomplish the same task of capturing light and c
https://en.wikipedia.org/wiki/Rustle%20noise
Rustle noise is noise consisting of aperiodic pulses characterized by the average time between those pulses (such as the mean time interval between clicks of a Geiger counter), known as rustle time (Schouten ?). Rustle time is determined by the fineness of sand, seeds, or shot in rattles; contributes heavily to the sound of sizzle cymbals, drum snares, drum rolls, and string drums; and makes subtle differences in string instrument sounds. Rustle time in strings is affected by different weights and widths of bows and by types of hair and rosin in strings. The concept is also applicable to flutter-tonguing, brass and woodwind growls, resonated vocal fry in woodwinds, and eructation sounds in some woodwinds. Robert Erickson suggests the exploration of accelerando-ritardando scales producible on some acoustic instruments and further variations in rustle noise "because this apparently minor aspect of musical sounds has a disproportionately large importance for higher levels--textures, ensemble timbres, [and] contrasts between music events." (Erickson 1975, pp. 71–72)
Sources
Erickson, Robert (1975). Sound Structure in Music. University of California Press.
https://en.wikipedia.org/wiki/List%20of%20plants%20used%20in%20herbalism
This is an alphabetical list of plants used in herbalism. Phytochemicals possibly involved in biological functions are the basis of herbalism, and may be grouped as: primary metabolites, such as carbohydrates and fats found in all plants secondary metabolites serving a more specific function. For example, some secondary metabolites are toxins used to deter predation, and others are pheromones used to attract insects for pollination. Secondary metabolites and pigments may have therapeutic actions in humans, and can be refined to produce drugs; examples are quinine from the cinchona, morphine and codeine from the poppy, and digoxin from the foxglove. In Europe, apothecaries stocked herbal ingredients as traditional medicines. In the Latin names for plants created by Linnaeus, the word officinalis indicates that a plant was used in this way. For example, the marsh mallow has the classification Althaea officinalis, as it was traditionally used as an emollient to soothe ulcers. Pharmacognosy is the study of plant sources of phytochemicals. Some modern prescription drugs are based on plant extracts rather than whole plants. The phytochemicals may be synthesized, compounded or otherwise transformed to make pharmaceuticals. Examples of such derivatives include aspirin, which is chemically related to the salicylic acid found in white willow. The opium poppy is a major industrial source of opiates, including morphine. Few traditional remedies, however, have translated into modern drugs, although there is continuing research into the efficacy and possible adaptation of traditional herbal treatments. See also Chinese classic herbal formula History of birth control List of branches of alternative medicine List of culinary herbs and spices List of herbs with known adverse effects Materia Medica Medicinal mushrooms Medicinal plants of the American West Medicinal plants traditi
https://en.wikipedia.org/wiki/JBuilder
JBuilder is a discontinued integrated development environment (IDE) for the programming language Java from Embarcadero Technologies. Originally developed by Borland, JBuilder was spun off with CodeGear, which was eventually purchased by Embarcadero Technologies in 2008. Oracle had based the first versions of JDeveloper on code from JBuilder licensed from Borland, but it has since been rewritten from scratch. Versions JBuilder 1 through 3 are based on the Delphi IDE. JBuilder 3.5 through 2006 are based on PrimeTime, an all-Java IDE framework. JBuilder 2007 "Peloton" is the first JBuilder release based on the Eclipse IDE framework. See also Comparison of integrated development environments
https://en.wikipedia.org/wiki/Cisterna
A cisterna (plural: cisternae) is a flattened membrane vesicle found in the endoplasmic reticulum and Golgi apparatus. Cisternae are an integral part of the packaging and modification processes of proteins occurring in the Golgi. Function Proteins begin on the cis side of the Golgi (the side facing the ER) and exit on the trans side (the side facing the plasma membrane). Throughout their journey in the cisternae, the proteins are packaged and modified for transport throughout the cell. The number of cisternae in the Golgi stack depends on the organism and cell type. The structure, composition, and function of each of the cisternae may be different inside the Golgi stack. These variations of Golgi cisternae are categorized into three groups: the cis Golgi network, the medial Golgi, and the trans Golgi network. The cis Golgi network is the first cisternal stage for a protein being packaged, while the trans Golgi network is the last, from which the vesicle is transferred to either the lysosome, the cell surface or the secretory vesicle. The cisternae are shaped by the cytoskeleton of the cell through a lipid bilayer. Post-translational modifications such as glycosylation, phosphorylation and cleavage occur in the Golgi, and as proteins travel through it, they go through the cisternae, which allows functional ion channels to be created due to these modifications. Each class of cisternae contains various enzymes used in protein modifications. These enzymes help the Golgi in glycosylation and phosphorylation of proteins, as well as mediate signal modifications to direct proteins to their final destination. Defects in the cisternal enzymes can cause congenital defects including some forms of muscular dystrophy, cystic fibrosis, cancer, and diabetes. The trans-Golgi network is an important part of the Golgi. It is located on the trans face of the Golgi apparatus and is made up of cisternae. The cisternae play a crucial role in the packag
https://en.wikipedia.org/wiki/Peptide%20loading%20complex
The peptide-loading complex (PLC) is a short-lived, multisubunit membrane protein complex that is located in the endoplasmic reticulum (ER). It orchestrates peptide translocation and selection by major histocompatibility complex class I (MHC-I) molecules. Stable peptide-MHC I complexes are released to the cell surface to promote T-cell response against malignant or infected cells. In turn, T-cells recognize the activated peptides, which could be immunogenic or non-immunogenic. Overview A PLC assembly consists of seven subunits, including the transporters associated with antigen processing (TAP1 and TAP2 – jointly referred to as TAP), the oxidoreductase ERp57, the MHC-I heterodimer, and the chaperones tapasin and calreticulin. TAP transports proteasomal degradation products from the cytosol into the lumen of the ER, where they are loaded onto MHC-I molecules. The peptide-MHC-I complexes then move via a secretory pathway to the cell surface, presenting their antigenic load to cytotoxic T-cells. In general, preliminary MHC-I heavy chains are chaperoned by the calnexin–calreticulin system in the ER. Together with β2-microglobulin (β2m), MHC-I heavy chains form assemblies of heterodimers that act as receptors for antigenic peptides. Empty MHC-I heterodimers are recruited by calreticulin and form short-lived macromolecular PLC where the chaperone tapasin further provides stabilization in the MHC-I molecules. Furthermore, ERp57 and tapasin form disulfide-linked conjugates, and tapasin is crucial for maintaining the structural stability of the PLC as well as facilitating optimal peptide loading. After final quality control, during which MHC-I heterodimers undergo peptide editing, stable peptide–MHC-I complexes are released to the cell surface for T-cell recognition. The PLC can serve a large variety of MHC-I allomorphs, thus playing a central role in the differentiation and priming of T lymphocytes, and in controlling viral infections and tumour development. Structur
https://en.wikipedia.org/wiki/Projective%20cover
In the branch of abstract mathematics called category theory, a projective cover of an object X is in a sense the best approximation of X by a projective object P. Projective covers are the dual of injective envelopes. Definition Let C be a category and X an object in C. A projective cover is a pair (P,p), with P a projective object in C and p a superfluous epimorphism in Hom(P, X). If R is a ring, then in the category of R-modules, a superfluous epimorphism is an epimorphism p : P → M such that the kernel of p is a superfluous submodule of P. Properties Projective covers and their superfluous epimorphisms, when they exist, are unique up to isomorphism. The isomorphism need not be unique, however, since the projective property is not a full-fledged universal property. The main effect of p having a superfluous kernel is the following: if N is any proper submodule of P, then p(N) ≠ M. Informally speaking, this shows the superfluous kernel causes P to cover M optimally, that is, no proper submodule of P would suffice. This does not depend upon the projectivity of P: it is true of all superfluous epimorphisms. If (P,p) is a projective cover of M, and P' is another projective module with an epimorphism p' : P' → M, then there is a split epimorphism α from P' to P such that p ∘ α = p'. Unlike injective envelopes and flat covers, which exist for every left (right) R-module regardless of the ring R, left (right) R-modules do not in general have projective covers. A ring R is called left (right) perfect if every left (right) R-module has a projective cover in R-Mod (Mod-R). A ring is called semiperfect if every finitely generated left (right) R-module has a projective cover in R-Mod (Mod-R). "Semiperfect" is a left-right symmetric property. A ring is called lift/rad if idempotents lift from R/J to R, where J is the Jacobson radical of R. The property of being lift/rad can be characterized in terms of projective covers: R is lift/rad if and only if direct summands of the R-module R/J (as a right or le
https://en.wikipedia.org/wiki/Borate%20buffered%20saline
Borate buffered saline (abbreviated BBS) is a buffer used in some biochemical techniques to maintain the pH within a relatively narrow range. Borate buffers have an alkaline buffering capacity in the pH 8–10 range. Boric acid has a pKa of 9.14 at 25 °C. Applications BBS has many uses because it is isotonic and has a strong bactericidal effect. It can be used to dilute substances and has applications in coating procedures. Additives such as Polysorbate 20 and milk powder extend BBS's use as a washing buffer or blocking buffer. Contents The following is a sample recipe for BBS: 10 mM sodium borate 150 mM NaCl Adjust pH to 8.2 The simplest way to prepare a BBS solution is to use BBS tablets, which are formulated to give a ready-to-use borate buffered saline solution upon dissolution in 500 ml of deionized water. The concentration of borate and NaCl, as well as the pH, can vary, and the resulting solution would still be referred to as "borate buffered saline". The borate concentration (giving buffering capacity) can vary from 10 mM to 100 mM. As BBS is used to emulate physiological conditions (as in the animal or human body), the pH value is slightly alkaline, ranging from 8.0 to 9.0. NaCl provides the isotonic salt concentration (the most commonly used 150 mM NaCl corresponds to physiological conditions: 0.9% NaCl).
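The relation between the buffer's composition and its pH can be estimated with the Henderson–Hasselbalch equation, pH = pKa + log10([base]/[acid]), using the pKa quoted above. The sketch below is a simplified illustration (borate chemistry is more complex than a single acid/base pair, and the helper names are hypothetical):

```python
import math

def henderson_hasselbalch_ph(pka, base_conc, acid_conc):
    """pH = pKa + log10([A-]/[HA]) for a simple conjugate acid/base pair."""
    return pka + math.log10(base_conc / acid_conc)

def base_fraction_for_ph(pka, ph):
    """Fraction of the buffer pair present as conjugate base at a given pH."""
    ratio = 10 ** (ph - pka)  # [A-]/[HA]
    return ratio / (1 + ratio)

PKA_BORIC = 9.14  # boric acid at 25 degrees C, as quoted in the text

# At the sample recipe's target pH of 8.2, about a tenth of the borate
# pair is in the conjugate-base form; the rest remains boric acid.
f = base_fraction_for_ph(PKA_BORIC, 8.2)
```

This is consistent with the text's note that borate buffers work best in the pH 8–10 range around the pKa: at pH 8.2 the buffer sits near the lower edge of its useful capacity.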
https://en.wikipedia.org/wiki/Notice%20Advisory%20to%20Navstar%20Users
A Notice Advisory to Navstar Users (NANU) is a message issued jointly by the United States Coast Guard and the GPS Operations Center at Schriever Space Force Base in Colorado. Such notices (NANUs) provide updates on the general health of individual satellites in the GPS constellation. NANUs are typically issued approximately three days prior to a change in the operation of a GPS satellite, such as a change in orbit or scheduled on-board equipment maintenance. NANU types Forecast outages FCSTDV, or Forecast Delta-V, gives scheduled outage times for Delta-V maneuvers. The satellite is moved during this maintenance and the user may be required to download a new almanac. FCSTMX, or Forecast Maintenance, gives scheduled outage times for Ion Pump Operations or software tests. FCSTEXTD, or Forecast Extension, extends the scheduled outage time "Until Further Notice"; references the original NANU. FCSTSUMM, or Forecast Summary, gives the exact outage times for the scheduled outage, including the FCSTEXTD; sent after the maintenance is complete and the satellite is set healthy to users; references the original NANU. FCSTCANC, or Forecast Cancellation, cancels a scheduled outage; new maintenance time not yet determined; references the original NANU. FCSTRESCD, or Forecast Rescheduled, reschedules a scheduled outage; references the original NANU. FCSTUUFN, or Forecast Unusable Until Further Notice, scheduled outage of indefinite duration not necessarily related to Delta-V or maintenance activities. Unscheduled outages UNUSABLE, or UNUSABLE with a reference NANU, closes out an UNUSUFN NANU and gives the exact outage times for the outage; references the UNUSUFN NANU. UNUSUFN, or Unusable Until Further Notice, notifies users that a satellite will be Unusable to all users until further notice. UNUNOREF, or UNUSABLE with no reference NANU, gives times for outages that were resolved before a UNUSUFN NANU could be sent. Other LEAPSEC, or Leap Second, is used to notify
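The NANU type codes above fall into three categories (forecast/scheduled, unscheduled, other). As a minimal illustrative sketch, not any official API, the grouping can be expressed as a lookup table; the function name and category labels are hypothetical:

```python
# NANU type codes from the text, grouped by the section they appear under.
NANU_CATEGORY = {
    "FCSTDV": "forecast", "FCSTMX": "forecast", "FCSTEXTD": "forecast",
    "FCSTSUMM": "forecast", "FCSTCANC": "forecast", "FCSTRESCD": "forecast",
    "FCSTUUFN": "forecast",
    "UNUSABLE": "unscheduled", "UNUSUFN": "unscheduled",
    "UNUNOREF": "unscheduled",
    "LEAPSEC": "other",
}

def classify_nanu(nanu_type):
    """Return the outage category for a NANU type code, or 'unknown'."""
    return NANU_CATEGORY.get(nanu_type.upper(), "unknown")
```

A consumer of NANU feeds might use such a table to decide whether a satellite outage was planned (and therefore likely announced about three days in advance) or unscheduled.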
https://en.wikipedia.org/wiki/Active%20site
In biology and biochemistry, the active site is the region of an enzyme where substrate molecules bind and undergo a chemical reaction. The active site consists of amino acid residues that form temporary bonds with the substrate, the binding site, and residues that catalyse a reaction of that substrate, the catalytic site. Although the active site occupies only ~10–20% of the volume of an enzyme, it is the most important part as it directly catalyzes the chemical reaction. It usually consists of three to four amino acids, while other amino acids within the protein are required to maintain the tertiary structure of the enzymes. Each active site is evolved to be optimised to bind a particular substrate and catalyse a particular reaction, resulting in high specificity. This specificity is determined by the arrangement of amino acids within the active site and the structure of the substrates. Sometimes enzymes also need to bind with some cofactors to fulfil their function. The active site is usually a groove or pocket of the enzyme which can be located in a deep tunnel within the enzyme, or between the interfaces of multimeric enzymes. An active site can catalyse a reaction repeatedly as residues are not altered at the end of the reaction (they may change during the reaction, but are regenerated by the end). This process is achieved by lowering the activation energy of the reaction, so more substrates have enough energy to undergo reaction. Binding site Usually, an enzyme molecule has only one active site, and the active site fits with one specific type of substrate. An active site contains a binding site that binds the substrate and orients it for catalysis. The orientation of the substrate and the close proximity between it and the active site is so important that in some cases the enzyme can still function properly even though all other parts are mutated and lose function. Initially, the interaction between the active site and the substrate is non-covalent and tr
https://en.wikipedia.org/wiki/Hydra%20game
In mathematics, specifically in graph theory and number theory, a hydra game is a single-player iterative mathematical game played on a mathematical tree called a hydra where, usually, the goal is to cut off the hydra's "heads" while the hydra simultaneously expands itself. Hydra games can be used to generate large numbers or infinite ordinals or prove the strength of certain mathematical theories. Unlike their combinatorial counterparts like TREE and SCG, no search is required to compute these fast-growing function values – one must simply keep applying the transformation rule to the tree until the game says to stop. Introduction A simple hydra game can be defined as follows: A hydra is a finite rooted tree, which is a connected graph with no cycles and a specific node R designated as the root of the tree. In a rooted tree, each node has a single parent (with the exception of the root, which has no parent) and a set of children, as opposed to an unrooted tree where there is no parent-child relationship and we simply refer to edges between nodes. The player selects a leaf node x from the tree and a natural number n during each turn. A leaf node can be defined as a node with no children, or a node of degree 1 which is not R. Remove the leaf node x. Let y be x's parent. If y = R, return to stage 2. Otherwise if y ≠ R, let z be the parent of y. Then create n leaf nodes as children of z such that the new nodes would appear after any existing children of z during a post-order traversal (visually, these new nodes would appear to the right side of any existing children). Then return to stage 2. Even though the hydra may grow by an unbounded number of leaves at each turn, the game will eventually end in finitely many steps: if d is the greatest distance between the root and a leaf, and m the number of leaves at this distance, induction on (d, m) can be used to demonstrate that the player will always kill the hydra. If d = 1, removing the leaves can never cause the hydra to grow, so the player wins
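The transformation rule of this simple hydra game can be sketched directly in code. The following is an illustrative Python implementation (the nested-list tree representation, function names, and path convention are choices made here, not part of any standard): a tree is a list of its children, a leaf is an empty list, and a move cuts a chosen leaf and, if the leaf's parent is not the root, grows n new leaves on the grandparent, to the right of its existing children.

```python
def hydra_step(tree, path, n):
    """Apply one move of the simple hydra game described above.

    `tree` is a rooted tree given as nested lists of children (a leaf
    is an empty list); `path` is a list of child indices leading from
    the root to the chosen leaf; `n` is the chosen natural number.
    The tree is modified in place.
    """
    # Walk down from the root, remembering the nodes along the path.
    nodes = [tree]
    for i in path:
        nodes.append(nodes[-1][i])
    assert nodes[-1] == [], "chosen node must be a leaf"
    parent = nodes[-2]
    del parent[path[-1]]            # remove the leaf
    if len(path) >= 2:              # the leaf's parent is not the root:
        grandparent = nodes[-3]     # grow n new leaves on the grandparent,
        grandparent.extend([] for _ in range(n))  # appended to the right

def count_nodes(tree):
    """Total number of nodes, counting the (sub)tree's root itself."""
    return 1 + sum(count_nodes(child) for child in tree)

hydra = [[[]]]                # root -> node -> leaf (three nodes)
hydra_step(hydra, [0, 0], 3)  # cut the deepest leaf, choosing n = 3
```

After this move the deep leaf is gone, its former parent has become a leaf, and the root has sprouted three new leaves, so the hydra grew from three nodes to five; cutting leaves whose parent is the root then shrinks it with no regrowth, illustrating why the game terminates.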
https://en.wikipedia.org/wiki/Remote%20Desktop%20Services
Remote Desktop Services (RDS), known as Terminal Services in Windows Server 2008 and earlier, is one of the components of Microsoft Windows that allow a user to initiate and control an interactive session on a remote computer or virtual machine over a network connection. RDS was first released in 1998 as Terminal Server in Windows NT 4.0 Terminal Server Edition, a stand-alone edition of Windows NT 4.0 Server that allowed users to log in remotely. Starting with Windows 2000, it was integrated under the name of Terminal Services as an optional component in the server editions of the Windows NT family of operating systems, receiving updates and improvements with each version of Windows. Terminal Services were then renamed to Remote Desktop Services with Windows Server 2008 R2 in 2009. RDS is Microsoft's implementation of thin client architecture, where Windows software, and the entire desktop of the computer running RDS, are made accessible to any remote client machine that supports Remote Desktop Protocol (RDP). User interfaces are displayed from the server onto the client system and input from the client system is transmitted to the server - where software execution takes place. This is in contrast to application streaming systems, like Microsoft App-V, in which computer programs are streamed to the client on-demand and executed on the client machine. RemoteFX was added to RDS as part of Windows Server 2008 R2 Service Pack 1. Overview Windows includes three client components that use RDS: Windows Remote Assistance – only Windows 10 and later Remote Desktop Connection (RDC) Fast user switching The first two are individual utilities that allow a user to operate an interactive session on a remote computer over the network. In case of Remote Assistance, the remote user needs to receive an invitation and the control is cooperative. In case of RDC, however, the remote user opens a new session on the remote computer and has every power granted by its user account's
https://en.wikipedia.org/wiki/Polar%20set
In functional and convex analysis, and related disciplines of mathematics, the polar set A° is a special convex set associated to any subset A of a vector space X, lying in the dual space X*. The bipolar of a subset is the polar of A°, but lies in X (not X**). Definitions There are at least three competing definitions of the polar of a set, originating in projective geometry and convex analysis. In each case, the definition describes a duality between certain subsets of a pairing of vector spaces (X, Y) over the real or complex numbers (X and Y are often topological vector spaces (TVSs)). If X is a vector space over the field K then unless indicated otherwise, Y will usually, but not always, be some vector space of linear functionals on X and the dual pairing ⟨·, ·⟩ : X × Y → K will be the bilinear evaluation map (at a point) defined by ⟨x, f⟩ = f(x). If X is a topological vector space then the space Y will usually, but not always, be the continuous dual space of X, in which case the dual pairing will again be the evaluation map. Denote the closed ball of radius r ≥ 0 centered at the origin in the underlying scalar field K of X by B_r. Functional analytic definition Absolute polar Suppose that (X, Y) is a pairing. The polar or absolute polar of a subset A of X is the set: A° := {y ∈ Y : sup_{x∈A} |⟨x, y⟩| ≤ 1} where ⟨A, y⟩ := {⟨x, y⟩ : x ∈ A} denotes the image of the set A under the map ⟨·, y⟩ : X → K defined by x ↦ ⟨x, y⟩. If cobal(A) denotes the convex balanced hull of A, which by definition is the smallest convex and balanced subset of X that contains A, then A° = [cobal(A)]°. This is an affine shift of the geometric definition; it has the useful characterization that the functional-analytic polar of the unit ball (in X) is precisely the unit ball (in Y). The prepolar or absolute prepolar of a subset B of Y is the set: °B := {x ∈ X : sup_{y∈B} |⟨x, y⟩| ≤ 1}. Very often, the prepolar of a subset B of Y is also called the polar or absolute polar of B and denoted by B°; in practice, this reuse of notation and of the word "polar" rarely causes any issues (such as ambiguity) and many authors do not even use the word "prepolar". The bipolar of a subset A of X, often denoted by A°°, is the set °(A°); that is, A°° := °(A°). Real polar
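The absolute polar can be checked numerically in a simple finite-dimensional setting. The sketch below (an illustration under stated assumptions, not a general implementation) takes the pairing to be the ordinary dot product on R², and assumes A is a convex polytope given by its vertices; since |⟨·, y⟩| is a convex function, its supremum over the polytope is attained at a vertex, so membership in A° can be tested on the vertex list alone.

```python
def in_abs_polar(y, vertices):
    """Test y in A-polar = {y : sup over x in A of |<x, y>| <= 1},
    where A = conv(vertices) is a polytope in R^n.

    |<., y>| is convex, so its supremum over a polytope is attained
    at a vertex; testing the vertices therefore suffices.
    """
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return max(abs(dot(x, y)) for x in vertices) <= 1

# A = the square [-1, 1]^2. Its polar is the cross-polytope ("diamond")
# {y : |y1| + |y2| <= 1}, a standard duality between these two bodies.
square = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
```

For example, (0.5, 0.5) lies inside the diamond and hence in the polar of the square, while (1, 1) pairs with the vertex (1, 1) to give 2 > 1 and is excluded.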
https://en.wikipedia.org/wiki/Informal%20mathematics
Informal mathematics, also called naïve mathematics, has historically been the predominant form of mathematics at most times and in most cultures, and is the subject of modern ethno-cultural studies of mathematics. The philosopher Imre Lakatos in his Proofs and Refutations aimed to sharpen the formulation of informal mathematics, by reconstructing its role in nineteenth-century mathematical debates and concept formation, opposing the predominant assumptions of mathematical formalism. Informality may not discern between statements given by inductive reasoning (as in approximations which are deemed "correct" merely because they are useful), and statements derived by deductive reasoning. Terminology Informal mathematics means any informal mathematical practices, as used in everyday life, or by aboriginal or ancient peoples, without historical or geographical limitation. Modern mathematics, exceptional from that point of view, emphasizes formal and strict proofs of all statements from given axioms. This can therefore usefully be called formal mathematics. Informal practices are usually understood intuitively and justified with examples—there are no axioms. This is of direct interest in anthropology and psychology: it casts light on the perceptions and agreements of other cultures. It is also of interest in developmental psychology as it reflects a naïve understanding of the relationships between numbers and things. Another term used for informal mathematics is folk mathematics, which is ambiguous; the mathematical folklore article is dedicated to the usage of that term among professional mathematicians. The field of naïve physics is concerned with similar understandings of physics. People use mathematics and physics in everyday life, without really understanding (or caring) how mathematical and physical ideas were historically derived and justified. History There has long been a standard account of the development of geometry in ancient Egypt, followed by Greek
https://en.wikipedia.org/wiki/Reference%20design
Reference design refers to a technical blueprint of a system that is intended for others to copy. It contains the essential elements of the system; however, third parties may enhance or modify the design as required. When discussing computer designs, the concept is generally known as a reference platform. The main purpose of a reference design is to support companies in the development of next-generation products using the latest technologies. The reference product is proof of the platform concept and is usually targeted at specific applications. Reference design packages enable a fast track to market, thereby cutting costs and reducing risk in the customer's integration project. As the predominant customers for reference designs are OEMs, many reference designs are created by technology component vendors, whether hardware or software, as a means to increase the likelihood that their product will be designed into the OEM's product, giving them a competitive advantage. Examples NanoBook, a reference design of a miniature laptop Open source hardware RONJA, a free and open telecommunication technology ("free Internet") VIA OpenBook, a free and open reference design of a laptop
https://en.wikipedia.org/wiki/Elegant%20degradation
Elegant degradation is a term used in engineering to describe what occurs to machines which are subject to constant, repetitive stress. Externally, such a machine maintains the same appearance to the user, appearing to function properly. Internally, the machine slowly weakens over time until, unable to withstand the stress, it eventually breaks down. Compared to graceful degradation, the operational quality does not decrease at all, but the breakdown may be just as sudden. This term's meaning varies depending on context and field, and may not be strictly considered exclusive to engineering. For instance, it is used as a mechanism in the food industry as applied in the degradation of lignin, cellulose, pentosan, and polymers, among others. The concept is also used in the extraction of chemicals, such as the elegant degradation of Paederus fuscipes to obtain pederin and hemiacetal pseudopederin. In this process degradation is induced by heat. A play with the same name also used it as a metaphor for the current state of the world. See also Fail safe Fail soft
https://en.wikipedia.org/wiki/Pleurisy
Pleurisy, also known as pleuritis, is inflammation of the membranes that surround the lungs and line the chest cavity (pleurae). This can result in a sharp chest pain while breathing. Occasionally the pain may be a constant dull ache. Other symptoms may include shortness of breath, cough, fever, or weight loss, depending on the underlying cause. Pleurisy can be caused by a variety of conditions, including viral or bacterial infections, autoimmune disorders, and pulmonary embolism. The most common cause is a viral infection. Other causes include bacterial infection, pneumonia, pulmonary embolism, autoimmune disorders, lung cancer, following heart surgery, pancreatitis and asbestosis. Occasionally the cause remains unknown. The underlying mechanism involves the rubbing together of the pleurae instead of smooth gliding. Other conditions that can produce similar symptoms include pericarditis, heart attack, cholecystitis, pulmonary embolism, and pneumothorax. Diagnostic testing may include a chest X-ray, electrocardiogram (ECG), and blood tests. Treatment depends on the underlying cause. Paracetamol (acetaminophen) and ibuprofen may be used to decrease pain. Incentive spirometry may be recommended to encourage larger breaths. About one million people are affected in the United States each year. Descriptions of the condition date from at least as early as 400 BC by Hippocrates. Signs and symptoms The defining symptom of pleurisy is a sudden sharp, stabbing, burning or dull pain in the right or left side of the chest during breathing, especially when one inhales and exhales. It feels worse with deep breathing, coughing, sneezing, or laughing. The pain may stay in one place, or it may spread to the shoulder or back. Sometimes, it becomes a fairly constant dull ache. Depending on its cause, pleuritic chest pain may be accompanied by other symptoms: Dry cough Fever and chills Rapid, shallow breathing Shortness of breath Fast heart rate Sore throat followed by pa
https://en.wikipedia.org/wiki/ISO-IR-169
ISO-IR-169 is a character set developed by the Blissymbolics Communication International Institute (BCI), and registered with the ISO-IR registry for use with ISO/IEC 2022 by the Standards Council of Canada. It contains 2304 characters for communicating with Blissymbols, including 2267 Blissymbolic dictionary words taken from Wood, Star and Reich's 1991 Blissymbol Reference Guide. Code charts Character set 0x21 (row 1) Row 1 contains a space, an "error sign", punctuation and ordinal numbers. This is the only row for which Unicode mappings currently exist, as of Unicode 13.0. Character set 0x23 (row 3) Row 3 contains 19 combining indicator characters for combination with Blissymbolic symbols. Character sets 0x30 and onward (rows 16 and onward) Rows 16 and onward include Blissymbolic dictionary words in alphabetical order by English name. They are annotated with respect to which combining indicators may be applied to the word, and whether certain combinations with indicators are forbidden due to being redundant to another encoded word. These characters do not exist in Unicode, as of Unicode 13.0.
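Assuming the set follows the usual ISO/IEC 2022 convention for two-byte 94×94 graphic sets (each byte in the range 0x21–0x7E, offset from 0x20), the row numbers above map directly to the first byte: row 1 begins at 0x21, row 3 at 0x23, and row 16 at 0x30, matching the section headings. A hypothetical helper illustrating this addressing (the function name is an assumption, not from the registration):

```python
def row_cell_to_bytes(row, cell):
    """Map a (row, cell) position in a 94x94 ISO/IEC 2022 graphic set
    to its two-byte code. Rows and cells are numbered 1..94 and offset
    from 0x20, so row 1 starts at byte 0x21 and row 16 at byte 0x30."""
    if not (1 <= row <= 94 and 1 <= cell <= 94):
        raise ValueError("row and cell must be in 1..94")
    return bytes([0x20 + row, 0x20 + cell])
```

A 94×94 set offers 8836 code positions, comfortably holding the 2304 characters of ISO-IR-169.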
https://en.wikipedia.org/wiki/Journal%20of%20Food%20Protection
Journal of Food Protection is a scientific journal that covers original research in food science. The journal is published by Elsevier. Abstracting and indexing The journal is abstracted and indexed for example in: Scopus Science Citation Index PubMed/Medline According to the Journal Citation Reports, the journal has a 2021 impact factor of 2.745.
https://en.wikipedia.org/wiki/Anthidium%20scudderi
Anthidium scudderi is an extinct species of mason bee in the Megachilidae genus Anthidium. The species is solely known from the late Eocene, Chadronian stage, Florissant Formation deposits in Florissant, Colorado, USA. Anthidium scudderi is one of only four extinct species of mason bees known from the fossil record, and, with Anthidium exhumatum, one of two species from the Florissant Formation. History and classification The species is known from only a single fossil, the holotype, number "No. 2002", a single specimen of indeterminate genus, originally part of the Samuel Hubbard Scudder collection as specimen "No. 11381". The specimen, along with the three A. exhumatum fossils, currently resides in the Museum of Comparative Zoology paleoentomology collections at Harvard University. A. scudderi was first studied by Theodore Cockerell, with his 1906 type description being published in the journal Bulletin of the Museum of Comparative Zoology. The specific epithet "scudderi" was coined by Theodore Cockerell in honor of Samuel Scudder, who collected the specimens at Florissant. Description The holotype of Anthidium scudderi, while incomplete, is approximately in length but is missing up to of the abdomen tip. The body length and width are noted to probably be larger than in life due to crushing during fossilization. Both the head and thorax are black with possible light patterning, with a large lighter patch on the vertex, the clypeus mostly light, and the mesothorax with two possible light stripes. Though not definitive, the light stripes may have been reddish. The abdomen in contrast is light in tone, possibly yellow in life, with the posterior edges of each segment darkening into a distinct stripe. There are indications of a possible subbasal band running along the abdomen in the subdorsal region. Due to preservation the antennae and legs are not visible in the specimen. The general coloration is similar to that of the modern Anthidium bernardinum, now
https://en.wikipedia.org/wiki/Fish%20migration
Fish migration is mass relocation by fish from one area or body of water to another. Many types of fish migrate on a regular basis, on time scales ranging from daily to annually or longer, and over distances ranging from a few metres to thousands of kilometres. Such migrations are usually done for better feeding or to reproduce, but in other cases the reasons are unclear. Fish migrations involve movements of schools of fish on a scale and duration larger than those arising during normal daily activities. Some particular types of migration are anadromous, in which adult fish live in the sea and migrate into fresh water to spawn; and catadromous, in which adult fish live in fresh water and migrate into salt water to spawn. Marine forage fish often make large migrations between their spawning, feeding and nursery grounds. Movements are associated with ocean currents and with the availability of food in different areas at different times of year. The migratory movements may partly be linked to the fact that the fish cannot identify their own offspring and moving in this way prevents cannibalism. Some species have been described by the United Nations Convention on the Law of the Sea as highly migratory species. These are large pelagic fish that move in and out of the exclusive economic zones of different nations, and these are covered differently in the treaty from other fish. Salmon and striped bass are well-known anadromous fish, and freshwater eels are catadromous fish that make large migrations. The bull shark is a euryhaline species that moves at will from fresh to salt water, and many marine fish make a diel vertical migration, rising to the surface to feed at night and sinking to lower layers of the ocean by day. Some fish such as tuna move to the north and south at different times of year following temperature gradients. The patterns of migration are of great interest to the fishing industry. Movements of fish in fresh water also occur; often the fish swim upr
https://en.wikipedia.org/wiki/American%20Anthropometric%20Society
The American Anthropometric Society was an association for acquiring and storing brains of eminent persons for the purpose of research. The society was founded in 1889 in Philadelphia. Edward Anthony Spitzka, M.D., professor of anatomy at Jefferson Medical College, presented a paper on March 16, 1906, about his study of six brains bequeathed to the society. The brains are now housed at the Wistar Institute in Philadelphia. Similar organizations Mutual Autopsy Society of Paris, founded 1881 Cornell Brain Association
https://en.wikipedia.org/wiki/Food%20Standards%20Australia%20New%20Zealand
Food Standards Australia New Zealand (FSANZ) (Māori: Te Mana Kounga Kai – Ahitereiria me Aotearoa), formerly Australia New Zealand Food Authority (ANZFA), is the statutory authority in the Australian Government Health portfolio that is responsible for developing food standards for Australia and New Zealand. Description FSANZ develops the standards in consultation with experts, other government agencies and stakeholders; the standards are enforced by state and territory departments, agencies and local councils in Australia, the Ministry for Primary Industries in New Zealand, and the Australian Department of Agriculture, Water and the Environment for food imported into Australia. According to legislation, the recommendations made by the body should be open and accountable, and based upon a rigorous scientific assessment of risk to public health and safety, though FSANZ's commitment to this has been disputed by leading public health and consumer representatives across Australia and New Zealand. All decisions made by FSANZ must be approved by the Australia and New Zealand Food Regulation Ministerial Council, which is composed of the Health Minister from each of the Australian states and territories, the Health Minister from New Zealand, and other participating Ministers nominated by each jurisdiction. This requirement can lead to political interference in decisions: for example, when FSANZ scientists recommended that hemp seed be allowed for sale, the ministers vetoed the recommendation because they did not want to appear soft on drugs. Publications from FSANZ include the Australian Total Diet Survey and Shoppers' Guide to Food Additives and labels. Nomenclature This authority is sometimes cited variously as Australia and New Zealand Food Standards/Safety Authority (ANZFSA), possibly incorrect nomenclature arising due to confusion with the old initialism ANZFA, and with the acronym of the New Zealand authority, New Zealand Food Safety, wh
https://en.wikipedia.org/wiki/Body%20fluid
Body fluids, bodily fluids, or biofluids, sometimes body liquids, are liquids within the human body. In lean healthy adult men, the total body water is about 60% (60–67%) of the total body weight; it is usually slightly lower in women (52–55%). The exact percentage of fluid relative to body weight is inversely proportional to the percentage of body fat. A lean man, for example, has about 42 (42–47) liters of water in his body. The total body water is divided into fluid compartments, between the intracellular fluid compartment (also called space, or volume) and the extracellular fluid (ECF) compartment (space, volume) in a two-to-one ratio: 28 (28–32) liters are inside cells and 14 (14–15) liters are outside cells. The ECF compartment is divided into the interstitial fluid volume – the fluid outside both the cells and the blood vessels – and the intravascular volume (also called the vascular volume and blood plasma volume) – the fluid inside the blood vessels – in a three-to-one ratio: the interstitial fluid volume is about 12 liters; the vascular volume is about 4 liters. The interstitial fluid compartment is divided into the lymphatic fluid compartment – about 2/3, or 8 (6–10) liters, and the transcellular fluid compartment (the remaining 1/3, or about 4 liters). The vascular volume is divided into the venous volume and the arterial volume; and the arterial volume has a conceptually useful but unmeasurable subcompartment called the effective arterial blood volume. Compartments by location intracellular fluid (ICF), which consists of cytosol and fluids in the cell nucleus Extracellular fluid Intravascular fluid (blood plasma) Interstitial fluid Lymphatic fluid (sometimes included in interstitial fluid) Transcellular fluid Health Body fluid is the term most often used in medical and health contexts. Modern medical, public health, and personal hygiene practices treat body fluids as potentially unclean. This is because they can be vectors for infectious
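The compartment ratios described above (total body water about 60% of weight, a 2:1 intracellular-to-extracellular split, and a 3:1 interstitial-to-intravascular split) can be turned into a small worked calculation. The sketch below is illustrative only, not a clinical formula, and the function name and default fractions are assumptions for the example:

```python
def fluid_compartments(body_weight_kg, water_fraction=0.60):
    """Approximate body-fluid compartment volumes (litres) from body weight,
    using the rule-of-thumb ratios described above. 1 kg of water ~ 1 litre.
    Illustrative only, not a clinical calculation."""
    total = body_weight_kg * water_fraction
    icf = total * 2 / 3            # intracellular : extracellular = 2 : 1
    ecf = total * 1 / 3
    interstitial = ecf * 3 / 4     # interstitial : intravascular = 3 : 1
    intravascular = ecf * 1 / 4
    return {"total": total, "ICF": icf, "ECF": ecf,
            "interstitial": interstitial, "intravascular": intravascular}

# For a 70 kg lean man: total = 42 L, ICF = 28 L, ECF = 14 L.
```

Note that a strict 3:1 split of a 14 L ECF gives 10.5 L and 3.5 L; the rounded 12 L and 4 L figures quoted in the text correspond to slightly different source totals.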
https://en.wikipedia.org/wiki/Stoddard%20engine
Elliott J. Stoddard invented and patented two versions of the Stoddard engine, the first in 1919 and the second in 1933. The general engine classification is an external combustion engine with valves and single-phase gaseous working fluid (i.e. a "hot air engine"). The internal working fluid was originally air, although in modern versions, other gases such as helium or hydrogen may be used. One potential thermodynamic advantage of using valves is to minimize the adverse effects of "unswept volume" in the heat exchangers (sometimes called "dead volume"), which is known to reduce engine efficiency and power output in the valveless Stirling engine. The 1919 Stoddard engine The generalized thermodynamic processes of the 1919 Stoddard cycle are: Adiabatic compression Isobaric heat-addition Adiabatic expansion Isobaric heat-removal The engine design in the patent used a Scotch yoke. The 1933 Stoddard engine In the 1933 design, Stoddard reduced the internal volume of the heat exchangers while maintaining the same generalized thermodynamic processes as the 1919 cycle. Gallery
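The four processes listed for the 1919 cycle (adiabatic compression, isobaric heat addition, adiabatic expansion, isobaric heat removal) are the same processes that define the ideal Brayton cycle, so its standard air-standard efficiency formula can serve as a sketch of the ideal-gas limit. This is an illustration under textbook ideal-cycle assumptions, not an analysis of Stoddard's actual engine:

```python
def air_standard_efficiency(pressure_ratio, gamma=1.4):
    """Ideal air-standard efficiency of a cycle built from the four processes
    above (the Brayton-cycle formula): eta = 1 - r^((1-gamma)/gamma).
    gamma is the heat-capacity ratio cp/cv (about 1.4 for air)."""
    return 1.0 - pressure_ratio ** ((1.0 - gamma) / gamma)

# A pressure ratio of 10 with air gives an ideal efficiency of roughly 0.48.
```

Real engines fall well short of this because of dead volume, pressure losses through the valves, and non-ideal heat exchange, which is exactly the trade-off the valved design tries to improve.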
https://en.wikipedia.org/wiki/Signed%20number%20representations
In computing, signed number representations are required to encode negative numbers in binary number systems. In mathematics, negative numbers in any base are represented by prefixing them with a minus sign ("−"). However, in RAM or CPU registers, numbers are represented only as sequences of bits, without extra symbols. The four best-known methods of extending the binary numeral system to represent signed numbers are: sign–magnitude, ones' complement, two's complement, and offset binary. Some of the alternative methods use implicit instead of explicit signs, such as negative binary, using the base −2. Corresponding methods can be devised for other bases, whether positive, negative, fractional, or other elaborations on such themes. There is no definitive criterion by which any of the representations is universally superior. For integers, the representation used in most current computing devices is two's complement, although the Unisys ClearPath Dorado series mainframes use ones' complement. History The early days of digital computing were marked by competing ideas about both hardware technology and mathematics technology (numbering systems). One of the great debates was the format of negative numbers, with some of the era's top experts expressing very strong and differing opinions. One camp supported two's complement, the system that is dominant today. Another camp supported ones' complement, where a negative value is formed by inverting all of the bits in its positive equivalent. A third group supported sign–magnitude, where a value is changed from positive to negative simply by toggling the word's highest-order bit. There were arguments for and against each of the systems. Sign–magnitude allowed for easier tracing of memory dumps (a common process in the 1960s) as small numeric values use fewer 1 bits. These systems did ones' complement math internally, so numbers would have to be converted to ones' complement values when they were transmitted from a register
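The three classical encodings named above can be sketched directly. The following is an illustrative fixed-width (8-bit by default) implementation; the function names are chosen for the example:

```python
def sign_magnitude(value, bits=8):
    """Sign-magnitude: the high-order bit is the sign, the rest the magnitude."""
    if not -(2 ** (bits - 1)) < value < 2 ** (bits - 1):
        raise OverflowError("value out of range for this width")
    sign = '1' if value < 0 else '0'
    return sign + format(abs(value), f'0{bits - 1}b')

def ones_complement(value, bits=8):
    """Ones' complement: a negative value is formed by inverting every bit
    of its positive equivalent."""
    if value >= 0:
        return format(value, f'0{bits}b')
    return format(~abs(value) & (2 ** bits - 1), f'0{bits}b')

def twos_complement(value, bits=8):
    """Two's complement: the value reduced modulo 2**bits."""
    return format(value & (2 ** bits - 1), f'0{bits}b')
```

For example, -5 encodes as 10000101 (sign-magnitude), 11111010 (ones' complement), and 11111011 (two's complement), illustrating that the three systems agree on non-negative values and differ only in how the sign is carried.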
https://en.wikipedia.org/wiki/Ingrid%27s%20Back
Gnome Ranger II: Ingrid's Back is a text adventure game by Level 9 released in 1988. It is the sequel to Gnome Ranger. The game is a standard text adventure with limited graphics on some platforms. Again, a short novella by Peter McBride is included ("The 2nd Gnettlefield Journal") explaining the background to the story and providing hints for play. Plot Having just returned from her "holiday" in the wilderness, the gnome Ingrid Bottomlow must save her home village of Little Moaning from destruction by a greedy property developer, Jasper Quickbuck. To do this she must get the various uncooperative inhabitants of the village to sign her petition. Gameplay Gameplay is similar to Gnome Ranger. The player must explore Ingrid's village while collecting signatures for her petition by interacting with various non-player characters. Reception Computer and Video Games gave Ingrid's Back a very favorable review, with reviewer Keith Campbell calling it "the most enjoyable Level 9 adventure [he had] played to date" and praising its humor and difficulty curve.
https://en.wikipedia.org/wiki/Diff
In computing, the utility diff is a data comparison tool that computes and displays the differences between the contents of files. Unlike edit distance notions used for other purposes, diff is line-oriented rather than character-oriented, but it is like Levenshtein distance in that it tries to determine the smallest set of deletions and insertions to create one file from the other. The utility displays the changes in one of several standard formats, such that both humans and computers can parse the changes and use them for patching. Typically, diff is used to show the changes between two versions of the same file. Modern implementations also support binary files. The output is called a "diff", or a patch, since the output can be applied with the Unix program patch. The output of similar file comparison utilities is also called a "diff"; like the use of the word "grep" for describing the act of searching, the word diff became a generic term for calculating data difference and the results thereof. The POSIX standard specifies the behavior of the "diff" and "patch" utilities and their file formats. History diff was developed in the early 1970s on the Unix operating system, which was emerging from Bell Labs in Murray Hill, New Jersey. The first released version shipped with the 5th Edition of Unix in 1974, and was written by Douglas McIlroy and James Hunt. This research was published in a 1976 paper co-written with James W. Hunt, who developed an initial prototype of diff. The algorithm this paper described became known as the Hunt–Szymanski algorithm. McIlroy's work was preceded and influenced by Steve Johnson's comparison program on GECOS and Mike Lesk's proof program. proof also originated on Unix and, like diff, produced line-by-line changes and even used angle-brackets (">" and "<") for presenting line insertions and deletions in the program's output. The heuristics used in these early applications were, however, deemed unreliable. The potential usefulness of a diff tool provoked M
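The line-oriented comparison that diff performs is available in Python's standard library, whose difflib module can emit the widely used unified format. A minimal sketch:

```python
import difflib

old = ["the quick brown fox", "jumps over", "the lazy dog"]
new = ["the quick brown fox", "leaps over", "the lazy dog"]

# unified_diff yields header lines, hunk markers ("@@ ... @@"), context lines
# (prefixed with a space), deletions ("-") and insertions ("+").
for line in difflib.unified_diff(old, new, fromfile="a.txt", tofile="b.txt",
                                 lineterm=""):
    print(line)
```

The single changed line appears once as a deletion ("-jumps over") and once as an insertion ("+leaps over"), with unchanged lines shown as context, mirroring the output a Unix diff -u would produce for the same inputs.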
https://en.wikipedia.org/wiki/Glycolytic%20oscillation
In biochemistry, a glycolytic oscillation is the repetitive fluctuation in the concentrations of metabolites, classically observed experimentally in yeast and muscle. The first observations of oscillatory behaviour in glycolysis were made by Duysens and Amesz in 1957. The problem of modelling glycolytic oscillation has been studied in control theory and dynamical systems since the 1960s, since the behaviour depends on the rate of substrate injection. Early models used two variables, but the most complex behaviour they could demonstrate was periodic oscillation, due to the Poincaré–Bendixson theorem, so later models introduced further variables.
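A classic two-variable model of this kind is the Sel'kov glycolysis model, which exhibits a stable limit cycle for suitable parameter values. The sketch below integrates it with a simple Euler scheme; the parameter values a = 0.08, b = 0.6 are a commonly used textbook choice for which the fixed point is unstable and the trajectory settles onto a periodic orbit:

```python
def selkov_step(x, y, a=0.08, b=0.6, dt=0.01):
    """One Euler step of the two-variable Sel'kov glycolysis model
    (x and y are rescaled metabolite concentrations; a, b kinetic parameters)."""
    dx = -x + a * y + x * x * y
    dy = b - a * y - x * x * y
    return x + dt * dx, y + dt * dy

def simulate(steps=100_000, x=1.0, y=1.0):
    """Integrate forward and return the trajectory of x."""
    xs = []
    for _ in range(steps):
        x, y = selkov_step(x, y)
        xs.append(x)
    return xs
```

After the initial transient, x keeps oscillating between a low and a high value rather than settling to the fixed point, which is exactly the sustained periodic behaviour the Poincaré-Bendixson theorem permits in the plane.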
https://en.wikipedia.org/wiki/Database%20tuning
Database tuning describes a group of activities used to optimize and homogenize the performance of a database. It usually overlaps with query tuning, but refers to design of the database files, selection of the database management system (DBMS) application, and configuration of the database's environment (operating system, CPU, etc.). Database tuning aims to maximize use of system resources to perform work as efficiently and rapidly as possible. Most systems are designed to manage their use of system resources, but there is still much room to improve their efficiency by customizing their settings and configuration for the database and the DBMS. I/O tuning Hardware and software configuration of disk subsystems are examined: RAID levels and configuration, block and stripe size allocation, and the configuration of disks, controller cards, storage cabinets, and external storage systems such as SANs. Transaction logs and temporary spaces are heavy consumers of I/O, and affect performance for all users of the database. Placing them appropriately is crucial. Frequently joined tables and indexes are placed so that as they are requested from file storage, they can be retrieved in parallel from separate disks simultaneously. Frequently accessed tables and indexes are placed on separate disks to balance I/O and prevent read queuing. DBMS tuning DBMS users and DBA experts DBMS tuning refers to tuning of the DBMS and the configuration of the memory and processing resources of the computer running the DBMS. This is typically done through configuring the DBMS, but the resources involved are shared with the host system. Tuning the DBMS can involve setting the recovery interval (time needed to restore the state of data to a particular point in time), assigning parallelism (the breaking up of work from a single query into tasks assigned to different processing resources), and network protocols used to communicate with database consumers. Memory is allocated for data, executio
https://en.wikipedia.org/wiki/Dynamic%20height
Dynamic height is a way of specifying the vertical position of a point above a vertical datum; it is an alternative for orthometric height or normal height. It can be computed by dividing the location's geopotential number by the normal gravity at 45 degree latitude and zero height (a constant equal to 9.806199203 m/s2). Dynamic height is constant if one remains at the same geopotential (equigeopotential) as one moves from place to place. Because of variations in Earth's gravity, surfaces having a constant difference in dynamic height may be closer or further apart in various places. Dynamic heights are usually chosen so that zero corresponds to the geoid. Dynamic height is the most appropriate height measure when working with the level of water (as in hydrology or oceanography) over a large geographic area; it is used by the Great Lakes Datum in the US and Canada. When differential leveling is done, the path corresponds closely to following a value of dynamic height horizontally, but not to orthometric height for vertical changes measured on the leveling rod. Thus small corrections must be applied to field measurements to obtain either the dynamic height or the orthometric height usually used in engineering. US National Geodetic Survey data sheets give both dynamic and orthometric values. See also Geopotential height, a similar quantity used in meteorology, based on a slightly different gravity value
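The defining computation is a single division by the stated constant. A minimal sketch (the function name is chosen for the example; the geopotential number is in m²/s², yielding metres):

```python
GAMMA_45 = 9.806199203  # normal gravity at 45 degrees latitude, m/s^2

def dynamic_height(geopotential_number):
    """Dynamic height (m) = geopotential number (m^2/s^2) / normal gravity
    at 45 degrees latitude and zero height."""
    return geopotential_number / GAMMA_45
```

Because the divisor is a single constant, two points with equal geopotential numbers always get equal dynamic heights, which is what makes the measure suitable for water-level work; orthometric height instead divides by the local mean gravity along the plumb line, so the two differ wherever actual gravity deviates from the 45-degree normal value.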
https://en.wikipedia.org/wiki/Procynosuchus
Procynosuchus (Greek: "Before dog crocodile") is an extinct genus of cynodonts from the Late Permian. It is considered to be one of the earliest and most basal cynodonts. It was 60 cm (2 ft) long. Remains of Procynosuchus have been found in Russia, Germany, Zambia and South Africa. Paleobiology As one of the earliest cynodonts, Procynosuchus has many primitive features, but it also has features that distinguish it from all other early therapsids. Some of these features were interpreted by Kemp (1980) as adaptations for a semi-aquatic lifestyle. For example, the wide zygapophyses of the vertebrae allow for a high degree of lateral flexibility, and Procynosuchus may have used anguilliform locomotion, or eel-like undulation, to swim through the water. The tail of Procynosuchus is also unusually long for a cynodont. The long haemal arches would have given the tail a large lateral surface area for greater propulsion through the water. Relatively flat foot bones may also have been an adaptation toward swimming, as the feet may have been used like paddles. Ridges on the femur are an indication of strong flexor muscles that could have stabilized the leg during limb-driven swimming. When the thigh is pulled back in the water, the lower leg tends to bend forward. Strong flexor muscles would have pulled the lower leg back with the femur, providing the powerful backward thrust that is needed to swim. Discovery Procynosuchus was named by South African paleontologist Robert Broom in 1937. Broom also named the cynodont Cyrbasiodon in 1931. Another genus, Parathrinaxodon, was named by Parrington in 1936. These genera are now regarded as synonyms of Procynosuchus, as they represent the same animal. Under the International Code of Zoological Nomenclature (ICZN), these two names take precedence over Procynosuchus because they were erected earlier. The names Cyrbasiodon and Parathrinaxodon were rarely used after their erection, while the name Procynosuchus has since become widesp
https://en.wikipedia.org/wiki/List%20of%20mathematical%20examples
This page will attempt to list examples in mathematics. To qualify for inclusion, an article should be about a mathematical object with a fair amount of concreteness. Usually a definition of an abstract concept, a theorem, or a proof would not be an "example" as the term should be understood here (an elegant proof of an isolated but particularly striking fact, as opposed to a proof of a general theorem, could perhaps be considered an "example"). The discussion page for list of mathematical topics has some comments on this. Eventually this page may have its own discussion page. This page links to itself in order that edits to this page will be included among related changes when the user clicks on that button. The concrete example within the article titled Rao-Blackwell theorem is perhaps one of the best ways for a probabilist ignorant of statistical inference to get a quick impression of the flavor of that subject. Uncategorized examples, alphabetized Alexander horned sphere All horses are the same color Cantor function Cantor set Checking if a coin is biased Concrete illustration of the central limit theorem Differential equations of mathematical physics Dirichlet function Discontinuous linear map Efron's non-transitive dice Example of a game without a value Examples of contour integration Examples of differential equations Examples of generating functions Examples of groups List of the 230 crystallographic 3D space groups Examples of Markov chains Examples of vector spaces Fano plane Frieze group Gray graph Hall–Janko graph Higman–Sims graph Hilbert matrix Illustration of a low-discrepancy sequence Illustration of the central limit theorem An infinitely differentiable function that is not analytic Leech lattice Lewy's example on PDEs List of finite simple groups Long line Normally distributed and uncorrelated does not imply independent Pairwise independence of random variables need not imply mutual independence. Petersen graph Sierpinski space Simple examp
https://en.wikipedia.org/wiki/Cause%20of%20obsessive%E2%80%93compulsive%20disorder
The cause of obsessive–compulsive disorder is understood mainly through identifying biological risk factors that lead to obsessive–compulsive disorder (OCD) symptomology. The leading hypotheses propose the involvement of the orbitofrontal cortex, basal ganglia, and/or the limbic system, with discoveries being made in the fields of neuroanatomy, neurochemistry, neuroimmunology, neurogenetics, and neuroethology. Drug-induced OCD Many different types of medication can create/induce OCD in patients that have never had symptoms before. A new chapter about OCD in the DSM-5 (2013) now specifically includes drug-induced OCD. Atypical antipsychotics (second generation antipsychotics), such as olanzapine (Zyprexa), have been proven to induce de-novo OCD in patients. Neuroanatomy Although there has been substantial debate regarding the assessment of OCD, current research has gravitated toward structural and functional neuroimaging. These technological innovations have provided a better understanding of the neuroanatomical risk factors of OCD. These studies can be divided into four basic categories: (1) resting studies that compare brain activity at rest in patients with OCD to controls, (2) symptom provocation studies that compare brain activity before and after incitement of symptoms, (3) treatment studies that compare brain activity before and after treatment with pharmacotherapy, and (4) cognitive activation studies that compare brain activity while performing a task in patients with OCD to controls. Data obtained from this research suggests that three brain areas are involved with OCD: the orbitofrontal cortex (OFC), the anterior cingulate cortex (ACC), and the head of the caudate nucleus. Several studies have found that in patients with OCD, these areas: (1) are hyperactive at rest relative to healthy control; (2) become increasingly active with symptom provocation; and (3) no longer exhibit hyperactivity following successful treatment with SRI pharmacotherapy or c
https://en.wikipedia.org/wiki/Scotophor
A scotophor is a material showing reversible darkening and bleaching when subjected to certain types of radiation. The name means dark bearer, in contrast to phosphor, which means light bearer. Scotophors show tenebrescence (reversible photochromism) and darken when subjected to an intense radiation such as sunlight. Minerals showing such behavior include hackmanite sodalite, spodumene and tugtupite. Some pure alkali halides also show such behavior. Scotophors can be sensitive to light, particle radiation (e.g. electron beam – see cathodochromism), X-rays, or other stimuli. The induced absorption bands in the material, caused by F-centers created by electron bombardment, can be returned to their non-absorbing state, usually by light and/or heating. Scotophors sensitive to electron beam radiation can be used instead of phosphors in cathode ray tubes, for creating a light absorbing instead of light emitting image. Such displays are viewable in bright light and the image is persistent, until erased. The image would be retained until erased by flooding the scotophor with a high-intensity infrared light or by electro-thermal heating. Using conventional deflection and raster formation circuitry, a bi-level image could be created on the membrane and retained even when power was removed from the CRT. In Germany, scotophor tubes were developed by Telefunken as blauschrift-röhre ("dark-trace tube"). The heating mechanism was a layer of mica with transparent thin film of tungsten. When the image was to be erased, current was applied to the tungsten layer; even very dark images could be erased in 5–10 seconds. Scotophors typically require a higher-intensity electron beam to change color than phosphors need to emit light. Screens with layers of a scotophor and a phosphor are therefore possible, where the phosphor, flooded with a dedicated wide-beam low-intensity electron gun, produces backlight for the scotophor, and optionally highlights selected areas of the screen if bom
https://en.wikipedia.org/wiki/Trace%20vector%20decoder
A Trace Vector Decoder (TVD) is computer software that uses the trace facility of its underlying microprocessor to decode encrypted instruction opcodes just-in-time prior to execution and possibly re-encode them afterwards. It can be used to hinder reverse engineering when attempting to prevent software cracking as part of an overall copy protection strategy. Microprocessor tracing Certain microprocessor families (e.g. 680x0, x86) provide the capability to trace instructions to aid in program development. A debugger might use this capability to single step through a program, providing the means for a programmer to monitor the execution of the program under test. By installing a custom handler for the trace exception, it is possible to gain control of the microprocessor between the execution of normal program flow instructions. A typical trace vector decoder exception handler decodes the upcoming instruction located outside the exception handler, and re-encodes the previously decoded instruction. Implementations Motorola 680x0 The Motorola 68000 has an instruction-by-instruction tracing facility. When its trace state is enabled, the processor automatically forces a trace exception after each (non-exception) instruction is executed. The following assembly code snippet is an example of a program initializing a trace exception handler on a 68000 system.
InstallHandler:
        MOVE.L  #$4E730000,-(SP)   ; Push trace exception handler on to stack
        MOVE.L  #$00000010,-(SP)
        MOVE.L  #$0004DDB9,-(SP)
        MOVE.L  #$BD96BDAE,-(SP)
        MOVE.L  #$B386B586,-(SP)
        MOVE.L  #$D046D246,-(SP)
        MOVE.L  #$0246A71F,-(SP)
        MOVE.L  #$00023C17,-(SP)
        MOVE.W  #$2C6F,-(SP)
        MOVE.L  SP,($24).W         ; Set trace exception handler vector
        ORI.W   #$A71F,SR          ; Enable trace state
        NOP                        ; CPU generates a trace
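The decode-execute-re-encode loop that the trace handler performs can be sketched in a high-level analogy. The Python below is illustrative only, not 68000 code: the XOR key, the three-opcode mini instruction set, and the run function are all hypothetical stand-ins for the real machinery:

```python
KEY = 0x5A  # illustrative XOR key; real TVDs derive keys at run time

# Encrypted "program": each opcode byte XORed with KEY.
# Hypothetical mini-ISA: 1 = INC accumulator, 2 = DOUBLE accumulator, 0 = HALT.
encrypted = bytearray(op ^ KEY for op in [1, 2, 1, 0])

def run(acc=0):
    """Mimic a trace vector decoder: before each step, the 'trace handler'
    decodes the upcoming opcode in place, executes it, then re-encodes it,
    so memory never holds more than one plaintext instruction at a time."""
    pc = 0
    while True:
        encrypted[pc] ^= KEY          # decode the upcoming instruction
        op = encrypted[pc]
        encrypted[pc] ^= KEY          # immediately re-encode it
        if op == 0:
            return acc
        acc = acc + 1 if op == 1 else acc * 2
        pc += 1
```

Because every decoded byte is re-encrypted before the next step, a memory dump taken at almost any moment shows only ciphertext, which is the property that frustrates static disassembly of real TVD-protected code.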
https://en.wikipedia.org/wiki/Algorithm%20selection
Algorithm selection (sometimes also called per-instance algorithm selection or offline algorithm selection) is a meta-algorithmic technique to choose an algorithm from a portfolio on an instance-by-instance basis. It is motivated by the observation that on many practical problems, different algorithms have different performance characteristics. That is, while one algorithm performs well in some scenarios, it performs poorly in others and vice versa for another algorithm. If we can identify when to use which algorithm, we can optimize for each scenario and improve overall performance. This is what algorithm selection aims to do. The only prerequisite for applying algorithm selection techniques is that there exists (or that there can be constructed) a set of complementary algorithms. Definition Given a portfolio of algorithms , a set of instances and a cost metric , the algorithm selection problem consists of finding a mapping from instances to algorithms such that the cost across all instances is optimized. Examples Boolean satisfiability problem (and other hard combinatorial problems) A well-known application of algorithm selection is the Boolean satisfiability problem. Here, the portfolio of algorithms is a set of (complementary) SAT solvers, the instances are Boolean formulas, the cost metric is for example average runtime or number of unsolved instances. So, the goal is to select a well-performing SAT solver for each individual instance. In the same way, algorithm selection can be applied to many other -hard problems (such as mixed integer programming, CSP, AI planning, TSP, MAXSAT, QBF and answer set programming). Competition-winning systems in SAT are SATzilla, 3S and CSHC Machine learning In machine learning, algorithm selection is better known as meta-learning. The portfolio of algorithms consists of machine learning algorithms (e.g., Random Forest, SVM, DNN), the instances are data sets and the cost metric is for example the error rate. So, the
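The mapping from instances to algorithms can be learned from benchmark data; a minimal sketch is a 1-nearest-neighbour selector that, given an instance's feature vector, picks whichever portfolio member was best on the most similar training instance. All names, features, and the toy training data below are assumptions for the example:

```python
def train_selector(training):
    """training: list of (feature_vector, best_algorithm_name) pairs,
    e.g. obtained by running every portfolio member on benchmark instances."""
    return list(training)

def select(model, features):
    """1-nearest-neighbour algorithm selection: return the algorithm that
    performed best on the most similar training instance."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, best = min(model, key=lambda pair: sqdist(pair[0], features))
    return best

# Hypothetical features: (number of variables, clause density).
model = train_selector([((10, 0.1), "solver_A"),    # small, sparse: A won
                        ((500, 0.9), "solver_B")])  # large, dense: B won
```

Practical systems such as SATzilla use the same idea with richer instance features and stronger learned models (regression or ranking) in place of the nearest-neighbour rule.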
https://en.wikipedia.org/wiki/Homogenization%20%28chemistry%29
Homogenization or homogenisation is any of several processes used to make a mixture of two mutually non-soluble liquids the same throughout. This is achieved by turning one of the liquids into a state consisting of extremely small particles distributed uniformly throughout the other liquid. A typical example is the homogenization of milk, wherein the milk fat globules are reduced in size and dispersed uniformly through the rest of the milk. Definition Homogenization (from "homogeneous;" Greek, homogenes: homos, same + genos, kind) is the process of converting two immiscible liquids (i.e. liquids that are not soluble, in all proportions, one in another) into an emulsion (a mixture of two or more liquids that are generally immiscible). Sometimes two types of homogenization are distinguished: primary homogenization, when the emulsion is created directly from separate liquids; and secondary homogenization, when the emulsion is created by the reduction in size of droplets in an existing emulsion. Homogenization is achieved by a mechanical device called a homogenizer. Application One of the oldest applications of homogenization is in milk processing. It is normally preceded by "standardization" (the mixing of milk from several different herds or dairies to produce a more consistent raw milk prior to processing). The fat in milk normally separates from the water and collects at the top. Homogenization breaks the fat into smaller sizes so it no longer separates, allowing the sale of non-separating milk at any fat specification. Methods Milk homogenization is accomplished by mixing large amounts of harvested milk, then forcing the milk at high pressure through small holes. Milk homogenization is an essential tool of the milk food industry to prevent creating various levels of flavor and fat concentration. Another application of homogenization is in soft drinks like cola products. The reactant mixture is subjected to intense homogenization, to as much as 35,000 psi, so tha
https://en.wikipedia.org/wiki/Franz%20Nissl
Franz Alexander Nissl (9 September 1860, in Frankenthal – 11 August 1919, in Munich) was a German psychiatrist and medical researcher. He was a noted neuropathologist. Early life Nissl was born in Frankenthal to Theodor Nissl and Maria Haas. Theodor taught Latin in a Catholic school and wanted Franz to become a priest. However Franz entered the Ludwig Maximilian University of Munich to study medicine. Later, he specialized in Psychiatry. One of Nissl's university professors was Bernhard von Gudden. His assistant, Sigbert Josef Maria Ganser suggested that Nissl write an essay on the pathology of the cells of the cortex of the brain. When the medical faculty offered a competition for a prize in neurology in 1884, Nissl undertook the brain-cortex study. He used alcohol as a fixative and developed a staining technique that allowed the demonstration of several new nerve-cell constituents. Nissl won the prize, and wrote his doctoral dissertation on the same topic in 1885. Career in medical research and education Professor von Gudden was the judge in Nissl's college-essay competition, and he was so impressed with the study that he offered Nissl an assistantship at the Furstenried castle southwest of Munich, where one of his responsibilities would be to care for the mad Prince Otto. Nissl accepted, and remained in that post from 1885 until 1888. There was a small laboratory at the castle, which enabled Nissl to continue with his neuropathological research. In 1888 Nissl moved to the Institution Blankenheim. In 1889 he went to Frankfurt as second in position under Emil Sioli (1852–1922) at the Städtische Irrenanstalt. There he met neurologist Ludwig Edinger and neuropathologist Karl Weigert, who was developing a neuroglial stain. This work motivated Nissl to study mental and nervous diseases by relating them to observable changes in glial cells, blood elements, blood vessels and brain tissue in general. In Frankfurt Nissl became acquainted with Alois Alzheime
https://en.wikipedia.org/wiki/Biquaternion
In abstract algebra, the biquaternions are the numbers w + x i + y j + z k, where w, x, y, and z are complex numbers, or variants thereof, and the elements of {1, i, j, k} multiply as in the quaternion group and commute with their coefficients. There are three types of biquaternions corresponding to complex numbers and the variations thereof: Biquaternions when the coefficients are complex numbers. Split-biquaternions when the coefficients are split-complex numbers. Dual quaternions when the coefficients are dual numbers. This article is about the ordinary biquaternions named by William Rowan Hamilton in 1844. Some of the more prominent proponents of these biquaternions include Alexander Macfarlane, Arthur W. Conway, Ludwik Silberstein, and Cornelius Lanczos. As developed below, the unit quasi-sphere of the biquaternions provides a representation of the Lorentz group, which is the foundation of special relativity. The algebra of biquaternions can be considered as a tensor product C ⊗ H (taken over the reals), where C is the field of complex numbers and H is the division algebra of (real) quaternions. In other words, the biquaternions are just the complexification of the quaternions. Viewed as a complex algebra, the biquaternions are isomorphic to the algebra of 2 × 2 complex matrices M(2, C). They are also isomorphic to several Clifford algebras, including the Pauli algebra Cl3,0(R) and the even part of the spacetime algebra. Definition Let {1, i, j, k} be the basis for the (real) quaternions H, and let u, v, w, x be complex numbers; then q = u 1 + v i + w j + x k is a biquaternion. To distinguish square roots of minus one in the biquaternions, Hamilton and Arthur W. Conway used the convention of representing the square root of minus one in the scalar field C by h to avoid confusion with the i in the quaternion group. Commutativity of the scalar field with the quaternion group is assumed: h i = i h, h j = j h, h k = k h. Hamilton introduced the terms bivector, biconjugate, bitensor, and biversor to extend notions used with real quaternions. Hamilton's primary exposition on biquaternions came in 1853 in his Lectures on
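Because the quaternion units multiply by the usual Hamilton rules while the complex coefficients commute through them, biquaternion multiplication can be sketched directly with Python's built-in complex numbers standing in for the scalar field (so Hamilton's h is Python's 1j). The function name and tuple representation are choices made for this example:

```python
def bq_mul(p, q):
    """Multiply biquaternions given as 4-tuples (w, x, y, z) of complex
    coefficients of 1, i, j, k, using the Hamilton product rules
    i*i = j*j = k*k = i*j*k = -1 with commuting complex scalars."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)
```

A quick check that the scalar square root of minus one behaves differently from the quaternion i: the element h i, written here as (0, 1j, 0, 0), squares to +1 rather than -1, exhibiting the hyperbolic units (and zero divisors) that distinguish biquaternions from the division algebra of real quaternions.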
https://en.wikipedia.org/wiki/Liouville%27s%20equation
For Liouville's equation in dynamical systems, see Liouville's theorem (Hamiltonian). For Liouville's equation in quantum mechanics, see Von Neumann equation. For Liouville's equation in Euclidean space, see Liouville–Bratu–Gelfand equation.

In differential geometry, Liouville's equation, named after Joseph Liouville, is the nonlinear partial differential equation satisfied by the conformal factor f of a metric f²(dx² + dy²) on a surface of constant Gaussian curvature K:

Δ₀ log f = −K f²,

where Δ₀ is the flat Laplace operator

Δ₀ = ∂²/∂x² + ∂²/∂y².

Liouville's equation appears in the study of isothermal coordinates in differential geometry: the independent variables x, y are the coordinates, while f can be described as the conformal factor with respect to the flat metric. Occasionally it is the square f² that is referred to as the conformal factor, instead of f itself. Liouville's equation was also taken as an example by David Hilbert in the formulation of his nineteenth problem.

Other common forms of Liouville's equation

By using the change of variables u = log f, another commonly found form of Liouville's equation is obtained:

Δ₀ u = −K e^{2u}.

Two other forms of the equation, commonly found in the literature, are obtained by using a slight variant of the previous change of variables together with Wirtinger calculus: in the complex variable z = x + iy one has Δ₀ = 4 ∂²/∂z∂z̄, so the substituted equation becomes

∂²u/∂z∂z̄ = −(K/4) e^{2u}.

It is in a form of this kind that Liouville's equation was cited by David Hilbert in the formulation of his nineteenth problem.

A formulation using the Laplace–Beltrami operator

In a more invariant fashion, the equation can be written in terms of the intrinsic Laplace–Beltrami operator Δ_LB = f^{−2} Δ₀ as follows:

Δ_LB log f = −K.

Properties

Relation to Gauss–Codazzi equations

Liouville's equation is equivalent to the Gauss–Codazzi equations for minimal immersions into the 3-space, when the metric is written in isothermal coordinates such that the Hopf differential is .

General solution of the equation

In a simply connected domain Ω, the general solution of Liouville's equation can be found by using Wirtinger calculus. Its for
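The forms above fit together in a short chain of substitutions. The following derivation sketch is a reconstruction in standard notation (Δ₀ for the flat Laplacian, u = log f), not text from the source:

```latex
% Liouville's equation for the conformal factor f of the metric
% f^2 (dx^2 + dy^2) with constant Gaussian curvature K:
\Delta_0 \log f = -K f^2,
\qquad
\Delta_0 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}.
% Substituting u = \log f, so that f^2 = e^{2u}:
\Delta_0 u = -K e^{2u}.
% In the complex variable z = x + iy, \Delta_0 = 4\,\partial_z \partial_{\bar z}:
\frac{\partial^2 u}{\partial z\,\partial\bar z} = -\frac{K}{4}\, e^{2u}.
% Dividing the original equation by f^2 gives the invariant form
% in terms of the Laplace--Beltrami operator of the metric itself:
\Delta_{\mathrm{LB}} \log f = -K,
\qquad
\Delta_{\mathrm{LB}} = f^{-2}\,\Delta_0 .
```

Each line is an algebraic rewriting of the same equation, which is why all four forms share the single curvature constant K.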
https://en.wikipedia.org/wiki/List%20of%20scattering%20experiments
This is a list of scattering experiments.

Specific experiments of historical significance

- Davisson–Germer experiment
- Gold foil experiments, performed by Geiger and Marsden for Rutherford, which discovered the atomic nucleus
- Elucidation of the structure of DNA by X-ray crystallography
- Discovery of the antiproton at the Bevatron
- Discovery of the W and Z bosons at CERN
- Discovery of the Higgs boson at the Large Hadron Collider
- MINERνA

Types of experiment

Optical methods

- Compton scattering
- Raman scattering
- X-ray crystallography
- Biological small-angle scattering with X-rays, or small-angle X-ray scattering
- Static light scattering
- Dynamic light scattering
- Polymer scattering with X-rays

Neutron-based methods

- Neutron scattering
- Biological small-angle scattering with neutrons, or small-angle neutron scattering
- Polymer scattering with neutrons

Particle accelerators

- Electrostatic nuclear accelerator
- Linear induction accelerator
- Betatron
- Linear particle accelerator
- Cyclotron
- Synchrotron

See also

- Physics-related lists
- Physics experiments
- Chemistry-related lists
- Biology-related lists
https://en.wikipedia.org/wiki/TOL101
TOL101 is a murine monoclonal antibody specific for the human αβ T cell receptor. In 2010 it was an Investigational New Drug under development by Tolera Therapeutics, Inc.

Clinical progress

TOL101 is a clinical-stage investigational drug. The safety and efficacy of TOL101 are currently the focus of a phase 2 clinical trial in renal transplant patients.

Orphan drug status

TOL101 was granted "orphan drug" status by the U.S. Food and Drug Administration for the treatment of recent-onset immune-mediated Type 1 diabetes and for prophylaxis of acute rejection in solid organ transplantation.

Rationale for development

There are numerous agents currently under investigation that are capable of modulating T cells. Currently used agents include anti-thymocyte globulin (ATG) and alemtuzumab, which not only affect T cells but are also capable of modulating many other aspects of the immune system, often resulting in long-term, broad-spectrum immune suppression. Antibodies specific for CD3, such as teplizumab and otelixizumab, show increased specificity for T cells compared with ATG and alemtuzumab, but are still associated with infection and cytokine release syndrome. Targeting αβ T cells with TOL101 may reduce these issues through two mechanisms. First, infections are expected to be reduced through the preservation of γδ T cells, which have been shown to play an important role in controlling viruses such as cytomegalovirus (CMV), often observed in antibody-treated patients. Second, reductions in cytokine release are expected when targeting the αβ TCR because, unlike the CD3 proteins, the αβ TCR contains none of the immunoreceptor tyrosine-based activation motifs (ITAMs) required for T cell activation.

Mechanism of action

TOL101 modulates αβ T cells

TOL101 has been shown in in vitro models to specifically modulate αβ T cells. Incubation of peripheral blood mononuclear cells (PBMCs) with TOL101 triggers rapid down-modulation of the T cell receptor. Importantly, this occurs without T cel
https://en.wikipedia.org/wiki/Incyte
Incyte is an American multinational pharmaceutical company with headquarters in Wilmington, Delaware, and Morges, Switzerland. The company was created in 2002 through the merger of Incyte Pharmaceuticals, founded in Palo Alto, California, in 1991, and Incyte Genomics, Inc. of Delaware. The company currently operates manufacturing and R&D locations in North America, Europe, and Asia. Incyte Corporation currently develops and manufactures prescription biopharmaceutical medications in multiple therapeutic areas, including oncology, inflammation, and autoimmunity.

History

In 2014, Incyte named Hervé Hoppenot president and CEO, and in 2015 he was appointed chairman of the Board of Directors. Hoppenot had previously served as the president of Novartis Oncology; he had been with Novartis since 2003.

In September 2015, the company announced it had gained exclusive development and commercialization rights to Jiangsu Hengrui Medicine Co., Ltd's anti-PD-1 monoclonal antibody, SHR-1210, in a deal worth more than $795 million.

In January 2020, Incyte signed a collaboration and license agreement with MorphoSys for the global development and commercialization of tafasitamab. On March 3, 2020, the agreement received antitrust clearance and thus became effective.

Pharmaceuticals

Incyte Corporation currently has seven marketed and co-marketed pharmaceutical products: Jakafi (ruxolitinib), Pemazyre (pemigatinib), Monjuvi (tafasitamab-cxix), Opzelura (ruxolitinib), Tabrecta (capmatinib), Olumiant (baricitinib), and Iclusig (ponatinib).

In 2013, Novartis acquired Incyte's c-Met inhibitor capmatinib (INC280, INCB028060), which is marketed under the brand name Tabrecta. As of 2014, the company was developing baricitinib, an oral JAK1 and JAK2 inhibitor drug for rheumatoid arthritis, in partnership with Eli Lilly. It gained EU approval in February 2017. In April 2017, the US FDA issued a rejection, citing concerns about dosing and safety. In May 2018, baricitinib was approved i
https://en.wikipedia.org/wiki/Nfat%20activating%20protein%20with%20itam%20motif%201
NFAT activating protein with ITAM motif 1 is a protein that in humans is encoded by the NFAM1 gene.

Function

The protein encoded by this gene is a type I membrane receptor that activates cytokine gene promoters, such as the IL-13 and TNF-alpha promoters. The encoded protein contains an immunoreceptor tyrosine-based activation motif (ITAM) and is thought to regulate the signaling and development of B cells. [provided by RefSeq, Jul 2008]